Vision in an AI World
Challenge the way you perceive the world. Computer scientists and entrepreneurs working with artificial intelligence reveal how they engineer visual perception to enable robots and intelligent systems to see and interpret their environment.
Much of how we perceive our daily lives stems from the visual input we receive: paintings, photographs, and videos reveal how we think, feel, and perceive light. By creating sensors for robots, intelligent machines, and augmented reality, researchers enable machines to perceive the information relevant to the task at hand without having to produce visual images. Tasks such as search and rescue, industrial inspection, crop monitoring, and archaeological surveying require unconventional approaches that deliver visual information in differing formats.
Speakers will reveal how sensory perception allows robotic devices to operate at high speed and with a fraction of the power consumption of a conventional camera. They will also demonstrate several scene-perception algorithms based on computer vision techniques, along with their applications.
As a participant, you will have the opportunity to challenge your own perception with interactive visual illusions and convince yourself that what your brain perceives is not what your eyes see.
Public event
Friday, 14 September 2018
14 - 17h
Venue
Schiffbau, Schiffbaustrasse 4, 8005 Zurich
Website: Digital Festival
Programme
Download (PDF, 3.6 MB)
Please register here.
Moderation:
Susan Kish
Confirmed speakers:
Margarita Chli, Vision for Robotics Lab at ETH Zurich
Teaching robots to see
Alexander Ilic, Head of Magic Leap and founder of Dacuda
Spatial Computing: Towards a Co-Processor to the Human Brain
Markus Gross, Institute for Visual Computing at ETH Zurich and Director of Disney Research Zurich
Alexander Sorkine-Hornung, Oculus
Machine perception for VR
Yulia Sandamirskaya and Julien Martel
Institute of Neuroinformatics, ETH Zurich and University of Zurich
Creating visual technologies in silico inspired by what is found in vivo
Isaac Deutsch and David Hoeller, Nvidia
Applications and recent advances of deep learning for robot navigation
Marc Pollefeys, Department of Computer Science at ETH Zurich
Computer Vision for Mixed Reality