Keynote Speakers

Shining Light between Real and Virtual Worlds

Paul Debevec, Google Research and USC Institute for Creative Technologies (USA)

I’ll describe a range of new techniques which bring virtual and real worlds together through lighting. These techniques build on Image-Based Lighting, the process of illuminating virtual objects using panoramic photos of real scenes, and Lighting Reproduction, the process of lighting real actors by virtual environments displayed on surrounding LEDs, part of the recent wave of interest in Virtual Production. Our Deep Light algorithm uses machine learning to estimate image-based lighting environments from ordinary background images, and is part of the Environmental HDR lighting feature in ARCore. We’ve recently shown that even better lighting estimates can be derived from pictures of people, using training data derived from our latest-generation light stage facial scanning system. This same data enables us to train algorithms to augment lighting in portrait photographs, adding virtual fill light from any direction. For full bodies, our Relightables volumetric capture system uses the light stage to perform real-time geometric and photometric acquisition of human performances, which can be inserted into real and virtual environments from any viewpoint and in any lighting. I’ll conclude by describing DeepView Video, a process which records and reproduces immersive light fields of real scenes with a spherical camera array to provide photoreal 6DOF experiences in a VR headset.
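
To make the Image-Based Lighting idea concrete, here is a minimal sketch (not Debevec's implementation) of diffusely lighting a surface point from an equirectangular HDR panorama. The helper names, the panorama orientation convention, and the cosine-weighted sampler are illustrative assumptions:

```python
import numpy as np

def sample_env(env, d):
    """Look up radiance from an equirectangular HDR panorama
    (H x W x 3 float array) in unit direction d = (x, y, z), y up.
    The longitude/latitude convention here is an assumption; real
    panoramas vary in orientation."""
    u = 0.5 + np.arctan2(d[0], -d[2]) / (2.0 * np.pi)   # longitude -> [0, 1)
    v = np.arccos(np.clip(d[1], -1.0, 1.0)) / np.pi     # latitude  -> [0, 1]
    h, w, _ = env.shape
    return env[min(int(v * h), h - 1), min(int(u * w), w - 1)]

def diffuse_irradiance(env, normal, n_samples=256, seed=0):
    """Monte Carlo estimate of irradiance at a surface point with the
    given unit normal, lit by the captured environment. With
    cosine-weighted hemisphere sampling (pdf = cos(theta) / pi), the
    estimator reduces to pi times the mean sampled radiance."""
    rng = np.random.default_rng(seed)
    n = np.asarray(normal, dtype=float)
    # Build an orthonormal tangent basis around the normal.
    t = np.cross(n, [0.0, 1.0, 0.0])
    if np.linalg.norm(t) < 1e-6:
        t = np.cross(n, [1.0, 0.0, 0.0])
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    total = np.zeros(3)
    for _ in range(n_samples):
        u1, u2 = rng.random(), rng.random()
        r, phi = np.sqrt(u1), 2.0 * np.pi * u2          # Malley's method
        local = (r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1))
        d = local[0] * t + local[1] * b + local[2] * n  # to world space
        total += sample_env(env, d)
    return np.pi * total / n_samples

# Lambertian shading is then albedo / pi * diffuse_irradiance(env, normal).
```

In this framing, a learned lighting estimator such as Deep Light replaces the captured panorama with an environment map inferred from an ordinary background image, while the shading step stays the same.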

Paul is a Senior Staff Scientist in Google Research and an Adjunct Research Professor at USC. His research in High Dynamic Range imaging, image-based lighting, and photoreal digital actors has been recognized with two Scientific and Technical Academy Awards and the 2017 Progress Medal from the Society of Motion Picture and Television Engineers. Techniques from his work have been used to create key visual effects sequences in The Matrix, Spider-Man 2, Benjamin Button, Avatar, Gravity, Furious 7, Gemini Man, and to create a 3D Portrait of US President Barack Obama. He is a member of the Academy of Motion Picture Arts and Sciences Visual Effects Branch, the Television Academy’s Science and Technology Subgroup, a Fellow of the Visual Effects Society, and past Vice-President of ACM SIGGRAPH. More info at: www.debevec.org.

Augmented Surgeons and ‘Anatome’: AI & AR for IA

Ramesh Raskar, MIT Media Lab (USA)

In this presentation I will focus on multimodal augmented reality: not just head-attached, but body-attached and spatial augmented reality as well. Our Shader Lamps research, conducted at UNC during my PhD, is one example of detached augmented reality. In the second part of my talk, I will discuss how to go from patient scale to population scale. To create these views for surgeons, we need to think about three systems: capture, analysis, and interaction. In the short term, augmented reality can improve the experience for surgeons, but over time it will transform the practice, with a library of complications, the ability to analyze and find anomalies, and the ability to interact through displays and haptics. This will lead to the field of the Anatome. Just as we speak of the exposome, the microbiome, the metabolome, or the proteome, the Anatome is the structure, function, development, and evolution of anatomical structures.
By combining a cellular-level understanding of anatomy with remote capture and population-level analysis, we can create Anatomes, and ultimately interact with them through precision health guidelines. There are many challenges in how we capture, analyze, and interact with this data for surgeons. We believe the next 10 years will be a golden era for augmenting surgeons with augmented reality and AI.

Ramesh Raskar is an Associate Professor at MIT Media Lab and directs the Camera Culture research group. His focus is on AI and Imaging for health and sustainability. These interfaces span research in physical (e.g., sensors, health-tech), digital (e.g., automating machine learning) and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), ACM SIGGRAPH Achievement Award (2017), DARPA Young Faculty Award (2009), Alfred P. Sloan Research Fellowship (2009), TR100 Award from MIT Technology Review (2004) and Global Indus Technovator Award (2003). He has worked on special research projects at Google [X], Apple and Facebook and co-founded/advised several companies.

Augmenting Cognition

Yvonne Rogers, University College London (UK)

Augmented reality is a maturing technology that has much potential for providing sophisticated external structures and tools that can extend the reach of human cognition and give people more agency. At the same time, we have little understanding of how we manage our multiple and overlapping tasks that are spread across a range of physical representations and tools, social situations, devices, applications, and locations. How can we bridge this gap? Drawing on theories of epistemic action and external cognition, I will suggest new ways of thinking about how to design digital information that overlays aspects of the physical environment in ways that can lead to new perceptions and enhanced cognition.

Yvonne Rogers is a Professor of Interaction Design, the director of UCLIC and a deputy head of the Computer Science department at University College London. Her research interests are in the areas of interaction design, human-computer interaction and ubiquitous computing. A central theme of her work is designing interactive technologies that augment humans. A current focus of her research is on human-data interaction and human-centered AI. Central to her work is a critical stance towards how visions, theories and frameworks shape the fields of HCI, cognitive science and Ubicomp. She has been instrumental in promulgating new theories (e.g., external cognition), alternative methodologies (e.g., in-the-wild studies) and far-reaching research agendas (e.g., “Being Human: HCI in 2020”). She has also published two monographs, “HCI Theory: Classical, Modern and Contemporary” and, with Paul Marshall, “Research in the Wild”. She is a fellow of the ACM, BCS and the ACM CHI Academy.

©2020 by ISMAR
Sponsored by the IEEE Computer Society Visualization and Graphics Technical Committee and ACM SIGGRAPH