Tutorials
Tutorial 1: Understanding Outdoor Augmented Reality
Presenters
Mark Billinghurst, University of South Australia, Australia, and University of Auckland, New Zealand
Remi Driancourt, Square Enix Co., Ltd., Japan
Simon Stannus, Square Enix Co., Ltd., Japan
Stefanie Zollmann, University of Otago, New Zealand
Jonathan Ventura, California Polytechnic State University, USA
Abstract
The most common Augmented Reality applications available on current consumer devices offer limited experiences because they ground virtual content in a temporary local space. This course examines some of the challenges and opportunities presented by the inevitable growth of Augmented Reality into wider use cases. It starts by covering the unique problems faced when taking AR outdoors into global and persistent coordinate systems. It then discusses the new paradigms for interaction and collaboration that AR enables. It concludes by discussing the role AR will play in future business and society more broadly.
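As a minimal illustration of what "global and persistent coordinate systems" involve in practice (this sketch is not from the tutorial; the function names and example coordinates are ours), anchoring outdoor AR content typically requires mapping geodetic coordinates (latitude, longitude, altitude) into a local East-North-Up (ENU) frame centered on the user:

    # Illustrative sketch: WGS84 geodetic -> ECEF -> local ENU frame.
    import math

    WGS84_A = 6378137.0              # semi-major axis [m]
    WGS84_E2 = 6.69437999014e-3      # first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
        """Convert geodetic coordinates to Earth-Centered Earth-Fixed (ECEF)."""
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
        x = (n + alt_m) * math.cos(lat) * math.cos(lon)
        y = (n + alt_m) * math.cos(lat) * math.sin(lon)
        z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
        return x, y, z

    def ecef_to_enu(p, ref_lat_deg, ref_lon_deg, ref_alt_m):
        """Express an ECEF point in the East-North-Up frame at the reference."""
        lat, lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
        ox, oy, oz = geodetic_to_ecef(ref_lat_deg, ref_lon_deg, ref_alt_m)
        dx, dy, dz = p[0] - ox, p[1] - oy, p[2] - oz
        east  = -math.sin(lon) * dx + math.cos(lon) * dy
        north = (-math.sin(lat) * math.cos(lon) * dx
                 - math.sin(lat) * math.sin(lon) * dy
                 + math.cos(lat) * dz)
        up    = (math.cos(lat) * math.cos(lon) * dx
                 + math.cos(lat) * math.sin(lon) * dy
                 + math.sin(lat) * dz)
        return east, north, up

    # A virtual anchor ~100 m north of the user appears near (0, 100, 0):
    anchor = geodetic_to_ecef(-34.9290, 138.6010, 50.0)
    print(ecef_to_enu(anchor, -34.9299, 138.6010, 50.0))

Unlike a temporary local frame, such globally registered anchors survive across sessions and devices, which is precisely what makes outdoor, persistent AR both powerful and hard.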
Tutorial 2: Extended Reality and Smart Immersive Environments
Presenters
Denis Gračanin, Virginia Polytechnic Institute and State University (Virginia Tech), USA
Krešimir Matković, Virginia Polytechnic Institute and State University (Virginia Tech), USA
Abstract
The goal of this tutorial is to introduce the participants to Smart Immersive Environments (SIE) and to discuss basic ideas and design principles for eXtended Reality (XR) applications for IoT-enabled Smart Built Environments (SBE). The participants will learn how to incorporate contextualized SBE data, information, and services into XR applications and how to use those data to improve user interactions with SIEs and to support better collaboration among users (co-located and distributed) in SIEs. Several example applications from the healthcare and education domains will be presented.
Tutorial 3: Cognitive Aspects of Interaction in Virtual and Augmented Reality Systems (CAIVARS)
Presenters
Manuela Chessa, University of Genoa, Italy
Giovanni Maria Farinella, University of Catania, Italy
Guido Maiello, University of Giessen, Germany
Dimitri Ognibene, University of Essex, UK
David Rudrauf, University of Geneva, Switzerland
Fabio Solari, University of Genoa, Italy
Abstract
The tutorial will analyze interaction in Virtual and Augmented Reality (VR/AR) systems from different points of view. Manuela Chessa, an expert in perceptual aspects of human-computer interaction, will discuss how interaction in AR affects our senses, and how misperception issues negatively affect interaction. Several technological solutions for interacting in VR and AR will be discussed and, finally, the challenges and opportunities of mixed reality (MR) systems will be analyzed. Fabio Solari, an expert in biologically inspired computer vision, will focus on foveated visual perception for action tasks and on the calibration and geometry of interactive AR. Guido Maiello, an innovative young neuroscientist, will present the link between our eyes and hands in the real world, with the aim of informing the design of better interaction techniques in VR and AR. Giovanni Maria Farinella, a leading computer vision expert, will discuss first person vision and egocentric perception. Dimitri Ognibene, whose research uniquely combines robotics with computational neuroscience and machine learning, will describe how perception is an active process in both humans and machines. David Rudrauf, a leading expert in mathematical psychology, will present a computational model to investigate human consciousness in virtual worlds. Overall, this unique panel of multi-disciplinary researchers will delineate a compelling argument in favor of investigating human cognition and perception in the context of AR/VR.
Tutorial 4: OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit
Presenters
Allen Y. Yang, UC Berkeley, USA
Mohammad Keshavarzi, UC Berkeley, USA
Abstract
This tutorial is a revised and updated edition of the first OpenARK tutorial presented at ISMAR 2019. The aim of this tutorial is to present an open-source augmented reality development kit, called OpenARK. OpenARK was founded by Dr. Allen Yang at UC Berkeley in 2015. Since then, the project has received high-impact awards and visibility. Currently OpenARK is being used by several industrial partners, including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals. In the same year in China, OpenARK also won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition, the largest such competition in China. OpenARK currently also receives funding support from the Intel RealSense project and the ONR. OpenARK includes a multitude of core functions critical to AR developers and future products. These functions include multi-modality sensor calibration, depth-based gesture detection, depth-based deformable avatar tracking, and SLAM and 3D reconstruction. All functions are based on state-of-the-art real-time algorithms and are coded to be efficient on mobile-computing platforms.
Another core component of OpenARK is its open-source depth perception databases. Currently we have made two unique databases available to the community: one on depth-based gesture detection, and the other on millimeter-accuracy indoor and outdoor large-scale scene geometry models with AR attribute labeling. We will give an overview of our effort in the design and construction of these databases, which could benefit the community at large. We will also discuss our effort to make depth-based perception easily accessible to application developers, who may not have, and should not be forced to acquire, a deep understanding of 3D point-cloud and reconstruction algorithms. The last core component of OpenARK is an interpreter of 3D scene layouts and their compatible AR attributes, based on generative design principles first developed for creating architectural layouts. We will discuss the fundamental concepts and algorithms of generative design and how it can be used to interpret common 3D scenes and their attributes for intuitive AR application development.
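To make "depth-based perception" concrete, the following sketch shows the standard pinhole back-projection that underlies functions such as gesture detection and 3D reconstruction. Note that this is an illustration only, not OpenARK's actual API; the function name and the camera intrinsics are our assumptions.

    # Illustrative sketch (not OpenARK code): lift every depth pixel into a
    # 3D point using the camera intrinsics (fx, fy, cx, cy).
    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Convert a depth image (meters, HxW) into an (N, 3) point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]   # drop pixels with no depth reading

    # Hypothetical 640x480 depth frame with intrinsics typical of a consumer sensor:
    depth = np.random.uniform(0.5, 4.0, size=(480, 640))
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)   # (307200, 3)

Every higher-level capability listed above, from gesture detection to SLAM, consumes point clouds of this kind, which is why the SDK aims to hide this step from application developers.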
Tutorial 5: TrackingExpert+: An Open-Source Library for Object Detection and Tracking
Presenters
Rafael Radkowski, Iowa State University, USA
Sindhura Challa, Iowa State University, USA
Abstract
The tutorial will introduce TrackingExpert+, an open-source library written in C/C++, which provides functionality for vision-based object detection and 6-DoF pose estimation for augmented reality applications on Windows platforms. It was developed with an industrial augmented reality (AR) context in mind, to support applications such as assembly training, maintenance, and quality control. Many AR applications in this area need to detect, track, and visually annotate individual parts or components on a workbench. These features allow an application or application designer to align visual instructions properly and provide means for quality control, e.g., verifying whether a part is assembled. The TrackingExpert+ functionality relies on range cameras and point clouds extracted from depth images. Object detection and pose estimation are model-based: the tool requires a 3D point cloud model of any asset of interest. Internally, it uses a statistical pattern matching algorithm that utilizes the distribution of surface curvatures to detect and label objects in a scene. The library has been successfully employed in many AR applications of this kind. It is now available, with the support of the U.S. National Institute of Standards and Technology, as an open-source release on GitHub under the MIT license: https://github.com/rafael-radkowski/TrackingExpertPlus. A plugin for Unity is under development.
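To give a flavor of curvature-based matching, the sketch below estimates a per-point curvature measure via PCA of local neighborhoods; histograms of such values are one common way to characterize a curvature distribution. This is an illustrative sketch under our own assumptions, not TrackingExpert+ code, and the library's actual algorithm may differ.

    # Illustrative per-point curvature proxy: surface variation
    # lambda_0 / (lambda_0 + lambda_1 + lambda_2) from local PCA.
    import numpy as np
    from scipy.spatial import cKDTree

    def surface_variation(points, k=20):
        """Curvature proxy per point; 0 means locally flat."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        curv = np.empty(len(points))
        for i, nbrs in enumerate(idx):
            nb = points[nbrs] - points[nbrs].mean(axis=0)    # center neighborhood
            eigvals = np.linalg.eigvalsh(nb.T @ nb)          # ascending order
            curv[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
        return curv

    # Points sampled on a plane should have near-zero surface variation:
    rng = np.random.default_rng(0)
    plane = np.c_[rng.uniform(0, 1, 1000), rng.uniform(0, 1, 1000), np.zeros(1000)]
    print(surface_variation(plane).max())   # ~0

Because curvature is invariant to rigid motion, comparing the distribution of such values between a scene segment and the 3D model of a part is a natural basis for detecting and labeling that part regardless of its pose.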
Tutorial 6: The Replication Crisis in Empirical Science: Implications for Human Subject Research in Mixed Reality
Presenters
J. Edward Swan II, Mississippi State University, USA
Mohammed Safayet Arefin, Mississippi State University, USA
Abstract
This tutorial will first discuss the replication crisis in empirical science. This term was coined to describe recent significant failures to replicate empirical findings in a number of fields, including medicine and psychology. In many cases, over 50% of previously reported results could not be replicated. This fact has shaken the foundations of these fields: Can empirical results really be believed? Should, for example, medical decisions really be based on empirical research? How many psychological findings can we believe? After describing the crisis, the tutorial will revisit enough of the basics of empirical science to explain the origins of the replication crisis. The key issue is that hypothesis testing, which in empirical science is used to establish truth, is the result of a probabilistic process. However, the human mind is wired to reason absolutely: humans have a difficult time understanding probabilistic reasoning. The tutorial will discuss some of the ways that funding agencies, such as the US National Institutes of Health (NIH), have responded to the replication crisis, for example by funding replication studies and by requiring that grant recipients publicly post anonymized data. Other professional organizations, including IEEE, have recently begun efforts to enhance the replicability of published research. Finally, the tutorial will consider how the Virtual Environments community might respond to the replication crisis. In particular, in our community the reviewing process often considers work that involves systems, architectures, or algorithms. In these cases, the reasoning behind the correctness of the results is usually absolute. Therefore, the standard for accepting papers is that the finding exhibits novelty; to some degree, the result should be surprising. However, this standard does not work for empirical studies (which typically involve human experimental subjects). Because empirical reasoning is probabilistic, important results need to be replicated, sometimes multiple times, and by different laboratories. As the replications mount, the field is justified in placing increasing belief in the results. In other words, a field that always requires surprise in order to accept a paper reporting empirical results is a field that will not progress in empirical knowledge. The tutorial will end with a call for the community to be more accepting of replication studies. In addition, the tutorial will consider whether actions taken by other fields in response to the replication crisis might also be advisable for the Virtual Environments community.
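The probabilistic nature of hypothesis testing can be made tangible with a small simulation. The sketch below (our own illustration; the base rate, power, and effect size are assumptions, not numbers from the tutorial) shows that when studies are underpowered and true effects are not the majority, a sizable share of p < 0.05 findings are false positives, so exact replications of "significant" results are expected to fail regularly.

    # Monte Carlo illustration of the false discovery rate among
    # significant results from small two-group studies.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_studies, n_subjects, effect = 10_000, 20, 0.5   # per-group n, Cohen's d
    true_effect = rng.random(n_studies) < 0.3         # 30% of hypotheses are true

    significant, significant_and_false = 0, 0
    for is_true in true_effect:
        mu = effect if is_true else 0.0
        a = rng.normal(mu, 1.0, n_subjects)           # treatment group
        b = rng.normal(0.0, 1.0, n_subjects)          # control group
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            significant += 1
            significant_and_false += bool(not is_true)

    print(f"significant results: {significant}")
    print(f"false discovery rate among them: {significant_and_false / significant:.2f}")

With these assumed numbers, roughly a quarter of all significant results reflect no true effect at all, which is exactly why accumulating replications, rather than a single surprising result, is what justifies belief.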
Tutorial 7: Storytelling for Virtual Reality
Presenter
Mirjam Vosmeer, Hogeschool van Amsterdam/Amsterdam University of Applied Sciences, The Netherlands
Abstract
In this tutorial, I will present the research program on Storytelling for Virtual Reality that I have conducted in recent years. After describing my own background and the research questions that led up to my current investigations, I will present the projects and installations that I have worked on with my students and industry partners, and discuss the outcomes and insights that we gained from them. Subsequently, I will explain how these insights have led us to propose a model that differentiates between physical and narrative interaction in VR, and how we intend to apply this model within our upcoming research project, VR for Diversity.
©2020 by ISMAR
Sponsored by the IEEE Computer Society Visualization and Graphics Technical Committee and ACM SIGGRAPH