Demonstrations
Aipan VR: A Virtual Reality Experience for Preserving Uttarakhand’s Traditional Art Form
Nishant Chaudhary, Mihir Raj, Richik Bhattacharjee, Anmol Srivastava, Rakesh Sah, Pankaj Badoni
An End-to-End Mixed Reality Product for Interior Home Furnishing
Salma Jiddi, Brian Pugh, Qiqin Dai, Luis Puig, Nektarios Lianos, Paul Gauthier, Brian Totty, Angus Dorbie, Jianfeng Yin, Kevin Wong
Short abstract
In this work, we consider the challenge of achieving a coherent geometric and photometric blending of real and virtual worlds in the context of an innovative Mixed Reality (MR) indoor application. Using consumer phones, the proposed solution takes as input color images of a scene to produce a magazine-quality stitched panorama, recover its dense 3D layout, and infer its lighting model parameters. Our system handles a large variety of real-world scenes where layout, texture and material properties vary throughout the full spectrum.
AQVARIUM: A mixed reality snorkeling system
Juri Platonov, Pawel Kaczmarczyk
Short abstract
The AQVARIUM system consists of a waterproof smartphone housing, a lens holder and a localization module. The smartphone housing attaches to the front of a diving mask on the outside, with the lens holder mounted correspondingly on the inside. So far, so ordinary. The real highlight is the localization module, which is attached to the top of the snorkel. Using four small base stations placed at the edge of the pool, the localization module computes its position in the pool and transmits it wirelessly to the smartphone. A WiFi repeater, also built into the localization module, handles communication between the devices of multiple users, so each user appears at the right place in the virtual world, represented by a freely selectable avatar. A specially developed app displays the virtual content and handles system calibration.
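The abstract leaves the positioning math unstated; range-based systems of this kind typically recover position by multilateration against the known beacon coordinates. A minimal least-squares sketch (hypothetical pool-corner beacons and ranges, not the authors' implementation):

```python
import numpy as np

def multilaterate(beacons: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Least-squares position from ranges to known beacons.

    Subtracting the first range equation from the rest linearizes
    ||x - b_i||^2 = r_i^2 into A x = c, solvable directly.
    """
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    c = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, c, rcond=None)
    return pos

# Hypothetical beacons at the pool corners (metres), near the surface.
beacons = np.array([[0.0, 0.0, 0.0], [25.0, 0.0, 0.0],
                    [25.0, 12.5, 0.0], [0.0, 12.5, 0.0]])
truth = np.array([10.0, 6.0, 0.0])
ranges = np.linalg.norm(beacons - truth, axis=1)
# Coplanar beacons only constrain the horizontal position; lstsq
# returns the minimum-norm solution, so z comes back as 0 here.
print(multilaterate(beacons, ranges))   # ~[10.  6.  0.]
```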
Augmented Mirrors
Alejandro Martin-Gomez
Short abstract
A core aspect of Augmented Mirrors is that it enables users to dynamically interact with a real mirror while providing a reflected view of the real and virtual content of an augmented reality scene. To provide the reflected view, our concept uses the poses of an observer and a real mirror to place a virtual camera at the position of the observer's reflection behind the physical mirror. A virtual reflection is then generated by rendering the scene from the viewpoint of this virtual camera into a texture that is applied to the real mirror in screen space using the observer's viewpoint. To avoid rendering virtual objects behind the mirror, we use the mirror's plane as the near plane of the virtual camera.
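The reflected-camera construction can be illustrated by mirroring the observer's eye position across the mirror plane; the mirror plane then doubles as the virtual camera's (oblique) near clipping plane. A minimal sketch with hypothetical poses:

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Mirror a 3D point across the plane given by a point and normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

# Hypothetical poses: observer eye at 1.7 m, vertical mirror at x = 2.
eye = np.array([0.5, 1.7, 0.0])
mirror_point = np.array([2.0, 1.5, 0.0])
mirror_normal = np.array([-1.0, 0.0, 0.0])      # faces the observer

# The virtual camera sits at the reflected eye position and renders the
# scene into a texture; using the mirror plane as its (oblique) near
# clipping plane culls anything behind the glass.
virtual_cam = reflect_point(eye, mirror_point, mirror_normal)
print(virtual_cam)                              # [3.5 1.7 0. ]
```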
EmbodiMap VR. Extending body-mapping into the third dimension.
Volker Kuchelmeister, Jill Bennett, Gail Kenning, Natasha Ginnivan, Melissa Neidorf
Short abstract
EmbodiMap VR is a therapeutic research tool that enables users to engage with and map their feelings, thoughts and emotions, and how these are experienced within the body. Supporting fEEL’s research into felt experience and drawing on insights from somatic and sensori-motor psychotherapies, EmbodiMap invites participants to engage with a virtual 3D facsimile of the body, entering inside this form and using the tool to paint sensations as they are experienced. It promotes a palpable, interactive engagement with the ‘avatar’ body. EmbodiMap runs on the portable Oculus Quest VR platform and supports multiple users represented by avatars with a voice channel, a guided meditation version, an optional hand-tracking UI, an Augmented Reality spectator view, interactive poseable bodies, 3D model export, and a large selection of figures and props to choose from.
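The painting interaction can be pictured as blending a brush colour into mesh vertex colours with distance falloff. The sketch below uses toy stand-ins (random vertices, hypothetical brush parameters) and is not EmbodiMap's implementation:

```python
import numpy as np

def paint_vertices(vertices, colors, brush_pos, brush_radius, brush_rgb):
    """Blend a brush colour into vertex colours with linear falloff."""
    d = np.linalg.norm(vertices - brush_pos, axis=1)
    w = np.clip(1.0 - d / brush_radius, 0.0, 1.0)[:, None]
    return colors * (1.0 - w) + np.asarray(brush_rgb) * w

rng = np.random.default_rng(0)
verts = rng.uniform(-1.0, 1.0, (500, 3))    # toy stand-in for a body mesh
cols = np.ones((500, 3))                    # start unpainted (white)
cols = paint_vertices(verts, cols, np.array([0.0, 0.0, 0.0]), 0.3,
                      (1.0, 0.2, 0.2))      # dab of red near the centre
print(cols.min(axis=0))                     # some vertices now reddened
```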
Exploiting ARKit Depth Maps for Mixed Reality Home Design
Kevin Wong, Salma Jiddi, Yacine Alami, Philip Guindi, Brian Totty, Qing Guo, Michael Otrada, Paul Gauthier
Short abstract
Mixed reality technologies are increasingly relevant to the retail industry as it embraces digital commerce. In this work, we demonstrate the use of Apple’s new active Depth API in an ambitious, mixed reality experience being developed at IKEA, the world’s largest home furnishings retailer. Our solution offers advantages not widely available to consumers today, including the ability to capture portable and interactive room models that can be virtually furnished from anywhere, alone or collaboratively, with high fidelity rendering, expansive imagery, and fine occlusions.
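The "fine occlusions" mentioned above ultimately reduce to a per-pixel depth test between the captured scene depth and the rendered virtual content. A minimal compositing sketch on toy data (illustrative only, not the product's renderer):

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel depth test: show the virtual furnishing only where it
    is nearer than the captured scene depth -- the basic mechanism
    behind depth-based occlusion."""
    virt_in_front = virt_depth < real_depth
    return np.where(virt_in_front[..., None], virt_rgb, real_rgb)

h, w = 4, 4
real = np.zeros((h, w, 3)); real_d = np.full((h, w), 2.0)   # wall at 2 m
virt = np.ones((h, w, 3));  virt_d = np.full((h, w), 3.0)   # behind wall
virt_d[1:3, 1:3] = 1.0                  # part of the object pokes in front
print(composite_with_occlusion(real, real_d, virt, virt_d)[..., 0])
```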
Flower Factory: A Component-based Approach for Rapid Flower Modeling
Junjun Pan
Short abstract
Flower Factory is a component-based framework for rapid flower modeling. The entire process includes geometry generation and texture generation. The flowers are assembled by different components, and the shapes of these components are created using simple primitives such as points and splines. After the shapes are determined, the textures are synthesized automatically based on a predefined mask. The mask is designed according to a number of rules from real flowers. The entire modeling process can be controlled by a set of parameters that describe the physical attributes of the flowers.
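The component idea can be sketched as instancing one parameterized petal outline around the stem axis. The quadratic Bezier below is a crude stand-in for the authors' splines, and all parameters are illustrative:

```python
import numpy as np

def petal_profile(n_points: int = 20) -> np.ndarray:
    """2D petal outline from three control points (a quadratic-Bezier
    stand-in for the spline-based component shapes)."""
    p0, p1, p2 = np.array([0, 0.0]), np.array([0.5, 0.6]), np.array([0, 1.0])
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    return (1 - t)**2 * p0 + 2 * (1 - t) * t * p1 + t**2 * p2

def assemble_flower(n_petals: int = 5, petal_len: float = 1.0):
    """Instance one petal component n times around the vertical axis,
    driven by physical-attribute parameters (count, length)."""
    prof = petal_profile() * petal_len       # columns: radial x, height z
    petals = []
    for k in range(n_petals):
        a = 2.0 * np.pi * k / n_petals
        x, z = prof[:, 0], prof[:, 1]
        petals.append(np.stack([x * np.cos(a), x * np.sin(a), z], axis=1))
    return petals

flower = assemble_flower(n_petals=6, petal_len=1.2)
print(len(flower), flower[0].shape)          # 6 (20, 3)
```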
Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression
Uttaran Bhattacharya
Short abstract
In our demo, we first show four virtual agent gaits expressing four emotions: happy, sad, angry, and neutral. The virtual agents walk on user-driven trajectories marked with a red line. We generate these gaits using our trained autoregression network with added components for emotional expressiveness and trajectory-following. We then show how our virtual agents transition between different emotions as they walk on the user-driven trajectories. We compare our generated emotive gaits with a prior state-of-the-art method for gait generation, as well as with two ablated versions of our network: one without emotional expressiveness and the other without the ability to follow user-driven trajectories. Finally, we deploy our virtual agents in an AR environment and showcase their gaits as they walk with different emotions.
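The generation loop can be sketched as follows: each step feeds the previous pose, an emotion one-hot, and the next trajectory waypoint back into the model. The linear "network" here is a toy stand-in for the trained autoregression network, with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM, EMO_DIM = 16, 4                 # toy sizes, not the paper's
W = rng.normal(0, 0.1, (POSE_DIM, POSE_DIM + EMO_DIM + 2))

def step(prev_pose, emotion_onehot, waypoint):
    """One autoregressive step: next pose from the previous pose, an
    emotion one-hot, and the next 2D trajectory waypoint.
    (Stand-in linear model; the demo uses a trained network.)"""
    x = np.concatenate([prev_pose, emotion_onehot, waypoint])
    return np.tanh(W @ x)

def generate_gait(emotion: int, trajectory: np.ndarray) -> np.ndarray:
    pose = np.zeros(POSE_DIM)
    emo = np.eye(EMO_DIM)[emotion]        # e.g. 0=happy ... 3=neutral
    poses = []
    for waypoint in trajectory:           # follow the user-driven path
        pose = step(pose, emo, waypoint)
        poses.append(pose)
    return np.stack(poses)

path = np.stack([np.linspace(0, 5, 60), np.zeros(60)], axis=1)
print(generate_gait(emotion=0, trajectory=path).shape)   # (60, 16)
```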
HarpyGame: A Customizable Serious Game for Upper Limb Rehabilitation after Stroke
Gabriel Cyrino, Júlia Tannús, Edgard Lamounier, Alexandre Cardoso, Alcimar Soares
Short abstract
Stroke is the most common disease leading to dexterity impairment of the upper limbs. Serious games have emerged as an advantageous alternative to traditional rehabilitation therapies. This work presents the development of a customizable serious game based on Virtual Reality techniques to achieve a more natural and intuitive interface. The system consists of a serious game with a realistic environment, a control panel for patient management and game customization, and a database. Results indicated significant acceptance by patients and physiotherapists, implying the system’s potential use in the post-stroke rehabilitation of the upper limbs.
InSight AR. Virtual sculptures presented in a public exhibition.
Volker Kuchelmeister
Short abstract
InSight AR is a location-based Augmented Reality project and mobile phone app produced for the popular Sculpture by the Sea Bondi exhibition, to be held in Sydney, Australia, in late 2020. It forms uncanny relations between virtual sculptures, visitors, the environment and the art on site. The virtual sculptures are presented as 3D computer models with a semi-translucent, ghost-like appearance. They represent a volume and, at the same time, frame the surroundings and allow visitors to pose with them. The project has three parts: an outdoor exhibit using AR plane detection and geo-location to place virtual sculptures along a coastal walk; AR image tracking for an indoor exhibition that runs alongside Sculpture by the Sea; and a 3D map of the coastal walk, also presented in AR.
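Placing geo-located content in an AR session requires converting GPS coordinates into the session's local metric frame. A common equirectangular approximation, adequate over a coastal-walk scale (hypothetical coordinates; not necessarily the app's method):

```python
import numpy as np

EARTH_R = 6_378_137.0   # WGS-84 equatorial radius, metres

def geo_to_local(lat, lon, origin_lat, origin_lon):
    """Metres east/north of the AR session origin, via the
    equirectangular approximation (fine for sub-kilometre scenes)."""
    d_lat = np.radians(lat - origin_lat)
    d_lon = np.radians(lon - origin_lon)
    east = EARTH_R * d_lon * np.cos(np.radians(origin_lat))
    north = EARTH_R * d_lat
    return east, north

# Hypothetical sculpture and session-origin coordinates near Bondi.
print(geo_to_local(-33.8900, 151.2780, -33.8915, 151.2767))
```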
Mobile3DRecon: Real-time Monocular 3D Reconstruction on a Mobile Phone
Hanqing Jiang
Short abstract
The demo shows two examples, “Printer” and “Indoor Stairs”, both using our real-time monocular 3D reconstruction system on a MI8 phone to handle occlusions and collisions between virtual objects and real scenes for realistic AR effects. As a user navigates their environment with the mobile phone, our system tracks the phone’s 6DoF pose and estimates a depth map for each keyframe in real time. Each estimated depth map is incrementally fused into a dense surface mesh of the surrounding environment, which enables realistic AR effects for the user, including occlusions and collisions.
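The abstract does not spell out the fusion rule; the classic choice for incremental depth fusion is a per-voxel running weighted average of truncated signed distances (KinectFusion-style). A toy sketch, which may differ from the authors' exact scheme:

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_sdf, new_weight=1.0, max_weight=64.0):
    """Running weighted average used in classic TSDF fusion: each new
    keyframe's signed distances are blended into the volume, with the
    weight capped so old data can still be revised."""
    w = np.minimum(weight + new_weight, max_weight)
    tsdf = (tsdf * weight + new_sdf * new_weight) / np.maximum(w, 1e-6)
    return tsdf, w

# Toy 3-voxel example: fuse two keyframe observations.
tsdf, w = np.zeros(3), np.zeros(3)
tsdf, w = fuse_tsdf(tsdf, w, np.array([0.8, 0.1, -0.5]))
tsdf, w = fuse_tsdf(tsdf, w, np.array([0.6, 0.0, -0.4]))
print(tsdf)   # [ 0.7   0.05 -0.45]
```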
Multimedia Information Retrieval for Mixed Interaction Based on Cross-modal Retrieval and Hand Gesture Browsing
Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
Short abstract
In this demo, we present a novel multimedia information retrieval system designed for joint use with MR devices. We construct the system by combining two components: cross-modal retrieval and browsing by hand gestures. The cross-modal retrieval component enables users to retrieve desired multimedia content from an unlabeled database using a sentence query. We newly developed a retrieval system in which users input a query sentence by voice and browse the retrieval results with hand gestures. By pairing the developed system with MR devices, users can retrieve desired multimedia information without touching any device while cooking, driving, etc.
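Cross-modal retrieval of this kind typically embeds the sentence and every database item into a shared space and ranks by similarity. A minimal sketch with random stand-ins for the learned encoders (the demo trains real ones):

```python
import numpy as np

def cosine_rank(query_vec: np.ndarray, db_vecs: np.ndarray) -> np.ndarray:
    """Rank unlabeled database items by cosine similarity to a
    sentence embedding in a shared text-media space."""
    q = query_vec / np.linalg.norm(query_vec)
    d = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))             # best match first

# Toy embeddings standing in for encoder outputs.
rng = np.random.default_rng(1)
sentence_embedding = rng.normal(size=128)       # from the voice query
database_embeddings = rng.normal(size=(1000, 128))
print(cosine_rank(sentence_embedding, database_embeddings)[:5])
```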
Project Esky: Enabling High Fidelity Augmented Reality Content on an Open Source Platform
Damien Constantine Rompapas, Daniel Flores Quiros, Charlton Rodda, Bryan Christopher Brown, Noah Benjamin Zerkin, Alvaro Cassinelli
Short abstract
Watch in awe as Project Esky enables high fidelity AR content with Project NorthStar, featuring full hand interactions and UI/UX capabilities, spatial mapping, and authoring. This includes desktop demonstrations such as a piano hooked up to a live virtual synthesiser, an engine e-learning demo, and the Microsoft Mixed Reality Toolkit.
Radarmin: A Radar-Based Mixed Reality Theremin Setup
Ryo Hajika
Short abstract
Radarmin is a simple musical performance setup that creates an intuitive theremin-like experience. The system consists of a Leap Motion sensor, a Magic Leap mixed reality headset, and a miniaturized radar-based motion sensor. The two sensors translate a player’s gestural performance into musical notes to generate sound, while the Magic Leap headset displays a virtual environment that visualizes a performance guide alongside the sound. Players experience Radarmin through a rhythm game, in which they learn how to play a musical instrument with appropriate audio and visual feedback in mixed reality.
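A theremin-like instrument maps hand distance to pitch; an exponential mapping makes equal hand motion span equal musical intervals. The curve below is illustrative only, as the demo's actual response mapping is not specified:

```python
import numpy as np

def distance_to_pitch(d_m: float, d_min=0.05, d_max=0.60,
                      f_low=220.0, f_high=1760.0) -> float:
    """Theremin-style mapping: nearer hand -> higher pitch, exponential
    so each equal step in distance is an equal musical interval.
    (Hypothetical range and frequency limits.)"""
    t = np.clip((d_max - d_m) / (d_max - d_min), 0.0, 1.0)
    return f_low * (f_high / f_low) ** t

for d in (0.60, 0.33, 0.05):                  # hand distances in metres
    print(f"{d:.2f} m -> {distance_to_pitch(d):7.1f} Hz")
```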
RetroActivity: Rapidly Deployable Live Task Guidance Experiences
Andrey Konin, Shahram Najam Syed, Shakeeb Siddiqui, Sateesh Kumar, Quoc-Huy Tran, M. Zeeshan Zia
Short abstract
Our RetroActivity system automatically builds computational models of a complex physical task, such as a maintenance activity, given only a handful of recorded demonstrations of the task. Once such a model is built, RetroActivity can finely track the job status from live video to guide a worker through the task, provide independent training, and perform analytics. We enable ordinary AR developers to build AI-mediated feedback within hours, instead of requiring teams of specialized computer vision engineers to spend months building temporal causation rules on top of third-party visual recognition modules.
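Tracking job status against a handful of demonstrations suggests comparing live frame embeddings to per-step prototypes. The monotone nearest-prototype tracker below is a toy sketch, far simpler than the temporal model such a system would actually need:

```python
import numpy as np

def track_step(frame_emb, prototypes, current_step):
    """Advance task state when the live frame looks more like the next
    step's prototype than the current one (toy monotone tracker)."""
    if current_step + 1 >= len(prototypes):
        return current_step
    sims = prototypes @ frame_emb / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(frame_emb))
    return current_step + 1 if sims[current_step + 1] > sims[current_step] \
        else current_step

# Step prototypes averaged from a few demonstration videos (toy data).
rng = np.random.default_rng(2)
prototypes = rng.normal(size=(5, 64))          # 5 task steps
step = 0
for _ in range(30):                            # live video loop
    frame = prototypes[min(step + 1, 4)] + rng.normal(0, 0.5, 64)
    step = track_step(frame, prototypes, step)
print("reached step", step)
```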
RockemBot Boxing: Facilitating Long-Distance Real-Time Collaborative Interactions with Limited Hand Tracking Volumes
James Campbell, Eleanor Barnes, Jack Douglas Fraser, Bradley Twynham, Xuan Tien Pham, Nguyen Thu Hien, Geert Lugtenberg, Nishiki Yoshinari, Sarah Al Akkad, Andrew Gavin Taylor, Mark Billinghurst, Damien Constantine Rompapas
Short abstract
Be amazed as two live AR actors perform the rematch of the century! In this live demonstration, you will observe a virtual/AR boxing match with participants standing over 1.5 meters apart, the current social distancing standard. Afterwards, you can challenge our AR actors, or each other, using the VR port of RockemBot Boxing.
The Visit VR. Understanding the experience of living with dementia
Volker Kuchelmeister, Jill Bennett, Gail Kenning, Natasha Ginnivan, Melissa Neidorf, Chris Papadopoulos
Short abstract
The Visit is an interactive 6-degree-of-freedom real-time video installation and Virtual Reality experience, developed from an interdisciplinary research project conducted by artists and psychologists working with women living with dementia. Visitors are invited to sit with Viv, a life-sized, realistic and responsive character whose dialogue is scripted largely from verbatim interviews. The work draws us into a world of perceptual uncertainty, while at the same time confounding stereotypes and confronting fears about dementia. The characterisation has both scientific validity and the qualities of a rich, emotion-driven film narrative. The point of the work is to draw the viewer into the emotional and perceptual world of Viv.
VisuoTouch: Enabling Haptic Feedback in Augmented Reality through Visual Cues
G S Rajshekar Reddy, Damien Constantine Rompapas
Short abstract
Using hands to interact with virtual content is often difficult, as we have no haptic feedback without physical peripherals. VisuoTouch aims to psychologically evoke the feeling of haptic feedback via a visual cue that behaves against virtual objects the way a real hand would if the object were physical. In this demo, you will see how VisuoTouch is triggered as a live actor interacts with virtual content.
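The behaviour described, a visual hand that stops at virtual surfaces even when the tracked hand passes through, resembles the classic god-object/proxy technique. A minimal sketch clamping a fingertip to a sphere (hypothetical geometry, not VisuoTouch's implementation):

```python
import numpy as np

def proxy_position(tracked_p, center, radius):
    """God-object-style proxy: render the hand at its tracked position
    unless it penetrates the virtual sphere, then clamp it to the
    surface so the visual cue 'stops' like a real hand would."""
    v = tracked_p - center
    dist = np.linalg.norm(v)
    if dist >= radius:
        return tracked_p                        # free space: no change
    return center + v / max(dist, 1e-9) * radius

center, radius = np.array([0.0, 1.0, 0.5]), 0.1
inside = np.array([0.02, 1.0, 0.5])             # fingertip penetrates
print(proxy_position(inside, center, radius))   # pushed back to surface
```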
©2020 by ISMAR
Sponsored by the IEEE Computer Society Visualization and Graphics Technical Committee and ACM SIGGRAPH