Applied Sciences, Special Issue "New Frontiers in Virtual Reality: Methods, Devices and Applications", 2021
Impact of View-Dependent Image-Based Effects on Perception of Visual Realism and Presence in Virtual Reality Environments Created Using Multi-Camera Systems
Grégoire Dupont de Dinechin, Alexis Paljic, Jonathan Tanant
Abstract Several recent works have presented image-based methods for creating high-fidelity immersive virtual environments from photographs of real-world scenes. In this paper, we provide a user-centered evaluation of such methods by way of a user study investigating their impact on viewers' perception of visual realism and sense of presence. In particular, we focus on two specific elements commonly introduced by image-based approaches. First, we investigate the extent to which using dedicated image-based rendering algorithms to render the scene with view-dependent effects (such as specular highlights) causes users to perceive it as being more realistic. Second, we study whether making the scene fade out beyond a fixed volume in 3D space significantly reduces participants' feeling of being there, examining different sizes for this viewing volume. To provide details on the virtual environment used in the study, we also describe how we recreated a museum gallery for room-scale virtual reality using a custom-built multi-camera rig. The results of our study show that using image-based rendering to render view-dependent effects can effectively enhance the perception of visual realism and elicit a stronger sense of presence, even when it implies constraining the viewing volume to a small range of motion.
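Code sketch For intuition, here is a minimal, hypothetical Unity C# sketch of the viewing-volume fade-out studied in the paper: the scene's opacity falls off as the user's head leaves a fixed volume in 3D space. The shader property name ("_Fade") and the fade margin are illustrative assumptions, not the paper's actual implementation.
using UnityEngine;

// Hypothetical sketch (not the paper's code): fade the image-based scene
// out as the camera leaves a fixed axis-aligned viewing volume.
public class ViewingVolumeFade : MonoBehaviour
{
    public Renderer sceneRenderer;     // renderer of the image-based scene proxy
    public Bounds viewingVolume = new Bounds(Vector3.zero, Vector3.one);  // e.g. a 1 m cube
    public float fadeMargin = 0.1f;    // metres over which the scene fades out

    void LateUpdate()
    {
        Vector3 head = Camera.main.transform.position;
        // Distance from the head to the closest point of the volume (0 when inside).
        float outside = Vector3.Distance(head, viewingVolume.ClosestPoint(head));
        // 1 inside the volume, falling linearly to 0 over the fade margin.
        float fade = Mathf.Clamp01(1f - outside / fadeMargin);
        sceneRenderer.material.SetFloat("_Fade", fade);  // assumed shader property
    }
}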
BibTex @article{deDinechin2021Impact,
title = {Impact of View-Dependent Image-Based Effects on Perception of Visual Realism and Presence in Virtual Reality Environments Created Using Multi-Camera Systems},
author = {de Dinechin, Gr{\'e}goire Dupont and Paljic, Alexis and Tanant, Jonathan},
journal = {Applied Sciences},
volume = {11},
year = {2021},
number = {13},
article-number = {6173},
issn = {2076-3417},
url = {https://www.mdpi.com/2076-3417/11/13/6173},
doi = {10.3390/app11136173}
}
PDF
VIDEO
6th Workshop on Everyday Virtual Reality (WEVR), 2020
From Real to Virtual: An Image-Based Rendering Toolkit to Help Bring the World Around Us Into Virtual Reality
Grégoire Dupont de Dinechin, Alexis Paljic
Abstract The release of consumer-grade head-mounted displays has helped bring virtual reality (VR) to our homes, cultural sites, and workplaces, increasingly making it a part of our everyday lives. In response, many content creators have expressed renewed interest in bringing the people, objects, and places of our daily lives into VR, helping push the boundaries of our ability to transform photographs of everyday real-world scenes into convincing VR assets. In this paper, we present an open-source solution we developed in the Unity game engine as a way to make this image-based approach to virtual reality simple and accessible to all, to encourage content creators of all kinds to capture and render the world around them in VR. We start by presenting the use cases of image-based virtual reality, from which we discuss the motivations that led us to work on our solution. We then provide details on the development of the toolkit, specifically discussing our implementation of several image-based rendering (IBR) methods. Finally, we present the results of a preliminary user study focused on interface usability and rendering quality, and discuss paths for future work.
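Code sketch To give a concrete sense of what such IBR methods compute, below is a minimal, hypothetical C# sketch of angle-based camera blending in the spirit of unstructured lumigraph rendering: each source photograph is weighted by how closely its capture direction for a surface point matches the current view direction. The toolkit's actual implementation runs per-fragment on the GPU; this CPU version is only illustrative.
using UnityEngine;

// Illustrative sketch of angle-based blending weights for image-based
// rendering, computed for a single surface point.
public static class BlendingWeights
{
    // capturePositions: source camera positions; point: surface point
    // being shaded; viewer: current head position.
    public static float[] Compute(Vector3[] capturePositions, Vector3 point, Vector3 viewer)
    {
        var weights = new float[capturePositions.Length];
        float sum = 0f;
        Vector3 toViewer = (viewer - point).normalized;
        for (int i = 0; i < capturePositions.Length; i++)
        {
            Vector3 toCamera = (capturePositions[i] - point).normalized;
            // Higher weight when the capture direction agrees with the view direction.
            float angle = Vector3.Angle(toViewer, toCamera);
            weights[i] = 1f / Mathf.Max(angle, 1e-3f);
            sum += weights[i];
        }
        for (int i = 0; i < weights.Length; i++)
            weights[i] /= sum;  // normalize so the weights sum to 1
        return weights;
    }
}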
BibTex @inproceedings{deDinechin2020From,
title = {From Real to Virtual: An Image-Based Rendering Toolkit to Help Bring the World Around Us Into Virtual Reality},
booktitle = {6th Workshop on Everyday Virtual Reality ({WEVR})},
publisher = {{IEEE}},
author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
month = mar,
year = {2020}
}
PDF
Conference The 6th Workshop on Everyday Virtual Reality took place on 22 March 2020 as a virtual event (due to the pandemic-related lockdown). The workshop was co-located with the IEEE VR 2020 conference. It focused on "the investigation of well-known VR/AR/MR (XR) research themes in everyday contexts and scenarios other than research laboratories and specialist environments".
VIDEO
International Conference on Computer Animation and Social Agents (CASA), 2019
Virtual Agents from 360° Video for Interactive Virtual Reality
Grégoire Dupont de Dinechin, Alexis Paljic
Abstract Creating lifelike virtual humans for interactive virtual reality is a difficult task. Most current solutions rely either on crafting synthetic character models and animations, or on capturing real people with complex camera setups. As an alternative, we propose leveraging efficient learning-based models for human mesh estimation, and applying them to the popular form of immersive content that is 360° video. We demonstrate an implementation of this approach using available pre-trained models, and present user study results that show that the virtual agents generated with this method can be made more compelling by the use of idle animations and reactive verbal and gaze behavior.
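Code sketch As a simple illustration of the kind of reactive gaze behavior evaluated in the study, the hypothetical Unity C# sketch below turns an agent's head toward the user whenever the user comes within a given distance. The bone reference, distance threshold, and smoothing factor are assumptions for illustration, not the paper's implementation.
using UnityEngine;

// Hypothetical sketch: make a virtual agent's head track the user when
// the user is nearby; otherwise the idle animation keeps control.
public class ReactiveGaze : MonoBehaviour
{
    public Transform headBone;        // the agent's head joint
    public float gazeDistance = 3f;   // metres within which the agent reacts
    public float turnSpeed = 5f;      // smoothing factor for the head rotation

    void LateUpdate()  // runs after the animation system each frame
    {
        Transform user = Camera.main.transform;
        if (Vector3.Distance(user.position, headBone.position) > gazeDistance)
            return;  // user too far away: keep the idle animation's pose
        Quaternion look = Quaternion.LookRotation(user.position - headBone.position);
        headBone.rotation = Quaternion.Slerp(headBone.rotation, look, turnSpeed * Time.deltaTime);
    }
}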
BibTex @inproceedings{deDinechin2019Virtual,
location = {Paris, France},
series = {{CASA} '19},
title = {Virtual Agents from 360° Video for Interactive Virtual Reality},
isbn = {978-1-4503-7159-9},
url = {https://doi.org/10.1145/3328756.3328775},
doi = {10.1145/3328756.3328775},
booktitle = {Proceedings of the 32nd International Conference on Computer Animation and Social Agents},
publisher = {{ACM}},
author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
month = jul,
year = {2019},
pages = {75--78}
}
PDF
Conference CASA 2019 (32nd International Conference on Computer Animation and Social Agents) took place on 1-3 July 2019 in Paris, France. Organised in cooperation with ACM-SIGGRAPH and jointly with the 2019 International Conference on Intelligent Virtual Agents (IVA 2019), the conference presented works on "computer animation, embodied agents, social agents, virtual and augmented reality, and visualization".
VIDEO
3rd Digital Heritage International Congress (DigitalHERITAGE) held jointly with the 24th International Conference on Virtual Systems & Multimedia (VSMM), 2018
Cinematic Virtual Reality With Motion Parallax From a Single Monoscopic Omnidirectional Image
Grégoire Dupont de Dinechin, Alexis Paljic
Abstract Complementary advances in the fields of virtual reality (VR) and reality capture have led to a growing demand for VR experiences that enable users to convincingly move around in an environment created from a real-world scene. Most methods address this issue by first acquiring a large number of image samples from different viewpoints. However, this is often costly in both time and hardware requirements, and is incompatible with the growing selection of existing, casually acquired 360-degree images available online. In this paper, we present a novel solution for cinematic VR with motion parallax that instead only uses a single monoscopic omnidirectional image as input. We provide new insights on how to convert such an image into a scene mesh, and discuss potential uses of this representation. We notably propose using a VR interface to manually generate a 360-degree depth map, visualized as a 3D mesh and modified by the operator in real time. We applied our method to different real-world scenes, and conducted a user study comparing meshes created from depth maps of different levels of accuracy. The results show that our method enables perceptually comfortable VR viewing when users move around in the scene.
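Code sketch For intuition on the core conversion, here is a simplified, hypothetical C# sketch: each pixel of the equirectangular depth map defines a direction on the unit sphere, which is scaled by the stored depth to obtain a mesh vertex; neighbouring pixels are then connected into triangles. Pole handling, triangle winding, and mesh resolution are simplified relative to the paper.
using UnityEngine;
using System.Collections.Generic;

// Simplified sketch: turn a 360-degree depth map into a scene mesh by
// displacing points on the unit sphere by the sampled depth.
public static class SphereMeshFromDepth
{
    // depth: equirectangular depth map, depth[y, x] in metres.
    public static Mesh Build(float[,] depth)
    {
        int h = depth.GetLength(0), w = depth.GetLength(1);
        var vertices = new Vector3[w * h];
        for (int y = 0; y < h; y++)
        {
            float phi = Mathf.PI * (y + 0.5f) / h;             // polar angle
            for (int x = 0; x < w; x++)
            {
                float theta = 2f * Mathf.PI * (x + 0.5f) / w;  // azimuth
                // Direction on the unit sphere for this pixel, scaled by depth.
                var dir = new Vector3(Mathf.Sin(phi) * Mathf.Cos(theta),
                                      Mathf.Cos(phi),
                                      Mathf.Sin(phi) * Mathf.Sin(theta));
                vertices[y * w + x] = dir * depth[y, x];
            }
        }
        // Connect neighbouring pixels into quads (two triangles each),
        // wrapping around horizontally; flip the winding if the mesh is
        // to be viewed from inside the sphere.
        var triangles = new List<int>();
        for (int y = 0; y < h - 1; y++)
            for (int x = 0; x < w; x++)
            {
                int x1 = (x + 1) % w;
                int a = y * w + x, b = y * w + x1;
                int c = (y + 1) * w + x, d = (y + 1) * w + x1;
                triangles.AddRange(new[] { a, b, c, b, d, c });
            }
        var mesh = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
        mesh.vertices = vertices;
        mesh.triangles = triangles.ToArray();
        mesh.RecalculateNormals();
        return mesh;
    }
}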
BibTex @inproceedings{deDinechin2018Cinematic,
title = {Cinematic Virtual Reality With Motion Parallax From a Single Monoscopic Omnidirectional Image},
url = {https://doi.org/10.1109/digitalheritage.2018.8810116},
doi = {10.1109/digitalheritage.2018.8810116},
booktitle = {2018 3rd Digital Heritage International Congress ({DigitalHERITAGE}) held jointly with 2018 24th International Conference on Virtual Systems \& Multimedia ({VSMM} 2018)},
author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
publisher = {{IEEE}},
month = oct,
year = {2018},
pages = {1--8}
}
PDF
Conference Digital Heritage 2018 (New Realities - Authenticity and Automation in the Digital Age, 3rd International Congress and Expo) took place on 26-30 October 2018 in San Francisco, USA. Focused on "digital technology for documenting, conserving and sharing heritage", it included the 24th International Conference on Virtual Systems and MultiMedia (VSMM 2018) and the 25th Conference of the Pacific Neighborhood Consortium (PNC 2018). Research tracks included works on reality capture (digitization, scanning, remote sensing, …), reality computing (databases and repositories, GIS, CAD, …) and reality creation (VR, AR, MR, …).
Datasets The four color and depth 360° image pairs used for the user study (Garden, Bookshelves, Snow, Museum) can be downloaded by clicking here.
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2020
Presenting COLIBRI VR, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality
Grégoire Dupont de Dinechin, Alexis Paljic
Abstract From image-based virtual tours of apartments to digital museum exhibits, transforming photographs of real-world scenes into visually faithful virtual environments has many applications. In this paper, we present our development of a toolkit that places recent advances in the field of image-based rendering (IBR) into the hands of virtual reality (VR) researchers and content creators. We map out how these advances can improve the way we usually render virtual scenes from photographs. We then provide insight into the toolkit’s design as a package for the Unity game engine and share details on core elements of our implementation.
BibTex @inproceedings{deDinechin2020Presenting,
title = {Presenting {COLIBRI VR}, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality},
booktitle = {2020 {IEEE} Conference on Virtual Reality and {3D} User Interfaces ({VR})},
publisher = {{IEEE}},
author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
month = mar,
year = {2020}
}
PDF
Conference IEEE VR 2020 (the 27th IEEE Conference on Virtual Reality and 3D User Interfaces) took place on 22-26 March 2020 as a virtual event (due to the pandemic-related lockdown). IEEE VR is "the premier international event for the presentation of research results in the broad area of virtual reality (VR)".
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2020
Demonstrating COLIBRI VR, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality
Grégoire Dupont de Dinechin, Alexis Paljic
Abstract This demonstration showcases an open-source toolkit we developed in the Unity game engine to enable authors to render real-world photographs in virtual reality (VR) with motion parallax and view-dependent highlights. First, we illustrate the toolset's capabilities by using it to display interactive, photorealistic renderings of a museum's mineral collection. Then, we invite audience members to be rendered in VR using our toolkit, thus providing a live, behind-the-scenes look at the process.
BibTex @inproceedings{deDinechin2020Demonstrating,
title = {Demonstrating {COLIBRI VR}, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality},
booktitle = {2020 {IEEE} Conference on Virtual Reality and {3D} User Interfaces ({VR})},
publisher = {{IEEE}},
author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
month = mar,
year = {2020}
}
PDF
Conference IEEE VR 2020 (the 27th IEEE Conference on Virtual Reality and 3D User Interfaces) took place on 22-26 March 2020 as a virtual event (due to the pandemic-related lockdown). IEEE VR is "the premier international event for the presentation of research results in the broad area of virtual reality (VR)".
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2020
Illustrating COLIBRI VR, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality
Grégoire Dupont de Dinechin, Alexis Paljic
Abstract This video submission illustrates the Core Open Lab on Image-Based Rendering Innovation for Virtual Reality (COLIBRI VR), an open-source toolkit we developed to help authors render photographs of real-world people, objects, and places as responsive 3D assets in VR. We integrated COLIBRI VR as a package for the Unity game engine: in this way, the toolset's methods can easily be accessed from a convenient graphical user interface, and be used in conjunction with the game engine's built-in tools to quickly build interactive virtual reality experiences. Our primary goal is to help users render real-world photographs in VR in a way that provides view-dependent rendering effects and compelling motion parallax. For instance, COLIBRI VR can be used to render captured specular highlights, such as the bright reflections on the facets of a mineral. It also enables providing motion parallax from estimated geometry, e.g. from a depth map associated with a 360° image. We achieve this by implementing efficient image-based rendering methods, which we optimize to run at high framerates for VR. We make the toolkit openly available online, so that it might be used to more easily learn about and apply image-based rendering in the context of virtual reality content creation.
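Code sketch As a concrete two-view example of how blending source photographs produces view-dependent highlights, the hypothetical Unity C# sketch below computes a blend factor between two captured views based on where the current view direction lies between their capture directions. The material property name ("_BlendAB") is an assumption, and the toolkit's actual method generalizes this per-fragment over many source views.
using UnityEngine;

// Hypothetical sketch: blend between two source photographs depending on
// the current view direction, so that a captured specular highlight
// appears to shift as the user moves (illustrative only).
public class TwoViewHighlightBlend : MonoBehaviour
{
    public Transform captureA, captureB;  // poses of the two source cameras
    public Material ibrMaterial;          // material with assumed "_BlendAB" property

    void Update()
    {
        Vector3 target = transform.position;  // object being rendered
        Vector3 dirView = (Camera.main.transform.position - target).normalized;
        Vector3 dirA = (captureA.position - target).normalized;
        Vector3 dirB = (captureB.position - target).normalized;
        // Blend factor: 0 when the view matches capture A, 1 when it matches B.
        float angleA = Vector3.Angle(dirView, dirA);
        float angleB = Vector3.Angle(dirView, dirB);
        float t = angleA / Mathf.Max(angleA + angleB, 1e-3f);
        ibrMaterial.SetFloat("_BlendAB", t);
    }
}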
BibTex @inproceedings{deDinechin2020Illustrating,
title = {Illustrating {COLIBRI VR}, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality},
booktitle = {2020 {IEEE} Conference on Virtual Reality and {3D} User Interfaces ({VR})},
publisher = {{IEEE}},
author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
month = mar,
year = {2020}
}
PDF
Conference IEEE VR 2020 (the 27th IEEE Conference on Virtual Reality and 3D User Interfaces) took place on 22-26 March 2020 as a virtual event (due to the pandemic-related lockdown). IEEE VR is "the premier international event for the presentation of research results in the broad area of virtual reality (VR)".
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2019
Automatic Generation of Interactive 3D Characters and Scenes for Virtual Reality from a Single-Viewpoint 360-Degree Video
Grégoire Dupont de Dinechin, Alexis Paljic
Abstract This work addresses the problem of using real-world data captured from a single viewpoint by a low-cost 360-degree camera to create an immersive and interactive virtual reality scene. We combine different existing state-of-the-art data enhancement methods based on pre-trained deep learning models to quickly and automatically obtain 3D scenes with animated character models from a 360-degree video. We provide details on our implementation and insight on how to adapt existing methods to 360-degree inputs. We also present the results of a user study assessing the extent to which virtual agents generated by this process are perceived as present and engaging.
BibTex @inproceedings{deDinechin2019Automatic,
title = {Automatic Generation of Interactive {3D} Characters and Scenes for Virtual Reality from a Single-Viewpoint 360-Degree Video},
url = {https://doi.org/10.1109/vr.2019.8797969},
doi = {10.1109/vr.2019.8797969},
booktitle = {2019 {IEEE} Conference on Virtual Reality and {3D} User Interfaces ({VR})},
author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
publisher = {{IEEE}},
month = mar,
year = {2019},
pages = {908--909}
}
PDF
Conference IEEE VR 2019 (26th IEEE Conference on Virtual Reality and 3D User Interfaces) took place on 23-27 March 2019 in Osaka, Japan. Focused on "all areas related to virtual reality (VR), including augmented reality (AR), mixed reality (MR), and 3D user interfaces (3DUIs)", the conference was an opportunity to present works on technologies and applications (computer graphics, immersive 360° video, modelling and simulation, …), multi-sensory experiences (virtual humans, haptics, perception and cognition, …) and interaction (collaborative interaction, locomotion and navigation, multimodal interaction, …).