Title: Shared Mixed Reality Spaces: Visualization and Interaction Techniques for Hybrid User Groups
Language: English
Author: Freiwald, Jann Philipp
Keywords: Mixed Reality; Collaboration; User study; Human-Computer Interaction; Metaverse
GND subject headings: Erweiterte Realität <Informatik>; Virtuelle Realität; Avatar <Informatik>; CGI <Computergrafik>
Date of publication: 2022
Date of oral defense: 2022-08-29
The metaverse is commonly described as a plethora of multi-user mixed reality (MR) applications, in which users can interact with the virtual environment and each other through shared MR spaces. However, several aspects of the metaverse are still unclear, and fundamental research on human collaboration and interaction in hybrid MR setups is required. We use "hybrid setups" as an umbrella term for the wide variance between individual users with regard to their technical setup, the interaction forms used, and their virtual representations. This includes aspects such as different hardware devices, sitting and standing configurations, the appearance of avatars, as well as locomotion and its visualization. This dissertation examines these components from both technical and user experience perspectives, including metrics such as usability, sense of presence, and cybersickness.

First, we present techniques that enable effective hybrid collaboration between a variety of devices. More specifically, two types of MR devices require further research in terms of integration into shared MR spaces: (i) mobile devices and (ii) video see-through head-mounted displays. To integrate mobile devices, video streaming and camera-based tracking technologies are combined in a smartphone application called VR Invite, which acts as a handheld viewport into a local or remote shared MR space. The results of the accompanying experiment indicate that the opportunity for direct interaction positively influences the sense of presence of co-located mobile users. For video see-through head-mounted displays, an image reprojection algorithm named Camera Time Warp is introduced, which stabilizes virtual objects and virtual human representations in the real-world reference frame. This technique addresses one of the major drawbacks of video see-through devices: the perceptual registration error caused by the inherent camera-to-photon latency. The user study demonstrates a positive effect of the Camera Time Warp technique on the participants' spatial orientation and subjective levels of cybersickness.
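The abstract does not give implementation details for Camera Time Warp, but the core idea of such a reprojection can be sketched in a few lines. The yaw-only model below is a hypothetical illustration, not the dissertation's actual algorithm: the virtual layer is re-aligned to the head pose that was current when the displayed camera frame was captured, rather than the latest render-time pose, so virtual content stays registered with the delayed video image.

```python
def timewarp_screen_azimuth(object_azimuth, render_yaw, capture_yaw):
    """Screen azimuth (radians) of a world-fixed virtual object after
    a hypothetical yaw-only camera time warp.

    render_yaw  -- latest head yaw at render time
    capture_yaw -- head yaw when the displayed camera frame was captured

    Without the warp, the renderer would project with render_yaw; the
    corrective rotation (render_yaw - capture_yaw) re-aligns the virtual
    layer with the older camera image.
    """
    naive = object_azimuth - render_yaw       # projection with latest pose
    correction = render_yaw - capture_yaw     # pose delta over the latency
    return naive + correction                 # == object_azimuth - capture_yaw
```

A real object at the same world azimuth appears in the delayed camera frame at `object_azimuth - capture_yaw`, so after the warp the virtual and real content coincide on screen despite the camera-to-photon latency.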

Second, we investigate visualization techniques within shared MR spaces to support spatial awareness as well as the sense of spatial presence and co-presence. In the context of human-to-human interactions, techniques to visualize a user's gaze direction and their points of interest are compared. The experiment shows that visualizations coupled to virtual human representations evoked the highest degree of co-presence, while task performance was highest when additional viewports were provided. In another user study, we contrasted virtual human representations, as well as visualizations of their locomotion, from first- and third-person perspectives. The results emphasize the importance of natural-looking human gait animations for increasing the observer's spatial awareness and co-presence.

Third, we introduce novel techniques for locomotion and its visualization as implementations of embodied interactions. Based on the findings of our prior experiments, the focus for these locomotion user interfaces was the visualization of human gait from first- and third-person perspectives. A locomotion user interface for seated MR experiences called VR Strider is presented, which maps the cycling biomechanics of the user's legs to virtual walking movements. An experiment revealed a significant positive effect on the participants' angular and distance estimation, sense of presence, and feeling of comfort compared to other established locomotion techniques. A second, confirmatory study indicates the necessity of synchronized avatar animations for virtual locomotion.
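The pedaling-to-walking mapping behind such an interface can be illustrated with a hedged sketch. The function names, the crank radius, and the linear cadence-to-speed gain below are assumptions for illustration only; the dissertation's actual transfer function is not specified in this abstract.

```python
import math

def strider_walk_speed(cadence_rpm, crank_radius_m=0.17, gain=1.0):
    """Map pedal cadence to a virtual walking speed (m/s).

    Hypothetical linear mapping: the tangential speed of the pedal
    (2 * pi * r * cadence / 60) is scaled by a tunable gain.
    """
    pedal_speed = 2.0 * math.pi * crank_radius_m * cadence_rpm / 60.0
    return gain * pedal_speed

def gait_phase(crank_angle_rad):
    """Map the crank angle to a normalized [0, 1) gait-cycle phase so a
    leg animation can stay synchronized with the pedaling motion."""
    return (crank_angle_rad / (2.0 * math.pi)) % 1.0
```

Coupling the gait phase to the crank angle, rather than to elapsed time, is one plausible way to realize the synchronized avatar animations the confirmatory study found necessary.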

Lastly, we present a toolkit for continuous and noncontinuous locomotion with matching human representations based on intelligent virtual agents. This toolkit includes two techniques for avatar visualization, i.e., Smart Avatars and Puppeteer, as well as two for locomotion, i.e., PushPull and Stuttered Locomotion. Smart Avatars deliver continuous full-body human representations for noncontinuous locomotion in shared VR spaces. They can smoothly visualize Stuttered Locomotion, a technique which transforms continuous user movement into a series of short-distance teleport steps. Stuttered Locomotion is also applicable to our motion-based PushPull technique, which employs a dynamic velocity multiplier based on the user's hand pose. An experiment indicates that Smart Avatars were preferred over regular virtual human representations, while remote-controlling one's own virtual body through Puppeteer produced body ownership close to first-person interactions. The experimental evaluation shows that both PushPull and Stuttered Locomotion significantly reduce the occurrence of cybersickness. In summary, this work offers novel techniques and insights for the implementation of hybrid MR setups with respect to virtual interactions and their visualization.
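The quantization behind Stuttered Locomotion can be sketched as follows. The one-dimensional simplification and the step length are illustrative assumptions, not parameters taken from the dissertation.

```python
class StutteredLocomotion:
    """Turn continuous input displacement into discrete teleport steps.

    Continuous movement is accumulated; whenever the accumulator exceeds
    step_length, the user is teleported forward by one whole step. This
    avoids the smooth optical flow associated with cybersickness.
    """

    def __init__(self, step_length=0.5):
        self.step_length = step_length
        self.accumulated = 0.0   # continuous input not yet applied
        self.position = 0.0      # 1-D virtual position (meters)

    def update(self, delta):
        """Feed one frame's continuous displacement; returns the position,
        which only ever changes in multiples of step_length."""
        self.accumulated += delta
        while self.accumulated >= self.step_length:
            self.position += self.step_length   # discrete teleport step
            self.accumulated -= self.step_length
        return self.position
```

Paired with a full-body representation such as Smart Avatars, the avatar can walk the intervening distance continuously even though the viewpoint jumps in discrete steps.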
URL: https://ediss.sub.uni-hamburg.de/handle/ediss/9792
URN: urn:nbn:de:gbv:18-ediss-102966
Document type: Dissertation
Advisor: Steinicke, Frank
Appears in collections: Electronic dissertations and habilitations

Files for this resource:
File: Dissertation_JannPhilippFreiwald_2022.pdf
Description: Dissertation
Checksum: 4a0917dcff0af972b0f2d4be7a3e217d
Size: 6.83 MB
Format: Adobe PDF