A Practical Real-Time Model for Diffraction on Rough Surfaces
Olaf Clausen, Martin Mišiak, Arnulph Fuhrmann, Ricardo Marroquim and Marc Erich Latoschik In: Journal of Computer Graphics Techniques Abstract Wave optics...
Kristoffer Waldow, Arnulph Fuhrmann and Daniel Roth, In: Proceedings of the 31st IEEE Virtual Reality Conference (VR ’24), Orlando, USA Abstract: Facial recognition is crucial in sign language communication. Especially for virtual...
Kristoffer Waldow, Arnulph Fuhrmann, Daniel Roth in 6th IEEE International Conference on Artificial Intelligence & eXtended and Virtual Reality 2024 (AIxVR ’24) Abstract: Hands are fundamental to conveying emotions and...
Martin Mišiak, Tom Müller, Arnulph Fuhrmann and Marc Erich Latoschik In: Virtuelle und Erweiterte Realität – 20. Workshop der GI-Fachgruppe VR/AR, 2023, Köln, Germany Abstract: Recent research has shown that...
Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik in ACM Symposium on Applied Perception 2023 (SAP ’23) Abstract: Virtual Reality (VR) systems rely on established real-time rendering techniques to uphold a...
Olaf Clausen, Yang Chen, Arnulph Fuhrmann and Ricardo Marroquim In: Computer Graphics Forum, 42: 245-260. Abstract Simulating light-matter interaction is a fundamental problem in computer graphics. A particular challenge is...
An image-based rendering approach to accelerate the rendering of XR scenes containing a large number of complex, high-poly-count objects. Our approach replaces complex objects with impostors specifically designed to work in Virtual, Augmented, and Mixed Reality scenarios. Impostors are dynamically recreated at run time for larger changes in view position. In addition to the significant performance benefit, our impostors compare favorably against the original mesh representation, as geometric and textural temporal aliasing artifacts are heavily suppressed.
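A common way to decide when an impostor must be recreated is to compare the current viewing direction against the one used at capture time. The sketch below illustrates such a view-angle criterion; the function names and the 15° threshold are illustrative assumptions, not the paper's actual heuristic.

```python
import math

def impostor_needs_update(capture_pos, view_pos, obj_center,
                          angle_threshold_deg=15.0):
    """Refresh an impostor once the viewing direction towards the object
    has rotated past a threshold since the impostor was captured.
    Illustrative sketch only; threshold value is an assumption."""
    def direction(frm, to):
        d = [t - f for f, t in zip(frm, to)]
        n = math.sqrt(sum(c * c for c in d))
        return [c / n for c in d]

    d_old = direction(capture_pos, obj_center)   # direction at capture time
    d_new = direction(view_pos, obj_center)      # direction for current view
    cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(d_old, d_new))))
    return math.degrees(math.acos(cos_a)) > angle_threshold_deg
```

A small view change keeps the cached impostor; a large one (here, 45°) triggers re-rendering.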
The ventriloquism effect (VQE) describes the illusion that the puppeteer’s voice seems to come out of the puppet’s mouth. This effect can even be observed in virtual reality (VR) when a spatial discrepancy between the auditory and visual components occurs. However, previous studies have never fully investigated the impact of visual quality on the VQE. Therefore, we conducted an exploratory experiment to investigate the influence of the visual appearance of a loudspeaker on the VQE in VR. Our evaluation yields significant differences in the vertical plane, suggesting that the less realistic model produced a stronger VQE than the realistic one.
We present an approach to reduce high-resolution polygonal clothing meshes for Mixed Reality (VR/AR) scenarios. Due to hardware limitations, current mobile devices require 3D models with a strongly reduced triangle count to be displayed smoothly. A particular challenge for mesh reduction of clothing models is that these models usually consist of several fabric layers that lie spatially close together and touch in many places.
When rendering large amounts of decals, huge overheads can considerably reduce performance, which is especially harmful for critical applications such as virtual reality. To reduce these overheads, we propose a novel algorithm packing arbitrarily parametrized decal textures into a sparse texture, which is realized via a reference atlas holding the necessary information to access tiles stored in a tile atlas.
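The indirection scheme described above can be modeled in a few lines: a reference atlas maps each decal tile to a slot in a shared tile atlas, and UV lookups are redirected through it. All class and method names below are illustrative assumptions, not the paper's API.

```python
class TileAtlas:
    """Illustrative sketch: a reference atlas maps (decal_id, tile_x, tile_y)
    to a slot in a shared tile atlas, so arbitrarily parametrized decal
    textures can share one sparse texture."""

    def __init__(self, tiles_per_side):
        self.tiles_per_side = tiles_per_side
        self.free = list(range(tiles_per_side * tiles_per_side))
        self.reference = {}               # (decal_id, tx, ty) -> slot index

    def add_tile(self, decal_id, tx, ty):
        slot = self.free.pop()            # allocate any free tile-atlas slot
        self.reference[(decal_id, tx, ty)] = slot
        return slot

    def lookup_uv(self, decal_id, u, v, decal_tiles):
        # Which tile of the decal does (u, v) fall into?
        tx, ty = int(u * decal_tiles), int(v * decal_tiles)
        slot = self.reference[(decal_id, tx, ty)]
        sx, sy = slot % self.tiles_per_side, slot // self.tiles_per_side
        # Local coordinate inside the tile, remapped into the shared atlas.
        lu, lv = u * decal_tiles - tx, v * decal_tiles - ty
        return ((sx + lu) / self.tiles_per_side,
                (sy + lv) / self.tiles_per_side)
```

On a GPU the reference atlas would be an indirection texture sampled in the fragment shader; the dictionary here just makes the mapping explicit.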
In this paper, a remote rendering system for an AR app based on Unity is presented. The system was implemented for an edge server, which is located within the network of the mobile network operator.
We propose an easy-to-integrate Automatic Speech Recognition and textual visualization extension for an avatar-based MR remote collaboration system that visualizes speech via spatially floating speech bubbles. In a small pilot study, our extension achieved a word accuracy of 97%, measured via the widely used word error rate.
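The word error rate mentioned above is a standard metric: the word-level Levenshtein distance between reference and hypothesis, divided by the reference length (word accuracy is then 1 − WER). A minimal self-contained implementation:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / #reference words,
    computed via dynamic-programming edit distance on word level."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between first i reference and j hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(r)][len(h)] / len(r)
```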
This paper investigates the effects of normal mapping on the perception of geometric depth between stereoscopic and non-stereoscopic views.
This paper investigates the influence of embodied visualization on the effectiveness of remote collaboration in a worker-instructor scenario in augmented reality (AR). For this purpose, we conducted a user study where we used avatars in a remote collaboration system in AR to allow natural human communication. In a worker-instructor scenario, spatially separated pairs of subjects have to solve a common task while their respective counterpart is visualized either as an avatar or without bodily representation. As a baseline, a face-to-face (F2F) interaction is carried out to define an ideal interaction. In the subsequent analysis of the results, the embodied visualization shows significant differences in copresence and social presence, but no significant differences in performance and workload. Verbal feedback from our subjects hints that augmentations, such as the visualization of the viewing direction, are more important in our scenario than the visualization of the interaction partner.
We investigate the influence of four different audio representations on visually induced self-motion (vection). Our study followed the hypothesis that the feeling of visually induced vection can be increased by audio sources while lowering negative effects such as visually induced motion sickness.
Nowadays, virtual prototyping is an established and increasingly important part of the development cycle of new products. Often, CAVEs and Powerwalls are used as Virtual Reality (VR) systems to provide an immersive reproduction of virtual content, but these VR systems are space- and cost-intensive. With the advent of recent consumer Virtual Reality Head-Mounted Displays (VR HMDs), HMDs have received more attention from industry. To increase the acceptance of HMDs as VR systems for virtual prototypes, color consistency has to be improved. In this paper, we present an approach to characterize and calibrate the displays of consumer VR HMDs. The approach is based on a simple display model, which is commonly used for calibrating conventional displays but has not yet been applied to VR HMDs. We implemented this approach for the HTC Vive Pro and the Pimax 5k+. In combination with our calibration approach, the Vive Pro provides color reproduction without perceivable color differences.
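A simple display model of the kind referred to consists of per-channel gamma linearization followed by a 3×3 matrix built from the display primaries. The sketch below uses illustrative sRGB-like numbers, not the measured HMD data from the paper.

```python
def characterized_to_xyz(rgb, gamma, primaries):
    """Map device RGB to CIE XYZ under a simple display model:
    per-channel gamma, then a 3x3 primary matrix. Values here are
    illustrative; a real characterization measures them per display."""
    linear = [c ** gamma for c in rgb]          # linearize each channel
    return [sum(primaries[row][ch] * linear[ch] for ch in range(3))
            for row in range(3)]

# Illustrative sRGB primaries (rows are X, Y, Z; columns are R, G, B).
SRGB_MATRIX = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]
```

Calibration then amounts to inverting this model so that target colors are reproduced exactly on the characterized display.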
When rendering images in real time, shading pixels is a comparatively expensive operation, especially for head-mounted displays, where separate images are rendered for each eye and high frame rates need to be achieved. Upscaling algorithms are one possibility for reducing pixel shading costs. Four basic upscaling algorithms are implemented in a VR rendering system, with a subsequent user study on subjective image quality. We find that users preferred methods with better contrast preservation.
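As an example of one such basic upscaler, here is a minimal bilinear upscaling sketch for a grayscale image given as a list of rows. This is for illustration only; the paper's GPU implementations and its other three algorithms are not reproduced here.

```python
def upscale_bilinear(img, factor):
    """Bilinear upscaling: each output pixel centre is mapped back into
    source coordinates and the four neighbours are blended."""
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * factor):
        sy = min((oy + 0.5) / factor - 0.5, h - 1)   # source y coordinate
        y0 = max(int(sy), 0)
        y1 = min(y0 + 1, h - 1)
        fy = max(sy - y0, 0.0)
        row = []
        for ox in range(w * factor):
            sx = min((ox + 0.5) / factor - 0.5, w - 1)
            x0 = max(int(sx), 0)
            x1 = min(x0 + 1, w - 1)
            fx = max(sx - x0, 0.0)
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Nearest-neighbor upscaling would simply round the source coordinate instead of blending, which preserves contrast but introduces blockiness.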
In this paper, we present a Mixed Reality telepresence system that allows the connection of multiple AR or VR devices to create a shared virtual environment by using the simple MQTT networking protocol. It follows a subscribe-publish pattern for reliable and easy platform independent integration. Therefore, it is possible to realize different clients that handle communication and allow remote collaboration. To allow embodied natural human interaction, the system maps the human interaction channels, gestures, gaze and speech, to an abstract stylized avatar by using an upper body inverse kinematic approach. This setup allows spatially separated persons to interact with each other via an avatar-mediated communication.
The simulation of light-matter interaction is a major challenge in computer graphics. Particularly challenging is the modelling of light-matter interaction for rough surfaces, which contain several different scales of roughness where many different scattering phenomena take place. There are still appearance-critical phenomena that are only weakly approximated, or not included at all, by current BRDF models. One of these phenomena is the reddening effect, which describes a tilting of the reflectance spectrum towards long wavelengths, especially in the specular reflection. The observation that the reddening effect takes place on rough surfaces is new, and the characteristics and source of the effect have not been thoroughly researched and explained. Furthermore, it was not even clear whether the reddening really exists or whether the observed effect resulted from measurement errors. In this work we give a short introduction to the reddening effect and show that it is indeed a property of the material reflectance function and does not originate from measurement errors or optical aberrations.
To make anticipation and reaction training more attractive for young goalkeepers, the goal was to develop a sport-specific environment in virtual reality (VR).
In Augmented Reality, interaction with the environment can be achieved with a number of different approaches. In current systems, the most common are hand and gesture inputs. However, experimental applications have also integrated smartphones as intuitive interaction devices and demonstrated great potential for different tasks. One particular task is constrained object manipulation, for which we conducted a user study. In it, we compared standard gesture-based approaches with touch-based interaction via smartphone. We found that a touch-based interface is significantly more efficient, although gestures are subjectively more accepted. From these results, we draw conclusions on how smartphones can be used to realize modern interfaces in AR.
The ability to localize a device or user precisely within a known space would enable many use cases in the context of location-based augmented reality. We propose a localization service based on sparse visual information using ARCore, a state-of-the-art augmented reality platform for mobile devices.
In this work, we acquired a set of precise, spectrally resolved ground-truth data. It consists of a precise description of a newly developed reference scene, including isotropic BRDFs of 24 color patches, as well as reference measurements of all patches under 13 different angles inside the reference scene.
For locomotion in virtual environments (VE), the method of redirected walking (RDW) enables users to explore large virtual areas within a restricted physical space by (almost) natural walking. The trick behind this method is to manipulate the virtual camera in a user-undetectable manner, leading to a change in the user's movements. If the virtual camera is manipulated too strongly, the user notices the manipulation and reacts accordingly. We studied human perception of RDW under the influence of the level of realism in rendering the virtual scene.
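The general RDW principle — scaling the user's real motion by small gains before it drives the virtual camera — can be sketched as a single pose-update step. This is an illustrative model of the technique, not the study's implementation; the gain values at which users notice the manipulation are exactly the empirical question such studies address.

```python
import math

def virtual_pose_update(pos, yaw, real_step, real_yaw_delta,
                        translation_gain=1.0, rotation_gain=1.0):
    """One update of a simple redirected-walking controller: real head
    rotation and forward movement are scaled by (ideally imperceptible)
    gains before being applied to the virtual camera."""
    yaw = yaw + rotation_gain * real_yaw_delta
    x = pos[0] + translation_gain * real_step * math.cos(yaw)
    y = pos[1] + translation_gain * real_step * math.sin(yaw)
    return (x, y), yaw
```

With gains of exactly 1 the virtual camera mirrors the real motion; gains slightly above or below 1 accumulate a redirection over time.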
We present an approach for exploring OSGi-based software systems in virtual reality. We employ an island metaphor, which represents every module as a distinct island. The resulting island system is displayed in the confines of a virtual table, where users can explore the software visualization on multiple levels of granularity by performing intuitive navigational tasks. Our approach allows users to get a first overview about the complexity of an OSGi-based software system by interactively exploring its modules as well as the dependencies between them.
We present a new, physically plausible, real-time approach to compute directional occlusion for dynamic objects, lit with image based lighting.
In this paper, we present SIAM-C, an avatar-mediated communication platform to study socially immersive interaction in virtual environments.
The following paper investigates the effect of changing the field of view (FOV) on the intensity of perceived vection, using a head-mounted display (HMD) in a virtual environment (VE).
Recent advances in real-time rendering enable virtual production pipelines in a broad range of industries. These pipelines depend on fast, latency-free handling as well as accurate appearance of results. This requires high dynamic range rendering for photo-realistic results and a tone mapping operator matched to the display.
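As an example of a global tone mapping operator in such a pipeline, the classic Reinhard curve compresses unbounded HDR luminance into the displayable [0, 1) range. This is an illustrative choice; the operator examined in the paper may differ.

```python
def reinhard_tonemap(luminance):
    """Global Reinhard operator L / (1 + L): maps HDR luminance in
    [0, inf) monotonically into display range [0, 1)."""
    return luminance / (1.0 + luminance)
```

Bright values are compressed strongly while dark values pass through almost unchanged, which is why a good display match also depends on the display's calibration.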
We compare a full body marker set with a reduced rigid body marker set supported by inverse kinematics.
We describe an experimental method to investigate the effects of reduced social information and behavioral channels in immersive virtual environments with full-body avatar embodiment.
This paper presents a system for a shared virtual experience which was developed within a student project. The main idea is to have two or more persons at different locations, who can interact with each other in the same virtual environment. In order to realize this idea every person is motion-captured and wears a head-mounted display (HMD). The virtual environment is rendered with the Unity game engine and the tracked positions are updated via the internet. The virtual environment developed in this project is highly immersive and users felt a strong sense of presence.
Meanwhile, new algorithms and highly developed GPUs enable rendering with physically based models, which has made display quality realistic. At the same time, geometry can be dynamically deformed by real-time simulation, so that users can not only move interactively through the 3D scene but also modify it immediately.
In this paper, we present a possible guideline towards a coupled simulation of textiles and human soft tissue. We have developed a new simulator for soft tissue which is able to simulate the skin of a virtual human in a realistic manner.
We present different approaches for accelerating the process of continuous collision detection for deformable triangle meshes.
We give an ontology for garment patterns that can be incorporated into the simulation of virtual clothing. On the basis of this ontology and extensions to garments we can specify and manipulate the process of virtual dressing on a higher semantic level.
Collision detection is an enabling technology for many virtual environments, games, and virtual prototyping applications containing some kind of physically-based simulation (such as rigid bodies, cloth, surgery, etc.). This tutorial will give an overview of the different classes of algorithms, and then provide attendees with in-depth knowledge of some of the important algorithms within each class.
This paper summarizes recent research in the area of deformable collision detection. Various approaches based on bounding volume hierarchies, distance fields, and spatial partitioning are discussed. Further, image-space techniques and stochastic methods are considered.
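The bounding-volume-hierarchy approaches discussed rest on two cheap operations: testing two axis-aligned bounding boxes for overlap, and refitting a parent box from its children after the geometry deforms. A minimal sketch:

```python
class AABB:
    """Axis-aligned bounding box, the basic primitive of BVH-based
    collision detection (illustrative sketch)."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def overlaps(self, other):
        # Boxes overlap iff they overlap on every axis.
        return all(slo <= ohi and olo <= shi
                   for slo, shi, olo, ohi
                   in zip(self.lo, self.hi, other.lo, other.hi))

    @staticmethod
    def merge(a, b):
        # Refitting a parent from its children: a cheap per-frame update,
        # which is what makes BVHs attractive for deformable geometry.
        return AABB([min(x, y) for x, y in zip(a.lo, b.lo)],
                    [max(x, y) for x, y in zip(a.hi, b.hi)])
```

Traversal then descends pairs of overlapping nodes and only tests primitives whose leaf boxes intersect.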
In this paper we present a method for illuminating a dynamic scene with a high dynamic range environment map with real-time or interactive frame rates, taking into account self shadowing. Current techniques require static geometry, are limited to few and small area lights or are limited in the frequency of the shadows.
In this paper we address the problem of rapid distance computation between rigid objects and highly deformable objects, which is important in the context of physically based modeling of, e.g., hair or clothing.
We describe a system for interactive animation of cloth, which can be used in e-commerce applications, games or even in virtual prototyping systems.
We describe an interaction-free method for the geometric pre-positioning of virtual cloth patterns around human 3D scans. Combined with the subsequent physically-based sewing simulation, fully automated dressing of virtual humans becomes possible.