02nd Apr 2024

A Practical Real-Time Model for Diffraction on Rough Surfaces

Olaf Clausen, Martin Mišiak, Arnulph Fuhrmann, Ricardo Marroquim and Marc Erich Latoschik. In: Journal of Computer Graphics Techniques. Abstract: Wave optics...

28th Mar 2024

Facial Feature Enhancement for Immersive Real-Time Avatar-Based Sign Language Communication using Personalized CNNs

Kristoffer Waldow, Arnulph Fuhrmann and Daniel Roth. In: Proceedings of the 31st IEEE Virtual Reality Conference (VR ’24), Orlando, USA. Abstract: Facial recognition is crucial in sign language communication. Especially for virtual...

28th Mar 2024

Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering

🏆 Best Poster Award

16th Feb 2024

Deep Neural Labeling: Hybrid Hand Pose Estimation Using Unlabeled Motion Capture Data With Color Gloves in Context of German Sign Language

Kristoffer Waldow, Arnulph Fuhrmann and Daniel Roth. In: 6th IEEE International Conference on Artificial Intelligence & extended and Virtual Reality 2024 (AIxVR ’24). Abstract: Hands are fundamental to conveying emotions and...

20th Sep 2023

An Evaluation of Dichoptic Tonemapping in Virtual Reality Experiences

Martin Mišiak, Tom Müller, Arnulph Fuhrmann and Marc Erich Latoschik. In: Virtuelle und Erweiterte Realität – 20. Workshop der GI-Fachgruppe VR/AR, 2023, Köln, Germany. Abstract: Recent research has shown that...

24th May 2023

The Impact of Reflection Approximations on Visual Quality in Virtual Reality

Martin Mišiak, Arnulph Fuhrmann and Marc Erich Latoschik. In: ACM Symposium on Applied Perception 2023 (SAP ’23). Abstract: Virtual Reality (VR) systems rely on established real-time rendering techniques to uphold a...

17th Oct 2022

Investigation and Simulation of Diffraction on Rough Surfaces

Olaf Clausen, Yang Chen, Arnulph Fuhrmann and Ricardo Marroquim. In: Computer Graphics Forum, 42: 245-260. Abstract: Simulating light-matter interaction is a fundamental problem in computer graphics. A particular challenge is...

08th Dec 2021

Impostor-based Rendering Acceleration for Virtual, Augmented, and Mixed Reality

An image-based rendering approach to accelerate the rendering of XR scenes containing a large number of complex, high-poly-count objects. Our approach replaces complex objects with impostors specifically designed to work in Virtual, Augmented, and Mixed Reality scenarios. Impostors are dynamically recreated at run time when the view position changes substantially. In addition to the significant performance benefit, our impostors compare favorably against the original mesh representation, as geometric and textural temporal aliasing artifacts are heavily suppressed.
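As a rough illustration of the run-time recreation mentioned above, the sketch below tests whether the current viewpoint has diverged too far from the position an impostor was captured from. The angular threshold and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical validity test: re-create the impostor once the viewing direction
# towards the object deviates from the capture direction by more than a threshold.
def impostor_needs_update(capture_pos, current_pos, object_center, max_angle_deg=5.0):
    v0 = np.asarray(object_center, dtype=float) - np.asarray(capture_pos, dtype=float)
    v1 = np.asarray(object_center, dtype=float) - np.asarray(current_pos, dtype=float)
    cos_angle = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) > max_angle_deg

print(impostor_needs_update([0, 0, 0], [1, 0, 0], [0, 0, 10]))  # ~5.7 degrees -> True
```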

01st Apr 2021

Investigating the Influence of Sound Source Visualization on the Ventriloquism Effect in an Auralized Virtual Reality Environment

The ventriloquism effect (VQE) describes the illusion that the puppeteer’s voice seems to come out of the puppet’s mouth. This effect can even be observed in virtual reality (VR) when a spatial discrepancy between the auditory and visual component occurs. However, previous studies have never fully investigated the impact of visual quality on the VQE. Therefore, we conducted an exploratory experiment to investigate the influence of the visual appearance of a loudspeaker on the VQE in VR. Our evaluation yields significant differences in the vertical plane, which suggests that the less realistic model produced a stronger VQE than the realistic one.

24th Sep 2020

Intersection-free mesh decimation for high resolution cloth models

We present an approach to reduce high-resolution polygonal clothing meshes for Mixed Reality (VR/AR) scenarios. Due to hardware limitations, current mobile devices require 3D models with a strongly reduced triangle count to be displayed smoothly. A particular challenge for mesh reduction of clothing models is that these models usually consist of several fabric layers which lie spatially close together and touch in many places.

24th Sep 2020

An Atlas-Packing Algorithm for Efficient Rendering of Arbitrarily Parametrized Decal Textures

When rendering large numbers of decals, huge overheads can considerably reduce performance, which is especially harmful for critical applications such as virtual reality. To reduce these overheads, we propose a novel algorithm that packs arbitrarily parametrized decal textures into a sparse texture, realized via a reference atlas holding the necessary information to access tiles stored in a tile atlas.
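A minimal sketch of the indirection idea described above, assuming a fixed tile size: a reference table maps each decal tile to its slot in a shared tile atlas, so a decal-local coordinate is resolved with one extra lookup. All names, sizes, and the simple row-major packer are illustrative placeholders, not the paper's algorithm.

```python
TILE = 64  # tile edge length in texels (placeholder)

def pack(decal_tile_counts, tiles_per_row):
    """Assign every decal tile a slot; returns reference[decal_id][tile_index] = (x, y)."""
    reference, next_slot = {}, 0
    for decal_id, tile_count in decal_tile_counts.items():
        slots = []
        for _ in range(tile_count):
            x = (next_slot % tiles_per_row) * TILE   # texel offset of the slot in the tile atlas
            y = (next_slot // tiles_per_row) * TILE
            slots.append((x, y))
            next_slot += 1
        reference[decal_id] = slots
    return reference

def resolve(reference, decal_id, tile_index, local_u, local_v):
    """Translate a tile-local texel coordinate into tile-atlas texel coordinates."""
    x, y = reference[decal_id][tile_index]
    return x + local_u, y + local_v

ref = pack({"scratch": 4, "logo": 2}, tiles_per_row=8)
print(resolve(ref, "logo", 1, 10, 3))   # -> (330, 3)
```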

24th Sep 2020

Performance of Augmented Reality Remote Rendering via Mobile Network

In this paper, we present a remote rendering system for a Unity-based AR app. The system was implemented on an edge server located within the network of the mobile network operator.

31st Mar 2020

Accelerated Stereo Rendering with Hybrid Reprojection-Based Rasterization and Adaptive Ray-Tracing

🏆 Best Paper Award

13th Mar 2020

Addressing Deaf or Hard-of-Hearing People in Avatar-Based Mixed Reality Collaboration Systems

We propose an easy-to-integrate Automatic Speech Recognition and textual visualization extension for an avatar-based MR remote collaboration system that visualizes speech via spatial floating speech bubbles. In a small pilot study, our extension achieved a word accuracy of 97%, measured with the widely used word error rate.
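As background for the metric quoted above, the sketch below computes the word error rate as a word-level edit distance; word accuracy is then 1 minus the WER. The function and the test sentence are illustrative, not from the study.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances between prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

wer = word_error_rate("please pass the red block", "please pass a red block")
print(f"word accuracy: {1.0 - wer:.0%}")   # -> 80%
```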

01st Nov 2019

The Impact of Stereo Rendering on the Perception of Normal Mapped Geometry in Virtual Reality

This paper investigates the effects of normal mapping on the perception of geometric depth between stereoscopic and non-stereoscopic views.

25th Sep 2019

Investigating the Effect of Embodied Visualization in Remote Collaborative Augmented Reality

This paper investigates the influence of embodied visualization on the effectiveness of remote collaboration in a worker-instructor scenario in augmented reality (AR). For this purpose, we conducted a user study where we used avatars in a remote collaboration system in AR to allow natural human communication. In a worker-instructor scenario, spatially separated pairs of subjects have to solve a common task, while their respective counterpart is either visualized as an avatar or without bodily representation. As a baseline, a face-to-face (F2F) interaction is carried out to define an ideal interaction. In the subsequent analysis of the results, the embodied visualization indicates significant differences in copresence and social presence, but no significant differences in performance and workload. Verbal feedback from our subjects hints that augmentations, like the visualization of the viewing direction, are more important in our scenario than the visualization of the interaction partner.

17th Sep 2019

The influence of different audio representations on linear visually induced self-motion

We investigate the influence of four different audio representations on visually induced self-motion (vection). Our study followed the hypothesis that the feeling of visually induced vection can be increased by audio sources while lowering negative feelings such as visually induced motion sickness.

11th Sep 2019

Towards Predictive Virtual Prototyping: Color Calibration of Consumer VR HMDs

Nowadays, virtual prototyping is an established and increasingly important part of the development cycle of new products. Often, CAVEs and Powerwalls are used as Virtual Reality (VR) systems to provide an immersive reproduction of virtual content, but these VR systems are space- and cost-intensive. With the advent of recent consumer Virtual Reality Head Mounted Displays (VR HMDs), HMDs have received more attention from industry. To increase the acceptance of HMDs as VR systems for virtual prototypes, color consistency has to be improved. In this paper, we present an approach to characterize and calibrate the displays of consumer VR HMDs. The approach is based on a simple display model, which is commonly used for the calibration of conventional displays but has not yet been applied to VR HMDs. We implemented this approach with the HTC Vive Pro and the Pimax 5k+. In combination with our calibration approach, the Vive Pro provides a color reproduction without perceivable color differences.
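For readers unfamiliar with such characterization models, the following sketch shows the common form of a simple display model: per-channel tone curves followed by a 3×3 matrix built from the measured XYZ values of the primaries. All numbers are placeholders, not measurements of the HMDs mentioned above.

```python
import numpy as np

# Placeholder characterization: fitted per-channel gamma and measured XYZ of the
# full-intensity R, G, B primaries (columns). Real values come from colorimeter
# measurements of the actual HMD display.
GAMMA = np.array([2.2, 2.2, 2.2])
PRIMARIES_XYZ = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

def rgb_to_xyz(rgb):
    """Predict the displayed XYZ tristimulus values for an RGB drive signal in [0, 1]."""
    linear = np.power(np.clip(rgb, 0.0, 1.0), GAMMA)  # per-channel tone curve
    return PRIMARIES_XYZ @ linear                     # additive mixing of the primaries

print(rgb_to_xyz(np.array([1.0, 1.0, 1.0])))          # predicted white point
```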

11th Sep 2019

Perceptual Comparison of Four Upscaling Algorithms for Low-Resolution Rendering for Head-mounted VR Displays

When rendering images in real time, shading pixels is a comparatively expensive operation. This is especially true for head-mounted displays, where separate images are rendered for each eye and high frame rates need to be achieved. Upscaling algorithms are one possibility for reducing pixel shading costs. We implemented four basic upscaling algorithms in a VR rendering system and conducted a user study on subjective image quality. We find that users preferred methods with better contrast preservation.
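As an example of the kind of basic algorithm such a comparison includes, the sketch below upscales a low-resolution color image with bilinear interpolation; the four algorithms of the study itself are not reproduced here.

```python
import numpy as np

# Bilinear upscaling of an H x W x 3 image to a new resolution; each output pixel
# blends the four nearest low-resolution samples.
def bilinear_upscale(img, out_h, out_w):
    in_h, in_w = img.shape[:2]
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    fy, fx = (ys - y0)[:, None, None], (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - fx) + img[y0][:, x1] * fx   # blend along x on the upper row
    bot = img[y1][:, x0] * (1 - fx) + img[y1][:, x1] * fx   # blend along x on the lower row
    return top * (1 - fy) + bot * fy                        # blend the two rows along y

low_res = np.random.rand(540, 600, 3).astype(np.float32)
print(bilinear_upscale(low_res, 1080, 1200).shape)          # -> (1080, 1200, 3)
```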

09th Sep 2019

Using MQTT for Platform Independent Remote Mixed Reality Collaboration

In this paper, we present a Mixed Reality telepresence system that allows the connection of multiple AR or VR devices to create a shared virtual environment by using the simple MQTT networking protocol. MQTT follows a publish-subscribe pattern, which enables reliable and easy platform-independent integration. Therefore, it is possible to realize different clients that handle communication and allow remote collaboration. To allow embodied natural human interaction, the system maps the human interaction channels (gestures, gaze, and speech) to an abstract stylized avatar using an upper-body inverse kinematics approach. This setup allows spatially separated persons to interact with each other via avatar-mediated communication.
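A minimal sketch of the publish-subscribe exchange described above, assuming the paho-mqtt Python client (1.x constructor) and a reachable broker; topic names, the broker address, and the payload layout are illustrative assumptions, not the paper's protocol.

```python
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    pose = json.loads(msg.payload)            # e.g. head and hand transforms of a remote user
    print(msg.topic, pose)                    # a real client would drive the remote avatar here

client = mqtt.Client()                        # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("broker.example.org", 1883)    # hypothetical broker of the shared session
client.subscribe("collab/room1/+/pose")       # receive pose updates from every participant
client.loop_start()

# Publish the local user's pose; all subscribed AR/VR clients in the room receive it.
client.publish("collab/room1/userA/pose",
               json.dumps({"head": [0.0, 1.7, 0.0], "gaze": [0.0, 0.0, -1.0]}))
```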

05th Aug 2019

What is the Reddening Effect and does it really exist?

The simulation of light-matter interaction is a major challenge in computer graphics. Particularly challenging is the modelling of light-matter interaction on rough surfaces, which contain several different scales of roughness where many different scattering phenomena take place. There are still appearance-critical phenomena that are only weakly approximated, or not included at all, by current BRDF models. One of these phenomena is the reddening effect, which describes a tilting of the reflectance spectra towards long wavelengths, especially in the specular reflection. The observation that the reddening effect takes place on rough surfaces is new, and the characteristics and source of the effect have not been thoroughly researched and explained. Furthermore, it was not even clear whether the reddening really exists or whether the observed effect resulted from measurement errors. In this work we give a short introduction to the reddening effect and show that it is indeed a property of the material reflectance function and does not originate from measurement errors or optical aberrations.

26th May 2019

torVRt – Entwicklung eines Torwarttrainings zur Schulung von Antizipation und Reaktion in virtueller Realität

To make anticipation and reaction training more attractive for young goalkeepers, the goal was to develop a sport-specific training environment in virtual reality (VR).

13th Dec 2018

An Evaluation of Smartphone-Based Interaction in AR for Constrained Object Manipulation

In Augmented Reality, interaction with the environment can be achieved with a number of different approaches. In current systems, the most common are hand and gesture inputs. However, experimental applications have also integrated smartphones as intuitive interaction devices and demonstrated great potential for different tasks. One particular task is constrained object manipulation, for which we conducted a user study. In it, we compared standard gesture-based approaches with touch-based interaction via smartphone. We found that a touch-based interface is significantly more efficient, although gestures are subjectively more accepted. From these results we draw conclusions on how smartphones can be used to realize modern interfaces in AR.

16th Oct 2018

Localization Service Using Sparse Visual Information Based on Recent Augmented Reality Platforms

The ability to localize a device or user precisely within a known space would enable many use cases in the context of location-based augmented reality. We propose a localization service based on sparse visual information using ARCore, a state-of-the-art augmented reality platform for mobile devices.

28th Jun 2018

Acquisition and Validation of Spectral Ground Truth Data for Predictive Rendering of Rough Surfaces

In this work, we acquired a set of precise, spectrally resolved ground truth data. It consists of the precise description of a newly developed reference scene, including isotropic BRDFs of 24 color patches, as well as reference measurements of all patches at 13 different angles inside the reference scene.

15th Mar 2018

Do Textures and Global Illumination Influence the Perception of Redirected Walking Based on Translational Gain?

For locomotion in virtual environments (VE), the method of redirected walking (RDW) enables users to explore large virtual areas within a restricted physical space by (almost) natural walking. The trick behind this method is to manipulate the virtual camera in a user-undetectable manner that leads to a change in the user's movements. If the virtual camera is manipulated too strongly, the user recognizes this manipulation and reacts accordingly. We studied human perception of RDW under the influence of the level of realism in rendering the virtual scene.
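As a small worked example of the translational gain such studies manipulate, the sketch below scales the user's physical displacement before applying it to the virtual camera; the gain value and names are illustrative, not thresholds from the paper.

```python
import numpy as np

# With a translational gain g, the virtual camera moves g times the physically
# walked distance; g = 1.0 is natural walking, and strong deviations become noticeable.
def apply_translational_gain(prev_physical, curr_physical, virtual_pos, gain=1.2):
    physical_delta = np.asarray(curr_physical, dtype=float) - np.asarray(prev_physical, dtype=float)
    return np.asarray(virtual_pos, dtype=float) + gain * physical_delta

# Walking 1 m forward in the real world advances the virtual camera by 1.2 m.
print(apply_translational_gain([0, 0, 0], [0, 0, 1.0], [5, 0, 5]))   # -> [5.  0.  6.2]
```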

15th Mar 2018

Immersive Exploration of OSGi-based Software Systems in Virtual Reality

We present an approach for exploring OSGi-based software systems in virtual reality. We employ an island metaphor, which represents every module as a distinct island. The resulting island system is displayed in the confines of a virtual table, where users can explore the software visualization on multiple levels of granularity by performing intuitive navigational tasks. Our approach allows users to get a first overview of the complexity of an OSGi-based software system by interactively exploring its modules as well as the dependencies between them.

17th Jul 2017

Directional Occlusion via Multi-Irradiance Mapping

We present a new, physically plausible, real-time approach to compute directional occlusion for dynamic objects, lit with image based lighting.

18th Mar 2017

Socially Immersive Avatar-Based Communication

In this paper, we present SIAM-C, an avatar-mediated communication platform to study socially immersive interaction in virtual environments.

10th Mar 2017

The Effectiveness of Changing the Field of View in a HMD on the Perceived Self-Motion

The following paper investigates the effect of changing the field of view (FOV) on the intensity of perceived vection, using a head-mounted display (HMD) in a virtual environment (VE).

23rd Oct 2016

Real-time tone mapping – An evaluation of color-accurate methods for luminance compression

Recent advances in real-time rendering enable virtual production pipelines in a broad range of industries. These pipelines depend on fast, low-latency handling as well as an accurate appearance of the results. This requires the use of high dynamic range rendering for photo-realistic results and a tone mapping operator matched to the display.
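As a concrete example of a global luminance-compression operator of the kind such an evaluation compares, the sketch below implements a Reinhard-style tone mapper; it is a generic textbook operator, not necessarily one of the methods evaluated in the paper.

```python
import numpy as np

# Reinhard-style global tone mapping: compress HDR luminance into [0, 1) and scale
# the RGB channels by the luminance ratio so that hue and saturation are preserved.
def reinhard_tonemap(hdr_rgb, key=0.18, eps=1e-6):
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average scene luminance
    scaled = key / log_avg * lum                   # map the average luminance to the chosen key
    compressed = scaled / (1.0 + scaled)           # asymptotic compression of highlights
    return hdr_rgb * (compressed / (lum + eps))[..., None]

hdr = np.random.rand(4, 4, 3) * 100.0              # synthetic HDR tile
print(reinhard_tonemap(hdr).max())                 # stays below 1.0
```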

11th Oct 2016

A Simplified Inverse Kinematic Approach for Embodied VR Applications

We compare a full body marker set with a reduced rigid body marker set supported by inverse kinematics.

11th Oct 2016

Avatar Realism and Social Interaction Quality in Virtual Reality

We describe an experimental method to investigate the effects of reduced social information and behavioral channels in immersive virtual environments with full-body avatar embodiment.

18th May 2015

SVEn – Shared Virtual Environment

This paper presents a system for a shared virtual experience which was developed within a student project. The main idea is to have two or more persons at different locations who can interact with each other in the same virtual environment. In order to realize this idea, every person is motion-captured and wears a head-mounted display (HMD). The virtual environment is rendered with the Unity game engine and the tracked positions are updated via the internet. The virtual environment developed in this project is highly immersive and users felt a strong sense of presence.

15th May 2013

Interaktive 3D Visualisierung

New algorithms and highly developed GPUs now enable rendering with physically based models, which has made the visual quality realistic. At the same time, the geometry can be deformed dynamically by real-time simulation, so that users can not only move interactively through the 3D scene but also modify it immediately.

09th Jun 2008

Towards a Coupled Simulation of Cloth and Soft Tissue

In this paper, we present a possible guideline towards a coupled simulation of textiles and human soft tissue. We have developed a new simulator for soft tissue which is able to simulate the skin of a virtual human in a realistic manner.

15th Mar 2007

Optimized Continuous Collision Detection for Deformable Triangle Meshes

We present different approaches for accelerating the process of continuous collision detection for deformable triangle meshes.

20th Apr 2005

Ontologies for Virtual Garments

We give an ontology for garment patterns that can be incorporated into the simulation of virtual clothing. On the basis of this ontology and extensions to garments we can specify and manipulate the process of virtual dressing on a higher semantic level.

02nd Apr 2005

Real-Time Collision Detection for Dynamic Virtual Environments

Collision detection is an enabling technology for many virtual environments, games, and virtual prototyping applications containing some kind of physically-based simulation (such as rigid bodies, cloth, surgery, etc.). This tutorial will give an overview of the different classes of algorithms, and then provide attendees with in-depth knowledge of some of the important algorithms within each class.

30th Mar 2005

Collision Detection for Deformable Objects

This paper summarizes recent research in the area of deformable collision detection. Various approaches based on bounding volume hierarchies, distance fields, and spatial partitioning are discussed. Further, image-space techniques and stochastic methods are considered.

01st Feb 2005

Self-Shadowing of dynamic scenes with environment maps using the GPU

In this paper we present a method for illuminating a dynamic scene with a high dynamic range environment map at real-time or interactive frame rates, taking self-shadowing into account. Current techniques require static geometry, are limited to a few small area lights, or are limited in the frequency of the shadows.

05th Sep 2003

Distance Fields for Rapid Collision Detection in Physically Based Modeling

In this paper we address the problem of rapid distance computation between rigid objects and highly deformable objects, which is important in the context of physically based modeling of, e.g., hair or clothing.
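As an illustration of the distance-field idea, the sketch below shows only the query side: the rigid object is assumed to be stored as a regular grid of signed distances, and each deformable vertex is tested with a trilinearly interpolated lookup. Grid layout and names are illustrative assumptions, not the paper's data structure.

```python
import numpy as np

# The rigid object is assumed to be precomputed into a regular grid of signed
# distances (negative inside); each vertex of the deformable object is then tested
# with one trilinearly interpolated lookup.
def signed_distance(grid, origin, spacing, point):
    p = (np.asarray(point, dtype=float) - origin) / spacing   # continuous grid coordinates
    i = np.floor(p).astype(int)
    f = p - i                                                  # fractional position in the cell
    c = grid[i[0]:i[0] + 2, i[1]:i[1] + 2, i[2]:i[2] + 2]      # the 8 surrounding samples
    cx = c[0] * (1 - f[0]) + c[1] * f[0]                       # interpolate along x
    cy = cx[0] * (1 - f[1]) + cx[1] * f[1]                     # then along y
    return cy[0] * (1 - f[2]) + cy[1] * f[2]                   # then along z

# A vertex penetrates the rigid object when its signed distance becomes negative.
```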

27th Feb 2003

Interactive Animation of Cloth including Self Collision Detection

We describe a system for interactive animation of cloth, which can be used in e-commerce applications, games or even in virtual prototyping systems.

01st Jan 2003

Interaction Free Dressing of Virtual Humans

We describe an interaction-free method for the geometric pre-positioning of virtual cloth patterns around human 3D scans. Combined with the subsequent physically-based sewing simulation, a fully automated dressing of virtual humans becomes possible.