In recent years, robotic systems have matured enough to perform simple home or office tasks, guide visitors in environments such as museums or stores, and aid people in their daily lives. To make the interaction with service and even industrial robots as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. As facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose an emotion model to parameterize a screen-based facial animation via inter-process communication. The software will animate transitions and add secondary animations to make the digital face appear “alive” and equip a robotic system with a virtual face. The result will be an inviting appearance that motivates potential users to seek interaction with the robot.
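As a rough illustration of this architecture, the sketch below sends emotion parameters to a separate face-animation process over inter-process communication. It assumes a JSON-over-UDP channel and a valence/arousal parameterization; the endpoint, message fields, and `send_emotion` helper are hypothetical stand-ins, not the interface proposed in the paper.

```python
# Minimal sketch: one process steers a screen-based face animation by
# sending emotion parameters over IPC. Port, field names, and the
# valence/arousal model are illustrative assumptions.
import json
import socket

FACE_ANIMATION_ADDR = ("127.0.0.1", 9000)  # hypothetical animation endpoint

def send_emotion(valence: float, arousal: float, intensity: float = 1.0) -> None:
    """Send one emotion update; the animation process is assumed to
    interpolate (animate the transition) toward the new expression."""
    msg = {
        "valence": max(-1.0, min(1.0, valence)),
        "arousal": max(-1.0, min(1.0, arousal)),
        "intensity": max(0.0, min(1.0, intensity)),
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(msg).encode("utf-8"), FACE_ANIMATION_ADDR)

send_emotion(valence=0.8, arousal=0.4)  # e.g. a friendly, inviting expression
```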
Redirected walking techniques allow people to walk in a larger virtual space than the physical extents of the laboratory. We describe two experiments conducted to investigate human sensitivity to walking on a curved path and to validate a new redirected walking technique. In a psychophysical experiment, we found that sensitivity to walking on a curved path was significantly lower for slower walking speeds (radius of 10 meters versus 22 meters). In an applied study, we investigated the influence of a velocity-dependent dynamic gain controller and an avatar controller on the average distance that participants were able to freely walk before needing to be reoriented. The mean walked distance was significantly greater in the dynamic gain controller condition, as compared to the static controller (22 meters versus 15 meters). Our results demonstrate that perceptually motivated dynamic redirected walking techniques, in combination with reorientation techniques, allow for unaided exploration of a large virtual city model.
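A minimal sketch of what a velocity-dependent dynamic gain controller could look like, using the two radii reported above (10 meters tolerated at slow speeds, 22 meters at normal speeds) as anchor points. The linear interpolation, the speed bounds, and the function names are assumptions for illustration, not the controller from the paper.

```python
# Sketch of a velocity-dependent curvature controller for redirected
# walking: slower walkers tolerate tighter curvature, so the applied
# radius shrinks with walking speed. Anchor speeds are assumed values.
def curvature_radius(speed: float,
                     slow_speed: float = 0.4, fast_speed: float = 1.4,
                     r_slow: float = 10.0, r_fast: float = 22.0) -> float:
    """Curvature radius (m) to apply at a given walking speed (m/s)."""
    t = (speed - slow_speed) / (fast_speed - slow_speed)
    t = max(0.0, min(1.0, t))  # clamp to the interpolation interval
    return r_slow + t * (r_fast - r_slow)

def injected_rotation(speed: float, dt: float) -> float:
    """Angular offset (radians) added this frame: arc length / radius."""
    return (speed * dt) / curvature_radius(speed)
```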
Public transport maps are typically designed to support route-finding tasks for passengers, while also providing an overview of stations, metro lines, and city-specific attractions. Most of these maps are static representations, for example placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views of the dynamic passenger data at different levels of temporal granularity. Moreover, we provide extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All of this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We also integrated a graph-based view of user-selected routes, a way to interactively compare those routes, an attribute- and property-driven automatic computation of specific routes for a single map as well as for all maps in our repertoire, and, finally, the most important sights in each city as extra information to include in a user-selected route. We illustrate the usefulness of our interactive visualization and map navigation system by applying it to the railway system of Hamburg, Germany, while also taking the extra passenger data into account. As a further indication of the usefulness of the interactively enhanced metro maps, we conducted a controlled user experiment with 20 participants.
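To make the notion of temporal granularity concrete, the sketch below aggregates raw passenger counts into the kinds of bins that could feed density, calendar-based, and line-graph views. The record layout and function names are hypothetical; the paper does not specify its data format.

```python
# Sketch: bin dynamic passenger counts at different levels of temporal
# granularity. Input records are assumed to be
# (timestamp, station, passenger_count) tuples.
from collections import defaultdict
from datetime import datetime

def aggregate(records, granularity="hour"):
    keyers = {
        "hour": lambda ts: ts.hour,          # density plot over the day
        "weekday": lambda ts: ts.weekday(),  # calendar-based view
        "month": lambda ts: ts.month,        # long-term line graph
    }
    key = keyers[granularity]
    totals = defaultdict(int)
    for ts, station, count in records:
        totals[(station, key(ts))] += count
    return dict(totals)

sample = [(datetime(2019, 5, 3, 8, 15), "Hamburg Hbf", 120)]
print(aggregate(sample, "hour"))  # {('Hamburg Hbf', 8): 120}
```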
This paper compares the influence that a video self-avatar and the lack of any visual body representation have on height estimation when standing at a virtual visual cliff. A height estimation experiment was conducted using a custom augmented-reality Oculus Rift hardware and software prototype, which is also described in this paper. The results are consistent with previous research, demonstrating that the presence of a visual body influences height estimates, just as it has been shown to influence distance and affordance estimates.
We investigated the influence of body shape and pose on the perception of physical strength and social power for male virtual characters. In the first experiment, participants judged the physical strength of varying body shapes, derived from a statistical 3D body model. Based on these ratings, we determined three body shapes (weak, average, and strong) and animated them with a set of power poses for the second experiment. Participants rated how strong or powerful they perceived virtual characters of varying body shapes that were displayed in different poses. Our results show that perception of physical strength was mainly driven by the shape of the body. However, the social attribute of power was influenced by an interaction between pose and shape. Specifically, the effect of pose on power ratings was greater for weak body shapes. These results demonstrate that a character with a weak shape can be perceived as more powerful when in a high-power pose.
Socially interactive robots with human-like speech synthesis and recognition, coupled with humanoid appearance, are an important subject of robotics and artificial intelligence research. Modern solutions have matured enough to provide simple services to human users. To make the interaction with them as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. Because facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose a way to implement a program that believably animates changing facial expressions and allows them to be influenced via inter-process communication based on an emotion model. This can be used to create a screen-based virtual face for a robotic system with an inviting appearance that stimulates users to seek interaction with the robot.
Representing users within an immersive virtual environment is an essential functionality of a multi-person virtual reality system. Especially when communicative or collaborative tasks must be performed, challenges arise in realistically embodying and integrating such avatar representations. A shared comprehension of local space and non-verbal communication (such as gestures, posture, or self-expressive cues) can support these tasks. In this paper, we introduce a novel approach to create realistic, video-texture-based avatars of co-located users in real time and integrate them into an immersive virtual environment. We present a straightforward, low-cost hardware and software solution to do so. We discuss technical design problems that arose during implementation and present a qualitative analysis of the concept's usability from a user study that applied it to a training scenario in the automotive sector.
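A minimal sketch of one plausible video-texture avatar pipeline: grab a camera frame, separate the user from the background, and hand an RGBA cut-out to the renderer as a billboard texture. OpenCV background subtraction stands in for whatever segmentation the actual system uses, and the engine-side texture upload is only indicated in a comment.

```python
# Sketch of a real-time video-texture avatar loop. The segmentation
# method and the rendering hook are assumptions, not the paper's setup.
import cv2

capture = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def next_avatar_frame():
    """Return an RGBA frame with the user opaque and the room transparent."""
    ok, frame = capture.read()
    if not ok:
        return None
    mask = subtractor.apply(frame)               # foreground = the user
    rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
    rgba[:, :, 3] = mask                         # alpha channel from mask
    # A hypothetical engine call such as upload_texture(rgba) would then
    # map the cut-out onto a billboard quad at the co-located user's pose.
    return rgba
```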
Virtual reality (VR) technology has the potential to support knowledge communication in several sectors. Still, when educators use immersive VR technology to present their knowledge, their audience in the same room may no longer be able to see them because of the head-mounted displays (HMDs) being worn. In this paper, we propose the Avatar2Avatar system and design, which augments the visual aspect of such a knowledge presentation. Avatar2Avatar enables users to see both a realistic representation of their respective counterpart and the virtual environment at the same time. We point out several design aspects of such a system and address design challenges and possibilities that arose during implementation. We specifically explore opportunities of a system design for integrating 2D video avatars into existing room-scale VR setups. An additional user study indicates a positive impact on spatial presence when using Avatar2Avatar.
In this paper we describe an interactive web-based visual analysis tool for Formula One races. It first provides an overview of all races on a yearly basis in a calendar-like representation. From this starting point, races can be selected and visually inspected in detail. We support a dynamic race position diagram as well as a more detailed lap times line plot for comparing the drivers' lap times. Many interaction techniques are supported, such as selection, filtering, highlighting, color coding, and details-on-demand. We illustrate the usefulness of our visualization tool by applying it to a Formula One dataset, describing the dynamic visual racing patterns for a number of selected races and drivers.
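As a static approximation of the lap times view, the sketch below draws one line per driver over the lap number. The sample values are fabricated purely for illustration, and the real tool is interactive and web-based; this only mirrors the basic visual encoding.

```python
# Sketch of the lap-times comparison view: one line per driver,
# lap number on the x-axis, lap time on the y-axis.
import matplotlib.pyplot as plt

lap_times = {  # driver -> lap times in seconds (made-up sample data)
    "Driver A": [92.1, 91.7, 91.9, 95.4, 91.5],
    "Driver B": [92.8, 92.0, 91.6, 91.8, 91.7],
}
for driver, times in lap_times.items():
    plt.plot(range(1, len(times) + 1), times, marker="o", label=driver)
plt.xlabel("Lap")
plt.ylabel("Lap time (s)")
plt.title("Lap times in comparison")
plt.legend()
plt.show()
```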
Whether in private or professional everyday life, digital media accompany us almost everywhere today. They serve not only for entertainment but also help us carry out work processes more efficiently and productively. Yet human work has by no means become superfluous. Due to rising requirements, the demand for qualified specialists is higher today than ever. At the same time, employees must be able to keep pace with the rapid development of new products and technologies, which makes high-quality education and training indispensable. From building media literacy in schools to technical and vocational education and professional development, the use of digital technologies must be taught. Moreover, these technologies offer new potential for improving educational concepts and can also help increase learning success.
This thesis deals with the evaluation of a VR-based learning environment and investigates possible effects of the embodied representation of a virtual instructor on learning success. To this end, a collaborative learning environment was implemented, with which a series of experiments with 16 participants was subsequently conducted. With regard to a possible increase in efficiency in independently completing assembly tasks after different types of instruction, no significant performance improvements were found.