This paper compares the influence of a video self-avatar and of the absence of any visual body representation on height estimation when standing at a virtual visual cliff. A height estimation experiment was conducted using a custom augmented-reality Oculus Rift hardware and software prototype, which is also described in this paper. The results are consistent with previous research, demonstrating that the presence of a visual body influences height estimates, just as it has been shown to influence distance and affordance estimates.
Redirected walking techniques allow people to walk in a larger virtual space than the physical extents of the laboratory. We describe two experiments conducted to investigate human sensitivity to walking on a curved path and to validate a new redirected walking technique. In a psychophysical experiment, we found that sensitivity to walking on a curved path was significantly lower for slower walking speeds (radius of 10 meters versus 22 meters). In an applied study, we investigated the influence of a velocity-dependent dynamic gain controller and an avatar controller on the average distance that participants were able to freely walk before needing to be reoriented. The mean walked distance was significantly greater in the dynamic gain controller condition, as compared to the static controller (22 meters versus 15 meters). Our results demonstrate that perceptually motivated dynamic redirected walking techniques, in combination with reorientation techniques, allow for unaided exploration of a large virtual city model.
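The velocity-dependent dynamic gain controller mentioned above can be sketched as a simple interpolation between curvature radii. The anchor speeds and the linear interpolation are illustrative assumptions, not the controller from the study; only the 10 m and 22 m radii come from the abstract.

```python
import math

# Sketch of a velocity-dependent curvature-gain controller for redirected
# walking: slower walkers tolerate tighter curvature, so the scene can be
# bent more aggressively at low speeds. Anchor speeds are assumed values.

def curvature_radius(speed_m_s: float) -> float:
    """Return the redirection radius (m) allowed at a given walking speed,
    interpolating linearly between a tight radius at slow speeds and a
    wide radius at normal walking speed."""
    slow_speed, fast_speed = 0.75, 1.4   # m/s (assumed anchor speeds)
    tight_r, wide_r = 10.0, 22.0         # m (radii from the abstract)
    if speed_m_s <= slow_speed:
        return tight_r
    if speed_m_s >= fast_speed:
        return wide_r
    t = (speed_m_s - slow_speed) / (fast_speed - slow_speed)
    return tight_r + t * (wide_r - tight_r)

def rotation_gain_deg_per_m(speed_m_s: float) -> float:
    """Degrees of scene rotation injected per meter walked on the curve."""
    return math.degrees(1.0 / curvature_radius(speed_m_s))
```

At each frame, such a controller would rotate the virtual scene by `rotation_gain_deg_per_m` times the distance walked since the last frame, keeping the physical path within the tracked space.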
We investigated the influence of body shape and pose on the perception of physical strength and social power for male virtual characters. In the first experiment, participants judged the physical strength of varying body shapes, derived from a statistical 3D body model. Based on these ratings, we determined three body shapes (weak, average, and strong) and animated them with a set of power poses for the second experiment. Participants rated how strong or powerful they perceived virtual characters of varying body shapes that were displayed in different poses. Our results show that perception of physical strength was mainly driven by the shape of the body. However, the social attribute of power was influenced by an interaction between pose and shape. Specifically, the effect of pose on power ratings was greater for weak body shapes. These results demonstrate that a character with a weak shape can be perceived as more powerful when in a high-power pose.
Socially interactive robots with human-like speech synthesis and recognition, coupled with humanoid appearance, are an important subject of robotics and artificial intelligence research. Modern solutions have matured enough to provide simple services to human users. To make the interaction with them as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. Because facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose a way to implement a program that believably animates changing facial expressions and allows them to be influenced via inter-process communication based on an emotion model. This can be used to create a screen-based virtual face for a robotic system with an inviting appearance that stimulates users to seek interaction with the robot.
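The mapping from an emotion model to animated expressions could look roughly like the following sketch, assuming a simple valence/arousal emotion model and named blend-shape weights. All names and mappings here are illustrative, not the system described above.

```python
# Minimal sketch: map an emotion state to facial blend-shape weights.
# Valence and arousal are assumed to lie in [-1, 1]; the blend-shape
# names and their couplings are made-up examples.

def expression_weights(valence: float, arousal: float) -> dict:
    """Map an emotion state in [-1, 1]^2 to blend-shape weights in [0, 1]."""
    clamp = lambda x: max(0.0, min(1.0, x))
    return {
        "smile":      clamp(valence),         # positive valence curls the mouth up
        "frown":      clamp(-valence),        # negative valence curls it down
        "brow_raise": clamp(arousal),         # high arousal raises the brows
        "eye_close":  clamp(-arousal) * 0.5,  # low arousal half-closes the lids
    }
```

An IPC layer, as proposed above, would then only need to transmit the compact `(valence, arousal)` state; the face process interpolates the resulting weights over time to animate smooth transitions.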
Lots of movies are produced every year, too many to watch all of them and, in particular, to get an overview of the evolution of typical movie genres and the actors playing in them. Moreover, it is a challenging problem to detect correlations among the movies and the actors in those movies, in particular if we are interested in time-varying data patterns like trends, countertrends, or anomalies and outliers. Those correlations are specifically interesting if they can be inspected on different levels of granularity, e.g., temporal, but also hierarchical in the form of country- or continent-based correlations. In this paper we describe the IMDb Explorer, a web-based visualization tool that consists of two major views, denoted the movie cosmos and the career lines. Both views are linked and interactively manipulable, while a set of user-defined metrics is explorable. We illustrate the usefulness of the visualization tool by applying it to the entire movie database provided by IMDb.
Virtual Reality (VR) technology has the potential to support knowledge communication in several sectors. Still, when educators use immersive VR technology to present their knowledge, their audience within the same room may no longer be able to see them because they are wearing head-mounted displays (HMDs). In this paper, we propose the Avatar2Avatar system and design, which augments the visual aspect of such a knowledge presentation. Avatar2Avatar enables users to see both a realistic representation of their respective counterpart and the virtual environment at the same time. We point out several design aspects of such a system and address design challenges and possibilities that arose during implementation. We specifically explore opportunities of a system design for integrating 2D video avatars in existing room-scale VR setups. An additional user study indicates a positive impact on spatial presence when using Avatar2Avatar.
Representing users within an immersive virtual environment is an essential functionality of a multi-person virtual reality system. Especially when communicative or collaborative tasks must be performed, challenges arise in realistically embodying and integrating such avatar representations. A shared comprehension of local space and non-verbal communication (like gesture, posture, or self-expressive cues) can support these tasks. In this paper, we introduce a novel approach to create realistic, video-texture-based avatars of co-located users in real time and integrate them into an immersive virtual environment. We show a straightforward and low-cost hardware and software solution to do so. We discuss technical design problems that arose during implementation and present a qualitative analysis of the usability of the concept from a user study, applying it to a training scenario in the automotive sector.
This paper presents a new approach that allows gravity-reduced navigation within a VR environment, such as a simulated moonwalk. The Cyberith Virtualizer is used for navigation in the VR environment. Gravity is simulated by means of an adjustable harness system that is suspended on elastic ropes and allows graduated levels of gravity compensation. A spaceship scenario and a lunar surface were generated as environments; in the current application, simple interactions are possible there. Following existing gravity offload systems, the solution is named ViRGOS. ViRGOS has already been deployed at various visitor appointments and university events, so that initial user feedback could be collected.
JumpAR combines the world of augmented reality (AR) with the world-famous jump 'n' run genre in a mobile game. The player creates an individual course in their real environment and navigates their character through it on virtual platforms. The JumpAR prototype, developed with Unity, was analyzed in a user test after the core functions and mechanics had been implemented. Integrating real objects from the player's surroundings creates a strong link between the virtual and the real world during gameplay, which constitutes a new form of AR interaction for mobile games.
Public transport maps are typically designed to support route-finding tasks for passengers while also providing an overview of stations, metro lines, and city-specific attractions. Most of those maps are designed as a static representation, perhaps displayed in a metro station or printed in a travel guide. In this paper we describe a dynamic, interactive public transport map visualization enhanced by additional views of dynamic passenger data on different levels of temporal granularity. Moreover, we also provide extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We illustrate the usefulness of our interactive visualization by applying it to the railway system of Hamburg in Germany, while also taking into account the extra passenger data. As a further indication of the usefulness of the interactively enhanced metro maps, we conducted a user experiment with 20 participants.
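The temporal aggregation behind such linked views can be sketched as follows, assuming passenger records arrive as (station, ISO timestamp, count) tuples; the station name and counts below are made up for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Sketch: sum passenger counts per station at a chosen temporal
# granularity, the kind of bucketing a density plot or calendar
# view would be driven by.

def aggregate(records, granularity="hour"):
    """Sum counts per (station, time bucket) at the given granularity."""
    key_fmt = {"hour": "%Y-%m-%d %H",
               "day": "%Y-%m-%d",
               "month": "%Y-%m"}[granularity]
    totals = defaultdict(int)
    for station, ts, count in records:
        bucket = datetime.fromisoformat(ts).strftime(key_fmt)
        totals[(station, bucket)] += count
    return dict(totals)

records = [
    ("Hauptbahnhof", "2019-05-01T08:15:00", 120),
    ("Hauptbahnhof", "2019-05-01T08:45:00", 95),
    ("Hauptbahnhof", "2019-05-01T09:05:00", 60),
]
hourly = aggregate(records, "hour")
```

Switching the `granularity` argument re-buckets the same records for a coarser view, which is what lets one dataset feed hourly density plots and monthly calendar visualizations alike.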