Informatik
Filtered to: year of publication 2015; document type: conference proceeding; full text available; part of the bibliography; institute: Informatik (56 documents).
Publisher
- Gesellschaft für Informatik e.V. (19)
- Springer (11)
- Hochschule Reutlingen (8)
- IEEE (5)
- Association for Computing Machinery (3)
- IARIA (3)
- Deutsche Gesellschaft für Computer- und Roboterassistierte Chirurgie e. V. (1)
- RWTH Aachen (1)
- SPIE. The International Society for Optical Engineering (1)
- Universität Konstanz (1)
- vwh Verlag Werner Hülsbusch (1)
The coined term "virtual reality" describes the depiction of artificial worlds and the interaction with them. It is usually associated with expensive game and film productions. However, thanks to current developments, small development studios and end users can now also draw on motion-tracking systems. This paper presents two prototypes that build on exactly such systems. The prototypes are intended to enable interaction with the environment and a feeling of "being right in the middle of it" in the context of serious games.
Two Stream Hypothesis: Adaptationseffekte bei sozialen Interaktionen mit Avataren in Virtual Reality
(2015)
This paper presents an experiment on the two-streams hypothesis. First, the psychological and technical foundations required for the experiment are laid out. The research question is then defined and the experimental setup is discussed. The experiment tests whether there are different adaptation effects in recognizing and performing ambiguous social actions. A setup is developed in which participants respond to the actions of virtual avatars either actively, through complementary actions, or passively, by pressing buttons. Finally, the results are evaluated and conclusions are drawn.
The goal of this in-depth scientific project is to develop and evaluate a user interface concept for a driver assistance system. The driver assistance system is intended to help the driver drive safely and energy-efficiently. The task is to create and evaluate a presentation concept, taking into account the special requirements of secondary interactions in the vehicle. The goal of the conceptual phase is to develop a presentation that is as distraction-free as possible. To this end, norms, guidelines, and standards for in-car interaction are summarized and applied. The result is a presentation concept that can be implemented modularly and whose freedom from distraction is evaluated in a lane change test.
The perception of immense vastness can trigger awe in humans, which in turn can lead to positive reactions. While awe itself has already been well studied both theoretically and practically, there is very little research on immense vastness. This knowledge would be useful for deliberately inducing awe in humans. For this reason, a study was conducted to determine to what extent a feeling of immense vastness can be created in virtual reality using a head-mounted display and whether this gives rise to awe.
Scroll-activated animations open up new possibilities for web developers to present content and to interact with users. By animating images, text, and other elements of a website, this new form of presentation is meant to positively surprise the user. The goal is to convey the content to the user in a more interesting and targeted way. The question arises, however, whether the resulting gain in user experience comes at the expense of usability: the animations may produce an "aha" effect for the user, yet reduce usability at the same time. For this reason, this work addresses the usability and user experience of these animations and investigates the actual added value of scroll animations with the help of web analytics tools. The effects mentioned above are examined by comparison with a page with identical content, and the results are additionally broken down by device type to reveal possible differences.
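As a hedged illustration of the technique discussed above: scroll-activated animations are typically built on the browser's IntersectionObserver API, which fires a callback once an element scrolls into the viewport and then lets CSS handle the actual transition. The selector `.reveal` and the class `is-visible` are assumptions made for this sketch, not names taken from the study.

```typescript
// Minimal sketch of a scroll-activated animation (assumed class names).
// A CSS rule such as `.reveal.is-visible { opacity: 1; transform: none; }`
// paired with a transition on `.reveal` would perform the actual animation.
const observer = new IntersectionObserver(
  (entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.classList.add("is-visible"); // trigger the CSS transition
        obs.unobserve(entry.target);              // animate each element only once
      }
    }
  },
  { threshold: 0.25 } // fire when 25% of the element is visible
);

document.querySelectorAll<HTMLElement>(".reveal").forEach((el) => observer.observe(el));
```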
Scanned human models are increasingly used for experiments in the VR domain. Providing realistic motion sequences for them, however, is time-consuming. The goal of this work is to find a workflow that makes it possible to process a large number of such models in a very short time. To this end, the work examines different methods for automating skinning and rigging so that the models can be used in virtual environments driven by motion tracking. The quality of the processed models is assessed using scans in different poses.
Interdisciplinarity is on everyone's lips, but it is often hard to put into practice. Yet interesting research frequently happens at the interfaces between individual fields. As a visitor to the conference, you can expect contributions from a wide range of areas, such as e-learning, automatic emotion recognition and animation, human-robot interaction, driver assistance systems, mechanisms of perception in virtual worlds, and the processing of digital human models. The presented works were carried out either at the computer science faculty itself or externally in cooperation with a research-active company or a research institute. In addition, works from other faculties are presented.
In order to explore an image, the human eye functions like a spotlight, scanning the content from one object to the next. This visual search behavior is implemented with the help of attention control. The following work surveys visual search behavior in "Wimmelpictures", a special type of busy picture. The research objective is to analyze different search strategies and to work out possible differences concerning age and gender. The university experiment is carried out with an eye tracker that records the fixations and saccades of the test persons. The results indicate three forms of search strategy: based on a pattern, based on feature selection, or a mixture of both. Our data show that searching for specific features of the target is the most successful strategy. Furthermore, there are no differences concerning gender, but some concerning age. All age groups need more time to locate the target as the number of distractors in the image increases. The size of the target is also relevant: a larger target is found more quickly than a smaller one.
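To make the eye-tracking terminology above concrete, the following sketch separates fixations from saccades using a simple dispersion-threshold (I-DT style) pass over raw gaze samples. It is a generic illustration under assumed thresholds and data shapes, not the algorithm used in the study or by any particular eye-tracker software.

```typescript
// Illustrative dispersion-threshold (I-DT style) fixation detection.
// The GazeSample shape and both thresholds are assumptions for this sketch.
interface GazeSample { x: number; y: number; t: number } // pixel coords, time in ms
interface Fixation { x: number; y: number; start: number; end: number }

function detectFixations(
  samples: GazeSample[],
  maxDispersionPx = 30, // spatial threshold: max (x-range + y-range) of a fixation
  minDurationMs = 100   // temporal threshold: minimum fixation duration
): Fixation[] {
  const fixations: Fixation[] = [];
  let i = 0;
  while (i < samples.length) {
    let j = i;
    // Grow the window while the samples stay within the dispersion threshold.
    while (j + 1 < samples.length) {
      const win = samples.slice(i, j + 2);
      const xs = win.map((s) => s.x);
      const ys = win.map((s) => s.y);
      const dispersion =
        Math.max(...xs) - Math.min(...xs) + (Math.max(...ys) - Math.min(...ys));
      if (dispersion > maxDispersionPx) break;
      j++;
    }
    if (samples[j].t - samples[i].t >= minDurationMs) {
      const win = samples.slice(i, j + 1);
      fixations.push({
        x: win.reduce((a, s) => a + s.x, 0) / win.length, // fixation centroid
        y: win.reduce((a, s) => a + s.y, 0) / win.length,
        start: samples[i].t,
        end: samples[j].t,
      });
      i = j + 1; // samples between fixations are treated as saccades
    } else {
      i++; // window too short for a fixation, slide forward
    }
  }
  return fixations;
}
```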
Flash SSDs are omnipresent as database storage. HDD replacement is seamless since Flash SSDs implement the same legacy hardware and software interfaces to enable backward compatibility. Yet, the price paid is high: backward compatibility masks the native behaviour, incurs significant complexity, and decreases I/O performance, making it non-robust and unpredictable. Flash SSDs are black boxes. Although DBMSs have ample mechanisms to control hardware directly and utilize the performance potential of Flash memory, the legacy interfaces and black-box architecture of Flash devices prevent them from doing so.
In this paper we demonstrate NoFTL, an approach that enables native Flash access and integrates parts of the Flash-management functionality into the DBMS, yielding a significant performance increase and a simplification of the I/O stack. NoFTL is implemented on real hardware based on the OpenSSD research platform. The contributions of this paper include: (i) a description of the NoFTL native Flash storage architecture; (ii) its integration in Shore-MT; and (iii) a performance evaluation of NoFTL on a real Flash SSD and on an online data-driven Flash emulator under TPC-B, TPC-C, TPC-E, and TPC-H workloads. The evaluation results indicate an improvement of at least 2.4x on real hardware over conventional Flash storage, as well as better utilisation of native Flash parallelism.
In the present tutorial we perform a cross-cut analysis of database systems from the perspective of modern storage technology, namely Flash memory. We argue that the design of modern DBMSs and the architecture of Flash storage technologies are not aligned with each other. The result is needlessly suboptimal DBMS performance and inefficient Flash utilisation, as well as low Flash storage endurance and reliability. We showcase new DBMS approaches with improved algorithms and leaner architectures, designed to leverage the properties of modern storage technologies. We cover the area of transaction management and multi-versioning, putting a special emphasis on: (i) version organisation models and invalidation mechanisms in multi-versioning DBMSs; (ii) Flash storage management, especially append-based storage at tuple granularity; (iii) Flash-friendly buffer management; as well as (iv) improvements in the searching and indexing models. Furthermore, we present our NoFTL approach to native Flash access, which integrates parts of the Flash-management functionality into the DBMS, yielding a significant performance increase and a simplification of the I/O stack. In addition, we cover the basics of building large Flash storage for DBMSs and revisit some of the RAID techniques and principles.
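As a rough illustration of two of the concepts named above (append-based storage at tuple granularity, and version invalidation in multi-versioning), the toy sketch below appends a new tuple version on every update and marks the predecessor as invalidated instead of overwriting it. All names are invented for this sketch; it does not reflect the storage layout of NoFTL or of any DBMS covered in the tutorial.

```typescript
// Toy append-only, tuple-granular versioned store: updates never overwrite in
// place; they append a new version and flag the previous one as invalidated
// (on real Flash the invalidation state would live in a separate structure,
// and garbage collection would later reclaim invalidated versions).
interface TupleVersion<T> {
  key: string;
  value: T;
  version: number;
  invalidated: boolean; // true once a newer version supersedes this one
}

class AppendOnlyStore<T> {
  private log: TupleVersion<T>[] = [];        // append-only version log
  private latest = new Map<string, number>(); // key -> index of newest version

  put(key: string, value: T): void {
    const prevIdx = this.latest.get(key);
    const version = prevIdx === undefined ? 1 : this.log[prevIdx].version + 1;
    if (prevIdx !== undefined) {
      this.log[prevIdx].invalidated = true; // out-of-place update
    }
    this.log.push({ key, value, version, invalidated: false });
    this.latest.set(key, this.log.length - 1);
  }

  get(key: string): T | undefined {
    const idx = this.latest.get(key);
    return idx === undefined ? undefined : this.log[idx].value;
  }

  liveVersionCount(): number {
    return this.log.filter((v) => !v.invalidated).length;
  }
}
```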
Real-Time Charging (RTC) applications in the telecommunications domain require extremely fast database transactions. Today's providers rely mostly on in-memory databases for this kind of information processing. A flexible and modular benchmark suite specifically designed for this domain provides a valuable framework to test the performance of different DB candidates. Besides a data generator and a load generator, the suite also includes decoupled database connectors and use case components for convenient customization and extension. The test results produced this way can be used as guidance for choosing a subset of candidates for further tuning and testing, and finally for selecting the database best suited to the chosen use cases. This is why our benchmark suite can be of value when choosing databases for RTC use cases.
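To illustrate the modular structure described above (decoupled database connectors and use case components feeding a common load generator), here is a hedged sketch of how such interfaces might be cut; every type and method name is hypothetical and not taken from the benchmark suite.

```typescript
// Hypothetical interfaces for a modular benchmark suite; names are invented
// for this sketch and do not come from the suite described in the paper.
interface DbConnector {
  connect(): Promise<void>;
  execute(statement: string, params: unknown[]): Promise<unknown>;
  close(): Promise<void>;
}

interface UseCase {
  name: string; // e.g. a hypothetical "charge-session-update"
  run(db: DbConnector, record: Record<string, unknown>): Promise<void>;
}

// The load driver depends only on the two interfaces above, so database
// candidates and RTC use cases can be swapped without touching the driver.
async function runBenchmark(
  db: DbConnector,
  useCase: UseCase,
  workload: Record<string, unknown>[]
): Promise<number> {
  await db.connect();
  const start = Date.now();
  for (const record of workload) {
    await useCase.run(db, record);
  }
  const elapsedMs = Date.now() - start;
  await db.close();
  return workload.length / (elapsedMs / 1000); // throughput in transactions/second
}
```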
Distraction of the driver is one of the most frequent causes of car accidents. We aim for a computational cognitive model that predicts the driver's degree of distraction while driving and performing a secondary task, such as talking with co-passengers. The secondary task might cognitively involve the driver to differing degrees depending on the topic of the conversation or the number of co-passengers. In order to detect these subtle differences in everyday driving situations, we aim to analyse in-car audio signals and combine this information with head pose and face tracking information. In the first step, we will assess driving, video, and audio parameters that reliably predict cognitive distraction of the driver. These parameters will be used to train the cognitive model to estimate the degree of the driver's distraction. In the second step, we will train and test the cognitive model during conversations of the driver with co-passengers during active driving. This paper describes the work in progress of our first experiment, with preliminary results concerning driving parameters corresponding to the driver's degree of distraction. In addition, the technical implementation of our experiment combining driving, video, and audio data, and first methodological results concerning the auditory analysis, are presented. The overall aim of applying the cognitive distraction model is to develop a mobile user profile that computes the individual degree of distraction and is applicable to other systems as well.
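Purely as an illustration of fusing the three signal sources mentioned above (driving, video, and audio parameters) into a single distraction estimate, the sketch below combines normalized features with fixed weights. The feature names and weights are assumptions, and a weighted sum is not the trained cognitive model described in the paper.

```typescript
// Toy fusion of per-modality features into a distraction score in [0, 1].
// Feature names and weights are illustrative assumptions only; the paper's
// cognitive model is not reproduced here.
interface DistractionFeatures {
  laneDeviation: number;     // driving: normalized lateral deviation, 0..1
  steeringReversals: number; // driving: normalized steering reversal rate, 0..1
  gazeOffRoadRatio: number;  // video: share of time head/gaze is off the road, 0..1
  speechActivity: number;    // audio: share of time conversation is active, 0..1
}

const WEIGHTS: Record<keyof DistractionFeatures, number> = {
  laneDeviation: 0.3,
  steeringReversals: 0.2,
  gazeOffRoadRatio: 0.3,
  speechActivity: 0.2,
};

function distractionScore(features: DistractionFeatures): number {
  const raw = (Object.keys(WEIGHTS) as (keyof DistractionFeatures)[]).reduce(
    (sum, key) => sum + WEIGHTS[key] * features[key],
    0
  );
  return Math.min(1, Math.max(0, raw)); // clamp to [0, 1]
}
```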
Managers recognize that software development project teams need to be developed and guided. Although technical skills are necessary, non-technical (NT) skills are equally, if not more, necessary for project success. Currently, there are no proven tools to measure the NT skills of software developers or software development teams. Behavioral markers (observable behaviors that have positive or negative impacts on individual or team performance) are beginning to be used successfully by the airline and medical industries to measure NT skill performance. The purpose of this research is to develop and validate a behavior marker system tool that can be used by different managers or coaches to measure the NT skills of software development individuals and teams. This paper presents an empirical study conducted at the Software Factory in which users of the behavior marker tool rated video clips of software development teams. The initial results show that the behavior marker tool can be used reliably with minimal training.
Entrepreneurs and small and medium enterprises often have difficulties developing new prototypes, exploring new ideas, or testing new techniques. To help them, academic Software Factories, a new concept of collaboration between universities and companies, have been developed in recent years. Software Factories provide a unique environment for students and companies. Students benefit from the possibility of working in a real work environment, learning how to apply the state of the art of existing techniques, and showing their skills to entrepreneurs. Companies benefit from a protected, risk-free environment in which they can develop new ideas. Universities, finally, benefit from this setup as an ideal environment for empirical studies under industry-like conditions. In this paper, we present the network of academic Software Factories in Europe, showing how companies have already benefited from existing Software Factories and reporting success stories. The results of this paper can help grow the network of factories and help other universities and companies set up similar environments to boost the local economy.
Software process improvement (SPI) has been around for decades: frameworks have been proposed, success factors studied, and experiences reported. However, the sheer mass of concepts, approaches, and standards published over the years overwhelms practitioners as well as researchers. What is out there? Are there new emerging approaches? What are the open issues? Still, we struggle to answer the question: what is the current state of SPI and related research? In this paper, we present initial results from a systematic mapping study to shed light on the field of SPI and to draw conclusions for future research directions. An analysis of 635 publications draws a big picture of SPI-related research of the past 25 years. Our study shows a high number of solution proposals, experience reports, and secondary studies, but only few theories. In particular, standard SPI models like CMMI and ISO/IEC 15504 are analyzed, enhanced, and evaluated for applicability, whereas these standards are critically discussed from the perspective of SPI in small-to-medium-sized companies, which leads to new specialized frameworks. Furthermore, we find a growing interest in success factors to aid companies in conducting SPI.
For years, agile methods have been considered the most promising route toward successful software development, and a considerable number of published studies report on the (successful) use of agile methods and the benefits companies gain from adopting them. Yet, since the world is not black or white, the question of what happened to the traditional models arises. Are traditional models replaced by agile methods? How is the transformation toward Agile managed, and, moreover, where did it start? With this paper we close a gap in the literature by studying general process use over time to investigate how traditional and agile methods are used. Is there coexistence, or do agile methods accelerate the traditional processes' extinction? The findings of our literature study comprise two major results: First, studies and reliable numbers on general process model use are rare, i.e., we lack quantitative data on actual process use and, thus, we often lack the ability to ground process-related research in practically relevant issues. Second, despite the assumed dominance of agile methods, our results clearly show that companies enact context-specific hybrid solutions in which traditional and agile development approaches are used in combination.
Rapid value delivery requires a company to utilize empirical evaluation of new features and products in order to avoid unnecessary product risks. This helps to make data-driven decisions and to ensure that development is focused on features that provide real value for customers. Short feedback loops are a prerequisite, as they allow for fast learning and reduced reaction times. Continuous experimentation is a development practice where the entire R&D process is guided by constantly conducting experiments and collecting feedback. Although principles of continuous experimentation have been successfully applied in domains such as game software or SaaS, it is not obvious how to transfer continuous experimentation to the business-to-business (B2B) domain. In this article, a case study from a medium-sized software company in the B2B domain is presented. The study objective is to analyze the challenges, benefits, and organizational aspects of continuous experimentation in the B2B domain. The results suggest that technical challenges are only one part of the challenges a company encounters in this transition. The company also has to address challenges related to the customer and to organizational culture. Unique properties of each customer's business play a major role and need to be considered when designing experiments. Additionally, the speed at which experiments can be conducted depends on the speed at which production deployments can be made. Finally, the article shows how the study results can be used to modify development in the case company in a way that more feedback and data are used instead of opinions.
Software development as an experiment system: a qualitative survey on the state of the practice
(2015)
An experiment-driven approach to software product and service development is gaining increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software functionalities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development. Although case studies on experimentation in industry exist, the understanding of the state of the practice and the encountered obstacles is incomplete. This paper presents an interview-based qualitative survey exploring the experimentation experiences of ten software development companies. The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice is not yet mature. In particular, experimentation is rarely systematic and continuous. Key challenges relate to changing organizational culture, accelerating development cycle speed, and measuring customer value and product success.
Delivering value to customers in real time requires companies to utilize real-time deployment of software to expose features to users faster and to shorten the feedback loop. This allows for faster reaction and helps to ensure that development is focused on features providing real value. Continuous delivery is a development practice where software functionality is deployed continuously to the customer environment. Although this practice has been established in some domains, such as B2C mobile software, the B2B domain imposes specific challenges. This article presents a case study conducted in a medium-sized software company operating in the B2B domain. The objective of this study is to analyze the challenges and benefits of continuous delivery in this domain. The results suggest that technical challenges are only one part of the challenges a company encounters in this transition. The company must also address challenges related to the customer and to procedures. The core challenges are caused by having multiple customers with diverse environments and unique properties, whose business depends on the software product. Some customers require manual acceptance testing, while others are reluctant to adopt new versions. By utilizing continuous delivery, the case company can shorten feedback cycles, increase the reliability of new versions, and reduce the resources required for deploying and testing new releases.