Informatik
Year of publication
- 2017 (64)
Document Type
- Conference proceeding (51)
- Journal article (12)
- Working Paper (1)
Language
- English (64)
Is part of the Bibliography
- yes (64)
Institute
- Informatik (64)
This paper examines the efficacy of social media systems in customer complaint handling. The emergence of social media as a useful complement and (possibly) a viable alternative to the traditional channels of service delivery motivates this research. The theoretical framework, developed from the literature on social media and complaint handling, is tested against data collected from two different channels (hotline and social media) of a German telecommunication services provider, in order to gain insights into channel efficacy in complaint handling. We contribute to the understanding of firms' technology use for complaint handling in two ways:
(a) by conceptualizing and evaluating complaint handling quality across traditional and social media channels and (b) by comparing the impact of complaint handling quality on key performance outcomes, such as customer loyalty, positive word-of-mouth, and cross-purchase intentions, across traditional and social media channels.
Pokémon Go was the first mobile augmented reality (AR) game to reach the top of the download charts of mobile applications. However, little is known about this new generation of mobile online AR games, and existing theories provide limited applicability for understanding their users. Against this background, this research provides a comprehensive framework based on uses and gratifications theory, technology risk research, and flow theory. The proposed framework aims to explain the drivers of attitudinal and intentional reactions, such as continuance in gaming or willingness to invest money in in-app purchases. A survey among 642 Pokémon Go players provides insights into the psychological drivers of mobile AR games. The results show that hedonic, emotional, and social benefits and social norms drive consumer reactions, while physical risks (but not data privacy risks) hinder them. However, the importance of these drivers differs depending on the form of user behavior.
Managing decentralized corporate energy systems is a challenging task for enterprises. In particular, integrating energy objectives into business strategy creates difficulties that result in inefficient decisions. To address this, practice-proven methods such as the balanced scorecard and enterprise architecture management are transferred to the energy domain and evaluated based on a case study. Managing multi-dimensionality and high complexity are the main drivers for an effective and efficient energy management system. Both methods show a positive impact on managing decentralized corporate energy systems and are adaptable to the energy domain.
In any autonomous driving system, the map used for localization plays a vital part that is often underestimated. The map describes the world around the vehicle beyond the sensor view and is a main input to the decision-making process in highly complicated scenarios. Thus, there are strict requirements on the accuracy and timeliness of the map. We present a robust and reliable approach to crowd-based mapping using a GraphSLAM framework based on radar sensors. We show on a parking lot that the localization results are accurate and reliable even in dynamically changing environments and in previously unexplored terrain without any map data. This is achieved by collaborative map updates from multiple vehicles. To support these claims experimentally, the Joint Graph Optimization is compared to ground truth on an industrial parking space. Mapping performance is evaluated using a dense map from a total station as reference, and localization results are compared with a deeply coupled DGPS/INS system.
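The core of any GraphSLAM framework is a least-squares optimization over a graph of poses linked by relative measurements. The following is a minimal 1D toy sketch of that idea — three poses, two odometry edges, and one loop-closure edge — solved via the linear normal equations. All names and numbers here are illustrative; this is not the paper's radar pipeline, which operates on full vehicle poses with real sensor covariances.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def optimize_poses(n_poses, edges, prior_on_first=0.0):
    """edges: list of (i, j, measured_xj_minus_xi) with unit weights.
    Builds the normal equations H x = b of the pose-graph least-squares
    problem and solves them; returns the optimized 1D poses."""
    H = [[0.0] * n_poses for _ in range(n_poses)]
    b = [0.0] * n_poses
    H[0][0] += 1.0                      # weak prior pins pose 0 (gauge freedom)
    b[0] += prior_on_first
    for i, j, z in edges:               # each edge contributes to H and b
        H[i][i] += 1.0; H[j][j] += 1.0
        H[i][j] -= 1.0; H[j][i] -= 1.0
        b[i] -= z; b[j] += z
    return solve(H, b)

# Two odometry edges (1.0 m each) and a loop closure saying the total
# displacement was only 1.8 m; the optimizer spreads the discrepancy.
poses = optimize_poses(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.8)])
```

In real GraphSLAM systems the edges are nonlinear and the same normal-equation solve runs inside a Gauss-Newton loop; the collaborative map updates described above correspond to adding edges contributed by multiple vehicles to one shared graph.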
Database management systems (DBMSs) are critical performance components in large-scale applications under modern update-intensive workloads. Additional access paths accelerate look-up performance in a DBMS for frequently queried attributes, but the required maintenance slows down update performance. The ubiquitous B+-tree is a commonly used key-indexed access path that supports many required functionalities with logarithmic access time to requested records. Modern processing and storage technologies and their characteristics require a reconsideration of matured indexing approaches for today's workloads. Partitioned B-trees (PBTs) leverage the characteristics of modern hardware technologies and complex memory hierarchies, as well as high update rates and changing workloads, by maintaining partitions within one single B+-tree. This paper includes an experimental evaluation of the PBT's optimized write pattern and its performance improvements. With PBTs, transactional throughput under TPC-C increases by 30%; PBTs produce beneficial sequential write patterns even in the presence of updates and maintenance operations.
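The defining trick of a Partitioned B-tree is that partitions are not separate trees: an artificial partition number is prefixed to every key, so all partitions live in one ordered structure, new writes land in the small, hot key range of the current partition, and lookups probe partitions from newest to oldest. A minimal in-memory sketch of that key scheme (illustrative only — a real PBT lives inside a disk-based B+-tree, not a Python list):

```python
import bisect

class PartitionedBTree:
    """Toy Partitioned B-tree sketch: one sorted structure keyed by
    (partition, key); hypothetical API, not from the paper."""

    def __init__(self):
        self._entries = []      # sorted list of ((partition, key), value)
        self._current = 0       # partition currently receiving writes

    def insert(self, key, value):
        # New entries always carry the current partition number, so they
        # cluster in one contiguous, cache- and write-friendly key range.
        bisect.insort(self._entries, ((self._current, key), value))

    def start_new_partition(self):
        """E.g. after a bulk load, or when the hot partition grows too large."""
        self._current += 1

    def lookup(self, key):
        # Probe newest partition first: recent versions shadow older ones.
        for pid in range(self._current, -1, -1):
            i = bisect.bisect_left(self._entries, ((pid, key),))
            if i < len(self._entries) and self._entries[i][0] == (pid, key):
                return self._entries[i][1]
        return None

tree = PartitionedBTree()
tree.insert("a", 1); tree.insert("b", 2)
tree.start_new_partition()
tree.insert("a", 10)            # newer partition shadows the old value of "a"
```

Merging older partitions back into the main partition (analogous to LSM compaction) is where the maintenance operations mentioned above come in.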
The characteristics of modern computing and storage technologies fundamentally differ from traditional hardware, and there is a need to optimally leverage their performance, endurance, and energy consumption characteristics. Therefore, existing architectures and algorithms in modern high-performance database management systems have to be redesigned and advanced. Multi-Version Concurrency Control (MVCC) approaches in database management systems maintain multiple physically independent tuple versions. Snapshot isolation approaches enable high parallelism and concurrency in workloads at an almost serializable consistency level. Modern hardware technologies benefit from multi-version approaches, yet indexing multi-version data on modern hardware is still an open research area. In this paper, we provide a survey of popular multi-version indexing approaches and an extended scope of high-performance single-version approaches. An optimal multi-version index structure balances the look-up efficiency for tuple versions visible to transactions against the effort of index maintenance, across different workloads on modern hardware technologies.
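The visibility rule at the heart of snapshot isolation over versioned tuples can be stated in a few lines: a reader with snapshot timestamp `ts` sees exactly the version whose lifetime interval contains `ts`. A minimal sketch, with hypothetical field names (real systems also track transaction IDs, aborts, and in-progress versions):

```python
from dataclasses import dataclass

INF = float("inf")

@dataclass
class Version:
    """One physical tuple version in an MVCC version chain (illustrative)."""
    value: str
    begin_ts: int          # commit timestamp of the creating transaction
    end_ts: float = INF    # commit timestamp of the overwriting transaction

def visible(versions, snapshot_ts):
    """Return the single version a snapshot-isolated reader at snapshot_ts
    may see: created at or before the snapshot, not yet superseded."""
    for v in versions:
        if v.begin_ts <= snapshot_ts < v.end_ts:
            return v
    return None            # key did not exist at this snapshot

# Version chain for one key: "x" committed at ts=5, replaced at ts=9.
chain = [Version("old-x", 5, 9), Version("new-x", 9)]
```

The survey's central index-design question is precisely where this check runs: an index that stores only the newest version keeps maintenance cheap but forces chain traversal for older snapshots, while a fully versioned index does the filtering inside the index at higher maintenance cost.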
Using measurement and simulation for understanding distributed development processes in the Cloud
(2017)
Organizations increasingly develop software in a distributed manner. The Cloud provides an environment to create and maintain software-based products and services. Currently, it is widely unknown which software processes are suited to Cloud-based development and what their effects are in specific contexts. This paper presents a process simulation to study distributed development in the Cloud. We contribute a simulation model that helps analyze different project parameters and their impact on projects carried out in the Cloud. The simulator reproduces activities, developers, issues, and events in the project, and it generates statistics, e.g., on throughput, total time, and lead and cycle time. The aim of the simulation model is thus to analyze the trade-offs regarding throughput, total time, project size, and team size, and to help project managers select the most suitable planning alternative. Based on observed projects in Finland and Spain, we simulated a distributed project using artificial and real data. In particular, we studied the variables project size, team size, throughput, and total project duration. A comparison of the real project data with the results obtained from the simulation shows that the simulation produces results close to the real data, and we could successfully replicate a distributed software project. By improving the understanding of distributed development processes, our simulation model thus supports project managers in their decision-making.
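The kind of trade-off such a simulator explores — team size versus throughput and lead time — can be illustrated with a tiny single-queue model: issues arrive at random intervals, any free developer picks one up, and we record when each issue finishes. Everything here (parameter names, exponential distributions, unit weights) is a simplifying assumption for illustration, not the paper's calibrated model.

```python
import heapq
import random

def simulate(n_issues, n_devs, arrival_gap, work_time, seed=42):
    """Toy discrete-event simulation of a development queue: issues arrive
    every `arrival_gap` hours on average, each takes `work_time` hours of
    developer effort on average. Returns (throughput, mean lead time)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_issues):
        t += rng.expovariate(1.0 / arrival_gap)   # random inter-arrival gap
        arrivals.append(t)
    dev_free = [0.0] * n_devs                     # min-heap of dev free times
    heapq.heapify(dev_free)
    lead_times, finish_last = [], 0.0
    for arr in arrivals:
        start = max(arr, heapq.heappop(dev_free)) # wait for a free developer
        done = start + rng.expovariate(1.0 / work_time)
        heapq.heappush(dev_free, done)
        lead_times.append(done - arr)             # arrival -> done = lead time
        finish_last = max(finish_last, done)
    return n_issues / finish_last, sum(lead_times) / n_issues

# Same issue stream, two team sizes: a larger team should not increase lead time.
tp_small, lead_small = simulate(50, 1, arrival_gap=2.0, work_time=3.0)
tp_big, lead_big = simulate(50, 4, arrival_gap=2.0, work_time=3.0)
```

Running such a model across parameter grids is what lets a project manager compare planning alternatives before committing a real team to one.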
The business landscape is changing radically because of software. Companies in all industry sectors are continuously finding new flexibilities in this programmable world. They are able to deliver new functionality even after the product is already in the customer's hands. But success is far from guaranteed if they cannot validate their assumptions about what their customers actually need. A competitor with better knowledge of customer needs can disrupt the market in an instant.
This book introduces continuous experimentation, an approach to continuously and systematically test assumptions about the company's product or service strategy and verify customers' needs through experiments. By observing how customers actually use the product or early versions of it, companies can make better development decisions and avoid potentially expensive and wasteful activities. The book explains the cycle of continuous experimentation, demonstrates its use through industry cases, provides advice on how to conduct experiments with recipes, tools, and models, and lists some common pitfalls to avoid. Use it to get started with continuous experimentation and make better product and service development decisions that are in line with your customers' needs.
Due to rapidly changing technologies and business contexts, many products and services are developed under high uncertainty, and it is often impossible to predict customer behaviors and outcomes upfront. Therefore, product and service developers must continuously find out what customers want, requiring a more experimental mode of management and appropriate support for continuously conducting experiments. We have analytically derived an initial model for continuous experimentation from prior work and matched it against empirical case study findings from two startup companies. We examined the preconditions for setting up an experimentation system for continuous customer experiments. The resulting RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing) illustrates the building blocks required for such a system and the necessary infrastructure. The major findings are that a suitable experimentation system requires the ability to design, manage, and conduct experiments, to create so-called minimum viable products or features, to link experiment results with a product roadmap, and to manage a flexible business strategy. The main challenges are proper, rapid design of experiments; advanced instrumentation of software to collect, analyze, and store relevant data; and integration of experiment results into the product development cycle, software development process, and business strategy. This summary refers to the article The RIGHT Model for Continuous Experimentation, published in the Journal of Systems and Software [Fa17].
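At the decision end of such an experimentation cycle sits a statistical comparison of variants. As a hedged illustration of that step (not part of the RIGHT model itself), a minimal two-proportion z-test on hypothetical conversion counts from a baseline and a minimum viable feature; production experimentation systems add sequential-testing corrections and effect-size estimates on top of this:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for comparing two conversion rates with a pooled
    standard error (textbook two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: baseline converts 120/1000, new feature 160/1000.
z = two_proportion_z(120, 1000, 160, 1000)
significant = abs(z) > 1.96     # ~5% two-sided significance level
```

Only after such a check does an experiment result feed back into the product roadmap, which is the integration challenge the summary highlights.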
First International Workshop on Hybrid dEveLopmENt Approaches in Software Systems Development
(2017)
A software process is the game plan for organizing project teams and running projects. Yet, it is still a challenge to select the appropriate development approach for the respective context. A multitude of development approaches compete for users' favor, but there is no silver bullet serving all possible setups. Moreover, recent research, as well as experience from practice, shows companies utilizing different development approaches to assemble the best-fitting approach for the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The first HELENA workshop aims to bring the community together to discuss recent findings and to steer future work.