IT environments that consist of a very large number of rather small structures like microservices, Internet of Things (IoT) components, or mobility systems are emerging to support flexible and agile products and services in the age of digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing, resilient run-time environments and distributed information systems. We are extending Enterprise Architecture (EA) methodologies and models that cover a high degree of heterogeneity and distribution to support the digital transformation and related information systems with micro-granular architectures. Our aim is to support flexibility and agile transformation for both IT and business capabilities within adaptable digital enterprise architectures. The present research paper investigates mechanisms for integrating Microservice Architectures (MSA) by extending original enterprise architecture reference models with elements for more flexible architectural metamodels and EA-mini-descriptions.
The aim of this work is the development of an artificial intelligence (AI) application to support the recruiting process and to elevate the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing, i.e. active recruitment, pre-sorting, evaluating structured video interviews, and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground truth data. Together, these approaches represent important steps forward in creating ethical and unbiased machine learning systems.
AI technologies such as deep learning provide promising advances in many areas. Using these technologies, enterprises and organizations implement new business models and capabilities. Initially, AI technologies were deployed in experimental environments, and AI-based applications were created in an ad-hoc manner without methodological guidance or an engineering approach. Due to the increasing importance of AI technologies, however, a more structured approach is necessary that enables the methodological engineering of AI-based applications. Therefore, in this paper we develop first steps towards the methodological engineering of AI-based applications. First, we identify some important differences between the technological foundations of AI technologies, in particular deep learning, and traditional information technologies. Then we create a framework that enables the engineering of AI-applications in four steps: identification of an AI-application type, identification of a sub-type, selection of a lifecycle phase, and definition of details. The introduced framework considers that AI-applications use an inductive approach to infer knowledge from huge collections and streams of data. It not only enables the rapid development of AI-applications but also the efficient sharing of knowledge on AI-applications.
Distraction of the driver is one of the most frequent causes of car accidents. We aim for a computational cognitive model predicting the driver's degree of distraction while driving and performing a secondary task, such as talking with co-passengers. The secondary task might cognitively involve the driver to differing degrees depending on the topic of the conversation or the number of co-passengers. In order to detect these subtle differences in everyday driving situations, we aim to analyse in-car audio signals and combine this information with head pose and face tracking information. In the first step, we will assess driving, video and audio parameters that reliably predict cognitive distraction of the driver. These parameters will be used to train the cognitive model in estimating the degree of the driver's distraction. In the second step, we will train and test the cognitive model during conversations of the driver with co-passengers during active driving. This paper describes the work in progress of our first experiment, with preliminary results concerning driving parameters corresponding to the driver's degree of distraction. In addition, the technical implementation of our experiment combining driving, video and audio data and first methodological results concerning the auditory analysis are presented. The overall aim for the application of the cognitive distraction model is the development of a mobile user profile that computes the individual degree of distraction and is also applicable to other systems.
A large body of literature is concerned with models of presence, the sensory illusion of being part of a virtual scene, but there is still no general agreement on how to measure it objectively and reliably. For the presented study, we applied contemporary theory to measure presence in virtual reality. Thirty-seven participants explored an existing commercial game in order to complete a collection task. Two startle events were naturally embedded in the game progression to evoke physical reactions, and head tracking data was collected in response to these events. Subjective presence was recorded using a post-study questionnaire and real-time assessments. Our novel implementation of behavioral measures led to insights which could inform future presence research: we propose a measure in which startle reflexes are evoked through specific events in the virtual environment, and head tracking data is compared to the range and speed of baseline interactions.
Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers, and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment.
In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base via the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
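The bot's core workflow can be sketched as follows, as a minimal illustration assuming a Git checkout and the GitHub pull-request endpoint; the repository names, token, and the refactoring engine itself are hypothetical placeholders, not the Refactoring-Bot's actual code:

```python
# Illustrative sketch: apply an automated refactoring on a branch and
# propose it for asynchronous review via a pull request.
# OWNER/REPO/TOKEN and apply_refactoring() are hypothetical placeholders.
import subprocess
import requests

OWNER, REPO, TOKEN = "example-org", "example-repo", "<token>"

def apply_refactoring(smell_id: str) -> None:
    """Placeholder for the refactoring engine (e.g., rename, extract method)."""
    pass

def refactor_and_propose(smell_id: str, branch: str) -> str:
    # 1. Apply the automated refactoring on its own branch.
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    apply_refactoring(smell_id)
    subprocess.run(["git", "commit", "-am", f"Resolve code smell {smell_id}"], check=True)
    subprocess.run(["git", "push", "origin", branch], check=True)

    # 2. Open a pull request so developers can review the change at any time.
    response = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
        headers={"Authorization": f"token {TOKEN}"},
        json={"title": f"Refactoring: resolve {smell_id}",
              "head": branch,
              "base": "main",
              "body": "Automated refactoring proposed for asynchronous review."},
    )
    response.raise_for_status()
    return response.json()["html_url"]  # reviewers merge at the push of a button
```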
Digital transformation has changed corporate reality and, with that, firms' IT environments and IT governance (ITG). As such, the perspective of ITG has shifted from the design of a relatively stable, closed and controllable system of a self-sufficient enterprise to a relatively fluid, open, agile and transformational system of networked co-adaptive entities. Related to this paradigm shift in ITG, this paper aims to clarify how the concept of an effective ITG framework has changed in terms of the demand for agility in organizations. Thus, this study conducted 33 qualitative interviews with executives and senior managers from the banking industry in Germany, Switzerland and Austria. Analysis of the interviews focused on the formation of categories and the assignment of individual text parts (codings) to these categories to allow for a quantitative evaluation of the codings per category. Regarding traditional and agile ITG dimensions, 22 traditional and 25 agile dimensions were identified. Moreover, agile strategies within the agile ITG construct and ten ITG patterns were identified from the interview data. The data show relevant perspectives on the implementation of traditional and new ITG dimensions and highlight ambidextrous aspects in ITG frameworks.
While there has been increased digitization of private homes, little has been done to understand these specific home technologies, how they serve consumers, and other issues. "Smart home technology" (SHT) refers to a wide range of artifacts from cleaning aids to energy advisors. Given this breadth, clarity surrounding the key characteristics and the multi-faceted impact of SHT is needed to conduct more directed research on SHT. We propose a taxonomy to help outline the salient intended outcomes of SHT. Through a process involving five iterations, we analyzed and classified 79 technologies (gathered from literature and industry reports). This uncovered seven dimensions encompassing 20 salient characteristics. We believe these dimensions and characteristics will help researchers and organizations better design and study the impacts of these technologies. Our long-term agenda is to use the proposed taxonomy for an exploratory inquiry into the tensions occurring when personal and sustainability-related outcomes compete.
Towards a practical maintainability quality model for service- and microservice-based systems
(2017)
Although current literature mentions a lot of different metrics related to the maintainability of service-based systems (SBSs), there is no comprehensive quality model (QM) with automatic evaluation and a practical focus. To fill this gap, we propose a Maintainability Model for Services (MM4S), a layered maintainability QM consisting of service properties (SPs) related to automatically collectable Service Metrics (SMs). This research artifact, created within an ongoing Design Science Research (DSR) project, is the first version ready for detailed evaluation and critical feedback. The goal of MM4S is to serve as a simple and practical tool for basic maintainability estimation and control in the context of SBSs and their specialization, microservice-based systems (μSBSs).
While there are several theoretical comparisons of Object Orientation (OO) and Service Orientation (SO), little empirical research on the maintainability of the two paradigms exists. To provide support for a generalizable comparison, we conducted a study with four related parts. Two functionally equivalent systems (one OO and one SO version) were analyzed with coupling and cohesion metrics as well as via a controlled experiment, where participants had to extend the systems. We also conducted a survey with 32 software professionals and interviewed 8 industry experts on the topic. Results indicate that the SO version of our system possesses a higher degree of cohesion, a lower degree of coupling, and could be extended faster. Survey and interview results suggest that industry sees systems built with SO as more loosely coupled, modifiable, and reusable. OO systems, however, were described as less complex and easier to test.
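To make the metric side of such a comparison concrete, here is a minimal sketch of generic coupling indicators over a module dependency graph; the module names are invented, and these are textbook-style metrics (Martin's instability), not necessarily the exact metrics used in the study:

```python
# Sketch: simple coupling indicators over a dependency graph.
# Module names and dependencies are hypothetical.
deps = {  # module -> modules it depends on
    "orders":    {"billing", "inventory"},
    "billing":   {"inventory"},
    "inventory": set(),
    "reports":   {"orders", "billing", "inventory"},
}

def efferent_coupling(mod):  # outgoing dependencies (Ce)
    return len(deps[mod])

def afferent_coupling(mod):  # incoming dependencies (Ca)
    return sum(mod in targets for targets in deps.values())

for m in deps:
    ce, ca = efferent_coupling(m), afferent_coupling(m)
    instability = ce / (ce + ca) if ce + ca else 0.0  # Martin's I = Ce / (Ce + Ca)
    print(f"{m:10s} Ce={ce} Ca={ca} I={instability:.2f}")
```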
Current approaches for enterprise architecture lack analytical instruments for cyclic evaluations of business and system architectures in real business enterprise system environments. This impedes the broad use of enterprise architecture methodologies. Furthermore, the permanent evolution of systems quickly desynchronizes model representation and reality. Therefore, we introduce an approach that complements the existing top-down approach for the creation of enterprise architecture with a bottom-up approach: Enterprise Architecture Analytics exploits the architectural information already contained in existing infrastructures. By applying Big Data technologies, it is possible to mine this information and derive architectural insights. This means that enterprise architectures may be discovered, analyzed and optimized using analytics. The increased availability of architectural data also improves the possibilities to verify the compliance of enterprise architectures. Architectural decisions are linked to clustered architecture artifacts and categories according to a holistic EAM Reference Architecture with specific architecture metamodels. A specially suited EAM Maturity Framework provides the basis for systematic, analytics-supported assessments of architecture capabilities.
Smart cities are considered data factories that generate an enormous amount of data from various sources. In fact, data is the backbone of any smart service. Therefore, the strategically beneficial handling of this digital capital is crucial for cities. Some smart city pioneers have already written down their approach to data in the form of data strategies, but what should a city's data strategy include, and how can the goals and measures defined in the strategies be operationalized? This paper addresses these questions by looking closely at the data strategies of cities in Germany and the top three countries in the EU Digital Economy and Society Index. The in-depth analysis of 8 city data strategies has yielded 11 dimensions that cities should consider in their data strategy: relevance of data, principles, methods, data sharing, technology, data culture, data ethics, organizational structure, data security and privacy, collaborations, and data literacy. In addition, data governance provides a concept for putting these 11 strategic dimensions into practice through standardization measures, training programs, the definition of roles and responsibilities, and the development of a data catalog.
While the concepts of object-oriented antipatterns and code smells are prevalent in scientific literature and have been popularized by tools like SonarQube, the research field for service-based antipatterns and bad smells is not as cohesive and organized. The description of these antipatterns is distributed across several publications with no holistic schema or taxonomy. Furthermore, there is currently little synergy between documented antipatterns for the architectural styles SOA and Microservices, even though several antipatterns may hold value for both. We therefore conducted a Systematic Literature Review (SLR) that identified 14 primary studies. 36 service-based antipatterns were extracted from these studies and documented with a holistic data model. We also categorized the antipatterns with a taxonomy and implemented relationships between them. Lastly, we developed a web application for convenient browsing and implemented a GitHub-based repository and workflow for the collaborative evolution of the collection. Researchers and practitioners can use the repository as a reference, for training and education, or for quality assurance.
Analysis is an important part of the enterprise architecture management process. Prior to decisions regarding transformation of the enterprise architecture, the current situation and the outcomes of alternative action plans have to be analysed. Many analysis approaches have been proposed by researchers, and current enterprise architecture management tools implement analysis functionalities. However, little work has been done on structuring and classifying enterprise architecture analysis approaches. This paper collects and extends existing classification schemes, presenting a framework for the classification of enterprise architecture analysis. For evaluation, a collection of enterprise architecture analysis approaches has been classified based on this framework. As a result, the description of these approaches has been assessed, a common set of important categories for enterprise architecture analysis classification has been derived, and suggestions for further development are drawn.
Autonomous driving is becoming the next big digital disruption in the automotive industry. However, integrating autonomous driving vehicles into current transportation systems not only involves technological issues but also requires acceptance and adoption by users. Therefore, this paper develops a conceptual model for user acceptance of autonomous driving vehicles. The corresponding model is tested through a standardized survey of 470 respondents in Germany. Finally, the findings are discussed in relation to the current developments in the automotive industry, and recommendations for further research are given.
Many start-ups are in search of cooperation partners to develop their innovative business models. In response, incumbent firms are introducing more and more cooperation systems to engage with start-ups. However, many of these cooperations end in failure. Although qualitative studies on cooperation models have tried to improve the effectiveness of incumbent start-up strategies, only a few have empirically examined start-up cooperation behavior. Considering the lack of adequate measurement models in current research, this paper focuses on developing a multi-item scale for the cooperation behavior of start-ups, drawing from a series of qualitative and quantitative studies. The resultant scale contributes to recent research on start-up cooperation and provides a framework to add an empirical perspective to current research.
In this work, a comparison between different brushless harmonic-excited wound-rotor synchronous machines is performed. The general idea of all topologies is the elimination of the slip rings and auxiliary windings by using the already existing stator and rotor windings for field excitation. This is achieved by injecting a harmonic airgap field with the help of power electronics. This harmonic field does not interact with the fundamental field; it just transfers the excitation power across the airgap. Alternative methods with varying numbers of phases, different pole-pair combinations, and winding layouts are covered and compared with a detailed Finite-Element-parameterized model. Parasitic effects due to saturation and coupling between the harmonic and main windings are considered.
Distributed ledger technologies such as blockchain offer an innovative way to increase visibility and security and thereby reduce supply chain risks. This paper proposes a solution to increase the transparency and auditability of manufactured products in collaborative networks by adopting smart contract-based virtual identities. Compared with existing approaches, this extended smart contract-based solution offers manufacturing networks the possibility of adding privacy, content updating, and portability capabilities to smart contracts. As a result, the solution is suitable for the dynamic administration of complex supply chains.
The increase in distributed energy generation, such as photovoltaic systems (PV) or combined heat and power plants (CHP), poses new challenges to almost every distribution network operator (DNO). In low-voltage (LV) grids, where installed PV capacity approaches the magnitude of household load, reverse power flow occurs at the secondary substations. High PV penetration leads to voltage rise, flicker and loading problems. These problems have been addressed by the application of various techniques, amongst which is the deployment of step voltage regulators (SVR). SVRs can solve the voltage problem, but do not prevent or reduce reverse power flows. Therefore, the application of SVRs in low-voltage grids can result in significant power losses upstream. In this paper we present part of a research project investigating the application of remote-controlled cable cabinets (CC) with metering units in a low-voltage network as a possible alternative to SVRs. A new generation of custom-made remote-controlled cable cabinets has been deployed and dynamic network reconfigurations (NR) have been realized with the following objectives: (i) reduction of reverse power flow through the secondary substation to the upstream network and therefore a reduction of upstream losses, (ii) reduction of the voltage rise caused by distributed energy resources, and (iii) load balancing in the low-voltage grid. Secondary objectives are to improve the DNO's insight into the state of the network and to provide further information on future smart grid integration.
For large-scale processes as implemented in organizations that develop software in regulated domains, comprehensive software process models are implemented, e.g., to satisfy compliance requirements. Creating and evolving such processes is demanding and requires software engineers with substantial modeling skills to create consistent and certifiable processes. While teaching process engineering to students, we observed issues in providing and explaining models. In this paper, we present an exploratory study in which we aim to shed light on the challenges students face when it comes to modeling. Our findings show that students are capable of performing basic modeling tasks, yet fail to utilize models correctly. We conclude that the required skills, notably abstraction and solution development, are underdeveloped due to missing practice and routine. Since modeling is key to many software engineering disciplines, we advocate intensifying modeling activities in teaching.
Due to the rising need for palliative care in Russia, it is crucial to provide timely and high-quality solutions for patients, relatives, and caregivers. A methodology for the remote monitoring of patients in need of palliative care will be developed, along with the requirements for a hardware-software complex for remotely monitoring patients' health at home.
The typed graph model
(2020)
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from the use of hyper-nodes and hyper-edges, which allow a data structure to be presented on different levels of abstraction. We demonstrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML models.
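As a rough illustration of the idea of a schema-bound typed graph with hyper-edges, consider the following minimal Python sketch; the class, type and property names are invented for illustration and are not taken from the paper:

```python
# Illustrative sketch of a schema-bound typed property graph.
# Type and field names are hypothetical, not from the paper.
from dataclasses import dataclass, field

@dataclass
class NodeType:
    name: str
    properties: dict  # property name -> expected Python type

@dataclass
class TypedGraph:
    node_types: dict = field(default_factory=dict)  # type name -> NodeType
    nodes: dict = field(default_factory=dict)       # node id -> (type name, properties)
    edges: list = field(default_factory=list)       # (label, sources, targets) hyper-edges

    def define_node_type(self, t: NodeType):
        self.node_types[t.name] = t

    def add_node(self, node_id, type_name, props):
        t = self.node_types[type_name]  # fails if the type was never declared
        for key, expected in t.properties.items():
            if not isinstance(props.get(key), expected):
                raise TypeError(f"{node_id}.{key} must be {expected.__name__}")
        self.nodes[node_id] = (type_name, props)

    def add_hyper_edge(self, label, sources, targets):
        # hyper-edges may connect sets of nodes, not just single pairs
        assert all(n in self.nodes for n in sources + targets)
        self.edges.append((label, sources, targets))

g = TypedGraph()
g.define_node_type(NodeType("Person", {"name": str, "age": int}))
g.add_node("p1", "Person", {"name": "Alice", "age": 30})
g.add_node("p2", "Person", {"name": "Bob", "age": 25})
g.add_hyper_edge("knows", ["p1"], ["p2"])
```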
The time has come : application of artificial intelligence in small- and medium-sized enterprises
(2022)
Artificial intelligence (AI) is not yet widely used in small- and medium-sized industrial enterprises (SMEs). The reasons for this are manifold, ranging from poorly understood use cases and too few trained employees to too little data. This article presents a successful design-oriented case study at a medium-sized company where the described reasons are present. In this study, future demand forecasts are generated based on historical demand data for products at material number level using a gradient boosting machine (GBM). An improvement of 15% over the status quo (measured by the root mean squared error) could be achieved with rather simple techniques. The motivation, the method, and the first results are presented. Finally, challenges are addressed from which practitioners can derive learning experiences and impulses for their own projects.
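A minimal sketch of such a forecasting setup, assuming scikit-learn's gradient boosting implementation and simple lagged-demand features on synthetic data; the paper's actual feature set and tooling are not disclosed in the abstract:

```python
# Sketch: demand forecasting per material number with a gradient boosting machine.
# The lag-feature construction and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
demand = rng.poisson(lam=20, size=120).astype(float)  # 120 months of synthetic demand

# Build lag features: predict month t from the previous 6 months.
lags = 6
X = np.array([demand[t - lags:t] for t in range(lags, len(demand))])
y = demand[lags:]
X_train, X_test, y_train, y_test = X[:-24], X[-24:], y[:-24], y[-24:]

gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
gbm.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, gbm.predict(X_test)))
baseline = np.sqrt(mean_squared_error(y_test, np.full_like(y_test, y_train.mean())))
print(f"GBM RMSE: {rmse:.2f} vs. naive mean baseline: {baseline:.2f}")
```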
The tale of 1000 cores: an evaluation of concurrency control on real(ly) large multi-socket hardware
(2020)
In this paper, we set out to revisit the results of "Staring into the Abyss [...] of Concurrency Control with [1000] Cores" and analyse in-memory DBMSs on today's large hardware. Contrary to the original assumption of the authors, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware has made its way into production data centres. Hence, we follow up on this prior work with an evaluation of the characteristics of concurrency control schemes on real production multi-socket hardware with 1568 cores. To our surprise, we made several interesting findings, which we report on in this paper.
Compared to the automotive sector, where automation is the rule, in many other less standardized sectors automation is still the exception. This could soon hurt the productivity of industrialized countries, where unemployment is low and the population is aging. Phenomena like the recent drop in productivity, due to lockdowns and social distancing for the prevention of health hazards during the COVID-19 pandemic, only add to the problem. For these reasons, the relevance, motivation and intention for more automation in less standardized sectors have probably never been higher. However, available statistics show that providers and users of technologies struggle to bring more automation into action in automation-unfriendly sectors. In this paper, we present a decision support method for investment in automation that tackles this problem: the STIC analysis. The method takes a holistic and quantitative approach, tying together technological, context-related and economic input parameters and synthesizing them in a final economic indicator. Thanks to the modelling of such parameters, it is possible to gain insight into the technological and/or process adjustments that would have the highest impact on the efficiency of the automation, thereby delivering value for both technology users and technology providers.
The success of an autonomous robotic system is influenced by several interdependent factors that are not easily identifiable. This paper lays the foundation of a new integrated approach to deeply examine all the parameters and understand their contribution to success. After introducing the problem, two cutting-edge autonomous systems for the process of unloading containers are presented. Then the STIC analysis, a recently developed method for modelling and interpreting all the parameters, is introduced. The preliminary results of applying this methodology to a first case study, based on one of the two systems available to the authors, are briefly presented. Finally, future research is recommended in order to prove that this methodology can efficiently and effectively mitigate the risk that stops potential users from investing in autonomous systems in the logistics sector.
Due to rapidly changing technologies and business contexts, many products and services are developed under high uncertainties. It is often impossible to predict customer behaviors and outcomes upfront. Therefore, product and service developers must continuously find out what customers want, requiring a more experimental mode of management and appropriate support for continuously conducting experiments. We have analytically derived an initial model for continuous experimentation from prior work and matched it against empirical case study findings from two startup companies. We examined the preconditions for setting up an experimentation system for continuous customer experiments. The resulting RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing) illustrates the building blocks required for such a system and the necessary infrastructure. The major findings are that a suitable experimentation system requires the ability to design, manage, and conduct experiments, create so-called minimum viable products or features, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and integration of experiment results in the product development cycle, software development process, and business strategy. This summary refers to the article The RIGHT Model for Continuous Experimentation, published in the Journal of Systems and Software [Fa17].
The relevance of technology knowledge in digital transformation, especially in small and medium-sized enterprises (SMEs) that are still largely dependent on physical human capital, has become increasingly obvious. This is due to the rapid revolution in the business environment, coupled with a growing number of real-world examples of firms disrupted by advances in technological knowledge. Consequently, we find it increasingly vital for SMEs to spot and mitigate threats and to take advantage of opportunities arising from the dynamism of digital transformation.
Our study aims at exploring the relevance of technology knowledge in SMEs for digital transformation, in order to uncover the opportunities, roadmaps, and models that SMEs can take advantage of to gain a competitive edge in the digital transformation.
We conclude that, despite the relevance of technology knowledge for digital transformation, coupled with its low costs and accessibility, SMEs have yet to realize the full potential of technological knowledge. This is mainly because technologies appear, change and vanish so rapidly in the digital age that gaining a proper understanding without dedicated resources is extremely difficult for SMEs, making them less competitive than large incumbent firms in the market.
In standardized sectors such as the automotive industry, the cost-benefit ratio of automation solutions is high, as they contribute to increasing capacity, decreasing costs and improving product quality. In less standardized application fields, the contribution of automation to improvements in capacity, cost and quality blurs. The automation of complex and unstructured tasks requires sophisticated, expensive and low-performing systems, whose impact on product quality is often not directly perceived by customers. As a result, the full automation of process chains in general manufacturing or the logistics sector is often a suboptimal solution. Departing from the false idea that a process should be either fully automated or fully manual, this paper presents a novel heuristic method for the design of lean human-robot interaction, the Quality Interaction Function Deployment, with the objective of finding the "right level of automation". Functions are divided among human and automated agents, and several automation scenarios are created and evaluated with respect to their compliance with the requirements of all process stakeholders. As a result, synergies among operators (manual tasks) and machines (automated tasks) are improved, thus reducing time losses and increasing productivity.
Context: Organizations are increasingly challenged by high market dynamics, rapidly evolving technologies and shifting user expectations. In consequence, many organizations are struggling with their ability to provide reliable product roadmaps by applying traditional roadmapping approaches. Currently, many companies are seeking opportunities to improve their product roadmapping practices and strive for new roadmapping approaches. A typical first step towards advancing the roadmapping capabilities of an organization is to assess the current situation. Therefore, the so-called maturity model DEEP for assessing the product roadmapping capabilities of companies operating in dynamic and uncertain environments has been developed and published by the authors.
Objective: The aim of this article is to conduct an initial validation of the DEEP model in order to understand its applicability better and to see if important concepts are missing. In addition, the aim of this article is to evolve the model based on the findings from the initial validation.
Method: The model has been given to practitioners such as product managers with the request to perform a self-assessment of the current product roadmapping practices in their company. Afterwards, interviews with each participant have been conducted in order to gain insights.
Results: The initial validation revealed that some stages of the model need to be rearranged, and minor usability issues were found. The overall structure of the model was well received. The study resulted in the development of version 1.1 of the DEEP product roadmap maturity model, which is also presented in this article.
Steadily growing research material in a variety of databases, repositories and clouds makes academic content harder than ever to discover. Finding adequate material for one's own research, however, is essential for every researcher. Based on recent developments in the field of artificial intelligence and the identified digital capabilities of future universities, a change in the basic work of academic research is predicted. This study outlines how artificial intelligence could simplify academic research at a digital university. Today's studies in the field of AI showcase its true potential and its commanding impact on academic research.
Electromigration (EM) is becoming a progressively severe reliability challenge due to increased interconnect current densities. A shift from traditional (post-layout) EM verification to robust (pro-active) EM-aware design, where the circuit layout is designed with individual EM-robust solutions, is urgently needed. This tutorial gives an overview of EM and its effects on the reliability of present and future integrated circuits (ICs). We introduce the physical EM process and present its specific characteristics that can be affected during physical design. Examples of EM countermeasures applied in today's commercial design flows are presented. We show how to improve the EM-robustness of metallization patterns, and we also consider mission profiles to obtain application-oriented current density limits. The increasing interaction of EM with thermal migration is investigated as well. We conclude with a discussion of application examples to shift from the current post-layout EM verification towards an EM-aware physical design process. Its methodologies, such as EM-aware routing, increase the EM-robustness of the layout with the overall goal of reducing the negative impact of EM on the circuit's reliability.
This paper describes the design and outcomes of an experimental study that addresses stock-and-flow failure from a cognitive perspective. It is based on the assumption that holistic (global) and analytic (local) processing are important cognitive mechanisms underlying the ability to infer the behavior of dynamic systems. In a stock-and-flow task that is structurally equivalent to the department store task, we varied the format in which participants are primed to think about an environmental system, in particular whether they are primed to concentrate on lower-level (local) or higher-level (global) system elements. 148 psychology, geography and business students participated in our study. Students' answers support our hypothesis that global processing increases participants' ability to infer the overall system behavior. The beneficial influence of global presentation is even stronger when data are presented numerically rather than in the form of a graph. Our results suggest presenting complex dynamic systems in a way that facilitates global processing. This is particularly important as policy designers and decision makers deal with complex issues in their everyday and professional life.
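The relation participants must infer is the standard stock-flow integral (a textbook formulation, not reproduced from the paper):

\[ S(t) = S(t_0) + \int_{t_0}^{t} \bigl( f_{\mathrm{in}}(\tau) - f_{\mathrm{out}}(\tau) \bigr) \, \mathrm{d}\tau \]

so the stock \(S\) keeps rising whenever inflow exceeds outflow, even while both rates are falling; overlooking this is the classic stock-and-flow failure.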
During the first years of their employment, graduates are a liability to industry. The employer goes the extra mile to bridge the gap between exiting university and the profitable employment of engineering graduates. Unfortunately, some cannot take this risk. Given this scenario, this paper presents a learning factory approach as a platform for the application of knowledge, so as to develop the required engineering competences in South African engineering graduates before they enter the labour market. It spells out the components of a Stellenbosch University Learning Factory geared towards the production of engineering graduates with the required industrial skills. It elaborates on the didactics embedded in the learning factory environment, tailor-made to produce engineers who can productively contribute to the growth of the industry upon exiting the university.
Internet of Things innovations and the industrial internet are nowadays becoming more and more decisive factors for the future success of companies. Manufacturing-oriented SMEs in particular will face the challenge of developing innovative technology-driven business models alongside technology innovations in this field, which will be essential for future competitiveness. Failing to develop these technology-driven business models in an internationally highly competitive environment will have a serious impact both on companies and on society. Hence, securing the economic stability and success of these technology-driven business models is an indispensable task. To identify challenges for innovative industrial internet business models, it is first necessary to understand what the industrial internet means to the leading parties, applying companies, and start-ups in the field. Second, challenges from general business model development are outlined. In a third step, risks and challenges in business model development are discussed with regard to the special characteristics of technology-driven business models in the context of the industrial internet and the important role of the technological key component of the business model. In particular, the capability to deal with an integrated consideration of the inseparably linked economic and technological dimensions of these business models is questioned. Fourth, the specific challenges for industrial internet business models are derived. On the basis of these results, it is also discussed what might be done to handle these challenges successfully, with the goal of turning them into opportunities. The need for future research on the integration of the risk management perspective into the development of these technology-driven business models is derived. This will help established companies and start-ups to realize great technological innovations for the industrial internet in sound and successful innovative business models.
IT Governance (ITG) is crucial due to its significant impact on enabling innovation and enhancing firm performance. Hence, in the last decade ITG has become important in both academic and practical research. Although several studies have investigated individual aspects of ITG success and its impact on single determinants, the causal relationship of how ITG promotes firm performance remains unclear. Thus, a more comprehensive understanding of the link between ITG and firm performance is needed. To address this gap, this research aims at understanding how ITG and firm performance are related. Therefore, we conducted a systematic literature review (1) to create an overview of how current research structures the link between ITG mechanisms and firm performance, (2) to uncover key constructs as potential mediators or moderators of the general link between ITG and performance, and (3) to set the basis for future studies on the ITG-firm performance relationship.
Digitization transforms business process models and processes in many enterprises. However, many of them need guidance on how digitization impacts the design of their information systems. Therefore, this paper investigates the influence of digitization on information system design. We apply a two-phase research method comprising a literature review and an exploratory case study. The case study took place at the IT service provider of a large insurance enterprise. The study's results suggest that a number of areas of information system design are affected, such as architecture, processes, data and services.
The advent of chatbots in customer service solutions has received increasing attention from research and practice throughout the last years. However, the relevant dimensions and features of service quality and service performance for chatbots remain quite unclear. Therefore, this research develops and tests a conceptual model for customer service quality and customer service performance in the context of chatbots. Additionally, the impact of the developed service dimensions on different customer relationship metrics is measured across different service channels (hotline versus chatbot). Findings of six independent studies indicate a strong main effect of the conceptualized service dimensions on customer satisfaction, service costs, intention to reuse the service, word-of-mouth, and customer loyalty. However, different service dimensions are relevant for chatbots compared to a traditional service hotline.
It is essential for the success of a company to set a strategic direction in which a product offering will be developed over time to achieve the company vision. For this reason, roadmaps are used in practice. In general, roadmaps can take various forms, such as technology roadmaps, product roadmaps or industry roadmaps. From the point of view of industry, the basic purpose of a roadmap is to explore, visualize and communicate the dynamic linkage between markets, products and technology.
Electronic design automation approaches can roughly be divided into optimizers and procedures. While the former have enabled highly automated synthesis flows for digital integrated circuits, the latter play a vital (but mostly underestimated) role in the analog domain. This paper describes both automation strategies in comparison, identifying two fundamentally different automation paradigms that reflect the two basic design practices known as "top-down" and "bottom-up". Then, with a focus on the latter, the history of procedural approaches is traced from their early beginnings to today's developments and future prospects, to underline their practical importance and to accentuate their scientific value, both in itself and in the overall context of EDA.
This paper studies the decisive parameters that influence business students' selection of internships in Germany. The findings are based on literature research and a survey among students and company representatives, who were asked to rate the importance of 24 different aspects of internships. The benefits and negative impacts of internships on students, companies and universities are discussed in detail, and the results of different demographic groups are compared.
Since project managers still face problems in managing interorganizational R&D projects, managing these projects in a project-culture-aware way is a promising approach. An important prerequisite for project-culture-aware management, however, is that the individual organizations involved pursue a collaborative strategy. Therefore, our article provides a conceptual approach, including a new tool, the Collaborative Iron Triangle, which supports both project sponsors and managers in different phases of the collaboration process in pursuing a collaborative strategy in interorganizational R&D projects.
Early reduction of risks in a startup or an innovation project is highly important. Appropriate means for risk reduction exist, such as testing business models with different kinds of experiments. However, deciding what to test, and how to select the right test, is challenging for many startups and innovation projects. This article presents the so-called Business Experiments Navigator (BEN), a toolkit to assist startup and innovation processes. It complements other tools such as the Business Model Canvas or the Lean Startup process. The main contribution of BEN is to bridge the gap between the riskiest assumptions of a business model and the multitude of available testing techniques by providing assumption templates. The Business Experiments Navigator has been validated in several workshops. Results show that it creates awareness among the workshop participants that a business model is based on assumptions which impose risks and need to be validated. Further, users of BEN were able to identify relevant assumptions and map different kinds of assumptions to appropriate testing techniques. The process applied in the workshops, as well as the assumption templates, helped the participants understand the main concepts and transfer their learnings to their own business ideas.
Digital transformation has changed corporate reality and, with that, firms' IT environments and IT governance (ITG). As such, the perspective of ITG has shifted from the design of a relatively stable, closed and controllable system of a self-sufficient enterprise to a relatively fluid, open, agile and transformational system of networked co-adaptive entities. Related to this paradigm shift in ITG, this paper aims to clarify how the concept of an effective ITG framework has changed in terms of the demand for agility in organizations. Thus, this study conducted 33 qualitative interviews with executives and senior managers from the banking industry in Germany, Switzerland and Austria. Analysis of the interviews focused on the formation of categories and the assignment of individual text parts (codings) to these categories to allow for a quantitative evaluation of the codings per category. Regarding traditional and agile ITG dimensions, 22 traditional and 25 agile dimensions in terms of structures, processes and relational mechanisms were identified. Moreover, agile strategies within the agile ITG construct and ten ITG patterns were identified from the interview data. The data show relevant perspectives on the implementation of traditional and new ITG dimensions and highlight ambidextrous aspects in ITG in the German-speaking banking industry.
The shift of populations to cities is creating challenges in many respects, thus leading to increasing demand for smart solutions to urbanization problems. Smart city applications range from technical and social to economic and ecological. The main focus of this work is to provide a systematic literature review of smart city research to answer two main questions: (1) How is current research on smart cities structured? And (2) What directions are relevant for future research on smart cities? To answer these research questions, a text-mining approach is applied to a large number of publications. This provides an overview and gives insights into relevant dimensions of smart city research. Although the main dimensions of research are already described in the literature, an evaluation of the relevance of these dimensions has been missing. Findings suggest that the dimensions of environment and governance are popular, while the dimension of economy has received only limited attention.
Most question-answering (QA) systems rely on training data to reach their optimal performance. However, acquiring training data for supervised systems is both time-consuming and resource-intensive. To address this, we propose TFCSG, an unsupervised similar-question retrieval approach that leverages pre-trained language models and multi-task learning. First, topic keywords in question sentences are extracted sequentially based on a latent topic-filtering algorithm to construct an unsupervised training corpus. Then, a multi-task learning method is used to build the question retrieval model. Three tasks are designed: the first is a short-sentence contrastive learning task; the second is a task judging the similarity between a question sentence and its corresponding topic sequence; the third is a task generating the corresponding topic sequence from a question sentence. The three tasks are used to train the language model in parallel. Finally, similar questions are obtained by calculating the cosine similarity between sentence vectors. Comparison experiments on public question datasets show that TFCSG outperforms the comparative unsupervised baseline methods. Moreover, no manual labeling is needed, which greatly saves human resources.
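The final retrieval step can be illustrated with a small sketch, assuming question embeddings are already available as a matrix; in TFCSG these vectors would come from the multi-task-trained language model, which is not reproduced here:

```python
# Sketch: retrieving similar questions by cosine similarity of sentence vectors.
# Random vectors stand in for embeddings from the trained language model.
import numpy as np

def cosine_top_k(query_vec, corpus_vecs, k=3):
    # Normalize rows so that dot products equal cosine similarities.
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q
    top = np.argsort(-sims)[:k]
    return [(int(i), float(sims[i])) for i in top]

corpus = np.random.default_rng(1).normal(size=(1000, 384))  # 1000 question vectors
query = corpus[42] + 0.1 * np.random.default_rng(2).normal(size=384)
print(cosine_top_k(query, corpus))  # index 42 should rank near the top
```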
3D-assisted 2D face recognition involves the process of reconstructing 3D faces from 2D images and solving the problem of face recognition in 3D. To facilitate the use of deep neural networks, a 3D face, normally represented as a 3D mesh of vertices and its corresponding surface texture, is remapped to image-like square isomaps by a conformal mapping. Based on previous work, we assume that face recognition benefits more from texture. In this work, we focus on the surface texture and its discriminatory information content for recognition purposes. Our approach is to prepare a 3D mesh, the corresponding surface texture and the original 2D image as triple input for the recognition network, to show that 3D data is useful for face recognition. Texture enhancement methods to control the texture fusion process are introduced and we adapt data augmentation methods. Our results show that texture-map-based face recognition can not only compete with state-of-the-art systems under the same preconditions but also outperforms standard 2D methods from recent years.
Being able to monitor the heart activity of patients during their daily life in a reliable, comfortable and affordable way is one main goal of personalized medicine. Current wearable solutions fall short either in wearing comfort, in the quality and type of the data provided, or in the price of the device. This paper shows the development of a Textile Sensor Platform (TSP) in the form of an electrocardiogram (ECG)-measuring T-shirt that is able to transmit the ECG signal to a smartphone. The development process includes the selection of the materials, the design of the textile electrodes taking into consideration their electrical characteristics and ergonomics, the integration of the electrodes into the garment and their connection with the embedded electronics. The TSP is able to transmit a real-time stream of the ECG signal to an Android smartphone through Bluetooth Low Energy (BLE). Initial results show good electrical quality of the textile electrodes and promising results in the capture and transmission of the ECG signal. This is still a work in progress and is the result of an interdisciplinary master project between the School of Informatics and the School of Textiles & Design of Reutlingen University.
Virtual prototyping of integrated mixed-signal smart-sensor systems requires high-performance co-simulation of analog frontend circuitry with complex digital controller hardware and embedded real-time software. We use SystemC/TLM 2.0 in combination with a cycle-count accurate temporal decoupling approach to simulate digital components and firmware code execution at high speed while preserving clock cycle accuracy and, thus, real-time behavior at time quantum boundaries. Optimal time quanta ensuring real-time capability can be calculated and set automatically during simulation if the simulation engine has access to exact timing information about upcoming communication events. These methods fail in case of non-deterministic, asynchronous events, resulting in possibly invalid simulation results. In this paper, we propose an extension of this method to the case of asynchronous events generated by black-box sources from which a priori event timing information is not available, such as coupled analog simulators or hardware in the loop. Additional event processing latency and/or rollback effort caused by temporal decoupling is minimized by calculating optimal time quanta dynamically in a SystemC model using a linear prediction scheme. For an example smart-sensor system model, we show that quasi-periodic events that trigger activities in temporally decoupled processes are handled accurately after the predictor has settled.
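The idea of predicting quasi-periodic event timing by linear prediction can be sketched as follows; this is a simplified illustration, and the paper's actual predictor and its SystemC integration are not reproduced here:

```python
# Sketch: choosing the next time quantum from predicted event timing.
# A simple least-squares line fit over recent event timestamps stands in
# for the paper's linear prediction scheme.
import numpy as np

def predict_next_event(timestamps, order=4):
    """Predict the next event time by fitting a line to the last `order`
    timestamps (quasi-periodic events lie close to a line)."""
    t = np.asarray(timestamps[-order:], dtype=float)
    n = np.arange(len(t))
    slope, intercept = np.polyfit(n, t, 1)  # least-squares line fit
    return slope * len(t) + intercept       # extrapolate one step ahead

def next_quantum(now, timestamps, max_quantum):
    """Shrink the time quantum so the decoupled process does not run past
    the predicted asynchronous event (limiting latency and rollback)."""
    predicted = predict_next_event(timestamps)
    return max(0.0, min(max_quantum, predicted - now))

events = [10.0, 20.1, 29.9, 40.2]  # observed quasi-periodic event times (us)
print(next_quantum(now=45.0, timestamps=events, max_quantum=10.0))  # ~5 us
```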
When a bonding wire becomes too hot, it fuses and fails. The ohmic heat that is generated in the wire can be partially dissipated to a mold package. For this cooling effect the thermal contact between wire and package is an important parameter. Because this parameter can degrade over lifetime, the fusing of a bonding wire can also occur as a long-term effect. Another important factor is the thermal power generated in the vicinity of the bond pads. Nowadays, the reliability of bond wires relies on robust dimensioning based on estimations. Smaller package sizes increase the need for better predictive methods.
The Bond Calculator, a new thermo-electrical simulation tool, is able to predict the temperature profile along bond wires of arbitrary dimensions as a function of the applied transient current profile, the mold surrounding the wire, and the thermal contact between wire and mold.
In this paper we closely investigated the spatial temperature profiles along different bond wires in air in order to make a first step towards the experimental verification of the simulation model. We are using infrared microscopy in order to measure the thermal radiation generated along the bond wire. This is easier to perform quantitatively in air than in the mold package, because of the non-negligible absorbance of the mold material in the infrared wavelength region.
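For reference, the steady-state heat balance underlying such a thermo-electrical wire model is commonly written as follows; this is a textbook formulation, as the abstract does not give the Bond Calculator's actual equations:

\[ \kappa A \, \frac{\mathrm{d}^2 T}{\mathrm{d}x^2} \;-\; h \, p \, \bigl( T(x) - T_{\mathrm{amb}} \bigr) \;+\; \rho_e(T) \, J^2 A \;=\; 0 \]

where \(\kappa\) is the thermal conductivity and \(A\) the cross-section of the wire, \(p\) its perimeter, \(h\) the heat transfer coefficient to the surrounding medium (air or mold, including the thermal contact), \(\rho_e\) the temperature-dependent electrical resistivity, and \(J\) the current density; the wire fuses where \(T(x)\) reaches the melting temperature.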
The 17 SDGs, as agreed upon by the international community, are designed to be implemented across all levels of human activity. Alongside the level of international politics, this also includes the local level, national politics, wider society, and the economic sphere. Many channels are called on to further implementation, including the transfer of technology to developing and emerging countries. As companies are the patent holders, this must include their active participation. While the literature examines the important role of technology transfer in North-South business-to-business (B2B) partnerships, studies on technology transfer between European and African companies are scarce. Therefore, in this study we use original data from 26 interviews conducted with managers engaged in sales partnerships between German manufacturers and their distributors in African markets to examine the existence and forms of technology transfer. We find that training and marketing excellence are the predominant forms of technology transfer and, based on this finding, suggest a refinement of established frameworks on B2B technology transfer.
In two research projects, the influence of technologies on sleep was analyzed. The first one concerns the effect of light on the circadian rhythm and, as a consequence, on the sleep quality of persons in a vegetative state. The second one, which is still running, surveys the influence of several technical tools on the sleep of elderly people living in a nursing home.
Ecuador, traditionally an agriculture-based economy, has great potential for valorizing its industrial residues. This study presents a techno-economic analysis of applying a novel biomass oxidation method to produce formic and acetic acids from coffee husk residues in Machala, Ecuador. The analysis determined that the return-of-investment time was below 5 years, making this project economically feasible when producing approximately 1000 tons of formic acid per year, which is enough to supply the Ecuadorian market. This production would reduce import costs and help develop the chemical industry in the country.
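For orientation, the figure of merit behind such a feasibility statement is the simple payback time; this is a generic formulation, as the study's actual cash-flow model is not given in the abstract:

\[ t_{\mathrm{PB}} = \frac{I_0}{R_{\mathrm{annual}} - C_{\mathrm{annual}}} < 5 \ \text{years} \]

where \(I_0\) is the initial investment, \(R_{\mathrm{annual}}\) the annual revenue from acid sales, and \(C_{\mathrm{annual}}\) the annual operating cost.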
Creating new business models, products or services is challenging in fast-changing, unpredictable environments. Often, product teams need to make many assumptions (e.g., assumptions about future demands) that might not be true. These assumptions impose risks on success, and these risks need to be mitigated early. One of the principles of the Lean Startup approach is to identify and prioritize the riskiest assumptions in order to validate them as early as possible. This helps to avoid wasting effort and time. Several different methods for identifying and prioritizing the riskiest assumptions are reported in the literature. However, little research exists on the application of these methods in practice and on how to teach them. In this paper, we present and empirically analyze a workshop format that we have developed for teaching the prioritization of Lean Startup assumptions. We aim at raising awareness for assumption thinking among the participants and teach them through group work how to prioritize assumptions. The results of the analysis of a multitude of conducted workshops show that the applied method led to reasonable results and accompanying learning effects. In addition, the participants became aware of assumption thinking and liked learning in a practical way.
With the capability of employing virtually unlimited compute resources, the cloud has evolved into an attractive execution environment for applications from the High Performance Computing (HPC) domain. By means of elastic scaling, compute resources can be provisioned and decommissioned at runtime. This gives rise to a new concept in HPC: elasticity of parallel computations. However, it is still an open research question to what extent HPC applications can benefit from elastic scaling and how to leverage elasticity of parallel computations. In this paper, we discuss how to address these challenges for HPC applications with dynamic task parallelism and present TASKWORK, a cloud-aware runtime system based on our findings. TASKWORK enables the implementation of elastic HPC applications by means of higher-level development frameworks and solves the corresponding coordination problems based on Apache ZooKeeper. For evaluation purposes, we discuss a development framework for parallel branch-and-bound based on TASKWORK, show how to implement an elastic HPC application, and report on measurements with respect to parallel efficiency and elastic scaling.
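TASKWORK's internals are not spelled out in the abstract; as a rough illustration of the kind of ZooKeeper-based coordination it builds on, the sketch below uses the kazoo client to register an elastic worker and distribute tasks via sequential znodes. The paths and task format are assumptions, not TASKWORK's actual protocol:

```python
# Minimal sketch of ZooKeeper-based task distribution (not TASKWORK itself).
# Requires a running ZooKeeper ensemble and `pip install kazoo`.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Ephemeral node: the worker vanishes from /workers if its session dies,
# which makes elastic scale-in observable to a coordinator.
zk.create("/workers/worker-", ephemeral=True, sequence=True, makepath=True)

# A coordinator appends tasks as persistent sequential znodes.
zk.create("/tasks/task-", b"subproblem-payload", sequence=True, makepath=True)

# A worker claims the lexicographically first open task.
for name in sorted(zk.get_children("/tasks")):
    data, _ = zk.get(f"/tasks/{name}")
    print("processing", name, data)
    zk.delete(f"/tasks/{name}")
    break

zk.stop()
```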
Context: Agile practices as well as UX methods are nowadays well known and often adopted to develop complex software and products more efficiently and effectively. However, in the so-called VUCA environment which many companies are confronted with, the sole use of UX research is not sufficient to find the best solutions for customers. The implementation of Design Thinking can support this process, but many companies and their product owners do not know how many resources they should spend on conducting Design Thinking.
Objective: This paper aims at suggesting a supportive tool, the “Discovery Effort Worthiness (DEW) Index”, for product owners and agile teams to determine a suitable amount of effort to be spent on Design Thinking activities.
Method: A case study was conducted for the development of the DEW Index. Design Thinking was introduced into the regular development cycle of an industrial Scrum team. With the support of UX and Design Thinking experts, a formula was developed to determine the appropriate effort for Design Thinking.
Results: The developed “Discovery Effort Worthiness Index” provides an easy-to-use tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. A company can map the corresponding Design Thinking methods to the results of the DEW Index calculation, and product owners can select the appropriate measures from this mapping. Thereby, they can optimize the effort spent on discovery and validation.
Organizational agility may be an antidote against threats from volatile, uncertain, complex, or ambiguous corporate environments. While agility has been extensively examined in manufacturing enterprises, comparably less is known about agility in knowledge-intensive organizations. As results may not be transferable, there is still some confusion about how agility in knowledge-intensive organizations can be characterized, what factors facilitate its development, what its organizational effects are, and what environmental conditions favor these effects. This study closes these gaps by presenting a systematic literature review on agility in knowledge-intensive organizations. A systematic literature search led to a sample of 37 relevant papers for our review. Integrating the knowledge-based view and a dynamic capabilities perspective, we (1) present different relevant conceptualizations of organizational agility, (2) discuss relevant knowledge management-related as well as information technology-related capabilities that support the development of organizational agility, and (3) shed light on the moderating role of environmental conditions in enhancing organizational agility and its effect on organizational performance. This academic paper adds value to theory by synthesizing existing research on agility in knowledge-intensive organizations. It furthermore may serve as a map for closing research gaps by proposing an extensive agenda for future research. Our study expands existing literature reviews on agility with its specific focus on a knowledge-intensive context and its integration of the research streams of knowledge management capabilities as well as information technology capabilities. It integrates relevant organizational knowledge management practices and the use of knowledge management systems to ensure superior performance effects. Our study can serve as a base for future examinations of organizational agility by illustrating fruitful topics for further examination as well as open questions. It may also provide value to practitioners by showing what factors favor the development of agility in knowledge-intensive organizations and what organizational effects can be achieved under which conditions.
In recent years, significant progress has been made on switched-capacitor (SC) DC-DC converters, as they enable fully integrated on-chip power management. New converter topologies have overcome the fixed input-to-output voltage limitation and achieved high efficiency at high power densities. SC converters are attractive not only for mobile handheld devices with small input and output voltages, but also for power conversion in IoT, industrial, and automotive applications. Such applications need to be capable of handling high input voltages of more than 10 V. This talk highlights the challenges of the required supporting circuits and high-voltage techniques that arise for high-Vin SC converters, including level shifters, charge pumps and back-to-back switches. High-Vin conversion is demonstrated in a 4:1 SC DC-DC converter with an input voltage as high as 17 V and a peak efficiency of 45%, and a buck-boost SC converter with an input voltage range from 2 V up to 13 V, which utilizes a total of 17 ratios and achieves a peak efficiency of 81.5%. Furthermore, a highly integrated micro power supply approach is introduced, which is connected directly to the 120/230 Vrms mains, with an output power of 3 mW, resulting in a power density >390 μW/mm², which exceeds prior art by a factor of 11.
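A useful first-order sanity check (our own illustration from standard SC-converter theory, not content of the talk): a fixed-ratio SC stage behaves like an ideal n:1 divider followed by an effective series output resistance, so efficiency is bounded by the ratio of the actual to the ideal unloaded output voltage:

\[
\eta \;\le\; \frac{V_{out}}{V_{in}/n}, \qquad n = 4,\; V_{in} = 17\,\text{V} \;\Rightarrow\; V_{in}/n = 4.25\,\text{V}.
\]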
Facing ever-looming climate change, studying the drivers of individuals' Information Systems (IS) Use to reduce environmental harm gains momentum. While extant research on the antecedents of sustainable IS Use has focused on specific theories, interventions, contexts, and technologies, a holistic understanding has become increasingly elusive, with a synthesis remaining absent. We employ a systematic literature review methodology to shed light on the driving antecedents of sustainable IS Use among individual consumers. Our results build on the findings of 29 empirical studies drawn from 598 articles retrieved from premier outlets and a forward/backward search. The analysis reveals six salient complementary antecedents: Relief, Empowerment, Default, User-centricity, Salience, and Encouragement. We recommend considering these concepts when developing, deploying, promoting, or regulating digital technologies to mitigate individual consumers' emissions. Along with memorable and implementable concepts, our theoretical framework offers a novel conceptualization and four promising avenues for researchers on sustainable IS Use.
This article explores the question of how sustainability and labour law are interrelated. The modern world of work is characterised by the growing social and environmental responsibility of companies. Especially in the post-COVID era, sustainability also plays an increasingly important role in the corporate context, which is also noticeable in the so-called ‘war for talent’. Achieving personal career goals is no longer enough for employees today. Corporate values and in particular the so-called ESG criteria (Environment, Social, Governance) are thus also becoming increasingly important in the employment relationship and in corporate reporting requirements. In terms of social sustainability, labour law instruments can, for example, promote the creation of a discrimination-free working environment, the introduction of flexible working time models or the protection of whistleblowers. From an ecological perspective, labour regulations are also suitable for implementing ‘green mobility’ and other measures to reduce companies’ ecological footprints. Working from home, which experienced a huge boom during the COVID-19 pandemic, is also sustainable, especially from an ecological point of view. Appropriate consideration of these sustainable work tools in future corporate social responsibility (CSR) strategies not only creates a competitive advantage but can also be beneficial in recruitment.
This paper experimentally evaluates the susceptibility of IT networks to the threats of HPEM (High Power Electromagnetics) and IEMI (Intentional Electromagnetic Interference). As the HPEM source, a PBG 5 (Pulse Burst Generator) adapted to a TEM (Transversal Electromagnetic) horn antenna and a 90 cm IRA (Impulse Radiating Antenna) is used. Different network cable types and categories with different lengths are used. The immunity of the IT network is examined and the breakdown failure rate of the system is determined for a PRF (Pulse Repetition Frequency) of 500 s⁻¹ over a duration of 10 seconds. Series of measurements were carried out, and disturbances of keyboards, mice and switches, distortions on monitors, failures of the IT network, and even crashes of PCs were observed. It is shown, among other things, that by increasing the pulse repetition rate or frequency, generic test IT networks become more susceptible to interference. The obtained results provide another view of the susceptibility analysis of modern generic IT networks against UWB threats.
The fifth generation of mobile communication (5G) is a wireless technology developed to provide reliable, fast data transmission for industrial applications, such as autonomous mobile robots, and to connect cyber-physical systems using Internet of Things (IoT) sensors. In this context, private 5G networks enable the full performance of industrial applications built on dedicated 5G infrastructures. However, emerging wireless communication technologies such as 5G are a complex and challenging topic for training in learning factories, which often lacks physical or visual interaction. Therefore, this paper aims to develop a real-time performance monitoring system for private 5G networks and different industrial 5G devices to visualise the performance of 5G and the factors influencing it for students and future connectivity experts. Additionally, this paper presents the first long-term measurements of private 5G networks and shows the gap between the actual and targeted performance of private 5G networks.
Substrate coupling is a critical failure mechanism, especially in fast-switching integrated power stages controlling high-side NMOS power FETs. This paper investigates the parasitic coupling across the substrate in integrated power stages at rise times of up to 500 ps and input voltages of up to 40 V. The coupling has been studied for the power stage of an integrated buck converter. In particular, dedicated diverting and isolation structures against substrate coupling are analyzed in simulations and evaluated with measurements from test chips in 180 nm high-voltage BiCMOS. The results are compared regarding effectiveness, area, as well as implementation effort and cost. Back-side metallization shows superior characteristics with nearly 100% noise suppression. Readily available p-guard ring structures bring a 75% disturbance reduction. The results are applicable to advanced and future power management solutions with fully integrated switched-mode power supplies at switching frequencies >10 MHz.
Strategy to test mobile apps
(2014)
Nowadays, the development of a mobile app implies challenges and difficulties which have to be faced by mobile app developers. Innovation leads to a rapidly evolving mobile app market; therefore, apps should be developed faster and offered to the market in short release cycles. Testing is a decisive activity within the development process that helps to improve the quality of the app. This research paper describes a strategy to test mobile apps that overcomes the challenges mobile apps confront and permits testing the app in a structured test environment.
Database management systems and K/V-stores operate on updatable datasets that massively exceed the size of available main memory. Tree-based K/V storage management structures have become particularly popular in storage engines. B+-Trees [1, 4] allow constant search performance, but write-heavy workloads result in inefficient write patterns to secondary storage devices and poor performance characteristics. LSM-Trees [16, 23] overcome this issue by horizontally partitioning fractions of data small enough to fully reside in main memory, but they require frequent maintenance to sustain search performance.
Firstly, we propose Multi-Version Partitioned B-Trees (MV-PBT) as the sole storage and index management structure in key-sorted storage engines like K/V-stores. Secondly, we compare MV-PBT against LSM-Trees. The logical horizontal partitioning in MV-PBT allows leveraging recent advances in modern B+-Tree techniques in a small, transparent and memory-resident portion of the structure. Its structural properties sustain steady read performance while yielding efficient write patterns and reducing write amplification.
We integrated MV-PBT in the WiredTiger [15] K/V storage engine. MV-PBT offers an up to 2× higher steady throughput in comparison to LSM-Trees, and a throughput several orders of magnitude higher in comparison to B+-Trees, in a YCSB [5] workload.
Small and medium-sized enterprises (SMEs), which play a substantial role in the development of any economy, have been on the rise in recent years. At the same time, these enterprises face a myriad of challenges that could potentially be solved through the adoption of technology. Nonetheless, it has been observed that new technological uptake among SMEs remains limited, with the majority of them opting to maintain the status quo with regard to technology awareness and innovation strategies.
In a literature review, this paper explores three major dynamics curtailing the adoption of new technologies by SMEs in manufacturing: knowledge absorptive capacity and management factors, organisational structures, and technological awareness. Firstly, with regard to knowledge absorptive capacity and management factors, this study shows how these factors drive innovation potentials in SMEs.
Secondly, with regard to technological awareness factors, this study documents how perceived usefulness, costs, network and infrastructure, education and skills, training and attitude as well as knowledge influence the adoption of new technologies among SMEs worldwide. Lastly, the study concludes by analysing how organisational structures drive the innovation potential of SMEs in the wake of swift and profound technological changes in the market.
This publication gives a short introduction to and overview of the European project SCOUT and introduces a methodology for a holistic approach to recording the state of the art in technical enablers (vehicle and connectivity, human factors on the physiological and ergonomic level) and non-technical enablers (societal, economic, legal, regulatory and policy level) of connected and automated driving in Europe. Besides the technical topics of environmental perception, E/E architecture, actuators and security, the paper addresses the state of the art of the legal framework in the context of connected and automated driving.
Micro grids often consist of energy generators, storages and consumers with controllers that are not prepared for integration into communication networks for energy systems. This paper presents how standards from the field of energy automation can be applied in such controllers. The data for communication interfaces can be structured according to the IEC 61850 or the VHPREADY standard. We investigate which requirements must be supported to implement such data models within the controllers. For the transmission of the data we propose the OPC UA protocol, which supports extensive security measures and is today available for nearly all modern types of controllers and computers.
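As a rough illustration of this direction — structuring controller data and exposing it over OPC UA — the sketch below uses the community python-opcua (FreeOpcUa) library. The node names are assumptions for illustration, not the actual IEC 61850/VHPREADY object models:

```python
# Minimal OPC UA server sketch (`pip install opcua`); node names are
# illustrative, not actual IEC 61850 / VHPREADY data models.
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/energy/controller/")
idx = server.register_namespace("http://example.org/microgrid")

# Expose a generator controller with one measured and one settable value.
objects = server.get_objects_node()
gen = objects.add_object(idx, "Generator1")
active_power = gen.add_variable(idx, "ActivePower_kW", 0.0)
setpoint = gen.add_variable(idx, "PowerSetpoint_kW", 0.0)
setpoint.set_writable()  # clients may write the setpoint

server.start()
try:
    active_power.set_value(12.5)  # would normally come from the controller
finally:
    server.stop()
```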
Software process improvement (SPI) has been around for decades, but it remains a critically discussed topic. In several waves, different aspects of SPI have been discussed in the past, e.g., large-scale company-level SPI programs, maturity models, success factors, and in-project SPI. It is hard to find new streams or a consensus in the community, but there is a trend coming along with agile and lean software development. Apparently, practitioners reject extensive and prescriptive maturity models and move towards smaller, faster and continuous project-integrated SPI. Based on data from two survey studies conducted in Germany (2012) and Europe (2016), we analyze the process customization in projects and the practices for implementing SPI in the participating companies. Our findings indicate that, even in regulated industry sectors, companies increasingly adopt in-project SPI activities, primarily with the goal of continuously optimizing specific processes. With this paper, we therefore want to stimulate a discussion on how to evolve traditional SPI towards a continuous learning environment.
The implementation of a web-based portal QA solution leads to high acceptance among staff, as the usage of commonly known standard software (e.g., a web browser) allows intuitive handling. In daily use, a significant simplification of the workflow and a performance enhancement can be achieved by easy access to the check documents. As the data is now saved in a database, it can easily be processed and long-term trends can be displayed. Therefore, possible errors can be detected much more easily and earlier. By the usage of time stamps and user authentication, procedures and user responsibilities are comprehensibly documented. As the software is browser-based, integration into an existing software environment is not critical. As only technical QA data is processed, no further data security measures are necessary. A certification as a medical product is not required.
Software process improvement (SPI) has been around for decades: frameworks are proposed, success factors are studied, and experiences have been reported. However, the sheer mass of concepts, approaches, and standards published over the years overwhelms practitioners as well as researchers. What is out there? Are there new emerging approaches? What are the open issues? Still, we struggle to answer the question: what is the current state of SPI and related research? In this paper, we present initial results from a systematic mapping study to shed light on the field of SPI and to draw conclusions for future research directions. An analysis of 635 publications draws a big picture of SPI-related research of the past 25 years. Our study shows a high number of solution proposals, experience reports, and secondary studies, but only a few theories. In particular, standard SPI models like CMMI and ISO/IEC 15504 are analyzed, enhanced, and evaluated for applicability, whereas these standards are critically discussed from the perspective of SPI in small-to-medium-sized companies, which leads to new specialized frameworks. Furthermore, we find a growing interest in success factors to aid companies in conducting SPI.
Software process improvement (SPI) has been around for decades: frameworks are proposed, success factors are studied, and experiences have been reported. However, the sheer mass of concepts, approaches, and standards published over the years overwhelms practitioners as well as researchers. What is out there? Are there new emerging approaches? What are the open issues? Still, we struggle to answer the question: what is the current state of SPI and related research? We present initial results from a systematic mapping study to shed light on the field of SPI and to draw conclusions for future research directions. An analysis of 635 publications draws a big picture of SPI-related research of the past 25 years. Our study shows a high number of solution proposals, experience reports, and secondary studies, but only a few theories. In particular, standard SPI models are analyzed and evaluated for applicability, especially from the perspective of SPI in small-to-medium-sized companies, which leads to new specialized frameworks. Furthermore, we find a growing interest in success factors to aid companies in conducting SPI.
This summary refers to the paper Software process improvement: where is the evidence? [Ku15].
This paper was published as a full research paper in the ICSSP’2015 proceedings.
In recent times, many new business opportunities have appeared that use the potential of the Internet and related digital technologies, like the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Enterprises are presently transforming their strategy, culture, processes, and information systems to become more digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things. This has a strong impact on architecting digital services and products. The change from a closed-world modeling perspective to more flexible open-world and living software and system architectures defines the moving context for adaptable and evolutionary software approaches, which are essential to enable the digital transformation. In this paper, we put a spotlight on service-oriented software evolution to support the digital transformation with micro-granular digital architectures for digital services and products.
Software development as an experiment system : a qualitative survey on the state of the practice
(2015)
An experiment-driven approach to software product and service development is gaining increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software functionalities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development. Although case studies on experimentation in industry exist, the understanding of the state of the practice and the encountered obstacles is incomplete. This paper presents an interview-based qualitative survey exploring the experimentation experiences of ten software development companies. The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice is not yet mature. In particular, experimentation is rarely systematic and continuous. Key challenges relate to changing organizational culture, accelerating development cycle speed, and measuring customer value and product success.
Modern enterprises are continuously reshaped and transformed by a multitude of management processes with different perspectives, ranging from business process management to IT service management and the management of information systems. Enterprise Architecture (EA) management seeks to provide such a perspective and to align the diverse management perspectives. However, EA management cannot rely on hierarchic management processes, designed in a Tayloristic manner, to achieve and promote this alignment. It has, conversely, to apply bottom-up, information-centered coordination mechanisms to ensure that the different management processes are aligned with each other and with enterprise strategy. Social software provides such a bottom-up mechanism for support within EAM processes. Consequently, the challenges of EA management processes are investigated and the contributions of social software presented. A cockpit provides interactive functions and visualization methods to cope with this complexity and to enable the practical use of social software in enterprise architecture management processes.
IT platforms as the foundation of digitized processes and products are vital in a digital economy. However, many companies’ platforms are liabilities, not strategic assets because of their complexity. Consequently, companies initiate IT complexity reduction programs. But these technology-centric programs at best provide temporary relief. Soon after, companies’ platforms become just as complex as before. Based on four case studies, we identify three non-technical drivers of platform complexity: (1) Lacking awareness of consequences business decisions have on platform complexity, (2) Lacking motivation to avoid platform complexity, (3) Lacking authority to protect platforms from complexity. We propose measures to address these drivers that can help achieve more sustainable impact on platform complexity: (1) Removing information asymmetries between those creating complexity and those dealing with complexity, (2) Redefining incentives to include long-term effects on platform complexity, (3) Redressing power imbalances between those who create complexity and those who have to manage it.
Although still in the early stages of diffusion, smartwatches represent the most popular type of wearable device. Yet, little is known about why some people are more likely to adopt smartwatches than others. To deepen the understanding of the underlying factors prompting adoption behavior, the authors develop a theoretical model grounded in the technology acceptance and social psychology literature. Empirical results reveal perceived usefulness and visibility as important factors that drive intention. The magnitude of these antecedents is influenced by an individual's perception of viewing smartwatches as a technology and/or as a fashion accessory. Theoretical and managerial implications are discussed.
Smart meter based business models for the electricity sector : a systematical literature research
(2017)
The Act on the Digitization of the Energy Transition forces German industries and households to introduce smart meters in order to save energy, to gain individually based electricity tariffs and to digitize the energy data flow. Smart meters can be regarded as the advancement of the traditional meter. Utilizing this new technology enables a wide range of innovative business models that provide additional value for electricity suppliers as well as for their customers. In this study, we followed a two-step approach. First, we provide a state-of-the-art comparison of the business models found in the literature and identify structural differences in the way they add value to the offered products and services. Second, the business models are grouped into categories with respect to customer segments and the added value to the smart grid. Findings indicate that most business models focus on the end-customer as their main customer.
Imagine a world in which the search for tomorrow's trends does not require a long and laborious data search but is possible with a single mouse click. The use of artificial intelligence (AI) makes this reality possible, and it is to be further advanced through research. This study therefore aims to provide an initial overview of the young research field. Based on research, expert interviews, and company and student surveys, current application possibilities of AI in the innovation process (defined as Smart Innovation) and existing challenges that slow down further development are discussed in more detail, and future application possibilities are presented. Finally, a recommendation for action is made for business, politics and science to help overcome the current obstacles together and thus drive the future of Smart Innovation.
Imagine a world in which the search for tomorrow's trends in (software) products does not require a long and laborious data search but is possible with a single mouse click. The use of artificial intelligence (AI) makes this reality possible, and it is to be further advanced through research. This study therefore aims to provide an initial overview of the young research field. Based on research, expert interviews, and company and student surveys, current application possibilities of AI in the innovation process (defined as Smart Innovation) and existing challenges that slow down further development are discussed in more detail, and future application possibilities are presented. Finally, a recommendation for action is made for business, politics and science to help overcome the current obstacles together and thus drive the future of Smart Innovation.
Companies are becoming aware of the potential risks arising from sustainability aspects in supply chains. These risks can affect ecological, economic or social aspects. One important element in managing those risks is improved transparency in supply chains by means of digital transformation. Innovative technologies like blockchain can be used to enforce transparency. In this paper, we present a smart contract-based Supply Chain Control Solution to reduce risks. The technological capabilities of the solution are compared to a similar technology approach and evaluated regarding their benefits and challenges within the framework of supply chain models. As a result, the proposed solution is suitable for the dynamic administration of complex supply chains.
To evaluate the quality of a person's sleep, it is essential to identify the sleep stages and their durations. Currently, the gold standard of sleep analysis is overnight polysomnography (PSG), during which several signals like the EEG (electroencephalogram), EOG (electrooculogram), EMG (electromyogram), ECG (electrocardiogram), SpO2 (blood oxygen saturation) and, for example, respiratory airflow and respiratory effort are recorded. These expensive and complex procedures, applied in sleep laboratories, are invasive and unfamiliar to the subjects, which is one reason why they might have an impact on the recorded data. These are the main reasons why low-cost home diagnostic systems are likely to be advantageous. Their aim is to reach a larger population by reducing the number of parameters recorded. Nowadays, many wearable devices promise to measure sleep quality using only ECG and body-movement signals. This work presents an Android application developed in order to verify the accuracy of an algorithm published in the sleep literature. The algorithm uses ECG and body movement recordings to estimate sleep stages. The pre-recorded signals fed into the algorithm were taken from the PhysioNet online database. The obtained results were compared with those of the standard method used in PSG. The mean agreement ratios for the sleep stages REM, Wake, NREM-1, NREM-2 and NREM-3 were 38.1%, 14%, 16%, 75% and 54.3%, respectively.
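The reported per-stage agreement ratios can be computed by comparing the algorithm's epoch labels against the PSG reference; a minimal sketch (our illustration of the metric, not the app's code):

```python
# Per-stage agreement between algorithm output and PSG reference,
# both given as one label per 30-second epoch.
def stage_agreement(reference, predicted,
                    stages=("Wake", "REM", "NREM-1", "NREM-2", "NREM-3")):
    result = {}
    for s in stages:
        idx = [i for i, r in enumerate(reference) if r == s]
        if idx:  # only stages that occur in the reference
            hits = sum(predicted[i] == s for i in idx)
            result[s] = 100.0 * hits / len(idx)
    return result

ref  = ["Wake", "NREM-2", "NREM-2", "REM", "NREM-3", "NREM-2"]
pred = ["NREM-1", "NREM-2", "NREM-2", "REM", "NREM-2", "NREM-2"]
print(stage_agreement(ref, pred))
# {'Wake': 0.0, 'REM': 100.0, 'NREM-2': 100.0, 'NREM-3': 0.0}
```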
The respiratory rate is a vital sign indicating breathing illness. Evaluating it without direct contact requires analyzing the mechanical oscillations of the patient's body arising from chest movements. An inappropriate holder on which the sensor is mounted or an inappropriate sensor position are external factors that should be minimized during signal registration. This paper considers using a non-invasive device placed under the bed mattress to evaluate the respiratory rate. The aim of the work is the development of an accelerometer sensor holder for this system. Normal and deep breathing signals were analyzed, corresponding to the relaxed state and to taking deep breaths. The evaluation criterion for the holder's model is its influence on the patient's respiratory signal amplitude in each state. As a result, we offer a non-invasive system for respiratory rate detection, including the mechanical component, providing the most accurate values of the respiratory rate.
A sleep study is a test used to diagnose sleep disorders and is usually done in sleep laboratories. The gold standard for the evaluation of sleep is overnight polysomnography (PSG). Unfortunately, in-lab sleep studies are expensive and complex procedures. Furthermore, with a minimum of 22 wire attachments to the patient for sleep recording, this medical procedure is invasive and unfamiliar to the subjects. To solve this problem, low-cost home diagnostic systems based on non-invasive recording methods require further research.
For this purpose, it is important to find suitable vital parameters for classifying the sleep phases WAKE, REM, light sleep and deep sleep without any physical impairment. We decided to analyse body movement (BM), respiration rate (RR) and heart rate variability (HRV) from existing sleep recordings to develop an algorithm that is able to classify the sleep phases automatically. The preliminary results of this project show that BM, RR and HRV are suitable for identifying the WAKE, REM and NREM stages.
We present a compact battery charger topology for weight- and cost-sensitive applications with an average output current of 9 A, targeted at 36 V batteries commonly found in electric bicycles. Instead of using a conventional boost converter with large DC-link capacitors, we accomplish PFC functionality by shaping the charging current into a sin² shape. In addition, a novel control scheme without input-current sensing is introduced. A priori knowledge is used to implement a feed-forward control in combination with a closed-loop output current control to maintain the target current. The use of a full-bridge/half-bridge LLC converter enables operation over a wide input-voltage range.
A fully featured prototype has been built with a peak output power of 1050 W. An average output power of 400 W was measured, resulting in a power density of 1.8 kW/dm³. At 9 A charging current, a power factor of 0.96 was measured, and the efficiency exceeds 93% on average with passive rectification.
The impact of pulse charging has been evaluated on a 400 Wh battery, which was charged with the proposed converter as well as with CC-CV charging for reference. Both charging schemes show similar battery surface temperatures.
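The measured power factor of 0.96 sits close to the theoretical limit of an ideal sin²-shaped current. Assuming (our own illustration, not taken from the paper) a rectified mains voltage \(v(\theta)=\hat V\sin\theta\) and a charging current \(i(\theta)=\hat I\sin^2\theta\) drawn in phase over each half cycle:

\[
P = \frac{1}{\pi}\int_0^{\pi} \hat V \sin\theta \cdot \hat I \sin^2\theta \, d\theta = \frac{4\,\hat V \hat I}{3\pi},
\qquad
V_{\mathrm{rms}} = \frac{\hat V}{\sqrt{2}},\quad
I_{\mathrm{rms}} = \hat I\,\sqrt{\tfrac{3}{8}},
\]
\[
\mathrm{PF} = \frac{P}{V_{\mathrm{rms}}\, I_{\mathrm{rms}}} = \frac{16}{3\sqrt{3}\,\pi} \approx 0.98 .
\]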
Recognizing human actions is a core challenge for autonomous systems, as they directly share the same space with humans. Systems must be able to recognize and assess human actions in real time. To train the corresponding data-driven algorithms, a significant amount of annotated training data is required. We demonstrate a pipeline to detect humans, estimate their pose, track them over time and recognize their actions in real time with standard monocular camera sensors. For action recognition, we transform noisy human pose estimates into an image-like format we call Encoded Human Pose Image (EHPI). This encoded information can then be classified using standard methods from the computer vision community. With this simple procedure, we achieve competitive state-of-the-art performance in pose-based action detection and can ensure real-time performance. In addition, we show a use case in the context of autonomous driving to demonstrate how such a system can be trained to recognize human actions using simulation data.
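The abstract does not spell out the encoding; as a sketch of the general idea — writing joint coordinates over time into an image-like tensor that a standard CNN can classify — one might do something like the following. The normalization and channel layout are assumptions and may differ from the paper's:

```python
# Sketch of an image-like encoding of pose sequences (EHPI-style idea;
# the exact layout used in the paper may differ).
import numpy as np

def encode_pose_sequence(poses: np.ndarray) -> np.ndarray:
    """poses: (num_frames, num_joints, 2) array of pixel coordinates.
    Returns a (num_joints, num_frames, 3) uint8 'image': x in the red
    channel, y in the green channel, normalized to 0..255 per sequence."""
    t, j, _ = poses.shape
    mins = poses.reshape(-1, 2).min(axis=0)
    maxs = poses.reshape(-1, 2).max(axis=0)
    norm = (poses - mins) / np.maximum(maxs - mins, 1e-6) * 255.0
    img = np.zeros((j, t, 3), dtype=np.uint8)
    img[:, :, 0] = norm[:, :, 0].T  # x coordinates over time
    img[:, :, 1] = norm[:, :, 1].T  # y coordinates over time
    return img  # feed to any standard image classifier

# Example: 32 frames of 15 noisy joints
ehpi = encode_pose_sequence(np.random.rand(32, 15, 2) * 640)
print(ehpi.shape)  # (15, 32, 3)
```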
The current paper discusses the optimal choice of a filter time constant for filtering the steady-state flux reference in an energy-efficient control strategy for changing load torques. It is shown that by appropriately choosing the filter time constant as a fraction of the rotor time constant, the instantaneous power losses after a load torque step can be significantly reduced compared to the standard case. The analysis of the appropriate choice of the filter time constant is based on a numerical study of three different induction motors with different rated powers.
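Schematically, such a strategy passes the steady-state flux reference through a first-order low-pass filter; in our notation (the paper's exact formulation may differ):

\[
T_f\,\frac{d\psi_{\mathrm{ref}}}{dt} + \psi_{\mathrm{ref}} = \psi_{\mathrm{ss}},
\qquad T_f = k\,T_r,\quad 0 < k < 1,
\]

where \(T_r\) is the rotor time constant and the fraction \(k\) is what the numerical study optimizes.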
Asymmetric read/write storage technologies such as Flash are becoming a dominant trend in modern database systems. They introduce hardware characteristics and properties which are fundamentally different from those of traditional storage technologies such as HDDs.
Multi-Versioning Database Management Systems (MV-DBMSs) and Log-based Storage Managers (LbSMs) are concepts that can effectively address the properties of these storage technologies but are designed for the characteristics of legacy hardware. A critical component of MV-DBMSs is the invalidation model: commonly, transactional timestamps are assigned to the old and the new version, resulting in two independent (physical) update operations. Those entail multiple random writes as well as in-place updates, sub-optimal for new storage technologies both in terms of performance and endurance. Traditional page-append LbSM approaches alleviate random writes and immediate in-place updates, hence reducing the negative impact of Flash read/write asymmetry. Nevertheless, they entail significant mapping overhead, leading to write amplification.
In this work we present an approach called Snapshot Isolation Append Storage Chains (SIAS-Chains) that employs a combination of multi-versioning, append storage management in tuple granularity and a novel singly-linked (chain-like) version organization. SIAS-Chains features simplified buffer management and multi-version indexing, and introduces read/write optimizations to data placement on modern storage media. SIAS-Chains algorithmically avoids small in-place updates, caused by in-place invalidation, and converts them into appends. Every modification operation is executed as an append, and recently inserted tuple versions are co-located.
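As a conceptual illustration of the chain idea — each new tuple version is appended and linked to its predecessor, so invalidation needs no in-place write — consider the sketch below. It abstracts away buffering, indexing, and the actual on-Flash layout:

```python
# Conceptual sketch of singly-linked, append-only version chains
# (illustrates the SIAS-Chains idea; the real layout/buffering differs).
log = []  # append-only storage: position in list ~ position on Flash

def append_version(key, value, ts, prev_pos=None):
    """Appending a version implicitly invalidates its predecessor via
    the back-pointer; no in-place update of the old version is needed."""
    log.append({"key": key, "value": value, "ts": ts, "prev": prev_pos})
    return len(log) - 1  # physical position of the new version

def visible_version(head_pos, snapshot_ts):
    """Walk the chain from newest to oldest until a version is visible
    under the reader's snapshot timestamp."""
    pos = head_pos
    while pos is not None:
        v = log[pos]
        if v["ts"] <= snapshot_ts:
            return v["value"]
        pos = v["prev"]
    return None

p0 = append_version("k1", "v1", ts=10)
p1 = append_version("k1", "v2", ts=20, prev_pos=p0)
print(visible_version(p1, snapshot_ts=15))  # -> "v1"
```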
A premise guaranteeing successful interdisciplinary teamwork in product design is a mutual understanding, in both the professional and academic communities, of the different kinds of design expertise and the roles they play in the process. It appears that the open compound word 'industrial design' is open to interpretation in European education. This ambiguity has had a negative impact on the labour policies of some European countries, which have labelled some professions with incorrect names. This terminological inconsistency therefore calls for clarification within the design community. This work analyses the term 'industrial design', presents historical developments in European industrial design education, in particular in Germany and the Netherlands, and discusses how education for the industrial design profession was positioned towards product development. This paper suggests that the causes of the observed lack of clarity about the meaning of the term 'industrial design' are of an etymological and disciplinary kind. In order to act as a bridge between the professional and academic communities, universities should create the premises for interdisciplinary collaboration between designers and engineers through standardized communication, ultimately contributing to a sustainable future in both design and engineering education.
Serverless computing is an emerging cloud computing paradigm with the goal of freeing developers from resource management issues. As of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other. These workloads benefit from on-demand and elastic compute resources as well as per-function billing. However, it is still an open research question to what extent parallel applications, which most often comprise complex coordination and communication patterns, can benefit from serverless computing.
In this paper, we introduce serverless skeletons for parallel cloud programming to free developers from both parallelism and resource management issues. In particular, we investigate the well-known and widely used farm skeleton, which supports the implementation of a wide range of applications. To evaluate our concepts, we present a prototypical development and runtime framework and implement two applications based on our framework: numerical integration and hyperparameter optimization, a commonly applied technique in machine learning. We report on performance measurements for both applications and discuss the usefulness of our approach.
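The farm skeleton distributes independent tasks from a stream to a pool of workers and collects the results. The sketch below imitates this locally, with a process pool standing in for parallel serverless function invocations; the framework's actual API is not shown in the abstract:

```python
# Farm skeleton sketch: a worker function applied to a stream of tasks.
# A process pool stands in here for parallel serverless function calls.
from concurrent.futures import ProcessPoolExecutor

def farm(worker, tasks, max_workers=4):
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, tasks))

def integrate_slice(bounds):
    """Numerically integrate f(x) = x*x over one sub-interval (trapezoid)."""
    a, b, n = bounds
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    return h * (sum(x * x for x in xs) - (xs[0] ** 2 + xs[-1] ** 2) / 2)

if __name__ == "__main__":
    # Split [0, 1] into 8 independent slices; the farm evaluates them.
    slices = [(i / 8, (i + 1) / 8, 1000) for i in range(8)]
    print(sum(farm(integrate_slice, slices)))  # ~ 1/3
```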
Semi-automated image data labelling using AprilTags as a pre-processing step for machine learning
(2019)
Data labelling is a pre-processing step to prepare data for machine learning. There are many ways to collect and prepare this data, but they are usually associated with considerable effort. This paper presents an approach to semi-automated image data labelling using AprilTags. The AprilTags attached to the object, which contain a unique ID, make it possible to link the object surfaces to a particular class. This approach is implemented and used to label data of a stackable box.
The data is evaluated by training a You Only Look Once (YOLO) network, with a subsequent evaluation of the detection results. These results show that the semi-automatically collected and labelled data can certainly be used for machine learning. However, if concise features of an object surface are covered by the AprilTag, there is a risk that the class concerned will not be recognized. It can be assumed that the labelled data can be used not only for YOLO but also for other machine learning approaches.
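As a sketch of the detection step, the pupil-apriltags bindings can read tag IDs from an image, and a lookup table then maps each ID to a class label. The ID-to-class table and the use of the tag's own bounding box as the label region are illustrative assumptions — the paper labels the object surfaces the tag is attached to:

```python
# Sketch: detect AprilTags and derive class labels from their IDs.
# `pip install pupil-apriltags opencv-python`
import cv2
from pupil_apriltags import Detector

TAG_TO_CLASS = {0: "box_front", 1: "box_side", 2: "box_top"}  # assumed table

detector = Detector(families="tag36h11")
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

for det in detector.detect(gray):
    cls = TAG_TO_CLASS.get(det.tag_id)
    if cls is None:
        continue  # unknown tag
    xs, ys = det.corners[:, 0], det.corners[:, 1]
    # Emit one labelled region per detected tag (here: the tag's own box).
    print(cls, xs.min(), ys.min(), xs.max(), ys.max())
```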
For collision and obstacle avoidance as well as trajectory planning, robots usually generate and use a simple 2D costmap without any semantic information about the detected obstacles. Thus, a robot's path planning will simply adhere to an arbitrarily large safety margin around obstacles. A more optimal approach is to adjust this safety margin according to the class of an obstacle. For class prediction, an image-processing convolutional neural network can be trained. One of the problems in the development and training of any neural network is the creation of a training dataset. The first part of this work describes methods and free open-source software allowing the fast generation of annotated datasets. Our pipeline can be applied to various objects and environment settings and is extremely easy for anyone to use for synthesising training data from 3D source data. We create a fully synthetic industrial environment dataset with 10k physically based rendered images and annotations. Our dataset and sources are publicly available at https://github.com/LJMP/synthetic-industrial-dataset. Subsequently, we train a convolutional neural network with our dataset for costmap safety class prediction. We analyse different class combinations and show that learning the safety classes end-to-end directly with a small dataset, instead of using a class lookup table, improves the quantity and precision of the predictions.
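The core idea — replacing one fixed safety margin with class-dependent margins — can be pictured with a small grid sketch; the classes and margin values below are illustrative assumptions, not the paper's:

```python
# Sketch: inflate a 2D costmap with class-dependent safety margins.
# Classes and margin values (in grid cells) are illustrative assumptions.
import numpy as np

MARGIN_CELLS = {"human": 6, "forklift": 4, "shelf": 1}

def inflate(grid_shape, obstacles):
    """obstacles: list of ((row, col), class_name). Marks all cells within
    the class-specific Chebyshev distance of an obstacle as occupied."""
    cost = np.zeros(grid_shape, dtype=np.uint8)
    rows, cols = np.indices(grid_shape)
    for (r, c), cls in obstacles:
        m = MARGIN_CELLS[cls]
        mask = np.maximum(np.abs(rows - r), np.abs(cols - c)) <= m
        cost[mask] = 255
    return cost

cost = inflate((40, 40), [((10, 10), "human"), ((30, 5), "shelf")])
print(int((cost == 255).sum()))  # far more cells blocked around the human
```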
OpenAPI, WADL, RAML, and API Blueprint are popular formats for documenting Web APIs. Although these formats are in general both human- and machine-readable, only the part of the format describing the syntax of a Web API is machine-understandable. Descriptions, which explain the meaning and purpose of Web API elements, are embedded as natural language text snippets into documents and target human readers but not machines. To enable machines to read and process such state-of-practice Web API documentation, we propose a Transformer model that solves the generic task of identifying a Web API element within a syntax structure that matches a natural language query. For our first prototype, we focus on the Web API integration task of matching output with input parameters and fine-tuned a pre-trained CodeBERT model on the downstream task of question answering with samples from 2,321 OpenAPI documentations. We formulate the original question answering problem as a multiple-choice task: given a semantic natural language description of an output parameter (question) and the syntax of the input schema (paragraph), the model chooses the input parameter (answer) in the schema that best matches the description. The paper describes the data preparation, tokenization, and fine-tuning process and discusses possible applications of our model as part of a recommender system. Furthermore, we evaluate the generalizability and robustness of our fine-tuned model, which achieves an accuracy of 81.46% correctly chosen parameters.
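The described setup corresponds to the standard multiple-choice pattern in the Hugging Face transformers library: each candidate input parameter is paired with the description, and the model scores all pairs jointly. A minimal sketch, assuming microsoft/codebert-base as the base checkpoint (the multiple-choice head is freshly initialized here; the paper fine-tunes it on OpenAPI-derived samples):

```python
# Multiple-choice sketch: score candidate input parameters against a
# natural-language output description. The example strings are invented.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForMultipleChoice.from_pretrained("microsoft/codebert-base")

question = "The unique identifier of the customer to look up."
choices = ["customerId: string", "orderDate: string", "limit: integer"]

enc = tok([question] * len(choices), choices, return_tensors="pt",
          padding=True, truncation=True)
# Shape (1, num_choices, seq_len), as expected by multiple-choice models.
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```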
The planning and control of intralogistics systems in line with the versatile production systems of smart factories requires new approaches and methods to cope with changing requirements within future factories. The planning of intralogistics can no longer follow a static, sequential approach as in the past, since the planning assumptions change at a high frequency. Reasons for these constant changes are, among others, external turbulences like rapidly changing market conditions and decreasing batch sizes down to customer-specific products with a batch size of one, as well as internal turbulences (like production and logistics resource breakdowns) affecting the production system. This paper gives an insight into research approaches and results on how the capabilities of intelligent logistical objects (intelligent bins, autonomous transport systems, etc.) can be used to achieve a self-organized, cost- and performance-optimized intralogistics system with autonomously controlled process execution within versatile production environments. A first consistent method has been developed, validated and implemented within a scenario at the pilot factory Werk150 at ESB Business School (Reutlingen University). Based on the incoming production orders, the method of the Extended Profitability Appraisal (EPA), covering the work system value, is applied to define the most effective work system for order fulfilment. To derive the appropriate intralogistics processes, an autonomous control method involving principles of decentralized and target-oriented decision-making (e.g., intelligent bins interacting with autonomously controlled transport systems to fulfil material orders of assembly workstations) has been developed and applied to achieve a target-optimized process execution. The results of the first research stage using predefined material sources and sinks, described in this paper, set the basis for the further development of a self-organized and autonomously controlled method for intralogistics systems considering dynamic source and sink relations. By allowing dynamic shifts of production orders in the sense of dynamic source and sink relations, the cost and performance aims of the intralogistics system can be directly aligned with the aims of the entire versatile production system in the sense of self-organized and autonomously controlled systems.
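Decentralized, target-oriented decision-making of this kind is often pictured as a bidding process between logistical objects. The sketch below is a deliberately simplified illustration of that pattern, not the EPA-based method itself; positions, orders, and the distance-based bidding rule are assumptions:

```python
# Sketch of decentralized order allocation: bins broadcast material orders,
# transport systems bid (here: distance-based cost), lowest bid wins.
transports = {"AGV1": (0, 0), "AGV2": (8, 3)}   # current positions
orders = [("bin7", (2, 1)), ("bin9", (7, 4))]   # bin -> pickup position

def bid(pos, target):
    # Manhattan travel distance as a stand-in for a cost/performance target.
    return abs(pos[0] - target[0]) + abs(pos[1] - target[1])

for bin_id, pickup in orders:
    best = min(transports, key=lambda t: bid(transports[t], pickup))
    print(f"{bin_id} assigned to {best}")
    transports[best] = pickup  # the transport moves to the pickup location
```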
In the present paper we demonstrate a novel technique applying the recently proposed approach of In-Place Appends (IPA) – overwrites on Flash without a prior erase operation. IPA can be applied selectively: only to DB objects that have frequent and relatively small updates. To do so, we couple IPA with the concept of NoFTL regions, allowing the DBA to place update-intensive DB objects into special IPA-enabled regions. The decision about the region configuration can be (semi-)automated by an advisor analyzing DB log files in the background.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware. During the demonstration we allow the users to interact with the system and gain hands-on experience under different demonstration scenarios.
Motivation: The aim of this project is the automatic classification of total hip endoprosthesis (THEP) components in 2D X-ray images. Revision surgeries of total hip arthroplasty (THA) are common procedures in orthopedics and trauma surgery. Currently, around 400,000 procedures per year are performed in the United States (US) alone. To achieve the best possible result, preoperative planning is crucial, especially if parts of the current THEP system are to be retained.
Methods: First, a ground truth based on 76 X-ray images was created. We used an image processing pipeline consisting of a segmentation step performed by a convolutional neural network and a classification step performed by a support vector machine (SVM). In total, 11 classes (5 cups and 6 stems) are to be classified.
Results: The generated ground truth was of good quality, even though the initial segmentation was performed by technicians. The best segmentation results were achieved using a U-net architecture. For classification, SVM architectures performed much better than additional neural networks.
Conclusions: The overall image processing pipeline performed well, but the ground truth needs to be extended to include a broader variability of implant types and more examples per training class.
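The second pipeline stage — an SVM classifying implant components from segmented images — follows a standard scikit-learn pattern. A minimal sketch; the data and features below are synthetic placeholders, since the paper's actual feature extraction from the U-net masks is not described in the abstract:

```python
# Sketch of the classification stage: an SVM on features derived from
# segmentation masks. Data and features are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(220, 64))     # e.g., shape descriptors per mask
y = rng.integers(0, 11, size=220)  # 11 implant component classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random data
```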
The digital transformation of the automotive industry has a significant impact on how development processes need to be organized in the future. Dynamic market and technological environments require capabilities to react to changes and to learn fast. Agile methods are a promising approach to address these needs, but they are not tailored to the specific characteristics of the automotive domain, like product line development. Although there have been efforts to apply agile methods in the automotive domain for many years, significant and widespread adoption has not yet taken place. The goal of this literature review is to gain an overview and a better understanding of agile methods for embedded software development in the automotive domain, especially with respect to product line development. A mapping study was conducted to analyze the relation between agile software development, embedded software development in the automotive domain and software product line development. Three research questions were defined and 68 papers were evaluated. The study shows that agile and product line development approaches tailored to the automotive domain are not yet fully explored in the literature. In particular, literature on the combination of agile and product line development is rare. Most of the examined combinations are customizations of generic approaches or approaches stemming from other domains. Although only a few approaches for combining agile and software product line development in the automotive domain were found, these findings were valuable for identifying research gaps and provide insights into how existing approaches can be combined, extended and tailored to suit the characteristics of the automotive domain.