004 Informatik
This paper deals with the new German electronic identity card. On the one hand, it explains the security goals of the identity card and the technical implementation of its architecture and protocols, and it walks through the online identification process from the user's point of view. Risks and vulnerabilities of the technology in software and hardware are discussed, and the attacks that have already been carried out are presented. The paper describes ways in which users can protect themselves against attacks. It also gives the reasons why the online functions of the new identity card have found only weak acceptance and why education about the available applications, a price reduction for card readers, and the eIDAS regulation issued by the European Parliament and the Council will not help to drive adoption; these findings are based on a user study. On the other hand, ideas are presented for how the use of the card's electronic functions could be promoted instead.
This report makes temporal predictions of earthquakes. For this purpose, Convolutional Neural Networks (CNNs) are trained on a dataset of laboratory earthquakes. The trained networks make predictions by classifying an input of seismic data; through this classification, the CNN can predict the time remaining until the next earthquake. Two approaches are compared. In the first approach, the raw data are fed into a CNN. In the second approach, the data are preprocessed with Mel Frequency Cepstral Coefficients (MFCC) before being passed to the CNN. Both approaches allow a good classification, with the combination of MFCC and CNN delivering the better quantitative results, reaching an accuracy of 65%.
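The abstract gives no implementation details; the following is a minimal, hypothetical sketch of the second approach (MFCC preprocessing followed by a small CNN classifier), assuming librosa and Keras and made-up window, sampling, and class parameters that are not taken from the paper.

```python
# Hypothetical sketch: classify seismic windows into "time-to-next-event" bins
# using MFCC features and a small CNN (all sizes are assumptions, not the paper's).
import numpy as np
import librosa
import tensorflow as tf

N_MFCC, N_FRAMES, N_CLASSES = 20, 64, 8        # assumed feature and label sizes

def to_mfcc(window: np.ndarray, sr: int = 4_000_000) -> np.ndarray:
    """Convert one raw acoustic-emission window to an MFCC 'image'.
    Assumes the window is long enough to yield at least N_FRAMES frames."""
    mfcc = librosa.feature.mfcc(y=window.astype(float), sr=sr, n_mfcc=N_MFCC)
    mfcc = mfcc[:, :N_FRAMES]                   # crop to a fixed number of frames
    return mfcc[..., np.newaxis]                # add channel axis for Conv2D

def build_cnn() -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_MFCC, N_FRAMES, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn(); model.fit(X_mfcc, y_bins, ...) would then let the network
# predict the time bin of the next laboratory earthquake for a new window.
```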
Database management systems (DBMS) are critical performance components in large-scale applications under modern update-intensive workloads. Additional access paths accelerate look-up performance in DBMS for frequently queried attributes, but the required maintenance slows down update performance. The ubiquitous B+-tree is a commonly used key-indexed access path that is able to support many required functionalities with logarithmic access time to requested records. Modern processing and storage technologies and their characteristics require a reconsideration of matured indexing approaches for today's workloads. Partitioned B-trees (PBT) leverage characteristics of modern hardware technologies and complex memory hierarchies as well as high update rates and changes in workloads by maintaining partitions within one single B+-tree. This paper includes an experimental evaluation of PBT's optimized write pattern and performance improvements. With PBT, transactional throughput under TPC-C increases by 30%; PBT results in beneficial sequential write patterns even in the presence of updates and maintenance operations.
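As an illustration only (not the paper's implementation), the core PBT idea of keeping several partitions inside one B+-tree can be sketched by prefixing every key with an artificial partition number: inserts go only to the newest partition, which keeps writes sequential, while lookups probe partitions from newest to oldest. A sortedcontainers SortedDict stands in for the single ordered B+-tree.

```python
# Illustrative sketch of a partitioned B-tree (PBT): one ordered index whose keys
# are prefixed with a partition number; SortedDict stands in for the B+-tree.
from sortedcontainers import SortedDict

class PartitionedBTree:
    def __init__(self):
        self.tree = SortedDict()        # single ordered structure, like one B+-tree
        self.current_partition = 0      # new writes target only this partition

    def start_new_partition(self):
        """E.g. when a write buffer is evicted; keeps the write pattern sequential."""
        self.current_partition += 1

    def insert(self, key, value):
        # composite key (partition, key): new entries land at the "end" of the tree
        self.tree[(self.current_partition, key)] = value

    def lookup(self, key):
        # probe partitions from newest to oldest; the newest version wins
        for p in range(self.current_partition, -1, -1):
            if (p, key) in self.tree:
                return self.tree[(p, key)]
        return None

    def merge_all(self):
        """Background reorganization: fold every partition into partition 0,
        letting newer versions overwrite older ones."""
        for (p, key) in list(self.tree.keys()):     # ascending partition order
            if p != 0:
                self.tree[(0, key)] = self.tree.pop((p, key))
        self.current_partition = 0
```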
Business process management and IT-supported processes are a topic of current interest. Finding a business process management system that implements your processes well is not easy and takes a lot of time. In this article you will find a recommendation for an open source system. Four selected open source workflow management systems are tested and analyzed. The main criteria for the evaluation are listed in a criteria catalogue and rated by experts according to their importance. Finally, the systems are evaluated against these criteria, and the best-rated system is recommended.
Context: Companies that operate in the software-intensive business are confronted with high market dynamics, rapidly evolving technologies, and fast-changing customer behavior. Traditional product roadmapping practices, such as fixed-time-based charts including detailed planned features, products, or services, typically fail in such environments. Until now, the underlying reasons for the failure of product roadmaps in a dynamic and uncertain market environment have not been widely analyzed and understood.
Objective: This paper aims to identify current challenges and pitfalls practitioners face when developing and handling product roadmaps in a dynamic and uncertain market environment.
Method: To reach our objective we conducted a grey literature review (GLR).
Results: Overall, we identified 40 relevant papers, from which we could extract 11 challenges of the application of product roadmapping in a dynamic and uncertain market environment. The analysis of the articles showed that the major challenges for practitioners originate from overcoming a feature-driven mindset, not including a lot of details in the product roadmap, and ensuring that the content of the roadmap is not driven by management or expert opinion.
"I have never seen one who loves virtue as much as he loves beauty," Confucius once said. If beauty is more important than goodness, it becomes clear why people invest so much effort in their first impression. The aesthetics of faces has many aspects, and there is a strong correlation with characteristics of humans such as age and gender. Often, research on aesthetics by social and ethics scientists lacks sufficient labelled data and the support of machine vision tools. In this position paper we propose the Aesthetic-Faces dataset, containing training data which is labelled by Chinese and German annotators. As a combination of three image subsets, the AF-dataset consists of European, Asian and African people. The research communities in machine learning, aesthetics and social ethics can benefit from our dataset and our toolbox. The toolbox provides many functions for machine learning with state-of-the-art CNNs and an Extreme-Gradient-Boosting regressor, but also 3D Morphable Model technologies for face shape evaluation, and we discuss how to train an aesthetic estimator considering culture and ethics.
Several studies analyzed existing Web APIs against the constraints of REST to estimate the degree of REST compliance among state-of-the-art APIs. These studies revealed that only a small number of Web APIs are truly RESTful. Moreover, identified mismatches between theoretical REST concepts and practical implementations lead us to believe that practitioners perceive many rules and best practices aligned with these REST concepts differently in terms of their importance and impact on software quality. We therefore conducted a Delphi study in which we confronted eight Web API experts from industry with a catalog of 82 REST API design rules. For each rule, we let them rate its importance and software quality impact. As consensus, our experts rated 28 rules with high, 17 with medium, and 37 with low importance. Moreover, they perceived usability, maintainability, and compatibility as the most impacted quality attributes. The detailed analysis revealed that the experts saw rules for reaching Richardson maturity level 2 as critical, while reaching level 3 was less important. As the acquired consensus data may serve as valuable input for designing a tool-supported approach for the automatic quality evaluation of RESTful APIs, we briefly discuss requirements for such an approach and comment on the applicability of the most important rules.
Software development consists to a large extent of human-based processes with continuously increasing demands regarding interdisciplinary team work. Understanding the dynamics of software teams can be seen as highly important to successful project execution. Hence, for future project managers, knowledge about non-technical processes in teams is significant. In this paper, we present a course unit that provides an environment in which students can learn and experience the impact of group dynamics on project performance and quality. The course unit uses the Tuckman model as its theoretical framework, and borrows from controlled experiments to organize and implement its practical parts in which students then experience the effects of, e.g., time pressure, resource bottlenecks, staff turnover, loss of key personnel, and other stress factors. We provide a detailed design of the course unit to allow for implementation in further software project management courses. Furthermore, we provide experiences obtained from two instances of this unit conducted in Munich and Karlskrona with 36 graduate students. We observed students building awareness of stress factors and developing countermeasures to reduce the impact of those factors. Moreover, students experienced what problems occur when teams work under stress and how to form a performing team despite exceptional situations.
Context: Organizations are increasingly challenged by dynamic and technical market environments. Traditional product roadmapping practices such as detailed and fixed long-term planning typically fail in such environments. Therefore, companies are actively seeking ways to improve their product roadmapping approach.
Goal: This paper aims at identifying problems and challenges with respect to product roadmapping. In addition, it aims at understanding how companies succeed in improving their roadmapping practices in their respective company contexts.
Method: We conducted semi-structured expert interviews with 15 experts from 13 German companies and performed a thematic data analysis.
Results: The analysis showed that a significant number of companies are still struggling with traditional feature-based product roadmapping and opinion-based prioritization of features. The most promising areas for improvement are stating the outcomes a company is trying to achieve and making them part of the roadmap, sharing or co-developing the roadmap with stakeholders, and establishing discovery activities.
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to used development methods and practices. Our findings suggest that only a small number of participants operate their projects in a purely traditional or agile manner (under 15%). That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
The question of why individuals adopt information technology has been present in information systems research for the past quarter century. One of the most used models for predicting technology usage was introduced by Fred Davis: the Technology Acceptance Model (TAM). It describes the influence of perceived usefulness and perceived ease of use on attitude, behavioral intention, and system usage. The first two factors are in turn influenced by external variables. Although a plethora of papers exists about the TAM, an extensive analysis of the role of the external variables in the model is still missing. This paper aims to give an overview of the most important variables. In an extensive literature review, we identified 763 relevant papers, found 552 unique external variables, characterized the most important of them, and described the frequency of their appearance. Additionally, we grouped these variables into four categories (organizational characteristics, system characteristics, user personal characteristics, and other variables). Afterwards we discuss the results and show implications for theory and practice.
Among the multitude of software development processes available, hardly any is used by the book. Regardless of company size or industry sector, a majority of project teams and companies use customized processes that combine different development methods, so-called hybrid development methods. Even though such hybrid development methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this paper, we make a first step towards devising such guidelines. Grounded in 1,467 data points from a large-scale online survey among practitioners, we study the current state of practice in process use to answer the question: What are hybrid development methods made of? Our findings reveal that only eight methods and few practices build the core of modern software development. This small set allows for statistically constructing hybrid development methods. Using an 85% agreement level in the participants’ selections, we provide two examples illustrating how hybrid development methods are characterized by the practices they are made of. Our evidence-based analysis approach lays the foundation for devising hybrid development methods.
Regardless of company size or industry sector, a majority of project teams and companies use customized processes that combine different development methods-so-called hybrid development methods. Even though such hybrid development methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. Based on 1,467 data points from a large-scale online survey among practitioners, we study the current state of practice in process use to answer the question: What are hybrid development methods made of? Our findings reveal that only eight methods and few practices build the core of modern software development. This small set allows for statistically constructing hybrid development methods.
Rapid prototyping platforms reduce development time: an idea can quickly be validated in the form of a prototype, leaving more time for the actual application development and its user interfaces. This approach has long been followed for technical platforms such as the Arduino. To transfer this form of prototyping to wearables, this paper presents WearIT. As a wearable prototyping platform, WearIT consists of four components: a vest, sensor and actuator shields, a dedicated library, and a mainboard consisting of an Arduino, a Raspberry Pi, a breadboard, and a GPS module. As a result, a wearable prototype can be developed quickly by attaching sensor and actuator shields to the WearIT vest. These sensor and actuator shields can then be programmed through the WearIT library. For this purpose, the screen contents of the Raspberry Pi can be accessed from a remote computer via Virtual Network Computing (VNC) and the Arduino can be programmed.
The following article deals with wearables for horses. The goal is to increase the animals' safety when they break out of a pasture and thus to minimize personal injury and property damage. To this end, the state of the art in outdoor positioning is compiled, and a classification of the different approaches is used to determine which positioning method appears suitable for horses. In addition, a questionnaire is designed to identify characteristics and functionalities for a prototype.
Digital Enterprise Architecture allows multiple viewpoints on a company’s IT landscape. To gain valuable information out of huge amounts of operational data, it is indispensable to have both an understanding of the operations architecture and an engine capable of managing Big Data. The mechanism of understanding huge amounts of data is based on three main steps: collect, process and use. The main idea is focused on extracting valuable information out of Big Data to make better design decisions. The Elastic Stack is an open-source solution to comfortably and quickly handle Big Data scenarios.
This report addresses options for visualizing neural networks. At first, a neural network cannot be inspected from the outside and is therefore a black box for many. Frequently used Python libraries, for example TensorFlow, are introduced, and their strengths and weaknesses are presented. Based on these libraries, existing visualizations are shown and their current use is explained. A comparison is intended to show which library provides the most data during training so that this information can be processed further. These data are to be visualized in a way that supports the development of a neural network. The goal is to discuss the possibilities that can be offered. By simplifying the debugging of neural networks, further developments in this direction are to be supported.
In this paper we describe an interactive web-based visual analysis tool for Formula One races. It first provides an overview of all races on a yearly basis in a calendar-like representation. From this starting point, races can be selected and visually inspected in detail. We support a dynamic race position diagram as well as a more detailed lap times line plot for showing the drivers' lap times in comparison. Many interaction techniques are supported, like selections, filtering, highlighting, color coding, or details-on-demand. We illustrate the usefulness of our visualization tool by applying it to a Formula One dataset while we describe the different dynamic visual racing patterns for a number of selected races and drivers.
Based on well-established robotic concepts of autonomous localization and navigation, we present a system prototype to assist camera-based indoor navigation for humans, implemented in the Robot Operating System (ROS). Our prototype takes advantage of state-of-the-art computer vision and robotic methods. Our system is designed for assistive indoor guidance. We employ a vibro-tactile belt as a guiding device to render derived motion suggestions to the user via vibration patterns. We evaluated the effectiveness of a variety of vibro-tactile feedback patterns for guidance of blindfolded users. Our prototype demonstrates that a vision-based system can support human navigation, and may also assist the visually impaired in a human-centered way.
Enterprises and societies currently face crucial challenges, while Society 5.0 can contribute to a super-smart society, especially for manufacturing and healthcare, and Industry 4.0 becomes important in the global manufacturing industry. Smart energy digital platforms are architected to manage energy supply efficiently. Furthermore, these digital platforms are expected to collect various kinds of data and analyze Big Data for trends in the sharing economy in ecosystems. The adaptive integrated digital architecture framework (AIDAF) for Design Thinking Approach with Risk Management is expected to align with the digital IT strategy. In this paper, we propose that various energy management systems and related digital platforms be designed and implemented in alignment with the digital IT strategy for the sharing economy toward Society 5.0, using the AIDAF framework for Design Thinking Approach with Risk Management. The vision of AIDAF applications to enable the sharing economy and digital platforms is explained and extended in the context of Society 5.0. In addition, challenges and future activities for this area are discussed, covering the directions of smart energy for Society 5.0.
Today the optimization of metal forming processes is done using advanced simulation tools in a virtual process, e.g. FEM studies. The modification of the free parameters represents the different variants to be analysed. Experienced engineers may derive useful proposals in an acceptable time if good initial proposals are available. As soon as the number of free parameters grows, or the total process takes a long time and uses several successive forming steps, it can be quite difficult to find promising initial ideas. In metal forming another problem has to be considered: an optimization using a series of local improvements, often called a gradient approach, may find a local optimum, but this could be far away from a satisfactory solution. Therefore non-deterministic approaches, e.g. Bionic Optimization, have to be used. Approaches such as Evolutionary Optimization or Particle Swarm Optimization are capable of covering a large range of high-dimensional optimization spaces and discovering many local optima, so the chance of including the global optimum increases when such non-deterministic methods are used. Unfortunately, these bionic methods require large numbers of studies of different variants of the process to be optimized. The number of studies tends to increase exponentially with the number of free parameters of the forming process. As the time for one single study may not be small either, the total time demand becomes unacceptable, taking weeks to months even if high-performance computing is used. Therefore the optimization process needs to be accelerated. Among the many ideas to reduce the time and computing requirements, Meta- and Hybrid Optimization seem to produce the most efficient results. Hybrid Optimization often consists of global searches for promising regions within the parameter space. As soon as the studies indicate that there could be a local optimum, a deterministic study tries to identify this local region. If it shows better performance than other optima found so far, it is preserved for a more detailed analysis; if it performs worse than other optima, the region is excluded from further search. Meta-Optimization is often understood as the derivation of response surfaces of the functions of the free parameters. Once enough studies have been performed, the optimization is done using the response surfaces as representatives, e.g. for the goal and the restrictions of the optimization problem. Having found regions where interesting solutions are to be expected, the studies available so far are used to define the response surfaces. In many cases low-degree polynomials are used, with their coefficients defined by least-squares methods. Both proposals, Hybrid Optimization and Meta-Optimization, sometimes used in combination, often help to reduce the total optimization process by large numbers of variants that would otherwise have to be studied. In consequence, they are highly recommended when dealing with time-consuming optimization studies.
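As a generic illustration of the meta-optimization step described above (not code from the study), a quadratic response surface can be fitted by least squares to the goal values of the variants simulated so far, and the cheap surrogate can then be searched instead of running further expensive forming simulations. All data in the sketch are synthetic stand-ins.

```python
# Minimal sketch: fit a quadratic response surface to goal values of already
# simulated variants and minimize the surrogate instead of the real simulation.
import numpy as np
from itertools import combinations_with_replacement

def quad_basis(X):
    """Design matrix with 1, x_i and all second-order terms x_i * x_j."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    coeffs, *_ = np.linalg.lstsq(quad_basis(X), y, rcond=None)
    return lambda x: float(quad_basis(np.atleast_2d(x)) @ coeffs)

# X: free parameters of the variants already simulated, y: their goal values
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 3))
y = 1.0 + (X ** 2).sum(axis=1) + 0.05 * rng.standard_normal(30)   # stand-in "simulation"

surrogate = fit_response_surface(X, y)
candidates = rng.uniform(-1.0, 1.0, size=(10_000, 3))
best = candidates[np.argmin([surrogate(c) for c in candidates])]
print("promising region around:", best)
```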
This paper presents a new approach that allows gravity-reduced navigation within a VR environment, such as a simulated moonwalk. The Cyberith Virtualizer is used for navigation in the VR environment. Gravity is simulated by means of an adjustable harness suspended from elastic ropes, which allows graduated levels of gravity compensation. A spaceship scenario and a lunar surface were generated as environments; simple interactions are possible in the current application. Following existing gravity offload systems, the solution is called ViRGOS. ViRGOS has already been used at various visits and university events, so that initial user feedback could be collected.
Comparative analysis of the YouTube presence of private and public-service broadcasting groups
(2020)
For a long time, the Internet was seen as an antagonist to television. Accordingly, it was used to win back or retain viewers, which proved to be inefficient. In the meantime, however, the individual broadcasting groups have recognized and used the Internet as a media extension. Because of this late acceptance, there are strong differences in the extent and manner in which the Internet is used as an additional medium. This is best illustrated by a comparison on the most important video-based social media platform, YouTube.
In this comparison, the individual broadcasting groups are evaluated with respect to their perceived advantages, disadvantages, and attractiveness based on user behavior and user opinion. Optimizing the YouTube presence for the target group is of very high importance for future market penetration.
Redirected walking techniques allow people to walk in a larger virtual space than the physical extents of the laboratory. We describe two experiments conducted to investigate human sensitivity to walking on a curved path and to validate a new redirected walking technique. In a psychophysical experiment, we found that sensitivity to walking on a curved path was significantly lower for slower walking speeds (radius of 10 meters versus 22 meters). In an applied study, we investigated the influence of a velocity-dependent dynamic gain controller and an avatar controller on the average distance that participants were able to freely walk before needing to be reoriented. The mean walked distance was significantly greater in the dynamic gain controller condition, as compared to the static controller (22 meters versus 15 meters). Our results demonstrate that perceptually motivated dynamic redirected walking techniques, in combination with reorientation techniques, allow for unaided exploration of a large virtual city model.
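A hedged sketch of the basic curvature-gain idea behind such redirected walking controllers (illustrative only, not the authors' dynamic gain controller): each frame or step, the virtual scene is rotated slightly around the user in proportion to the distance walked, and the detection thresholds reported above suggest that a tighter curvature radius may go unnoticed at slower walking speeds. The speed cut-off used below is an assumption.

```python
import math

def curvature_rotation(step_length_m: float, walking_speed_mps: float) -> float:
    """Injected scene rotation (radians) for one step of redirected walking.

    Walking a distance s along an imperceptible circle of radius r corresponds
    to an injected rotation of s / r radians. The radii are the thresholds
    reported in the abstract; the 0.75 m/s speed cut-off is purely illustrative.
    """
    radius_m = 10.0 if walking_speed_mps < 0.75 else 22.0
    return step_length_m / radius_m

# Example: a 0.7 m step at slow speed allows roughly 4 degrees of unnoticed rotation.
print(math.degrees(curvature_rotation(0.7, 0.5)))
```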
Artificial Intelligence enables innovative applications, and applications based on Artificial Intelligence are increasingly important for all aspects of the Digital Economy. However, the question of how AI resources such as tools and data can be linked to provide an AI-capability and create business value is still open. Therefore, this paper identifies the value-creating mechanisms of connectionist artificial intelligence using a capability-oriented view and points out the connections to different kinds of business value. The analysis supports an agenda that identifies areas that need further research to understand the mechanism of value creation in connectionist artificial intelligence.
Product engineering and subsequent phases of product lifecycles are predominantly managed in isolation. Companies therefore do not fully exploit potentials through using data from smart factories and product usage. The novel intelligent and integrated Product Lifecycle Management (i²PLM) describes an approach that uses these data for product engineering. This paper describes the i²PLM, shows the cause-and-effect relationships in this context and presents in detail the validation of the approach. The i²PLM is applied and validated on a smart product in an industrial research environment. Here, the subsequent generation of a smart lunchbox is developed based on production and sensor data. The results of the validation give indications for further improvements of the i²PLM. This paper describes how to integrate the i²PLM into a learning factory.
Applications often need to be deployed in different variants due to different customer requirements. However, since modern applications often need to be deployed using multiple deployment technologies in combination, such as Ansible and Terraform, the deployment variability must be considered in a holistic way. To tackle this, we previously developed Variability4TOSCA and the prototype OpenTOSCA Vintner, which is a TOSCA preprocessing and management layer that implements Variability4TOSCA. In this demonstration, we present a detailed case study that shows how to model a deployment using Variability4TOSCA, how to resolve the variability using Vintner, and how the result can be deployed.
Recognizing actions of humans, reliably inferring their meaning, and being able to potentially exchange mutual social information are core challenges for autonomous systems when they directly share the same space with humans. Today's technical perception solutions have been developed and tested mostly on standard vision benchmark datasets, where manual labeling of sensory ground truth is a tedious but necessary task. Furthermore, rarely occurring human activities are underrepresented in such data, leading to algorithms not recognizing such activities. For this purpose, we introduce a modular simulation framework which allows algorithms to be trained and validated under various environmental conditions. For this paper we created a dataset containing rare human activities in urban areas on which a current state-of-the-art algorithm for pose estimation fails, and we demonstrate how to train such rare poses with simulated data only.
Context: Organizations increasingly develop software in a distributed manner. The cloud provides an environment to create and maintain software-based products and services. Currently, it is unknown which software processes are suited for cloud-based development and what their effects in specific contexts are.
Objective: We aim at better understanding the software process applied to distributed software development using the cloud as development environment. We further aim at providing an instrument which helps project managers compare different solution approaches and adapt team processes to improve future project activities and outcomes.
Method: We provide a simulation model which helps analyzing different project parameters and their impact on projects performed in the cloud. To evaluate the simulation model, we conduct different analyses using a Scrumban process and data from a project executed in Finland and Spain. An extra adaptation of the simulation model for Scrum and Kanban was used to evaluate the suitability of the simulation model to cover further process models.
Results: A comparison of the real project data with the results obtained from the different simulation runs shows that the simulation produces results close to the real data, and we could successfully replicate a distributed software project. Furthermore, we could show that the simulation model is suitable to address further process models.
Conclusion: The simulator helps reproduce activities, developers, and events in the project, and it helps analyze potential tradeoffs, e.g., regarding throughput, total time, project size, team size, and work-in-progress limits. Furthermore, the simulation model supports project managers in selecting the most suitable planning alternative, thus supporting decision-making processes.
Engineers of the research project “Digital Product Life-Cycle” are using a graph-based design language to model all aspects of the product they are working on. This abstract model is the base for all further investigations, developments and implementations. In particular at early stages of development, collaborative decision making is very important. We propose a semantic augmented knowledge space by means of mixed reality technology, to support engineering teams. Therefore we present an interaction prototype consisting of a pico projector and a camera. In our usage scenario engineers are augmenting different artefacts in a virtual working environment. The concept of our prototype contains both an interaction and a technical concept. To realise implicit and natural interactions, we conducted two prototype tests: (1) A test with a low-fidelity prototype and (2) a test by using the method Wizard of Oz. As a result, we present a prototype with interaction selection using augmentation spotlighting and an interaction zoom as a semantic zoom.
Using measurement and simulation for understanding distributed development processes in the Cloud
(2017)
Organizations increasingly develop software in a distributed manner. The Cloud provides an environment to create and maintain software-based products and services. Currently, it is widely unknown which software processes are suited for Cloud-based development and what their effects in specific contexts are. This paper presents a process simulation to study distributed development in the Cloud. We contribute a simulation model, which helps analyzing different project parameters and their impact on projects carried out in the Cloud. The simulator helps reproducing activities, developers, issues and events in the project, and it generates statistics, e.g., on throughput, total time, and lead and cycle time. The aim of this simulation model is thus to analyze the tradeoffs regarding throughput, total time, project size, and team size. Furthermore, the modified simulation model aims to help project managers select the most suitable planning alternative. Based on observed projects in Finland and Spain, we simulated a distributed project using artificial and real data. Particularly, we studied the variables project size, team size, throughput, and total project duration. A comparison of the real project data with the results obtained from the simulation shows the simulation producing results close to the real data, and we could successfully replicate a distributed software project. By improving the understanding of distributed development processes, our simulation model thus supports project managers in their decision-making.
A sequence of transactions represents a complex and multi-dimensional type of data. Feature construction can be used to reduce the data's dimensionality and to find behavioural patterns within such sequences. The patterns can be expressed using the blueprints of the constructed relevant features. These blueprints can then be used for real-time classification of other sequences.
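As a generic illustration of the idea (not the authors' feature set), a feature "blueprint" can be represented as a named function over a transaction sequence; the same blueprints are then re-applied to new sequences to build fixed-length feature vectors for real-time classification. The field names below are hypothetical.

```python
# Illustrative feature blueprints over a transaction sequence (list of dicts).
from statistics import mean

BLUEPRINTS = {
    "n_transactions": lambda seq: len(seq),
    "total_amount":   lambda seq: sum(t["amount"] for t in seq),
    "mean_amount":    lambda seq: mean(t["amount"] for t in seq) if seq else 0.0,
    "n_merchants":    lambda seq: len({t["merchant"] for t in seq}),
}

def build_feature_vector(sequence):
    """Apply every blueprint to one sequence -> fixed-length feature vector."""
    return [f(sequence) for f in BLUEPRINTS.values()]

sequence = [{"amount": 12.5, "merchant": "A"}, {"amount": 7.0, "merchant": "B"}]
print(dict(zip(BLUEPRINTS, build_feature_vector(sequence))))
```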
Software evolvability is an important quality attribute, yet one difficult to grasp. A certain base level of it is allegedly provided by service- and microservice-based systems, but many software professionals lack systematic understanding of the reasons and preconditions for this. We address this issue via the proxy of architectural modifiability tactics. By qualitatively mapping principles and patterns of Service Oriented Architecture (SOA) and microservices onto tactics and analyzing the results, we can not only generate insights into service-oriented evolution qualities, but can also provide a modifiability comparison of the two popular service-based architectural styles. The results suggest that both SOA and microservices possess several inherent qualities beneficial for software evolution. While both focus strongly on loose coupling and encapsulation, there are also differences in the way they strive for modifiability (e.g. governance vs. evolutionary design). To leverage the insights of this research, however, it is necessary to find practical ways to incorporate the results as guidance into the software development process.
The paper explains a workflow to simulate the food energy water (FEW) nexus for an urban district combining various data sources like 3D city models, particularly the City Geography Markup Language (CityGML) data model from the Open Geospatial Consortium, OpenStreetMap, and Census data. A long-term vision is to extend the CityGML data model by developing a FEW Application Domain Extension (FEW ADE) to support future FEW simulation workflows such as the one explained in this paper. Together with the mentioned simulation workflow, this paper also identifies some necessary FEW-related parameters for the future development of a FEW ADE. Furthermore, relevant key performance indicators are investigated, and the relevant datasets necessary to calculate these indicators are studied. Finally, different calculations are performed for the downtown borough Ville-Marie in the city of Montréal (Canada) for the domains of food waste (FW) and wastewater (WW) generation. For this study, a workflow is developed to calculate the energy generation from anaerobic digestion of FW and WW. In the first step, data collection and preparation were done: relevant data for georeferencing, data for model set-up, and data for creating the required usage libraries, such as food waste and wastewater generation per person, were collected. The next step was the data integration and calculation of the relevant parameters, and lastly, the results were visualized for analysis purposes. As a use case to support such calculations, the CityGML level of detail two model of Montréal is enriched with information such as building functions and building usages from OpenStreetMap. The calculation of the total residents based on the CityGML model as the main input for Ville-Marie results in a population of 72,606. The statistical value for 2016 was 89,170, which corresponds to a deviation of 15.3%. The energy recovery potential of FW is about 24,024 GJ/year, and that of wastewater is about 1,629 GJ/year, adding up to 25,653 GJ/year. Relating these values to the calculated number of inhabitants in Ville-Marie results in 330.9 kWh/year for FW and 22.4 kWh/year for wastewater, respectively.
Avatars are in use when interacting in virtual environments in different contexts, in collaborative work, as well as in gaming and also in virtual meetings with friends. Therefore it is important to understand how the relationship between user and avatar works. In this study, an online survey is used to determine how the perception of an avatar changes in different contexts by relating it to existing avatar relationship typologies. Additionally, it is determined whether in each context a realistic, abstract or comic-like representation is preferred by the participants. One result was a preference of low poly representations in the work context, which are associated with the perception of the avatar as a tool. In the context of meeting friends, a realistic representation is perceived as more appropriate, which is perceived as an accurate self-representation. In the gaming context, the results are less clear, which can be attributed to different gaming preferences. Here, unlike in the other contexts, a comic-like representation is also perceived as appropriate, which is associated with the perception of the avatar as a friend. A symbiotic user-avatar relationship is not directly related to any form of representation, but always lies in the midfield, which is attributed to the fact that it represents a whole spectrum between other categories.
Going forward with the requirements of missions to the Moon and further into deep space, the European Space Agency is investigating new methods of astronaut training that can help accelerate learning, increase availability, and reduce complexity and cost in comparison to currently used methods. To achieve this, technologies such as virtual reality may be utilized. In this paper, an investigation into the benefits of using virtual reality for extravehicular activity training, in comparison to conventional training methods such as neutral buoyancy pools, is given. To help determine the requirements and current uses of virtual reality for extravehicular activity training, first-hand tests of currently available software as well as expert interviews are utilized. With this knowledge a concept is developed that may be used to further advance training methods in virtual reality. The resulting concept is used as a basis for development of a prototype to showcase user interactions and locomotion in microgravity simulations.
The stimulation of user engagement has received significant attention in extant research. However, the theory of antecedents for user engagement with an initial electronic word-of-mouth (eWoM) communication is relatively less developed. In an investigation of 576 unique user postings across independent Facebook (FB) communities for two German firms, we contribute to the extant knowledge on user engagement in two different ways. First, we explicate senders’ prior usage experience and the extent of their acquaintance with other community members as the two key drivers of user engagement across a product and a service community. Second, we reveal that these main effects differ according to the type of community. In service communities, experience has a stronger impact on user engagement; whereas, in product communities, acquaintance is more important.
Managerial accountants spend a large part of their working time on more operational activities in cost accounting, reporting, and operational planning and budgeting. In all these areas, there has been increasing discussion in recent years, both in theory and practice, about using more digital technologies. For reporting, this means not only an intensified discussion of technologies such as RPA and AI but also more intensive changes to existing reporting systems. In particular, management information systems (MIS), which are maintained by managerial accountants and used by managers for corporate management, should be mentioned here. Based on an empirical survey in a large German company, this article discusses the requirements and assessments of users when switching from a regular MIS to a cloud-based system.
This paper examines the efficacy of social media systems in customer complaint handling. The emergence of social media, as a useful complement and (possibly) a viable alternative to the traditional channels of service delivery, motivates this research. The theoretical framework, developed from literature on social media and complaint handling, is tested against data collected from two different channels (hotline and social media) of a German telecommunication services provider, in order to gain insights into channel efficacy in complaint handling. We contribute to the understanding of firm’s technology usage for complaint handling in two ways:
(a) by conceptualizing and evaluating complaint handling quality across traditional and social media channels and (b) by comparing the impact of complaint handling quality on key performance outcomes such as customer loyalty, positive word-of-mouth, and cross-purchase intentions across traditional and social media channels.
As part of this work, an urban mixed-reality driving simulator was implemented. The real environment is made visible within the virtual environment in a green-screen chamber, using camera images from the user's point of view and a chroma key shader. This is intended to increase immersion and interactivity within the virtual environment by displaying and using real elements.
A randomly generated city in which AI-controlled vehicles drive was created as the virtual environment. The results of the development of this driving simulator are explained in this paper.
The driving simulator is intended to support the development of human-centered human-machine interfaces and motion-capture components.
With the steady growth of new technologies and possibilities, there is hardly anything standing in the way of merging technology with humans. Examining such implants and the associated risks is part of this work, with a focus on how they function and on IT security aspects. All implants presented in this work require communication with the outside world. This communication capability entails risks that are not limited to the wearers' data but also include health risks.
Sonography is one of the most common imaging procedures in medicine. However, the reproducibility of ultrasound diagnostics is still a problem today, leading to misdiagnoses. The prototype system presented in this paper to support medical students in ultrasound seminars is intended to define requirements for the reproducibility of an ultrasound examination. Expert interviews provided insights into clinical workflows and everyday hospital routines, and into which contents are relevant for enabling the reproducibility of ultrasound examinations.
Forecasting demand is challenging. Various products exhibit different demand patterns. While demand may be constant and regular for one product, it may be sporadic for another, and when demand occurs, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today a large number of models are used to produce forecasts over a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper we show that a machine learning classification algorithm can be used to predict the best possible model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The machine learning classification algorithm achieves a mean ROC-AUC of 89%, which emphasizes the skill of the model.
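A minimal sketch of the described idea, assuming hypothetical demand-pattern characteristics (average inter-demand interval and squared coefficient of variation), synthetic toy data, and a scikit-learn random forest; the paper's actual features, labels, and classifier are not reproduced here.

```python
# Sketch: predict the best forecasting model per time series from its characteristics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def characteristics(series: np.ndarray) -> list:
    """Simple demand-pattern features: ADI, CV^2 of non-zero demands, mean, std."""
    nz = np.flatnonzero(series)
    adi = len(series) / max(len(nz), 1)                  # average inter-demand interval
    demands = series[nz] if len(nz) else np.array([0.0])
    cv2 = (demands.std() / max(demands.mean(), 1e-9)) ** 2
    return [adi, cv2, float(series.mean()), float(series.std())]

rng = np.random.default_rng(0)
# Toy training set: smooth series labelled "ets", intermittent ones "croston",
# standing in for "which model won on the test period" labels.
smooth = [rng.poisson(20, 52).astype(float) for _ in range(50)]
lumpy = [rng.poisson(5, 52) * rng.binomial(1, 0.2, 52).astype(float) for _ in range(50)]
X = np.array([characteristics(s) for s in smooth + lumpy])
y = np.array(["ets"] * 50 + ["croston"] * 50)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
new_series = rng.poisson(4, 52) * rng.binomial(1, 0.25, 52).astype(float)
print(clf.predict([characteristics(new_series)]))        # -> recommended model
```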
Intermittent time series forecasting is a challenging task which still needs particular attention from researchers. The more irregularly events occur, the more difficult it is to predict them. With Croston's approach in 1972 (1.Nr. 3:289–303), the intermittence and the demand of a time series were investigated separately for the first time. He proposed an exponential smoothing approach to generate a forecast that corresponds to the average demand per period. Although this algorithm produces good results in the field of stock control, it does not capture the typical characteristics of intermittent time series within the final prediction. In this paper, we investigate a time series' intermittence and demand individually, forecast the upcoming demand value and inter-demand interval length using recent machine learning algorithms, such as long short-term memories and light gradient boosting machines, and recombine both pieces of information to generate a prediction which preserves the characteristics of an intermittent time series. We compare the results against Croston's approach, as well as recent forecast procedures where no split is performed.
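For context, the baseline referenced above, Croston's method, can be sketched in a few lines: non-zero demand sizes and the intervals between them are smoothed separately, and the flat per-period forecast is their ratio. The paper's own LSTM/LightGBM split-and-recombine procedure is not reproduced here.

```python
def croston(series, alpha=0.1):
    """Croston's method: exponentially smooth demand sizes z and inter-demand
    intervals p separately; the flat per-period forecast is z / p."""
    z = p = None          # smoothed demand size and interval
    q = 1                 # periods since the last non-zero demand
    for demand in series:
        if demand > 0:
            if z is None:                 # initialise on the first demand
                z, p = demand, q
            else:
                z = alpha * demand + (1 - alpha) * z
                p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    return 0.0 if z is None else z / p

print(croston([0, 0, 5, 0, 0, 0, 7, 0, 2, 0]))   # average demand per period
```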
The perception of vastness can trigger awe in humans, which can in turn lead to positive reactions. While awe has already been well studied both theoretically and practically, there is very little research on the topic of vastness. This knowledge would be useful for deliberately inducing awe. For this reason, a study was conducted to determine to what extent a feeling of vastness can be created in virtual reality using a head-mounted display and whether this gives rise to awe.
This research addresses the question of why employees use enterprise social networks (ESN). Against the background of technology acceptance research, we propose an extended unified theory of acceptance and use of technology (UTAUT) model, adapt it to an ESN context, and test our model against data from ESN users of large and medium-sized enterprises. We use partial least squares structural equation modeling to gain insights into the determinants of ESN use. This paper contributes to ESN acceptance research by evaluating a model containing determinants of ESN use. It also examines the effects of determinants on five different usage dimensions of ESN. The results reveal that facilitating conditions are the main driver of ESN use while the impact of intention to use is comparably small. Implications for theory and practice are discussed.
The goal of this work was the implementation of a perception sensor for software agents that act in a three-dimensional environment through a virtual human model. The sensor is intended to allow the agents to obtain semantic information about geometric objects in the environment. Two methods were implemented that simulate human vision by detecting objects that lie within a field of view. One problem that has to be solved is identifying possible occlusions of the objects. One approach to solving this problem is ray tracing, which was implemented for the first method. The second method uses an occlusion-culling approach. Evaluations of both methods showed that the ray-tracing approach has a faster runtime, while the occlusion-culling approach detects more unoccluded objects in the field of view.
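A minimal geometric sketch of the field-of-view and occlusion test underlying the first (ray-tracing) method; the actual implementations of the work are engine-specific and not reproduced here. An object is a candidate if it lies inside the view cone, and a single ray cast against spherical stand-in occluders then decides whether it is occluded.

```python
import math
import numpy as np

def in_view_cone(eye, view_dir, target, fov_deg=110.0) -> bool:
    """True if the target lies within the agent's view cone."""
    to_target = np.asarray(target, float) - np.asarray(eye, float)
    to_target /= np.linalg.norm(to_target)
    view_dir = np.asarray(view_dir, float) / np.linalg.norm(view_dir)
    return float(np.dot(view_dir, to_target)) >= math.cos(math.radians(fov_deg / 2))

def visible(eye, view_dir, target, occluders) -> bool:
    """Ray-tracing variant: the target is visible if it is in the cone and the
    ray from eye to target hits no occluding sphere (center, radius) first."""
    if not in_view_cone(eye, view_dir, target):
        return False
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    direction = target - eye
    dist = np.linalg.norm(direction)
    direction /= dist
    for center, radius in occluders:
        t = np.dot(np.asarray(center, float) - eye, direction)   # closest approach
        if 0 < t < dist:
            closest = eye + t * direction
            if np.linalg.norm(np.asarray(center, float) - closest) < radius:
                return False
    return True

print(visible((0, 0, 1.7), (1, 0, 0), (5, 0, 1.7), [((2.5, 0, 1.7), 0.5)]))  # False: occluded
```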
In modern collaborative production environments, where industrial robots and humans are supposed to work hand in hand, it is mandatory to observe the robot's workspace at all times. Such observation is even more crucial when the robot's main position is also dynamic, e.g. because the system is mounted on a movable platform. As current solutions, like physically secured areas in which a robot can perform actions potentially dangerous for humans, become unfeasible in such scenarios, novel, more dynamic, and situation-aware safety solutions need to be developed and deployed.
This thesis mainly contributes to the bigger picture of such a collaborative scenario by presenting a data-driven convolutional neural network-based approach to estimate the two-dimensional kinematic-chain configuration of industrial robot arms within raw camera images. This thesis also provides the information needed to generate and organize the mandatory data basis and presents the frameworks that were used to realize all involved subsystems. The robot arm's extracted kinematic chain can also be used to estimate the extrinsic camera parameters relative to the robot's three-dimensional origin. Further, a tracking system based on a two-dimensional kinematic-chain descriptor is presented to allow for the accumulation of a proper movement history, which enables the prediction of future target positions within the given image plane. The combination of the extracted robot pose with a simultaneous human pose estimation system delivers a consistent data flow that can be used in higher-level applications.
This thesis also provides a detailed evaluation of all involved subsystems and gives a broad overview of their particular performance, based on newly generated, semi-automatically annotated, real datasets.
Two Stream Hypothesis: adaptation effects in social interactions with avatars in virtual reality
(2015)
This paper presents an experiment on the two-streams hypothesis. First, the psychological and technical foundations required for the experiment are developed. Then the research question is defined and the experimental setup is discussed. The experiment tests whether there are different adaptation effects in recognizing and performing ambiguous social actions. An experimental setup is developed in which participants react to the actions of virtual avatars either actively, through complementary actions, or passively, by pressing buttons. Finally, the results are evaluated and conclusions are drawn.
Context: Companies need capabilities to evaluate the customer value of software-intensive products and services. One way of systematically acquiring data on customer value is running continuous experiments as part of the overall development process. Objective: This paper investigates the first steps of transitioning towards continuous experimentation in a large company, including the challenges faced. Method: We conduct a single-case study using participant observation, interviews, and qualitative analysis of the collected data. Results: Results show that continuous experimentation was well received by the practitioners, and practising experimentation helped them to enhance understanding of their product value and user needs. Although the complexities of a large multi-stakeholder business-to-business (B2B) environment presented several challenges, such as inaccessible users, it was possible to address impediments and integrate an experiment into an ongoing development project. Conclusion: Developing the capability for continuous experimentation in large organisations is a learning process which can be supported by a systematic introduction approach with the guidance of experts. We gained experience by introducing the approach on a small scale in a large organisation, and one of the major steps for future work is to understand how this can be scaled up to the whole development organisation.
Today, companies face increasing market dynamics, rapidly evolving technologies, and rapid changes in customer behavior. Traditional approaches to product development typically fail in such environments and require companies to transform their often feature-driven mindset into a product-led mindset. A promising first step on the way to a product-led company is a better understanding of how product planning can be adapted to the requirements of an increasingly dynamic and uncertain market environment in the sense of product roadmapping. The authors developed the DEEP product roadmap assessment tool to help companies evaluate their current product roadmap practices and identify appropriate actions to transition to a more product-led company. Objective: The goal of this paper is to gain insight into the applicability and usefulness of version 1.1 of the DEEP model. In addition, the benefits and implications of using the DEEP model in corporate contexts will be explored. Method: We conducted a multiple case study in which participants were observed using the DEEP model. We then interviewed each participant to understand their perceptions of the DEEP model. In addition, we conducted interviews with each company's product management department to learn how the application of the DEEP model influenced their attitudes toward product roadmapping. Results: The study showed that by applying the DEEP model, participants better understood which artifacts and methods were critical to product roadmapping success in a dynamic and uncertain market environment. In addition, the application of the DEEP model helped convince management and other stakeholders of the need to change current product roadmapping practices. The application also proved to be a suitable starting point for the transformation in the participating companies.
Blockchains have become increasingly important in recent years and have expanded their applicability to many domains beyond finance and cryptocurrencies. This adoption has particularly increased with the introduction of smart contracts, which are immutable, user-defined programs directly deployed on blockchain networks. However, many scenarios require business transactions to simultaneously access smart contracts on multiple, possibly heterogeneous blockchain networks while ensuring the atomicity and isolation of these transactions, which is not natively supported by current blockchain systems. Therefore, in this work, we introduce the Transactional Cross-Chain Smart Contract Invocation (TCCSCI) approach that supports such distributed business transactions while ensuring their global atomicity and serializability. The approach introduces the concept of Resource Manager Smart Contracts, and 2PC for Blockchains (2PC4BC), a client-driven Atomic Commit Protocol (ACP) specialized for blockchain-based distributed transactions. We validate our approach using a prototypical implementation, evaluate its introduced overhead, and prove its correctness.
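The following is a minimal sketch of what a client-driven two-phase atomic commit across resource-manager smart contracts on different chains could look like from the client's perspective. The class and method names (ResourceManager, prepare, commit, abort) are hypothetical and do not reproduce the paper's actual 2PC4BC interface; the sketch only illustrates the vote-then-decide structure.

```python
# Hypothetical sketch of a client-driven two-phase commit across several
# blockchain "resource manager" contracts. Names and interfaces are
# illustrative only, not the paper's actual 2PC4BC protocol.

class ResourceManager:
    """Stand-in for a resource-manager smart contract on one blockchain."""

    def __init__(self, name):
        self.name = name

    def prepare(self, tx_id, invocation):
        # On-chain: validate the invocation, buffer its effects, record a vote.
        return True  # vote "yes"

    def commit(self, tx_id):
        # On-chain: apply the buffered effects and release any locks.
        pass

    def abort(self, tx_id):
        # On-chain: discard the buffered effects and release any locks.
        pass


def run_cross_chain_transaction(tx_id, work_items):
    """work_items: list of (ResourceManager, invocation) pairs on different chains."""
    # Phase 1: ask every resource manager to prepare (vote).
    votes = [rm.prepare(tx_id, inv) for rm, inv in work_items]

    # Phase 2: commit only if all voted yes, otherwise abort everywhere.
    if all(votes):
        for rm, _ in work_items:
            rm.commit(tx_id)
        return "committed"
    for rm, _ in work_items:
        rm.abort(tx_id)
    return "aborted"


if __name__ == "__main__":
    rms = [(ResourceManager("chain-A"), "transfer(10)"),
           (ResourceManager("chain-B"), "mint(1)")]
    print(run_cross_chain_transaction("tx-42", rms))
```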
Transaction processing is of growing importance for mobile computing. Booking tickets, flight reservations, banking, ePayment, and booking holiday arrangements are just a few examples of mobile transactions. Due to temporarily disconnected situations, synchronisation and consistent transaction processing are key issues. Serializability is too strong a correctness criterion when the semantics of a transaction are known. We introduce a transaction model that allows higher concurrency for a certain class of transactions defined by their semantics. The transaction results are "escrow serializable" and the synchronisation mechanism is non-blocking. An experimental implementation showed higher concurrency, higher transaction throughput, and lower resource use than common locking or optimistic protocols.
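As a rough illustration of the escrow idea behind such "escrow serializable" results, the sketch below admits commutative increments and decrements without blocking as long as the worst-case value stays within allowed bounds. Class and method names are invented for illustration and do not reproduce the paper's exact protocol.

```python
# Illustrative sketch of escrow-style, non-blocking reservation of a
# numeric resource (e.g. free seats). Hypothetical names, not the paper's
# concrete synchronisation mechanism.

class EscrowCounter:
    def __init__(self, value, low=0, high=float("inf")):
        self.value = value          # committed value
        self.low, self.high = low, high
        self.pending = {}           # tx_id -> uncommitted delta

    def reserve(self, tx_id, delta):
        """Non-blocking: grant the update only if it is safe under all outcomes."""
        worst_low = self.value + sum(d for d in self.pending.values() if d < 0)
        worst_high = self.value + sum(d for d in self.pending.values() if d > 0)
        if delta < 0 and worst_low + delta < self.low:
            return False            # could violate the lower bound -> reject
        if delta > 0 and worst_high + delta > self.high:
            return False            # could violate the upper bound -> reject
        self.pending[tx_id] = self.pending.get(tx_id, 0) + delta
        return True

    def commit(self, tx_id):
        self.value += self.pending.pop(tx_id, 0)

    def abort(self, tx_id):
        self.pending.pop(tx_id, 0)


if __name__ == "__main__":
    seats = EscrowCounter(value=5, low=0)   # 5 free seats
    assert seats.reserve("t1", -2)          # book 2 seats, not yet committed
    assert seats.reserve("t2", -3)          # concurrent booking of 3 seats
    assert not seats.reserve("t3", -1)      # would risk overbooking -> rejected
    seats.commit("t1"); seats.abort("t2")
    print(seats.value)                      # 3
```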
Hardly any software development process is used as prescribed by authors or standards. Regardless of company size or industry sector, a majority of project teams and companies use hybrid development methods (short: hybrid methods) that combine different development methods and practices. Even though such hybrid methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this article, we make a first step towards a statistical construction procedure for hybrid methods. Grounded in 1467 data points from a large-scale practitioner survey, we study the question: What are hybrid methods made of and how can they be systematically constructed? Our findings show that only eight methods and few practices build the core of modern software development. Using an 85% agreement level in the participants' selections, we provide examples illustrating how hybrid methods can be characterized by the practices they are made of. Furthermore, using this characterization, we develop an initial construction procedure, which allows for defining a method frame and enriching it incrementally to devise a hybrid method using ranked sets of practices.
This paper first identifies the trade-off among cost, flexibility, and performance of autonomous robotic solutions for material handling processes, where adding value with automation is not as straightforward as in production processes: hence the requirement for automated solutions to be simple, lean, and efficient becomes even stricter. A method for modelling and comparing the differential performance and cost of manual and autonomous solutions is then developed. As a result of the method, a smart man-machine collaborative interface is designed and its impact evaluated on a specific case study. The results are then generalized and support the conclusion that in unconstrained environments, where full standardization cannot be achieved, the risk of investing in autonomous solutions can only be mitigated by creating a fast and smart man-machine collaborative interface.
Facial expressions play a dominant role in facilitating social interactions. We endeavor to develop tactile displays to reinstate facial expression modulated communication. The high spatial and temporal dimensionality of facial movements poses a unique challenge when designing tactile encodings of them. A further challenge is developing encodings that are attuned to the perceptual characteristics of our skin. A caveat of using vibrotactile displays is that tactile stimuli have been shown to induce perceptual tactile aftereffects when used on the fingers, arm and face. However, at present, despite the prevalence of waist-worn tactile displays, no such investigations of tactile aftereffects at the waist region exist in the literature, though they are warranted by the unique sensory and perceptual signalling characteristics of this area. Using an adaptation paradigm we investigated the presence of perceptual tactile aftereffects induced by continuous and burst vibrotactile stimuli delivered at the navel, side and spinal regions of the waist. We report evidence that the tactile perception topology of the waist is non-uniform, and specifically that the navel and spine regions are resistant to adaptive aftereffects while side regions are more prone to perceptual adaptations to continuous but not burst stimulations. Results of our current investigations highlight the unique set of challenges posed by designing waist-worn tactile displays. These and future perceptual studies can directly inform more realistic and effective implementations of complex high-dimensional spatiotemporal social cues.
IT environments that consist of a very large number of rather small structures like microservices, Internet of Things (IoT) components, or mobility systems are emerging to support flexible and agile products and services in the age of digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing, resilient run-time environments and distributed information systems. We are extending Enterprise Architecture (EA) methodologies and models that cover a high degree of heterogeneity and distribution to support the digital transformation and related information systems with micro-granular architectures. Our aim is to support flexibility and agile transformation for both IT and business capabilities within adaptable digital enterprise architectures. The present research paper investigates mechanisms for integrating Microservice Architectures (MSA) by extending original enterprise architecture reference models with elements for more flexible architectural metamodels and EA-mini-descriptions.
The aim of this work is the development of an artificial intelligence (AI) application to support the recruiting process that elevates the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing, i.e. active recruitment, pre-sorting, evaluating structured video interviews, and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground truth data. Together, these approaches represent important steps forward in creating ethical and unbiased machine learning systems.
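To illustrate what a basic training-data bias check can look like, the sketch below compares the rate of positive labels (for example, "invite to interview") across protected groups and reports the gap. This is a generic demographic-parity style check for illustration only, not the bias-detection method developed in the work.

```python
# Hypothetical sketch of a simple training-data bias check: compare positive
# label rates across groups. Illustrative only, not the paper's method.

from collections import defaultdict

def positive_rates(samples):
    """samples: iterable of (group, label) pairs, label in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in samples:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(samples):
    rates = positive_rates(samples)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_gap(data)
    print(rates, "gap:", round(gap, 2))  # a large gap hints at skewed training data
```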
AI technologies such as deep learning provide promising advances in many areas. Using these technologies, enterprises and organizations implement new business models and capabilities. Initially, AI technologies were deployed in experimental environments, and AI-based applications were created in an ad-hoc manner, without methodological guidance or an engineering approach. Due to the increasing importance of AI technologies, however, a more structured approach is necessary that enables the methodological engineering of AI-based applications. Therefore, in this paper we develop first steps towards the methodological engineering of AI-based applications. First, we identify some important differences between the technological foundations of AI technologies, in particular deep learning, and traditional information technologies. Then we create a framework that enables engineering AI applications in four steps: identification of an AI-application type, sub-type identification, lifecycle phase, and definition of details. The introduced framework considers that AI applications use an inductive approach to infer knowledge from huge collections and streams of data. It not only enables the rapid development of AI applications but also the efficient sharing of knowledge on AI applications.
A large body of literature is concerned with models of presence (the sensory illusion of being part of a virtual scene), but there is still no general agreement on how to measure it objectively and reliably. For the presented study, we applied contemporary theory to measure presence in virtual reality. Thirty-seven participants explored an existing commercial game in order to complete a collection task. Two startle events were naturally embedded in the game progression to evoke physical reactions, and head tracking data was collected in response to these events. Subjective presence was recorded using a post-study questionnaire and real-time assessments. Our novel implementation of behavioral measures led to insights which could inform future presence research: We propose a measure in which startle reflexes are evoked through specific events in the virtual environment, and head tracking data is compared to the range and speed of baseline interactions.
Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment.
In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base via the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
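The sketch below illustrates the "present changes via pull request" step: after pushing a branch containing an automated refactoring, a bot can open a pull request through GitHub's public REST API (POST /repos/{owner}/{repo}/pulls). Repository, branch, and token handling are placeholders, and this is not the Refactoring-Bot's actual implementation.

```python
# Hypothetical sketch of a bot opening a pull request for an automated
# refactoring via GitHub's REST API. Placeholders throughout; not the
# actual Refactoring-Bot code.

import os
import requests

def open_refactoring_pr(owner, repo, branch, base="main"):
    token = os.environ["GITHUB_TOKEN"]          # supplied by the operator
    response = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        json={
            "title": "Automated refactoring: remove code smell",
            "head": branch,                      # branch with the bot's commit
            "base": base,
            "body": "Proposed by the refactoring bot. Review at any time and "
                    "merge with the push of a button, or close to reject.",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["html_url"]

if __name__ == "__main__":
    print(open_refactoring_pr("example-org", "example-repo", "bot/extract-method-42"))
```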
The euphoria around microservices has decreased over the years, but the trend of modernizing legacy systems to this novel architectural style is unbroken to date. A variety of approaches have been proposed in academia and industry, aiming to structure and automate the often long-lasting and cost-intensive migration journey. However, our research shows that there is still a need for more systematic guidance. While grey literature is dominant for knowledge exchange among practitioners, academia has contributed a significant body of knowledge as well, catching up on its initial neglect. A vast number of studies on the topic yielded novel techniques, often backed by industry evaluations. However, practitioners hardly leverage these resources. In this paper, we report on our efforts to design an architecture-centric methodology for migrating to microservices. As its main contribution, a framework provides guidance for architects during the three phases of a migration. We refer to methods, techniques, and approaches based on a variety of scientific studies that have not been made available in a similarly comprehensible manner before. Through an accompanying tool to be developed, architects will be in a position to systematically plan their migration, make better informed decisions, and use the most appropriate techniques and tools to transition their systems to microservices.
While there has been increased digitization of private homes, little has been done to understand these specific home technologies and how they serve consumers, among other issues. "Smart home technology" (SHT) refers to a wide range of artifacts from cleaning aids to energy advisors. Given this breadth, clarity surrounding the key characteristics and the multi-faceted impact of SHT is needed to conduct more directed research on SHT. We propose a taxonomy to help outline the salient intended outcomes of SHT. Through a process involving five iterations, we analyzed and classified 79 technologies (gathered from literature and industry reports). This uncovered seven dimensions encompassing 20 salient characteristics. We believe these dimensions/characteristics will help researchers and organizations better design and study the impacts of these technologies. Our long-term agenda is to use the proposed taxonomy for an exploratory inquiry to understand tensions occurring when personal and sustainability-related outcomes compete.
With the expansion of cyber-physical systems (CPSs) across critical and regulated industries, systems must be continuously updated to remain resilient. At the same time, they should be extremely secure and safe to operate and use. The DevOps approach caters to business demands of more speed and smartness in production, but it is extremely challenging to implement DevOps due to the complexity of critical CPSs and requirements from regulatory authorities. In this study, expert opinions from 33 European companies expose the gap in the current state of practice on DevOps-oriented continuous development and maintenance. The study contributes to research and practice by identifying a set of needs. Subsequently, the authors propose a novel approach called Secure DevOps and provide several avenues for further research and development in this area. The study shows that, because security is a cross-cutting property in complex CPSs, its proficient management requires system-wide competencies and capabilities across the CPSs development and operation.
Towards a practical maintainability quality model for service- and microservice-based systems
(2017)
Although current literature mentions many different metrics related to the maintainability of service-based systems (SBSs), there is no comprehensive quality model (QM) with automatic evaluation and a practical focus. To fill this gap, we propose a Maintainability Model for Services (MM4S), a layered maintainability QM consisting of service properties (SPs) related to automatically collectable Service Metrics (SMs). This research artifact, created within an ongoing Design Science Research (DSR) project, is the first version ready for detailed evaluation and critical feedback. The goal of MM4S is to serve as a simple and practical tool for basic maintainability estimation and control in the context of SBSs and their specialization, microservice-based systems (μSBSs).
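The following sketch shows the layered aggregation idea of such a quality model: automatically collected service metrics are normalized and aggregated into service properties, which are then aggregated into a maintainability estimate. Metric names, thresholds, and weights are invented for illustration and do not reproduce the actual MM4S catalogue.

```python
# Illustrative sketch of a layered quality-model evaluation. All metric
# names, worst-case values, and weights are hypothetical, not MM4S's.

SERVICE_PROPERTIES = {
    "coupling":    {"metrics": {"num_consumed_endpoints": 0.6, "num_dependencies": 0.4}},
    "granularity": {"metrics": {"num_exposed_operations": 1.0}},
}
PROPERTY_WEIGHTS = {"coupling": 0.5, "granularity": 0.5}

def normalize(value, worst):
    """Map a raw metric onto [0, 1], where 1 is best (value 0) and 0 is worst."""
    return max(0.0, 1.0 - value / worst)

def maintainability(service_metrics, worst_case):
    property_scores = {}
    for prop, spec in SERVICE_PROPERTIES.items():
        property_scores[prop] = sum(
            weight * normalize(service_metrics[m], worst_case[m])
            for m, weight in spec["metrics"].items()
        )
    total = sum(PROPERTY_WEIGHTS[p] * s for p, s in property_scores.items())
    return total, property_scores

if __name__ == "__main__":
    metrics = {"num_consumed_endpoints": 4, "num_dependencies": 2, "num_exposed_operations": 12}
    worst = {"num_consumed_endpoints": 20, "num_dependencies": 10, "num_exposed_operations": 40}
    print(maintainability(metrics, worst))
```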
While there are several theoretical comparisons of Object Orientation (OO) and Service Orientation (SO), little empirical research on the maintainability of the two paradigms exists. To provide support for a generalizable comparison, we conducted a study with four related parts. Two functionally equivalent systems (one OO and one SO version) were analyzed with coupling and cohesion metrics as well as via a controlled experiment, where participants had to extend the systems. We also conducted a survey with 32 software professionals and interviewed 8 industry experts on the topic. Results indicate that the SO version of our system possesses a higher degree of cohesion, a lower degree of coupling, and could be extended faster. Survey and interview results suggest that industry sees systems built with SO as more loosely coupled, modifiable, and reusable. OO systems, however, were described as less complex and easier to test.
Current approaches for enterprise architecture lack analytical instruments for cyclic evaluations of business and system architectures in real business enterprise system environments. This impedes the broad use of enterprise architecture methodologies. Furthermore, the permanent evolution of systems quickly desynchronizes model representation and reality. Therefore, we introduce an approach that complements the existing top-down approach for the creation of enterprise architecture with a bottom-up approach. Enterprise Architecture Analytics exploits the architectural information contained in many infrastructures: by applying Big Data technologies, this information can be used to discover, analyze, and optimize Enterprise Architectures. The increased availability of architectural data also improves the possibilities to verify the compliance of Enterprise Architectures. Architectural decisions are linked to clustered architecture artifacts and categories according to a holistic EAM Reference Architecture with specific architecture metamodels. A specially suited EAM Maturity Framework provides the basis for systematic, analytics-supported assessments of architecture capabilities.
Smart cities are considered data factories that generate an enormous amount of data from various sources. In fact, data is the backbone of any smart service. Therefore, the strategic and beneficial handling of this digital capital is crucial for cities. Some smart city pioneers have already written down their approach to data in the form of data strategies, but what should a city's data strategy include, and how can the goals and measures defined in the strategies be operationalized? This paper addresses these questions by looking closely at the data strategies of cities in Germany and in the top three countries of the EU Digital Economy and Society Index. The in-depth analysis of 8 city data strategies has yielded 11 dimensions that cities should consider in their data strategy: relevance of data, principles, methods, data sharing, technology, data culture, data ethics, organizational structure, data security and privacy, collaborations, and data literacy. In addition, data governance is a concept for putting these 11 strategic dimensions into practice through standardization measures, training programs, and the definition of roles and responsibilities, as well as by developing a data catalog.
While the concepts of object-oriented antipatterns and code smells are prevalent in scientific literature and have been popularized by tools like SonarQube, the research field for service-based antipatterns and bad smells is not as cohesive and organized. The description of these antipatterns is distributed across several publications with no holistic schema or taxonomy. Furthermore, there is currently little synergy between documented antipatterns for the architectural styles SOA and Microservices, even though several antipatterns may hold value for both. We therefore conducted a Systematic Literature Review (SLR) that identified 14 primary studies. 36 service-based antipatterns were extracted from these studies and documented with a holistic data model. We also categorized the antipatterns with a taxonomy and implemented relationships between them. Lastly, we developed a web application for convenient browsing and implemented a GitHub-based repository and workflow for the collaborative evolution of the collection. Researchers and practitioners can use the repository as a reference, for training and education, or for quality assurance.
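To make the idea of a holistic data model for antipatterns more concrete, the sketch below shows what such a record could look like. Field names and the example entry are purely illustrative; they do not reproduce the repository's actual schema or taxonomy.

```python
# Hypothetical sketch of a data model for documenting service-based
# antipatterns. Fields and the example are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Antipattern:
    name: str
    aliases: list[str] = field(default_factory=list)
    description: str = ""
    architectural_styles: list[str] = field(default_factory=list)  # e.g. ["SOA", "Microservices"]
    category: str = ""                                   # taxonomy node it belongs to
    related: list[str] = field(default_factory=list)     # names of related antipatterns
    sources: list[str] = field(default_factory=list)     # primary studies documenting it

if __name__ == "__main__":
    ap = Antipattern(
        name="Shared Persistence",
        description="Multiple services read and write the same database.",
        architectural_styles=["SOA", "Microservices"],
        category="Data handling",
        related=["Chatty Service"],
        sources=["Study-07"],
    )
    print(ap.name, "-", ap.category)
```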
The benefits of urban data cannot be realized without a political and strategic view of data use. A core concept within this view is data governance, which aligns strategy in data-relevant structures and entities with data processes, actors, architectures, and overall data management. Data governance is not a new concept and has long been addressed by scientists and practitioners from an enterprise perspective. In the urban context, however, data governance has only recently attracted increased attention, despite the unprecedented relevance of data in the advent of smart cities. Urban data governance can create semantic compatibility between heterogeneous technologies and data silos and connect stakeholders by standardizing data models, processes, and policies. This research provides a foundation for developing a reference model for urban data governance, identifies challenges in dealing with data in cities, and defines factors for the successful implementation of urban data governance. To obtain the best possible insights, the study carries out qualitative research following the design science research paradigm, conducting semi-structured expert interviews with 27 municipalities from Austria, Germany, Denmark, Finland, Sweden, and the Netherlands. The subsequent data analysis based on cognitive maps provides valuable insights into urban data governance. The interview transcripts were transferred and synthesized into comprehensive urban data governance maps to analyze entities and complex relationships with respect to the current state, challenges, and success factors of urban data governance. The findings show that each municipal department defines data governance separately, with no uniform approach. Given cultural factors, siloed data architectures have emerged in cities, leading to interoperability and integrability issues. A city-wide data governance entity in a cross-cutting function can be instrumental in breaking down silos in cities and creating a unified view of the city’s data landscape. The further identified concepts and their mutual interaction offer a powerful tool for developing a reference model for urban data governance and for the strategic orientation of cities on their way to data-driven organizations.
Autonomous driving is becoming the next big digital disruption in the automotive industry. However, the possibility of integrating autonomous driving vehicles into current transportation systems not only involves technological issues but also requires the acceptance and adoption of users. Therefore, this paper develops a conceptual model for user acceptance of autonomous driving vehicles. The corresponding model is tested through a standardized survey of 470 respondents in Germany. Finally, the findings are discussed in relation to the current developments in the automotive industry, and recommendations for further research are given.
Many start-ups are in search of cooperation partners to develop their innovative business models. In response, incumbent firms are introducing increasingly more cooperation systems to engage with start-ups. However, many of these cooperations end in failure. Although qualitative studies on cooperation models have tried to improve the effectiveness of incumbent start-up strategies, only a few have empirically examined start-up cooperation behavior. Considering the lack of adequate measurement models in current research, this paper focuses on developing a multi-item scale on cooperation behavior of start-ups, drawing from a series of qualitative and quantitative studies. The resultant scale contributes to recent research on start-up cooperation and provides a framework to add an empirical perspective to current research.
Container virtualization evolved into a key technology for deployment automation in line with the DevOps paradigm. Whereas container management systems facilitate the deployment of cloud applications by employing container-based artifacts, parts of the deployment logic have been applied before to build these artifacts. Current approaches do not integrate these two deployment phases in a comprehensive manner. Limited knowledge of the application software and middleware encapsulated in container-based artifacts leads to maintainability and configuration issues. Besides, the deployment of cloud applications is based on custom orchestration solutions, leading to lock-in problems. In this paper, we propose a two-phase deployment method based on the TOSCA standard. We present integration concepts for TOSCA-based orchestration and deployment automation using container-based artifacts. Our two-phase deployment method enables capturing and aligning all the deployment logic related to a software release, leading to better maintainability. Furthermore, we build a container management system, composed of a TOSCA-based orchestrator on Apache Mesos, to deploy container-based cloud applications automatically.
Automated analysis of review data is concerned with the possibilities of analysing free text and extracting relevant information from it. The work engages with methods of unsupervised learning, with topic modelling at its centre. Techniques known from text-based information retrieval are considered: Latent Semantic Indexing (LSI), probabilistic LSI (pLSI), and Latent Dirichlet Allocation (LDA) are explained and compared. The work shows how LDA was used to obtain a content-level overview of a corpus of one million reviews and to examine it at a finer level of detail. The topic-based analysis is used to generate insights for an opinion mining system that will perform a deeper analysis. The entire process is designed to be fully automated and unsupervised.
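As a minimal sketch of how LDA topic extraction over a review corpus can be set up, the example below uses the widely used gensim library. The tiny corpus, the number of topics, and the naive preprocessing are placeholders; the actual pipeline described in the work (one million reviews, fully automated) is considerably more elaborate.

```python
# Minimal LDA topic-modelling sketch with gensim; corpus and parameters
# are placeholders, not the work's actual pipeline.

from gensim import corpora, models

reviews = [
    "battery life is great and the display is sharp",
    "battery drains fast display too dark",
    "delivery was late packaging damaged",
    "fast delivery nice packaging",
]

texts = [r.lower().split() for r in reviews]              # naive tokenization
dictionary = corpora.Dictionary(texts)                    # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]     # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)                                # top words per topic
```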
Enterprise Architectures (EA) consist of a multitude of architecture elements, which relate to each other in manifold ways. As the change of a single element hence impacts various other elements, mechanisms for architecture analysis are important to stakeholders. The high number of relationships aggravates architecture analysis and makes it a complex yet important task. In practice, EAs are often analyzed using visualizations. This article contributes to the field of visual analytics in enterprise architecture management (EAM) by reviewing how state-of-the-art software platforms in EAM support stakeholders in providing and visualizing the "right" information for decision-making tasks. In a research study, we investigate the collaborative decision-making process in an experiment in which master students use professional EAM tools. We evaluate the students' findings by comparing them with the experience of an enterprise architect.
When forecasting sales figures, not only the sales history but also the future price of a product influences the sales quantity. At first sight, multivariate time series seem to be the appropriate model for this task. Nonetheless, in real life history is not always repeatable, i.e., in the case of sales history there is only one price for a product at a given time. This complicates the design of a multivariate time series. However, for some seasonal or perishable products the price is rather a function of the expiration date than of the sales history. This additional information can help to design a more accurate and causal time series model. The proposed solution uses a univariate time series model but takes the price of a product as a parameter that systematically influences the prediction based on a calculated periodicity. The price influence is computed from historical sales data using correlation analysis and adjustable price ranges to identify products with comparable history. The periodicity is calculated with a novel approach based on data folding and Pearson correlation. Compared to other techniques, this approach is easy to compute and allows presetting the price parameter for predictions and simulations. Tests with data from the Data Mining Cup 2012 as well as artificial data demonstrate better results than established, sophisticated time series methods.
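The sketch below shows one way such a folding-based periodicity estimate could work: the series is folded at candidate period lengths and each candidate is scored by the Pearson correlation between consecutive folds. This is only our reading of the described idea, not the authors' exact algorithm.

```python
# Sketch of periodicity estimation by folding a sales series at candidate
# period lengths and scoring candidates with Pearson correlation between
# consecutive folds. Illustrative interpretation, not the paper's algorithm.

import numpy as np
from scipy.stats import pearsonr

def folded_periodicity(series, min_period=2, max_period=None):
    series = np.asarray(series, dtype=float)
    max_period = max_period or len(series) // 2
    best_period, best_score = None, -np.inf
    for p in range(min_period, max_period + 1):
        n_folds = len(series) // p
        if n_folds < 2:
            break
        folds = series[: n_folds * p].reshape(n_folds, p)   # fold into rows of length p
        # average Pearson correlation between consecutive folds
        scores = [pearsonr(folds[i], folds[i + 1])[0] for i in range(n_folds - 1)]
        score = float(np.nanmean(scores))
        if score > best_score:
            best_period, best_score = p, score
    return best_period, best_score

if __name__ == "__main__":
    weekly = np.tile([5, 7, 6, 8, 12, 20, 15], 8) + np.random.default_rng(0).normal(0, 1, 56)
    print(folded_periodicity(weekly))   # should report a period near 7
```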
For large-scale processes as implemented in organizations that develop software in regulated domains, comprehensive software process models are implemented, e.g., for compliance requirements. Creating and evolving such processes is demanding and requires software engineers having substantial modeling skills to create consistent and certifiable processes. While teaching process engineering to students, we observed issues in providing and explaining models. In this paper, we present an exploratory study in which we aim to shed light on the challenges students face when it comes to modeling. Our findings show that students are capable of doing basic modeling tasks, yet, fail in utilizing models correctly. We conclude that the required skills, notably abstraction and solution development, are underdeveloped due to missing practice and routine. Since modeling is key to many software engineering disciplines, we advocate for intensifying modeling activities in teaching.
Theory and practice of implementing a successful enterprise IoT strategy in the industry 4.0 era
(2021)
Since the arrival of the internet and affordable access to technologies, digital technologies have occupied a growing place in industries, propelling us towards a 4th industrial revolution: Industry 4.0. In today's era of digital upheaval, enterprises are increasingly undergoing transformations that are leading to their digitalization. The traditional manufacturing industry is in the throes of a digital transformation that is accelerated by exponentially growing technologies (e.g., intelligent robots, Internet of Things, sensors, 3D printing). Around the world, enterprises are in a frantic race to implement IoT-based solutions to improve their productivity and innovation, reduce costs, and strengthen their position in international markets. Considering the immense transformative potential that IoT and big data bring to the industrial sector, the adoption of IoT in all industrial systems is a challenge to remain competitive and thus transform the industry into a smart factory. This paper presents a description of the innovation and digitalization process, following the Industry 4.0 paradigm, to implement a successful enterprise IoT strategy.
Theoretical foundation, effectiveness, and design artefact for machine learning service repositories
(2022)
Machine learning (ML) has played an important role in research in recent years. For companies that want to use ML, finding the algorithms and models that fit their business is tedious. A review of the available literature on this problem indicates only a few research papers. Given this gap, the aim of this paper is to design an effective and easy-to-use ML service repository. The corresponding research is based on a multi-vocal literature analysis combined with design science research, addressing three research questions: (1) How is current white and gray literature on ML services structured with respect to repositories? (2) Which features are relevant for an effective ML service repository? (3) How is a prototype for an effective ML service repository conceptualized? Findings are relevant for the explanation of user acceptance of ML repositories. This is essential for corporate practice in order to create and use ML repositories effectively.
Thematic issue on human-centred ambient intelligence: cognitive approaches, reasoning and learning
(2017)
This editorial presents advances on human-centred Ambient Intelligence applications which take into account cognitive issues when modelling users (i.e. stress, attention disorders), and learn users’ activities/preferences and adapt to them (i.e. at home, driving a car). These papers also show AmI applications in health and education, which make them even more valuable for the general society.
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. It is difficult to ensure data quality for the properties and the data structure because the model does not need a schema. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model is provided by using hyper-nodes and hyper-edges, which allow presenting data structures on different abstraction levels. We prove that the model is at least equivalent in expressive power to most popular data models. Therefore, it can be used as a supermodel for model management and data integration. We illustrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, XML model, and RDF Schema.
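To give a flavour of a schema-bound, typed property graph in which node and edge types prescribe the allowed properties and hyper-edges may connect more than two nodes, the sketch below shows a toy implementation. Class names and the checking logic are invented for illustration and do not reproduce the paper's formal definition.

```python
# Toy sketch of a schema-bound typed graph with hyper-edges.
# Hypothetical classes, not the paper's formal Typed Graph Model.

class NodeType:
    def __init__(self, name, properties):
        self.name, self.properties = name, properties      # property name -> Python type

    def validate(self, props):
        for key, value in props.items():
            expected = self.properties.get(key)
            if expected is None or not isinstance(value, expected):
                raise TypeError(f"{self.name}: invalid property {key!r}")

class Node:
    def __init__(self, node_type, **props):
        node_type.validate(props)
        self.type, self.props = node_type, props

class HyperEdge:
    """A typed edge that may connect any number of nodes."""
    def __init__(self, label, nodes, allowed_node_types):
        if not all(n.type.name in allowed_node_types for n in nodes):
            raise TypeError(f"{label}: node type not allowed by schema")
        self.label, self.nodes = label, nodes

if __name__ == "__main__":
    person = NodeType("Person", {"name": str, "age": int})
    group = NodeType("Group", {"name": str})
    alice = Node(person, name="Alice", age=30)
    bob = Node(person, name="Bob", age=25)
    team = Node(group, name="Data Team")
    member_of = HyperEdge("member_of", [alice, bob, team], {"Person", "Group"})
    print(member_of.label, [n.props["name"] for n in member_of.nodes])
```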
The typed graph model
(2020)
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. It is difficult to ensure data quality for the properties and the data structure because the model does not need a schema. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model is provided by using hyper-nodes and hyper-edges, which allow presenting a data structure on different abstraction levels. We demonstrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML model.
The time has come: application of artificial intelligence in small- and medium-sized enterprises
(2022)
Artificial intelligence (AI) is not yet widely used in small- and medium-sized industrial enterprises (SMEs). The reasons for this are manifold and range from not understanding use cases and a lack of trained employees to too little data. This article presents a successful design-oriented case study at a medium-sized company where the described reasons are present. In this study, future demand forecasts are generated based on historical demand data for products at a material number level using a gradient boosting machine (GBM). An improvement of 15% over the status quo (measured by the root mean squared error) could be achieved with rather simple techniques. The motivation, the method, and the first results are presented. In conclusion, challenges are addressed from which practitioners can derive lessons learned and impulses for their own projects.
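The sketch below illustrates the general shape of such a forecast: lagged historical demand serves as features for a gradient boosting regressor, and the model is evaluated with the root mean squared error. Data, lag choice, and hyperparameters are placeholders, not the case study's actual setup.

```python
# Minimal gradient-boosting demand-forecast sketch with scikit-learn.
# Synthetic data and default hyperparameters; not the case study's setup.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

def make_lag_features(series, n_lags=4):
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])   # previous n_lags demand values
        y.append(series[t])              # demand to predict
    return np.array(X), np.array(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demand = 100 + 10 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 3, 120)

    X, y = make_lag_features(demand, n_lags=4)
    split = int(len(X) * 0.8)            # time-ordered train/test split
    model = GradientBoostingRegressor(random_state=0)
    model.fit(X[:split], y[:split])

    pred = model.predict(X[split:])
    rmse = np.sqrt(mean_squared_error(y[split:], pred))
    print(f"RMSE: {rmse:.2f}")
```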
The tale of 1000 cores: an evaluation of concurrency control on real(ly) large multi-socket hardware
(2020)
In this paper, we set out to revisit the results of "Staring into the Abyss [...] of Concurrency Control with [1000] Cores" and analyse in-memory DBMSs on today's large hardware. Despite the original assumption of the authors, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware made its way into production data centres. Hence, we follow up on this prior work with an evaluation of the characteristics of concurrency control schemes on real production multi-socket hardware with 1568 cores. To our surprise, we made several interesting findings which we report on in this paper.
The success of an autonomous robotic system is influenced by several interdependent factors that are not easily identifiable. This paper sets out to lay the foundation of a new integrated approach in order to deeply examine all the parameters and understand their contribution to success. After introducing the problem, two cutting-edge autonomous systems for the process of unloading containers are presented. Then the STIC analysis, a recently developed method for modelling and interpreting all the parameters, is introduced. The preliminary results of applying this methodology to a first case study, based on one of the two systems available to the authors, are briefly presented. Future research is recommended in order to prove that this methodology is the only way to efficiently and effectively mitigate the risk that stops potential users from investing in autonomous systems in the logistics sector.
Context: Development of software intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want by direct customer feedback and usage behaviour observation. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and the integration of experiment results in both the product development cycle and the software development process.
Due to rapidly changing technologies and business contexts, many products and services are developed under high uncertainties. It is often impossible to predict customer behaviors and outcomes upfront. Therefore, product and service developers must continuously find out what customers want, requiring a more experimental mode of management and appropriate support for continuously conducting experiments. We have analytically derived an initial model for continuous experimentation from prior work and matched it against empirical case study findings from two startup companies. We examined the preconditions for setting up an experimentation system for continuous customer experiments. The resulting RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing) illustrates the building blocks required for such a system and the necessary infrastructure. The major findings are that a suitable experimentation system requires the ability to design, manage, and conduct experiments, create so-called minimum viable products or features, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and integration of experiment results in the product development cycle, software development process, and business strategy. This summary refers to the article The RIGHT Model for Continuous Experimentation, published in the Journal of Systems and Software [Fa17].
The relevance of technology knowledge in digital transformation, especially in small and medium-sized enterprises (SMEs) that are still largely dependent on physical human capital, has become increasingly obvious. This is due to the rapid revolution in the business environment coupled with a growing number of real-world examples of firms disrupted by advances in technological knowledge. Consequently, we find it progressively vital for SMEs to spot and mitigate threats and to take advantage of opportunities arising from the dynamism of digital transformation.
Our study aims at exploring the relevance of technology knowledge in SMEs for digital transformation, in order to uncover the opportunities, roadmaps, and models that SMEs can take advantage of in the digital transformation to gain a competitive edge.
We conclude that, despite the relevance of technology knowledge for digital transformation and its low costs and accessibility, SMEs have yet to realize the full potential of technological knowledge. This is mainly because technologies appear, change, and vanish so rapidly in the digital age that gaining a proper understanding without dedicated resources is extremely difficult for SMEs, making them less competitive than incumbent large firms in the market.
Public transport maps are typically designed in a way to support route finding tasks for passengers, while they also provide an overview about stations, metro lines, and city-specific attractions. Most of those maps are designed as a static representation, maybe placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views for the dynamic passenger data on different levels of temporal granularity. Moreover, we also allow extra statistical information in form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We also integrated a graph-based view on user-selected routes, a way to interactively compare those routes, an attribute- and property-driven automatic computation of specific routes for one map as well as for all available maps in our repertoire, and finally, the most important sights in each city are included as extra information that can be added to a user-selected route. We illustrate the usefulness of our interactive visualization and map navigation system by applying it to the railway system of Hamburg in Germany while also taking into account the extra passenger data. As a further indication of the usefulness of the interactively enhanced metro maps, we conducted a controlled user experiment with 20 participants.
Context: Organizations are increasingly challenged by high market dynamics, rapidly evolving technologies and shifting user expectations. In consequence, many organizations are struggling with their ability to provide reliable product roadmaps by applying traditional roadmapping approaches. Currently, many companies are seeking opportunities to improve their product roadmapping practices and strive for new roadmapping approaches. A typical first step towards advancing the roadmapping capabilities of an organization is to assess the current situation. Therefore, the so-called maturity model DEEP for assessing the product roadmapping capabilities of companies operating in dynamic and uncertain environments has been developed and published by the authors.
Objective: The aim of this article is to conduct an initial validation of the DEEP model in order to understand its applicability better and to see if important concepts are missing. In addition, the aim of this article is to evolve the model based on the findings from the initial validation.
Method: The model has been given to practitioners such as product managers with the request to perform a self-assessment of the current product roadmapping practices in their company. Afterwards, interviews with each participant have been conducted in order to gain insights.
Results: The initial validation revealed that some of the stages of the model need to be rearranged and minor usability issues were found. The overall structure of the model was well received. The study resulted in the development of the version 1.1 of the DEEP product roadmap maturity model which is also presented in this article.
Steadily growing amounts of research material in a variety of databases, repositories, and clouds make academic content harder than ever to discover. Finding adequate material for one's own research, however, is essential for every researcher. Based on recent developments in the field of artificial intelligence and the identified digital capabilities of future universities, a change in the basic work of academic research is predicted. This study outlines how artificial intelligence could simplify academic research at a digital university. Today's studies in the field of AI showcase its true potential and its commanding impact on academic research.
Internet of Things innovations and the industrial internet are increasingly becoming decisive factors of future success for companies. Manufacturing-oriented SMEs in particular will face the challenge of developing innovative technology-driven business models alongside technology innovations in this field, which will be essential for future competitiveness. Failing to develop these technology-driven business models in an internationally highly competitive environment will have a serious impact both on companies and on society. Hence, securing the economic stability and success of these technology-driven business models is an indispensable task. To identify challenges for innovative industrial internet business models, it is first necessary to understand what the industrial internet means to the leading parties and to the applying companies and start-ups in the field. Second, challenges from general business model development are outlined. In a third step, risks and challenges in business model development are discussed with regard to the special characteristics of technology-driven business models in the context of the industrial internet and the important role of the technological key component of the business model. In particular, the capability to deal with an integrated consideration of the inseparably linked economic and technological dimensions of these business models is questioned. Fourth, the specific challenges for industrial internet business models are derived. On the basis of these results, it is also discussed what might be done to handle these challenges successfully, with the goal of turning them into chances. The need for future research on the integration of the risk management perspective into the development of these technology-driven business models is derived. This will help established companies and start-ups to realize great technological innovations for the industrial internet through sound and successful innovative business models.