Software and system development is complex and diverse, and a multitude of development approaches are used and combined with each other to address the manifold challenges companies face today. To study the current state of the practice and to build a sound understanding of the utility of different development approaches and their application to modern software system development, we launched the HELENA initiative in 2016. This paper introduces the 2nd HELENA workshop and provides an overview of the current project state. In the workshop, six teams present initial findings from their regions, impulse talks are given, and further steps of the HELENA roadmap are discussed.
The digitization of factories will be a significant issue for the 2020s. New scenarios are emerging to increase the efficiency of production lines inside the factory, based on a new generation of robots’ collaborative functions. Manufacturers are moving towards data-driven ecosystems by leveraging product lifecycle data from connected goods. Energy-efficient communication schemes, as well as scalable data analytics, will support these various data collection scenarios. With augmented reality, new remote services are emerging that facilitate the efficient sharing of knowledge in the factory. Future communication solutions should ensure transparent, real-time, and secure connectivity between the various production sites spread worldwide and new players in the value chain (e.g., suppliers, logistics). Industry 4.0 brings more intelligence and flexibility to production, resulting in more lightweight equipment and thus better ergonomics. 5G will guarantee real-time transmissions with latencies of less than 1 ms, providing manufacturers with new possibilities to collect data and trigger actions automatically.
A 3D face modelling approach for pose-invariant face recognition in a human-robot environment
(2017)
Face analysis techniques have become a crucial component of human-machine interaction in the fields of assistive and humanoid robotics. However, the variations in head pose that arise naturally in these environments are still a great challenge. In this paper, we present a real-time capable 3D face modelling framework for 2D in-the-wild images that is applicable for robotics. The fitting of the 3D Morphable Model is based exclusively on automatically detected landmarks. After fitting, the face can be corrected in pose and transformed back to a frontal 2D representation that is more suitable for face recognition. We conduct face recognition experiments with non-frontal images from the MUCT database and uncontrolled, in-the-wild images from the PaSC database, the most challenging face recognition database to date, showing improved performance. Finally, we present our SCITOS G5 robot system, which incorporates our framework as a means of image pre-processing for face analysis.
The main aim of the research presented in this manuscript is to compare the results of objective and subjective measurement of sleep quality for older adults (65+) in the home environment. A total of 73 nights were evaluated in this study. A device placed under the mattress was used to obtain objective measurement data, and a common question on perceived sleep quality was asked to collect the subjective sleep quality level. The achieved results confirm the correlation between objective and subjective measurement of sleep quality, with an average standard deviation equal to 2 of 10 possible quality points.
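The agreement between the two measurement types described above can be quantified with a correlation coefficient. The following sketch computes a Pearson correlation between hypothetical nightly scores; the numbers are illustrative only and not from the study.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical nightly sleep-quality scores on a 10-point scale:
objective = [7, 5, 8, 6, 9, 4]    # from the under-mattress device
subjective = [6, 5, 7, 7, 9, 3]   # from the perceived-quality question
r = pearson(objective, subjective)
```

A value of `r` close to 1 would indicate that the device and the self-report rank nights similarly.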
In recent years, artificial intelligence (AI) has increasingly become a relevant technology for many companies. While there are a number of studies that highlight challenges and success factors in the adoption of AI, there is a lack of guidance for firms on how to approach the topic in a holistic and strategic way. The aim of this study is therefore to develop a conceptual framework for corporate AI strategy. To address this aim, a systematic literature review of a wide spectrum of AI-related research is conducted, and the results are analyzed based on an inductive coding approach. An important conclusion is that companies should consider diverse aspects when formulating an AI strategy, ranging from technological questions to corporate culture and human resources. This study contributes to knowledge by proposing a novel, comprehensive framework to foster the understanding of crucial aspects that need to be considered when using the emerging technology of AI in a corporate context.
A hybrid deep registration of MR scans to interventional ultrasound for neurosurgical guidance
(2021)
Despite the recent advances in image-guided neurosurgery, reliable and accurate estimation of the brain shift remains one of the key challenges. In this paper, we propose an automated multimodal deformable registration method using hybrid learning-based and classical approaches to improve neurosurgical procedures. Initially, the moving and fixed images are aligned using classical affine transformation (MINC toolkit), and then the result is provided to the convolutional neural network, which predicts the deformation field using backpropagation. Subsequently, the moving image is transformed using the resultant deformation into a moved image. Our model was evaluated on two publicly available datasets: the retrospective evaluation of cerebral tumors (RESECT) and brain images of tumors for evaluation (BITE). The mean target registration errors have been reduced from 5.35 ± 4.29 to 0.99 ± 0.22 mm in the RESECT and from 4.18 ± 1.91 to 1.68 ± 0.65 mm in the BITE. Experimental results showed that our method improved the state-of-the-art in terms of both accuracy and runtime speed (170 ms on average). Hence, the proposed method provides a fast runtime for 3D MRI to intra-operative US pairs in a GPU-based implementation, which shows promise for its applicability in assisting neurosurgical procedures by compensating for brain shift.
While several service-based maintainability metrics have been proposed in the scientific literature, reliable approaches to automatically collect these metrics are lacking. Since static analysis is complicated for decentralized and technologically diverse microservice-based systems, we propose a dynamic approach to calculate such metrics from runtime data via distributed tracing. The approach focuses on simplicity, extensibility, and broad applicability. As a first prototype, we implemented a Java application with a Zipkin integrator, 23 different metrics, and five export formats. We demonstrated the feasibility of the approach by analyzing the runtime data of an example microservice-based system. During an exploratory study with six participants, 14 of the 18 services were invoked via the system’s web interface. For these services, all metrics were calculated correctly from the generated traces.
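To illustrate the general idea of deriving coupling metrics from trace data, the sketch below computes, for each service, how many distinct other services invoke it. The span format and service names are simplified assumptions for illustration, not the prototype's actual data model.

```python
from collections import defaultdict

# Hypothetical, simplified trace spans: (caller service, callee service).
spans = [
    ("gateway", "orders"),
    ("gateway", "users"),
    ("orders", "users"),
    ("orders", "inventory"),
    ("gateway", "orders"),
]

def distinct_callers(spans):
    """Per service: number of distinct other services that invoke it,
    a simple coupling indicator derivable from runtime traces."""
    callers = defaultdict(set)
    for caller, callee in spans:
        callers[callee].add(caller)
    return {svc: len(cs) for svc, cs in callers.items()}

coupling = distinct_callers(spans)
```

Here `users` would score 2 (called by `gateway` and `orders`), flagging it as the most depended-upon service in this toy trace.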
Sleep quality and, in general, behavior in bed can be detected using a sleep state analysis. These results can help a subject to regulate sleep and recognize different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison to the leading standard measuring system, polysomnography (PSG), the system proposed in this project is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides this fact, they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor, and does so in a non-invasive way. The sensor used is a pressure sensor, which is low-cost and can be used for commercial purposes. The system was tested by carrying out an experiment that recorded the sleep process of a subject. These recordings showed the potential for classification of breathing rate and body movements. Although previous research shows the use of pressure sensors in recognizing posture and breathing, the sensors have mostly been positioned between the mattress and the bedsheet. This project, however, shows an innovative way to position the sensors under the mattress.
Context: Many companies are facing an increasingly dynamic and uncertain market environment, making traditional product roadmapping practices no longer sufficiently applicable. As a result, many companies need to adapt their product roadmapping practices to continue operating successfully in today’s dynamic market environment. However, transforming product roadmapping practices is a difficult process for organizations, and existing literature offers little help on how to accomplish it.
Objective: The objective of this paper is to present a product roadmap transformation approach for organizations to help them identify appropriate improvement actions for their roadmapping practices using an analysis of their current practices.
Method: Based on an existing assessment procedure for evaluating product roadmapping practices, the first version of a product roadmap transformation approach was developed in workshops with company experts. The approach was then given to eleven practitioners and their perceptions of the approach were gathered through interviews.
Results: The result of the study is a transformation approach consisting of a process describing what steps are necessary to adapt the currently applied product roadmapping practice to a dynamic and uncertain market environment. It also includes recommendations on how to select areas for improvement and two empirically based mapping tables. The interviews with the practitioners revealed that the product roadmap transformation approach was perceived as comprehensible, useful, and applicable. Nevertheless, we identified potential for improvements, such as a clearer presentation of some processes and the need for more improvement options in the mapping tables. In addition, minor usability issues were identified.
The investigation of stress requires distinguishing between stress caused by physical activity and stress caused by psychosocial factors. The behaviour of the heart in response to stress and physical activity is very similar if the set of monitored parameters is reduced to one. Currently, the differentiation remains difficult, and methods that only use the heart rate are not able to differentiate between stress and physical activity without additional sensor data input. The approach focuses on methods that generate signals providing characteristics useful for detecting stress, physical activity, no activity, and relaxation.
The basic idea behind a wearable robotic grasp assistance system is to support people who suffer from severe motor impairments in daily activities. Such a system needs to act mostly autonomously and according to the user’s intent. Vision-based hand pose estimation could be an integral part of a larger control and assistance framework. In this paper, we evaluate the performance of egocentric monocular hand pose estimation for a robot-controlled hand exoskeleton in a simulation. For hand pose estimation we adopt a Convolutional Neural Network (CNN). We train and evaluate this network with computer graphics created by our own data generator. In order to guide further design decisions, we focus in our experiments on two egocentric camera viewpoints tested on synthetic data with the help of a 3D-scanned hand model, with and without an exoskeleton attached to it. We observe that hand pose estimation with a wrist-mounted camera performs more accurately than with a head-mounted camera in the context of our simulation. Further, a grasp assistance system attached to the hand alters visual appearance and can improve hand pose estimation. Our experiment provides useful insights for the integration of sensors into a context-sensitive analysis framework for intelligent assistance.
The Internet of Things, enterprise social networks, adaptive case management, mobility systems, analytics for big data, and cloud services environments are emerging to support smart connected products and services and the digital transformation. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and related distributed information systems with service-oriented enterprise architectures. We are investigating mechanisms for flexible adaptation and evolution for the next digital enterprise architecture systems in the context of the digital transformation. Our aim is to support flexibility and agile transformation for both business and related enterprise systems through adaptation and dynamic evolution of digital enterprise architectures. The present research paper investigates digital transformations of business and IT and integrates fundamental mappings between adaptable digital enterprise architectures and service-oriented information systems. We put a spotlight on the example domain of the Internet of Things.
Data analysis is becoming increasingly important for pursuing organizational goals, especially in the context of Industry 4.0, where a wide variety of data is available. Numerous challenges arise here, especially when using unstructured data. However, this subject has received little research attention so far. This research paper addresses this gap, which is relevant for both science and practice. In a study, three major challenges of using unstructured data were identified: analytical know-how, data issues, and variety. Additionally, measures to improve the analysis of unstructured data in the Industry 4.0 context are described. The paper thus provides empirical insights into challenges and potential measures when analyzing unstructured data. The findings are also presented in a framework. Hence, the next steps of the research project and future research points become apparent.
The current advancement of Artificial Intelligence (AI) combined with other digitalization efforts significantly impacts service ecosystems. Artificial intelligence has a substantial impact on new opportunities for the co-creation of value and the development of intelligent service ecosystems. Motivated by experiences and observations from digitalization projects, this paper presents new methodological perspectives and experiences from academia and practice on architecting intelligent service ecosystems and explores the impact of artificial intelligence through real cases supporting an ongoing validation. Digital enterprise architecture models serve as an integral representation of business, information, and technological perspectives of intelligent service-based enterprise systems to support management and development. This paper focuses on architectural models for intelligent service ecosystems, showing the fundamental business mechanism of AI-based value co-creation together with the corresponding digital architecture and management models, and presents the key architectural model perspectives for the development of intelligent service ecosystems.
Presently, many companies are transforming their strategy and product base, as well as their culture, processes, and information systems, to become more digital or to strive for digital leadership. In recent years, new business opportunities have appeared that use the potential of the Internet and related digital technologies, such as the Internet of Things, services computing, cloud computing, edge and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. This has a strong impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open-world composition and evolution of micro-granular system architectures defines the moving context for adaptable systems. We focus on a continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, as part of a new digital enterprise architecture for service-dominant digital products.
Excellence in IT is a key enabler for the digital transformation of enterprises. To realize the vision of digital enterprises, it is necessary to cope with changing business requirements and to align business and IT. In order to evaluate the contribution of enterprise architecture management (EAM) to these goals, our paper explores the impact of various factors on the perceived benefit of EAM in enterprises. Based on the literature, we build an empirical research model. It is tested with empirical data from European EAM experts using a structural equation modelling approach. It is shown that changing business requirements, IT-business alignment, the complexity of the information technology infrastructure, as well as the enterprise architecture knowledge of information technology employees, are crucial impact factors for the perceived benefit of EAM in enterprises.
An ongoing challenge today is to lower, through individual support, the impact that dysfunctionality has on quality of life. Against the background of an aging society and continuously rising costs of care, a holistic solution is needed. This solution must integrate individual needs and preferences, locally available possibilities, regional conditions, and professional and informal caregivers, and it must provide the flexibility to implement future requirements. The proposed model is the result of a common initiative to overcome the major obstacles and to center a solution on the individual needs caused by dysfunctionality.
The amount of image data has been rising exponentially over the last decades due to numerous trends like social networks, smartphones, automotive, biology, medicine, and robotics. Traditionally, file systems are used as storage. Although they are easy to use and can handle large data volumes, they are suboptimal for efficient sequential image processing because their data organisation is limited to single images. Database systems, and especially column stores, support more structured storage and access methods at the raw data level for entire series.
In this paper we propose definitions of various layouts for the efficient storage of raw image data and metadata in a column store. These schemes are designed to improve the runtime behaviour of image processing operations. We present a tool called the column-store Image Processing Toolbox (cIPT), which allows the easy combination of data layouts and operations for different image processing scenarios.
The experimental evaluation of a classification task on a real-world image dataset indicates a performance increase of up to 15x on a column store compared to a traditional row store (PostgreSQL), while space consumption is reduced by 7x. With these results, cIPT provides the basis for a future mature database feature.
New business concepts such as Enterprise 2.0 foster the use of social software in enterprises. Especially social production significantly increases the amount of data in the context of business processes. Unfortunately, these data are still an unearthed treasure in many enterprises. Due to advances in data processing such as Big Data, the exploitation of context data becomes feasible. To provide a foundation for the methodical exploitation of context data, this paper introduces a classification based on two classes: intrinsic and extrinsic data.
Indoor localization systems are becoming more and more important with the digitalization of the industrial sector. Sensor data such as the current position of machines, transport vehicles, goods, or tools represent an essential component of cyber-physical production systems (CPPS). However, due to the high costs of these sensors, they are not widespread and are used mainly in special scenarios. Optical indoor positioning systems (OIPS) based on cameras, however, have certain advantages due to their technological specifications. In this paper, the application scenarios and requirements as well as their characteristics are presented, and a classification approach for OIPS is introduced.
While many maintainability metrics have been explicitly designed for service-based systems, tool-supported approaches to automatically collect these metrics are lacking. Especially in the context of microservices, decentralization and technological heterogeneity may pose challenges for static analysis. We therefore propose the modular and extensible RAMA approach (RESTful API Metric Analyzer) to calculate such metrics from machine-readable interface descriptions of RESTful services. We also provide prototypical tool support, the RAMA CLI, which currently parses the formats OpenAPI, RAML, and WADL and calculates 10 structural service-based metrics proposed in scientific literature. To make RAMA measurement results more actionable, we additionally designed a repeatable benchmark for quartile-based threshold ranges (green, yellow, orange, red). In an exemplary run, we derived thresholds for all RAMA CLI metrics from the interface descriptions of 1,737 publicly available RESTful APIs. Researchers and practitioners can use RAMA to evaluate the maintainability of RESTful services or to support the empirical evaluation of new service interface metrics.
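The quartile-based threshold idea described above can be sketched in a few lines: given a distribution of metric values across many APIs, the quartiles split it into four colored bands. The metric name and values below are hypothetical, not taken from the RAMA benchmark.

```python
from statistics import quantiles

def threshold_bands(values):
    """Derive green/yellow/orange/red bands from the quartiles of a
    metric's distribution (assuming here that higher values are worse)."""
    q1, q2, q3 = quantiles(values, n=4)  # default 'exclusive' method
    return {
        "green": (min(values), q1),
        "yellow": (q1, q2),
        "orange": (q2, q3),
        "red": (q3, max(values)),
    }

# Hypothetical per-API values of a structural size metric (e.g. operations per API):
ops_per_api = [3, 5, 6, 8, 10, 12, 15, 20, 24, 40]
bands = threshold_bands(ops_per_api)
```

An API whose metric value falls into the `red` band would then be flagged as a maintainability outlier relative to the benchmark population.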
The relevance of Robotic Process Automation (RPA) has increased over the last few years. Combining RPA with Artificial Intelligence (AI) can further enhance the business value of the technology. The aim of this research was to analyze applications, terminology, benefits, and challenges of combining the two technologies. A total of 60 articles were analyzed in a systematic literature review to evaluate the aforementioned areas. The results show that by adding AI, RPA applications can be used in more complex contexts, it is possible to minimize the human factor during the development process, and AI-based decision-making can be integrated into RPA routines. This paper also presents a current overview of the terminology used. Moreover, it shows that integrating AI can surface previously unseen challenges in RPA projects, but also brings a lot of new benefits. Based on the outcome, it is concluded that the topic offers a lot of potential, but further research and development are required. The results of this study help researchers to gain an overview of the state of the art in combining RPA and AI.
The blockchain technology represents a decentralized database that stores information securely in immutable data blocks. Regarding supply chain management, these characteristics offer potentials in increasing supply chain transparency, visibility, automation, and efficiency. In this context, first token-based mapping approaches exist to transfer certain manufacturing processes to the blockchain, such as the creation or assembly of parts as well as their transfer of ownership. However, the decentralized and immutable structure of blockchain technology also creates challenges when applying these token-based approaches to dynamic manufacturing processes. As a first step, this paper investigates existing mapping approaches and exemplifies weaknesses regarding their suitability for products with changeable configurations. Secondly, a concept is proposed to overcome these weaknesses by introducing logically coupled tokens embedded into a flexible smart contract structure. Finally, a concept for a token-based architecture is introduced to map manufacturing processes of products with changeable configurations.
Platforms and their surrounding ecosystems are becoming increasingly important components of many companies' strategies. Artificial Intelligence, in particular, has created new opportunities to create and develop ecosystems around the platform. However, there is not yet a methodology to systematically develop these new opportunities for enterprise development strategy. Therefore, this paper aims to lay a foundation for the conceptualization of Artificial Intelligence-based service ecosystems exploiting a Service-Dominant Logic. The basis for conceptualization is the study of value creation and particularly effective network effects. This research investigates the fundamental idea of extending specific digital concepts considering the influence of Artificial Intelligence on the design of intelligent services, along with their architecture of digital platforms and ecosystems, to enable a smooth evolutionary path and adaptability for human-centric collaborative systems and services. The paper explores an extended digital enterprise conceptual model through a combined, iterative, and permanent task of co-creating value between humans and intelligent systems as part of a new idea of cognitively adapted intelligent services.
In recent years, the parallel computing community has shown increasing interest in leveraging cloud resources for executing parallel applications. Clouds exhibit several fundamental features of economic value, like on-demand resource provisioning and a pay-per-use model. Additionally, several cloud providers offer their resources with significant discounts, albeit with limited availability. Such volatile resources are an auspicious opportunity to reduce the costs arising from computations, thus achieving higher cost efficiency. In this paper, we propose a cost model for quantifying the monetary costs of executing parallel applications in cloud environments leveraging volatile resources. Using this cost model, one can determine a configuration of a cloud-based parallel system that minimizes the total costs of executing an application.
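The core of such a cost model can be sketched as picking the configuration with the lowest expected monetary cost. All names, prices, runtimes, and the interruption surcharge below are illustrative assumptions, not the paper's actual model or data.

```python
# Hypothetical configurations: (name, hourly price in $, expected runtime in h,
# overhead factor for re-computation after interruptions of volatile resources).
configs = [
    ("on_demand_4core", 0.40, 10.0, 1.00),
    ("spot_4core",      0.12, 10.0, 1.25),  # discounted but volatile
    ("spot_8core",      0.24,  5.5, 1.25),
]

def total_cost(price, runtime, overhead):
    """Monetary cost of one run: hourly price x effective runtime."""
    return price * runtime * overhead

# Choose the configuration that minimizes total execution cost.
best = min(configs, key=lambda c: total_cost(c[1], c[2], c[3]))
```

Under these toy numbers, the discounted volatile instance wins even after accounting for its interruption overhead, which is the kind of trade-off the cost model is meant to expose.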
Context: Nowadays the market environment is characterized by high uncertainties due to high market dynamics, confronting companies with new challenges in creating and updating product roadmaps. Most companies are still using traditional approaches which typically fail in such environments. Therefore, companies are seeking opportunities for new product roadmapping approaches.
Objective: This paper presents good practices to help companies better understand what factors are required to conduct successful product roadmapping in a dynamic and uncertain market environment.
Method: Based on a grey literature review, essential aspects for conducting product roadmapping in a dynamic and uncertain market environment were identified. Expert workshops were then held with two researchers and three practitioners to develop best practices and the proposed approach for an outcome-driven roadmap. These results were then given to another set of practitioners and their perceptions were gathered through interviews.
Results: The study results in the development of nine good practices that provide practitioners with insights into what aspects are crucial for product roadmapping in a dynamic and uncertain market environment. Moreover, we propose an approach to product roadmapping that provides a flexible structure and focuses on delivering value to the customer and the business. To ensure the latter, this approach consists of the main items: outcome hypothesis, validated outcomes, and discovered outputs.
The Internet of Things (IoT), enterprise social networks, adaptive case management, mobility systems, analytics for big data, and cloud services environments are emerging to support smart connected products and services and the digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing and resilient run-time environments for intelligent business services and related distributed information systems. We are investigating mechanisms for flexible adaptation and evolution for the next digital enterprise architecture systems in the context of the digital transformation. Our aim is to support flexibility and agile transformation for both business and related enterprise systems through adaptation and dynamic evolution of digital enterprise architectures. The present research paper investigates mechanisms for decision case management in the context of multi-perspective explorations of enterprise services and Internet of Things architectures by extending original enterprise architecture reference models with state-of-the-art elements for architectural engineering for digitization and architectural decision support.
Digitization of societies changes the way we live, work, learn, communicate, and collaborate. In the age of digital transformation, IT environments with a large number of rather small structures like the Internet of Things (IoT), microservices, or mobility systems are emerging to support flexible and agile digitized products and services. Adaptable ecosystems with service-oriented enterprise architectures are the foundation for self-optimizing, resilient run-time environments and distributed information systems. The resulting business disruptions affect almost all new information processes and systems in the context of digitization. Our aim is a more flexible and agile transformation of both business and information technology domains, with more flexible enterprise information systems through adaptation and evolution of digital enterprise architectures. The present research paper investigates mechanisms for decision-controlled digitization architectures for the Internet of Things and microservices by evolving enterprise architecture reference models and state-of-the-art elements for architectural engineering for micro-granular systems.
The Internet of Things (IoT) is characterized by many different standards, protocols, and data formats that are often not compatible with each other. Thus, the integration of different heterogeneous IoT components into a uniform IoT setup can be a time-consuming manual task. This lack of interoperability between IoT components has been addressed with different approaches in the past. However, only very few of these approaches rely on machine learning techniques. In this work, we present a new way towards IoT interoperability based on Deep Reinforcement Learning (DRL). In detail, we demonstrate that DRL algorithms that use network architectures inspired by Natural Language Processing (NLP) can learn to control an environment by merely taking as input raw JSON or XML structures that reflect the current state of the environment. Applied to IoT setups, where the current state of a component is often reflected by features embedded in JSON or XML structures and exchanged via messages, our NLP-DRL approach eliminates the need for feature engineering and manually written code for data pre-processing, feature extraction, and decision making.
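To make the "raw JSON as state input" idea concrete, the following sketch flattens an arbitrary JSON state message into a token sequence that an NLP-style encoder (e.g. an embedding layer feeding a recurrent network) could consume. The message content and path-token scheme are hypothetical illustrations, not the paper's actual encoding.

```python
import json

def tokenize_state(raw):
    """Flatten a raw JSON state message into path=value tokens,
    avoiding any device-specific feature engineering."""
    tokens = []

    def walk(node, prefix):
        if isinstance(node, dict):
            for key, value in sorted(node.items()):
                walk(value, prefix + [key])
        elif isinstance(node, list):
            for i, value in enumerate(node):
                walk(value, prefix + [str(i)])
        else:
            tokens.append("/".join(prefix) + "=" + str(node))

    walk(json.loads(raw), [])
    return tokens

# Hypothetical IoT state message:
msg = '{"lamp": {"state": "on", "brightness": 80}, "sensor": {"lux": 312}}'
tokens = tokenize_state(msg)
```

The same function works for any nested message, so adding a new device type changes the token vocabulary but not the pre-processing code.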
Additive Manufacturing (AM) is increasingly used in the industrial sector as a result of continuous development. Within the Production Planning and Control (PPC) system, AM enables an agile response in the area of detailed and process planning, especially for a large number of plants. For this purpose, a concept for a PPC system for AM is presented that takes into account the requirements for integration into the operational enterprise software system. The technical applicability is demonstrated by individual implemented sections. The presented solution approach promises a more efficient utilization of the plants and a more elastic use.
The proposed approach applies current unsupervised clustering approaches in a different, dynamic manner. Instead of taking all the data as input and finding clusters among them, the given approach clusters Holter ECG data (long-term electrocardiography data from a Holter monitor) on a given interval, which enables a dynamic clustering approach (DCA). To this end, advanced clustering techniques based on the well-known Dynamic Time Warping algorithm are used. Given clusters, e.g. on a daily basis, the clusters can be compared by defining cluster shape properties. Doing so gives a measure of variation in unsupervised cluster shapes and may reveal unknown changes in health. Embedding this approach into wearable devices offers advantages over current techniques. On the one hand, users get feedback if the characteristics of their ECG data change unforeseeably over time, which makes early detection possible. On the other hand, cluster properties such as the biggest or smallest cluster may help a doctor in making diagnoses or observing several patients. Further, known processing techniques like stress detection or arrhythmia classification may be applied to the clusters found.
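The similarity measure underlying the clustering above is Dynamic Time Warping, which aligns two time series that differ in timing. A minimal textbook implementation is sketched below; the sample sequences are illustrative, not real ECG data.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) Dynamic Time Warping distance between
    two 1-D sequences, using absolute difference as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two toy signal snippets with the same shape but shifted in time:
x = [0, 1, 2, 1, 0]
y = [0, 0, 1, 2, 1, 0]
```

Because DTW warps the time axis, `x` and `y` have distance zero despite their different lengths, which is exactly why it suits beat-to-beat ECG comparisons where heart rate varies.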
Cloud resources can be dynamically provisioned according to application-specific requirements and are paid for on a per-use basis. This gives rise to a new concept for parallel processing: elastic parallel computations. However, it is still an open research question to what extent parallel applications can benefit from elastic scaling, which requires resource adaptation at runtime and corresponding coordination mechanisms. In this work, we analyze how to address these system-level challenges in the context of developing and operating elastic parallel tree search applications. Based on our findings, we discuss the design and implementation of TASKWORK, a cloud-aware runtime system specifically designed for elastic parallel tree search, which enables the implementation of elastic applications by means of higher-level development frameworks. We show how to implement an elastic parallel branch-and-bound application based on an exemplary development framework and report on our experimental evaluation, which also considers several benchmarks for parallel tree search.
Digital enterprise architecture management in tourism : state of the art and future directions
(2018)
The advance of information technology impacts tourism more than many other industries due to the service character of its products. Most offerings in tourism are immaterial in nature and challenging to coordinate. Therefore, the alignment of IT, strategy, and digitization is of crucial importance to enterprises in tourism. To cope with the resulting challenges, methods for the management of enterprise architectures are necessary. We therefore scrutinize approaches for managing enterprise architectures based on a literature review. We found many areas for future research on the use of enterprise architecture in tourism.
Digitalization increases the pressure on companies to innovate. While current research on digital transformation mostly focuses on technological and management aspects, less attention has been paid to organizational culture and its influence on digital innovations. The purpose of this paper is to identify the characteristics of organizational culture that foster digital innovations. Based on a systematic literature review across three scholarly databases, we initially found 778 articles, which were then narrowed down to a total of 23 relevant articles through a methodical approach. After analyzing these articles, we identify nine characteristics of organizational culture that foster digital innovations: corporate entrepreneurship, digital awareness and necessity of innovations, digital skills and resources, ecosystem orientation, employee participation, agility and organizational structures, error culture and risk-taking, internal knowledge sharing and collaboration, customer and market orientation, as well as open-mindedness and willingness to learn.
Prior to the introduction of AI-based forecast models in the procurement department of an industrial retail company, we assessed the digital skills of the procurement employees and surveyed their attitudes toward a new digital technology. The aim of the survey was to ascertain important contextual factors which are likely to influence the acceptance and the successful use of the new forecast tool. We find that the employees' digital skills are at an intermediate level and that their attitudes toward key aspects of new digital technologies are largely positive. Thus, the conditions for high acceptance and successful use of the models are good, as evidenced by the high intention of the procurement staff to use the models. In line with previous research, we find that the perceived usefulness of a new technology and the perceived ease of use are significant drivers of the willingness to use the new forecast tool.
Digitization is more than using digital technologies to transfer data and perform computations and tasks. Digitization embraces disruptive effects of digital technologies on economy and society. To capture these effects, two perspectives are introduced: the product and the value-creation perspective. In the product perspective, digitization enables the transition from material, static products to interactive and configurable services. In the value-creation perspective, digitization facilitates the transition from centralized, isolated models of value creation to bidirectional, co-creation-oriented approaches of value creation.
Efficient and robust 3D object reconstruction based on monocular SLAM and CNN semantic segmentation
(2019)
Various applications implement SLAM technology, especially in the field of robot navigation. We show the advantage of SLAM technology for independent 3D object reconstruction. To receive a point cloud of every object of interest, devoid of its environment, we leverage deep learning. We utilize recent CNN deep learning research for accurate semantic segmentation of objects. In this work, we propose two fusion methods of CNN-based semantic segmentation and SLAM for the 3D reconstruction of objects of interest in order to obtain more robustness and efficiency. As a major novelty, we introduce a CNN-based masking to focus SLAM only on feature points belonging to each single object. Noisy, complex, or even non-rigid features in the background are filtered out, improving the estimation of the camera pose and the 3D point cloud of each object. Our experiments are constrained to the reconstruction of industrial objects. We present an analysis of the accuracy and performance of each method and compare the two methods, describing their pros and cons.
In the era of digital transformation, the notion of software quality transcends its traditional boundaries, necessitating an expansion to encompass the realms of value creation for customers and the business. Merely optimizing technical aspects of software quality can result in diminishing returns. Product discovery techniques can be seen as a powerful mechanism for crafting products that align with an expanded concept of quality - one that incorporates value creation. Previous research has shown that companies struggle to determine appropriate product discovery techniques for generating, validating, and prioritizing ideas for new products or features to ensure they meet the needs and desires of the customers and the business. For this reason, we conducted a grey literature review to identify various techniques for product discovery. First, the article provides an overview of different techniques and assesses how frequently they are mentioned in the literature review. Second, we mapped these techniques to an existing product discovery process from previous research to provide concrete guidelines for establishing product discovery in their organizations. The analysis shows, among other things, the increasing importance of techniques to structure the problem exploration process and the product strategy process. The results are interpreted regarding the importance of the techniques to practical applications and recognizable trends.
This paper shows how a bed system with an embedded sensor system is able to analyze a person's movement and breathing and to recognize the positions in which the subject lies on the bed during the night, without any additional physical contact. The measurements are performed with sensors placed between the mattress and the frame. An Intel Edison board was used as an endpoint that served as a communication node from the mesh network to an external service. Two nodes and the Intel Edison are attached to the bottom of the bed frame and connected to the sensors.
Glioblastomas are the most aggressive fast-growing primary brain cancers, originating in the glial cells of the brain. Accurate identification of the malignant brain tumor and its sub-regions is still one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation Challenge (BraTS) has been a popular benchmark for automatic brain glioblastoma segmentation algorithms since its initiation. The BraTS 2021 challenge provides the largest multi-parametric MRI (mpMRI) dataset, comprising 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, namely DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that the method can be readily applied clinically, thereby aiding brain cancer prognosis, therapy planning, and therapy response monitoring. A Docker image for reproducing our segmentation results is available online at https://hub.docker.com/r/razeineldin/deepseg21.
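For reference, the Dice similarity coefficient reported above can be computed for two flattened binary segmentation masks as follows; this is a simplified sketch, not the challenge's official evaluation code.

```python
def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1 values: twice the overlap divided by the
    total number of positive voxels in both masks."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    denom = sum(pred) + sum(target)
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / denom if denom else 1.0
```

A perfect prediction yields 1.0, no overlap yields 0.0; the published scores (e.g. 92.00) are this coefficient expressed as a percentage.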
This paper provides an introduction to the topic of enterprise social networks (ESN) and illustrates possible applications, potentials, and challenges for future research. It outlines an analysis of research papers containing a literature overview in the field of ESN. Subsequently, single relevant research papers are analysed and further research potentials derived therefrom. This yields seven promising areas for further research: (1) user behaviour; (2) effects of ESN usage; (3) management, leadership, and governance; (4) value assessment and success measurement; (5) cultural effects; (6) architecture and design of ESN; and (7) theories, research designs and methods. This paper characterises these areas and articulates further research directions.
Context: Nowadays, companies are challenged by increasing market dynamics, rapid changes and disruptive participants entering the market. To survive in such an environment, companies must be able to quickly discover product ideas that meet the needs of both customers and the company and deliver these products to customers. Dual-track agile is a new type of agile development that combines product discovery and delivery activities in parallel, iterative, and cyclical ways. At present, many companies have difficulties in finding and establishing suitable approaches for implementing dual-track agile in their business context.
Objective: In order to gain a better understanding of how product discovery and product delivery can interact with each other and how this interaction can be implemented in practice, this paper aims to identify suitable approaches to dual-track agile.
Method: We conducted a grey literature review (GLR) according to the guidelines of Garousi et al.
Results: Several approaches that support the integration of product discovery with product delivery were identified. This paper presents a selection of these approaches, i.e., the Discovery-Delivery Cycle model, Now-Next-Later Product Roadmaps, Lean Sprints, Product Kata, and Dual-Track Scrum. The approaches differ in their granularity but are similar in their underlying rationales. All approaches aim to ensure that only validated ideas turn into products and thus promise to lead to products that are better received by their users.
Recognition of sleep and wake states is one of the relevant parts of sleep analysis. Performing this measurement in a contactless way increases comfort for the users. We present an approach that evaluates only movement and respiratory signals, which can be measured non-obtrusively, to achieve this recognition. The algorithm is based on multinomial logistic regression and analyses features extracted from the aforementioned signals. These features were identified and developed after fundamental research on the characteristics of vital signals during sleep. The achieved accuracy of 87% with a Cohen's kappa of 0.40 demonstrates the appropriateness of the chosen method and encourages continuing research on this topic.
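The classification step of such an approach can be sketched as follows; the feature weights below are hypothetical stand-ins for coefficients learned offline, not the paper's actual model.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of scores."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict_state(features, weights, classes=("wake", "sleep")):
    """Multinomial logistic regression prediction: score each class as a
    linear combination of the extracted movement/respiration features,
    then pick the most probable class via softmax."""
    scores = [sum(w * f for w, f in zip(ws, features)) + b for ws, b in weights]
    probs = softmax(scores)
    return classes[probs.index(max(probs))], probs

# Hypothetical coefficients (in reality learned from labelled sleep data):
# strong movement pushes towards "wake", regular respiration towards "sleep".
weights = [([2.0, 0.5], 0.0), ([-2.0, 1.0], 0.0)]
label, probs = predict_state([1.0, 0.2], weights)
```

With these illustrative weights, a high movement feature drives the prediction towards the wake class.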
Evaluation of a contactless accelerometer sensor system for heart rate monitoring during sleep
(2024)
The monitoring of a patient's heart rate (HR) is critical in the diagnosis of diseases. It also plays an important role in the detection of sleep disorders. Several techniques have been proposed, including using sensors to record physiological signals that are automatically examined and analysed. This work aims to evaluate a contactless HR monitoring system based on an accelerometer sensor during sleep. For this purpose, the oscillations caused by chest movements during heart contractions are recorded by an installation mounted under the bed mattress. The processing algorithm presented in this paper filters the signals and determines the HR. An average error of about 5 bpm has been documented, i.e., the system can be considered suitable for the intended domain.
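A heavily simplified sketch of such a processing step might look as follows; the moving-average detrending, window size, and peak threshold are illustrative assumptions, not the paper's actual filter design.

```python
def estimate_hr(signal, fs):
    """Estimate heart rate (bpm) from a ballistocardiography-like signal
    sampled at fs Hz: detrend with a moving average, then count local
    maxima above a relative threshold as heartbeats."""
    win = int(fs * 0.5) or 1  # illustrative half-window of 0.5 s
    detrended = []
    for i, x in enumerate(signal):
        lo, hi = max(0, i - win), min(len(signal), i + win + 1)
        detrended.append(x - sum(signal[lo:hi]) / (hi - lo))
    thr = 0.5 * max(detrended)  # illustrative relative threshold
    beats = sum(
        1 for i in range(1, len(detrended) - 1)
        if detrended[i] > thr
        and detrended[i] >= detrended[i - 1]
        and detrended[i] > detrended[i + 1]
    )
    return beats * 60.0 / (len(signal) / fs)
```

Real pipelines would use proper band-pass filtering and more robust peak detection, but the pipeline shape (filter, detect beats, convert to bpm) is the same.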
Intelligent systems and services are the strategic targets of many current digitalization efforts and part of massive digital transformations based on digital technologies with artificial intelligence. Digital platform architectures and ecosystems provide an essential base for intelligent digital systems. The paper raises an important question: Which development paths are induced by current innovations in the field of artificial intelligence and digitalization for enterprise architectures? Digitalization disrupts existing enterprises, technologies, and economies and promotes the architecture of cognitive and open intelligent environments. This has a strong impact on new opportunities for value creation and the development of intelligent digital systems and services. Digital technologies such as artificial intelligence, the Internet of Things, service computing, cloud computing, blockchains, big data with analysis, mobile systems, and social business network systems are essential drivers of digitalization. We investigate the development of intelligent digital systems supported by a suitable digital enterprise architecture. We present methodological advances and an evolutionary path for architectures with an integral service and value perspective to enable intelligent systems and services that effectively combine digital strategies and digital architectures with artificial intelligence.
Today, many companies are adapting their strategy, business models, products, services as well as business processes and information systems in order to expand their digitalization level through intelligent systems and services. The paper raises an important question: What are cognitive co-creation mechanisms for extending digital services and architectures to readjust the usage value of smart services? Typically, extensions of digital services and products and their architectures are manual design tasks that are complex and require specialized, rare experts. The current publication explores the basic idea of extending specific digital artifacts, such as intelligent service architectures, through mechanisms of cognitive co-creation to enable a rapid evolutionary path and better integration of humans and intelligent systems. We explore the development of intelligent service architectures through a combined, iterative, and permanent task of co-creation between humans and intelligent systems as part of a new concept of cognitively adapted smart services. In this paper, we present components of a new platform for the joint co-creation of cognitive services for an ecosystem of intelligent services that enables the adaptation of digital services and architectures.
The fifth mobile communications generation (5G) can lead to a substantial change in companies, enabling the full capability of wireless industrial communication. 5G, with its key features of providing Enhanced Mobile Broadband, Ultra-Reliable and Low-Latency Communication, and Massive Machine Type Communication, will support the implementation of Industry 4.0 applications. In particular, the possibility to set up Non-Public Networks provides the opportunity of 5G communication in factories and ensures sole access to the 5G infrastructure, offering new opportunities for companies to implement innovative mobile applications. Currently, various concepts, ideas, and projects for 5G applications in an industrial environment exist. However, the global rollout of 5G systems is a continuous process based on various stages defined by the 3rd Generation Partnership Project, the global initiative that develops and specifies the 5G telecommunication standard. Accordingly, some services are currently still far from their final performance capability or are not yet implemented. Additionally, research has yet to clarify the general suitability of 5G regarding frequently mentioned 5G use cases. This paper identifies relevant 5G use cases for intralogistics and evaluates their technical requirements regarding their practical feasibility throughout the upcoming 5G specifications.
In the last 20 years, there have been major advances in autonomous robotics. In the IoT (Industry 4.0) context, mobile robots require more intuitive interaction possibilities with humans in order to expand their field of applications. This paper describes a user-friendly setup which enables a person to lead the robot in an unknown environment. The environment has to be perceived by means of sensory input. For realizing a cost- and resource-efficient Follow Me application, we use a single monocular camera as a low-cost sensor. For efficient scaling of our Simultaneous Localization and Mapping (SLAM) algorithm, we integrate an inertial measurement unit (IMU) sensor. With the camera input, we detect and track a person. We propose combining state-of-the-art deep learning with Convolutional Neural Networks (CNNs) and SLAM functionality on the same input camera image. Based on the output, robot navigation is possible. This work presents the specification and workflow for an efficient development of the Follow Me application. Our application's delivered point clouds are also used for surface construction. For demonstration, we use our platform SCITOS G5 equipped with the aforementioned sensors. Preliminary tests show the system works robustly in the wild.
Context: The current transformation of automotive development towards innovation, permanent learning, and adapting to change is directing various foci on the integration of agile methods. Although there have been efforts to apply agile methods in the automotive domain for many years, widespread adoption has not yet taken place.
Goal: This study aims to gain a better understanding of the forces that prevent the adoption of agile methods.
Method: Survey based on 16 semi-structured interviews from the automotive domain. The results are analyzed by means of thematic coding.
Results: Forces that prevent agile adoption are mainly of organizational, technical and social nature and address inertia, anxiety and context factors. Key challenges in agile adoption are related to transforming organizational structures and culture, achieving faster software release cycles without loss of quality, the importance of software reuse in combination with agile practices, appropriate quality assurance measures, and the collaboration with suppliers and other disciplines such as mechanics.
Conclusion: Significant challenges are imposed by specific characteristics of the automotive domain such as high quality requirements and many interfaces to surrounding rigid and inflexible processes. Several means are identified that promise to overcome these challenges.
Framework for integrating intelligent product structures into a flexible manufacturing system
(2023)
Increasing individualisation of products with a high variety and shorter product lifecycles result in smaller lot sizes, increasing order numbers, and rising data and information processing for manufacturing companies. To cope with these trends, integrated management of the products and manufacturing information is necessary through a “product-driven” manufacturing system. Intelligent products that are integrated as an active element within the controlling and planning of the manufacturing process can represent flexibility advantages for the system. However, there are still challenges regarding system integration and evaluation of product intelligence structures. In light of these trends, this paper proposes a conceptual framework for defining, analysing, and evaluating intelligent products using the example of an assembly system. This paper begins with a classification of the existing problems in the assembly and a definition of the intelligence level. In contrast to previous approaches, the analysis of products is expanded to five dimensions. Based on this, a structured evaluation method for a use case is presented. The structure of solving the assembly problem is provided by the use case-specific ontology model. Results are presented in terms of an assignment of different application areas, linking the problem with the target intelligence class and, depending on the intelligence class of the product, suggesting requirements for implementation. The conceptual framework is evaluated by utilising a case study in a learning factory. Here, the model-mix assembly is controlled actively by the workpiece carrier in terms of transferring the variant-specific work instructions to the operator and the collaborative robot (cobot) at the workstations. The resulting system thus enables better exploitation of the potentials through less frequent errors and shorter search times.
Such an implementation has demonstrated that the intelligent workpiece carrier represents an additional part for realising a cyber-physical production system (CPPS).
While the recently emerged microservices architectural style is widely discussed in literature, it is difficult to find clear guidance on the process of refactoring legacy applications. The importance of the topic is underpinned by high costs and effort of a refactoring process which has several other implications, e.g. overall processes (DevOps) and team structure. Software architects facing this challenge are in need of selecting an appropriate strategy and refactoring technique. One of the most discussed aspects in this context is finding the right service granularity to fully leverage the advantages of a microservices architecture. This study first discusses the notion of architectural refactoring and subsequently compares 10 existing refactoring approaches recently proposed in academic literature. The approaches are classified by the underlying decomposition technique and visually presented in the form of a decision guide for quick reference. The review yielded a variety of strategies to break down a monolithic application into independent services. With one exception, most approaches are only applicable under certain conditions. Further concerns are the significant amount of input data some approaches require as well as limited or prototypical tool support.
In recent years, 3D facial reconstructions from single images have garnered significant interest. Most of the approaches are based on 3D Morphable Model (3DMM) fitting to reconstruct the 3D face shape. Concurrently, the adoption of Generative Adversarial Networks (GAN) has been gaining momentum to improve the texture of reconstructed faces. In this paper, we propose a fundamentally different approach to reconstructing the 3D head shape from a single image by harnessing the power of GAN. Our method predicts three maps of normal vectors of the head’s frontal, left, and right poses. We are thus presenting a model-free method that does not require any prior knowledge of the object’s geometry to be reconstructed.
The key advantage of our proposed approach is the substantial improvement in reconstruction quality compared to existing methods, particularly in the case of facial regions that are self-occluded in the input image. Our method is not limited to 3D face reconstruction. It is generic and applicable to multiple kinds of 3D objects. To illustrate the versatility of our method, we demonstrate its efficacy in reconstructing the entire human body.
By delivering a model-free method capable of generating high-quality 3D reconstructions, this paper not only advances the field of 3D facial reconstruction but also provides a foundation for future research and applications spanning multiple object types. The implications of this work have the potential to extend far beyond facial reconstruction, paving the way for innovative solutions and discoveries in various domains.
Stress is recognized as a predominant disease factor, and in the future the costs for treatment will increase. The presented approach tries to detect stress in a very basic and easy-to-implement way, so that the cost of the device and the effort to wear it remain low. The user should benefit from the fact that the system offers an easy interface reporting the status of his or her body in real time. In parallel, the system provides interfaces to pass the obtained data on for further processing and (professional) analyses, in case the user agrees. The system is designed to be used in everyday activities and is not restricted to laboratory use or environments. The implementation of the enhanced prototype shows that the detection of stress and the reporting can be managed using correlation plots and automatic pattern recognition even on a very lightweight microcontroller platform.
Software startups often make assumptions about the problems and customers they are addressing as well as the market and the solutions they are developing. Testing the right assumptions early is a means to mitigate risks. Approaches such as Lean Startup foster this kind of testing by applying experimentation as part of a constant build-measure-learn feedback loop. The existing research on how software startups approach experimentation is very limited. In this study, we focus on understanding how software startups approach experimentation and identify challenges and advantages with respect to conducting experiments. To achieve this, we conducted a qualitative interview study. The initial results show that startups often spend a disproportionate amount of time focusing on creating solutions without testing critical assumptions. The main reasons are a lack of awareness that these assumptions can be tested early and a lack of knowledge and support on how to identify, prioritize, and test these assumptions. However, startups understand the need for testing risky assumptions and are open to conducting experiments.
A fast way to test business ideas and to explore customer problems and needs is to talk to them. Customer interviews help to understand what solutions customers will pay for before investing valuable resources to develop solutions. Customer interviews are a good way to gain qualitative insights. However, conducting interviews can be a difficult procedure and requires specific skills. The current ways of teaching interview skills have significant deficiencies. They especially lack guidance and opportunities to practice. Objective: The goal of this work is to develop and validate a workshop format to teach interview skills for conducting good customer interviews in a practical manner. Method: The research method is based on design science research which serves as a framework. A game-based workshop format was designed to teach interview skills. The approach consists of a half-day, hands-on workshop and is based on an analysis of necessary interview skills. The approach has been validated in several workshops and improved based on learnings from those workshops. Results: Results of the validation show that participants could significantly improve their interview skills while enjoying the game-based exercises. The game-based learning approach supports learning and practicing customer interview skills with playful and interactive elements that encourage greater motivation among participants to conduct interviews.
In a recently developed study programme at Reutlingen University, which focuses on practical orientation, an innovative product with solid company references is to be defined and realised by student teams. On the basis of this product, all subjects of the business engineering study programme “Sustainable Production and Business” are taught. By focusing on three main paths of future skills developed by NextSkills to analyse upcoming social changes, global challenges, and innovation-driven, agile fields of work, the new study programme aims to create responsible leaders who will shape global businesses respectfully. Thereby, different TRIZ tools support students in developing their own products with a focus on sustainability and contribute to the enhancement of future skills. Further, students get to know TRIZ tools in an unbiased way, unburdened by too much theory, and are thus continuously supported in the progressing product development process that accompanies their studies. Hence, students perceive TRIZ on the one hand as a method to develop sustainable products and, on the other hand, as a way to find sustainable solutions for everyday problems. The knowledge and positive experiences gained in this way should then arouse curiosity for the TRIZ class at the end of the study programme, after which students can graduate with a TRIZ Level 1 certificate. Thereby, as many students as possible are introduced to the TRIZ methods, and the TRIZ toolset is spread widely.
The market for indoor positioning systems for a variety of applications has grown strongly in recent years. A wide range of systems is available, varying considerably in terms of accuracy, price and technology used. The suitability of the systems is highly dependent on the intended application. This paper presents a concept to use a single low-cost PTZ camera in combination with fiducial markers for indoor position and orientation determination. The intended use case is to capture a plant layout consisting of position, orientation and unique identity of individual facilities. Important factors to consider for the selection of a camera have been identified and the transformation of the marker pose in camera coordinates into a selectable plant coordinate system is described. The concept is illustrated by an exemplary practical implementation and its results.
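The described transformation of a marker pose from camera coordinates into a plant coordinate system can be sketched in 2-D with a homogeneous transform; the calibration values below are hypothetical, and a real implementation would use the full 3-D pose from the marker detection.

```python
import math

def make_transform(theta, tx, ty):
    """2-D homogeneous transform: rotation by theta, then translation
    by (tx, ty). Encodes the camera's calibrated pose in plant coordinates."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def apply(T, point):
    """Map a point through a homogeneous transform."""
    x, y = point
    v = [x, y, 1.0]
    return tuple(sum(T[r][k] * v[k] for k in range(3)) for r in (0, 1))

# Hypothetical calibration: camera origin at (2, 3) in plant coordinates,
# rotated by 90 degrees relative to the plant axes.
T_cam_to_plant = make_transform(math.pi / 2, 2.0, 3.0)
marker_in_cam = (1.0, 0.0)       # marker position detected in camera frame
marker_in_plant = apply(T_cam_to_plant, marker_in_cam)
```

Chaining such transforms (marker in camera, camera in plant) is the standard way to express each detected facility's position and orientation in the chosen plant coordinate system.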
In the last decades, several driving systems were developed to improve the driving behaviour in energy efficiency or safety. However, these driving systems cover either the area of energy-efficiency or safety. Furthermore, they do not consider the stress level of the driver when showing a recommendation, although stress can lead to an unsafe or inefficient driving behaviour. In this paper, an approach is presented to consider the driver stress level in a driving system for safe and energy-efficient driving behaviour. The driving system tries to suppress a recommendation when the driver is in stress in order not to stress the driver additionally with recommendations in a stressful driving situation. This can lead to an increase in the road safety and in the user acceptance of the driving system, as the driver is not getting bothered or stressed by the driving system.
The evaluation of the approach showed that the driving system is able to show recommendations to the driver while also reacting to a high stress level by suppressing recommendations in order not to stress the driver additionally.
Modern component-based architectural styles, e.g., microservices, enable developing the components independently from each other. However, this independence can result in problems when it comes to managing issues, such as bugs, as developer teams can freely choose their technology stacks, such as issue management systems (IMSs), e.g., Jira, GitHub, or Redmine. In the case of a microservice architecture, if an issue of a downstream microservice depends on an issue of an upstream microservice, this must be both identified and communicated, and the downstream service's issues should link to its causing issue. However, agile project management today requires efficient communication, which is why more and more teams are communicating through comments in the issues themselves. Unfortunately, IMSs are not integrated with each other; thus, semantically linking these issues is not supported, and identifying such issue dependencies across different IMSs is time-consuming and requires manual searching in multiple IMS technologies. This results in many context switches and prevents developers from being focused and getting things done. Therefore, in this paper, we present a concept for seamlessly integrating different IMS technologies into each other and providing a better architectural context. The concept is based on augmenting the websites of issue management systems through a browser extension. We validate the approach with a prototypical implementation for the Chrome browser. For evaluation, we conducted expert interviews, which confirmed that the presented approach provides significant advantages for managing issues of agile microservice architectures.
The increasing heterogeneity of students at German Universities of Applied Sciences and the growing importance of digitization call for a rethinking of teaching and learning within higher education. In the coming years, changing the learning ecosystem by developing and reflecting upon new teaching and learning techniques using methods of digitalization will be both highly relevant and very challenging. The following article introduces two different learning scenarios, which exemplify the implementation of new educational models that allow discontinuity of time and place, technology and process in teaching and learning. Within a blended learning approach, the first learning scenario aims at adapting and individualizing the knowledge transfer in the course Foundations of Computer Science by providing knowledge individually and situation-specifically. The second learning scenario proposes a web-based tool to facilitate digital learning environments and thus digital learning communities and the possibility of computer-supported learning. The overall aim of both learning scenarios is to enhance learning for diverse groups by providing a different smart learning ecosystem, stepping away from a teacher-centered towards a student-centered approach. Both learning scenarios exemplify the educational vision of Reutlingen University: its development into an interactive university.
Companies are constantly changing their business process models. In team environments, different versions of a process model are created at the same time. These versions of a process model need to be merged from time to time to consolidate changes and create a new common version.
In this short paper, we propose a solution for modifying a merge result. The goal is to create a meaningful merge result by adding connector nodes to the model at specific locations. This increases the number of possible result models and reduces additional implementation effort.
Because of high product and technology complexity, companies involve external partners in their research and development (R&D) processes. This results in interorganizational projects, which represent temporary organizations in which heterogeneous organizations work closely together. Since project work is always teamwork, these projects face, due to their characteristics, major challenges on an organizational, relational, and content-related collaboration level. Thus, this paper raises the following research question: “How can a project team be supported on an organizational, relational, and content-related level in an interorganizational new product development setting?” To answer this research question, an explorative expert study was set up with two digital workshops using the interactive presentation tool Mentimeter. The results show that a cooperative innovation culture could support project teams on an organizational and relational level in minimizing predominant problems, for example by supporting functional communication. Furthermore, 18 values of a cooperative innovation culture were identified, among them openness and transparency, risk and failure tolerance, and respect. On a content-related level, the results show that an adaptable tool which provides creativity and collaboration methods as well as content-related input could be beneficial for problem-solving in an interorganizational new product development setting, because such a tool can guide product developers through the process with suitable creativity and collaboration methods, give content-related input, and enable interactive interchange on a table-top. Future research could focus on the connection between the cooperative innovation culture and the tool, since these potentially influence each other.
This study describes a non-contact measuring and system identification procedure for evaluating inhomogeneous stiffness and damping characteristics of the annular ligament in the physiological amplitude and frequency range without the application of large static external forces that can cause unnatural displacements of the stapes. To verify the procedure, measurements were first conducted on a steel beam. Then, measurements on an individual human cadaveric temporal bone sample were performed. The estimated results support the inhomogeneous stiffness and damping distribution of the annular ligament and are in good agreement with the multiphoton microscopy results, which show that the posterior-inferior corner of the stapes footplate is the stiffest region of the annular ligament.
Investigation of tympanic membrane influences on middle-ear impedance measurements and simulations
(2020)
This study simulates acoustic impedance measurements in the human ear canal and investigates error influences due to improperly accounted-for evanescence in the probe’s near field, cross-sectional area changes, curvature of the ear canal, and pressure inhomogeneities across the tympanic membrane, which arise mainly at frequencies above 10 kHz. Evanescence results from strongly damped higher-order modes, which can only be found in the near field of the sound source and are excited by sharp cross-sectional changes such as those occurring at the transition from the probe loudspeaker to the ear canal. This means that different impedances are measured depending on the probe design. The influence of evanescence cannot be eliminated completely from measurements; however, it can be reduced by a probe design with a larger distance between speaker and microphone. A completely different approach to accounting for the influence of evanescence is to evaluate impedance measurements with the help of a finite element model that takes the precise arrangement of microphone and speaker in the measurement into account. The latter is demonstrated in this study using impedance measurements on a tube terminated with a steel plate. Furthermore, the influences of shape changes of the tympanic membrane and of ear canal curvature on impedance are investigated.
For years, agile methods have been considered the most promising route toward successful software development, and a considerable number of published studies report on the (successful) use of agile methods and the benefits companies gain from adopting them. Yet, since the world is not black or white, the question arises of what happened to the traditional models. Have traditional models been replaced by agile methods? How is the transformation toward Agile managed, and, moreover, where did it start? With this paper we close a gap in the literature by studying general process use over time to investigate how traditional and agile methods are used. Is there coexistence, or do agile methods accelerate the traditional processes’ extinction? The findings of our literature study comprise two major results: First, studies and reliable numbers on general process model use are rare, i.e., we lack quantitative data on actual process use and, thus, we often lack the ability to ground process-related research in practically relevant issues. Second, despite the assumed dominance of agile methods, our results clearly show that companies enact context-specific hybrid solutions in which traditional and agile development approaches are used in combination.
Leveraging textual information for improving decision making in the business process lifecycle
(2015)
Business process implementations fail because requirements are elicited incompletely. At the same time, a huge amount of unstructured data is not used for decision-making during the business process lifecycle. Data from questionnaires and interviews is collected but not exploited because the effort of doing so is too high. Therefore, this paper shows how to leverage textual information for improving decision making in the business process lifecycle. To do so, text mining is used to analyze questionnaires and interviews.
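As an illustration of the kind of text mining the abstract refers to, the sketch below counts content words across free-text questionnaire answers to surface recurring requirement candidates. The stopword list and the example answers are invented for illustration and are not from the paper.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real pipeline would use a full one.
STOPWORDS = {"the", "a", "an", "to", "is", "and", "of", "we", "in", "for", "be", "should"}

def term_frequencies(answers):
    """Count content words across free-text answers; terms that recur
    across many respondents hint at shared requirements."""
    counts = Counter()
    for text in answers:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts

# Invented questionnaire answers
answers = [
    "The system should export reports as PDF.",
    "We need an export of reports to PDF and Excel.",
    "Export to PDF is essential for the audit reports.",
]
top = term_frequencies(answers).most_common(3)
```

Here the three most frequent terms (`export`, `reports`, `pdf`) each occur in all three answers, flagging PDF report export as a candidate requirement.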
Application systems often need to be deployed in different variants if requirements that influence their implementation, hosting, and configuration differ between customers. Therefore, deployment technologies, such as Ansible or Terraform, support a certain degree of variability modeling. However, modern application systems typically consist of various software components deployed using multiple deployment technologies that only support their own proprietary, non-interoperable variability modeling concepts. The Variable Deployment Metamodel (VDMM) manages deployment variability across heterogeneous deployment technologies based on a single variable deployment model. However, VDMM currently only supports modeling conditional components and their relations, which is sometimes too coarse-grained, since it requires modeling entire components, including their implementation and deployment configuration, for each component variant. Therefore, we extend VDMM by a more fine-grained approach for managing the variability of component implementations and their deployment configurations, e.g., if a cheap version of a SaaS deployment provides only the community edition of the software rather than the enterprise edition with its additional built-in analytical reporting functionalities. We show that our extended VDMM can be used to realize variable deployments across different deployment technologies using a case study and our prototype OpenTOSCA Vintner.
Context: Software product lines are widely used in automotive embedded software development. This software paradigm improves the quality of software variants through reuse. The combination of agile software development practices with software product lines promises faster delivery of high-quality software. However, setting up an agile software product line is still challenging, especially in the automotive domain. Goal: This publication aims to evaluate to what extent agility fits automotive product line engineering. Method: Based on previous work and two workshops, agility is mapped to software product line concerns. Results: This publication presents important principles of software product lines and examines how agile approaches fit those principles. Additionally, the principles are related to one of the four major concerns of software product line engineering: Business, Architecture, Process, and Organization. Conclusion: Agile software product line engineering is promising and can add value to existing development approaches. The identified commonalities and hindering factors need to be considered when defining a combined agile product line engineering approach.
An important shift in software delivery is the definition of a cloud service as an independently deployable unit following the microservices architectural style. Container virtualization facilitates development and deployment by ensuring independence from the runtime environment. Thus, cloud services are built as container-based systems: a set of containers that control the lifecycle of software and middleware components. However, using containers leads to a new paradigm for service development and operation: self-service environments enable software developers to deploy and operate container-based systems on their own (“you build it, you run it”). Following this approach, more and more operational aspects are transferred into the responsibility of software developers. In this work, we propose a concept for self-adaptive cloud services based on container virtualization in line with the microservices architectural style and present a model-based approach that assists software developers in building these services. Based on operational models specified by developers, the mechanisms required for self-adaptation are automatically generated. As a result, each container adapts itself in a reactive, decentralized manner. We evaluate a prototype which leverages the emerging TOSCA standard to specify operational behavior in a portable manner.
New or adapted digital business models have huge impacts on Enterprise Architectures (EA) and require them to become more agile, flexible, and adaptable. All these changes happen frequently and are currently not well documented. An EA consists of many elements with manifold relationships between them; thus, changing the business model may have multiple impacts on other architectural elements. The EA engineering process deals with the development, change, and optimization of architectural elements and their dependencies. An EA thus provides a holistic view for both business and IT from the perspective of the many stakeholders involved in EA decision-making processes. Different stakeholders have specific concerns and today collaborate in often unclear decision-making processes. In our research we investigate information from collaborative decision-making processes to support stakeholders in taking current decisions. In addition, we provide all information necessary to understand how and why decisions were taken. We collect the decision-related information automatically to minimize manual, time-intensive work as much as possible. The core contribution of our research extends a decisional metamodel, which links basic decisions with architectural elements and extends them with an associated decisional case context. Our aim is to support a new integral method for multi-perspective and collaborative decision-making processes. We illustrate this with a practice-relevant decision-making scenario for Enterprise Architecture Engineering.
Social networks, smart portable devices, and the Internet of Things (IoT), based on technologies like big data analytics and cloud services, are emerging to support flexible connected products and agile services as the new wave of digital transformation. Biological metaphors of living and adaptable ecosystems combined with service-oriented enterprise architectures provide the foundation for self-optimizing and resilient run-time environments for intelligent business services and related distributed information systems. We extend Enterprise Architecture (EA) with mechanisms for the flexible adaptation and evolution of information systems comprising distributed IoT and other micro-granular digital architectures to support next-generation digitization products, services, and processes. Our aim is to support flexibility and agile transformation for both IT and business capabilities through adaptive digital enterprise architectures. The present research paper additionally investigates decision mechanisms in the context of multi-perspective explorations of enterprise services and Internet of Things architectures by extending original enterprise architecture reference models with state-of-the-art elements for architectural engineering and digitization.
Automatic segmentation is essential for brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic detection of glioma boundaries in pre-operative MRI. It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorff distances of 5.23, 13.54, and 12.05, for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 values of 2.66, 1.72, and 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively.
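The abstract does not state how the three models' predictions are fused; one common, simple fusion strategy for segmentation ensembles is per-voxel majority voting, sketched below on invented toy label grids (the actual ensemble may combine model outputs differently, e.g. by averaging probabilities).

```python
from collections import Counter

def majority_vote(label_maps):
    """Fuse per-pixel labels from several model predictions by majority
    vote; ties are broken in favour of the smaller label id."""
    rows, cols = len(label_maps[0]), len(label_maps[0][0])
    fused = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            votes = Counter(m[r][c] for m in label_maps)
            # pick the label with the most votes; smaller id wins ties
            fused[r][c] = max(votes, key=lambda l: (votes[l], -l))
    return fused

# Invented 2x2 "segmentations" from three hypothetical models (0 = background)
a = [[0, 1], [1, 1]]
b = [[0, 1], [0, 1]]
c = [[1, 1], [0, 1]]
fused = majority_vote([a, b, c])
```

For each pixel, the label predicted by at least two of the three models survives, which tends to suppress individual model errors.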
This paper presents a novel multi-modal CNN architecture that exploits complementary input cues in addition to sole color information. The joint model implements a mid-level fusion that allows the network to exploit cross-modal interdependencies already on a medium feature level. The benefit of the presented architecture is shown for the RGB-D image understanding task. So far, state-of-the-art RGB-D CNNs have used network weights trained on color data. In contrast, a superior initialization scheme is proposed to pre-train the depth branch of the multi-modal CNN independently. In an end-to-end training, the network parameters are optimized jointly using the challenging Cityscapes dataset. In thorough experiments, the effectiveness of the proposed model is shown. Both the RGB GoogLeNet and further RGB-D baselines are outperformed by a significant margin on two different tasks: semantic segmentation and object detection. For the latter, this paper shows how to extract object-level ground truth from the instance-level annotations in Cityscapes in order to train a powerful object detector.
Data analytics tasks on large datasets are computationally intensive and often demand the compute power of cluster environments. Yet, data cleansing, preparation, dataset characterization, and statistics or metrics computation steps are frequent. These are mostly performed ad hoc, in an explorative manner, and mandate low response times. However, such steps are I/O intensive and typically very slow due to low data locality and inadequate interfaces and abstractions along the stack, which typically result in prohibitively expensive scans of the full dataset and transformations at interface boundaries.
In this paper, we examine R as an analytical tool for managing large persistent datasets in Ceph, a widespread cluster file system. We propose nativeNDP, a framework for Near-Data Processing that pushes down primitive R tasks and executes them in situ, directly within the storage device of a cluster node. Across a range of data sizes, we show that nativeNDP is more than an order of magnitude faster than other pushdown alternatives.
Hypermedia as the Engine of Application State (HATEOAS) is one of the core constraints of REST. It refers to the concept of embedding hyperlinks into the response of a queried or manipulated resource to show a client possible follow-up actions and transitions to related resources. Thus, this concept aims to provide a client with navigational support when interacting with a Web-based application. Although HATEOAS should be implemented by any Web-based API claiming to be RESTful, API providers tend to offer service descriptions in place of embedding hyperlinks into responses. Instead of relying on navigational support, a client developer has to read the service description and identify the resources and URIs relevant for the interaction with the API. In this paper, we introduce an approach that aims to identify transitions between resources of a Web-based API by systematically analyzing the service description only. We devise an algorithm that automatically derives a URI Model from the service description and then analyzes the payload schemas to identify feasible values for the substitution of path parameters in URI Templates. We implement this approach as a proxy application which injects hyperlinks representing transitions into the response payload of a queried or manipulated resource. The result is HATEOAS-like navigational support through an API. Our first prototype operates on service descriptions in the OpenAPI format. We evaluate our approach using ten real-world APIs from different domains and discuss the results as well as the observations captured in these tests.
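To illustrate the core idea of substituting path parameters in URI templates with matching payload fields, here is a minimal, hypothetical sketch. The `{param}` placeholder syntax follows RFC 6570-style URI templates; the payload, template list, and `_links` field name are invented for illustration and are not taken from the paper's prototype.

```python
import re

def derive_links(payload, uri_templates):
    """Resolve URI templates such as /orders/{orderId} by substituting
    each path parameter with the equally named field from the response
    payload; templates with unresolved parameters are skipped."""
    links = []
    for template in uri_templates:
        params = re.findall(r"\{(\w+)\}", template)
        if params and all(p in payload for p in params):
            uri = template
            for p in params:
                uri = uri.replace("{%s}" % p, str(payload[p]))
            links.append(uri)
    return links

def inject_links(payload, uri_templates):
    """Return a copy of the payload augmented with a _links entry,
    mimicking the proxy's hyperlink injection."""
    enriched = dict(payload)
    enriched["_links"] = derive_links(payload, uri_templates)
    return enriched

# Invented example: an order payload whose ids match two path parameters
order = {"orderId": 42, "customerId": 7}
templates = ["/orders/{orderId}", "/customers/{customerId}", "/invoices/{invoiceId}"]
result = inject_links(order, templates)
```

Only templates whose parameters can all be filled from the payload become links; `/invoices/{invoiceId}` is skipped because the payload carries no `invoiceId`.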
Near-Data Processing (NDP) is a key computing paradigm for reducing the ever-growing time and energy costs of data transport versus computation. With their flexibility, FPGAs are an especially suitable compute element for NDP scenarios. Even more promising is the exploitation of novel and future non-volatile memory (NVM) technologies for NDP, which aim to achieve DRAM-like latencies and throughput while providing large-capacity non-volatile storage.
Experimentation with FPGAs in such NVM-NDP scenarios has been hindered, though, by the fact that NVM devices and FPGA boards are still very rare and/or expensive. It thus becomes useful to emulate the access characteristics of current and future NVMs using off-the-shelf DRAM. If such emulation is sufficiently accurate, the resulting FPGA-based NDP computing elements can be used for actual full-stack hardware/software benchmarking, e.g., when employed to accelerate a database.
For this use, we present NVMulator, an open-source, easy-to-use hardware emulation module that can be seamlessly inserted between the NDP processing elements on the FPGA and a conventional DRAM-based memory system. We demonstrate that, with suitable parametrization, the emulated NVM can come very close to the performance characteristics of actual NVM technologies, specifically Intel Optane. We achieve 0.62% and 1.7% accuracy for cache-line-sized read and write accesses, while utilizing only 0.54% of the LUT logic resources on a Xilinx/AMD AU280 UltraScale+ FPGA board. We consider both file-system and database access patterns, examining the operation of the RocksDB database when running on real or emulated Optane-technology memories.
Software Process Improvement (SPI) programs have been implemented, inter alia, to improve the quality and speed of software development. SPI addresses many aspects ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state of the art in SPI from a general perspective, we observed that Software Quality Management (SQM) is of particular relevance in SPI programs. In this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of SQM (including testing). From the main study’s result set, 92 papers were selected for an in-depth systematic review to study the contributions and to develop an initial picture of how these topics are addressed in SPI. Our findings show a fairly pragmatic contribution set in which different solutions are proposed, discussed, and evaluated. Among others, our findings indicate a certain reluctance towards standard quality or (test) maturity models and a strong focus on custom review, testing, and documentation techniques, whereas a set of five selected improvement measures is addressed almost equally.
Menopause is the permanent cessation of menstruation that occurs naturally as women age. The most frequent symptoms associated with the menopausal phases are mucosal dryness, increased weight and body fat, and changes in sleep patterns. Oral symptoms in menopause derived from reduced saliva flow can lead to dry mouth, ulcers, and alterations of taste and swallowing patterns. However, the oral health phenotype of postmenopausal women has not been characterized. The aim of the study was to determine postmenopausal women's oral phenotype, including medical history, lifestyle, and oral assessment, through artificial intelligence algorithms. One hundred postmenopausal women attending the Dental School of the University of Seville were enrolled in the study. We collected an extensive questionnaire covering lifestyle, medication, and medical history. We used an unsupervised k-means algorithm to cluster the data following standard features for data analysis. Our results showed that the main oral symptoms in our postmenopausal cohort were reduced salivary flow and periodontal disease. Relying on a classical assessment of the collected data, we might obtain a biased evaluation of postmenopausal women. We therefore used artificial intelligence analysis to evaluate our data, obtaining the main features and providing a reduced feature set defining the oral health phenotype. We found six clusters with similar features, with medication affecting salivation and smoking emerging as essential features distinguishing the phenotypes. Thus, we could derive the main features of differential oral health phenotypes of postmenopausal women with an integrative approach, providing new tools to assess women in the dental clinic.
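The unsupervised k-means step mentioned above can be sketched as follows; this is a generic, minimal implementation on invented toy data (two features, two groups), not the study's actual pipeline, features, or cluster count.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means sketch: alternate between assigning each point to
    its nearest centroid and moving each centroid to the mean of its
    assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from random data points
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by Euclidean distance
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # update step: centroid = mean of its cluster members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return labels

# Invented toy data: two well-separated groups of "patients"
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = kmeans(points, k=2)
```

With well-separated groups the algorithm assigns the first three points to one cluster and the last three to the other; in practice, features would first be standardized and k chosen via a criterion such as the elbow method.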
Sleep is an important aspect of every human being's life. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate the physical and psychological state of a human. Poor sleep quality has a major impact on health status and can lead to different diseases. In this paper an approach is presented that uses long-term monitoring of vital data, gathered by a body sensor during the day and the night and supported by a mobile application connected to an analyzing system, to estimate the sleep quality of its user and give recommendations to improve it in real time. Actimetry and historical data are used to improve the individual recommendations, based on common techniques from machine learning and big data analysis.
An enormous amount of data in the context of business processes is stored as images. These images contain valuable information for business process management, but up to now this data had to be integrated manually into the business process. Thanks to advances in image capturing, it is possible to extract information from an increasing number of images. Therefore, we systematically investigate the potential of Image Mining for business process management through a literature review and an in-depth analysis of the business process lifecycle. As a first step to evaluate our research, we developed a prototype for recovering process model information from drawings using RapidMiner.
Preface of IDEA 2015
(2016)
Context: A product roadmap is an important tool in product development. It sets the strategic direction in which the product is to be developed to achieve the company’s vision. However, for product roadmaps to be successful, it is essential that all stakeholders agree with the company’s vision and objectives and are aligned and committed to a common product plan.
Objective: In order to gain a better understanding of product roadmap alignment, this paper aims at identifying measures, activities, and techniques to align the different stakeholders around the product roadmap.
Method: We conducted a grey literature review according to the guidelines of Garousi et al.
Results: Several approaches to gaining alignment were identified, such as defining and communicating clear objectives based on the product vision, conducting cross-functional workshops, shuttle diplomacy, and mission briefings. In addition, our review identified the “Behavioural Change Stairway Model”, which suggests five steps to gain alignment by building empathy and a trustful relationship.
Context: The current situation and future scenarios of the automotive domain require a new strategy to develop high-quality software at a fast pace. In the automotive domain, it is assumed that a combination of agile development practices and software product lines is beneficial in order to handle a high frequency of improvements. This assumption is based on the understanding that agile methods introduce more flexibility in short development intervals, while software product lines help to manage the large number of variants and to improve quality through reuse of software in long-term development.
Goal: This study derives a better understanding of the expected benefits of such a combination. Furthermore, it identifies the automotive-specific challenges that prevent the adoption of agile methods within a software product line.
Method: A survey based on 16 semi-structured interviews from the automotive domain, an internal workshop with 40 participants, and a discussion round at the ESE Congress 2016. The results are analyzed by means of thematic coding.
Global, competitive markets characterised by mass customisation and rapidly changing customer requirements force major changes in production styles and the configuration of manufacturing systems. As a result, factories may need to be regularly adapted and optimised to meet short-term requirements. One way to optimise the production process is to adapt the plant layout to the current or expected order situation. To determine whether a layout change is reasonable, a model of the current layout is needed; it is used to perform simulations and, in the case of a layout change, serves as a basis for the reconfiguration process. To aid the selection of possible measurement systems, a requirements analysis was conducted to identify the important parameters for creating a digital shadow of a plant layout. Based on these parameters, a method is proposed for defining limit values and specifying exclusion criteria. The paper thus contributes to the development and application of systems that enable automatic synchronisation of the real layout with the digital layout.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. This work focuses on the analysis of requirements and the challenges arising from the design of mobile medical applications with respect to the user interface. The paper describes the current status of the development of mobile medical apps and illustrates the development of the e-health market. The author explains the requirements, illustrates the hurdles and problems, and refers to the German market, which is similar to the European market, comparing it with the market in the USA.
Forecasting demand is challenging. Different products exhibit different demand patterns. While demand may be constant and regular for one product, it may be sporadic for another, and when demand does occur, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today a large number of models are used to forecast over a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper we show the possibility of using a machine learning classification algorithm that predicts the best possible model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The machine learning classification algorithm achieves a mean ROC-AUC of 89%, which underlines the skill of the model.
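The paper's classifier predicts the best model from time-series characteristics; a related, well-known building block is classifying the demand pattern itself, e.g. with the Syntetos-Boylan scheme based on the average inter-demand interval (ADI) and the squared coefficient of variation (CV²) of demand sizes. The sketch below is illustrative and not the authors' method; the example series and the mapping to model families are invented.

```python
import statistics

def classify_demand(series, adi_cut=1.32, cv2_cut=0.49):
    """Classify a demand series following the Syntetos-Boylan scheme:
    ADI  = number of periods per non-zero demand occurrence,
    CV^2 = squared coefficient of variation of non-zero demand sizes.
    The pattern can then steer model choice, e.g. Croston-style methods
    for intermittent demand vs. exponential smoothing for smooth demand."""
    nonzero = [x for x in series if x > 0]
    adi = len(series) / len(nonzero)
    mean = statistics.fmean(nonzero)
    cv2 = (statistics.pstdev(nonzero) / mean) ** 2
    if adi <= adi_cut:
        return "smooth" if cv2 <= cv2_cut else "erratic"
    return "intermittent" if cv2 <= cv2_cut else "lumpy"

regular = [10, 11, 9, 10, 12, 10, 11, 10]        # steady demand each period
sporadic = [0, 0, 5, 0, 0, 0, 6, 0, 0, 4, 0, 0]  # rare, similar-sized demands
```

Here `regular` falls into the smooth quadrant (ADI = 1, tiny CV²), while `sporadic` is intermittent (ADI = 4, small CV²), so different forecasting models would be selected for each.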
Companies are becoming aware of the potential risks arising from sustainability aspects in supply chains. These risks can affect ecological, economic, or social aspects. One important element in managing those risks is improved transparency in supply chains by means of digital transformation. Innovative technologies like blockchain can be used to enforce transparency. In this paper, we present a smart contract-based Supply Chain Control Solution to reduce risks. The technological capabilities of the solution are compared to a similar technology approach and evaluated regarding their benefits and challenges within the framework of supply chain models. As a result, the proposed solution proves suitable for the dynamic administration of complex supply chains.
Modern enterprises reshape and transform continuously through a multitude of management processes with different perspectives, ranging from business process management to IT service management and the management of information systems. Enterprise Architecture (EA) management seeks to provide such a perspective and to align the diverse management perspectives. EA management cannot rely on hierarchic management processes designed in a Tayloristic manner to achieve and promote this alignment. Instead, it has to apply bottom-up, information-centered coordination mechanisms to ensure that the different management processes are aligned with each other and with enterprise strategy. Social software provides such a bottom-up mechanism for providing support within EAM processes. Consequently, the challenges of EA management processes are investigated, and the contributions of social software are presented. A cockpit provides interactive functions and visualization methods to cope with this complexity and enable the practical use of social software in enterprise architecture management processes.
Software development as an experiment system: a qualitative survey on the state of the practice
(2015)
An experiment-driven approach to software product and service development is gaining increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software functionalities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development. Although case studies on experimentation in industry exist, the understanding of the state of the practice and the encountered obstacles is incomplete. This paper presents an interview-based qualitative survey exploring the experimentation experiences of ten software development companies. The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice is not yet mature. In particular, experimentation is rarely systematic and continuous. Key challenges relate to changing organizational culture, accelerating development cycle speed, and measuring customer value and product success.
Database management systems and K/V-Stores operate on updatable datasets massively exceeding the size of available main memory. Tree-based K/V storage management structures became particularly popular in storage engines. B+-Trees [1, 4] allow constant search performance, but write-heavy workloads result in inefficient write patterns to secondary storage devices and poor performance characteristics. LSM-Trees [16, 23] overcome this issue by horizontally partitioning data into fractions small enough to fully reside in main memory, but they require frequent maintenance to sustain search performance.
Firstly, we propose Multi-Version Partitioned B-Trees (MV-PBT) as the sole storage and index management structure in key-sorted storage engines such as K/V-Stores. Secondly, we compare MV-PBT against LSM-Trees. The logical horizontal partitioning in MV-PBT allows leveraging recent advances in modern B+-Tree techniques in a small, transparent, and memory-resident portion of the structure. Its structural properties sustain steady read performance while yielding efficient write patterns and reducing write amplification.
We integrated MV-PBT in the WiredTiger [15] K/V storage engine. In a YCSB [5] workload, MV-PBT offers up to 2× higher steady throughput than LSM-Trees and several orders of magnitude higher throughput than B+-Trees.
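The horizontal-partitioning idea shared by LSM-Trees and MV-PBT can be illustrated with a minimal sketch: writes land in a small mutable, memory-resident partition; when it fills, the partition is sealed as an immutable sorted run, turning random writes into sequential ones, while reads must consult newest data first. This is a didactic toy under assumed names and sizes, not WiredTiger's or the paper's actual implementation (and it omits compaction, versioning, and binary search).

```python
# Toy illustration of horizontal partitioning in write-optimized stores.
class PartitionedStore:
    def __init__(self, memtable_limit=4):
        self.memtable = {}   # mutable, memory-resident partition
        self.runs = []       # immutable sorted runs, newest first
        self.limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            # Seal the partition: one sequential write of sorted data,
            # instead of in-place page updates as in a B+-Tree.
            self.runs.insert(0, sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        # Newest-first search returns the latest version of a key.
        if key in self.memtable:
            return self.memtable[key]
        for run in self.runs:
            for k, v in run:
                if k == key:
                    return v
        return None
```

The read path scanning ever more runs is exactly why LSM-Trees need frequent maintenance (compaction) to sustain search performance, which is the overhead the abstract says MV-PBT's structural properties avoid.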
Context: Agile practices as well as UX methods are nowadays well known and often adopted to develop complex software and products more efficiently and effectively. However, in the so-called VUCA environment that many companies are confronted with, UX research alone is not sufficient to find the best solutions for customers. The implementation of Design Thinking can support this process, but many companies and their product owners do not know how many resources they should spend on conducting Design Thinking.
Objective: This paper aims to suggest a supportive tool, the “Discovery Effort Worthiness (DEW) Index”, for product owners and agile teams to determine a suitable amount of effort to spend on Design Thinking activities.
Method: A case study was conducted for the development of the DEW index. Design Thinking was introduced into the regular development cycle of an industry Scrum team. With the support of UX and Design Thinking experts, a formula was developed to determine the appropriate effort for Design Thinking.
Results: The developed “Discovery Effort Worthiness Index” provides an easy-to-use tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. A company can map the corresponding Design Thinking methods to the results of the DEW Index calculation, and product owners can select the appropriate measures from this mapping. Thereby, they can optimize the effort spent on discovery and validation.
Ecuador, traditionally an agriculture-based economy, has great potential for valorizing its industrial residues. This study presents a techno-economic analysis of applying a novel biomass oxidation method to produce formic and acetic acids from coffee husk residues in Machala, Ecuador. The analysis determined that the return-on-investment period was lower than 5 years, making the project economically feasible when producing approximately 1000 tons of formic acid per year, which is enough to supply the Ecuadorian market. This production would reduce import costs and develop the chemical industry in the country.
Being able to monitor the heart activity of patients during their daily life in a reliable, comfortable, and affordable way is a main goal of personalized medicine. Current wearable solutions fall short either in wearing comfort, in the quality and type of data provided, or in the price of the device. This paper shows the development of a Textile Sensor Platform (TSP) in the form of an electrocardiogram (ECG)-measuring T-shirt that transmits the ECG signal to a smartphone. The development process includes the selection of materials, the design of the textile electrodes taking into consideration their electrical characteristics and ergonomics, the integration of the electrodes into the garment, and their connection with the embedded electronics. The TSP transmits a real-time stream of the ECG signal to an Android smartphone via Bluetooth Low Energy (BLE). Initial results show good electrical quality of the textile electrodes and promising results in the capture and transmission of the ECG signal. This is still a work in progress and the result of an interdisciplinary master project between the School of Informatics and the School of Textiles & Design of Reutlingen University.
The advent of chatbots in customer service solutions has received increasing attention from research and practice throughout the last years. However, the relevant dimensions and features of service quality and service performance for chatbots remain quite unclear. Therefore, this research develops and tests a conceptual model of customer service quality and customer service performance in the context of chatbots. Additionally, the impact of the developed service dimensions on different customer relationship metrics is measured across different service channels (hotline versus chatbot). Findings of six independent studies indicate a strong main effect of the conceptualized service dimensions on customer satisfaction, service costs, intention to reuse the service, word-of-mouth, and customer loyalty. However, different service dimensions are relevant for chatbots than for a traditional service hotline.
Digitization transforms business process models and processes in many enterprises. However, many of them need guidance on how digitization impacts the design of their information systems. Therefore, this paper investigates the influence of digitization on information system design. We apply a two-phase research method combining a literature review and an exploratory case study. The case study took place at the IT service provider of a large insurance enterprise. The study’s results suggest that a number of areas of information system design are affected, such as architecture, processes, data, and services.
Context: Organizations are increasingly challenged by high market dynamics, rapidly evolving technologies and shifting user expectations. In consequence, many organizations are struggling with their ability to provide reliable product roadmaps by applying traditional roadmapping approaches. Currently, many companies are seeking opportunities to improve their product roadmapping practices and strive for new roadmapping approaches. A typical first step towards advancing the roadmapping capabilities of an organization is to assess the current situation. Therefore, the so-called maturity model DEEP for assessing the product roadmapping capabilities of companies operating in dynamic and uncertain environments has been developed and published by the authors.
Objective: The aim of this article is to conduct an initial validation of the DEEP model in order to understand its applicability better and to see if important concepts are missing. In addition, the aim of this article is to evolve the model based on the findings from the initial validation.
Method: The model was given to practitioners such as product managers with the request to perform a self-assessment of their company’s current product roadmapping practices. Afterwards, interviews with each participant were conducted to gain deeper insights.
Results: The initial validation revealed that some stages of the model needed to be rearranged; minor usability issues were also found. The overall structure of the model was well received. The study resulted in version 1.1 of the DEEP product roadmap maturity model, which is also presented in this article.
Distributed ledger technologies such as blockchain offer an innovative way to increase visibility and security and thereby reduce supply chain risks. This paper proposes a solution to increase the transparency and auditability of manufactured products in collaborative networks by adopting smart contract-based virtual identities. Compared with existing approaches, this extended smart contract-based solution offers manufacturing networks privacy, content-updating, and portability capabilities for smart contracts. As a result, the solution is suitable for the dynamic administration of complex supply chains.
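The notion of a smart contract-based virtual identity with an auditable, updatable history can be illustrated with a small hash-chaining sketch. This is a hypothetical, ledger-agnostic toy under assumed names, not the paper's actual smart contract: each state update of a product is appended and chained to its predecessor by hash, so any later tampering with the history is detectable.

```python
# Toy sketch of an auditable, content-updatable product identity.
import hashlib
import json

class VirtualIdentity:
    def __init__(self, product_id):
        self.product_id = product_id
        self.history = []  # hash-chained list of state updates

    def update(self, data):
        """Append a new state, chained to the previous record's hash."""
        prev = self.history[-1]["hash"] if self.history else "0" * 64
        record = {"product": self.product_id, "data": data, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.history.append(record)

    def verify(self):
        """Recompute the chain; any tampered record breaks verification."""
        prev = "0" * 64
        for rec in self.history:
            body = {k: v for k, v in rec.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True
```

On an actual distributed ledger, the chaining and verification would be enforced by the platform and the contract code rather than by a local class, which is what makes the history trustworthy across independent supply chain partners.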
Analysis is an important part of the enterprise architecture management process. Prior to decisions regarding the transformation of the enterprise architecture, the current situation and the outcomes of alternative action plans have to be analysed. Many analysis approaches have been proposed by researchers, and current enterprise architecture management tools implement analysis functionalities. However, little work has been done on structuring and classifying enterprise architecture analysis approaches. This paper collects and extends existing classification schemes, presenting a framework for the classification of enterprise architecture analysis. For evaluation, a collection of enterprise architecture analysis approaches has been classified based on this framework. As a result, the description of these approaches has been assessed, a common set of important categories for enterprise architecture analysis classification has been derived, and suggestions for further development are made.