While several service-based maintainability metrics have been proposed in the scientific literature, reliable approaches to automatically collect these metrics are lacking. Since static analysis is complicated for decentralized and technologically diverse microservice-based systems, we propose a dynamic approach that calculates such metrics from runtime data via distributed tracing. The approach focuses on simplicity, extensibility, and broad applicability. As a first prototype, we implemented a Java application with a Zipkin integrator, 23 different metrics, and five export formats. We demonstrated the feasibility of the approach by analyzing the runtime data of an example microservice-based system. During an exploratory study with six participants, 14 of the 18 services were invoked via the system's web interface. For these services, all metrics were calculated correctly from the generated traces.
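The paper's own metric set and trace pipeline are not detailed here, but the core idea of deriving coupling metrics from trace data can be sketched. The following is a minimal, hypothetical Python example: it assumes spans in Zipkin's JSON format (each with an id, an optional parentId, and a localEndpoint.serviceName) and computes one coupling metric commonly proposed in the literature, the number of distinct consumers per service.

```python
from collections import defaultdict

def service_dependencies(spans):
    """Derive the service call graph from a list of Zipkin-style spans.

    Each span is assumed to be a dict with 'id', an optional 'parentId',
    and 'localEndpoint': {'serviceName': ...}, i.e. the JSON format that
    Zipkin's /api/v2 endpoints return.
    """
    by_id = {s["id"]: s for s in spans}
    edges = set()
    for s in spans:
        parent = by_id.get(s.get("parentId"))
        if parent is not None:
            caller = parent["localEndpoint"]["serviceName"]
            callee = s["localEndpoint"]["serviceName"]
            if caller != callee:  # ignore intra-service spans
                edges.add((caller, callee))
    return edges

def absolute_importance(edges):
    """Per service: number of distinct services that invoke it."""
    consumers = defaultdict(set)
    for caller, callee in edges:
        consumers[callee].add(caller)
    return {svc: len(c) for svc, c in consumers.items()}
```

Traces for a whole system could be fetched from Zipkin's /api/v2/traces endpoint and fed, span by span, into these functions.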
The investigation of stress requires distinguishing between stress caused by physical activity and stress caused by psychosocial factors. The behaviour of the heart in response to stress and to physical activity is very similar when the set of monitored parameters is reduced to a single one. Currently, this differentiation remains difficult: methods that use only the heart rate are not able to differentiate between stress and physical activity without additional sensor input. The approach focuses on methods that generate signals providing characteristics useful for detecting stress, physical activity, inactivity, and relaxation.
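The abstract does not name the concrete derived signals, so the following Python sketch is purely illustrative: it computes two heart-derived signals over a sliding window of RR intervals, the heart rate and the short-term variability measure RMSSD. The hypothesis encoded here, that physical activity mainly drives the heart rate up while psychosocial stress shows more as suppressed variability, is an assumption for illustration, not the paper's method.

```python
import statistics

def hr_and_rmssd(rr_ms, window=60):
    """Sliding-window heart rate and RMSSD from RR intervals (ms).

    Returns one (heart_rate, rmssd) pair per window position; a high
    heart rate with normal RMSSD would suggest physical activity,
    while suppressed RMSSD at moderate heart rate would suggest
    psychosocial stress (illustrative hypothesis only).
    """
    features = []
    for i in range(window, len(rr_ms)):
        w = rr_ms[i - window:i]
        hr = 60000.0 / statistics.mean(w)  # beats per minute
        diffs = [b - a for a, b in zip(w, w[1:])]
        rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
        features.append((hr, rmssd))
    return features
```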
Presently, many companies are transforming their strategy and product base, as well as their culture, processes, and information systems, to become more digital or to strive for digital leadership. In recent years, new business opportunities have appeared that use the potential of the Internet and related digital technologies, such as the Internet of Things, services computing, cloud computing, edge and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. This has a strong impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open-world composition and evolution of micro-granular system architectures defines the moving context for adaptable systems. We focus on a continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, as part of a new digital enterprise architecture for service-dominant digital products.
After the initiator of the ESB Logistics Learning Factory, Prof. Vera Hummel, had gained experience in developing and implementing a concept for a Learning Factory for Advanced Industrial Engineering (aIE) at the University of Stuttgart's Institute IFF between 2005 and 2008, she was appointed as a full professor at ESB Business School, a faculty of Reutlingen University, in March 2010. Lacking a realistic, hands-on learning and teaching environment of industrial scale for its industrial engineering students, the faculty drafted first ideas in 2012 for a Learning Factory that would strongly focus on all aspects of production logistics. Already back then, a strong integration of the virtual and the physical factory was desired: while the Learning Factory itself would be physical, the neighboring partners along the supply chain, such as suppliers or distribution warehouses, could be added in a fully virtual way. Considering the implementation of the ESB Logistics Learning Factory a strategic initiative of the university, the faculty ESB Business School itself provided the initial funding. Following its own creed to provide future-oriented training for the region, primarily local suppliers and manufacturers were selected as equipment providers for the new Learning Factory. During the initialization phase in 2014, a total of three researchers and nine students worked for approximately four months to set up a first assembly line, storage racks, AGVs, and pick-by-light systems in conjunction with the underlying didactical concept. Since then, several hundred students have participated in trainings and lectures held in the ESB Logistics Learning Factory, several research projects have been carried out, and multiple high-level politicians and industry executives have toured the shop floor. In addition, more than EUR 2 million in research and infrastructure funds have been secured for expansion and upgrades, allowing the ESB Logistics Learning Factory today to represent many core aspects of an Industrie 4.0 production environment.
Indoor localization systems are becoming increasingly important with the digitalization of the industrial sector. Sensor data such as the current position of machines, transport vehicles, goods, or tools represent an essential component of cyber-physical production systems (CPPS). Due to the high costs of these sensors, however, they are not widespread and are used mainly in special scenarios. Optical indoor positioning systems (OIPS) based on cameras, in particular, have certain advantages due to their technological specifications. In this paper, the application scenarios and requirements as well as their characteristics are presented, and a classification approach for OIPS is introduced.
In this paper, we deal with optimizing the monetary costs of executing parallel applications in cloud-based environments. Specifically, we investigate how the scalability characteristics of parallel applications impact the total costs of computation. We focus on a specific class of irregularly structured problems, where the scalability typically depends on the input data. Consequently, dynamic optimization methods are required for minimizing the costs of computation. For quantifying the total monetary costs of individual parallel computations, the paper presents a cost model that considers the costs for the parallel infrastructure employed as well as the costs caused by delayed results. We discuss a method for dynamically finding the number of processors for which the total costs based on our cost model are minimal. Our extensive experimental evaluation gives detailed insights into the performance characteristics of our approach.
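The paper's exact cost model is not reproduced here; the following Python sketch only illustrates the general shape of such a model, under assumed names and prices: infrastructure costs that grow with both the processor count and the runtime, plus delay costs that grow with the runtime alone, with the processor count chosen to minimize the sum.

```python
def total_cost(p, runtime, cpu_price_per_hour, delay_cost_per_hour):
    """Total monetary cost on p processors: infrastructure costs
    (p processors for T(p) hours) plus costs of delayed results."""
    t = runtime(p)  # predicted runtime T(p) in hours
    return cpu_price_per_hour * p * t + delay_cost_per_hour * t

def best_processor_count(runtime, cpu_price, delay_cost, p_max=256):
    """Pick the processor count in [1, p_max] with minimal total cost.
    For input-dependent scalability, `runtime` would have to be
    re-estimated at run time rather than fixed in advance."""
    return min(range(1, p_max + 1),
               key=lambda p: total_cost(p, runtime, cpu_price, delay_cost))

# Illustrative use with an Amdahl-style runtime model:
#   runtime = lambda p: 0.5 + 8.0 / p   # hours on p processors
#   best_processor_count(runtime, cpu_price=0.10, delay_cost=2.00)
```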
In recent years, the parallel computing community has shown increasing interest in leveraging cloud resources for executing parallel applications. Clouds exhibit several fundamental features of economic value, like on-demand resource provisioning and a pay-per-use model. Additionally, several cloud providers offer their resources with significant discounts, albeit with limited availability. Such volatile resources are a promising opportunity to reduce the costs arising from computations and thus achieve higher cost efficiency. In this paper, we propose a cost model for quantifying the monetary costs of executing parallel applications in cloud environments leveraging volatile resources. Using this cost model, one is able to determine a configuration of a cloud-based parallel system that minimizes the total costs of executing an application.
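Again, the actual model is given in the paper; as a hypothetical extension of the sketch above, a mixed configuration of on-demand and discounted volatile processors could be costed in expectation, treating each volatile processor as usable only with some availability probability.

```python
def expected_total_cost(n_reg, n_vol, runtime, reg_price, vol_price,
                        availability, delay_cost_per_hour):
    """Expected cost of a configuration with n_reg on-demand processors
    and n_vol discounted volatile ones, each usable with probability
    `availability` (prices per processor-hour; runtime in hours)."""
    p_expected = n_reg + availability * n_vol  # expected usable processors
    t = runtime(p_expected)                    # expected runtime
    infra = (n_reg * reg_price + availability * n_vol * vol_price) * t
    return infra + delay_cost_per_hour * t
```

Minimizing this expression over (n_reg, n_vol) would then yield the cheapest configuration under the stated assumptions.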
Enterprises are presently transforming their strategy, culture, processes, and information systems to become more digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like the Internet of Things or mobile systems. For years, new business opportunities have been appearing that use the potential of the Internet and related digital technologies, like the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. This has a strong impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open-world composition and evolution of system architectures defines the moving context for adaptable systems, which are essential to enable the digital transformation. In this paper, we focus on a decision-oriented architectural composition approach to support the transformation toward digital services and products.
Recently, practitioners have begun to recognize effective customer journey design (CJD) as an important source of customer value in increasingly complex and digitalized consumer markets. Research, however, has neither investigated what constitutes the effectiveness of CJD from a consumer perspective nor empirically tested how it affects important variables of consumer behavior. The authors define an effective CJD as the extent to which consumers perceive multiple brand-owned touchpoints as designed in a thematically cohesive, consistent, and context-sensitive way. Analyzing consumer data from studies in two countries (4,814 consumers in total), they provide evidence of the positive influence of an effective CJD on customer loyalty through brand attitude, over and above the effects of brand experience. Importantly, an effective CJD more strongly influences utilitarian brand attitudes, while brand experience more strongly affects hedonic brand attitudes. These underlying mechanisms are also prevalent when testing for the contingency factors of services versus goods, perceived switching costs, and brand involvement.
Efficient and robust 3D object reconstruction based on monocular SLAM and CNN semantic segmentation
(2019)
Various applications implement SLAM technology, especially in the field of robot navigation. We show the advantage of SLAM technology for independent 3D object reconstruction. To obtain a point cloud of every object of interest, isolated from its environment, we leverage deep learning. We utilize recent CNN deep learning research for accurate semantic segmentation of objects. In this work, we propose two fusion methods combining CNN-based semantic segmentation and SLAM for the 3D reconstruction of objects of interest, in order to improve robustness and efficiency. As a major novelty, we introduce a CNN-based masking that focuses SLAM only on feature points belonging to each single object. Noisy, complex, or even non-rigid features in the background are filtered out, improving the estimation of the camera pose and the 3D point cloud of each object. Our experiments are constrained to the reconstruction of industrial objects. We present an analysis of the accuracy and performance of each method and compare the two methods, describing their pros and cons.
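The paper's pipeline is not given in detail here; the following Python sketch only illustrates the masking idea under assumed names, using OpenCV's ORB detector as a stand-in for a SLAM front end: feature points are detected only inside the CNN's segmentation mask, so background features never enter pose estimation.

```python
import cv2

def masked_orb_features(gray_image, object_mask, n_features=1000):
    """Detect ORB keypoints only on the segmented object.

    `object_mask` is assumed to be a uint8 array of the same size as
    `gray_image`, non-zero where the CNN labels the object of interest
    (e.g. a thresholded per-class output of a segmentation network).
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    # OpenCV detectors accept a mask: pixels where the mask is zero are
    # never chosen as keypoints, filtering out background features.
    keypoints, descriptors = orb.detectAndCompute(gray_image, object_mask)
    return keypoints, descriptors
```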