The presented wide-Vin step-down converter introduces a parallel-resonant converter (PRC), comprising an integrated 5-bit capacitor array and a 300 nH resonant coil, placed in parallel to a conventional buck converter. Unlike conventional resonant concepts, the implemented soft-switching control eliminates input-voltage-dependent losses over a wide operating range. This ensures high efficiency across a wide range of Vin = 12-48 V, a 100-500 mA load, and a 5 V output at up to 15 MHz switching frequency. The peak efficiency of the converter is 76.3%. Thanks to the low output current ripple, the output capacitor can be as small as 50 nF, while the inductor tolerates a larger ESR, resulting in small component size. The proposed PRC architecture is also suitable for future power electronics applications using fast-switching GaN devices.
This paper presents an integrated synchronous buck converter for input voltages >12 V with 10 MHz switching frequency. The converter comprises a predictive dead time control with frequency-compensated sampling of the switching node which does not require body diode forward conduction. A high dead time resolution of 125 ps is achieved by a differential delay chain with 8-bit resolution. This way, the efficiency of fast-switching DC-DC converters can be optimized by eliminating the body diode forward conduction losses, minimizing reverse recovery losses, and achieving zero voltage switching at turn-off. The converter was implemented in a 180 nm high-voltage BiCMOS technology. The power losses were measured to be reduced by 30% by the proposed dead time control, which results in a 6% efficiency increase at VOUT = 5 V and 0.2 A load. The peak efficiency is 81%.
Socially interactive robots with human-like speech synthesis and recognition, coupled with humanoid appearance, are an important subject of robotics and artificial intelligence research. Modern solutions have matured enough to provide simple services to human users. To make the interaction with them as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. Because facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose a way to implement a program that believably animates changing facial expressions and allows influencing them via inter-process communication based on an emotion model. This can be used to create a screen-based virtual face for a robotic system with an inviting appearance that stimulates users to seek interaction with the robot.
The efficient production and utilization of green hydrogen is vital to the global pursuit of a sustainable future. To provide the necessary amount of green hydrogen, a large number of electrolyzers will be connected to the grid as decentralized power consumers, while a large number of decentralized renewable power sources will provide the energy. In such a system, a control method is necessary to dispatch the available power most efficiently. In particular, the shutdown of renewable energy sources due to temporary overproduction must be avoided. This paper presents a decentralized tertiary control algorithm that provides a new decentralized control approach, creating a flexible, robust, and easily scalable system. The operation of each participant within this grid-connected microgrid is optimized for maximum financial profit, while minimizing the exchange of power with the mains grid and reducing the shutdown of renewable power sources.
The metric and qualitative analysis of models of the upper and lower dental arches is an important aspect of orthodontic treatment planning. Currently available eLearning systems for dental education only allow access to digital learning materials and do not interactively support the learning progress. Moreover, to date no study has compared the efficiency of learning methods based on physical versus digital study models. For this pilot study, 18 dental students were separated into two groups to investigate whether the learning success in study model analysis with an interactive eLearning system is higher with digital models or with conventional plaster models. The results show that less time is needed per model analysis with the digital method. Moreover, the digital approach leads to higher total scores than the one based on plaster models. We conclude that interactive eLearning using digital dental arch models is a promising tool for dental education.
This publication gives a short introduction and overview of the European project SCOUT and introduces a methodology for a holistic approach to recording the state of the art in technical enablers (vehicle and connectivity, human factors at the physiological and ergonomic level) and non-technical enablers (societal, economic, legal, regulatory, and policy level) of connected and automated driving in Europe. Besides the technical topics of environmental perception, E/E architecture, actuators, and security, the paper addresses the state of the art of the legal framework in the context of connected and automated driving.
Information systems that support the workflow in the clinical area are currently limited to organizational processes. This work shows a first approach to an information system supporting all actors in the perioperative area. The first prototype and proof of concept was a task manager giving all actors information about their own tasks and the tasks of all other actors during an intervention. Based on this initial task manager, we implemented an information system built on a workflow engine controlling all processes and all information necessary for the intervention. A second part was the development of a perioperative process visualization, which was developed in a user-centered approach jointly with clinicians and OR staff.
An operating room is a stressful work environment. Nevertheless, all involved persons have to work safely, as there is no room for mistakes. To ensure a high level of concentration and seamless interaction, all involved persons have to know their own tasks and the tasks of their colleagues. The entire team must work synchronously at all times. However, the operating room (OR) is a noisy environment and the actors have to focus on their work. To optimize the overall workflow, a task manager supporting the team was developed. Each actor is equipped with a client terminal showing a summary of their own tasks. Moreover, a big screen displays all tasks of all actors. The architecture is a distributed system based on a communication framework that supports the interaction of all clients with the task manager. A prototype of the task manager and several clients have been developed and implemented. The system represents a proof of concept for further development. This paper describes the concept of the task manager.
Workflow-driven support systems in the perioperative area have the potential to optimize clinical processes and to enable new situation-adaptive support systems. We started to develop a workflow management system supporting all involved actors in the operating theatre, with the goal of synchronizing the tasks of the different stakeholders by giving relevant information to the right team members. Using the OMG standards BPMN, CMMN, and DMN gives us the opportunity to bring established methods from other industries into the medical field. The system shows each addressed actor their information in the right place at the right time, making sure every member can execute their task in time to ensure a smooth workflow. The system has an overall view of all tasks. Accordingly, the workflow management system comprises the Camunda BPM workflow engine to run the models, a middleware to connect different systems to the workflow engine, and graphical user interfaces to show necessary information or to interact with the system. The complete pipeline is implemented as a RESTful web service. The system is designed to integrate other systems, such as a hospital information system (HIS), via the RESTful web service easily and without loss of data. The first prototype is implemented and will be expanded.
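As an illustrative sketch (not the authors' code), the middleware described above could start a process instance in the workflow engine via Camunda's public REST API. The process key and variable names used here are assumptions for demonstration only:

```python
import json

# Camunda process variables use the {"value": ..., "type": ...} convention.
_CAMUNDA_TYPES = {str: "String", int: "Integer", bool: "Boolean", float: "Double"}

def build_start_request(process_key, variables):
    """Build URL and JSON body for starting a Camunda process instance.

    Uses the public Camunda 7 REST endpoint
    POST /engine-rest/process-definition/key/{key}/start.
    The process key and variable names are illustrative, not taken
    from the actual system.
    """
    url = f"/engine-rest/process-definition/key/{process_key}/start"
    payload = {
        "variables": {
            name: {"value": v, "type": _CAMUNDA_TYPES[type(v)]}
            for name, v in variables.items()
        }
    }
    return url, json.dumps(payload)

# Example: notify the engine that an intervention has started in a given room.
url, body = build_start_request("perioperativeWorkflow", {"orRoom": "OR-3"})
```

The resulting request could then be sent by any HTTP client; connecting a HIS this way only requires mapping its events to process variables.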
Software Process Improvement (SPI) programs have been implemented, inter alia, to improve quality and speed of software development. SPI addresses many aspects ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state-of-the-art in SPI from a general perspective, we observed Software Quality Management (SQM) being of certain relevance in SPI programs. In this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of SQM (including testing). From the main study’s result set, 92 papers were selected for an in-depth systematic review to study the contributions and to develop an initial picture of how these topics are addressed in SPI. Our findings show a fairly pragmatic contribution set in which different solutions are proposed, discussed, and evaluated. Among others, our findings indicate a certain reluctance towards standard quality or (test) maturity models and a strong focus on custom review, testing, and documentation techniques, whereas a set of five selected improvement measures is almost equally addressed.
Mobile apps for sustainability in grocery shopping: increasing acceptance through gamification
(2022)
Sustainability has become an important topic in social sciences research as well as in the societal debate. Research in general indicates a high sensitivity to sustainability issues in broad parts of society; however, a change of consumption habits can hardly be observed. It can be argued that technology, such as mobile apps, can play an important role in fostering more sustainable behaviors and consumption habits, as such apps facilitate these behaviors, bring transparency to an unclear field, and reduce complexity. Our research hence approaches an important research gap, especially as currently existing apps show a lack of functionalities and UX. Using a Design Science Research (DSR) approach applying Chou’s Octalysis framework, we systematically analyzed eight apps in the field of sustainability and two general gamification apps as reference points, complementing our findings with issues discussed in the literature, and could identify a broad range of functionalities. This comprehensive analysis allowed us to develop an initial mockup of a potential app, which was then tested with a group of ten users using a semi-structured interview approach. Our findings contribute to knowledge by highlighting the importance of user experience for the acceptance of mobile apps, as well as by showcasing how gamification can contribute to a sustained use of mobile apps in this specific context.
We investigated the influence of body shape and pose on the perception of physical strength and social power for male virtual characters. In the first experiment, participants judged the physical strength of varying body shapes, derived from a statistical 3D body model. Based on these ratings, we determined three body shapes (weak, average, and strong) and animated them with a set of power poses for the second experiment. Participants rated how strong or powerful they perceived virtual characters of varying body shapes that were displayed in different poses. Our results show that perception of physical strength was mainly driven by the shape of the body. However, the social attribute of power was influenced by an interaction between pose and shape. Specifically, the effect of pose on power ratings was greater for weak body shapes. These results demonstrate that a character with a weak shape can be perceived as more powerful when in a high-power pose.
Today’s cars are characterized by many functional variants. There are many reasons for the underlying variability, from the adaptation to diverse markets to different technical aspects, which are based on a cross-platform reuse of software functions. Inevitably, this variability is reflected in model-based automotive software development. A modeling language widely used for modeling embedded software in the automotive industry is MATLAB/Simulink. There are concepts addressing the high demand for a systematic handling of variability in Simulink models. However, not every concept is suitable for every automotive application. In order to present a classification of concepts for modeling variability in Simulink, this paper first determines the relevant use cases for variant handling in model-based automotive software development. Existing concepts for modeling variability in Simulink are then presented before being classified in relation to the previously determined use cases.
Efficient and robust 3D object reconstruction based on monocular SLAM and CNN semantic segmentation
(2019)
Various applications implement SLAM technology, especially in the field of robot navigation. We show the advantage of SLAM technology for independent 3D object reconstruction. To receive a point cloud of every object of interest, void of its environment, we leverage deep learning. We utilize recent CNN deep learning research for accurate semantic segmentation of objects. In this work, we propose two fusion methods for CNN-based semantic segmentation and SLAM for the 3D reconstruction of objects of interest, in order to obtain more robustness and efficiency. As a major novelty, we introduce CNN-based masking to focus SLAM only on feature points belonging to each single object. Noisy, complex, or even non-rigid features in the background are filtered out, improving the estimation of the camera pose and the 3D point cloud of each object. Our experiments are constrained to the reconstruction of industrial objects. We present an analysis of the accuracy and performance of each method and compare the two methods, describing their pros and cons.
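The masking idea can be sketched in a few lines: given a binary segmentation mask from the CNN, only feature points inside the object are passed on to SLAM. This is a minimal illustration with names of our own choosing, not the paper's implementation:

```python
def mask_keypoints(keypoints, mask):
    """Keep only SLAM feature points that fall inside the CNN object mask.

    keypoints: list of (x, y) pixel coordinates detected by the SLAM front end.
    mask: 2D binary segmentation output; mask[y][x] is truthy inside the object.
    Background features (noisy, complex, or non-rigid) are discarded before
    pose estimation and mapping. Illustrative sketch only.
    """
    return [(x, y) for (x, y) in keypoints if mask[y][x]]

# Example: a 2x2 mask covering everything except the top-left pixel.
kept = mask_keypoints([(0, 0), (1, 0), (0, 1)], [[0, 1], [1, 1]])
```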
In the last 20 years there have been major advances in autonomous robotics. In IoT (Industry 4.0) settings, mobile robots require more intuitive interaction possibilities with humans in order to expand their field of applications. This paper describes a user-friendly setup which enables a person to lead the robot through an unknown environment. The environment has to be perceived by means of sensory input. For a cost- and resource-efficient Follow Me application we use a single monocular camera as a low-cost sensor. For efficient scaling of our Simultaneous Localization and Mapping (SLAM) algorithm, we integrate an inertial measurement unit (IMU). With the camera input we detect and track a person. We propose combining state-of-the-art deep learning with a Convolutional Neural Network (CNN) and SLAM algorithms on the same input camera image. Based on the output, robot navigation is possible. This work presents the specification and workflow for an efficient development of the Follow Me application. The application’s delivered point clouds are also used for surface construction. For demonstration, we use our platform SCITOS G5 equipped with the aforementioned sensors. Preliminary tests show the system works robustly in the wild.
For collision and obstacle avoidance as well as trajectory planning, robots usually generate and use a simple 2D costmap without any semantic information about the detected obstacles. Thus, a robot’s path planning will simply adhere to an arbitrarily large safety margin around obstacles. A more optimal approach is to adjust this safety margin according to the class of an obstacle. For class prediction, an image-processing convolutional neural network can be trained. One of the problems in the development and training of any neural network is the creation of a training dataset. The first part of this work describes methods and free open-source software allowing a fast generation of annotated datasets. Our pipeline can be applied to various objects and environment settings and is extremely easy for anyone to use for synthesising training data from 3D source data. We create a fully synthetic industrial environment dataset with 10k physically based rendered images and annotations. Our dataset and sources are publicly available at https://github.com/LJMP/synthetic-industrial-dataset. Subsequently, we train a convolutional neural network with our dataset for costmap safety class prediction. We analyse different class combinations and show that learning the safety classes end-to-end directly with a small dataset, instead of using a class lookup table, improves the quantity and precision of the predictions.
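The class-lookup-table baseline that the end-to-end approach is compared against can be sketched as follows; the class labels and margin values here are illustrative choices of ours, not values from the paper:

```python
# Lookup-table baseline: map a predicted obstacle class to a costmap
# inflation radius. Labels and margins below are illustrative only.
SAFETY_MARGIN_M = {
    "human": 1.0,      # large margin around people
    "forklift": 0.8,   # moving machinery
    "pallet": 0.3,     # static, well-localized obstacle
    "wall": 0.2,
}
DEFAULT_MARGIN_M = 0.5  # fallback for classes the network has not seen

def margin_for(predicted_class: str) -> float:
    """Return the safety margin (in meters) for a predicted obstacle class."""
    return SAFETY_MARGIN_M.get(predicted_class, DEFAULT_MARGIN_M)
```

Learning safety classes end-to-end replaces this fixed table with margins predicted directly by the network.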
Near-Data Processing is a promising approach to overcome the limitations of slow I/O interfaces in the quest to analyze the ever-growing amount of data stored in database systems. Next to CPUs, FPGAs will play an important role for the realization of functional units operating close to data stored in non-volatile memories such as Flash. It is essential that the NDP device understands formats and layouts of the persistent data to perform operations in situ. To this end, carefully optimized format parsers and layout accessors are needed. However, designing such FPGA-based Near-Data Processing accelerators requires significant effort and expertise. To make FPGA-based Near-Data Processing accessible to non-FPGA experts, we present a framework for the automatic generation of FPGA-based accelerators capable of data filtering and transformation for key-value stores based on simple data-format specifications. The evaluation shows that our framework is able to generate accelerators that are almost identical in performance compared to the manually optimized designs of prior work, while requiring little to no FPGA-specific knowledge and additionally providing improved flexibility and more powerful functionality.
There have been substantial research efforts in recent years on algorithms to improve the continuous and automated assessment of various health-related questions. This paper addresses the deployment gap between those improving algorithms and their usability in care and mobile health applications. In practice, most algorithms require significant and well-founded technical knowledge to be deployed at home or to support healthcare professionals. Therefore, the digital participation of persons in need of care and of healthcare professionals lacks a usable interface to the current technological advances. In this paper, we propose deploying algorithms taken from research as web-based microservices following the common approach of a RESTful service, to bridge the gap and make algorithms accessible to caregivers and patients without technical knowledge or extended hardware capabilities. We address implementation details, the interpretation and realization of guidelines, and privacy concerns using our self-implemented example. We also address further usability guidelines and our approach to them.
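The wrapping pattern can be illustrated with a minimal, framework-free sketch: a research algorithm exposed behind one RESTful POST endpoint. The algorithm stand-in, endpoint path, and payload names are our assumptions, not the paper's service:

```python
import io
import json

def respiratory_rate(samples):
    """Stand-in for a published analysis algorithm; here simply the mean."""
    return sum(samples) / len(samples)

def app(environ, start_response):
    """Minimal WSGI app exposing the algorithm as a RESTful microservice.

    POST /api/v1/respiratory-rate with JSON {"samples": [...]} returns
    {"result": ...}. Path and field names are illustrative only.
    """
    if (environ["REQUEST_METHOD"] == "POST"
            and environ["PATH_INFO"] == "/api/v1/respiratory-rate"):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        payload = json.loads(environ["wsgi.input"].read(length))
        body = json.dumps({"result": respiratory_rate(payload["samples"])}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Any WSGI server (or a richer framework) can host such an app, so caregivers only ever interact with an HTTP endpoint rather than with the algorithm's code.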
This paper presents a generic method to enhance performance and incorporate temporal information for cardiorespiratory-based sleep stage classification with a limited feature set and limited data. The classification algorithm relies on random forests and a feature set extracted from long-term home monitoring for sleep analysis. Employing temporal feature stacking, the system could be significantly improved in terms of Cohen’s κ and accuracy. The detection performance could be improved for three classes of sleep stages (Wake, REM, Non-REM sleep), four classes (Wake, Non-REM light sleep, Non-REM deep sleep, REM sleep), and five classes (Wake, N1, N2, N3/4, REM sleep) from a κ of 0.44 to 0.58, 0.33 to 0.51, and 0.28 to 0.44, respectively, by stacking features from before and after the epoch to be classified. Further analysis was done to find the optimal length and combination method for this stacking approach. Overall, three methods and a variable duration between 30 s and 30 min have been analyzed. Overnight recordings of 36 healthy subjects from the Interdisciplinary Center for Sleep Medicine at Charité-Universitätsmedizin Berlin and leave-one-out cross-validation at the patient level have been used to validate the method.
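Temporal feature stacking as described above can be sketched in a few lines: each epoch's feature vector is concatenated with those of its neighbouring epochs before classification. Window sizes and the edge-padding strategy here are our illustrative choices (the paper analyzes several combination methods and durations):

```python
def stack_epochs(features, before=2, after=2):
    """Concatenate each epoch's feature vector with those of its neighbours.

    features: list of per-epoch feature vectors (lists of numbers), e.g.
    cardiorespiratory features per 30 s sleep epoch. For every epoch, the
    vectors of `before` preceding and `after` following epochs are appended;
    epochs at the recording edges are padded by repeating the boundary epoch.
    The stacked vectors can then be fed to a random forest classifier.
    """
    n = len(features)
    stacked = []
    for i in range(n):
        row = []
        for offset in range(-before, after + 1):
            j = min(max(i + offset, 0), n - 1)  # clamp at recording edges
            row.extend(features[j])
        stacked.append(row)
    return stacked

# Example: 3 epochs with 2 features each, one neighbour on each side.
stacked = stack_epochs([[0, 1], [2, 3], [4, 5]], before=1, after=1)
```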
We present a compact battery charger topology for weight- and cost-sensitive applications with an average output current of 9 A, targeted at 36 V batteries commonly found in electric bicycles. Instead of using a conventional boost converter with large DC-link capacitors, we accomplish PFC functionality by shaping the charging current into a sin² shape. In addition, a novel control scheme without input-current sensing is introduced. A priori knowledge is used to implement a feed-forward control in combination with a closed-loop output current control to maintain the target current. The use of a full-bridge/half-bridge LLC converter enables operation over a wide input-voltage range.
A fully featured prototype has been built with a peak output power of 1050W. An average output power of 400W was measured, resulting in a power density of 1.8 kW/dm³. At 9A charging current, a power factor of 0.96 was measured and the efficiency exceeds 93% on average with passive rectification.
The impact of pulse charging has been evaluated on a 400 Wh battery, which was charged with the proposed converter as well as with CC-CV charging for reference. Both charging schemes show similar battery surface temperatures.
The Dual Active Bridge (DAB) is a very promising topology for future power converters. However, careless operation can lead to a DC component in the transformer current. The problem is further exacerbated when the phase shift changes during operation. This work presents a study of DC bias effects on the DAB with special regard to transient effects introduced by sudden shifts in the output load. We present a simple yet effective approach to avoid DC bias entirely.
This paper presents an efficient implementation of a reconfigurable battery stack which allows full exploitation of the capacity of every single cell. Contrary to most other approaches, it is possible to electrically remove one or more cells from the battery stack. Therefore, the overall capacity of the system is not restricted by the weaker cells, and cells with very different states of health can be used, making the system very attractive for refurbished batteries. For the required switches, low-voltage high-current MOSFETs are used. A demonstrator has been built with a total capacity of up to 3.5 kWh, a nominal voltage of 35 V, and currents of up to 200 A.
A novel configuration of the dual active bridge (DAB) DC/DC converter is presented, enabling more efficient wide-voltage-range conversion at light loads. A third phase leg as well as a center-tapped transformer are introduced on one side of the converter. This concept provides two different turn ratios, thus extending the zero voltage switching operation and resulting in higher efficiency. A laboratory prototype was built converting an input voltage of 40 V to an output voltage in the range of 350 V to 650 V. Measurements show a significant efficiency increase of up to 20% for light-load operation.
This paper presents a control strategy for optimal utilization of photovoltaic (PV) generated power in conjunction with an Energy Storage System (ESS). The ESS is specifically designed to be retrofitted into existing PV systems in an end-user application. It can be attached in parallel to the PV system and connects to existing DC/AC inverters. In particular, the study covers the impact such a modification has on the output power of existing PV panels. A distinct degradation of PV output power was found due to the different power characteristics of PV panel and ESS. To overcome such degradation a novel feedback system is proposed. The feedback system continuously modifies the power characteristic of the ESS to match the PV panel and thus achieves optimal power utilization. Impact on PV and power point tracking performance is analyzed. Simulation of the proposed system is performed in MATLAB/Simulink. The results are found to be satisfactory.
We present a dual active bridge topology suitable for wide-voltage-range applications covering all combinations of 200 V to 600 V on the input and 20 V to 60 V on the output with constant power of 1 kW. We employ a stepped inductance scheme to adjust the effective inductance of the converter, thus extending the efficient operation range. Using a variable switching frequency between 35 kHz and 150 kHz with operation-point-dependent limits further increases the performance of the converter. A prototype was built and the proposed changes have been compared to a fixed-frequency, fixed-inductance implementation. Measurements show a maximum loss reduction of 40%, leading to a peak efficiency of 97% while maintaining constant output power over the entire working area.
Being able to monitor the heart activity of patients during their daily life in a reliable, comfortable, and affordable way is one main goal of personalized medicine. Current wearable solutions fall short either in wearing comfort, in the quality and type of the data provided, or in the price of the device. This paper shows the development of a Textile Sensor Platform (TSP) in the form of an electrocardiogram (ECG)-measuring T-shirt that is able to transmit the ECG signal to a smartphone. The development process includes the selection of the materials, the design of the textile electrodes taking into consideration their electrical characteristics and ergonomics, the integration of the electrodes on the garment, and their connection with the embedded electronics. The TSP is able to transmit a real-time stream of the ECG signal to an Android smartphone through Bluetooth Low Energy (BLE). Initial results show a good electrical quality of the textile electrodes and promising results in the capture and transmission of the ECG signal. This is still a work in progress and is the result of an interdisciplinary master project between the School of Informatics and the School of Textiles & Design of Reutlingen University.
Functionally impaired people have problems choosing and finding the right clothing, and they need help in their daily lives washing and managing it. The goal of this work is to support the user with recommendations on choosing the right clothing, finding it, and washing it. The idea behind eKlarA is a gateway-based system that uses sensors to identify clothing items and their state in the clothing cycle. The clothing cycle consists of one or more closets, laundry baskets, and washing machines in one or several places. The gateway uses information about the clothing, the weather, and the calendar to support the user in the different steps of the clothing cycle. This gives functionally impaired people more freedom in their daily lives.
How can the skin be protected from sunburn? The sun can damage your skin, e.g. by causing skin cancer, but it also has positive effects on humans. The time spent in the sun and its intensity are the key values between enjoying a sunbath and harming the skin. A smart device like a UV flower can help you enjoy the sunbath: it measures the UV index around you and sends this information to a smartphone app. The development steps of such a device are described in this paper. The UV flower is made of textile fabrics.
Switched reluctance motors are particularly attractive due to their simple structure. The control of this machine type requires the instants at which to switch the currents in the motor phases in an appropriate sequence. These switching instants are determined either by a position sensor or by signals generated by a sensorless method. A very simple sensorless method uses the switching frequency of the hysteresis controllers used for phase current control. This paper presents, first, an automatic commissioning method for this sensorless approach and, second, a startup procedure, thus advancing it towards industrial application.
Collaborative apparel consumption is proposed as a more sustainable alternative to conventional consumption. The purpose of this study is to explore consumers’ motives to participate in collaborative apparel consumption. Findings suggest that consumers’ intention to participate is mainly influenced by financial benefits, convenience, and sustainability awareness.
Internet of Things innovations and the industrial internet are becoming more and more decisive factors of future success for companies. Manufacturing-oriented SMEs in particular will face the challenge of developing innovative technology-driven business models alongside technology innovations in this field, which will be essential for future competitiveness. Failing to develop these technology-driven business models in an internationally highly competitive environment will have a serious impact both on companies and on society. Hence, securing the economic stability and success of these technology-driven business models is an indispensable task. To identify challenges for innovative industrial internet business models, it is first necessary to understand what the industrial internet means to the leading parties, applying companies, and start-ups in the field. Second, challenges from general business model development are outlined. In a third step, risks and challenges in business model development are discussed with regard to the special characteristics of technology-driven business models in the context of the industrial internet and the important role of the technological key component of the business model. In particular, the capability to deal with an integrated consideration of the inseparably linked economic and technological dimensions of these business models is questioned. Fourth, the specific challenges for industrial internet business models are derived. On the basis of these results, it is also discussed what might be done to handle these challenges successfully, with the goal of turning them into opportunities. The need for future research on integrating the risk management perspective into the development of these technology-driven business models is derived. This will help established companies and start-ups to realize great technological innovations for the industrial internet in sound and successful innovative business models.
Context: The manufacturing industry is facing a transformation with regard to Industry 4.0 (I4). A transformation towards full automation of production including a multitude of innovations is necessary. Startups and entrepreneurial processes can support such a transformation as has been shown in other industries. However, I4 has some specifics, so it is unclear how entrepreneurship can be adapted in I4. Understanding these specifics is important to develop suitable training programs for I4 startups and to accelerate the transformation.
Objective: This study identifies and outlines the essential characteristics and constraints of entrepreneurial processes in I4.
Method: 14 semi-structured interviews were conducted with experts in the field of I4 entrepreneurship. The interviews were analysed and categorized using qualitative analysis.
Results: The interviews revealed several characteristics of I4 that have a significant impact on the various phases of the entrepreneurial process. Examples of such specifics include the difficult access to customers, the necessary deep understanding of the customer and the domain, the difficulty of testing risky assumptions, and the complex development and productization of solutions. The complexity of hardware and software components, cost structures, and necessary customer-specific customizations affect the scalability of I4 startups. These essential characteristics also require specialised skills and resources from I4 startups.
Product roadmaps in the new mobility domain: state of the practice and industrial experiences
(2021)
Context: The New Mobility industry is a young market characterized by high market dynamics and therefore associated with a high degree of uncertainty. Traditional product roadmapping approaches, such as detailed planning of features over a long time horizon, typically fail in such environments. For this reason, companies that are active in the field of New Mobility are faced with the challenge of keeping their product roadmaps reliable for stakeholders while at the same time being able to react flexibly to changing market requirements.
Objective: The goal of this paper is to identify the state of practice regarding product roadmapping of New Mobility companies. In addition, the related challenges within the product roadmapping process as well as the success factors to overcome these challenges will be highlighted.
Method: We conducted semi-structured expert interviews with eight experts (seven from German companies and one from a Finnish company) from the field of New Mobility and performed a content analysis.
Results: Overall, the results of the study showed that the participating companies are aware of the requirements that the New Mobility sector entails. Accordingly, they exhibit a high level of maturity in terms of product roadmapping. Nevertheless, some aspects were revealed that pose specific challenges for the participating companies. One major challenge, for example, is that New Mobility with public clients is often a tender business with non-negotiable product requirements; thus, the product roadmap can be significantly influenced from the outside. As factors for successful product roadmapping, mainly soft factors were mentioned, such as trust between all people involved in the product development process and transparency throughout the entire roadmapping process.
The financial crisis of 2007-2010 was probably one of the greatest, most illustrious black-swan events that people of our generation(s) will experience – and at its heart, it was a dynamic phenomenon. It is stated in the vision of the System Dynamics Society that we aspire to transform society by influencing decision-making. Yet, it seems as if system dynamics did not play any significant role in this crisis: we did not examine the markets, we did not provide insights to banks, and we did not warn governments or the people. In our presentation we describe the dynamics involved in a housing bubble, and describe what made the last one different. With the insights gained from this exercise we conclude that, from a system dynamics perspective, the dimension of the financial crisis of 2007-2009 was eminently foreseeable, which leads us to pose the following question: where were we as a field while this crisis was unfolding, and why were we not active players? We present a range of potential answers to this question, hoping to provoke some reflection… and maybe some (re)action.
Increasing flexibility, greater transparency and faster adaptability play a key role in the development of future intralogistics. Ever-changing environmental conditions require easy extensibility and modifiability of existing bin systems. This research project explores approaches to transfer the Internet of Things (IoT) paradigm to intralogistics. This allows a synchronization of the material and information flow. The bin is enabled by the implementation of adequate hardware and software components to capture, store, process and forward data to selected system subscribers. Monitoring the processes in the intralogistics by means of the smart bin system ensures the implementation of appropriate actions in case of defined deviations. By using explorative expert interviews with representatives from the automotive and pharmaceutical industries, seven practical application scenarios were defined. On this basis, the requirements of smart bin systems were examined. For each individual case of application, a system model was created in order to obtain an overview of the system components and thus reveal similarities and differences. Based on the similarities of the system models, a general requirement profile was derived. After the hardware components of the bin system had been determined, a utility analysis was carried out to find the adequate IoT software. The utility analysis was conducted with a focus on data acquisition and data transfer, data storage, data analysis, data presentation as well as authorization management and data security. The results show that there is great interest in easily expandable and modifiable bin systems, as in all cases, the necessary information flow in the existing bin system has to be improved by means of new IoT hardware and software components.
The amount of image data has been rising exponentially over the last decades due to numerous trends like social networks, smartphones, automotive, biology, medicine and robotics. Traditionally, file systems are used as storage. Although they are easy to use and can handle large data volumes, they are suboptimal for efficient sequential image processing because their data organisation is limited to single images. Database systems, and especially column stores, support more structured storage and access methods at the raw-data level for entire series.
In this paper we propose definitions of various layouts for the efficient storage of raw image data and metadata in a column store. These schemes are designed to improve the runtime behaviour of image processing operations. We present the column-store Image Processing Toolbox (cIPT), a tool that makes it easy to combine the data layouts and operations for different image processing scenarios.
The experimental evaluation of a classification task on a real-world image dataset indicates a performance increase of up to 15× on a column store compared to a traditional row store (PostgreSQL), while space consumption is reduced by 7×. With these results, cIPT provides the basis for a future mature database feature.
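The layout idea behind a column store for image series can be illustrated with a small sketch. The code below is not cIPT itself but a minimal, hypothetical comparison of a row-wise pixel layout against a column-wise (per-channel) layout, showing why a sequential per-channel operation touches only one contiguous array in the columnar case; all names and sizes are made up for illustration.

```python
import numpy as np

# Hypothetical sketch (not the cIPT implementation): store an image series
# row-wise (one interleaved record per pixel) vs. column-wise (one
# contiguous array per channel for the whole series).
n_images, h, w = 100, 64, 64
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(n_images, h, w, 3), dtype=np.uint8)

# Row-store style: interleaved (r, g, b) records, one per pixel.
row_store = images.reshape(-1, 3)

# Column-store style: each channel stored contiguously across the series.
col_store = {c: images[..., i].ravel().copy() for i, c in enumerate("rgb")}

# A per-channel aggregate (e.g., mean red intensity over the entire series)
# scans one contiguous array in the columnar layout, instead of striding
# through interleaved records.
mean_red_columnar = col_store["r"].mean()
mean_red_rowwise = row_store[:, 0].mean()
assert abs(mean_red_columnar - mean_red_rowwise) < 1e-9
```

Both layouts hold the same data; the columnar one simply keeps each attribute dense and cache-friendly for series-wide scans, which is the property the reported 15× speedup builds on.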
Current data-intensive systems suffer from poor scalability, as they transfer massive amounts of data to the host DBMS in order to process it there. Novel near-data processing (NDP) DBMS architectures and smart storage can provably reduce the impact of raw data movement. However, transferring the result set of an NDP operation may itself increase data movement and, thus, the performance overhead. In this paper, we introduce a set of in-situ NDP result-set management techniques, such as spilling, materialization, and reuse. Our evaluation indicates a performance improvement of 1.13× to 400×.
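The result-set techniques named above (spill, materialize, reuse) can be sketched abstractly. The following is a hypothetical toy model, not the paper's implementation: results above a made-up size threshold are materialized on the device and reused by later identical operations instead of being recomputed and re-transferred.

```python
# Hypothetical sketch of in-situ NDP result-set management (illustrative
# only): large result sets are spilled/materialized device-side and reused.
class NDPResultManager:
    SPILL_THRESHOLD = 4  # made-up size limit for illustration

    def __init__(self):
        self.materialized = {}  # operation signature -> cached result

    def run(self, op_key, compute):
        if op_key in self.materialized:
            return self.materialized[op_key]      # reuse, no recomputation
        result = compute()
        if len(result) > self.SPILL_THRESHOLD:    # spill + materialize
            self.materialized[op_key] = result
        return result

calls = {"n": 0}
def compute_scan():
    calls["n"] += 1
    return list(range(10))

mgr = NDPResultManager()
first = mgr.run("scan>0", compute_scan)
second = mgr.run("scan>0", compute_scan)
assert first == second and calls["n"] == 1  # second call hit the cache
```

The point of the sketch is only the control flow: once a result set is materialized in-situ, subsequent operations consume it without moving the raw data again.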
Modern persistent Key/Value stores are designed to meet the demand for high transactional throughput and high data ingestion rates. Still, they rely on a backwards-compatible storage stack and abstractions to ease space management and to foster seamless proliferation and system integration. Their dependence on the traditional I/O stack has a negative impact on performance, causes unacceptably high write amplification, and limits storage longevity.
In the present paper we present NoFTL-KV, an approach that results in a lean I/O stack, integrating physical storage management natively into the Key/Value store. NoFTL-KV eliminates backwards compatibility, allowing the Key/Value store to directly exploit the characteristics of modern storage technologies. NoFTL-KV is implemented under RocksDB. The performance evaluation under LinkBench shows that NoFTL-KV improves transactional throughput by 33%, while response times improve by up to 2.3×. Furthermore, NoFTL-KV reduces write amplification by 19× and improves storage longevity by approximately the same factor.
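The write-amplification figure can be made concrete with a back-of-the-envelope model. This is an illustrative calculation, not taken from the paper: write amplification (WA) is the ratio of bytes physically written to flash to the bytes the application logically wrote, and removing a redundant translation layer lowers that ratio.

```python
# Illustrative model (not from the paper): write amplification is
# physical bytes written divided by logical bytes the application wrote.
def write_amplification(app_bytes, flash_bytes):
    return flash_bytes / app_bytes

# Traditional stack: LSM compaction and the FTL each rewrite data, so one
# logical byte can turn into many physical writes (numbers are made up).
wa_traditional = write_amplification(app_bytes=1.0, flash_bytes=19.0)

# Native storage management (the NoFTL-KV idea): the redundant FTL layer is
# gone; here modelled, for illustration, as one physical write per byte.
wa_native = write_amplification(app_bytes=1.0, flash_bytes=1.0)

reduction = wa_traditional / wa_native
assert reduction == 19.0  # the shape of the reported 19x reduction
```

Since each physical write consumes a flash program/erase cycle, a 19× lower WA translates into roughly 19× longer device lifetime, which is why the abstract reports storage longevity improving by approximately the same factor.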
Data analytics tasks on large datasets are computationally intensive and often demand the compute power of cluster environments. Yet, data cleansing, preparation, dataset characterization, and statistics or metrics computation steps are frequent. These are mostly performed ad hoc, in an explorative manner, and mandate low response times. However, such steps are I/O-intensive and typically very slow due to low data locality and inadequate interfaces and abstractions along the stack. These typically result in prohibitively expensive scans of the full dataset and in transformations at interface boundaries.
In this paper, we examine R as an analytical tool managing large persistent datasets in Ceph, a widespread cluster file system. We propose nativeNDP – a framework for Near-Data Processing that pushes down primitive R tasks and executes them in-situ, directly within the storage device of a cluster node. Across a range of data sizes, we show that nativeNDP is more than an order of magnitude faster than other pushdown alternatives.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become practical.
The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in a traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under NoFTL-KV and the COSMOS hardware platform.
Massive data transfers in modern key/value stores, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper we introduce nKV, a key/value store utilizing native computational storage and near-data processing. On the one hand, nKV can directly control data and computation placement on the underlying storage hardware. On the other hand, nKV propagates the data formats and layouts to the storage device, where software and hardware parsers and accessors are implemented. Both allow NDP operations to execute in a host-intervention-free manner, directly on physical addresses, and thus to better utilize the underlying hardware. Our performance evaluation is based on executing traditional KV operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4×-2.7× better performance on real hardware, the COSMOS+ platform.
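The data-movement argument behind NDP pushdown can be sketched in a few lines. The code below is a hypothetical toy, not nKV's interface: it contrasts a host-side SCAN, where every record crosses the device/host boundary before filtering, with a pushed-down SCAN, where the predicate runs next to the data and only matches move.

```python
# Hypothetical sketch of NDP pushdown (illustrative only, not nKV's API).
# A toy key/value store with per-key records.
store = {f"key{i:04d}": {"value": i} for i in range(1000)}

def host_side_scan(predicate):
    # Traditional data-to-code: all records cross the device/host boundary,
    # and the host filters afterwards.
    transferred = list(store.items())              # full data movement
    return [k for k, v in transferred if predicate(v)], len(transferred)

def ndp_scan(predicate):
    # Code-to-data: the predicate executes where the data lives, so only
    # the result set is transferred back to the host.
    matches = [k for k, v in store.items() if predicate(v)]
    return matches, len(matches)

pred = lambda v: v["value"] % 100 == 0
hits_host, moved_host = host_side_scan(pred)
hits_ndp, moved_ndp = ndp_scan(pred)
assert hits_host == hits_ndp      # same answer either way
assert moved_ndp < moved_host     # but far less data moved with pushdown
```

In the real system the gap is wider still, because nKV's device-side accessors work directly on physical addresses without host intervention; the sketch only captures the reduction in transferred records.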
The field of breath analysis has developed to be of growing interest in medical diagnosis and patient monitoring. The main advantages are that it is noninvasive, painless, and repeatable in flexible cycles. Even though breath analysis has been researched for a couple of decades, there are still many unanswered questions. Human breath contains volatile organic compounds which are emitted from inside the body. Some of these compounds can be assigned to specific sources, such as inflammation or cancer, but also to non-health-related origins. This paper gives an overview of breath analysis for the purpose of disease diagnosis and health monitoring. To this end, literature regarding breath analysis in the medical field has been analyzed, from its early stages to the present. As a result, this paper gives an outline of the topic of breath analysis.
Nowadays there is a rich diversity of sleep monitoring systems available on the market. They promise to offer information about the sleep quality of the user by recording a limited number of vital signals, mainly heart rate and body movement. Typically, fitness trackers, smart watches, smart shirts, smartphone applications or patches do not provide access to the raw sensor data. Moreover, the sleep classification algorithm and the agreement ratio with the gold standard, polysomnography (PSG), are not disclosed. Some commercial systems record and store the data on the wearable device, but the user needs to transfer and import it into specialised software applications or return it to the doctor for clinical evaluation of the data set. Thus, an immediate feedback mechanism or the possibility of remote control and supervision is lacking. Furthermore, many such systems only distinguish between sleep and wake states, or between wake, light sleep and deep sleep. It is not always clear how these stages are mapped to the four known sleep stages: REM, NREM1, NREM2, NREM3-4 [1]. The goal of this research is to find a reduced-complexity method to process a minimum number of bio-vital signals while providing accurate sleep classification results. The model we propose offers remote control and real-time supervision capabilities by using Internet of Things (IoT) technology. This paper focuses on the data processing method and the sleep classification logic. The body sensor network representing our data acquisition system will be described in a separate publication. Our solution showed promising results and good potential to overcome the limitations of existing products. Further improvements will be made, and subjects of different ages and health conditions will be tested.
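A reduced-complexity classification from heart rate and movement, as described above, can be illustrated with a minimal sketch. This is not the authors' algorithm: the epoch structure, thresholds, and stage labels below are made-up placeholders showing only the general shape of a rule-based classifier over two signals.

```python
# Illustrative rule-based epoch classifier (NOT the paper's method).
# Each epoch is summarized by mean heart rate (bpm) and a movement count;
# all thresholds are invented placeholders for illustration.
def classify_epoch(heart_rate_bpm, movement_count):
    if movement_count > 10 or heart_rate_bpm > 75:
        return "wake"       # high activity or elevated heart rate
    if heart_rate_bpm > 58:
        return "light"      # moderate heart rate, little movement
    return "deep"           # low heart rate, no movement

# Three example epochs: restless, settling, and deep sleep.
epochs = [(80, 15), (62, 3), (52, 0)]
stages = [classify_epoch(hr, mv) for hr, mv in epochs]
assert stages == ["wake", "light", "deep"]
```

A real system would calibrate such rules (or a learned model) against PSG-scored recordings; the sketch only conveys why two inexpensive signals can support a coarse wake/light/deep staging.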
With significant advancements in digital technologies, firms find themselves competing in an increasingly dynamic business environment. Therefore, the logic of business decisions is based on the agility to respond to emerging trends in a proactive way. By contrast, traditional IT governance (ITG) frameworks rely on hierarchy and standardized mechanisms to ensure better business/IT alignment. This conflict leads to a call for an ambidextrous governance, in which firms alternate between stability and agility in their ITG mechanisms. Accordingly, this research aims to explore how agility might be integrated in ITG. A quantitative research strategy is implemented to explore the impact of agility on the causal relationship among ITG, business/IT alignment, and firm performance. The results show that the integration of agile ITG mechanisms contributes significantly to the explanation of business/IT alignment. As such, firms need to develop a dual governance model powered by traditional and agile ITG mechanisms.
Information technology (IT) plays an essential role in organizational innovation adoption. As such, IT governance (ITG) is paramount in accompanying IT to allow innovation. However, the traditional concept of ITG to control the formulation and implementation of IT strategy is not fully equipped to deal with the current changes occurring in the digital age. Today’s ITG needs an agile approach that can respond to changing dynamics. Consequently, companies are relying heavily on agile strategies to secure better company performance. This paper aims to clarify how organizations can implement agile ITG. To do so, this study conducted 56 qualitative interviews with professionals from the banking industry to identify agile dimensions within the governance construct. The qualitative evaluation uncovered 46 agile governance dimensions. Moreover, these dimensions were rated by 29 experts to identify the most effective ones. This led to the identification of six structure elements, eight processes, and eight relational mechanisms.
With significant advancements in digital technologies, firms find themselves competing in an increasingly dynamic business environment. It is of paramount importance that organizations undertake proper governance mechanisms with respect to their business and IT strategies. Therefore, IT governance (ITG) has become an important factor for firm performance. In recent years, agility has evolved as a core concept for governance, especially in the area of software development. However, the impact of agility on ITG and firm performance has not been analyzed by the broad scientific community. This paper focuses on the question, how the concept of agility affects the ITG–firm performance relationship. The conceptual model for this question was tested by a quantitative research process with 400 executives responding to a standardized survey. Findings show that the adoption of agile principles, values, and best practices to the context of ITG leads to meaningful results for governance, business/IT alignment, and firm performance.
Digital transformation has changed corporate reality and, with that, firms' IT environments and IT governance (ITG). As such, the perspective of ITG has shifted from the design of a relatively stable, closed and controllable system of a self-sufficient enterprise to a relatively fluid, open, agile and transformational system of networked co-adaptive entities. Related to this paradigm shift in ITG, this paper aims to clarify how the concept of an effective ITG framework has changed in terms of the demand for agility in organizations. Thus, this study conducted 33 qualitative interviews with executives and senior managers from the banking industry in Germany, Switzerland and Austria. Analysis of the interviews focused on the formation of categories and the assignment of individual text parts (codings) to these categories to allow for a quantitative evaluation of the codings per category. Regarding traditional and agile ITG dimensions, 22 traditional and 25 agile dimensions were identified. Moreover, agile strategies within the agile ITG construct and ten ITG patterns were identified from the interview data. The data show relevant perspectives on the implementation of traditional and new ITG dimensions and highlight ambidextrous aspects in ITG frameworks.
IT Governance (ITG) is crucial due to its significant impact on enabling innovation and enhancing firm performance. Hence, in the last decade ITG has become important in both academic and practical research. Although several studies have investigated individual aspects of ITG success and its impact on single determinants, the causal relationship of how ITG promotes firm performance remains unclear. Thus, a more comprehensive understanding of the link between ITG and firm performance is needed. To address this gap, this research aims at understanding how ITG and firm performance are related. Therefore, we conducted a systematic literature review (1) to create an overview of how current research structures the link between ITG mechanisms and firm performance, (2) to uncover key constructs as potential mediators or moderators of the general link between ITG and performance, and (3) to set the basis for future studies on the ITG-firm performance relationship.