Background
Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analyzing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics.
Methods
We defined Surgomics as the entirety of surgomic features, i.e. process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources, such as endoscopic videos, vital sign monitoring, and medical devices and instruments, together with the respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers to rate the features’ clinical relevance and technical feasibility.
Results
In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was “surgical skill and quality of performance” for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was “Instrument” (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were “intraoperative adverse events”, “action performed with instruments”, “vital sign monitoring”, and “difficulty of surgery”.
Conclusion
Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
This book presents an empirical investigation of the efforts that multinational pharmaceutical companies take in order to find a business model that allows for a profitable access to the Bottom of the Pyramid (BoP) markets. The Bottom of the Pyramid in Africa is frequently mentioned as an attractive market due to its sheer size. Yet most companies struggle to access it because of the low price level, difficult physical market access and challenges when it comes to payment.
More specifically, the book investigates the following business model-related questions: Do pharmaceutical companies provide products that meet the needs of the BoP? What characterizes the value generation of the company? What revenue model leads to a profitable business, and what role does a network of partners play in the business model?
Findings reveal that there is no ‘one-size-fits-all’ answer to these questions. Providing continuous availability and affordability of goods and services at good quality, creating health awareness, and localizing business to achieve a level of inclusiveness are essential prerequisites for success. In the last chapter this book provides a business model prototype that accounts for these key success factors for business at the Bottom of the Pyramid and points to further research topics.
Besides the optimisation of the car, energy efficiency and safety can also be increased by optimising driving behaviour. Based on this fact, a driving system is in development whose goal is to educate the driver in energy-efficient and safe driving. It monitors the driver, the car and the environment and gives recommendations relevant to energy efficiency and safety. However, the driving system tries not to distract or bother the driver, for example by withholding recommendations during stressful driving situations or when the driver is not interested in a recommendation. Therefore, the driving system monitors the stress level of the driver as well as the driver's reaction to a given recommendation and decides whether to give a recommendation or not. This makes it possible to suppress recommendations when needed and thus to increase road safety and the user acceptance of the driving system.
Stress is becoming an important topic in modern life. The influence of stress results in a higher rate of health disorders such as burnout, heart problems, obesity, asthma, diabetes, depression and many others. Furthermore, an individual’s behavior and capabilities can be directly affected, leading to altered cognition and impaired decision-making and problem-solving skills. In a dynamic and unpredictable environment such as driving, this can result in a higher risk of accidents. Several papers have addressed the estimation and prediction of drivers’ stress levels during driving. Another important question concerns not only the stress level of the driver himself, but also the influence on and of a group of other drivers in the surrounding area. This paper proposes a system that determines groups of drivers in a surrounding area as clusters and derives their individual stress levels. This information is analyzed to generate a stress map, which gives a graphical view of road sections with higher stress influence. Aggregated data can be used to generate navigation routes with lower stress influence, to decrease stress-influenced driving and improve road safety.
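The abstract does not detail the clustering or aggregation method. The Python sketch below shows one plausible reading: drivers are grouped into grid cells as a stand-in for "surrounding area" clusters, per-cell mean stress forms the stress map, and candidate routes are ranked by accumulated stress. The function names and the grid-based clustering are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def build_stress_map(driver_samples, cell_size=0.01):
    """Aggregate individual driver stress levels into a grid-based stress map.

    driver_samples: iterable of (latitude, longitude, stress_level) tuples,
    where stress_level is assumed normalized to [0, 1].
    cell_size: grid cell edge in degrees -- a simple stand-in for
    clustering drivers in a surrounding area.
    """
    cells = defaultdict(list)
    for lat, lon, stress in driver_samples:
        cell = (round(lat / cell_size), round(lon / cell_size))
        cells[cell].append(stress)
    # Mean stress per cell approximates the 'stress influence' of a road section.
    return {cell: sum(v) / len(v) for cell, v in cells.items()}

def low_stress_route(candidate_routes, stress_map, cell_size=0.01):
    """Pick the candidate route whose grid cells have the lowest summed stress."""
    def route_cost(route):
        return sum(
            stress_map.get((round(lat / cell_size), round(lon / cell_size)), 0.0)
            for lat, lon in route
        )
    return min(candidate_routes, key=route_cost)
```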
Stress is recognized as a predominant health problem with rising costs for rehabilitation and treatment. Currently, there are several different approaches that can be used for determining and calculating stress levels. Usually, the methods for determining stress are divided into two categories. The first category does not require any special equipment for measuring stress; it uses the variations in behaviour patterns that occur under stress. The core disadvantage of this category is its limitation to specific use cases. The second category uses laboratory instruments and biological sensors. This category allows stress to be measured precisely and proficiently, but at the same time the instruments are not mobile or transportable and do not support real-time feedback. This work presents a mobile system that provides the calculation of stress. To achieve this, the signal of a mobile ECG sensor is analysed, processed and visualised on a mobile system like a smartphone. This work also explains the stress measurement algorithm used. The result of this work is a portable system that can be used with a mobile device like a smartphone as the visual interface for reporting the current stress level.
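The abstract names but does not describe the stress measurement algorithm. A common approach to ECG-based stress estimation uses heart rate variability (HRV) measures such as SDNN and RMSSD computed from RR intervals; the following Python sketch shows that idea under the assumption of a per-user resting baseline (`rmssd_rest` is a hypothetical calibration value). It is not the system's actual algorithm.

```python
import math

def hrv_features(rr_intervals_ms):
    """Compute simple HRV features from a list of RR intervals (milliseconds)."""
    n = len(rr_intervals_ms)
    mean_rr = sum(rr_intervals_ms) / n
    sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_intervals_ms) / (n - 1))
    diffs = [rr_intervals_ms[i + 1] - rr_intervals_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

def stress_score(rr_intervals_ms, rmssd_rest=45.0):
    """Map RMSSD relative to a resting baseline onto a 0..100 scale.

    Lower RMSSD (reduced vagal activity) is read as higher stress;
    rmssd_rest is a hypothetical per-user calibration value.
    """
    rmssd = hrv_features(rr_intervals_ms)["rmssd"]
    return max(0.0, min(100.0, 100.0 * (1.0 - rmssd / rmssd_rest)))
```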
Database management systems and K/V-Stores operate on updatable datasets massively exceeding the size of available main memory. Tree-based K/V storage management structures have become particularly popular in storage engines. B+-Trees [1, 4] allow constant search performance; however, write-heavy workloads result in inefficient write patterns to secondary storage devices and poor performance characteristics. LSM-Trees [16, 23] overcome this issue by horizontally partitioning data into fractions small enough to fully reside in main memory, but require frequent maintenance to sustain search performance.
Firstly, we propose Multi-Version Partitioned B-Trees (MV-PBT) as the sole storage and index management structure in key-sorted storage engines like K/V-Stores. Secondly, we compare MV-PBT against LSM-Trees. The logical horizontal partitioning in MV-PBT allows leveraging recent advances in modern B+-Tree techniques in a small, transparent and memory-resident portion of the structure. Structural properties sustain steady read performance, yielding efficient write patterns and reducing write amplification.
We integrated MV-PBT in the WiredTiger [15] K/V storage engine. MV-PBT offers up to 2× higher steady throughput in comparison to LSM-Trees and several orders of magnitude higher throughput in comparison to B+-Trees in a YCSB [5] workload.
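MV-PBT's internals are not given in the abstract; the toy Python model below only illustrates the shared principle of horizontal partitioning that the abstract attributes to both LSM-Trees and (logically) MV-PBT: writes are absorbed by a small memory-resident partition, sealed partitions stay sorted, and lookups probe partitions newest-first so the latest version of a key wins. Class and method names are illustrative, not the paper's design.

```python
import bisect

class PartitionedStore:
    """Toy model of horizontal partitioning: an in-memory partition absorbs
    writes; lookups probe partitions newest-first so the latest version wins."""

    def __init__(self, partition_limit=4):
        self.partition_limit = partition_limit
        self.active = {}          # memory-resident partition, absorbs all writes
        self.sealed = []          # older partitions as sorted (keys, vals) pairs

    def put(self, key, value):
        self.active[key] = value
        if len(self.active) >= self.partition_limit:
            items = sorted(self.active.items())
            keys = [k for k, _ in items]
            vals = [v for _, v in items]
            self.sealed.append((keys, vals))   # 'flush' to secondary storage
            self.active = {}

    def get(self, key):
        if key in self.active:
            return self.active[key]
        for keys, vals in reversed(self.sealed):  # newest sealed partition first
            i = bisect.bisect_left(keys, key)
            if i < len(keys) and keys[i] == key:
                return vals[i]
        return None
```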
Stent graft visualization and planning tool for endovascular surgery using finite element analysis
(2014)
Purpose: A new approach to optimizing stent graft selection for endovascular aortic repair is the use of finite element analysis. Once the finite element model is created and solved, a software module is needed to view the simulation results in the clinical work environment. A new tool for the interpretation of simulation results, named Medical Postprocessor, that enables comparison of different stent graft configurations and products was designed, implemented and tested.
Methods: Aortic endovascular stent graft ring forces and sealing states in the vessel landing zone of three different configurations were provided in a surgical planning software using the Medical Imaging Interaction Toolkit (MITK) software system. For data interpretation, software modules for 2D and 3D presentations were implemented. Ten surgeons evaluated the software features of the Medical Postprocessor. They performed usability tests and answered questionnaires based on their experience with the system.
Results: The Medical Postprocessor visualization system enabled vascular surgeons to determine the configuration with the highest overall fixation force in 16 ± 6 s, the best proximal sealing in 56 ± 24 s and the highest proximal fixation force in 38 ± 12 s. The majority considered the multiformat data provided helpful and found the Medical Postprocessor to be an efficient decision support system for stent graft selection. The evaluation of the user interface yielded an ISONORM-conformant user interface (113.5 points).
Conclusion: The Medical Postprocessor visualization software tool for analyzing stent graft properties was evaluated by vascular surgeons. The results show that the software can assist in the interpretation of simulation results to optimize stent graft configuration and sizing.
One of the key challenges for automatic assistance is the support of actors in the operating room depending on the status of the procedure. Therefore, context information collected in the operating room is used to gain knowledge about the current situation. In the literature, solutions already exist for specific use cases, but it is doubtful to what extent these approaches can be transferred to other conditions. We conducted a comprehensive literature review on existing situation recognition systems for the intraoperative area, covering 274 articles and 95 cross-references published between 2010 and 2019. We contrasted and compared 58 identified approaches based on defined aspects such as the sensor data used or the application area, and discussed their applicability and transferability. Most of the papers focus on video data for recognizing situations within laparoscopic and cataract surgeries. Not all of the approaches can be used online for real-time recognition. Using different methods, good results with recognition accuracies above 90% could be achieved. However, transferability is less addressed in current work: the applicability of approaches to other circumstances seems possible only to a limited extent, and future research should place a stronger focus on adaptability. Overall, the literature review shows differences within existing approaches for situation recognition and outlines research trends.
Personalized remote healthcare monitoring is in continuous development due to technology improvements in sensors and wearable electronic systems. This work presents a state of the art of research on wearable sensors for healthcare applications, as well as a state of the art of wearable devices available on the market for health and sport monitoring, including chest and wrist bands and smartwatches. Many activity trackers are commercially available; prices are continuously decreasing and performance is improving, but commercial devices do not provide raw data and are therefore not useful for research purposes.
In the current age of innovative business financing opportunities available from fintech apps, social media crowdfunding sites such as Kickstarter, Indiegogo, and RocketHub, and friends-and-family private equity investors, start-up firms can strategically source their venture capital funds from many globally dispersed organizations and individuals. As the firm in this case learned, the benefit of alternative investing sources comes with a critical hidden risk for corporate governance. After a financial restructuring, a typical Silicon Valley software start-up found itself with close to 300 external individual shareholders, some of whom had not been documented as accredited investors. The regulatory agency could decide that the prior actions of the founders and the decisions of the board had been prejudicial to the interests of the minority investors. The management of this small private company faced an atypical investor relations dilemma before its initial public offering (IPO).
Power line communications (PLC) reuses the existing power-grid infrastructure for the transmission of data signals. As power line communication technology does not require a dedicated network setup, it can be used to connect a multitude of sensors and Internet of Things (IoT) devices. Those IoT devices could be deployed in homes, streets, or industrial environments for sensing and control applications. The key challenge faced by future IoT-oriented narrowband PLC networks is to provide a high quality of service (QoS). The power line channel has traditionally been considered too hostile; combined with the scarcity of spectrum and interference from other users, this calls for means to radically increase spectral efficiency and to improve link reliability. However, the research activities carried out in the last decade have shown that PLC is a suitable technology for a large number of applications. Motivated by the relevant impact of PLC on IoT, this paper proposes a cooperative spectrum allocation scheme for IoT-oriented narrowband PLC networks using an iterative water-filling algorithm.
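The paper's exact formulation is not reproduced in the abstract, but iterative water-filling itself is a standard algorithm: each user repeatedly allocates its power budget across channels by water-filling, treating the other users' current allocations as interference. The Python sketch below implements that generic scheme; the data layout of `users` is an assumption for illustration.

```python
def water_filling(gains, noise, power_budget):
    """Single-user water-filling: split power_budget across channels with
    gains `gains` and noise-plus-interference `noise`, maximizing
    sum log2(1 + g*p/n)."""
    inv = [n / g for g, n in zip(gains, noise)]   # effective noise floors
    active = sorted(range(len(inv)), key=lambda i: inv[i])
    while active:
        level = (power_budget + sum(inv[i] for i in active)) / len(active)
        if level >= inv[active[-1]]:      # water level covers all active floors
            break
        active.pop()                      # drop the worst channel and retry
    powers = [0.0] * len(gains)
    for i in active:
        powers[i] = level - inv[i]
    return powers

def iterative_water_filling(users, rounds=20):
    """users: list of dicts with 'gains' (own link), 'cross' (interference
    gains indexed [other_user][channel]), 'noise', and 'budget'. Each round,
    every user water-fills against the current interference of the others."""
    K, N = len(users), len(users[0]["gains"])
    P = [[0.0] * N for _ in range(K)]
    for _ in range(rounds):
        for k, u in enumerate(users):
            interference = [
                u["noise"][c] + sum(u["cross"][j][c] * P[j][c]
                                    for j in range(K) if j != k)
                for c in range(N)
            ]
            P[k] = water_filling(u["gains"], interference, u["budget"])
    return P
```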
Software development as an experiment system : a qualitative survey on the state of the practice
(2015)
An experiment-driven approach to software product and service development is gaining increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software functionalities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development. Although case studies on experimentation in industry exist, the understanding of the state of the practice and the encountered obstacles is incomplete. This paper presents an interview-based qualitative survey exploring the experimentation experiences of ten software development companies. The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice is not yet mature. In particular, experimentation is rarely systematic and continuous. Key challenges relate to changing organizational culture, accelerating development cycle speed, and measuring customer value and product success.
Modern enterprises continuously reshape and transform through a multitude of management processes with different perspectives, ranging from business process management to IT service management and the management of information systems. Enterprise Architecture (EA) management seeks to provide such a perspective and to align the diverse management perspectives. However, EA management cannot rely on hierarchical management processes designed in a Tayloristic manner to achieve and promote this alignment. It has, conversely, to apply bottom-up, information-centered coordination mechanisms to ensure that the different management processes are aligned with each other and with enterprise strategy. Social software provides such a bottom-up mechanism for support within EAM processes. Consequently, challenges of EA management processes are investigated and contributions of social software are presented. A cockpit provides interactive functions and visualization methods to cope with this complexity and enables the practical use of social software in enterprise architecture management processes.
Business process models provide a considerable number of benefits for enterprises and organizations, but the creation of such models is costly and time-consuming, which slows down the organizational adoption of business process modeling. Social paradigms pave new ways for business process modeling by integrating stakeholders and leveraging knowledge sources. However, empirical research on the impact of social paradigms on the costs of business process modeling is sparse. A better understanding of their impact could help to reduce the cost of business process modeling and improve decision-making on BPM activities. The paper contributes to this field by reporting on an empirical investigation, via survey research, of the perceived influence of different cost factors among experts. Our results indicate that different cost components, as well as the use of social paradigms, influence costs.
The SDGs give an overview of the world's development challenges of the present and the coming decades and set a new global agenda for more inclusive and sustainable development and growth. These challenges also represent opportunities for social innovations and the creation of scalable and financially self-sustaining solutions by businesses and (social) entrepreneurs. Examples of solutions to social and ecological challenges include providing low-income communities with access to affordable, quality products and services in areas such as water and sanitation, energy, health, education and finance. New business models can meet customer demands by providing solutions and thereby create opportunities for low-income people as employees, suppliers and distributors.
Social and environmental risk management in supply chains : a survey in the clothing industry
(2015)
Almost daily, news indicates that there are environmental and social problems in globally fragmented supply chains. Even though conceptualisations of sustainable supply chain management suggest that supplier-related risk management for sustainable products and processes is substantial for companies, research on how risk management for environmental and social issues in supply chains is performed has so far been neglected. This study aims at analysing both why companies in the clothing industry perform management of social and environmental risks in their supply chain and what kind of action they take. Based on the literature on sustainable supply chain management and supply chain risk management as well as 10 expert interviews, a conceptual model for risk management in sustainable supply chains was developed. This model was tested in an empirical study in the clothing industry, and the data were analysed by structural equation modelling. Results of the research show high statistical significance for the conceptual model. The main driver for performing risk management in environmental and social affairs is pressure and incentives from stakeholders. While companies’ corporate orientation mainly drives social actions, top management drives environmental affairs to differentiate the company from competitors.
Companies are becoming aware of the potential risks arising from sustainability aspects in supply chains. These risks can affect ecological, economic or social aspects. One important element in managing those risks is improved transparency in supply chains by means of digital transformation. Innovative technologies like blockchain technology can be used to enforce transparency. In this paper, we present a smart contract-based Supply Chain Control Solution to reduce risks. Technological capabilities of the solution will be compared to a similar technology approach and evaluated regarding their benefits and challenges within the framework of supply chain models. As a result, the proposed solution is suitable for the dynamic administration of complex supply chains.
Children undergoing systemic chemotherapy often suffer from severe immunosuppression, usually associated with severe neutropenia (neutrophils < 0.5 × 10⁹/l). Clinical courses during those periods range from asymptomatic to septic general conditions. Development of septic symptoms can be very fast and life-threatening. Swift detection of risk factors in those patients is therefore needed. So far no early, rapid and reliable marker or tool exists. Ion mobility spectrometry coupled with a multi-capillary column (IMS-MCC) can analyze more than 600 volatile components from exhaled air within a few minutes and hence is a potential rapid detection tool. As a proof of concept we measured the exhaled breath of 11 patients with neutropenia and 10 healthy controls ranging from 3 to 18 years of age at the time of measurement. Ten-milliliter breath samples were taken at the outpatient clinic and analyzed with an onsite IMS-MCC (BreathDiscovery, B&S Analytik, Dortmund, Germany). Dead-space volume was adapted to two groups (small 250 ml, large 500 ml). Interestingly, 59 differing peaks were measured. Eleven were significantly different (p ≤ 0.05), three of which were highly significant (p ≤ 0.01) in Mann-Whitney rank-sum testing. The corresponding analytes used in the decision tree are 2-propanol, D-limonene and acetone. The analytes with the lowest rank sum identified are 2-hexanone, iso-propylamine and 1-butanol. Eventually we were able to derive a three-step decision tree, which correctly classifies the 21 samples except one from each group. Sensitivity was 90 % and specificity was 91 %. Naturally, these findings need further confirmation within a bigger population. Our pilot study shows that ion mobility spectrometry coupled with a multi-capillary column is a feasible rapid diagnostic tool in the setting of a pediatric oncology outpatient clinic for patients 3 years and older. Our first results furthermore encourage additional analysis as to whether patients at risk for septic events during immunosuppression can be diagnosed in advance by rapidly assessing risk factors such as neutropenia in exhaled breath.
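As a sketch of how such a shallow decision tree could be derived, the following Python snippet trains a depth-3 tree on peak intensities for the three analytes named above. The feature values are fabricated placeholders purely for illustration; they are not the study's measurements.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["2-propanol", "d-limonene", "acetone"]
# Hypothetical peak intensities (placeholders, not the study's data).
X = [
    [0.42, 0.10, 0.55],   # neutropenic patients (fabricated examples)
    [0.38, 0.12, 0.61],
    [0.45, 0.09, 0.58],
    [0.15, 0.30, 0.22],   # healthy controls (fabricated examples)
    [0.12, 0.28, 0.25],
    [0.18, 0.33, 0.20],
]
y = [1, 1, 1, 0, 0, 0]    # 1 = neutropenia, 0 = control

# max_depth=3 mirrors the idea of a 'three-step' decision tree.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```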
Several diseases occur due to asbestos exposure, and asbestos-related mortality and morbidity are predicted to increase because of the long latency period. Currently, the methods to investigate asbestos-related disease are mostly invasive. Therefore, the aim of the present paper was to investigate whether signals in human breath can be correlated to asbestos-related lung diseases, using a multi-capillary column (MCC) connected to an ion mobility spectrometer (IMS) as a non-invasive method. Breath samples of 10 mL were taken from 25 patients suffering from asbestos-related diseases. This group includes patients with asbestos-related pleural thickening with and without pulmonary fibrosis. Twelve healthy persons constitute the control group, and their breath samples are compared with those of the BK4103 patients. In total, 83 peaks are found in the IMS chromatogram. A discrimination was possible with p-values < 0.001 (99.9 %) for two peaks, < 0.01 (99 %) for 5 peaks and < 0.05 (95 %) for 17 peaks. The most discriminating peaks, alpha-pinene and 4-ethyltoluene, were identified among some others with lower p-values. The corresponding box-and-whisker plots comparing both groups are presented. In addition, a decision tree including all peaks was created that shows a differentiation with alpha-pinene between BK4103 (pleural plaques group) and the control group. The sensitivity was calculated as 96 %, specificity as 50 %, and positive and negative predictive values as 80 % and 86 %. Ion mobility spectrometry was introduced as a non-invasive method to separate the two groups, asbestos-related and healthy. Naturally, the findings need further confirmation on larger population groups, but they encourage further investigations.
Purpose
Context awareness in the operating room (OR) is important to realize targeted assistance to support actors during surgery. A situation recognition system (SRS) is used to interpret intraoperative events and derive an intraoperative situation from them. To achieve a modular system architecture, it is desirable to decouple the SRS from other system components, which leads to the need for an interface between such an SRS and context-aware systems (CAS). This work aims to provide an open standardized interface to enable loose coupling of the SRS with varying CAS and to allow vendor-independent device orchestrations.
Methods
A requirements analysis investigated limiting factors that currently prevent the integration of CAS in today's ORs. The elicited requirements enabled the selection of a suitable base architecture. We examined how to specify this architecture within the constraints of an interoperability standard. The resulting middleware was integrated into a prototypic SRS and into our system for intraoperative support, the OR-Pad, as an exemplary CAS, to evaluate whether our solution can enable context-aware assistance during simulated orthopedic interventions.
Results
The emerging Service-oriented Device Connectivity (SDC) standard series was selected to specify and implement a middleware for providing the interpreted contextual information while the SRS and CAS remain loosely coupled. The results were verified within a proof-of-concept study using the OR-Pad demonstration scenario. The fulfillment of the CAS’ requirements to act in a context-aware manner, conformity to the SDC standard series, and the effort for integrating the middleware into individual systems were evaluated. The semantically unambiguous encoding of contextual information depends on the further standardization process of the SDC nomenclature. The discussion of the validity of these results demonstrated the applicability and transferability of the middleware.
Conclusion
The specified and implemented SDC-based middleware shows the feasibility of loose coupling an SRS with unknown CAS to realize context-aware assistance in the OR.
The digital transformation is today’s dominant business transformation, strongly influencing how digital services and products are designed in a service-dominant way. A popular underlying theory of value creation and economic exchange, known as service-dominant (S-D) logic, can be connected to many successful digital business models. However, S-D logic by itself is abstract: companies cannot easily use it directly as an instrument for business model innovation and design. To address this, a comprehensive ideation method based on S-D logic is proposed, called service-dominant design (SDD). SDD is aimed at supporting firms in the transition to a service- and value-oriented perspective. The method provides a simplified way to structure the ideation process based on four model components. Each component consists of practical implications, auxiliary questions and visualization techniques that were derived from a literature review, a use case evaluation of digital mobility and a focus group discussion. SDD represents a first step toward a toolset that can support established companies in the process of service and value orientation as part of their digital transformation efforts.
This chapter looks at image films produced by fashion brands and the way brands present themselves in them. It focuses on analyzing important film parameters, the content, and the ways such films can influence brand image. A list of 70 fashion brands from different categories was gathered through a survey and confirmed by comparing the results with relevant literature. All 70 brands were examined to find relevant self-referencing films. The films had to be produced by the brands themselves; videos for advertisement or for promoting collections are not considered. In total, 22 films from 17 brands were analyzed. Results show that most brands seem to have recognized videos as a powerful marketing tool in the social media age. Many brands, however, seem to struggle with compliance with certain parameters such as length and the use of the brand logo. In general, the content of the videos centers on the four topics recruitment, values, history and behind the brand. As for the intent, the videos can be classified into the three categories learning, emotion and doing something. This chapter not only analyzes this special film category, but also gives recommendations for improving the videos.
Forecasting demand is challenging. Different products exhibit different demand patterns: while demand may be constant and regular for one product, it may be sporadic for another, and when demand does occur, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today a large number of models are run on a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper we show the possibility of using a machine learning classification algorithm that predicts the best possible model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The machine learning classification algorithm achieves a mean ROC-AUC of 89%, which emphasizes the skill of the model.
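A minimal sketch of this model-selection idea in Python: characterize each series with demand pattern features (for example the ADI and CV² measures from the Syntetos-Boylan classification) and train a classifier that predicts which forecasting model would win the back-test. The training data and labels below are random placeholders; in the paper's setting they would come from the retailer's historical back-tests.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def demand_features(series):
    """Characterize a demand series, e.g. via ADI (average inter-demand
    interval) and CV^2 (squared coefficient of variation of demand sizes)."""
    series = np.asarray(series, dtype=float)
    nonzero = series[series > 0]
    n_demand_periods = max(int((series > 0).sum()), 1)
    adi = len(series) / n_demand_periods
    cv2 = (nonzero.std() / nonzero.mean()) ** 2 if len(nonzero) > 1 else 0.0
    return [adi, cv2, series.mean(), series.std()]

# Placeholder training setup: per series, the label is the model that won
# a back-test on the test period ("ets", "arima", "croston", ...).
rng = np.random.default_rng(0)
train_series = [rng.poisson(5, 104) for _ in range(50)]
train_labels = rng.choice(["ets", "arima", "croston"], 50)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit([demand_features(s) for s in train_series], train_labels)

new_series = rng.poisson(3, 104)
print("recommended model:", clf.predict([demand_features(new_series)])[0])
```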
Rotating machinery occupies a predominant place in many industrial applications. However, rotating machines often encounter severe vibration problems. The measurement of these machines’ vibration signals is of particular importance, since it plays a crucial role in predictive maintenance. When vibrations are too high, they often cause fatigue failure; they announce an unexpected stop or breakage and, consequently, a significant loss of productivity or a threat to personnel safety. Therefore, fault identification at early stages will significantly enhance machine health and reduce maintenance costs. Although considerable efforts have been made to master the field of machine diagnostics, the usual signal processing methods still present several drawbacks. This paper examines rotating machinery condition monitoring in the time and frequency domains. It also provides a framework for the diagnosis process based on machine learning by analyzing the vibratory signals.
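As an illustration of time- and frequency-domain condition monitoring, the Python sketch below extracts standard vibration features (RMS, peak, crest factor, kurtosis, dominant spectral frequency). The paper's concrete feature set and learning pipeline are not specified in the abstract, so these are typical choices rather than the authors' method.

```python
import numpy as np

def vibration_features(signal, fs):
    """Standard time- and frequency-domain features for rotating
    machinery condition monitoring."""
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    crest = peak / rms
    kurtosis = np.mean((signal - signal.mean()) ** 4) / signal.var() ** 2
    # Frequency domain: amplitude spectrum and the dominant frequency,
    # e.g. to spot harmonics of the shaft rotation frequency.
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return {"rms": rms, "peak": peak, "crest": crest,
            "kurtosis": kurtosis, "dominant_hz": dominant}

# Example: a 50 Hz shaft vibration with noise, sampled at 10 kHz.
fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.default_rng(0).normal(size=len(t))
print(vibration_features(sig, fs))
```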
Monday is unique for its reputation as a “bad” day—one that is characterized by pessimism and reluctance as noted by Rystrom and Benson (Financ Anal J 45(5):75–78, 1989). But the extent to which this applies to stock markets is still in dispute. While early evidence points to a Monday effect leading to negative returns, recent studies tend to suggest its disappearance or reversal. As a replication study, this paper searches for new evidence of this effect in the German stock market. We use data on the German blue-chip index DAX between 2000 and 2017 to test for the presence of a Monday effect by applying regression and controlling with GARCH analysis. The observation period provides a detailed insight into different market phases in one of the most liquid and information-efficient international stock markets. Our results contribute no evidence to the persistent existence of a Monday effect on the German stock market. Our analysis is robust against the background of different market sentiments before, during and after the financial crisis.
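The core test can be sketched as a dummy-variable regression, as commonly used for calendar anomalies: regress daily returns on a Monday indicator and check the sign and significance of its coefficient (the study additionally controls the results with GARCH analysis, which could be done with, e.g., the `arch` package's `arch_model`). The data below are synthetic placeholders, not the DAX series.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data: synthetic daily returns on business days (the study
# itself uses DAX data from 2000-2017).
dates = pd.bdate_range("2000-01-03", "2017-12-29")
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.0003, 0.012, len(dates)), index=dates)

# Dummy-variable regression: r_t = a + b * 1{Monday} + e_t.
# A significantly negative b would indicate the classic Monday effect.
monday = (returns.index.dayofweek == 0).astype(float)
X = sm.add_constant(monday)
ols = sm.OLS(returns.values, X).fit()
print(ols.summary().tables[1])
```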
This article provides a general overview of the most promising candidates among bio-based materials and deals with the most important issues when it comes to their incorporation into PF resins. Due to their abundance on Earth, much knowledge of lignin-based materials has already been gained, and uses of lignin in PF resins have been studied for many decades. Other natural polyphenols that are less frequently considered for impregnation are covered as well, as they also possess some potential for PF substitution.
Revenue management information systems are very important in the hospitality sector. Revenue decisions can be better prepared based on information from different information systems and decision strategies. However, there is a lack of research on the usage of such systems in small and medium-sized hotels and on their architectural configurations. Our paper empirically shows the current state of development of revenue information systems. Furthermore, we define future developments and requirements to improve such systems and their architectural base.
For about half a decade, there has been increasing interest in Robotic Process Automation (RPA) among business firms. Academic literature, however, paid little attention to RPA before adopting the topic to a larger extent. The aim of this study is to review and structure the latest state of scholarly research on RPA. This chapter is based on a systematic literature review that serves as the basis for a conceptual framework to structure the field. Our study shows that some areas of RPA, e.g. its potential benefits, have been extensively examined by many authors. Other categories, such as empirical studies on the adoption of RPA or organisational readiness models, remain research gaps.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. This work focuses on the analysis of requirements and the challenges arising from the design of mobile medical applications with respect to the user interface. The paper describes the current status of the development of mobile medical apps and illustrates the development of the e-health market. The author explains the requirements and illustrates the hurdles and problems, referring to the German market, which is similar to the European one, and comparing it with the market in the USA.
Global, competitive markets which are characterised by mass customisation and rapidly changing customer requirements force major changes in production styles and the configuration of manufacturing systems. As a result, factories may need to be regularly adapted and optimised to meet short-term requirements. One way to optimise the production process is the adaptation of the plant layout to the current or expected order situation. To determine whether a layout change is reasonable, a model of the current layout is needed. It is used to perform simulations and in the case of a layout change it serves as a basis for the reconfiguration process. To aid the selection of possible measurement systems, a requirements analysis was done to identify the important parameters for the creation of a digital shadow of a plant layout. Based on these parameters, a method is proposed for defining limit values and specifying exclusion criteria. The paper thus contributes to the development and application of systems that enable an automatic synchronisation of the real layout with the digital layout.
The recovery of our body and brain from fatigue directly depends on the quality of sleep, which can be determined from the results of a sleep study. The classification of sleep stages is the first step of this study and includes the measurement of vital data and their further processing. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors providing sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus. A significant difference between this system and other approaches is the innovative way in which the sensors are placed under the mattress. This feature facilitates the continuous use of the system without any noticeable influence on the sleeping person. The system was tested by conducting experiments that recorded the sleep of various healthy young people. Results indicate the potential to capture respiratory rate and body movement.
Context: The current situation and future scenarios of the automotive domain require a new strategy to develop high-quality software at a fast pace. In the automotive domain, it is assumed that a combination of agile development practices and software product lines is beneficial in order to handle a high frequency of improvements. This assumption is based on the understanding that agile methods introduce more flexibility in short development intervals, while software product lines help to manage the high number of variants and to improve quality through reuse of software in long-term development.
Goal: This study derives a better understanding of the expected benefits of such a combination. Furthermore, it identifies the automotive-specific challenges that prevent the adoption of agile methods within software product lines.
Method: A survey based on 16 semi-structured interviews from the automotive domain, an internal workshop with 40 participants, and a discussion round at the ESE Congress 2016. The results are analyzed by means of thematic coding.
The blockchain technology represents a decentralised database that stores information securely in immutable data blocks. Regarding supply chain management, these characteristics offer potentials in increasing supply chain transparency, visibility, automation, and efficiency. In this context, first token-based mapping approaches exist to transfer certain manufacturing processes to the blockchain, such as the creation or assembly of parts as well as their transfer of ownership. This paper proposes a prototypical blockchain application that adopts an authority concept and a concept of smart non-fungible tokens. The application enables the mapping of complex products in dynamic supply chains that require the auditability of changeable assembling processes on the blockchain. Finally, the paper demonstrates the practical feasibility of the proposed application based on a prototypical implementation created on the Ethereum blockchain.
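A real implementation would be an Ethereum smart contract, as the paper describes; the Python sketch below only models the concepts named in the abstract: an authority concept restricting who may mint and assemble, non-fungible tokens representing parts, and an auditable history of assembly steps and ownership transfers. All names are illustrative assumptions.

```python
class SupplyChainRegistry:
    """Toy off-chain model of the paper's idea: authority-gated minting and
    assembly of non-fungible part tokens with an auditable history."""

    def __init__(self):
        self.tokens = {}          # token_id -> {"owner", "part", "components", "history"}
        self.authorities = set()  # addresses allowed to mint and assemble
        self.next_id = 1

    def grant_authority(self, address):
        self.authorities.add(address)

    def mint(self, caller, owner, part_name):
        assert caller in self.authorities, "caller lacks minting authority"
        token_id = self.next_id
        self.next_id += 1
        self.tokens[token_id] = {"owner": owner, "part": part_name,
                                 "components": [], "history": [("mint", caller)]}
        return token_id

    def assemble(self, caller, product_id, component_ids):
        """Record that components were built into a product (auditable step)."""
        assert caller in self.authorities, "caller lacks assembly authority"
        product = self.tokens[product_id]
        for cid in component_ids:
            assert self.tokens[cid]["owner"] == product["owner"], "owner mismatch"
            product["components"].append(cid)
        product["history"].append(("assemble", caller, tuple(component_ids)))

    def transfer(self, caller, token_id, new_owner):
        token = self.tokens[token_id]
        assert token["owner"] == caller, "only the owner may transfer"
        token["owner"] = new_owner
        token["history"].append(("transfer", caller, new_owner))
```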
Context: Currently, most companies apply approaches to product roadmapping that are based on the assumption that the future is highly predictable. However, companies are nowadays facing increasing market dynamics, rapidly evolving technologies, and shifting user expectations. Together with the adoption of lean and agile practices, this makes it increasingly difficult to plan and predict upfront which products, services or features will satisfy the needs of customers. Therefore, companies are struggling to provide product roadmaps that fit dynamic and uncertain market environments and that can be used together with lean and agile software development practices.
Objective: To gain a better understanding of modern product roadmapping processes, this paper aims to identify suitable processes for the creation and evolution of product roadmaps in dynamic and uncertain market environments.
Method: We performed a Grey Literature Review (GLR) according to the guidelines from Garousi et al.
Results: 32 approaches to product roadmapping were identified. Typical characteristics of these processes are the strong connection between the product roadmap and the product vision, an emphasis on stakeholder alignment, the definition of business and customer goals as part of the roadmapping process, a high degree of flexibility with respect to reaching these goals, and the inclusion of validation activities in the roadmapping process. An overall goal of nearly all approaches is to avoid waste by reducing development and business risks early. From the list of the 32 approaches found, four representative roadmapping processes are described in detail.
Context: A product roadmap is an important tool in product development. It sets the strategic direction in which the product is to be developed to achieve the company’s vision. However, for product roadmaps to be successful, it is essential that all stakeholders agree with the company’s vision and objectives and are aligned and committed to a common product plan.
Objective: In order to gain a better understanding of product roadmap alignment, this paper aims at identifying measures, activities and techniques in order to align the different stakeholders around the product roadmap.
Method: We conducted a grey literature review according to the guidelines of Garousi et al.
Results: Several approaches to gain alignment were identified such as defining and communicating clear objectives based on the product vision, conducting cross-functional workshops, shuttle diplomacy, and mission briefing. In addition, our review identified the “Behavioural Change Stairway Model” that suggests five steps to gain alignment by building empathy and a trustful relationship.
Newly developed active pharmaceutical ingredients (APIs) are often poorly soluble in water. As a result, the bioavailability of the API in the human body is reduced. One approach to overcome this restriction is the formulation of amorphous solid dispersions (ASDs), e.g., by hot-melt extrusion (HME). Thus, the poorly soluble crystalline form of the API is transferred into a more soluble amorphous form. To reach this aim in HME, the APIs are embedded in a polymer matrix. The resulting amorphous solid dispersions may contain small amounts of residual crystallinity and have the tendency to recrystallize. For the controlled release of the API in the final drug product, the amount of crystallinity has to be known. This review assesses the available analytical methods that have recently been used for the characterization of ASDs and the quantification of crystalline API content. Well-established techniques like near- and mid-infrared spectroscopy (NIR and MIR, respectively) and Raman spectroscopy, as well as emerging ones like UV/VIS, terahertz, and ultrasonic spectroscopy, are considered in detail. Furthermore, their advantages and limitations are discussed with regard to general practical applicability as process analytical technology (PAT) tools in industrial manufacturing. The review focuses on spectroscopic methods which have proven most suitable for in-line and on-line process analytics. Further aspects are spectroscopic techniques that have been or could be integrated into an extruder.
We have seen that bionic optimization can be a powerful tool when applied to problems with non-trivial landscapes of goals and restrictions. This, in turn, led us to a discussion of useful methodologies for applying this optimization to real problems. On the other hand, it must be stated that each optimization is a time-consuming process as soon as the problem expands beyond a small number of free parameters related to simple parabolic responses. Bionic optimization is not a quick approach to solving complex questions within short times. In some cases it has the potential to fail entirely, either by sticking to local maxima or by randomly exploring the parameter space without finding any promising solutions. The following sections present some remarks on the efficiency and limitations users must be aware of. They aim to increase the knowledge base for using bionic optimization, but they should not discourage potential users from this promising field of powerful strategies for finding good or even the best possible designs.
With the rapid development of globalization, the demand for translation between different languages is also increasing. Although pre-training has achieved excellent results in neural machine translation, existing neural machine translation has almost no high-quality alignment information suitable for specific fields. Therefore, this paper proposes pre-training neural machine translation with alignment information via optimal transport. First, this paper narrows the representation gap between different languages by using OTAP to generate domain-specific data for information alignment, learning richer semantic information. Second, this paper proposes a lightweight model, DR-Reformer, which uses Reformer as the backbone network, adds Dropout layers and Reduction layers, reduces model parameters without losing accuracy, and improves computational efficiency. Experiments on the Chinese-English datasets of AI Challenger 2018 and WMT-17 show that the proposed algorithm performs better than existing algorithms.
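To make the optimal transport component concrete: entropy-regularized OT can be solved with the Sinkhorn-Knopp iteration, which produces a soft alignment (transport plan) between source and target tokens. The Python sketch below shows that generic machinery with random placeholder embeddings; the paper's OTAP procedure is more involved and is not reproduced here.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, iters=500):
    """Entropy-regularized optimal transport (Sinkhorn-Knopp): returns a
    transport plan aligning source weights a with target weights b under
    the given cost matrix."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # alternate scaling to match column marginals b
        u = a / (K @ v)     # ...and row marginals a
    return u[:, None] * K * v[None, :]

# Toy alignment of 3 source tokens to 4 target tokens using hypothetical
# embedding distances as the cost.
rng = np.random.default_rng(0)
src = rng.normal(size=(3, 8))
tgt = rng.normal(size=(4, 8))
cost = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)
plan = sinkhorn(cost, np.full(3, 1 / 3), np.full(4, 1 / 4))
print(plan.round(3))   # rows: source tokens, columns: soft-aligned targets
```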
Preface of IDEA 2015
(2016)
An enormous amount of data in the context of business processes is stored as images. These images contain valuable information for business process management, but up to now this data had to be integrated manually into the business process. Due to advances in image capturing, it is possible to extract information from an increasing number of images. Therefore, we systematically investigate the potential of Image Mining for business process management through literature research and an in-depth analysis of the business process lifecycle. As a first step to evaluate our research, we developed a prototype for recovering process model information from drawings using RapidMiner.
The purpose of this paper is to identify the potential of a fashion fTRACE (ffTRACE) application that gives transparent insight into the supply chain of a fashion item. The research methodology applied to this purpose is a literature review examining academic references. The key findings of this paper are that information plays a major role in the consumer decision process and is therefore beneficial to the demand for sustainable products. Providing the right information content in a transparent, credible and understandable way is important. It is found that the functions of such an application would be able to satisfy this consumer demand, and that the application therefore has the potential to raise the sales of a sustainable company as well as increase the brand’s awareness and improve its image. While mainly indicating the potentials of the ffTRACE application, this paper does not examine their relevance.
This paper studies whether a monetary union can be managed solely by a rule based approach. The Five Presidents’ Report of the European Union rejects this idea. It suggests a centralisation of powers. We analyse the philosophy of policy rules from the vantage point of the German economic school of thought. There is evidence that a monetary union consisting of sovereign states is well organised by rules, together with the principle of subsidiarity. The root cause of the euro crisis is rather the weak enforcement of rules, compounded by structural problems. Therefore, we suggest a genuine rule-based paradigm for a stable future of the Economic and Monetary Union.
In clothing e-commerce, the challenge of optimally recommending clothing that suits a user’s unique characteristics remains a pressing issue. Many platforms simply recommend best-selling or popular clothing, without taking into account important attributes like the user’s face color, pupil color, face shape, or age. To solve this problem, this paper proposes a personalized clothing recommendation algorithm that incorporates the established 4-Season Color System and user-specific biological characteristics. Firstly, the attributes and colors of clothing are classified by the Fnet network, which can learn disjoint label combinations and mitigate the issue of excessive labels. Secondly, on the basis of the 4-Season Color System, the user’s face color model is trained by a combined MobileNetV3_DTL, which ensures the model’s generalization and improves training speed. Thirdly, the user’s face shape and age are divided into different categories by an Inception network. Finally, according to the user’s face color, age, face shape and other information, personalized clothing is recommended in a coarse-to-fine manner. Experiments on five datasets demonstrate that the algorithm proposed in this paper achieves state-of-the-art results.
Sleep is an important aspect in the life of every human being. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate the physical and psychological state of a human, and bad sleep quality has a major impact on health status and can lead to different diseases. In this paper, an approach is presented that uses long-term monitoring of vital data, gathered by a body sensor during the day and the night and supported by a mobile application connected to an analyzing system, to estimate the sleep quality of its user and to give recommendations for improving it in real time. Actimetry and historical data are used to improve the individual recommendations, based on common techniques from machine learning and big data analysis.
The purpose of this paper is to investigate how motion pictures are currently used for the product presentation of fashion articles. An explorative approach was chosen for the literature section. This study shows that the use of moving images for the presentation of fashion articles in online shops is possible in numerous different ways. In order to be able to use product presentation videos meaningfully, one should consider exactly what is the purpose of these videos. Different goals require different means. However, retailers should obtain enough information in advance to assess whether they can afford the production and post-processing of these videos.
Software Process Improvement (SPI) programs have been implemented, inter alia, to improve quality and speed of software development. SPI addresses many aspects ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state-of-the-art in SPI from a general perspective, we observed Software Quality Management (SQM) being of certain relevance in SPI programs. In this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of SQM (including testing). From the main study’s result set, 92 papers were selected for an in-depth systematic review to study the contributions and to develop an initial picture of how these topics are addressed in SPI. Our findings show a fairly pragmatic contribution set in which different solutions are proposed, discussed, and evaluated. Among others, our findings indicate a certain reluctance towards standard quality or (test) maturity models and a strong focus on custom review, testing, and documentation techniques, whereas a set of five selected improvement measures is almost equally addressed.
Near-Data Processing (NDP) is a key computing paradigm for reducing the ever-growing time and energy costs of data transport versus computation. With their flexibility, FPGAs are an especially suitable compute element for NDP scenarios. Even more promising is the exploitation of novel and future non-volatile memory (NVM) technologies for NDP, which aim to achieve DRAM-like latencies and throughputs while providing large-capacity non-volatile storage.
Experimentation in using FPGAs in such NVM-NDP scenarios has been hindered, though, by the fact that the NVM devices/FPGA boards are still very rare and/or expensive. It thus becomes useful to emulate the access characteristics of current and future NVMs using off-the-shelf DRAMs. If such emulation is sufficiently accurate, the resulting FPGA-based NDP computing elements can be used for actual full-stack hardware/software benchmarking, e.g., when employed to accelerate a database.
For this use, we present NVMulator, an open-source, easy-to-use hardware emulation module that can be seamlessly inserted between the NDP processing elements on the FPGA and a conventional DRAM-based memory system. We demonstrate that, with suitable parametrization, the emulated NVM can come very close to the performance characteristics of actual NVM technologies, specifically Intel Optane. We achieve 0.62% and 1.7% accuracy for cache-line-sized read and write accesses, while utilizing only 0.54% of LUT logic resources on a Xilinx/AMD AU280 UltraScale+ FPGA board. We consider both file-system and database access patterns, examining the operation of the RocksDB database when running on real or emulated Optane-technology memories.
Hypermedia as the Engine of Application State (HATEOAS) is one of the core constraints of REST. It refers to the concept of embedding hyperlinks into the response of a queried or manipulated resource to show a client possible follow-up actions and transitions to related resources. Thus, this concept aims to provide a client with a navigational support when interacting with a Web-based application. Although HATEOAS should be implemented by any Web-based API claiming to be RESTful, API providers tend to offer service descriptions in place of embedding hyperlinks into responses. Instead of relying on a navigational support, a client developer has to read the service description and has to identify resources and their URIs that are relevant for the interaction with the API. In this paper, we introduce an approach that aims to identify transitions between resources of a Web-based API by systematically analyzing the service description only. We devise an algorithm that automatically derives a URI Model from the service description and then analyzes the payload schemas to identify feasible values for the substitution of path parameters in URI Templates. We implement this approach as a proxy application, which injects hyperlinks representing transitions into the response payload of a queried or manipulated resource. The result is a HATEOAS-like navigational support through an API. Our first prototype operates on service descriptions in the OpenAPI format. We evaluate our approach using ten real-world APIs from different domains. Furthermore, we discuss the results as well as the observations captured in these tests.
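A minimal Python sketch of the core idea, under simplifying assumptions: derive a URI model from an OpenAPI-style path list and inject a `_links` object into a response payload by substituting path parameters with payload fields of the same name. The paper analyzes payload schemas more systematically; name-based matching is a simplification for illustration.

```python
import re

def build_uri_model(openapi_spec):
    """Collect URI templates and their path parameters from an OpenAPI-style
    spec, e.g. {'paths': {'/orders/{orderId}': {...}}}."""
    model = []
    for template in openapi_spec.get("paths", {}):
        params = re.findall(r"\{(\w+)\}", template)
        model.append((template, params))
    return model

def inject_links(payload, uri_model):
    """Substitute path parameters with matching payload fields and attach
    the resolved URIs as a HATEOAS-like '_links' object."""
    links = {}
    for template, params in uri_model:
        uri = template
        for p in params:
            if p not in payload:
                break               # payload cannot resolve this template
            uri = uri.replace("{%s}" % p, str(payload[p]))
        else:
            links[template] = uri   # all parameters were substituted
    payload["_links"] = links
    return payload

spec = {"paths": {"/orders/{orderId}": {}, "/orders/{orderId}/items": {},
                  "/customers/{customerId}": {}}}
response = {"orderId": 42, "customerId": 7, "total": 99.5}
print(inject_links(response, build_uri_model(spec)))
```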
Data analytics tasks on large datasets are computationally intensive and often demand the compute power of cluster environments. Yet, data cleansing, preparation, dataset characterization and statistics or metrics computation steps are frequent. These are mostly performed ad hoc, in an explorative manner and mandate low response times. But, such steps are I/O intensive and typically very slow due to low data locality, inadequate interfaces and abstractions along the stack. These typically result in prohibitively expensive scans of the full dataset and transformations on interface boundaries.
In this paper, we examine R as analytical tool, managing large persistent datasets in Ceph, a wide-spread cluster file-system. We propose nativeNDP – a framework for Near Data Processing that pushes down primitive R tasks and executes them in-situ, directly within the storage device of a cluster-node. Across a range of data sizes, we show that nativeNDP is more than an order of magnitude faster than other pushdown alternatives.
Hip-hop culture defines itself through four central pillars: DJing, MCing, breakdancing and graffiti, but a fifth one, fashion, may be emerging. Hip-hop has become the most popular music genre, and the influence it has on society is indisputable. But as hip-hop artists increasingly underpin their music with visual components, like music videos, the question arises whether this has an influence on the fashion industry. This chapter clarifies which factors may determine a fashion business impact and discusses differences between mainstream hip-hop artists and those that are also active in the fashion industry. The focus lies on the way and the extent to which fashion is presented in the music videos. 24 music videos were analyzed, thereof 15 popular records from the past three years and nine by artists already considered fashion-influential. Additionally, a fashion influence index was created to compare the degree of fashion between the music videos. Numbers of styles, recognized brands, fashion-related song verses, fashion-related description box mentions and articles about the fashion in the music video were noted. Findings reveal that the number of outfits shown in a video did not have a direct link to the amount of traffic it produced in fashion media. The artists considered influential in the fashion industry name brands in their song lyrics more often and show brand logos more frequently in their music videos than others. However, over the observed years, a rise in fashion awareness can be seen for mainstream hip-hop artists through a higher number of styles, recognizable brands and fashion-related verses in the lyrics.