Indoor localization systems are becoming increasingly important with the digitalization of the industrial sector. Sensor data such as the current position of machines, transport vehicles, goods or tools are an essential component of cyber-physical production systems (CPPS). However, due to the high costs of these sensors, they are not widespread and are used mainly in special scenarios. Optical indoor positioning systems (OIPS) based on cameras in particular offer certain advantages due to their technological specifications. In this paper, the application scenarios and requirements as well as their characteristics are presented, and a classification approach for OIPS is introduced.
Intermittent time series forecasting is a challenging task that still needs particular attention from researchers. The more irregularly events occur, the more difficult they are to predict. With Croston's approach of 1972, the intermittence and the demand of a time series were for the first time investigated separately. He proposes exponential smoothing in his attempt to generate a forecast that corresponds to the average demand per period. Although this algorithm produces good results in the field of stock control, it does not capture the typical characteristics of intermittent time series within the final prediction. In this paper, we investigate a time series' intermittence and demand individually, forecast the upcoming demand value and inter-demand interval length using recent machine learning algorithms such as long short-term memories (LSTMs) and light gradient-boosting machines, and reassemble both pieces of information to generate a prediction that preserves the characteristics of an intermittent time series. We compare the results against Croston's approach, as well as against recent forecast procedures where no split is performed.
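Croston's separation of an intermittent series into demand sizes and inter-demand intervals can be sketched as follows. This is a minimal illustrative implementation of the classic method, not code from the paper; the function name and the smoothing constant `alpha` are our own choices.

```python
# Minimal sketch of Croston's method: demand sizes and inter-demand
# intervals are smoothed separately with the same exponential smoothing
# constant, and the final forecast is their ratio (demand per period).
def croston(series, alpha=0.1):
    """Return the final demand-per-period forecast for an intermittent series."""
    z = None  # smoothed demand size
    p = None  # smoothed inter-demand interval
    q = 1     # periods elapsed since the last non-zero demand
    for y in series:
        if y > 0:
            if z is None:          # initialise on the first non-zero demand
                z, p = y, q
            else:
                z = alpha * y + (1 - alpha) * z
                p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    return z / p if z is not None else 0.0
```

For a series with a demand of 3 every third period, the forecast converges to 3/3 = 1 unit per period, which illustrates the "demand per period in average" interpretation mentioned above.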
A 3D face modelling approach for pose-invariant face recognition in a human-robot environment
(2017)
Face analysis techniques have become a crucial component of human-machine interaction in the fields of assistive and humanoid robotics. However, the variations in head pose that arise naturally in these environments are still a great challenge. In this paper, we present a real-time capable 3D face modelling framework for 2D in-the-wild images that is applicable for robotics. The fitting of the 3D Morphable Model is based exclusively on automatically detected landmarks. After fitting, the face can be corrected in pose and transformed back to a frontal 2D representation that is more suitable for face recognition. We conduct face recognition experiments with non-frontal images from the MUCT database and uncontrolled, in-the-wild images from the PaSC database, the most challenging face recognition database to date, showing improved performance. Finally, we present our SCITOS G5 robot system, which incorporates our framework as a means of image pre-processing for face analysis.
Based on well-established robotic concepts of autonomous localization and navigation, we present a system prototype, implemented in the Robot Operating System (ROS), to assist camera-based indoor navigation for human users. Our prototype takes advantage of state-of-the-art computer vision and robotic methods and is designed for assistive indoor guidance. We employ a vibro-tactile belt as a guiding device that renders derived motion suggestions to the user via vibration patterns. We evaluated the effectiveness of a variety of vibro-tactile feedback patterns for guiding blindfolded users. Our prototype demonstrates that a vision-based system can support human navigation, and may also assist the visually impaired in a human-centered way.
Software startups often make assumptions about the problems and customers they are addressing as well as the market and the solutions they are developing. Testing the right assumptions early is a means to mitigate risks. Approaches such as Lean Startup foster this kind of testing by applying experimentation as part of a constant build-measure-learn feedback loop. The existing research on how software startups approach experimentation is very limited. In this study, we focus on understanding how software startups approach experimentation and identify challenges and advantages with respect to conducting experiments. To achieve this, we conducted a qualitative interview study. The initial results show that startups often spend a disproportionate amount of time on creating solutions without testing critical assumptions. The main reasons are a lack of awareness that these assumptions can be tested early, and a lack of knowledge and support on how to identify, prioritize and test them. However, startups understand the need for testing risky assumptions and are open to conducting experiments.
In recent years, the parallel computing community has shown increasing interest in leveraging cloud resources for executing parallel applications. Clouds exhibit several fundamental features of economic value, such as on-demand resource provisioning and a pay-per-use model. Additionally, several cloud providers offer their resources at significant discounts, albeit with limited availability. Such volatile resources are an auspicious opportunity to reduce the costs arising from computations and thus achieve higher cost efficiency. In this paper, we propose a cost model for quantifying the monetary costs of executing parallel applications in cloud environments leveraging volatile resources. Using this cost model, one can determine a configuration of a cloud-based parallel system that minimizes the total costs of executing an application.
The market for indoor positioning systems for a variety of applications has grown strongly in recent years. A wide range of systems is available, varying considerably in terms of accuracy, price and technology used. The suitability of the systems is highly dependent on the intended application. This paper presents a concept to use a single low-cost PTZ camera in combination with fiducial markers for indoor position and orientation determination. The intended use case is to capture a plant layout consisting of position, orientation and unique identity of individual facilities. Important factors to consider for the selection of a camera have been identified and the transformation of the marker pose in camera coordinates into a selectable plant coordinate system is described. The concept is illustrated by an exemplary practical implementation and its results.
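The transformation of a marker pose from camera coordinates into a selectable plant coordinate system described above amounts to chaining homogeneous transforms. The sketch below is illustrative only: it assumes the camera pose in the plant frame is already known from calibration, and all function names are our own.

```python
# Sketch of chaining 4x4 homogeneous transforms: a marker pose detected
# in camera coordinates is mapped into the plant coordinate system via
# the (assumed pre-calibrated) camera pose in the plant frame.
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def marker_in_plant(T_plant_camera, T_camera_marker):
    """Chain transforms: pose of the marker expressed in plant coordinates."""
    return T_plant_camera @ T_camera_marker
```

With an identity rotation and a camera offset of (1, 0, 0) in the plant frame, a marker detected at (0, 2, 0) in camera coordinates ends up at (1, 2, 0) in plant coordinates, which is the composition the concept relies on.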
Global, competitive markets which are characterised by mass customisation and rapidly changing customer requirements force major changes in production styles and the configuration of manufacturing systems. As a result, factories may need to be regularly adapted and optimised to meet short-term requirements. One way to optimise the production process is the adaptation of the plant layout to the current or expected order situation. To determine whether a layout change is reasonable, a model of the current layout is needed. It is used to perform simulations and in the case of a layout change it serves as a basis for the reconfiguration process. To aid the selection of possible measurement systems, a requirements analysis was done to identify the important parameters for the creation of a digital shadow of a plant layout. Based on these parameters, a method is proposed for defining limit values and specifying exclusion criteria. The paper thus contributes to the development and application of systems that enable an automatic synchronisation of the real layout with the digital layout.
The proposed approach applies current unsupervised clustering approaches in a different, dynamic manner. Instead of taking all the data as input and finding clusters among them, the given approach clusters Holter ECG data (long-term electrocardiography data from a Holter monitor) on a given interval, which enables a dynamic clustering approach (DCA). For this, advanced clustering techniques based on the well-known Dynamic Time Warping algorithm are used. Having clusters, e.g., on a daily basis, clusters can be compared by defining cluster shape properties. This yields a measure for variation in unsupervised cluster shapes and may reveal unknown changes in health status. Embedding this approach into wearable devices offers advantages over current techniques. On the one hand, users get feedback if their ECG data characteristics change unforeseeably over time, which makes early detection possible. On the other hand, cluster properties such as the biggest or smallest cluster may help a doctor in making diagnoses or in observing several patients. Furthermore, known processing techniques such as stress detection or arrhythmia classification may be applied to the found clusters.
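The Dynamic Time Warping distance underlying the clustering can be sketched with its classic dynamic-programming recurrence. This is the textbook O(nm) version, not the advanced variant used in the paper.

```python
# Textbook Dynamic Time Warping (DTW): D[i][j] holds the minimal cumulative
# cost of aligning the first i points of a with the first j points of b,
# where each pairing costs the absolute difference of the two values.
def dtw(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # a point may be matched to several points of the other series,
            # which is what makes DTW robust to local time shifts
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the warping path may stretch one series against the other, two heartbeats of slightly different length can still have distance zero, which is why DTW is a natural fit for comparing ECG intervals.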
The growth of the world economy in recent decades was shaped by the dynamics of digitalization and globalization in supply chains. The corona pandemic has exposed the dependency and vulnerability of these supply chains. Despite a multitude of binding standards, companies have also used digitalization and the division of labor for regulatory arbitrage. On the one hand, this increases the efficiency of the economy, which in turn conserves ecological resources; on the other hand, it undermines international standards. Globalization and digitalization are a blessing and a curse at the same time.
The persistently high levels of debt in some member states of the European Economic and Monetary Union continue to raise fears of sovereign insolvencies. To cope with the problems that have arisen, but also to prevent such a situation from occurring in the first place, the author considers a sovereign insolvency regime necessary, with bail-outs by the other member states only in emergencies. He proposes a resolution mechanism for over-indebted euro countries that builds on a 2016 concept of the German Council of Economic Experts.
This paper studies whether a monetary union can be managed solely by a rule-based approach. The Five Presidents' Report of the European Union rejects this idea and suggests a centralisation of powers. We analyse the philosophy of policy rules from the vantage point of the German economic school of thought. There is evidence that a monetary union consisting of sovereign states is well organised by rules, together with the principle of subsidiarity. The root cause of the euro crisis is rather the weak enforcement of rules, compounded by structural problems. Therefore, we suggest a genuine rule-based paradigm for a stable future of the Economic and Monetary Union.
Newly developed active pharmaceutical ingredients (APIs) are often poorly soluble in water. As a result, the bioavailability of the API in the human body is reduced. One approach to overcome this restriction is the formulation of amorphous solid dispersions (ASDs), e.g., by hot-melt extrusion (HME). Thus, the poorly soluble crystalline form of the API is transferred into a more soluble amorphous form. To reach this aim in HME, the APIs are embedded in a polymer matrix. The resulting amorphous solid dispersions may contain small amounts of residual crystallinity and have a tendency to recrystallize. For the controlled release of the API in the final drug product, the amount of crystallinity has to be known. This review assesses the available analytical methods that have been recently used for the characterization of ASDs
and the quantification of crystalline API content. Well established techniques like near- and mid-infrared spectroscopy (NIR and MIR, respectively), Raman spectroscopy, and emerging ones like UV/VIS, terahertz, and ultrasonic spectroscopy are considered in detail. Furthermore, their advantages and limitations are discussed with regard to general practical applicability as process analytical technology (PAT) tools in industrial manufacturing. The review focuses on spectroscopic methods which have been proven as most suitable for in-line and on-line process analytics. Further aspects are spectroscopic techniques that have been or could be integrated into an extruder.
Back to the future: origins and directions of the “Agile Manifesto” – views of the originators
(2018)
In 2001, seventeen professionals set up the manifesto for agile software development. They wanted to define values and basic principles for better software development. Beyond the field it originally addressed, the manifesto has been widely adopted by developers, in software-developing organizations, and outside the world of IT. Agile principles and their implementation in practice have paved the way for radically new and innovative ways of software and product development. In parallel, the understanding of the manifesto's underlying principles has evolved over time. This, in turn, may affect current and future applications of agile principles. This article presents results from a survey and an interview study conducted in collaboration with the original contributors of the manifesto for agile software development. Furthermore, it comprises the results of a workshop with one of the original authors. This publication focuses on the origins of the manifesto, the contributors' views from today's perspective, and their outlook on future directions. We evaluated 11 responses from the survey and 14 interviews to understand the viewpoint of the contributors. They emphasize that agile methods need to be carefully selected and that agile should not be seen as a silver bullet. They underline the importance of considering the variety of different practices and methods that influenced the manifesto. Furthermore, they mention that people should question their current understanding of "agile" and recommend reconsidering the core ideas of the manifesto.
Context: The current transformation of automotive development towards innovation, permanent learning and adaptation to change is directing attention to the integration of agile methods. Although there have been efforts to apply agile methods in the automotive domain for many years, widespread adoption has not yet taken place.
Goal: This study aims to gain a better understanding of the forces that prevent the adoption of agile methods.
Method: Survey based on 16 semi-structured interviews from the automotive domain. The results are analyzed by means of thematic coding.
Results: Forces that prevent agile adoption are mainly of organizational, technical and social nature and address inertia, anxiety and context factors. Key challenges in agile adoption are related to transforming organizational structures and culture, achieving faster software release cycles without loss of quality, the importance of software reuse in combination with agile practices, appropriate quality assurance measures, and the collaboration with suppliers and other disciplines such as mechanics.
Conclusion: Significant challenges are imposed by specific characteristics of the automotive domain such as high quality requirements and many interfaces to surrounding rigid and inflexible processes. Several means are identified that promise to overcome these challenges.
Context: The current situation and future scenarios of the automotive domain require a new strategy for developing high-quality software at a fast pace. In the automotive domain, it is assumed that a combination of agile development practices and software product lines is beneficial in order to be capable of handling a high frequency of improvements. This assumption is based on the understanding that agile methods introduce more flexibility in short development intervals, while software product lines help to manage the high number of variants and to improve quality through software reuse in long-term development.
Goal: This study derives a better understanding of the expected benefits of such a combination. Furthermore, it identifies the automotive-specific challenges that prevent the adoption of agile methods within the software product line.
Method: Survey based on 16 semi-structured interviews from the automotive domain, an internal workshop with 40 participants and a discussion round at the ESE Congress 2016. The results are analyzed by means of thematic coding.
Context: Software product lines are widely used in automotive embedded software development. This software paradigm improves the quality of software variants through reuse. The combination of agile software development practices with software product lines promises a faster delivery of high-quality software. However, setting up an agile software product line is still challenging, especially in the automotive domain. Goal: This publication aims to evaluate to what extent agility fits automotive product line engineering. Method: Based on previous work and two workshops, agility is mapped to software product line concerns. Results: This publication presents important principles of software product lines and examines how agile approaches fit those principles. Additionally, the principles are related to one of the four major concerns of software product line engineering: Business, Architecture, Process, and Organization. Conclusion: Agile software product line engineering is promising and can add value to existing development approaches. The identified commonalities and hindering factors need to be considered when defining a combined agile product line engineering approach.
Engineering of large vascularized adipose tissue constructs is still a challenge for the treatment of extensive high-graded burns or the replacement of tissue after tumor removal. Communication between mature adipocytes and endothelial cells is important for homeostasis and the maintenance of adipose tissue mass but, to date, is mainly neglected in tissue engineering strategies. Thus, new coculture strategies are needed to integrate adipocytes and endothelial cells successfully into a functional construct. This review focuses on the cross-talk of mature adipocytes and endothelial cells and considers their influence on fatty acid metabolism and vascular tone. In addition, the properties and challenges with regard to these two cell types for vascularized tissue engineering are highlighted.
After the initiator of the ESB Logistics Learning Factory, Prof. Vera Hummel, had gained experience in developing and implementing a concept for a Learning Factory for Advanced Industrial Engineering (aIE) at the University of Stuttgart's Institute IFF between 2005 and 2008, she was appointed as a full professor at ESB Business School, a faculty of Reutlingen University, in March 2010. Lacking a realistic, hands-on learning and teaching environment of industrial scale for its industrial engineering students, first ideas for a Learning Factory that would strongly focus on all aspects of production logistics were drafted in 2012. Already back then, a strong integration of the virtual and the physical factory was desired: while the Learning Factory itself would be physical, the neighboring partners along the supply chain, such as suppliers or distribution warehouses, could be added in a fully virtual way. Considering the implementation of the ESB Logistics Learning Factory a strategic initiative of the university, initial funding was provided by the faculty, ESB Business School, itself. Following its own creed to provide future-oriented training for the region, primarily local suppliers and manufacturers were selected as equipment providers for the new Learning Factory. During the initialization phase in 2014, a total of three researchers and nine students worked approximately four months to set up a first assembly line, storage racks, AGVs and pick-by-light systems in conjunction with the underlying didactical concept. Since then, several hundred students have participated in trainings and lectures held in the ESB Logistics Learning Factory, several research projects have been carried out, and multiple high-level politicians and industry executives have toured the shop floor.
Also, more than EUR 2 million in research and infrastructure funds could be secured for expansion and upgrades, allowing the ESB Logistics Learning Factory today to represent many core aspects of an Industrie 4.0 production environment.
Whoever invests in a company does so in order to earn money in the future, expecting a risk-adequate return. Selecting the key figures that make this value creation transparent, however, is not trivial, as they determine whether corporate targets are set correctly and whether the right incentives are given to management.
Revenue and profits stagnate at a high level, and yet the share price and earnings per share keep rising, a development that can be observed at companies such as Apple or eBay. Shareholders should know what arithmetic lies behind such developments and which methods are best suited to determine a company's value.
This article examines who becomes a minister of education in Germany. To address this question, we compiled a dataset containing the biographical characteristics of all ministers of education of the German federal states between 1950 and 2020. As an example of the dataset's use, we examine the two characteristics gender and previous professional experience and link them to indicators of the size and development of the education budget and the length of the term of office. We show that between 1950 and 2020 considerably more men than women were appointed minister of education, regardless of which parties the ministers came from. Moreover, the majority of ministers of education have no previous professional experience as teachers when taking office; most of them, however, already have political experience when they assume their position. Our database, which contains the first comprehensive survey of biographical characteristics of ministers of education in the German federal states, is available to all interested researchers.
Due to digitalization, constant technological progress and ever shorter product life cycles, enterprises are currently facing major challenges. In order to succeed in the market, business models have to be adapted more often and more quickly to changing market conditions than in the past. Fast adaptability, also called agility, is a decisive competitive factor in today's world. Because of the ever-growing IT share of products and the fact that they are manufactured using IT, changing the business model has a major impact on the enterprise architecture (EA). However, developing EAs is a very complex task, because many stakeholders with conflicting interests are involved in the decision-making process; therefore, a lot of collaboration is required. To support organizations in developing their EA, this article introduces a novel integrative method that systematically integrates stakeholder interests into decision-making activities. By using the method, collaboration between the stakeholders involved is improved by identifying points of contact between them. Furthermore, standardized activities make decision-making more transparent and comparable without limiting creativity.
In times of dynamic markets, enterprises have to be agile in order to react quickly to market influences. Due to the increasing digitization of products, the enterprise IT is often affected when business models change. Enterprise Architecture Management (EAM) targets a holistic view of the enterprise's IT and its relations to the business. However, Enterprise Architectures (EA) are complex structures consisting of many layers, artifacts and relationships between them. Thus, analyzing an EA is a very complex task for stakeholders. Visualizations are common vehicles to support analysis, but in practice visualization capabilities lack flexibility and interactivity. A solution to improve the support of stakeholders in analyzing EAs might be the application of visual analytics. Starting from a systematic literature review, this article investigates the features of visual analytics relevant in the context of EAM.
Companies are continuously changing their strategy, processes, and information systems to benefit from the digital transformation. Controlling the digital architecture and its governance is the fundamental goal. Enterprise Governance, Risk and Compliance (GRC) systems are vital for managing digital risks that threaten modern enterprises from many different angles. The most significant constituent of GRC systems is the definition of controls that are implemented on different layers of a digital Enterprise Architecture (EA). As part of the compliance aspect of GRC, the effectiveness of these controls is assessed and reported to relevant management bodies within the enterprise. In this paper, we present a metamodel that links controls to the affected elements of a digital EA and supplies a way of expressing associated assessment techniques and results. We complement the metamodel with an expository instantiation of a control compliance cockpit in an international insurance enterprise.
New or adapted digital business models have huge impacts on Enterprise Architectures (EA) and require them to become more agile, flexible, and adaptable. All these changes happen frequently and are currently not well documented. An EA consists of many elements with manifold relationships between them; thus, changing the business model may have multiple impacts on other architectural elements. The EA engineering process deals with the development, change and optimization of architectural elements and their dependencies. An EA thus provides a holistic view for both business and IT from the perspective of the many stakeholders involved in EA decision-making processes. Different stakeholders have specific concerns and today often collaborate in unclear decision-making processes. In our research, we investigate information from collaborative decision-making processes to support stakeholders in taking current decisions. In addition, we provide all the information necessary to understand how and why decisions were taken. We collect the decision-related information automatically to minimize manual, time-intensive work as much as possible. The core contribution of our research extends a decisional metamodel, which links basic decisions with architectural elements and extends them with an associated decisional case context. Our aim is to support a new integral method for multi-perspective and collaborative decision-making processes. We illustrate this with a practice-relevant decision-making scenario for Enterprise Architecture Engineering.
The capability of the method of immersion transmission ellipsometry (ITE) (Jung et al., Int. Patent WO 2004/109260) to determine not only three-dimensional refractive indices in anisotropic thin films (which was already possible in the past), but even their gradients along the z-direction (perpendicular to the film plane) is investigated in this paper. It is shown that the determination of orientation gradients in deep-sub-μm films becomes possible by applying ITE in combination with reflection ellipsometry. The technique is supplemented by atomic force microscopy for measuring the film thickness. For a photo-oriented thin film, no gradient was found, as expected. For a photo-oriented film which was subsequently annealed in a nematic liquid crystalline phase, an order was found similar to the one applied in vertically aligned nematic displays, with a tilt angle varying along the z-direction. For fresh films, gradients were only detected for the refractive index perpendicular to the film plane, as expected.
Purpose
Context awareness in the operating room (OR) is important to realize targeted assistance that supports actors during surgery. A situation recognition system (SRS) is used to interpret intraoperative events and derive an intraoperative situation from them. To achieve a modular system architecture, it is desirable to decouple the SRS from other system components. This leads to the need for an interface between such an SRS and context-aware systems (CAS). This work aims to provide an open, standardized interface to enable loose coupling of the SRS with varying CAS and thus allow vendor-independent device orchestrations.
Methods
A requirements analysis investigated limiting factors that currently prevent the integration of CAS in today's ORs. These elicited requirements enabled the selection of a suitable base architecture. We examined how to specify this architecture within the constraints of an interoperability standard. The resulting middleware was integrated into a prototypic SRS and into our system for intraoperative support, the OR-Pad, as an exemplary CAS, to evaluate whether our solution can enable context-aware assistance during simulated orthopedic interventions.
Results
The emerging Service-oriented Device Connectivity (SDC) standard series was selected to specify and implement a middleware for providing the interpreted contextual information while the SRS and CAS are loosely coupled. The results were verified within a proof of concept study using the OR-Pad demonstration scenario. The fulfillment of the CAS’ requirements to act context-aware, conformity to the SDC standard series, and the effort for integrating the middleware in individual systems were evaluated. The semantically unambiguous encoding of contextual information depends on the further standardization process of the SDC nomenclature. The discussion of the validity of these results proved the applicability and transferability of the middleware.
Conclusion
The specified and implemented SDC-based middleware shows the feasibility of loose coupling an SRS with unknown CAS to realize context-aware assistance in the OR.
The focus of the developed maturity model was set on processes. The concept of the widespread CMM and its practices has been transferred to the perioperative domain and the concept of the new maturity model. Additional optimization goals and technological as well as networking-specific aspects enable a process- and object-focused view of the maturity model in order to ensure broad coverage of different subareas. The evaluation showed that the model is applicable to the perioperative field. Adjustments and extensions of the maturity model are future steps to improve the rating and classification of the new maturity model.
One of the key challenges for automatic assistance is supporting the actors in the operating room depending on the status of the procedure. To this end, context information collected in the operating room is used to gain knowledge about the current situation. Solutions already exist in the literature for specific use cases, but it is doubtful to what extent these approaches can be transferred to other conditions. We conducted a comprehensive literature review of existing situation recognition systems for the intraoperative area, covering 274 articles and 95 cross-references published between 2010 and 2019. We contrasted and compared 58 identified approaches based on defined aspects such as the sensor data used or the application area. In addition, we discussed applicability and transferability. Most of the papers focus on video data for recognizing situations within laparoscopic and cataract surgeries. Not all of the approaches can be used online for real-time recognition. With different methods, good results with recognition accuracies above 90% were achieved. Overall, transferability is less addressed, and the applicability of approaches to other circumstances seems possible only to a limited extent. Future research should place a stronger focus on adaptability. The literature review shows differences between existing approaches for situation recognition and outlines research trends; applicability and transferability to other conditions are less addressed in current work.
Purpose
For the modeling, execution, and control of complex, non-standardized intraoperative processes, a modeling language is needed that reflects the variability of interventions. As the established Business Process Model and Notation (BPMN) reaches its limits in terms of flexibility, the Case Management Model and Notation (CMMN) was considered as it addresses weakly structured processes.
Methods
To analyze the suitability of the modeling languages, BPMN and CMMN models of a Robot-Assisted Minimally Invasive Esophagectomy and Cochlea Implantation were derived and integrated into a situation recognition workflow. Test cases were used to contrast the differences and compare the advantages and disadvantages of the models concerning modeling, execution, and control. Furthermore, the impact on transferability was investigated.
Results
Compared to BPMN, CMMN allows flexibility for modeling intraoperative processes while remaining understandable. Although more effort and process knowledge are needed for execution and control within a situation recognition system, CMMN enables better transferability of the models and therefore of the system. In conclusion, CMMN should be chosen as a supplement to BPMN for flexible process parts that BPMN covers insufficiently, or otherwise as a replacement for the entire process.
Conclusion
CMMN offers the flexibility for variable, weakly structured process parts, and is thus suitable for surgical interventions. A combination of both notations could allow optimal use of their advantages and support the transferability of the situation recognition system.
Intra-operative fluoroscopy-guided assistance system for transcatheter aortic valve implantation
(2014)
A new surgical assistance system has been developed to support the correct positioning of the AVP during transapical TAVI. The developed assistance system automatically defines the target area for implanting the AVP under live 2-D fluoroscopy guidance. Moreover, this surgical assistance system works with low levels of contrast agent for the final deployment of the AVP, thereby reducing long-term negative effects, such as renal failure, in elderly and high-risk patients.
An important shift in software delivery is the definition of a cloud service as an independently deployable unit following the microservices architectural style. Container virtualization facilitates development and deployment by ensuring independence from the runtime environment. Thus, cloud services are built as container-based systems: a set of containers that control the lifecycle of software and middleware components. However, using containers leads to a new paradigm for service development and operation: self-service environments enable software developers to deploy and operate container-based systems on their own ("you build it, you run it"). Following this approach, more and more operational aspects are transferred to the responsibility of software developers. In this work, we propose a concept for self-adaptive cloud services based on container virtualization in line with the microservices architectural style and present a model-based approach that assists software developers in building these services. Based on operational models specified by developers, the mechanisms required for self-adaptation are generated automatically. As a result, each container adapts itself in a reactive, decentralized manner. We evaluate a prototype which leverages the emerging TOSCA standard to specify operational behavior in a portable manner.
Parallel applications are the computational backbone of major industry trends and grand challenges in science. Whereas these applications are typically constructed for dedicated High Performance Computing clusters and supercomputers, the cloud emerges as attractive execution environment, which provides on-demand resource provisioning and a pay-per-use model. However, cloud environments require specific application properties that may restrict parallel application design. As a result, design trade-offs are required to simultaneously maximize parallel performance and benefit from cloud-specific characteristics.
In this paper, we present a novel approach to assess the cloud readiness of parallel applications based on the design decisions made. By discovering and understanding the implications of these parallel design decisions on an application's cloud readiness, our approach supports the migration of parallel applications to the cloud. We introduce an assessment procedure, its underlying meta model, and a corresponding instantiation to structure this multi-dimensional design space. For evaluation purposes, we present an extensive case study comprising three parallel applications and discuss their cloud readiness based on our approach.
Cloud resources can be dynamically provisioned according to application-specific requirements and are paid on a per-use basis. This gives rise to a new concept for parallel processing: elastic parallel computations. However, it is still an open research question to what extent parallel applications can benefit from elastic scaling, which requires resource adaptation at runtime and corresponding coordination mechanisms. In this work, we analyze how to address these system-level challenges in the context of developing and operating elastic parallel tree search applications. Based on our findings, we discuss the design and implementation of TASKWORK, a cloud-aware runtime system specifically designed for elastic parallel tree search, which enables the implementation of elastic applications by means of higher-level development frameworks. We show how to implement an elastic parallel branch-and-bound application based on an exemplary development framework and report on our experimental evaluation, which also considers several benchmarks for parallel tree search.
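The branch-and-bound pattern underlying such tree search applications can be sketched as a pool of subproblem tasks with an incumbent bound used for pruning. The sketch below is a minimal sequential illustration on a small knapsack instance; all names are illustrative and do not reflect TASKWORK's actual API, which additionally distributes the task pool and coordinates the bound across elastic workers.

```python
# Minimal branch-and-bound over a task pool (sequential sketch; a runtime
# like TASKWORK would process pool entries on many workers in parallel).
from collections import deque

def knapsack_bnb(values, weights, capacity):
    """Best value for a 0/1 knapsack via branch-and-bound."""
    n = len(values)
    best = 0
    # Each task is a subproblem: (next item index, value so far, remaining capacity).
    pool = deque([(0, 0, capacity)])
    while pool:
        i, value, cap = pool.pop()
        best = max(best, value)
        if i == n:
            continue
        # Bound: optimistic estimate assumes all remaining items fit.
        if value + sum(values[i:]) <= best:
            continue  # prune: this subtree cannot improve the incumbent
        # Branch: skip item i, or take it if it still fits.
        pool.append((i + 1, value, cap))
        if weights[i] <= cap:
            pool.append((i + 1, value + values[i], cap - weights[i]))
    return best

print(knapsack_bnb([60, 100, 120], [10, 20, 30], 50))  # → 220
```

In an elastic setting, the shared incumbent `best` is exactly the coordination state the abstract alludes to: workers joining or leaving at runtime must synchronize it to keep pruning effective.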
Container virtualization has evolved into a key technology for deployment automation in line with the DevOps paradigm. Whereas container management systems facilitate the deployment of cloud applications by employing container-based artifacts, parts of the deployment logic have been applied before to build these artifacts. Current approaches do not integrate these two deployment phases in a comprehensive manner. Limited knowledge of the application software and middleware encapsulated in container-based artifacts leads to maintainability and configuration issues. Besides, the deployment of cloud applications is based on custom orchestration solutions, leading to lock-in problems. In this paper, we propose a two-phase deployment method based on the TOSCA standard. We present integration concepts for TOSCA-based orchestration and deployment automation using container-based artifacts. Our two-phase deployment method enables capturing and aligning all the deployment logic related to a software release, leading to better maintainability. Furthermore, we build a container management system, composed of a TOSCA-based orchestrator on Apache Mesos, to deploy container-based cloud applications automatically.
In recent years, the trend toward digitalization and connectivity has changed customer expectations of B2B customer service. This article pursues two clear study objectives: first, it examines the role of IoT (Internet of Things) and cybersecurity as success factors for business-to-business (B2B) customer service, and second, how secure integration can contribute to a competitive advantage in the German market. Using a qualitative approach based on 20 interviews, it was examined whether IoT and cybersecurity can be regarded as success factors for German B2B customer service. As a result, this study delivers five key statements (hypotheses) derived from the qualitative interviews. In addition to discussing general success factors and their influence, the role of IoT in optimizing B2B customer service is examined. Furthermore, potential security risks associated with the service models, necessary cybersecurity requirements, and data collection are discussed. Finally, a model was developed that shows the internal and external aspects which help IoT and cybersecurity to be experienced as success factors along the customer activity chain in the pre-sales, sales, and after-sales phases.
This practice-oriented, cross-industry article thus provides insights based on qualitative findings for further theoretical research and enables organizations to view the topic holistically.
Forecasting demand is challenging. Different products exhibit different demand patterns: while demand may be constant and regular for one product, it may be sporadic for another, and even when demand does occur, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today, a large number of models is evaluated on a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper, we show that a machine learning classification algorithm can predict the best possible model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The classification algorithm achieves a mean ROC-AUC of 89%, which underlines the skill of the model.
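The core idea of mapping time-series characteristics to a model choice can be illustrated with two standard intermittency features, the average inter-demand interval (ADI) and the squared coefficient of variation of nonzero demand (CV²), and the well-known Syntetos–Boylan cut-offs (ADI ≈ 1.32, CV² ≈ 0.49). The paper's actual classifier is learned from data; the rule-based pattern check and the pattern-to-model lookup below are illustrative assumptions, as are the model names.

```python
# Sketch: characterize a demand series and pick a candidate model family.
# The pattern -> model mapping is a hypothetical stand-in for the paper's
# learned classifier.
from statistics import mean, pstdev

def demand_features(series):
    nonzero = [x for x in series if x > 0]
    adi = len(series) / len(nonzero)              # average inter-demand interval
    cv2 = (pstdev(nonzero) / mean(nonzero)) ** 2  # squared coeff. of variation
    return adi, cv2

def classify_pattern(series, adi_cut=1.32, cv2_cut=0.49):
    """Syntetos-Boylan style four-quadrant demand classification."""
    adi, cv2 = demand_features(series)
    if adi < adi_cut:
        return "smooth" if cv2 < cv2_cut else "erratic"
    return "intermittent" if cv2 < cv2_cut else "lumpy"

# Illustrative pattern -> forecasting model lookup (assumed, not from the paper).
MODEL_FOR = {"smooth": "exponential_smoothing", "erratic": "lightgbm",
             "intermittent": "croston", "lumpy": "lstm"}

series = [0, 0, 3, 0, 0, 0, 4, 0, 0, 2]     # sporadic demand
print(MODEL_FOR[classify_pattern(series)])  # → croston
```

Replacing the fixed cut-offs with a trained classifier over richer features is exactly the step that turns this heuristic into the paper's approach.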
Delphi Markets
(2023)
Delphi markets refer to approaches and implementations that integrate prediction markets and Delphi studies (Real-time Delphi). Combining the two forecasting methods can potentially compensate for each other's weaknesses. For example, prediction markets can be used to select participants with expertise and to motivate long-term participation through their gamified approach and incentive mechanisms. In this paper, two potentials for prediction markets and four potentials for Delphi studies that are made possible by integration are derived theoretically. Subsequently, three different integration approaches are presented, which illustrate integration at the user, market, and Delphi-question level and show that, depending on the approach, not all potentials can be achieved. Finally, recommendations for the use of Delphi markets are derived, and existing limitations of Delphi markets as well as future developments are pointed out.
The Internet of Things (IoT) is shaped by many different standards, protocols, and data formats that are often not compatible with each other. Thus, integrating heterogeneous IoT components into a uniform IoT setup can be a time-consuming manual task. This lack of interoperability between IoT components has been addressed with different approaches in the past, but only very few of them rely on machine learning techniques. In this work, we present a new way towards IoT interoperability based on Deep Reinforcement Learning (DRL). In detail, we demonstrate that DRL algorithms using network architectures inspired by Natural Language Processing (NLP) can learn to control an environment by merely taking raw JSON or XML structures, which reflect the current state of the environment, as input. Applied to IoT setups, where the current state of a component is often reflected by features embedded into JSON or XML structures and exchanged via messages, our NLP DRL approach eliminates the need for feature engineering and manually written code for data pre-processing, feature extraction, and decision making.
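The NLP-style front end described above replaces hand-crafted features with a tokenization of the raw message. A minimal sketch of that preprocessing step, assuming a simple regex tokenizer and an on-the-fly vocabulary (the sample payload and all names are illustrative, not the paper's implementation):

```python
# Sketch: turn a raw JSON state message into an id sequence that the
# embedding layer of a DRL policy network could consume directly.
import json
import re

def tokenize_json(payload: str):
    """Split a raw JSON string into structural and value tokens."""
    return re.findall(r'[{}\[\]:,]|"[^"]*"|[-\d.]+|true|false|null', payload)

def encode(tokens, vocab):
    """Map tokens to integer ids, growing the vocabulary on the fly."""
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

# Hypothetical IoT device state, as it might arrive in a message.
state = json.dumps({"device": "lamp1", "on": False, "brightness": 0})
vocab = {}
ids = encode(tokenize_json(state), vocab)
print(ids)  # id sequence fed to the policy network's embedding layer
```

The point of the paper is that everything after this step, feature extraction and decision making, is learned end to end rather than coded by hand.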
Decentralized energy systems are characterized by ad hoc planning. The missing integration of energy objectives into business strategy creates difficulties, resulting in inefficient energy architectures and decisions. Practice-proven methods such as the balanced scorecard, enterprise architecture management, and the value network approach support the transformation path towards an effective decentralized system. The methods are evaluated based on a case study. Managing multi-dimensionality, high complexity, and multiple actors are the main drivers for an effective and efficient energy management system. The underlying basis for gaining the positive impacts of these methods on decentralized corporate energy systems is the digitization of energy data and processes.
Several studies analyzed existing Web APIs against the constraints of REST to estimate the degree of REST compliance among state-of-the-art APIs. These studies revealed that only a small number of Web APIs are truly RESTful. Moreover, identified mismatches between theoretical REST concepts and practical implementations lead us to believe that practitioners perceive many rules and best practices aligned with these REST concepts differently in terms of their importance and impact on software quality. We therefore conducted a Delphi study in which we confronted eight Web API experts from industry with a catalog of 82 REST API design rules. For each rule, we let them rate its importance and software quality impact. As consensus, our experts rated 28 rules with high, 17 with medium, and 37 with low importance. Moreover, they perceived usability, maintainability, and compatibility as the most impacted quality attributes. The detailed analysis revealed that the experts saw rules for reaching Richardson maturity level 2 as critical, while reaching level 3 was less important. As the acquired consensus data may serve as valuable input for designing a tool-supported approach for the automatic quality evaluation of RESTful APIs, we briefly discuss requirements for such an approach and comment on the applicability of the most important rules.
Hypermedia as the Engine of Application State (HATEOAS) is one of the core constraints of REST. It refers to the concept of embedding hyperlinks into the response of a queried or manipulated resource to show a client possible follow-up actions and transitions to related resources. Thus, this concept aims to provide a client with a navigational support when interacting with a Web-based application. Although HATEOAS should be implemented by any Web-based API claiming to be RESTful, API providers tend to offer service descriptions in place of embedding hyperlinks into responses. Instead of relying on a navigational support, a client developer has to read the service description and has to identify resources and their URIs that are relevant for the interaction with the API. In this paper, we introduce an approach that aims to identify transitions between resources of a Web-based API by systematically analyzing the service description only. We devise an algorithm that automatically derives a URI Model from the service description and then analyzes the payload schemas to identify feasible values for the substitution of path parameters in URI Templates. We implement this approach as a proxy application, which injects hyperlinks representing transitions into the response payload of a queried or manipulated resource. The result is a HATEOAS-like navigational support through an API. Our first prototype operates on service descriptions in the OpenAPI format. We evaluate our approach using ten real-world APIs from different domains. Furthermore, we discuss the results as well as the observations captured in these tests.
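The central mechanism of the described approach, substituting path parameters in URI Templates with values taken from a payload, can be sketched in a few lines. The field and template names below are illustrative; the real approach derives the URI model and candidate parameter values from the OpenAPI description rather than from a hard-coded dictionary.

```python
# Sketch: fill {param} placeholders in a URI Template from payload fields,
# leaving unmatched placeholders untouched.
import re

def expand(template: str, payload: dict) -> str:
    def lookup(match):
        key = match.group(1)
        return str(payload[key]) if key in payload else match.group(0)
    return re.sub(r"\{(\w+)\}", lookup, template)

# Hypothetical response payload of a queried resource.
order = {"orderId": 42, "customerId": 7}
links = [expand(t, order) for t in
         ("/orders/{orderId}", "/customers/{customerId}/orders")]
print(links)  # → ['/orders/42', '/customers/7/orders']
```

A proxy, as in the paper, would inject such expanded URIs into the response as hyperlinks, giving clients HATEOAS-like navigation without the API itself embedding links.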
Several diseases occur due to asbestos exposure, and because of the long latency period, asbestos-related mortality and morbidity are predicted to keep increasing. Current methods to investigate asbestos-related disease are mostly invasive. Therefore, the aim of the present paper was to investigate whether signals in human breath can be correlated with asbestos-related lung diseases, using a multi-capillary column (MCC) coupled to an ion mobility spectrometer (IMS) as a non-invasive method. Breath samples of 10 mL were taken from 25 patients suffering from asbestos-related diseases (BK4103); this group includes patients with asbestos-related pleural thickening with and without pulmonary fibrosis. Twelve healthy persons constituted the control group, and their breath samples were compared with those of the BK4103 patients. In total, 83 peaks were found in the IMS chromatogram. Discrimination was possible with p-values < 0.001 (99.9 %) for two peaks, < 0.01 (99 %) for 5 peaks, and < 0.05 (95 %) for 17 peaks. The most discriminating peaks, alpha-pinene and 4-ethyltoluene, were identified among some others with lower p-values. The corresponding box-and-whisker plots comparing both groups are presented. In addition, a decision tree including all peaks was created that differentiates, based on alpha-pinene, between the BK4103 (pleural plaques) group and the control group. The sensitivity was calculated as 96 %, the specificity as 50 %, and the positive and negative predictive values as 80 % and 86 %, respectively. Ion mobility spectrometry was thus introduced as a non-invasive method to separate the asbestos-related and healthy groups. Naturally, the findings need further confirmation in larger population groups, but they encourage further investigation.
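The four reported diagnostic metrics are mutually consistent with a confusion matrix of 24 true positives, 1 false negative, 6 true negatives, and 6 false positives over the 25 patients and 12 controls. This back-calculated matrix is an assumption for illustration, not a figure taken from the paper; the metric definitions themselves are standard.

```python
# Standard diagnostic metrics from confusion-matrix counts.
def diagnostics(tp, fn, tn, fp):
    return {
        "sensitivity": tp / (tp + fn),  # recall on the disease group
        "specificity": tn / (tn + fp),  # recall on the control group
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts back-calculated to match the reported 96/50/80/86 % values
# (assumed, not stated in the paper).
m = diagnostics(tp=24, fn=1, tn=6, fp=6)
print({k: round(v, 2) for k, v in m.items()})
# → {'sensitivity': 0.96, 'specificity': 0.5, 'ppv': 0.8, 'npv': 0.86}
```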
Companies are constantly changing their business process models. In team environments, different versions of a process model are created at the same time. These versions of a process model need to be merged from time to time to consolidate changes and create a new common version.
In this short paper, we propose a solution for modifying a merge result. The goal is to create a meaningful merge result by adding connector nodes to the model at specific locations. This increases the number of possible result models and reduces additional implementation effort.
Software and system development is complex and diverse, and a multitude of development approaches is used and combined with each other to address the manifold challenges companies face today. To study the current state of the practice and to build a sound understanding of the utility of different development approaches and their application to modern software system development, we launched the HELENA initiative in 2016. This paper introduces the 2nd HELENA workshop and provides an overview of the current project state. In the workshop, six teams present initial findings from their regions, impulse talks are given, and further steps of the HELENA roadmap are discussed.
Research organisations contribute not only to scientific findings but also to sustainable development. As key drivers of innovation, as employers, and as publicly funded institutions, research organisations not only have the social mandate to address their responsibilities regarding the environment and society, but also strive to understand their social responsibility for their employees and its impact on research and operational processes. Sponsored by the German Federal Ministry of Education and Research (BMBF), this paper presents the results of the joint research project LENA (Guidelines for Sustainability Management) and describes how three of Germany's largest research organisations (Fraunhofer-Gesellschaft, Leibniz Association and Helmholtz Association) face current challenges in the human resource management of research organisations through the integration of a common understanding of sustainability and a broad-based framework. The empirical basis is a qualitative organisational-ethnographic study that reflects the expert knowledge, everyday experiences, and subject-oriented interpretation of sustainability in human resource management. The results yield concrete recommendations for institutional practice and offer structured, methodologically proven options for action addressing the stakeholders in human resource management in research institutions.