Informatik
The Twelfth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2020) continued a series of events covering a broad spectrum of topics related to advances in database fundamentals, the evolving relationship between databases and other domains, database technologies and content processing, as well as the specifics of application-domain databases. Advances in various technologies and domains related to databases have triggered substantial improvements in content processing, information indexing, and data, process, and knowledge mining. The push came from web services, artificial intelligence, and agent technologies, as well as from the widespread adoption of XML. High-speed communication and computation, large storage capacities, and load balancing for distributed database access allow new approaches to content processing with incomplete patterns, advanced ranking algorithms, and advanced indexing methods. Developments in e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems put pressure on database communities to push the de facto methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
Entrepreneurship education is becoming increasingly important in higher education and also drives the development of innovative teaching formats, which can increase student engagement. It does, however, need a greater international focus to become more attractive for both domestic and international students. This paper presents the examination and course design of two case studies, which promote entrepreneurship education for domestic and international students. These examples show that entrepreneurship courses are attractive due to their focus on interdisciplinarity, experience-based learning, and project-based work. Following a design-based research approach, this paper provides a practical contribution by offering a detailed overview of course design principles and classroom practice, and presents reflections and learnings from an iterative development process.
A holistic approach to digitization enables decision-makers to achieve new efficiency in corporate performance management. Digitalization improves the quality, validity, and speed of information retrieval and processing. At present, most corporations are confronted with the problem of not being able to organize, categorize, and visualize decision-relevant information. To meet the challenges of information management, the Management Cockpit provides an information center for managers. In accordance with the specific working environment of executives, the Management Cockpit offers a quick and comprehensive overview of the company's situation. Today, the current situation of a company is no longer influenced only by internal factors, but also by its public image. Social media monitoring and analysis is therefore a crucial component for the external factors of successful management. Real-time monitoring of the emotions and behaviors of consumers and customers thus contributes to effective controlling of all business areas. Intelligent factories promise to collect data for the internal factors, but the current reality in manufacturing looks different: production often involves a large number of different machines with varying degrees of digitization and limited sensor data availability. In order to close this gap, we developed a compact sensor board with network components, which allows a flexible design with different sensors for a wide variety of applications. The sensor data enable decision-makers to adapt the supply chain based on their internal and external observations in the Management Cockpit. Due to its real-time and long-term monitoring and analysis capabilities, the Management Cockpit provides a multi-dimensional view of the company and supports a holistic corporate performance management.
With significant advancements in digital technologies, firms find themselves competing in an increasingly dynamic business environment. It is of paramount importance that organizations undertake proper governance mechanisms with respect to their business and IT strategies. Therefore, IT governance (ITG) has become an important factor for firm performance. In recent years, agility has evolved into a core concept for governance, especially in the area of software development. However, the impact of agility on ITG and firm performance has not been analyzed by the broader scientific community. This paper focuses on the question of how the concept of agility affects the ITG–firm performance relationship. The conceptual model for this question was tested in a quantitative research process with 400 executives responding to a standardized survey. Findings show that adopting agile principles, values, and best practices in the context of ITG leads to meaningful results for governance, business/IT alignment, and firm performance.
The advent of chatbots in customer service solutions has received increasing attention from research and practice in recent years. However, the relevant dimensions and features of service quality and service performance for chatbots remain quite unclear. Therefore, this research develops and tests a conceptual model for customer service quality and customer service performance in the context of chatbots. Additionally, the impact of the developed service dimensions on different customer relationship metrics is measured across different service channels (hotline versus chatbot). Findings of six independent studies indicate a strong main effect of the conceptualized service dimensions on customer satisfaction, service costs, intention to reuse the service, word-of-mouth, and customer loyalty. However, different service dimensions are relevant for chatbots than for a traditional service hotline.
Autonomous driving is becoming the next big digital disruption in the automotive industry. However, the possibility of integrating autonomous driving vehicles into current transportation systems not only involves technological issues but also requires the acceptance and adoption of users. Therefore, this paper develops a conceptual model for user acceptance of autonomous driving vehicles. The corresponding model is tested through a standardized survey of 470 respondents in Germany. Finally, the findings are discussed in relation to the current developments in the automotive industry, and recommendations for further research are given.
The shift of populations to cities is creating challenges in many respects, thus leading to increasing demand for smart solutions to urbanization problems. Smart city applications range from technical and social to economic and ecological. The main focus of this work is to provide a systematic literature review of smart city research to answer two main questions: (1) How is current research on smart cities structured? and (2) What directions are relevant for future research on smart cities? To answer these research questions, a text-mining approach is applied to a large number of publications. This provides an overview and gives insights into relevant dimensions of smart city research. Although the main dimensions of research are already described in the literature, an evaluation of the relevance of such dimensions is missing. Findings suggest that the dimensions of environment and governance are popular, while the dimension of economy has received only limited attention.
Context: Currently, most companies apply approaches to product roadmapping that are based on the assumption that the future is highly predictable. Nowadays, however, companies are facing increasing market dynamics, rapidly evolving technologies, and shifting user expectations. Together with the adoption of lean and agile practices, this makes it increasingly difficult to plan and predict upfront which products, services, or features will satisfy customer needs. Companies are therefore struggling to provide product roadmaps that fit dynamic and uncertain market environments and that can be used together with lean and agile software development practices.
Objective: To gain a better understanding of modern product roadmapping processes, this paper aims to identify suitable processes for the creation and evolution of product roadmaps in dynamic and uncertain market environments.
Method: We performed a Grey Literature Review (GLR) according to the guidelines from Garousi et al.
Results: 32 approaches to product roadmapping were identified. Typical characteristics of these processes are the strong connection between the product roadmap and the product vision, an emphasis on stakeholder alignment, the definition of business and customer goals as part of the roadmapping process, a high degree of flexibility with respect to reaching these goals, and the inclusion of validation activities in the roadmapping process. An overall goal of nearly all approaches is to avoid waste by reducing development and business risks early. From the 32 approaches found, four representative roadmapping processes are described in detail.
This book discusses important topics for engineering and managing software startups, such as how technical and business aspects are related, which complications may arise and how they can be dealt with. It also addresses the use of scientific, engineering, and managerial approaches to successfully develop software products in startup companies.
The book covers a wide range of software startup phenomena and includes the knowledge, skills, and capabilities required for startup product development; team capacity and team roles; technical debt; minimal viable products; startup metrics; common pitfalls and patterns observed; as well as lessons learned from startups in Finland, Norway, Brazil, Russia, and the USA. All results are based on empirical findings, and the claims are backed by evidence and concrete observations, measurements, and experiments from qualitative and quantitative research, as is common in empirical software engineering.
The book helps entrepreneurs and practitioners become aware of various phenomena, challenges, and practices that occur in real-world startups, and provides insights based on sound research methodologies presented in a simple and easy-to-read manner. It also allows students in business and engineering programs to learn about the important engineering concepts and technical building blocks of a software startup, and it is suitable for researchers at different levels in areas such as software and systems engineering or information systems who are studying advanced topics related to software business.
Product roadmaps are an important tool in product development. They provide direction, enable consistent development in relation to a product vision, and support communication with relevant stakeholders. There are many different formats for product roadmaps, but they are often based on the assumption that the future is highly predictable. However, software-intensive businesses in particular are faced with increasing market dynamics, rapidly evolving technologies, and changing user expectations. As a result, many organizations are wondering which roadmap format is appropriate for them and which components it should have to deal with an unpredictable future.
Objectives: To gain a better understanding of product roadmap formats and their components, this paper aims to identify suitable formats for the development and handling of product roadmaps in dynamic and uncertain markets.
Method: We performed a grey literature review (GLR) according to the guidelines from Garousi et al.
Results: A Google search identified 426 articles, 25 of which were included in this study. First, various components of the roadmap were identified, in particular the product vision, themes, goals, outcomes, and outputs. In addition, various product roadmap formats were discovered, such as feature-based, goal-oriented, outcome-driven, and theme-based roadmaps. The roadmap components were then assigned to the various product roadmap formats. This overview aims to provide initial decision support for companies to select a suitable product roadmap format and adapt it to their own needs.
Selecting a suitable development method for a specific project context is one of the most challenging activities in process design. Every project is unique and, thus, many context factors have to be considered. Recent research took some initial steps towards statistically constructing hybrid development methods, yet, paid little attention to the peculiarities of context factors influencing method and practice selection. In this paper, we utilize exploratory factor analysis and logistic regression analysis to learn such context factors and to identify methods that are correlated with these factors. Our analysis is based on 829 data points from the HELENA dataset. We provide five base clusters of methods consisting of up to 10 methods that lay the foundation for devising hybrid development methods. The analysis of the five clusters using trained models reveals only a few context factors, e.g., project/product size and target application domain, that seem to significantly influence the selection of methods. An extended descriptive analysis of these practices in the context of the identified method clusters also suggests a consolidation of the relevant practice sets used in specific project contexts.
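As an illustration of the statistical step described above, the following sketch relates context factors to method use with a logistic regression. The data file, column names, and target variable are invented for illustration and do not reflect the actual HELENA dataset schema.

```python
# Illustrative sketch only: the CSV, its columns, and the target are
# hypothetical stand-ins, not the real HELENA data layout.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("helena_survey.csv")  # hypothetical export
X = pd.get_dummies(df[["project_size", "target_domain", "criticality"]])
y = df["uses_scrum"]  # 1 if the team reports using Scrum

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Coefficients hint at which context factors correlate with method use.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
print("held-out accuracy:", model.score(X_test, y_test))
```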
Universities are part of the innovation ecosystem: in a cooperative exchange relationship, they foster the regional economy and social development. Promoting innovation, creativity, and entrepreneurial thinking is therefore an important task. As early as 2005, the European Commission defined entrepreneurial thinking and acting as a key competence for the 21st century: "Entrepreneurial competence is the ability to turn ideas into action" (European Commission, 2005, p. 21). Entrepreneurship education is booming, and the promotion of entrepreneurial competencies at universities is being pushed forward; fostering a start-up culture is thus not only part of business education but should rather be understood as a cross-cutting task. The entrepreneurial mission is changing the teaching and learning culture at universities. On the one hand, the goal is to anchor entrepreneurship broadly at universities: entrepreneurial thinking and acting is a core competence. On the other hand, start-up education at universities actively promotes entrepreneurial talent and spin-offs.
The project "Spinnovation" is a joint project of Hochschule Reutlingen, Hochschule Aalen, and Hochschule der Medien, funded by the Ministry of Science, Research and the Arts of Baden-Württemberg under the call "Gründungskultur in Studium und Lehre" (start-up culture in studies and teaching). Since 2016, numerous new offerings for students have been developed at the participating universities in order to integrate entrepreneurship education into the curriculum and to bring about a change of mindset towards entrepreneurship and innovation. Based on the experiences and results of the Spinnovation joint project, concrete recommendations for entrepreneurship education at universities can be derived.
With the expansion of cyber-physical systems (CPSs) across critical and regulated industries, systems must be continuously updated to remain resilient. At the same time, they should be extremely secure and safe to operate and use. The DevOps approach caters to business demands of more speed and smartness in production, but it is extremely challenging to implement DevOps due to the complexity of critical CPSs and requirements from regulatory authorities. In this study, expert opinions from 33 European companies expose the gap in the current state of practice on DevOps-oriented continuous development and maintenance. The study contributes to research and practice by identifying a set of needs. Subsequently, the authors propose a novel approach called Secure DevOps and provide several avenues for further research and development in this area. The study shows that, because security is a cross-cutting property in complex CPSs, its proficient management requires system-wide competencies and capabilities across the CPSs development and operation.
Hardly any software development process is used as prescribed by authors or standards. Regardless of company size or industry sector, a majority of project teams and companies use hybrid development methods (short: hybrid methods) that combine different development methods and practices. Even though such hybrid methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this article, we make a first step towards a statistical construction procedure for hybrid methods. Grounded in 1467 data points from a large-scale practitioner survey, we study the question: what are hybrid methods made of, and how can they be systematically constructed? Our findings show that only eight methods and a few practices build the core of modern software development. Using an 85% agreement level in the participants' selections, we provide examples illustrating how hybrid methods can be characterized by the practices they are made of. Furthermore, using this characterization, we develop an initial construction procedure, which allows for defining a method frame and enriching it incrementally to devise a hybrid method using ranked sets of practices.
Elasticity is considered to be the most beneficial characteristic of cloud environments, distinguishing the cloud from clusters and grids. Whereas elasticity has become mainstream for web-based, interactive applications, how to leverage it for applications from the high-performance computing (HPC) domain, which heavily rely on efficient parallel processing techniques, is still a major research challenge. In this work, we specifically address the challenges of elasticity for parallel tree search applications. Well-known meta-algorithms based on this parallel processing technique include branch-and-bound and backtracking search. We show that their characteristics render static resource provisioning inappropriate and make the capability of elastic scaling desirable. Moreover, we discuss how to construct an elasticity controller that reasons about the scaling behavior of a parallel system at runtime and dynamically adapts the number of processing units according to user-defined cost and efficiency thresholds. We evaluate a prototypical elasticity controller based on our findings by employing several benchmarks for parallel tree search and discuss the applicability of the proposed approach. Our experimental results show that, by means of elastic scaling, the performance can be controlled according to user-defined thresholds, which cannot be achieved with static resource provisioning.
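To make the controller idea concrete, here is a minimal sketch of such a threshold-based control loop. The `cluster` object, its metrics, and the scaling calls (`add_workers`/`remove_workers`) are hypothetical stand-ins, not the evaluated prototype.

```python
# Sketch of a threshold-based elasticity controller for parallel tree
# search; the cluster API and metric names are assumptions.
import time

def elasticity_controller(cluster, cost_limit, efficiency_floor, interval=30):
    """Periodically adjust the number of workers.

    The thresholds mirror the user-defined cost and efficiency bounds
    mentioned in the abstract: scale out while parallel efficiency stays
    above the floor and the cost rate stays below the limit; scale in
    when returns diminish.
    """
    while cluster.search_running():
        stats = cluster.sample_metrics()            # hypothetical call
        efficiency = stats.speedup / stats.workers  # parallel efficiency
        cost_rate = stats.workers * stats.price_per_worker_hour
        if efficiency > efficiency_floor and cost_rate < cost_limit:
            cluster.add_workers(1)        # still scaling efficiently
        elif efficiency < efficiency_floor:
            cluster.remove_workers(1)     # diminishing returns
        time.sleep(interval)
```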
Public transport maps are typically designed to support route-finding tasks for passengers while also providing an overview of stations, metro lines, and city-specific attractions. Most of those maps are designed as static representations, perhaps placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views of the dynamic passenger data at different levels of temporal granularity. Moreover, we provide extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by passengers. We illustrate the usefulness of our interactive visualization by applying it to the railway system of Hamburg in Germany, taking into account the extra passenger data. As another indication of the usefulness of the interactively enhanced metro maps, we conducted a user experiment with 20 participants.
JumpAR combines the world of augmented reality (AR) with the well-known jump 'n' run genre in a mobile game. The player creates an individual game course in their real environment and navigates their character through it on virtual platforms. The JumpAR prototype, developed with Unity, was analyzed in a user test after the basic functions and mechanics had been implemented. Integrating real objects from the player's environment into the game flow creates a strong link between the virtual and the real world, which represents a new form of AR interaction for mobile games.
Systemic analysis of the therapeutic robot Paro in comparison with the pet robot AIBO
(2020)
Today, robots are found not only in industry but are increasingly used in private areas of life. One example is the social therapy robot Paro. It is modeled on the behavior and appearance of a young seal, expresses emotions, and is used primarily in nursing homes, where it shows positive effects on the well-being of people in need of care. This paper presents a systemic analysis of the robot Paro, considering its system context, use cases, requirements, and structure. This is followed by an analysis of the pet robot AIBO, which resembles a puppy and serves mainly to entertain private users. Similarities and differences between the two systems are worked out. It becomes apparent that both systems primarily keep their users company, but have different requirements and are used in different application domains. In addition, AIBO has more versatile capabilities and a higher degree of movement than Paro, which is reflected in a more complex hardware structure.
The rapid development of consumer sensor technology suggests a clinical benefit of the available decentrally collected data from patients' everyday lives for monitoring their individual state of health. To verify this assumption, a corresponding platform must be made available in everyday clinical practice. To this end, the bwHealthApp is being developed, which can map both the current range and the evolution of sensor technology onto clinical application. The flexible design makes it possible to evaluate the clinical benefit for personalized medicine. In addition, the bwHealthApp offers a feasibility-oriented contribution to the discussion of open legal, regulatory, and ethical questions of digitization in medicine in Germany.
The livestock sector is growing steadily and is responsible for around 18% of global greenhouse gas emissions, which is more than the global transport sector (Steinfeld et al. 2006). This paper examines the potential of social marketing to reduce meat consumption. The aim is to understand consumers' motivation in diet choices and to learn what opportunities social marketing can provide to counteract negative environmental and health trends. The authors believe that research to answer this question should start in metropolitan areas, because measures should be especially effective there. Based on the Theory of Planned Behaviour (TPB, Ajzen 1991) and the Technology Acceptance Model by Huijts et al. (2012), an online study with participants from the metropolitan region (n = 708) was conducted in which central socio-psychological constructs for a reduction of meat consumption were examined. It was shown that attitude, personal norm, and habit have a critical influence on the intention to reduce meat consumption. A segmentation of consumers based on these factors led to three consumer clusters: vegetarians/flexitarians, potential flexitarians, and convinced meat eaters. Potential flexitarians are an especially relevant target group for the development of social marketing measures to reduce meat consumption. In co-creation workshops with potential flexitarians from the metropolitan region, barriers and benefits of reducing meat consumption were identified. The factors of environmental protection, animal welfare, and desire for variety turn out to be the most relevant motivational factors. Based on these factors, consumers proposed a variety of social marketing measures, such as applications and labels to inform about the environmental impact of meat products.
This paper presents a new approach that allows gravity-reduced navigation within a VR environment, for example a simulated moonwalk. The Cyberith Virtualizer is used for navigation in the VR environment. Gravity is simulated by means of an adjustable harness system that is suspended from elastic ropes and allows graduated levels of gravity compensation. A spaceship scenario and a lunar surface were generated as environments, in which the current application allows simple interactions. Following existing gravity offload systems, the solution is called ViRGOS. ViRGOS has already been used at various visitor appointments and university events, so that initial feedback from users could be collected.
The motto of the Informatics Inside 2020 autumn conference is KInside. Once again, students look inside and take a closer look at methods, applications, and their interrelations. The contributions are diverse and, in keeping with the degree program, human-centered. The aspiration is that the topics revolve around people's needs and that the methods used are not an end in themselves but are measured by their benefit for people.
Modern mixed (HTAP) workloads execute fast update transactions and long-running analytical queries on the same dataset and system. In multi-version concurrency control (MVCC) systems, such workloads result in many short-lived versions and long version chains, as well as increased and frequent maintenance overhead.
Consequently, the index pressure increases significantly. Firstly, the frequent modifications cause frequent creation of new versions, yielding a surge in index maintenance overhead. Secondly, and more importantly, index scans incur extra I/O overhead to determine which of the resulting tuple versions are visible to the executing transaction (visibility check), as current designs only store version/timestamp information in the base table, not in the index. Such an index-only visibility check is critical for HTAP workloads on large datasets.
In this paper we propose the Multi-Version Partitioned B-Tree (MV-PBT) as a version-aware index structure, supporting index-only visibility checks and flash-friendly I/O patterns. The experimental evaluation indicates a 2x improvement for analytical queries and 15% higher transactional throughput under HTAP workloads. MV-PBT offers 40% higher transaction throughput compared to WiredTiger's LSM-Tree implementation under YCSB.
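To illustrate what an index-only visibility check buys, the sketch below shows the standard snapshot-isolation visibility rule that becomes evaluable without a base-table lookup once begin/end timestamps live in the index entry itself. The record layout is a deliberate simplification, not the actual MV-PBT format.

```python
# Simplified illustration of an index-only visibility check under
# snapshot isolation; not the actual MV-PBT record layout.
from dataclasses import dataclass

INFINITY = 2**63 - 1  # sentinel for "still the current version"

@dataclass
class IndexEntry:
    key: bytes
    begin_ts: int  # commit timestamp of the creating transaction
    end_ts: int    # timestamp of invalidation, or INFINITY

def visible(entry: IndexEntry, snapshot_ts: int) -> bool:
    """A version is visible if it was created at or before the reader's
    snapshot and not yet invalidated at that point. Because the begin/end
    timestamps are stored in the index entry, no base-table I/O is needed."""
    return entry.begin_ts <= snapshot_ts < entry.end_ts

# A version created at ts=10 and superseded at ts=42:
old = IndexEntry(b"k1", begin_ts=10, end_ts=42)
new = IndexEntry(b"k1", begin_ts=42, end_ts=INFINITY)
assert visible(old, 20) and not visible(old, 50)
assert visible(new, 50)
```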
In this paper, we present a new approach for achieving robust performance of data structures, making it easier to reuse the same design not only for different hardware generations but also for different workloads. To achieve robust performance, the main idea is to strictly separate the data structure design from the actual strategies to execute access operations, and to adjust the execution strategies by means of so-called configurations instead of hard-wiring the execution strategy into the data structure. In our evaluation, we demonstrate the benefits of this configuration approach for individual data structures as well as complex OLTP workloads.
The tale of 1000 cores: an evaluation of concurrency control on real(ly) large multi-socket hardware
(2020)
In this paper, we set out to revisit the results of "Staring into the Abyss [...] of Concurrency Control with [1000] Cores" and analyse in-memory DBMSs on today's large hardware. Despite the original assumption of the authors, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware made its way into production data centres. Hence, we follow up on this prior work with an evaluation of the characteristics of concurrency control schemes on real production multi-socket hardware with 1568 cores. To our surprise, we made several interesting findings, which we report on in this paper.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible.
The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in traditional DBMSs, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under NoFTL-KV and the COSMOS hardware platform.
Massive data transfers in modern key/value stores, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution, which, although not new, has yet to see widespread use.
In this paper we introduce nKV, a key/value store utilizing native computational storage and near-data processing. On the one hand, nKV can directly control data and computation placement on the underlying storage hardware. On the other hand, nKV propagates the data formats and layouts to the storage device, where software and hardware parsers and accessors are implemented. Both allow NDP operations to execute in a host-intervention-free manner, directly on physical addresses, and thus better utilize the underlying hardware. Our performance evaluation is based on executing traditional KV operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4x-2.7x better performance on real hardware, the COSMOS+ platform.
nKV in action: accelerating KV-stores on native computational storage with near-data processing
(2020)
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution, which, although not new, has yet to see widespread use.
In this paper we demonstrate various NDP alternatives in nKV, a key/value store utilizing native computational storage and near-data processing. We showcase the execution of classical operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4x-2.7x better performance due to NDP. nKV runs on real hardware, the COSMOS+ platform.
The automation of work by means of disruptive technologies such as Artificial Intelligence (AI) and Robotic Process Automation (RPA) is currently intensely discussed in business practice and academia. Recent studies indicate that many tasks conducted manually by humans today will no longer be performed by humans in the future. In a similar vein, it is expected that new roles will emerge. The aim of this study is to analyze prospective employment opportunities in the context of RPA in order to foster our understanding of the pivotal qualifications, expertise, and skills necessary to find an occupation in a completely changing world of work. This study is based on an explorative content analysis of 119 job advertisements related to RPA in Germany. The data was collected from major German online job platforms, qualitatively coded, and subsequently analyzed quantitatively. The research indicates that there are indeed employment opportunities, especially in the consulting sector. The positions require various kinds of technological expertise, such as specific programming languages and knowledge of statistics. The results of this study provide guidance for organizations and individuals on reskilling requirements for future employment. As many of the positions require profound IT expertise, the generally accepted view that existing employees affected by automation can be retrained to work in the emerging positions has to be seen extremely critically. This paper contributes to the body of knowledge by providing a novel perspective on the ongoing discussion of employment opportunities and reskilling demands of the existing workforce in the context of recent technological developments and automation.
Urban platforms are essential for smart and sustainable city planning and operation. Today they are mostly designed to handle and connect large urban data sets from very different domains. Modelling and optimisation functionalities are usually not part of a city's software infrastructure. However, they are considered crucial for developing transformation scenarios and for optimised smart city operation. This work discusses software architecture concepts for such urban platforms and presents case study results on building sector modelling, including urban data analysis and visualisation. Results from a case study in New York demonstrate the implementation status.
In networked operating room environments, there is an emerging trend towards standardized non-proprietary communication protocols, which allow building new integration solutions and flexible human-machine interaction concepts. The most prominent endeavor is the IEEE 11073 SDC protocol. For some use cases, it would be helpful if not just medical devices could be controlled via SDC, but also building automation systems such as lights, shutters, air conditioning, etc. For those systems, the KNX protocol is widely used. We built an SDC-to-KNX gateway which allows using the SDC protocol to send commands to connected KNX devices. The first prototype system was successfully implemented in the demonstration operating room at Reutlingen University. This is a first step toward the integration of a broader variety of KNX devices.
Documentation of clinical processes, especially in the perioperative area, is a basic requirement for quality of service. Nonetheless, documentation is a burden for the medical staff, since it distracts from the clinical core process. An intuitive and user-friendly documentation system could increase documentation quality and reduce documentation workload. The optimal system solution would know what happened, and the person documenting the step would only need a single "confirm" button. In many cases, such a linear flow of activities is given as long as only one profession (e.g. anaesthesiology, scrub nurse) is considered, but even in such cases, there might be deviations from the linear process flow, and further interaction is required.
Intraoperative brain deformation, so-called brain shift, affects the applicability of preoperative magnetic resonance imaging (MRI) data to assist intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on the architecture of 3D convolutional neural networks, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the retrospective evaluation of cerebral tumors (RESECT) dataset. This study showed that our proposed method outperforms the registration methods of previous studies, with an average mean squared error (MSE) of 85. Moreover, the method can register three 3D MRI-iUS pairs in less than a second, improving the expected outcomes of brain surgery.
Checklists are a valuable tool to ensure process quality and quality of care. To ensure proper integration in clinical processes, it would be desirable to generate checklists directly from formal process descriptions. Those checklists could also be used for user interaction in context-aware surgical assist systems. We built a tool to automatically convert Business Process Model and Notation (BPMN) process models to checklists displayed as HTML websites. Gateways representing decisions are mapped to checklist items that trigger dynamic content loading based on the placed checkmark. The usability of the resulting system was positively evaluated regarding comprehensibility and end-user friendliness.
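A minimal sketch of the conversion idea, assuming a plain BPMN 2.0 XML input: the tool described above additionally maps gateways to dynamically loaded content, which is omitted here, and the input file name is hypothetical.

```python
# Sketch only: extracts task names from a BPMN 2.0 XML file and emits a
# static HTML checklist. The paper's gateway handling is omitted.
import xml.etree.ElementTree as ET

BPMN_MODEL_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def bpmn_to_checklist(path: str) -> str:
    tree = ET.parse(path)
    # Collect all <bpmn:task> elements; other task types (userTask, ...)
    # would be handled analogously.
    items = [task.get("name", "unnamed task")
             for task in tree.getroot().iter(f"{{{BPMN_MODEL_NS}}}task")]
    rows = "\n".join(
        f'<li><label><input type="checkbox"> {name}</label></li>'
        for name in items)
    return f"<html><body><ul>\n{rows}\n</ul></body></html>"

print(bpmn_to_checklist("perioperative_process.bpmn"))  # hypothetical file
```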
At DBKDA 2019, we demonstrated that StrongDBMS, with simple but rigorous optimistic algorithms, provides better performance in situations of high concurrency than major commercial database management systems (DBMS). The demonstration was convincing, but the reasons for its success were not fully analysed. A brief account of the results is given below. In this short contribution, we wish to discuss the reasons for these results. The analysis leads to a strong criticism of all DBMS algorithms based on locking, and based on these results, it is not fanciful to suggest that it is time to re-engineer existing DBMS.
Due to digitalization, constant technological progress, and ever shorter product life cycles, enterprises are currently facing major challenges. In order to succeed in the market, business models have to be adapted more often and more quickly to changing market conditions than in the past. Fast adaptability, also called agility, is a decisive competitive factor today. Because of the ever-growing IT share of products and the fact that they are manufactured using IT, changing the business model has a major impact on the enterprise architecture (EA). However, developing EAs is a very complex task, because many stakeholders with conflicting interests are involved in the decision-making process; therefore, a lot of collaboration is required. To support organizations in developing their EA, this article introduces a novel integrative method that systematically integrates stakeholder interests into decision-making activities. Using the method, collaboration between the stakeholders involved is improved by identifying points of contact between them. Furthermore, standardized activities make decision-making more transparent and comparable without limiting creativity.
Enterprises are currently transforming their strategy, processes, and information systems to extend their degree of digitalization. The potential of the Internet and related digital technologies, like the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems, both drives and enables new business designs. Digitalization deeply disrupts existing businesses, technologies, and economies and fosters the architecture of digital environments with many rather small and distributed structures. This has a strong impact on new value-producing opportunities and on architecting digital services and products, guiding their design through a service-dominant logic. The main result of this book chapter extends methods for integral digital strategies with value-oriented models for digital products and services, which are defined in the framework of a multi-perspective digital enterprise architecture reference model.
This chapter presents an introduction to the emerging trends for architecting the digital transformation, with a strong focus on digital products, intelligent services, and related systems, together with methods, models, and architectures. The primary aim of this book is to highlight some of the most recent research results in the field. We provide a focused set of brief descriptions of the chapters included in the book.
The digital transformation is today's dominant business transformation, strongly influencing how digital services and products are designed in a service-dominant way. A popular underlying theory of value creation and economic exchange known as the service-dominant (S-D) logic can be connected to many successful digital business models. However, S-D logic by itself is abstract; companies cannot easily use it directly as an instrument for business model innovation and design. To address this, a comprehensive ideation method based on S-D logic is proposed, called service-dominant design (SDD). SDD is aimed at supporting firms in the transition to a service- and value-oriented perspective. The method provides a simplified way to structure the ideation process based on four model components. Each component consists of practical implications, auxiliary questions, and visualization techniques that were derived from a literature review, a use case evaluation in digital mobility, and a focus group discussion. SDD represents a first step towards a toolset that can support established companies in the process of service- and value-orientation as part of their digital transformation efforts.
This research-oriented book presents key contributions on architecting the digital transformation. Its 20 chapters are organized into the following main sections:
- Digital Transformation
- Digital Business
- Digital Architecture
- Decision Support
- Digital Applications
Focusing on digital architectures for smart digital products and services, it is a valuable resource for researchers, doctoral students, postgraduates, graduates, undergraduates, academics, and practitioners interested in digital transformation.
The typed graph model
(2020)
In recent years, the graph model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from using hyper-nodes and hyper-edges, which allow presenting a data structure at different abstraction levels. We demonstrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML models.
Formula One races provide a wealth of data worth investigating. Although the time-varying data has a clear structure, it is quite challenging to analyze it for further properties. Here the focus is on a visual classification of events, drivers, and time periods. As a first step, the Formula One data is visually encoded based on a line-plot visual metaphor reflecting the dynamic lap times; a classification of the races based on the visual outcomes gained from these line plots is then presented. The visualization tool is web-based and provides several interactively linked views on the data, starting with a calendar-based overview representation. To illustrate the usefulness of the approach, the provided Formula One data from several years, with races taking place in different locations, is visually explored. The chapter discusses algorithmic, visual, and perceptual limitations that might occur during the visual classification of time-series data such as Formula One races.
Additive manufacturing (AM) is a promising manufacturing method for many industrial sectors. For this application, industrial requirements such as high production volumes and coordinated implementation must be taken into account. These tasks of the internal handling of production facilities are carried out by the Production Planning and Control (PPC) information system. A key factor in planning and scheduling is the exact calculation of manufacturing times. For this purpose, we investigate the use of Machine Learning (ML) for the prediction of manufacturing times of AM facilities.
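As a sketch of what such an ML-based prediction could look like (not the study's actual setup; the job log and feature names are invented for illustration), a gradient-boosted regressor on build-job parameters:

```python
# Illustrative sketch: data source and features are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

jobs = pd.read_csv("am_build_jobs.csv")  # hypothetical job log
X = jobs[["part_volume_mm3", "part_height_mm", "layer_thickness_um",
          "parts_per_build"]]
y = jobs["build_time_min"]

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y,
                         scoring="neg_mean_absolute_error", cv=5)
print("cross-validated MAE (minutes):", -scores.mean())
```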
The invention relates to a method for the extrinsic calibration of at least one imaging sensor, according to which a pose of the at least one imaging sensor relative to the origin (U) of a three-dimensional coordinate system of a handling device is determined by means of a computing device, wherein known three-dimensional coordinates concerning the position of at least one joint of the handling device are taken into account by the computing device, and wherein two-dimensional coordinates concerning the position of the at least one joint are determined from the raw data of the at least one imaging sensor, and wherein the computing device determines the pose of the at least one imaging sensor on the basis of the correspondence between the two-dimensional coordinates and the three-dimensional coordinates.
Purpose: Gliomas are the most common and aggressive type of brain tumor due to their infiltrative nature and rapid progression. Distinguishing tumor boundaries from healthy cells is still a challenging task in the clinical routine. The fluid-attenuated inversion recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, DeepSeg, for fully automated detection and segmentation of brain lesions using FLAIR MRI data.
Methods: The developed DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding and decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is inserted into the decoder part to get the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as residual neural networks (ResNet), dense convolutional networks (DenseNet), and NASNet have been utilized in this study.
Results: The proposed deep learning architectures have been successfully tested and evaluated online on the MRI datasets of the brain tumor segmentation (BraTS 2019) challenge, including 336 cases as training data and 125 cases as validation data. The Dice and Hausdorff distance scores of the obtained segmentation results are about 0.81 to 0.84 and 9.8 to 19.7, respectively.
Conclusion: This study demonstrated the feasibility and comparative performance of applying different deep learning models in a new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is open source and freely available at https://github.com/razeineldin/DeepSeg/.
The Internet of Things (IoT) provides a strong platform for connecting objects, devices, and people to the Internet for exchanging or sharing information with each other. IoT is growing rapidly and is expected to be adopted in disciplines such as manufacturing, agriculture, healthcare, and robotics. Furthermore, a new IoT concept has been proposed specifically for the robotics area: the Internet of Robotic Things (IoRT). IoRT is a mixed structure of diverse technologies such as cloud computing, artificial intelligence, and machine learning. However, to promote and realize IoRT, digitization and digital transformation should proceed and be implemented in the robotics enterprise. In this paper, we propose an architecture framework for IoRT-based digital platforms and verify it using a planned case in a global robotics enterprise. The associated challenges and future research directions in this field are also presented.
Zero or plus energy office buildings must meet very high building standards and require highly efficient energy supply systems due to space limitations for renewable installations. Conventional solar cooling systems use photovoltaic electricity or thermal energy to run either a compression cooling machine or an absorption cooling machine to produce cooling energy during the daytime, while they use electricity from the grid for the nightly cooling energy demand. With a hybrid photovoltaic-thermal (PVT) collector, electricity and thermal energy can be produced at the same time. These collectors can also produce cooling energy at night by longwave radiation exchange with the night sky and convection losses to the ambient air. Such a renewable trigeneration system opens up new fields of application. However, the technical, ecological, and economic aspects of such systems are still largely unexplored.
In this work, the potential of a PVT system to heat and cool office buildings in three different climate zones is investigated. In the investigated system, PVT collectors act as a heat source and heat sink for a reversible heat pump. Due to the reduced electricity consumption (from the grid) for heat rejection, the overall efficiency and economics improve compared to a conventional solar cooling system using a reversible air-to-water heat pump as the heat and cold source.
A parametric simulation study was carried out to evaluate the system design with different PVT surface areas and storage tank volumes in order to optimize the system for three different climate zones and two different building standards. It is shown that such systems are technically feasible today. With maximum utilization of PV electricity for heating, ventilation, air conditioning, and other electricity demand such as lighting and plug loads, high solar fractions and primary energy savings can be achieved.
Annual costs for such a system are comparable to conventional solar thermal and solar electric cooling systems. Nevertheless, the economic feasibility strongly depends on country-specific energy prices and energy policy. However, even in countries without compensation schemes for energy produced from renewables, this system can still be economically viable today. It could be shown that, at each of the investigated locations worldwide, a specific system dimensioning can be found for the economically and ecologically sound operation of an office building with PVT technologies in different system designs.
Comparative analysis of the YouTube presence of private and public-service broadcasting groups
(2020)
For a long time, the Internet was seen as antagonistic to television. It was accordingly used to win back or retain viewers, which, however, proved inefficient. In the meantime, the individual broadcasting groups have recognized and used the Internet as a medial extension. Because of this late acceptance, there are strong differences in the scope of and approach to using the Internet as an additional medium. This is best illustrated by a comparison with respect to the most important video-based social media platform, YouTube.
In this comparison, the individual broadcasting groups are evaluated with regard to their perceived advantages, disadvantages, and attractiveness in terms of user behavior and user opinion. Target-group-oriented optimization of the YouTube presence is of extraordinarily high importance for future market penetration.
Going forward with the requirements of missions to the Moon and further into deep space, the European Space Agency is investigating new methods of astronaut training that can help accelerate learning, increase availability, and reduce complexity and cost in comparison to currently used methods. To achieve this, technologies such as virtual reality may be utilized. In this paper, the benefits of using virtual reality for extravehicular activity training are investigated in comparison to conventional training methods such as neutral buoyancy pools. To help determine the requirements and current uses of virtual reality for extravehicular activity training, first-hand tests of currently available software as well as expert interviews are utilized. With this knowledge, a concept is developed that may be used to further advance training methods in virtual reality. The resulting concept is used as a basis for the development of a prototype to showcase user interactions and locomotion in microgravity simulations.
A considerable share of car accidents can be attributed to drowsiness at the wheel. Several approaches already exist to prevent accidents caused by drowsiness, for example detecting the driving style. Within the IoT lab of the Human-Centered Computing master's program at Reutlingen University, various driver assistance systems are to be developed and tested in order to prevent accidents caused by drowsiness. This work deals with drowsiness detection via computer vision (CV) and the electrocardiogram (ECG). In this paper, CV-based drowsiness detection at the wheel is implemented using the open source libraries OpenCV and Dlib and the embedded PC Nvidia Jetson Nano. ECG-based drowsiness is detected via the heartbeat and heart rate variability. In addition, an interface between CV and ECG was developed to merge the data relevant for detection from the Python scripts for CV-based and ECG-based drowsiness detection. These are then evaluated into an overall result.
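The CV side of such systems commonly builds on the eye aspect ratio (EAR) over dlib's 68-point facial landmarks. The sketch below shows that standard technique (Soukupová and Čech); the threshold and frame count are typical literature values, not the project's actual parameters.

```python
# Sketch of the standard EAR-based drowsiness cue; threshold and frame
# count are illustrative values, not this project's configuration.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """`eye` is a (6, 2) array of landmark coordinates for one eye
    (dlib 68-point model, e.g. indices 36-41 for the left eye). The
    ratio drops toward zero as the eyelid closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.21       # typical literature value
CLOSED_FRAMES_ALERT = 48   # roughly 2 s at 24 fps

def update(counter: int, ear: float) -> tuple[int, bool]:
    """Count consecutive closed-eye frames; signal an alert once the
    eyes stayed closed for long enough."""
    counter = counter + 1 if ear < EAR_THRESHOLD else 0
    return counter, counter >= CLOSED_FRAMES_ALERT
```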
This paper presents three different test environments, which feed into an iterative approach to support the development of augmented reality applications for the presentation of autonomous driving functions.
Design drafts and software developments can be presented and evaluated in the test environments for different survey objectives. Testing alongside development enables early identification of change requests, which can be incorporated to reach a valid solution design. The developed test environments are a scale model, a driving simulator, and a real vehicle. Their properties, functions, and setups result from findings in the literature and experience from initial developments. These, along with possible areas of application, are presented in this paper.
This paper discusses possibilities for visualizing neural networks. At first, a neural network does not seem to be observable from the outside and is thus a black box for many. Frequently used Python libraries, for example TensorFlow, are introduced, and their strengths as well as weaknesses are presented. On this basis, existing visualizations are shown and their current use is explained. A comparison is intended to show which library delivers the most data during training, so that this information can be processed further. This data is to be visualized in a way that supports the development of a neural network. The goal is to address the possibilities that can be offered. By simplifying the debugging of neural networks, further developments in this direction are to be supported.
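As a pointer to the training-time data mentioned above: with TensorFlow/Keras, metrics, weight histograms, and the computation graph can be logged via the built-in TensorBoard callback. The model and data below are placeholders for illustration.

```python
# Minimal example of logging training data with TensorFlow's TensorBoard
# callback; the model and data are synthetic placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(512, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

tb = tf.keras.callbacks.TensorBoard(log_dir="logs", histogram_freq=1,
                                    write_graph=True)
model.fit(x, y, epochs=5, callbacks=[tb])  # then: tensorboard --logdir logs
```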
Detecting semantic similarities between sentences is still a challenge today due to the ambiguity of natural languages. In this work, we propose a simple approach to identifying semantically similar questions by combining the strengths of word embeddings and Convolutional Neural Networks (CNNs). In addition, we demonstrate how the cosine similarity metric can be used to effectively compare feature vectors. Our network is trained on the Quora dataset, which contains over 400k question pairs. We experiment with different embedding approaches such as Word2Vec, Fasttext, and Doc2Vec and investigate the effects these approaches have on model performance. Our model achieves competitive results on the Quora dataset and complements the well-established evidence that CNNs can be utilized for paraphrase recognition tasks.
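A minimal sketch of the comparison step: cosine similarity between two feature vectors, as the paper applies to CNN outputs. The vectors and the decision threshold here are illustrative stand-ins, not values from the paper:

```python
# Sketch: comparing two sentence feature vectors with cosine similarity.
# The vectors stand in for the CNN's outputs for two questions.
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

q1 = np.array([0.12, 0.83, -0.45, 0.31])  # feature vector of question 1
q2 = np.array([0.10, 0.79, -0.40, 0.35])  # feature vector of question 2

# Values near 1 suggest paraphrases; a threshold tuned on the Quora pairs
# (0.8 here is an assumption, not the paper's value) gives a binary decision.
sim = cosine_similarity(q1, q2)
print(sim, sim > 0.8)
```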
The development of a medical device usually takes several years. Legal requirements, such as the German Medical Devices Implementation Act (Medizinprodukte-Durchführungsgesetz), determine which steps must be carried out during development. Compliance must be demonstrated in the technical documentation. The technical documents it contains are created over the course of development; they build on one another and reference each other, resulting in heterogeneous and confusing structures. Traceability offers a solution to this problem. Traceability ensures that the requirements for the medical device can be linked to documents such as the requirements catalog, the requirements specification, or the design specification. It is thus always possible to trace which requirements relate to which test, which changes, or which results. An important process in medical device development is also usability engineering, which is intended to ensure the safety of a medical device and minimize risks during use. Many artifacts arise in this process, such as usability reports. To keep track of all usability data, they can be linked using traceability. This article identifies which requirements usability engineering in medical technology places on traceability.
Requirements engineering (RE) comprises all systematic steps in the development of a system to satisfy the needs of its users and the requirements placed on it. The RE of a selected manufacturer of clinical information systems (KIS) was examined and turned out to be intransparent and in part insufficient. The extent to which systematic procedures and methods for RE are used by the selected KIS manufacturer was analyzed. The analysis shows that RE is widely practiced, but in varying ways.
The goal of this work is to determine the state of the art of RE for KIS development. Important RE factors for the development of KIS are described. The results of this work will serve as a first step towards optimizing the RE of the selected KIS manufacturer.
Medical devices are objects, substances, or software with a medical purpose intended for use on humans. They are developed and brought to market by medical device manufacturers. Since incorrect use of medical devices can lead to injury to the human body, an appropriate level of device quality must be guaranteed. To ensure this quality, medical device manufacturers are obliged to comply with the Medical Device Regulation (MDR). For high-risk products, the use of a quality management system (QMS) is additionally mandatory. A QMS governs the structure, responsibilities, procedures, and processes of the company that are necessary for medical device development. In the age of digitalization, software solutions are used to reduce the time-consuming documentation and administration tasks in the QMS and to optimize its processes. Once software has been introduced, a QMS is in practice also referred to as an electronic QMS (eQMS). Furthermore, the entire QMS must comply with the regulations. The goal of this work is therefore to use the regulatory requirements to identify which specifications must be observed when introducing an eQMS and how they can be fulfilled. This work is based on the regulatory requirements of the MDR and of ISO 13485; the standard contains requirements for a QMS for medical devices.
In cooperation with the medical device manufacturer ulrich medical, a user experience and usability study is being conducted on the software of the contrast agent injectors currently in use. The company intends to develop a new variant of a contrast agent injector based on an improved version of this software. User studies can be conducted with a wide variety of methods. A suitable procedure must be defined and the test participants determined with respect to the chosen method. For medical devices, strict requirements in standards and laws must additionally be observed. The basis for the method selection is research on usability and user experience requirements for medical devices. The study is evaluated using quantitative data from a usability test in the laboratory, user experience questionnaires, and qualitative post-test interviews. Primarily, this study serves to identify possible improvements, which will be elaborated and implemented in the subsequent master's thesis.
The field of breath analysis has become of growing interest in medical diagnosis and patient monitoring. Its main advantages are that it is noninvasive, painless, and repeatable at flexible intervals. Even though breath analysis has been researched for decades, there are still many unanswered questions. Human breath contains volatile organic compounds which are emitted from inside the body. Some of these compounds can be assigned to specific sources, such as inflammation or cancer, but also to non-health-related origins. This paper gives an overview of breath analysis for the purpose of disease diagnosis and health monitoring. To this end, literature on breath analysis in the medical field has been analyzed, from its early stages to the present. As a result, this paper gives an outline of the topic of breath analysis.
According to numerous studies, haptic feedback is an important component of medical robotics. Most systems, however, are still at the research stage and pursue different approaches. In teleoperation, both sensorless and sensor-based systems are being investigated. In contrast to the encoders in sensorless systems, sensors provide accurate measurements but are expensive to purchase, difficult to disinfect, and must be integrated into surgical instruments. In hands-on systems, unlike teleoperation systems, the surgeon directly feels the forces occurring during use. In these systems the robot merely provides the required stability and precision, while control remains directly with the human. In teleoperation systems, by contrast, dedicated controllers are used. Here the sigma.7, developed for the operating room, has become established: compared to competitors developed for general use, it offers haptic feedback in all necessary degrees of freedom and corresponding force feedback.
In cryosurgery, cold is used to destroy tumorous tissue. Cryoprobes are inserted into the tumor and strongly cooled. This involves various challenges that can be addressed with computer assistance. This work presents the results of a literature review on these challenges. The reviewed publications deal with simulating the ice ball forming in the tumor, correctly positioning the cryoprobes in the tumor, monitoring the intervention, and developing simulations for training purposes. The review shows that the use of computer-assisted solutions can improve cryosurgery for both surgeon and patient.
Artificial intelligence (AI) continues to experience a renaissance in many industries. The trend of capturing and exploiting complex relationships in data persists. The basic idea of machine learning based on empirical data is, however, not new. The challenge remains to first gain an often interdisciplinary understanding of complex relationships for a wide range of application domains, for example in order to put AI to meaningful use. As a visitor to the conference, you can expect contributions from a wide variety of areas, including fatigue detection systems in cars, a sense of touch for robots, new approaches to creating and using virtual realities for testing autonomous driving, and the simulation of extravehicular activities in spaceflight.
On the design of an urban data and modeling platform and its application to urban district analyses
(2020)
An integrated urban platform is the essential software infrastructure for smart, sustainable and resilient city planning, operation and maintenance. Today such platforms are mostly designed to handle and analyze large and heterogeneous urban data sets from very different domains. Modeling and optimization functionalities are usually not part of the software concepts. However, such functionalities are considered crucial by the authors to develop transformation scenarios and to optimize smart city operation. An urban platform needs to handle multiple scales in the time and spatial domain, ranging from long-term population and land use change to hourly or sub-hourly matching of renewable energy supply and urban energy demand.
As a consequence of the ongoing digitalization in the manufacturing industry, applications and services are being developed with potentially positive effects on factors such as effectiveness and quality of work. Gamification can be a suitable approach to strengthening motivational aspects in the work context. This work presents the initial design and evaluation of a gamification approach for users of an AI service for machine optimization, and possible requirements for a concept to increase motivation are extracted.
This paper presents a temporal prediction of earthquakes. Convolutional neural networks (CNNs) are trained on a data set of laboratory earthquakes. The trained networks make predictions by classifying an input of seismic data; through this classification, the CNN can predict the time remaining until the next earthquake. Two approaches are compared. In the first approach, the raw data are fed into a CNN. In the second approach, the data are preprocessed with Mel-frequency cepstral coefficients (MFCCs) before the CNN. Both approaches allow good classification; the combination of MFCC and CNN yields the better quantitative results, achieving an accuracy of 65%.
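A small sketch of the second approach's preprocessing step, computing MFCCs with librosa and shaping them as CNN input; the signal, sampling rate, and MFCC settings are assumptions, since the paper's exact configuration is not given here:

```python
# Sketch of the MFCC preprocessing before a CNN. The signal stands in for
# laboratory seismic data; sr and n_mfcc are illustrative assumptions.
import numpy as np
import librosa

sr = 22050  # assumed sampling rate (placeholder)
signal = np.random.randn(150_000).astype(np.float32)  # placeholder seismic data

# librosa returns an (n_mfcc, frames) matrix; treated as a 2-D "image"
# it becomes a natural CNN input.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
cnn_input = mfcc[np.newaxis, ..., np.newaxis]  # (batch, n_mfcc, frames, channels)
print(cnn_input.shape)
```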
Semi-automated image data labelling using AprilTags as a pre-processing step for machine learning
(2019)
Data labelling is a pre-processing step to prepare data for machine learning. There are many ways to collect and prepare this data, but they are usually associated with considerable manual effort. This paper presents an approach to semi-automated image data labelling using AprilTags. The AprilTags attached to the object, which contain a unique ID, make it possible to link the object surfaces to a particular class. This approach is implemented and used to label data of a stackable box.
The data is evaluated by training a You Only Look Once (YOLO) net, with a subsequent evaluation of the detection results. These results show that the semi-automatically collected and labelled data can certainly be used for machine learning. However, if concise features of an object surface are covered by the AprilTag, there is a risk that the concerned class will not be recognized. It can be assumed that the labelled data can not only be used for YOLO, but also for other machine learning approaches.
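The following hedged sketch illustrates the labelling idea using the pupil_apriltags binding: detect a tag, map its ID to a class, and emit a YOLO-format label line. The ID-to-class mapping, the input file, and the padding around the tag are illustrative assumptions, not the paper's values:

```python
# Sketch of semi-automated labelling: detect an AprilTag, map its ID to a
# class, and emit a YOLO label line for the surrounding object surface.
import cv2
from pupil_apriltags import Detector

ID_TO_CLASS = {3: 0}  # e.g. tag ID 3 -> class 0 ("stackable box"); assumed mapping

img = cv2.imread("frame.png")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = gray.shape

for det in Detector(families="tag36h11").detect(gray):
    if det.tag_id not in ID_TO_CLASS:
        continue
    xs, ys = det.corners[:, 0], det.corners[:, 1]
    # Grow the tag's bounding box to approximate the object surface (assumption).
    pad = 2.0
    cx, cy = xs.mean() / w, ys.mean() / h
    bw = min(1.0, (xs.max() - xs.min()) * pad / w)
    bh = min(1.0, (ys.max() - ys.min()) * pad / h)
    # YOLO label format: class x_center y_center width height (all normalized)
    print(f"{ID_TO_CLASS[det.tag_id]} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
```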
The student conference Informatics Inside is now taking place for the eleventh time. As part of the master's program Human-Centered Computing, master's students independently organize a full-fledged scientific conference. Computer science continues to be subject to constant change. Our students contribute to this change by solving current problems with innovative concepts in their scientific specialization. By now, however, computer science is not always immediately visible; we notice this whenever something does not work as intended. This year's motto of Informatics Inside is experience (IT);, disguised as a function call :).
Automatic classification of rotating machinery defects using Machine Learning (ML) algorithms
(2020)
Electric machines and motors have been the subject of enormous development. New concepts in design and control are expanding their applications in different fields. Vast amounts of data are collected in almost every domain of interest; such data can be static, that is, they represent real-world processes at a fixed point in time. Vibration analysis and vibration monitoring, including detecting and monitoring anomalies in vibration data, are widely used techniques for predictive maintenance in high-speed rotating machines. However, accurately identifying the presence of a bearing fault can be challenging in practice, especially when the failure is still at its incipient stage and the signal-to-noise ratio of the monitored signal is small. The main objective of this work is to design a system that analyzes the vibration signals of a rotating machine in the time/frequency domain, based on recorded sensor data. Given this substantial interest, there has been a dramatic rise in applying Machine Learning (ML) algorithms to this task. An ML system is used to classify and detect abnormal behavior and to recognize the different machine operation modes. The proposed solution can be deployed as predictive maintenance for Industry 4.0.
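As an illustration of such an ML classification stage, the following sketch derives simple time/frequency features from synthetic vibration windows and trains an off-the-shelf classifier; the features, model, and data are assumptions, not the system described above:

```python
# Sketch of a vibration classification stage: simple time/frequency features
# from a window, fed to a standard classifier. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 10_000  # assumed sampling rate in Hz

def features(window):
    spec = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1 / FS)
    return [
        window.std(),                         # RMS-like energy
        np.abs(window).max() / window.std(),  # crest factor
        freqs[spec.argmax()],                 # dominant frequency
    ]

# Synthetic "healthy" vs "faulty bearing" windows (fault adds a 3 kHz tone).
X, y = [], []
for label in (0, 1):
    for _ in range(200):
        w = rng.standard_normal(2048)
        if label:
            w += 0.5 * np.sin(2 * np.pi * 3000 * np.arange(2048) / FS)
        X.append(features(w))
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(clf.score(Xte, yte))  # held-out accuracy on the toy data
```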
Power line communications (PLC) reuse the existing power-grid infrastructure for the transmission of data signals. As power line communication does not require a dedicated network setup, it can be used to connect a multitude of sensors and Internet of Things (IoT) devices. Those IoT devices could be deployed in homes, streets, or industrial environments for sensing and control applications. The key challenge faced by future IoT-oriented narrowband PLC networks is to provide a high quality of service (QoS). In fact, the power line channel has traditionally been considered too hostile. Combined with scarce spectrum and interference from other users, this requirement calls for means to radically increase spectral efficiency and to improve link reliability. However, the research activities carried out in the last decade have shown that PLC is a suitable technology for a large number of applications. Motivated by the relevant impact of PLC on IoT, this paper proposes a cooperative spectrum allocation in IoT-oriented narrowband PLC networks using an iterative water-filling algorithm.
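The single-user water-filling step at the core of such an iterative scheme can be sketched as follows; each user would repeat this step against the interference caused by the others until the powers converge. The channel values and power budget are illustrative:

```python
# Sketch of single-user water-filling, the building block of iterative
# (cooperative) spectrum allocation. Values are illustrative.
import numpy as np

def water_filling(inv_gains, p_total, iters=50):
    """inv_gains: noise-plus-interference over channel gain per subcarrier."""
    lo, hi = 0.0, inv_gains.max() + p_total  # bisect on the water level
    for _ in range(iters):
        level = (lo + hi) / 2
        power = np.maximum(level - inv_gains, 0.0)
        if power.sum() > p_total:
            hi = level
        else:
            lo = level
    return np.maximum(level - inv_gains, 0.0)

inv_gains = np.array([0.5, 1.2, 0.8, 2.0, 0.3])  # per-subcarrier cost
p = water_filling(inv_gains, p_total=3.0)
print(p, p.sum())  # power concentrates on the better subcarriers
```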
Our paper gives first answers to a fundamental question: how can the design of architectures of intelligent digital systems and services be accomplished methodologically? Intelligent systems and services are the goals of many current digitalization efforts and part of massive digital transformation efforts based on digital technologies. Digital systems and services are the foundation of digital platforms and ecosystems. Digitalization disrupts existing businesses, technologies, and economies and promotes the architecture of open environments. This has a strong impact on new value-added opportunities and the development of intelligent digital systems and services. Digital technologies such as artificial intelligence, the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, and social enterprise network systems are important enablers of digitalization. The current publication presents our research on the architecture of intelligent digital ecosystems, products, and services influenced by the service-dominant logic. We present original methodological extensions and a new reference model for digital architectures with an integral service and value perspective to model intelligent systems and services that effectively align digital strategies and architectures, with artificial intelligence as a main element to support intelligent digitalization.
This book highlights new trends and challenges in intelligent systems, which play an important part in the digital transformation of many areas of science and practice. It includes papers offering a deeper understanding of the human-centred perspective on artificial intelligence, of intelligent value co-creation, ethics, value-oriented digital models, transparency, and intelligent digital architectures and engineering to support digital services and intelligent systems, the transformation of structures in digital businesses and intelligent systems based on human practices, as well as the study of interaction and the co-adaptation of humans and systems. All papers were originally presented at the International KES Conference on Human Centred Intelligent Systems 2020 (KES HCIS 2020), held on June 17–19, 2020, in Split, Croatia.
Serverless computing is an emerging cloud computing paradigm with the goal of freeing developers from resource management issues. As of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other. These workloads benefit from on-demand and elastic compute resources as well as per-function billing. However, it is still an open research question to which extent parallel applications, which most often comprise complex coordination and communication patterns, can benefit from serverless computing.
In this paper, we introduce serverless skeletons for parallel cloud programming to free developers from both parallelism and resource management issues. In particular, we investigate the well-known and widely used farm skeleton, which supports the implementation of a wide range of applications. To evaluate our concepts, we present a prototypical development and runtime framework and implement two applications based on our framework: numerical integration and hyperparameter optimization, a commonly applied technique in machine learning. We report on performance measurements for both applications and discuss the usefulness of our approach.
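A minimal local analogue of the farm skeleton may clarify the pattern: a worker function is mapped over independent tasks and the partial results are gathered. In the authors' framework the workers would be serverless function invocations; the process pool here only illustrates the structure, using numerical integration as in one of the evaluated applications:

```python
# Local sketch of the farm skeleton: scatter independent tasks to workers,
# gather partial results. In a serverless setting each worker call would be
# a function invocation; the process pool is a stand-in for that.
from concurrent.futures import ProcessPoolExecutor
import math

def worker(interval):
    # Numerical integration of sin(x) over a subinterval (one farm task).
    a, b, n = *interval, 10_000
    h = (b - a) / n
    return h * sum(math.sin(a + (i + 0.5) * h) for i in range(n))

if __name__ == "__main__":
    tasks = [(i * math.pi / 8, (i + 1) * math.pi / 8) for i in range(8)]
    with ProcessPoolExecutor() as farm:           # "farm" of workers
        partials = list(farm.map(worker, tasks))  # scatter tasks, gather results
    print(sum(partials))  # ~2.0, the integral of sin over [0, pi]
```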
Companies are continuously changing their strategy, processes, and information systems to benefit from the digital transformation. Controlling the digital architecture and its governance is the fundamental goal. Enterprise Governance, Risk and Compliance (GRC) systems are vital for managing the digital risks threatening modern enterprises from many different angles. The most significant constituent of GRC systems is the definition of controls that are implemented on different layers of a digital Enterprise Architecture (EA). As part of the compliance aspect of GRC, the effectiveness of these controls is assessed and reported to relevant management bodies within the enterprise. In this paper, we present a metamodel which links controls to the affected elements of a digital EA and supplies a way of expressing associated assessment techniques and results. We complement the metamodel with an expository instantiation of a control compliance cockpit in an international insurance enterprise.
Business process models provide a considerable number of benefits for enterprises and organizations, but the creation of such models is costly and time-consuming, which slows down the organizational adoption of business process modeling. Social paradigms pave new ways for business process modeling by integrating stakeholders and leveraging knowledge sources. However, empirical research about the impact of social paradigms on the costs of business process modeling is sparse. A better understanding of their impact could help to reduce the cost of business process modeling and improve decision-making on BPM activities. The paper contributes to this field by reporting on an empirical investigation, conducted via survey research, into the perceived influence of different cost factors among experts. Our results indicate that different cost components, as well as the use of social paradigms, influence costs.
Due to the consequential impact of technological breakdowns, companies have to be prepared to deal with breakdowns or, even better, prevent them. In today's information technology, several methods and tools exist to mitigate this concern. This paper therefore deals with the initial determination of a resilient enterprise architecture supporting predictive maintenance in the information technology domain and, furthermore, discusses several mechanisms for reactively and proactively securing the state of resiliency on several abstraction levels. The objective of this paper is to give an overview of existing mechanisms for resiliency and to describe the foundation of an optimized approach combining infrastructure and process mining techniques.
This book contains the proceedings of the KES International conferences on Innovation in Medicine and Healthcare (KES-InMed-19) and Intelligent Interactive Multimedia Systems and Services (KES-IIMSS-19), held on 17–19 June 2019 and co-located in St. Julians, on the island of Malta, as part of the KES Smart Digital Futures 2019 multi theme conference.
The major areas covered by KES-InMed-19 include: Digital IT Architecture in Healthcare; Advanced ICT for Medical and Healthcare; Biomedical Engineering, Trends, Research and Technologies; and Healthcare Support Systems. The major areas covered by KES-IIMSS-19 include: Interactive Technologies; Artificial Intelligence and Data Analytics; and Intelligent Services, Architectures and Applications.
This book is of use to researchers in these vibrant areas, managers, industrialists and anyone wishing to gain an overview of the latest research in these fields.
The rise of digital technologies has become an important driver for change in multiple industries. Therefore, firms need to develop digital capabilities to manage the transformation process successfully. Prior research assumes that the development of a specific set of digital capabilities leads to higher digital maturity. However, a measurement framework for digital maturity does not exist in scholarly work. Therefore, this paper develops a conceptualization and measurement model for digital maturity.
Autism spectrum disorders (ASD) affect a large number of children both in the Russian Federation and in Germany. Early diagnosis is key for these children, because the sooner parents notice such disorders in a child and the rehabilitation and treatment program starts, the higher the likelihood of the child's social adaptation. The difficulties in raising such a child lie in the complexity of teaching the child outside of children's groups and in the complexity of medical care. In this regard, the development of digital applications that facilitate medical care and education of such children at home is important and relevant. The purpose of the project is to improve the availability and quality of healthcare and social adaptation at home for children with ASD through the use of digital technologies.
Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers, and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment.
In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base via the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
The Third International Conference on Data Analytics (DATA ANALYTICS 2014), held on August 24 - 28, 2014 - Rome, Italy, continued the inaugural event on fundamentals in supporting data analytics, special mechanisms and features of applying principles of data analytics, application oriented analytics, and target-area analytics.
Processing of terabytes to petabytes of data, or incorporating non-structural and multi-structured data sources and types, requires advanced analytics and data science mechanisms for both raw and partially-processed information. Despite considerable advancements in high performance, large storage, and high computation power, there are challenges in identifying, clustering, classifying, and interpreting a large spectrum of information.
The Eleventh International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2019), held June 02-06, 2019 in Athens, Greece, continued a series of international events covering a large spectrum of topics related to advances in fundamentals of databases, the evolution of the relation between databases and other domains, database technologies and content processing, as well as specifics of databases in application domains.
Advances in different technologies and domains related to databases triggered substantial improvements for content processing, information indexing, and data, process and knowledge mining. The push came from Web services, artificial intelligence, and agent technologies, as well as from the generalization of the XML adoption.
High-speed communications and computations, large storage capacities, and load-balancing for distributed database access allow new approaches for content processing with incomplete patterns, advanced ranking algorithms, and advanced indexing methods.
Evolution in e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems puts pressure on database communities to push the 'de facto' methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
We welcomed academic, research, and industry contributions. The conference had the following tracks:
Knowledge and decision bases
Databases technologies
Data management
GraphSM: Large-scale Graph Analysis, Management and Applications
The Tenth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2018), held May 20-24, 2018 in Nice, France, continued a series of international events covering a large spectrum of topics related to advances in fundamentals of databases, the evolution of the relation between databases and other domains, database technologies and content processing, as well as specifics of databases in application domains.
Advances in different technologies and domains related to databases triggered substantial improvements for content processing, information indexing, and data, process and knowledge mining. The push came from Web services, artificial intelligence, and agent technologies, as well as from the generalization of the XML adoption.
High-speed communications and computations, large storage capacities, and load-balancing for distributed database access allow new approaches for content processing with incomplete patterns, advanced ranking algorithms, and advanced indexing methods.
Evolution in e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems puts pressure on database communities to push the 'de facto' methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
To remain competitive in a fast changing environment, many companies started to migrate their legacy applications towards a Microservices architecture. Such extensive migration processes require careful planning and consideration of implications and challenges likewise. In this regard, hands-on experiences from industry practice are still rare. To fill this gap in scientific literature, we contribute a qualitative study on intentions, strategies, and challenges in the context of migrations to Microservices. We investigated the migration process of 14 systems across different domains and sizes by conducting 16 in-depth interviews with software professionals from 10 companies. Along with a summary of the most important findings, we present a separate discussion of each case. As primary migration drivers, maintainability and scalability were identified. Due to the high complexity of their legacy systems, most companies preferred a rewrite using current technologies over splitting up existing code bases. This was often caused by the absence of a suitable decomposition approach. As such, finding the right service cut was a major technical challenge, next to building the necessary expertise with new technologies. Organizational challenges were especially related to large, traditional companies that simultaneously established agile processes. Initiating a mindset change and ensuring smooth collaboration between teams were crucial for them. Future research on the evolution of software systems can in particular profit from the individual cases presented.
While Microservices promise several beneficial characteristics for sustainable long-term software evolution, little empirical research covers what concrete activities industry applies for the evolvability assurance of Microservices and how technical debt is handled in such systems. Since insights into the current state of practice are very important for researchers, we performed a qualitative interview study to explore applied evolvability assurance processes, the usage of tools, metrics, and patterns, as well as participants’ reflections on the topic. In 17 semi-structured interviews, we discussed 14 different Microservice-based systems with software professionals from 10 companies and how the sustainable evolution of these systems was ensured. Interview transcripts were analyzed with a detailed coding system and the constant comparison method.
We found that especially systems for external customers relied on central governance for the assurance. Participants saw guidelines like architectural principles as important to ensure a base consistency for evolvability. Interviewees also valued manual activities like code review, even though automation and tool support were described as very important. Source code quality was the primary target for the usage of tools and metrics. Despite most reported issues being related to Architectural Technical Debt (ATD), our participants did not apply any architectural or service-oriented tools and metrics. While participants generally saw their Microservices as evolvable, service cutting and finding an appropriate service granularity with low coupling and high cohesion were reported as challenging. Future Microservices research in the areas of evolution and technical debt should take these findings and industry sentiments into account.
Small and Medium Enterprises (SMEs), which play a substantial role in the development of any economy, have been on the rise in recent years. At the same time, these enterprises face a myriad of challenges that could potentially be solved through the adoption of technology. Nonetheless, it has been observed that new technological uptake among SMEs remains limited, with the majority of them opting to maintain the status quo with regard to technology awareness and innovation strategies.
In a literature review, this paper explores three major dynamics curtailing the adoption of new technologies by SMEs in manufacturing: knowledge absorptive capacity and management factors, organisational structures, and technological awareness. Firstly, with regard to knowledge absorptive capacity and management factors, this study shows how these factors drive innovation potential in SMEs.
Secondly, with regards to technological awareness factors, this study documents how perceived usefulness, costs, network and infrastructure, education and skills, training and attitude as well as knowledge influence adoption of new technologies among SMEs in the world. Lastly, the study concludes by analysing how organisational structures drive innovation potentials of SMEs in the wake of swift and profound technological changes in the market.
The relevance of technology knowledge in digital transformation, especially in small and medium-sized enterprises (SMEs) that are still largely dependent on physical human capital, has become increasingly obvious. This is due to the rapidly changing business environment coupled with a growing number of real-world examples of firms disrupted by advances in technological knowledge. Consequently, we find it increasingly vital for SMEs to spot and mitigate threats and to take advantage of opportunities arising from the dynamism of digital transformation.
Our study aims at exploring the relevance of technology knowledge in SMEs for digital transformation, uncovering the opportunities, roadmaps, and models that SMEs can take advantage of to gain a competitive edge.
We conclude that despite the relevance of technology knowledge for digital transformation, coupled with its low costs and accessibility, SMEs have yet to realize its full potential. This is mainly because technologies appear, change, and vanish so rapidly in the digital age that gaining a proper understanding without dedicated resources is utterly difficult for SMEs, making them less competitive than incumbent large firms in the market.
The energy turnaround, digitalization, and decreasing revenues force enterprises in the energy domain to develop new business models. Business models for renewable energy are built on a different logic than business models for large-scale power plants. Following a design science research approach, we first examined the business models of three enterprises in the energy domain. We identified that these business models result in complex ecosystems with multiple actors and difficult relationships between them; one cause is the fast-changing and complicated state regulation in Germany. To address this problem, in a second phase we captured the requirements together with the partners from these enterprises. Further, we developed the prototype Business Model Configurator (BMConfig), based on the e3Value ontology, on the metamodelling platform ADOxx. We demonstrate the feasibility of our approach with the business model of an energy efficiency service based on smart meter data.
In a time of upheaval and digitalization, new business models for companies play an important role. Decentralized power generation and energy efficiency targets to achieve climate goals and reduce global warming are currently forcing energy companies to develop new business models. In recent years, many methods of business model development have been introduced to create new business ideas. But what are the obstacles to implementing these business models in the energy sector to develop new business opportunities? And what challenges do companies face in this respect? To answer these questions, a systematic literature review was conducted in this paper. As a result, eight categories were identified which summarise the main barriers to the implementation of new business models in the energy domain.
We introduce IPA-IDX, an approach that handles index modifications on modern storage technologies (NVM, Flash) as physical in-place appends, using simplified physiological log records. IPA-IDX provides similar performance and longevity advantages for indexes as basic IPA [5] does for tables. The selective application of IPA-IDX and basic IPA to certain regions and objects lowers the GC overhead by over 60%, while keeping the total space overhead to 2%. The combined effect of IPA and IPA-IDX increases performance by 28%.
Multi-dimensional patient data, such as time-varying volume data, data of different imaging modalities, surface segmentations, etc., are of growing importance in the clinical routine. For many use cases, it is of major importance to replicate a certain visualization of a data set created on one machine on a different computer using different software tools. To date, no standardized methodology exists for this consistent presentation. We propose an extension of the Digital Imaging and Communications in Medicine (DICOM) standard called “Multi dimensional Presentation State” and outline the scope and first results of the standardization process.
Workflow-driven support systems in the peri-operative area have the potential to optimize clinical processes and to enable new situation-adaptive support systems. We started to develop a workflow management system supporting all involved actors in the operating theatre, with the goal of synchronizing the tasks of the different stakeholders by giving relevant information to the right team members. Using the OMG standards BPMN, CMMN and DMN gives us the opportunity to bring established methods from other industries into the medical field. The system shows each addressed actor their information in the right place at the right time to make sure every member can execute their task in time, ensuring a smooth workflow. The system maintains the overall view of all tasks. Accordingly, it comprises the Camunda BPM workflow engine to run the models, a middleware connecting different systems to the workflow engine, and graphical user interfaces to display necessary information and to interact with the system. The complete pipeline is implemented with a RESTful web service. The system is designed to integrate other systems, such as a hospital information system (HIS), via the RESTful web service easily and without loss of data. The first prototype is implemented and will be expanded.
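To illustrate how a middleware component might hand work to the engine, the following hedged sketch starts a process instance via the Camunda BPM REST API; the host, the process key periop-workflow, and the variables are hypothetical, not taken from the described system:

```python
# Sketch: starting a process instance in the Camunda BPM engine over its
# REST API. Host, process key, and variables are illustrative assumptions.
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # default Camunda 7 REST base path

payload = {
    "variables": {
        "patientId": {"value": "12345", "type": "String"},  # hypothetical variable
        "orRoom":    {"value": "OR-2",  "type": "String"},  # hypothetical variable
    }
}
# Start the (hypothetical) peri-operative workflow by its process key.
resp = requests.post(
    f"{CAMUNDA}/process-definition/key/periop-workflow/start",
    json=payload,
    timeout=5,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created process instance
```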
The goal of the presented project is to develop the concept of home e-health centers for barrier-free and cross-border telemedicine. AAL technologies are already present on the market, but there is still a gap to close before they can be used for ordinary patient needs. The general idea needs to be accompanied by new services, which should be brought together in order to provide full service coverage for the users. Sleep and stress were chosen as predominant influences on the population. The scientific study of available home devices for sleep analysis provided the information necessary to select appropriate devices. The first choice for the project implementation is the device EMFIT QS+. This equipment provides part of a complete system through which a home telemedical hospital can operate at an adequate level of precision and communication with internal and/or external health services.
In this paper, we introduce an approach showing how reinforcement learning can be used to achieve interoperability between heterogeneous Internet of Things (IoT) components. More specifically, we model an HTTP REST service as a Markov decision process and adapt Q-learning to the properties of REST so that an agent in the role of an HTTP REST client can learn the semantics of the service and, in particular, an optimal sequence of service calls to achieve an application-specific goal. With our approach, we want to open up and facilitate a discussion in the community, as we see the utilization of artificial intelligence techniques as the key to achieving interoperability in the IoT.
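A minimal sketch of the idea: tabular Q-learning in which actions are HTTP calls on a REST service and reaching an application goal yields a reward. The environment below is a stub with an assumed API; the paper's actual state and reward design may differ:

```python
# Sketch: tabular Q-learning where states are observed resource states and
# actions are HTTP calls. A real agent would issue requests and derive
# state/reward from the responses; this stub only illustrates the learning.
import random
from collections import defaultdict

ACTIONS = ["GET /order", "POST /order/items", "POST /order/checkout"]  # assumed API

def step(state, action):
    # Stub transition: reaching "checked_out" is the application-specific goal.
    if state == "has_items" and action == "POST /order/checkout":
        return "checked_out", 10.0
    if state == "empty" and action == "POST /order/items":
        return "has_items", 0.0
    return state, -1.0  # an unhelpful call costs a small penalty

q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(2000):
    s = "empty"
    while s != "checked_out":
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: q[(s, a)]))
        s2, r = step(s, a)
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The greedy policy now encodes the optimal call sequence: items, then checkout.
print(max(ACTIONS, key=lambda a: q[("empty", a)]))
```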
Interoperability is an important topic in the Internet of Things (IoT), because this domain incorporates diverse and heterogeneous objects, communication protocols and data formats. Many models and classification schemes have been proposed to make the degree of interoperability measurable - however only on the basis of a hierarchical scale. In the course of this paper we introduce a novel approach to measure the degree of interoperability using a metric scaled quantity. We consider IoT as a distributed system, where interoperable objects exchange messages with each other. Under this premise, we interpret messages as operation calls and formalize this view as a causal model. The analysis of this model enables us to quantify the interoperable behavior of communicating objects.
The metric and qualitative analysis of models of the upper and lower dental arches is an important aspect of orthodontic treatment planning. Currently available eLearning systems for dental education only allow access to digital learning materials and do not interactively support the learning progress. Moreover, to date no study has compared the efficiency of learning methods based on physical versus digital study models. For this pilot study, 18 dental students were separated into two groups to investigate whether the learning success in study model analysis with an interactive eLearning system is higher based on digital models or on conventional plaster models. The results show that with the digital method less time is needed per model analysis. Moreover, the digital approach leads to higher total scores than that based on plaster models. We conclude that interactive eLearning using digital dental arch models is a promising tool for dental education.
OR-Pad - development of a prototype for a sterile information display at the surgical site : meeting abstract
(2019)
Background: Information from the patient record or from imaging procedures is often displayed only on monitors positioned quite far from the operating field, outside the surgeon's ergonomic line of sight. As a result, relevant information is overlooked or its potential cannot be fully exploited. Notes brought along on paper remain outside the sterile area during surgery and are therefore not readily accessible to the surgeon. For intraoperative entries in the surgical documentation, the surgeon also depends on the help of the assisting staff. The additional communication paths create extra personnel and time expenditure and increase the potential for errors. The application-oriented research project OR-Pad - use of portable information displays in the operating room - is intended to improve the surgeon's information flow. The idea arose from the clinical routine of the anatomy and urology departments of the University Hospital Tübingen and is now being developed into a high-fidelity prototype at Reutlingen University, funded by the Ministry of Science, Research and the Arts Baden-Württemberg and the European Regional Development Fund.
Objective: The goal of the OR-Pad project is to display clinically relevant, up-to-date information in the immediate vicinity of the surgeon during an operation. The system is intended to optimize the information flow between the intervention and its preparation and follow-up. The surgeon should be able to select relevant information in advance, such as current X-ray images or personal notes, for intraoperative display on a sterile information display at the surgical site. Its positioning should enable an ergonomic line of sight as well as direct interaction with the system. Context-relevant information is to be provided automatically based on the current course of the operation through the development of a situation recognition component. Optimizing the information flow also includes supporting the surgical documentation: during the intervention, entries such as timestamps or intraoperative images are to be created manually by the surgeon as well as automatically by the system. After the intervention, the surgical documentation is to be generated from these entries, making the process both higher in quality and more time-efficient.
Methods: To achieve this goal, the clinical requirements are first specified and transferred into a requirements document. For this purpose, interviews and observations during several interventions are conducted. Following the user-centered design process, personas and usage scenarios are drafted and evaluated with clinical project partners in several iterations. An information architecture is to be built that allows the integration of clinical information systems as well as image and device data from the OR network. A situation recognition based on process models is to be developed to estimate the progress of the operation. Suitable mounting mechanisms are to be used to attach the information display. The OR-Pad system is to be tested continuously in the teaching and research OR of Reutlingen University and coordinated with the clinical project partners in the spirit of agile product development. The final functional prototype will then be tested and evaluated in the experimental ORs of the anatomy department in Tübingen.
Results: An initial data collection using contextual inquiry captured first requirements for the OR-Pad system, resulting in a low-fidelity prototype. Its evaluation via expert interviews led to the second iteration, in which the concept was adapted according to the results. Further data were collected through observations at the University Hospital Tübingen to create scenarios for the intraoperative use cases. Based on the requirements, a concept for the user interface was designed, which will be evaluated with the clinical project partners in the further course of the project.
Potentials of smart contracts-based disintermediation in additive manufacturing supply chains
(2019)
We investigate which potentials are created by using smart contracts for disintermediation in supply chains for additive manufacturing. Using a qualitative, critical realist research approach, we analyzed three case studies with companies active in additive manufacturing. Based on interviews with experts from these companies, we identified eight key requirements for disintermediation and four associated potentials of smart contract-based disintermediation.
Artifact correction and refined metrics for an EEG-based drowsiness detection system
(2019)
Research question: Fatigue is an often underestimated yet major problem in road traffic. Of around 2.5 million traffic accidents in Germany in 2015, 2,898 accidents with a total of 59 fatalities (~1.7% of all traffic deaths) were attributable to overtiredness; estimates assume an unreported rate of up to 20%. A first study of our own examined whether a mobile EEG can reliably detect fatigue states in a driving simulator; the detection rate was only 61%. The goal of this work is to improve the measurement system used. To this end, accuracy is increased through artifact correction and refined quality metrics. Detected overtiredness is then indicated to the driver in an appropriate way so that they can react accordingly.
Patients and methods: Independent component analysis (ICA) is a multivariate method for analyzing multiple random variables. To decide whether a driver is currently tired or awake, the feature vector created for each sequence is classified using ICA. A trained machine learning algorithm is employed for this purpose, capable of assigning even unknown data sets to classes. To obtain the required frequency values, a Fourier transform was performed for each EEG channel. The resulting feature vector is then classified by an artificial neural network. For training, previously created feature vectors are labeled with the classes "awake" and "tired". These data are randomly shuffled and split 2:1 into a training set and a test set. The experiment was conducted with eight subjects, each completing two 45-minute test drives.
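The frequency-feature step described above can be sketched as follows: a per-channel FFT and the relative theta band power (4-8 Hz, which the study uses as a fatigue marker) as one feature-vector entry. The signal is synthetic and the sampling rate an assumption:

```python
# Sketch of the frequency-feature step: per-channel FFT and relative theta
# band power. The signal is synthetic; fs and the window are assumptions.
import numpy as np

fs = 256                        # assumed EEG sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)   # one 2-second sequence
channel = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(channel)) ** 2
freqs = np.fft.rfftfreq(channel.size, d=1 / fs)

theta_power = spectrum[(freqs >= 4) & (freqs < 8)].sum()
total_power = spectrum.sum()
# The relative theta power becomes one entry of the sequence's feature
# vector, which the neural network then classifies as "awake" or "tired".
print(theta_power / total_power)
```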
Results: The complete data set consists of 150,000 signal values, which are aggregated into roughly 7,000 sequences. After applying the quality metric, 4,370 sequences remain for training. There are clear differences in invalid sequences caused by EEG artifacts: in the "awake" state, three times as many sequences are discarded as in the "tired" state. Overall, about 50% of the sequences of awake subjects are discarded on average, compared to only 25% for tired subjects. On average, the system achieves a detection rate of 73% across both states. If only the ratio of "awake" to "tired" is considered and "slight tiredness" is excluded, the results exceed 90%.
Conclusions: The results show that attention decreases and fatigue increases over the course of the experiment. This is demonstrated on the one hand by subjective and objective observations of signs of fatigue, and on the other by measurable and classifiable differences in the EEG signal. The theta waves used as features showed a lower amplitude towards the end of the experiment. Extending the binary classification further stabilizes the results. Artifact correction and quality metrics further increase the quality of the data. The developed fatigue detection application determines measurable signs of fatigue and can make a sound decision about fitness to drive.
Among the multitude of software development processes available, hardly any is used by the book. Regardless of company size or industry sector, a majority of project teams and companies use customized processes that combine different development methods— so-called hybrid development methods. Even though such hybrid development methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this paper, we make a first step towards devising such guidelines. Grounded in 1,467 data points from a large-scale online survey among practitioners, we study the current state of practice in process use to answer the question: What are hybrid development methods made of? Our findings reveal that only eight methods and few practices build the core of modern software development. This small set allows for statistically constructing hybrid development methods. Using an 85% agreement level in the participants’ selections, we provide two examples illustrating how hybrid development methods are characterized by the practices they are made of. Our evidence-based analysis approach lays the foundation for devising hybrid development methods.