Informatik
Document Type
- Conference Proceeding (392)
- Article (86)
- Part of a Book (47)
- Book (9)
- Doctoral Thesis (8)
- Anthology (6)
- Patent (2)
- Working Paper (2)
The motto of the autumn conference Informatics Inside 2020 is KInside. Once again, students look inside and take a closer look at methods, applications and interrelationships. The contributions are diverse and, in keeping with the degree programme, human-centered. The aspiration is that the topics revolve around people's needs and that the methods employed are not an end in themselves, but are measured by their benefit to people.
Modern mixed (HTAP) workloads execute fast update transactions and long-running analytical queries on the same dataset and system. In multi-version concurrency control (MVCC) systems, such workloads result in many short-lived versions and long version chains, as well as in increased and frequent maintenance overhead.
Consequently, the index pressure increases significantly. Firstly, the frequent modifications cause frequent creation of new versions, yielding a surge in index maintenance overhead. Secondly, and more importantly, index scans incur extra I/O overhead to determine which of the resulting tuple versions are visible to the executing transaction (visibility check), since current designs only store version/timestamp information in the base table – not in the index. Such an index-only visibility check is critical for HTAP workloads on large datasets.
In this paper, we propose the Multi-Version Partitioned B-Tree (MV-PBT) as a version-aware index structure supporting index-only visibility checks and flash-friendly I/O patterns. The experimental evaluation indicates a 2x improvement for analytical queries and 15% higher transactional throughput under HTAP workloads. MV-PBT offers 40% higher transactional throughput compared to WiredTiger's LSM-Tree implementation under YCSB.
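A minimal sketch of the general idea behind an index-only visibility check under MVCC snapshot semantics; the entry layout, field names and timestamp scheme are illustrative assumptions, not the actual MV-PBT record format.

```python
# Hypothetical sketch: each index entry carries version timestamps so a scan can
# filter invisible versions without fetching the base tuple.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexEntry:
    key: int
    row_id: int
    begin_ts: int                 # commit timestamp of the creating transaction
    end_ts: Optional[int] = None  # commit timestamp of the deleting/updating transaction

def visible(entry: IndexEntry, snapshot_ts: int) -> bool:
    """A version is visible if it was created before the snapshot
    and not yet invalidated at snapshot time."""
    created_before = entry.begin_ts <= snapshot_ts
    still_alive = entry.end_ts is None or entry.end_ts > snapshot_ts
    return created_before and still_alive

def index_only_scan(entries, lo, hi, snapshot_ts):
    # No base-table I/O is needed to discard invisible versions.
    return [e for e in entries if lo <= e.key <= hi and visible(e, snapshot_ts)]
```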
In this paper, we present a new approach for achieving robust performance of data structures, making it easier to reuse the same design across different hardware generations as well as different workloads. To achieve robust performance, the main idea is to strictly separate the data structure design from the actual strategies used to execute access operations, and to adjust the execution strategies by means of so-called configurations instead of hard-wiring the execution strategy into the data structure. In our evaluation, we demonstrate the benefits of this configuration approach for individual data structures as well as complex OLTP workloads.
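A minimal sketch of the separation described above, under assumed names: the data structure holds the data, while a configuration object selects the execution strategy for lookups, so the strategy can be swapped per workload or hardware generation without changing the design.

```python
# Illustrative only: a sorted array whose lookup strategy is chosen by a config
# object instead of being hard-wired into the structure.
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class Config:
    lookup_strategy: str = "binary"   # e.g. "binary" for large runs, "scan" for tiny ones

class ConfigurableSortedArray:
    def __init__(self, keys, config: Config):
        self.keys = sorted(keys)
        self.config = config

    def contains(self, key) -> bool:
        if self.config.lookup_strategy == "scan":
            return any(k == key for k in self.keys)   # cache-friendly for very small arrays
        i = bisect_left(self.keys, key)               # default: binary search
        return i < len(self.keys) and self.keys[i] == key

# The same structure, two execution strategies chosen per workload/hardware:
small = ConfigurableSortedArray(range(8), Config(lookup_strategy="scan"))
large = ConfigurableSortedArray(range(1_000_000), Config(lookup_strategy="binary"))
print(small.contains(5), large.contains(123_456))
```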
The tale of 1000 cores: an evaluation of concurrency control on real(ly) large multi-socket hardware
(2020)
In this paper, we set out to revisit the results of "Staring into the Abyss [...] of Concurrency Control with [1000] Cores" and analyse in-memory DBMSs on today's large hardware. Contrary to the original assumption of the authors, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware has made its way into production data centres. Hence, we follow up on this prior work with an evaluation of the characteristics of concurrency control schemes on real production multi-socket hardware with 1568 cores. To our surprise, we made several interesting findings, which we report on in this paper.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system designs, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible.
The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically spread across multiple layers in a traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under NoFTL-KV and the COSMOS hardware platform.
Massive data transfers in modern key/value stores, resulting from low data locality and data-to-code system designs, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper, we introduce nKV, a key/value store utilizing native computational storage and near-data processing. On the one hand, nKV can directly control data and computation placement on the underlying storage hardware. On the other hand, nKV propagates the data formats and layouts to the storage device, where software and hardware parsers and accessors are implemented. Both allow NDP operations to execute in a host-intervention-free manner, directly on physical addresses, and thus to better utilize the underlying hardware. Our performance evaluation is based on executing traditional KV operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4×-2.7× better performance on real hardware – the COSMOS+ platform.
nKV in action: accelerating KV-stores on native computational storage with near-data processing
(2020)
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system designs, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper, we demonstrate various NDP alternatives in nKV, a key/value store utilizing native computational storage and near-data processing. We showcase the execution of classical operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4x-2.7x better performance due to NDP. nKV runs on real hardware - the COSMOS+ platform.
The automation of work by means of disruptive technologies such as Artificial Intelligence (AI) and Robotic Process Automation (RPA) is currently intensely discussed in business practice and academia. Recent studies indicate that many tasks conducted manually by humans today will no longer be performed by humans in the future. In a similar vein, it is expected that new roles will emerge. The aim of this study is to analyze prospective employment opportunities in the context of RPA in order to foster our understanding of the pivotal qualifications, expertise and skills necessary to find an occupation in a completely changing world of work. This study is based on an explorative content analysis of 119 job advertisements related to RPA in Germany. The data was collected from major German online job platforms, qualitatively coded, and subsequently analyzed quantitatively. The research indicates that there indeed are employment opportunities, especially in the consulting sector. The positions require diverse technological expertise, such as specific programming languages and knowledge of statistics. The results of this study provide guidance for organizations and individuals on reskilling requirements for future employment. As many of the positions require profound IT expertise, the generally accepted view that existing employees affected by automation can be retrained to work in the emerging positions has to be viewed very critically. This paper contributes to the body of knowledge by providing a novel perspective on the ongoing discussion of employment opportunities and reskilling demands of the existing workforce in the context of recent technological developments and automation.
Urban platforms are essential for smart and sustainable city planning and operation. Today they are mostly designed to handle and connect large urban data sets from very different domains. Modelling and optimisation functionalities are usually not part of a city's software infrastructure. However, they are considered crucial for developing transformation scenarios and for optimised smart city operation. This work discusses software architecture concepts for such urban platforms and presents case study results on building sector modelling, including urban data analysis and visualisation. Results from a case study in New York are presented to demonstrate the implementation status.
In networked operating room environments, there is an emerging trend towards standardized, non-proprietary communication protocols which make it possible to build new integration solutions and flexible human-machine interaction concepts. The most prominent endeavor is the IEEE 11073 SDC protocol. For some use cases, it would be helpful if not just medical devices could be controlled via SDC, but also building automation systems such as lights, shutters, air conditioning, etc. For those systems, the KNX protocol is widely used. We built an SDC-to-KNX gateway which makes it possible to use the SDC protocol for sending commands to connected KNX devices. The first prototype system was successfully implemented in the demonstration operating room at Reutlingen University. This is a first step towards the integration of a broader variety of KNX devices.
Documentation of clinical processes, especially in the perioperative area, is a basic requirement for quality of care. Nonetheless, documentation is a burden for the medical staff, since it distracts from the clinical core process. An intuitive and user-friendly documentation system could increase documentation quality and reduce documentation workload. The optimal system solution would know what happened, and the person documenting the step would only need a single "confirm" button. In many cases, such a linear flow of activities is given as long as only one profession (e.g. anaesthesiology, scrub nurse) is considered, but even in such cases there might be deviations from the linear process flow, and further interaction is required.
Intraoperative brain deformation, so-called brain shift, affects the applicability of preoperative magnetic resonance imaging (MRI) data to assist intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on a 3D convolutional neural network architecture, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the REtroSpective Evaluation of Cerebral Tumors (RESECT) dataset. This study showed that our proposed method outperforms other registration methods in previous studies, with an average mean squared error (MSE) of 85. Moreover, this method can register three 3D MRI-US pairs in less than a second, improving the expected outcomes of brain surgery.
Checklists are a valuable tool to ensure process quality and quality of care. To ensure proper integration into clinical processes, it would be desirable to generate checklists directly from formal process descriptions. Such checklists could also be used for user interaction in context-aware surgical assist systems. We built a tool that automatically converts Business Process Model and Notation (BPMN) process models into checklists displayed as HTML websites. Gateways representing decisions are mapped to checklist items that trigger dynamic content loading based on the placed checkmark. The usability of the resulting system was evaluated positively regarding comprehensibility and end-user friendliness.
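A minimal sketch of such a conversion, not the authors' tool: BPMN tasks and exclusive gateways are read with Python's standard XML parser and emitted as HTML checklist items; tag handling, file names and the client-side loading hook are assumptions.

```python
# Simplified BPMN-to-checklist conversion assuming a standard BPMN 2.0 namespace.
import xml.etree.ElementTree as ET

BPMN_NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}

def bpmn_to_checklist(path: str) -> str:
    root = ET.parse(path).getroot()
    items = []
    for task in root.iter(f"{{{BPMN_NS['bpmn']}}}task"):
        items.append(f'<li><label><input type="checkbox"> {task.get("name")}</label></li>')
    for gw in root.iter(f"{{{BPMN_NS['bpmn']}}}exclusiveGateway"):
        # A decision gateway becomes an item that triggers dynamic loading of the
        # branch matching the placed checkmark (handled client-side).
        items.append(
            f'<li class="decision" data-gateway="{gw.get("id")}">'
            f'<label><input type="checkbox"> {gw.get("name") or "Decision"}</label></li>'
        )
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

if __name__ == "__main__":
    print(bpmn_to_checklist("process.bpmn"))  # hypothetical model file
```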
At DBKDA 2019, we demonstrated that StrongDBMS, with its simple but rigorous optimistic algorithms, provides better performance in situations of high concurrency than major commercial database management systems (DBMS). The demonstration was convincing, but the reasons for its success were not fully analysed. A brief account of the results is given below. In this short contribution, we wish to discuss the reasons for these results. The analysis leads to a strong criticism of all DBMS algorithms based on locking, and based on these results it is not fanciful to suggest that it is time to re-engineer existing DBMS.
Due to digitalization, constant technological progress and ever shorter product life cycles, enterprises are currently facing major challenges. In order to succeed in the market, business models have to be adapted to changing market conditions more often and more quickly than they used to be. Fast adaptability, also called agility, is a decisive competitive factor in today's world. Because of the ever-growing IT share of products and the fact that they are manufactured using IT, changing the business model has a major impact on the enterprise architecture (EA). However, developing EAs is a very complex task, because many stakeholders with conflicting interests are involved in the decision-making process. Therefore, a lot of collaboration is required. To support organizations in developing their EA, this article introduces a novel integrative method that systematically integrates stakeholder interests into decision-making activities. By using the method, collaboration between the stakeholders involved is improved by identifying points of contact between them. Furthermore, standardized activities make decision-making more transparent and comparable without limiting creativity.
Enterprises are currently transforming their strategy, processes, and information systems to extend their degree of digitalization. The potential of the Internet and related digital technologies, like the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems, both drives and enables new business designs. Digitalization deeply disrupts existing businesses, technologies and economies and fosters the architecture of digital environments with many rather small and distributed structures. This has a strong impact on new value-producing opportunities and on architecting digital services and products, guiding their design by exploiting a Service-Dominant Logic. The main result of the book chapter extends methods for integral digital strategies with value-oriented models for digital products and services, which are defined in the framework of a multi-perspective digital enterprise architecture reference model.
The digital transformation is today's dominant business transformation, having a strong influence on how digital services and products are designed in a service-dominant way. A popular underlying theory of value creation and economic exchange, known as the service-dominant (S-D) logic, can be connected to many successful digital business models. However, S-D logic by itself is abstract. Companies cannot easily use it directly as an instrument for business model innovation and design. To address this, a comprehensive ideation method based on S-D logic is proposed, called service-dominant design (SDD). SDD is aimed at supporting firms in the transition to a service- and value-oriented perspective. The method provides a simplified way to structure the ideation process based on four model components. Each component consists of practical implications, auxiliary questions and visualization techniques that were derived from a literature review, a use case evaluation of digital mobility and a focus group discussion. SDD represents a first step towards a toolset that can support established companies in the process of service- and value-orientation as part of their digital transformation efforts.
This research-oriented book presents key contributions on architecting the digital transformation. Its 20 chapters are organized into the following main sections:
- Digital Transformation
- Digital Business
- Digital Architecture
- Decision Support
- Digital Applications
Focusing on digital architectures for smart digital products and services, it is a valuable resource for researchers, doctoral students, postgraduates, graduates, undergraduates, academics and practitioners interested in digital transformation.
The typed graph model
(2020)
In recent years, the graph model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from the use of hyper-nodes and hyper-edges, which allow a data structure to be presented at different abstraction levels. We demonstrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML models.
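A toy Python sketch of what a schema-bound typed graph with hyper-edges could look like: node and edge types declare required properties and endpoint types, and inserts are validated against the schema. The class and method names are illustrative and do not reproduce the paper's formal model.

```python
# Illustrative schema-bound typed graph: type declarations enable the
# data-quality checks that a schemaless property graph cannot provide.
class TypedGraph:
    def __init__(self):
        self.node_types, self.edge_types = {}, {}
        self.nodes, self.edges = {}, []

    def define_node_type(self, name, required_props):
        self.node_types[name] = set(required_props)

    def define_edge_type(self, name, endpoint_types):
        self.edge_types[name] = tuple(endpoint_types)

    def add_node(self, node_id, type_name, **props):
        missing = self.node_types[type_name] - props.keys()
        if missing:
            raise ValueError(f"{type_name} node missing properties: {missing}")
        self.nodes[node_id] = (type_name, props)

    def add_hyper_edge(self, type_name, *node_ids, **props):
        # A hyper-edge may connect any number of nodes; their types must match the schema.
        expected = self.edge_types[type_name]
        actual = tuple(self.nodes[n][0] for n in node_ids)
        if actual != expected:
            raise ValueError(f"{type_name} expects endpoints {expected}, got {actual}")
        self.edges.append((type_name, node_ids, props))

g = TypedGraph()
g.define_node_type("Person", {"name"})
g.define_node_type("Post", {"text"})
g.define_edge_type("LIKES", ["Person", "Post"])
g.add_node(1, "Person", name="Ada")
g.add_node(2, "Post", text="Typed graphs")
g.add_hyper_edge("LIKES", 1, 2, weight=1.0)
```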
Formula One races provide a wealth of data worth investigating. Although the time-varying data has a clear structure, it is quite challenging to analyze it for further properties. Here the focus is on a visual classification of events, drivers, and time periods. As a first step, the Formula One data is visually encoded based on a line plot visual metaphor reflecting the dynamic lap times; finally, a classification of the races based on the visual outcomes gained from these line plots is presented. The visualization tool is web-based and provides several interactively linked views on the data, starting with a calendar-based overview representation. To illustrate the usefulness of the approach, Formula One data from several years and race locations is visually explored. The chapter discusses algorithmic, visual, and perceptual limitations that might occur during the visual classification of time-series data such as Formula One races.
Additive manufacturing (AM) is a promising manufacturing method for many industrial sectors. For such applications, industrial requirements such as high production volumes and coordinated implementation must be taken into account. These tasks of managing production facilities internally are carried out by the Production Planning and Control (PPC) information system. A key factor in planning and scheduling is the exact calculation of manufacturing times. For this purpose, we investigate the use of Machine Learning (ML) for predicting the manufacturing times of AM facilities.
The invention relates to a method for the extrinsic calibration of at least one imaging sensor, in which a pose of the at least one imaging sensor relative to the origin (U) of a three-dimensional coordinate system of a handling device is determined by means of a computing device, wherein known three-dimensional coordinates concerning the position of at least one joint of the handling device are taken into account by the computing device, wherein two-dimensional coordinates concerning the position of the at least one joint are determined from the raw data of the at least one imaging sensor, and wherein the computing device determines the pose of the at least one imaging sensor from the correspondence between the two-dimensional and the three-dimensional coordinates.
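The described determination of a sensor pose from 2D-3D joint correspondences corresponds to the classical Perspective-n-Point setting. The sketch below uses OpenCV's solvePnP on made-up correspondences and assumed intrinsics; it illustrates the underlying problem, not necessarily the patented procedure.

```python
# Classical PnP sketch: recover the sensor pose relative to the device origin
# from known 3D joint positions and their 2D detections in the sensor image.
import numpy as np
import cv2

# Known 3D joint positions in the handling device's coordinate frame (metres, made up).
object_points = np.array([[0.0, 0.0, 0.5],
                          [0.2, 0.0, 0.5],
                          [0.2, 0.3, 0.7],
                          [0.0, 0.3, 0.9],
                          [0.1, 0.1, 1.1],
                          [0.3, 0.2, 1.2]], dtype=np.float64)

# Corresponding 2D joint detections in the sensor's raw image (pixels, made up).
image_points = np.array([[320.0, 240.0], [400.0, 238.0], [395.0, 180.0],
                         [318.0, 150.0], [350.0, 120.0], [420.0, 100.0]],
                        dtype=np.float64)

# Intrinsics assumed known from a prior intrinsic calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)
print("Sensor pose relative to the device origin:\nR =", R, "\nt =", tvec.ravel())
```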
Purpose: Gliomas are the most common and aggressive type of brain tumor due to their infiltrative nature and rapid progression. The process of distinguishing tumor boundaries from healthy cells is still a challenging task in the clinical routine. The fluid-attenuated inversion recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, namely DeepSeg, for fully automated detection and segmentation of brain lesions using FLAIR MRI data.
Methods: The developed DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding and decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is fed into the decoder part to obtain the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as residual neural network (ResNet), dense convolutional network (DenseNet), and NASNet have been utilized in this study.
Results: The proposed deep learning architectures have been successfully tested and evaluated online on the MRI datasets of the brain tumor segmentation (BraTS 2019) challenge, including 336 cases as training data and 125 cases as validation data. The Dice and Hausdorff distance scores of the obtained segmentation results are about 0.81 to 0.84 and 9.8 to 19.7, respectively.
Conclusion: This study showed successful feasibility and comparative performance of applying different deep learning models in a new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is open source and freely available at https://github.com/razeineldin/DeepSeg/.
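A compact 2D encoder-decoder sketch in Keras illustrating the encoder/decoder split described above; DeepSeg itself plugs in ResNet, DenseNet or NASNet encoders, so the layer sizes and input shape here are placeholders.

```python
# Tiny U-Net-style encoder/decoder, purely illustrative of the described design.
from tensorflow.keras import layers, Model

def tiny_deepseg(input_shape=(240, 240, 1)):
    inputs = layers.Input(shape=input_shape)
    # Encoder: extracts spatial features into a lower-resolution semantic map.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    skip = x
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # Decoder: upsamples the semantic map back to a full-resolution probability map.
    x = layers.UpSampling2D()(x)
    x = layers.Concatenate()([x, skip])          # U-Net-style skip connection
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # lesion probability per pixel
    return Model(inputs, outputs)

model = tiny_deepseg()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```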
The Internet of Things (IoT) provides a strong platform for connecting objects, devices, and people to the Internet in order to exchange or share information with each other. IoT is growing rapidly and is expected to be adopted in disciplines such as manufacturing, agriculture, healthcare, and robotics. Furthermore, a new IoT concept has been proposed especially for the robotics area: the Internet of Robotic Things (IoRT). IoRT is a mixed structure of diverse technologies such as cloud computing, artificial intelligence, and machine learning. However, to promote and realize IoRT, digitization and digital transformation must be advanced and implemented in the robotics enterprise. In this paper, we propose an architecture framework for IoRT-based digital platforms and verify it using a planned case in a global robotics enterprise. The associated challenges and future research directions in this field are also presented.
Zero or plus energy office buildings must have very high building standards and require highly efficient energy supply systems due to space limitations for renewable installations. Conventional solar cooling systems use photovoltaic electricity or thermal energy to run either a compression cooling machine or an absorption cooling machine in order to produce cooling energy during daytime, while they use electricity from the grid for the nightly cooling energy demand. With a hybrid photovoltaic-thermal collector, electricity as well as thermal energy can be produced at the same time. These collectors can also produce cooling energy at nighttime by longwave radiation exchange with the night sky and convection losses to the ambient air. Such a renewable trigeneration system offers new fields of application. However, the technical, ecological and economic aspects of such systems are still largely unexplored.
In this work, the potential of a PVT system to heat and cool office buildings in three different climate zones is investigated. In the investigated system, PVT collectors act as a heat source and heat sink for a reversible heat pump. Due to the reduced electricity consumption (from the grid) for heat rejection, the overall efficiency and economics improve compared to a conventional solar cooling system using a reversible air-to-water heat pump as heat and cold source.
A parametric simulation study was carried out to evaluate the system design with different PVT surface areas and storage tank volumes in order to optimize the system for three different climate zones and two different building standards. It is shown that such systems are technically feasible today. With maximum utilization of PV electricity for heating, ventilation, air conditioning and other electricity demands such as lighting and plug loads, high solar fractions and primary energy savings can be achieved.
Annual costs for such a system are comparable to those of conventional solar thermal and solar electrical cooling systems. Nevertheless, the economic feasibility strongly depends on country-specific energy prices and energy policy. However, even in countries without compensation schemes for energy produced from renewables, this system can still be economically viable today. It could be shown that, for each of the investigated locations worldwide, a specific system dimensioning can be found that allows economically and ecologically valuable operation of an office building with PVT technologies in different system designs.
Vergleichende Analyse des YouTube-Auftritts von privat- und öffentlich-rechtlichen Sendegruppen
(2020)
For a long time, the Internet was seen as an antagonist of television. Accordingly, it was used to win back and retain viewers, which, however, proved inefficient. In the meantime, the individual broadcasting groups have recognized and used the Internet as a media extension. Because of this late acceptance, there are marked differences in the scope and approach of using the Internet as an additional medium. This can best be illustrated by a comparison with respect to the most important video-oriented social media platform, YouTube.
In this comparison, the individual broadcasting groups are evaluated with regard to their perceived advantages, disadvantages and attractiveness in terms of user behaviour and user opinion. Target-group-oriented optimization of the YouTube presence is of exceptionally high importance for future market penetration.
Going forward with the requirements of missions to the Moon and further into deep space, the European Space Agency is investigating new methods of astronaut training that can help accelerate learning, increase availability, and reduce complexity and cost in comparison to currently used methods. To achieve this, technologies such as virtual reality may be utilized. In this paper, the benefits of using virtual reality for extravehicular activity training are investigated in comparison to conventional training methods such as neutral buoyancy pools. To help determine the requirements and current uses of virtual reality for extravehicular activity training, first-hand tests of currently available software as well as expert interviews are utilized. With this knowledge, a concept is developed that may be used to further advance training methods in virtual reality. The resulting concept is used as a basis for developing a prototype to showcase user interactions and locomotion in microgravity simulations.
A considerable share of car accidents can be attributed to drowsiness at the wheel. Several approaches already exist to prevent accidents caused by drowsiness, for example detecting the driving style. Within the IoT lab of the master's programme Human Centered Computing at Reutlingen University, various driver assistance systems are to be developed and tested in order to prevent accidents caused by drowsiness. This work deals with drowsiness detection via computer vision (CV) and the electrocardiogram (ECG). In this paper, CV-based drowsiness detection at the wheel is realized using the open-source libraries OpenCV and Dlib and the embedded PC Nvidia Jetson Nano. ECG-based drowsiness is detected via the heartbeat and heart rate variability. Furthermore, an interface between CV and ECG was developed to combine the detection-relevant data from the Python scripts for CV-based and ECG-based drowsiness detection. These data are then evaluated to produce an overall result.
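A hedged sketch of the computer-vision part: eye-aspect-ratio (EAR) drowsiness detection with OpenCV and Dlib, as commonly implemented on devices like the Jetson Nano. The landmark model path, camera index and EAR threshold are assumptions, and the fusion with the ECG results is only indicated by a print statement.

```python
# EAR-based drowsiness cue using OpenCV and Dlib's 68-point face landmarks.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model path
EAR_THRESHOLD = 0.21  # assumed: below this for several frames -> eyes likely closed

def eye_aspect_ratio(pts):
    # pts: six (x, y) landmarks of one eye in Dlib's 68-point scheme
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return vertical / (2.0 * horizontal)

cap = cv2.VideoCapture(0)  # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)], dtype=float)
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        if ear < EAR_THRESHOLD:
            print("possible drowsiness")  # in the real system this would feed the CV/ECG fusion
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```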
In this work, three different test environments are presented, which feed into an iterative procedure to support the development of augmented reality applications for presenting autonomous driving functions.
Design drafts and software developments can be presented and evaluated in the test environments for different objectives of user surveys. Testing alongside development enables the early identification of change requests, which can be incorporated to arrive at a valid solution design. The developed test environments are a scaled-down model, a driving simulator and a real vehicle. Their properties, functions and setups result from findings in the literature and experience from initial developments. These, as well as the possible areas of application, are presented in this work.
This paper discusses options for visualizing neural networks. At first glance, a neural network cannot be inspected from the outside and is therefore a black box for many. Frequently used Python libraries, for example TensorFlow, are introduced and their strengths as well as weaknesses are presented. Based on these, existing visualizations are shown and their current use is explained.
A comparison is intended to show which library provides the most data during training, so that this information can be processed further. These data are to be visualized in a way that supports the development of a neural network. The goal is to address the possibilities that can be offered. By simplifying the debugging of neural networks, further developments in this direction are to be supported.
Detecting semantic similarities between sentences is still a challenge today due to the ambiguity of natural languages. In this work, we propose a simple approach to identifying semantically similar questions by combining the strengths of word embeddings and Convolutional Neural Networks (CNNs). In addition, we demonstrate how the cosine similarity metric can be used to effectively compare feature vectors. Our network is trained on the Quora dataset, which contains over 400k question pairs. We experiment with different embedding approaches such as Word2Vec, fastText, and Doc2Vec and investigate the effects these approaches have on model performance. Our model achieves competitive results on the Quora dataset and complements the well-established evidence that CNNs can be utilized for paraphrase recognition tasks.
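A small sketch of the comparison step: cosine similarity between two fixed-size feature vectors. Mean-pooled toy word embeddings stand in here for the CNN-generated feature vectors; the vocabulary and dimensions are made up.

```python
# Cosine similarity over sentence vectors built by mean-pooling word embeddings.
import numpy as np

def embed(sentence, word_vectors, dim=300):
    """Mean-pool the available word vectors of a sentence (hypothetical lookup)."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Toy vocabulary standing in for Word2Vec/fastText/Doc2Vec embeddings.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=300) for w in
         "how do i learn python what is the best way to study programming".split()}

q1, q2 = "How do I learn Python", "What is the best way to study programming"
score = cosine_similarity(embed(q1, vocab), embed(q2, vocab))
print(f"similarity({q1!r}, {q2!r}) = {score:.3f}")
```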
The development of a medical device usually takes several years. Legal requirements, such as the German Medical Devices Implementation Act, determine which steps must be carried out during development. Compliance with them must be demonstrated in the technical documentation. The technical documents it contains are created over the course of the development; they build on one another and reference each other, resulting in heterogeneous and confusing structures. Traceability offers a solution to this problem. Traceability ensures that the requirements for the medical device can be linked to documents such as the requirements catalogue, the user requirements specification or the technical specification. It is thus possible to trace at any time which requirements are related to which tests, changes or results. An important process in the development of medical devices is also usability engineering, which is intended to ensure the safety of a medical device and to minimize risks during its use. Many artefacts, such as usability reports, are created in this process. To keep track of all usability data, they can be linked by means of traceability. This article identifies the requirements that usability engineering in medical technology places on traceability.
Requirements engineering (RE) comprises all systematic steps in the development of a system to fulfil the needs of its users and the requirements placed on it. The RE of a selected manufacturer of clinical information systems (CIS) was examined and turned out to be non-transparent as well as partly insufficient. The extent to which systematic procedures and methods for RE are applied was analysed at the selected CIS manufacturer. The analysis shows that RE is widely practised, but in varying ways.
The goal of this work is to determine the state of the art of RE for CIS development. Important factors of RE for the development of CIS are described. The results of this work will serve as a first step towards optimizing the RE of the selected CIS manufacturer.
Medical devices are objects, substances or software with a medical purpose intended for use on humans. They are developed by medical device manufacturers and placed on the market. Since the incorrect use of medical devices can harm the human body, an appropriate quality of medical devices must be ensured. To ensure this quality, medical device manufacturers are obliged to comply with the Medical Device Regulation (MDR). For high-risk products, the use of a quality management system (QMS) is additionally mandatory. It governs the structure, responsibilities, procedures and processes of the company that are necessary for medical device development. In times of digitalization, software solutions are used to reduce the time-consuming documentation and administration tasks in the QMS and to optimize the processes. Once such software has been introduced, a QMS is in practice also referred to as an electronic QMS (eQMS). Furthermore, the entire QMS must comply with the regulations. The goal of this work is therefore to use the regulatory requirements to work out which specifications must be observed when introducing an eQMS and how they can be fulfilled. This work refers to the regulatory requirements from the MDR and ISO 13485; the standard contains the requirements for a QMS for medical devices.
In cooperation with the medical device manufacturer ulrich medical, a user experience and usability study is conducted on the software of the contrast medium injectors currently in use. The company wants to develop a new variant of a contrast medium injector based on an improved version of this software. User studies can be conducted with a wide variety of methods. The appropriate procedure must be defined and the test persons determined in relation to the method used. For medical devices, strict requirements in standards and laws must additionally be observed. The basis for the method selection is research on usability and user experience requirements for medical devices. The study is evaluated using quantitative data from a usability test in the laboratory, questionnaires on user experience, and qualitative post-test interviews. Primarily, this study serves to identify possible improvements, which will be deepened and implemented in the subsequent master's thesis.
The field of breath analysis has become of growing interest for medical diagnosis and patient monitoring. The main advantages are that it is noninvasive, painless and repeatable in flexible cycles. Even though breath analysis has been researched for a couple of decades, there are still many unanswered questions. Human breath contains volatile organic compounds which are emitted from inside the body. Some of these compounds can be assigned to specific sources, such as inflammation or cancer, but also to non-health-related origins. This paper gives an overview of breath analysis for the purpose of disease diagnosis and health monitoring. To this end, literature on breath analysis in the medical field has been analyzed, from its early stages to the present. As a result, this paper gives an outline of the topic of breath analysis.
According to numerous studies, haptic feedback is an important component of medical robotics. However, most systems are still at the research stage and pursue different approaches. In teleoperation, both sensorless and sensor-based systems are being investigated. In contrast to the encoders in sensorless systems, sensors provide accurate measurements, but they are expensive to purchase, difficult to disinfect and must be integrated into surgical instruments. In hands-on systems, unlike teleoperation systems, the surgeon directly feels the forces that occur during use. In these systems, the robot only provides the required stability and accuracy; they are controlled directly by the human. In teleoperation systems, by contrast, dedicated controllers are used. Here, the sigma.7, developed for the operating room, has become established. Compared to competitors developed for general use, it offers haptic feedback in all necessary degrees of freedom and corresponding force feedback.
In cryosurgery, cold is used to kill tumorous tissue. For this purpose, cryoprobes are inserted into the tumour and cooled down strongly. This involves various challenges that can be addressed with computer assistance. This work presents the results of a literature review on these challenges. The reviewed works deal with simulating the ice ball forming in the tumour, correctly positioning the cryoprobes in the tumour, monitoring the intervention, and developing simulations for training purposes. It becomes apparent that the use of computer-assisted solutions can improve cryosurgery for both surgeon and patient.
Artificial intelligence (AI) continues to experience a renaissance in many industries. The trend of capturing and exploiting complex relationships in data continues. However, the basic idea of machine learning based on empirical data is not new. The challenge remains to first gain an often interdisciplinary understanding of complex relationships in a wide variety of application domains, for example in order to put AI to meaningful use. As a visitor to the conference, you can expect contributions from very different areas. These include, for example, drowsiness detection systems in cars, a sense of touch for robots, and also new approaches to creating and using virtual realities for testing autonomous driving, up to the simulation of extravehicular activities in spaceflight.
On the design of an urban data and modeling platform and its application to urban district analyses
(2020)
An integrated urban platform is the essential software infrastructure for smart, sustainable and resilient city planning, operation and maintenance. Today such platforms are mostly designed to handle and analyze large and heterogeneous urban data sets from very different domains. Modeling and optimization functionalities are usually not part of the software concepts. However, such functionalities are considered crucial by the authors for developing transformation scenarios and for optimizing smart city operation. An urban platform needs to handle multiple scales in the time and spatial domain, ranging from long-term population and land use change to hourly or sub-hourly matching of renewable energy supply and urban energy demand.
As a consequence of the current digitalization in the manufacturing industry, applications and services with potentially positive effects on factors such as effectiveness and quality of work are being developed. Gamification can be a suitable approach to strengthening motivational aspects in the work context. This work presents the initial design and evaluation of a gamification approach for users of an AI service for machine optimization and extracts possible requirements for a concept to increase motivation.
This paper presents a temporal prediction of earthquakes. For this purpose, convolutional neural networks (CNNs) are trained on a dataset of laboratory earthquakes. The trained networks make predictions by classifying an input of seismic data. Through this classification, the CNN can predict the time remaining until the next earthquake. Two approaches are compared. In the first approach, the raw data are fed into a CNN. In the second approach, the data are pre-processed with Mel-frequency cepstral coefficients (MFCCs) before the CNN. It turns out that good classification is possible with both approaches. The combination of MFCC and CNN delivers the better quantitative results, achieving an accuracy of 65%.
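A sketch of the second approach under assumed shapes and sampling rate: a seismic window is converted to MFCC features with librosa and classified by a small CNN into discretized time-to-failure bins. Neither the network size nor the parameters correspond to the actual experiments.

```python
# MFCC pre-processing of a seismic window followed by a small CNN classifier.
import numpy as np
import librosa
from tensorflow.keras import layers, models

def to_mfcc(window, sr=40_000, n_mfcc=20):
    # The window is treated like an audio signal; sr is an assumed (down-sampled) rate.
    return librosa.feature.mfcc(y=window.astype(float), sr=sr, n_mfcc=n_mfcc)

def build_cnn(input_shape, n_classes):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_classes, activation="softmax"),  # time-to-failure bins
    ])

window = np.random.randn(150_000)                 # placeholder seismic segment
mfcc = to_mfcc(window)[..., np.newaxis]           # shape (n_mfcc, frames, 1)
model = build_cnn(mfcc.shape, n_classes=10)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.predict(mfcc[np.newaxis]).shape)      # (1, 10) class probabilities
```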
Semi-automated image data labelling using AprilTags as a pre-processing step for machine learning
(2019)
Data labelling is a pre-processing step to prepare data for machine learning. There are many ways to collect and prepare this data, but they are usually associated with considerable effort. This paper presents an approach to semi-automated image data labelling using AprilTags. The AprilTags attached to the object, which contain a unique ID, make it possible to link the object surfaces to a particular class. This approach is implemented and used to label data of a stackable box.
The data is evaluated by training a You Only Look Once (YOLO) net, with a subsequent evaluation of the detection results. These results show that the semi-automatically collected and labelled data can certainly be used for machine learning. However, if concise features of an object surface are covered by the AprilTag, there is a risk that the concerned class will not be recognized. It can be assumed that the labelled data can not only be used for YOLO, but also for other machine learning approaches.
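A hedged sketch of the labelling idea: detect an AprilTag on an object face, map its ID to a class and write a YOLO-format label line (class, x_center, y_center, width, height, all normalized). The library choice (pupil_apriltags), file names and the ID-to-class mapping are assumptions, and the real pipeline derives the box from the object surface rather than just scaling the tag corners.

```python
# AprilTag detection turned into YOLO-format labels for semi-automated labelling.
import cv2
from pupil_apriltags import Detector

TAG_TO_CLASS = {0: 0, 1: 1}   # hypothetical: tag ID -> dataset class index
detector = Detector(families="tag36h11")

def label_image(image_path, label_path, margin=2.0):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lines = []
    for det in detector.detect(gray):
        if det.tag_id not in TAG_TO_CLASS:
            continue
        xs, ys = det.corners[:, 0], det.corners[:, 1]
        # Grow the tag's bounding box as a crude stand-in for the object surface.
        bw, bh = (xs.max() - xs.min()) * margin, (ys.max() - ys.min()) * margin
        cx, cy = det.center
        lines.append(f"{TAG_TO_CLASS[det.tag_id]} {cx / w:.6f} {cy / h:.6f} "
                     f"{bw / w:.6f} {bh / h:.6f}")
    with open(label_path, "w") as f:
        f.write("\n".join(lines))

label_image("box_0001.jpg", "box_0001.txt")  # hypothetical image of the stackable box
```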
Informatics Inside : experience(IT) : Informatik-Konferenz an der Hochschule Reutlingen, 8. Mai 2019
(2019)
The student conference Informatics Inside is now taking place for the eleventh time. As part of the master's programme Human-Centered Computing, master's students independently organize a full-fledged scientific conference.
Computer science is still subject to constant change. Our students contribute to this change by solving current problems with innovative concepts in their scientific specialization. At the same time, computer science is not always immediately visible nowadays; we notice it whenever something does not work as intended. This year's motto of Informatics Inside is experience(IT); disguised as a function call :).
Automatic classification of rotating machinery defects using Machine Learning (ML) algorithms
(2020)
Electric machines and motors have been the subject of enormous development. New concepts in design and control allow their applications to expand into different fields. Vast amounts of data have been collected in almost any domain of interest. They can be static; that is to say, they represent real-world processes at a fixed point in time. Vibration analysis and vibration monitoring, including detecting and monitoring anomalies in vibration data, are widely used techniques for predictive maintenance in high-speed rotating machines. However, accurately identifying the presence of a bearing fault can be challenging in practice, especially when the failure is still at its incipient stage and the signal-to-noise ratio of the monitored signal is small. The main objective of this work is to design a system that analyzes the vibration signals of a rotating machine, based on data recorded from sensors, in the time/frequency domain. As a consequence of this substantial interest, there has been a dramatic increase in applying Machine Learning (ML) algorithms to this task. An ML system is used to classify and detect abnormal behavior and to recognize the different levels of machine operation modes. The proposed solution can be deployed as predictive maintenance for Industry 4.0.
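One possible pipeline, sketched with assumed features and placeholder data: frequency-domain features are extracted from a vibration window and fed to a classifier that distinguishes operating modes and fault levels. Feature set, sampling rate and model choice are illustrative, not the paper's setup.

```python
# Vibration-window features plus a random-forest classifier as a simple baseline.
import numpy as np
from numpy.fft import rfft, rfftfreq
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def features(window, fs=12_000):
    spectrum = np.abs(rfft(window))
    freqs = rfftfreq(len(window), d=1.0 / fs)
    rms = np.sqrt(np.mean(window ** 2))
    return [
        window.std(),                                   # overall vibration level
        rms,                                            # RMS
        np.max(np.abs(window)) / (rms + 1e-9),          # crest factor
        freqs[np.argmax(spectrum)],                     # dominant frequency
        spectrum[freqs > 1_000].sum() / (spectrum.sum() + 1e-9),  # high-frequency energy share
    ]

# Placeholder data: raw windows with known condition labels.
rng = np.random.default_rng(42)
windows = rng.normal(size=(200, 2048))
labels = rng.integers(0, 3, size=200)                   # e.g. healthy / incipient / severe

X = np.array([features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```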
Power line communications (PLC) reuse the existing power-grid infrastructure for the transmission of data signals. As power line communication technology does not require a dedicated network setup, it can be used to connect a multitude of sensors and Internet of Things (IoT) devices. Those IoT devices could be deployed in homes, streets, or industrial environments for sensing and related control applications. The key challenge faced by future IoT-oriented narrowband PLC networks is to provide a high quality of service (QoS). In fact, the power line channel has traditionally been considered too hostile. Combined with the scarcity of spectrum and interference from other users, this requirement calls for means to radically increase spectral efficiency and to improve link reliability. However, the research activities carried out in the last decade have shown that PLC is a suitable technology for a large number of applications. Motivated by the relevant impact of PLC on IoT, this paper proposes a cooperative spectrum allocation scheme for IoT-oriented narrowband PLC networks using an iterative water-filling algorithm.
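A sketch of the classic iterative water-filling idea for multi-user spectrum allocation: each user repeatedly water-fills its power budget over the sub-carriers while treating the other users' current allocations as noise. Channel gains, budgets and the simplified cross-talk model are toy assumptions rather than measured PLC channels.

```python
# Iterative water-filling power allocation over sub-carriers for several users.
import numpy as np

def water_fill(inv_gain, budget):
    """Single-user water-filling: allocate `budget` over channels with
    noise-to-gain ratio `inv_gain` (1/SNR per sub-carrier)."""
    order = np.argsort(inv_gain)
    for k in range(len(inv_gain), 0, -1):
        level = (budget + inv_gain[order[:k]].sum()) / k
        if level > inv_gain[order[k - 1]]:
            break
    return np.maximum(level - inv_gain, 0.0)

def iterative_water_filling(gains, noise, budgets, iters=50):
    n_users, n_sub = gains.shape
    power = np.zeros((n_users, n_sub))
    for _ in range(iters):
        for u in range(n_users):
            # Toy cross-talk model: other users' received power is added to the noise.
            interference = noise + (gains * power).sum(axis=0) - gains[u] * power[u]
            power[u] = water_fill(interference / gains[u], budgets[u])
    return power

rng = np.random.default_rng(1)
gains = rng.uniform(0.1, 1.0, size=(3, 16))   # 3 users, 16 sub-carriers (toy values)
power = iterative_water_filling(gains, noise=0.05, budgets=np.array([1.0, 1.0, 1.0]))
print(power.round(3))
```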
Our paper gives first answers to a fundamental question: how can the design of architectures of intelligent digital systems and services be accomplished methodologically? Intelligent systems and services are the goals of many current digitalization efforts and part of massive digital transformation efforts based on digital technologies. Digital systems and services are the foundation of digital platforms and ecosystems. Digitalization disrupts existing businesses, technologies, and economies and promotes the architecture of open environments. This has a strong impact on new value-added opportunities and the development of intelligent digital systems and services. Digital technologies such as artificial intelligence, the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, and social enterprise network systems are important enablers of digitalization. The current publication presents our research on the architecture of intelligent digital ecosystems, products and services influenced by the service-dominant logic. We present original methodological extensions and a new reference model for digital architectures with an integral service and value perspective to model intelligent systems and services that effectively align digital strategies and architectures, with artificial intelligence as a main element to support intelligent digitalization.
This book highlights new trends and challenges in intelligent systems, which play an important part in the digital transformation of many areas of science and practice. It includes papers offering a deeper understanding of the human-centred perspective on artificial intelligence, of intelligent value co-creation, ethics, value-oriented digital models, transparency, and intelligent digital architectures and engineering to support digital services and intelligent systems, the transformation of structures in digital businesses and intelligent systems based on human practices, as well as the study of interaction and the co-adaptation of humans and systems. All papers were originally presented at the International KES Conference on Human Centred Intelligent Systems 2020 (KES HCIS 2020), held on June 17–19, 2020, in Split, Croatia.
Serverless computing is an emerging cloud computing paradigm with the goal of freeing developers from resource management issues. As of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other. These workloads benefit from on-demand and elastic compute resources as well as per-function billing. However, it is still an open research question to which extent parallel applications, which most often comprise complex coordination and communication patterns, can benefit from serverless computing.
In this paper, we introduce serverless skeletons for parallel cloud programming to free developers from both parallelism and resource management issues. In particular, we investigate the well-known and widely used farm skeleton, which supports the implementation of a wide range of applications. To evaluate our concepts, we present a prototypical development and runtime framework and implement two applications based on it: numerical integration and hyperparameter optimization, a commonly applied technique in machine learning. We report on performance measurements for both applications and discuss the usefulness of our approach.
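A conceptual sketch of the farm skeleton: a pool of identical workers applies the same function to independent task chunks and the partial results are combined. A local thread pool stands in here for parallel serverless function invocations; the framework described above would dispatch each chunk to a cloud function instead.

```python
# Farm skeleton sketch: same worker function applied to independent chunks in parallel.
from concurrent.futures import ThreadPoolExecutor
import math

def farm(worker, tasks, degree=8, combine=sum):
    """Apply `worker` to every task with `degree` parallel workers and merge the results."""
    with ThreadPoolExecutor(max_workers=degree) as pool:
        return combine(pool.map(worker, tasks))

# Example: numerical integration of f over [0, 1], split into independent chunks,
# mirroring the first evaluation application mentioned above.
def integrate_chunk(bounds, n=100_000, f=lambda x: math.sqrt(1 - x * x)):
    a, b = bounds
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule

chunks = [(i / 16, (i + 1) / 16) for i in range(16)]
print("pi ≈", 4 * farm(integrate_chunk, chunks))
```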
Companies are continuously changing their strategy, processes, and information systems to benefit from the digital transformation. Controlling the digital architecture and its governance is a fundamental goal. Enterprise Governance, Risk and Compliance (GRC) systems are vital for managing the digital risks threatening modern enterprises from many different angles. The most significant constituent of GRC systems is the definition of controls that are implemented on different layers of a digital Enterprise Architecture (EA). As part of the compliance aspect of GRC, the effectiveness of these controls is assessed and reported to relevant management bodies within the enterprise. In this paper, we present a metamodel which links controls to the affected elements of a digital EA and supplies a way of expressing associated assessment techniques and results. We complement the metamodel with an expository instantiation of a control compliance cockpit in an international insurance enterprise.