Informatik
Document Type
- Conference proceeding (570)
Is part of the Bibliography
- yes (570)
Institute
- Informatik (570)
- Technik (2)
Publisher
- Springer (119)
- Hochschule Reutlingen (102)
- IEEE (83)
- Gesellschaft für Informatik e.V. (51)
- Association for Computing Machinery (34)
- IARIA (19)
- RWTH Aachen (15)
- Association for Information Systems (12)
- SciTePress (12)
- Deutsche Gesellschaft für Computer- und Roboterassistierte Chirurgie e.V. (9)
- University of Hawai'i at Manoa (8)
- Università Politecnica delle Marche (8)
- IOP Publishing (5)
- SPIE. The International Society for Optical Engineering (5)
- University of Zagreb (5)
- Curran Associates Inc. (4)
- OpenProceedings (4)
- University of Hawaii at Manoa (4)
- Deutsche Gesellschaft für Computer- und Roboterassistierte Chirurgie e. V. (3)
- EuroMed Press (3)
- Universität Konstanz (3)
- Academic Conferences International (2)
- American Marketing Association (2)
- GMDS e.V. (2)
- HTWG Konstanz (2)
- IADIS Press (2)
- IBM Research Division (2)
- International Society for Photogrammetry and Remote Sensing (2)
- Smart Home & Living Baden-Württemberg e.V. (2)
- The Association for Computing Machinery, Inc. (2)
- Academic Conferences International Limited (1)
- American Institute of Physics (1)
- Association for Computing Machinery ACM (1)
- CIDR (1)
- Cambridge University Press (1)
- Copenhagen Business School (1)
- Cuvillier Verlag (1)
- DGMP (1)
- EMAC (1)
- Ed2.0Work (1)
- Elektronikpraxis, Vogel Business Media GmbH & Co. KG (1)
- Elsevier (1)
- Eurographics Association (1)
- German Medical Science Publishing House (1)
- IADIS (1)
- International Association for Development of the Information Society (1)
- Johannes Kepler University Linz (1)
- Lund University (1)
- Morressier (1)
- NextMed (1)
- SISSA (1)
- Shaker Verlag (1)
- The Association for Computing Machinery (1)
- University of Belgrade (1)
- University of Portsmouth (1)
- University of Zagreb Faculty of Organization and Informatics (1)
- Universität Trier (1)
- Universität des Saarlandes (1)
- libreriauniversitaria.it.edizioni (1)
- vwh Verlag Werner Hülsbusch (1)
The internet of things, enterprise social networks, adaptive case management, mobility systems, big data analytics, and cloud environments are emerging to support smart connected (i.e., digital) products and services and the digital transformation. Biological metaphors of living and adaptable ecosystems currently provide the logical foundation for resilient run-time environments with service-oriented digitization architectures and for self-optimizing intelligent business services and related distributed information systems. We are investigating mechanisms for the flexible adaptation and evolution of information systems with digital architectures in the context of the ongoing digital transformation. The goal is to support flexible and agile transformations of both the business and its related information systems through the adaptation and dynamic evolution of their digital architectures. The present research paper investigates mechanisms of decision analytics for digitization architectures, putting a spotlight on micro-granular Internet of Things architectures, by extending original enterprise architecture reference models with digitization architectures and their multi-perspective architectural decision management.
Social networks, smart portable devices, and the Internet of Things (IoT), based on technologies such as big data analytics and cloud services, are emerging to support flexible connected products and agile services in the new wave of digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing and resilient run-time environments for intelligent business services and related distributed information systems. We are extending Enterprise Architecture (EA) with mechanisms for the flexible adaptation and evolution of information systems having distributed IoT and other micro-granular digital architectures to support the next generation of digitized products, services, and processes. Our aim is to support flexibility and agile transformation of both IT and business capabilities through adaptive digital enterprise architectures. The present research paper additionally investigates decision mechanisms in the context of multi-perspective explorations of enterprise services and Internet of Things architectures by extending original enterprise architecture reference models with state-of-the-art elements for architectural engineering and digitization.
The characteristics of modern computing and storage technologies fundamentally differ from those of traditional hardware, and there is a need to optimally leverage their performance, endurance, and energy consumption characteristics. Therefore, existing architectures and algorithms in modern high-performance database management systems have to be redesigned and advanced. Multi-Version Concurrency Control (MVCC) approaches in database management systems maintain multiple physically independent tuple versions. Snapshot isolation approaches enable high parallelism and concurrency in workloads with an almost serializable consistency level. Modern hardware technologies benefit from multi-version approaches, yet indexing multi-version data on modern hardware is still an open research area. In this paper, we provide a survey of popular multi-version indexing approaches and an extended scope of high-performance single-version approaches. An optimal multi-version index structure balances the look-up efficiency for tuple versions visible to transactions against the index maintenance effort, across different workloads on modern hardware technologies.
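To make the surveyed mechanism concrete, here is a minimal sketch of a snapshot-isolation visibility check over a tuple version chain, the operation a multi-version index ultimately has to support. The Version layout and timestamp semantics are illustrative assumptions, not the design of any specific system discussed above.

```python
# Minimal sketch of a snapshot-isolation visibility check over a tuple
# version chain, as maintained by an MVCC storage engine. Field names
# (begin_ts, end_ts) are illustrative, not a specific system's design.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Version:
    payload: dict
    begin_ts: int                      # commit timestamp of the creating tx
    end_ts: Optional[int] = None       # commit timestamp of the superseding tx
    prev: Optional["Version"] = None   # older version in the chain

def visible_version(newest: Optional[Version], snapshot_ts: int) -> Optional[Version]:
    """Walk the chain from newest to oldest and return the version that was
    committed before the snapshot and not yet superseded at snapshot time."""
    v = newest
    while v is not None:
        if v.begin_ts <= snapshot_ts and (v.end_ts is None or v.end_ts > snapshot_ts):
            return v
        v = v.prev
    return None
```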
Automatic segmentation is essential for brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic detection of glioma boundaries in pre-operative MRI. It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorff distances of 5.23, 13.54, and 12.05 for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 values of 2.66, 1.72, and 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively.
Enterprises are transforming their strategy, culture, processes, and information systems to expand their digitalization efforts or to strive for digital leadership. The digital transformation profoundly disrupts existing enterprises and economies. In recent years, many new business opportunities have appeared that exploit the potential of the Internet and related digital technologies: the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, such as the Internet of Things, microservices, or other micro-granular elements. Architecting micro-granular structures has a substantial impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open world of living software and system architectures defines the context for flexible and evolutionary software approaches, which are essential to enable the digital transformation. In this paper, we reveal multiple perspectives of digital enterprise architecture and decisions to effectively support value- and service-oriented software systems for intelligent digital services and products.
An index in a Multi-Version DBMS (MV-DBMS) has to reflect different tuple versions of a single data item. Existing approaches follow the paradigm of logically separating the tuple version data from the data item; e.g., an index is only allowed to return at most one version of a single data item (while it may return multiple data items that match a search criterion). Hence, to determine the valid (and therefore visible) tuple version of a data item, the MV-DBMS first fetches all tuple versions that match the search criterion and subsequently filters the visible versions using visibility checks. This involves storage I/O accesses to tuple versions that need not be fetched at all. In this vision paper we present the Multi-Version Index (MV-IDX) approach, which allows index-only visibility checks that significantly reduce the amount of storage I/O accesses as well as the index maintenance overhead. The MV-IDX achieves significantly lower response times and higher transactional throughput on OLTP workloads.
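A hedged sketch of the idea behind index-only visibility checks follows: if the index entry itself carries the version timestamps, a lookup can filter invisible versions without fetching base tuples. The entry layout is an illustrative assumption; the actual MV-IDX design may differ.

```python
# Sketch: index entries carry version timestamps, so visibility is decided
# in the index itself, without base-table I/O. Layout is illustrative only.
from typing import NamedTuple, Optional

class IndexEntry(NamedTuple):
    key: str
    rid: int                  # record id of the tuple version on storage
    begin_ts: int
    end_ts: Optional[int]

def index_only_lookup(entries: list[IndexEntry], key: str, snapshot_ts: int) -> list[int]:
    """Return record ids of versions visible to the snapshot; only these
    need to be fetched from storage afterwards."""
    return [e.rid for e in entries
            if e.key == key
            and e.begin_ts <= snapshot_ts
            and (e.end_ts is None or e.end_ts > snapshot_ts)]
```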
Modern mixed (HTAP) workloads execute fast update transactions and long-running analytical queries on the same dataset and system. In multi-version (MVCC) systems, such workloads result in many short-lived versions and long version chains as well as increased and frequent maintenance overhead.
Consequently, the index pressure increases significantly. Firstly, the frequent modifications cause frequent creation of new versions, yielding a surge in index maintenance overhead. Secondly, and more importantly, index scans incur extra I/O overhead to determine which of the resulting tuple versions are visible to the executing transaction (visibility check), since current designs only store version/timestamp information in the base table, not in the index. An index-only visibility check is critical for HTAP workloads on large datasets.
In this paper we propose the Multi-Version Partitioned B-Tree (MV-PBT) as a version-aware index structure supporting index-only visibility checks and flash-friendly I/O patterns. The experimental evaluation indicates a 2x improvement for analytical queries and 15% higher transactional throughput under HTAP workloads. MV-PBT offers 40% higher transactional throughput compared to WiredTiger's LSM-Tree implementation under YCSB.
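The following sketch illustrates the partitioned B-Tree principle MV-PBT builds on: an artificial leading key column (the partition number) lets one ordered structure hold an append-friendly write partition plus older, sealed partitions, so new versions are written out sequentially. The structure below is a deliberately simplified stand-in for a real B-Tree.

```python
# Simplified stand-in for a partitioned B-Tree: the partition number acts
# as an artificial leading key column. Details are illustrative only.
import bisect

class PartitionedTree:
    def __init__(self):
        self.entries = []            # sorted list of (partition_no, key, value)
        self.current_partition = 0

    def insert(self, key, value):
        # New versions always land in the current write partition.
        bisect.insort(self.entries, (self.current_partition, key, value))

    def seal_partition(self):
        # Freeze the write partition (e.g. when its buffer fills up);
        # subsequent inserts go to a fresh partition appended at the end.
        self.current_partition += 1

    def lookup(self, key):
        # Scan partitions newest-first, mirroring version ordering.
        return [v for p, k, v in reversed(self.entries) if k == key]

tree = PartitionedTree()
tree.insert("acct-1", "v1")
tree.seal_partition()
tree.insert("acct-1", "v2")
print(tree.lookup("acct-1"))   # ['v2', 'v1'] -- newest version first
```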
In this tutorial we perform a cross-cut analysis of database storage management from the perspective of modern storage technologies. We argue that the design of modern DBMS and the architecture of modern storage technologies are not aligned with each other. Moreover, the majority of systems rely on a complex, multi-layer, compatibility-oriented storage stack. The result is needlessly suboptimal DBMS performance, inefficient utilization, and significant write amplification due to outdated abstractions and interfaces. We therefore focus on the concept of native storage: storage that is operated without intermediate abstraction layers over an open native storage interface and is directly controlled by the DBMS.
Data analytics tasks on large datasets are computationally intensive and often demand the compute power of cluster environments. Steps such as data cleansing, preparation, dataset characterization, and the computation of statistics or metrics are frequent; they are mostly performed ad hoc, in an explorative manner, and mandate low response times. Yet such steps are I/O intensive and typically very slow due to low data locality and inadequate interfaces and abstractions along the stack, which typically result in prohibitively expensive scans of the full dataset and transformations at interface boundaries.
In this paper, we examine R as an analytical tool managing large persistent datasets in Ceph, a widespread cluster file system. We propose nativeNDP, a framework for Near-Data Processing that pushes down primitive R tasks and executes them in-situ, directly within the storage device of a cluster node. Across a range of data sizes, we show that nativeNDP is more than an order of magnitude faster than other pushdown alternatives.
Hypermedia as the Engine of Application State (HATEOAS) is one of the core constraints of REST. It refers to the concept of embedding hyperlinks into the response of a queried or manipulated resource to show a client possible follow-up actions and transitions to related resources. This concept thus aims to provide a client with navigational support when interacting with a Web-based application. Although HATEOAS should be implemented by any Web-based API claiming to be RESTful, API providers tend to offer service descriptions instead of embedding hyperlinks into responses. Rather than relying on navigational support, a client developer has to read the service description and identify the resources and URIs that are relevant for interacting with the API. In this paper, we introduce an approach that identifies transitions between resources of a Web-based API by systematically analyzing the service description alone. We devise an algorithm that automatically derives a URI model from the service description and then analyzes the payload schemas to identify feasible values for substituting the path parameters in URI Templates. We implement this approach as a proxy application that injects hyperlinks representing transitions into the response payload of a queried or manipulated resource. The result is HATEOAS-like navigational support through an API. Our first prototype operates on service descriptions in the OpenAPI format. We evaluate our approach using ten real-world APIs from different domains and discuss the results as well as the observations captured in these tests.
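The core step can be sketched as follows: derive URI templates from an OpenAPI-style path listing and substitute their path parameters with values found in a response payload, yielding candidate hyperlinks to inject. The toy spec and payload are invented for illustration; real OpenAPI documents require the full schema analysis described above.

```python
# Toy sketch of deriving injectable links: substitute URI-template path
# parameters with matching values from a response payload.
import re

# Invented example paths, standing in for a parsed OpenAPI description.
openapi_paths = {
    "/orders/{orderId}": ["get", "delete"],
    "/orders/{orderId}/items": ["get"],
}

def derive_links(payload: dict, paths: dict) -> list[str]:
    links = []
    for template in paths:
        params = re.findall(r"\{(\w+)\}", template)
        if params and all(p in payload for p in params):  # payload supplies every parameter
            uri = template
            for p in params:
                uri = uri.replace("{" + p + "}", str(payload[p]))
            links.append(uri)
    return links

# e.g. the payload of a queried order resource:
print(derive_links({"orderId": 42, "total": 9.99}, openapi_paths))
# -> ['/orders/42', '/orders/42/items']
```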
Multi-versioning and MVCC are the foundations of many modern DBMSs. Under mixed workloads and large datasets, the creation of the transactional snapshot can become very expensive, as long-running analytical transactions may request old versions, residing on cold storage, for reasons of transactional consistency. Furthermore, analytical queries operate on cold data stored on slow persistent storage. Due to the poor data locality, snapshot creation may cause massive data transfers and thus lower performance. Given the current trend towards computational storage and near-data processing, it has become viable to perform such operations in-storage to reduce data transfers and improve scalability. neoDBMS is a DBMS designed for near-data processing and computational storage. In this paper, we demonstrate how neoDBMS performs snapshot computation in-situ. We showcase different interactive scenarios in which neoDBMS outperforms PostgreSQL 12 by up to 5×.
Massive data transfers in modern key/value stores, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper we introduce nKV, a key/value store utilizing native computational storage and near-data processing. On the one hand, nKV can directly control the data and computation placement on the underlying storage hardware. On the other hand, nKV propagates the data formats and layouts to the storage device, where software and hardware parsers and accessors are implemented. Both allow NDP operations to execute in a host-intervention-free manner, directly on physical addresses, and thus better utilize the underlying hardware. Our performance evaluation is based on executing traditional KV operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4×-2.7× better performance on real hardware (the COSMOS+ platform).
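A conceptual sketch of the code-to-data idea behind nKV: instead of shipping every record to the host and filtering there, the host ships a predicate to the storage device, which scans in-situ and returns only the matches. The Device class and its interface are invented stand-ins for the computational storage hardware.

```python
# Invented stand-in for a computational storage device holding KV records.
class Device:
    def __init__(self, records):
        self._records = records  # conceptually resides on device flash

    def scan_host_side(self):
        # Data-to-code: every record crosses the host interconnect.
        return list(self._records)

    def scan_ndp(self, predicate):
        # Code-to-data: the predicate executes in-situ on the device;
        # only matching records are transferred to the host.
        return [r for r in self._records if predicate(r)]

dev = Device([{"key": i, "degree": i % 7} for i in range(10_000)])
hits = dev.scan_ndp(lambda r: r["degree"] == 0)  # host receives ~1/7 of the data
```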
Flash SSDs are omnipresent as database storage, and HDD replacement is seamless since Flash SSDs implement the same legacy hardware and software interfaces to enable backward compatibility. Yet the price paid is high, as backward compatibility masks the native behaviour, incurs significant complexity, and decreases I/O performance, making it non-robust and unpredictable. Flash SSDs are black boxes. Although DBMS have ample mechanisms to control hardware directly and utilize the performance potential of Flash memory, the legacy interfaces and black-box architecture of Flash devices prevent them from doing so.
In this paper we demonstrate NoFTL, an approach that enables native Flash access and integrates parts of the Flash-management functionality into the DBMS, yielding a significant performance increase and a simplification of the I/O stack. NoFTL is implemented on real hardware based on the OpenSSD research platform. The contributions of this paper include: (i) a description of the NoFTL native Flash storage architecture; (ii) its integration in Shore-MT; and (iii) a performance evaluation of NoFTL on a real Flash SSD and on an online data-driven Flash emulator under TPC-B, TPC-C, TPC-E, and TPC-H workloads. The performance evaluation results indicate an improvement of at least 2.4x on real hardware over conventional Flash storage, as well as better utilisation of native Flash parallelism.
Modern persistent key/value stores are designed to meet the demand for high transactional throughput and high data ingestion rates. Still, they rely on a backwards-compatible storage stack and abstractions to ease space management, foster seamless proliferation, and ease system integration. Their dependence on the traditional I/O stack has a negative impact on performance, causes unacceptably high write amplification, and limits storage longevity.
In the present paper we present NoFTL-KV, an approach that results in a lean I/O stack, integrating physical storage management natively into the key/value store. NoFTL-KV eliminates backwards compatibility, allowing the key/value store to directly consume the characteristics of modern storage technologies. NoFTL-KV is implemented under RocksDB. The performance evaluation under LinkBench shows that NoFTL-KV improves transactional throughput by 33%, while response times improve by up to 2.3x. Furthermore, NoFTL-KV reduces write amplification 19x and improves storage longevity by approximately the same factor.
Sleep analysis using a polysomnography system is difficult and expensive, which is why we suggest a non-invasive and unobtrusive measurement. Very few people want cables or devices attached to their bodies during sleep. The proposed approach is to implement a monitoring system that does not bother the subject. The idea is therefore a non-invasive monitoring system based on detecting the pressure distribution. The system should be able to measure, through the mattress, the pressure differences that occur during a single heartbeat and during breathing. It consists of two blocks: signal acquisition and signal processing. The whole technology should be economical enough to be affordable for every user. As a result, preprocessed data is obtained for further detailed analysis, using different filters for heartbeat and respiration detection. In the initial filtering stage, Butterworth filters are used.
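As a sketch of the described preprocessing, the following separates a raw pressure signal into a respiration band and a heartbeat band with Butterworth band-pass filters. The sampling rate and cutoff frequencies are assumptions for illustration, not values reported in the paper.

```python
# Band-pass Butterworth filtering of a raw pressure signal into a
# respiration band (~0.1-0.5 Hz) and a heartbeat band (~0.7-3 Hz).
# Sampling rate and cutoffs are assumed values, not from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0  # assumed sampling rate in Hz

def bandpass(signal, low_hz, high_hz, order=4):
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)  # zero-phase filtering avoids phase distortion

raw = np.random.randn(int(60 * fs))          # stand-in for 1 min of pressure data
respiration = bandpass(raw, 0.1, 0.5)
heartbeat = bandpass(raw, 0.7, 3.0)
```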
A sleep study can be used to assess sleep quality and general bed behaviour. These results can be helpful for regulating sleep and recognizing different human sleeping disorders. Compared to the leading standard measuring system, polysomnography (PSG), the system proposed in this work is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides, these methods not only decrease practicality due to the process of having to put them on, but they are also very expensive. The system proposed in this paper classifies respiration and body movement with only one type of sensor and in a non-invasive way. The sensor used is a pressure sensor, which is low cost and can be used for commercial purposes. The system was tested in an experiment that recorded the sleep process of a subject. The recordings showed excellent results in the classification of breathing rate and body movements.
The recovery of our body and brain from fatigue depends directly on the quality of sleep, which can be determined from the results of a sleep study. The classification of sleep stages is the first step of such a study and involves the measurement of bio-vital data and its further processing. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors that enables sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus with address arbitration. An essential difference of this system compared to other approaches is the innovative way of placing the sensors underneath the mattress. This property facilitates continuous use of the system without any perceptible influence on the accustomed bed. The system was tested in experiments recording the sleep of several healthy young subjects. The first results indicate the potential to capture not only respiration rate and body movement but also heart rate.
Near-Data Processing (NDP) is a key computing paradigm for reducing the ever-growing time and energy costs of data transport versus computation. With their flexibility, FPGAs are an especially suitable compute element for NDP scenarios. Even more promising is the exploitation of novel and future non-volatile memory (NVM) technologies for NDP, which aim to achieve DRAM-like latencies and throughputs while providing large-capacity non-volatile storage.
Experimentation in using FPGAs in such NVM-NDP scenarios has been hindered, though, by the fact that the NVM devices/FPGA boards are still very rare and/or expensive. It thus becomes useful to emulate the access characteristics of current and future NVMs using off-the-shelf DRAMs. If such emulation is sufficiently accurate, the resulting FPGA-based NDP computing elements can be used for actual full-stack hardware/software benchmarking, e.g., when employed to accelerate a database.
For this purpose, we present NVMulator, an open-source, easy-to-use hardware emulation module that can be seamlessly inserted between the NDP processing elements on the FPGA and a conventional DRAM-based memory system. We demonstrate that, with suitable parametrization, the emulated NVM can come very close to the performance characteristics of actual NVM technologies, specifically Intel Optane. We achieve 0.62% and 1.7% accuracy for cache-line-sized read and write accesses, respectively, while utilizing only 0.54% of the LUT logic resources on a Xilinx/AMD AU280 UltraScale+ FPGA board. We consider both file-system and database access patterns, examining the operation of the RocksDB database when running on real or emulated Optane-technology memories.
Urban platforms are essential for smart and sustainable city planning and operation. Today they are mostly designed to handle and connect large urban data sets from very different domains. Modelling and optimisation functionalities are usually not part of a city's software infrastructure; however, they are considered crucial for developing transformation scenarios and for optimised smart city operation. This work discusses software architecture concepts for such urban platforms and presents case study results on modelling the building sector, including urban data analysis and visualisation. Results from a case study in New York demonstrate the implementation status.
The digitalization of products and services commonly causes substantial changes in the business models, operations, organization structures, and IT infrastructures of enterprises. Motivated by experiences and observations from digitalization projects, this paper investigates the effects of digitalization on enterprise architectures (EA). EA models serve as representations of the business, information system, and technical aspects of an enterprise to support management and development. By comparing EA models before and after digitalization, the paper analyzes the kinds of changes visible in the EA model. The most important finding is that newly created digitized products and the associated product and enterprise architectures are no longer properly integrated into the overall architecture and even exist in parallel. The focus of this work is therefore on exposing these parallel architectures and deriving proposals for better integration.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible.
The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically spread across multiple layers in traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under NoFTL-KV and the COSMOS hardware platform.
In summary, we believe that current “sleep monitoring” consumer devices on the market must undergo a more robust validation process before being made available to and distributed among the general public. This is especially noteworthy as there have been first reports in the literature that inaccurate feedback from such consumer devices can worry subjects and may even compromise the well-being of the user.
Software Process Improvement (SPI) programs have been implemented, inter alia, to improve the quality and speed of software development. SPI addresses many aspects, ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state of the art in SPI from a general perspective, we observed Software Quality Management (SQM) to be of particular relevance in SPI programs. In this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of SQM (including testing). From the main study's result set, 92 papers were selected for an in-depth systematic review to study the contributions and to develop an initial picture of how these topics are addressed in SPI. Our findings show a fairly pragmatic contribution set in which different solutions are proposed, discussed, and evaluated. Among others, our findings indicate a certain reluctance towards standard quality or (test) maturity models and a strong focus on custom review, testing, and documentation techniques, whereas a set of five selected improvement measures is almost equally addressed.
A software process is the game plan for organizing project teams and running projects. Yet it is still a challenge to select the development approach appropriate for the respective context. A multitude of development approaches compete for the users' favor, but there is no silver bullet serving all possible setups. Moreover, recent research as well as experience from practice shows companies combining different development approaches to assemble the one that best fits the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The paper at hand provides insights into the HELENA study, with which we aim to investigate the use of “Hybrid dEveLopmENt Approaches in software systems development”. We present the survey design and initial findings from the survey's test runs, and we outline the next steps towards the full survey.
The digital transformation of our society changes the way we live, work, learn, communicate, and collaborate. This disruptive change drives current and future information processes and systems, which have been important business enablers in the context of digitization for years. Our aim is to support flexibility and agile transformations of both business domains and the related information technology through more flexible enterprise information systems based on the adaptation and evolution of digital architectures. The present research paper investigates the continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, such as microservices and the Internet of Things, as part of a newly composed digital architecture. To integrate micro-granular architecture models into living architectural model versions, we extend enterprise architecture reference models with state-of-the-art elements for agile architectural engineering to support digital products, services, and processes.
A clinically useful system for individual continuous health data monitoring needs an architecture that takes into account all relevant medical and technical conditions. The requirements for a health app to support such a system are collected, and a vendor-independent architecture is designed that allows the collection of vital data from arbitrary wearables using a smartphone. A prototypical implementation of the main scenario shows the feasibility of the approach.
With the strong growth of car-sharing offerings and the large number of fleet vehicles in companies, the number of driver's-logbook apps is increasing as well. In most mobile logbook applications, the user has to enter the odometer reading manually, which has a negative effect on usability and user experience. In addition, every minute the driver spends in the rented car is precious. For these reasons, a solution is presented here in which the odometer reading of a Mercedes-Benz A-Class is read out automatically via the OBD port using the CAN interface "ISI b2air" and sent via Bluetooth to the logbook app of Berger Elektronik GmbH. To this end, the communication of the diagnostic tester with the vehicle is recorded using the "ISI b2app" software. The CAN messages are then analyzed and filtered with respect to the odometer reading. The corresponding request for obtaining the odometer reading is implemented in the program code of the Berger logbook app so that the app can read out the odometer value on its own.
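For illustration only, the following python-can sketch issues a standardized OBD-II odometer request (service 0x01, PID 0xA6 per SAE J1979) on a SocketCAN channel. That this PID, with its 0.1 km scaling, is supported is an assumption; the work described above instead replays vehicle-specific diagnostic requests captured with "ISI b2app", which need not match the standardized PID.

```python
# Hedged sketch: query the odometer over OBD-II/CAN with python-can.
# PID 0xA6 and the 0.1 km scaling follow SAE J1979; vehicle support
# for this PID is assumed, not verified for the A-Class in the paper.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")
# Functional OBD-II request: length 2, service 0x01, PID 0xA6, padded to 8 bytes.
bus.send(can.Message(arbitration_id=0x7DF,
                     data=[0x02, 0x01, 0xA6, 0, 0, 0, 0, 0],
                     is_extended_id=False))
resp = bus.recv(timeout=1.0)
# Positive response: [len, 0x41, 0xA6, A, B, C, D, ...] from e.g. 0x7E8.
if resp is not None and len(resp.data) >= 7 and resp.data[2] == 0xA6:
    a, b, c, d = resp.data[3:7]
    km = ((a << 24) | (b << 16) | (c << 8) | d) / 10.0
    print(f"odometer: {km:.1f} km")
```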
OR-Pad - development of a prototype for a sterile information display at the surgical site : meeting abstract
(2019)
Background: Information from the patient record or from imaging is often displayed only on monitors placed rather far away from the operating field, outside the surgeon's ergonomic line of sight. As a result, relevant information is overlooked or its informational potential cannot be fully exploited. Notes brought along in paper form remain outside the sterile area during the operation and are therefore not readily accessible to the surgeon. For intraoperative entries in the surgical documentation, the surgeon likewise depends on the help of the assisting staff. The additional communication paths cause extra personnel and time expenditure, and the potential for errors increases. The application-oriented research project OR-Pad - use of portable information displays in the operating room - is intended to improve the surgeon's information flow. The idea arose from the clinical routine of the anatomy and urology departments of the University Hospital Tübingen and is now being further developed into a high-fidelity prototype at Reutlingen University, funded by the Ministry of Science, Research and the Arts Baden-Württemberg and the European Regional Development Fund.
Objective: The goal of the OR-Pad project is to display the clinically relevant information of the current moment of an operation in the immediate vicinity of the surgeon. The system is intended to optimize the information flow between the intervention and its preparation and follow-up. The surgeon should be able to select relevant information in advance, such as current X-ray images or personal notes, for intraoperative display; this information is then shown on a sterile information display at the surgical site. Its positioning should enable an ergonomic line of sight as well as direct interaction with the system. Context-relevant information should be provided automatically, based on the current course of the operation, through the development of situation recognition. Optimizing the information flow also includes supporting the surgical documentation: during the intervention, entries such as timestamps or intraoperative images should be created manually by the surgeon as well as automatically by the system. After the intervention, the surgical documentation should be generated from these entries, making the process both higher in quality and more time-efficient.
Method: To achieve this goal, the clinical requirements are first specified and transferred into a requirements document. For this purpose, interviews and observations during several interventions are carried out. Following the user-centered design process, personas and usage scenarios are drafted and evaluated with the clinical project partners in several iterations. An information architecture has to be built that allows the embedding of clinical information systems as well as of image and device data from the OR network. Situation recognition based on process models is to be developed to estimate the progress of the operation. Suitable holding mechanisms are to be used to mount the information display. The OR-Pad system is to be tested continuously in the teaching and research OR of Reutlingen University and coordinated with the clinical project partners in the spirit of agile product development. Finally, the functional prototype is to be tested and evaluated in the experimental ORs of the anatomy department in Tübingen.
Results: A first data collection by means of a contextual inquiry captured initial requirements for the OR-Pad system, from which a low-fidelity prototype resulted. The evaluation via expert interviews led to the second iteration, in which the concept was adapted according to the results. Through observations at the University Hospital Tübingen, further data was collected to create scenarios for the intraoperative use cases. Based on the requirements, a concept for the user interface was designed, which will be evaluated with the clinical project partners as the project proceeds.
Menopause is the permanent cessation of menstruation that occurs naturally as women age. The most frequent symptoms associated with the menopausal phases are mucosal dryness, increased weight and body fat, and changes in sleep patterns. Oral symptoms in menopause derived from reduced saliva flow can lead to dry mouth, ulcers, and alterations of taste and swallowing patterns. However, the oral health phenotype of postmenopausal women has not been characterized. The aim of this study was to determine the oral phenotype of postmenopausal women, including medical history, lifestyle, and oral assessment, through artificial intelligence algorithms. We enrolled 100 postmenopausal women attending the Dental School of the University of Seville. We collected an extensive questionnaire covering lifestyle, medication, and medical history and used an unsupervised k-means algorithm to cluster the data following standard features for data analysis. Our results showed that the main oral symptoms in our postmenopausal cohort were reduced salivary flow and periodontal disease. Relying on a classical assessment of the collected data, we might arrive at a biased evaluation of postmenopausal women. We therefore used artificial intelligence to analyze our data, extracting the main features and providing a reduced feature set that defines the oral health phenotype. We found 6 clusters with similar features, with medication affecting salivation and smoking as essential features for distinguishing the phenotypes. Thus, with this integrative approach we could obtain the main features of the differential oral health phenotypes of postmenopausal women, providing new tools to assess women in the dental clinic.
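A minimal sketch of the clustering step described above: standardize the collected features and partition the cohort with k-means into the reported six clusters. The feature matrix below is random placeholder data, not the study's variables.

```python
# k-means clustering of a standardized cohort feature matrix into 6
# clusters, mirroring the analysis described above. Data is a placeholder.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((100, 12))              # 100 participants, 12 assumed features
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X_std)
print(np.bincount(labels))             # cohort size per oral-health phenotype
```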
This paper presents a concurrency control mechanism that does not follow a ‘one concurrency control mechanism fits all needs’ strategy. With the presented mechanism, a transaction runs under several concurrency control mechanisms, and the appropriate one is chosen based on the accessed data. For this purpose, the data is divided into four classes based on its access type and usage (semantics). Class O (the optimistic class) implements a first-committer-wins strategy, class R (the reconciliation class) implements a first-n-committers-win strategy, class P (the pessimistic class) implements a first-reader-wins strategy, and class E (the escrow class) implements a first-n-readers-win strategy. Accordingly, the model is called O|R|P|E. Under this model, the TPC-C benchmark outperforms its runs under other CC mechanisms such as optimistic snapshot isolation.
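A hedged sketch of the central dispatch idea: each data item is assigned to one of the four classes, and the concurrency control strategy is selected per item at access time. The item-to-class mapping is an invented, TPC-C-flavored example; the strategy wording follows the abstract.

```python
# Per-item CC dispatch: class assignment decides which mechanism governs
# an access. Item names are invented examples; classes follow the paper.
ITEM_CLASS = {"order_row": "O", "stock_level": "R",
              "warehouse_row": "P", "customer_balance": "E"}

STRATEGY = {
    "O": "validate at commit, first committer wins",
    "R": "reconcile concurrent updates, first n committers win",
    "P": "lock on first read, first reader wins",
    "E": "escrow reservation, first n readers win",
}

def on_access(item: str) -> str:
    """Pick the CC strategy when a transaction touches an item."""
    return STRATEGY[ITEM_CLASS.get(item, "P")]  # pessimistic as safe default

print(on_access("stock_level"))  # -> reconcile concurrent updates, ...
```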
To assess the quality of a person's sleep, it is essential to examine sleep behaviour by identifying the several sleep stages, their durations, and the sleep cycles. The established gold-standard procedure for sleep stage scoring is overnight polysomnography (PSG) with the Rechtschaffen and Kales (R-K) method. Unfortunately, conducting PSG is time-consuming and unfamiliar for the subjects and might have an impact on the recorded data. To avoid the disadvantages of PSG, it is important to investigate low-cost home diagnostic systems further. For this purpose it is necessary to find bio-vital parameters suitable for classifying sleep stages without causing any physical impairment at the same time. Due to the promising results in several publications, we analyse existing methods for sleep stage classification based on the parameters body movement, heartbeat, and respiration. Our aim was to find different behaviour patterns in the several sleep stages. Therefore, the average values of 15 whole-night PSG recordings, obtained from the ‘DREAMS Subjects Database’, were analysed in the light of heartbeat, body movement, and respiration with 10 different methods.
Public transport maps are typically designed to support route-finding tasks for passengers while also providing an overview of stations, metro lines, and city-specific attractions. Most of these maps are designed as static representations, perhaps placed in a metro station or printed in a travel guide. In this paper we describe a dynamic, interactive public transport map visualization enhanced by additional views of dynamic passenger data at different levels of temporal granularity. Moreover, we provide extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We illustrate the usefulness of our interactive visualization by applying it to the railway system of Hamburg, Germany, taking into account the extra passenger data. As another indication of the usefulness of the interactively enhanced metro maps, we conducted a user experiment with 20 participants.
We present a multitask network that supports various deep neural network based pedestrian detection functions. Besides 2D and 3D human pose, it also supports body and head orientation estimation based on full-body bounding-box input, which eliminates the need for explicit face recognition. We show that the performance of 3D human pose estimation and orientation estimation is comparable to the state of the art. Since very few datasets exist for 3D human pose, and in particular for body and head orientation estimation based on full-body data, we further show the benefit of dedicated simulation data for training the network. The network architecture is relatively simple, yet powerful, and easily adaptable for further research and applications.
We investigated the influence of body shape and pose on the perception of physical strength and social power for male virtual characters. In the first experiment, participants judged the physical strength of varying body shapes, derived from a statistical 3D body model. Based on these ratings, we determined three body shapes (weak, average, and strong) and animated them with a set of power poses for the second experiment. Participants rated how strong or powerful they perceived virtual characters of varying body shapes that were displayed in different poses. Our results show that perception of physical strength was mainly driven by the shape of the body. However, the social attribute of power was influenced by an interaction between pose and shape. Specifically, the effect of pose on power ratings was greater for weak body shapes. These results demonstrate that a character with a weak shape can be perceived as more powerful when in a high-power pose.
Sleep is an important aspect of every human being's life. The average sleep duration for an adult is approximately 7 hours per day. Sleep is necessary to regenerate the physical and psychological state of a human, and bad sleep quality has a major impact on health status and can lead to various diseases. In this paper an approach is presented that uses long-term monitoring of vital data, gathered by a body sensor during the day and the night and supported by a mobile application connected to an analysis system, to estimate the sleep quality of its user and to give real-time recommendations for improving it. Actimetry and historical data are used to improve the individual recommendations, based on common techniques from machine learning and big data analysis.
Assistive environments are entering our homes faster than ever. However, various barriers still have to be broken down. One of the crucial points is the personalization of the offered services and the integration of assistive technologies into common objects and thus into the regular daily routine. Recognition of sleep patterns for a preliminary sleep study is one of the health services that can be performed in an undisturbing way. This article proposes a hardware system for the measurement of the bio-vital signals necessary for an initial sleep study in a non-obtrusive way. The first results confirm the potential of measuring breathing and movement signals with the proposed system.
The performance and scalability of modern data-intensive systems are limited by massive data movement of growing datasets across the whole memory hierarchy to the CPUs. Such traditional processor-centric DBMS architectures are bandwidth- and latency-bound. Processing-in-Memory (PIM) designs seek to overcome these limitations by integrating memory and processing functionality on the same chip. PIM targets near- or in-memory data processing, leveraging the greater in-situ parallelism and bandwidth.
In this paper, we introduce pimDB and provide an initial comparison of processor-centric and PIM-DBMS approaches with respect to aspects such as scalability and parallelism, cache awareness, and PIM-specific compute/bandwidth trade-offs. The evaluation is performed end-to-end on a real PIM hardware system from UPMEM.
Human pose estimation (HPE) is integral to scene understanding in numerous safety-critical domains involving human-machine interaction, such as autonomous driving or semi-automated work environments. Avoiding costly mistakes is synonymous with anticipating failure in model predictions, which necessitates meta-judgments on the accuracy of the applied models. Here, we propose a straightforward human pose regression framework to examine the behavior of two established methods for simultaneous aleatoric and epistemic uncertainty estimation: maximum a-posteriori (MAP) estimation with Monte-Carlo variational inference and deep evidential regression (DER). First, we evaluate both approaches on the quality of their predicted variances and whether these truly capture the expected model error. The initial assessment indicates that both methods exhibit the overconfidence issue common in deep probabilistic models. This observation motivates our implementation of an additional recalibration step to extract reliable confidence intervals. We then take a closer look at deep evidential regression, which, to our knowledge, is applied comprehensively for the first time to the HPE problem. Experimental results indicate that DER behaves as expected in challenging and adverse conditions commonly occurring in HPE and that the predicted uncertainties match their purported aleatoric and epistemic sources. Notably, DER achieves smooth uncertainty estimates without the need for a costly sampling step, making it an attractive candidate for uncertainty estimation on resource-limited platforms.
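For reference, here is a compact sketch of the deep evidential regression quantities under the Normal-Inverse-Gamma parameterization of Amini et al. (2020), to which the abstract refers: the network predicts (gamma, nu, alpha, beta) per target, and aleatoric and epistemic uncertainty follow in closed form. This is a generic illustration of the technique, not the authors' implementation.

```python
# Deep evidential regression quantities under the Normal-Inverse-Gamma
# (NIG) parameterization of Amini et al. (2020); generic illustration.
import numpy as np
from scipy.special import gammaln

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of target y under the NIG evidential
    distribution; used as (part of) the training loss."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * np.log(np.pi / nu)
            - alpha * np.log(omega)
            + (alpha + 0.5) * np.log(nu * (y - gamma) ** 2 + omega)
            + gammaln(alpha) - gammaln(alpha + 0.5))

def uncertainties(nu, alpha, beta):
    aleatoric = beta / (alpha - 1.0)          # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))   # model (knowledge) uncertainty
    return aleatoric, epistemic

print(nig_nll(0.3, gamma=0.25, nu=2.0, alpha=1.5, beta=0.1))
print(uncertainties(nu=2.0, alpha=1.5, beta=0.1))
```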
An enormous amount of data in the context of business processes is stored as images, which contain valuable information for business process management. Up to now, this data had to be integrated into the business process manually. Thanks to advances in capturing, it is possible to extract information from an increasing number of images. Therefore, we systematically investigate the potential of image mining for business process management through a literature review and an in-depth analysis of the business process lifecycle. As a first step towards evaluating our research, we developed a prototype for recovering process model information from drawings using RapidMiner.
Potentials of smart contracts-based disintermediation in additive manufacturing supply chains
(2019)
We investigate which potentials are created by using smart contracts for disintermediation in supply chains for additive manufacturing. Using a qualitative, critical realist research approach, we analyzed three case studies with companies active in additive manufacturing. Based on interviews with experts from these companies, we identified eight key requirements for disintermediation and four associated potentials of smart contract-based disintermediation.
Due to decreased mobility or families living apart, older adults are especially vulnerable to social isolation. The literature suggests that technology can help to prevent this isolation. The present work describes an approach to participating in society by sharing cherished knowledge. We propose PrecRec, a cooking recipe exchange application for older adults, intended to make them feel precious and valued. PrecRec has been developed and evaluated in an iterative process with eleven older adults. The results show that a broad perspective has to be taken into account when designing such systems.
Additive manufacturing (AM) is a promising manufacturing method for many industrial sectors. For industrial application, requirements such as high production volumes and coordinated implementation must be taken into account. These tasks of internally managing production facilities are carried out by Production Planning and Control (PPC) information systems. A key factor in planning and scheduling is the exact calculation of manufacturing times. For this purpose, we investigate the use of Machine Learning (ML) for predicting the manufacturing times of AM facilities.
Preface of IDEA 2015
(2016)
Proceedings of the International Workshop on Mobile Networks for Biometric Data Analysis (mBiDA)
(2014)
The prevention and treatment of common and widespread (chronic) diseases is a challenge in any modern society and vitally important for health maintenance in aging societies. Capturing biometric data is a cornerstone of any analysis and treatment strategy. Latest advances in sensor technology allow accurate data measurement in a non-intrusive way. In many cases, it is necessary to provide online monitoring and real-time data capturing to support patients' prevention plans or to allow medical professionals to access the current status. Different communication standards are required to push sensor data and to store and analyze them on different (mobile) platforms. The objective of the workshop is to show new and innovative approaches dedicated to biometric data capture and analysis in a non-intrusive way while maintaining mobility. Examples can be found in human-centered ambient intelligence equipped with sensors, or in methodologies applied in real-time-capable mobile system design for automotive settings. The workshop's main challenge is to focus on approaches promoting non-intrusiveness, reliable prediction algorithms, and high user acceptance. The workshop will provide overview presentations, young researcher poster tracks, doctoral tracks, and classical peer-reviewed full paper tracks. We would especially like to encourage students and young researchers to participate and to contribute to the workshop. Scientific contributions to the event are peer-reviewed by a suitable program committee.
In recent years, companies have faced challenges from high market dynamics, rapidly evolving technologies, and shifting user expectations. Together with the adoption of lean and agile practices, this makes it increasingly difficult to predict upfront which products, features, or services will satisfy the needs of the customers and the organization. Currently, many new products fail to produce a significant financial return. One reason is that companies do not carry out enough product discovery activities. Product discovery aims at tackling the various risks before the implementation of a product starts. The academic literature provides only little guidance for conducting product discovery in practice. Objective: In order to gain a better understanding of product discovery activities in practice, this paper aims at identifying the motivations, approaches, challenges, risks, and pitfalls of product discovery reported in the grey literature. Method: We performed a grey literature review (GLR) according to the guidelines of Garousi et al. Results: The study shows that the main motivation for conducting product discovery activities is to reduce uncertainty to a level that makes it possible to start building a solution that provides value for the customers and the business. Several product discovery approaches are reported in the grey literature, comprising different phases such as alignment, problem exploration, ideation, and validation. The main challenges are, among others, the lack of clarity of the problem to be solved, the prescription of concrete solutions by management or experts, and the lack of cross-functional collaboration.
Context: A product roadmap is an important tool in product development. It sets the strategic direction in which the product is to be developed to achieve the company’s vision. However, for product roadmaps to be successful, it is essential that all stakeholders agree with the company’s vision and objectives and are aligned and committed to a common product plan.
Objective: In order to gain a better understanding of product roadmap alignment, this paper aims at identifying measures, activities and techniques in order to align the different stakeholders around the product roadmap.
Method: We conducted a grey literature review according to the guidelines of Garousi et al.
Results: Several approaches to gain alignment were identified such as defining and communicating clear objectives based on the product vision, conducting cross-functional workshops, shuttle diplomacy, and mission briefing. In addition, our review identified the “Behavioural Change Stairway Model” that suggests five steps to gain alignment by building empathy and a trustful relationship.
Product roadmaps are an important tool in product development. They provide direction, enable consistent development in relation to a product vision, and support communication with relevant stakeholders. There are many different formats for product roadmaps, but they are often based on the assumption that the future is highly predictable. However, software-intensive businesses in particular are faced with increasing market dynamics, rapidly evolving technologies, and changing user expectations. As a result, many organizations are wondering which roadmap format is appropriate for them and which components it should have to deal with an unpredictable future. Objectives: To gain a better understanding of the formats of product roadmaps and their components, this paper aims to identify suitable formats for the development and handling of product roadmaps in dynamic and uncertain markets. Method: We performed a grey literature review (GLR) according to the guidelines of Garousi et al. Results: A Google search identified 426 articles, 25 of which were included in this study. First, various components of product roadmaps were identified, especially the product vision, themes, goals, outcomes, and outputs. In addition, various product roadmap formats were discovered, such as feature-based, goal-oriented, outcome-driven, and theme-based roadmaps. The roadmap components were then assigned to the various product roadmap formats. This overview aims to provide initial decision support for companies to select a suitable product roadmap format and adapt it to their own needs.
Context: Companies in highly dynamic markets increasingly struggle with their ability to plan product development and to create reliable roadmaps. A main reason is the decreasing predictability of markets, technologies, and customer behaviors. New approaches to product roadmapping seem to be necessary in order to cope with today's highly dynamic conditions, yet little research is available on such new approaches. Objective: In order to better understand the state of the art and to identify research gaps, this article presents a review of the scientific literature on product roadmapping. Method: We performed a systematic literature review (SLR) to identify relevant papers in the field of computer science. Results: After filtering, the search resulted in a set of 23 relevant papers. The identified papers focus on different aspects such as roadmap types, processes for creating and updating roadmaps, problems and challenges with roadmapping, approaches to visualizing roadmaps, generic frameworks, and specific aspects such as the combination of roadmaps with business modeling. Overall, the scientific literature covers many important aspects of roadmapping but provides only little knowledge on how to create product roadmaps under highly dynamic conditions. Research gaps concern, for instance, the inclusion of goals or outcomes in product roadmaps, the alignment of a roadmap with a product vision, and the inclusion of product discovery activities in product roadmaps. In addition, the transformation from traditional roadmapping processes to new ways of roadmapping is not sufficiently addressed in the scientific literature.
Context: Currently, most companies apply product roadmapping approaches that are based on the assumption that the future is highly predictable. However, companies nowadays face increasing market dynamics, rapidly evolving technologies, and shifting user expectations. Together with the adoption of lean and agile practices, this makes it increasingly difficult to plan and predict upfront which products, services, or features will satisfy the needs of the customers. Therefore, companies are struggling with their ability to provide product roadmaps that fit dynamic and uncertain market environments and that can be used together with lean and agile software development practices.
Objective: To gain a better understanding of modern product roadmapping processes, this paper aims to identify suitable processes for the creation and evolution of product roadmaps in dynamic and uncertain market environments.
Method: We performed a Grey Literature Review (GLR) according to the guidelines from Garousi et al.
Results: 32 approaches to product roadmapping were identified. Typical characteristics of these processes are the strong connection between the product roadmap and the product vision, an emphasis on stakeholder alignment, the definition of business and customer goals as part of the roadmapping process, a high degree of flexibility with respect to reaching these goals, and the inclusion of validation activities in the roadmapping process. An overall goal of nearly all approaches is to avoid waste by reducing development and business risks early. From the 32 approaches found, four representative roadmapping processes are described in detail.
Product roadmaps in the new mobility domain: state of the practice and industrial experiences
(2021)
Context: The New Mobility industry is a young market with high market dynamics and is therefore associated with a high degree of uncertainty. Traditional product roadmapping approaches, such as the detailed planning of features over a long time horizon, typically fail in such environments. For this reason, companies active in the field of New Mobility face the challenge of keeping their product roadmaps reliable for stakeholders while at the same time being able to react flexibly to changing market requirements.
Objective: The goal of this paper is to identify the state of practice regarding product roadmapping of New Mobility companies. In addition, the related challenges within the product roadmapping process as well as the success factors to overcome these challenges will be highlighted.
Method: We conducted semi-structured expert interviews with 8 experts (7 from German companies and one from a Finnish company) from the field of New Mobility and performed a content analysis.
Results: Overall, the results of the study showed that the participating companies are aware of the requirements that the New Mobility sector entails and accordingly exhibit a high level of maturity in terms of product roadmapping. Nevertheless, some aspects were revealed that pose specific challenges for the participating companies. One major challenge, for example, is that New Mobility with public-sector clients is often a tender business with non-negotiable product requirements; thus, the product roadmap can be significantly influenced from outside. As factors for successful product roadmapping, mainly soft factors were mentioned, such as trust between all people involved in the product development process and transparency throughout the entire roadmapping process.