Informatik
Workflow-driven support systems in the peri-operative area have the potential to optimize clinical processes and to enable new situation-adaptive support systems. We started to develop a workflow management system that supports all actors involved in the operating theatre, with the goal of synchronizing the tasks of the different stakeholders by giving relevant information to the right team members. Using the OMG standards BPMN, CMMN, and DMN gives us the opportunity to bring established methods from other industries into the medical field. The system shows each addressed actor the relevant information in the right place at the right time, so that every member can execute their task on schedule and the overall workflow runs smoothly; the system maintains the overall view of all tasks. Accordingly, the architecture comprises a workflow management system with the Camunda BPM workflow engine to run the models, a middleware to connect different systems to the workflow engine, and graphical user interfaces to show necessary information and to interact with the system. The complete pipeline is implemented as a RESTful web service. The system is designed so that external systems such as a hospital information system (HIS) can be integrated via the RESTful web service easily and without loss of data. A first prototype has been implemented and will be expanded.
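To make the integration concrete, the following minimal sketch shows how such a middleware could drive a perioperative process through Camunda's REST API. The process key "or-workflow", the variable names, and the candidate group are hypothetical placeholders; the endpoints follow the documented Camunda BPM 7 REST interface.

```python
# Minimal sketch: driving a perioperative workflow via the Camunda 7 REST API.
# The process key "or-workflow" and all variable/group names are hypothetical.
import requests

BASE = "http://localhost:8080/engine-rest"  # default Camunda REST root

def start_case(patient_id: str) -> str:
    """Start a process instance for one intervention and return its id."""
    resp = requests.post(
        f"{BASE}/process-definition/key/or-workflow/start",
        json={"variables": {"patientId": {"value": patient_id, "type": "String"}}},
    )
    resp.raise_for_status()
    return resp.json()["id"]

def pending_tasks(instance_id: str, role: str) -> list:
    """Fetch the open user tasks for one actor group (e.g. 'scrub-nurse')."""
    resp = requests.get(
        f"{BASE}/task",
        params={"processInstanceId": instance_id, "candidateGroup": role},
    )
    resp.raise_for_status()
    return resp.json()

def complete(task_id: str) -> None:
    """Report a finished task back to the engine so the workflow advances."""
    requests.post(f"{BASE}/task/{task_id}/complete", json={}).raise_for_status()

if __name__ == "__main__":
    instance = start_case("patient-0815")
    for task in pending_tasks(instance, "scrub-nurse"):
        print(task["name"])
```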
The goal of the presented project is to develop the concept of home e-health centers for barrier-free and cross-border telemedicine. AAL technologies are already present on the market, but there is still a gap to close before they can be used for ordinary patient needs. The general idea needs to be accompanied by new services, which should be brought together to provide full service coverage for the users. Sleep and stress were chosen as predominant influences on health in the population. A scientific study of available home devices for sleep analysis provided the information necessary to select appropriate devices; the first choice for the project implementation is the EMFIT QS+ device. This equipment provides one part of a complete system that a home telemedical hospital can offer, at an adequate level of precision and with communication to internal and/or external health services.
In this paper, we introduce an approach showing how reinforcement learning can be used to achieve interoperability between heterogeneous Internet of Things (IoT) components. More specifically, we model an HTTP REST service as a Markov decision process and adapt Q-learning to the properties of REST, so that an agent in the role of an HTTP REST client can learn the semantics of the service and, in particular, an optimal sequence of service calls to achieve an application-specific goal. With our approach, we want to open up and facilitate a discussion in the community, as we see the utilization of artificial intelligence techniques as the key to achieving interoperability in the IoT.
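The following sketch illustrates the core idea under stated assumptions: a tabular Q-learning agent whose action space consists of HTTP calls. The endpoints, the reward signal, and the state abstraction are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): tabular Q-learning where
# the agent's actions are HTTP requests against a REST service. The endpoints,
# the reward function, and the state abstraction are hypothetical.
import random
from collections import defaultdict

ACTIONS = [("GET", "/orders"), ("POST", "/orders"), ("GET", "/orders/latest")]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy selection over the possible service calls."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard Q-learning update after observing the HTTP response."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# In a real agent, 'state' would abstract the last HTTP response (status code,
# hypermedia links), and 'reward' would be 1 once the application goal is reached.
```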
Interoperability is an important topic in the Internet of Things (IoT), because this domain incorporates diverse and heterogeneous objects, communication protocols, and data formats. Many models and classification schemes have been proposed to make the degree of interoperability measurable, however only on the basis of a hierarchical scale. In this paper we introduce a novel approach to measuring the degree of interoperability using a metrically scaled quantity. We consider the IoT as a distributed system in which interoperable objects exchange messages with each other. Under this premise, we interpret messages as operation calls and formalize this view as a causal model. The analysis of this model enables us to quantify the interoperable behavior of communicating objects.
The metric and qualitative analysis of models of the upper and lower dental arches is an important aspect of orthodontic treatment planning. Currently available eLearning systems for dental education only allow access to digital learning materials and do not interactively support the learning progress. Moreover, to date no study has compared the efficiency of learning methods based on physical versus digital study models. For this pilot study, 18 dental students were separated into two groups to investigate whether the learning success in study model analysis with an interactive eLearning system is higher with digital models or with conventional plaster models. The results show that the digital method requires less time per model analysis. Moreover, the digital approach leads to higher total scores than the one based on plaster models. We conclude that interactive eLearning using digital dental arch models is a promising tool for dental education.
OR-Pad - development of a prototype for sterile information display at the surgical site : meeting abstract
(2019)
Background: Information from the patient record or from imaging procedures is often displayed only on monitors quite far away from the operating field, outside the surgeon's ergonomic line of sight. As a result, relevant information is overlooked or its potential cannot be fully exploited. Notes brought along on paper remain outside the sterile area during surgery and are therefore not readily accessible to the surgeon. For intraoperative entries into the surgical documentation, the surgeon likewise depends on the help of assisting staff. These additional communication paths cause extra personnel effort and time and increase the potential for errors. The application-oriented research project OR-Pad - use of portable information displays in the operating room - is intended to improve the surgeon's information flow. The idea arose from the clinical routine of the anatomy and urology departments of the University Hospital Tübingen and is now being developed into a high-fidelity prototype at Reutlingen University, funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the European Regional Development Fund.
Objective: The goal of the OR-Pad project is to display clinically relevant information in the immediate vicinity of the surgeon at the appropriate moment during an operation. The system is intended to optimize the flow of information between the intervention and its preparation and follow-up. The surgeon should be able to select relevant information in advance, such as current X-ray images or personal notes, for intraoperative display on a sterile information display at the surgical site. Its positioning should enable an ergonomic line of sight as well as direct interaction with the system. Context-relevant information is to be provided automatically based on the current course of the operation by developing a situation recognition component. Optimizing the information flow also includes supporting the surgical documentation: during the intervention, entries such as timestamps or intraoperative images are to be created manually by the surgeon as well as automatically by the system. After the intervention, the surgical documentation is to be generated from these entries, making the process both higher in quality and more time-efficient.
Methods: To reach this goal, the clinical requirements are first specified and transferred into a requirements document. For this purpose, interviews and observations during several interventions are carried out. Following the user-centered design process, personas and usage scenarios are drafted and evaluated with the clinical project partners in several iterations. An information architecture is to be built that allows embedding clinical information systems as well as image and device data from the OR network. A situation recognition based on process models is to be developed to estimate the progress of the operation. Suitable mounting mechanisms are to be used to attach the information display. The OR-Pad system is to be tested continuously in the teaching and research OR of Reutlingen University and coordinated with the clinical project partners in the spirit of agile product development. Finally, the functional prototype is to be tested and evaluated in the experimental ORs of the anatomy department in Tübingen.
Results: A first data collection by means of a contextual inquiry captured initial requirements for the OR-Pad system, resulting in a low-fidelity prototype. Its evaluation through expert interviews led to the second iteration, in which the concept was adapted according to the results. Further data was collected during observations at the University Hospital Tübingen to create scenarios for the intraoperative use cases. Based on the requirements, a concept for the user interface was designed, which will be evaluated with the clinical project partners as the project proceeds.
Potentials of smart contracts-based disintermediation in additive manufacturing supply chains
(2019)
We investigate which potentials are created by using smart contracts for disintermediation in supply chains for additive manufacturing. Using a qualitative, critical realist research approach, we analyzed three case studies with companies active in additive manufacturing. Based on interviews with experts from these companies, we identified eight key requirements for disintermediation and associated them with four potentials of smart contracts-based disintermediation.
Artifact correction and refined metrics for an EEG-based drowsiness detection system
(2019)
Research question: Drowsiness is an often underestimated yet major problem in road traffic. Of around 2.5 million traffic accidents in Germany in 2015, 2,898 accidents, with a total of 59 fatalities (~1.7% of all traffic deaths), were attributable to drowsiness; estimates assume an unreported rate of up to 20%. A first study of our own examined whether a mobile EEG can reliably detect drowsiness states in a driving simulator. The detection rate was only 61%. The aim of this work is to improve the measurement system used: accuracy is increased by artifact correction and by refined quality metrics. Detected drowsiness is then indicated to the driver in an appropriate way so that he can react accordingly.
Patients and methods: Independent component analysis (ICA) is a multivariate method for analyzing several random variables and is used here to process the EEG signal. To decide whether a driver is currently drowsy or awake, a feature vector is created and classified for each sequence. For this, a trained machine learning algorithm is used that is able to assign even unknown data sets to classes. To obtain the required frequency values, a Fourier transform was performed for each EEG channel. The resulting feature vector is then classified by an artificial neural network. For training, previously created feature vectors are labeled with the classes "awake" and "drowsy". These data are randomly shuffled and split at a ratio of 2:1 into a training and a test set. The experiment was conducted with eight subjects, each completing two 45-minute test drives.
Results: The complete data set consists of 150,000 signal values, aggregated into approximately 7,000 sequences. After applying the quality metric, 4,370 sequences remain for training. There are clear differences regarding sequences invalidated by EEG artifacts: in the "awake" state, three times as many sequences are discarded as in the "drowsy" state. Overall, around 50% of the sequences of awake subjects are discarded on average, compared to only 25% for drowsy subjects. On average, the system achieves a detection rate of 73% for both states. Comparing only "awake" against "drowsy" and leaving "slightly drowsy" aside, the results exceed 90%.
Conclusions: The results show that attention decreases and drowsiness increases over the course of the experiment. This is illustrated by subjective and objective observations of signs of drowsiness on the one hand, and by measurable and classifiable differences in the EEG signal on the other. The theta waves used as features showed a lower amplitude towards the end of the experiment. Extending the binary classification further stabilizes the results, and artifact correction and quality metrics further increase the quality of the data. The developed drowsiness detection application determines measurable signs of drowsiness and can make a sound decision about fitness to drive.
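As a rough illustration of the described pipeline, the following sketch computes theta band power per EEG channel via an FFT and trains a small neural network on a 2:1 train/test split. The sampling rate, band limits, network size, and the synthetic data are assumptions, not values from the study.

```python
# Sketch of the described pipeline on synthetic data: FFT band-power features
# per EEG channel, then a small neural network classifying awake vs. drowsy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

FS = 256  # assumed sampling rate in Hz

def theta_power(epoch: np.ndarray) -> np.ndarray:
    """Mean power in the 4-8 Hz theta band, one value per EEG channel."""
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1 / FS)
    spectrum = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    band = (freqs >= 4) & (freqs < 8)
    return spectrum[..., band].mean(axis=-1)

rng = np.random.default_rng(0)
epochs = rng.normal(size=(4370, 8, 2 * FS))     # 4370 two-second epochs, 8 channels
X = np.array([theta_power(e) for e in epochs])  # feature vector per epoch
y = rng.integers(0, 2, size=len(X))             # 0 = awake, 1 = drowsy (random stand-in)

# 2:1 split into training and test set, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```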
Among the multitude of software development processes available, hardly any is used by the book. Regardless of company size or industry sector, a majority of project teams and companies use customized processes that combine different development methods, so-called hybrid development methods. Even though such hybrid development methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this paper, we make a first step towards devising such guidelines. Grounded in 1,467 data points from a large-scale online survey among practitioners, we study the current state of practice in process use to answer the question: what are hybrid development methods made of? Our findings reveal that only eight methods and a few practices build the core of modern software development. This small set allows for statistically constructing hybrid development methods. Using an 85% agreement level in the participants' selections, we provide two examples illustrating how hybrid development methods are characterized by the practices they are made of. Our evidence-based analysis approach lays the foundation for devising hybrid development methods.
Context: Organizations are increasingly challenged by high market dynamics, rapidly evolving technologies and shifting user expectations. In consequence, many organizations are struggling with their ability to provide reliable product roadmaps by applying traditional roadmapping approaches. Currently, many companies are seeking opportunities to improve their product roadmapping practices and strive for new roadmapping approaches. A typical first step towards advancing the roadmapping capabilities of an organization is to assess the current situation. Therefore, the so-called maturity model DEEP for assessing the product roadmapping capabilities of companies operating in dynamic and uncertain environments has been developed and published by the authors.
Objective: The aim of this article is to conduct an initial validation of the DEEP model in order to better understand its applicability and to see whether important concepts are missing, and to evolve the model based on the findings from this initial validation.
Method: The model was given to practitioners, such as product managers, with the request to perform a self-assessment of the current product roadmapping practices in their company. Afterwards, interviews were conducted with each participant to gain deeper insights.
Results: The initial validation revealed that some stages of the model need to be rearranged, and minor usability issues were found; the overall structure of the model was well received. The study resulted in version 1.1 of the DEEP product roadmap maturity model, which is also presented in this article.
Owing to increasing market dynamics, rapidly evolving technologies, and shifting user expectations, coupled with the adoption of lean and agile practices, companies are struggling with their ability to provide reliable product roadmaps using traditional approaches. Currently, most companies are seeking opportunities to improve their product roadmapping practices. As a first challenge, they have to assess their current product roadmapping capabilities in order to better understand how to improve their practices and how to switch to a new approach. The aim of this article is to provide an initial maturity model for product roadmapping practices that is especially suited for assessing the roadmapping capabilities of companies operating in dynamic and uncertain market environments. Based on interviews with 15 experts from 13 different companies, the current state of practice regarding product roadmapping was identified. Afterwards, the model was developed in expert workshops with Robert Bosch GmbH and researchers. The study results in the DEEP 1.0 product roadmap maturity model, which allows companies to conduct a self-assessment of their product roadmapping practice.
Context: Organizations are increasingly challenged by dynamic and technical market environments. Traditional product roadmapping practices, such as detailed and fixed long-term planning, typically fail in such environments. Therefore, companies are actively seeking ways to improve their product roadmapping approach. Goal: This paper aims at identifying problems and challenges with respect to product roadmapping and at understanding how companies succeed in improving their roadmapping practices in their respective company contexts. The study focuses on mid-sized and large companies developing software-intensive products in dynamic and technical market environments. Method: We conducted semi-structured expert interviews with 15 experts from 13 German companies and performed a thematic data analysis. Results: The analysis showed that a significant number of companies are still struggling with traditional feature-based product roadmapping and opinion-based prioritization of features. The most promising areas for improvement are stating the outcomes a company is trying to achieve and making them part of the roadmap, sharing or co-developing the roadmap with stakeholders, and establishing discovery activities.
Context: Companies in highly dynamic markets increasingly struggle with their ability to plan product development and to create reliable roadmaps. A main reason is the decreasing predictability of markets, technologies, and customer behavior. New approaches for product roadmapping seem necessary to cope with today's highly dynamic conditions, yet little research is available on such new approaches. Objective: In order to better understand the state of the art and to identify research gaps, this article presents a review of the scientific literature on product roadmapping. Method: We performed a systematic literature review (SLR) to identify papers in the field of computer science. Results: After filtering, the search resulted in a set of 23 relevant papers. The identified papers focus on different aspects such as roadmap types, processes for creating and updating roadmaps, problems and challenges with roadmapping, approaches to visualize roadmaps, generic frameworks, and specific aspects such as the combination of roadmaps with business modeling. Overall, the scientific literature covers many important aspects of roadmapping but provides only little knowledge on how to create product roadmaps under highly dynamic conditions. Research gaps concern, for instance, the inclusion of goals or outcomes in product roadmaps, the alignment of a roadmap with a product vision, and the inclusion of product discovery activities in product roadmaps. In addition, the transformation from traditional roadmapping processes to new ways of roadmapping is not sufficiently addressed in the scientific literature.
Software process improvement (SPI) has been around for decades, but it remains a critically discussed topic. In several waves, different aspects of SPI have been discussed in the past, e.g., large-scale company-level SPI programs, maturity models, success factors, and in-project SPI. It is hard to find new streams or a consensus in the community, but there is a trend coming along with agile and lean software development: apparently, practitioners reject extensive and prescriptive maturity models and move towards smaller, faster, and continuous project-integrated SPI. Based on data from two survey studies conducted in Germany (2012) and Europe (2016), we analyze the process customization for projects and the practices for implementing SPI in the participating companies. Our findings indicate that, even in regulated industry sectors, companies increasingly adopt in-project SPI activities, primarily with the goal of continuously optimizing specific processes. With this paper, we therefore want to stimulate a discussion on how to evolve traditional SPI towards a continuous learning environment.
The emergence of agile methods and practices has not only changed the development processes but might also have affected how companies conduct software process improvement (SPI). Through a set of complementary studies, we aim to understand how SPI has changed in times of agile software development. Specifically, we aim (a) to identify and characterize the set of publications that connect elements of agility to SPI, (b) to explore to which extent agile methods/practices have been used in the context of SPI, and (c) to understand whether the topics addressed in the literature are relevant and useful for industry professionals. To study these questions, we conducted an in-depth analysis of the literature identified in a previous mapping study, an interview study, and an analysis of the responses given by industry professionals to SPI related questions stemming from an independently conducted survey study. Regarding the first question, we identified 55 publications that focus on both SPI and agility of which 48 present and discuss how agile methods/practices are used to steer SPI initiatives. Regarding the second question, we found that the two most frequently mentioned agile methods in the context of SPI are Scrum and Extreme Programming (XP), while the most frequently mentioned agile practices are integrate often, test-first, daily meeting, pair programming, retrospective, on-site customer, and product backlog. Regarding the third question, we found that a majority of the interviewed and surveyed industry professionals see SPI as a continuous activity. They agree with the agile SPI literature that agile methods/practices play an important role in SPI activities but that the importance given to specific agile methods/practices does not always coincide with the frequency with which these methods/practices are mentioned in the literature.
In this work, a software architecture was developed that allows interactions between autonomous vehicles and pedestrians in road traffic to be studied in a simulated environment. The autonomous vehicle is controlled by an external driving simulator. A motion capture system enables the recording and transfer of the movement data of pedestrian and driver into the virtual environment. Head-mounted displays are used so that the actors perceive the virtual environment as realistically as possible. Based on the developed software architecture, a simulation environment was realized in which interactions between a pedestrian and an autonomous vehicle can be studied. The project is intended to demonstrate the potential of motion-capture-based simulation environments for the design and development of autonomous driving systems.
Over the past years, traffic accidents between cars and bicycles or motorcycles have been increasing steadily. The cause is usually a carelessly opened vehicle door with which an approaching two-wheeler collides. These accidents can have devastating consequences for everyone involved. For this reason, the technological progress that pervades the entire automotive industry is to be extended by an additional component: a system for two-wheeler riders that detects an opening vehicle door early, can thus warn the rider in time, and can initiate accident avoidance strategies if necessary. This concept is intended to guide the development of the system and explain its fundamentals.
Digitalization of products and services commonly causes substantial changes in the business models, operations, organization structures, and IT infrastructures of enterprises. Motivated by experiences and observations from digitalization projects, this paper investigates the effects of digitalization on enterprise architectures (EA). EA models serve as representations of the business, information system, and technical aspects of an enterprise to support management and development. By comparing EA models before and after digitalization, the paper analyzes the kinds of changes visible in the EA model. The most important finding is that newly created digitized products and their associated product and enterprise architecture are no longer properly integrated into the overall architecture and even exist in parallel. The focus of this work is therefore on exposing these parallel architectures and proposing derivations for a better integration.
Navigating with an e-bike should be a positive user experience. Therefore, in the course of this work, a multimodal smart user interface (MSUI) was developed in the context of Bosch E-Bike Systems. The concept comprises visual turn-by-turn signals, tactile vibration signals in the handlebar, and auditory voice output. The goal of the work is a prototype suitable for evaluating user needs with respect to different multimodal feedback options.
Simulating human crowd behavior can be helpful in capacity, risk, and evacuation planning for buildings, can be used in film production for impressive crowd scenes, or can bring virtual settings in real-time applications to life. The main challenges lie in a realistic appearance of the virtual crowd, credible behavior within a social group, lifelike animations, and preserving the real-time capability of interactive applications. This work presents the current state of the art, evaluates technologies, and implements a crowd simulation prototype with the Unity engine.
The importance and position of information technology has undergone continuous change over the last 60 years. Its initially purely supporting character increasingly developed into an important component of the structural and procedural organization of companies. A well-defined IT service management now sees itself on a par with the other departments, acts as a service provider, and regards the departments as its "customers". New technologies and innovations, and the resulting redefinitions of existing requirements, are expected to produce positive effects in companies in the course of digitalization. The IT Infrastructure Library (ITIL) is used as a framework for IT service management in industry and in the public sector. The ITIL approach supported this cultural change and sensitized management and employees to thinking in a service-oriented way. Since this approach follows a predefined, cyclic procedure, rapidly arriving customer requirements may not be implemented on schedule, which is why agile methods such as the DevOps approach are coming to the fore. The challenge is to foster the cultural change when introducing DevOps into existing ITIL structures in companies.
Sonography is one of the most common imaging techniques in medicine. However, the reproducibility of ultrasound diagnostics is still a problem to this day, leading to misdiagnoses. The prototype system presented in this paper, which supports medical students in ultrasound seminars, is intended to define requirements for the reproducibility of an ultrasound examination. Expert interviews provided insights into clinical procedures and everyday hospital routine, and into which contents are relevant for making ultrasound examinations reproducible.
With the steady growth of new technologies and possibilities, hardly anything stands in the way of merging technology with humans. The examination of implants and the associated risks is part of this work, with a focus on their functionality and IT security aspects. All implants presented in this work require communication with the outside world. This communication capability entails risks that are not limited to the wearers' data but also include health risks.
Segmentation of polyps in colonoscopy image data : an analysis of the potential of deep learning methods
(2018)
Colorectal carcinomas have a high mortality rate when detected late. Early removal of malignant polyps in the gastrointestinal tract, which form their precursors, offers high chances of survival. During colonoscopies, however, especially small polyps are overlooked quite frequently. Reliable image processing systems that not only detect polyps in a colonoscopy frame but segment them with pixel accuracy could assist physicians in colorectal cancer screenings. This work analyzes the current state of polyp segmentation in the gastrointestinal tract and further investigates to what extent the recently very successful methods of deep learning offer advantages here.
Enterprises are transforming their strategy, culture, processes, and information systems to enlarge their digitalization efforts or to strive for digital leadership. The digital transformation profoundly disrupts existing enterprises and economies. Many new business opportunities have appeared that use the potential of the Internet and related digital technologies: the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. Architecting micro-granular structures has a substantial impact on architecting digital services and products. The change from a closed-world modeling perspective to the more flexible open world of living software and system architectures defines the context for flexible and evolutionary software approaches, which are essential to enable the digital transformation. In this paper, we reveal multiple perspectives of digital enterprise architecture and decisions to effectively support value- and service-oriented software systems for intelligent digital services and products.
Presently, many companies are transforming their strategy and product base, as well as their culture, processes, and information systems, to become more digital or to strive for digital leadership. In recent years, new business opportunities have appeared that use the potential of the Internet and related digital technologies, like the Internet of Things, services computing, cloud computing, edge and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. This has a strong impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open-world composition and evolution of micro-granular system architectures defines the moving context for adaptable systems. We focus on a continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, as part of a new digital enterprise architecture for service-dominant digital products.
New business opportunities have appeared that use the potential of the Internet and related digital technologies, like the Internet of Things, services computing, artificial intelligence, cloud, edge, and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Companies are transforming their strategy and product base, as well as their culture, processes, and information systems, to adopt digital transformation or to strive for digital leadership. Digitalization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. Digitalization has a substantial impact on architecting the open and complex world of highly distributed digital services and products as part of a new digital enterprise architecture, which structures and directs service-dominant digital products and services. The present research paper investigates mechanisms for supporting the evolution of digital enterprise architectures with user-friendly methods and instruments of interaction, visualization, and intelligent decision management during the exploration of multiple interconnected perspectives in an architecture management cockpit.
Enterprise governance, risk, and compliance (GRC) systems are key to managing the risks threatening modern enterprises from many different angles. A key constituent of GRC systems is the definition of controls that are implemented on the different layers of an enterprise architecture (EA). Controls become part of a "concern" of the EA, which allows an EA viewpoint to be used to cover control compliance assessments. In this article we explore this relationship further, derive a metamodel linking controls and EA, and elicit how this linkage gives rise to a hierarchical understanding of the viewpoint concept for EAs. We complement these considerations with an expository instantiation in a cockpit for control compliance applied in an international enterprise in the insurance industry.
Recognizing human actions is a core challenge for autonomous systems, as they directly share the same space with humans. Systems must be able to recognize and assess human actions in real time. To train the corresponding data-driven algorithms, a significant amount of annotated training data is required. We demonstrate a pipeline to detect humans, estimate their pose, track them over time, and recognize their actions in real time with standard monocular camera sensors. For action recognition, we transform noisy human pose estimates into an image-like format we call Encoded Human Pose Image (EHPI). This encoded information can then be classified using standard methods from the computer vision community. With this simple procedure, we achieve competitive state-of-the-art performance in pose-based action detection while ensuring real-time performance. In addition, we show a use case in the context of autonomous driving to demonstrate how such a system can be trained to recognize human actions using simulation data.
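The following is an assumed, simplified reading of the encoding idea, not the authors' exact layout: normalized x/y joint coordinates of a pose sequence are written into the color channels of a fixed-size, image-like array that an off-the-shelf CNN could consume.

```python
# Rough, illustrative sketch of an EHPI-style encoding (the authors' exact
# channel layout may differ): joint coordinates over a time window become an
# image of shape (joints, frames, 3).
import numpy as np

def encode_pose_sequence(poses: np.ndarray) -> np.ndarray:
    """poses: (frames, joints, 2) pixel coordinates -> (joints, frames, 3) image."""
    mins = poses.min(axis=(0, 1), keepdims=True)
    maxs = poses.max(axis=(0, 1), keepdims=True)
    normed = (poses - mins) / (maxs - mins + 1e-8)  # scale x, y into [0, 1]
    frames, joints, _ = poses.shape
    img = np.zeros((joints, frames, 3), dtype=np.float32)
    img[..., 0] = normed[..., 0].T                  # red channel: x coordinates
    img[..., 1] = normed[..., 1].T                  # green channel: y coordinates
    return img                                      # blue channel stays empty

# Example: a 32-frame window of 18 noisy joints becomes an 18x32x3 "image".
demo = encode_pose_sequence(np.random.rand(32, 18, 2) * 640)
print(demo.shape)  # (18, 32, 3)
```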
RoPose-Real: real world dataset acquisition for data-driven industrial robot arm pose estimation
(2019)
It is necessary to employ smart sensory systems in dynamic and mobile workspaces where industrial robots are mounted on mobile platforms. Such systems should be aware of flexible and non-stationary workspaces and be able to react autonomously to changing situations. Building upon our previously presented RoPose system, which employs a convolutional neural network architecture trained on purely synthetic data to estimate the kinematic chain of an industrial robot arm, we now present RoPose-Real. RoPose-Real extends the prior system with a comfortable and targetless extrinsic calibration tool that allows the production of automatically annotated datasets for real robot systems. Furthermore, we use the novel datasets to train the estimation network with real-world data. The extracted pose information is used to automatically estimate the pose of the observing sensor relative to the robot system. Finally, we evaluate the performance of the presented subsystems in a real-world robotic scenario.
Learning to translate between real world and simulated 3D sensors while transferring task models
(2019)
Learning-based vision tasks are usually specialized to the sensor technology for which data has been labeled. The knowledge of a learned model is simply useless when it comes to data that differs from the data on which the model was initially trained, or when the model is applied to a totally different imaging or sensor source; new labeled data has to be acquired on which a new model can be trained. Depending on the sensor, this gets even more complicated when the sensor data becomes more abstract and hard for humans to interpret and label. Enabling the reuse of models trained for a specific task across different sensors minimizes the data acquisition effort. Therefore, this work focuses on learning sensor models and translating between them, thus aiming for sensor interoperability. We show that even for the complex task of human pose estimation from 3D depth data recorded with different sensors, i.e. a simulated and a Kinect 2™ depth sensor, performance can greatly improve by translating between sensor models without modifying the original task model. This process especially benefits sensors and applications for which labels and models are difficult, if not impossible, to retrieve from raw sensor data.
The investigation of stress requires distinguishing between stress caused by physical activity and stress caused by psychosocial factors. The behavior of the heart in response to stress and to physical activity is very similar when the set of monitored parameters is reduced to one. Currently, the differentiation remains difficult, and methods that only use the heart rate are not able to differentiate between stress and physical activity without additional sensor data. Our approach focuses on methods that generate signals providing characteristics useful for detecting stress, physical activity, inactivity, and relaxation.
Autism spectrum disorders (ASD) in children are often diagnosed too late, and accompanying this chronic condition is difficult. The presented approach allows children to be treated in their familiar home environment and attempts to work out the relationships between sleep and behavior. The insights gained are intended to improve the patients' quality of life and to support the parents. The necessary infrastructural support is provided by medical professionals who can rely on a web-based service accompanying all processes (diagnostics, data acquisition and recording, training, etc.). The anonymized data are stored centrally in a diagnostic system and can thus be used for future treatment strategies. The comprehensive solution builds on central elements of smart homes and AAL.
Fatigue and drowsiness are responsible for a significant percentage of road traffic accidents. There are several approaches to monitoring driver drowsiness, ranging from the driver's steering behavior to the analysis of the driver, e.g. eye tracking, blinking, yawning, or the electrocardiogram (ECG). This paper describes the development of a low-cost ECG sensor to derive heart rate variability (HRV) data for drowsiness detection. The work includes hardware and software design. The hardware was implemented on a printed circuit board (PCB) designed so that it can be used as an extension shield for an Arduino. The PCB contains a double, inverted ECG channel including low-pass filtering and provides two analog outputs to the Arduino, which combines them and performs the analog-to-digital conversion. The digital ECG signal is transferred to an NVIDIA embedded PC where the processing takes place, including QRS-complex, heart rate, and HRV detection as well as visualization features. The resulting compact sensor provides good results in the extraction of the main ECG parameters. The sensor is being used in a larger framework, where facial-recognition-based drowsiness detection is combined with ECG-based detection to improve the recognition rate under unfavorable lighting or occlusion conditions.
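For illustration, here is a minimal sketch of the time-domain HRV features such a processing stage typically derives once R peaks have been detected. The feature set (mean heart rate, SDNN, RMSSD) is a common choice and an assumption here, not taken from the paper; peak detection itself is omitted.

```python
# Minimal sketch: time-domain HRV features from detected R-peak timestamps.
import numpy as np

def hrv_features(r_peaks_s: np.ndarray) -> dict:
    """r_peaks_s: timestamps of detected R peaks in seconds."""
    rr = np.diff(r_peaks_s) * 1000.0                     # RR intervals in ms
    return {
        "heart_rate_bpm": 60000.0 / rr.mean(),           # mean heart rate
        "sdnn_ms": rr.std(ddof=1),                       # overall variability
        "rmssd_ms": np.sqrt(np.mean(np.diff(rr) ** 2)),  # short-term variability
    }

# Example: a slightly irregular 60 bpm rhythm.
peaks = np.cumsum(1.0 + 0.05 * np.random.randn(120))
print(hrv_features(peaks))
```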
The recovery of our body and brain from fatigue depends directly on the quality of sleep, which can be determined from the results of a sleep study. Classifying the sleep stages is the first step of such a study and involves measuring biovital data and processing them further. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors that enables sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus with address arbitration. An essential difference of this system compared to other approaches is the innovative way of placing the sensors underneath the mattress. This property facilitates continuous use of the system without any perceptible influence on the accustomed bed. The system was tested in experiments recording the sleep of several healthy young subjects. The first results indicate the potential to capture not only respiratory rate and body movement but also heart rate.
This document presents an algorithm for the unobtrusive recognition of sleep/wake states using signals derived from ECG, respiration, and body movement captured while lying in bed. Multinomial logistic regression was chosen as the mathematical core of the system's data analytics, with derived parameters of the three signals as the input of the proposed method. The overall achieved accuracy is 84% for wake/sleep stages, with a Cohen's kappa value of 0.46. The presented algorithm should support experts in analyzing sleep quality in more detail. The results confirm the potential of this method and disclose several ways to improve it.
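A minimal sketch of the described classification step, assuming synthetic per-epoch features in place of the real ECG-, respiration-, and movement-derived parameters; the model and metrics (multinomial logistic regression, accuracy, Cohen's kappa) follow the abstract, everything else is a placeholder.

```python
# Sketch on synthetic stand-in data: multinomial logistic regression for
# wake/sleep epochs, evaluated with accuracy and Cohen's kappa.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 6))      # stand-in feature matrix (6 derived parameters)
y = rng.integers(0, 2, size=5000)   # 0 = wake, 1 = sleep (random placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
```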
This paper shows how a bed with an embedded sensor system is able to analyze a person's movement and breathing and to recognize the positions in which the subject lies on the bed during the night, without any additional physical contact. The measurements are performed with sensors placed between the mattress and the frame. An Intel Edison board was used as an endpoint serving as the communication node from the mesh network to an external service. Two sensor nodes and the Intel Edison are attached to the bottom of the bed frame and connected to the sensors.
This study estimates the reproducibility of finding palpation points of three different anatomical landmarks on the human body (the xiphoid process and the two hip crests) to support a navigated ultrasound application. On six test subjects with different body mass indices, the three palpation points were located five times by two examiners. The deviation from the target position was calculated and correlated with the fat thickness above each palpation point. The reproducibility of the measurements had a mean error of approximately 13.5 mm ± 4 mm, which seems to be sufficient for the intended application field.
In orthopedics, robotic systems have been used successfully in a supporting role for several years. This approach requires the prior creation of a digital model based on medical image data sets. The creation and verification of the models is to take place in a browser-based client-server application, which requires the rendering of two-dimensional and three-dimensional data sets. This paper is based on the development of an approach for the interactive, browser-based three-dimensional rendering of medical planning data. The application is a proof of concept of whether the existing desktop applications for displaying planning data can be replaced. It was implemented using the AMI.js framework, fulfills all defined requirements, and can thus replace the current desktop applications.
To support the surgeon, a patient-near information display is being developed that can provide context-relevant information according to the current situation. For this purpose, a situation recognition is to be designed that can be transferred to different intraoperative processes. The goal of the adaptive situation recognition is to recognize specific situations from intraoperative information of different data sources in the operating room. During data collection and analysis, use cases for the situation recognition were defined and surgical process models were created that map intraoperative events. Based on this information, a concept was designed that initially focuses on recognizing abstract, generalized phases independent of the intervention and can be specified step by step down to granular process steps. This flexibility is intended to make the concept transferable to intraoperative processes and thus to support the surgeon with context-relevant information in a targeted manner. The concept will be developed further in future steps.
Mammography devices are used in the diagnosis of breast carcinomas. The original technology has evolved in recent years from analog X-ray films to digitally integrated systems. Tomosynthesis, a sectional imaging technique that can examine several layers of the organism, also makes superimposed structures visible. To serve as an adequate basis for diagnosing malignant tumors, a number of qualitative requirements must be met. So far there is little literature that systematically describes the requirements and the design of such devices. In this work, the qualitative requirements are identified on the basis of the literature and existing systems. The principal design of such systems is shown by means of the individual system components in the semi-formal notation language SysML. The basic functioning of a tomosynthesis-capable mammography device is described in summary and along the individual system components. This work serves to convey a comprehensive understanding of digital mammography as a basis for documenting qualitative requirements.
Telemetry and home monitoring are already used successfully in many areas of healthcare. Modern cardiac pacemakers enable patients and physicians to monitor current health and status data at home via telemetric data transmission. For the further development of existing products, a fundamental understanding of the requirements for and the design of such systems is necessary; manufacturer-independent analyses of these do not yet exist. Using SysML as a semi-formal notation language, the combined pacemaker and home monitoring system is modeled. The requirements for such a system can be derived from existing products. The present work describes the system architecture of such systems, which is used to show the connection to information systems via the home monitoring system and the functions implemented thereby.
Private equity (PE) firms are investment firms that acquire equity shares in companies. The goal of PE firms is to exit the investment after a few years with a substantial increase in value. PE firms often claim to outperform the market, i.e. to create alpha.
The overall aim of this paper is to unravel the mystery of value creation in the PE industry. First, the author presents a conceptual framework for value creation in the PE industry based on a multiple valuation model that breaks down value creation into different elements. Second, the paper evaluates whether PE firms really create value by analysing and combining results from prior empirical studies based on the conceptual framework.
The results show that the existing empirical evidence is mixed, but that there is indeed a tendency toward positive evidence that PE firms create economic value on average. However, there are methodological difficulties in measuring value creation, and studies are often subject to bias. Finally, it is pointed out that the question of whether PE firms really create value has to be viewed from different perspectives, such as those of the PE firm, the investors, and the portfolio companies.
Companies are constantly changing their business process models. In team environments, different versions of a process model are created at the same time. These versions of a process model need to be merged from time to time to consolidate changes and create a new common version.
In this short paper, we propose a solution for modifying a merge result. The goal is to create a meaningful merge result by adding connector nodes to the model at specific locations, as sketched below. This increases the number of possible result models and reduces additional implementation effort.
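A toy sketch of the idea as we read it, with hypothetical node names: where two model versions disagree on a node's successor, a connector node is inserted that keeps both branches reachable instead of discarding one.

```python
# Toy sketch (not the paper's algorithm): merging two successor maps of a
# process model; conflicts are resolved by inserting a connector node.
def merge(edges_v1: dict, edges_v2: dict) -> dict:
    merged, counter = {}, 0
    for node in edges_v1.keys() | edges_v2.keys():
        succ1, succ2 = edges_v1.get(node), edges_v2.get(node)
        if succ1 == succ2 or succ1 is None or succ2 is None:
            merged[node] = succ1 or succ2        # no conflict: keep the edge
        else:
            gateway = f"xor_{counter}"; counter += 1
            merged[node] = gateway               # new connector node
            merged[gateway] = [succ1, succ2]     # both versions stay reachable
    return merged

print(merge({"A": "B", "B": "C"}, {"A": "D", "B": "C"}))
# e.g. {'A': 'xor_0', 'xor_0': ['B', 'D'], 'B': 'C'}
```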
While several service-based maintainability metrics have been proposed in the scientific literature, reliable approaches to automatically collect these metrics are lacking. Since static analysis is complicated for decentralized and technologically diverse microservice-based systems, we propose a dynamic approach that calculates such metrics from runtime data via distributed tracing. The approach focuses on simplicity, extensibility, and broad applicability. As a first prototype, we implemented a Java application with a Zipkin integrator, 23 different metrics, and five export formats. We demonstrated the feasibility of the approach by analyzing the runtime data of an example microservice-based system. During an exploratory study with six participants, 14 of the 18 services were invoked via the system's web interface. For these services, all metrics were calculated correctly from the generated traces.
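To illustrate the kind of computation involved (not the paper's Java tool), the sketch below derives a simple outgoing-coupling count per service from Zipkin-style span data; the field names follow Zipkin's JSON model, while the spans themselves are invented.

```python
# Illustrative sketch: counting service-to-service calls in Zipkin-style spans
# to derive a simple coupling metric per service.
from collections import Counter

spans = [  # minimal, made-up stand-ins for exported Zipkin spans
    {"localEndpoint": {"serviceName": "shop"}, "remoteEndpoint": {"serviceName": "orders"}},
    {"localEndpoint": {"serviceName": "shop"}, "remoteEndpoint": {"serviceName": "billing"}},
    {"localEndpoint": {"serviceName": "orders"}, "remoteEndpoint": {"serviceName": "billing"}},
]

calls = Counter(
    (s["localEndpoint"]["serviceName"], s["remoteEndpoint"]["serviceName"])
    for s in spans
)
# Outgoing coupling: number of distinct downstream services per caller.
out_coupling = Counter(caller for caller, _ in calls)
print(dict(out_coupling))  # e.g. {'shop': 2, 'orders': 1}
```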
Microservices are a topic driven mainly by practitioners, and academia is only starting to investigate them. Hence, there is no clear picture of the usage of microservices in practice. In this paper, we contribute a qualitative study with insights into the industry adoption and implementation of microservices. Contrary to existing quantitative studies, we conducted interviews to gain a more in-depth understanding of the current state of practice. During 17 interviews with software professionals from 10 companies, we analyzed 14 service-based systems. The interviews focused on applied technologies, microservices characteristics, and the perceived influence on software quality. We found that companies generally rely on well-established technologies for service implementation, communication, and deployment. Most systems, however, did not exhibit the high degree of technological diversity commonly expected with microservices. Decentralization and product character were different for systems built for external customers. Applied DevOps practices and automation were still at a mediocre level, and only very few companies strictly followed the "you build it, you run it" principle. The impact of microservices on software quality was mainly rated as positive. While maintainability received the most positive mentions, some major issues were associated with security. We present a description of each case and summarize the most important findings across companies of different domains and sizes. Researchers may build upon our findings and take them into account when designing industry-focused methods.
While the concepts of object-oriented antipatterns and code smells are prevalent in scientific literature and have been popularized by tools like SonarQube, the research field for service-based antipatterns and bad smells is not as cohesive and organized. The description of these antipatterns is distributed across several publications with no holistic schema or taxonomy. Furthermore, there is currently little synergy between documented antipatterns for the architectural styles SOA and Microservices, even though several antipatterns may hold value for both. We therefore conducted a Systematic Literature Review (SLR) that identified 14 primary studies. 36 service-based antipatterns were extracted from these studies and documented with a holistic data model. We also categorized the antipatterns with a taxonomy and implemented relationships between them. Lastly, we developed a web application for convenient browsing and implemented a GitHub-based repository and workflow for the collaborative evolution of the collection. Researchers and practitioners can use the repository as a reference, for training and education, or for quality assurance.
Background: Design patterns are supposed to improve various quality attributes of software systems. However, there is controversial quantitative evidence of this impact. Especially for younger paradigms such as service- and microservice-based systems, there is a lack of empirical studies.
Objective: In this study, we focused on the effect of four service-based patterns - namely process abstraction, service façade, decomposed capability, and event-driven messaging - on the evolvability of a system from the viewpoint of inexperienced developers.
Method: We conducted a controlled experiment with Bachelor students (N = 69). Two functionally equivalent versions of a service-based web shop - one with patterns (treatment group), one without (control group) - had to be changed and extended in three tasks. We measured evolvability by the effectiveness and efficiency of the participants in these tasks. Additionally, we compared both system versions with nine structural maintainability metrics for size, granularity, complexity, cohesion, and coupling.
Results: Both experiment groups were able to complete a similar number of tasks within the allowed 90 min. Median effectiveness was 1/3. Mean efficiency was 12% higher in the treatment group, but this difference was not statistically significant. Only for the third task, we found statistical support for accepting the alternative hypothesis that the pattern version led to higher efficiency. In the metric analysis, the pattern version had worse measurements for size and granularity while simultaneously having slightly better values for coupling metrics. Complexity and cohesion were not impacted.
Interpretation: For the experiment, our analysis suggests that the difference in efficiency is stronger with more experienced participants and increased from task to task. With respect to the metrics, the patterns introduce additional volume in the system, but also seem to decrease coupling in some areas.
Conclusions: Overall, there was no clear evidence for a decisive positive effect of using service-based patterns, neither for the student experiment nor for the metric analysis. This effect might only be visible in an experiment setting with higher initial effort to understand the system or with more experienced developers.
Software evolvability is an important quality attribute, yet one that is difficult to grasp. A certain base level of it is allegedly provided by service- and microservice-based systems, but many software professionals lack a systematic understanding of the reasons and preconditions for this. We address this issue via the proxy of architectural modifiability tactics. By qualitatively mapping principles and patterns of service-oriented architecture (SOA) and microservices onto tactics and analyzing the results, we can not only generate insights into service-oriented evolution qualities but also provide a modifiability comparison of the two popular service-based architectural styles. The results suggest that both SOA and microservices possess several inherent qualities beneficial for software evolution. While both focus strongly on loose coupling and encapsulation, there are also differences in the way they strive for modifiability (e.g. governance vs. evolutionary design). To leverage the insights of this research, however, it is necessary to find practical ways to incorporate the results as guidance into the software development process.