Informatik
Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment.
In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base via the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
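Conceptually, the bot's cycle (detect a code smell, package the change, offer it for asynchronous review) can be sketched as follows; the unused-import check and the pull-request payload fields are simplified assumptions for illustration, not the Refactoring-Bot's actual implementation:

```python
import ast

# Minimal sketch of the described workflow: detect a code smell and
# package the proposed refactoring as a pull-request-like payload that
# a developer can review at any time. Smell check and payload fields
# are simplified assumptions.
def find_unused_imports(source: str) -> list:
    """Naive smell check: top-level imports whose names are never used."""
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return sorted(imported - used)

def propose_refactoring(filename: str, source: str) -> dict:
    """Build a pull-request-like payload for asynchronous review."""
    smells = find_unused_imports(source)
    return {
        "title": f"Refactoring: remove unused imports in {filename}",
        "body": "Automated proposal by the bot; review at your convenience.",
        "smells": smells,
    }

pr = propose_refactoring("demo.py", "import os\nimport sys\nprint(sys.path)\n")
print(pr["smells"])  # → ['os']
```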
Serverless computing is an emerging cloud computing paradigm with the goal of freeing developers from resource management issues. As of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other. These workloads benefit from on-demand and elastic compute resources as well as per-function billing. However, it is still an open research question to which extent parallel applications, which most often comprise complex coordination and communication patterns, can benefit from serverless computing.
In this paper, we introduce serverless skeletons for parallel cloud programming to free developers from both parallelism and resource management issues. In particular, we investigate the well-known and widely used farm skeleton, which supports the implementation of a wide range of applications. To evaluate our concepts, we present a prototypical development and runtime framework and implement two applications based on our framework: numerical integration and hyperparameter optimization, a commonly applied technique in machine learning. We report on performance measurements for both applications and discuss the usefulness of our approach.
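The farm skeleton mentioned above can be sketched in a few lines; here a thread pool stands in for per-task serverless function invocations, and numerical integration serves as the example workload (an illustrative sketch, not the paper's framework):

```python
from concurrent.futures import ThreadPoolExecutor

# Farm skeleton sketch: apply a worker function independently to each
# task in parallel. A thread pool stands in for one serverless function
# invocation per task.
def farm(worker, tasks, n_workers=4):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, tasks))

# Example task: midpoint-rule integration of x^2 on a sub-interval
def integrate_chunk(bounds, n=1000):
    a, b = bounds
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += x * x * h
    return total

chunks = [(i / 4, (i + 1) / 4) for i in range(4)]  # split [0, 1] into 4 tasks
result = sum(farm(integrate_chunk, chunks))
print(round(result, 4))  # → 0.3333 (the integral of x^2 on [0, 1] is 1/3)
```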
We introduce IPA-IDX – an approach to handle index modifications on modern storage technologies (NVM, Flash) as physical in-place appends, using simplified physiological log records. IPA-IDX provides similar performance and longevity advantages for indexes as basic IPA [5] does for tables. The selective application of IPA-IDX and basic IPA to certain regions and objects lowers the GC overhead by over 60%, while keeping the total space overhead at 2%. The combined effect of IPA and IPA-IDX increases performance by 28%.
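The in-place append idea can be illustrated with a toy page model; the page layout, slot counts, and numbers below are ours for illustration, not IPA-IDX internals:

```python
# Toy illustration of the in-place append (IPA) idea: instead of
# rewriting a whole index page on every modification, delta records are
# appended into a pre-reserved area of the same page; the page is only
# consolidated (a GC-relevant full write) once that area is full.
class IpaPage:
    def __init__(self, records, reserve_slots=4):
        self.records = list(records)       # base page content
        self.deltas = []                   # appended physiological log records
        self.reserve_slots = reserve_slots
        self.rewrites = 0                  # count of full-page writes

    def modify(self, key, value):
        if len(self.deltas) >= self.reserve_slots:
            self._consolidate()            # reserve area full: rewrite page
        self.deltas.append((key, value))   # otherwise: cheap in-place append

    def _consolidate(self):
        merged = dict(self.records)
        merged.update(dict(self.deltas))
        self.records = list(merged.items())
        self.deltas = []
        self.rewrites += 1

page = IpaPage([("a", 1), ("b", 2)])
for i in range(6):
    page.modify("a", i)
print(page.rewrites)  # → 1 (six modifications caused only one full-page write)
```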
The goal of the presented project is to develop the concept of home e-health centers for barrier-free and cross-border telemedicine. AAL technologies are already on the market, but there is still a gap to close before they can be used for ordinary patient needs. The general idea needs to be accompanied by new services, which should be brought together in order to provide full service coverage for the users. Sleep and stress were chosen as predominant influences in the population. The scientific study of available home devices for sleep analysis provided the necessary basis for selecting appropriate devices. The first choice for the project implementation is the EMFIT QS+ device. This equipment forms part of a complete system that a home telemedical hospital can provide, at a level of precision and communication with internal and/or external health services.
In a time of upheaval and digitalization, new business models play an important role for companies. Decentralized power generation and energy efficiency indicators to achieve climate goals and to reduce global warming are currently forcing energy companies to develop new business models. In recent years, many methods of business model development have been introduced to create new business ideas. But what are the obstacles to implementing these business models in the energy sector to develop new business opportunities? And what challenges do companies face in this respect? To answer these questions, a systematic literature review was conducted in this paper. As a result, eight categories were identified which summarise the main barriers to the implementation of new business models in the energy domain.
The energy turnaround, digitalization, and decreasing revenues force enterprises in the energy domain to develop new business models. Business models for renewable energy are built on a different logic than business models for large-scale power plants. Following a design science research approach, we first examined the business models of three enterprises in the energy domain. We identified that these business models result in complex ecosystems with multiple actors and difficult relationships between them. One cause is the fast-changing and complicated state regulation in Germany. To address this problem, we captured the requirements together with the partner enterprises in a second phase. We then developed the prototype Business Model Configurator (BMConfig), based on the e3Value ontology, on the metamodelling platform ADOxx. We demonstrate the feasibility of our approach with the business model of an energy efficiency service based on smart meter data.
The relevance of technology knowledge for digital transformation, especially in small and medium-sized enterprises (SMEs) that are still largely dependent on physical human capital, has become increasingly obvious. This is due to the rapid revolution in the business environment, coupled with a growing number of living examples of firms disrupted by advances in technological knowledge. Consequently, we find it progressively vital for SMEs to spot and mitigate threats and to take advantage of opportunities arising from the dynamism of digital transformation.
Our study aims at exploring the relevance of technology knowledge in SMEs for digital transformation to uncover the opportunities, roadmaps, and models that SMEs can take advantage of in the digital transformation and gain a competitive edge.
We conclude that, despite the relevance of technology knowledge for digital transformation, coupled with its low costs and accessibility, SMEs are yet to realize the full potential of technological knowledge. This is mainly because technologies appear, change, and vanish so rapidly in the digital age that gaining a proper understanding without dedicated resources is utterly difficult for SMEs, making them less competitive than incumbent large firms in the market.
Small and medium-sized enterprises (SMEs), which play a substantial role in the development of any economy, have been on the rise in recent years. At the same time, these enterprises are faced with a myriad of challenges which could potentially be solved through the adoption of technology. Nonetheless, it has been observed that new technological uptake among SMEs remains limited, with the majority of them opting to maintain the status quo with regard to technology awareness and innovation strategies.
In a literature review, this paper explores three major dynamics curtailing the adoption of new technologies by SMEs in manufacturing: knowledge absorptive capacity and management factors, organisational structures, and technological awareness. Firstly, with regard to knowledge absorptive capacity and management factors, this study shows how these factors drive innovation potential in SMEs.
Secondly, with regard to technological awareness factors, this study documents how perceived usefulness, costs, network and infrastructure, education and skills, training and attitude, as well as knowledge, influence the adoption of new technologies among SMEs worldwide. Lastly, the study concludes by analysing how organisational structures drive the innovation potential of SMEs in the wake of swift and profound technological changes in the market.
While Microservices promise several beneficial characteristics for sustainable long-term software evolution, little empirical research covers what concrete activities industry applies for the evolvability assurance of Microservices and how technical debt is handled in such systems. Since insights into the current state of practice are very important for researchers, we performed a qualitative interview study to explore applied evolvability assurance processes, the usage of tools, metrics, and patterns, as well as participants’ reflections on the topic. In 17 semi-structured interviews, we discussed 14 different Microservice-based systems with software professionals from 10 companies and how the sustainable evolution of these systems was ensured. Interview transcripts were analyzed with a detailed coding system and the constant comparison method.
We found that especially systems for external customers relied on central governance for the assurance. Participants saw guidelines like architectural principles as important to ensure a base consistency for evolvability. Interviewees also valued manual activities like code review, even though automation and tool support were described as very important. Source code quality was the primary target for the usage of tools and metrics. Despite most reported issues being related to Architectural Technical Debt (ATD), our participants did not apply any architectural or service-oriented tools and metrics. While participants generally saw their Microservices as evolvable, service cutting and finding an appropriate service granularity with low coupling and high cohesion were reported as challenging. Future Microservices research in the areas of evolution and technical debt should take these findings and industry sentiments into account.
To remain competitive in a fast-changing environment, many companies have started to migrate their legacy applications towards a Microservices architecture. Such extensive migration processes require careful planning and consideration of implications and challenges alike. In this regard, hands-on experiences from industry practice are still rare. To fill this gap in the scientific literature, we contribute a qualitative study on intentions, strategies, and challenges in the context of migrations to Microservices. We investigated the migration process of 14 systems across different domains and sizes by conducting 16 in-depth interviews with software professionals from 10 companies. Along with a summary of the most important findings, we present a separate discussion of each case. Maintainability and scalability were identified as the primary migration drivers. Due to the high complexity of their legacy systems, most companies preferred a rewrite using current technologies over splitting up existing code bases. This was often caused by the absence of a suitable decomposition approach. As such, finding the right service cut was a major technical challenge, next to building the necessary expertise with new technologies. Organizational challenges were especially related to large, traditional companies that simultaneously established agile processes. Initiating a mindset change and ensuring smooth collaboration between teams were crucial for them. Future research on the evolution of software systems can profit in particular from the individual cases presented.
A clinically useful system for individual continuous health data monitoring needs an architecture that takes into account all relevant medical and technical conditions. The requirements for a health app to support such a system are collected, and a vendor-independent architecture is designed that allows the collection of vital data from arbitrary wearables using a smartphone. A prototypical implementation for the main scenario shows the feasibility of the approach.
Assistive environments are entering our homes faster than ever. However, there are still various barriers to be broken. One of the crucial points is the personalization of offered services and the integration of assistive technologies into common objects and therefore into the regular daily routine. Recognition of sleep patterns for a preliminary sleep study is one of the health services that could be performed in an unobtrusive way. This article proposes a hardware system for the measurement of the bio-vital signals necessary for an initial sleep study in a nonobtrusive way. The first results confirm the potential of measuring breathing and movement signals with the proposed system.
In summary, we believe that current “sleep monitoring” consumer devices on the market must undergo a more robust validation process before being made available and distributed to the general public. This is especially noteworthy as there have been first reports in the literature that inaccurate feedback from such consumer devices can worry subjects and may even lead to compromised well-being of the user.
In two studies, the influence of technology on sleep was analyzed. The first concerns the effect of light on the circadian rhythm and, as a consequence, on the sleep quality of persons in a vegetative state. The second, which is still running, surveys the influence of several technical tools on the sleep of elderly people living in a nursing home.
Type 1 diabetes is a chronic and life-threatening disease: an adjusted treatment and proper management of the disease are crucial to prevent or delay the complications of diabetes. Although the development of the artificial pancreas has brought great advances in diabetes care during the last decade, multiple daily injections therapy still represents the most widely used treatment option for type 1 diabetes. This work presents the proposal and first development stages of an application focused on guiding patients in using continuous glucose monitors and smart pens together with insulin and carbohydrate recommendations. Our proposal aims to develop a platform that integrates a series of rigorously tested, innovative machine learning models and tools with the use of the latest IoT devices to manage type 1 diabetes. The resulting system actually closes the loop, like the artificial pancreas, but in an intermittent way.
This paper investigates the possibility to effectively monitor and control the respiratory action using a very simple and non-invasive technique based on a single lightweight, reduced-size wireless surface electromyography (sEMG) sensor placed below the sternum. Due to the critical sensor position, the captured sEMG signal is characterized by a low energy level and is affected by motion artifacts and cardiac noise. In this work we present a preliminary study performed on adults to assess the correlation between the spirometry signal and the sEMG signal after removal of the superimposed heart signal. This study and the related findings could be useful for the respiratory monitoring of preterm infants.
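The principle of separating a slow respiratory component from a superimposed faster cardiac-like component can be illustrated with a simple moving-average sketch on synthetic signals; the study's actual sEMG processing is more sophisticated, and the sampling rate and frequencies below are assumptions:

```python
import math

# Toy sketch: recover a slow (respiration-like) component from a mix
# with a faster (cardiac-like) component by moving-average smoothing.
# Sampling rate and frequencies are illustrative assumptions.
def moving_average(signal, window):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

fs = 100  # Hz (assumed sampling rate)
t = [i / fs for i in range(10 * fs)]
resp = [math.sin(2 * math.pi * 0.25 * x) for x in t]          # ~15 breaths/min
cardiac = [0.3 * math.sin(2 * math.pi * 1.2 * x) for x in t]  # ~72 bpm
mixed = [r + c for r, c in zip(resp, cardiac)]

# A window of roughly one cardiac period averages the cardiac part away
recovered = moving_average(mixed, window=int(fs / 1.2))
err = max(abs(a - b) for a, b in zip(recovered[fs:-fs], resp[fs:-fs]))
print(err < 0.1)  # → True (interior samples track the respiratory component)
```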
The potentials and opportunities created by digitized healthcare can be further customized through smart data processing and the analysis of accurate patient information. This development and the associated new treatment concepts based on digital smart sensors can increase motivation by applying gamification approaches. This effect can also be used in the field of medical treatment, e.g. with the help of a digital spirometer combined with an app. In one of our exemplary applications, we show how to control an airplane within an app by breathing, i.e. inhaling and exhaling. Using this biofeedback within a game allows us to increase the motivation and fun for children that need to perform necessary exercises.
Due to the rising need for palliative care in Russia, it is crucial to provide timely and high-quality solutions for patients, relatives, and caregivers. A methodology for the remote monitoring of patients in need of palliative care will be developed, along with the requirements for a hardware-software complex for the remote monitoring of patients' health at home.
A large body of literature is concerned with models of presence, the sensory illusion of being part of a virtual scene, but there is still no general agreement on how to measure it objectively and reliably. For the presented study, we applied contemporary theory to measure presence in virtual reality. Thirty-seven participants explored an existing commercial game in order to complete a collection task. Two startle events were naturally embedded in the game progression to evoke physical reactions, and head tracking data was collected in response to these events. Subjective presence was recorded using a post-study questionnaire and real-time assessments. Our novel implementation of behavioral measures led to insights which could inform future presence research: we propose a measure in which startle reflexes are evoked through specific events in the virtual environment, and head tracking data is compared to the range and speed of baseline interactions.
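A minimal sketch of such a behavioral measure, comparing post-startle head-movement speed against baseline; the (t, x, y, z) sample format, window length, and synthetic data are assumptions for illustration, not the study's pipeline:

```python
# Sketch of a startle-based behavioral presence measure: head-movement
# speed right after a startle event is compared against the pre-event
# baseline. Sample format and window length are assumptions.
def head_speed(samples):
    """Mean speed over a list of (t, x, y, z) head-position samples."""
    speeds = []
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        dist = sum((a - b) ** 2 for a, b in zip(p1, p0)) ** 0.5
        speeds.append(dist / (t1 - t0))
    return sum(speeds) / len(speeds)

def startle_response(track, event_t, window=1.0):
    """Ratio of post-event to baseline head speed (>1 suggests a reaction)."""
    baseline = [s for s in track if s[0] < event_t]
    post = [s for s in track if event_t <= s[0] < event_t + window]
    return head_speed(post) / head_speed(baseline)

# Synthetic track: slow drift before the event at t = 5 s, jerk afterwards
pre = [(i * 0.1, 0.001 * i, 0.0, 0.0) for i in range(50)]
post = [(5.0 + i * 0.1, 0.05 * i, 0.0, 0.0) for i in range(10)]
ratio = startle_response(pre + post, event_t=5.0)
print(ratio > 1)  # → True (clear movement response to the startle)
```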
OR-Pad - development of a prototype for a sterile information display at the surgical site: meeting abstract
(2019)
Background: Information from the patient record or from imaging procedures is often displayed only on monitors located quite far away from the surgical field, outside the surgeon's ergonomic line of sight. As a result, relevant information is overlooked or its informational potential cannot be fully exploited. Notes brought along in paper form remain outside the sterile area during the operation and are therefore not readily accessible to the surgeon. For intraoperative entries in the surgical documentation, the surgeon likewise depends on the help of the assisting staff. These additional communication paths cause additional personnel and time expenditure, and the potential for errors increases. The application-oriented research project OR-Pad (use of portable information displays in the operating room) is intended to provide the surgeon with an improved information flow. The idea arose from the clinical routine of the Anatomy and Urology departments of the University Hospital Tübingen and is now being further developed into a high-fidelity prototype at Reutlingen University, funded by the Ministry of Science, Research and the Arts of Baden-Württemberg and by the European Regional Development Fund.
Objective: The goal of the OR-Pad project is to display the clinically relevant information for the current point in time in the immediate vicinity of the surgeon during an operation. The system is intended to optimize the information flow between the intervention and its preparation and follow-up. The surgeon should be able to select relevant information in advance, such as current X-ray images or personal notes, for intraoperative display; this information is then shown on a sterile information display at the surgical site. Its positioning should enable an ergonomic line of sight as well as direct interaction with the system. Context-relevant information should be provided automatically based on the current course of the operation by developing a situation-recognition component. Optimizing the information flow also includes supporting the surgical documentation: during the intervention, entries such as timestamps or intraoperative images should be created manually by the surgeon as well as automatically by the system. After the intervention, the surgical documentation should be generated from these entries, making the process higher in quality and more time-efficient.
Methods: To achieve this goal, the clinical requirements are first specified and transferred into a requirements specification. For this purpose, interviews and observations during several interventions are conducted. Following the user-centered design process, personas and usage scenarios are drafted and evaluated with the clinical project partners in several iterations. An information architecture is to be built that allows the embedding of clinical information systems as well as image and device data from the OR network. A situation-recognition component based on process models is to be developed to estimate the progress of the operation. Suitable mounting mechanisms are to be used to attach the information display. The OR-Pad system is to be tested continuously in the teaching and research OR of Reutlingen University and coordinated with the clinical project partners in the spirit of agile product development. Finally, the functional prototype is to be tested and evaluated in the experimental ORs of the Anatomy department in Tübingen.
Results: An initial data collection by means of contextual inquiry captured the first requirements for the OR-Pad system, resulting in a low-fidelity prototype. Its evaluation via expert interviews led to the second iteration, in which the concept was adapted according to the results. Further data for creating scenarios for the intraoperative use cases were collected during observations at the University Hospital Tübingen. Based on the requirements, a concept for the user interface was designed, which will be evaluated with the clinical project partners as the project proceeds.
Business process models provide a considerable number of benefits for enterprises and organizations, but the creation of such models is costly and time-consuming, which slows down the organizational adoption of business process modeling. Social paradigms pave new ways for business process modeling by integrating stakeholders and leveraging knowledge sources. However, empirical research about the impact of social paradigms on the costs of business process modeling is sparse. A better understanding of their impact could help to reduce the cost of business process modeling and improve decision-making on BPM activities. The paper contributes to this field by reporting on an empirical investigation, conducted via survey research, of the perceived influence of different cost factors among experts. Our results indicate that different cost components, as well as the use of social paradigms, influence cost.
Companies are continuously changing their strategy, processes, and information systems to benefit from the digital transformation. Controlling the digital architecture and its governance is the fundamental goal. Enterprise Governance, Risk and Compliance (GRC) systems are vital for managing the digital risks threatening modern enterprises from many different angles. The most significant constituent of GRC systems is the definition of controls that are implemented on different layers of a digital Enterprise Architecture (EA). As part of the compliance aspect of GRC, the effectiveness of these controls is assessed and reported to the relevant management bodies within the enterprise. In this paper, we present a metamodel which links controls to the affected elements of a digital EA and supplies a way of expressing associated assessment techniques and results. We complement the metamodel with an expository instantiation of a control compliance cockpit in an international insurance enterprise.
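The described link between controls, EA elements, and assessment results can be sketched as a minimal data model; class and field names are illustrative assumptions, not the paper's metamodel:

```python
from dataclasses import dataclass, field

# Hedged sketch of the idea: a control references the EA elements it
# affects and carries its assessment results, so effectiveness can be
# reported per control. Names are illustrative assumptions.
@dataclass
class EAElement:
    name: str
    layer: str          # e.g. "business", "application", "infrastructure"

@dataclass
class Assessment:
    technique: str      # e.g. "self-assessment", "audit"
    effective: bool

@dataclass
class Control:
    control_id: str
    affected: list = field(default_factory=list)     # EAElement instances
    assessments: list = field(default_factory=list)  # Assessment instances

    def is_effective(self) -> bool:
        """A control counts as effective only if all assessments say so."""
        return bool(self.assessments) and all(
            a.effective for a in self.assessments
        )

crm = EAElement("CRM system", "application")
ctrl = Control("AC-01", affected=[crm], assessments=[Assessment("audit", True)])
print(ctrl.is_effective())  # → True
```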
The invention relates to a method for the extrinsic calibration of at least one imaging sensor, whereby a pose of the at least one imaging sensor relative to the origin (U) of a three-dimensional coordinate system of a handling device is determined by means of a computing device, wherein known three-dimensional coordinates concerning the position of at least one joint of the handling device are taken into account by the computing device, wherein two-dimensional coordinates concerning the position of the at least one joint are determined from raw data of the at least one imaging sensor, and wherein the computing device determines the pose of the at least one imaging sensor from the correspondence between the two-dimensional coordinates and the three-dimensional coordinates.
Additive manufacturing (AM) is a promising manufacturing method for many industrial sectors. For industrial application, requirements such as high production volumes and coordinated implementation must be taken into account. These tasks of the internal management of production facilities are carried out by the Production Planning and Control (PPC) information system. A key factor in planning and scheduling is the exact calculation of manufacturing times. For this purpose, we investigate the use of Machine Learning (ML) for predicting the manufacturing times of AM facilities.
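As a sketch of the idea, a manufacturing-time predictor can be as simple as a least-squares fit on one job feature; the feature (part volume), the toy data, and the linear model are illustrative assumptions, not the investigated ML pipeline:

```python
# Hedged sketch: predicting AM build time from a single job feature
# with ordinary least squares. Feature choice and data are assumptions.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Toy training data: part volume (cm³) vs. observed build time (h)
volumes = [10, 20, 40, 80]
hours = [1.2, 2.1, 4.2, 8.1]
a, b = fit_linear(volumes, hours)
print(round(a * 60 + b, 1))  # → 6.1 (predicted hours for a 60 cm³ part)
```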
Autism spectrum disorders (ASD) affect a large number of children both in the Russian Federation and in Germany. Early diagnosis is key for these children, because the sooner parents notice such disorders in a child and the rehabilitation and treatment program starts, the higher the likelihood of the child's social adaptation. The difficulties in raising such a child lie in the complexity of teaching the child outside of children's groups and the complexity of the child's medical care. In this regard, the development of digital applications that facilitate the medical care and education of such children at home is important and relevant. The purpose of the project is to improve the availability and quality of healthcare and of social adaptation at home for children with ASD through the use of digital technologies.
The rise of digital technologies has become an important driver of change in multiple industries. Therefore, firms need to develop digital capabilities to manage the transformation process successfully. Prior research assumes that the development of a specific set of digital capabilities leads to higher digital maturity. However, a measurement framework for digital maturity does not yet exist in scholarly work. Therefore, this paper develops a conceptualization and measurement model for digital maturity.
This book contains the proceedings of the KES International conferences on Innovation in Medicine and Healthcare (KES-InMed-19) and Intelligent Interactive Multimedia Systems and Services (KES-IIMSS-19), held on 17–19 June 2019 and co-located in St. Julians, on the island of Malta, as part of the KES Smart Digital Futures 2019 multi theme conference.
The major areas covered by KES-InMed-19 include: Digital IT Architecture in Healthcare; Advanced ICT for Medical and Healthcare; Biomedical Engineering, Trends, Research and Technologies and Healthcare Support System. The major areas covered by KES-IIMSS-19 were: Interactive Technologies; Artificial Intelligence and Data Analytics; Intelligent Services and Architectures and Applications.
This book is of use to researchers in these vibrant areas, managers, industrialists and anyone wishing to gain an overview of the latest research in these fields.
Due to the consequential impact of technological breakdowns, companies have to be prepared to deal with breakdowns or, even better, to prevent them. In today's information technology, several methods and tools exist to mitigate this concern. This paper therefore deals with the initial determination of a resilient enterprise architecture supporting predictive maintenance in the information technology domain and, furthermore, discusses several mechanisms for reactively and proactively securing the state of resiliency on several abstraction levels. The objective of this paper is to give an overview of existing mechanisms for resiliency and to describe the foundation of an optimized approach combining infrastructure and process mining techniques.
The promise of immutable documents to make it easier and less expensive for consumers and producers to collaborate in a verifiable way would represent enormous progress, especially as companies strive to establish service contracts based on the flow of many small transactions using machine-to-machine communication. Blockchain technology logs these data, verifies their authenticity, and makes them available for service offers. This work presents an architecture for setting up order processing between consumers and producers using a blockchain. In this way, the technical feasibility is shown, and the special characteristics of blockchain production networks are discussed.
The increasing heterogeneity of students at German universities of applied sciences and the growing importance of digitization call for a rethinking of teaching and learning in higher education. In the coming years, changing the learning ecosystem by developing and reflecting upon new teaching and learning techniques using methods of digitalization will be both most relevant and very challenging. The following article introduces two different learning scenarios, which exemplify the implementation of new educational models that allow discontinuity of time and place, technology, and process in teaching and learning. Within a blended learning approach, the first learning scenario aims at adapting and individualizing knowledge transfer in the course Foundations of Computer Science by providing knowledge individually and situation-specifically. The second learning scenario proposes a web-based tool to facilitate digital learning environments and thus digital learning communities and computer-supported learning. The overall aim of both learning scenarios is to enhance learning for diverse groups by providing a different smart learning ecosystem, stepping away from a teacher-based towards a student-centered approach. Both learning scenarios exemplify the educational vision of Reutlingen University: its development into an interactive university.
While the recently emerged microservices architectural style is widely discussed in literature, it is difficult to find clear guidance on the process of refactoring legacy applications. The importance of the topic is underpinned by high costs and effort of a refactoring process which has several other implications, e.g. overall processes (DevOps) and team structure. Software architects facing this challenge are in need of selecting an appropriate strategy and refactoring technique. One of the most discussed aspects in this context is finding the right service granularity to fully leverage the advantages of a microservices architecture. This study first discusses the notion of architectural refactoring and subsequently compares 10 existing refactoring approaches recently proposed in academic literature. The approaches are classified by the underlying decomposition technique and visually presented in the form of a decision guide for quick reference. The review yielded a variety of strategies to break down a monolithic application into independent services. With one exception, most approaches are only applicable under certain conditions. Further concerns are the significant amount of input data some approaches require as well as limited or prototypical tool support.
Information technology (IT) plays an essential role in organizational innovation adoption. As such, IT governance (ITG) is paramount in accompanying IT to allow innovation. However, the traditional concept of ITG to control the formulation and implementation of IT strategy is not fully equipped to deal with the current changes occurring in the digital age. Today’s ITG needs an agile approach that can respond to changing dynamics. Consequently, companies are relying heavily on agile strategies to secure better company performance. This paper aims to clarify how organizations can implement agile ITG. To do so, this study conducted 56 qualitative interviews with professionals from the banking industry to identify agile dimensions within the governance construct. The qualitative evaluation uncovered 46 agile governance dimensions. Moreover, these dimensions were rated by 29 experts to identify the most effective ones. This led to the identification of six structure elements, eight processes, and eight relational mechanisms.
Many start-ups are in search of cooperation partners to develop their innovative business models. In response, incumbent firms are introducing increasingly more cooperation systems to engage with start-ups. However, many of these cooperations end in failure. Although qualitative studies on cooperation models have tried to improve the effectiveness of incumbent start-up strategies, only a few have empirically examined start-up cooperation behavior. Considering the lack of adequate measurement models in current research, this paper focuses on developing a multi-item scale on cooperation behavior of start-ups, drawing from a series of qualitative and quantitative studies. The resultant scale contributes to recent research on start-up cooperation and provides a framework to add an empirical perspective to current research.
Virtual Reality (VR) technology has the potential to support knowledge communication in several sectors. Still, when educators use immersive VR technology to present their knowledge, their audience within the same room may no longer be able to see them because of the head-mounted displays (HMDs) they wear. In this paper, we propose the Avatar2Avatar system and design, which augments the visual aspect of such a knowledge presentation. Avatar2Avatar enables users to see both a realistic representation of their respective counterpart and the virtual environment at the same time. We point out several design aspects of such a system and address design challenges and possibilities that arose during implementation. We specifically explore opportunities of a system design for integrating 2D video avatars into existing roomscale VR setups. An additional user study indicates a positive impact on spatial presence when using Avatar2Avatar.
Representing users within an immersive virtual environment is an essential functionality of a multi-person virtual reality system. Especially when communicative or collaborative tasks must be performed, challenges arise in realistically embodying and integrating such avatar representations. A shared comprehension of local space and non-verbal communication (like gestures, posture or self-expressive cues) can support these tasks. In this paper, we introduce a novel approach to creating realistic, video-texture-based avatars of colocated users in real time and integrating them into an immersive virtual environment. We show a straightforward, low-cost hardware and software solution to do so. We discuss technical design problems that arose during implementation and present a qualitative analysis of the usability of the concept from a user study, applying it to a training scenario in the automotive sector.
In this paper, we introduce an approach that uses reinforcement learning to achieve interoperability between heterogeneous Internet of Things (IoT) components. More specifically, we model an HTTP REST service as a Markov Decision Process and adapt Q-Learning to the properties of REST so that an agent in the role of an HTTP REST client can learn the semantics of the service and, in particular, an optimal sequence of service calls to achieve an application-specific goal. With our approach, we want to open up and facilitate a discussion in the community, as we see the utilization of artificial intelligence techniques as key to achieving interoperability in the IoT.
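The core idea can be sketched as a tabular Q-learning loop over a toy model of a REST service as a Markov Decision Process. All endpoints, states, and rewards below are illustrative assumptions, not taken from the paper:

```python
import random

# Hypothetical service model: states are abstract service states, actions
# are REST calls, and reaching the goal state yields a reward of 1.
TRANSITIONS = {
    ("start", "POST /orders"): ("order_created", 0.0),
    ("order_created", "PUT /orders/1/items"): ("items_added", 0.0),
    ("items_added", "POST /orders/1/checkout"): ("done", 1.0),
}
ACTIONS = ["POST /orders", "PUT /orders/1/items", "POST /orders/1/checkout"]

def step(state, action):
    """Simulated service: invalid calls leave the state unchanged with a small penalty."""
    return TRANSITIONS.get((state, action), (state, -0.1))

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Epsilon-greedy tabular Q-learning over the simulated service."""
    random.seed(0)
    q = {}
    for _ in range(episodes):
        state = "start"
        for _ in range(20):
            if state == "done":
                break
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def greedy_sequence(q):
    """Read off the learned optimal sequence of service calls."""
    state, seq = "start", []
    while state != "done" and len(seq) < 10:
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        seq.append(action)
        state, _ = step(state, action)
    return seq
```

After training, the greedy policy recovers the call sequence that reaches the application-specific goal.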
Interoperability is an important topic in the Internet of Things (IoT), because this domain incorporates diverse and heterogeneous objects, communication protocols and data formats. Many models and classification schemes have been proposed to make the degree of interoperability measurable, however only on the basis of a hierarchical scale. In this paper we introduce a novel approach to measuring the degree of interoperability as a metrically scaled quantity. We consider the IoT as a distributed system in which interoperable objects exchange messages with each other. Under this premise, we interpret messages as operation calls and formalize this view as a causal model. The analysis of this model enables us to quantify the interoperable behavior of communicating objects.
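As an illustration of what a metrically scaled interoperability measure could look like: the paper's actual metric is not reproduced here; this simple ratio, and the message and operation names, are assumptions for demonstration only. Messages are interpreted as operation calls, and the measure is the fraction of sent messages the receiver can interpret, yielding a value in [0, 1] rather than a hierarchical level:

```python
def interoperability_degree(sent_messages, understood_operations):
    """Illustrative metric: fraction of exchanged messages the receiver
    can interpret as one of its operation calls (value in [0, 1])."""
    if not sent_messages:
        return 1.0  # nothing sent, nothing misunderstood
    interpretable = sum(1 for m in sent_messages if m in understood_operations)
    return interpretable / len(sent_messages)

# Two hypothetical IoT objects exchanging operation calls:
sent = ["getTemperature", "setThreshold", "getHumidity", "reboot"]
receiver_ops = {"getTemperature", "setThreshold", "reboot"}
degree = interoperability_degree(sent, receiver_ops)  # 3 of 4 calls understood
```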
Artifact correction and refined metrics for an EEG-based drowsiness detection system
(2019)
Research question: Drowsiness is an often underestimated yet major problem in road traffic. Of the roughly 2.5 million traffic accidents in Germany in 2015, 2,898 accidents with a total of 59 fatalities (~1.7% of road deaths) were attributable to overtiredness. Estimates assume an unreported rate of up to 20%. In a first study of our own, we examined whether a mobile EEG can reliably detect states of drowsiness in a driving simulator. The detection rate was only 61%. The goal of this work is to improve the measurement system used: its accuracy is increased by means of artifact correction and refined quality metrics. Detected drowsiness is then indicated to the driver in an appropriate way so that he can react accordingly.
Patients and methods: Independent Component Analysis (ICA) is a multivariate method for analyzing several random variables and is applied to the sequences here. To obtain the required frequency values, a Fourier transform was performed for each EEG channel. To decide whether a driver is currently drowsy or awake, the resulting feature vector is classified for each sequence by a trained machine-learning algorithm, an artificial neural network, which is also able to assign unseen data sets to the classes. For training, previously created feature vectors are labeled with the classes "awake" and "drowsy". These data are shuffled randomly and split 2:1 into a training set and a test set. The experiment was conducted with eight subjects, each completing two 45-minute test drives.
Results: The complete data set consists of 150,000 signal values, which are aggregated into roughly 7,000 sequences. After applying the quality metric, 4,370 sequences remain for training. There are clear differences in sequences invalidated by EEG artifacts: in the "awake" state, three times as many sequences are discarded as in the "drowsy" state. Overall, about 50% of the sequences of awake subjects are discarded on average, compared with only 25% for drowsy subjects. On average, the system achieves a detection rate of 73% for both states. Comparing only "awake" against "drowsy" and leaving out "slight drowsiness", the results exceed 90%.
Conclusions: The results show that attention decreases, i.e. drowsiness increases, over the course of the experiment. This is evidenced on the one hand by subjective and objective observations of signs of drowsiness, and on the other hand by measurable and classifiable differences in the EEG signal. The theta waves used as features showed a lower amplitude toward the end of the experiment. Extending the binary classification further stabilizes the results, and artifact correction and quality metrics further increase data quality. The developed drowsiness detection application detects measurable signs of drowsiness and can make a sound decision about fitness to drive.
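The per-channel frequency features mentioned in the methods section can be sketched as a band-power computation over a sampled EEG window. This is a naive DFT for brevity (a real system would use an FFT), and the sampling rate, window length, and synthetic signal below are illustrative assumptions:

```python
import cmath
import math

def band_power(samples, fs, f_lo, f_hi):
    """Sum |X[k]|^2 over DFT bins whose frequency falls into [f_lo, f_hi].
    A sketch of a per-channel frequency feature such as theta-band power."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            x_k = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                      for t in range(n))
            power += abs(x_k) ** 2
    return power

# Synthetic one-second window at 128 Hz: a 6 Hz (theta) component
# plus a weaker 20 Hz (beta) component.
fs = 128
samples = [math.sin(2 * math.pi * 6 * t / fs)
           + 0.3 * math.sin(2 * math.pi * 20 * t / fs)
           for t in range(fs)]
theta = band_power(samples, fs, 4.0, 8.0)    # dominant in this window
beta = band_power(samples, fs, 13.0, 30.0)
```

Band powers like `theta` and `beta` would form one part of the feature vector handed to the classifier.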
Purpose – Many start-ups are in search of cooperation partners to develop their innovative business models. In response, incumbent firms are introducing increasingly more cooperation systems to engage with start-ups. However, many of these cooperations end in failure. Although qualitative studies on cooperation models have tried to improve the effectiveness of incumbent start-up strategies, only a few have empirically examined start-up cooperation behavior. The paper aims to discuss these issues.
Design/methodology/approach – The research draws from a series of qualitative and quantitative studies. The scale dimensions are identified in an interview-based qualitative study; subsequent workshops and questionnaire-based studies identify and rank factors. These ranked factors are then used to build a measurement scale that is integrated into a standardized online questionnaire addressed to start-ups. The gathered data are then analyzed using PLS-SEM.
Findings – The research was able to build a multi-item scale for start-up cooperation behavior, which can be used in future research. The paper also provides a causal analysis of the impact of cooperation behavior on start-up performance. The research finds that the identified dimensions are suitable for measuring cooperation behavior and shows a minor positive effect on start-up performance.
Originality/value – The research addresses the lack of empirical research on the cooperation between start-ups and established firms. Moreover, most past studies focus on organizational structures and their performance when addressing these cooperations. Although past studies identified start-up behavior as a relevant factor, no empirical research has been conducted on the topic yet.
Digitalization of products and services commonly causes substantial changes in business models, operations, organization structures and IT infrastructures of enterprises. Motivated by experiences and observations from digitalization projects, the paper investigates the effects of digitalization on enterprise architectures (EA). EA models serve as representations of business, information system and technical aspects of an enterprise to support management and development. By comparing EA models before and after digitalization, the paper analyzes the kinds of changes visible in the EA model. The most important finding is that newly created digitized products and the associated product and enterprise architectures are no longer properly integrated into the overall architecture and even exist in parallel. Thus, the focus of this work is on exposing these parallel architectures and deriving proposals for better integration.
Potentials of smart contracts-based disintermediation in additive manufacturing supply chains
(2019)
We investigate which potentials are created by using smart contracts for disintermediation in supply chains for additive manufacturing. Using a qualitative, critical realist research approach, we analyzed three case studies with companies active in additive manufacturing. Based on interviews with experts from these companies, we were able to identify eight key requirements for disintermediation and to associate them with four potentials of smart contract-based disintermediation.
The cloud has evolved into an attractive execution environment for parallel applications from the High Performance Computing (HPC) domain. Existing research recognized that parallel applications require architectural refactoring to benefit from cloud-specific properties (most importantly elasticity). However, architectural refactoring comes with many challenges and cannot be applied to all applications due to fundamental performance issues. Thus, during the last years, different cloud migration strategies have been considered for different classes of parallel applications. In this paper, we provide a survey of HPC cloud migration research. We investigate the approaches applied and the parallel applications considered. Based on our findings, we identify and describe three cloud migration strategies.
To support the surgeon, a patient-centered information display is being developed that can provide context-relevant information according to the current situation. For this purpose, a situation recognition is to be designed that can be transferred to different intraoperative processes. The goal of the adaptive situation recognition is to recognize specific situations from intraoperative information of different data sources in the operating room. During data collection and analysis, use cases for the situation recognition were defined, and surgical process models were created that map intraoperative events. Based on this information, a concept was designed that initially focuses on recognizing abstract, generalized phases independent of the intervention and can be specified step by step down to granular process steps. This flexibility is intended to make the concept transferable to intraoperative processes and thus to support the surgeon with context-relevant information in a targeted manner. The concept will be further developed in future steps.
Private equity (PE) firms are investment firms that acquire equity shares in companies. The goal of PE firms is to exit the investment after a few years with a substantial increase in value. PE firms often claim to outperform the market, i.e. to create alpha.
The overall aim of this paper is to unravel the mystery of value creation in the PE industry. First, the author presents a conceptual framework for value creation in the PE industry based on a multiple valuation model that breaks down value creation into different elements. Second, the paper evaluates whether PE firms really create value by analysing and combining results from prior empirical studies based on the conceptual framework.
The results show that existing empirical evidence is mixed, but that there is a tendency toward positive evidence that PE firms create economic value on average. However, there are methodological difficulties in measuring value creation, and studies are often subject to bias. Finally, it is pointed out that the question of whether PE firms really create value has to be viewed from different perspectives, such as those of the PE firm, the investors and the portfolio companies.
Autism spectrum disorders (ASD) in children are often diagnosed too late, and accompanying this chronic condition is difficult. The presented approach allows children to be treated in their familiar home environment and attempts to work out the relationships between sleep and behavior. The insights gained are intended to improve the patients' quality of life and to assist the parents. The necessary infrastructural support is provided by medical professionals, who can rely on a web-based service that accompanies all processes (diagnostics, data acquisition and recording, training, etc.). The anonymized data are stored centrally in a diagnostic system and can thus be used for future treatment strategies. The comprehensive solution builds on central elements of smart homes and ambient assisted living (AAL).
Semi-automated image data labelling using AprilTags as a pre-processing step for machine learning
(2019)
Data labelling is a pre-processing step to prepare data for machine learning. There are many ways to collect and prepare such data, but they are usually associated with considerable effort. This paper presents an approach to semi-automated image data labelling using AprilTags. The AprilTags attached to the object, each containing a unique ID, make it possible to link the object surfaces to a particular class. This approach is implemented and used to label data of a stackable box.
The data is evaluated by training a You Only Look Once (YOLO) network and subsequently evaluating the detection results. These results show that the semi-automatically collected and labelled data can certainly be used for machine learning. However, if distinctive features of an object surface are covered by the AprilTag, there is a risk that the affected class will not be recognized. It can be assumed that the labelled data can be used not only for YOLO, but also for other machine learning approaches.
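The step that turns tag detections into training labels can be sketched as follows. In the real pipeline an AprilTag detector supplies the detections; here the detection format, the tag-to-class map, and the image size are illustrative assumptions. Each box is converted into the normalized `class cx cy w h` line format that YOLO expects:

```python
def to_yolo_labels(detections, tag_to_class, img_w, img_h):
    """Convert per-image detections into YOLO label lines.

    detections: list of (tag_id, (x_min, y_min, x_max, y_max)) pixel boxes
    for the object surface associated with a tag. The tag's unique ID is
    mapped to a class via tag_to_class; coordinates are normalized to [0, 1].
    """
    lines = []
    for tag_id, (x0, y0, x1, y1) in detections:
        cls = tag_to_class[tag_id]
        cx = (x0 + x1) / 2 / img_w   # normalized box center
        cy = (y0 + y1) / 2 / img_h
        w = (x1 - x0) / img_w        # normalized box size
        h = (y1 - y0) / img_h
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return lines

# Hypothetical example: tag ID 7 marks class 0 in a 640x480 image.
labels = to_yolo_labels([(7, (100, 100, 300, 200))], {7: 0}, 640, 480)
```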
Workflow-driven support systems in the peri-operative area have the potential to optimize clinical processes and to enable new situation-adaptive support systems. We have started to develop a workflow management system that supports all actors involved in the operating theatre, with the goal of synchronizing the tasks of the different stakeholders by delivering relevant information to the right team members. Using the OMG standards BPMN, CMMN and DMN allows us to bring established methods from other industries into the medical field. The system shows each actor the relevant information in the right place at the right time, so that every team member can execute their task on schedule and the workflow runs smoothly; the system maintains the overall view of all tasks. Accordingly, the architecture comprises a workflow management system with the Camunda BPM workflow engine to run the models, a middleware that connects different systems to the workflow engine, and graphical user interfaces to display necessary information and to interact with the system. The complete pipeline is implemented as a RESTful web service, which makes it easy to integrate systems such as a hospital information system (HIS) without loss of data. A first prototype has been implemented and will be extended.
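To give a flavor of the REST integration, a client starting a process instance through the Camunda engine's REST interface (`POST /engine-rest/process-definition/key/{key}/start`) sends a JSON body of typed variables. The sketch below only builds that payload; the variable names and values are hypothetical, not taken from the described system:

```python
import json

def start_process_payload(variables):
    """Build the JSON body for starting a Camunda process instance via
    POST /engine-rest/process-definition/key/{key}/start.
    Camunda expects each variable as {"value": ..., "type": ...}."""
    type_names = {str: "String", bool: "Boolean", int: "Integer", float: "Double"}
    return json.dumps({
        "variables": {
            name: {"value": value, "type": type_names[type(value)]}
            for name, value in variables.items()
        }
    })

# Hypothetical peri-operative workflow variables:
payload = start_process_payload({"patientId": "P-0042", "orRoom": 3})
```

The middleware would send this payload with `Content-Type: application/json`; the engine then drives the BPMN model and notifies the user interfaces.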
Ever since the 1980s, researchers in computer science and robotics have been working on making cars autonomous. Due to recent breakthroughs in research and development, such as the Bertha Benz Project [ZBS+14], the goal of fully autonomous vehicles seems closer than ever before. Yet many questions remain unanswered. Especially now that the automotive industry is moving towards autonomous systems in series production vehicles, the task of precise localization has to be solved with automotive-grade sensors while keeping memory and processing consumption at a minimum. This thesis investigates the Simultaneous Localization and Mapping (SLAM) problem for autonomous driving scenarios on a parking lot using low-cost automotive sensors. The main focus is thereby devoted to the RAdio Detection And Ranging (RADAR) sensor, which has not been widely analyzed in autonomous driving scenarios so far, even though it is abundant in the automotive industry for applications such as Adaptive Cruise Control (ACC). Due to its high noise floor, the RADAR sensor has largely been disregarded for SLAM applications in the Intelligent Transportation Systems and Robotics communities. In this thesis, however, it is shown that the RADAR sensor proves to be an affordable, robust and precise sensor when its physical properties are modeled correctly. In this regard, a GraphSLAM-based framework is introduced that extracts features from the RADAR sensor and generates an optimized map of the surroundings from the RADAR sensor alone. This framework is used to enable crowd-based localization, which is not limited to the RADAR sensor. By integrating an automotive Light Detection and Ranging (LiDAR) sensor and a stereo camera, a robust and precise localization system can be built that is suitable for autonomous driving even in complex parking lot scenarios. It is thereby shown that the RADAR sensor contributes strongly to obtaining good results in a sensor fusion setup.
These results were obtained on an extensive dataset recorded on a parking lot over the course of several months. It covers different weather conditions, different configurations of parked cars, and a multitude of different trajectories to validate the approaches described in this thesis, leading to the conclusion that the RADAR sensor is a reliable sensor for series autonomous driving systems, both in a multi-sensor framework and as a single component for localization.
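The least-squares idea at the core of GraphSLAM can be illustrated with a minimal one-dimensional pose graph. This toy example is not the thesis framework; the poses, odometry constraints, and loop-closure value are invented for demonstration. Each constraint (i, j, z) says "pose x_j should lie z ahead of pose x_i", and the optimized poses minimize the summed squared residuals:

```python
def gauss_solve(H, b):
    """Solve H x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(H)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                for c in range(col, n + 1):
                    A[r][c] -= f * A[col][c]
    return [A[i][n] / A[i][i] for i in range(n)]

def solve_pose_graph(constraints, n_poses):
    """Minimize sum of (x_j - x_i - z)^2 over constraints (i, j, z),
    with pose x0 anchored at 0, via the normal equations H x = b."""
    m = n_poses - 1
    H = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for i, j, z in constraints:
        # Jacobian of the residual is -1 for x_i and +1 for x_j;
        # the anchored pose x0 is skipped (it is not a free variable).
        for a, sa in ((i, -1.0), (j, 1.0)):
            if a == 0:
                continue
            b[a - 1] += sa * z
            for c, sc in ((i, -1.0), (j, 1.0)):
                if c == 0:
                    continue
                H[a - 1][c - 1] += sa * sc
    return [0.0] + gauss_solve(H, b)

# Two odometry steps (measured 1.1 and 1.0) plus one loop closure
# claiming the total displacement is 2.0; the solver reconciles them.
poses = solve_pose_graph([(0, 1, 1.1), (1, 2, 1.0), (0, 2, 2.0)], 3)
```

The same machinery, with RADAR feature observations as constraints, is what produces the optimized map in a full GraphSLAM system.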