Informatik
Nowadays, the importance of early active patient mobilization in the recovery and rehabilitation phase has increased significantly. One way to involve patients in treatment is a gamification-like approach, a method of motivation used in many areas of life. This article presents a system prototype for patients who require physical activity as part of active early mobilization after medical interventions or during illness. Bedridden patients and people with a sedentary lifestyle (predominantly lying in bed) are also potential users. The central idea of the concept was a non-contact implementation, so that patients can use the system effortlessly. The system consists of three related parts: hardware, software, and a game application. To test the relevance and coherence of the system, it was used by 35 people. The participants were asked to play a video game requiring them to make body movements while lying down. They were then asked to take part in a small survey to evaluate the system's usability. As a result, we offer a prototype consisting of hardware and software parts that can increase and diversify physical activity during active early mobilization of patients and prevent possible health problems caused by predominantly low activity. The proposed design could be implemented in hospitals, rehabilitation centers, and even at home.
Sleep analysis using a polysomnography system is difficult and expensive, which is why we suggest a non-invasive and unobtrusive measurement. Few people want cables or devices attached to their bodies during sleep. The proposed approach is to implement a monitoring system that does not bother the subject. The result is a non-invasive monitoring system based on detecting pressure distribution. This system should be able to measure, through the mattress, the pressure differences that occur during a single heartbeat and during breathing. The system consists of two blocks: signal acquisition and signal processing. The whole technology should be economical enough to be affordable for every user. As a result, preprocessed data is obtained for further detailed analysis using different filters for heartbeat and respiration detection. In the initial filtering stage, Butterworth filters are used.
With the progress of technology in modern hospitals, intelligent perioperative situation recognition will gain more relevance due to its potential to substantially improve surgical workflows by providing situation knowledge in real time. Such knowledge can be extracted from image data by machine learning techniques but poses a privacy threat to the staff's and patients' personal data. De-identification is a possible solution for removing visually sensitive information. In this work, we developed a YOLO v3 based prototype to detect sensitive areas in the image in real time. These areas are then de-identified using common image obfuscation techniques. Our approach shows that it is in principle suitable for de-identifying sensitive data in OR images and contributes to a privacy-respectful way of processing in the context of situation recognition in the OR.
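The obfuscation step applied to detected regions can be illustrated with a minimal pixelation sketch; the gradient frame, box coordinates, and block size are hypothetical, and the YOLO v3 detector itself is not reproduced here:

```python
import numpy as np

def pixelate(frame, boxes, block=16):
    """Pixelate (mosaic) each sensitive region given as (x, y, w, h)."""
    out = frame.copy()
    for (x, y, w, h) in boxes:
        roi = out[y:y + h, x:x + w]           # view into the copy
        for by in range(0, h, block):
            for bx in range(0, w, block):
                cell = roi[by:by + block, bx:bx + block]
                # Replace every pixel in the cell by the cell's mean color.
                cell[...] = cell.mean(axis=(0, 1), keepdims=True).astype(frame.dtype)
    return out

# A gradient image standing in for an OR camera frame; the box is a
# hypothetical detection returned by the detector.
frame = np.tile(np.arange(128, dtype=np.uint8), (96, 1))[:, :, None].repeat(3, axis=2)
deidentified = pixelate(frame, [(16, 8, 48, 48)])
```

Pixelation destroys identifying detail irreversibly within each cell while leaving the rest of the frame usable for situation recognition.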
In our initial DaMoN paper, we set out to revisit the results of “Staring into the Abyss [...] of Concurrency Control with [1000] Cores” (Yu in Proc. VLDB Endow 8: 209-220, 2014). Contrary to their assumption, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware is prevalent today and in fact offers over 1000 cores. Hence, we evaluated concurrency control (CC) schemes on a real (Intel-based) multi-socket platform. To our surprise, we made interesting findings opposing the results of the original analysis, which we discussed in our initial DaMoN paper. In this paper, we broaden our analysis further, detailing the effect of hardware and workload characteristics via additional real hardware platforms (IBM Power8 and 9) and the full TPC-C transaction mix. Among other things, we identified clear connections between the performance of the CC schemes and hardware characteristics, especially concerning NUMA and CPU caches. Overall, we conclude that no CC scheme can efficiently make use of large multi-socket hardware in a robust manner, and we suggest several directions for how CC schemes and OLTP DBMSs overall should evolve in the future.
Even though near-data processing (NDP) can provably reduce data transfers and increase performance, current NDP is utilized solely in read-only settings. Synchronization and invalidation mechanisms between host and smart storage are slow or tedious to implement, which makes NDP support for data-intensive update operations difficult. In this paper, we introduce a low-latency cache-coherent shared lock table for update NDP settings in disaggregated memory environments. It utilizes the novel CCIX interconnect technology and is integrated into neoDBMS, a near-data processing DBMS for smart storage. Our evaluation indicates end-to-end lock latencies of ∼80-100 ns and robust performance under contention.
Multi-versioning and MVCC are the foundations of many modern DBMSs. Under mixed workloads and large datasets, the creation of the transactional snapshot can become very expensive, as long-running analytical transactions may request old versions, residing on cold storage, for reasons of transactional consistency. Furthermore, analytical queries operate on cold data, stored on slow persistent storage. Due to the poor data locality, snapshot creation may cause massive data transfers and thus lower performance. Given the current trend towards computational storage and near-data processing, it has become viable to perform such operations in-storage to reduce data transfers and improve scalability. neoDBMS is a DBMS designed for near-data processing and computational storage. In this paper, we demonstrate how neoDBMS performs snapshot computation in-situ. We showcase different interactive scenarios, where neoDBMS outperforms PostgreSQL 12 by up to 5×.
Determination of accelerometer sensor position for respiration rate detection: initial research
(2022)
Continuous monitoring of a patient's vital signs is essential in many chronic illnesses. The respiratory rate (RR) is one of the vital signs indicating breathing diseases. This article presents an initial investigation into determining the accelerometer sensor position for a non-invasive and unobtrusive respiratory rate monitoring system. The aim of this research is to determine the sensor position, relative to the patient, that provides the most accurate values of the mentioned physiological parameter. To achieve this, a particular system setup, including a mechanical sensor holder construction, was used. Breathing signals from 5 participants, recorded in a relaxed state, were analyzed. The main criterion for selecting a suitable sensor position was each patient's average acceleration amplitude excursion, which corresponds to the respiratory signal. As a result, we determined one more important parameter of the considered system that had not been defined before.
The respiratory rate is a vital sign indicating breathing illness. To evaluate it, it is necessary to analyze the mechanical oscillations of the patient's body arising from chest movements. An inappropriate holder on which the sensor is mounted, or an inappropriate sensor position, are among the external factors that should be minimized during signal registration. This paper considers a non-invasive device placed under the bed mattress that evaluates the respiratory rate. The aim of the work is the development of an accelerometer sensor holder for this system. Normal and deep breathing signals were analyzed, corresponding to the relaxed state and to taking deep breaths. The evaluation criterion for the holder's model is its influence on the patient's respiratory signal amplitude in each state. As a result, we offer a non-invasive system for respiratory rate detection, including the mechanical component that provides the most accurate values of the mentioned respiratory rate.
Literature reviews are essential for any scientific work, both as part of a dissertation and as stand-alone work. Scientists benefit from the fact that more and more literature is available in electronic form, and finding and accessing relevant literature has become easier through scientific databases. However, the traditional literature review method is characterized by a highly manual process, while technologies and methods in big data, machine learning, and text mining have advanced. Especially in areas where research streams are rapidly evolving and topics are becoming more comprehensive, complex, and heterogeneous, it is challenging to provide a holistic overview and identify research gaps manually. Therefore, we have developed a framework that supports the traditional approach of conducting a literature review with machine learning and text mining methods. The framework is particularly suitable in cases where a large amount of literature is available and a holistic understanding of the research area is needed. The framework consists of several steps in which the critical mind of the scientist is supported by machine learning. The unstructured text data is transformed into a structured form through data preparation realized with text mining, making it applicable to various machine learning techniques. A concrete example in the field of smart cities makes the framework tangible.
Data governance has been relevant for companies for a long time. Yet, in the broad discussion on smart cities, research on data governance in particular is scant, even though data governance plays an essential role in an environment with multiple stakeholders, complex IT structures, and heterogeneous processes. Indeed, not only can a city benefit from the existing body of knowledge on data governance, but it can also make the appropriate adjustments for its digital transformation. Therefore, this literature review aims to spark research on urban data governance by providing an initial perspective for future studies. It provides a comprehensive overview of data governance and the relevant facets embedded in this strand of research. Furthermore, it provides a fundamental basis for future research on the development of an urban data governance framework.
The global demand for resources such as energy, land, or water is constantly increasing. It is therefore not surprising that research on the Food-Energy-Water (FEW) nexus has become a scientific as well as a general focus in recent years. A significant increase in publications since 2015 can be observed, and it can be expected that this trend will continue. A multilevel (macro, meso, and micro) perspective is essential, as the FEW nexus has cross-sectoral interdependencies. Several review studies on the FEW nexus can be found in the literature; in general, it can be concluded that the FEW nexus is a multi-disciplinary and complex topic. The studies examined identify essential fields of action for research, policy, and society. However, questions such as what the main research fields at each level are, whether the research can be divided into specific clusters, whether the clusters correlate with the levels, and what modeling methods are used in the clusters and levels are still not fully discussed in the literature. An extensive literature review was conducted to gain insight into the existing research areas. Especially in fields such as the FEW nexus, the amount of literature can become huge, and a human can get lost analyzing the literature manually. Therefore, we created word clouds and performed a cluster and network analysis to support the selection of the most relevant papers for detailed reading. The year 2021 saw the most publications, with 173 (a share of 26.6%). There has been a significant increase since 2015, and it can be expected that this trend will continue in the coming years. Most first authors come from the USA (25.4%), followed by China (22.4%). From the word cloud and the top 20 words appearing in titles and abstracts, it can be deduced that the topic of water is the most represented.
However, the terms system, resource, model, study, change, development, and management also appear to be very important, which indicates the importance of a holistic approach to the topic. In total, 9 clusters could be identified at the different levels. Three of these clusters form well; for the others, a rather diffuse picture can be observed. In order to find out which topics are hidden behind the individual clusters, 6 publications from each cluster were subjected to a more detailed examination. With these steps, 54 publications were identified for detailed consideration. The modeling approaches currently being applied in research can be classified into domain-specific tools (e.g. global water models, crop models, or global climate models) and more general tools used, for example, for life cycle analysis, spatial analysis using geographic information systems, or system dynamics for a general understanding of the links between the domains. With the domain-specific tools, detailed research questions can be addressed for a specific domain. However, these tools have the disadvantage that the links between the sectors food, energy, and water in particular are not fully considered. Many implementations made today are at the lowest level (micro), relate to bounded spatial areas, and are derived from macro- and meso-level goals.
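A cluster analysis over titles and abstracts of the kind described above can be sketched with TF-IDF features and k-means; the toy abstracts and the choice of three clusters are illustrative only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for paper titles/abstracts (illustrative only).
abstracts = [
    "water scarcity irrigation river basin management",
    "groundwater quality irrigation water model",
    "energy power grid renewable electricity supply",
    "solar energy electricity renewable generation",
    "crop food yield agriculture production model",
    "food security agriculture crop production",
]

# Each text becomes a TF-IDF vector; k-means then groups similar vectors,
# so papers sharing distinctive vocabulary end up in the same cluster.
X = TfidfVectorizer().fit_transform(abstracts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```

On a real corpus, the number of clusters would be chosen by inspecting metrics such as silhouette scores rather than being fixed in advance.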
Public transport maps are typically designed in a way to support route finding tasks for passengers, while they also provide an overview about stations, metro lines, and city-specific attractions. Most of those maps are designed as a static representation, maybe placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views for the dynamic passenger data on different levels of temporal granularity. Moreover, we also allow extra statistical information in form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We also integrated a graph-based view on user-selected routes, a way to interactively compare those routes, an attribute- and property-driven automatic computation of specific routes for one map as well as for all available maps in our repertoire, and finally, also the most important sights in each city are included as extra information to include in a user-selected route. We illustrate the usefulness of our interactive visualization and map navigation system by applying it to the railway system of Hamburg in Germany while also taking into account the extra passenger data. As another indication for the usefulness of the interactively enhanced metro maps we conducted a controlled user experiment with 20 participants.
We present a multitask network that supports various deep neural network based pedestrian detection functions. Besides 2D and 3D human pose, it also supports body and head orientation estimation based on full body bounding box input. This eliminates the need for explicit face recognition. We show that the performance of 3D human pose estimation and orientation estimation is comparable to the state-of-the-art. Since very few data sets exist for 3D human pose and in particular body and head orientation estimation based on full body data, we further show the benefit of particular simulation data to train the network. The network architecture is relatively simple, yet powerful, and easily adaptable for further research and applications.
Motivation: The aim of this project is the automatic classification of total hip endoprosthesis (THEP) components in 2D X-ray images. Revision surgeries of total hip arthroplasty (THA) are common procedures in orthopedics and trauma surgery. Currently, around 400,000 procedures per year are performed in the United States (US) alone. To achieve the best possible result, preoperative planning is crucial, especially if parts of the current THEP system are to be retained.
Methods: First, a ground truth based on 76 X-ray images was created. We used an image processing pipeline consisting of a segmentation step performed by a convolutional neural network and a classification step performed by a support vector machine (SVM). In total, 11 classes (5 cups and 6 stems) were to be classified.
Results: The ground truth generated was of good quality even though the initial segmentation was performed by technicians. The best segmentation results were achieved using a U-net architecture. For classification, SVM architectures performed much better than additional neural networks.
Conclusions: The overall image processing pipeline performed well, but the ground truth needs to be extended to include a broader variability of implant types and more examples per training class.
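The SVM classification step of such a pipeline can be sketched as follows; the synthetic features and class layout are hypothetical stand-ins for descriptors derived from the segmented implants, not the study's actual data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical shape descriptors per segmented implant (e.g. area,
# aspect ratio, curvature statistics) for three stand-in classes.
n_per_class, n_classes, n_features = 40, 3, 4
X = np.vstack([rng.normal(loc=c * 3.0, scale=0.5, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Scaling before the RBF kernel keeps all features on comparable ranges.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
train_acc = clf.score(X, y)
```

With 11 real implant classes, the same pipeline applies unchanged; SVC handles multiclass problems via one-vs-one voting internally.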
This paper reviews suggestions for changes to database technology coming from the work of many researchers, particularly those working with evolving big data. We discuss new approaches to remote data access and standards that better provide for durability and auditability in settings including business and scientific computing. We propose ways in which the language standards could evolve, with proof-of-concept implementations on GitHub.
In this paper, we present the results of a workshop held at CitSci2022 on the topic "Co-creation in citizen science (CS) for the development of climate adaptation measures: Which success factors promote, and which barriers hinder, a fruitful collaboration and co-creation process between scientists and volunteers, considering social, motivational, technical/technological, and legal factors?" We substantiated the identified factors with scientific literature. Our findings suggest that clear communication of the goals, and of how citizen scientists can contribute to the project, is important. In addition, citizen scientists have to feel included and see that their contribution makes a difference. To achieve this, it is critical to present the results to them. The relationship between scientists and citizen scientists is also essential to keep the citizen scientists engaged. Notification of meetings and events needs to be given well in advance, and events should be scheduled during the attendees' leisure time. Citizen scientists should be especially supported in technical questions; as a result, they feel appreciated and remain part of the project. Regarding legal factors, the participants of the workshop considered the current General Data Protection Regulation important. In future research, we will try to address the individual points and, first of all, improve our communication with the citizen scientists about the project goals and how they can contribute. In addition, we should share the achieved results more effectively.
The energy transition, digitalization, and decreasing revenues force enterprises in the energy domain to develop new business models. Following a Design Science Research approach, we showed in two action research projects that business models in the energy domain result in complex ecosystems with multiple actors. Additionally, we identified that municipal utilities have problems with the systematic development of business models. To solve this problem, we captured the requirements together with the enterprise partners in a second phase. We then developed a method consisting of the following components: a method for the creative development of a new business model in the form of a Business Model Canvas (BMC); a mapping between the e3value ontology and the BMC for modelling a business ecosystem; and the Business Model Configurator (BMConfig), a prototype for modelling and simulating the e3value ontology. The business model can be quantified and analyzed for its viability. We demonstrate the feasibility of our approach with the business model of a power community.
Home health applications have evolved over the last few decades. Assistive systems such as a data platform connected with health devices allow health-related data to be automatically transmitted to a database. However, significant challenges remain concerning intermodular communication. Central among them is the challenge of achieving interoperability, the ability of devices to communicate and share data with each other. A major goal of this project was to extend an existing data platform (COMES®) and establish working interoperability by connecting assistive devices with differing approaches. We describe this process for a sleep monitoring and a physical exercise device. Furthermore, we aimed to test this setup and the implementation with a data platform in both a laboratory and an in-home setting with 11 elderly participants. The platform modification was realized, and the relevant changes were made so that the incoming data could be processed by the data platform as well as visually displayed in real time. Data was recorded by the respective device and transmitted to the data server with minor disruptions. Our observations affirmed that difficulties and data loss are far more likely to occur with increasing technical complexity, in the event of an unstable internet connection, or when the device setup requires (elderly) subjects to take specific steps for proper functioning. We emphasize the importance of testing and evaluating home health technologies in real-life circumstances.
The euphoria around microservices has decreased over the years, but the trend of modernizing legacy systems to this novel architectural style is unbroken to date. A variety of approaches have been proposed in academia and industry, aiming to structure and automate the often long-lasting and cost-intensive migration journey. However, our research shows that there is still a need for more systematic guidance. While grey literature is dominant for knowledge exchange among practitioners, academia has contributed a significant body of knowledge as well, catching up on its initial neglect. A vast number of studies on the topic yielded novel techniques, often backed by industry evaluations. However, practitioners hardly leverage these resources. In this paper, we report on our efforts to design an architecture-centric methodology for migrating to microservices. As its main contribution, a framework provides guidance for architects during the three phases of a migration. We refer to methods, techniques, and approaches based on a variety of scientific studies that have not been made available in a similarly comprehensible manner before. Through an accompanying tool to be developed, architects will be in a position to systematically plan their migration, make better informed decisions, and use the most appropriate techniques and tools to transition their systems to microservices.
The scoring of sleep stages is an essential part of sleep studies. The main objective of this research is to provide an algorithm for the automatic classification of sleep stages using signals that can be obtained in a non-obtrusive way. After reviewing the relevant research, the authors selected multinomial logistic regression as the basis for their approach. Several parameters were derived from movement and breathing signals, and their combinations were investigated to develop an accurate and stable algorithm. The implemented algorithm produced successful results: the accuracy of the recognition of Wake/NREM/REM stages is 73%, with a Cohen's kappa of 0.44 for the analyzed 19,324 sleep epochs of 30 seconds each. This approach has the advantage of using only movement and breathing signals, which can be recorded with less effort than heart or brainwave signals, and of requiring only four derived parameters for the calculations. Therefore, the new system is a significant improvement for non-obtrusive sleep stage identification compared to existing approaches.
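A multinomial logistic regression of the kind at the core of this approach can be sketched as follows; the two features and their distributions are illustrative stand-ins for the movement- and breathing-derived parameters, not the study's actual data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def epochs(move_mu, breath_mu, n=200):
    """Simulated per-epoch features: movement level, breathing regularity."""
    return np.column_stack([rng.normal(move_mu, 0.3, n),
                            rng.normal(breath_mu, 0.3, n)])

X = np.vstack([epochs(2.0, 1.0),    # Wake: high movement
               epochs(0.2, 2.0),    # NREM: low movement, regular breathing
               epochs(0.4, 0.5)])   # REM: low movement, irregular breathing
y = np.repeat([0, 1, 2], 200)       # 0 = Wake, 1 = NREM, 2 = REM

# With more than two classes, the lbfgs solver fits a multinomial
# (softmax) model, i.e. one probability per stage for each 30 s epoch.
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

The fitted model outputs per-epoch stage probabilities via `clf.predict_proba`, which makes it straightforward to inspect borderline epochs.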
The importance of sleep for human life is enormous. It affects physical, mental, and psychological health. Therefore, it is vital to recognise sleep disorders in a timely manner in order to be able to initiate therapy. There are two methods for measuring sleep-related parameters: objective and subjective. This paper investigates whether a subjective method can be substituted for an objective one. Such a replacement may bring several advantages, including increased comfort for the user. To answer this research question, a study was conducted in which 75 overnight recordings were evaluated. The primary purpose of this study was to compare both ways of measuring total sleep time and sleep efficiency, which are essential parameters for, e.g., insomnia diagnosis and treatment. The evaluation results demonstrated that, on average, there is a 32-minute difference between the two measurement methods when total sleep time is analysed. For sleep efficiency, the two measurement methods differ by 7.5% on average. It should also be noted that people typically overestimate total sleep time and efficiency with the subjective method, in which the perceived values are measured.
Healthy sleep is required for sufficient restoration of the human body and brain. Therefore, in the case of sleep disorders, appropriate therapy should be applied in a timely manner, which requires a prompt diagnosis. Traditionally, a sleep diary is part of the diagnosis and therapy monitoring for some sleep disorders, for example in cognitive behaviour therapy for insomnia. To automate sleep monitoring and make it more comfortable for users, substituting the sleep diary with a smartwatch measurement could be considered. With the aim of providing accurate results, a study with a total of 30 night recordings was conducted. Objective sleep measurement with a Samsung Galaxy Watch 4 was compared with a subjective approach (sleep diary), evaluating four relevant sleep characteristics: time of falling asleep, wake-up time, sleep efficiency (SE), and total sleep time (TST). The analysis demonstrated that the median difference between the two measurement approaches was 7 and 3 minutes for the time of falling asleep and the wake-up time, respectively, which supports substituting the subjective measurement with a smartwatch. The SE was determined with a median difference between the two measurement methods of 5.22%. This result also indicates the possibility of substitution. Some single recordings showed a higher variance between the two approaches. Therefore, it can be concluded that a substitution provides reliable results primarily in the case of long-term monitoring. The results of the evaluation of the TST measurement do not allow us to recommend substituting the measurement method.
Recognition of sleep and wake states is one of the relevant parts of sleep analysis. Performing this measurement in a contactless way increases comfort for the users. We present an approach that achieves this recognition by evaluating only movement and respiratory signals, which can be measured non-obtrusively. The algorithm is based on multinomial logistic regression and analyses features extracted from the signals mentioned above. These features were identified and developed after fundamental research on the characteristics of vital signs during sleep. The achieved accuracy of 87%, with a Cohen's kappa of 0.40, demonstrates the appropriateness of the chosen method and encourages continuing research on this topic.
Sleep is essential to existence, much like air, water, and food, as we spend nearly one-third of our time sleeping. Poor sleep quality or disturbed sleep causes daytime somnolence, which impairs the mental and physical quality of daytime activities and raises the risk of accidents. With advancements in sensor and communication technology, sleep monitoring is moving out of specialized clinics and into our everyday homes. It is possible to extract data from traditional overnight polysomnographic recordings using more basic tools and straightforward techniques. The ballistocardiogram is an unobtrusive, non-invasive, simple, and low-cost technique for measuring cardiorespiratory parameters. In this work, we present a sensor board interface to facilitate the communication between a force-sensitive resistor sensor and an embedded system, providing a high-performing prototype with an efficient signal-to-noise ratio. We utilized a multi-physical-layer approach, locating each layer on top of another while supporting a low-cost, compact design with easy deployment under the bed frame.
Geometry of music perception
(2022)
Prevalent neuroscientific theories are combined with acoustic observations from various studies to create a consistent geometric model for music perception in order to rationalize, explain and predict psycho-acoustic phenomena. The space of all chords is shown to be a Whitney stratified space. Each stratum is a Riemannian manifold which naturally yields a geodesic distance across strata. The resulting metric is compatible with voice-leading satisfying the triangle inequality. The geometric model allows for rigorous studies of psychoacoustic quantities such as roughness and harmonicity as height functions. In order to show how to use the geometric framework in psychoacoustic studies, concepts for the perception of chord resolutions are introduced and analyzed.
Purpose
Context awareness in the operating room (OR) is important for realizing targeted assistance to support actors during surgery. A situation recognition system (SRS) is used to interpret intraoperative events and derive an intraoperative situation from them. To achieve a modular system architecture, it is desirable to decouple the SRS from other system components. This leads to the need for an interface between such an SRS and context-aware systems (CAS). This work aims to provide an open, standardized interface to enable loose coupling of the SRS with varying CAS, allowing vendor-independent device orchestration.
Methods
A requirements analysis investigated limiting factors that currently prevent the integration of CAS in today's ORs. The elicited requirements enabled the selection of a suitable base architecture. We examined how to specify this architecture within the constraints of an interoperability standard. The resulting middleware was integrated into a prototypic SRS and into our system for intraoperative support, the OR-Pad, as an exemplary CAS, to evaluate whether our solution can enable context-aware assistance during simulated orthopedic interventions.
Results
The emerging Service-oriented Device Connectivity (SDC) standard series was selected to specify and implement a middleware for providing the interpreted contextual information while the SRS and CAS are loosely coupled. The results were verified within a proof of concept study using the OR-Pad demonstration scenario. The fulfillment of the CAS’ requirements to act context-aware, conformity to the SDC standard series, and the effort for integrating the middleware in individual systems were evaluated. The semantically unambiguous encoding of contextual information depends on the further standardization process of the SDC nomenclature. The discussion of the validity of these results proved the applicability and transferability of the middleware.
Conclusion
The specified and implemented SDC-based middleware shows the feasibility of loose coupling an SRS with unknown CAS to realize context-aware assistance in the OR.
One of the key challenges for automatic assistance is supporting actors in the operating room depending on the status of the procedure. Therefore, context information collected in the operating room is used to gain knowledge about the current situation. In the literature, solutions already exist for specific use cases, but it is doubtful to what extent these approaches can be transferred to other conditions. We conducted a comprehensive literature review on existing situation recognition systems for the intraoperative area, covering 274 articles and 95 cross-references published between 2010 and 2019. We contrasted and compared the 58 identified approaches based on defined aspects such as the sensor data used or the application area. In addition, we discussed applicability and transferability. Most of the papers focus on video data for recognizing situations within laparoscopic and cataract surgeries. Not all of the approaches can be used online for real-time recognition. Using different methods, good results with recognition accuracies above 90% could be achieved. Overall, transferability is rarely addressed. The applicability of the approaches to other circumstances seems possible only to a limited extent. Future research should place a stronger focus on adaptability. The literature review shows differences within existing approaches for situation recognition and outlines research trends. Applicability and transferability to other conditions are less addressed in current work.
“Civil rights activists sue over the sharing of health data”: this was the headline of (spiegel.de, 2022) on April 29, 2022. The case concerns the sharing of pseudonymized data of 73 million insured persons by the statutory health insurance funds. These data are to be made available to research. The plaintiffs doubt that the data cannot be de-anonymized. This current example illustrates a concrete and relevant use case of anonymization and pseudonymization in an actuarial context. It can be assumed that the topic's relevance will continue to increase in the coming years.
At the latest since the General Data Protection Regulation (GDPR) came into force, data protection has been omnipresent and poses major challenges for us actuaries. European initiatives to create a single market for data are intended to make it easier to share data and, for example, to make it available to third parties for research purposes, but they also raise many questions. An obvious solution is to anonymize or pseudonymize the data. But what does this mean in concrete terms, and what consequences follow from it? To what degree must data be anonymized, and what re-identification risks remain?
Background
Although teledermatology has been proven internationally to be an effective and safe addition to the care of patients in primary care, there are few pilot projects implementing teledermatology in routine outpatient care in Germany. The aim of this cluster randomized controlled trial was to evaluate whether referrals to dermatologists are reduced by implementing a store-and-forward teleconsultation system in general practitioner practices.
Methods
Eight counties were cluster-randomized to the intervention and control conditions. During the 1-year intervention period between July 2018 and June 2019, 46 general practitioner practices in the four intervention counties implemented a store-and-forward teledermatology system with patient data management system interoperability. It allowed practice teams to initiate teleconsultations for patients with dermatologic complaints. In the four control counties, treatment as usual was performed. As the primary outcome, the number of referrals was calculated from routine health care data. Poisson regression was used to compare referral rates between the intervention practices and 342 control practices.
Results
The primary analysis revealed no significant difference in referral rates (relative risk = 1.02; 95% confidence interval = 0.911–1.141; p = .74). Secondary analyses accounting for sociodemographic and practice characteristics but omitting county pairing resulted in significant differences of referral rates between intervention practices and control practices. Matched county pair, general practitioner age, patient age, and patient sex distribution in the practices were significantly related to referral rates.
Conclusions
While a store-and-forward teleconsultation system was successfully implemented in the German primary health care setting, the intervention's effect was overshadowed by regional factors. Such regional factors should be considered in future teledermatology research.
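The kind of rate comparison behind the primary analysis can be sketched with a minimal, self-contained Poisson rate-ratio computation: a Wald confidence interval on the log rate ratio, not the study's full regression model. All counts and exposures below are hypothetical.

```python
import math

def poisson_rate_ratio(events_a, exposure_a, events_b, exposure_b, z=1.96):
    """Relative risk (rate ratio) of two Poisson rates with a Wald 95% CI
    on the log scale; a minimal stand-in for a Poisson regression
    comparing referral rates between two groups."""
    rate_a = events_a / exposure_a
    rate_b = events_b / exposure_b
    rr = rate_a / rate_b
    se = math.sqrt(1.0 / events_a + 1.0 / events_b)  # SE of the log rate ratio
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical counts: 510 referrals per 10,000 patients (intervention)
# versus 500 per 10,000 (control)
rr, lo, hi = poisson_rate_ratio(510, 10000, 500, 10000)
print(f"RR = {rr:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that spans 1.0, as with these made-up counts, corresponds to a non-significant difference in rates, matching the pattern reported for the primary analysis above.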
Generating synthetic data is a topic of growing relevance in the machine learning community. As accessible data is limited, generating synthetic data plays a significant role in protecting patients' privacy and in providing more opportunities to train models for classification and other machine learning tasks. In this work, several generative adversarial network (GAN) variants are discussed, and an overview is given of how generative adversarial networks can be used for data generation in different fields. In addition, some common problems of GANs and possibilities to avoid them are shown. Different evaluation methods for the generated data are also described.
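To make the adversarial setup concrete, here is a deliberately tiny GAN sketch in NumPy (an illustrative assumption on my part, not taken from the article): a one-parameter generator shifts Gaussian noise toward a target distribution while a logistic discriminator tries to tell real samples from fake ones, using the non-saturating generator loss.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_real = 2.0            # mean of the "real" data distribution (assumed)
theta = 0.0              # generator parameter: G(z) = z + theta
w, b = 0.0, 0.0          # logistic discriminator D(x) = sigmoid(w*x + b)
lr, steps, batch = 0.1, 2000, 128

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for _ in range(steps):
    real = rng.normal(mu_real, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # discriminator step: minimize cross-entropy (real -> 1, fake -> 0)
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # generator step: non-saturating loss, i.e. maximize log D(G(z))
    d_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean((d_fake - 1.0) * w)

print(f"generator shift ~ {theta:.2f} (target {mu_real})")
```

After training, the generator's shift parameter settles near the real mean, which is exactly the "generator fools the discriminator" equilibrium the abstract alludes to; mode collapse and training instability, the common problems mentioned above, appear once generators and data are higher-dimensional.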
Startups play a key role in software-based innovation. They make an important contribution to an economy’s ability to compete and innovate, and their importance will continue to grow due to increasing digitalization. However, the success of a startup depends primarily on market needs and the ability to develop a solution that is attractive enough for customers to choose it. A sophisticated technical solution is usually not critical, especially in the early stages of a startup. It is not necessary to be an experienced software engineer to start a software startup. However, this can become problematic as the solution matures and software complexity increases. Based on a proposed solution for systematic software development in early-stage startups, we present in this paper the key findings of a survey study identifying the methodological and technical priorities of software startups. Among other things, we found that requirements engineering and architecture pose challenges for startups. In addition, we found evidence that startups’ software development approaches do not tend to change over time. An early investment in a more scalable development approach could help avoid long-term software problems. To support such an investment, we propose an extended model for Entrepreneurial Software Engineering that provides a foundation for future research.
Physicians in interventional radiology are exposed to high physical stress. To avoid negative long-term effects resulting from unergonomic working conditions, we demonstrated the feasibility of a system, based on the Azure Kinect camera, that gives feedback about unergonomic situations arising during the intervention. The overall feasibility of the approach could be shown.
Ultra wideband real-time locating system for tracking people and devices in the operating room
(2022)
Position tracking within the OR could be one possible input for intraoperative situation recognition. Our approach demonstrates a Real-time Locating System (RTLS) using Ultra Wideband (UWB) technology to determine the position of people or objects. The UWB RTLS was integrated into the research OR at Reutlingen University, and the system’s settings were optimized regarding four factors: accuracy, susceptibility to interference, range, and latency. To this end, different parameters were adapted and their effects on the factors were compared. Good tracking quality could be achieved under optimal settings. These results indicate that a UWB RTLS is well suited to determine the position of people and devices in our setting. The feasibility of the system needs to be evaluated under real OR conditions.
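One common way an RTLS turns anchor-to-tag range measurements into a position is multilateration. The sketch below is an illustration under assumed anchor coordinates, not the system's actual algorithm: it linearizes the range equations by subtracting the first one from the others and solves the result by least squares.

```python
import numpy as np

def multilaterate(anchors, dists):
    """Estimate a 2-D tag position from >= 3 anchor positions and measured
    ranges. Subtracting the first range equation from the others removes
    the quadratic terms, leaving a linear least-squares problem."""
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (a[1:] - a[0])
    rhs = (np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2)
           - d[1:] ** 2 + d[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

# four hypothetical ceiling anchors (meters) and a tag at (2, 3)
anchors = [(0, 0), (6, 0), (0, 6), (6, 6)]
tag = np.array([2.0, 3.0])
dists = [float(np.linalg.norm(tag - np.array(p))) for p in anchors]
print(multilaterate(anchors, dists))  # recovers approximately [2., 3.]
```

With noisy UWB ranges, the overdetermined least-squares form (four anchors for two unknowns, as here) is what absorbs measurement error, which is why accuracy and susceptibility to interference are tuned together.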
Data analysis is becoming increasingly important for pursuing organizational goals, especially in the context of Industry 4.0, where a wide variety of data is available. Numerous challenges arise here, especially when using unstructured data. However, this subject has received little attention in research so far. This research paper addresses this gap, which is of interest for science and practice alike. In a study, three major challenges of using unstructured data were identified: analytical know-how, data issues, and variety. Additionally, measures to improve the analysis of unstructured data in the Industry 4.0 context are described. The paper thus provides empirical insights into challenges and potential measures when analyzing unstructured data. The findings are also presented in a framework. Hence, the next steps of the research project and future research directions become apparent.
There is a growing consensus in research and practice that value-creating networks and ecosystems are supplementing the traditional distinction between the internal firm and market perspectives. To achieve joint value in ecosystems, it is crucial to align the various interests of independently acting ecosystem actors and create a common vision. In this paper, we argue that the ecosystem-wide use of product roadmaps may help with this. To get a better understanding of how roadmapping is conducted in the dynamic ecosystem environment, we systematize the main characteristics of product roadmaps and perform a conceptual comparison with the known challenges of ecosystem management. Comparing the two concepts of ecosystems and product roadmaps, we highlight the fit between the characteristics and objectives of the roadmaps and the challenges of ecosystem management. Hence, we propose to experiment with the ecosystem-wide use of product roadmaps as well as the empirical study of the challenges emerging in the process and the associated redesign of the roadmaps.
Background: Endoscopic surgical techniques have become the gold standard in paranasal sinus surgery. The resulting challenges for surgical training can be addressed by using virtual reality (VR) training simulators. A number of simulators for paranasal sinus surgery have been developed to date. However, previous studies on the training effect were conducted only with medically trained subjects, or its course over time was not reported.
Methods: A CT dataset of the paranasal sinuses was segmented, converted into a three-dimensional polygonal surface model, and textured using original photographic material. Interaction with the virtual environment took place via a haptic input device. During the simulation, the parameters procedure duration and number of errors were recorded. Ten subjects each completed a training unit consisting of 5 practice runs on 10 consecutive days.
Results: Four subjects reduced the time required by more than 60% over the course of the training period. Four subjects reduced their number of errors by more than 60%. Eight out of 10 subjects improved on both parameters. Over the entire measured period, the median duration of the procedure was reduced by 46 seconds and the median number of errors by 191. Testing for a relationship between the two parameters revealed a positive correlation.
Conclusion: In summary, training on the paranasal sinus simulator considerably improves performance even in inexperienced individuals, both in terms of the duration and the accuracy of the procedure.
Hybrid project management is an approach that combines traditional and agile project management techniques. The goal is to benefit from the strengths of each approach and, at the same time, avoid its weaknesses. However, due to the variety of hybrid methodologies that have been presented in the meantime, it is not easy to understand the differences and similarities between the methodologies, or the advantages and disadvantages of the hybrid approach in general. Additionally, there is only fragmented knowledge about prerequisites and success factors for successfully implementing hybrid project management in organizations. Hence, the aim of this study is to provide a structured overview of the current state of research on the topic. To address this aim, we conducted a systematic literature review focusing on a set of specific research questions. As a result, four different hybrid methodologies are discussed, as well as the definition, benefits, challenges, suitability, and prerequisites of hybrid project management. Our study contributes to knowledge by synthesizing and structuring prior work in this growing area of research, which serves as a basis for purposeful and targeted research in the future.
Database management systems and K/V-stores operate on updatable datasets that massively exceed the size of available main memory. Tree-based K/V storage management structures have become particularly popular in storage engines. B+-Trees [1, 4] allow constant search performance, but write-heavy workloads result in inefficient write patterns to secondary storage devices and poor performance characteristics. LSM-Trees [16, 23] overcome this issue by horizontally partitioning fractions of data small enough to fully reside in main memory, but they require frequent maintenance to sustain search performance.
Firstly, we propose Multi-Version Partitioned B-Trees (MV-PBT) as the sole storage and index management structure in key-sorted storage engines like K/V-stores. Secondly, we compare MV-PBT against LSM-Trees. The logical horizontal partitioning in MV-PBT allows leveraging recent advances in modern B+-Tree techniques in a small, transparent, and memory-resident portion of the structure. Structural properties sustain steady read performance while yielding efficient write patterns and reducing write amplification.
We integrated MV-PBT in the WiredTiger [15] K/V storage engine. In a YCSB [5] workload, MV-PBT offers up to 2× higher steady throughput than LSM-Trees and several orders of magnitude higher throughput than B+-Trees.
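The partitioning idea behind LSM-Trees, and analogously behind MV-PBT's memory-resident partition, can be illustrated with a toy key-value store: writes go to a small in-memory buffer, which is flushed as an immutable sorted run once full; reads probe the buffer first and then the runs from newest to oldest. This is a didactic sketch, not the MV-PBT or WiredTiger implementation.

```python
class ToyLSMStore:
    """Minimal LSM-style store: an in-memory memtable plus immutable
    sorted runs on 'disk' (here just a list), newest run first."""

    def __init__(self, memtable_limit=4):
        self.memtable_limit = memtable_limit
        self.memtable = {}
        self.runs = []            # each run: sorted list of (key, value)

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        # one sequential write of a sorted, immutable partition
        self.runs.insert(0, sorted(self.memtable.items()))
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in self.runs:     # newest first: the latest version wins
            for k, v in run:
                if k == key:
                    return v
        return None

store = ToyLSMStore(memtable_limit=3)
for i in range(7):
    store.put(f"k{i}", i)
store.put("k0", 99)               # a newer version shadows the old one
print(store.get("k0"), len(store.runs))
```

The sketch makes the trade-off above visible: writes are always sequential and cheap, but each extra run adds to read cost, which is why real engines compact runs (maintenance) or, as MV-PBT does, keep the mutable partition small and transparent.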
Uncontrolled movement of instruments in laparoscopic surgery can lead to inadvertent tissue damage, particularly when the dissecting or electrosurgical instrument is located outside the field of view of the laparoscopic camera. The incidence and relevance of such events are currently unknown. The present work aims to identify and quantify potentially dangerous situations using the example of laparoscopic cholecystectomy (LC). Twenty-four final-year medical students each performed four consecutive LC attempts on a well-established box trainer in a surgical training environment, following a standardized protocol in a porcine model. The following situation was defined as a critical event (CE): the dissecting instrument was inadvertently located outside the laparoscopic camera’s field of view. Simultaneous activation of the electrosurgical unit was defined as a highly critical event (hCE). The primary endpoint was the incidence of CEs. Across the 96 LCs, 2895 CEs were observed. Of these, 1059 (36.6%) were hCEs. The median number of CEs per LC was 20.5 (range: 1–125; IQR: 33), and the median number of hCEs per LC was 8.0 (range: 0–54; IQR: 10). Mean total operation time was 34.7 min (range: 15.6–62.5 min; IQR: 14.3 min). Our study demonstrates the significance of CEs as a potential risk factor for collateral damage during LC. Further studies are needed to investigate the occurrence of CEs in clinical practice, not just for laparoscopic cholecystectomy but also for other procedures. Systematic training of future surgeons as well as technical solutions could address this safety issue.
Digital twins: a meta-review on their conceptualization, application, and reference architecture
(2022)
The concept of digital twins (DTs) is receiving increasing attention in research and management practice. However, various facets around the concept are blurry, including conceptualization, application areas, and reference architectures for DTs. A review of preliminary results regarding the emerging research output on DTs is required to promote further research and implementation in organizations. To do so, this paper asks four research questions: (1) How is the concept of DTs defined? (2) Which application areas are relevant for the implementation of DTs? (3) How is a reference architecture for DTs conceptualized? and (4) Which directions are relevant for further research on DTs? With regard to research methods, we conduct a meta-review of 14 systematic literature reviews on DTs. The results yield important insights for the current state of conceptualization, application areas, reference architecture, and future research directions on DTs.
Purpose
Supporting the surgeon during surgery is one of the main goals of intelligent ORs. The OR-Pad project aims to optimize the information flow within the perioperative area. A shared information space should enable appropriate preparation and provision of relevant information at any time before, during, and after surgery.
Methods
Based on previous work on an interaction concept and system architecture for the sterile OR-Pad system, we designed a user interface for mobile and intraoperative (stationary) use, focusing on the most important functionalities like clear information provision to reduce information overload. The concepts were transferred into a high-fidelity prototype for demonstration purposes. The prototype was evaluated from different perspectives, including a usability study.
Results
The prototype’s central element is a timeline displaying all available case information chronologically, such as radiological images, laboratory findings, or notes. This information space can be adapted for individual purposes (e.g., highlighting a tumor, filtering for one’s own material). With the mobile and intraoperative modes of the system, relevant information can be added, preselected, viewed, and extended during the perioperative process. Overall, the evaluation showed good results and confirmed the vision of the information system.
Conclusion
The high-fidelity prototype of the OR-Pad information system focuses on supporting the surgeon via a timeline that makes all available case information accessible before, during, and after surgery. The information space can be personalized to enable targeted support. Further development is reasonable to optimize the approach and address missing or insufficient aspects, such as the holding arm and sterility concept or newly desired features.
The purpose of this paper is to examine the effects of perceived stress on traffic and road safety. One of the leading causes of stress among drivers is the feeling of lacking control during the driving process. Stress can result in more traffic accidents, an increase in driver errors, and an increase in traffic violations. To study this phenomenon, the Perceived Stress Questionnaire (PSQ) was used to evaluate perceived stress while driving in a simulation. The study was conducted with participants from Germany, who were grouped into different categories based on their emotional stability. Each participant was monitored using wearable devices that measured their instantaneous heart rate (HR). Wearable devices were preferred due to their non-intrusive and portable nature. The results of this study provide an overview of how stress can affect traffic and road safety, which can be used for future research or to implement strategies to reduce road accidents and promote traffic safety.
For half a decade, there has been increasing interest in Robotic Process Automation (RPA) among business firms. Academic literature, however, paid little attention to RPA before adopting the topic to a larger extent. The aim of this study is to review and structure the latest state of scholarly research on RPA. This chapter is based on a systematic literature review that serves as a basis for developing a conceptual framework to structure the field. Our study shows that some areas of RPA have been extensively examined by many authors, e.g. the potential benefits of RPA. Other categories, such as empirical studies on the adoption of RPA or organisational readiness models, have remained research gaps.
Digital assistants like Alexa, Google Assistant, or Siri have seen large adoption over the past years. Using artificial intelligence (AI) technologies, they provide a vocal interface to physical devices as well as to digital services, and they have spurred an entirely new ecosystem. This comprises the big tech companies themselves, but also a strongly growing community of developers that make these functionalities available via digital platforms. At present, only little research is available to understand the structure and the value creation logic of these AI-based assistant platforms and their ecosystem. This research adopts ecosystem intelligence to shed light on their structure and dynamics. It combines existing data collection methods with an automated approach that proves useful in deriving a network-based conceptual model of Amazon’s Alexa assistant platform and ecosystem. It shows that skills are a key unit of modularity in this ecosystem, linked to other elements such as service, data, and money flows. It also suggests that the topology of the Alexa ecosystem may be described using the criteria of reflexivity, symmetry, variance, strength, and centrality of the skill coactivations. Finally, it identifies three ways to create and capture value on AI-based assistant platforms. Surprisingly, only a few skills use a transactional business model by selling services and goods; many skills are complementary and provide information, configuration, and control services for other skill providers’ products and services. These findings provide new insights into the highly relevant ecosystems of AI-based assistant platforms, which might serve enterprises in developing their strategies in these ecosystems. They might also pave the way to a faster, data-driven approach to ecosystem intelligence.
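The network view of skill coactivations can be made concrete with a small sketch: building an undirected coactivation graph and computing degree centrality by hand. The skills and edges below are entirely hypothetical; this is not the paper's data or method, only an illustration of the centrality criterion.

```python
from collections import defaultdict

# hypothetical coactivation pairs: skills invoked in the same session
coactivations = [
    ("smart-light", "thermostat"),
    ("smart-light", "door-lock"),
    ("news-briefing", "weather"),
    ("smart-light", "weather"),
]

# build an undirected graph from the pairs
graph = defaultdict(set)
for a_skill, b_skill in coactivations:
    graph[a_skill].add(b_skill)
    graph[b_skill].add(a_skill)

# degree centrality: neighbors divided by the maximum possible (n - 1)
n = len(graph)
centrality = {node: len(neigh) / (n - 1) for node, neigh in graph.items()}
hub = max(centrality, key=centrality.get)
print(hub, round(centrality[hub], 2))
```

In this toy graph the lighting skill emerges as the hub; on the real platform, such hubs would be the complementary information and control skills the abstract describes, while symmetry and strength would come from directed and weighted variants of the same adjacency structure.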
The citizen-centered health platform project is intended to provide a platform that can be used in EU cross-border regions, where social and economic exchange occurs across national borders. The overriding challenges are: (a) social: improving citizen-centered health and care provision; (b) technical: providing a digital platform for networking citizens, service providers, and municipal actors; (c) economic: developing long-term successful (sustainable) business models and value chains. The platform should strengthen and expand existing networks and establish new regional networks. Each network addresses particular challenges and applies them in a region-specific manner. Here, the national boundary conditions and the interregional needs play an essential role. These objectives require sufficient participation of civil society representatives. Furthermore, the platform will establish an overarching, sustainable, and knowledge-based network of health experts. The platform is to be jointly developed and implemented in the regions and will follow an open-access approach. As a result, synergies can be shared more quickly, strengthening competencies and competitiveness. In addition to practice partners, scientific and municipal institutions and SMEs are involved. The actors thus contribute to scientific performance, innovative strength, and resilience.
Today, many scientific works use deep learning algorithms on time series to detect physiological events of interest. In sleep medicine, this is particularly relevant for detecting sleep apnea, specifically obstructive sleep apnea events. Deep learning algorithms with different architectures are used to achieve decent results in accuracy, sensitivity, etc. Although there are models that can reliably detect apnea and hypopnea events, another essential aspect to consider is the explainability of these models, i.e., why a model makes a particular decision. Another critical factor is how these deep learning models determine the severity of obstructive sleep apnea in patients based on the apnea-hypopnea index (AHI). This work presents deep learning models trained using two approaches to AHI determination. The approaches differ in the data format the models are fed: full time series and window-based time series.
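The two input formats and the AHI itself can be sketched in a few lines of Python. The function names, window sizes, and event counts below are illustrative assumptions, not the models or data of this work.

```python
def make_windows(signal, window_len, stride):
    """Split a 1-D signal into fixed-length, possibly overlapping windows
    (the window-based input format); an incomplete tail is dropped."""
    return [signal[i:i + window_len]
            for i in range(0, len(signal) - window_len + 1, stride)]

def ahi(event_count, recording_hours):
    """Apnea-hypopnea index: respiratory events per hour of recording."""
    return event_count / recording_hours

# toy signal: 20 samples; 8-sample windows with a stride of 4
signal = list(range(20))
windows = make_windows(signal, window_len=8, stride=4)
print(len(windows))  # -> 4 windows, starting at samples 0, 4, 8, 12

# e.g. 30 apnea/hypopnea events over a 6-hour recording gives an AHI of 5
print(ahi(30, 6.0))  # -> 5.0 (commonly graded as mild)
```

A full-time-series model would consume `signal` whole and predict the AHI directly, while a window-based model classifies each window for events and the AHI is then aggregated from the per-window predictions.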
The use of deep learning models with medical data is becoming more widespread. However, although numerous models have shown high accuracy in medical tasks such as medical image recognition (e.g. radiographs), there are still many obstacles to deploying these models in a real healthcare environment. This article presents a series of basic requirements that must be taken into account when developing deep learning models for biomedical time series classification tasks, with the aim of facilitating the subsequent productionization of the models in healthcare. These requirements range from the correct collection of data to the existing techniques for correctly explaining the results obtained by the models. One of the main reasons the use of deep learning models is not more widespread in healthcare settings is their lack of clarity when it comes to explaining decision making.
Organizations that operate under uncertainty need to cultivate their ability to manage their primary resource, knowledge, accordingly. Under such conditions, organizations are required to harvest knowledge from two sources: to explore knowledge that is to be found outside the organization as well as to exploit knowledge that is contained within. In a knowledge management context, these exploitation and exploration activities have been conceptualized as knowledge ambidexterity. While ambidexterity has been studied extensively in contexts such as manufacturing or IT, the notion of knowledge ambidexterity remains scarce in current knowledge management research. This study illustrates knowledge ambidexterity and elaborates its positive impact on organizational performance. Our study furthermore answers the question of how the use of enterprise social media (ESM) can facilitate the performance effects of knowledge ambidexterity. Drawing on the theory of communication visibility, we argue that ESM (e.g., Microsoft Teams, Slack, etc.) allow employees to communicate unhindered while making these communications visible. This allows for capturing the tacit knowledge within these communications; this form of knowledge is generally hard to codify and can be a source of competitive edge. With respect to knowledge ambidexterity, ESM use can capture tacit knowledge originating from inside and outside the organization, which fosters the development of a competitive advantage and, thus, supports its positive effect on organizational performance. This paper contributes to IT-enabled ambidexterity research in two aspects: (1) it sheds light on knowledge ambidexterity and, thereby, addresses a major practical challenge for knowledge-intensive organizations, and (2) it elaborates on the effects that ESM use can have on the relationship between knowledge ambidexterity and organizational performance.
This work-in-progress paper offers a better understanding of the phenomenon of ambidexterity in a knowledge context while providing insights into the facilitating role of ESM. Our research serves as a foundation for future empirical examinations of the concept of knowledge ambidexterity.
For a long time, most discrete accelerators have been attached to host systems using various generations of the PCI Express interface. However, with its lack of support for coherency between accelerator and host caches, fine-grained interactions require frequent cache flushes, or even the use of inefficient uncached memory regions. The Cache Coherent Interconnect for Accelerators (CCIX) was the first multi-vendor standard for enabling cache-coherent host-accelerator attachments, and it is already indicative of the capabilities of upcoming standards such as Compute Express Link (CXL). In our work, we compare and contrast the use of CCIX with PCIe when interfacing an ARM-based host with two generations of CCIX-enabled FPGAs. We provide both low-level throughput and latency measurements for accesses and address translation, and we examine an application-level use case of using CCIX for fine-grained synchronization in an FPGA-accelerated database system. We show that especially smaller reads from the FPGA to the host can benefit from CCIX, with roughly 33% lower latency than PCIe. Small writes to the host, though, have roughly 32% higher latency than PCIe, since they carry a higher coherency overhead. For the database use case, the use of CCIX allowed a constant synchronization latency to be maintained even with heavy host-FPGA parallelism.