Informatik
Background
Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analyzing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics.
Methods
We defined Surgomics as the entirety of surgomic features that are process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources like endoscopic videos, vital sign monitoring, medical devices and instruments and respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers for rating the features’ clinical relevance and technical feasibility.
Results
In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was “surgical skill and quality of performance” for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was “Instrument” (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were “intraoperative adverse events”, “action performed with instruments”, “vital sign monitoring”, and “difficulty of surgery”.
Conclusion
Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
Development of an expert system to overpass citizens technological barriers on smart home and living
(2023)
Adopting new technologies can be overwhelming, even for people with experience in the field. For the general public, learning about new implementations, releases, brands, and enhancements can cause them to lose interest. There is a clear need to create point sources and platforms that provide helpful information about novel smart technologies, assisting users, technicians, and providers with products and technologies. The purpose of these platforms is twofold, as they can gather and share information on interests common to manufacturers and vendors. This paper presents the "Finde-Dein-SmartHome" tool, developed in association with the Smart Home & Living competence center [5] to help users learn about, understand, and purchase available technologies that meet their home automation needs. This tool aims to lower the usability barrier and guide potential customers to resolve their doubts about privacy and pricing. Communities can use the information provided by this tool to identify market trends that could eventually lower costs for providers and incentivize access to innovative home technologies and devices supporting long-term care.
Software is an integral part of new features in the automotive sector. To determine software quality, car manufacturers in the Hersteller Initiative Software (HIS) consortium defined a set of metrics. Yet, problems with assigning metrics to quality attributes often occur in practice. The specified boundary values lead to discussions between contractors and clients, as different standards and metric sets are used. This paper studies metrics used in the automotive sector and the quality attributes they address. The HIS, ISO/IEC 25010:2011, and ISO/IEC 26262:2018 are utilized to draw a big picture illustrating (i) which metrics and boundary values are reported in literature, (ii) how the metrics match the standards, (iii) which quality attributes are addressed, and (iv) how the metrics are supported by tools. Our findings from analyzing 38 papers include a catalog of 112 metrics, of which 17 define boundary values and 48 are supported by tools. Most of the metrics are concerned with source code, are generic, and are not specifically designed for automotive software development. We conclude that many metrics exist, but a clear definition of the metrics' context, notably regarding the construction of flexible and efficient measurement suites, is missing.
Near-data processing in database systems on native computational storage under HTAP workloads
(2022)
Today's Hybrid Transactional and Analytical Processing (HTAP) systems tackle ever-growing data volumes in combination with a mixture of transactional and analytical workloads. While optimizing for aspects such as data freshness and performance isolation, they build on the traditional data-to-code principle and may trigger massive cold data transfers that impair the overall performance and scalability. Firstly, in this paper we show that Near-Data Processing (NDP) naturally fits in the HTAP design space. Secondly, we propose an NDP database architecture, allowing transactionally consistent in-situ executions of analytical operations in HTAP settings. We evaluate the proposed architecture in state-of-the-art key/value-stores and multi-versioned DBMS. In contrast to traditional setups, our approach yields robust, resource- and cost-efficient performance.
nKV in action: accelerating KV-stores on native computational storage with near-data processing
(2020)
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper we demonstrate various NDP alternatives in nKV, which is a key/value store utilizing native computational storage and near-data processing. We showcase the execution of classical operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4x-2.7x better performance due to NDP. nKV runs on real hardware - the COSMOS+ platform.
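The data-to-code versus near-data contrast described above can be illustrated with a toy sketch (function names and data are hypothetical; in a real NDP system such as nKV the filter executes on the computational storage device, not in the host):

```python
# Toy illustration of the data-to-code vs. near-data contrast.
# In a real NDP setup the predicate runs on the storage device;
# here "storage" is just a dict and both paths run in-host.

def scan_data_to_code(storage, predicate):
    # Traditional path: transfer every record to the host, then filter.
    transferred = list(storage.items())  # full cold-data transfer
    return [(k, v) for k, v in transferred if predicate(v)], len(transferred)

def scan_near_data(storage, predicate):
    # NDP path: the predicate is shipped to the data; only matches move.
    results = [(k, v) for k, v in storage.items() if predicate(v)]
    return results, len(results)  # only matching records transferred

store = {f"key{i}": i for i in range(1000)}
hot = lambda v: v >= 990
_, moved_traditional = scan_data_to_code(store, hot)
matches, moved_ndp = scan_near_data(store, hot)
print(moved_traditional, moved_ndp)  # 1000 10
```

Both paths return the same ten matches, but the NDP-style scan moves two orders of magnitude fewer records, which is the effect behind the reported 1.4x-2.7x speedups.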
The paper describes how eye-tracking can be used to explore electronic patient records (EPR) in a sterile environment. As an information display, we used a system that we developed for the presentation of patient data and for supporting surgical hand disinfection. The eye-tracking was performed using the Tobii Eye Tracker 4C, and the connection between the eye-tracker and the HTML website was realized using the Tobii EyeX Chrome Extension. Interactions with the EPR are triggered by fixations of icons. The interaction was working as intended, but test persons reported a high mental load while using the system.
Respiratory diseases are leading causes of death and disability in the world. The recent COVID-19 pandemic also affects the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and a clinical environment. Most of the techniques used to date have also been invasive or expensive.
Some research groups are developing hardware devices and techniques to enable non-invasive or even remote respiratory sound acquisition. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, such as microphones, wearables, and Internet of Things devices, that can be used in this scope.
Some interesting applications have been found. Some devices facilitate sound acquisition in a clinical environment, while others enable daily monitoring outside it. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
Context
In a world of high dynamics and uncertainty, it is almost impossible to predict long-term which products, services, or features will satisfy customer needs. To counter this situation, conducting Continuous Improvement or Design Thinking for product discovery is a common approach. A major constraint in conducting product discovery activities is the high effort required to discover and validate features and requirements. In addition, companies struggle to integrate product discovery activities into their agile processes and iterations.
Objective
This paper suggests a supportive tool, the “Discovery Effort Worthiness (DEW) Index”, for product owners and agile teams to determine a suitable amount of effort that should be spent on Design Thinking activities. To operationalize DEW, proposals for practitioners are presented that can be used to integrate product discovery into product development and delivery.
Method
A case study was conducted for the development of the DEW index. In addition, we conducted an expert workshop to develop proposals for the integration of product discovery activities into the product development and delivery process.
Results
First, we present the "Discovery Effort Worthiness Index" in the form of a formula. Second, we identified requirements that must be fulfilled for systematic integration of product discovery activities into product development and delivery. Third, from these requirements we derived proposals for integrating product discovery activities into a company's product development and delivery.
Conclusion
The developed "Discovery Effort Worthiness Index" provides a tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. Integrating product discovery with product development and delivery should ensure that the results of product discovery are incorporated into product development. This aims to systematically analyze product risks to increase the chance of product success.
Analysis of multicellular patterns is required to understand tissue organizational processes. By using a multi-scale, object-oriented image processing method, the spatial information of cells can be extracted automatically. Instead of manual segmentation or indirect measurements, such as the general distribution of contrast or flow, the orientation and distribution of individual cells are extracted for quantitative analysis. Relevant objects are identified by feature queries, and no low-level knowledge of image processing is required.
Hardly any software development process is used as prescribed by authors or standards. Regardless of company size or industry sector, a majority of project teams and companies use hybrid development methods (short: hybrid methods) that combine different development methods and practices. Even though such hybrid methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this article, we make a first step towards a statistical construction procedure for hybrid methods. Grounded in 1467 data points from a large-scale practitioner survey, we study the question: What are hybrid methods made of and how can they be systematically constructed? Our findings show that only eight methods and few practices build the core of modern software development. Using an 85% agreement level in the participants' selections, we provide examples illustrating how hybrid methods can be characterized by the practices they are made of. Furthermore, using this characterization, we develop an initial construction procedure, which allows for defining a method frame and enriching it incrementally to devise a hybrid method using ranked sets of practices.
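The 85% agreement level can be read as a simple frequency filter over practice selections; a minimal sketch with made-up responses (the survey's 1467 actual data points are not reproduced here):

```python
from collections import Counter

def core_practices(selections, agreement=0.85):
    """Keep practices chosen by at least `agreement` of respondents."""
    counts = Counter(p for chosen in selections for p in set(chosen))
    n = len(selections)
    return sorted(p for p, c in counts.items() if c / n >= agreement)

# Illustrative responses, not the survey data.
responses = [
    ["Code Review", "CI", "Daily Standup"],
    ["Code Review", "CI", "Pair Programming"],
    ["Code Review", "CI", "Daily Standup"],
    ["Code Review", "Daily Standup"],
]
print(core_practices(responses))  # ['Code Review']
```

At the 85% level only practices selected by nearly all respondents survive, which is how a small core of methods and practices emerges from a large, heterogeneous sample.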
Purpose: This study aims to conceptualize and test the effect of consumers' perceptions of complaint handling quality (PCHQ) in both traditional and social media channels.
Design/methodology/approach: Study 1 systematically reviews the relevant literature and then carries out a consumer and manager survey. This approach aims to conceptualize the dimensionality of PCHQ. Study 2 tests the effect of PCHQ on key marketing outcomes. Using survey data from a German telecommunications company, the study provides an explanation for the differences in outcomes across traditional (hotline) and social media channels.
Findings: Study 1 reveals that PCHQ is best conceptualized as a five-dimensional construct with 15 facets. There are significant differences between customers and managers in terms of the importance attached to the various dimensions. The construct shows strong psychometric properties with high reliability and validity, thereby opening up opportunities to treat these facets as measurement indicators for the construct. Study 2 indicates that the effect of PCHQ on consumer loyalty and word-of-mouth (WOM) communication is stronger in social media than in traditional channels. Procedural justice and the overall quality of service solutions emerge as general dimensions of PCHQ because they are equally important in both channels. In contrast, interactional justice, distributive justice and customer effort have varying effects across the two channels.
Research limitations/implications: This study contributes to the understanding of a firm's channel selection for complaint handling in two ways. First, it evaluates and conceptualizes the PCHQ construct. Second, it compares the effects of different dimensions of PCHQ on key marketing outcomes across traditional and social media channels.
Practical implications: This study enables managers to understand the difference in efficacy attached to different dimensions of PCHQ. It further highlights such differences across traditional and social media service channels. For example, the effect of complaint handling on social media is of particular importance when generating WOM communication.
Originality/value: This study offers a comprehensive conceptualization of the PCHQ construct and reveals the general and channel contingent effects of its different dimensions on key marketing outcomes.
Purpose
As a response to the increased frequency of disruptive events and intense competition, organizational agility has become a key concept in organizational research. Fostering organizational agility requires leveraging knowledge that exists both outside (exploration) and inside (exploitation) the organization. This research tests the so-called ambidexterity hypothesis, which claims that a balance between exploration and exploitation leads to increased organizational outcomes, including the development of organizational agility. Complementing previously established measurement models on ambidexterity, this research proposes an alternative measurement model to analyze how ambidexterity can enhance organizational agility and, indirectly, performance, taking into consideration the moderating effect of environmental competitiveness.
Design/methodology/approach
A review of existing measurement models for ambidexterity shows that tension, a crucial aspect of ambidexterity, is often neglected. The authors, therefore, develop a new measurement model of ambidexterity to incorporate ambidexterity-induced tension. Using this measurement model, they examine the effect of ambidexterity on the development of entrepreneurial and adaptive agility as well as performance.
Findings
Ambidexterity positively influences both entrepreneurial and adaptive agility, indicating that a balance between exploration and exploitation has superior organizational effects. This finding confirms the ambidexterity hypothesis with respect to organizational agility. Furthermore, both entrepreneurial and adaptive agility drive organizational performance. These two indirect effects via agility fully mediate the impact of ambidexterity on organizational performance. Finally, environmental competitiveness positively moderates the relationship between ambidexterity and adaptive agility.
Originality/value
The findings extend research on ambidexterity by showing its positive effects on organizational agility. Furthermore, the study proposes an alternative operationalization to capture the ambidexterity construct that may lay the groundwork for further applications of the ambidexterity concept.
This work is a comparative study of survey tools, intended to help developers select a suitable tool for application in an AAL environment. The first step was to identify the basic required functionality of survey tools used for AAL technologies and to compare these tools by their functionality and assignments. The comparison was derived from the data obtained, previous literature studies, and further technical data. A list of requirements was compiled and ordered by relevance to the target application domain. With the help of an integrated assessment method, a generalized estimate value was calculated, and the result is explained. Finally, the planned application of this tool in a running project is described.
The use of deep learning models with medical data is becoming more widespread. However, although numerous models have shown high accuracy in medical tasks such as medical image recognition (e.g. radiographs), there are still many obstacles to seeing these models operate in a real healthcare environment. This article presents a series of basic requirements that must be taken into account when developing deep learning models for biomedical time series classification tasks, with the aim of facilitating the subsequent deployment of the models in healthcare. These requirements range from the correct collection of data to the existing techniques for correctly explaining the results obtained by the models. One of the main reasons why deep learning models are not more widely used in healthcare settings is their lack of clarity when it comes to explaining decision-making.
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when using it outside the hospital or for daily use. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) by using deep learning models that aim to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is placed on model explainability techniques, concretely on Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
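Grad-CAM weights each convolutional feature map by the time-averaged gradient of the class score with respect to it; a minimal NumPy sketch of that computation for 1D signals (shapes and random inputs are illustrative, not the paper's trained model):

```python
import numpy as np

def grad_cam_1d(feature_maps, gradients):
    """Grad-CAM relevance map for a 1D-CNN.

    feature_maps: (channels, time) activations of the last conv layer.
    gradients:    (channels, time) d(class score)/d(feature_maps).
    Returns a (time,) relevance map normalized to [0, 1].
    """
    weights = gradients.mean(axis=1)  # alpha_k: time-averaged gradient per channel
    cam = np.maximum((weights[:, None] * feature_maps).sum(axis=0), 0)  # ReLU
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(0)
fmap = rng.random((8, 128))   # 8 channels, 128 time steps (toy values)
grads = rng.random((8, 128))
cam = grad_cam_1d(fmap, grads)
print(cam.shape)  # (128,)
```

The resulting map highlights the time regions of the input signal that most influenced the classification, which is how the interpretability analysis above identifies the signal regions relevant to an OSA decision.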
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
Introduction
Despite its high accuracy, polysomnography (PSG) has several drawbacks for diagnosing obstructive sleep apnea (OSA). Consequently, multiple portable monitors (PMs) have been proposed.
Objective
This systematic review aims to investigate the current literature to analyze the sets of physiological parameters captured by a PM to select the minimum number of such physiological signals while maintaining accurate results in OSA detection.
Methods
Inclusion and exclusion criteria for the selection of publications were established prior to the search. The evaluation of the publications was made based on one central question and several specific questions.
Results
The abilities to detect hypopneas, sleep time, or awakenings were some of the features studied to investigate the full functionality of the PMs to select the most relevant set of physiological signals. Based on the physiological parameters collected (one to six), the PMs were classified into sets according to the level of evidence. The advantages and the disadvantages of each possible set of signals were explained by answering the research questions proposed in the methods.
Conclusions
The minimum number of physiological signals detected by PMs for the detection of OSA depends mainly on the purpose and context of the sleep study. The set of three physiological signals showed the best results in the detection of OSA.
The development of automatic solutions for the detection of physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data allow access to these data faster and more efficiently. As a result, the development of artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, developing deep learning models for detecting obstructive sleep apnea (OSA) events is at the forefront. Numerous scientific studies focus on the architecture of the models and the results that these models can provide in terms of OSA classification and Apnea-Hypopnea Index (AHI) calculation. However, little focus is put on other aspects of great relevance that are crucial for the training and performance of the models, among them the set of physiological signals used and the preprocessing tasks performed prior to model training. This paper covers the essential requirements that must be considered before training a deep learning model for obstructive sleep apnea detection, and surveys solutions that currently exist in the scientific literature by analyzing the preprocessing tasks performed prior to training.
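One typical preprocessing step of the kind surveyed here, segmenting a raw signal into fixed-length windows and normalizing each one before training, can be sketched as follows (window length, overlap, and per-window z-scoring are illustrative choices, not prescriptions from the paper):

```python
import numpy as np

def segment_and_normalize(signal, window, step):
    """Split a 1D signal into overlapping windows and z-score each window.

    Per-window normalization is one simple option; dataset-level
    normalization or filtering may be preferable depending on the signal.
    """
    n = (len(signal) - window) // step + 1
    epochs = np.stack([signal[i * step : i * step + window] for i in range(n)])
    mu = epochs.mean(axis=1, keepdims=True)
    sd = epochs.std(axis=1, keepdims=True)
    return (epochs - mu) / np.where(sd == 0, 1, sd)  # guard constant windows

sig = np.sin(np.linspace(0, 20, 1000))  # stand-in for an airflow trace
epochs = segment_and_normalize(sig, window=250, step=125)
print(epochs.shape)  # (7, 250)
```

Each epoch then becomes one training example; the choice of window length and overlap is itself one of the preprocessing decisions the paper argues deserves more attention.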
The citizen-centered health platform project is intended to provide a platform that can be used in EU cross-border regions, where social and economic exchange occurs across national borders. The overriding challenges are: (a) social: improving citizen-centered health and care provision; (b) technical: providing a digital platform for networking citizens, service providers, and municipal actors; (c) economic: developing long-term successful (sustainable) business models/value chains. The platform should strengthen and expand existing networks and establish new regional networks. Each network addresses particular challenges and applies them in a region-specific manner. Here, the national boundary conditions and the interregional needs play an essential role. These objectives require sufficient participation of civil society representatives. Furthermore, the platform will establish an overarching, sustainable, and knowledge-based network of health experts. The platform is to be jointly developed and implemented in the regions and follow an open-access approach. Therefore, synergies will be shared more quickly, strengthening competencies and competitiveness. In addition to practice partners, scientific and municipal institutions and SMEs are involved. The actors thus contribute to scientific performance, innovative strength, and resilience.
Bauspar contracts (Bausparverträge) are combined savings and financing instruments designed for the general population. In 2020, approximately 25 million Bauspar contracts existed in Germany. A substantial part of the attractiveness of the Bauspar contract for customers lies in the high flexibility of these financial products, which allows flexible adaptation to individual financing conditions over the life of the contract. During the savings phase, this includes in particular options to increase, reduce, or split contracts, as well as relatively flexible adjustment of the savings rate. Once a contract is ready for allocation, the savings phase can be continued within certain time limits. During the loan phase, flexible unscheduled repayments are possible at any time and without a prepayment penalty.
The numerous embedded options influence one another and must always be considered and managed holistically in their combined effect. Empirical experience from recent decades shows customer behavior in exercising these options that is guided by financial-mathematical considerations but does not proceed in a fully financially rational manner.
Introduction: Telemedicine reduces greenhouse gas emissions (CO2eq); however, study results vary widely depending on the setting. This is the first study to focus on the effects of telemedicine on the CO2 footprint of primary care.
Methods: We conducted a comprehensive retrospective study to analyze the total CO2eq emissions of the kilometers (km) saved by telemedical consultations. We categorized prevented and provoked patient journeys, including pharmacy visits. We calculated the CO2eq emission savings achieved through primary care telemedical consultations in comparison to the journeys that would have occurred without telemedicine. We used the comprehensive footprint approach, including all telemedical cases and the CO2eq emissions of the telemedicine center infrastructure. In order to determine the net amount of CO2eq emissions avoided by the telemedical center, we calculated the emissions associated with the provision of telemedical consultations (including the total consumption of physicians' workstations) and subtracted them from the total of avoided CO2eq emissions. Furthermore, our calculation also considered patient cases that needed an in-person visit after the telemedical consultation. We calculated the savings taking into account the source of the consumed energy (renewable or not).
Results: 433 890 telemedical consultations overall helped save 1 800 391 km in travel. On average, 1 telemedical consultation saved 4.15 km of individual transport and consumed 0.15 kWh. We detected savings in almost every cluster of patients. After subtracting the CO2eq emissions caused by the telemedical center, the data reveal savings of 247.1 net tons of CO2eq emissions in total and of 0.57 kg CO2eq per telemedical consultation. The comprehensive footprint approach thus indicated a reduced footprint due to telemedicine in primary care.
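The reported per-consultation averages follow directly from the study's totals, as a quick arithmetic check shows:

```python
# Sanity-check the reported averages against the study's totals.
consultations = 433_890
km_saved_total = 1_800_391
net_savings_t = 247.1            # net tons CO2eq after center emissions

km_per_consult = km_saved_total / consultations        # ~4.15 km
kg_per_consult = net_savings_t * 1000 / consultations  # ~0.57 kg CO2eq
print(round(km_per_consult, 2), round(kg_per_consult, 2))  # 4.15 0.57
```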
Discussion: Integrating a telemedical center into the health care system reduces the CO2 footprint of primary care medicine; this is true even in a densely populated country with relatively little car use, such as Switzerland. The insights of this study complement previous studies that focused on narrower aspects of telemedical consultations.