610 Medicine, Health
Current noninvasive methods of clinical practice often do not identify the causes of conductive hearing loss due to pathologic changes in the middle ear with sufficient certainty. Wideband acoustic immittance (WAI) measurement is noninvasive, inexpensive and objective. It is very sensitive to pathologic changes in the middle ear and therefore promising for diagnosis. However, evaluation of the data is difficult because of large interindividual variations. Machine-learning methods such as convolutional neural networks (CNNs), which might be able to deal with these overlapping patterns, require a large amount of labeled measurement data for training and validation. This is difficult to provide given the low prevalence of many middle-ear pathologies. Therefore, this study proposes an approach in which the WAI training data of the CNN are simulated with a finite-element ear model and the Monte-Carlo method. With this approach, virtual populations of normal, otosclerotic, and disarticulated ears were generated, consistent with the averaged data of measured populations and representing the qualitative characteristics of individuals well. The CNN trained with the virtual data achieved, for otosclerosis, an AUC of 91.1 %, a sensitivity of 85.7 %, and a specificity of 85.2 %. For disarticulation, an AUC of 99.5 %, a sensitivity of 100 %, and a specificity of 93.1 % were achieved. Furthermore, it was estimated that specificity could potentially be increased to about 99 % in both pathological cases if stapes reflex threshold measurements were used to confirm the diagnosis. Thus, the procedure's performance is comparable to that of classifiers from other studies trained with real measurement data, and the procedure therefore offers great potential for the diagnosis of rare or early-stage pathologies. The clinical potential of these preliminary results remains to be evaluated on more measurement data and additional pathologies.
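The abstract above does not specify the network architecture; as a purely illustrative sketch of the kind of classifier described, the following minimal PyTorch model assumes WAI curves sampled at 64 frequencies and three classes (normal, otosclerosis, disarticulation). All layer sizes and names are hypothetical, not the study's model.

```python
# Minimal sketch of a 1D CNN for classifying WAI curves. Input shape, layer
# sizes and class count are assumptions for illustration only.
import torch
import torch.nn as nn

class WaiCnn(nn.Module):
    def __init__(self, n_freqs: int = 64, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (n_freqs // 4), n_classes)

    def forward(self, x):              # x: (batch, 1, n_freqs)
        return self.classifier(self.features(x).flatten(1))

model = WaiCnn()
simulated_batch = torch.randn(8, 1, 64)   # stand-in for Monte-Carlo-simulated WAI curves
print(model(simulated_batch).shape)       # -> torch.Size([8, 3]) class scores
```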
Background: Wideband acoustic immittance (WAI) and wideband tympanometry (WBT) are promising approaches to improve accuracy in middle-ear diagnosis, though, due to significant interindividual differences, their analysis and interpretation remain challenging. Recent approaches implement machine learning (ML) or deep learning classifiers trained with measured WAI or WBT data for the classification of otitis media or otosclerosis. First approaches have also been made to identify the regions of the WBT data that the classifiers use for their decision-making.
Methods: Two classifiers, a convolutional neural network (CNN) and the ML algorithm extreme gradient boosting (XGB), are trained on artificial data obtained with a finite-element ear model that provides the middle-ear quantities energy reflectance (ER), pressure reflectance phase, and impedance amplitude and phase. The performance of both classifiers is evaluated by cross-validation on artificial test data and by classification of real measurement data from the literature, using the metrics macro-recall and macro-F1 score. The feature contributions are quantified using the feature importance 'gain' for XGB and deep Taylor decomposition for the CNN.
Results: In the cross-validation with artificial data, the macro-recall and macro-F1 scores are similar, namely 91.2% for XGB and 94.5% for the CNN. For the classification of real measurement data, the macro-recall and macro-F1 scores were 81.8% and 38.2% (XGB) and 81.0% and 54.8% (CNN), respectively. The key features identified are ER between 600 and 1,000 Hz together with impedance phase between 600 and 1,000 Hz for XGB, and ER up to 1,500 Hz for the CNN.
Conclusions: We were able to show that the classifiers CNN and XGB trained with simulated data achieve reasonably good performance on real data. We conclude that using simulation-based WAI data can be a successful strategy for classifier training and that XGB can be applied to WAI data. Furthermore, ML interpretability algorithms are useful to identify relevant key features for differential diagnosis and to increase confidence in classifier decisions. Further evaluation using more measured data, especially for pathological cases, is essential.
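As a hedged illustration of the evaluation and interpretability steps named in this abstract (macro-recall, macro-F1, and the XGBoost 'gain' feature importance), the following sketch uses scikit-learn and xgboost on random placeholder arrays standing in for simulated WAI feature vectors; all names and numbers are illustrative.

```python
# Illustrative evaluation of an XGBoost classifier with macro-recall, macro-F1
# and the 'gain' feature importance; data are random placeholders, not WAI data.
import numpy as np
from sklearn.metrics import recall_score, f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 40)), rng.integers(0, 3, 200)
X_test, y_test = rng.normal(size=(50, 40)), rng.integers(0, 3, 50)

clf = XGBClassifier(n_estimators=100, max_depth=3)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("macro-recall:", recall_score(y_test, y_pred, average="macro"))
print("macro-F1:    ", f1_score(y_test, y_pred, average="macro"))

# Feature contributions via the 'gain' importance type used for XGB.
gain = clf.get_booster().get_score(importance_type="gain")
print(sorted(gain.items(), key=lambda kv: kv[1], reverse=True)[:5])
```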
Unobtrusive health monitoring systems are important when continuous monitoring of a patient's vital signs is required. In this paper, signals obtained from accelerometers placed under a bed are processed with ballistocardiography algorithms and compared with synchronized electrocardiographic signals.
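The exact BCG processing chain is not given in the abstract above; as a rough sketch of the general approach (band-pass filter the under-bed accelerometer signal, detect heartbeat peaks, and compare the resulting heart rate with the synchronized ECG), assuming a 100 Hz sampling rate and illustrative filter settings:

```python
# Rough sketch of a ballistocardiography pipeline: band-pass filter an
# under-bed accelerometer signal, detect heartbeats, and compare the resulting
# inter-beat intervals with those from a synchronized ECG. Sampling rate,
# cut-off frequencies and thresholds are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100.0  # assumed sampling rate in Hz

def detect_beats(signal, fs=FS, low=1.0, high=10.0):
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    # Require peaks at least 0.4 s apart (heart rate below 150 bpm).
    peaks, _ = find_peaks(filtered, distance=int(0.4 * fs), height=np.std(filtered))
    return peaks / fs  # beat times in seconds

def mean_heart_rate(beat_times):
    intervals = np.diff(beat_times)
    return 60.0 / np.mean(intervals) if len(intervals) else float("nan")

# Usage with real recordings (placeholders):
# hr_bcg = mean_heart_rate(detect_beats(accelerometer_trace))
# hr_ecg = mean_heart_rate(ecg_r_peak_times)   # reference from the ECG
```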
Aim:
The primary objective of this study was to examine and explain a public panic-consumption model based on stimulus–organism–response theory during the peak period of the COVID-19 pandemic in China.
Subject and methods:
The research data were collected through questionnaires adapted for the purposes of this survey, completed by a total of 408 participants (33% female) drawn from the global population. A stepwise regression analysis was conducted (a minimal sketch of such a procedure follows this abstract).
Results:
The results show that both physical and online social networks have a significant positive impact on infection risk perception, with physical social networks proving to be the better predictor. Infection information obtained from physical social networks affects conformity buying and uncontrolled self-medication when perceived infection risk is higher. Health change has a significant negative moderating effect on the impact of risk perception on conformity buying, but no significant moderating effect on its impact on uncontrolled self-medication.
Conclusions:
During recent epidemics, public panic consumption has noticeably aggravated the difficulty of emergency management, especially the management of medical materials and medicines. To help tackle this challenge in the future, this study reveals the trigger mechanisms behind panic consumption.
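The stepwise regression mentioned in the methods above is a standard variable-selection procedure; the following minimal sketch shows a forward variant (add the predictor with the smallest p-value until none falls below the entry threshold) using statsmodels. The column names in the usage comment are hypothetical, not the study's variables.

```python
# Minimal sketch of forward stepwise regression with statsmodels; data and
# column names are hypothetical placeholders, not the survey data.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X: pd.DataFrame, y: pd.Series, p_enter: float = 0.05):
    selected, remaining = [], list(X.columns)
    while remaining:
        # Fit one candidate model per remaining predictor and record its p-value.
        pvals = {}
        for cand in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return selected, sm.OLS(y, sm.add_constant(X[selected])).fit()

# Example usage (hypothetical column names):
# predictors = df[["physical_network", "online_network", "risk_perception"]]
# selected, final_model = forward_stepwise(predictors, df["conformity_buying"])
```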
Introduction: After every surgical procedure, the surgeon is responsible for preparing an individual surgical report. This is often done with a time delay and relies on the surgeon's memory, which can lead to inaccurate documentation. Semi-automatic recording, systematic storage and text processing of all intraoperative data would save the surgeon time and form the foundation for quality management and subsequent data analysis.
Method: In collaboration with the school of informatics at Reutlingen University, the prototype of a semi-automatic checklist tool was developed using the example of cochlear implantation. The basis for this is a BPMN (Business Process Model and Notation) model of the procedure, which was created from a workflow analysis of the surgical process of cochlear implantation. Based on this model, automatically generated, dynamic checklists are displayed via a user interface on an Android tablet. The intraoperative interaction, the handling of different input devices and the verification of medical correctness were tested on a phantom model.
Results: The user interface allows simple, intuitive handling by the surgeon or assistant and can be readily integrated into the intraoperative setting. The checklist allows individual recording and storage of both clinical data and surgical steps. In addition, an automated surgical report can be generated, customized and saved. The dynamic generation of the checklist from a BPMN model allows the tool to be transferred easily to other use cases.
Summary: The utilization of a dynamic checklist tool simplifies the collection, storage, and analysis of surgical data. Clinical studies are planned to explore its potential for broader application in clinical practice.
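The abstract describes generating dynamic checklists from a BPMN model of the procedure. As a hedged illustration of that idea only (not the project's actual implementation), the following sketch extracts task elements from a BPMN 2.0 XML file in document order using the Python standard library; the file name is an assumption.

```python
# Illustrative sketch: read task elements from a BPMN 2.0 XML model and turn
# them into checklist items. File name and attribute handling are assumed.
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def checklist_from_bpmn(path: str):
    root = ET.parse(path).getroot()
    task_tags = {f"{{{BPMN_NS}}}{t}" for t in ("task", "userTask", "manualTask")}
    # root.iter() walks the document in order, so checklist items keep the
    # sequence in which the tasks appear in the model file.
    return [{"id": el.get("id"), "label": el.get("name", ""), "done": False}
            for el in root.iter() if el.tag in task_tags]

# Usage (hypothetical file name):
# for item in checklist_from_bpmn("cochlear_implantation.bpmn"):
#     print("[ ]", item["label"])
```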
Depression is one of the most underdiagnosed medical conditions worldwide. Current classical procedures for the early detection of depression have been shown to be insufficient, which emphasizes the importance of seeking a more efficient approach. One of the most promising opportunities is arising in the field of artificial intelligence, as AI-based models could offer a fast, widely accessible, unbiased and efficient method to address this problem. In this paper, we compare three natural language processing models, namely BERT, GPT-3.5 and GPT-4, on three different datasets. Our findings show that fine-tuned BERT, GPT-3.5 and GPT-4 identify depression from textual data with different levels of efficacy. Comparing the models on metrics such as accuracy, precision, and recall, our results show that GPT-4 outperforms both BERT and GPT-3.5, even without prior fine-tuning, showcasing its considerable potential for automated depression detection on textual data. In the paper, we present the newly introduced datasets and the fine-tuning and model testing processes, while also addressing limitations and discussing considerations for future research.
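The abstract compares the models on accuracy, precision, and recall; the following minimal sketch shows how such a comparison can be computed with scikit-learn. The gold labels and per-model predictions are placeholders (1 = depression indicated), not results from the paper.

```python
# Illustrative comparison of text classifiers on accuracy, precision and recall.
# Predictions could come from a fine-tuned BERT model or from prompting
# GPT-3.5 / GPT-4; the values below are placeholders for held-out texts.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]          # placeholder gold labels
predictions = {
    "BERT (fine-tuned)": [1, 0, 1, 0, 0, 0, 1, 1],
    "GPT-3.5":           [1, 0, 0, 1, 0, 1, 1, 0],
    "GPT-4":             [1, 0, 1, 1, 0, 0, 1, 0],
}

for name, y_pred in predictions.items():
    print(f"{name:18s} acc={accuracy_score(y_true, y_pred):.2f} "
          f"prec={precision_score(y_true, y_pred):.2f} "
          f"rec={recall_score(y_true, y_pred):.2f}")
```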
Cultured Meat (CM) is a growing field in cellular agriculture, driven by the environmental impact of conventional meat production, which contributes to climate change and occupies ≈70% of arable land. As demand for meat alternatives rises, research in this area expands. CM production relies on tissue engineering techniques, where a limited number of animal cells are cultured in vitro and processed to create meat-like tissue comprising muscle and adipose components. Currently, CM is primarily produced on a small scale in pilot facilities. Producing a large cell mass based on suitable cell sources and bioreactors remains challenging. Advanced manufacturing methods and innovative materials are required to subsequently process this cell mass into CM products on a large scale. Consequently, CM is closely linked with biofabrication, a suite of technologies for precisely arranging cellular aggregates and cell-material composites to construct specific structures, often using robotics. This review provides insights into contemporary biomedical biofabrication technologies, focusing on significant advancements in muscle and adipose tissue biofabrication for CM production. Novel materials for biofabricating CM are also discussed, emphasizing their edibility and incorporation of healthful components. Finally, initial studies on biofabricated CM are examined, addressing current limitations and future challenges for large-scale production.
This paper discusses the development and application of an augmented reality (AR) system for assisting in nail implantation procedures for complex tibial fractures. Traditional procedures involve extensive X-ray usage from various angles, leading to increased radiation exposure and prolonged surgical times. The study presents a method using pre- and post-operative computed tomography (CT) data sets and a convolutional neural network (CNN) trained on segmented bone and metal objects. The augmented reality system overlays accurate 3D representations of bony fragments and implants onto the surgeon's view, aiming to reduce radiation exposure and intervention time. The study demonstrates successful segmentation of bone and metal objects in cases of heavy metal artifacts, achieving promising results with a relatively small number of training data sets. The integration of this system into the clinical workflow could potentially improve surgical outcomes, significantly reduce radiation time, and therefore improve patient safety.
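The abstract reports successful segmentation of bone and metal despite artifacts but gives no evaluation details; a common way to quantify segmentation overlap is the Dice coefficient, sketched below for binary masks with random placeholder volumes rather than study data.

```python
# Minimal sketch: Dice overlap between a predicted and a reference binary mask,
# a common measure of CT segmentation quality (e.g. per class: bone, metal).
# Arrays here are random placeholders, not data from the study.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(1)
pred_mask = rng.random((64, 64, 64)) > 0.5
true_mask = rng.random((64, 64, 64)) > 0.5
print(f"Dice: {dice(pred_mask, true_mask):.3f}")
```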
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring their presentation with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
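Patient registration and the reported 0.5 mm tracking accuracy rest on aligning corresponding point sets (e.g., fiducials in image and tracker space). The following sketch shows the standard SVD-based rigid registration (Kabsch algorithm) and the resulting RMS registration error in millimetres, using synthetic points rather than NeuroIGN data.

```python
# Sketch of rigid point-based registration (Kabsch/SVD), as commonly used for
# patient registration in image-guided navigation, plus the RMS registration
# error in mm. The fiducial points below are synthetic placeholders.
import numpy as np

def register_rigid(src: np.ndarray, dst: np.ndarray):
    """Find rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(2)
image_pts = rng.uniform(0, 100, size=(6, 3))                     # fiducials, mm
true_t = np.array([5.0, -3.0, 10.0])
tracker_pts = image_pts + true_t + rng.normal(0, 0.3, (6, 3))    # noisy tracker points

R, t = register_rigid(image_pts, tracker_pts)
residual = image_pts @ R.T + t - tracker_pts
print(f"RMS registration error: {np.sqrt((residual**2).sum(1).mean()):.2f} mm")
```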