Informatik
Purpose
Supporting the surgeon during surgery is one of the main goals of intelligent ORs. The OR-Pad project aims to optimize the information flow within the perioperative area. A shared information space should enable appropriate preparation and provision of relevant information at any time before, during, and after surgery.
Methods
Based on previous work on an interaction concept and system architecture for the sterile OR-Pad system, we designed a user interface for mobile and intraoperative (stationary) use, focusing on the most important functionalities, such as clear information provision to reduce information overload. The concepts were transferred into a high-fidelity prototype for demonstration purposes. The prototype was evaluated from different perspectives, including a usability study.
Results
The prototype’s central element is a timeline displaying all available case information chronologically, such as radiological images, laboratory findings, or notes. This information space can be adapted for individual purposes (e.g., highlighting a tumor, filtering for one’s own material). With the mobile and intraoperative modes of the system, relevant information can be added, preselected, viewed, and extended throughout the perioperative process. Overall, the evaluation showed good results and confirmed the vision of the information system.
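To make the timeline idea concrete, below is a minimal sketch of how such a chronologically ordered, filterable information space could be modeled; the entry fields, tags, and filter criteria are illustrative assumptions, not the OR-Pad’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TimelineEntry:
    timestamp: datetime
    kind: str                # e.g. "radiological image", "laboratory finding", "note"
    payload: str             # reference to the actual content
    author: str
    tags: set[str] = field(default_factory=set)  # e.g. {"tumor", "highlighted"}

def timeline_view(entries, kinds=None, author=None):
    """Chronological view of the case, optionally filtered by entry kind
    or by author (e.g., showing only one's own material)."""
    view = sorted(entries, key=lambda e: e.timestamp)
    if kinds is not None:
        view = [e for e in view if e.kind in kinds]
    if author is not None:
        view = [e for e in view if e.author == author]
    return view
```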
Conclusion
The high-fidelity prototype of the OR-Pad information system focuses on supporting the surgeon via a timeline that makes all available case information accessible before, during, and after surgery. The information space can be personalized to enable targeted support. Further development is reasonable to optimize the approach and address missing or insufficient aspects, such as the holding arm and sterility concept or newly requested features.
Background
Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analyzing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics.
Methods
We defined Surgomics as the entirety of surgomic features, i.e., process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team, we discussed potential data sources such as endoscopic videos, vital sign monitoring, and medical devices and instruments, together with the respective surgomic features. Subsequently, an online questionnaire was sent to experts in surgery and (computer) science at multiple centers to rate the features’ clinical relevance and technical feasibility.
Results
In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was “surgical skill and quality of performance” for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was “Instrument” (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were “intraoperative adverse events”, “action performed with instruments”, “vital sign monitoring”, and “difficulty of surgery”.
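For illustration, ratings of this kind can be aggregated per feature category as mean ± sample standard deviation; the scores below are invented, not the survey’s data.

```python
import numpy as np

# Hypothetical expert scores on the 1-10 numerical rating scale.
ratings = {
    "surgical skill and quality of performance": [9, 10, 8, 9, 7],
    "Instrument": [8, 9, 10, 7, 8],
}
for category, scores in ratings.items():
    scores = np.asarray(scores, dtype=float)
    print(f"{category}: {scores.mean():.1f} ± {scores.std(ddof=1):.1f}")
```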
Conclusion
Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
Glioblastomas are the most aggressive fast-growing primary brain tumors, originating in the glial cells of the brain. Accurate identification of the malignant tumor and its sub-regions remains one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation Challenge (BraTS) has been a popular benchmark for automatic glioblastoma segmentation algorithms since its initiation. This year, the BraTS 2021 challenge provides the largest multi-parametric MRI (mpMRI) dataset, comprising 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that the method can be readily applied clinically, thereby aiding brain cancer prognosis, therapy planning, and therapy response monitoring. A Docker image for reproducing our segmentation results is available online at https://hub.docker.com/r/razeineldin/deepseg21.
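A minimal sketch of the probability-level ensembling idea follows, assuming both networks emit per-class softmax volumes of identical shape; DeepSeg’s and nnU-Net’s actual inference pipelines (preprocessing, sliding-window prediction, postprocessing) are omitted, and the toy models are stand-ins.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, mpmri):
    """Average the softmax maps of several segmentation models, then take
    the per-voxel argmax as the final label map."""
    probs = [torch.softmax(m(mpmri), dim=1) for m in models]   # (B, C, D, H, W)
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

# Toy stand-ins: 4 mpMRI input channels -> 4 tumor classes.
model_a = torch.nn.Conv3d(4, 4, kernel_size=1)
model_b = torch.nn.Conv3d(4, 4, kernel_size=1)
volume = torch.randn(1, 4, 64, 64, 64)    # dummy multi-parametric MRI
labels = ensemble_predict([model_a, model_b], volume)
```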
Intraoperative imaging can assist neurosurgeons in delineating brain tumours and surrounding brain structures. Interventional ultrasound (iUS) is a convenient modality with fast scan times. However, iUS data may suffer from noise and artefacts that limit their interpretation during brain surgery. In this work, we use two deep learning networks, namely UNet and TransUNet, for automatic and accurate segmentation of the brain tumour in iUS data. Experiments were conducted on a dataset of 27 iUS volumes. The outcomes show that combining a transformer with UNet is advantageous, providing efficient segmentation by modelling long-range dependencies within each iUS image. In particular, the enhanced TransUNet was able to predict cavity segmentation in iUS data at an inference rate of more than 125 FPS. These promising results suggest that deep learning networks can be successfully deployed to assist neurosurgeons in the operating room.
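The reported inference rate can be reproduced in spirit with a simple timing loop; the network below is a toy stand-in, not the enhanced TransUNet.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, frame, n_runs=100):
    model.eval()
    for _ in range(5):        # warm-up runs excluded from the timing
        model(frame)
    start = time.perf_counter()
    for _ in range(n_runs):
        model(frame)
    return n_runs / (time.perf_counter() - start)

net = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)  # dummy segmentation net
frame = torch.randn(1, 1, 256, 256)                    # one iUS image
print(f"{measure_fps(net, frame):.1f} FPS")
```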
With the progress of technology in modern hospitals, intelligent perioperative situation recognition will gain relevance due to its potential to substantially improve surgical workflows by providing situation knowledge in real time. Such knowledge can be extracted from image data by machine learning techniques but poses a privacy threat to the staff’s and patients’ personal data. De-identification is a possible solution for removing sensitive visual information. In this work, we developed a YOLO v3 based prototype to detect sensitive areas in the image in real time. These are then de-identified using common image obfuscation techniques. Our approach shows that it is in principle suitable for de-identifying sensitive data in OR images and contributes to privacy-respectful processing in the context of situation recognition in the OR.
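A minimal sketch of the obfuscation step, assuming a detector such as YOLO v3 has already returned bounding boxes of the sensitive regions; pixelation and Gaussian blur are two common obfuscation techniques of the kind referred to.

```python
import cv2
import numpy as np

def deidentify(image, boxes, method="pixelate", block=16):
    """Obfuscate detected sensitive regions in an OR image.
    boxes: iterable of (x, y, w, h) rectangles from the detector."""
    out = image.copy()
    for x, y, w, h in boxes:
        roi = out[y:y + h, x:x + w]
        if method == "pixelate":
            small = cv2.resize(roi, (max(1, w // block), max(1, h // block)))
            roi[:] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
        else:
            roi[:] = cv2.GaussianBlur(roi, (31, 31), 0)
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)        # dummy camera frame
anonymized = deidentify(frame, [(100, 80, 120, 160)])  # hypothetical detection
```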
Ultra wideband real-time locating system for tracking people and devices in the operating room
(2022)
Position tracking within the OR could be one possible input for intraoperative situation recognition. Our approach demonstrates a Real-time Locating System (RTLS) using Ultra Wideband (UWB) technology to determine the position of people and objects. The UWB RTLS was integrated into the research OR at Reutlingen University, and the system’s settings were optimized with regard to four factors: accuracy, susceptibility to interference, range, and latency. To this end, different parameters were adapted and their effects on the factors were compared. Good tracking quality could be achieved under optimal settings. These results indicate that a UWB RTLS is well suited to determine the position of people and devices in our setting. The feasibility of the system needs to be evaluated under real OR conditions.
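UWB RTLS positions are typically computed from measured ranges to fixed anchors; the least-squares trilateration below is a generic sketch of that principle, not the deployed system’s algorithm, and the anchor layout is invented.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from ranges to >= 3 fixed anchors: linearize
    |x - a_i|^2 = d_i^2 against the first anchor and solve A x = b."""
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (a[1:] - a[0])
    b = np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2) - (d[1:] ** 2 - d[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four anchors in a 6 m x 4 m room and noisy ranges to a tag near (2.3, 1.0).
anchors = [(0.0, 0.0), (6.0, 0.0), (0.0, 4.0), (6.0, 4.0)]
ranges = [2.5, 3.8, 3.8, 4.8]
print(trilaterate(anchors, ranges))
```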
The paper describes how eye tracking can be used to explore electronic patient records (EPR) in a sterile environment. As an information display, we used a system that we developed for presenting patient data and supporting surgical hand disinfection. Eye tracking was performed using the Tobii Eye Tracker 4C, and the connection between the eye tracker and the HTML website was realized using the Tobii EyeX Chrome Extension. Interactions with the EPR are triggered by fixating icons. The interaction worked as intended, but test persons reported a high mental load while using the system.
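The fixation-triggered interaction can be sketched as a dwell-time check over incoming gaze samples; the threshold and geometry below are assumptions, and the actual system implemented this in an HTML page via the Tobii EyeX Chrome Extension rather than in Python.

```python
import time

DWELL_SECONDS = 0.8  # assumed fixation threshold; the paper's value is not given

def make_dwell_trigger(icon_rect, action):
    """Return a gaze-sample handler that fires `action` once the gaze has
    rested inside `icon_rect` (x0, y0, x1, y1) for DWELL_SECONDS."""
    x0, y0, x1, y1 = icon_rect
    state = {"since": None}

    def on_gaze(x, y):
        if x0 <= x <= x1 and y0 <= y <= y1:
            state["since"] = state["since"] or time.monotonic()
            if time.monotonic() - state["since"] >= DWELL_SECONDS:
                state["since"] = None   # reset so the action fires only once
                action()
        else:
            state["since"] = None       # gaze left the icon
    return on_gaze

open_record = make_dwell_trigger((10, 10, 90, 90), lambda: print("EPR opened"))
```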
Background: Endoscopic surgical techniques have become the gold standard in paranasal sinus surgery. The resulting challenges for surgical training can be addressed by using virtual reality (VR) training simulators. A number of simulators for paranasal sinus surgery have been developed to date. However, previous studies on the training effect were conducted only with medically pre-trained participants, or its course over time was not reported.
Methods: A paranasal sinus CT dataset was segmented and converted into a three-dimensional polygonal surface model, which was textured using original photographic material. Interaction with the virtual environment took place via a haptic input device. During the simulation, the parameters procedure duration and error count were recorded. Ten participants each completed a training unit consisting of five practice runs on each of ten consecutive days.
Results: Four participants reduced the required time by more than 60% over the course of the training period. Four of the participants reduced their error count by more than 60%. Eight of ten participants improved on both parameters. Over the entire measured period, the median procedure duration was reduced by 46 seconds and the median error count by 191. Testing for a relationship between the two parameters revealed a positive correlation.
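For illustration, the reported relationship between the two parameters can be checked with a Pearson correlation; the numbers below are invented, and the study does not state which correlation measure was used.

```python
from scipy.stats import pearsonr

# One (procedure duration in s, error count) pair per practice run (invented).
durations = [300, 280, 260, 240, 230, 220, 210, 205, 200, 195]
errors = [450, 400, 380, 330, 300, 280, 260, 250, 240, 230]
r, p = pearsonr(durations, errors)
print(f"r = {r:.2f}, p = {p:.3f}")
```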
Conclusion: In summary, training on the paranasal sinus simulator considerably improves performance even for inexperienced individuals, in terms of both the duration and the accuracy of the procedure.
Purpose
Context awareness in the operating room (OR) is important to realize targeted assistance that supports actors during surgery. A situation recognition system (SRS) is used to interpret intraoperative events and derive an intraoperative situation from them. To achieve a modular system architecture, it is desirable to decouple the SRS from other system components. This leads to the need for an interface between such an SRS and context-aware systems (CAS). This work aims to provide an open standardized interface to enable loose coupling of the SRS with varying CAS, allowing vendor-independent device orchestrations.
Methods
A requirements analysis investigated limiting factors that currently prevent the integration of CAS in today's ORs. The elicited requirements enabled the selection of a suitable base architecture. We examined how to specify this architecture within the constraints of an interoperability standard. The resulting middleware was integrated into a prototypic SRS and into our system for intraoperative support, the OR-Pad, as an exemplary CAS, to evaluate whether our solution can enable context-aware assistance during simulated orthopedic interventions.
Results
The emerging Service-oriented Device Connectivity (SDC) standard series was selected to specify and implement a middleware that provides the interpreted contextual information while the SRS and CAS remain loosely coupled. The results were verified within a proof-of-concept study using the OR-Pad demonstration scenario. We evaluated the fulfillment of the CAS’ requirements to act context-aware, conformity to the SDC standard series, and the effort for integrating the middleware into individual systems. The semantically unambiguous encoding of contextual information depends on the further standardization process of the SDC nomenclature. The discussion of the validity of these results proved the applicability and transferability of the middleware.
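Conceptually, the middleware lets the SRS publish interpreted context states that arbitrary CAS consume without knowing each other. The toy publish/subscribe sketch below illustrates only this loose-coupling idea in plain Python; it does not use a real SDC stack, and the topic and payload names are invented.

```python
from collections import defaultdict

class ContextBus:
    """Toy middleware: the SRS publishes situations, CAS subscribe to them;
    neither side holds a direct reference to the other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = ContextBus()
# Hypothetical CAS reaction: the OR-Pad adapts its display to the phase.
bus.subscribe("surgical_phase", lambda phase: print(f"OR-Pad adapts to {phase}"))
bus.publish("surgical_phase", "implant placement")   # emitted by the SRS
```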
Conclusion
The specified and implemented SDC-based middleware demonstrates the feasibility of loosely coupling an SRS with previously unknown CAS to realize context-aware assistance in the OR.
Purpose
Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results in medical image analysis across several applications. Yet the lack of explainability of deep neural models is considered the principal obstacle to applying these methods in clinical practice.
Methods
In this study, we propose NeuroXAI, a framework for explainable AI (XAI) with deep learning networks, to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods that provide visualization maps to help make deep learning models transparent.
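As one example of the class of methods NeuroXAI bundles, vanilla gradient saliency attributes a class score to input pixels via backpropagated gradients; this is a generic re-implementation for illustration, not NeuroXAI’s own code, and the model is a toy stand-in.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: |d score / d input|, collapsed over channels."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=1)[0]

# Toy stand-in classifier and a dummy MR slice.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 3))
mr_slice = torch.randn(1, 1, 64, 64)
saliency_map = gradient_saliency(net, mr_slice, target_class=0)
```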
Results
NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation, using the magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods were generated and compared for both applications. Another experiment demonstrated that NeuroXAI can visualize the information flow through the internal layers of a segmentation CNN.
Conclusion
Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI.