Automatic segmentation is essential for brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over recent years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic detection of glioma boundaries in pre-operative MRI. It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788, and 0.8803 and Hausdorff distances of 5.23, 13.54, and 12.05 for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022 and HD95 values of 2.66, 1.72, and 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively.
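As an illustration of the ensembling idea described above, the following sketch averages per-class probability maps from several segmentation models and takes the voxel-wise argmax; the array shapes, model stand-ins, and the use of plain averaging are assumptions for illustration, not the exact fusion strategy of the paper.

```python
# Minimal sketch of label-map ensembling by probability averaging.
# Each model is assumed to output softmax probabilities of shape
# (num_classes, D, H, W) for one mpMRI case.
import numpy as np

def ensemble_segmentation(prob_maps: list[np.ndarray]) -> np.ndarray:
    """Average class probabilities across models and take the argmax label."""
    mean_probs = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return np.argmax(mean_probs, axis=0).astype(np.uint8)

# Example with random stand-ins for DeepSeg, nnU-Net, and DeepSCAN outputs.
rng = np.random.default_rng(0)
outputs = [rng.random((4, 8, 8, 8)) for _ in range(3)]   # 4 classes, tiny volume
label_map = ensemble_segmentation(outputs)
print(label_map.shape)  # (8, 8, 8)
```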
This project aims to evaluate existing big data infrastructures for their applicability in the operating room to support medical staff with context-sensitive systems. Requirements for the system design were generated. The project compares different data mining technologies, interfaces, and software system infrastructures with a focus on their usefulness in the peri-operative setting. The lambda architecture was chosen for the proposed system design, which will provide data for both postoperative analysis and real-time support during surgery.
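The batch/speed split that characterizes the lambda architecture can be sketched as follows; the event structure and class names are hypothetical and only illustrate how one intra-operative event stream could serve both postoperative analysis and real-time support.

```python
# Illustrative sketch (not the project's implementation) of the lambda-architecture
# idea: the same event stream feeds a batch layer for post-operative analysis and
# a speed layer for real-time support in the operating room.
from collections import deque

class LambdaStyleStore:
    def __init__(self):
        self.batch_log = []                      # batch layer: complete, append-only history
        self.realtime_view = deque(maxlen=100)   # speed layer: low-latency view of recent events

    def ingest(self, event: dict) -> None:
        self.batch_log.append(event)
        self.realtime_view.append(event)

store = LambdaStyleStore()
store.ingest({"source": "anesthesia_monitor", "hr": 72})   # hypothetical event
print(len(store.batch_log), len(store.realtime_view))
```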
Introduction: Even though there is a standard procedure for CI surgery, surgical steps often differ individually, especially in pediatric surgery, due to anatomical variations, malformations, or unforeseen events. Therefore, every surgical report has to be written individually, which takes time and relies on the surgeon's correct memory. A standardized recording of intraoperative data, with subsequent storage and text processing, would therefore be desirable and would provide the basis for subsequent data processing, e.g. in the context of research or quality assurance.
Method: In cooperation with Reutlingen University, we conducted a workflow analysis of the prototype of a semi-automatic checklist tool. Based on checklists automatically generated from BPMN models (a simplified sketch of this generation step follows this abstract), a prototype user interface was developed for an Android tablet. Functions such as uploading photos and files, manual user entries, the interception of foreseeable deviations from the normal course of the operation, and the automatic creation of surgical documentation were implemented. The system was tested in a remote usability test on a petrous bone model.
Result: The user interface allows simple, intuitive handling that fits well into the intraoperative setting. Clinical data as well as surgical steps could be recorded individually and saved via DICOM. An automatic surgery report could be created and saved.
Summary: The use of a dynamic checklist tool facilitates the capture, storage and processing of surgical data. Further applications in clinical practice are pending.
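A minimal sketch of the checklist generation mentioned in the Method section, assuming checklist items correspond to the user tasks of a BPMN 2.0 model; the file name and the restriction to userTask elements are illustrative assumptions.

```python
# Hedged sketch: extract the user tasks of a BPMN 2.0 process model and turn them
# into checklist items (in document order; a real tool would follow sequence flows).
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def checklist_from_bpmn(path: str) -> list[dict]:
    tree = ET.parse(path)
    items = []
    for task in tree.getroot().iter(f"{{{BPMN_NS}}}userTask"):
        items.append({"id": task.get("id"), "label": task.get("name"), "done": False})
    return items

# items = checklist_from_bpmn("ci_surgery.bpmn")   # hypothetical model file
```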
Physicians in interventional radiology are exposed to high physical stress. To avoid negative long-term effects resulting from unergonomic working conditions, we demonstrated the feasibility of a system that gives feedback about unergonomic situations arising during the intervention, based on the Azure Kinect camera. The overall feasibility of the approach could be shown.
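One way such ergonomic feedback could be derived from skeleton data, as provided for example by the Azure Kinect body tracking, is a simple trunk-flexion check; the joint coordinates, coordinate convention, and the 20-degree threshold below are illustrative assumptions, not the system's actual rules.

```python
# Minimal sketch of an ergonomics check on tracked skeleton joints:
# flag trunk flexion beyond a threshold.
import numpy as np

def trunk_flexion_deg(pelvis: np.ndarray, neck: np.ndarray) -> float:
    """Angle between the pelvis->neck vector and the vertical axis, in degrees."""
    trunk = neck - pelvis
    vertical = np.array([0.0, 1.0, 0.0])   # assumed up-axis
    cos_angle = np.dot(trunk, vertical) / (np.linalg.norm(trunk) * np.linalg.norm(vertical))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Placeholder joint positions in meters (not real sensor output).
pelvis, neck = np.array([0.0, 0.0, 2.0]), np.array([0.05, 0.5, 2.1])
angle = trunk_flexion_deg(pelvis, neck)
print(f"trunk flexion: {angle:.1f} deg", "-> warn" if angle > 20.0 else "-> ok")
```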
Motivation: The aim of this project is the automatic classification of total hip endoprosthesis (THEP) components in 2D X-ray images. Revision surgeries of total hip arthroplasty (THA) are common procedures in orthopedics and trauma surgery. Currently, around 400,000 procedures per year are performed in the United States (US) alone. To achieve the best possible result, preoperative planning is crucial, especially if parts of the current THEP system are to be retained.
Methods: First, a ground truth based on 76 X-ray images was created. We then used an image processing pipeline consisting of a segmentation step performed by a convolutional neural network and a classification step performed by a support vector machine (SVM); a simplified sketch of the classification step follows this abstract. In total, 11 classes (5 cups and 6 stems) are to be classified.
Results: The generated ground truth was of good quality even though the initial segmentation was performed by technicians. The best segmentation results were achieved using a U-Net architecture. For classification, SVMs performed considerably better than additional neural networks.
Conclusions: The overall image processing pipeline performed well, but the ground truth needs to be extended to include a broader variability of implant types and more examples per training class.
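The classification step referenced in the Methods section could look roughly like the following sketch, in which an SVM is trained on simple shape features extracted from segmentation masks; the features, data, and class labels are placeholders rather than the study's actual setup.

```python
# Sketch of an SVM classifying implant components from mask-derived shape features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def shape_features(mask: np.ndarray) -> np.ndarray:
    """Very simple shape descriptors of a binary implant mask."""
    ys, xs = np.nonzero(mask)
    area = float(mask.sum())
    height = float(ys.max() - ys.min() + 1)
    width = float(xs.max() - xs.min() + 1)
    return np.array([area, height / width, area / (height * width)])

rng = np.random.default_rng(42)
masks = [(rng.random((64, 64)) > 0.5).astype(np.uint8) for _ in range(20)]   # placeholder masks
X = np.array([shape_features(m) for m in masks])
y = rng.integers(0, 11, size=20)              # 11 hypothetical component classes
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.predict(X[:3]))
```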
Glioblastomas are the most aggressive, fast-growing primary brain tumors and originate in the glial cells of the brain. Accurate identification of the malignant brain tumor and its sub-regions is still one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation Challenge (BraTS) has been a popular benchmark for automatic glioblastoma segmentation algorithms since its inception. This year, the BraTS 2021 challenge provides the largest multi-parametric MRI (mpMRI) dataset, comprising 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, namely DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that the method can be readily applied clinically, thereby aiding brain cancer prognosis, therapy planning, and therapy response monitoring. A docker image for reproducing our segmentation results is available online at https://hub.docker.com/r/razeineldin/deepseg21.
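For reference, the overlap metric reported in both BraTS contributions, the Dice similarity coefficient, can be computed for a single binary sub-region mask as in the following sketch; the toy volumes are placeholders.

```python
# Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

rng = np.random.default_rng(1)
pred = rng.random((16, 16, 16)) > 0.5   # placeholder prediction
ref = rng.random((16, 16, 16)) > 0.5    # placeholder reference
print(f"Dice: {dice_score(pred, ref):.4f}")
```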
The provision of clinical information in the operating room is an important aspect of supporting the surgical team. Robot-assisted esophagectomy is a particularly complex procedure that offers potential for workflow-based support. We present first results of the development of a checklist tool, together with the underlying modeling of the surgical workflow and the surgeons' information needs. The checklist tool displays the steps to be performed in chronological order and provides additional, context-adapted information. Automatic documentation of the start and end times of individual surgical phases and steps is intended to enable future process analyses of the operation.
A hybrid deep registration of MR scans to interventional ultrasound for neurosurgical guidance
(2021)
Despite the recent advances in image-guided neurosurgery, reliable and accurate estimation of brain shift remains one of the key challenges. In this paper, we propose an automated multimodal deformable registration method using hybrid learning-based and classical approaches to improve neurosurgical procedures. Initially, the moving and fixed images are aligned using a classical affine transformation (MINC toolkit), and the result is then provided to the convolutional neural network, which predicts the deformation field using backpropagation. Subsequently, the moving image is transformed using the resulting deformation into a moved image. Our model was evaluated on two publicly available datasets: the retrospective evaluation of cerebral tumors (RESECT) and brain images of tumors for evaluation (BITE). The mean target registration errors were reduced from 5.35 ± 4.29 to 0.99 ± 0.22 mm on RESECT and from 4.18 ± 1.91 to 1.68 ± 0.65 mm on BITE. Experimental results showed that our method improved the state of the art in terms of both accuracy and runtime (170 ms on average). Hence, the proposed method provides a fast runtime for 3D MRI to intra-operative US registration in a GPU-based implementation, which shows promise for its applicability in assisting neurosurgical procedures by compensating for brain shift.
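A minimal sketch of the final warping step, assuming a dense per-voxel displacement field and trilinear interpolation; the synthetic volume and field below only illustrate the mechanics, not the authors' implementation.

```python
# Apply a dense deformation field (one displacement vector per voxel, in voxel
# units) to a moving volume using trilinear interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """displacement has shape (3, D, H, W): per-voxel offsets along each axis."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in moving.shape], indexing="ij"))
    sample_coords = grid + displacement
    return map_coordinates(moving, sample_coords, order=1, mode="nearest")

moving = np.random.rand(8, 8, 8).astype(np.float32)     # placeholder volume
field = np.zeros((3, 8, 8, 8), dtype=np.float32)
field[0] += 0.5                                          # shift half a voxel along the first axis
moved = warp(moving, field)
print(moved.shape)
```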
In networked operating room environments, there is an emerging trend towards standardized, non-proprietary communication protocols which make it possible to build new integration solutions and flexible human-machine interaction concepts. The most prominent endeavor is the IEEE 11073 SDC protocol. For some use cases, it would be helpful if not just medical devices could be controlled via SDC, but also building automation systems such as lights, shutters, air conditioning, etc. For those systems, the KNX protocol is widely used. We built an SDC-to-KNX gateway which allows the SDC protocol to be used for sending commands to connected KNX devices. The first prototype system was successfully implemented in the demonstration operating room at Reutlingen University. This is a first step toward the integration of a broader variety of KNX devices.
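Conceptually, the core of such a gateway is a mapping from SDC-controllable entities to KNX group addresses; the handles, addresses, and transport below are hypothetical placeholders, and a real implementation would rely on an SDC stack and a KNX interface library.

```python
# Conceptual sketch of the gateway's mapping step (all names are hypothetical):
# an incoming SDC set-value request is translated to a KNX group address write.
SDC_TO_KNX = {
    "or.light.main": "1/0/1",      # hypothetical SDC handle -> KNX group address
    "or.shutters":   "1/1/3",
}

def forward_to_knx(sdc_handle: str, value: int, send) -> None:
    group_address = SDC_TO_KNX[sdc_handle]
    send(group_address, value)     # 'send' wraps the actual KNX bus access

forward_to_knx("or.light.main", 1, send=lambda ga, v: print(f"KNX write {ga} = {v}"))
```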
Documentation of clinical processes, especially in the perioperative area, is a basic requirement for quality of service. Nonetheless, documentation is a burden for the medical staff since it distracts from the clinical core process. An intuitive and user-friendly documentation system could increase documentation quality and reduce documentation workload. The optimal system would know what happened, and the person documenting the step would only need a single "confirm" button. In many cases, such a linear flow of activities is given as long as only one profession (e.g. anesthesiology, scrub nurse) is considered, but even in such cases there might be deviations from the linear process flow and further interaction is required.
The increasing heterogeneity of students at German Universities of Applied Sciences and the growing importance of digitization call for a rethinking of teaching and learning within higher education. In the coming years, changing the learning ecosystem by developing and reflecting upon new teaching and learning techniques using methods of digitalization will be both highly relevant and very challenging. The following article introduces two different learning scenarios, which exemplify the implementation of new educational models that allow discontinuity of time and place, technology and process in teaching and learning. Within a blended learning approach, the first learning scenario aims at adapting and individualizing the knowledge transfer in the course Foundations of Computer Science by providing knowledge individually and situation-specifically. The second learning scenario proposes a web-based tool to facilitate digital learning environments, and thus digital learning communities and the possibility of computer-supported learning. The overall aim of both learning scenarios is to enhance learning for diverse groups by providing a different smart learning ecosystem, stepping away from a teacher-centered toward a student-centered approach. Both learning scenarios exemplify the educational vision of Reutlingen University: its development into an interactive university.
To support the surgeon, an information display close to the patient is being developed that can provide context-relevant information according to the current situation. For this purpose, a situation recognition is to be designed that can be transferred to different intraoperative processes. The goal of the adaptive situation recognition is to detect specific situations from intraoperative information coming from different data sources in the operating room. During data collection and analysis, use cases for the situation recognition were defined and surgical process models representing intraoperative events were created. Based on this information, a concept was designed that initially focuses on recognizing abstract, generalized phases independent of the procedure and can then be refined step by step towards more granular process steps. This flexibility is intended to make the concept transferable to different intraoperative processes and thus to support the surgeon with context-relevant information in a targeted way. The concept will be developed further in future steps.
Workflow-driven support systems in the peri-operative area have the potential to optimize clinical processes and to enable new situation-adaptive support systems. We started to develop a workflow management system supporting all involved actors in the operating theatre, with the goal of synchronizing the tasks of the different stakeholders by giving relevant information to the right team members. Using the OMG standards BPMN, CMMN and DMN gives us the opportunity to bring established methods from other industries into the medical field. The system shows each addressed actor their information in the right place at the right time to make sure every member can execute their task in time and ensure a smooth workflow; the system has the overall view of all tasks. Accordingly, the solution consists of a workflow management system including the Camunda BPM workflow engine to run the models, a middleware to connect different systems to the workflow engine, and graphical user interfaces to show necessary information and to interact with the system. The complete pipeline is implemented with a RESTful web service. The system is designed to integrate other systems, such as the hospital information system (HIS), via the RESTful web service easily and without loss of data. The first prototype is implemented and will be expanded.
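Starting a modeled intervention workflow through the Camunda BPM REST interface could look like the following sketch; the host, process key, and variables are illustrative assumptions rather than the project's actual configuration.

```python
# Hedged sketch: start a process instance via the Camunda BPM REST API.
import requests

CAMUNDA = "http://localhost:8080/engine-rest"   # assumed engine endpoint

def start_intervention(process_key: str, case_id: str) -> str:
    resp = requests.post(
        f"{CAMUNDA}/process-definition/key/{process_key}/start",
        json={"variables": {"caseId": {"value": case_id, "type": "String"}}},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["id"]       # id of the new process instance

# instance_id = start_intervention("perioperative_workflow", "case-0815")  # hypothetical key
```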
In orthopedics, robotic systems have been used successfully in a supporting role for several years. This approach requires the prior creation of a digital model based on medical image datasets. The creation and review of these models is to take place in a browser-based client-server application, which requires the display of two-dimensional and three-dimensional datasets. This paper is based on the development of an approach for the interactive, browser-based three-dimensional rendering of medical planning data. The application is a proof of concept for whether the existing desktop applications for displaying planning data can be replaced. The application was implemented using the AMI.js framework. It fulfills all defined requirements and can thus replace the current desktop applications.
This study estimates the reproducibility of finding palpation points of three different anatomical landmarks in the human body (the xiphoid process and the two hip crests) to support a navigated ultrasound application. In six test subjects with different body mass indices, the three palpation points were each located five times by two examiners. The deviation from the target position was calculated and correlated with the fat thickness above each palpation point. The reproducibility of the measurements had a mean error of approximately 13.5 ± 4 mm, which seems sufficient for the intended application field.
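The described analysis amounts to summarizing the palpation error and correlating it with fat thickness; the numbers below are made up solely to show the computation.

```python
# Mean/standard deviation of the palpation-point error and its correlation
# with the fat thickness above the landmark (placeholder values).
import numpy as np
from scipy.stats import pearsonr

errors_mm = np.array([11.2, 14.8, 9.7, 17.3, 12.9, 15.1])         # placeholder values
fat_thickness_mm = np.array([8.0, 15.5, 6.2, 21.0, 12.3, 16.8])   # placeholder values

print(f"error: {errors_mm.mean():.1f} ± {errors_mm.std(ddof=1):.1f} mm")
r, p = pearsonr(errors_mm, fat_thickness_mm)
print(f"correlation with fat thickness: r={r:.2f}, p={p:.3f}")
```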
Radiofrequency ablation is an ablation technique to treat tumors with focused heat. Computed tomography, ultrasound, and magnetic resonance imaging (MRI) are imaging modalities that can be used for image-guided procedures. MRI offers several advantages over the other imaging modalities, such as radiation-free fluoroscopic imaging, temperature mapping, high soft-tissue contrast, and free selection of imaging planes. This work addresses the use of 3D controllers for controlling interventional, fluoroscopic MR sequences in the scenario of MR-guided radiofrequency ablation of hepatic malignancies. During this procedure, the interventionalist can monitor the targeting of the tumor with near-real-time fluoroscopic sequences. In general, adjustments of the imaging planes are necessary during tumor targeting, which is performed by an assistant in the control room. Therefore, communication between the interventionalist in the scanner room and the assistant in the control room is essential. However, verbal communication is impaired by the loud scanning noise. Alternatively, non-verbal communication between the two persons is possible, but it is limited to a few gestures and susceptible to misunderstandings. This work analyzes different 3D controllers to enable the interventionalist to control interventional MR sequences directly during MR-guided procedures. The Leap Motion, Wii Remote, SpaceNavigator, Phantom Omni, and a foot switch were selected. For testing purposes, a simulation was built in C++ with VTK to mimic the real scenario. Previous results showed that the Leap Motion is not suitable for the application, while the Wii Remote and the foot switch are possible input devices. The final evaluation showed a general time reduction with the use of 3D controllers; the best result of 34 seconds was reached with the Wii Remote. Handheld input devices like the Wii Remote have further potential to be integrated into the real environment to reduce intervention time.
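Independent of the specific controller, the mapping from a device input to an imaging-plane adjustment can be sketched as follows; the plane representation (origin plus normal) and the update amounts are illustrative assumptions, not the evaluated software.

```python
# Map a 3D-controller input to an update of the fluoroscopic imaging plane,
# here represented by an origin point and a unit normal.
import numpy as np

def update_plane(origin, normal, translate_mm, rotate_deg, axis=(0.0, 0.0, 1.0)):
    """Shift the plane along its normal and rotate the normal about a given axis."""
    origin = origin + translate_mm * normal
    axis = np.asarray(axis) / np.linalg.norm(axis)
    theta = np.radians(rotate_deg)
    # Rodrigues' rotation formula for the new normal
    normal = (normal * np.cos(theta)
              + np.cross(axis, normal) * np.sin(theta)
              + axis * np.dot(axis, normal) * (1 - np.cos(theta)))
    return origin, normal / np.linalg.norm(normal)

o, n = update_plane(np.zeros(3), np.array([1.0, 0.0, 0.0]), translate_mm=2.0, rotate_deg=5.0)
print(o, n)
```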
Minimally invasive surgery (MIS) is continuously evolving through the use of medical robots such as the da Vinci system from Intuitive Surgical. This allows a better or equivalent operation to be achieved with significantly lower physical strain on the surgeon. However, new problems arise, such as collisions between robot arms and the time required to set up a suitable robot configuration. Efficient preparation and planning of the interventions is therefore necessary. This work presents an approach for improved planning using augmented reality (AR) and robotics simulation software. The robotics simulation is used to compute a robot configuration given the port positions. Augmented reality is used to visualize the computed poses in the real environment and thus transfer them more easily to the operating room.
The segmentation and tracking of minimally invasive, robot-guided instruments is an essential component of various computer-assisted interventions. However, in minimally invasive surgery, which is the field of application for the approach described here, reflections, shadows, or visual occlusions by smoke and organs frequently cause difficulties and complicate the segmentation and tracking of the instruments.
This contribution presents a deep learning approach for markerless tracking of minimally invasive instruments and evaluates it on both simulated and real data. A simulated and a real dataset with ground truth labels for the binary segmentation of instrument and background are created. For the simulated dataset, images are composed of a simulated instrument and a real background; for the real dataset, images are composed of a real instrument and background. Overall, a pixel accuracy of 94.70 percent is achieved on the simulated data and 87.30 percent on the real data.
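The reported metric, pixel accuracy of the binary instrument-versus-background segmentation, can be computed as in the following sketch; the masks are random placeholders.

```python
# Pixel accuracy of a binary segmentation against its ground truth mask.
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    return float((pred.astype(bool) == gt.astype(bool)).mean())

rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5   # placeholder prediction
gt = rng.random((256, 256)) > 0.5     # placeholder ground truth
print(f"pixel accuracy: {pixel_accuracy(pred, gt) * 100:.2f} %")
```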
Clinical reading centers provide expertise for consistent, centralized analysis of medical data gathered in a distributed context. Accordingly, appropriate software solutions are required for the involved communication and data management processes. In this work, an analysis of general requirements and essential architectural and software design considerations for reading center information systems is provided. The identified patterns have been applied to the implementation of the reading center platform which is currently operated at the Center of Ophthalmology of the University Hospital of Tübingen.
Information systems that support the workflow in the clinical area are currently limited to organizational processes. This work shows a first approach to an information system supporting all actors in the perioperative area. The first prototype and proof of concept was a task manager giving all actors information about their own tasks and the tasks of all other actors during an intervention. Based on this initial task manager, we implemented an information system built on a workflow engine that controls all processes and all information necessary for the intervention. A second part was the development of a perioperative process visualization, which was created with a user-centered approach jointly with clinicians and OR staff.
An operating room is a stressful work environment. Nevertheless, all involved persons have to work safely, as there is no room for mistakes. To ensure a high level of concentration and seamless interaction, all involved persons have to know their own tasks and the tasks of their colleagues; the entire team must work synchronously at all times. To optimize the overall workflow, a task manager supporting the team was developed. In parallel, a common conceptual design of a business process visualization was developed, which makes all relevant information accessible in real time during a surgery. In this context, an overview of all processes in the operating room was created and different concepts for the graphical representation of these user-dependent processes were developed. This paper describes the concept of the task manager as well as the general concept in the field of surgery.
Information technology systems that support the workflow in the clinical area are currently limited to organizational processes. This work presents a first approach to bringing such a system into the perioperative area. For this purpose, a workflow engine was linked to a perioperative process visualization. The system was implemented following the model-view-controller principle: the workflow engine serves as the "controller", a process model with the required clinical data as the "model", and the "view" was realized as a decoupled application based on web technologies. Three visualizations, the workflow engine, and the connection of both via a database interface were successfully implemented. The three visualizations comprise a view for the OR coordinator, a view for the circulating nurse, and an overview of a single operation.
Model-guided Therapy and Surgical Workflow Systems are two interrelated research fields which have been developed separately in recent years. To make full use of both technologies, it is necessary to integrate them and connect them to Hospital Information Systems. We propose a framework for the integration of Model-guided Therapy into Hospital Information Systems based on the Electronic Medical Record, and a task-based Workflow Management System which is suitable for clinical end users. Two prototypes are presented: one based on the Business Process Modeling Language and one based on a Scrum board. From the experience with these prototypes, we developed a novel personalized visualization system for Surgical Workflows and Model-guided Therapy. Key challenges for further development are automated situation detection and a common communication infrastructure.
An operating room is a stressful work environment. Nevertheless, all involved persons have to work safely, as there is no room for making mistakes. To ensure a high level of concentration and seamless interaction, all involved persons have to know their own tasks and the tasks of their colleagues; the entire team must work synchronously at all times. However, the operating room (OR) is a noisy environment and the actors have to keep their focus on their work. To optimize the overall workflow, a task manager supporting the team was developed. Each actor is equipped with a client terminal showing a summary of their own tasks, while a big screen displays the tasks of all actors. The architecture is a distributed system based on a communication framework that supports the interaction of all clients with the task manager. A prototype of the task manager and several clients have been developed and implemented. The system represents a proof of concept for further development. This paper describes the concept of the task manager.
Multi-dimensional patient data, such as time-varying volume data, data from different imaging modalities, surface segmentations, etc., are of growing importance in the clinical routine. For many use cases, it is of major importance to replicate a certain visualization of a dataset created on one machine on a different computer using different software tools. Until now, there has been no standardized methodology for this consistent presentation. We propose an extension of Digital Imaging and Communications in Medicine (DICOM) called "Multi-dimensional Presentation State" and outline the scope and first results of the standardization process.