Informatik
An important aspect of achieving global climate neutrality and food security is transforming our food system. To support this goal, Germany has set a national target of a 30% share of organic farming. Looking at the transformation process from conventional to organic farming, it becomes apparent that targeted measures are needed to reach this goal. Using Design Science Research, we model and analyze the as-is farm-to-fork value chain of out-of-home meals in public catering to identify the central barriers to and drivers of organic transformation. From the insights gained in the modeling process, we derive a digital platform model that addresses the current issues. We propose a digitally supported value network instead of a hierarchical value chain to distribute co-design opportunities more equally among stakeholders. We then elaborate on the potential of the network-based platform to overcome the barriers to organic transformation. To specify the main functionalities of the digital platform architecture, we map user requirements onto the proposed to-be value network. The results further emphasize the need for a change in the current value chain perspective. We conclude by proposing to further develop existing approaches under consideration of our identified requirements and the overall sustainability goal, rather than focusing solely on individual dimensions or metrics.
Objectives: Content-based access (CBA) to medical image archives, i.e. data retrieval by means of image-based numerical features computed automatically, has the potential to improve diagnostics, research and education. In this study, the applicability of CBA methods in dentomaxillofacial radiology is evaluated.
Methods: Recent research has identified numerical features that have been applied successfully to the automatic categorization of radiographs. In our experiments, oral and maxillofacial radiographs were obtained from the day-to-day routine of a university hospital and labelled by an experienced dental radiologist regarding the technique and direction of imaging, as well as the displayed anatomy and biosystem. In total, 2000 radiographs of 71 classes with at least 10 samples per class were analysed. A combination of co-occurrence-based texture features and correlation-based similarity measures was used in leave-one-out experiments for automatic classification. The impact of automatic detection and separation of multi-field images and the automatic separability of biosystems were analysed.
Results: Automatic categorization yielded error rates of 23.20%, 7.95% and 4.40% with respect to a correct match within the first, fifth and tenth best returns, respectively. These figures improved to 23.05%, 7.00% and 4.20% when automatic decomposition of multi-field images was applied, and to 20.05%, 5.65% and 3.25% when the classifier was additionally optimized for the dentomaxillofacial imagery. The dentulous and implant systems were difficult to distinguish. Experiments on non-dental radiographs (10,000 images of 57 classes) yielded error rates of 12.6%, 5.6% and 3.6%.
Conclusion: Using the same numerical features as in medical radiology, oral and maxillofacial radiographs can be reliably indexed by global texture features for CBA and data mining.
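The pipeline described in the Methods above (co-occurrence texture features, correlation-based similarity, leave-one-out classification) can be illustrated with a short sketch. This is a minimal approximation under assumed feature choices and parameters, not the study's implementation.

```python
# Minimal sketch: co-occurrence texture features + correlation similarity
# + leave-one-out 1-NN, approximating the pipeline described above.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(img_u8):
    """Co-occurrence (GLCM) features for one grayscale uint8 image."""
    glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def loo_error_rate(features, labels):
    """Leave-one-out 1-NN with Pearson correlation as the similarity measure.
    features: (n_images, n_features) array; labels: 1-D numpy array of classes."""
    sim = np.corrcoef(features)      # pairwise correlation of feature vectors
    np.fill_diagonal(sim, -np.inf)   # exclude each query from its own neighbors
    predicted = labels[np.argmax(sim, axis=1)]
    return float(np.mean(predicted != labels))
```

A rank-k error rate, as reported above, would count a trial as correct if the true class appears among the k most similar images rather than only at the top.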
The medical automatic annotation task issued by the Cross-Language Evaluation Forum (CLEF) aims at a fair comparison of state-of-the-art algorithms for medical content-based image retrieval (CBIR). The contribution of this work is twofold. First, a logical decomposition of the CBIR task is presented, and key elements to support the relevant steps are identified: (i) implementation of algorithms for feature extraction, feature comparison, and classifier combination; (ii) visualization of extracted features and retrieval results; (iii) generic evaluation of retrieval algorithms; and (iv) optimization of the parameters for the retrieval algorithms and their combination. Data structures and tools to address these key elements are integrated into an existing framework for image retrieval in medical applications (IRMA). Second, baseline results for the CLEF annotation tasks 2005–2007 are provided by applying the IRMA framework, where global features and corresponding distance measures are combined within a nearest-neighbor approach. Using identical classifier parameters and combination weights for each year shows that the task difficulty decreases over the years. The declining rank of the baseline submission also indicates the overall advances in CBIR concepts. Furthermore, a rough comparison between participants who submitted in only one of the years becomes possible.
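To make the combination step concrete, the following sketch shows one plausible reading of combining global features via weighted distance measures in a nearest-neighbor approach. It is an illustration under assumed conventions, not the IRMA framework's code.

```python
# Sketch: weighted combination of per-feature distance matrices for
# nearest-neighbor retrieval, illustrating the combination scheme above.
import numpy as np

def combine_distances(distance_matrices, weights):
    """Weighted sum of distance matrices, each rescaled to [0, 1] first
    so that no single distance measure dominates the combination."""
    combined = np.zeros_like(distance_matrices[0], dtype=float)
    for d, w in zip(distance_matrices, weights):
        combined += w * (d - d.min()) / (d.max() - d.min() + 1e-12)
    return combined

def best_matches(combined, train_labels, k=5):
    """For each query (row), return the labels of the k nearest training images."""
    order = np.argsort(combined, axis=1)[:, :k]
    return train_labels[order]
```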
The refugee crisis has reached historic proportions, with more than 82 million people displaced. Access to healthcare is often difficult for them due to missing medical records and language barriers. This paper examines a digital medical documentation system for refugees that captures, stores, and translates records while observing international data protection standards. The contribution consists of the design of a system that manages and translates medical data across borders and integrates a prediction model for epidemics in refugee camps.
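The abstract does not specify the epidemic prediction model; as a purely hypothetical illustration of what such a component could look like, the sketch below uses a discrete-time SIR model with assumed parameters.

```python
# Hypothetical sketch: a discrete-time SIR model as one possible camp-level
# epidemic prediction component (the paper does not specify its model).
def simulate_sir(population, infected0, beta, gamma, days):
    """beta: transmission rate per day; gamma: recovery rate per day."""
    s, i, r = population - infected0, float(infected0), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Example: a camp of 10,000 people with 5 initial cases.
curve = simulate_sir(population=10_000, infected0=5, beta=0.4, gamma=0.1, days=60)
```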
The interconnection of medical devices in an operating room (OR) represents a major step in optimizing clinical processes and increasing the quality of treatment. The IEEE 11073 Service-oriented Device Connectivity (SDC) standard family constitutes the foundation for manufacturer-independent information exchange and remote control of medical devices. However, integrating new SDC-capable devices into an existing OR network poses a major challenge for medical device manufacturers, so suitable integration models are required. This work defines three possible integration models and compares them according to architectural design patterns, pursuing the use case of integrating a high-frequency (HF) surgical device to interconnect with existing SDC-capable devices. The model that focuses on high expandability and low coupling was successfully applied to interconnect an HF surgical device with an OR light in the research OR of Reutlingen University. The results indicate transferability to other integration scenarios and are intended to further promote manufacturer-independent integrated ORs.
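As a schematic illustration of the low-coupling property that distinguished the successful integration model, the sketch below uses a plain publish/subscribe mediator; it is not SDC code, and all names are invented for illustration.

```python
# Schematic sketch (plain Python, not an IEEE 11073 SDC stack): devices
# communicate through a mediator instead of referencing each other directly,
# so new devices can join without changes to existing ones (low coupling).
class ORNetwork:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers.get(topic, []):
            handler(payload)

network = ORNetwork()
# The OR light reacts to HF-device activation without knowing its concrete type.
network.subscribe("hf/activation", lambda msg: print("OR light dimmed:", msg))
network.publish("hf/activation", {"active": True})
```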
Background: Digitalization in disaster medicine holds significant potential to accelerate rescue operations and ultimately save lives. Mass casualty incidents demand rapid and accurate information management to coordinate effective responses. Currently, first responders manually record triage results on patient cards, and brief information is communicated to the command post via radio communication. Although this process is widely used in practice, it involves several time-consuming and error-prone tasks. To address these issues, we designed, implemented, and evaluated an app-based mobile triage system. This system allows users to document responder details, triage categories, injury patterns, GPS locations, and other important information, which can then be transmitted automatically to the incident commanders.
Objective: This study aims to design and evaluate an app-based mobile system as a triage and coordination tool for emergency and disaster medicine, comparing its effectiveness with the conventional paper-based system.
Methods: A total of 38 emergency medicine personnel participated in a within-subject experimental study, completing 2 triage sessions with 30 patient cards each: one session using the app-based mobile system and the other using the paper-based tool. The accuracy of the triages and the time taken for each session were measured. Additionally, we administered the User Experience Questionnaire along with other items to assess participants' subjective ratings of the 2 triage tools.
Results: Our 2 (triage tool) × 2 (tool order) mixed multivariate analysis of variance revealed a significant main effect for the triage tool (P<.001). Post hoc analyses indicated that participants were significantly faster (P<.001) and more accurate (P=.005) in assigning patients to the correct triage category when using the app-based mobile system compared with the paper-based tool. Additionally, analyses showed significantly better subjective ratings for the app-based mobile system compared with the paper-based tool, in terms of both school grading (P<.001) and across all 6 scales of the User Experience Questionnaire (all P<.001). Of the 38 participants, 36 (95%) preferred the app-based mobile system. There was no significant main effect for tool order (P=.24) or session order (P=.06) in our model.
Conclusions: Our findings demonstrate that the app-based mobile system not only matches the performance of the conventional paper-based tool but may even surpass it in terms of efficiency and usability. This advancement could further enhance the potential of digitalization to optimize processes in disaster medicine, ultimately leading to the possibility of saving more lives.
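As a rough, simplified analogue of the analysis reported above, the sketch below runs a univariate 2 (tool) x 2 (order) mixed ANOVA on one dependent variable; the study itself used a multivariate model, and the file and column names here are assumptions about a long-format dataset.

```python
# Simplified univariate analogue of the 2x2 mixed design reported above.
# The CSV file and its column names are illustrative assumptions.
import pandas as pd
import pingouin as pg

df = pd.read_csv("triage_sessions.csv")
# Expected long format: one row per participant x tool, with columns
# participant, tool ("app" / "paper"), tool_order ("app_first" / "paper_first"),
# and the dependent variable, e.g. time (seconds per session).

aov = pg.mixed_anova(data=df, dv="time", within="tool",
                     subject="participant", between="tool_order")
print(aov)

# Post hoc within-subject comparison of the two tools:
post = pg.pairwise_tests(data=df, dv="time", within="tool", subject="participant")
print(post)
```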
The manual deployment of applications distributed across the cloud, fog, and edge is error-prone and complex. TOSCA is a standard for modeling the deployment of cloud applications in a vendor-neutral and technology-independent manner that is also suitable for the fog and edge continuum. However, there exist various TOSCA orchestrators with different functionalities. Thus, selecting an appropriate TOSCA orchestrator requires technical expertise since all the available orchestrators must be analyzed regarding technical, functional, legal, and organizational requirements. In this paper, we tackle this issue and present a systematic technology review of TOSCA orchestrators. Our goal is to support project managers, developers, and researchers in selecting a suitable TOSCA orchestrator. For this, we select actively maintained general-purpose open-source TOSCA orchestrators. Moreover, we introduce the TOSCA Orchestrator Classification Framework and present a selection support system.
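As a hypothetical illustration of how a selection support system built on such a classification framework might operate, the sketch below scores candidate orchestrators against weighted requirements; the criteria, names, and weights are invented and are not the paper's framework.

```python
# Hypothetical sketch: a weighted decision matrix for orchestrator selection.
def rank_orchestrators(scores, weights):
    """scores: {orchestrator: {criterion: 0..1}}; weights: {criterion: importance}."""
    total = sum(weights.values())
    ranking = {
        name: sum(weights[c] * crit.get(c, 0.0) for c in weights) / total
        for name, crit in scores.items()
    }
    return sorted(ranking.items(), key=lambda kv: kv[1], reverse=True)

candidates = {
    "orchestrator_a": {"tosca_coverage": 0.9, "maintenance": 0.7, "docs": 0.5},
    "orchestrator_b": {"tosca_coverage": 0.6, "maintenance": 0.9, "docs": 0.8},
}
weights = {"tosca_coverage": 3, "maintenance": 2, "docs": 1}
print(rank_orchestrators(candidates, weights))
```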
RESTful APIs based on HTTP are one of the most important ways to make data and functionality available to applications and software services. However, the quality of the API design strongly impacts API understandability and usability, and many rules have been specified for this. While we have evidence for the effectiveness of many design rules, it is still difficult for practitioners to identify rule violations in their design. We therefore present RESTRuler, a Java-based open-source tool that uses static analysis to detect design rule violations in OpenAPI descriptions. The current prototype supports 14 rules that go beyond simple syntactic checks and partly rely on natural language processing. The modular architecture also makes it easy to implement new rules. To evaluate RESTRuler, we conducted a benchmark with over 2,300 public OpenAPI descriptions and asked 7 API experts to construct 111 complex rule violations. For robustness, RESTRuler successfully analyzed 99% of the real-world OpenAPI definitions used, with some failing due to excessive size. For performance efficiency, the tool performed well for the majority of files and could analyze 84% in less than 23 seconds with low CPU and RAM usage. Lastly, for effectiveness, RESTRuler achieved a precision of 91% (ranging from 60% to 100% per rule) and a recall of 68% (ranging from 46% to 100%). Based on these variations between rule implementations, we identified several opportunities for improvement. While RESTRuler is still a research prototype, the evaluation suggests that the tool is quite robust to errors, resource-efficient for most APIs, and shows good precision and decent recall. Practitioners can use it to improve the quality of their API design.
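To give a flavor of the kind of static check such a tool performs, the toy sketch below flags path segments in an OpenAPI description that are not lowercase and hyphenated; it is not RESTRuler's code, and the rule shown is just one common API design rule.

```python
# Toy sketch of a single static design-rule check on an OpenAPI description:
# path segments should be lowercase and hyphen-separated. Not RESTRuler code.
import re
import yaml  # PyYAML

SEGMENT = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def check_lowercase_paths(openapi_file):
    with open(openapi_file) as f:
        spec = yaml.safe_load(f)
    violations = []
    for path in spec.get("paths", {}):
        for seg in path.strip("/").split("/"):
            # Skip path parameters such as "{id}".
            if seg and not seg.startswith("{") and not SEGMENT.match(seg):
                violations.append((path, seg))
    return violations

# e.g. "/Users/{id}/orderHistory" would be flagged for "Users" and "orderHistory".
```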
Purpose:
Competency models are widespread in entrepreneurship and help develop educational offerings. Although existing models cater to specific sub-disciplines, the field of Industry 4.0 startups still lacks a tailored competency model. This study therefore aims to bridge this gap by developing a specific competency model that addresses the unique challenges of Industry 4.0 entrepreneurship.
Design/methodology/approach:
The research approach involved a content analysis and an interview study to compile and categorize the competencies necessary to succeed in the Industry 4.0 domain. The developed model was validated in several ways, using the Content Validity Index (sketched after this abstract) and inter-rater reliability measures incorporating expert feedback.
Findings:
The described multi-methodological approach resulted in the proposed “CompEntre 4.0” model, which contains 23 crucial competencies for Industry 4.0 startups. The results of this model validation demonstrate that it meets the necessary threshold values, establishing its reliability and potential for future use and further improvement.
Practical implications:
By providing a structured framework tailored to the specific demands of this domain, the competency model has the potential to guide and empower entrepreneurs, improving their prospects for success in the rapidly evolving landscape of Industry 4.0.
Originality/value:
While competency models exist for the entrepreneurship field in general and for several of its sub-disciplines, no competency model has yet addressed the numerous specifics of Industry 4.0 entrepreneurship.
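The Content Validity Index mentioned in the methodology can be sketched in a few lines. The sketch assumes the common 4-point relevance scale on which ratings of 3 or 4 count as relevant; the paper's exact scale and thresholds may differ.

```python
# Minimal sketch of the Content Validity Index (I-CVI and S-CVI/Ave),
# assuming a 4-point relevance scale where ratings >= 3 count as relevant.
def item_cvi(ratings):
    """I-CVI: share of experts rating one item as relevant (3 or 4)."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def scale_cvi_ave(all_ratings):
    """S-CVI/Ave: mean I-CVI over all items (here: competencies)."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

# Example: three competencies, each rated by five experts on a 1-4 scale.
ratings = [[4, 3, 4, 4, 3], [3, 2, 4, 3, 4], [4, 4, 4, 3, 4]]
print([round(item_cvi(r), 2) for r in ratings])  # per-item I-CVI
print(round(scale_cvi_ave(ratings), 2))          # overall S-CVI/Ave
```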
Unveiling hurdles in software engineering education: the role of learning management systems
(2024)
Learning management systems (LMSs) are established tools in higher education, especially in the field of software engineering (SE). The onset of the COVID-19 pandemic further amplified the use of these systems, necessitating their integration into educational curricula for both lecturers and students. However, adopting LMSs within SE education has presented distinctive challenges that impede their seamless incorporation into courses. This paper scrutinizes the challenges and requirements encountered by professors, lecturers, and students in SE education when using LMSs. We conducted an empirical study that included (i) a survey of 47 professors/lecturers and 133 students, (ii) an analysis of the ensuing data, and (iii) 18 additional interviews with professors and lecturers to delve into nuanced variations in viewpoints. The findings reveal that the challenges and requirements pertaining to LMSs depend strongly on the scope and size of the respective courses. Nevertheless, many participants agree on numerous challenges and on requirements for improving certain LMS features to support their use in SE education. The findings are valuable for advancing research and development in the field of LMSs and provide guidance for lecturers in SE education.