In the context of digital transformation, a data-driven organizational culture has been recognized as an important factor for firms' data analytics capabilities, innovativeness, and competitive advantage. However, the current literature on data-driven culture (DDC) is fragmented, lacking both a synthesis of findings and a theoretical foundation. The aim of this work was therefore to develop a comprehensive framework for understanding DDC and the mechanisms that can be used to embed such a culture in organizations, as well as to structure prior dispersed findings on the topic. Grounded in organizational culture theory, we employed a Design Science Research (DSR) approach using a systematic literature review and expert interviews to build and evaluate a transformation-oriented framework. This research contributes to knowledge by synthesizing previously dispersed findings in a holistic framework and by providing a conceptual framework to guide the transformation towards a DDC.
The performance and scalability of modern data-intensive systems are limited by massive data movement of growing datasets across the whole memory hierarchy to the CPUs. Such traditional processor-centric DBMS architectures are bandwidth- and latency-bound. Processing-in-Memory (PIM) designs seek to overcome these limitations by integrating memory and processing functionality on the same chip. PIM targets near- or in-memory data processing, leveraging the greater in-situ parallelism and bandwidth.
In this paper, we introduce pimDB and provide an initial comparison of processor-centric and PIM-DBMS approaches under different aspects, such as scalability and parallelism, cache-awareness, or PIM-specific compute/bandwidth tradeoffs. The evaluation is performed end-to-end on a real PIM hardware system from UPMEM.
Software development teams face stress caused by deadlines, staff turnover, or individual differences in commitment, expertise, and time zones. While students are typically taught the theory of software project management, their exposure to such stress factors is usually limited. However, preparing students for the stress they will endure once they work in project teams is important for their own sake, as well as for team performance in the face of stress. Team performance has been linked to the diversity of software development teams, but little is known about how diversity influences the stress experienced in teams. To shed light on this aspect, we gave students the opportunity to experience the basics of project management in self-organizing teams first-hand, and studied the impact of six diversity dimensions on team performance, coping with stressors, and perceived learning effects. Three controlled experiments at two universities with a total of 65 participants suggest that social background impacts the perceived stressors the most, while age and work experience have the highest impact on perceived learning. Most diversity dimensions have a medium correlation with the quality of work, yet no significant relation to team performance. This lays the foundation for improving students' training for software engineering teamwork based on their diversity-related needs and for creating diversity-sensitive awareness among educators, employers, and researchers.
Organizations that develop software in regulated domains implement comprehensive software process models, e.g., to meet compliance requirements. Creating and evolving such processes is demanding and requires software engineers with substantial modeling skills to create consistent and certifiable processes. While teaching process engineering to students, we observed issues in providing and explaining models. In this paper, we present an exploratory study in which we aim to shed light on the challenges students face when it comes to modeling. Our findings show that students are capable of basic modeling tasks, yet fail to utilize models correctly. We conclude that the required skills, notably abstraction and solution development, are underdeveloped due to missing practice and routine. Since modeling is key to many software engineering disciplines, we advocate intensifying modeling activities in teaching.
Near-Data Processing (NDP) is a key computing paradigm for reducing the ever-growing time and energy costs of data transport versus computation. With their flexibility, FPGAs are an especially suitable compute element for NDP scenarios. Even more promising is the exploitation of novel and future non-volatile memory (NVM) technologies for NDP, which aim to achieve DRAM-like latencies and throughputs while providing large-capacity non-volatile storage.
Experimentation in using FPGAs in such NVM-NDP scenarios has been hindered, though, by the fact that the NVM devices/FPGA boards are still very rare and/or expensive. It thus becomes useful to emulate the access characteristics of current and future NVMs using off-the-shelf DRAMs. If such emulation is sufficiently accurate, the resulting FPGA-based NDP computing elements can be used for actual full-stack hardware/software benchmarking, e.g., when employed to accelerate a database.
For this use, we present NVMulator, an open-source, easy-to-use hardware emulation module that can be seamlessly inserted between the NDP processing elements on the FPGA and a conventional DRAM-based memory system. We demonstrate that, with suitable parametrization, the emulated NVM can come very close to the performance characteristics of actual NVM technologies, specifically Intel Optane. We achieve 0.62% and 1.7% accuracy for cache-line-sized read and write accesses, respectively, while utilizing only 0.54% of the LUT logic resources on a Xilinx/AMD AU280 UltraScale+ FPGA board. We consider both file-system and database access patterns, examining the operation of the RocksDB database when running on real or emulated Optane-technology memories.
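The emulation principle described above can be illustrated in software: a thin wrapper that charges configurable read/write latencies against a simulated clock while serving data from ordinary DRAM-backed memory. The class and the Optane-like latency values below are illustrative assumptions, not part of NVMulator itself, which realizes this idea in FPGA hardware.

```python
# Software sketch of NVM latency emulation on top of ordinary memory.
# Class name and latency parameters are illustrative assumptions;
# NVMulator implements this idea as a hardware module on the FPGA.

class NVMEmulator:
    def __init__(self, read_latency_ns=300, write_latency_ns=100):
        # Asymmetric read/write latencies, roughly Optane-like (assumed values)
        self.read_latency_ns = read_latency_ns
        self.write_latency_ns = write_latency_ns
        self.mem = {}                 # DRAM stand-in: address -> cache line
        self.simulated_time_ns = 0    # accumulated emulated device time

    def read(self, addr):
        self.simulated_time_ns += self.read_latency_ns
        return self.mem.get(addr, bytes(64))  # default: zeroed 64 B cache line

    def write(self, addr, line):
        self.simulated_time_ns += self.write_latency_ns
        self.mem[addr] = line

nvm = NVMEmulator()
nvm.write(0x0, b"\x01" * 64)
data = nvm.read(0x0)
```

Tracking latency on a simulated clock, rather than stalling real time, keeps the sketch deterministic; the hardware module instead delays the actual memory responses.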
The motto of this year's Informatics Inside is, in my view, impressively demonstrated at present by generative AI tools. ChatGPT, Midjourney, and the like enable an innovative interaction with information that challenges us to rethink our previous notions of cognition and value creation. While this necessity has been known in computer science since the 1930s, it is only the practical realization with modern computers that makes the underlying formal considerations tangible. The resulting uncertainties, for example with regard to jobs, are both a challenge and an opportunity to bring this important topic to a broad public. Once again it becomes clear how deeply computer science reaches into our lives and what responsibility comes with it. Against this grand backdrop, the reference to bits and bytes in the conference motto might almost seem like an insignificant detail, but that would be far from the truth. Sequences of zeros and ones still form the building blocks of computer science, and it is the task of applied computer science to combine them into useful and meaningful applications.
Informatics Inside provides a suitable setting for this already during academic education. Our students plan, organize, and run this conference independently every year. The topics for the technical contributions were likewise selected by the students themselves. In my view, the resulting papers in these proceedings reflect very well the exciting variety of application topics in Human Centered Computing. They also clearly show our students' willingness to take responsibility for a meaningful and creative shaping of the digital future.
Reutlingen, November 15, 2023. Prof. Dr. rer. medic. Christian Thies
OpenAPI, WADL, RAML, and API Blueprint are popular formats for documenting Web APIs. Although these formats are in general both human- and machine-readable, only the part of the format describing the syntax of a Web API is machine-understandable. Descriptions, which explain the meaning and purpose of Web API elements, are embedded as natural-language text snippets in documents and target human readers, not machines. To enable machines to read and process such state-of-practice Web API documentation, we propose a Transformer model that solves the generic task of identifying a Web API element within a syntax structure that matches a natural language query. For our first prototype, we focus on the Web API integration task of matching output with input parameters and fine-tuned a pre-trained CodeBERT model on the downstream task of question answering with samples from 2,321 OpenAPI documents. We formulate the original question answering problem as a multiple-choice task: given a semantic natural language description of an output parameter (question) and the syntax of the input schema (paragraph), the model chooses the input parameter (answer) in the schema that best matches the description. The paper describes the data preparation, tokenization, and fine-tuning process and discusses possible applications of our model as part of a recommender system. Furthermore, we evaluate the generalizability and robustness of our fine-tuned model, which achieves an accuracy of 81.46% correctly chosen parameters.
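The multiple-choice formulation can be sketched independently of the model: each sample pairs an output-parameter description (question) with the input schema (paragraph) and its parameters as answer choices. In the sketch below, a naive token-overlap scorer stands in for the fine-tuned CodeBERT model; the example schema and helper names are hypothetical.

```python
# Sketch of the multiple-choice formulation for parameter matching.
# A naive lexical scorer stands in for the fine-tuned CodeBERT model;
# schema and function names are illustrative assumptions.
import json

def build_sample(description, input_schema):
    """Pack question (description), paragraph (schema), choices (parameters)."""
    return {
        "question": description,
        "paragraph": json.dumps(input_schema),
        "choices": list(input_schema),
    }

def pick_parameter(sample):
    """Stand-in scorer: count description tokens contained in each choice."""
    tokens = sample["question"].lower().split()
    def score(choice):
        return sum(tok in choice.lower() for tok in tokens)
    return max(sample["choices"], key=score)

# Hypothetical input schema of a Web API operation
schema = {"customerId": "string", "orderDate": "string", "amount": "number"}
sample = build_sample("unique customer identifier", schema)
best = pick_parameter(sample)
```

In the actual approach, the trained model replaces `pick_parameter` and scores each (question, paragraph, choice) triple jointly rather than by lexical overlap.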
The relevance of Robotic Process Automation (RPA) has increased over the last few years. Combining RPA with Artificial Intelligence (AI) can further enhance the business value of the technology. The aim of this research was to analyze applications, terminology, benefits, and challenges of combining the two technologies. A total of 60 articles were analyzed in a systematic literature review to evaluate the aforementioned areas. The results show that by adding AI, RPA applications can be used in more complex contexts, the human factor during the development process can be minimized, and AI-based decision-making can be integrated into RPA routines. This paper also presents a current overview of the terminology in use. Moreover, it shows that integrating AI can surface previously unseen challenges in RPA projects, but also brings many new benefits. Based on the outcome, it is concluded that the topic offers a lot of potential, but further research and development are required. The results of this study help researchers gain an overview of the state of the art in combining RPA and AI.
What might the attendee be able to do after being in your session?
Our work shows how to connect intra-operative devices via IEEE 11073 Service-oriented Device Connectivity (SDC).
Description of the Problem or Gap
Standardized device communication is essential for interoperability, availability of device data, and therefore for the intelligent operating room (OR) and emerging solutions. The SDC standard was developed to make information from medical devices available in a uniform manner and to enable interoperability. Existing devices are rarely SDC-capable and need interfaces to become interoperable via SDC.
Methods: What did you do to address the problem or gap?
We conceived an SDC-based architecture consisting of a service provider and a service consumer. In our concept, the service provider is connected to the medical device and capable of translating the proprietary protocol of the device into SDC and vice versa. The service consumer is used to request or send information via the SDC protocol to the service provider and can function as a uniform bidirectional interface (e.g., for displaying or controlling). This concept was demonstrated exemplarily with the Philips patient monitor MX800, retrieving device data (e.g., vital parameters) via SDC, and partly with the operating light marLED X of the KLS Martin Group.
Results: What was the outcome(s) of what you did to address the problem or gap?
The patient monitor MX800 was connected via LAN to a Raspberry Pi (RPi), on which the service provider runs. A Python script on the RPi establishes a connection to the monitor and translates incoming and outgoing messages between the proprietary protocol and SDC to/from the service consumer. The service consumer runs on a laptop and simulates different kinds of systems that want to obtain vital parameters or other information from the patient monitor. The operating light marLED X was connected to an RPi via USB-to-RS232. A Python script on the RPi establishes a connection to the light and makes it possible, via proprietary commands, to query information from the light (e.g., its status) and to control it (e.g., toggle the light, increment the intensity). A translation to SDC is not yet integrated.
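The translation step the service provider performs can be sketched as follows. The proprietary message format, parameter codes, and function names below are hypothetical stand-ins; the real MX800 protocol and an SDC stack are considerably more involved.

```python
# Sketch of translating a hypothetical proprietary vital-signs message
# into a structured form that a service provider could map onto SDC
# metric states. Message format and code table are illustrative assumptions.

UNITS = {"HR": "1/min", "SpO2": "%", "NIBPsys": "mmHg"}  # assumed code table

def parse_proprietary(message):
    """Parse e.g. 'HR=72;SpO2=98' into {code: value} pairs."""
    out = {}
    for field in message.strip().split(";"):
        code, _, value = field.partition("=")
        out[code] = float(value)
    return out

def to_sdc_metrics(values):
    """Map parsed values onto simplified SDC-style metric dicts."""
    return [
        {"handle": f"metric.{code}", "value": value, "unit": UNITS.get(code, "")}
        for code, value in values.items()
    ]

metrics = to_sdc_metrics(parse_proprietary("HR=72;SpO2=98"))
```

The reverse direction (SDC operation to proprietary command) follows the same pattern with the mapping inverted.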
Discussion of Results
Our practical implementation shows that medical devices can be accessed via external connections to obtain device data and to control the device via commands. The example SDC implementation for the patient monitor MX800 makes it possible to request its data via the standardized communication protocol SDC. This is also possible for the operating light marLED X once its proprietary protocol has been analyzed so that it can be translated to/from SDC. This would allow controlling the device from an external system, or automatically depending on the status of the ongoing procedure. The advantage is that existing intra-operative devices can be extended by a service provider capable of translating the proprietary protocol of the device into SDC and vice versa. This enables interoperability and an intelligent OR that, for example, is aware of all devices, their status, and their data, and can use this information to optimally support the surgeons and their team (e.g., provision of information, automated documentation). This interoperability means that future innovations merely need to understand the SDC protocol instead of all vendor-dependent communication protocols.
Conclusion
Standardized device communication is essential to reach interoperability, and therefore intelligent ORs. Our contribution addresses the possibility of subsequently making medical devices SDC-capable. This may eliminate the need to understand all the different proprietary protocols when developing new innovative solutions for the OR.
Introduction: Even though there is a standard procedure for CI surgery, surgical steps often differ individually, especially in pediatric surgery, due to anatomical variations, malformations, or unforeseen events. This is why every surgical report should be created individually, which takes time and relies on the surgeon's correct memory. A standardized recording of intraoperative data with subsequent storage and text processing would therefore be desirable and would provide the basis for subsequent data processing, e.g., in the context of research or quality assurance.
Method: In cooperation with Reutlingen University, we conducted a workflow analysis of the prototype of a semi-automatic checklist tool. Based on checklists automatically generated from BPMN models, a prototype user interface was developed for an Android tablet. Functions such as uploading photos and files, manual user entries, the interception of foreseeable deviations from the normal course of an operation, and the automatic creation of surgery documentation were implemented. The system was tested in a remote usability test on a petrous bone model.
Result: The user interface allows simple, intuitive handling that fits well into the intraoperative setting. Clinical data as well as surgical steps could be individually recorded and saved via DICOM. An automatic surgery report could be created and saved.
Summary: The use of a dynamic checklist tool facilitates the capture, storage and processing of surgical data. Further applications in clinical practice are pending.
This project aims to evaluate existing big data infrastructures for their applicability in the operating room to support medical staff with context-sensitive systems. Requirements for the system design were generated. The project compares different data mining technologies, interfaces, and software system infrastructures with a focus on their usefulness in the peri-operative setting. The lambda architecture was chosen for the proposed system design, which will provide data for both postoperative analysis and real-time support during surgery.
Automatic segmentation is essential for brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic glioma boundary detection in pre-operative MRI. Our ensemble took first place in the final evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorff distances of 5.23, 13.54, and 12.05 for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first on another unseen test dataset, the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 values of 2.66, 1.72, and 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively.
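One simple way to combine the outputs of several segmentation models, as in an ensemble like the one described above, is per-voxel majority voting. The sketch below shows this on flattened label lists; it is a generic illustration, and the paper's actual fusion strategy may differ (e.g., averaging class probabilities before the argmax).

```python
# Per-voxel majority vote over the label maps of several models.
# Generic illustration; the paper's ensemble may instead average
# class probabilities before taking the argmax.
from collections import Counter

def majority_vote(label_maps):
    """label_maps: list of equally long, flattened label sequences."""
    return [
        Counter(votes).most_common(1)[0][0]
        for votes in zip(*label_maps)
    ]

# Three hypothetical model outputs for six voxels
# (0 = background, 1 = tumor core, 2 = enhancing tumor)
m1 = [0, 1, 1, 2, 0, 2]
m2 = [0, 1, 2, 2, 0, 2]
m3 = [0, 1, 1, 2, 1, 0]
fused = majority_vote([m1, m2, m3])
```

With three voters, every voxel has a strict majority unless all three disagree, in which case `Counter.most_common` breaks the tie by first occurrence.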
Enterprises and societies currently face essential challenges, and digital transformation can contribute to their resolution. Enterprise architecture (EA) is useful for promoting digital transformation in global companies and information societies covering ecosystem partners. The advancement of new business models can be promoted with digital platforms and architectures for Industry 4.0 and Society 5.0, so that products from sectors such as healthcare, manufacturing, and energy can increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0, together with the design thinking approach, is expected to promote and implement digital platforms and digital products for healthcare, manufacturing, and energy communities more efficiently. In this paper, we present various cases of digital transformation in which digital platforms and products are designed and evaluated for digital IT, digital manufacturing, and digital healthcare with Industry 4.0 and Society 5.0. The vision of AIDAF applications for performing digital transformation in global companies is explained and referenced, extended toward digitalized ecosystems such as Society 5.0 and Industry 4.0.
Mobile monitoring of outpatients during cancer therapy becomes possible through technological advancements. This study leveraged a new remote patient monitoring app for in-between systemic therapy sessions. Patients’ evaluation showed that the handling is feasible. Clinical implementation must consider an adaptive development cycle for reliable operations.
Current advances in Artificial Intelligence (AI) combined with other digitalization efforts are changing the role of technology in service ecosystems. Human-centered intelligent systems and services are the target of many current digitalization efforts and part of a massive digital transformation based on digital technologies. Artificial intelligence, in particular, is having a powerful impact on new opportunities for shared value creation and the development of smart service ecosystems. Motivated by experiences and observations from digitalization projects, this paper presents new methodological experiences from academia and practice on a joint view of digital strategy and architecture of intelligent service ecosystems and explores the impact of digitalization based on real case study results. Digital enterprise architecture models serve as an integral representation of business, information, and technology perspectives of intelligent service-based enterprise systems to support management and development. This paper focuses on the novel aspect of closely aligned digital strategy and architecture models for intelligent service ecosystems and highlights the fundamental business mechanism of AI-based value creation, the corresponding digital architecture, and management models. We present key strategy-oriented architecture model perspectives for intelligent systems.
In today's education, healthcare, and manufacturing sectors, organizations and information societies are discussing new enhancements to corporate structure and process efficiency using digital platforms. These enhancements can be achieved using digital tools. Industry 5.0 and Society 5.0 offer several opportunities for businesses to enhance the adaptability and efficacy of their industrial processes, paving the way for the development of new business models facilitated by digital platforms. Society 5.0 can contribute to a super-intelligent society that includes the healthcare industry. In the past decade, the Internet of Things, big data analytics, neural networks, deep learning, and artificial intelligence (AI) have revolutionized our approach to various job sectors, from manufacturing and finance to consumer products. AI is developing quickly and efficiently. ChatGPT, the latest artificial intelligence chatbot created by OpenAI, has taken the internet by storm. We tested the effectiveness of this large language model on four critical questions concerning "Society 5.0", "Healthcare 5.0", "Industry", and "Future Education" from the perspective of Age 5.0.
The following publication is the proceedings of the student conference Informatics Inside, held in the summer semester of 2023, which is a special event for the Faculty of Informatics and its students. With the publication of their articles in these proceedings, the students have a tangible publication whose content has been quality-assured through peer review.
This year brings a new challenge: since 2022, OpenAI's ChatGPT has been available, a tool capable of producing astonishing texts with plausible argumentation. Using it to write a scientific article is conceivable and at the same time hard to prove. A critical approach to technology is more important than a blanket ban. Nevertheless, rules for dealing with artificial intelligence are needed to frame the ethically correct use of such tools. It is all the more important to impart comprehensive expertise and critical thinking so that possible errors or cases of plagiarism can be exposed.
This brings us to the heart of the matter: computer science is ubiquitous and present in a great many products in industry and everyday life. The diverse papers of this conference show this. See for yourself how broad the methods, algorithms, and technology applications are: from augmented reality, to video transmission in the operating room, to standards for structured data and artificial intelligence, the contributions show how far-reaching computer science has become. They all share one thing: the human-centered application of technology, which serves as the basis of all courses in the Human-centered Computing master's program.
Non-fungible tokens (NFTs) are unique digital assets that have recently gained significant popularity, particularly in the digital art sector. The success of NFTs and other blockchain-based innovations depends on their acceptance and use by consumers. This study aims to understand the impact of moral values on the acceptance of NFTs. Based on a quantitative survey with over 800 complete responses, the analysis shows that moral aspects of NFTs are indeed important for potential users. However, there is an attitude-behavior gap, as the positive impact of moral values on the intention to use NFTs is not reflected in the respondents' actual current usage of NFTs. This study contributes to knowledge by providing new empirical data on the acceptance of NFTs and by highlighting the role of moral values in the acceptance decision.
Digital twins deployed in production are important in practice and interesting for research. Currently, mostly structured data, e.g., from sensors and timestamps of related stations, are integrated into digital twins. However, semi- and unstructured data are also important for displaying the current status of a digital twin (e.g., of a machine or a produced good). Process mining and text mining in combination can be used to exploit log file data to understand the current state of the process and to highlight issues. As a result, reactions to issues can be taken more quickly and in a more targeted and cost-oriented manner. Applying a design science research approach, a prototype is developed as an artifact based on derived requirements. This prototype helps to understand and clarify the possibilities of process mining and text mining on log data for production-related digital twins. Contributions for practice and research are described. Furthermore, limitations of the research and future opportunities are pointed out.
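The bridge from raw log files to process-mining input can be sketched as follows: parse log lines into (case, activity) events and derive the directly-follows relation that discovery algorithms consume. The log line format, field names, and example activities are hypothetical assumptions, not taken from the prototype.

```python
# Sketch: turn machine log lines into an event log and derive the
# directly-follows pairs used by process-mining discovery algorithms.
# Log line format and activity names are illustrative assumptions.
import re
from collections import defaultdict

LINE = re.compile(r"(?P<ts>\S+) (?P<case>\S+) (?P<activity>.+)")

def parse_events(lines):
    """Group activities per case id, in log order."""
    events = defaultdict(list)
    for line in lines:
        m = LINE.match(line)
        if m:
            events[m["case"]].append(m["activity"])
    return events

def directly_follows(events):
    """Count how often activity a is directly followed by b in a trace."""
    pairs = defaultdict(int)
    for trace in events.values():
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return dict(pairs)

log = [
    "2023-05-02T08:00 case1 start_milling",
    "2023-05-02T08:05 case1 quality_check",
    "2023-05-02T08:10 case2 start_milling",
    "2023-05-02T08:15 case2 quality_check",
]
df = directly_follows(parse_events(log))
```

On top of such pairs, text mining would additionally classify the free-text part of each log message (e.g., as an error description) before it enters the event log.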
The volume includes papers presented at the International KES Conference on Human Centred Intelligent Systems 2023 (KES HCIS 2023), held in Rome, Italy on June 14–16, 2023. This book highlights new trends and challenges in intelligent systems, which play an important part in the digital transformation of many areas of science and practice. It includes papers offering a deeper understanding of the human-centred perspective on artificial intelligence, of intelligent value co-creation, ethics, value-oriented digital models, transparency, and intelligent digital architectures and engineering to support digital services and intelligent systems, the transformation of structures in digital businesses and intelligent systems based on human practices, as well as the study of interaction and the co-adaptation of humans and systems.