Informatik
This project aims to evaluate existing big data infrastructures for their applicability in the operating room to support medical staff with context-sensitive systems. Requirements for the system design were generated. The project compares different data mining technologies, interfaces, and software system infrastructures with a focus on their usefulness in the peri-operative setting. The lambda architecture was chosen for the proposed system design, which will provide data for both postoperative analysis and real-time support during surgery.
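The lambda architecture named in this abstract can be sketched in a few lines: a batch layer periodically recomputes views from an immutable master dataset (the postoperative-analysis path), a speed layer keeps an incremental real-time view (the intraoperative path), and a serving layer merges both. This is a generic illustration under invented event names, not the project's actual system design:

```python
from collections import Counter

class LambdaPipeline:
    """Minimal sketch of a lambda architecture: batch layer, speed
    layer, and a serving layer that merges their views."""

    def __init__(self):
        self.master = []             # immutable, append-only event log
        self.batch_view = Counter()  # rebuilt periodically from `master`
        self.speed_view = Counter()  # events seen since the last batch run

    def ingest(self, event):
        # real-time path: every event is logged and counted immediately
        self.master.append(event)
        self.speed_view[event] += 1

    def run_batch(self):
        # batch path: full recomputation from the master dataset,
        # after which the incremental view starts over
        self.batch_view = Counter(self.master)
        self.speed_view.clear()

    def query(self, event):
        # serving layer: merge the batch view with the real-time delta
        return self.batch_view[event] + self.speed_view[event]
```

Queries stay answerable at all times because the speed view covers exactly the events the last batch run has not yet absorbed.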
Large critical systems, such as those created in the space domain, are usually developed by a large number of organizations and, furthermore, have to comply with standards. Yet the different stakeholders often do not have a common understanding of the needed quality of requirements specifications. Achieving such a common understanding is a laborious process that is currently not sufficiently supported; moreover, it must be aligned with the standards. In this paper, we present an approach that can be used to align the different stakeholder perceptions regarding the quality of requirements specifications. Existing quality models for requirements specifications are analyzed for equivalences and transferred into a common representation, the so-called Aligned Quality Map (AQM). Furthermore, a process is defined that supports the alignment of different stakeholder perspectives with regard to the quality of requirements specifications using the AQM, which is validated in a case study in the context of European space projects. The AQM has been created and populated with an initial set of quality models; it is designed in such a way that it can be extended to include further quality models. The case study has shown that an alignment of different stakeholder perspectives and the quality model of the European Cooperation for Space Standardization using the AQM is feasible. The approach allows for aligning different stakeholder perspectives towards a common understanding of the quality of requirements specifications in the context of standards. Furthermore, the AQM supports the assessment of requirements specifications.
Intelligent Tutoring Systems (ITSs) are increasingly used in modern education to automatically give students individual feedback on their performance. The advantage for students is fast individual feedback on their answers to the questions asked, while lecturers benefit from considerable time savings and easy delivery of educational material. Of course, it is important that the provided feedback is as effective as direct feedback from the lecturer. However, in digital teaching, lecturers cannot assess a student's knowledge precisely but can only see which questions were answered correctly and incorrectly. Therefore, this paper presents a concept for integrating ITS elements into the gamified e-learning platform IT-REX so that the feedback quality can be improved to support students in the best possible way.
We introduce bloomRF as a unified method for approximate membership testing that supports both point and range queries. As a first core idea, bloomRF introduces novel prefix hashing to efficiently encode range information in the hash code of the key itself. As a second key concept, bloomRF proposes novel piecewise-monotone hash functions that preserve local order and support fast range lookups with fewer memory accesses. bloomRF has near-optimal space complexity and constant query complexity. Although bloomRF is designed for integer domains, it also supports floating-point values and can serve as a multi-attribute filter. The evaluation in RocksDB and in a standalone library shows that it is more efficient than existing point-range filters and outperforms them by up to 4× across a range of settings and distributions, while keeping the false-positive rate low.
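The general idea of encoding range information via key prefixes can be illustrated with a simplified dyadic-prefix Bloom filter: every binary prefix of an inserted key is hashed into the filter, so a range query only needs to probe the dyadic intervals covering the range. This is a generic sketch of the prefix idea, not the actual bloomRF construction (its hashing, layout, and parameters differ):

```python
import hashlib

class PrefixBloom:
    """Bloom-style filter answering point AND range queries by hashing
    every binary prefix of each key (one dyadic interval per prefix)."""

    def __init__(self, bits=1 << 16, width=16):
        self.bits = bits            # size of the bit array
        self.width = width          # keys are `width`-bit integers
        self.array = bytearray(bits // 8)

    def _slots(self, prefix, level):
        # derive two deterministic hash positions from one digest
        h = hashlib.blake2b(f"{level}:{prefix}".encode(), digest_size=8).digest()
        return (int.from_bytes(h[:4], "big") % self.bits,
                int.from_bytes(h[4:], "big") % self.bits)

    def _set(self, i): self.array[i // 8] |= 1 << (i % 8)
    def _get(self, i): return self.array[i // 8] >> (i % 8) & 1

    def add(self, key):
        # mark the key and all of its bit prefixes, i.e. every dyadic
        # interval that contains the key
        for level in range(self.width + 1):
            prefix = key >> (self.width - level)
            for s in self._slots(prefix, level):
                self._set(s)

    def _contains_prefix(self, prefix, level):
        return all(self._get(s) for s in self._slots(prefix, level))

    def query_point(self, key):
        return self._contains_prefix(key, self.width)

    def query_range(self, lo, hi):
        # greedily decompose [lo, hi] into aligned dyadic intervals,
        # then probe the filter once per interval
        covers = []
        while lo <= hi:
            level = self.width
            while level > 0 and lo % (1 << (self.width - level + 1)) == 0 \
                    and lo + (1 << (self.width - level + 1)) - 1 <= hi:
                level -= 1
            covers.append((lo >> (self.width - level), level))
            lo += 1 << (self.width - level)
        return any(self._contains_prefix(p, l) for p, l in covers)
```

As with any Bloom filter, negative answers are exact up to the false-positive rate, and the dyadic decomposition keeps range probes logarithmic in the range size.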
Non-fungible tokens (NFTs) are unique digital assets that have recently gained significant popularity, particularly in the digital art sector. The success of NFTs and other blockchain-based innovations depends on their acceptance and use by consumers. This study aims to understand the impact of moral values on the acceptance of NFTs. Based on a quantitative survey with over 800 complete responses, the analysis shows that moral aspects of NFTs are indeed important for potential users. However, there is an attitude-behavior gap, as the positive impact of moral values on the intention to use NFTs is not reflected in the respondents' actual current usage of NFTs. This study contributes to knowledge by providing new empirical data on the acceptance of NFTs and highlighting the role of moral values in the acceptance decision.
The relevance of Robotic Process Automation (RPA) has increased over the last few years. Combining RPA with Artificial Intelligence (AI) can further enhance the business value of the technology. The aim of this research was to analyze applications, terminology, benefits, and challenges of combining the two technologies. A total of 60 articles were analyzed in a systematic literature review to evaluate the aforementioned areas. The results show that by adding AI, RPA applications can be used in more complex contexts, it is possible to minimize the human factor during the development process, and AI-based decision-making can be integrated into RPA routines. This paper also presents a current overview of the terminology in use. Moreover, it shows that by integrating AI, some unforeseen challenges can emerge in RPA projects, but many new benefits come along with it as well. Based on the outcome, it is concluded that the topic offers a lot of potential, but further research and development is required. The results of this study help researchers gain an overview of the state of the art in combining RPA and AI.
In the context of digital transformation, having a data-driven organizational culture has been recognized as an important factor for data analytics capabilities, innovativeness, and competitive advantage of firms. However, the current literature on data-driven culture (DDC) is fragmented, lacking both a synthesis of findings and a theoretical foundation. Therefore, the aim of this work was to develop a comprehensive framework for understanding DDC and the mechanisms that can be used to embed such a culture in organizations, as well as to structure prior dispersed findings on the topic. On the foundation of organizational culture theory, we employed a Design Science Research (DSR) approach using a systematic literature review and expert interviews to build and evaluate a transformation-oriented framework. This research contributes to knowledge by synthesizing previously dispersed knowledge in a holistic framework and by providing a conceptual framework to guide the transformation towards a DDC.
The Fifteenth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2023), held between March 13 and 17, 2023, continued a series of international events covering a broad spectrum of topics related to advances in database fundamentals, the evolving relationship between databases and other domains, database technologies and content processing, as well as specifics of application-domain databases.
Advances in different technologies and domains related to databases have triggered substantial improvements in content processing, information indexing, and data, process, and knowledge mining. The push came from Web services, artificial intelligence, and agent technologies, as well as from the widespread adoption of XML.
High-speed communications and computations, large storage capacities, and load balancing for distributed database access allow new approaches to content processing with incomplete patterns, advanced ranking algorithms, and advanced indexing methods.
Developments in e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems put pressure on database communities to push the 'de facto' methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
Digital twins deployed in production are important in practice and interesting for research. Currently, mostly structured data, e.g., sensor readings and timestamps of related stations, are integrated into Digital Twins. However, semi- and unstructured data are also important for displaying the current status of a Digital Twin (e.g., of a machine or a produced good). Process Mining and Text Mining in combination can be used to exploit log file data to understand the current state of the process and to highlight issues, so that issue-related reactions can be taken more quickly, in a more targeted and cost-oriented way. Applying a design science research approach, a prototype is developed as an artefact based on derived requirements. This prototype helps to understand and clarify the possibilities of Process Mining and Text Mining on log data for production-related Digital Twins. Contributions for practice and research are described. Furthermore, limitations of the research and future opportunities are pointed out.
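The combination of Process Mining and Text Mining on production log data can be sketched in a few lines: parse log lines into events, derive directly-follows relations per case (the core of process discovery), and spot issue-related messages by keyword. The log format, station names, and keywords below are invented for illustration; this is not the paper's prototype:

```python
import re
from collections import Counter
from datetime import datetime

# Invented production log: one event per line, keyed by case and station
LOG = """\
2023-05-04T08:00:01 case=42 station=milling msg=started
2023-05-04T08:03:12 case=42 station=milling msg=finished
2023-05-04T08:04:40 case=42 station=assembly msg=error torque out of range
2023-05-04T08:09:02 case=42 station=assembly msg=finished
"""

LINE = re.compile(r"(\S+) case=(\d+) station=(\S+) msg=(.*)")

events = []
for line in LOG.splitlines():
    ts, case, station, msg = LINE.match(line).groups()
    events.append((datetime.fromisoformat(ts), case, station, msg))

# Process-mining view: which station directly follows which, per case
follows = Counter()
last_station = {}
for ts, case, station, msg in sorted(events):
    prev = last_station.get(case)
    if prev and prev != station:
        follows[(prev, station)] += 1
    last_station[case] = station

# Text-mining view: flag issue-related messages by keyword spotting
issues = [(case, station, msg) for _, case, station, msg in events
          if re.search(r"\b(error|fail|jam)\b", msg)]
```

The directly-follows counts feed the process view of the Digital Twin, while the flagged messages localize issues to a station and case.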
In the era of digital transformation, the notion of software quality transcends its traditional boundaries, necessitating an expansion to encompass the realms of value creation for customers and the business. Merely optimizing technical aspects of software quality can result in diminishing returns. Product discovery techniques can be seen as a powerful mechanism for crafting products that align with an expanded concept of quality - one that incorporates value creation. Previous research has shown that companies struggle to determine appropriate product discovery techniques for generating, validating, and prioritizing ideas for new products or features to ensure they meet the needs and desires of the customers and the business. For this reason, we conducted a grey literature review to identify various techniques for product discovery. First, the article provides an overview of different techniques and assesses how frequently they are mentioned in the literature review. Second, we mapped these techniques to an existing product discovery process from previous research to provide concrete guidelines for practitioners establishing product discovery in their organizations. The analysis shows, among other things, the increasing importance of techniques for structuring the problem exploration process and the product strategy process. The results are interpreted with regard to the importance of the techniques for practical applications and recognizable trends.
Software development teams have to face stress caused by deadlines, staff turnover, or individual differences in commitment, expertise, and time zones. While students are typically taught the theory of software project management, their exposure to such stress factors is usually limited. However, preparing students for the stress they will have to endure once they work in project teams is important for their own sake, as well as for the sake of team performance in the face of stress. Team performance has been linked to the diversity of software development teams, but little is known about how diversity influences the stress experienced in teams. In order to shed light on this aspect, we provided students with the opportunity to self-experience the basics of project management in self-organizing teams, and studied the impact of six diversity dimensions on team performance, coping with stressors, and positive perceived learning effects. Three controlled experiments at two universities with a total of 65 participants suggest that the social background impacts the perceived stressors the most, while age and work experience have the highest impact on perceived learnings. Most diversity dimensions have a medium correlation with the quality of work, yet no significant relation to the team performance. This lays the foundation to improve students’ training for software engineering teamwork based on their diversity-related needs and to create diversity-sensitive awareness among educators, employers and researchers.
Gamification has been increasingly applied to software engineering education in the past. The approaches vary from applying game elements in a conceptual phase of a course to using specific tools to engage the students more and support their learning goals. However, existing tools usually offer game elements, such as quizzes or challenges, but do not provide a more computer-game-like experience. Therefore, we raise the gamified learning experience to another level by proposing Gamify-IT, a Unity- and web-based game platform intended to help students learn software engineering. It follows an immersive role-playing game style in which the students explore a world, find and solve minigames, and clear dungeons with SE tasks. Lecturers can configure the worlds, e.g., to add content hints. Furthermore, they can add and configure minigames and dungeons to include exercises in a fully gamified way, customizing their course in Gamify-IT to adapt the world very precisely to other materials such as lectures or exercises. Results of an evaluation of our initial prototype show that (i) students like to engage with the platform, (ii) students are motivated to learn when using Gamify-IT, and (iii) the minigames support students in understanding the learning objectives.
Recent work on database application development platforms has sought to include a declarative formulation of a conceptual data model in the application code, using annotations or attributes. Some recent work has used metadata to include the details of such formulations in the physical database, an approach that brings significant advantages in that the model can be enforced across a range of applications for a single database. In previous work, we have discussed the advantages for enterprise integration of typed graph data models (TGM), which can play a similar role in graph databases, leveraging the existing support for the Unified Modelling Language (UML). Ideally, the integration of systems designed with different models, for example graph and relational databases, should also be supported. In this work, we implement this approach, using metadata in a relational database management system (DBMS).
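The idea of storing the model as metadata inside the relational DBMS itself, so that any application can validate instance data against it, can be sketched with SQLite. Table and column names below are invented for illustration; this is not the paper's actual schema:

```python
import sqlite3

# Metadata tables describe a typed graph model (node types and typed,
# directed edge types); instance tables reference that metadata.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE tgm_node_type (name TEXT PRIMARY KEY);
    CREATE TABLE tgm_edge_type (
        name   TEXT PRIMARY KEY,
        source TEXT NOT NULL REFERENCES tgm_node_type(name),
        target TEXT NOT NULL REFERENCES tgm_node_type(name)
    );
    -- instance data, typed by reference to the metadata tables
    CREATE TABLE node (id INTEGER PRIMARY KEY,
                       type TEXT NOT NULL REFERENCES tgm_node_type(name));
    CREATE TABLE edge (src INTEGER REFERENCES node(id),
                       dst INTEGER REFERENCES node(id),
                       type TEXT NOT NULL REFERENCES tgm_edge_type(name));
""")
db.execute("PRAGMA foreign_keys = ON")

# Populate the model: a Customer may place an Order
db.executemany("INSERT INTO tgm_node_type VALUES (?)",
               [("Customer",), ("Order",)])
db.execute("INSERT INTO tgm_edge_type VALUES ('places', 'Customer', 'Order')")

# Populate an instance that conforms to the model
db.execute("INSERT INTO node VALUES (1, 'Customer')")
db.execute("INSERT INTO node VALUES (2, 'Order')")
db.execute("INSERT INTO edge VALUES (1, 2, 'places')")

# Model enforcement: count edges whose endpoint types contradict the model
violations = db.execute("""
    SELECT COUNT(*) FROM edge e
    JOIN tgm_edge_type t ON t.name = e.type
    JOIN node s ON s.id = e.src
    JOIN node d ON d.id = e.dst
    WHERE s.type != t.source OR d.type != t.target
""").fetchone()[0]
```

Because the model lives in ordinary tables, every application connected to the database can run the same conformance query.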
Knowledge-intensive organizations primarily rely on knowledge and expertise as key strategic resources. In light of economic, social, and health-related crises in recent years, such organizations increasingly need to operate in dynamic environments. However, examinations of dynamic capabilities specifically in knowledge-intensive organizations remain scarce. This is remarkable given the role that knowledge holds as an economic resource in developed countries. To explain how knowledge-intensive organizations can prevail among competitors under dynamic conditions, the authors integrate two literature streams in a knowledge-intensive context: the knowledge-based view and the dynamic capabilities approach. The knowledge-based view focuses on the nature of organizational knowledge as a critical resource and illustrates specific properties of knowledge in contrast to traditional means of production such as capital. The dynamic capabilities approach, on the other hand, concerns a firm's ability to integrate, build, and reconfigure internal and external resources and can be drawn on to explain organizational success through adaptation to dynamic contexts. In this conceptual study, the authors propose a research model linking knowledge processes to organizational performance through two different paths: (1) operational capabilities permit organizations to make their living in the present and refer to efficiency; (2) dynamic capabilities allow organizations to change their resource base and, therefore, enable their long-term survival in dynamic environments by focusing on effectiveness. Additionally, the authors hypothesize a moderating effect of environmental dynamics on the relationship between dynamic capabilities and performance. The study offers a comprehensive overview of the interplay between dynamic capabilities and the knowledge-based view, offering valuable insights for both researchers and practitioners in the field.
The basis for developing future products in the automotive industry is finding creative and innovative solutions. Ideas can be found by means of creativity methods that support product developers throughout the creative process. Product developers are provided with a variety of different and new methods, which leads to a “method jungle” in which it is difficult for product developers to find the most suitable path. The successful use of methods in product development goes hand in hand with the acceptance and implementation of the methods; yet despite the added value, only low usage is observed in the development process. The field of Creativity Support Tools (CSTs) also offers a wide variety of tools that support the creative process. However, a chasm exists between the many CSTs that are developed and those that creative practitioners actually use. Therefore, previous studies iteratively developed a user-centered tool called “IDEA” that tries to respond to users' needs. The question arises how IDEA performs in a real-life setting regarding its UX and usability, as well as the acceptance of the creativity method and the level of mental workload.
Measuring cardiorespiratory parameters in sleep using non-contact sensors and the ballistocardiography technique has received much attention because the method is low-cost, unobtrusive, and non-invasive. Designing a user-friendly, simple-to-use, easy-to-deploy, and less error-prone system remains open and challenging due to the complex morphology of the signal. In this work, using four force-sensitive resistor sensors, we conducted a study with four sensor distributions in order to simplify the complexity of the system by identifying the region of interest for heartbeat and respiration measurement. The sensors are deployed under the mattress and attached to the bed frame without any interference with the subjects. The four distributions comprise two linear horizontal, one linear vertical, and one square arrangement, covering the region that influences cardiorespiratory activities. We recruited 4 subjects and acquired data in four regular sleeping positions, each for a duration of 80 seconds. The signal processing was performed using the discrete wavelet transform (bior3.9) with a smoothing level of 4 as well as bandpass filtering. The results indicate that we achieved a mean absolute error of 2.35 and 4.34 for respiration and heartbeat, respectively. The results suggest the efficiency of a triangle-shaped structure of three sensors for measuring heartbeat and respiration parameters in all four regular sleeping positions.
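The study's pipeline uses a biorthogonal wavelet (bior3.9) plus bandpass filtering; as a library-free stand-in for illustration only, a crude band-pass can be built from the difference of two moving averages, followed by peak counting and the reported mean-absolute-error metric. This is a simplified sketch, not the study's actual processing:

```python
def moving_average(x, w):
    # centered moving average, same length as the input (edges truncated)
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - w // 2), min(len(x), i + w // 2 + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def bandpass(x, narrow, wide):
    # difference of a narrow and a wide moving average acts as a
    # crude band-pass: it removes both fast noise and slow drift
    return [a - b for a, b in zip(moving_average(x, narrow),
                                  moving_average(x, wide))]

def count_peaks(x):
    # local maxima above zero, a stand-in for beat/breath detection
    return sum(1 for i in range(1, len(x) - 1)
               if x[i] > 0 and x[i] > x[i - 1] and x[i] >= x[i + 1])

def mean_absolute_error(estimates, references):
    # metric reported in the abstract (2.35 respiration, 4.34 heartbeat)
    return sum(abs(e - r) for e, r in zip(estimates, references)) / len(estimates)
```

Counting the band-passed peaks over a known duration then yields a rate estimate that can be compared against a reference via the mean absolute error.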
Mobile monitoring of outpatients during cancer therapy becomes possible through technological advancements. This study leveraged a new remote patient monitoring app for in-between systemic therapy sessions. Patients’ evaluation showed that the handling is feasible. Clinical implementation must consider an adaptive development cycle for reliable operations.
The following publication is the proceedings of the student conference Informatics Inside, held in the summer semester of 2023, which is a special event for the Faculty of Informatics and its students. By publishing their articles in these proceedings, the students have a tangible publication whose content has been quality-assured through peer review.
This year brings a new challenge: since 2022, OpenAI's ChatGPT has been available, capable of producing astonishing texts with comprehensible argumentation. Using the tool to write a scientific article is conceivable and at the same time hard to prove. A critical approach to technology is more important than a blanket ban. Nevertheless, rules for dealing with artificial intelligence are needed that ensure the ethically sound use of such tools. It is all the more important to impart comprehensive expertise and critical thinking so that possible errors or cases of plagiarism can be exposed.
This brings us to the heart of the matter: computer science is ubiquitous and present in a great many products in industry and everyday life. The diverse papers of this conference demonstrate this. See for yourself how broad the procedures, algorithms, methods, and technology applications are: from augmented reality, to video transmission in the operating room, to standards for structured data and artificial intelligence, the contributions show how far-reaching computer science has become. They all share one thing: the human-centered application of technology, which is understood as the basis of all courses in the Master's program Human-centered Computing.
The motto of this year's Informatics Inside is, I find, impressively demonstrated by today's generative AI tools. ChatGPT, Midjourney, and the like enable an innovative interaction with information that challenges us to rethink our previous notions of cognition and value creation. This necessity has been known in computer science since the 1930s, but only the practical realization on modern computers makes the formal considerations tangible. The resulting uncertainties, for example regarding jobs, are both a challenge and an opportunity to bring this important topic to a broad public. Here it becomes clear once again how deeply computer science affects our lives and what responsibility comes with it. Against this grand backdrop, the reference to bits and bytes in the conference motto might almost seem like an insignificant detail, yet nothing could be further from the truth. Sequences of zeros and ones still form the building blocks of computer science, and it is the task of applied computer science to combine them into useful and meaningful applications.
Informatics Inside provides an appropriate framework for this already during academic training. Our students plan, organize, and run this conference independently every year. The topics of the contributions were likewise chosen by the students themselves. In my view, the resulting papers in these proceedings reflect very well the exciting variety of application topics in Human-centered Computing. They also clearly show our students' willingness to take responsibility for a meaningful and creative shaping of the digital future.
Reutlingen, 15 November 2023
Prof. Dr. rer. medic. Christian Thies