004 Informatik
We analyze economics PhDs' collaborations in peer-reviewed journals from 1990 to 2014 and investigate the quality of these collaborations in relation to each co-author's research quality, field, and specialization. We find that a greater overlap between co-authors' previous research fields is significantly related to greater publication success of the co-authors' joint work, and this result is robust to alternative specifications. Co-authors who engage in a distant collaboration are significantly more likely to have a large research overlap, but this significance is lost when co-authors' social networks are accounted for. High-quality collaboration is more likely to emerge from an interaction between specialists and generalists with overlapping fields of expertise. Interaction across subfields of economics (interdisciplinarity) is more likely to be conducted by co-authors who already have interdisciplinary portfolios than by co-authors who are specialized, or are stars, in different subfields.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system designs, hurt their performance and scalability. Near-Data Processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible. The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits using RocksDB and the COSMOS hardware platform.
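To make the idea of an explicitly defined, cross-layer data format with a pushed-down accessor more concrete, the following minimal Python sketch shows a fixed record layout and an in-situ scan over a raw block, as an NDP accessor might execute it on-device. The field layout, names, and values are illustrative assumptions only and do not reproduce the formats used with RocksDB or the COSMOS platform.

```python
# Sketch: an explicit record format plus an accessor that could, in principle,
# be pushed down to an NDP device. Layout and field names are assumptions.
import struct

RECORD_FORMAT = struct.Struct("<I d 16s")   # key: uint32, value: float64, tag: 16 bytes

def encode(key: int, value: float, tag: bytes) -> bytes:
    """Serialize one record into the fixed cross-layer layout."""
    return RECORD_FORMAT.pack(key, value, tag)

def ndp_scan(block: bytes, predicate):
    """In-situ scan over a block of fixed-size records, yielding matches."""
    for offset in range(0, len(block), RECORD_FORMAT.size):
        key, value, tag = RECORD_FORMAT.unpack_from(block, offset)
        if predicate(key, value):
            yield key, value, tag.rstrip(b"\0")

# Usage: build a block of four records and filter it "near the data".
block = b"".join(encode(i, i * 0.5, b"sensor") for i in range(4))
print(list(ndp_scan(block, lambda k, v: v > 0.6)))
```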
Continuous monitoring of individual vital parameters can provide information for assessing a person's health and indications of medical problems in the context of personalized medicine. Correlations between parameters and health issues are to be evaluated. As one project in this topic area, a telemedicine platform is implemented to gather data from outpatients via wearables and accumulate it for physicians and researchers to review. This work extracts requirements, draws use case scenarios, and shows the current system architecture, consisting of a patient application, a physician application with a web server, and a backend server application. In further work, the prototype will assist in developing a vendor-free and open monitoring solution. Functionality and usability will be evaluated in an imminent first study.
Imagine a world in which the search for tomorrow's trends of (software) products is not subject to a long and laborious data search but is possible with a single mouse click. The use of artificial intelligence (AI) makes this reality possible, and it is to be further advanced through research. The study therefore aims to provide an initial overview of this young research field. Based on research, expert interviews, and company and student surveys, current application possibilities of AI in the innovation process (defined as Smart Innovation) and existing challenges that slow down further development are discussed in more detail, and future application possibilities are presented. Finally, a recommendation for action is made for business, politics, and science to help overcome the current obstacles together and thus drive the future of Smart Innovation.
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from the use of hyper-nodes and hyper-edges, which allow data structures to be represented at different abstraction levels. We prove that the model is at least equivalent in expressive power to the most popular data models. Therefore, it can be used as a supermodel for model management and data integration. We illustrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML models, and RDF Schema.
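As a rough illustration of what "schema-bound" means for such a model, the following Python sketch declares node and (hyper-)edge types and rejects properties that violate the declared schema. The type names, fields, and validation logic are assumptions for illustration only and do not reproduce the authors' Typed Graph Model or its hyper-node/hyper-edge formalism.

```python
# Sketch of a schema-bound typed property graph; names and fields are assumed.
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class NodeType:
    name: str
    properties: Dict[str, type]          # property name -> expected Python type

@dataclass
class EdgeType:
    name: str
    source: str                          # allowed source node type
    targets: List[str]                   # hyper-edge: may connect several node types

@dataclass
class TypedGraph:
    node_types: Dict[str, NodeType]
    edge_types: Dict[str, EdgeType]
    nodes: Dict[str, Dict[str, Any]] = field(default_factory=dict)

    def add_node(self, node_id: str, type_name: str, **props):
        """Reject nodes whose properties do not match the declared schema."""
        schema = self.node_types[type_name]
        for key, value in props.items():
            expected = schema.properties.get(key)
            if expected is None or not isinstance(value, expected):
                raise TypeError(f"{key} violates schema of {type_name}")
        self.nodes[node_id] = {"type": type_name, **props}

# Usage: a 'Person' node type whose properties are checked on insertion.
g = TypedGraph({"Person": NodeType("Person", {"name": str, "age": int})}, {})
g.add_node("p1", "Person", name="Ada", age=36)       # accepted
# g.add_node("p2", "Person", name="Bob", age="old")  # would raise TypeError
```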
Deep learning-based EEG detection of mental alertness states from drivers under ethical aspects
(2021)
One of the most critical factors for a safe road trip is a high degree of alertness while driving. Even a split second of inattention or sleepiness at a crucial moment can make the difference between life and death. Several prestigious car manufacturers are currently pursuing automated drowsiness detection to address this problem. The path between neuroscientific research combined with artificial intelligence on the one hand and the preservation of the dignity and inviolability of the human individual on the other is very narrow. The key contribution of this work is a system for analyzing EEG data recorded during a driving session, which draws on previous studies analyzing heart rate (ECG), brain waves (EEG), and eye function (EOG). The gathered data is treated as sensitively as possible, taking ethical regulations into consideration. Obtaining evaluable signs of evolving exhaustion involves techniques that extract sleep-stage frequencies; the correlated interferences in the signal are problematic here. This research focuses on a processing chain for EEG band splitting that involves band-pass filtering, principal component analysis (PCA), independent component analysis (ICA) with automatic artefact removal, and the fast Fourier transform (FFT). The classification is based on a step-by-step adaptive deep learning analysis that detects theta rhythms as a drowsiness predictor in the pre-processed data. An offline detection rate of 89% and an online detection rate of 73% were achieved. The method is tied to the simulated driving scenario for which it was developed, which leaves room for further optimization of laboratory methods and data collection during wakefulness-dependent operations.
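To illustrate the kind of preprocessing chain described above, the following Python sketch applies a band-pass filter, PCA, ICA, and an FFT-based theta-band power estimate to synthetic multi-channel EEG. The sampling rate, band edges, component counts, and placeholder data are assumptions for illustration; the study's automatic artefact removal and adaptive deep-learning classifier are not reproduced here.

```python
# Minimal sketch of an EEG preprocessing chain (band-pass, PCA, ICA, FFT);
# all parameter values are illustrative assumptions, not the study's settings.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA, FastICA

FS = 256.0  # assumed sampling rate in Hz

def bandpass(eeg, low=1.0, high=40.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass over each channel (rows = channels)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)

def theta_band_power(signal, fs=FS, band=(4.0, 8.0)):
    """Relative power in the theta band, a simple drowsiness indicator."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    total = spectrum.sum()
    theta = spectrum[(freqs >= band[0]) & (freqs < band[1])].sum()
    return theta / total if total > 0 else 0.0

# eeg: (n_channels, n_samples) array; random placeholder data stands in for a recording.
eeg = np.random.randn(8, int(30 * FS))
filtered = bandpass(eeg)
reduced = PCA(n_components=6).fit_transform(filtered.T)          # decorrelate channels
sources = FastICA(n_components=6, random_state=0).fit_transform(reduced)
# Artefact-laden independent components would be identified and dropped here (not shown).
theta_scores = [theta_band_power(sources[:, i]) for i in range(sources.shape[1])]
print(theta_scores)
```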
For over 12 years now, Informatics Inside has been held as a computer science conference at Hochschule Reutlingen, this year for the second time in a semi-annual rhythm, i.e. also in autumn. This scientific conference of the Master's program Human-Centered Computing is organized and run by the students themselves. During their Master's studies, they are given the opportunity to explore a subject of their own choosing in depth. This can be done at the university, in a company, at a research institute, or abroad. It is precisely this flexible design of the module "Wissenschaftliche Vertiefung" (scientific specialization) that leads to the very broad range of topics addressed by the students. Beyond the actual subject-specific work, presenting and defending scientific results also plays an important role, well beyond the degree program. Preparing and communicating a chosen subject area so that it becomes understandable to non-specialists is always a particular challenge. The students take on this challenge at the autumn conference on scientific specialization on 24 November 2021. For the fourth time, the event will be held online, including a virtual accompanying program.
The range of topics at this year's autumn conference is once again very diverse and broad. Among others, you can expect contributions from the health sector, machine learning, AI and VR, as well as marketing and e-learning. What they all have in common is a very strong connection to innovative computer science approaches, which is also reflected in the conference's play on words and motto, "RockIT Science". Computer science permeates almost all professional and private areas of application and has an increasingly large influence on our daily lives. This can cause concern on the one hand and enthusiasm on the other. It is precisely the latter that the students want to spark with their contributions, and for once let things "rock" in the computer science sector.
Together with many success stories, promises such as increased production speed and improved stakeholder collaboration have contributed to making agile a transformation of the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to the development methods and practices used. Our findings suggest that only a small number of participants (under 15%) operate their projects in a purely traditional or agile manner. That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level; additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software-process-related research in the light of our findings and present a roadmap for future research.
Prior to the introduction of AI-based forecast models in the procurement department of an industrial retail company, we assessed the digital skills of the procurement employees and surveyed their attitudes toward a new digital technology. The aim of the survey was to ascertain important contextual factors that are likely to influence the acceptance and successful use of the new forecast tool. We find that the employees' digital skills are at an intermediate level and that their attitudes toward key aspects of new digital technologies are largely positive. The conditions for high acceptance and successful use of the models are thus good, as evidenced by the procurement staff's strong intention to use them. In line with previous research, we find that the perceived usefulness of a new technology and its perceived ease of use are significant drivers of the willingness to use the new forecast tool.