Informatik
Saving energy and improving road safety have become increasingly important in recent decades; hence, several driver assistance systems have been developed to improve driving behaviour. However, these systems address either energy efficiency or safety. Furthermore, they consider neither the driver's reaction to a shown recommendation nor the driver's stress level. In this paper, the decision process for showing a recommendation to the driver in an energy-efficient and safety-relevant driving system is presented. The decision process takes the driver's reaction to a shown recommendation and the driver's stress into account in order to increase user acceptance and road safety. The results of the evaluation showed that the driving system was able to show recommendations when needed, while suppressing recommendations when the driver repeatedly ignored a recommendation or when the driver was under stress.
The situation in the markets is changing rapidly and competition in the business sector is intensifying. As a result, corporate marketing decisions are based on creating greater value for the consumer, which creates competitiveness and provides an advantage in competing for future customer loyalty. The purpose of this study is to determine whether there is a link between marketing communication tools and consumer perceived value in the pursuit of consumer loyalty. Qualitative (observational research) and quantitative (a questionnaire survey) research methods were used to investigate the problem empirically. The observational research elucidated the value provided to the consumer by the research objects through marketing communication tools, supplementing the key questions for the quantitative study. Correlation and regression analysis were used in the study, with the results showing a statistically significant relationship between marketing communication tools and consumer perceived value in terms of user loyalty. It has also been determined that the greatest and strongest relationship in consumer value creation through marketing communication tools is the appropriate, mutually coordinated and complementary use of a package of marketing communication tools to achieve synergies that create the preconditions for increasing consumer loyalty in a competitive market.
The introduction of smart contracts has expanded the applicability of blockchains to many domains beyond finance and cryptocurrencies. Moreover, different blockchain technologies have evolved that target special requirements. As a result, in practice, a combination of different blockchain systems is often required to achieve an overall goal. However, due to the heterogeneity of blockchain protocols, the execution of distributed business transactions that span several blockchains leads to multiple interoperability and integration challenges. Therefore, in this article, we examine the domain of Cross-Chain Smart Contract Invocations (CCSCIs), which are distributed transactions that involve the invocation of smart contracts hosted on two or more blockchain systems. We conduct a systematic multivocal literature review to obtain an overview of the available CCSCI approaches. We select 20 formal literature studies and 13 high-quality gray literature studies, extract data from them, and analyze it to derive the CCSCI Classification Framework. With the help of the framework, we group the approaches into two categories and eight subcategories. The approaches differ in multiple characteristics, e.g., the mechanisms they follow, and the capabilities and transaction processing semantics they offer. Our analysis indicates that all approaches suffer from obstacles that complicate real-world adoption, such as limited support for handling heterogeneity and the need for trusted third parties.
Electronic word-of-mouth (eWoM) communication has received a lot of attention from the academic community. As multiple research papers focus on specific facets of eWoM, there is a need to integrate current research results systematically. Thus, this paper presents a scientific literature analysis in order to determine the current state of the art in the field of eWoM.
The scoring of sleep stages is one of the essential tasks in sleep analysis. Since a manual procedure requires considerable human and financial resources, and incorporates some subjectivity, an automated approach could offer several advantages. There have been many developments in this area, and in order to provide a comprehensive overview, it is essential to review relevant recent works and summarise the characteristics of the approaches, which is the main aim of this article. To achieve it, we examined articles published between 2018 and 2022 that dealt with the automated scoring of sleep stages. In the final selection for in-depth analysis, 125 articles were included after reviewing a total of 515 publications. The results revealed that automatic scoring demonstrates good quality (with Cohen's kappa above 0.80 and accuracy above 90%) in analysing EEG/EEG + EOG + EMG signals. At the same time, it should be noted that there has been no breakthrough in the quality of results using these signals in recent years. Systems involving other signals that could potentially be acquired more conveniently for the user (e.g. respiratory, cardiac or movement signals) remain more challenging to implement with a high level of reliability but have considerable innovation capability. In general, automatic sleep stage scoring has excellent potential to assist medical professionals while providing an objective assessment.
This paper reviews suggestions for changes to database technology coming from the work of many researchers, particularly those working with evolving big data. We discuss new approaches to remote data access and standards that better provide for durability and auditability in settings including business and scientific computing. We propose ways in which the language standards could evolve, with proof-of-concept implementations on GitHub.
Intraoperative imaging can assist neurosurgeons in defining brain tumours and other surrounding brain structures. Interventional ultrasound (iUS) is a convenient modality with fast scan times. However, iUS data may suffer from noise and artefacts which limit their interpretation during brain surgery. In this work, we use two deep learning networks, namely UNet and TransUNet, to perform automatic and accurate segmentation of the brain tumour in iUS data. Experiments were conducted on a dataset of 27 iUS volumes. The outcomes show that combining a transformer with UNet is advantageous, providing efficient segmentation by modelling long-range dependencies within each iUS image. In particular, the enhanced TransUNet was able to predict cavity segmentation in iUS data with an inference rate of more than 125 FPS. These promising results suggest that deep learning networks can be successfully deployed to assist neurosurgeons in the operating room.
Purpose: Gliomas are the most common and aggressive type of brain tumors due to their infiltrative nature and rapid progression. The process of distinguishing tumor boundaries from healthy cells is still a challenging task in the clinical routine. Fluid attenuated inversion recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, namely DeepSeg, for fully automated detection and segmentation of the brain lesion using FLAIR MRI data.
Methods: The developed DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding and decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is inserted into the decoder part to obtain the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as residual neural network (ResNet), dense convolutional network (DenseNet), and NASNet have been utilized in this study.
Results: The proposed deep learning architectures have been successfully tested and evaluated online on the MRI datasets of the brain tumor segmentation (BraTS 2019) challenge, including 336 cases as training data and 125 cases as validation data. The Dice and Hausdorff distance scores of the obtained segmentation results are about 0.81 to 0.84 and 9.8 to 19.7, respectively.
Conclusion: This study showed successful feasibility and comparative performance of applying different deep learning models in a new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is open source and freely available at https://github.com/razeineldin/DeepSeg/.
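The encoder-decoder structure described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' implementation (which is available in the linked repository): the backbone choice, layer sizes, and input shape are assumptions.

```python
# Minimal sketch (not the authors' code): a U-Net-style encoder-decoder where
# the encoder is a classification backbone, as the abstract describes.
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderDecoderSeg(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        resnet = models.resnet18(weights=None)  # stand-in for ResNet/DenseNet/NASNet
        # Encoder: spatial feature extraction (classification head removed).
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])  # (B, 512, H/32, W/32)
        # Decoder: upsample the semantic map back to full resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 128, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # full-resolution logits

model = EncoderDecoderSeg(num_classes=2)
logits = model(torch.randn(1, 3, 224, 224))  # e.g. a FLAIR slice replicated to 3 channels
print(logits.shape)  # torch.Size([1, 2, 224, 224])
```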
Normal breathing during sleep is essential for people's health and well-being. Therefore, it is crucial to diagnose apnoea events at an early stage and apply appropriate therapy. Detection of sleep apnoea is the central goal of the system design described in this article. To develop a correctly functioning system, it is first necessary to clearly define the requirements, which are outlined in this manuscript. Furthermore, the selection of an appropriate technology for measuring respiration is of great importance. Therefore, after performing an initial literature search, we analysed three different methods in detail and selected the most suitable one according to the determined requirements. After considering all the advantages and disadvantages of the three approaches, we decided to use the one based on impedance measurement. As a next step, an initial conceptual design of the algorithm for detecting apnoea events was created. As a result, we developed an activity diagram in which the main system components and data flows are visually represented.
We present an approach for segmenting individual cells and lamellipodia in epithelial cell clusters using fully convolutional neural networks. The method will set the basis for measuring cell cluster dynamics and expansion to improve the investigation of collective cell migration phenomena. The fully learning-based front-end avoids classical feature engineering, yet the network architecture needs to be designed carefully. Our network predicts how likely each pixel belongs to one of the classes and, thus, is able to segment the image. Besides characterizing segmentation performance, we discuss how the network will be further employed.
Detecting the adherence of driving rules in an energy-efficient, safe and adaptive driving system
(2016)
An adaptive and rule-based driving system is being developed that tries to improve driving behavior in terms of energy efficiency and safety by giving recommendations. Therefore, the driving system has to monitor the adherence of driving rules by matching the rules to the driving behavior. However, existing rule matching algorithms are not sufficient, as the data within a driving system changes frequently. In this paper, a rule matching algorithm is introduced that is able to handle frequently changing data within the context of the driving system. 15 journeys were used to evaluate the performance of the rule matching algorithms. The results showed that the introduced algorithm outperforms existing algorithms in the context of the driving system. Thus, the introduced algorithm is suited for matching frequently changing data against rules with higher performance, which is why it will be used in the driving system for the detection of broken energy-efficiency or safety-relevant driving rules.
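To illustrate the kind of problem the algorithm addresses, the sketch below shows one simple way to match rules against frequently changing data: indexing rules by the signals they reference so that an update re-evaluates only the affected rules. This is our illustration, not the paper's algorithm; all rule names, signals, and thresholds are hypothetical.

```python
# Illustrative sketch only (not the paper's algorithm): rules indexed by the
# signals they reference, so a signal update re-evaluates just the affected rules.
from collections import defaultdict

rules = {
    "shift_up_early": (["rpm"], lambda s: s["rpm"] > 2500),
    "keep_safe_distance": (["distance_m", "speed_kmh"],
                           lambda s: s["distance_m"] < s["speed_kmh"] / 2),
}

index = defaultdict(list)  # signal name -> rules that depend on it
for name, (signals, _) in rules.items():
    for sig in signals:
        index[sig].append(name)

state = {"rpm": 1800, "distance_m": 50.0, "speed_kmh": 80.0}

def update(signal, value):
    """Apply a sensor update and return the rules that now match (are broken)."""
    state[signal] = value
    return [name for name in index[signal] if rules[name][1](state)]

print(update("rpm", 3000))         # ['shift_up_early']
print(update("distance_m", 35.0))  # ['keep_safe_distance']
```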
Uncontrolled movements of laparoscopic instruments can lead to inadvertent injury of adjacent structures. The risk becomes evident when the dissecting instrument is located outside the field of view of the laparoscopic camera. Technical solutions to ensure patient safety are appreciated. The present work evaluated the feasibility of an automated binary classification of laparoscopic image data using convolutional neural networks (CNNs) to determine whether the dissecting instrument is located within the laparoscopic image section. A unique record of images was generated from six laparoscopic cholecystectomies in a surgical training environment to configure and train the CNN. By using a temporary version of the neural network, the annotation of the training image files could be automated and accelerated. A combination of oversampling and selective data augmentation was used to enlarge the fully labelled image data set and prevent loss of accuracy due to imbalanced class volumes. Subsequently, the same approach was applied to the comprehensive, fully annotated Cholec80 database. The described process led to the generation of extensive and balanced training image data sets. The performance of the CNN-based binary classifiers was evaluated on separate test records from both databases. On our recorded data, an accuracy of 0.88 with regard to the safety-relevant classification was achieved. The subsequent evaluation on the Cholec80 data set yielded an accuracy of 0.84. The presented results demonstrate the feasibility of a binary classification of laparoscopic image data for the detection of adverse events in a surgical training environment using a specifically configured CNN architecture.
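The class-balancing strategy mentioned above (oversampling combined with data augmentation) can be sketched as follows. This is an illustrative PyTorch sketch under assumed transforms and a hypothetical data folder, not the configuration used in the study.

```python
# Sketch of balancing an imbalanced binary image dataset via weighted
# oversampling plus augmentation; paths and transforms are assumptions.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

ds = datasets.ImageFolder("laparoscopy_frames/", transform=augment)  # hypothetical folder
labels = torch.tensor(ds.targets)
class_counts = torch.bincount(labels)
weights = (1.0 / class_counts.float())[labels]  # rarer class drawn more often
sampler = WeightedRandomSampler(weights, num_samples=len(ds), replacement=True)
loader = DataLoader(ds, batch_size=32, sampler=sampler)  # balanced batches
```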
The focus of the developed maturity model is on processes. The concept of the widespread CMM and its practices has been transferred to the perioperative domain and forms the basis of the new maturity model. Additional optimization goals as well as technological and networking-specific aspects enable a process- and object-focused view within the maturity model in order to ensure broad coverage of different subareas. The evaluation showed that the model is applicable to the perioperative field. Adjustments and extensions of the maturity model are future steps to improve the rating and classification of the new maturity model.
Recent advances in artificial intelligence have enabled promising applications in neurosurgery that can enhance patient outcomes and minimize risks. This paper presents a novel system that utilizes AI to aid neurosurgeons in precisely identifying and localizing brain tumors. The system was trained on a dataset of brain MRI scans and utilized deep learning algorithms for segmentation and classification. Evaluation of the system on a separate set of brain MRI scans demonstrated an average Dice similarity coefficient of 0.87. The system was also evaluated through a user experience test involving the Department of Neurosurgery at the University Hospital Ulm, with results showing significant improvements in accuracy and efficiency as well as reduced cognitive load and stress levels. Additionally, the system has demonstrated adaptability to various surgical scenarios and provides personalized guidance to users. These findings indicate the potential for AI to enhance the quality of neurosurgical interventions and improve patient outcomes. Future work will explore integrating this system with robotic surgical tools for minimally invasive surgeries.
Development of an expert system to overpass citizens technological barriers on smart home and living
(2023)
Adopting new technologies can be overwhelming, even for people with experience in the field. For the general public, keeping up with new implementations, releases, brands, and enhancements can cause them to lose interest. There is a clear need to create point sources and platforms that provide helpful information about novel and smart technologies, assisting users, technicians, and providers with products and technologies. The purpose of these platforms is twofold, as they can gather and share information on interests common to manufacturers and vendors. This paper presents the "Finde-Dein-SmartHome" tool, developed in association with the Smart Home & Living competence center [5] to help users learn about, understand, and purchase available technologies that meet their home automation needs. This tool aims to lower the usability barrier and guide potential customers to resolve their doubts about privacy and pricing. Communities can use the information provided by this tool to identify market trends that could eventually lower costs for providers and incentivize access to innovative home technologies and devices supporting long-term care.
Sustainable technologies are being increasingly used in various areas of human life. While they have a multitude of benefits, they are particularly useful in health monitoring, especially for certain groups of people, such as the elderly. However, there are still several issues that need to be addressed before their use becomes widespread. This work aims to clarify the aspects that are of great importance for increasing the acceptance of this type of technology among the elderly. In addition, we aim to clarify whether the technologies that are already available are able to ensure acceptable accuracy and whether they could replace some of the manual approaches that are currently being used. A two-week study with people 65 years of age and over was conducted to address the questions posed here, and the results were evaluated. It was demonstrated that simplicity of use and automatic functioning play a crucial role. It was also concluded that technology cannot yet completely replace traditional methods such as questionnaires in some areas. Although the technologies that were tested were classified as "easy to use", the elderly participants in the current study indicated that they were not sure that they would use these technologies regularly in the long term because the added value is not always clear, among other issues. Therefore, awareness-raising must take place in parallel with the development of technologies and services.
How well is a company positioned digitally? How far along is it compared with other companies in its industry? Digital maturity models are well suited to finding this out. They provide a description of the current situation, stimulate reflection on the important questions of digitalization, and show which factors influence one another. Used continuously, they can serve as a means of monitoring the digital transformation process.
Digitalization and enterprise architecture management: a perspective on benefits and challenges
(2023)
Many companies digitally transform their business models, processes, and services. They have also been using Enterprise Architecture Management approaches for a long time to synchronize corporate strategy and information technology. Such digitalization projects bring different challenges for Enterprise Architecture Management. Without understanding and addressing them, Enterprise Architecture Management projects will fail or not deliver the expected value. Since existing research has not yet addressed these challenges, they were investigated in a qualitative expert study with leading industry experts from Europe. Furthermore, potential benefits of digitalization projects for Enterprise Architecture Management were researched. Our results provide a theoretical framework consisting of five identified challenges, their triggers, and a number of benefits. Furthermore, we discuss in what ways digitalization and EAM are a promising topic for future research.
A holistic approach to digitization enables decision-makers to achieve new efficiency in corporate performance management. Digitalization improves the quality, validity and speed of information retrieval and processing. At present, most corporations are confronted with the problem of not being able to organize, categorize and visualize decision-relevant information. To meet the challenges of information management, the Management Cockpit provides an information center for managers. In accordance with the specific working environment of executives, the Management Cockpit offers a quick and comprehensive overview of the company's situation. Today, the current situation of a company is no longer influenced only by internal factors, but also by its public image. Social media monitoring and analysis is therefore a crucial component for the external factors of successful management. Real-time monitoring of the emotions and behaviors of consumers and customers thus contributes to effective controlling of all business areas. Intelligent factories promise to collect data for internal factors, but the current reality in manufacturing looks different. Production often consists of a large number of different machines, with varying degrees of digitization and limited sensor data availability. In order to close this gap, we developed a compact sensor board with network components, which allows a flexible design with different sensors for a wide variety of applications. The sensor data enable decision-makers to adapt the supply chain based on their internal and external observations in the Management Cockpit. Due to its real-time and long-term monitoring and analytic possibilities, the Management Cockpit provides a multi-dimensional view of the company and supports a holistic corporate performance management.
Enterprise Governance, Risk and Compliance (GRC) systems are key to managing risks threatening modern enterprises from many different angles. A key constituent of GRC systems is the definition of controls that are implemented on the different layers of an Enterprise Architecture (EA). Controls become part of a “concern” of the EA, which allows an EA viewpoint to be used to cover control compliance assessments. In this article we explore this relationship further, derive a metamodel linking controls and the EA, and elicit how this linkage gives rise to a hierarchical understanding of the viewpoint concept for EAs. We complement these considerations with an expository instantiation in a cockpit for control compliance applied in an international enterprise in the insurance industry.
Context
Web APIs are one of the most used ways to expose application functionality on the Web, and their understandability is important for efficiently using the provided resources. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking.
Objective
We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) if rule violations are also perceived as more difficult to understand, and 3) if demographic attributes like REST-related experience have an influence on this.
Method
We conducted a controlled Web-based experiment with 105 participants, from both industry and academia and with different levels of experience. Based on a hybrid between a crossover and a between-subjects design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a rule and one that was a violation of this rule. Participants answered comprehension questions and rated the perceived difficulty.
Results
For 11 of the 12 rules, we found that the violation variant performed significantly worse than the rule-adherent variant in the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance for violations.
Conclusions
Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
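For illustration, the snippet below shows a hypothetical rule/violation pair in the spirit of the studied API snippets; it is not taken from the experimental material. It contrasts a commonly cited RESTful naming rule (plural nouns for collections, no verbs in URIs) with a violation of it.

```python
# Hypothetical rule/violation pair (not from the study's material), in the
# spirit of the rule-adherent vs. violating snippets the experiment compared.
rule_adherent = "GET /customers/42/orders"        # collection named with a plural noun
rule_violation = "GET /customer/42/getOrderList"  # singular noun plus a verb in the URI
```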
Purpose – This paper aims to complement the current understanding about user engagement in electronic word-of-mouth (eWoM) communications across online services and product communities. It examines the effect of the senders’ prior experience with products and services, and their extent of acquaintance with other community members, on user engagement with the eWoM.
Design/methodology/approach – The study used a sample of 576 unique user postings from the corporate fan page of two German firms: a service community of a telecom provider and a product community of a car manufacturer. Multiple regression analysis is used to test the conceptual model.
Findings – Senders’ prior experience and acquaintance positively affect user engagement with eWoM, and these effects differ across communities for products and services and across their influence on “likes” and “comments”. The results also suggest that communities for products are orientated toward information sharing, while those discussing services engage in information building.
Research limitations/implications – This research explains mechanisms of user engagement with eWoM and opens directions for future research around motives, content and social media tools within the structures of online communities. The insights on information-handling dimensions of online tools and antecedents to their use contribute to the research on two topics prioritized by the Marketing Science Institute – "Measuring and Communicating the Value of Online Marketing Activities and Investments" and "Leveraging Digital/Social/Mobile Technology".
Practical implications – This research offers insights for firms to leverage user engagement and facilitate eWoM generation through members who have a higher number of acquaintances or who have more experience with the product or service. Executives should concentrate their community engagement strategies on the identification and utilization of power users. The conceptualization and empirical test about the role of likes and comments will help social media managers to create and better capture value from their social media metrics.
Originality/value – The insights about the underlying factors that influence engagement with eWoM advance our understanding about the usage of online content.
We examine the role of communication from users on dropout from digital learning systems to answer the following questions: (1) how does the sentiment within qualitative signals (user comments) affect dropout rates? (2) does the variance in the proportion of positive and negative sentiments affect dropout rates? (3) how do quantitative signals (e.g. likes) moderate the effect of the qualitative signals? and (4) how does the effect of qualitative signals on dropout rates change across early and late stages of learning? Our hypotheses draw from learning theory and self-regulation theory, and were tested using data on 447 learning videos across 32 series of online tutorials, spanning 12 different fields of learning. The findings indicate a main effect of negative sentiment on dropout rates but no effect of positive sentiment on preventing dropout behaviour. This main effect is stronger in the early stages of learning and weakens at later stages. We also observe an effect of the extent of variance of positive and negative sentiments on dropout behaviour. The effects are negatively moderated by quantitative signals. Overall, making commenting more broad-based rather than polarised can be a useful strategy in managing learning, transferring knowledge, and building consensus.
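The moderation hypothesis above (quantitative signals moderating the effect of sentiment) is typically tested with an interaction term in a regression. The sketch below shows the general form using statsmodels; the variable names and toy data are assumptions, not the authors' exact model or data.

```python
# Minimal sketch of testing a moderation effect via an interaction term;
# variable names and data are illustrative, not the authors' specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "dropout_rate":  [0.21, 0.35, 0.18, 0.40, 0.25, 0.31, 0.12, 0.28],
    "neg_sentiment": [0.10, 0.45, 0.05, 0.60, 0.20, 0.35, 0.02, 0.25],
    "likes":         [120, 15, 300, 8, 90, 40, 500, 60],
})

# Main effect of negative sentiment plus its interaction with likes (the moderator).
model = smf.ols("dropout_rate ~ neg_sentiment * likes", data=df).fit()
print(model.summary())
```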
A new class of information system architecture, decision-oriented service systems, is becoming increasingly widespread. Decision-oriented service systems provide services that support decisions in business processes and products based on the capabilities of cloud-computing environments. To pave the way for the creation of design methods for business processes and products based on decision-oriented service systems, this article introduces a capability-oriented approach. Starting from technological capabilities, more abstract operational and dynamic capabilities are created. The resulting framework is based on an integrated conceptualization of decision-oriented service systems that allows synergetic effects to be captured. By creating the framework, the gap between technological capabilities and the strategic goals of enterprises is to be narrowed.
"Learning by doing" in Higher Education in technical disciplines is mostly realized by hands-on labs. It challenges the exploratory aptitude and curiosity of a person. But, exploratory learning is hindered by technical situations that are not easy to establish and to verify. Technical skills are, however, mandatory for employees in this area. On the other side, theoretical concepts are often compromised by commercial products. The challenge is to contrast and reconcile theory with practice. Another challenge is to implement a self-assessment and grading scheme that keeps up with the scalability of e-learning courses. In addition, it should allow the use of different commercial products in the labs and still grade the assignment results automatically in a uniform way. In two European Union funded projects we designed, implemented, and evaluated a unique e-learning reference model, which realizes a modularized teaching concept that provides easily reproducible virtual hands-on labs. The novelty of the approach is to use software products of industrial relevance to compare with theory and to contrast different implementations. In a sample case study, we demonstrate the automated assessment for the creative database modeling and design task. Pilot applications in several European countries demonstrated that the participants gained highly sustainable competences that improved their attractiveness for employment.
Fatigue and drowsiness are responsible for a significant percentage of road traffic accidents. There are several approaches to monitoring driver drowsiness, ranging from the driver's steering behavior to the analysis of the driver, e.g. eye tracking, blinking, yawning, or electrocardiogram (ECG). This paper describes the development of a low-cost ECG sensor to derive heart rate variability (HRV) data for drowsiness detection. The work includes hardware and software design. The hardware was implemented on a printed circuit board (PCB) designed so that the board can be used as an extension shield for an Arduino. The PCB contains a double, inverted ECG channel including low-pass filtering and provides two analog outputs to the Arduino, which combines them and performs the analog-to-digital conversion. The digital ECG signal is transferred to an NVidia embedded PC where the processing takes place, including QRS-complex, heart rate, and HRV detection as well as visualization features. The resulting compact sensor provides good results in the extraction of the main ECG parameters. The sensor is being used in a larger framework, where facial-recognition-based drowsiness detection is combined with ECG-based detection to improve the recognition rate under unfavorable light or occlusion conditions.
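The processing chain described above (QRS detection, then heart rate and HRV) can be sketched as follows. The sampling rate, peak-detection thresholds, and the choice of RMSSD as the HRV measure are illustrative assumptions, not the described firmware.

```python
# Sketch of the downstream processing: QRS detection on a digital ECG, then
# heart rate and a simple HRV measure (RMSSD). Parameters are assumptions.
import numpy as np
from scipy.signal import find_peaks

FS = 250  # assumed sampling rate in Hz

def hr_and_hrv(ecg: np.ndarray):
    # R-peaks: prominent maxima at least 0.4 s apart (i.e. < 150 bpm here).
    peaks, _ = find_peaks(ecg, distance=int(0.4 * FS),
                          prominence=0.6 * np.max(np.abs(ecg)))
    rr = np.diff(peaks) / FS                            # RR intervals in seconds
    heart_rate = 60.0 / np.mean(rr)                     # mean heart rate in bpm
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000   # HRV (RMSSD) in ms
    return heart_rate, rmssd

# Synthetic test signal: ~1 Hz spike train standing in for R-peaks.
t = np.arange(0, 10, 1 / FS)
ecg = np.where((t % 1.0) < (1 / FS), 1.0, 0.0) + 0.01 * np.random.randn(t.size)
print(hr_and_hrv(ecg))  # approximately (60 bpm, near-zero RMSSD)
```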
With on-demand access to compute resources, pay-per-use, and elasticity, the cloud has evolved into an attractive execution environment for High Performance Computing (HPC). Whereas elasticity, which is often referred to as the most beneficial cloud-specific property, has been heavily used in the context of interactive (multi-tier) applications, elasticity-related research in the HPC domain is still in its infancy. Existing parallel computing theory as well as traditional metrics to analytically evaluate parallel systems do not comprehensively consider elasticity, i.e., the ability to control the number of processing units at runtime. To address these issues, we introduce a conceptual framework to understand elasticity in the context of parallel systems, define the term elastic parallel system, and discuss novel metrics for both elasticity control at runtime and the ex post performance evaluation of elastic parallel systems. Based on the conceptual framework, we provide an in-depth analysis of existing research in the field to describe the state of the art and compile our findings into a research agenda for future research on elastic parallel systems.
Empirical software engineering experts on the use of students and professionals in experiments
(2018)
Using students as participants remains a valid simplification of reality needed in laboratory contexts. It is an effective way to advance software engineering theories and technologies but, like any other aspect of study settings, should be carefully considered during the design, execution, interpretation, and reporting of an experiment. The key is to understand which portion of the developer population is being represented by the participants in an experiment. Thus, a proposal for describing experimental participants is put forward.
Enhancing data-driven algorithms for human pose estimation and action recognition through simulation
(2020)
Recognizing human actions, reliably inferring their meaning and being able to potentially exchange mutual social information are core challenges for autonomous systems when they directly share the same space with humans. Intelligent transport systems in particular face this challenge, as interactions with people are often required. The development and testing of technical perception solutions is done mostly on standard vision benchmark datasets, for which manual labelling of sensory ground truth has been a tedious but necessary task. Furthermore, rarely occurring human activities are underrepresented in these datasets, leading to algorithms not recognizing such activities. For this purpose, we introduce a modular simulation framework, which offers the ability to train and validate algorithms on various human-centred scenarios. We describe the usage of simulation data to train a state-of-the-art human pose estimation algorithm to recognize unusual human activities in urban areas. Since the recognition of human actions can be an important component of intelligent transport systems, we investigated how simulations can be applied for this purpose. Laboratory experiments show that we can train a recurrent neural network with only simulated data based on motion capture data and 3D avatars, which achieves an almost perfect performance in the classification of those human actions on real data.
Entrepreneurial software engineering: towards a hybrid development method for early-stage startups
(2021)
A considerable share of innovative software-intensive products is developed by startups. However, product development in an early-stage startup is not a sequential process. A business idea is usually based on a number of assumptions. The riskiest assumptions need to be tested. Depending on the test results, a product strategy may change several times. This raises the question of how to create sufficiently stable software using engineering principles despite a dynamic product strategy that is subject to many uncertainties. Hybrid development methods that combine agile aspects with classical engineering methods seem to be a good choice in such a start-up context. This paper proposes a lightweight hybrid development method that provides early-stage startups with a framework to support the development of single-feature minimum viable products. The method was derived from a start-up company's founding case and evaluated in expert interviews. The proposed method is intended to provide a basis for discussion between practitioners and scientists with the aim of better understanding the application of software engineering principles in software start-ups.
Background: Endoscopic surgical procedures have become the gold standard in paranasal sinus surgery. The resulting challenges for surgical training can be met by using virtual reality (VR) training simulators. A number of simulators for paranasal sinus surgery have been developed to date. However, previous studies on the training effect were conducted only with medically trained participants, or the course of the effect over time was not reported.
Methods: A CT dataset of the paranasal sinuses was segmented and converted into a three-dimensional polygonal surface model, which was textured using original photographic material. Interaction with the virtual environment took place via a haptic input device. During the simulation, the parameters procedure duration and number of errors were recorded. Ten participants each completed a training unit consisting of five practice runs on ten consecutive days.
Results: Four participants reduced the required time by more than 60% over the course of the training period. Four participants reduced their number of errors by more than 60%. Eight of the ten participants showed an improvement in both parameters. Over the entire measured period, the median duration of the procedure was reduced by 46 seconds and the median number of errors by 191. Testing for a relationship between the two parameters revealed a positive correlation.
Conclusion: In summary, training on the paranasal sinus simulator considerably improves performance even in inexperienced individuals, both in terms of the duration and the accuracy of the procedure.
Elasticity is considered to be the most beneficial characteristic of cloud environments, which distinguishes the cloud from clusters and grids. Whereas elasticity has become mainstream for web-based, interactive applications, it is still a major research challenge how to leverage elasticity for applications from the high-performance computing (HPC) domain, which heavily rely on efficient parallel processing techniques. In this work, we specifically address the challenges of elasticity for parallel tree search applications. Well-known meta-algorithms based on this parallel processing technique include branch-and-bound and backtracking search. We show that their characteristics render static resource provisioning inappropriate and the capability of elastic scaling desirable. Moreover, we discuss how to construct an elasticity controller that reasons about the scaling behavior of a parallel system at runtime and dynamically adapts the number of processing units according to user-defined cost and efficiency thresholds. We evaluate a prototypical elasticity controller based on our findings by employing several benchmarks for parallel tree search and discuss the applicability of the proposed approach. Our experimental results show that, by means of elastic scaling, the performance can be controlled according to user-defined thresholds, which cannot be achieved with static resource provisioning.
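A minimal sketch of such an elasticity controller is shown below: it periodically compares measured efficiency and cost against user-defined thresholds and doubles or halves the number of processing units. The thresholds, the doubling policy, and the estimator/scaling callbacks are assumptions for illustration, not the evaluated prototype.

```python
# Illustrative control loop (not the evaluated prototype): adapt the number of
# processing units so efficiency stays above, and cost below, user thresholds.
import time

def elasticity_controller(get_efficiency, get_cost_per_hour, scale_to,
                          n_units=4, min_eff=0.6, max_cost=20.0,
                          interval_s=60, max_units=64):
    while True:
        eff = get_efficiency(n_units)        # e.g. measured speedup / n_units
        cost = get_cost_per_hour(n_units)
        if eff < min_eff and n_units > 1:
            n_units //= 2                    # scale in: efficiency below threshold
        elif eff >= min_eff and cost < max_cost and n_units < max_units:
            n_units *= 2                     # scale out: cost budget has headroom
        scale_to(n_units)                    # apply the new degree of parallelism
        time.sleep(interval_s)               # control interval

# Example wiring with stub callbacks:
# elasticity_controller(lambda n: 0.9 / (1 + 0.02 * n),
#                       lambda n: 0.5 * n,
#                       lambda n: print("scale to", n))
```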
Erfolg durch Kooperation
(2009)
The scoring of sleep stages is an essential part of sleep studies. The main objective of this research is to provide an algorithm for the automatic classification of sleep stages using signals that may be obtained in a non-obtrusive way. After reviewing the relevant research, the authors selected multinomial logistic regression as the basis for their approach. Several parameters were derived from movement and breathing signals, and their combinations were investigated to develop an accurate and stable algorithm. The implemented algorithm produced successful results: the accuracy of the recognition of Wake/NREM/REM stages is 73%, with Cohen's kappa of 0.44 for the analyzed 19,324 sleep epochs of 30 seconds each. This approach has the advantage of using only movement and breathing signals, which can be recorded with less effort than heart or brainwave signals, and of requiring only four derived parameters for the calculations. Therefore, the new system is a significant improvement for non-obtrusive sleep stage identification compared to existing approaches.
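The classification step described above can be sketched as follows: a multinomial logistic regression over four parameters derived from movement and breathing signals, with one prediction per 30-second epoch. The synthetic data and feature layout are assumptions, not the paper's exact parameters.

```python
# Sketch of the classification step: multinomial logistic regression over four
# derived parameters; synthetic data stands in for the real features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # the four derived parameters
y = rng.choice(["Wake", "NREM", "REM"], size=1000)  # stage label per 30 s epoch

# The default lbfgs solver fits a full multinomial model for >2 classes.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))                           # predicted stage per epoch
```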
Nowadays, the importance of early active patient mobilization in the recovery and rehabilitation phase has increased significantly. One way to involve patients in the treatment is a gamification-like approach, which is one method of motivation used in various life processes. This article presents a system prototype for patients who require physical activity because of active early mobilization after medical interventions or during illness. Bedridden patients and people with a sedentary lifestyle (predominantly lying in bed) are also potential users. The main idea behind the concept was a contactless system implementation so that patients experience no extra effort during its usage. The system consists of three related parts: hardware, software, and a game application. To test the relevance and coherence of the system, it was used by 35 people. The participants were asked to play a video game requiring them to make body movements while lying down. They were then asked to take part in a small survey to evaluate the system's usability. As a result, we offer a prototype consisting of hardware and software parts that can increase and diversify physical activity during active early mobilization of patients and prevent the occurrence of possible health problems due to predominantly low activity. The proposed design could be implemented in hospitals, rehabilitation centers, and even at home.
Purpose
Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results in medical image analysis across several applications. Yet the lack of explainability of deep neural models is considered the principal obstacle to applying these methods in clinical practice.
Methods
In this study, we propose a NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent.
Results
NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using the magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN.
Conclusion
Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI.
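As a flavour of what such explanation methods compute, the sketch below derives a plain gradient saliency map for a classifier. This is our generic illustration, not NeuroXAI's code, and the backbone and input are stand-ins; see the linked repository for the actual implementations.

```python
# Generic gradient-saliency sketch (not NeuroXAI's code): the gradient of the
# predicted class score w.r.t. the input highlights influential pixels.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()          # stand-in backbone
x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for an MR slice
score = model(x)[0].max()                             # logit of the predicted class
score.backward()                                      # backpropagate to the input
saliency = x.grad.abs().max(dim=1).values             # (1, 224, 224) attention map
print(saliency.shape)
```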
In the last few years, business firms have invested substantially in artificial intelligence (AI) technology. However, according to several studies, a significant percentage of AI projects fail or do not deliver business value. Due to the specific characteristics of AI projects, the existing body of knowledge about success and failure of information systems (IS) projects in general may not be transferable to the context of AI. Therefore, the objective of our research has been to identify factors that can lead to AI project failure. Based on interviews with AI experts, this article identifies and discusses 12 factors that can lead to project failure. The factors can be further classified into five categories: unrealistic expectations, use case related issues, organizational constraints, lack of key resources, and technological issues. This research contributes to knowledge by providing new empirical data and synthesizing the results with related findings from prior studies. Our results have important managerial implications for firms that aim to adopt AI by helping organizations to anticipate and actively manage risks in order to increase the chances of project success.
Adoption of artificial intelligence (AI) has risen sharply in recent years, but many firms are not successful in realising the expected benefits or even terminate projects before completion. While a number of previous studies highlight challenges in AI projects, the critical factors that lead to project failure are mostly unknown. The aim of this study is therefore to identify distinct factors that are critical for the failure of AI projects. To address this, interviews with experts in the field of AI from different industries were conducted and the results were analyzed using qualitative analysis methods. The results show that both organizational and technological issues can cause project failure. Our study contributes to knowledge by reviewing previously identified challenges in terms of their criticality for project failure based on new empirical data, as well as by identifying previously unknown factors.
The relative pros and cons of using students or practitioners in experiments in empirical software engineering have been discussed for a long time and continue to be an important topic. Following the recent publication of “Empirical software engineering experts on the use of students and professionals in experiments” by Falessi, Juristo, Wohlin, Turhan, Münch, Jedlitschka, and Oivo (EMSE, February 2018) we received a commentary by Sjøberg and Bergersen. Given that the topic is of great methodological interest to the community and requires nuanced treatment, we invited two editorial board members, Martin Shepperd and Per Runeson, respectively, to provide additional views.
Online credit card fraud presents a significant challenge in the field of eCommerce. In 2012 alone, the total loss due to credit card fraud in the US amounted to $54 billion. Online games merchants in particular have difficulties applying standard fraud detection algorithms to achieve timely and accurate detection. This paper describes the special constraints of this domain and highlights the reasons why conventional algorithms are not quite effective in dealing with this problem. Our suggested solution to the problem originates from the field of feature construction joined with the field of temporal sequence data mining. We present feature construction techniques which are able to create discriminative features based on a sequence of transactions and are able to incorporate time into the classification process. In addition, a framework is presented that allows for an automated and adaptive change of features in case the underlying pattern changes.
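The idea of sequence-based feature construction can be sketched as follows: per-account aggregates over the transaction history turn a single transaction into a feature vector that carries temporal context. The concrete features below are illustrative assumptions, not the paper's feature set.

```python
# Illustrative sketch (not the paper's exact features): per-account aggregates
# over the transaction sequence bring temporal context into the classification.
import pandas as pd

tx = pd.DataFrame({
    "account": ["a", "a", "a", "b", "b"],
    "ts": pd.to_datetime(["2024-01-01 10:00", "2024-01-01 10:05",
                          "2024-01-01 23:00", "2024-01-02 09:00",
                          "2024-01-02 09:01"]),
    "amount": [9.99, 9.99, 250.0, 5.0, 5.0],
}).sort_values(["account", "ts"])

g = tx.groupby("account")
tx["secs_since_prev"] = g["ts"].diff().dt.total_seconds()            # inter-purchase time
tx["amount_vs_mean"] = tx["amount"] / g["amount"].transform("mean")  # deviation from habit
tx["tx_count_1h"] = (g[["ts", "amount"]]                             # burst detection:
                     .rolling("1h", on="ts").count()["amount"]       # purchases within
                     .reset_index(level=0, drop=True))               # the last hour
print(tx)
```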
The promise of immutable documents to make it easier and less expensive for consumers and producers to collaborate in a verifiable way would represent enormous progress, especially as companies strive to establish service contracts that are based on the flow of many small transactions using machine-to-machine communication. Blockchain technology logs these data, verifies their authenticity and makes them available for service offers. This work presents an architecture that enables order processing between consumers and producers to be set up using blockchain. In this way, the technical feasibility is shown, and the special characteristics of blockchain production networks are discussed.
There is no single, all-encompassing management method for holistic performance management. Rather, what matters is the interplay of all success-critical management disciplines within an integrative management system, in which all actors and stakeholders pull together in a coordinated way, even with differing focuses and perspectives. It is critical to success, however, that a company-specific adaptation is planned, composed and interlinked against a holistic background of experience. Management cockpits can make a valuable contribution as a staged solution by serving as an integration layer that generates transparency and a communication platform for holistic performance management, even if complete functional, methodological, process-related and technical integration has not yet been fully accomplished.
Geometry of music perception
(2022)
Prevalent neuroscientific theories are combined with acoustic observations from various studies to create a consistent geometric model for music perception in order to rationalize, explain and predict psycho-acoustic phenomena. The space of all chords is shown to be a Whitney stratified space. Each stratum is a Riemannian manifold which naturally yields a geodesic distance across strata. The resulting metric is compatible with voice-leading satisfying the triangle inequality. The geometric model allows for rigorous studies of psychoacoustic quantities such as roughness and harmonicity as height functions. In order to show how to use the geometric framework in psychoacoustic studies, concepts for the perception of chord resolutions are introduced and analyzed.
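Formally, the distance in question is a geodesic (path-length) metric; the notation below is a sketch under assumed symbols, not the paper's, but the triangle inequality it states is exactly the compatibility property mentioned in the abstract.

```latex
% Sketch (assumed notation): geodesic distance between chords x and y as the
% infimum of path lengths, and the triangle inequality the metric satisfies.
\[
  d(x,y) \;=\; \inf_{\substack{\gamma(0)=x \\ \gamma(1)=y}}
               \int_0^1 \lVert \dot{\gamma}(t) \rVert \, \mathrm{d}t ,
  \qquad
  d(x,z) \;\le\; d(x,y) + d(y,z) .
\]
```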
Where do I meet my customers? What do I learn from my users' feedback? How do I measure success? In social networks, you have to ask the right questions, says Internet researcher Prof. Alexander Rossmann. His study "Auf der Suche nach dem Return on Social Media" at the University of St. Gallen once caused quite a stir.
This paper explores the application of People Analytics in recruiting professors for universities of applied sciences. Using data-driven personas, the research project aims to identify and communicate the different paths and connections leading candidates to a professorship. The authors introduce the concept of personas, describe the underlying data source and derive an example for the current project.
Software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. Educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. Industry training has similar requirements of relevance, as companies seek to keep their workforce up to date with technological advances. Real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. A way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. In this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. We give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers in including empirical studies in software engineering courses. Furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to the awareness of the advantages of applying software engineering principles. Having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life.
Handling complexity in modern software engineering : editorial introduction to issue 32 of CSIMQ
(2022)
The Internet and related digital technologies, such as the Internet of Things (IoT), cognition and artificial intelligence, data analytics, services computing, cloud computing, mobile systems, collaboration networks, and cyber-physical systems, are both strategic drivers and enablers of modern digital platforms with fast-evolving ecosystems of intelligent services for digital products. This issue of CSIMQ presents three recent articles on modern software engineering. First, we focus on continuous software development and place it in the context of software architectures and digital transformation. The first contribution is followed by a description of the basis for specific security requirements and adequate digital monitoring mechanisms. Finally, we present a practical example of the digital management of livestock farming.
The DGCH (German Society of Surgery) is registering an increasing number of complaints from clinical practice regarding the incomplete networking and integration of device systems in the surgical operating room. The number, range of functions and degree of complexity of the devices in use are constantly increasing, making their operation ever more laborious and thus more difficult and error-prone, so that better workflow support is desirable. At the request of the Secretary General, the Section for Computer- and Telematics-Assisted Surgery (CTAC) of the DGCH has therefore undertaken to take stock of the current situation and to evaluate possible approaches to improving the status quo.
To evaluate the quality of sleep, it is important to determine how much time was spent in each sleep stage during the night. The gold standard in this domain is overnight polysomnography (PSG). However, recording the necessary electrophysiological signals is extensive and complex, and the environment of the sleep laboratory, which is unfamiliar to the patient, might lead to distorted results. In this paper, a sleep stage detection algorithm is proposed that uses only the heart rate signal, derived from the electrocardiogram (ECG), as a discriminator. This would make it possible for sleep analysis to be performed at home, saving considerable effort and money. From the heart rate, three parameters were calculated using the fast Fourier transform (FFT) in order to distinguish between the different sleep stages. ECG data along with a hypnogram scored by professionals were used from the PhysioNet database, making it easy to compare the results. With an agreement rate of 41.3%, this approach is a good foundation for future research.
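The spectral parameter extraction described above can be sketched as follows: FFT band powers computed from an evenly resampled heart-rate series. The three bands follow standard HRV conventions and are assumptions here, as the paper's exact parameters are not specified in the abstract.

```python
# Sketch of FFT-based parameters from a heart-rate series; the VLF/LF/HF band
# limits are standard HRV conventions, assumed rather than taken from the paper.
import numpy as np

def band_powers(hr_resampled: np.ndarray, fs: float = 4.0):
    """Spectral power in VLF/LF/HF bands of an evenly resampled HR signal."""
    spec = np.abs(np.fft.rfft(hr_resampled - hr_resampled.mean())) ** 2
    freqs = np.fft.rfftfreq(hr_resampled.size, d=1.0 / fs)
    bands = {"VLF": (0.003, 0.04), "LF": (0.04, 0.15), "HF": (0.15, 0.40)}
    return {name: spec[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# Synthetic 5-minute heart-rate trace sampled at 4 Hz with an HF oscillation.
t = np.arange(0, 300, 0.25)
hr = 60 + 2 * np.sin(2 * np.pi * 0.25 * t)
print(band_powers(hr))  # power concentrated in the HF band
```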