Informatik
Significant advances have been achieved in mobile robot localization and mapping in dynamic environments; however, most existing approaches cannot deal with the physical properties of automotive radar sensors. In this paper we present an accurate and robust solution to this problem by introducing a memory-efficient cluster map representation. Our approach is validated by experiments on a public parking space with pedestrians, moving cars, and different parking configurations, providing a challenging dynamic environment. The results prove its ability to reproducibly localize our vehicle within an error margin below 1% with respect to ground truth using only point-based radar targets. A decay process enables our map representation to support local updates.
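To make the idea of a decaying cluster map concrete, the following is a minimal sketch in Python: each map cluster carries a confidence weight that is boosted by matching radar detections and decays otherwise, so stale structure (e.g., re-parked cars) fades out and local updates become possible. The decay rate, match radius, and pruning threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a decaying cluster map: clusters gain confidence from
# matching radar detections and lose it over time; faded clusters are pruned.
import numpy as np

class ClusterMap:
    def __init__(self, decay: float = 0.95, match_radius: float = 0.5):
        self.centers = np.empty((0, 2))  # cluster centers in the map frame (m)
        self.weights = np.empty(0)       # per-cluster confidence
        self.decay = decay
        self.match_radius = match_radius

    def update(self, detections: np.ndarray) -> None:
        self.weights *= self.decay  # age all clusters (the decay process)
        for d in detections:
            if len(self.centers):
                dist = np.linalg.norm(self.centers - d, axis=1)
                i = int(np.argmin(dist))
                if dist[i] < self.match_radius:
                    self.weights[i] += 1.0  # reinforce the matched cluster
                    continue
            self.centers = np.vstack([self.centers, d])  # start a new cluster
            self.weights = np.append(self.weights, 1.0)
        keep = self.weights > 0.1  # prune clusters that have faded away
        self.centers, self.weights = self.centers[keep], self.weights[keep]
```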
In any autonomous driving system, the map used for localization plays a vital part that is often underestimated. The map describes the world around the vehicle beyond the sensor view and is a main input to the decision-making process in highly complicated scenarios. Thus there are strict requirements on the accuracy and timeliness of the map. We present a robust and reliable approach to crowd-based mapping using a GraphSLAM framework based on radar sensors. We show on a parking lot that the localization results are accurate and reliable in dynamically changing environments, even in unexplored terrain without any map data. This is achieved through collaborative map updates from multiple vehicles. To support these claims experimentally, the Joint Graph Optimization is compared to the ground truth on an industrial parking space. Mapping performance is evaluated using a dense map from a total station as reference, and localization results are compared with a deeply coupled DGPS/INS system.
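A minimal sketch of the GraphSLAM principle behind such joint graph optimization, reduced to one dimension for brevity: poses are nodes, odometry and loop-closure measurements are weighted edges, and the optimized trajectory is the least-squares solution of the resulting linear system. The real radar-based system is 2-D/3-D and nonlinear; this sketch only illustrates the underlying math with invented measurements.

```python
# 1-D pose-graph least squares: minimize sum of w * ((x_j - x_i) - z)^2.
import numpy as np

# Edges: (i, j, measured offset x_j - x_i, information weight)
edges = [(0, 1, 1.0, 1.0), (1, 2, 1.1, 1.0), (2, 3, 0.9, 1.0),
         (0, 3, 3.2, 2.0)]  # last edge acts like a loop-closure constraint
n = 4

H = np.zeros((n, n))  # information matrix
b = np.zeros(n)       # information vector
for i, j, z, w in edges:
    H[i, i] += w; H[j, j] += w
    H[i, j] -= w; H[j, i] -= w
    b[i] -= w * z; b[j] += w * z

H[0, 0] += 1e6  # anchor the first pose at 0 (removes gauge freedom)
x = np.linalg.solve(H, b)
print(x)  # optimized 1-D poses, blending odometry and loop closure
```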
The development of a medical device typically takes several years. Legal requirements, such as the German Medical Devices Implementation Act (Medizinprodukte-Durchführungsgesetz), determine which steps must be carried out during development, and compliance must be demonstrated in the technical documentation. The technical documents this documentation contains are created over the course of development; they build on one another and cross-reference each other, which leads to heterogeneous and hard-to-navigate structures. Traceability offers a solution to this problem: it allows the requirements for the medical device to be linked to documents such as the requirements catalog, the user requirements specification, or the design specification. It thus remains traceable at any time which requirements are connected to which tests, changes, or results. Another important process in medical device development is usability engineering, which is intended to ensure the safety of a medical device and to minimize risks during its use. This process produces many artifacts, such as usability reports. To keep track of all usability data, they can be linked with the help of traceability. This article identifies the requirements that usability engineering in medical technology places on traceability.
Building savings contracts (Bausparverträge) are combined savings and financing instruments designed for the general population. In 2020, there were approximately 25 million building savings contracts in Germany. A major part of the product's attractiveness for customers lies in its high flexibility, which allows the contract to be adapted to individual financing needs over its lifetime. During the savings phase, this includes in particular options to increase, reduce, or split contracts, as well as relatively flexible adjustment of the savings rate. Once a contract is ready for allocation, the savings phase can be continued within certain time limits. During the loan phase, flexible unscheduled repayments are possible at any time and without a prepayment penalty.
The numerous embedded options influence one another and must always be considered and managed holistically in their combined effect. Empirical experience from recent decades shows option-exercise behavior by customers that is guided by financial-mathematical considerations but does not proceed in a fully financially rational way.
Digital Enterprise Architecture allows multiple viewpoints on a company's IT landscape. To gain valuable information from huge amounts of operational data, it is indispensable to have both an understanding of the operations architecture and an engine capable of managing Big Data. The mechanism for understanding huge amounts of data is based on three main steps: collect, process, and use. The main idea is to extract valuable information from Big Data in order to make better design decisions. The Elastic Stack is an open-source solution for handling Big Data scenarios comfortably and quickly.
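As an illustration of the collect, process, and use steps, here is a minimal sketch using the official Python client for Elasticsearch, the core of the Elastic Stack. The index name, event fields, and aggregation are hypothetical examples for illustration, not details from the paper.

```python
# Collect an operational event, index it, and aggregate over it.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local Elastic Stack

# Collect: a single operational event from some monitored component.
event = {
    "service": "order-service",
    "latency_ms": 182,
    "status": 200,
    "@timestamp": datetime.now(timezone.utc).isoformat(),
}

# Process: index the event so it becomes searchable.
es.index(index="ops-metrics", document=event)
es.indices.refresh(index="ops-metrics")

# Use: aggregate average latency across all collected events.
resp = es.search(
    index="ops-metrics",
    size=0,
    aggs={"avg_latency": {"avg": {"field": "latency_ms"}}},
)
print(resp["aggregations"]["avg_latency"]["value"])
```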
Measuring cardiorespiratory parameters during sleep using non-contact sensors and the ballistocardiography technique has received much attention because the method is low-cost, unobtrusive, and non-invasive. Designing a user-friendly, simple-to-use, easy-to-deploy, and robust system remains open and challenging due to the complex morphology of the signal. In this work, using four force-sensitive resistor sensors, we conducted a study with four sensor distributions in order to simplify the complexity of the system by identifying the region of interest for heartbeat and respiration measurement. The sensors are deployed under the mattress and attached to the bed frame without any interference with the subjects. The four distributions comprise two linear horizontal arrangements, one linear vertical arrangement, and one square arrangement, covering the region that influences cardiorespiratory activity. We recruited 4 subjects and acquired data in four regular sleeping positions, each for a duration of 80 seconds. The signal processing was performed using the discrete wavelet transform (bior3.9 wavelet, smoothing level 4) as well as bandpass filtering. The results indicate a mean absolute error of 2.35 and 4.34 for respiration and heartbeat, respectively. The results suggest the efficiency of a triangle-shaped structure of three sensors for measuring heartbeat and respiration parameters in all four regular sleeping positions.
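A minimal sketch of the described signal-processing chain, combining bandpass filtering with a bior3.9 wavelet decomposition at level 4 (via SciPy and PyWavelets). The sampling rate, filter band, and synthetic input signal are assumptions for illustration, not values from the paper.

```python
# Bandpass filtering plus bior3.9 wavelet smoothing of a BCG-like signal.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

fs = 100.0                      # assumed sampling rate in Hz
t = np.arange(0, 80, 1 / fs)    # 80 s recording, as in the study
bcg = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)

# Band-pass filter around plausible cardiac frequencies (assumed band).
b, a = butter(4, [0.5, 10.0], btype="bandpass", fs=fs)
heart_band = filtfilt(b, a, bcg)

# Wavelet decomposition with bior3.9 at level 4; keeping only the level-4
# approximation coefficients acts as the smoothing step.
coeffs = pywt.wavedec(heart_band, "bior3.9", level=4)
smoothed = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                        "bior3.9")[: len(bcg)]
```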
Assistive environments are entering our homes faster than ever. However, there are still various barriers to be broken. One of the crucial points is the personalization of offered services and the integration of assistive technologies into common objects and therefore into the regular daily routine. Recognition of sleep patterns for a preliminary sleep study is one of the health services that could be performed in an undisturbing way. This article proposes a hardware system for the measurement of bio-vital signals necessary for an initial sleep study in a non-obtrusive way. The first results confirm the potential of measuring breathing and movement signals with the proposed system.
The goal of the presented project is to develop the concept of home e-health centers for barrier-free and cross-border telemedicine. AAL technologies are already present on the market, but there is still a gap to close before they can be used for ordinary patient needs. The general idea needs to be accompanied by new services, which should be brought together in order to provide full service coverage for the users. Sleep and stress were chosen as predominant influences on the population's health. A scientific study of available home devices for sleep analysis provided the information necessary to select appropriate devices. The first choice for the project implementation is the EMFIT QS+ device. This equipment forms part of a complete home telemedicine system offering an adequate level of precision and communication with internal and/or external health services.
Autism spectrum disorders (ASD) affect a large number of children both in the Russian Federation and in Germany. Early diagnosis is key for these children: the sooner parents notice such disorders in a child and the rehabilitation and treatment program starts, the higher the likelihood of the child's social adaptation. The difficulties in raising such a child lie in the complexity of teaching the child outside of children's groups and the complexity of providing medical care. In this regard, the development of digital applications that facilitate the medical care and education of such children at home is important and relevant. The purpose of the project is to improve the availability and quality of healthcare and social adaptation at home for children with ASD through the use of digital technologies.
An ongoing challenge today is to lower the impact of dysfunctionality on quality of life through individual support. Against the background of an aging society and continuously increasing costs of care, a holistic solution is needed. This solution must integrate individual needs and preferences, locally available possibilities, regional conditions, and professional and informal caregivers, and provide the flexibility to implement future requirements. The proposed model is the result of a common initiative to overcome the major obstacles and to center a solution on the individual needs caused by dysfunctionality.
The citizen-centered health platform project is intended to provide a platform that can be used in EU cross-border regions, where social and economic exchange occurs across national borders. The overriding challenges are: (a) social: improving citizen-centered health and care provision; (b) technical: providing a digital platform for networking citizens, service providers, and municipal actors; (c) economic: developing long-term successful (sustainable) business models/value chains. The platform should strengthen and expand existing networks and establish new regional networks. Each network addresses particular challenges and approaches them in a region-specific manner. Here, the national boundary conditions and the interregional needs play an essential role. These objectives require sufficient participation of civil society representatives. Furthermore, the platform will establish an overarching, sustainable, and knowledge-based network of health experts. The platform is to be jointly developed and implemented in the regions and follow an open-access approach, so that synergies are shared more quickly, strengthening competencies and competitiveness. In addition to practice partners, scientific and municipal institutions and SMEs are involved. The actors thus contribute to scientific performance, innovative strength, and resilience.
The development of automatic solutions for the detection of physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data allow these data to be accessed faster and more efficiently. As a result, the development of artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, the development of deep learning models for detecting obstructive sleep apnea (OSA) events is at the forefront. Numerous scientific studies focus on the architecture of the models and the results that these models can provide in terms of OSA classification and Apnea-Hypopnea Index (AHI) calculation. However, little focus is put on other highly relevant aspects that are crucial for the training and performance of the models, among them the set of physiological signals used and the preprocessing tasks performed prior to model training. This paper covers the essential requirements that must be considered before training a deep learning model for obstructive sleep apnea detection, and surveys solutions that currently exist in the scientific literature by analyzing the preprocessing tasks prior to training.
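As an example of the preprocessing tasks discussed, the following sketch resamples, normalizes, and windows a raw signal before model training. The target sampling rate and window length are illustrative assumptions, not recommendations from the paper.

```python
# Typical preprocessing before training an OSA detector:
# resampling, per-recording normalization, and fixed-length windowing.
import numpy as np
from scipy.signal import resample_poly

def preprocess(signal: np.ndarray, fs_in: int, fs_out: int = 32,
               window_s: int = 60) -> np.ndarray:
    # Resample to a common rate so heterogeneous recordings align.
    x = resample_poly(signal, fs_out, fs_in)
    # Z-score normalization per recording.
    x = (x - x.mean()) / (x.std() + 1e-8)
    # Split into non-overlapping windows for the model.
    n = fs_out * window_s
    usable = (len(x) // n) * n
    return x[:usable].reshape(-1, n)

windows = preprocess(np.random.randn(3000 * 100), fs_in=100)
print(windows.shape)  # (num_windows, 32 * 60)
```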
Introduction
Despite its high accuracy, polysomnography (PSG) has several drawbacks for diagnosing obstructive sleep apnea (OSA). Consequently, multiple portable monitors (PMs) have been proposed.
Objective
This systematic review investigates the current literature on the sets of physiological parameters captured by PMs in order to select the minimum number of physiological signals while maintaining accurate results in OSA detection.
Methods
Inclusion and exclusion criteria for the selection of publications were established prior to the search. The evaluation of the publications was made based on one central question and several specific questions.
Results
The abilities to detect hypopneas, sleep time, or awakenings were among the features studied to assess the full functionality of the PMs and to select the most relevant set of physiological signals. Based on the number of physiological parameters collected (one to six), the PMs were classified into sets according to the level of evidence. The advantages and disadvantages of each possible set of signals are explained by answering the research questions proposed in the methods.
Conclusions
The minimum number of physiological signals detected by PMs for the detection of OSA depends mainly on the purpose and context of the sleep study. The set of three physiological signals showed the best results in the detection of OSA.
The massive use of patient data for the training of artificial intelligence algorithms is common in medicine nowadays. In this scientific work, a statistical analysis is performed on one of the datasets most frequently used for training artificial intelligence models for the detection of sleep disorders: the Sleep Heart Health Study 2. This study focuses on determining whether the gender and age of the patients have an influence relevant enough to justify working with datasets differentiated by these variables when training artificial intelligence models.
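A minimal sketch of the kind of group comparison described, using hypothetical column names rather than the actual SHHS-2 schema: a nonparametric test for gender differences in AHI and a rank correlation between age and AHI.

```python
# Group comparison on a hypothetical extract of a sleep dataset.
import pandas as pd
from scipy import stats

df = pd.read_csv("shhs2_subset.csv")  # hypothetical extract and schema

# Does AHI differ between genders? (Mann-Whitney U, no normality assumed.)
male = df.loc[df["gender"] == "M", "ahi"]
female = df.loc[df["gender"] == "F", "ahi"]
u, p_gender = stats.mannwhitneyu(male, female)

# Does AHI correlate with age? (Spearman rank correlation.)
rho, p_age = stats.spearmanr(df["age"], df["ahi"])

print(f"gender: p={p_gender:.4f}, age: rho={rho:.2f}, p={p_age:.4f}")
```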
The digital twin concept has long been widely known for asset monitoring in industry; a clear example is the automotive industry. Recently, there has also been significant interest in the application of digital twins in healthcare, especially in genomics as part of what is known as precision medicine. This work focuses on another medical specialty where digital twins can be applied: sleep medicine. However, there is still great controversy about the fundamentals of digital twins, such as what the concept is based on and how it can be included in healthcare effectively and sustainably. This article reviews digital twins and their role so far in what is known as personalized medicine. In addition, a series of steps is outlined for a possible implementation of a digital twin for a patient suffering from sleep disorders. For this, artificial intelligence techniques, clinical data management, and possible solutions for explaining the results derived from artificial intelligence models are addressed.
Many scientific works today use deep learning algorithms on time series to detect physiological events of interest. In sleep medicine, this is particularly relevant for detecting sleep apnea, specifically obstructive sleep apnea events. Deep learning algorithms with different architectures are used to achieve decent results in accuracy, sensitivity, etc. Although there are models that can reliably determine apnea and hypopnea events, another essential aspect to consider is the explainability of these models, i.e., why a model makes a particular decision. Another critical factor is how these deep learning models determine the severity of obstructive sleep apnea in patients based on the apnea-hypopnea index (AHI). This work presents deep learning models trained with two approaches to AHI determination, which vary depending on the data format the models are fed: full time series versus window-based time series.
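To make the window-based approach concrete, the following sketch turns per-window event predictions into an AHI estimate by merging consecutive positive windows into single events and normalizing to events per hour. Window length and the predictions themselves are placeholders, not outputs of the paper's models.

```python
# Window-based AHI estimation from per-window apnea/hypopnea predictions.
import numpy as np

window_s = 30                      # assumed window length in seconds
preds = np.random.rand(960) > 0.9  # placeholder per-window event predictions

# Merge consecutive positive windows into single events (count rising edges).
events = np.sum(np.diff(np.concatenate(([0], preds.astype(int)))) == 1)

total_sleep_h = len(preds) * window_s / 3600.0
ahi = events / total_sleep_h
print(f"AHI ~ {ahi:.1f} events/hour")
```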
The use of deep learning models with medical data is becoming more widespread. However, although numerous models have shown high accuracy in medical tasks such as medical image recognition (e.g., radiographs), there are still many obstacles to seeing these models operate in a real healthcare environment. This article presents a series of basic requirements that must be taken into account when developing deep learning models for biomedical time series classification, with the aim of facilitating the subsequent productive deployment of the models in healthcare. These requirements range from the correct collection of data to the existing techniques for properly explaining the results obtained by the models. This focus reflects the fact that one of the main reasons deep learning models are not more widely used in healthcare settings is their lack of clarity when it comes to explaining decision making.
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when used outside the hospital or on a daily basis. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) using deep learning models intended to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is placed on model explainability techniques, specifically Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
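A minimal sketch of a 1D-CNN of the kind described, built with Keras for binary classification of fixed-length signal windows; layer sizes and the input length are illustrative assumptions, not the paper's exact architecture. Grad-CAM would then be applied to the last convolutional layer to highlight the signal regions driving each decision.

```python
# Small 1D-CNN classifying signal windows as apnea event vs. normal.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_1d_cnn(window_len: int = 1920, channels: int = 1) -> tf.keras.Model:
    # window_len = 1920 corresponds to a 60 s window at 32 Hz (assumed).
    model = models.Sequential([
        layers.Input(shape=(window_len, channels)),
        layers.Conv1D(16, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(32, kernel_size=5, activation="relu"),  # Grad-CAM target
        layers.MaxPooling1D(4),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(apnea event)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_1d_cnn()
model.summary()
```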
This work is a comparative study of survey tools intended to help developers select a suitable tool for application in an AAL environment. The first step was to identify the basic functionality required of survey tools used for AAL technologies and to compare these tools by their functionality and assignments. The comparison was derived from the data obtained, previous literature studies, and further technical data. A list of requirements was stated and ordered by relevance to the target application domain. With the help of an integrated assessment method, a generalized estimate value was calculated, and the result is explained. Finally, the planned application of this tool in a running project is described.
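The integrated assessment can be illustrated with a small weighted-scoring sketch: each tool receives per-requirement ratings that are weighted by the requirement's relevance and combined into a single estimate value. The tools, requirements, weights, and ratings below are invented placeholders, not data from the study.

```python
# Weighted scoring of survey tools against relevance-weighted requirements.
weights = {"offline_use": 3, "data_export": 2, "accessibility": 3, "cost": 1}

ratings = {  # rating scale 0..5 per requirement (placeholder values)
    "ToolA": {"offline_use": 4, "data_export": 5, "accessibility": 3, "cost": 2},
    "ToolB": {"offline_use": 2, "data_export": 4, "accessibility": 5, "cost": 4},
}

def weighted_score(tool: dict) -> float:
    # Generalized estimate value: weighted mean of the per-requirement ratings.
    total = sum(weights[req] * tool[req] for req in weights)
    return total / sum(weights.values())

for name, tool in ratings.items():
    print(name, round(weighted_score(tool), 2))
```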
Modern component-based architectural styles, e.g., microservices, enable developing the components independently from each other. However, this independence can cause problems when it comes to managing issues, such as bugs, as developer teams can freely choose their technology stacks, including issue management systems (IMSs) such as Jira, GitHub, or Redmine. In a microservice architecture, if an issue of a downstream microservice depends on an issue of an upstream microservice, this must be both identified and communicated, and the downstream service's issues should link to the causing issue. Agile project management today requires efficient communication, which is why more and more teams communicate through comments in the issues themselves. Unfortunately, IMSs are not integrated with each other; thus, semantically linking these issues is not supported, and identifying such issue dependencies across different IMSs is time-consuming and requires manual searching in multiple IMS technologies. This results in many context switches and prevents developers from staying focused and getting things done. Therefore, in this paper, we present a concept for seamlessly integrating different IMS technologies with each other and providing better architectural context. The concept is based on augmenting the websites of issue management systems through a browser extension. We validate the approach with a prototypical implementation for the Chrome browser. For evaluation, we conducted expert interviews, which confirmed that the presented approach provides significant advantages for managing issues of agile microservice architectures.
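As a sketch of the cross-IMS lookups such an integration has to perform, the following fetches one issue from GitHub and one from Jira through their public REST APIs. The repository, issue numbers, Jira host, and credentials are hypothetical placeholders.

```python
# Fetching issues from two different IMS technologies via their REST APIs.
import requests

def github_issue(owner: str, repo: str, number: int) -> dict:
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{number}"
    r = requests.get(url, headers={"Accept": "application/vnd.github+json"})
    r.raise_for_status()
    return r.json()

def jira_issue(base_url: str, key: str, auth: tuple) -> dict:
    r = requests.get(f"{base_url}/rest/api/2/issue/{key}", auth=auth)
    r.raise_for_status()
    return r.json()

# Hypothetical upstream (GitHub) and downstream (Jira) issues of one dependency.
gh = github_issue("example-org", "orders-service", 42)
jr = jira_issue("https://example.atlassian.net", "PAY-17",
                auth=("user@example.com", "api-token"))
print(gh["title"], "<->", jr["fields"]["summary"])
```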
Gamification has been increasingly applied to software engineering education in the past. The approaches vary from applying game elements in a conceptual phase of the course to using specific tools to engage the students more and support their learning goals. However, existing tools usually have game elements, such as quizzes or challenges, but do not provide a more computer-game-like experience. Therefore, we aim to raise the gamified learning experience to another level by proposing Gamify-IT, a Unity- and web-based game platform intended to help students learn software engineering. It follows an immersive role-play game style in which students explore a world, find and solve minigames, and clear dungeons with SE tasks. Lecturers can configure the worlds, e.g., to add content hints. Furthermore, they can add and configure minigames and dungeons to include exercises in a fully gamified way. Thereby, they customize their course in Gamify-IT to adapt the world very precisely to other materials such as lectures or exercises. Results of an evaluation of our initial prototype show that (i) students like to engage with the platform, (ii) students are motivated to learn when using Gamify-IT, and (iii) the minigames support students in understanding the learning objectives.
As part of this in-depth scientific study, risk management is to be planned and carried out on the basis of an existing usability analysis of a mobile application. The application is part of an in vitro diagnostic device intended to support transplant patients in everyday life in assessing their blood values and state of health, as well as in taking the required medication in the correct dose.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. This work focuses on the analysis of requirements and the challenges arising from designing mobile medical applications with respect to the user interface. The paper describes the current status of the development of mobile medical apps and illustrates the development of the e-health market. The author explains the requirements and illustrates the hurdles and problems, referring to the German market, which is similar to the European one, and comparing it with the market in the USA.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. An essential component is the development of user interfaces for mobile medical applications. The conceptual process is crucial for the rest of the development process: inconsistencies or errors in the conceptual phase have a serious impact on all areas and can prevent certification for market approval.
This paper presents a guide to support developers in this process. It was developed based on an analysis of the legal requirements for bringing a medical device to market.
The invention relates to a wheelchair comprising a frame with wheels, a seat, and two footplates that are movable relative to the seat, as well as a training device for movement therapy of the lower extremities of a person sitting in the wheelchair. To simplify the training device, it contains an electric machine that can be operated independently of the wheelchair's driving motion, can be mounted on the frame, is controlled by a control unit, and is mechanically coupled to the two footplates in order to alternately force their displacement.
In recent years, the rise of social media received significant importance in marketing research and practice. Consequently, interfaces to social media platforms have also been integrated into Business-to-Business (B2B) salesforce applications, although very little is as yet known about their usage and general impact on B2B sales performance. This paper evaluates 1) the conceptualization of social media usage in dyadic B2B relationships; 2) the effects of a more differentiated usage construct on customer satisfaction; 3) antecedents of social media usage on multiple levels; and 4) the effectiveness of social media usage for different types of customers. The framework presented here is tested cross-industry against data collected from dyadic buyer-seller relationships in the IT service industry. The results elucidate the preconditions and the impact of social media usage strategies in B2B sales relations.
The question of why individuals adopt information technology has been present in information systems research for the past quarter century. One of the most used models for predicting technology usage was introduced by Fred Davis: the Technology Acceptance Model (TAM). It describes the influence of perceived usefulness and perceived ease of use on attitude, behavioral intention, and system usage. The first two factors are in turn influenced by external variables. Although a plethora of papers exists about the TAM, an extensive analysis of the role of the external variables in the model is still missing. This paper aims to give an overview of the most important variables. In an extensive literature review, we identified 763 relevant papers, found 552 unique external variables, characterized the most important of them, and described the frequency of their appearance. Additionally, we grouped these variables into four categories (organizational characteristics, system characteristics, user personal characteristics, and other variables). Afterwards, we discuss the results and show implications for theory and practice.
In recent times, enterprises have been increasingly dealing with the use of social media in internal communication and collaboration. In particular, so-called Enterprise Social Networks (ESN) promise meaningful benefits for the nature of work in corporations. However, these platforms often suffer from poor degrees of use. This raises the question of what initiatives enterprises can launch in order to stimulate the vitality of ESN. Since the use of ESN is often voluntary, individual adoption by employees needs to be examined to find an answer. Therefore, the Unified Theory of Acceptance and Use of Technology (UTAUT) model was selected as the theoretical foundation of this paper. Following a qualitative research approach, this research provides an analysis of expert interviews on specific ESN implementation strategies and the factors they include. In order to extensively conceptualize and generalize these strategic considerations, we conducted an inductive coding process. The results reveal that ESN implementation strategies can be understood as a multi-level construct (individual vs. group vs. organizational level) containing different factors depending on the degree of documentation and intensity. This research in progress describes a qualitative evaluation as a preliminary study for further quantitative analysis of an ESN adoption model.
The triumph of social media in the private sphere has demonstrated the advantages of these communication tools. Companies are trying to replicate this success and are using social media for their communication activities. In external communication, for example, these tools enable fast and uncomplicated message exchange with customers or help integrate customer expertise into organizational processes such as product development or customer complaint management. In internal communication, too, the use of social media creates new channels. A special group of social media tools for internal communication and collaboration is referred to as Enterprise Social Networks (ESN).
This research addresses the question of why employees use enterprise social networks (ESN). Against the background of technology acceptance research, we propose an extended unified theory of acceptance and use of technology (UTAUT) model, adapt it to an ESN context, and test our model against data from ESN users of large and medium-sized enterprises. We use partial least squares structural equation modeling to gain insights into the determinants of ESN use. This paper contributes to ESN acceptance research by evaluating a model containing determinants of ESN use. It also examines the effects of determinants on five different usage dimensions of ESN. The results reveal that facilitating conditions are the main driver of ESN use while the impact of intention to use is comparably small. Implications for theory and practice are discussed.
Purpose
As a response to the increased frequency of disruptive events and intense competition, organizational agility has become a key concept in organizational research. Fostering organizational agility requires leveraging knowledge that exists both outside (exploration) and inside (exploitation) the organization. This research tests the so-called ambidexterity hypothesis, which claims that a balance between exploration and exploitation leads to increased organizational outcomes, including the development of organizational agility. Complementing previously established measurement models on ambidexterity, this research proposes an alternative measurement model to analyze how ambidexterity can enhance organizational agility and, indirectly, performance, taking into consideration the moderating effect of environmental competitiveness.
Design/methodology/approach
A review of existing measurement models for ambidexterity shows that tension, a crucial aspect of ambidexterity, is often neglected. The authors, therefore, develop a new measurement model of ambidexterity to incorporate ambidexterity-induced tension. Using this measurement model, they examine the effect of ambidexterity on the development of entrepreneurial and adaptive agility as well as performance.
Findings
Ambidexterity positively influences both entrepreneurial and adaptive agility, indicating that a balance between exploration and exploitation has superior organizational effects. This finding confirms the ambidexterity hypothesis with respect to organizational agility. Furthermore, both entrepreneurial and adaptive agility drive organizational performance. These two indirect effects via agility fully mediate the impact of ambidexterity on organizational performance. Finally, environmental competitiveness positively moderates the relationship between ambidexterity and adaptive agility.
Originality/value
The findings extend research on ambidexterity by showing its positive effects on organizational agility. Furthermore, the study proposes an alternative operationalization to capture the ambidexterity construct that may lay the groundwork for further applications of the ambidexterity concept.
This research evaluates current measurement scales for ambidexterity and proposes a new approach for the measurement of this important construct. We argue that current measurement approaches may be unsuitable to capture the concept of ambidexterity. Through a systematic scale development process, we derive a measurement scale with dual items that simultaneously refer to both dimensions, exploitation and exploration, thus reflecting the true nature of ambidexterity. An extensive pre-test with 39 executives suggests that our scale is suitable for capturing ambidexterity. Our measurement model enhances conceptual clarity of ambidexterity and can serve as a base for future investigations of the concept.
Knowledge-intensive organizations primarily rely on knowledge and expertise as key strategic resources. In light of economic, social, and health-related crises in recent years, such organizations increasingly need to operate in dynamic environments. However, examinations on dynamic capabilities specifically in knowledge-intensive organizations remain scarce. This is remarkable given the role that knowledge holds as an economic resource in developed countries. To provide an explanation of how knowledge-intensive organizations can prevail among competitors under dynamic conditions, the authors integrate two literature streams in a knowledge-intensive context: the knowledge-based view and the dynamic capabilities approach. The knowledge-based view focuses on the nature of organizational knowledge as a critical resource and illustrates specific properties of knowledge in contrast to traditional means of labor such as capital. The dynamic capabilities approach on the other hand is about a firm's ability to integrate, build, and reconfigure internal and external resources and can be drawn on to explain organizational success through adaptation to dynamic contexts. In this conceptual study, the authors propose a research model linking knowledge processes to organizational performance through two different paths: (1) Operational capabilities permit organizations to make their living in the present and refer to efficiency. (2) Dynamic capabilities allow organizations to change their resource base and, therefore, enable their long-term survival in dynamic environments by focusing on effectiveness. Additionally, the authors hypothesize a moderating effect of environmental dynamics on the relationship between dynamic capabilities and performance. The study offers a comprehensive overview on the interplay between dynamic capabilities and the knowledge-based view, offering valuable insights for both researchers and practitioners in the field.
This paper provides an introduction to the topic of enterprise social networks (ESN) and illustrates possible applications, potentials, and challenges for future research. It outlines an analysis of research papers containing a literature overview in the field of ESN. Subsequently, single relevant research papers are analysed and further research potentials derived therefrom. This yields seven promising areas for further research: (1) user behaviour; (2) effects of ESN usage; (3) management, leadership, and governance; (4) value assessment and success measurement; (5) cultural effects, (6) architecture and design of ESN; and (7) theories, research designs and methods. This paper characterises these areas and articulates further research directions.
Organizations that operate under uncertainty need to cultivate their ability to manage their primary resource, knowledge, accordingly. Under such conditions, organizations are required to harvest knowledge from two sources: to explore knowledge that is to be found outside the organization as well as exploit knowledge that is contained within. In a knowledge management context, these exploitation and exploration activities have been conceptualized as knowledge ambidexterity. While ambidexterity has been studied extensively in contexts such as manufacturing or IT, the notion of knowledge ambidexterity remains scarce in current knowledge management research. This study illustrates knowledge ambidexterity and elaborates its positive impact on organizational performance. Our study furthermore answers the question of how the use of enterprise social media (ESM) can facilitate the performance effects of knowledge ambidexterity. Drawing on the theory of communication visibility, we argue that ESM (e.g., Microsoft Teams, Slack, etc.) allow employees to communicate unhindered while making these communications visible. This allows for capturing tacit knowledge within these communications; this form of knowledge is generally hard to codify and can be a source of competitive edge. With respect to knowledge ambidexterity, ESM use can capture tacit knowledge aspects originating from inside and outside the organization, which fosters the development of a competitive advantage and, thus, supports its positive effect on organizational performance. This paper contributes to IT-enabled ambidexterity research in two aspects: (1) It sheds light on knowledge ambidexterity and, thereby, addresses a major practical challenge for knowledge-intensive organizations, and (2) it elaborates on the effects that ESM use can have on the relationship between knowledge ambidexterity and organizational performance. This work-in-progress paper offers a better understanding of the phenomenon of ambidexterity in a knowledge context, while providing insights on the facilitating role of ESM. Our research serves as a foundation for future empirical examinations of the concept of knowledge ambidexterity.
Organizational agility may be an antidote against threats from volatile, uncertain, complex, or ambiguous corporate environments. While agility has been extensively examined in manufacturing enterprises, comparably less is known about agility in knowledge-intensive organizations. As results may not be transferable, there is still some confusion about how agility in knowledge-intensive organizations can be characterized, what factors facilitate its development, what its organizational effects are, and what environmental conditions favor these effects. This study closes these gaps by presenting a systematic literature review on agility in knowledge-intensive organizations. A systematic literature search led to a sample of 37 relevant papers for our review. Integrating the knowledge-based view and a dynamic capabilities perspective, we (1) present different relevant conceptualizations of organizational agility, (2) discuss relevant knowledge management-related as well as information technology-related capabilities that support the development of organizational agility, and (3) shed light on the moderating role of environmental conditions in enhancing organizational agility and its effect on organizational performance. This academic paper adds value to theory by synthesizing existing research on agility in knowledge-intensive organizations. It furthermore may serve as a map for closing research gaps by proposing an extensive agenda for future research. Our study expands existing literature reviews on agility with its specific focus on a knowledge-intensive context and its integration of the research streams of knowledge management capabilities as well as information technology capabilities. It integrates relevant organizational knowledge management practices and the use of knowledge management systems to ensure superior performance effects. Our study can serve as a base for future examinations of organizational agility by illustrating fruitful topics for further examination as well as open questions. It may also provide value to practitioners by showing what factors favor the development of agility in knowledge-intensive organizations and what organizational effects can be achieved under which conditions.
Minimally invasive surgery (MIS) is continuously evolving through the use of medical robots such as Intuitive Surgical's da Vinci system. This makes it possible to achieve a better or equivalent operation with significantly lower physical strain on the surgeon. However, new problems arise, such as collisions between robot arms and the time required to set up a suitable robot configuration. Efficient preparation and planning of the interventions is therefore required. This work presents an approach for improved planning using augmented reality (AR) and robotics simulation software (RS). The robotics simulation is used to compute a robot configuration for given port positions. Augmented reality is used to visualize the computed poses in the real environment, making them easier to transfer to the operating room.
Continuous monitoring of individual vital parameters can provide information for the assessment of one's health and indications of medical problems in the context of personalized medicine. Correlations between parameters and health issues are to be evaluated. As one project in this topic area, a telemedicine platform is implemented to gather data from outpatients via wearables and accumulate them for physicians and researchers to review. This work extracts requirements, draws use case scenarios, and shows the current system architecture, consisting of a patient application, a physician application with a web server, and a backend server application. In further work, the prototype will assist in developing a vendor-free and open monitoring solution. Functionality and usability will be evaluated in an imminent first study.
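A minimal sketch of a backend endpoint such a platform needs, here with FastAPI: the patient application posts vital-sign samples, and the physician application reads a patient's history. The framework choice, routes, and field names are illustrative assumptions, not the project's actual interface.

```python
# Backend sketch: ingest and query vital-sign samples from wearables.
from datetime import datetime
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class VitalSample(BaseModel):
    patient_id: str
    heart_rate_bpm: int
    spo2_percent: float
    recorded_at: datetime

STORE: list[VitalSample] = []  # in-memory stand-in for the real database

@app.post("/vitals")
def ingest(sample: VitalSample) -> dict:
    # Called by the patient application for each new wearable reading.
    STORE.append(sample)
    return {"stored": len(STORE)}

@app.get("/vitals/{patient_id}")
def history(patient_id: str) -> list[VitalSample]:
    # Called by the physician application to review a patient's data.
    return [s for s in STORE if s.patient_id == patient_id]
```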
This work presents the vision of the Internet of Things (IoT) and examines both opportunities for its use and potential threats to user security. In particular, the smart home use case is examined in detail, and serious weaknesses of such devices are demonstrated using ZigBee as an example.
This work deals with possible input devices for VR applications viewed through HMDs. It examines whether basic interaction tasks such as navigating through space, text entry, and object selection can be realized with the evaluated devices. The devices investigated are the Leap Motion Controller, the Kinect 2, the Myo armband, the Xbox controller, and the Razer Hydra.
Applications often need to be deployed in different variants due to different customer requirements. However, since modern applications often need to be deployed using multiple deployment technologies in combination, such as Ansible and Terraform, the deployment variability must be considered in a holistic way. To tackle this, we previously developed Variability4TOSCA and the prototype OpenTOSCA Vintner, which is a TOSCA preprocessing and management layer that implements Variability4TOSCA. In this demonstration, we present a detailed case study that shows how to model a deployment using Variability4TOSCA, how to resolve the variability using Vintner, and how the result can be deployed.
Application systems often need to be deployed in different variants if requirements that influence their implementation, hosting, and configuration differ between customers. Therefore, deployment technologies, such as Ansible or Terraform, support a certain degree of variability modeling. Besides, modern application systems typically consist of various software components deployed using multiple deployment technologies that only support their proprietary, non-interoperable variability modeling concepts. The Variable Deployment Metamodel (VDMM) manages the deployment variability across heterogeneous deployment technologies based on a single variable deployment model. However, VDMM currently only supports modeling conditional components and their relations, which is sometimes too coarse-grained since it requires modeling entire components, including their implementation and deployment configuration, for each different component variant. Therefore, we extend VDMM with a more fine-grained approach for managing the variability of component implementations and their deployment configurations, e.g., if a cheap version of a SaaS deployment provides only a community edition of the software and not the enterprise edition, which has additional analytical reporting functionalities built in. We show that our extended VDMM can be used to realize variable deployments across different individual deployment technologies using a case study and our prototype OpenTOSCA Vintner.
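The core idea of resolving such variability can be sketched as follows: every component variant carries a condition over user-provided presets, and resolving keeps only the variants whose conditions hold. The model structure and names below are illustrative and do not reflect VDMM's or TOSCA's actual syntax.

```python
# Resolving conditional component variants against deployment presets.
deployment_model = {
    "shop": {
        "variants": [
            {"name": "community-edition", "condition": lambda p: not p["enterprise"]},
            {"name": "enterprise-edition", "condition": lambda p: p["enterprise"]},
        ]
    },
    "analytics": {
        "variants": [
            # Analytical reporting only exists in the enterprise deployment.
            {"name": "reporting", "condition": lambda p: p["enterprise"]},
        ]
    },
}

def resolve(model: dict, presets: dict) -> dict:
    resolved = {}
    for component, spec in model.items():
        selected = [v["name"] for v in spec["variants"] if v["condition"](presets)]
        if selected:  # components with no valid variant are dropped entirely
            resolved[component] = selected
    return resolved

print(resolve(deployment_model, {"enterprise": False}))
# {'shop': ['community-edition']}
```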
Purpose: This study aims to conceptualize and test the effect of consumers' perceptions of complaint handling quality (PCHQ) in both traditional and social media channels.
Design/methodology/approach: Study 1 systematically reviews the relevant literature and then carries out a consumer and manager survey. This approach aims to conceptualize the dimensionality of PCHQ. Study 2 tests the effect of PCHQ on key marketing outcomes. Using survey data from a German telecommunications company, the study provides an explanation for the differences in outcomes across traditional (hotline) and social media channels.
Findings: Study 1 reveals that PCHQ is best conceptualized as a five-dimensional construct with 15 facets. There are significant differences between customers and managers in terms of the importance attached to the various dimensions. The construct shows strong psychometric properties with high reliability and validity, thereby opening up opportunities to treat these facets as measurement indicators for the construct. Study 2 indicates that the effect of PCHQ on consumer loyalty and word-of-mouth (WOM) communication is stronger in social media than in traditional channels. Procedural justice and the overall quality of service solutions emerge as general dimensions of PCHQ because they are equally important in both channels. In contrast, interactional justice, distributive justice, and customer effort have varying effects across the two channels.
Research limitations/implications: This study contributes to the understanding of a firm's channel selection for complaint handling in two ways. First, it evaluates and conceptualizes the PCHQ construct. Second, it compares the effects of different dimensions of PCHQ on key marketing outcomes across traditional and social media channels.
Practical implications: This study enables managers to understand the difference in efficacy attached to different dimensions of PCHQ. It further highlights such differences across traditional and social media service channels. For example, the effect of complaint handling on social media is of particular importance when generating WOM communication.
Originality/value: This study offers a comprehensive conceptualization of the PCHQ construct and reveals the general and channel contingent effects of its different dimensions on key marketing outcomes.
Entrepreneurs and small and medium enterprises usually have difficulties developing new prototypes, developing new ideas, or testing new techniques. To help them, academic Software Factories, a new concept of collaboration between universities and companies, have been developed in recent years. Software Factories provide a unique environment for students and companies. Students benefit from the possibility of working in a real work environment, learning how to apply the state of the art of existing techniques, and showing their skills to entrepreneurs. Companies benefit from a risk-free, protected environment in which they can develop new ideas. Universities, finally, benefit from this setup as a perfect environment for empirical studies in industry-like settings. In this paper, we present the network of academic Software Factories in Europe, showing how companies have already benefited from existing Software Factories and reporting success stories. The results of this paper can enlarge the network of factories and help other universities and companies set up similar environments to boost the local economy.
Near-Data Processing (NDP) is a key computing paradigm for reducing the ever-growing time and energy costs of data transport relative to computation. With their flexibility, FPGAs are an especially suitable compute element for NDP scenarios. Even more promising is the exploitation of novel and future non-volatile memory (NVM) technologies for NDP, which aim to achieve DRAM-like latencies and throughputs while providing large-capacity non-volatile storage.
Experimentation in using FPGAs in such NVM-NDP scenarios has been hindered, though, by the fact that the NVM devices/FPGA boards are still very rare and/or expensive. It thus becomes useful to emulate the access characteristics of current and future NVMs using off-the-shelf DRAMs. If such emulation is sufficiently accurate, the resulting FPGA-based NDP computing elements can be used for actual full-stack hardware/software benchmarking, e.g., when employed to accelerate a database.
For this use, we present NVMulator, an open-source, easy-to-use hardware emulation module that can be seamlessly inserted between the NDP processing elements on the FPGA and a conventional DRAM-based memory system. We demonstrate that, with suitable parametrization, the emulated NVM can come very close to the performance characteristics of actual NVM technologies, specifically Intel Optane. We achieve 0.62% and 1.7% accuracy for cache-line-sized read and write accesses, respectively, while utilizing only 0.54% of the LUT logic resources on a Xilinx/AMD AU280 UltraScale+ FPGA board. We consider both file-system and database access patterns, examining the operation of the RocksDB database when running on real or emulated Optane-technology memories.
For a long time, most discrete accelerators have been attached to host systems using various generations of the PCI Express interface. However, with its lack of support for coherency between accelerator and host caches, fine-grained interactions require frequent cache flushes, or even the use of inefficient uncached memory regions. The Cache Coherent Interconnect for Accelerators (CCIX) was the first multi-vendor standard for enabling cache-coherent host-accelerator attachments, and is already indicative of the capabilities of upcoming standards such as Compute Express Link (CXL). In our work, we compare and contrast the use of CCIX with PCIe when interfacing an ARM-based host with two generations of CCIX-enabled FPGAs. We provide both low-level throughput and latency measurements for accesses and address translation, and examine an application-level use case of using CCIX for fine-grained synchronization in an FPGA-accelerated database system. We show that especially smaller reads from the FPGA to the host can benefit from CCIX by having roughly 33% shorter latency than PCIe. Small writes to the host have a latency roughly 32% higher than PCIe, though, since they carry a higher coherency overhead. For the database use case, the use of CCIX made it possible to maintain a constant synchronization latency even with heavy host-FPGA parallelism.
Hardly any software development process is used as prescribed by authors or standards. Regardless of company size or industry sector, a majority of project teams and companies use hybrid development methods (short: hybrid methods) that combine different development methods and practices. Even though such hybrid methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this article, we make a first step towards a statistical construction procedure for hybrid methods. Grounded in 1467 data points from a large-scale practitioner survey, we study the question: What are hybrid methods made of, and how can they be systematically constructed? Our findings show that only eight methods and a few practices build the core of modern software development. Using an 85% agreement level in the participants' selections, we provide examples illustrating how hybrid methods can be characterized by the practices they are made of. Furthermore, using this characterization, we develop an initial construction procedure, which allows for defining a method frame and enriching it incrementally to devise a hybrid method using ranked sets of practices.
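The statistical core of such a construction procedure can be sketched as follows: count how many survey participants selected each practice and keep, ranked by agreement, those above the 85% threshold as the method core. The survey rows and practice names below are invented placeholders, not the study's data.

```python
# Deriving a ranked core set of practices from survey agreement levels.
from collections import Counter

responses = [  # one set of selected practices per survey participant
    {"code review", "ci", "daily standup", "retrospective"},
    {"code review", "ci", "daily standup"},
    {"code review", "ci", "pair programming", "daily standup"},
    {"code review", "ci", "daily standup", "retrospective"},
]

counts = Counter(p for r in responses for p in r)
n = len(responses)
threshold = 0.85  # agreement level used in the article

core = sorted(((p, c / n) for p, c in counts.items() if c / n >= threshold),
              key=lambda x: -x[1])
print(core)  # e.g. [('code review', 1.0), ('ci', 1.0), ('daily standup', 1.0)]
```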
Among the multitude of software development processes available, hardly any is used by the book. Regardless of company size or industry sector, a majority of project teams and companies use customized processes that combine different development methods, so-called hybrid development methods. Even though such hybrid development methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this paper, we make a first step towards devising such guidelines. Grounded in 1,467 data points from a large-scale online survey among practitioners, we study the current state of practice in process use to answer the question: What are hybrid development methods made of? Our findings reveal that only eight methods and few practices build the core of modern software development. This small set allows for statistically constructing hybrid development methods. Using an 85% agreement level in the participants' selections, we provide two examples illustrating how hybrid development methods are characterized by the practices they are made of. Our evidence-based analysis approach lays the foundation for devising hybrid development methods.
Regardless of company size or industry sector, a majority of project teams and companies use customized processes that combine different development methods, so-called hybrid development methods. Even though such hybrid development methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. Based on 1,467 data points from a large-scale online survey among practitioners, we study the current state of practice in process use to answer the question: What are hybrid development methods made of? Our findings reveal that only eight methods and few practices build the core of modern software development. This small set allows for statistically constructing hybrid development methods.
Facial expressions play a dominant role in facilitating social interactions. We endeavor to develop tactile displays to reinstate facial expression modulated communication. The high spatial and temporal dimensionality of facial movements poses a unique challenge when designing tactile encodings of them. A further challenge is developing encodings that are attuned to the perceptual characteristics of our skin. A caveat of using vibrotactile displays is that tactile stimuli have been shown to induce perceptual tactile aftereffects when used on the fingers, arm and face. However, at present, despite the prevalence of waist-worn tactile displays, no such investigations of tactile aftereffects at the waist region exist in the literature, though they are warranted by the unique sensory and perceptual signalling characteristics of this area. Using an adaptation paradigm we investigated the presence of perceptual tactile aftereffects induced by continuous and burst vibrotactile stimuli delivered at the navel, side and spinal regions of the waist. We report evidence that the tactile perception topology of the waist is non-uniform, and specifically that the navel and spine regions are resistant to adaptive aftereffects while side regions are more prone to perceptual adaptations to continuous but not burst stimulations. Results of our current investigations highlight the unique set of challenges posed by designing waist-worn tactile displays. These and future perceptual studies can directly inform more realistic and effective implementations of complex high-dimensional spatiotemporal social cues.