Informatik
Perceptual integration of kinematic components in the recognition of emotional facial expressions
(2018)
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, massively reducing the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found a remarkably low dimensionality, with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment demonstrating that expressions simulated with only two primitives are indistinguishable from natural ones.
In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low dimensional parametrization of the associated facial expression.
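The learning of a low-dimensional model from movement data can be illustrated with a small synthetic example: if trajectories are built from two latent primitives, a PCA/SVD recovers an effective dimensionality of two. The sketch below is a hypothetical illustration, not the paper's actual method or data:

```python
import numpy as np

# Hypothetical illustration: synthetic "expression" trajectories
# built from two latent movement primitives.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
primitives = np.stack([np.sin(2 * np.pi * t),
                       np.cos(2 * np.pi * t)])   # (2, 200) latent primitives
weights = rng.normal(size=(11, 2))               # 11 expressions, 2 primitives
data = weights @ primitives                      # (11, 200) observed trajectories

# PCA via SVD: the number of non-negligible singular values
# estimates the effective dimensionality of the data.
centered = data - data.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = s**2 / np.sum(s**2)
print(int(np.sum(explained > 1e-6)))             # → 2
```

With real kinematic data the spectrum of explained variance is not exactly rank-2; one would instead look for the elbow where additional components stop adding explanatory power.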
Context: Companies increasingly strive to adapt to market and ecosystem changes in real time. Gauging and understanding team performance in such changing environments present a major challenge.
Objective: This paper aims to understand how software developers experience the continuous adaptation of performance in a modern, highly volatile environment using Lean and Agile software development methodology. This understanding can be used as a basis for guiding formation and maintenance of high-performing teams, to inform performance improvement initiatives, and to improve working conditions for software developers.
Method: A qualitative multiple-case study using thematic interviews was conducted with 16 experienced practitioners in five organisations.
Results: We generated a grounded theory, Performance Alignment Work, showing how software developers experience performance. We found 33 major categories of performance factors and relationships between the factors. A cross-case comparison revealed similarities and differences between organisations of different kinds and sizes.
Conclusions: Based on our study, software teams are engaged in a constant cycle of interpreting their own performance and negotiating its alignment with other stakeholders. While differences across organisational sizes exist, a common set of performance experiences is present despite differences in context variables. Enhancing performance experiences requires integration of soft factors, such as communication, team spirit, team identity, and values, into the overall development process. Our findings suggest a view of software development and software team performance that centres around behavioural and social sciences.
Monitoring heart rate and breathing is essential to understanding the physiological processes underlying sleep. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can make sleep monitoring more portable in the home. This study conducted a series of experiments to investigate the use of pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. Further sets of experiments involved the addition of small rubber domes, transparent and black, glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 and 3.24 for respiration and heart rate, respectively. However, the addition of small rubber hemispheres did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 bpm for respiration rate and heart rate, respectively. The findings of this study suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep. These sensors provide a more comfortable and non-invasive method of sleep monitoring. However, since the addition of small rubber domes did not significantly enhance the accuracy of the readings, it may not be a worthwhile addition to the pressure sensor system.
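The accuracy figures above are mean absolute errors between the sensor estimate and the PSG reference. A minimal sketch of the metric, with hypothetical heart-rate values that are not taken from the study:

```python
import numpy as np

# Hypothetical reference (PSG) and sensor-derived heart rates in bpm.
psg_hr = np.array([62.0, 64.0, 63.5, 65.0])     # reference (PSG)
sensor_hr = np.array([60.0, 66.0, 62.5, 64.0])  # pressure-sensor estimate

# Mean absolute error: average magnitude of the per-sample deviation.
mae = np.mean(np.abs(sensor_hr - psg_hr))
print(mae)  # → 1.5
```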
Sleep is an important aspect of every human being's life. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate a person's physical and psychological state. Poor sleep quality has a major impact on health status and can lead to various diseases. This paper presents an approach that uses long-term monitoring of vital data, gathered by a body sensor during the day and the night and supported by a mobile application connected to an analysis system, to estimate the sleep quality of its user and to give recommendations for improving it in real time. Actimetry and historical data are used to improve the individual recommendations, based on common techniques from machine learning and big data analysis.
Assistive environments are entering our homes faster than ever. However, there are still various barriers to be broken. One of the crucial points is the personalization of offered services and the integration of assistive technologies into common objects and thus into the regular daily routine. Recognition of sleep patterns for a preliminary sleep study is one of the health services that could be performed in an unobtrusive way. This article proposes a hardware system for the nonobtrusive measurement of the bio-vital signals necessary for an initial sleep study. The first results confirm the potential of measuring breathing and movement signals with the proposed system.
The performance and scalability of modern data-intensive systems are limited by massive data movement of growing datasets across the whole memory hierarchy to the CPUs. Such traditional processor-centric DBMS architectures are bandwidth- and latency-bound. Processing-in-Memory (PIM) designs seek to overcome these limitations by integrating memory and processing functionality on the same chip. PIM targets near- or in-memory data processing, leveraging the greater in-situ parallelism and bandwidth.
In this paper, we introduce pimDB and provide an initial comparison of processor-centric and PIM-DBMS approaches under different aspects, such as scalability and parallelism, cache-awareness, or PIM-specific compute/bandwidth tradeoffs. The evaluation is performed end-to-end on a real PIM hardware system from UPMEM.
Human pose estimation (HPE) is integral to scene understanding in numerous safety-critical domains involving human-machine interaction, such as autonomous driving or semi-automated work environments. Avoiding costly mistakes is synonymous with anticipating failure in model predictions, which necessitates meta-judgments on the accuracy of the applied models. Here, we propose a straightforward human pose regression framework to examine the behavior of two established methods for simultaneous aleatoric and epistemic uncertainty estimation: maximum a-posteriori (MAP) estimation with Monte-Carlo variational inference and deep evidential regression (DER). First, we evaluate both approaches on the quality of their predicted variances and whether these truly capture the expected model error. The initial assessment indicates that both methods exhibit the overconfidence issue common in deep probabilistic models. This observation motivates our implementation of an additional recalibration step to extract reliable confidence intervals. We then take a closer look at deep evidential regression, which, to our knowledge, is applied comprehensively for the first time to the HPE problem. Experimental results indicate that DER behaves as expected in challenging and adverse conditions commonly occurring in HPE and that the predicted uncertainties match their purported aleatoric and epistemic sources. Notably, DER achieves smooth uncertainty estimates without the need for a costly sampling step, making it an attractive candidate for uncertainty estimation on resource-limited platforms.
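The recalibration step mentioned above can take several forms; one simple and common variant is variance scaling fitted on held-out residuals. The sketch below illustrates that variant under stated assumptions and is not the paper's implementation:

```python
import numpy as np

# Illustrative recalibration by variance scaling (one common approach;
# the data and the constant predicted std are hypothetical).
rng = np.random.default_rng(0)
err = rng.normal(0.0, 2.0, size=1000)   # validation residuals (truth - prediction)
sigma_pred = np.full(1000, 1.0)         # overconfident predicted std

# Fit a single scalar s so the scaled variances match the observed errors.
s = np.sqrt(np.mean(err**2 / sigma_pred**2))
sigma_cal = s * sigma_pred

# After scaling, the average standardized squared error is ~1,
# i.e. predicted uncertainty matches the empirical error level.
z2 = np.mean((err / sigma_cal) ** 2)
print(abs(z2 - 1.0) < 1e-9)
```

More expressive recalibration maps (e.g. per-bin or isotonic-regression-based) follow the same pattern: fit on a validation split, then apply to test-time predicted variances.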
An enormous amount of data in the context of business processes is stored as images. These images contain valuable information for business process management. Up to now, this data has had to be integrated manually into the business process. Advances in image capturing make it possible to extract information from an increasing number of images. We therefore systematically investigate the potential of image mining for business process management through a literature review and an in-depth analysis of the business process lifecycle. As a first step towards evaluating our research, we developed a prototype for recovering process model information from drawings using RapidMiner.
Potentials of smart contracts-based disintermediation in additive manufacturing supply chains
(2019)
We investigate which potentials are created by using smart contracts for disintermediation in supply chains for additive manufacturing. Using a qualitative, critical realist research approach, we analyzed three case studies with companies active in additive manufacturing. Based on interviews with experts from these companies, we identified eight key requirements for disintermediation and four associated potentials of smart contract-based disintermediation.
Due to decreased mobility or families living apart, older adults are especially vulnerable to social isolation. The literature suggests that technology can help to prevent this isolation. The present work addresses an approach to social participation through the sharing of cherished knowledge. We propose the cooking recipe exchange application PrecRec for older adults, designed to make them feel precious and valued. PrecRec has been developed and evaluated in an iterative process with eleven older adults. The results show that a broad perspective has to be taken into account when designing such systems.
Additive manufacturing (AM) is a promising manufacturing method for many industrial sectors. For industrial application, requirements such as high production volumes and coordinated implementation must be taken into account. These internal production planning tasks are carried out by the Production Planning and Control (PPC) information system. A key factor in planning and scheduling is the accurate calculation of manufacturing times. For this purpose, we investigate the use of machine learning (ML) for predicting the manufacturing times of AM facilities.
Predictive maintenance information systems: the underlying conditions and technological aspects
(2020)
Predictive maintenance has the potential to improve the reliability of production and service provisioning. However, there is little knowledge about the proper implementation of predictive maintenance in research and practice. Therefore, we conducted a multi-case study and investigated the underlying conditions and technological aspects of implementing a predictive maintenance system and the outcomes it leads to. We found that predictive maintenance initiatives are triggered by severe impacts of failures on revenue and profit. Furthermore, successful predictive maintenance initiatives require that preconditions are fulfilled: data must be available and accessible, and management support is essential. We identified four factors important for the implementation of predictive maintenance. The integration of data is highly facilitated by cloud-based mechanisms. The detection of events is enabled by advanced analytics. The execution of predictive maintenance operations is supported by data-driven process automation and visualization.
Preface of IDEA 2015
(2016)
Preliminary results of homomorphic deconvolution application to surface EMG signals during walking
(2021)
Homomorphic deconvolution is applied to sEMG signals recorded during walking. Gastrocnemius lateralis and tibialis anterior signals were acquired according to the SENIAM recommendations. MUAP parameters such as amplitude and scale were estimated, while the MUAP shape parameter was fixed. This yields a useful time-frequency representation of the sEMG signal. The estimation of the MUAP scale parameter was verified by extracting the mean frequency of the filtered EMG signal, derived from the scale parameter estimated with two different MUAP shape values.
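Homomorphic deconvolution separates convolved signal components by moving to the log-spectral (cepstral) domain, where convolution becomes addition. A minimal real-cepstrum sketch on a synthetic signal (an illustration only; the paper's sEMG pipeline and MUAP model are more involved):

```python
import numpy as np

# Synthetic stand-in signal; real sEMG would be band-limited noise-like data.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(x)
log_mag = np.log(np.abs(spectrum) + 1e-12)  # log turns convolution into addition
cepstrum = np.fft.irfft(log_mag)            # real cepstrum

# Low-quefrency liftering keeps the smooth spectral envelope
# (loosely, the slowly varying "shape" contribution).
lifter = np.zeros_like(cepstrum)
lifter[:30] = 1.0
lifter[-29:] = 1.0
envelope = np.exp(np.fft.rfft(cepstrum * lifter).real)
print(envelope.shape == spectrum.shape)
```

The lifter cutoff (30 quefrency bins here) is an arbitrary choice for the sketch; in practice it is tuned to separate the slowly varying envelope from the fine spectral structure.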
Proceedings of the International Workshop on Mobile Networks for Biometric Data Analysis (mBiDA)
(2014)
Prevention and treatment of common and widespread (chronic) diseases is a challenge in any modern society and vitally important for health maintenance in aging societies. Capturing biometric data is a cornerstone of any analysis and treatment strategy. The latest advances in sensor technology allow accurate data measurement in a non-intrusive way. In many cases, it is necessary to provide online monitoring and real-time data capturing to support patients' prevention plans or to allow medical professionals to access the current status. Different communication standards are required to push sensor data and to store and analyze them on different (mobile) platforms. The objective of the workshop is to show new and innovative approaches to biometric data capture and analysis in a non-intrusive way that maintains mobility. Examples can be found in human-centered ambient intelligence equipped with sensors, or in methodologies applied to real-time-conformant mobile system design in the automotive domain. The workshop's main challenge is to focus on approaches promoting non-intrusiveness, reliable prediction algorithms and high user acceptance. The workshop will provide overview presentations, young researcher poster tracks, doctoral tracks and classical peer-reviewed full paper tracks. We would especially like to encourage students and young researchers to participate and contribute to the workshop. Scientific contributions to the event are peer-reviewed by a suitable program committee.
In recent years, companies have faced challenges from high market dynamics, rapidly evolving technologies and shifting user expectations. Together with the adoption of lean and agile practices, it is increasingly difficult to predict upfront which products, features or services will satisfy the needs of the customers and the organization. Currently, many new products fail to produce a significant financial return. One reason is that companies are not doing enough product discovery activities. Product discovery aims at tackling the various risks before the implementation of a product starts. The academic literature provides only little guidance for conducting product discovery in practice. Objective: In order to gain a better understanding of product discovery activities in practice, this paper aims at identifying motivations, approaches, challenges, risks, and pitfalls of product discovery reported in the grey literature. Method: We performed a grey literature review (GLR) according to the guidelines of Garousi et al. Results: The study shows that the main motivation for conducting product discovery activities is to reduce uncertainty to a level that makes it possible to start building a solution that provides value for the customers and the business. Several product discovery approaches are reported in the grey literature, comprising different phases such as alignment, problem exploration, ideation, and validation. Main challenges are, among others, the lack of clarity of the problem to be solved, the prescription of concrete solutions by management or experts, and the lack of cross-functional collaboration.
Context: A product roadmap is an important tool in product development. It sets the strategic direction in which the product is to be developed to achieve the company’s vision. However, for product roadmaps to be successful, it is essential that all stakeholders agree with the company’s vision and objectives and are aligned and committed to a common product plan.
Objective: In order to gain a better understanding of product roadmap alignment, this paper aims at identifying measures, activities and techniques in order to align the different stakeholders around the product roadmap.
Method: We conducted a grey literature review according to the guidelines of Garousi et al.
Results: Several approaches to gain alignment were identified such as defining and communicating clear objectives based on the product vision, conducting cross-functional workshops, shuttle diplomacy, and mission briefing. In addition, our review identified the “Behavioural Change Stairway Model” that suggests five steps to gain alignment by building empathy and a trustful relationship.
Product roadmaps are an important tool in product development. They provide direction, enable consistent development in relation to a product vision and support communication with relevant stakeholders. There are many different formats for product roadmaps, but they are often based on the assumption that the future is highly predictable. However, software-intensive businesses in particular are faced with increasing market dynamics, rapidly evolving technologies and changing user expectations. As a result, many organizations are wondering what roadmap format is appropriate for them and what components it should have to deal with an unpredictable future. Objective: To gain a better understanding of the formats of product roadmaps and their components, this paper aims to identify suitable formats for the development and handling of product roadmaps in dynamic and uncertain markets. Method: We performed a grey literature review (GLR) according to the guidelines of Garousi et al. Results: A Google search identified 426 articles, 25 of which were included in this study. First, various components of roadmaps were identified, especially the product vision, themes, goals, outcomes and outputs. In addition, various product roadmap formats were discovered, such as feature-based, goal-oriented, outcome-driven and theme-based roadmaps. The roadmap components were then assigned to the various product roadmap formats. This overview aims at providing initial decision support for companies to select a suitable product roadmap format and adapt it to their own needs.
Context: Companies in highly dynamic markets increasingly struggle with their ability to plan product development and to create reliable roadmaps. A main reason is the decreasing predictability of markets, technologies, and customer behaviors. New approaches to product roadmapping seem to be necessary in order to cope with today's highly dynamic conditions. Little research is available with respect to such new approaches. Objective: In order to better understand the state of the art and to identify research gaps, this article presents a review of the scientific literature on product roadmapping. Method: We performed a systematic literature review (SLR) to identify relevant papers in the field of computer science. Results: After filtering, the search resulted in a set of 23 relevant papers. The identified papers focus on different aspects such as roadmap types, processes for creating and updating roadmaps, problems and challenges with roadmapping, approaches to visualizing roadmaps, generic frameworks, and specific aspects such as the combination of roadmaps with business modeling. Overall, the scientific literature covers many important aspects of roadmapping but provides only little knowledge on how to create product roadmaps under highly dynamic conditions. Research gaps concern, for instance, the inclusion of goals or outcomes in product roadmaps, the alignment of a roadmap with a product vision, and the inclusion of product discovery activities in product roadmaps. In addition, the transformation from traditional roadmapping processes to new ways of roadmapping is not sufficiently addressed in the scientific literature.
Context: Currently, most companies apply approaches to product roadmapping that are based on the assumption that the future is highly predictable. However, companies nowadays face the challenge of increasing market dynamics, rapidly evolving technologies, and shifting user expectations. Together with the adoption of lean and agile practices, this makes it increasingly difficult to plan and predict upfront which products, services or features will satisfy the needs of the customers. Therefore, companies are struggling with their ability to provide product roadmaps that fit dynamic and uncertain market environments and that can be used together with lean and agile software development practices.
Objective: To gain a better understanding of modern product roadmapping processes, this paper aims to identify suitable processes for the creation and evolution of product roadmaps in dynamic and uncertain market environments.
Method: We performed a Grey Literature Review (GLR) according to the guidelines from Garousi et al.
Results: 32 approaches to product roadmapping were identified. Typical characteristics of these processes are the strong connection between the product roadmap and the product vision, an emphasis on stakeholder alignment, the definition of business and customer goals as part of the roadmapping process, a high degree of flexibility with respect to reaching these goals, and the inclusion of validation activities in the roadmapping process. An overall goal of nearly all approaches is to avoid waste by early reducing development and business risks. From the list of the 32 approaches found, four representative roadmapping processes are described in detail.
Product roadmaps in the new mobility domain: state of the practice and industrial experiences
(2021)
Context: The New Mobility industry is a young market with high market dynamics and is therefore associated with a high degree of uncertainty. Traditional product roadmapping approaches, such as detailed planning of features over a long time horizon, typically fail in such environments. For this reason, companies active in the field of New Mobility face the challenge of keeping their product roadmaps reliable for stakeholders while at the same time being able to react flexibly to changing market requirements.
Objective: The goal of this paper is to identify the state of practice regarding product roadmapping of New Mobility companies. In addition, the related challenges within the product roadmapping process as well as the success factors to overcome these challenges will be highlighted.
Method: We conducted semi-structured expert interviews with 8 experts (seven from German companies and one from a Finnish company) from the field of New Mobility and performed a content analysis.
Results: Overall, the results of the study showed that the participating companies are aware of the requirements that the New Mobility sector entails and therefore exhibit a high level of maturity in terms of product roadmapping. Nevertheless, some aspects were revealed that pose specific challenges for the participating companies. One major challenge, for example, is that New Mobility with public clients is often a tender business with non-negotiable product requirements; thus, the product roadmap can be significantly influenced from the outside. As factors for successful product roadmapping, mainly soft factors were mentioned, such as trust between all people involved in the product development process and transparency throughout the entire roadmapping process.
Context: The software-intensive business is characterized by increasing market dynamics, rapid technological changes, and fast-changing customer behaviors. Organizations face the challenge of moving away from traditional roadmap formats to an outcome-oriented approach that focuses on delivering value to the customer and the business. An important starting point and a prerequisite for creating such outcome-oriented roadmaps is the development of a product vision to which internal and external stakeholders can be aligned. However, the process of creating a product vision is little researched and understood.
Objective: The goal of this paper is to identify lessons-learned from product vision workshops, which were conducted to develop outcome-oriented product roadmaps.
Method: We conducted a multiple-case study consisting of two different product vision workshops in two different corporate contexts.
Results: Our results show that conducting product vision workshops helps to create a common understanding among all stakeholders about the future direction of the products. In addition, we identified key organizational aspects that contribute to the success of product vision workshops, including the participation of employees from functionally different departments.
Enterprise architecture management (EAM) is a holistic approach to tackling complex business and IT architectures. The transformation of an organization's EA towards a strategy-oriented system is a continuous task. Many stakeholders have to elaborate on various parts of the EA to reach the best decisions for shaping the EA towards optimized support of the organization's capabilities. Since the real world is too complex, analysis techniques are needed to detect optimization potentials and to obtain all the information needed about an issue. In practice, visualizations are commonly used to analyze EAs. However, these visualizations are mostly static and do not provide analyses. In this article, we combine analysis techniques from the literature with interactive visualizations to support stakeholders in EA decision-making.
Minimally invasive surgery (MIS) is continuously evolving through the use of medical robots such as Intuitive Surgical's da Vinci system. This enables better or equivalent surgery with significantly reduced physical strain on the surgeon. However, new problems arise, such as collisions between robot arms and the time required to set up a suitable robot configuration. Efficient preparation and planning of interventions is therefore necessary. This work presents an approach for improved planning using augmented reality (AR) and robotics simulation software (RS). The robotics simulation is used to compute a robot configuration for given port positions. Augmented reality is used to visualize the computed poses in the real environment and thus transfer them more easily to the operating room.
Putting actions in context: visual action adaptation aftereffects are modulated by social contexts
(2014)
The social context in which an action is embedded provides important information for the interpretation of an action. Is this social context integrated during the visual recognition of an action? We used a behavioural visual adaptation paradigm to address this question and measured participants' perceptual bias of a test action after they were adapted to one of two adaptors (adaptation aftereffect). The action adaptation aftereffect was measured for the same set of adaptors in two different social contexts. Our results indicate that the size of the adaptation effect varied with social context (social context modulation), although the physical appearance of the adaptors remained unchanged. Three additional experiments provided evidence that the observed social context modulation of the adaptation effect is due to the adaptation of visual action recognition processes. We found that adaptation is critical for the social context modulation (experiment 2). Moreover, the effect is not mediated by the emotional content of the action alone (experiment 3), and visual information about the action seems to be critical for the emergence of action adaptation effects (experiment 4). Taken together, these results suggest that processes underlying visual action recognition are sensitive to the social context of an action.
Uncontrolled movement of instruments in laparoscopic surgery can lead to inadvertent tissue damage, particularly when the dissecting or electrosurgical instrument is located outside the field of view of the laparoscopic camera. The incidence and relevance of such events are currently unknown. The present work aims to identify and quantify potentially dangerous situations using the example of laparoscopic cholecystectomy (LC). Twenty-four final-year medical students were prompted to each perform four consecutive LC attempts on a well-established box trainer in a surgical training environment, following a standardized protocol in a porcine model. The following situation was defined as a critical event (CE): the dissecting instrument was inadvertently located outside the laparoscopic camera's field of view. Simultaneous activation of the electrosurgical unit was defined as a highly critical event (hCE). The primary endpoint was the incidence of CEs. During 96 LCs, 2895 CEs were observed. Of these, 1059 (36.6%) were hCEs. The median number of CEs per LC was 20.5 (range: 1–125; IQR: 33), and the median number of hCEs per LC was 8.0 (range: 0–54; IQR: 10). Mean total operation time was 34.7 min (range: 15.6–62.5 min; IQR: 14.3 min). Our study demonstrates the significance of CEs as a potential risk factor for collateral damage during LC. Further studies are needed to investigate the occurrence of CEs in clinical practice, not just for laparoscopic cholecystectomy but also for other procedures. Systematic training of future surgeons, as well as technical solutions, could address this safety issue.
Context: An experiment-driven approach to software product and service development is gaining increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software capabilities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development.
Objective: This paper explores the state of the practice of experimentation in the software industry. It also identifies the key challenges and success factors that practitioners associate with the approach.
Method: A qualitative survey based on semi-structured interviews and thematic coding analysis was conducted. Ten Finnish software development companies, represented by thirteen interviewees, participated in the study.
Results: The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice is not yet mature. In particular, experimentation is rarely systematic and continuous. Key challenges relate to changing the organizational culture, accelerating the development cycle speed, and finding the right measures for customer value and product success. Success factors include a supportive organizational culture, deep customer and domain knowledge, and the availability of the relevant skills and tools to conduct experiments.
Conclusions: It is concluded that the major issues in moving towards continuous experimentation are on an organizational level; the most significant technical challenges have been solved. An evolutionary approach is proposed as a way to transition towards experiment-driven development.
Real Time Charging (RTC) applications in the telecommunications domain require extremely fast database transactions. Today's providers rely mostly on in-memory databases for this kind of information processing. A flexible and modular benchmark suite specifically designed for this domain provides a valuable framework for testing the performance of different database candidates. Besides a data generator and a load generator, the suite also includes decoupled database connectors and use-case components for convenient customization and extension. The test results produced this way can guide the choice of a subset of candidates for further tuning and testing and, finally, the selection of the database best suited to the chosen use cases. This is why our benchmark suite can be of value when choosing databases for RTC use cases.
Context: The current situation and future scenarios of the automotive domain require a new strategy for developing high-quality software at a fast pace. In the automotive domain, it is assumed that a combination of agile development practices and software product lines is beneficial for handling a high frequency of improvements. This assumption is based on the understanding that agile methods introduce more flexibility in short development intervals, while software product lines help to manage the large number of variants and to improve quality through the reuse of software in long-term development.
Goal: This study derives a better understanding of the expected benefits of such a combination. Furthermore, it identifies the automotive-specific challenges that prevent the adoption of agile methods within a software product line.
Method: A survey based on 16 semi-structured interviews from the automotive domain, an internal workshop with 40 participants, and a discussion round at the ESE Congress 2016. The results are analyzed by means of thematic coding.
Reality mining refers to an application of data mining that uses sensor data to derive behavioral patterns in the real world. However, research in this field started a decade ago, when technology was far behind today's state of the art. This paper discusses which requirements are now posed to applications in the context of reality mining. A survey has shown which sensors are available in state-of-the-art smartphones and usable for gathering data for reality mining. As another contribution of this paper, a reality mining application architecture is proposed to facilitate the implementation of such applications. A proof of concept verifies the assumptions made on reality mining and the presented architecture.
This document presents an algorithm for non-obtrusive recognition of Sleep/Wake states using signals derived from ECG, respiration, and body movement captured while lying in a bed. As the core mathematical basis of the system's data analytics, multinomial logistic regression techniques were chosen. Derived parameters of the three signals are used as the input for the proposed method. The overall achieved accuracy rate is 84% for Wake/Sleep stages, with a Cohen's kappa value of 0.46. The presented algorithm should support experts in analyzing sleep quality in more detail. The results confirm the potential of this method and disclose several ways for its improvement.
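The classification step described above can be sketched as follows. This is an illustrative toy example, not the paper's implementation: a binary Wake/Sleep logistic regression trained by plain gradient descent on synthetic features standing in for the parameters derived from heart rate, respiration, and movement.

```python
import numpy as np

# Synthetic stand-in data (assumption, not the study's recordings):
# columns model features derived from heart rate, respiration, movement.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 3))
y = (X @ np.array([1.0, 0.3, 0.8]) > 0).astype(float)  # 0 = Sleep, 1 = Wake

w = np.zeros(3)
b = 0.0
for _ in range(500):                              # gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The multinomial case used in the paper generalizes this sigmoid to a softmax over more than two stages; the training loop is otherwise analogous.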
The recovery of our body and brain from fatigue directly depends on the quality of sleep, which can be determined from the results of a sleep study. The classification of sleep stages is the first step of this study and includes the measurement of vital data and their further processing. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors providing sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus. A significant difference between this system and other approaches is the innovative way in which the sensors are placed under the mattress. This feature facilitates the continuous use of the system without any noticeable influence on the sleeping person. The system was tested by conducting experiments that recorded the sleep of various healthy young people. Results indicate the potential to capture respiratory rate and body movement.
At DBKDA 2019, we demonstrated that StrongDBMS, with simple but rigorous optimistic algorithms, provides better performance in situations of high concurrency than major commercial database management systems (DBMS). The demonstration was convincing, but the reasons for its success were not fully analysed. A brief account of the results is given below. In this short contribution, we wish to discuss the reasons for the results. The analysis leads to a strong criticism of all DBMS algorithms based on locking, and based on these results, it is not fanciful to suggest that it is time to re-engineer existing DBMS.
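The optimistic principle argued for above can be illustrated with a minimal sketch (ours, not StrongDBMS code): instead of locking, a transaction reads a version, computes, and commits only if the version is still unchanged; on conflict it simply retries with a fresh read.

```python
# Toy optimistic concurrency control: version check at commit time
# replaces any lock held during the computation.
class Cell:
    def __init__(self, value):
        self.value, self.version = value, 0

    def read(self):
        return self.value, self.version

    def commit(self, new_value, read_version):
        if self.version != read_version:   # conflict: another commit happened
            return False
        self.value, self.version = new_value, self.version + 1
        return True

cell = Cell(100)
v, ver = cell.read()
assert cell.commit(v + 1, ver)             # first writer succeeds
assert not cell.commit(v + 2, ver)         # stale version is rejected
v, ver = cell.read()                       # retry: re-read, then commit
assert cell.commit(v + 1, ver)
print(cell.value)
```

Under high concurrency, the cost of occasional retries can be far lower than the blocking and deadlock handling that lock-based schemes incur.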
Medical devices are objects, substances, or software with a medical purpose intended for use on humans. They are developed and brought to market by medical device manufacturers. Since the incorrect use of medical devices can cause harm to the human body, adequate quality of medical devices must be ensured. To ensure this quality, medical device manufacturers are obliged to comply with the Medical Device Regulation (MDR). For high-risk products, the use of a quality management system (QMS) is additionally mandatory. It governs the structure, responsibilities, procedures, and processes of the company that are necessary for medical device development. In times of digitalization, software solutions are used to reduce the time-consuming documentation and administration activities in the QMS and to optimize its processes. Once software has been introduced, a QMS is in practice also referred to as an electronic QMS (eQMS). Furthermore, the entire QMS must remain compliant with the regulations. The aim of this work is therefore to derive from the regulatory requirements which specifications must be observed when introducing an eQMS and how they can be fulfilled. This work refers to the regulatory requirements of the MDR and of ISO 13485, the standard that contains requirements for a QMS for medical devices.
In this paper, we introduce an approach for using reinforcement learning to achieve interoperability between heterogeneous Internet of Things (IoT) components. More specifically, we model an HTTP REST service as a Markov decision process and adapt Q-learning to the properties of REST so that an agent in the role of an HTTP REST client can learn the semantics of the service and, in particular, an optimal sequence of service calls to achieve an application-specific goal. With our approach, we want to open up and facilitate a discussion in the community, as we see the key to achieving interoperability in IoT in the utilization of artificial intelligence techniques.
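The modeling idea can be sketched with tabular Q-learning over a tiny hand-built MDP. This is an illustration under our own assumptions, not the paper's implementation: the states, the REST endpoints, and the reward scheme below are invented, and no HTTP calls are actually made.

```python
import random

# Hypothetical MDP: states are service states, actions are REST calls.
states = ["start", "authenticated", "resource", "goal"]
actions = {"start": ["POST /login"], "authenticated": ["GET /items"],
           "resource": ["POST /order"], "goal": []}
transitions = {("start", "POST /login"): "authenticated",
               ("authenticated", "GET /items"): "resource",
               ("resource", "POST /order"): "goal"}

Q = {(s, a): 0.0 for s in states for a in actions[s]}
alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)

for _ in range(200):                                  # training episodes
    s = "start"
    while s != "goal":
        acts = actions[s]
        a = random.choice(acts) if random.random() < eps \
            else max(acts, key=lambda x: Q[(s, x)])   # epsilon-greedy
        s2 = transitions[(s, a)]
        r = 1.0 if s2 == "goal" else 0.0              # reward only at goal
        best_next = max((Q[(s2, a2)] for a2 in actions[s2]), default=0.0)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print(Q[("resource", "POST /order")])  # converges towards 1.0
```

A real agent would replace the `transitions` table with actual HTTP requests and derive states from the responses; the update rule stays the same.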
This study estimates the reproducibility of finding palpation points of three anatomical landmarks in the human body (the xiphoid process and the two hip crests) to support a navigated ultrasound application. On six test subjects with different body mass indices, the three palpation points were located five times by two examiners. The deviation from the target position was calculated and correlated with the fat thickness above each palpation point. The measurements were reproducible with a mean error of ≈13.5 ± 4 mm, which seems sufficient for the desired application field.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. This work focuses on the analysis of requirements and the challenges arising from designing the user interface of mobile medical applications. The paper describes the current status of the development of mobile medical apps and illustrates the development of the e-health market. The author explains the requirements and illustrates the hurdles and problems, referring to the German market, which is similar to the European one, and comparing it with the market in the USA.
For half a decade, there has been increasing interest in Robotic Process Automation (RPA) among business firms. Academic literature, however, paid little attention to RPA before adopting the topic on a larger scale. The aim of this study is to review and structure the latest state of scholarly research on RPA. This chapter is based on a systematic literature review that serves as the basis for a conceptual framework structuring the field. Our study shows that some areas of RPA, e.g. its potential benefits, have been extensively examined by many authors. Other categories, such as empirical studies on the adoption of RPA or organisational readiness models, remain research gaps.
Current data-intensive systems suffer from poor scalability, as they transfer massive amounts of data to the host DBMS to process it there. Novel near-data processing (NDP) DBMS architectures and smart storage can provably reduce the impact of raw data movement. However, transferring the result set of an NDP operation may increase data movement and, thus, the performance overhead. In this paper, we introduce a set of in-situ NDP result-set management techniques, such as spilling, materialization, and reuse. Our evaluation indicates a performance improvement of 1.13× to 400×.
Revenue management information systems are very important in the hospitality sector. Revenue decisions can be better prepared on the basis of information from different information systems and decision strategies. There is a lack of research on the usage of such systems in small and medium-sized hotels and on their architectural configurations. Our paper empirically shows the current state of development of revenue information systems. Furthermore, we define future developments and requirements to improve such systems and their architectural base.
In this paper, we present our work in progress on revisiting traditional DBMS mechanisms for managing space on native Flash and on how it is administered by the DBA. Our observations and initial results show that the standard logical database structures can be used for the physical organization of data on native Flash and that, at the same time, higher DBMS performance is achieved without incurring extra DBA overhead. An initial experimental evaluation indicates a 20% increase in transactional throughput under TPC-C, achieved by performing intelligent data placement on Flash with fewer erase operations and, thus, better Flash longevity.
The automation of work by means of disruptive technologies such as Artificial Intelligence (AI) and Robotic Process Automation (RPA) is currently intensely discussed in business practice and academia. Recent studies indicate that many tasks manually conducted by humans today will no longer be in the future. In a similar vein, it is expected that new roles will emerge. The aim of this study is to analyze prospective employment opportunities in the context of RPA in order to foster our understanding of the pivotal qualifications, expertise, and skills necessary to find an occupation in a completely changing world of work. This study is based on an explorative content analysis of 119 job advertisements related to RPA in Germany. The data was collected from major German online job platforms, qualitatively coded, and subsequently analyzed quantitatively. The research indicates that there indeed are employment opportunities, especially in the consulting sector. The positions require different kinds of technological expertise, such as specific programming languages and knowledge of statistics. The results of this study provide guidance for organizations and individuals on reskilling requirements for future employment. As many of the positions require profound IT expertise, the generally accepted view that existing employees affected by automation can be retrained for the emerging positions has to be regarded extremely critically. This paper contributes to the body of knowledge by providing a novel perspective on the ongoing discussion of employment opportunities and the reskilling demands of the existing workforce in the context of recent technological developments and automation.
46 percent of jobs in the automotive industry are threatened by automation and digitalization by 2030: these tasks will then no longer be performed by humans but by intelligent robots and systems. This is the central finding of our study "Digitale Transformation – Der Einfluss der Digitalisierung auf die Workforce in der Automobilindustrie", which we produced together with the Herman Hollerith Teaching and Research Center at Reutlingen University.
Significant advances have been achieved in mobile robot localization and mapping in dynamic environments; however, these approaches are mostly incapable of dealing with the physical properties of automotive radar sensors. In this paper, we present an accurate and robust solution to this problem by introducing a memory-efficient cluster map representation. Our approach is validated by experiments on a public parking space with pedestrians, moving cars, and different parking configurations, providing a challenging dynamic environment. The results prove its ability to reproducibly localize our vehicle within an error margin below 1% with respect to ground truth, using only point-based radar targets. A decay process enables our map representation to support local updates.
In this paper, we present a new approach for achieving robust performance of data structures, making it easier to reuse the same design not only for different hardware generations but also for different workloads. The main idea is to strictly separate the data structure design from the strategies used to execute access operations and to adjust the execution strategies by means of so-called configurations, instead of hard-wiring the execution strategy into the data structure. In our evaluation, we demonstrate the benefits of this configuration approach for individual data structures as well as for complex OLTP workloads.
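The separation described above can be sketched in a few lines. This is a minimal illustration under our own assumptions (the class and strategy names are ours, not the paper's): the data structure holds only the data, while the lookup strategy is a swappable configuration.

```python
from bisect import bisect_left

class ConfigurableIndex:
    """Sorted key container; how lookups execute is not hard-wired."""

    def __init__(self, strategy):
        self.keys = []
        self.strategy = strategy          # pluggable execution strategy

    def insert(self, key):
        self.keys.insert(bisect_left(self.keys, key), key)

    def contains(self, key):
        return self.strategy(self.keys, key)

def linear_scan(keys, key):               # may win on tiny collections
    return any(k == key for k in keys)

def binary_search(keys, key):             # wins on large sorted collections
    i = bisect_left(keys, key)
    return i < len(keys) and keys[i] == key

idx = ConfigurableIndex(binary_search)
for k in [5, 1, 9, 3]:
    idx.insert(k)
print(idx.contains(9), idx.contains(4))   # True False
idx.strategy = linear_scan                # reconfigure without rebuilding
print(idx.contains(3))                    # True
```

Because the strategy is a value rather than baked-in code, the same structure can be retuned for a new hardware generation or workload by swapping the configuration, which is the robustness property the paper targets.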
The invention relates to a wheelchair with a frame with wheels, a seat, and two footplates displaceable relative to the seat, as well as a training device for movement therapy of the lower extremities of a person sitting in the wheelchair. In order to simplify the design of the training device, it contains an electric machine that can be attached to the frame, is controlled by a control unit, and can be operated independently of the driving motion of the wheelchair; the machine is mechanically coupled to the two footplates in order to displace them alternately in a forced manner.
RoPose-Real: real world dataset acquisition for data-driven industrial robot arm pose estimation
(2019)
Smart sensory systems are necessary in dynamic and mobile workspaces where industrial robots are mounted on mobile platforms. Such systems should be aware of flexible, non-stationary workspaces and able to react autonomously to changing situations. Building upon our previously presented RoPose system, which employs a convolutional neural network architecture trained on purely synthetic data to estimate the kinematic chain of an industrial robot arm, we now present RoPose-Real. RoPose-Real extends the prior system with a convenient, targetless extrinsic calibration tool that allows automatically annotated datasets to be produced for real robot systems. Furthermore, we use the novel datasets to train the estimation network with real-world data. The extracted pose information is used to automatically estimate the pose of the observing sensor relative to the robot system. Finally, we evaluate the performance of the presented subsystems in a real-world robotic scenario.
As production workspaces become more mobile and dynamic, it becomes increasingly important to reliably monitor the overall state of the environment. In such settings, manipulators and other robotic systems will likely have to act autonomously alongside humans and other systems within a joint workspace. Such interactions require that all components in non-stationary environments be able to perceive their state relative to each other. As vision sensors provide a rich source of information to accomplish this, we present RoPose, a convolutional neural network (CNN) based approach to estimate the two-dimensional joint configuration of a simulated industrial manipulator from a camera image. This pose information can further be used by a novel targetless calibration setup to estimate the pose of the camera relative to the manipulator's workspace. We present a pipeline to automatically generate synthetic training data and conclude with a discussion of how the same pipeline could be used to acquire real image datasets of physically existing robots.
Rotating machinery occupies a predominant place in many industrial applications. However, rotating machines are often affected by severe vibration problems. The measurement of these machines' vibration signals is of particular importance, since it plays a crucial role in predictive maintenance. When the vibrations are too high, they often cause fatigue failure; they announce an unexpected stop or breakdown and, consequently, a significant loss of productivity or a risk to personnel safety. Therefore, identifying faults at an early stage will significantly enhance machine health and reduce maintenance costs. Although considerable efforts have been made to master the field of machine diagnostics, the usual signal processing methods still present several drawbacks. This paper examines rotating machinery condition monitoring in the time and frequency domains. It also provides a framework for the diagnosis process based on machine learning by analyzing the vibratory signals.
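The frequency-domain side of such monitoring can be sketched as follows. This is a generic illustration, not the paper's framework: a synthetic vibration signal with an assumed dominant 50 Hz component (e.g. an imbalance at 3000 rpm) is analyzed with an FFT, which recovers that component despite additive noise.

```python
import numpy as np

fs = 1000                                   # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)                 # one second of signal
rng = np.random.default_rng(0)
# 50 Hz sinusoid (assumed fault frequency) buried in measurement noise
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))      # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)     # matching frequency axis
peak = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak:.1f} Hz")
```

A condition monitoring system would compare such spectral peaks against the characteristic fault frequencies of bearings, gears, or shafts; machine learning models can then classify the extracted spectral features.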
Methods based exclusively on heart rate hardly allow differentiating between physical activity, stress, relaxation, and rest, which is why an additional sensor, such as an activity/movement sensor, is added for detection and classification. The heart's response to physical activity, stress, relaxation, and no activity can be very similar. In this study, we observe the influence of induced stress and analyze which metrics could be considered for its detection. Changes in the Root Mean Square of the Successive Differences provide information about physiological changes. A set of measurements collecting the RR intervals was taken; the intervals are used as a parameter to distinguish four different stages. Parameters like skin conductivity or skin temperature were not used, because the main aim is to keep the number of sensors and devices to a minimum and thereby increase wearability in the future.
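The RMSSD metric named above is straightforward to compute from RR intervals. A minimal sketch, with made-up interval values (the study's recordings are not reproduced here); stress typically shortens RR intervals and reduces beat-to-beat variation, so RMSSD drops.

```python
import math

def rmssd(rr_ms):
    """Root Mean Square of Successive Differences of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR interval series in milliseconds (synthetic values):
rest = [810, 820, 795, 830, 805, 815]      # larger beat-to-beat variation
stress = [640, 642, 639, 641, 640, 643]    # low variation under stress

print(round(rmssd(rest), 1), round(rmssd(stress), 1))
```

Tracking this value over sliding windows of RR intervals is one way such a single-sensor setup can flag transitions between the four stages.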