Modern component-based architectural styles, e.g., microservices, enable developing the components independently from each other. However, this independence can lead to problems when it comes to managing issues, such as bugs, because developer teams can freely choose their technology stacks, including issue management systems (IMSs) such as Jira, GitHub, or Redmine. In a microservice architecture, if an issue of a downstream microservice depends on an issue of an upstream microservice, this must be both identified and communicated, and the downstream service's issue should link to its causing issue. However, agile project management today requires efficient communication, which is why more and more teams communicate through comments in the issues themselves. Unfortunately, IMSs are not integrated with each other; thus, semantically linking these issues is not supported, and identifying such issue dependencies across different IMSs is time-consuming and requires manual searching in multiple IMS technologies. This results in many context switches and prevents developers from staying focused and getting things done. Therefore, in this paper, we present a concept for seamlessly integrating different IMS technologies with each other and providing better architectural context. The concept is based on augmenting the websites of issue management systems through a browser extension. We validate the approach with a prototypical implementation for the Chrome browser. For evaluation, we conducted expert interviews, which confirmed that the presented approach provides significant advantages for managing issues of agile microservice architectures.
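To picture the linking step such an extension has to perform, the following is a minimal Python sketch of resolving cross-IMS issue references found in comments into navigable links; the reference formats and base URLs are assumptions, and the actual prototype operates as a Chrome extension on rendered issue pages.

```python
import re

# Illustrative reference patterns; the real extension would match them inside
# rendered issue pages of Jira, GitHub, or Redmine (base URLs are assumptions).
PATTERNS = [
    ("jira",    re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b"),
     lambda m: f"https://jira.example.com/browse/{m.group(1)}"),
    ("github",  re.compile(r"\b([\w.-]+/[\w.-]+)#(\d+)\b"),
     lambda m: f"https://github.com/{m.group(1)}/issues/{m.group(2)}"),
    ("redmine", re.compile(r"\bredmine#(\d+)\b"),
     lambda m: f"https://redmine.example.com/issues/{m.group(1)}"),
]

def resolve_issue_links(comment: str) -> list[tuple[str, str]]:
    """Find cross-IMS issue references in a comment and build links to them."""
    links = []
    for ims, pattern, to_url in PATTERNS:
        for match in pattern.finditer(comment):
            links.append((ims, to_url(match)))
    return links

print(resolve_issue_links("Caused by PAY-42 upstream, see also acme/gateway#17"))
# -> [('jira', 'https://jira.example.com/browse/PAY-42'),
#     ('github', 'https://github.com/acme/gateway/issues/17')]
```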
Different network architectures are being used to build remote laboratories. Historically, it has been difficult to integrate industrial control systems with higher-level IT systems like enterprise resource planning (ERP), manufacturing execution systems (MES), and manufacturing operations management (MOM). Getting these systems to communicate with one another has proven difficult due to the absence of shared protocols between them. The Open Platform Communications Unified Architecture (OPC UA) protocol was introduced as a remedy for this issue and is gaining popularity, but what if open-source protocols that are widely used in the IT industry could be used instead? This paper presents the development of an IT architecture for a cyber-physical industrial control systems laboratory that enables seamless interconnection and integration of its elements. The architecture utilises Node-RED, an open-source programming platform developed by IBM that is focused on making it simple to link physical components, APIs, and web services. This cyber-physical laboratory is for learning the principles of an industrial cascaded process control factory. Finally, the paper also discusses future work relating to digital twins (DT). A coupled tank system is selected as a teaching factory to illustrate a range of fluid control applications in a typical chemical process factory.
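As a rough illustration of the kind of process model and telemetry such a laboratory can expose to Node-RED flows, the following sketch simulates a coupled tank system and emits JSON messages that, for example, an MQTT-in node could ingest; all plant constants are illustrative, not the laboratory's actual parameters.

```python
import json

# Minimal discrete-time model of a coupled tank system (all parameters are
# illustrative, not the laboratory's actual plant constants).
A1 = A2 = 0.01          # tank cross-sections [m^2]
a1 = a2 = 1.0e-4        # outlet orifice areas [m^2]
g  = 9.81
dt = 0.1                # time step [s]

def step(h1, h2, q_in):
    """Advance tank levels h1, h2 by one step under inflow q_in (Torricelli outflow)."""
    q12  = a1 * (2 * g * max(h1, 0.0)) ** 0.5   # tank 1 -> tank 2
    qout = a2 * (2 * g * max(h2, 0.0)) ** 0.5   # tank 2 -> drain
    h1 += dt * (q_in - q12) / A1
    h2 += dt * (q12 - qout) / A2
    return max(h1, 0.0), max(h2, 0.0)

h1 = h2 = 0.0
for k in range(5):
    h1, h2 = step(h1, h2, q_in=2.0e-4)
    # Payload a Node-RED flow could ingest, e.g. via an MQTT-in node.
    print(json.dumps({"t": round(k * dt, 1), "h1": round(h1, 4), "h2": round(h2, 4)}))
```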
The basis for developing future products in the automotive industry is finding creative and innovative solutions. Ideas can be found by means of creativity methods that support product developers throughout the creative process. Product developers are provided with a variety of different and new methods. This leads to a "method jungle" in which it is difficult for product developers to find the most suitable path. The successful use of methods in product development goes hand in hand with the acceptance and implementation of the methods. Despite the added value, only low usage is observed in the development process. The field of Creativity Support Tools (CSTs) also offers a wide variety of tools that support the creative process. However, a chasm exists between the many CSTs that are developed and what creative practitioners actually use. Therefore, previous studies iteratively developed a user-centered tool called "IDEA" that tries to respond to users' needs. The question arises how the developed tool IDEA performs in a real-life setting regarding its UX and usability as well as creativity method acceptance and the level of mental workload.
Transforming our food system is essential to achieving global climate neutrality and food security. Germany has set a national target of reaching a 30% share of organic farming to support this goal. When looking at the transformation process from conventional to organic farming, it becomes apparent that measures need to be taken to reach this anticipated goal. A particular emphasis of this work is placed on finding a digital solution and process improvements to ensure longevity and efficiency. Interviews with actors along the farm-to-fork value chain were conducted to identify central barriers and drivers of organic transformation. The results of the interviews show, firstly, that three subsystems need to be distinguished when talking about the farm-to-fork value chain: (1) farmers, (2) intermediaries, and (3) the canteen system. Although all three subsystems can be combined to form a coherent value chain, they rarely act and communicate beyond the boundaries of their subsystem. Secondly, we were able to allocate primary barriers and drivers to each of the subsystems, highlighting the need to include all three in the transformation process and to aim for a comprehensive digital solution. This work explores the potential of a network-based platform to improve the current practice of rigid and strictly hierarchical value chains. We focus on deriving user requirements from the interviews to describe the necessary functionality of the platform to address the identified barriers and exploit existing drivers.
Applications often need to be deployed in different variants due to different customer requirements. However, since modern applications often need to be deployed using multiple deployment technologies in combination, such as Ansible and Terraform, the deployment variability must be considered in a holistic way. To tackle this, we previously developed Variability4TOSCA and the prototype OpenTOSCA Vintner, which is a TOSCA preprocessing and management layer that implements Variability4TOSCA. In this demonstration, we present a detailed case study that shows how to model a deployment using Variability4TOSCA, how to resolve the variability using Vintner, and how the result can be deployed.
Intelligent Tutoring Systems (ITSs) are increasingly used in modern education to automatically give students individual feedback on their performance. The advantage for students is fast individual feedback on their answers to the questions asked, while lecturers benefit from considerable time savings and easy delivery of educational material. Of course, it is important that the provided feedback is as effective as direct feedback from the lecturer. However, in digital teaching, lecturers cannot assess a student's knowledge precisely but can only see which questions were answered correctly and incorrectly. Therefore, this paper presents a concept for integrating ITS elements into the gamified e-learning platform IT-REX so that the feedback quality can be improved to support students in the best possible way.
The members of the European TRIZ Campus (ETC) have been learning from and working together with many honorable members of MATRIZ Official for many years and feel very connected to the official International TRIZ Association.
To further spread the TRIZ methodology and TRIZ teaching in the European area, the ETC has put a lot of thought over the past 12 months into how to make TRIZ accessible to a broader audience; getting more professionals in touch with the methodology was one of the focal points.
To this end, we have developed new formats such as the "Trainer Day" to support trainers on their way into practice. We have drawn up detailed quality guidelines for the teaching of the TRIZ methodology, which are intended to provide orientation for the design of training classes and documentation. We strive for exchange with representatives of "neighbouring" methods such as Six Sigma, Lean, DFMA and Design Thinking to indicate synergies and added value among methods and approaches of different kinds. We are testing formats for community building in order to connect users everywhere more strongly with the TRIZ methodology through communication and information offers. If TRIZ users feel alone in their organizations, the exchange outside their organization helps them to keep up with the TRIZ methodology. Moreover, the ETC strives to increase the ability to communicate the benefits of TRIZ usage inside organizations. We discuss how to reach teachers and students of all ages to make this unique way of inventive thinking accessible to them.
In our paper, we want to give other MATRIZ Official (MO) members insights and share our experiences and best practices with our fellow MO members.
Organizational agility may be an antidote against threats from volatile, uncertain, complex, or ambiguous corporate environments. While agility has been extensively examined in manufacturing enterprises, comparably less is known about agility in knowledge-intensive organizations. As results may not be transferable, there is still some confusion about how agility in knowledge-intensive organizations can be characterized, what factors facilitate its development, what its organizational effects are, and what environmental conditions favor these effects. This study closes these gaps by presenting a systematic literature review on agility in knowledge-intensive organizations. A systematic literature search led to a sample of 37 relevant papers for our review. Integrating the knowledge-based view and a dynamic capabilities perspective, we (1) present different relevant conceptualizations of organizational agility, (2) discuss relevant knowledge-management-related as well as information-technology-related capabilities that support the development of organizational agility, and (3) shed light on the moderating role of environmental conditions in enhancing organizational agility and its effect on organizational performance. This paper adds value to theory by synthesizing existing research on agility in knowledge-intensive organizations. It may furthermore serve as a map for closing research gaps by proposing an extensive agenda for future research. Our study expands existing literature reviews on agility with its specific focus on a knowledge-intensive context and its integration of the research streams of knowledge management capabilities and information technology capabilities. It integrates relevant organizational knowledge management practices and the use of knowledge management systems to ensure superior performance effects. Our study can serve as a basis for future examinations of organizational agility by illustrating fruitful topics and open questions for further examination. It may also provide value to practitioners by showing which factors favor the development of agility in knowledge-intensive organizations and which organizational effects can be achieved under which conditions.
Knowledge-intensive organizations primarily rely on knowledge and expertise as key strategic resources. In light of the economic, social, and health-related crises of recent years, such organizations increasingly need to operate in dynamic environments. However, examinations of dynamic capabilities specifically in knowledge-intensive organizations remain scarce. This is remarkable given the role that knowledge holds as an economic resource in developed countries. To explain how knowledge-intensive organizations can prevail among competitors under dynamic conditions, the authors integrate two literature streams in a knowledge-intensive context: the knowledge-based view and the dynamic capabilities approach. The knowledge-based view focuses on the nature of organizational knowledge as a critical resource and illustrates specific properties of knowledge in contrast to traditional factors of production such as capital. The dynamic capabilities approach, on the other hand, concerns a firm's ability to integrate, build, and reconfigure internal and external resources and can be drawn on to explain organizational success through adaptation to dynamic contexts. In this conceptual study, the authors propose a research model linking knowledge processes to organizational performance through two different paths: (1) operational capabilities permit organizations to make their living in the present and refer to efficiency; (2) dynamic capabilities allow organizations to change their resource base and, therefore, enable their long-term survival in dynamic environments by focusing on effectiveness. Additionally, the authors hypothesize a moderating effect of environmental dynamics on the relationship between dynamic capabilities and performance. The study offers a comprehensive overview of the interplay between dynamic capabilities and the knowledge-based view, offering valuable insights for both researchers and practitioners in the field.
In recent years, the demand for accurate and efficient 3D body scanning technologies has increased, driven by the growing interest in personalised textile development and health care. This position paper presents the implementation of a novel 3D body scanner that integrates multiple RGB cameras and image stitching techniques to generate detailed point clouds and 3D mesh models. Our system significantly enhances the scanning process, achieving higher resolution and fidelity while reducing the cost, time and effort required for data acquisition and processing. Furthermore, we evaluate the potential use cases and applications of our 3D body scanner, focusing on the textile technology and health sectors. In textile development, the 3D scanner contributes to bespoke clothing production, allowing designers to construct made-to-measure garments, thus minimising waste and enhancing customer satisfaction through well-fitting clothing. In mental health care, the 3D body scanner can be employed as a tool for body image analysis, providing valuable insights into the psychological and emotional aspects of self-perception. By exploring the synergy between the 3D body scanner and these fields, we aim to foster interdisciplinary collaborations that drive advancements in personalisation, sustainability, and well-being.
Patterns are virtually simulated in 3D CAD programs before production to check the fit. However, achieving lifelike representations of human avatars, especially regarding soft tissue dynamics, remains challenging. This is mainly because conventional avatars in garment CAD programs are simulated with a continuous hard surface that does not correspond to the physical and mechanical properties of human soft tissue. In the real world, the human body's natural shape is affected by the contact pressure of tight-fitting textiles. To verify the fit of a simulated garment, the interactions between the individual body shape and the garment must be considered. This paper introduces an innovative approach to digitising the softness of human tissue using 4D scanning technology. The primary objective of this research is to explore the interactions between tissue softness and different compression levels of apparel, which exert pressure on the tissue, in order to capture the changes in the natural shape. To generate data and model an avatar with soft-body physics, it is essential to capture the deformability and elasticity of the soft tissue and map it into the modification options of a simulation. To achieve this, various methods from different fields were researched and compared, and 4D scanning was evaluated as the most suitable method for capturing tissue deformability in vivo. In particular, it should be considered that the human body has different deformation capabilities depending on age, muscle mass, and body fat. In addition, different tissue zones have different mechanical properties, so it is essential to identify and classify them in order to store these properties for the simulation. It has been shown that by digitising the data obtained at the different defined pressure levels, a prediction of the tissue deformation of that specific person becomes possible. As technology advances and data sets grow, this approach has the potential to reshape how we verify fit digitally with soft avatars and leverage their realistic soft tissue properties for various practical purposes.
This article presents a modified method of performing power flow calculations as an alternative to pure energy-based simulations of off-grid hybrid systems. The enhancement consists of transforming the scenario-based power flow method into a discrete time-dependent algorithm that includes bus and controller dynamics.
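The following sketch illustrates, under strong simplifications, what turning a scenario-based power flow into a discrete time-dependent algorithm means: a controller rule and a storage state are updated at every step before the bus balance is evaluated. All values are invented; the actual method solves the network power flow at each step.

```python
# Sketch of the scenario-based power flow turned into a discrete time-dependent
# loop: at each step, a controller rule updates the storage state before the
# bus power balance is evaluated (all values are illustrative).
dt = 1.0                            # step size [h]
load = [3.0, 4.5, 6.0, 5.0, 2.5]    # bus load per step [kW]
pv   = [0.0, 2.0, 7.0, 4.0, 0.5]    # PV generation per step [kW]

soc, cap, p_max = 5.0, 10.0, 3.0    # storage state/capacity [kWh], power limit [kW]
for p_load, p_pv in zip(load, pv):
    residual = p_pv - p_load                       # >0: surplus, <0: deficit
    p_bat = max(-p_max, min(p_max, residual))      # controller: power limit
    p_bat = max(-soc / dt, min((cap - soc) / dt, p_bat))  # respect energy limits
    soc += p_bat * dt
    p_grid = residual - p_bat                      # genset/grid covers the rest
    print(f"load={p_load:4.1f}kW pv={p_pv:4.1f}kW bat={p_bat:+5.2f}kW "
          f"soc={soc:5.2f}kWh unbalanced={p_grid:+5.2f}kW")
```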
Platforms feature increasingly complex architectures with regard to interconnecting with other digital platforms as well as with a variety of devices and services. This development also impacts the structure of digital platform ecosystems and forces the providers of these services and devices to incorporate this complexity into their decision-making. To contribute to the existing body of knowledge on measuring ecosystem complexity, the present research proposes two key artefacts based on ecosystem intelligence. On the one hand, complementarity graphs represent ecosystems with an ecosystem's functional modules as vertices and complementarities as edges; the nodes carry information about the category membership of the module. On the other hand, a process is suggested that can collect important information for ecosystem intelligence using proxies and web scraping. Our approach allows substituting for data that is largely unavailable today for competitive reasons. We demonstrate the use of the artefacts in category-oriented complementarity maps, which aggregate the information from complementarity graphs and support decision-making: they show which combinations of module categories create strong and weak complementarities. The paper evaluates the complementarity maps and the data collection process by creating category-oriented complementarity graphs for the Alexa skill ecosystem and concludes with a call for more research based on functional ecosystem intelligence.
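A minimal sketch of a complementarity graph and its aggregation into a category-oriented complementarity map, using networkx; the module names, categories, and weights are invented for illustration.

```python
import networkx as nx
from collections import defaultdict

# Sketch of a complementarity graph: functional modules as vertices (carrying
# their category), complementarities as weighted edges (names are invented).
G = nx.Graph()
G.add_node("smart-light", category="Smart Home")
G.add_node("weather",     category="News & Weather")
G.add_node("routine-hub", category="Productivity")
G.add_edge("smart-light", "routine-hub", weight=0.8)   # strong complementarity
G.add_edge("weather",     "routine-hub", weight=0.3)   # weak complementarity

# Aggregate to a category-oriented complementarity map.
cat_map = defaultdict(float)
for u, v, data in G.edges(data=True):
    pair = tuple(sorted((G.nodes[u]["category"], G.nodes[v]["category"])))
    cat_map[pair] += data["weight"]
print(dict(cat_map))
```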
Online portal "MINTFabrik"
(2023)
The browser-based online portal "MINTFabrik" was created as part of the measures to reduce learning deficits, with the idea of closing a gap that often exists in large online bridging courses: a lack of exercises that are quickly accessible, easy to select, and well tailored to specific courses and their requirements. It was developed in a cooperation between Hochschule Reutlingen and the Tübingen software company "Let's Make Sense GmbH". The portal deliberately dispenses with a lesson structure and consists exclusively of individual learning items, i.e., video tutorials, VisuApps, and exercises, which can be reached via a convenient filtered search and worked on directly. A special feature of the MINTFabrik are micro-courses, which can be created by lecturers and students: small units made up of a few items that can be combined with each other as desired.
Smart factories, driven by the integration of automation and digital technologies, have revolutionized industrial production by enhancing efficiency, productivity, and flexibility. However, the optimization and continuous improvement of these complex systems present numerous challenges, especially when real-world data collection is time-consuming, expensive, or limited. In this paper, we propose a novel method for the semi-automated improvement of smart factories using synthetic data and cause-effect relationships, while incorporating the aspect of self-organization. The method leverages the power of synthetic data generation techniques to create representative datasets that mimic the behaviour of real-world manufacturing systems. Together with the cause-effect relationships, these synthetic datasets serve as a valuable resource for factory optimization, as they enable extensive experimentation and analysis without the constraints of limited or costly real-world data. Furthermore, the method embraces the concept of self-organization within smart factories: by allowing the system to adapt and optimize itself based on feedback from the synthetic data and the cause-effect relationships, the factory can dynamically reconfigure and adjust its processes. To facilitate the improvement process, the method integrates the synthetic data and the cause-effect relationships with advanced analytics and machine learning algorithms. This synergy between human expertise and technological advancements represents a compelling path towards a truly optimized smart factory of the future.
Production planning and control are characterized by unplanned events, so-called turbulences. Turbulences can be external, originating outside the company (e.g., delayed delivery by a supplier), or internal, originating within the company (e.g., failures of production and intralogistics resources). Turbulences can have far-reaching consequences for companies and their customers, such as delivery delays due to process delays. For target-optimized handling of turbulences in production, forecasting methods incorporating process data, in combination with the use of existing flexibility corridors of flexible production systems, offer great potential. Probabilistic, data-driven forecasting methods allow determining the corresponding probabilities of potential turbulences. However, a parallel application of different forecasting methods is required to identify an appropriate one for the specific application. This requires a large database, which often is unavailable and, therefore, must be created first. A simulation-based approach to generating synthetic data is used and validated to create the necessary database of input parameters for the prediction of internal turbulences. To this end, a minimal system for conducting simulation experiments on turbulence scenarios was developed and implemented. A multi-method simulation of the minimal system synthetically generates the required process data, using agent-based modeling for the autonomously controlled system elements and event-based modeling for the stochastic turbulence events. Based on this generated synthetic data and the variation of the input parameters in the forecast, a comparative study of data-driven probabilistic forecasting methods was conducted using a data analytics tool. Forecasting methods of different types (including regression, Bayesian models, nonlinear models, decision trees, ensembles, deep learning) were analyzed in terms of prediction quality, standard deviation, and computation time. This resulted in the identification of appropriate forecasting methods and the required input parameters for the considered turbulences.
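As an illustration of the comparative setup, the following sketch trains quantile models on synthetic process data to approximate a predictive distribution and checks interval coverage; the data-generating process and the choice of gradient boosting are assumptions, not the study's actual tooling.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for simulation-generated process data: process delay as a
# noisy function of queue length and a stochastic disturbance (turbulence).
n = 2000
X = rng.uniform(0, 10, size=(n, 2))                  # queue length, resource load
y = 2.0 * X[:, 0] + rng.exponential(scale=1 + X[:, 1], size=n)

X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]

# Quantile models approximate the predictive distribution instead of one value.
quantiles = [0.1, 0.5, 0.9]
preds = {}
for q in quantiles:
    m = GradientBoostingRegressor(loss="quantile", alpha=q).fit(X_tr, y_tr)
    preds[q] = m.predict(X_te)

coverage = np.mean((y_te >= preds[0.1]) & (y_te <= preds[0.9]))
print(f"empirical 10-90% coverage: {coverage:.2f}")   # ideally ~0.80
```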
The fifth generation of mobile communication (5G) is a wireless technology developed to provide reliable, fast data transmission for industrial applications, such as autonomous mobile robots, and to connect cyber-physical systems using Internet of Things (IoT) sensors. In this context, private 5G networks enable the full performance of industrial applications built on dedicated 5G infrastructures. However, emerging wireless communication technologies such as 5G are a complex and challenging topic for training in learning factories, often lacking physical or visual interaction. Therefore, this paper develops a real-time performance monitoring system for private 5G networks and different industrial 5G devices to visualise the performance of, and the factors influencing, 5G for students and future connectivity experts. Additionally, this paper presents the first long-term measurements of private 5G networks and shows the gap between the actual and targeted performance of private 5G networks.
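A rough sketch of one building block of such performance monitoring, an application-level round-trip-time probe; the real system would measure on dedicated 5G interfaces and devices, so the host, port, and method here are assumptions.

```python
import socket, time, statistics

def tcp_rtt(host: str, port: int = 443, samples: int = 10) -> dict:
    """Rough application-level RTT probe via repeated TCP handshakes
    (the actual monitoring would use dedicated 5G measurement interfaces)."""
    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            rtts.append((time.perf_counter() - t0) * 1000.0)
        time.sleep(0.1)
    return {"min_ms": min(rtts), "median_ms": statistics.median(rtts),
            "max_ms": max(rtts)}

print(tcp_rtt("example.org"))
```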
Since its first publication in 2015, the learning factory morphology has been frequently used to design new learning factories and to classify existing ones. The structuring supports the concretization of ideas and promotes exchange between stakeholders.
However, since the implementation of the first learning factories, the learning factory concept has constantly evolved.
Therefore, in the Working Group "Learning Factory Design" of the International Association of Learning Factories, the existing morphology has been revised and extended based on an analysis of the trends observed in the evolution of learning factory concepts. On the one hand, new design elements were added to the previous seven design dimensions; on the other hand, new design dimensions were introduced. The revised version of the morphology thus provides even more targeted support for the design of new learning factories in the future.
The increase in product variance and shorter product lifecycles result in higher production ramp-up frequencies and promote the usage of mixed-model lines. The ramp-up is considered a critical step in the product life cycle, and in the automotive industry phases of the ramp-up are often executed on separate production lines (pilot lines) or in separate factories (pilot plants) to verify processes and to qualify employees without affecting the production of other products in the mixed-model line. The financial funds required for planning and maintaining dedicated pilot lines prevent small and medium-sized enterprises (SMEs) from applying this approach. Hence, SMEs require different tools for piloting and training during the production ramp-up. Learning islands, on which employees can be trained through induced and autonomous learning, offer a solution. In this work, a concept for their development and application, containing the required organization, activities, and materials, is developed through expert interviews. The results of a case study with a medium-sized automotive manufacturer show that learning islands are a viable tool for employee qualification and process verification during the ramp-up of mixed-model lines.
The presented research is dedicated to estimating the correlation between the level of renewable energy sources and the costs of congestion management in the electric networks of selected European countries. Data from six countries in the North-West European area (Italy, Spain, Germany, France, Poland and Austria) were investigated. Factors considered include grid congestion costs (comprising re-dispatching as well as countertrading costs), gross electricity generation, installed capacity of electric generating facilities, installed capacity of non-dispatchable renewable energy sources, and total electricity consumption. Special attention is paid to the share of renewable energy sources. It is found that grid congestion costs are not clearly affected by the penetration of non-dispatchable renewables in all the analysed countries, and therefore a clear mathematical correlation cannot be extrapolated from the available data. Overall, the results of this research show a loose dependency of grid congestion costs on the penetration of renewables and a strong dependency on the total electricity consumption of the country.
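The core computation is a simple correlation analysis; the following sketch shows it on invented per-year values for one country (the paper's actual data are not reproduced here).

```python
import numpy as np

# Illustrative per-year observations for one country (values invented, not the
# paper's data): congestion costs [M EUR], non-dispatchable RES share [%],
# total consumption [TWh].
congestion  = np.array([180, 220, 310, 280, 350, 420], dtype=float)
res_share   = np.array([18, 21, 25, 24, 29, 33], dtype=float)
consumption = np.array([480, 470, 495, 500, 515, 530], dtype=float)

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

print("r(costs, RES share):  ", round(pearson(congestion, res_share), 2))
print("r(costs, consumption):", round(pearson(congestion, consumption), 2))
```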
Development of an IoT-based inventory management solution and training module using smart bins
(2023)
Flexibility, transparency and changeability of warehouse environments play an increasingly important role in achieving cost-efficient production of small batch sizes. This results in increasing requirements for warehouses in terms of flexibility, scalability, reconfigurability and transparency of material and information flows in order to deal with a large number of different components and with variable material and information flows due to small batch sizes. Therefore, an IoT-based inventory management solution and training module has been developed, implemented and validated at Werk150, the factory on campus of the ESB Business School. Key elements of the developed solution are smart bins using weight mats to track each bin's content, plus additional sensors and buttons connected to an IoT hub to collect data on material consumption and manual handling operations. The use of weight mats for the smart bins makes it possible to measure the container content independently of the specific component geometry, and thus for a variety of components, based on the specific component weights. The developed solution enables synchronization of the flow of materials and information, resulting in increased flexibility and significantly higher transparency of the material flow. AI-based algorithms are applied to analyse the gathered data and to initiate process optimizations by providing logistics decision-makers with a profound and transparent basis for decision-making. In order to provide students and industry visitors of the learning factory with the necessary competences and to support the transfer into practice, a training module on IoT-based inventory management was developed and implemented.
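The geometry-independent content measurement reduces to a small computation; a sketch with illustrative weights (the unit weight and reorder point are invented):

```python
# Core smart-bin inference: derive the part count from the weight mat reading,
# independent of part geometry (all weights are illustrative).
def part_count(gross_g: float, tare_g: float, unit_weight_g: float) -> int:
    """Round to the nearest count to absorb sensor noise."""
    return max(0, round((gross_g - tare_g) / unit_weight_g))

def reorder_needed(count: int, reorder_point: int) -> bool:
    return count <= reorder_point

c = part_count(gross_g=1482.0, tare_g=250.0, unit_weight_g=12.3)
print(c, reorder_needed(c, reorder_point=20))   # 100 False
```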
The circular economy aims to support reuse and extend product life cycles through repair, remanufacturing, upgrades and retrofits, as well as to close material cycles through recycling. To successfully manage the necessary transformation processes towards a circular economy, manufacturing enterprises rely on the competency of their employees. The definition of competency requirements for circular-economy-oriented production networks will contribute to the operationalization of the circular economy. The International Association of Learning Factories (IALF) states in its mission the development of learning systems addressing these challenges for the training of students and the further education of industry employees. To identify the competencies required for the circular economy, the major changes in the product life cycle phases were investigated based on the state of the science and compared to the socio-technical infrastructure and thematic fields of the learning factories considered in this paper. To operationalize the circular economy approach in the product design and production phases in learning factories, an approach for a cross-learning-factory network (the so-called "Cross Learning Factory Product Production System", CLFPPS) has been developed. The proposed CLFPPS represents a network along the design dimensions of learning factories. This approach contributes to the promotion of the circular economy in learning factories as it makes use of and combines the focus areas of different learning factories. This enables the CLFPPS to offer a holistic view of the product life cycle in production networks.
The world is becoming increasingly digital. People have become used to learning and interacting with the world around them through technology, a development accelerated even further by the Covid-19 pandemic. This is especially relevant to the generation currently entering education systems and the workforce. Considering digital aids and methods of learning is therefore important for future learning. The increasing need for online learning makes the case for integrating digital learning elements such as serious gaming into education and training systems. Learning factories are among the education and training systems that can benefit from integration with digital learning extensions. Digital capabilities such as digital twins and models further enable the exploration of digital serious games as an extension of learning factories. Since learning factories are meant for a range of different learning, training, and research purposes, such serious games need to be adaptable across stakeholder perspectives to maximize the value gained from the time and cost invested in their design and development. Research into adaptive serious games for multiple stakeholder perspectives must first determine whether such a game can be developed that reaches the objectives set for the different included stakeholder perspectives. The purpose of this research is to investigate this through the practical development of a digital adaptive serious game for several stakeholder perspectives.
Product engineering and subsequent phases of product lifecycles are predominantly managed in isolation. Companies therefore do not fully exploit potentials through using data from smart factories and product usage. The novel intelligent and integrated Product Lifecycle Management (i²PLM) describes an approach that uses these data for product engineering. This paper describes the i²PLM, shows the cause-and-effect relationships in this context and presents in detail the validation of the approach. The i²PLM is applied and validated on a smart product in an industrial research environment. Here, the subsequent generation of a smart lunchbox is developed based on production and sensor data. The results of the validation give indications for further improvements of the i²PLM. This paper describes how to integrate the i²PLM into a learning factory.
This year's Informatics Inside motto is, in my view, impressively demonstrated at present by generative AI tools. ChatGPT, Midjourney and co. enable an innovative interaction with information that challenges us to rethink our previous notions of cognition and value creation. This necessity has been known in computer science since the 1930s, but only the practical realization on modern computers makes the underlying formal considerations tangible. The resulting uncertainties, for example with regard to jobs, are both a challenge and an opportunity to bring this important topic to a broad public. Once again it becomes clear how deeply computer science reaches into our lives and what responsibility comes with it. Against this large backdrop, the reference to bits and bytes in the conference motto might almost seem like an insignificant detail, but that would be far off the mark. Sequences of zeros and ones still form the building blocks of computer science, and it is the task of applied computer science to combine them into useful and meaningful applications.
Informatics Inside provides a suitable setting for this already during academic education. Our students plan, organize and run this conference independently every year. The topics for the technical contributions were also selected by the students themselves. In my view, the resulting papers in these proceedings reflect very well the exciting variety of application topics in human-centered computing. They also clearly show our students' willingness to take responsibility for a meaningful and creative shaping of the digital future.
Reutlingen, November 15, 2023. Prof. Dr. rer. medic. Christian Thies
This article argues for Education for Sustainable Development (ESD) in the textile and fashion sector and shows how it can be implemented from elementary school to higher education and vocational training. It begins by highlighting the non-sustainable practices and deficits found in the fashion and textile sector worldwide and explains the sustainability goals in the context of the UN roadmap ESD for 2030. In order to raise awareness for sustainability and implement these goals, education is needed. The article introduces the concept of ESD as a guiding principle with design competence as its core element, implemented through the interdisciplinary method of Design Thinking (DT). Successfully teaching the ESD-relevant design competence requires various didactic principles, which turn out to be very similar to the principles and phases of DT. Within a research project, DT and its potential for implementing ESD were investigated in teaching-learning situations at elementary schools as well as in an interdisciplinary seminar for student teachers. These findings were transferred to the EU project Fashion DIET, which pursues the goal of implementing ESD in the textile and fashion sector. In an online pilot workshop, the methods and principles of DT were presented and explained to lecturers, teachers and educators, who gave feedback on the potential of DT as a method for implementing ESD as a guiding principle in their curricula.
It is widely recognized that Education for Sustainable Development (ESD) plays a critical role in creating a more sustainable world by fostering the development of the knowledge, skills, understanding, values, and actions necessary for such change (UNESCO, 2020). In this context, ESD represents a holistic approach that focuses on lifelong learning to create informed people who can make decisions today and in the future. Related to the textile and fashion industry, ESD is an appropriate approach to continuously implement sustainability aspects in education and training. To achieve this goal, the European project "Sustainable Fashion Curriculum at Textile Universities in Europe - Development, Implementation and Evaluation of a Teaching Module for Educators" (Fashion DIET) has developed a digital teaching module in a partnership between a University of Education and universities with textile departments. The main objective of the project is to elaborate an ESD module for university lecturers in order to introduce a sustainable fashion curriculum in textile universities in Europe and implement it in educational systems. The project therefore aims to train educators along the textile supply chain, to inform the young generation about the latest aspects of sustainability and raise awareness by implementing ESD in textile education. This paper presents the learning outcomes of the modules on sustainable fashion design and related production technologies developed by the technical university partners, as part of the total of 42 courses covering didactic-methodological approaches and the sustainable orientation of the fashion market, offered at the consortium level. The project content is made available as Open Educational Resources through Glocal Campus, an open-access e-learning platform that enables virtual collaboration between universities.
We address the problem of 3D face recognition based on either 3D sensor data or a 3D face reconstructed from a 2D face image. We focus on 3D shape representation in terms of a mesh of surface normal vectors. The first contribution of this work is an evaluation of eight different 3D face representations and their multiple combinations. An important contribution of the study is the proposed implementation, which allows these representations to be computed directly from 3D meshes instead of point clouds, which enhances their computational efficiency. Motivated by the results of the comparative evaluation, we propose a 3D face shape descriptor, named Evolutional Normal Maps, that assimilates and optimises a subset of six of these approaches. The proposed shape descriptor can be modified and tuned to suit different tasks. It is used as input to a deep convolutional network for 3D face recognition. An extensive experimental evaluation using the Bosphorus 3D Face, CASIA 3D Face and JNU-3D Face datasets shows that, compared to state-of-the-art methods, the proposed approach is better in terms of both computational cost and recognition accuracy.
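The following sketch shows the standard way per-vertex normals can be computed directly from a triangle mesh rather than a point cloud, which is the efficiency argument made above; it is an illustrative re-implementation, not the paper's code.

```python
import numpy as np

def vertex_normals(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Per-vertex unit normals computed directly from a triangle mesh.
    vertices: (V, 3) float array, faces: (F, 3) int array."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    face_n = np.cross(v1 - v0, v2 - v0)           # area-weighted face normals
    normals = np.zeros_like(vertices)
    for i in range(3):                            # accumulate onto the vertices
        np.add.at(normals, faces[:, i], face_n)
    norms = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(norms, 1e-12, None)

# Tiny example: one triangle in the xy-plane -> normals along +z.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
F = np.array([[0, 1, 2]])
print(vertex_normals(V, F))
```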
Introduction: Even though there is a standard procedure for CI surgery, surgical steps often differ individually, especially in pediatric surgery, due to anatomical variations, malformations or unforeseen events. This is why every surgical report should be created individually, which takes time and relies on the correct memory of the surgeon. A standardized recording of intraoperative data with subsequent storage and text processing would therefore be desirable and would provide the basis for subsequent data processing, e.g., in the context of research or quality assurance.
Method: In cooperation with Reutlingen University, we conducted a workflow analysis of the prototype of a semi-automatic checklist tool. Based on checklists automatically generated from BPMN models, a prototype user interface was developed for an Android tablet. Functions such as uploading photos and files, manual user entries, the interception of foreseeable deviations from the normal course of operations, and the automatic creation of surgery documentation were implemented. The system was tested in a remote usability test on a petrous bone model.
Result: The user interface allows simple, intuitive handling that fits well into the intraoperative setting. Clinical data as well as surgical steps could be individually recorded and saved via DICOM. An automatic surgery report could be created and saved.
Summary: The use of a dynamic checklist tool facilitates the capture, storage and processing of surgical data. Further applications in clinical practice are pending.
This project aims to evaluate existing big data infrastructures for their applicability in the operating room to support medical staff with context-sensitive systems. Requirements for the system design were generated. The project compares different data mining technologies, interfaces, and software system infrastructures with a focus on their usefulness in the peri-operative setting. The lambda architecture was chosen for the proposed system design, which will provide data for both postoperative analysis and real-time support during surgery.
Simulation of a decentralized control system for the grid-supportive production of green hydrogen
(2023)
Hydrogen will make a significant contribution to the transformation of industry and society towards a climate-neutral future. Building up a hydrogen infrastructure and using it in an ecologically and economically sensible way are the central challenges. A necessary building block is the efficient provision of green electricity and of the green hydrogen produced from it. This paper presents a decentralized control and communication system that reconciles the supply of and demand for green electricity and hydrogen in a system of decentralized actors. A simulation environment developed for this purpose illustrates the function and benefit of this decentralized approach.
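A minimal sketch of the decentralized matching idea: electrolyser agents decide locally from a broadcast green-power surplus signal. All figures, including the specific energy demand per kilogram of hydrogen, are illustrative and not taken from the paper.

```python
# Minimal sketch of decentralized supply/demand matching: each electrolyser
# agent decides locally from a broadcast green-power surplus signal
# (all figures are illustrative, not the paper's control scheme).
surplus_kw = [120.0, 40.0, -30.0, 200.0]      # green surplus per 15-min interval
agents = [{"id": "ely-1", "p_max": 80.0}, {"id": "ely-2", "p_max": 100.0}]

for t, s in enumerate(surplus_kw):
    remaining = max(s, 0.0)
    for a in agents:                           # simple priority order
        p = min(a["p_max"], remaining)
        remaining -= p
        h2_kg = p * 0.25 / 55.0                # 15-min slot, ~55 kWh per kg H2
        print(f"t={t} {a['id']}: {p:5.1f} kW -> {h2_kg:.3f} kg H2")
```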
The replacement of conventional material with recyclates affects product personality, particularly regarding sustainability aspects influencing consumer behaviour. A definition of personality for products made of recyclates is missing in the literature. As these products require appropriate aesthetics based on the material's origin to communicate their advantage concerning sustainability, there is a need for research in this regard. This paper aims to develop an adequate personality for a reusable water bottle made of ocean plastic by collecting personality traits that evoke associations related to the material's origin and sustainability. We conducted two quantitative field studies. Study 1 collected associated visually perceived attributes and context-related personality traits in order to develop and visualize a preliminary design. Study 2 evaluated the design regarding associated personality traits. The overall outcome was a product personality scale consisting of 23 items plus a concrete design recommendation for a water bottle made of recycled ocean plastic. The assessment of the degree of sustainability was strongly influenced by participants' associations with personal use, familiarity with usage, and the factor of stability and resilience.
Mobile monitoring of outpatients during cancer therapy becomes possible through technological advancements. This study leveraged a new remote patient monitoring app for in-between systemic therapy sessions. Patients’ evaluation showed that the handling is feasible. Clinical implementation must consider an adaptive development cycle for reliable operations.
The following publication is the proceedings of the student conference Informatics Inside, held in the summer semester of 2023, which is a special event for the Faculty of Informatics and its students. With the publication of their articles in these proceedings, the students have a tangible publication whose content is quality-assured by peer review.
This year brings a new challenge: since 2022, OpenAI's ChatGPT has been available, capable of producing astonishing texts with comprehensible lines of argument. Using the tool to write a scientific article is conceivable and, at the same time, hard to prove. A critical approach to technology is more important than a blanket ban. Nevertheless, rules are needed for dealing with artificial intelligence that delimit the ethically correct use of such tools. It is all the more important to impart comprehensive expertise and critical thinking so that possible errors or cases of plagiarism can be exposed.
This brings us to the heart of the matter: computer science is ubiquitous and present in a great many products in industry and everyday life. The diverse papers of this conference demonstrate this. See for yourself how broad the procedures, algorithms, methods and technology applications are: from augmented reality, to video transmission in the operating room, to standards for structured data and artificial intelligence, the contributions show how far-reaching computer science has become. They all have one thing in common: the human-centered application of technology, which is understood as the basis of all courses in the Human-Centered Computing master's program.
Digital twins deployed in production are important in practice and interesting for research. Currently, mostly structured data, e.g., from sensors and timestamps of related stations, are integrated into digital twins. However, semi- and unstructured data are also important for displaying the current status of a digital twin (e.g., of a machine or a produced good). Process mining and text mining in combination can be used to exploit log file data to understand the current state of the process and to highlight issues. As a result, reactions to issues can be taken more quickly and in a more targeted and cost-oriented way. Applying a design science research approach, a prototype is developed as an artefact based on derived requirements. This prototype helps to understand and clarify the possibilities of process mining and text mining on log data for production-related digital twins. Contributions for practice and research are described. Furthermore, limitations of the research and future opportunities are pointed out.
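A small sketch of the first step such a prototype performs: turning semi-structured log lines into an event log that process mining tools can consume, with error messages collected as input for text mining. The log format and the choice of the station as case identifier are assumptions.

```python
import re
from collections import Counter

LOG_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<station>\S+) (?P<level>INFO|WARN|ERROR) (?P<msg>.*)")

raw = """\
2023-06-01 08:00:12 press-01 INFO cycle started
2023-06-01 08:00:58 press-01 ERROR pressure sensor timeout
2023-06-01 08:01:03 oven-02 INFO cycle started
"""

# Turn semi-structured log lines into an event log for process mining
# (case id = station here; a simplification of the prototype).
events, issues = [], Counter()
for line in raw.splitlines():
    m = LOG_LINE.match(line)
    if not m:
        continue
    events.append((m["station"], m["ts"], m["msg"]))
    if m["level"] == "ERROR":
        issues[m["msg"]] += 1         # text-mining hook: cluster/classify messages

print(len(events), issues.most_common(1))
```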
AI-based prediction and recommender systems are widely used in various industry sectors. However, the general acceptance of AI-enabled systems is still widely uninvestigated. Therefore, we firstly conducted a survey with 559 respondents. The findings suggested that AI-enabled systems should be fair, transparent, consider personality traits, and perform tasks efficiently. Secondly, we developed a system for the Facial Beauty Prediction (FBP) benchmark that automatically evaluates facial attractiveness. As our previous experiments have shown, these results are usually highly correlated with human ratings; consequently, they also reflect human bias in the annotations. An upcoming challenge for scientists is to provide training data and AI algorithms that can withstand distorted information. In this work, we introduce AntiDiscriminationNet (ADN), a superior attractiveness prediction network. We propose a new method to generate an unbiased convolutional neural network (CNN) to improve the fairness of machine learning on facial datasets. To train unbiased networks, we generate synthetic images and weight the training data for anti-discrimination assessments across different ethnicities. Additionally, we introduce an approach with entropy penalty terms to reduce the bias of our CNN. Our research provides insights into how to train and build fair machine learning models for facial image analysis by minimising implicit biases. Our AntiDiscriminationNet finally outperforms all competitors in the FBP benchmark by achieving a Pearson correlation coefficient of PCC = 0.9601.
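The entropy penalty can be sketched as follows: an auxiliary group-prediction head is pushed towards uninformative, high-entropy outputs so the shared features carry less ethnicity information. This is an illustration of the general idea, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def entropy_penalty(logits: torch.Tensor) -> torch.Tensor:
    """Negative mean entropy of an auxiliary group head: adding this term to
    the loss pushes the head towards uninformative (high-entropy) outputs, so
    the shared features carry less group information. A sketch of the idea,
    not the paper's exact formulation."""
    p = F.softmax(logits, dim=1)
    entropy = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1)
    return -entropy.mean()

# total_loss = attractiveness_loss + lambda_fair * entropy_penalty(group_logits)
group_logits = torch.randn(8, 4)                    # batch of 8, 4 groups (toy)
print(entropy_penalty(group_logits).item())
```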
Recent work on database application development platforms has sought to include a declarative formulation of a conceptual data model in the application code, using annotations or attributes. Some recent work has used metadata to include the details of such formulations in the physical database, and this approach brings significant advantages in that the model can be enforced across a range of applications for a single database. In previous work, we have discussed the advantages for enterprise integration of typed graph data models (TGM), which can play a similar role in graph databases, leveraging the existing support for the Unified Modelling Language (UML). Ideally, the integration of systems designed with different models, for example graph and relational databases, should also be supported. In this work, we implement this approach using metadata in a relational database management system (DBMS).
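A minimal sketch of the metadata approach: the typed graph model is persisted in ordinary relational tables so that every application using the database sees and can enforce the same model. The schema shown is an assumption, not the paper's implementation.

```python
import sqlite3

# Sketch: persist typed-graph-model (TGM) metadata in plain relational tables
# so every application sees the same model (this schema is an assumption).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE tgm_node_type (name TEXT PRIMARY KEY);
CREATE TABLE tgm_edge_type (
    name TEXT PRIMARY KEY,
    source TEXT NOT NULL REFERENCES tgm_node_type(name),
    target TEXT NOT NULL REFERENCES tgm_node_type(name),
    max_card INTEGER            -- NULL means unbounded, as in UML multiplicities
);
""")
db.executemany("INSERT INTO tgm_node_type VALUES (?)",
               [("Customer",), ("Order",)])
db.execute("INSERT INTO tgm_edge_type VALUES ('places', 'Customer', 'Order', NULL)")
for row in db.execute("SELECT * FROM tgm_edge_type"):
    print(row)
```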
The Fifteenth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2023), held between March 13-17, 2023, continued a series of international events covering a large spectrum of topics related to advances in the fundamentals of databases, the evolution of the relation between databases and other domains, database technologies and content processing, as well as specifics of application-domain databases.
Advances in different technologies and domains related to databases triggered substantial improvements for content processing, information indexing, and data, process and knowledge mining. The push came from Web services, artificial intelligence, and agent technologies, as well as from the generalization of XML adoption.
High-speed communications and computations, large storage capacities, and load balancing for distributed database access allow new approaches for content processing with incomplete patterns, advanced ranking algorithms, and advanced indexing methods.
The evolution of e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems puts pressure on database communities to push the 'de facto' methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
The Fourteenth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2022), held between May 22-26, 2022, continued a series of international events covering a large spectrum of topics related to advances in the fundamentals of databases, the evolution of the relation between databases and other domains, database technologies and content processing, as well as specifics of application-domain databases.
Purpose
Artificial intelligence (AI), in particular deep learning (DL), has achieved remarkable results for medical image analysis in several applications. Yet the lack of human-like explanations of such systems is considered the principal restriction before utilizing these methods in clinical practice (Yang, Ye, & Xia, 2022).
Methods
Explainable Artificial Intelligence (XAI) provides a human-explainable and interpretable description of the “black-box” nature of DL (Gulum, Trombley, & Kantardzic, 2021). An effective XAI diagnosis generator, namely NeuroXAI (refer to Fig. 1), has been developed to extract 3D explanations from convolutional neural networks (CNN) models of brain gliomas (Zeineldin et al., 2022). By providing visual justification maps, NeuroXAI can help make DL models transparent and thus increase the trust of medical experts.
Results
NeuroXAI has been applied to two applications of the most widely investigated problems in brain imaging analysis, i.e. image classification and segmentation using magnetic resonance imaging (MRI). Visual attention maps of multiple XAI methods have been generated and compared for both applications, which could help to provide transparency about the performance of DL systems.
Conclusion
NeuroXAI helps to understand the prediction process of 3D CNN networks for brain glioma using human-understandable explanations. Results revealed that the investigated DL models behave in a logical human-like manner and can improve the analytical process of the MRI images systematically. Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist medical professionals in the detection and diagnosis of brain tumors. NeuroXAI code is publicly accessible at https://github.com/razeineldin/NeuroXAI
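For illustration, the following is a generic re-implementation of one of the XAI methods such a framework compares, a vanilla gradient sensitivity map for a 3D CNN; it is not the NeuroXAI API (see the repository above for the actual code), and the model and input shapes are toy assumptions.

```python
import torch

def vanilla_gradient_map(model: torch.nn.Module, volume: torch.Tensor,
                         target_class: int) -> torch.Tensor:
    """Generic sensitivity map for a 3D CNN: gradient of the target score
    w.r.t. the input voxels (an illustrative re-implementation of one XAI
    method, not the NeuroXAI API)."""
    volume = volume.clone().requires_grad_(True)
    score = model(volume)[0, target_class]
    score.backward()
    return volume.grad.abs().amax(dim=1)    # collapse the channel dimension

# Toy usage with a random model and a random MRI-like input (assumed shapes).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16**3, 2))
vol = torch.randn(1, 1, 16, 16, 16)
print(vanilla_gradient_map(model, vol, target_class=1).shape)
```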
We introduce bloomRF as a unified method for approximate membership testing that supports both point and range queries. As a first core idea, bloomRF introduces novel prefix hashing to efficiently encode range information in the hash code of the key itself. As a second key concept, bloomRF proposes novel piecewise-monotone hash functions that preserve local order and support fast range lookups with fewer memory accesses. bloomRF has near-optimal space complexity and constant query complexity. Although bloomRF is designed for integer domains, it also supports floating points and can serve as a multi-attribute filter. The evaluation in RocksDB and in a standalone library shows that it is more efficient than and outperforms existing point-range filters by up to 4x across a range of settings and distributions, while keeping the false-positive rate low.
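The prefix-hashing idea can be sketched as follows: in addition to the key, its dyadic prefixes are inserted, so a range lookup only needs to probe the prefixes covering the range. This is a strongly simplified illustration using a set as a stand-in for the bit array, not bloomRF's actual encoding.

```python
import hashlib

# Sketch of the prefix-hashing idea behind range filters like bloomRF: besides
# the key itself, insert its dyadic prefixes; a range lookup then probes the
# prefixes that cover the range (simplified; not bloomRF's actual layout).
BITS = 16
filt = set()   # stand-in for the bit array

def _h(prefix_len: int, value: int) -> str:
    return hashlib.blake2s(f"{prefix_len}:{value}".encode()).hexdigest()[:8]

def insert(key: int):
    for l in range(BITS + 1):                 # all dyadic prefixes of the key
        filt.add(_h(l, key >> (BITS - l)))

def may_contain_point(key: int) -> bool:
    return _h(BITS, key) in filt

def may_contain_range(lo: int, hi: int, l: int = 8) -> bool:
    # Probe every length-l prefix overlapping [lo, hi] (coarse but correct).
    return any(_h(l, p) in filt for p in range(lo >> (BITS - l),
                                               (hi >> (BITS - l)) + 1))

insert(4711)
print(may_contain_point(4711), may_contain_point(4712))   # True False (w.h.p.)
print(may_contain_range(4000, 5000))                       # True
```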
The aim of this paper is to show to what extent artificial intelligence can be used to optimize forecasting capability in procurement, and to compare AI with traditional statistical methods. At the same time, this article presents the status quo of the research project ANIMATE, which applies artificial intelligence to forecast customer orders in medium-sized companies.
Precise forecasts are essential for companies for planning, decision-making and controlling. Forecasts are applied, e.g., in the areas of supply chain, production or purchasing. Medium-sized companies face major challenges in using suitable methods to improve their forecasting ability.
Companies often rely on proven methods from classical statistics, such as the ARIMA algorithm. However, simple statistical methods often fail when applied to complex non-linear prediction tasks.
Initial results show that even a simple MLP ANN produces better results than traditional statistical methods. Furthermore, a baseline of the company (implicit sales expectation) was used to compare performance. This comparison also shows that the proposed AI method is superior.
Before the developed method can become part of corporate practice, it must be further optimized. The model has difficulties with strong declines, for example due to holidays. The authors are confident that the model can be further improved, for example through more advanced methods such as a FilterNet, but also through more data, such as external data on holiday periods.
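The reported comparison can be illustrated with a toy setup: a small MLP with lag features against a naive last-value baseline on an invented order series; this is not the ANIMATE data or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Toy order series with trend + weekly seasonality (invented, not ANIMATE data).
t = np.arange(400)
orders = 50 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, t.size)

# Lag features: predict the next value from the last 14 observations.
LAGS = 14
X = np.stack([orders[i:i + LAGS] for i in range(len(orders) - LAGS)])
y = orders[LAGS:]
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

mae_mlp = np.mean(np.abs(mlp.predict(X_te) - y_te))
mae_naive = np.mean(np.abs(X_te[:, -1] - y_te))      # "last value" baseline
print(f"MLP MAE: {mae_mlp:.2f}  naive MAE: {mae_naive:.2f}")
```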
Industrial practice is characterized by random events, also referred to as internal and external turbulences, which disturb the target-oriented planning and execution of production and logistics processes. Methods of probabilistic forecasting, in contrast to single-value predictions, allow an estimation of the various future outcomes of a random variable in the form of a probability density function instead of predicting a specific single outcome. Probabilistic forecasting methods, which are embedded into the analytics process to gain insights for the future based on historical data, therefore offer great potential for incorporating uncertainty into planning and control in industrial environments. In order to familiarize students with these potentials, a training module on the application of probabilistic forecasting methods in production and intralogistics was developed in the learning factory 'Werk150' of the ESB Business School (Reutlingen University). The theoretical introduction to the topic of analytics, probabilistic forecasting methods, and the transition to the application domain of intralogistics is based on examples from other disciplines, such as weather forecasting and energy consumption forecasting. In addition, data sets of the learning factory are used to familiarize the students with the steps of the analytics process in a practice-oriented manner. After this, the students are given the task of identifying the influencing factors and required information to capture intralogistics turbulences based on defined turbulence scenarios (e.g. failure of a logistical resource) in the learning factory. Within practical production scenario runs, the students apply probabilistic forecasting, using and comparing different probabilistic forecasting methods. The graduate training module allows the students to experience the potentials of using probabilistic forecasting methods to improve production and intralogistics processes in the context of turbulences and to build up corresponding professional and methodological competencies.
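To make the contrast concrete: a single-value prediction collapses tomorrow's demand into one number, while a probabilistic forecast describes the whole distribution of outcomes. The following minimal Python sketch illustrates this with an invented intralogistics demand series; the data and quantile levels are illustrative assumptions, not 'Werk150' data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (invented) history: daily transport orders, occasionally
# disturbed by random "turbulences" such as resource failures or rush orders.
history = rng.poisson(lam=20, size=365) \
    + rng.binomial(1, 0.05, size=365) * rng.integers(10, 30, size=365)

# Single-value prediction: one expected value for tomorrow's demand.
point_forecast = history.mean()

# Probabilistic forecast: an empirical distribution over possible outcomes,
# summarised here by quantiles instead of a single number.
quantiles = np.quantile(history, [0.05, 0.25, 0.50, 0.75, 0.95])

print(f"point forecast: {point_forecast:.1f} orders/day")
print("5/25/50/75/95% quantiles:", np.round(quantiles, 1))
# The quantile spread tells a planner how much buffer capacity is needed to
# absorb turbulences at a chosen service level, e.g. the 95% quantile.
```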
There is a growing consensus in research and practice that value-creating networks and ecosystems are supplementing the traditional distinction between the internal firm and market perspectives. To achieve joint value in ecosystems, it is crucial to align the various interests of independently acting ecosystem actors and create a common vision. In this paper, we argue that the ecosystem-wide use of product roadmaps may help with this. To get a better understanding of how roadmapping is conducted in the dynamic ecosystem environment, we systematize the main characteristics of product roadmaps and perform a conceptual comparison with the known challenges of ecosystem management. Comparing the two concepts of ecosystems and product roadmaps, we highlight the fit between the characteristics and objectives of the roadmaps and the challenges of ecosystem management. Hence, we propose to experiment with the ecosystem-wide use of product roadmaps as well as the empirical study of the challenges emerging in the process and the associated redesign of the roadmaps.
The energy turnaround, digitalization, and decreasing revenues force enterprises in the energy domain to develop new business models. Following a Design Science Research approach, we showed in two action research projects that business models in the energy domain result in complex ecosystems with multiple actors. Additionally, we identified that municipal utilities have problems with the systematic development of business models. To solve this problem, we captured the requirements together with the enterprises' partners in a second phase. Furthermore, we developed a method consisting of the following components: a method for the creative development of a new business model in the form of a Business Model Canvas (BMC); a mapping between the e3Value ontology and the BMC for modelling a business ecosystem; and the Business Model Configurator (BMConfig) prototype for modelling and simulating the e3Value ontology. The business model can thus be quantified and analyzed for its viability. We demonstrate the feasibility of our approach with the business model of a power community.
Data governance has been relevant for companies for a long time. Yet, in the broad discussion on smart cities, research on data governance in particular is scant, even though it plays an essential role in an environment with multiple stakeholders, complex IT structures, and heterogeneous processes. Indeed, not only can a city benefit from the existing body of knowledge on data governance, but it can also make the appropriate adjustments for its digital transformation. Therefore, this literature review aims to spark research on urban data governance by providing an initial perspective for future studies. It provides a comprehensive overview of data governance and the relevant facets embedded in this strand of research. Furthermore, it provides a fundamental basis for future research on the development of an urban data governance framework.
Digital twins: a meta-review on their conceptualization, application, and reference architecture
(2022)
The concept of digital twins (DTs) is receiving increasing attention in research and management practice. However, various facets around the concept are blurry, including conceptualization, application areas, and reference architectures for DTs. A review of preliminary results regarding the emerging research output on DTs is required to promote further research and implementation in organizations. To do so, this paper asks four research questions: (1) How is the concept of DTs defined? (2) Which application areas are relevant for the implementation of DTs? (3) How is a reference architecture for DTs conceptualized? and (4) Which directions are relevant for further research on DTs? With regard to research methods, we conduct a meta-review of 14 systematic literature reviews on DTs. The results yield important insights for the current state of conceptualization, application areas, reference architecture, and future research directions on DTs.
Especially when the potential of technical and organizational measures for ergonomic workplace design is limited, exoskeletons can be considered as innovative ergonomic aids to reduce the physical workload of workers. Recent scientific findings from ergonomic analyses with and without exoskeletons indicate that strain reduction can be achieved, particularly at workplaces with lifting, holding, and carrying processes. Currently, a work system design method is under development, incorporating criteria and characteristics for the design of work systems in which a human worker is supported by an exoskeleton. Based on the properties of common passive and active exoskeletons, factors influencing the human on which an exoskeleton can have a positive or negative effect (e.g. additional weight) were derived. The method will be validated by the conceptualization and setup of several work system demonstrators at Werk150, the factory of ESB Business School on the campus of Reutlingen University, to prove the positive ergonomic effect on humans and to support the process of choosing a suitable exoskeleton. The developed method and demonstrators enable the user to experience the positive ergonomic effects of exoskeletal support in lifting, holding, and carrying processes in logistics and production. The new work system design method will contribute to employees being able to pursue their professional activity longer without substantial injuries and to being deployable more flexibly at different work stations. New work concepts, strategies, and scenarios are also opened up to reduce the risk of occupational accidents and to promote the compatibility of work for employees. A training module is being developed and evaluated with participants from industry and master students to build up competence.
The early involvement of experience gained through intelligence and data analysis is becoming increasingly important for developing new products, leading to a completely different conception of product creation, development, and engineering processes that exploits the advantages the digital twin offers. A novel stage-gate process is introduced that is holistically anchored in learning factories, encompassing idea generation and idea screening at an early stage, beta testing of first prototypes, technical implementation in real production scenarios, business analysis, market evaluation, pricing, service models, as well as innovative social media portals. Corresponding product modelling in the sense of sustainability, circular economy, and data analytics forecasts the product on the market both before and after market launch, with data interpretation interlinked in near real-time. The digital twin represents the link between the digital model and the digital shadow. Additionally, the connection of the digital twin with the product provides constantly updated operating status and process data as well as a mapping of technical properties and real-world behaviour. A future networked product, with embedded information technology and the ability to initiate and carry out its own further development, is able to interact with people and environments and is thus relevant to the way of life of future generations. In today's development work for this new product creation approach, "Werk150" is on the one hand the object of the development itself and on the other hand the validation environment. In the next step, new learning modules and scenarios for trainings at master level will be derived from these findings.
Digital assistants like Alexa, Google Assistant or Siri have seen large adoption over the past years. Using artificial intelligence (AI) technologies, they provide a vocal interface to physical devices as well as to digital services and have spurred an entirely new ecosystem. This comprises the big tech companies themselves, but also a strongly growing community of developers who make these functionalities available via digital platforms. At present, little research is available to understand the structure and the value creation logic of these AI-based assistant platforms and their ecosystem. This research adopts ecosystem intelligence to shed light on their structure and dynamics. It combines existing data collection methods with an automated approach that proves useful in deriving a network-based conceptual model of Amazon’s Alexa assistant platform and ecosystem. It shows that skills are a key unit of modularity in this ecosystem, which is linked to other elements such as service, data, and money flows. It also suggests that the topology of the Alexa ecosystem may be described using the criteria reflexivity, symmetry, variance, strength, and centrality of the skill coactivations. Finally, it identifies three ways to create and capture value on AI-based assistant platforms. Surprisingly, only a few skills use a transactional business model by selling services and goods; many skills are complementary and provide information, configuration, and control services for other skill providers' products and services. These findings provide new insights into the highly relevant ecosystems of AI-based assistant platforms, which might serve enterprises in developing their strategies in these ecosystems. They might also pave the way to a faster, data-driven approach to ecosystem intelligence.
In this paper, we present the results of the workshop on the topic "Co-creation in citizen science (CS) for the development of climate adaptation measures - Which success factors promote, and which barriers hinder, a fruitful collaboration and co-creation process between scientists and volunteers? Under consideration of social, motivational, technical/technological and legal factors", which took place at CitSci2022. We underpinned the mentioned factors with scientific literature. Our findings suggest that a clear communication strategy regarding the project goals and how citizen scientists can contribute is important. In addition, citizen scientists have to feel included and that their contribution makes a difference. To achieve this, it is critical to present the results to them. The relationship between scientists and citizen scientists is also essential to keep the citizen scientists engaged. Notification of meetings and events needs to be given well in advance, and events should be scheduled in the attendees' leisure time. Citizen scientists should be especially supported in technical questions. As a result, they feel appreciated and remain part of the project. Regarding legal factors, the current General Data Protection Regulation was considered important by the participants of the workshop. In further research, we will try to address the individual points and, first of all, improve our communication with the citizen scientists about the project goals and how they can contribute. In addition, we should better share the achieved results.
Production systems are becoming increasingly complex, which means that the main task of industrial maintenance, ensuring the technical availability of a production system, is also becoming increasingly difficult. The previous focus of maintenance efforts on individual machines must give way to a holistic view encompassing the whole production system. Against this background, the technical availability of a production system must be redefined. The aim of this publication is to present different approaches to defining production systems' availability and to demonstrate the effects of random machine failures on key figures, taking into account the complexity of the production system, using a discrete event simulation.
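As an illustration of such a simulation study, the following hypothetical sketch uses the open-source simpy library to model a serial line of two machines with random failures and repairs; all parameters are invented and not taken from the publication. It shows why the availability of the whole system falls below that of each individual machine.

```python
import random
import simpy

SIM_TIME = 100_000.0      # simulated hours (illustrative)
MTBF, MTTR = 90.0, 10.0   # mean time between failures / mean time to repair

class Machine:
    def __init__(self, env):
        self.env = env
        self.up = True
        self.uptime = 0.0
        env.process(self.run())

    def run(self):
        # Alternate between exponentially distributed up and down phases.
        while True:
            t_up = random.expovariate(1 / MTBF)
            self.uptime += t_up
            yield self.env.timeout(t_up)                           # working
            self.up = False
            yield self.env.timeout(random.expovariate(1 / MTTR))   # repair
            self.up = True

env = simpy.Environment()
machines = [Machine(env) for _ in range(2)]

# Sample the joint state of the line once per simulated hour: the serial
# line is only available when ALL machines are up at the same time.
line_up_samples = 0
def monitor(env):
    global line_up_samples
    while True:
        if all(m.up for m in machines):
            line_up_samples += 1
        yield env.timeout(1.0)

env.process(monitor(env))
env.run(until=SIM_TIME)

for i, m in enumerate(machines):
    print(f"machine {i}: availability ~ {m.uptime / SIM_TIME:.3f}")
print(f"line:      availability ~ {line_up_samples / SIM_TIME:.3f}")
# The line availability (~0.81 here) is below each machine's (~0.90),
# illustrating why availability must be defined per system, not per machine.
```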
Perforations of the tympanic membrane (TM) can occur as a result of injury or inflammation of the middle ear. These perforations can lead to conductive hearing loss (HL), where in some cases the magnitude of HL exceeds that attributable to the observed TM perforation alone. We aim with this study to better understand the effects of location and size of TM perforations on the sound transmitting properties of the middle ear.
The middle ear transfer functions (METF) of six human temporal bones (TB; freshly frozen specimens of body donors) were compared before and after perforation of the TM at different locations (anterior or posterior lower quadrant) and with different sizes (1 mm, ¼ of the TM, ½ of the TM, and full ablation). The METF were correlated with a Finite Element (FE) model of the middle ear, in which similar alterations were simulated.
The measured and simulated FE model METFs exhibited frequency- and perforation-size-dependent amplitude losses at all locations and severities. In direct comparison, posterior TM perforations affected the transmission properties to a larger degree than perforations of the anterior quadrant. This could possibly be caused by an asymmetry of the TM, where the malleus-incus complex rotates and results in larger deflections in the posterior TM half than in the anterior half. The FE model of the TM with a sealed cavity suggests that small perforations result in a decrease of TM rigidity and thus in an increase in the oscillation amplitude of the TM, mostly above 1 kHz.
The location and size of TM perforations influence the METF in a reproducible way. Correlating our data with the FE model could help to better understand the pathologic mechanisms of middle-ear diseases. If small TM perforations with uncharacteristically significant HL are observed in daily clinical practice, additional middle ear pathologies should be considered. Further investigations on the loss of TM pretension due to perforations may be informative.
The time has come: application of artificial intelligence in small- and medium-sized enterprises
(2022)
Artificial intelligence (AI) is not yet widely used in small- and medium-sized industrial enterprises (SMEs). The reasons for this are manifold, ranging from poorly understood use cases and a lack of trained employees to insufficient data. This article presents a successful design-oriented case study at a medium-sized company where the described obstacles are present. In this study, future demand forecasts are generated from historical demand data for products at material-number level using a gradient boosting machine (GBM). An improvement of 15% over the status quo (measured by the root mean squared error) could be achieved with rather simple techniques. The motivation, the method, and the first results are presented. Finally, challenges are addressed from which practitioners can derive lessons learned and impulses for their own projects.
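The study's exact feature set and model configuration are not reproduced here, but the general approach can be sketched as follows, assuming lagged demand values as features and scikit-learn's GradientBoostingRegressor as a stand-in GBM implementation; the demand series is synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Illustrative stand-in for historical monthly demand of one material number.
demand = np.maximum(0, 50 + 10 * np.sin(np.arange(120) / 6) + rng.normal(0, 8, 120))

# Turn the series into a supervised problem using the last 12 months as features.
LAGS = 12
X = np.array([demand[i:i + LAGS] for i in range(len(demand) - LAGS)])
y = demand[LAGS:]

# Chronological split: never evaluate on data the model has already seen.
split = len(X) - 24
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
gbm.fit(X_train, y_train)

# Compare against a naive last-value baseline using RMSE, as in the study.
rmse = mean_squared_error(y_test, gbm.predict(X_test)) ** 0.5
naive_rmse = mean_squared_error(y_test, X_test[:, -1]) ** 0.5
print(f"GBM RMSE: {rmse:.2f}  vs. naive last-value RMSE: {naive_rmse:.2f}")
```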
Demand forecasting for intermittent time series is a challenging business problem. Companies have difficulties forecasting this particular form of demand pattern. On the one hand, it is characterized by many non-demand periods, and therefore classical statistical forecasting algorithms, such as ARIMA, only work to a limited extent. On the other hand, companies often cannot meet the requirements for good forecasting models, such as providing sufficient training data. The recent major advances of artificial intelligence in applications are largely based on transfer learning. In this paper, we investigate whether this method, originating from computer vision, can improve the forecasting quality of intermittent demand time series using deep learning models. Our empirical results show that, in total, transfer learning can reduce the mean squared error by 65 percent. We also show that especially short (65 percent reduction) and medium-long (91 percent reduction) time series benefit from this approach.
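A minimal, hypothetical sketch of this transfer-learning idea in Python/Keras: a small forecaster is pre-trained on a long synthetic source series and then fine-tuned, with a reduced learning rate, on a short intermittent target series. Architecture, data, and hyperparameters are invented for illustration and are not those of the study.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(1)
LAGS = 12

def windows(series):
    # Sliding windows of the last LAGS values as model input.
    X = np.array([series[i:i + LAGS] for i in range(len(series) - LAGS)])
    return X[..., None], series[LAGS:]

# Source: plentiful data; target: short and intermittent (mostly zero demand).
source = rng.poisson(5, size=5000).astype(float)
target = rng.poisson(1, size=120) * rng.binomial(1, 0.2, size=120).astype(float)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(LAGS, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# 1) Pre-train on the data-rich source domain.
Xs, ys = windows(source)
model.fit(Xs, ys, epochs=3, verbose=0)

# 2) Fine-tune on the small target series with a lower learning rate,
#    reusing the representation learned on the source domain.
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")
Xt, yt = windows(target)
model.fit(Xt, yt, epochs=20, verbose=0)
print("fine-tuned forecast for next period:",
      model.predict(Xt[-1:], verbose=0)[0, 0])
```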
In order to evaluate the performance of different stapes prosthesis types, a coupled finite element (FE) model of human ear was developed. First, the middle-ear FE model was developed and validated using the middle-ear transfer function measurements available in literature including pathological cases. Then, the inner-ear FE model was developed and validated using tonotopy, impedance, and level of cochlea amplification curves from literature. Both models are based on pre-existing research with some improvements and were combined into one coupled FE model. The stapes in the coupled FE ear model was replaced with a model of a stapes prosthesis to create a reconstructed ear model that can be used to estimate how different types of protheses perform relative to each other as well as to the natural ear. This will help in designing of new innovative types of stapes prostheses or any other type of middle-ear prostheses as well as to improve the ones that are already available on the market.
Simulation models of the middle ear have rarely been used for diagnostic purposes due to their limited predictive ability with respect to pathologies. One big challenge is the large uncertainty and ambiguity in the choice of material parameters of the model.
Typically, the model parameters are determined by fitting simulation results to validation measurements. In a previous study, it was shown that fitting the model parameters of a finite-element model using the middle-ear transfer function and various other measurable output variables from normal ears alone is not sufficient to obtain a good predictive ability of the model on pathological middle-ear conditions. However, the inclusion of validation measurements on one pathological case resulted in a very good predictive ability also for other pathological cases. Although the found parameter set was plausible in all aspects, it was not yet possible to draw conclusions about the uniqueness and the accuracy or the uncertainty of the parameter set.
To answer these questions, statistical solution approaches are used in this study. Using the Monte Carlo method, a large number of plausible model data sets are generated that correctly represent the normal and pathological middle-ear characteristics in terms of various output variables such as impedance, reflectance, umbo, and stapes transfer function. Subsequent principal component analyses (PCA) allow conclusions to be drawn about correlations, quantitative limits, and the statistical density of parameter values.
Furthermore, applying inverse PCA yields numerous plausible parameterizations of the middle-ear model, which can be used for data augmentation and training of a neural network which is capable of distinguishing between a normal middle ear and pathologies like otosclerosis, malleus fixation, and disarticulation based on objectively measured quantities like impedance, reflectance, and umbo velocity.
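Generically, the described pipeline of Monte Carlo sampling, PCA, and inverse PCA for data augmentation can be sketched with scikit-learn as below; the parameter matrix is synthetic and merely stands in for the plausible middle-ear parameter sets of the study.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

# Stand-in for plausible model parameterizations found by Monte Carlo search:
# rows = parameter sets, columns = (correlated) material parameters.
n_sets, n_params = 500, 12
plausible = rng.normal(0, 1, (n_sets, n_params)) @ rng.normal(0, 1, (n_params, n_params))

# PCA reveals correlations between parameters and how many independent
# directions actually matter.
pca = PCA(n_components=0.95)  # keep enough components for 95% of the variance
scores = pca.fit_transform(plausible)
print("independent directions covering 95% variance:", pca.n_components_)

# Inverse PCA: sample new points in the low-dimensional score space and map
# them back, yielding additional plausible parameter sets (data augmentation).
new_scores = rng.normal(scores.mean(0), scores.std(0), (1000, pca.n_components_))
augmented = pca.inverse_transform(new_scores)
print("augmented parameter sets:", augmented.shape)
# Such augmented sets could then feed the training of a classifier mapping
# measured quantities (impedance, reflectance, umbo velocity) to pathologies.
```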
The hearing contact lens® (HCL) is a new type of hearing aid device. One of its main components is a piezoelectric actuator. In order to evaluate and maximize the HCL’s performance, a model of the HCL coupled to the middle ear was developed using a finite element approach. The model was validated step by step, starting with the HCL only. To validate the HCL model, vibrational measurements on the HCL were performed using a Laser Doppler Vibrometer (LDV). Then, a silicone cap was placed onto the HCL to provide an interface between the HCL and the tympanic membrane of the middle-ear model, and additional LDV measurements on temporal bones were performed to validate the coupled model. The coupled model was used to evaluate the equivalent sound pressure of the HCL. Moreover, a deeper insight was gained into the contact between the HCL and the tympanic membrane and its effects on the HCL’s performance. The model can be used to investigate the sensitivity of geometrical and material parameters with respect to performance measures of the HCL and to evaluate the feedback behavior.
A MATLAB toolbox was developed both for teachers performing quick experimental demonstrations during lectures and for students practicing measurement and frequency analysis procedures. The conceptual purpose was to support fundamental acoustics courses with contents defined by DEGA recommendation 102. All implemented functions and parameters are visible at once and quickly adjustable via a GUI without submenus. A user manual is provided with explanations of how to get started and how all implemented functions can be applied. The toolbox probably still contains bugs; all users are welcome to inform the author about their experiences and proposals for improvement. In the future, it is planned to convert "Acoustics" to the MATLAB App Designer format, as MathWorks has announced that GUIDE will be replaced. Useful extensions would be additional tabs containing animations of sound propagation phenomena or sound fields caused by different sources.
The purpose of this paper is to examine the effects of perceived stress on traffic and road safety. One of the leading causes of stress among drivers is the feeling of having a lack of control during the driving process. Stress can result in more traffic accidents, an increase in driver errors, and an increase in traffic violations. To study this phenomenon, the Perceived Stress Questionnaire (PSQ) was used to evaluate the perceived stress while driving in a simulation. The study was conducted with participants from Germany, who were grouped into different categories based on their emotional stability. Each participant was monitored using wearable devices that measured their instantaneous heart rate (HR). The preference for wearable devices was due to their non-intrusive and portable nature. The results of this study provide an overview of how stress can affect traffic and road safety, which can be used for future research or to implement strategies to reduce road accidents and promote traffic safety.
Generating synthetic data is a relevant topic in the machine learning community. As accessible data is limited, the generation of synthetic data plays a significant role in protecting patients' privacy and in providing more possibilities to train a model for classification or other machine learning tasks. In this work, some generative adversarial network (GAN) variants are discussed, and an overview is given of how generative adversarial networks can be used for data generation in different fields. In addition, some common problems of GANs and possibilities to avoid them are shown. Different methods for evaluating the generated data are also described.
Sleep analysis using a polysomnography system is difficult and expensive, which is why we suggest a non-invasive and unobtrusive measurement. Very few people want cables or devices attached to their bodies during sleep. The proposed approach is to implement a monitoring system that does not bother the subject. The result is a non-invasive monitoring system based on detecting the pressure distribution. This system should be able to measure, through the mattress, the pressure differences that occur during a single heartbeat and during breathing. The system consists of two blocks: signal acquisition and signal processing. The whole technology should be economical enough to be affordable for every user. As a result, preprocessed data is obtained for further detailed analysis, using different filters for heartbeat and respiration detection. In the initial filtration stage, Butterworth filters are used.
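As a sketch of such an initial filtration stage, the following snippet separates a synthetic mattress-pressure signal into respiratory and cardiac components using zero-phase Butterworth band-pass filters from scipy.signal; the sampling rate, band limits, and signal composition are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0  # assumed sampling rate of the pressure sensor in Hz

def bandpass(signal, low, high, order=4):
    # Zero-phase Butterworth band-pass: filtfilt applies the filter forward
    # and backward, avoiding phase distortion of the physiological waveforms.
    b, a = butter(order, [low, high], btype="band", fs=FS)
    return filtfilt(b, a, signal)

# Illustrative raw signal: slow respiration (~0.25 Hz), faster heartbeat
# (~1.2 Hz), plus measurement noise.
t = np.arange(0, 60, 1 / FS)
raw = (5 * np.sin(2 * np.pi * 0.25 * t)
       + 0.5 * np.sin(2 * np.pi * 1.2 * t)
       + 0.2 * np.random.default_rng(3).normal(size=t.size))

respiration = bandpass(raw, 0.1, 0.5)  # typical adult breathing band
heartbeat = bandpass(raw, 0.8, 2.5)    # typical resting heart-rate band

print(f"respiration amplitude ~ {respiration.std():.2f}, "
      f"heartbeat amplitude ~ {heartbeat.std():.2f}")
```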
Determination of accelerometer sensor position for respiration rate detection: initial research
(2022)
Continuous monitoring of a patient's vital signs is essential in many chronic illnesses. The respiratory rate (RR) is one of the vital signs indicating breathing diseases. This article presents an initial investigation into determining the accelerometric sensor position for a non-invasive and unobtrusive respiratory rate monitoring system. This research aims to determine the sensor position, in relation to the patient, that can provide the most accurate values of the mentioned physiological parameter. To achieve this, a particular system setup including a mechanical sensor holder construction was used. Breathing signals from five participants in a relaxed state were analyzed. The main criterion for selecting a suitable sensor position was each patient's average acceleration amplitude excursion, which corresponds to the respiratory signal. As a result, we determined one more important parameter of the considered system that had not been defined before.
Today, many scientific works use deep learning algorithms on time series to detect physiological events of interest. In sleep medicine, this is particularly relevant for detecting sleep apnea, specifically obstructive sleep apnea events. Deep learning algorithms with different architectures are used to achieve decent results in accuracy, sensitivity, etc. Although there are models that can reliably determine apnea and hypopnea events, another essential aspect to consider is the explainability of these models, i.e., why a model makes a particular decision. Another critical question is how these deep learning models determine the severity of obstructive sleep apnea in patients based on the apnea-hypopnea index (AHI). In this work, deep learning models trained by two approaches for AHI determination are presented. The approaches vary in the data format the models are fed: full time series and window-based time series.
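For context, the AHI itself is a simple ratio of detected respiratory events per hour of sleep; a minimal helper with invented example numbers might look like this.

```python
def apnea_hypopnea_index(n_apneas: int, n_hypopneas: int,
                         total_sleep_hours: float) -> float:
    """AHI = respiratory events per hour of sleep (standard clinical definition)."""
    return (n_apneas + n_hypopneas) / total_sleep_hours

# Invented example: 40 apneas and 25 hypopneas detected over 6.5 h of sleep.
ahi = apnea_hypopnea_index(40, 25, 6.5)
print(f"AHI = {ahi:.1f}")  # 10.0: mild OSA on the common 5/15/30 severity scale
```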
Sleep is as essential to existence as air, water, and food, as we spend nearly one-third of our time sleeping. Poor sleep quality or disturbed sleep causes daytime somnolence, which impairs the mental and physical quality of daytime activities and raises the risk of accidents. With advancements in sensor and communication technology, sleep monitoring is moving out of specialized clinics and into our everyday homes. Data known from traditional overnight polysomnographic recordings can be extracted using more basic tools and straightforward techniques. Ballistocardiography is an unobtrusive, non-invasive, simple, and low-cost technique for measuring cardiorespiratory parameters. In this work, we present a sensor board interface that facilitates the communication between a force-sensitive resistor sensor and an embedded system, providing a high-performing prototype with an efficient signal-to-noise ratio. We have utilized a multi-physical-layer approach, locating each layer on top of another while supporting a low-cost, compact design with easy deployment under the bed frame.
The importance of sleep for human life is enormous. It affects physical, mental, and psychological health. Therefore, it is vital to recognise sleep disorders in a timely manner in order to be able to initiate therapy. There are two methods for measuring sleep-related parameters: objective and subjective. Whether a subjective method can substitute for an objective one is investigated in this paper. Such a replacement may bring several advantages, including increased comfort for the user. To answer this research question, a study was conducted in which 75 overnight recordings were evaluated. The primary purpose of this study was to compare both ways of measuring total sleep time and sleep efficiency, which are essential parameters for, e.g., insomnia diagnosis and treatment. The evaluation results demonstrated that, on average, there is a 32-minute difference between the two measurement methods when total sleep time is analysed. In contrast, the two measurement methods differ on average by 7.5% for sleep efficiency. It should also be noted that people typically overestimate total sleep time and efficiency with the subjective method, where the perceived values are measured.
This workshop addressed scientific research and development to acquire physiological signals, process signals, and extract relevant data for further analysis. There are very different domains of application, as the following examples show. Tiredness and drowsiness are responsible for a significant percentage of road accidents. There are different approaches to monitoring driver drowsiness, ranging from the driver’s steering behavior to in-depth analysis of the driver, e.g., eye tracking, blinking, yawning, or electrocardiogram (ECG). One of the leading causes of road accidents in Egypt is trucks, buses, cars, motorcycles, and pedestrians all sharing the same infrastructure. The result is more than 12,000 fatalities in road accidents every year; thousands are injured, and some suffer long-term disabilities. A similar effect can be observed in Germany for all types of vehicles. According to the Federal Statistical Office, a high percentage of accidents involving personal injury are directly or indirectly caused by drowsiness.
A different application domain is sleep monitoring: Healthy and sound sleep is a prerequisite for a rested mind and body. Both form the basis for physical and mental health. Healthy sleep is counteracted by sleep disorders, the medically diagnosed frequency of which increases sharply from the age of 40. Increasing acceptance can be promoted by monitoring vital signs during sleep over long periods through the exclusive use of noninvasive technologies. In the case of objective measurement, the vital signs are measured to calculate the sleep phases or sleep efficiency and, after applying the appropriate algorithms, to record the sleep quality. About a quarter of all Germans have the feeling of sleeping poorly. The disruptive factors include problems falling asleep or the subjective feeling that sleep is not restful. About half of those subjectively affected have consulted a doctor. Older people and people living alone are particularly affected. There is no doubt that sleep abnormalities can lead to poor performance throughout the day, physical/somatic illnesses, psychological problems, or even premature death. Prevention, early detection, and therapy support are relevant factors impacting the personal quality of life.
The presented approaches have different application domains but share common methodologies and technologies. Cross-domain thinking and application are essential to successful data acquisition and processing, whether with traditional or cutting-edge approaches.
This year's motto is "Zukunft mIT gestalten" ("Shaping the future with IT"). The contributions mirror the human-centered role of computer science in today's world. Among other things, they show research in artificial intelligence, human-machine interaction, and mixed reality, with applications in medicine, business, and society. A special highlight of the conference is the closing guest lecture by Prof. Dr. Claudia Müller-Birn on the topic of "Human-Centered Data Science".
When wearing compressive garments, the tissue of the human body is altered in relation to its natural shape by the properties of the applied material and by the pattern construction used.
To check the fit of garments, both construction and selected materials can be virtually simulated in 3D on avatars in corresponding CAD programs before fabrication.
The software Blender allows modelling an avatar and generating the different tissue zones with their specific properties, adjusting them with soft-body physics according to tests on real soft tissue; however, the models in Blender mainly use linear springs.
Even though near-data processing (NDP) can provably reduce data transfers and increase performance, current NDP is solely utilized in read-only settings. Synchronization and invalidation mechanisms between host and smart storage that are slow or tedious to implement make NDP support for data-intensive update operations difficult. In this paper, we introduce a low-latency cache-coherent shared lock table for update NDP settings in disaggregated memory environments. It utilizes the novel CCIX interconnect technology and is integrated in neoDBMS, a near-data processing DBMS for smart storage. Our evaluation indicates end-to-end lock latencies of ∼80-100 ns and robust performance under contention.
Since project managers still face problems in managing interorganizational R&D projects, it is a promising approach to manage these projects project-culturally-aware. However, an important prerequisite for a project-culture-aware management is that the involved individual organizations pursue a collaborative strategy. Therefore, our article provides a conceptual approach including a new tool, the Collaborative Iron Triangle, which supports both project sponsors and managers in different phases of the collaboration process to pursue a collaborative strategy in interorganizational R&D projects.
Continuous monitoring of individual vital parameters can provide information for the assessment of one’s health and indications of medical problems in the context of personalized medicine. Correlations between parameters and health issues are to be evaluated. As one project in this topic area, a telemedicine platform is implemented to gather data of outpatients via wearables and accumulate them for physicians and researchers to review. This work extracts requirements, draws use case scenarios, and shows the current system architecture consisting of a patient application, a physician application with a web server, and a backend server application. In further work, the prototype will assist in developing a vendor-free and open monitoring solution. Functionality and usability will be evaluated in an imminent first study.
Blockchain is a technology for the secure processing and verification of data transactions based on a distributed peer-to-peer network that uses cryptographic processes, consensus algorithms, and backward-linked blocks to make transactions virtually immutable. Within supply chain management, blockchain technology offers potential for increasing supply chain transparency, visibility, automation, and efficiency. However, its complexity requires future employees to have comprehensive knowledge of the functionality of blockchain-based applications in order to be able to apply their benefits to scenarios in supply chains and production. Learning factories represent a suitable environment allowing learners to experience new technologies and to apply them to virtual and physical processes throughout value chains. This paper presents a concept to practically transfer knowledge about the technical functionality of blockchain technology to future engineers and software developers working within supply chains and production operations, in order to sensitize them to the advantages of decentralized applications. First, the concept proposes methods to playfully convey immutable backward-linked blocks and the embedding of blockchain smart contracts. Subsequently, the students use this knowledge to develop blockchain-based application scenarios by means of an exemplary product in a learning factory environment. Finally, the developed solutions are implemented with the help of a prototypical decentralized application, which enables a holistic mapping of supply chain events.
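The backward-linking idea that the module conveys can be illustrated with a short, purely didactic Python sketch (not the paper's prototype): every block stores the hash of its predecessor, so tampering with any earlier supply chain event breaks all subsequent links.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's full content.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, event: str) -> None:
    # Each new block records the hash of the previous block (backward link).
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "event": event, "prev_hash": previous})

def verify(chain: list) -> bool:
    # Recompute every predecessor hash; any tampering breaks a link.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
for event in ["goods received", "assembly started", "quality check passed"]:
    append_block(chain, event)

print("chain valid:", verify(chain))        # True
chain[0]["event"] = "goods NOT received"    # tamper with a supply chain event
print("after tampering:", verify(chain))    # False
```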
Physicians in interventional radiology are exposed to high physical stress. To avoid negative long-term effects resulting from unergonomic working conditions, we demonstrated the feasibility of a system, based on the Azure Kinect camera, that gives feedback about unergonomic situations arising during the intervention. The overall feasibility of the approach could be shown.
This article explores the question of how sustainability and labour law are interrelated. The modern world of work is characterised by the growing social and environmental responsibility of companies. Especially in the post-COVID era, sustainability also plays an increasingly important role in the corporate context, which is also noticeable in the so-called ‘war for talent’. Achieving personal career goals is no longer enough for employees today. Corporate values and in particular the so-called ESG criteria (Environment, Social, Governance) are thus also becoming increasingly important in the employment relationship and in corporate reporting requirements. In terms of social sustainability, labour law instruments can, for example, promote the creation of a discrimination-free working environment, the introduction of flexible working time models or the protection of whistleblowers. From an ecological perspective, labour regulations are also suitable for implementing ‘green mobility’ and other measures to reduce companies’ ecological footprints. Working from home, which experienced a huge boom during the COVID-19 pandemic, is also sustainable, especially from an ecological point of view. Appropriate consideration of these sustainable work tools in future corporate social responsibility (CSR) strategies not only creates a competitive advantage but can also be beneficial in recruitment.
Imagine a world in which the search for tomorrow's trends of (software) products is not subject to a long and laborious data search but is possible with a single mouse click. Through the use of artificial intelligence (AI), this reality is made possible and is to be further advanced through research. This study therefore aims to provide an initial overview of this young research field. Based on research, expert interviews, and company and student surveys, current application possibilities of AI in the innovation process (defined as Smart Innovation) and existing challenges that slow down further development are discussed in more detail, and future application possibilities are presented. Finally, a recommendation for action is made for business, politics, and science to help overcome the current obstacles together and thus drive the future of Smart Innovation.
The objective of the project presented here is to develop and optimize an intelligent control algorithm for biogas combined heat and power plants (biogas CHP plants). This is followed by a test phase at a real biogas plant, for which the algorithm is implemented in the plant control system. To assess the extent to which the control algorithm can contribute to relieving power grids, the experiments consider not only the electrical demand of the farm where the plant is located, but also the residual load of the neighboring power grid. The latter is based on data from the nearest substation, scaled so that it represents a settlement that can be co-supplied by the plant's biogas CHP unit. The control algorithm is integrated into the plant control system via a communication structure with a database as the central interface. A first series of experiments, in which the biogas CHP unit is controlled according to the schedules of the intelligent control algorithm, shows promising results. Throughout the entire series, the control algorithm reliably computes new schedules, which are for the most part also implemented very well by the CHP unit. In addition, it can be demonstrated that the use of the algorithm relieves the upstream power grid.
Demand-driven control of decentralized thermal energy systems, such as combined heat and power (CHP) plants and heat pumps, can make a decisive contribution to covering or reducing the residual load, thereby reducing conventional residual power supply and the associated greenhouse gas emissions. To this end, a forecast-based control algorithm was developed at Reutlingen University over several years of research. This contribution presents this control algorithm as well as its practical implementation variants: a version executable purely locally on a programmable logic controller (PLC), and a web service application for the parallel operation of several plants from a central server. Tests on the CHP test bench at Reutlingen University confirm the reliable functioning of the algorithm in the different implementation variants. At the same time, the advantage of demand-driven control over heat-led operation, which is the default particularly for micro-CHP units, is demonstrated in the form of an increase in self-supplied electricity of up to 27%. Beyond demand-driven control, the developed algorithm also serves another field of application: predictable CHP operation, as required, for example, in the form of daily feed-in forecasts within the framework of Redispatch 2.0. The CHP operation can be predicted in two ways: as a first option, heat-led operation can be modeled and forecast directly by the algorithm. Alternatively, the plant can be controlled in a demand-driven manner; the computed optimal schedule then simultaneously constitutes the operating forecast of the CHP unit. The developed control algorithm is thus able to contribute to the success of the energy transition in several ways.
The vast majority of state-of-the-art integrated circuits are mixed-signal chips. While the design of the digital parts of ICs is highly automated, the design of the analog circuitry is largely done manually; it is very time-consuming and prone to error. Among the reasons generally listed for this is often the attitude of the analog designer. The fact is that many analog designers are convinced that human experience and intuition are needed for good analog design. This is why they distrust automated synthesis tools. This observation is quite correct, but it is only a symptom of the real problem. This paper shows that this phenomenon is caused by very concrete technical (and thus very rational) issues. These issues lie in the mode of operation of the typical optimization processes employed for the synthesizing tasks. I will show that the dilemma that arises in analog design with these optimizers is the root cause of the low level of automation in analog design. The paper concludes with a review of proposals for automating analog design.
For more than 13 years, Informatics Inside has been an integral part of the academic year at the Faculty of Informatics of Reutlingen University. The conference is organized independently by students of the master's program Human-Centered Computing and forms an important part of their scientific education. The students have chosen their topics themselves, and often these are questions that have accompanied them throughout their studies. They prepare them in the format of a scientific paper, where content, completeness, and traceability are decisive factors. The results of this in-depth engagement with relevant application topics of computer science can be read in these proceedings. The application domains range from medicine and business to the media. Current questions of the human-centered use of artificial intelligence, software engineering, data analysis and communication, as well as digital transformation are addressed. It becomes clear that the benefit of IT solutions for people is at the heart of the event. The event's motto "IT's Future" says it all, highlighting the relevance of computer science for all areas of life as well as the future innovativeness and competitiveness of industry and research.
For collision and obstacle avoidance as well as trajectory planning, robots usually generate and use a simple 2D costmap without any semantic information about the detected obstacles. Thus, a robot’s path planning will simply adhere to an arbitrarily large safety margin around obstacles. A more optimal approach is to adjust this safety margin according to the class of an obstacle. For class prediction, an image processing convolutional neural network can be trained. One of the problems in the development and training of any neural network is the creation of a training dataset. The first part of this work describes methods and free open-source software allowing the fast generation of annotated datasets. Our pipeline can be applied to various objects and environment settings and is extremely easy for anyone to use for synthesising training data from 3D source data. We create a fully synthetic industrial environment dataset with 10k physically-based rendered images and annotations. Our dataset and sources are publicly available at https://github.com/LJMP/synthetic-industrial-dataset. Subsequently, we train a convolutional neural network with our dataset for costmap safety class prediction. We analyse different class combinations and show that learning the safety classes end-to-end directly with a small dataset, instead of using a class lookup table, improves the quantity and precision of the predictions.
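The core idea of class-dependent safety margins can be sketched as follows; the classes, radii, and costmap conventions are invented for illustration and are not the paper's implementation (which learns the safety classes end-to-end).

```python
import numpy as np

# Hypothetical per-class safety radii in grid cells: a human gets a much
# larger margin than a static shelf, instead of one fixed margin for all.
SAFETY_RADIUS_CELLS = {"shelf": 2, "workbench": 3, "human": 8}

def inflate(costmap: np.ndarray, obstacles: list) -> np.ndarray:
    """Mark a class-dependent circular safety margin around each obstacle."""
    out = costmap.copy()
    for row, col, cls in obstacles:
        r = SAFETY_RADIUS_CELLS[cls]
        rows, cols = np.ogrid[:out.shape[0], :out.shape[1]]
        mask = (rows - row) ** 2 + (cols - col) ** 2 <= r ** 2
        out[mask] = np.maximum(out[mask], 100)  # lethal cost inside margin
    return out

grid = np.zeros((40, 40), dtype=np.uint8)
# Obstacles as (row, col, predicted_class), e.g. from the trained CNN.
inflated = inflate(grid, [(10, 10, "shelf"), (25, 30, "human")])
print("cells blocked:", int((inflated == 100).sum()))
# The planner now keeps extra distance only where it is actually needed.
```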
For more than 12 years now, Informatics Inside has taken place as a computer science conference at Reutlingen University, this year for the second time in a semi-annual rhythm, i.e., also in autumn. This scientific conference of the master's program Human-Centered Computing is organized and carried out by the students themselves. During their master's studies, they get the opportunity to go into depth in a specialist topic of their own choosing. This can be done at the university, in a company, at a research institute, or abroad. It is precisely this flexible design of the module "Scientific Specialization" that leads to the very broad range of topics addressed by the students. In addition to the actual specialist work, the presentation and defense of scientific results also play an important role, far beyond the studies themselves. Preparing and conveying a chosen subject in such a generally understandable way that it becomes comprehensible even for non-specialists is always a particular challenge. The students take on this challenge at the autumn conference on scientific specialization on November 24, 2021. For the fourth time already, the event will take place in an online mode, including a virtual accompanying program.
The range of topics at this year's autumn conference is once again very diverse and broad. Among other things, you can expect contributions from the health sector, machine learning, AI and VR, as well as marketing and e-learning. What they all have in common is a very strong connection to innovative computer science approaches, which is also reflected in the conference's pun and motto "RockIT Science". Computer science permeates almost all professional and private areas of application and has an ever greater influence on our daily lives. This can cause concern on the one hand and enthusiasm on the other. It is precisely the latter that the students want to achieve with their contributions, letting it "rock" in the computer science sector for once.
This paper takes a holistic view of an IP-traceability process in interorganizational R&D projects, as a particular open innovation mode, aiming to show different technologies which can be used in the front end and back end of a traceability process and discussing these technologies in terms of their suitability for data from creativity processes in these projects. To achieve this goal, a two-stage literature review on different technologies in the context of traceability was conducted. Then, criteria were derived from the characteristics of data from creativity processes and of interorganizational R&D projects, against which the resulting technologies were discussed. Finally, recommendations regarding suitable technologies for tracing individual creativity artifacts in interorganizational R&D projects are given.
In various German cities, free-floating e-scooter sharing is an upcoming trend in e-mobility. Trends such as climate change, urbanization, and demographic change, among others, are emerging and force society to develop new mobility solutions. In contrast to the more extensively studied car sharing, the usage patterns and behaviors of e-scooter sharing customers still need to be analyzed. This presumably enables better addressing of customers as well as adaptations of the business model to increase scooter utilization and therefore the profit of the e-scooter providers. The customer journey is digitally traceable from registration to scooter reservation and the ride itself. These data make it possible to identify customer needs and motivations. We analyzed a dataset from 2017 to 2019 of an e-scooter sharing provider operating in a big German city. Based on the datasets, we propose a customer clustering that identifies three different customer segments, enabling multiple conclusions to be drawn for business development and improving the problem-solution fit of the e-scooter sharing model.
Forecasting intermittent and lumpy demand is challenging. Demand occurs only sporadically and, when it does, it can vary considerably. Forecast errors are costly, resulting in obsolescent stock or unmet demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Traditional accuracy metrics are often employed to evaluate the forecasts; however, these come with major drawbacks, such as not taking horizontal and vertical shifts over the forecasting horizon into account, or indeed stock-keeping or opportunity costs. This results in a disadvantageous selection of methods in the context of intermittent and lumpy demand forecasts. In our study, we compare methods from statistics, machine learning, and deep learning by applying a novel metric called Stock-keeping-oriented Prediction Error Costs (SPEC), which overcomes the drawbacks associated with traditional metrics. Taking the SPEC metric into account, the Croston algorithm achieves the best result, just ahead of a Long Short-Term Memory neural network.
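For context, Croston's method, the best performer under SPEC in this study, smooths the non-zero demand sizes and the inter-demand intervals separately and forecasts their ratio; a minimal sketch of the standard formulation with illustrative data follows.

```python
import numpy as np

def croston(demand: np.ndarray, alpha: float = 0.1) -> float:
    """Croston's method (standard formulation): exponentially smooth non-zero
    demand sizes and inter-demand intervals separately; the forecast is
    size / interval. Assumes at least one non-zero demand in the series."""
    size_est, interval_est, periods_since = None, None, 0
    for d in demand:
        periods_since += 1
        if d > 0:
            if size_est is None:  # initialise on the first non-zero demand
                size_est, interval_est = float(d), float(periods_since)
            else:
                size_est += alpha * (d - size_est)
                interval_est += alpha * (periods_since - interval_est)
            periods_since = 0
    return size_est / interval_est  # expected demand per period

# Illustrative intermittent series: many zero periods, occasional demand.
series = np.array([0, 0, 5, 0, 0, 0, 3, 0, 4, 0, 0, 6, 0, 0, 0, 2])
print(f"Croston forecast: {croston(series):.2f} units per period")
```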
Context: Agile practices as well as UX methods are nowadays well known and often adopted to develop complex software and products more efficiently and effectively. However, in the so-called VUCA environment that many companies are confronted with, the sole use of UX research is not sufficient to find the best solutions for customers. The implementation of Design Thinking can support this process. But many companies and their product owners do not know how many resources they should spend on conducting Design Thinking.
Objective: This paper aims at suggesting a supportive tool, the “Discovery Effort Worthiness (DEW) Index”, for product owners and agile teams to determine a suitable amount of effort that should be spent for Design Thinking activities.
Method: A case study was conducted for the development of the DEW index. Design Thinking was introduced into the regular development cycle of an industry Scrum team. With the support of UX and Design Thinking experts, a formula was developed to determine the appropriate effort for Design Thinking.
Results: The developed “Discovery Effort Worthiness Index” provides an easy-to-use tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. A company can map the corresponding Design Thinking methods to the results of the DEW Index calculation, and product owners can select the appropriate measures from this mapping. Therefore, they can optimize the effort spent for discovery and validation.
In this paper, we propose a radical new approach for scale-out distributed DBMSs. Instead of hard-baking an architectural model, such as a shared-nothing architecture, into the distributed DBMS design, we aim for a new class of so-called architecture-less DBMSs. The main idea is that an architecture-less DBMS can mimic any architecture on a per-query basis on-the-fly without any additional overhead for reconfiguration. Our initial results show that our architecture-less DBMS AnyDB can provide significant speedup across varying workloads compared to a traditional DBMS implementing a static architecture.
Facial beauty prediction (FBP) aims to develop a machine that automatically assesses facial attractiveness. In the past, those results were highly correlated with human ratings and therefore also with the biases in their annotations. As artificial intelligence can have racist and discriminatory tendencies, the causes of skews in the data must be identified. Developing training data and AI algorithms that are robust against biased information is a new challenge for scientists. As aesthetic judgement is usually biased, we want to take it one step further and propose an unbiased convolutional neural network for FBP. While it is possible to create network models that can rate the attractiveness of faces on a high level, from an ethical point of view it is equally important to make sure the model is unbiased. In this work, we introduce AestheticNet, a state-of-the-art attractiveness prediction network, which significantly outperforms competitors with a Pearson correlation of 0.9601. Additionally, we propose a new approach for generating a bias-free CNN to improve fairness in machine learning.
Providing clinical information in the operating room is an important aspect of supporting the surgical team. Robot-assisted esophagectomy is a particularly complex procedure that offers potential for workflow-based support. We present first results from the development of a checklist tool, together with the underlying modeling of the surgical workflow and the surgeons' information needs. The checklist tool displays the steps to be performed in chronological order and provides additional information in a context-adaptive manner. Automatic documentation of the start and end times of individual surgical phases and steps is intended to enable future process analyses of the operation.
Assistant platforms are becoming a key element of the business model of many companies. They have evolved from assistance systems that provide support when using information (or other) systems to platforms in their own right. Alexa, Cortana, or Siri may be used with literally thousands of services. Against this background, this paper develops the notion of assistant platforms and elaborates a conceptual model that supports businesses in developing appropriate strategies. The model consists of three main building blocks: an architecture that depicts the components as well as the possible layers of an assistant platform; the mechanism that determines the value creation on assistant platforms; and the ecosystem with its network effects, which emerge from the multi-sided nature of assistant platforms. The model has been derived from a literature review and is illustrated with examples of existing assistant platforms. Its main purpose is to advance the understanding of assistant platforms and to trigger future research.
To ease the transition from school to university, students of technical subjects often need to refresh their knowledge of mathematics and physics. An online learning system for physics can support students in engaging with physics content. In addition, a physics knowledge test can reveal gaps in individual knowledge and motivate students to study the missing topics. The working group "eLearning in der Physik" of the Hochschulföderation Süd-West (HfSW), consisting of the Baden-Württemberg universities of applied sciences Aalen, Esslingen, Heilbronn, Mannheim, and Reutlingen, has compiled a pool of more than 200 physics exercises for first-semester students. They are available to students with solutions in learning management systems for self-study, and now also in the "Zentrales Open Educational Resources Repositorium der Hochschulen in Baden-Württemberg" (ZOERR). This paper reports on the use of the online exercises in 2020/2021, on the results of the knowledge tests, and on the eTutorials newly established during the Corona period.
Today's logistics systems are characterized by uncertainty and constantly changing requirements. Rising demand for customized products, short product life cycles, and a large number of variants increase the complexity of these systems enormously. In particular, intralogistics material flow systems must be able to adapt to changing conditions at short notice, with little effort and at low cost. To fulfil these requirements, the material flow system needs to be flexible in three important parameters, namely layout, throughput, and product. While the scope of these flexibility parameters is described in the literature, their respective effects on an intralogistics material flow system and the influencing factors are mostly unknown. This paper describes how the flexibility parameters of an intralogistics system can be determined using a multi-method simulation. The study was conducted in the learning factory “Werk150” on the campus of Reutlingen University, with its different means of transport and processes, and was validated through practical experiments.
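Purely as an illustration of what varying one flexibility parameter in a simulation means (the actual Werk150 multi-method model is not described here), a toy experiment could vary the number of transport vehicles and observe the effect on waiting times:

# Toy sketch only, not the Werk150 simulation model: vary one flexibility
# parameter (number of transport vehicles) and report the mean order wait.
import random, statistics

def simulate(num_vehicles: int, num_orders: int = 200, seed: int = 42) -> float:
    random.seed(seed)
    free_at = [0.0] * num_vehicles          # time each vehicle becomes free
    clock, waits = 0.0, []
    for _ in range(num_orders):
        clock += random.expovariate(1.0)            # order inter-arrival time
        v = min(range(num_vehicles), key=free_at.__getitem__)
        start = max(clock, free_at[v])
        waits.append(start - clock)
        free_at[v] = start + random.uniform(2, 5)   # transport duration
    return statistics.mean(waits)

for n in (2, 3, 4):
    print(n, "vehicles -> mean wait:", round(simulate(n), 2))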
The production environment faces numerous challenges, but it also opens up many new opportunities. To meet the new requirements caused by the development towards mass customization, human-robot cooperation (HRC) has been identified as a key technology and is becoming more and more important. HRC combines the strengths of robots, such as reliability, endurance, and repeatability, with the strengths of humans, for instance flexibility and decision-making skills. Notwithstanding the high potential of HRC applications, the technology has not yet achieved a breakthrough in production. Studies have shown that one of the biggest obstacles to implementing HRC is the allocation of tasks. Another key technology that offers various opportunities to improve the production environment is artificial intelligence (AI). Therefore, this paper describes an AI-supported method to improve work organization in HRC with regard to task allocation. The aim of this method is to build a dynamic, semi-autonomous group work environment that keeps not only employee motivation at a high level, but also product quality, owing to a decreased failure rate. The AI helps to detect the condition in which the employee delivers the best performance and to identify when the worker leaves this optimal state. As soon as the employee reaches this trigger event, the allocation of tasks is adapted based on the identified stress. This adaptation aims to return the employee to the state of optimal performance. To realize such a dynamic allocation, the method describes the creation of a pool of various interaction scenarios, as well as the AI-supported recognition of the defined trigger event.
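As a hedged sketch of the trigger mechanism described above (the stress measure, threshold, and scenario pool below are invented placeholders, not the paper's method), the reallocation could look like this:

# Hypothetical sketch of trigger-based task reallocation in an HRC cell.
# The stress indicator, threshold, and scenario pool are illustrative only.
STRESS_THRESHOLD = 0.7

# Pool of predefined interaction scenarios, from human-led to robot-led.
SCENARIO_POOL = {
    "low":  {"human": ["assembly", "inspection"], "robot": ["fetching"]},
    "high": {"human": ["inspection"],             "robot": ["fetching", "assembly"]},
}

def allocate(stress_level: float) -> dict:
    # Trigger event: the worker leaves the optimal performance state.
    if stress_level >= STRESS_THRESHOLD:
        return SCENARIO_POOL["high"]    # shift load toward the robot
    return SCENARIO_POOL["low"]

print(allocate(stress_level=0.82))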
Manufacturing companies are confronted with external turbulences (e.g., short-term changes of the product configuration by the customer) and internal turbulences (e.g., production process deviations) that affect the performance of production. Predefined, centrally controlled logistics processes limit the ability of production to initiate countermeasures and react to these turbulences in an optimized way. The autonomous control of intralogistics offers great potential to cope with these turbulences by using the respective flexibility corridors of production systems and applying intelligent logistic objects with decentralized decision and process execution capabilities to maintain a target-optimized production. A method for AI-based storage-location and material-handling optimization has been developed that aims at a performance-optimized intralogistics system through continuous monitoring of performance-relevant parameters and influencing factors using AI (e.g., for pattern recognition). To provide the basis for investigating and demonstrating the potentials of autonomously controlled intralogistics in connection with production turbulences and in combination with AI, an intelligent warehouse comprising an indoor localization system, smart bins, and manual, semi-automated/collaborative, and autonomous transport systems has been developed and implemented at Werk150, the factory on campus of ESB Business School (Reutlingen University). This scenario, which has been integrated into graduate training modules, allows the analysis and demonstration of different intralogistics measures to cope with turbulences in production, involving, among others, storage and material provision processes. The target fulfilment of the applied intralogistics measures in mastering arising turbulences is assessed based on the overall performance of production, considering lead times and adherence to delivery dates. By applying AI algorithms, the intelligent logistical objects (smart bins, transport systems, etc.) as well as the entire logistics system should be enabled to improve their decision and process execution capabilities in order to master short-term turbulences in the production system autonomously.
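As an illustrative sketch only (the optimization implemented at Werk150 is not detailed in this abstract), frequency-based slotting is one simple form of storage-location optimization: assign the most frequently picked bins to the locations with the shortest travel times. All values below are dummy data:

# Illustrative sketch: frequency-based slotting as a simple storage-location
# optimization. Pick frequencies and travel times are dummy values.
pick_frequency = {"bin_A": 120, "bin_B": 45, "bin_C": 300}   # picks per week
travel_time    = {"loc_1": 5, "loc_2": 12, "loc_3": 20}      # seconds to station

# The most frequently picked bin gets the fastest location, and so on.
bins = sorted(pick_frequency, key=pick_frequency.get, reverse=True)
locs = sorted(travel_time, key=travel_time.get)
assignment = dict(zip(bins, locs))
print(assignment)   # {'bin_C': 'loc_1', 'bin_A': 'loc_2', 'bin_B': 'loc_3'}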
Classification model of supply chain events regarding their transferability to blockchain technology
(2021)
Blockchain technology provides a decentralized database that stores information securely in immutable data blocks. For supply chain management, these characteristics offer potential to increase supply chain transparency, visibility, automation, and efficiency. In this context, first token-based mapping approaches exist that transfer certain supply chain events to the blockchain, such as the creation or assembly of parts as well as their transfer of ownership. However, the decentralized and immutable structure of blockchain technology also creates challenges. In particular, scalability, storage capacity, and the special requirements for storage formats currently make it impossible to map all supply chain events to the blockchain without restriction. As a first step, this paper identifies important supply chain events for different use cases combining blockchain technology and supply chain management. Secondly, the supply chain events are classified in terms of their expected technical properties and their relevance for the respective use case. Finally, the identified supply chain events are evaluated regarding their transferability to blockchain technology, and a classification model is introduced.
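A minimal sketch of such a classification step (the property names, thresholds, and categories below are assumptions for illustration, not the model introduced in the paper):

# Hypothetical sketch: rule-based classification of a supply chain event's
# transferability to the blockchain. Fields and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class SupplyChainEvent:
    name: str
    frequency_per_day: int     # how often the event occurs
    payload_bytes: int         # data volume to be stored
    needs_confidentiality: bool

def classify(event: SupplyChainEvent) -> str:
    if event.needs_confidentiality or event.payload_bytes > 10_000:
        return "off-chain (store at most a hash on-chain)"
    if event.frequency_per_day > 1_000:
        return "aggregate, then anchor on-chain"
    return "fully transferable on-chain (e.g., as a token transaction)"

print(classify(SupplyChainEvent("transfer of ownership", 50, 256, False)))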