The massive use of patient data for training artificial intelligence algorithms is common nowadays in medicine. In this scientific work, a statistical analysis is performed on one of the datasets most used for training artificial intelligence models for the detection of sleep disorders: the Sleep Heart Health Study 2 (SHHS-2). This study focuses on determining whether the gender and age of the patients have an influence relevant enough to justify working with differentiated datasets based on these variables for the training of artificial intelligence models.
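As an illustration of the kind of subgroup comparison described above, the following is a minimal Python sketch contrasting a sleep metric between gender groups. The file name and the column names (`gender`, `ahi`) are illustrative assumptions rather than the actual SHHS-2 variable names, and the Mann-Whitney U test merely stands in for whichever statistics the study applies.

```python
import pandas as pd
from scipy import stats

# Hypothetical export of the dataset; file and column names are assumptions.
df = pd.read_csv("shhs2_subset.csv")

# Compare the apnea-hypopnea index (AHI) between gender subgroups.
male = df.loc[df["gender"] == "M", "ahi"].dropna()
female = df.loc[df["gender"] == "F", "ahi"].dropna()

u, p = stats.mannwhitneyu(male, female)
print(f"Mann-Whitney U on AHI by gender: U={u:.1f}, p={p:.4f}")
```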
Accurate monitoring of a patient's heart rate is a key element in medical observation and health monitoring. In particular, its importance extends to the identification of sleep-related disorders. Various methods have been established that involve sensor-based recording of physiological signals followed by automated examination and analysis. This study evaluates the efficacy of a non-invasive HR monitoring framework based on an accelerometer sensor, specifically during sleep. To achieve this goal, the motion induced by thoracic movements during cardiac contractions is captured by a device installed under the mattress. Signal filtering techniques and heart rate estimation using the Symlet-6 (sym6) wavelet are part of the computational framework described in this article. Subsequent analysis indicates the potential applicability of this system in the prognostic domain, with an average error margin of approximately 3 beats per minute. The results obtained represent a promising advancement in non-invasive heart rate monitoring during sleep, with potential implications for improved diagnosis and management of cardiovascular and sleep-related disorders.
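To make the wavelet step concrete, here is a minimal sketch of sym6-based heartbeat extraction with PyWavelets; it is not the authors' pipeline. The decomposition level, the detail band kept, and the minimum peak spacing are assumptions that depend on the sampling rate.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def estimate_heart_rate(signal, fs):
    """Estimate heart rate (BPM) from an under-mattress accelerometer trace."""
    # Multi-level discrete wavelet decomposition with the Symlet-6 wavelet
    coeffs = pywt.wavedec(signal, 'sym6', level=5)
    # Keep only one detail band assumed to carry the cardiac component
    # (e.g., cD3 covers roughly 6-12 Hz at fs = 100 Hz).
    selected = [np.zeros_like(c) for c in coeffs]
    selected[3] = coeffs[3]
    cardiac = pywt.waverec(selected, 'sym6')[: len(signal)]
    # Detect heartbeats as local maxima at least 0.4 s apart (max ~150 BPM)
    peaks, _ = find_peaks(cardiac, distance=int(0.4 * fs))
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min
```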
Software scripts for sensor data extraction on Raspberry Pi: user-space and kernel-space comparison
(2024)
This paper compares two popular scripting implementations for hardware prototyping: Python scripts executed from user space and C-based Linux driver processes executed from kernel space, providing guidance to researchers weighing one against the other for their implementations. The conclusions show that deploying software scripts in kernel space makes it possible to guarantee a certain quality of sensor information using a Raspberry Pi without the need for advanced real-time operating systems.
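As a taste of the user-space side of such a comparison, the sketch below measures how much a Python polling loop drifts from its nominal sampling period, which is one way scheduling jitter manifests in user space. This is an illustrative harness, not the paper's scripts; `read_sensor` is a hypothetical placeholder for the actual I2C/SPI read.

```python
import time
import statistics

def sample_jitter(read_sensor, period_s=0.01, n=1000):
    """Measure mean interval and jitter of a user-space polling loop."""
    deadline = time.monotonic()
    intervals = []
    last = None
    for _ in range(n):
        deadline += period_s
        # Sleep until the next nominal sampling instant
        time.sleep(max(0.0, deadline - time.monotonic()))
        now = time.monotonic()
        read_sensor()  # placeholder for the actual sensor read
        if last is not None:
            intervals.append(now - last)
        last = now
    return statistics.mean(intervals), statistics.pstdev(intervals)

mean, jitter = sample_jitter(lambda: None)  # dummy sensor for demonstration
print(f"mean interval {mean*1000:.3f} ms, jitter {jitter*1000:.3f} ms")
```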
Purpose
As a response to the increased frequency of disruptive events and intense competition, organizational agility has become a key concept in organizational research. Fostering organizational agility requires leveraging knowledge that exists both outside (exploration) and inside (exploitation) the organization. This research tests the so-called ambidexterity hypothesis, which claims that a balance between exploration and exploitation leads to improved organizational outcomes, including the development of organizational agility. Complementing previously established measurement models of ambidexterity, this research proposes an alternative measurement model to analyze how ambidexterity can enhance organizational agility and, indirectly, performance, taking into consideration the moderating effect of environmental competitiveness.
Design/methodology/approach
A review of existing measurement models for ambidexterity shows that tension, a crucial aspect of ambidexterity, is often neglected. The authors, therefore, develop a new measurement model of ambidexterity to incorporate ambidexterity-induced tension. Using this measurement model, they examine the effect of ambidexterity on the development of entrepreneurial and adaptive agility as well as performance.
Findings
Ambidexterity positively influences both entrepreneurial and adaptive agility, indicating that a balance between exploration and exploitation has superior organizational effects. This finding confirms the ambidexterity hypothesis with respect to organizational agility. Furthermore, both entrepreneurial and adaptive agility drive organizational performance. These two indirect effects via agility fully mediate the impact of ambidexterity on organizational performance. Finally, environmental competitiveness positively moderates the relationship between ambidexterity and adaptive agility.
Originality/value
The findings extend research on ambidexterity by showing its positive effects on organizational agility. Furthermore, the study proposes an alternative operationalization to capture the ambidexterity construct that may lay the groundwork for further applications of the ambidexterity concept.
Tech hubs (THs) and cognate structures are nowadays ubiquitous in the innovation ecosystem of Sub-Saharan African (SSA) countries. However, the concept of THs is fuzzy due to the lack of a clear and universally accepted definition. This ambiguity is further compounded by the diverse range of organizations that self-identify as hubs, or are categorized as such by others. As a result, research on THs in SSA has remained limited. Against the backdrop of established research on the interconnectedness of technology, innovation and entrepreneurship in different organizational forms, this paper provides fresh insights into the study of THs in SSA. To advance future research, first, it reveals what is special about THs in SSA and how they relate to existing concepts. I particularly argue that they contour a fourth-wave model of incubation. Second, it unfolds four main categories to delineate THs in SSA, which form the cornerstone for future research.
Comparative analysis of the chemical and rheological curing kinetics of formaldehyde-based wood adhesives is crucial for assessing their respective performance. Differential scanning calorimetry (DSC) and rheometry are the conventional techniques used for monitoring the curing processes leading to crosslinking polymerization of the adhesives. However, the direct comparison of these techniques is inappropriate due to the intrinsic differences in their underlying procedures. To address this challenge, the two adhesive samples were sequentially cured, first with rheometry and then with DSC. The higher curing degree observed in the subsequent DSC procedure underpins the incomplete curing of the samples during the initial rheometry. Furthermore, the comparative assessment of the activation energies, molar ratios, and active groups of the two adhesives highlights the importance of the pre-exponential factor in addition to the activation energies, as it relates to the probability of active groups coinciding in the appropriate spatial arrangement.
Salivary gland tumors (SGTs) are a relevant, highly diverse subgroup of head and neck tumors whose entity determination can be difficult. Confocal Raman imaging in combination with multivariate data analysis may possibly support their correct classification. For the analysis of the translational potential of Raman imaging in SGT determination, a multi-stage evaluation process is necessary. By measuring a sample set of Warthin tumor, pleomorphic adenoma and non-tumor salivary gland tissue, Raman data were obtained and a thorough Raman band analysis was performed. This evaluation revealed highly overlapping Raman patterns with only minor spectral differences. Consequently, a principal component analysis (PCA) was calculated and further combined with a discriminant analysis (DA) to enable the best possible distinction. The PCA-DA model was characterized by accuracy, sensitivity, selectivity and precision values above 90% and validated by predicting model-unknown Raman spectra, of which 93% were classified correctly. Thus, we consider our PCA-DA model suitable for discriminating and predicting parotid tumor and non-tumor salivary gland tissue. For evaluation of the translational potential, further validation steps are necessary.
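A PCA model chained with a discriminant analysis of the kind described can be sketched in a few lines with scikit-learn. The component count, label names, and synthetic stand-in data below are assumptions for illustration only, not the study's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for preprocessed Raman spectra: (n_spectra, n_wavenumbers)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(90, 500))
y_train = np.repeat(["warthin", "pleomorphic_adenoma", "non_tumor"], 30)

# PCA-DA: compress highly overlapping spectra, then discriminate on the scores
pca_da = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),          # assumption: ~20 PCs capture the relevant variance
    LinearDiscriminantAnalysis(),
)
pca_da.fit(X_train, y_train)
print(pca_da.predict(X_train[:3]))  # model-unknown spectra are predicted the same way
```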
Purpose – This paper aims to determine the affecting factors of the brand authenticity of startups in social media.
Design/methodology/approach – Using a qualitative method based on a grounded theory approach, this research specifies and classifies the affecting factors of brand authenticity of startups in social media through in-depth semi-structured interviews.
Findings – Multiple factors affecting the brand authenticity of startups in social media are determined and categorized as indexical, iconic and existential cues through this research. Connection to heritage and having credible support are determined as indexical cues. Founder intellectuality, brand intellectuality, commitment toward customers and proactive, clear and interesting communication are identified as iconic cues. Having self-confidence and self-satisfaction, having intimacy with the brand and a joyful feeling for interactions with the community around the brand are determined as existential cues in this research. This research furthers previous arguments on the multiplicity of brand authenticity by shedding light on the relationship between the different aspects of authenticity and the way the different affecting factors can be organized together. Consumers ultimately form a strengthened perception of brand authenticity through existential cues that reflect the cues of the other aspects (iconic and indexical) once these have passed through the goal-based assessment and self-authentication filter.
Research limitations/implications – The research sampling population could be more diversified in terms of sociodemographic attributes. Due to the qualitative methodology of this research, assessment of the findings through quantitative methods can be considered in future research.
Practical implications – Using the findings of this research, startup managers can properly build a perception of authenticity in their consumers’ minds by using alternative factors when lacking major indexical cues such as heritage. This research helps startup businesses to design their brand communications better to convey their authenticity to their audiences.
Originality/value – This research determines the factors affecting the authenticity of startup brands in social media. It also defines the process of authenticity perception through different aspects of brand authenticity.
Plasmonics and nanophotonics both deal with the interaction of light with structures of typically sub-wavelength size in one or more dimensions. Over the past decade or two, interest in these topics has grown significantly. This includes basic research towards a detailed understanding of light-matter interaction and the manipulation of light on the nanometer scale, as well as the search for applications ranging from quantum information processing, data storage, solar cells, spectroscopy and microscopy to (bio-)sensors and biomedical devices. Key enablers for this development are advanced materials and the variety of techniques to structure them with nanometer precision on the one hand, and progress in the theoretical description and numerical implementations on the other. Besides the traditional metals Au, Ag, Al, and Cu, compounds such as refractory metal nitrides with much higher durability, as well as semiconductors, dielectrics and hybrid structures, have also become of interest. Structuring techniques aim not only at the fabrication of individual elements with the highest precision for detailed interaction analysis, but also at methods for large-scale, low-cost nanofabrication, mostly for sensor applications. In the former case, mostly electron beam lithography and focused ion beam milling are employed, while for high throughput various forms of nanoimprint- and self-assembly-based techniques are favored. Thin film deposition and pattern transfer techniques are mostly derived from those developed for nano-electronics; however, more recently methods such as electroless plating, atomic layer deposition or etching, and 3-D additive techniques are appearing. Thus, highly specialized expertise has been acquired in the different disciplines, and successful research and technology transfer will draw from this pool of knowledge.
Introduction to the special issue on self-managing and hardware-optimized database systems 2022
(2023)
Data management systems have evolved in terms of functionality, performance characteristics, complexity, and variety during the last 40 years. In particular, relational database management systems and big data systems (e.g., key-value stores, document stores, graph stores and graph computation systems, Spark, MapReduce/Hadoop, or data stream processing systems) have evolved with novel additions and extensions. However, systems administration and its tasks have become highly complex and expensive, especially given the simultaneous and rapid hardware evolution in processors, memory, storage, and networking. These developments present new open problems and challenges to data management systems as well as new opportunities.
The SMDB (International Workshop on Self-Managing Database Systems) and HardBD&Active (Joint International Workshop on Big Data Management on Emerging Hardware and Data Management on Virtualized Active Systems) workshops organized in conjunction with the IEEE ICDE (International Conference on Data Engineering) offered two distinct platforms for examining the above system-related challenges from different perspectives. The SMDB workshop looks into developing autonomic or self-* features in database and data management systems to tackle complex administrative tasks, while the HardBD&Active workshop focuses on harnessing hardware technologies to enhance efficiency and performance of data processing and management tasks. As a result of these workshops, we are delighted to present the third special issue of DAPD titled “Self-Managing and Hardware-Optimized Database Systems 2022,” which showcases the best contributions from the SMDB 2021/2022 and HardBD&Active 2021/2022 workshops.
Digitalization and enterprise architecture management: a perspective on benefits and challenges
(2023)
Many companies digitally transform their business models, processes, and services. They have also been using Enterprise Architecture Management (EAM) approaches for a long time to synchronize corporate strategy and information technology. Such digitalization projects bring different challenges for Enterprise Architecture Management. Without understanding and addressing them, Enterprise Architecture Management projects will fail or not deliver the expected value. Since existing research has not yet addressed these challenges, they were investigated in a qualitative expert study with leading industry experts from Europe. Furthermore, potential benefits of digitalization projects for Enterprise Architecture Management were investigated. Our results provide a theoretical framework consisting of five identified challenges, their triggers, and a number of benefits. Furthermore, we discuss in what ways digitalization and EAM are a promising topic for future research.
In recent years, both fields, Artificial Intelligence (AI) and Variable Renewable Energy (VRE), have received increasing attention in scientific research. This article's purpose is therefore to investigate the potential of Deep Learning (DL)-based applications for VRE and, as such, to provide an introduction to and structured overview of the field. First, we conduct a systematic literature review of the application of AI, especially DL, to the integration of VRE. Subsequently, we provide a comprehensive overview of specific DL-based solution approaches and evaluate their applicability, including a survey of the most applied and best-suited DL architectures. We identify ten DL-based approaches to support the integration of VRE in modern power systems. We find (I) solar PV and wind power generation forecasting, (II) system scheduling and grid management, and (III) intelligent condition monitoring to be three high-potential application areas.
Because of high product and technology complexity, companies involve external partners in their research and development (R&D) processes. The result is interorganizational projects, which represent temporary organizations in which heterogeneous organizations work closely together. Since project work is always teamwork, these projects face, due to their characteristics, major challenges on the organizational, relational, and content-related levels of collaboration. Thus, this paper raises the following research question: “How can a project team be supported on an organizational, relational, and content-related level in an interorganizational new product development setting?” To answer this research question, an explorative expert study was set up with two digital workshops using the interactive presentation tool Mentimeter. The results show that a cooperative innovation culture could support project teams on an organizational and relational level in minimizing predominant problems, for example by supporting functional communication. Furthermore, 18 values of a cooperative innovation culture emerge, for example openness and transparency, risk and failure tolerance, or respect. On a content-related level, the results show that an adaptable tool that provides creativity and collaboration methods as well as content-related input could be beneficial for problem-solving in an interorganizational new product development setting: such a tool can guide product developers through the process with suitable creativity and collaboration methods, give content-related input, and enable interactive interchange on a tabletop. Future research could focus mainly on the connection between the cooperative innovation culture and the tool, since these potentially influence each other.
In a recently developed study programme at Reutlingen University, which focuses on practical orientation, an innovative product with solid company references is to be defined and realised by student teams. On the basis of this product, all subjects of the business engineering study programme “Sustainable Production and Business” are taught. By focusing on three main paths of future skills that have been developed by NextSkills to analyse upcoming social changes, global challenges and fields of work that are innovation-driven and agile, the new study programme aims to create responsible leaders who will shape global businesses respectfully. Different TRIZ tools help to support students in developing their own products with a focus on sustainability and contribute to the enhancement of future skills. Further, students get to know TRIZ tools in an unbiased way, unburdened by too much theory, and are thus continuously supported in the progressing product development process that accompanies their studies. Hence, students perceive TRIZ on the one hand as a method to develop sustainable products and, on the other hand, as a way to find sustainable solutions for everyday problems. The knowledge and positive experiences gained in this way should then arouse curiosity for the TRIZ class at the end of the study programme, from which students can graduate with a TRIZ Level 1 certificate. In this way, as many students as possible are introduced to the TRIZ methods, and TRIZ is spread widely.
Modern component-based architectural styles, e.g., microservices, enable developing the components independently from each other. However, this independence can result in problems when it comes to managing issues, such as bugs, as developer teams can freely choose their technology stacks, including issue management systems (IMSs) such as Jira, GitHub, or Redmine. In the case of a microservice architecture, if an issue of a downstream microservice depends on an issue of an upstream microservice, this must be both identified and communicated, and the downstream service’s issue should link to its causing issue. However, agile project management today requires efficient communication, which is why more and more teams communicate through comments in the issues themselves. Unfortunately, IMSs are not integrated with each other; thus, semantically linking these issues is not supported, and identifying such issue dependencies across different IMSs is time-consuming and requires manual searching in multiple IMS technologies. This results in many context switches and prevents developers from staying focused and getting things done. Therefore, in this paper, we present a concept for seamlessly integrating different IMS technologies into each other and providing better architectural context. The concept is based on augmenting the websites of issue management systems through a browser extension. We validate the approach with a prototypical implementation for the Chrome browser. For evaluation, we conducted expert interviews, which confirmed that the presented approach provides significant advantages for managing issues of agile microservice architectures.
Application systems often need to be deployed in different variants if requirements that influence their implementation, hosting, and configuration differ between customers. Therefore, deployment technologies, such as Ansible or Terraform, support a certain degree of variability modeling. However, modern application systems typically consist of various software components deployed using multiple deployment technologies that only support their proprietary, non-interoperable variability modeling concepts. The Variable Deployment Metamodel (VDMM) manages deployment variability across heterogeneous deployment technologies based on a single variable deployment model. However, VDMM currently only supports modeling conditional components and their relations, which is sometimes too coarse-grained, since it requires modeling entire components, including their implementation and deployment configuration, for each different component variant. Therefore, we extend VDMM with a more fine-grained approach for managing the variability of component implementations and their deployment configurations, e.g., if a cheap version of a SaaS deployment provides only a community edition of the software and not the enterprise edition, which has additional analytical reporting functionalities built in. We show that our extended VDMM can be used to realize variable deployments across different individual deployment technologies using a case study and our prototype OpenTOSCA Vintner.
Different network architectures are being used to build remote laboratories. Historically, it has been difficult to integrate industrial control systems with higher-level IT systems like enterprise resource planning (ERP), manufacturing execution systems (MES), and manufacturing operations management (MOM). Getting these systems to communicate with one another has proven relatively difficult due to the absence of shared protocols between them. The Open Platform Communications Unified Architecture (OPC UA) protocol was introduced as a remedy for this issue and is gaining popularity, but what if open-source protocols that are widely used in the IT industry could be used instead? This paper presents the development of an IT architecture for a cyber-physical industrial control systems laboratory that enables seamless interconnection and integration of its elements. The architecture utilises Node-RED, an open-source programming platform developed by IBM that is focused on making it simple to link physical components, APIs, and web services. This cyber-physical laboratory is for learning the principles of an industrial cascaded process control factory. Finally, this paper also discusses future work relating to digital twins (DT). A coupled tank system is selected as a teaching factory to illustrate a range of fluid control applications in a typical chemical process factory.
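To illustrate the open-protocol idea, the snippet below publishes a tank-level reading over MQTT, one of the widely used open IT protocols that a Node-RED flow can subscribe to. Broker address, topic, and payload are illustrative assumptions, and the snippet is not part of the paper's architecture.

```python
import json
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style client API

# Publish one sensor reading so any subscriber (e.g., a Node-RED flow) can consume it.
client = mqtt.Client()
client.connect("broker.local", 1883)   # hypothetical broker address
client.publish("lab/tank1/level", json.dumps({"level_cm": 42.7}))
client.disconnect()
```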
Do Chinese subordinates trust their German supervisors? A model of inter-cultural trust development
(2023)
In this qualitative study based on 95 interviews with Chinese subordinates and their German supervisors, we inductively develop a model which advances theoretical understanding by showing how inter-cultural trust development in hierarchical relationships is the result of six distinct elements: the subordinate trustor’s cultural profile (cosmopolitans, hybrids, culturally bounds), the psychological mechanisms operating within the trustor (role expectations and cultural accommodation), and contextual moderators (e.g., country context, time spent in foreign culture, and third-party influencers), which together influence the trust forms (e.g., presumptive trust, relational trust) and trust dynamics (e.g., trust breakdown and repair) within relationship phases over time (initial contact, trust continuation, trust disillusionment, separation, and acculturation). Our findings challenge the assumption that cultural differences result in low levels of initial trust and highlight the strong role the subordinate’s cultural profile can have on the dynamics and trajectory of trust in hierarchical relationships. Our model highlights that inter-cultural trust development operates as a variform universal, following the combined universalistic-particularistic paradigm in cross-cultural management, with both culturally generalizable etic dynamics as well as culturally specific emic manifestations.
Development of an indoor positioning system to create a digital shadow of production plant layouts
(2023)
The objective of this dissertation is to develop an indoor positioning system that allows the creation of a digital shadow of the plant layout in order to continuously represent the actual state of the physical layout in virtual space. To define the requirements for such a system, potential stakeholders who could benefit from a digital shadow in the context of the plant layout were analysed, and the requirements were derived from their perspective so as to generate added value for their work. As the core of an indoor positioning system is the sensory capture of the physical layout parameters, different potential technologies were compared and evaluated in terms of their suitability for this particular application. Derived from this analysis, the selected concept is based on the use of a pan-tilt-zoom (PTZ) camera in combination with fiducial markers. To determine specific camera parameters, a series of experiments was conducted, which was necessary to develop the measurement method as well as the mathematical calculation method and coordinate transformation for determining the poses (positions and angular orientations) of the respective facilities in the plant. In addition, an experimental validation was performed to ensure that the limit values for individual parameters determined in the requirements analysis can be met.
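As a rough illustration of the camera-plus-fiducial-marker idea (not the dissertation's calibrated method), the sketch below detects an ArUco marker with OpenCV and recovers its pose via solvePnP. The marker dictionary, marker size, and calibration inputs are assumptions; OpenCV 4.7 or newer is assumed for the ArucoDetector API.

```python
import cv2
import numpy as np

def marker_pose(image, camera_matrix, dist_coeffs, marker_len=0.15):
    """Recover a fiducial marker's pose (camera frame) from one PTZ frame."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary)
    corners, ids, _ = detector.detectMarkers(image)
    if ids is None:
        return None  # no marker visible
    # 3D corner coordinates of a square marker centred on its own origin
    half = marker_len / 2.0
    obj_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                        [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    # rvec: rotation as a Rodrigues vector; tvec: translation in metres
    return (rvec, tvec) if ok else None
```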
The basis for developing future products in the automotive industry is finding creative and innovative solutions. Ideas can be found by means of creativity methods that support product developers throughout the creative process. Product developers are provided with a variety of different and new methods. This leads to a “method jungle” in which it is difficult for product developers to find the most suitable path. The successful use of methods in product development goes hand in hand with the acceptance and implementation of the methods. Despite the added value, only low usage is observed in the development process. The field of Creativity Support Tools (CSTs) also offers a wide variety of different tools that support the creative process. However, a chasm exists between the many CSTs that are developed and what creative practitioners actually use. Therefore, previous studies iteratively developed a user-centered tool called “IDEA” that tries to respond to users' needs. The question arises how the developed tool IDEA performs in a real-life setting regarding its UX and usability as well as creativity-method acceptance and the level of mental workload.
The targeted design of monodisperse, mesoporous silica microspheres (MPSMs) as HPLC separation phases is still a challenge. The MPSMs can be generated via a multi-step template-assisted method. However, this method and the factors affecting the individual process steps and resulting material properties are scarcely understood, and specific control of the complex multi-step process has hardly been discussed. In this work, the key synthesis steps were systematically investigated by means of statistical Design of Experiments (DoE). In particular, three steps were considered in detail: 1) the synthesis of porous poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) (p(GMA-co-EDMA)) particles, which, as template particles, determine the structure for the final MPSMs. In this context, functional models were generated, which allow the control of the template properties pore volume, pore size and specific surface area. 2) In the presence of amino-functionalized template particles, the sol-gel process was carried out under Stöber process conditions. The water to tetraethyl orthosilicate (TEOS) ratio, as well as the concentration of ammonia as basic catalyst, were varied according to a face-centered central composite design (FCD). The incorporation of silica nanoparticles (SNPs) into the pore network of the porous polymers was investigated by scanning electron microscopy (SEM), evaluation of the pore properties assessed by nitrogen sorption measurements and determination of the inorganic content by thermogravimetric analysis (TGA). Here, the material properties, such as the amount of attached silica, can be specifically controlled in the resulting organic/silica hybrid material (hybrid beads, HBs). Furthermore, depending on the sol-gel conditions, three, potentially four, reaction regimes were identified, leading to different HBs. These range from porous polymer particles coated with a thin protective silica layer, to interpenetrating networks of polymer and silica, to potential particles consisting of a porous polymer core coated with a silica shell. Also, the effects of the use of different precursors and solvents on silica incorporation were investigated. 3) To obtain MPSMs from the HBs, the organic polymer template was removed by calcination. The effects of sol-gel process conditions on the resulting MPSMs were evaluated, and relationships between process conditions and material properties were shown in predictive models. Fully porous, spherical, monodisperse silica particles with sizes ranging from 0.5 µm to 7.8 µm and pore sizes from 3.5 nm to 72.4 nm can be prepared specifically. Subsequent to organo-functionalization, the prepared MPSMs were applied as reversed-phase HPLC column materials. Here, the columns were successfully applied for the separation of proteins and amino acids. The separation performance of the materials depends largely on the property profile of the MPSMs, which is predetermined during the preparation of the HBs.
In the context of Industry 4.0, intralogistics faces an increasingly complex and dynamic environment driven by a high level of product customisation and complex manufacturing processes. One approach to dealing with these changing conditions is the decentralised and intelligent connectivity of intralogistics systems. However, wireless connectivity presents a major challenge in industry due to strict requirements such as safety and real-time data transmission. In this context, the fifth generation of mobile communications (5G) is a promising technology to meet the requirements of safety-critical applications, particularly since 5G offers the possibility of establishing private 5G networks, also referred to as standalone non-public networks. Through their isolation from public networks, private 5G networks provide exclusive coverage for private organisations, offering them high intrinsic network control and data security. However, 5G is still under development and is being introduced gradually in a continuous release process. This process lacks transparency regarding the performance of 5G in individual releases, complicating the successful adoption of 5G as an industrial communication technology. Additionally, evaluating 5G against its specified target performance is insufficient on its own, because the environment and external interfering factors affect 5G in industrial settings. Therefore, this paper develops a technical decision-support framework that takes a holistic approach to evaluating the practicality of 5G for intralogistics use cases in two fundamental stages. The first stage analyses technical parameters and characteristics of the use case to evaluate the theoretical feasibility of 5G. The second stage investigates the application's environment, which substantially impacts the practicality of 5G, for instance through the influence of surrounding materials. Finally, a case study validates the proposed framework by means of an autonomous mobile robot. The validation proves the proposed framework's applicability and shows the practicality of 5G for the autonomous mobile robot when integrated into a private 5G network testbed.
In the era of digital transformation, the notion of software quality transcends its traditional boundaries, necessitating an expansion to encompass value creation for customers and the business. Merely optimizing technical aspects of software quality can result in diminishing returns. Product discovery techniques can be seen as a powerful mechanism for crafting products that align with this expanded concept of quality, one that incorporates value creation. Previous research has shown that companies struggle to determine appropriate product discovery techniques for generating, validating, and prioritizing ideas for new products or features to ensure they meet the needs and desires of the customers and the business. For this reason, we conducted a grey literature review to identify various techniques for product discovery. First, the article provides an overview of the different techniques and assesses how frequently they are mentioned in the reviewed literature. Second, we mapped these techniques to an existing product discovery process from previous research to provide practitioners with concrete guidelines for establishing product discovery in their organizations. The analysis shows, among other things, the increasing importance of techniques for structuring the problem exploration process and the product strategy process. The results are interpreted with regard to the importance of the techniques for practical applications and recognizable trends.
Cyber-Physical Production Systems increasingly use semantic information to meet increased flexibility requirements. Ontologies are often used to represent and use this semantic information. Existing systems focus on mapping knowledge and less on the exchange with other relevant IT systems (e.g., ERP systems), in which crucial semantic information, often implicit, is contained. This article presents an approach that enables the exchange of semantic information via adapters. The approach is demonstrated by a use case utilizing an MES and an ERP system.
Transforming our food system is important to achieving global climate neutrality and food security. Germany has set a national target of reaching a 30% share of organic farming to support this goal. When looking at the transformation process from conventional to organic farming, it becomes apparent that measures need to be taken to reach this anticipated goal. A particular emphasis of this work is placed on finding a digital solution and process improvements to ensure longevity and efficiency. Interviews with actors along the farm-to-fork value chain were conducted to identify central barriers and drivers of organic transformation. The results of the interviews show, firstly, that three subsystems need to be distinguished when talking about the farm-to-fork value chain: (1) farmers, (2) intermediaries, and (3) the canteen system. Although all three subsystems can be combined to form a coherent value chain, they rarely act and communicate beyond the boundaries of their subsystem. Secondly, we were able to allocate primary barriers and drivers to each of the subsystems, highlighting the need to include all three in the transformation process and aim for a comprehensive digital solution. This work explores the potential of a network-based platform to improve the current practice of rigid and strictly hierarchical value chains. We focus on deriving user requirements from the interviews to describe the functionality the platform needs in order to address the identified barriers and exploit existing drivers.
Applications often need to be deployed in different variants due to different customer requirements. However, since modern applications often need to be deployed using multiple deployment technologies in combination, such as Ansible and Terraform, the deployment variability must be considered in a holistic way. To tackle this, we previously developed Variability4TOSCA and the prototype OpenTOSCA Vintner, which is a TOSCA preprocessing and management layer that implements Variability4TOSCA. In this demonstration, we present a detailed case study that shows how to model a deployment using Variability4TOSCA, how to resolve the variability using Vintner, and how the result can be deployed.
Gamification has been increasingly applied to software engineering education in the past. The approaches vary from applying game elements at a conceptual level in the course to using specific tools to engage the students more and support their learning goals. However, existing tools usually have game elements, such as quizzes or challenges, but do not provide a more computer-game-like experience. Therefore, we raise the gamified learning experience to another level by proposing Gamify-IT, a Unity- and web-based game platform intended to help students learn software engineering. It follows immersive role-playing game conventions, where students explore a world, find and solve minigames, and clear dungeons with SE tasks. Lecturers can configure the worlds, e.g., to add content hints. Furthermore, they can add and configure minigames and dungeons to include exercises in a fully gamified way, customizing their course in Gamify-IT to adapt the world very precisely to other materials such as lectures or exercises. Results of an evaluation of our initial prototype show that (i) students like to engage with the platform, (ii) students are motivated to learn when using Gamify-IT, and (iii) the minigames support students in understanding the learning objectives.
Intelligent Tutoring Systems (ITSs) are increasingly used in modern education to automatically give students individual feedback on their performance. The advantage for students is fast individual feedback on their answers to the questions asked, while lecturers benefit from considerable time savings and easy delivery of educational material. Of course, it is important that the provided feedback is as effective as direct feedback from the lecturer. However, in digital teaching, lecturers cannot assess a student's knowledge precisely but can only obtain information on which questions were answered correctly and incorrectly. Therefore, this paper presents a concept for integrating ITS elements into the gamified e-learning platform IT-REX so that feedback quality can be improved to support students in the best possible way.
The COVID-19 pandemic necessitated significant changes in foreign language education, forcing teachers to reconstruct their identities and redefine their roles as language educators. To better understand these adaptations and perspectives, it is crucial to study how the pandemic has influenced teaching practices. This mixed-methods study focused on the less-explored aspects of foreign language teaching during the pandemic, specifically examining how language teachers adapted and perceived their practices, including rapport building and learner autonomy, during emergency remote teaching (ERT) in higher education institutions. It also explored teachers’ intentions for their teaching in the post-pandemic era. An online survey was conducted, involving 118 language educators primarily from Germany, with a smaller representation from New Zealand, the United States, and the United Kingdom. The analysis of participants’ responses revealed issues and opportunities regarding lesson formats, tool usage, rapport, and learner autonomy. Our findings offer insights into the desired changes participants envisioned for the post-pandemic era. The results highlight the opportunities ERT had created in terms of teacher development, and we offer suggestions to enhance professional development programmes based on these findings.
The present study investigated the possibilities and limitations of using a low-cost NIR spectrometer for the verification of the presence of the declared active pharmaceutical ingredients (APIs) in tablet formulations, especially for medicine screening studies in low-resource settings. Spectra from 950 to 1650 nm were recorded for 170 pharmaceutical products representing 41 different APIs, API combinations or placebos. Most of the products, including 20 falsified medicines, had been collected in medicine quality studies in African countries. After exploratory principal component analysis, models were built using data-driven soft independent modelling of class analogy (DD-SIMCA), a one-class classifier algorithm, for tablet products of penicillin V, sulfamethoxazole/trimethoprim, ciprofloxacin, furosemide, metronidazole, metformin, hydrochlorothiazide, and doxycycline. Spectra of amoxicillin and amoxicillin/clavulanic acid tablets were combined into a single model. Models were tested using Procrustes cross-validation and by projection of spectra of tablets containing the same or different APIs. Tablets containing no or different APIs could be identified with 100 % specificity in all models. A separation of the spectra of amoxicillin and amoxicillin/clavulanic acid tablets was achieved by partial least squares discriminant analysis. 15 out of 19 external validation products (79 %) representing different brands of the same APIs were correctly identified as members of the target class; three of the four rejected samples showed an API mass percentage of the total tablet weight that was out of the range covered in the respective calibration set. Therefore, in future investigations larger and more representative spectral libraries are required for model building. Falsified medicines containing no API, incorrect APIs, or grossly incorrect amounts of the declared APIs could be readily identified. Variation between different NIR-S-G1 spectroscopic devices led to a loss of accuracy if spectra recorded with different devices were pooled. Therefore, piecewise direct standardization was applied for calibration transfer. The investigated method is a promising tool for medicine screening studies in low-resource settings.
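The one-class idea behind DD-SIMCA can be approximated in a short sketch: fit a PCA model on spectra of the target API class and reject new spectra whose residual distance exceeds a data-driven threshold. This simplification thresholds only the orthogonal distance, whereas DD-SIMCA derives chi-squared acceptance regions over both score and orthogonal distances; the component count and quantile below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

class SimpleSIMCA:
    """Simplified one-class classifier in the spirit of DD-SIMCA."""

    def __init__(self, n_components=3, quantile=0.99):
        self.pca = PCA(n_components=n_components)
        self.quantile = quantile

    def fit(self, X):
        scores = self.pca.fit_transform(X)
        residuals = X - self.pca.inverse_transform(scores)
        od = np.sum(residuals ** 2, axis=1)          # orthogonal distances
        self.threshold_ = np.quantile(od, self.quantile)
        return self

    def predict(self, X):
        residuals = X - self.pca.inverse_transform(self.pca.transform(X))
        od = np.sum(residuals ** 2, axis=1)
        return od <= self.threshold_                 # True = member of target class

# Example with synthetic stand-in spectra (170 training, 10 test)
rng = np.random.default_rng(1)
model = SimpleSIMCA().fit(rng.normal(size=(170, 256)))
print(model.predict(rng.normal(size=(10, 256))))
```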
Impact of a large distribution network on radiation characteristics of planar spiral antenna arrays
(2023)
Designing antenna arrays with a central feed point has gained ground in antenna engineering. This approach, which is usually applied because of manufacturing costs, is difficult to achieve and leads to a large feeding network, whose impact is numerically investigated in the present work. Upon comparing three different antennas, it is shown that the enlargement of the feed strongly affects the antenna's overall dimensions and its radiation characteristics. The antenna with the plug-in solution is not only small in size but also performs better compared to antennas with a central feed point. Considering the high effort of designing the feed network with a central point and the influence of the resulting enlarged network on the dimensions and radiation characteristics of the antenna, the cost saving in production can be put into perspective.
The introduction of smart contracts has expanded the applicability of blockchains to many domains beyond finance and cryptocurrencies. Moreover, different blockchain technologies have evolved that target special requirements. As a result, in practice, often a combination of different blockchain systems is required to achieve an overall goal. However, due to the heterogeneity of blockchain protocols, the execution of distributed business transactions that span several blockchains leads to multiple interoperability and integration challenges. Therefore, in this article, we examine the domain of Cross-Chain Smart Contract Invocations (CCSCIs), which are distributed transactions that involve the invocation of smart contracts hosted on two or more blockchain systems. We conduct a systematic multi-vocal literature review to get an overview of the available CCSCI approaches. We select 20 formal literature studies and 13 high-quality gray literature studies, extract data from them, and analyze it to derive the CCSCI Classification Framework. With the help of the framework, we group the approaches into two categories and eight subcategories. The approaches differ in multiple characteristics, e.g., the mechanisms they follow, and the capabilities and transaction processing semantics they offer. Our analysis indicates that all approaches suffer from obstacles that complicate real-world adoption, such as the low support for handling heterogeneity and the need for trusted third parties.
Blockchains have become increasingly important in recent years and have expanded their applicability to many domains beyond finance and cryptocurrencies. This adoption has particularly increased with the introduction of smart contracts, which are immutable, user-defined programs directly deployed on blockchain networks. However, many scenarios require business transactions to simultaneously access smart contracts on multiple, possibly heterogeneous blockchain networks while ensuring the atomicity and isolation of these transactions, which is not natively supported by current blockchain systems. Therefore, in this work, we introduce the Transactional Cross-Chain Smart Contract Invocation (TCCSCI) approach that supports such distributed business transactions while ensuring their global atomicity and serializability. The approach introduces the concept of Resource Manager Smart Contracts, and 2PC for Blockchains (2PC4BC), a client-driven Atomic Commit Protocol (ACP) specialized for blockchain-based distributed transactions. We validate our approach using a prototypical implementation, evaluate its introduced overhead, and prove its correctness.
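A blockchain-agnostic sketch of the client-driven two-phase-commit idea is shown below. The ResourceManager interface is a hypothetical stand-in for the paper's Resource Manager Smart Contracts, which would lock and update state on their own chains; the real 2PC4BC protocol additionally handles blockchain-specific concerns such as transaction finality.

```python
class ResourceManager:
    """Hypothetical stand-in for a Resource Manager Smart Contract."""
    def prepare(self, tx_id) -> bool: ...   # lock affected state, vote commit/abort
    def commit(self, tx_id) -> None: ...    # make tentative updates permanent
    def abort(self, tx_id) -> None: ...     # release locks, discard updates

def two_phase_commit(tx_id, managers):
    # Phase 1: ask every chain's resource manager to prepare (vote)
    if all(rm.prepare(tx_id) for rm in managers):
        # Phase 2a: unanimous yes -> commit everywhere
        for rm in managers:
            rm.commit(tx_id)
        return True
    # Phase 2b: any no-vote -> abort everywhere
    for rm in managers:
        rm.abort(tx_id)
    return False
```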
The Circular Economy aims to reintroduce the value of products into the economic cycle at the same value chain level. While the activities of the Circular Economy are already well-defined, there exists a gap in how returned products are treated by industry. This study examines how a process should be designed to handle returned products in the context of the Circular Economy. To achieve this, a machine learning-based algorithm is used to classify data and extract relevant information throughout the product life cycle. The focus of this research is limited to land transportation systems within the Sharing Economy sector.
The members of the European TRIZ Campus (ETC) have been learning from and working together with many honorable members of MATRIZ Official for many years and feel very connected to the official International TRIZ Association.
To further spread the TRIZ methodology and TRIZ teaching in the European area, the ETC has put a lot of thought over the past 12 months into making TRIZ accessible to a broader audience; getting more professionals in touch with the methodology was one of the focal points.
To this end, we have developed new formats such as the "Trainer Day" to support trainers on their way into practice. We have drawn up detailed quality guidelines for the teaching of the TRIZ methodology, which are intended to provide orientation for the design of training classes and documentation. We strive for exchange with representatives of "neighbouring" methods such as Six Sigma, Lean, DFMA and Design Thinking to indicate synergies and added value among methods and approaches of different kinds. We are testing formats for community building, in order to connect users everywhere more strongly with the TRIZ methodology through communication and information offers. If TRIZ users feel alone in their organizations, exchange outside their organization helps them to keep up with the TRIZ methodology. Moreover, the ETC strives to increase the ability to communicate the benefits of TRIZ usage inside organizations. We discuss how to reach teachers and students of all ages, to make the unique way of inventive thinking accessible to them.
In this paper we want to give other MATRIZ Official members insights and share our experiences and best practices with our fellow MO members.
Advancing mental health diagnostics: AI-based method for depression detection in patient interviews
(2023)
In this paper, we present a novel artificial intelligence (AI) application for depression detection, using advanced transformer networks to analyse clinical interviews. By incorporating simulated data to enhance traditional datasets, we overcome limitations in data protection and privacy, consequently improving the model’s performance. Our methodology employs BERT-based models, GPT-3.5, and ChatGPT-4, demonstrating state-of-the-art results in detecting depression from linguistic patterns and contextual information that significantly outperform previous approaches. Utilising the DAIC-WOZ and Extended-DAIC datasets, our study showcases the potential of the proposed application in revolutionising mental health care through early depression detection and intervention. Empirical results from various experiments highlight the efficacy of our approach and its suitability for real-world implementation. Furthermore, we acknowledge the ethical, legal, and social implications of AI in mental health diagnostics. Ultimately, our study underscores the transformative potential of AI in mental health diagnostics, paving the way for innovative solutions that can facilitate early intervention and improve patient outcomes.
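For flavour, transformer-based classification of a single interview utterance can be run in a few lines with Hugging Face transformers. The checkpoint below is a generic placeholder, not the fine-tuned depression model from the paper, and the example utterance is invented.

```python
from transformers import pipeline

# Placeholder checkpoint; the paper's fine-tuned depression model is not public here.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
utterance = "Lately I just feel exhausted and nothing really interests me anymore."
print(classifier(utterance))  # e.g. [{'label': ..., 'score': ...}]
```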
This research evaluates current measurement scales for ambidexterity and proposes a new approach for the measurement of this important construct. We argue that current measurement approaches may be unsuitable to capture the concept of ambidexterity. Through a systematic scale development process, we derive a measurement scale with dual items that simultaneously refer to both dimensions, exploitation and exploration, thus reflecting the true nature of ambidexterity. An extensive pre-test with 39 executives suggests that our scale is suitable for capturing ambidexterity. Our measurement model enhances conceptual clarity of ambidexterity and can serve as a base for future investigations of the concept.
Why are organizations and markets slow to transform toward sustainability despite the abundant well-recognized opportunities it provides? An important subset of the phenomena this question addresses involves decision-makers recognizing the existence of opportunities but failing to undertake ambitious, effective, sufficient, or timely action. Building on existing research on capability traps, market formation, and managing sustainability, we focus on the forces constraining organizations from developing the capabilities and market infrastructures required for sustainability transformations. We characterize types of sustainability initiatives and, using causal loop diagramming, visualize structures that enable and constrain how organizations can navigate individually and collectively worse-before-better dynamics resulting from uncertain, nonlinear, and delayed returns. Being under day-to-day pressures and deeply intertwined within their environment, organizational actors find it difficult to recognize, undertake, maintain, and coordinate necessary efforts internally and externally. We discuss research implications and directions for future research on avoiding these traps and accelerating sustainability transformations.
Organizational agility may be an antidote against threats from volatile, uncertain, complex, or ambiguous corporate environments. While agility has been extensively examined in manufacturing enterprises, comparably less is known about agility in knowledge-intensive organizations. As results may not be transferable, there is still some confusion about how agility in knowledge-intensive organizations can be characterized, what factors facilitate its development, what its organizational effects are, and what environmental conditions favor these effects. This study closes these gaps by presenting a systematic literature review on agility in knowledge-intensive organizations. A systematic literature search led to a sample of 37 relevant papers for our review. Integrating the knowledge-based view and a dynamic capabilities perspective, we (1) present different relevant conceptualizations of organizational agility, (2) discuss relevant knowledge management-related as well as information technology-related capabilities that support the development of organizational agility, and (3) shed light on the moderating role of environmental conditions in enhancing organizational agility and its effect on organizational performance. This academic paper adds value to theory by synthesizing existing research on agility in knowledge-intensive organizations. It furthermore may serve as a map for closing research gaps by proposing an extensive agenda for future research. Our study expands existing literature reviews on agility with its specific focus on a knowledge-intensive context and its integration of the research streams of knowledge management capabilities as well as information technology capabilities. It integrates relevant organizational knowledge management practices and the use of knowledge management systems to ensure superior performance effects. Our study can serve as a base for future examinations of organizational agility by illustrating fruitful topics for further examination as well as open questions. It may also provide value to practitioners by showing what factors favor the development of agility in knowledge-intensive organizations and what organizational effects can be achieved under which conditions.
Knowledge-intensive organizations primarily rely on knowledge and expertise as key strategic resources. In light of economic, social, and health-related crises in recent years, such organizations increasingly need to operate in dynamic environments. However, examinations on dynamic capabilities specifically in knowledge-intensive organizations remain scarce. This is remarkable given the role that knowledge holds as an economic resource in developed countries. To provide an explanation of how knowledge-intensive organizations can prevail among competitors under dynamic conditions, the authors integrate two literature streams in a knowledge-intensive context: the knowledge-based view and the dynamic capabilities approach. The knowledge-based view focuses on the nature of organizational knowledge as a critical resource and illustrates specific properties of knowledge in contrast to traditional means of labor such as capital. The dynamic capabilities approach on the other hand is about a firm's ability to integrate, build, and reconfigure internal and external resources and can be drawn on to explain organizational success through adaptation to dynamic contexts. In this conceptual study, the authors propose a research model linking knowledge processes to organizational performance through two different paths: (1) Operational capabilities permit organizations to make their living in the present and refer to efficiency. (2) Dynamic capabilities allow organizations to change their resource base and, therefore, enable their long-term survival in dynamic environments by focusing on effectiveness. Additionally, the authors hypothesize a moderating effect of environmental dynamics on the relationship between dynamic capabilities and performance. The study offers a comprehensive overview on the interplay between dynamic capabilities and the knowledge-based view, offering valuable insights for both researchers and practitioners in the field.
Human pose estimation (HPE) is integral to scene understanding in numerous safety-critical domains involving human-machine interaction, such as autonomous driving or semi-automated work environments. Avoiding costly mistakes is synonymous with anticipating failure in model predictions, which necessitates meta-judgments on the accuracy of the applied models. Here, we propose a straightforward human pose regression framework to examine the behavior of two established methods for simultaneous aleatoric and epistemic uncertainty estimation: maximum a-posteriori (MAP) estimation with Monte-Carlo variational inference and deep evidential regression (DER). First, we evaluate both approaches on the quality of their predicted variances and whether these truly capture the expected model error. The initial assessment indicates that both methods exhibit the overconfidence issue common in deep probabilistic models. This observation motivates our implementation of an additional recalibration step to extract reliable confidence intervals. We then take a closer look at deep evidential regression, which, to our knowledge, is applied comprehensively for the first time to the HPE problem. Experimental results indicate that DER behaves as expected in challenging and adverse conditions commonly occurring in HPE and that the predicted uncertainties match their purported aleatoric and epistemic sources. Notably, DER achieves smooth uncertainty estimates without the need for a costly sampling step, making it an attractive candidate for uncertainty estimation on resource-limited platforms.
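The recalibration step mentioned in this abstract can be made concrete with a small sketch. The following is a minimal illustration, not the authors' implementation: it fits a single scale factor on a held-out calibration split so that the scaled predicted variances match the observed squared errors under a Gaussian negative log-likelihood. All names and data are illustrative.

```python
import numpy as np

def recalibrate_variance(errors, variances):
    """Fit one scale factor s so that s * predicted_variance matches the
    observed squared errors under a Gaussian negative log-likelihood.
    The closed-form minimizer is s = mean(error^2 / variance)."""
    return np.mean(errors ** 2 / variances)

# held-out calibration split with synthetic numbers for illustration
rng = np.random.default_rng(0)
true_var = rng.uniform(1.0, 4.0, size=1000)
pred_var = 0.3 * true_var                        # overconfident on purpose
errors = rng.normal(0.0, np.sqrt(true_var))      # residuals of the regressor
s = recalibrate_variance(errors, pred_var)       # recovers roughly 1 / 0.3
calibrated_var = s * pred_var
```

A single multiplicative factor is the simplest remedy for systematic overconfidence; more flexible per-quantile schemes (e.g., isotonic regression) are a common alternative.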
The Industry 4.0 paradigm requires concepts for integrating intelligent/smart IoT solutions into manufacturing. Such intelligent solutions are envisioned to increase flexibility and adaptability in smart factories. Autonomous cobots capable of adapting to changing conditions, in particular, are a key enabler for changeable factory concepts. However, identifying the requirements and solution scenarios incorporating intelligent products challenges the manufacturing industry, especially in the SME sector. In pick-and-place scenarios, changing coordinate systems of workpiece carriers cause errors in the placing process. Using the IPIDS framework, this paper describes the development of a tool-center-point positioning method to improve the process stability of a collaborative robot in a changeable assembly workstation. Applying the framework identifies the requirement for an intelligent workpiece carrier as a part of the solution. Implementing and evaluating the solution within a changeable factory validates the IPIDS framework.
In recent years, the demand for accurate and efficient 3D body scanning technologies has increased, driven by the growing interest in personalised textile development and health care. This position paper presents the implementation of a novel 3D body scanner that integrates multiple RGB cameras and image stitching techniques to generate detailed point clouds and 3D mesh models. Our system significantly enhances the scanning process, achieving higher resolution and fidelity while reducing the cost, time and effort required for data acquisition and processing. Furthermore, we evaluate the potential use cases and applications of our 3D body scanner, focusing on the textile technology and health sectors. In textile development, the 3D scanner contributes to bespoke clothing production, allowing designers to construct made-to-measure garments, thus minimising waste and enhancing customer satisfaction through well-fitting clothing. In mental health care, the 3D body scanner can be employed as a tool for body image analysis, providing valuable insights into the psychological and emotional aspects of self-perception. By exploring the synergy between the 3D body scanner and these fields, we aim to foster interdisciplinary collaborations that drive advancements in personalisation, sustainability, and well-being.
The hard template method for the preparation of monodisperse mesoporous silica microspheres (MPSMs) has been established in recent years. In this process, in situ-generated silica nanoparticles (SNPs) enter the porous organic template and control the size and pore parameters of the final MPSMs. Here, the sizes of the deposited SNPs are determined by the hydrolysis and condensation rates of different alkoxysilanes in a base-catalyzed sol–gel process. Thus, tetramethyl orthosilicate (TMOS), tetraethyl orthosilicate (TEOS), tetrapropyl orthosilicate (TPOS) and tetrabutyl orthosilicate (TBOS) were sol–gel processed in the presence of amino-functionalized poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) (p(GMA-co-EDMA)) templates. The size of the final MPSMs covers a broad range of 0.5–7.3 µm, with median pore sizes from 4.0 to 24.9 nm. Moreover, the specific surface area can be adjusted between 271 and 637 m2 g−1. Also, the properties and morphology of the MPSMs differ according to the SNPs. Furthermore, the combination of different alkoxysilanes allows the individual design of the morphology and pore parameters of the silica particles. Selected MPSMs were packed into columns and successfully applied as stationary phases in high-performance liquid chromatography (HPLC) in the separation of various water-soluble vitamins.
Patterns are virtually simulated in 3D CAD programs before production to check the fit. However, achieving lifelike representations of human avatars, especially regarding soft tissue dynamics, remains challenging. This is mainly because conventional avatars in garment CAD programs are simulated with a continuous hard surface that does not correspond to the physical and mechanical properties of human soft tissue. In the real world, the human body’s natural shape is affected by the contact pressure of tight-fitting textiles. To verify the fit of a simulated garment, the interactions between the individual body shape and the garment must be considered. This paper introduces an innovative approach to digitising the softness of human tissue using 4D scanning technology. The primary objective of this research is to explore the interactions between tissue softness and different compression levels of apparel, exerting pressure on the tissue to capture the changes in the natural shape. Therefore, to generate data and model an avatar with soft body physics, it is essential to capture the deformability and elasticity of the soft tissue and map it into the modification options for a simulation. To achieve this, various methods from different fields were researched and compared, and 4D scanning was evaluated as the most suitable method for capturing tissue deformability in vivo. In particular, it should be considered that the human body has different deformation capabilities depending on age, the amount of muscle, and body fat. In addition, different tissue zones have different mechanical properties, so it is essential to identify and classify them in order to store these properties for the simulation. It has been shown that by digitising the obtained data for the different defined applied pressure levels, a prediction of the tissue deformation of the specific person becomes possible. As technology advances and data sets grow, this approach has the potential to reshape how we verify fit digitally with soft avatars and leverage their realistic soft tissue properties for various practical purposes.
With the rapid development of globalization, the demand for translation between different languages is increasing. Although pre-training has achieved excellent results in neural machine translation, existing neural machine translation models have almost no high-quality alignment information suitable for specific fields. Therefore, this paper proposes pre-training neural machine translation with alignment information via optimal transport. First, this paper narrows the representation gap between different languages by using OTAP to generate domain-specific data for information alignment, learning richer semantic information. Second, this paper proposes a lightweight model, DR-Reformer, which uses Reformer as the backbone network and adds Dropout layers and Reduction layers, reducing model parameters without losing accuracy and improving computational efficiency. Experiments on the Chinese and English datasets of AI Challenger 2018 and WMT-17 show that the proposed algorithm performs better than existing algorithms.
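For readers unfamiliar with optimal transport, the following self-contained sketch shows the Sinkhorn-Knopp iteration that underlies entropy-regularized alignment. It is a generic illustration with uniform marginals and random embeddings, not the OTAP procedure from the paper.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport (Sinkhorn-Knopp).
    cost: (n, m) pairwise costs between source/target token embeddings.
    Returns a soft alignment (transport plan) with uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# e.g., cost from cosine distances between random toy embeddings
src, tgt = np.random.rand(5, 16), np.random.rand(7, 16)
cost = 1 - (src / np.linalg.norm(src, axis=1, keepdims=True)) @ \
           (tgt / np.linalg.norm(tgt, axis=1, keepdims=True)).T
plan = sinkhorn(cost)   # rows: source tokens, columns: soft target alignment
```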
Analog integrated circuit sizing still relies heavily on human expert knowledge, as previous automation approaches have not found widespread acceptance in industry. One strand, optimization-based automation, is often discarded due to inflated constraining setups, infeasible results, or excessive run times. To address these deficits, this work proposes an alternative optimization flow that captures a designer’s intuition for feasible design spaces by integrating expert knowledge based on the gm/ID method. Moreover, the extensive run times of simulation-based optimization flows are overcome by incorporating computationally efficient machine learning methods. Neural network surrogate models predicting eleven performance parameters increase the evaluation speed by 3,400× on average compared to a simulator. Additionally, they enable the use of optimization algorithms that depend on automatic differentiation, which would otherwise be unavailable in this field. First, a way of sampling training data that is up to 4× more efficient, based on the aforementioned design space, is detailed. After presenting the architecture and training effort of the surrogate models, they are employed as part of the objective function for sizing three operational amplifiers with three different optimization algorithms. Additionally, the benefits of using the gm/ID method become evident when considering technology migration, as previously found solutions may be reused for other technologies.
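As a rough illustration of the surrogate idea (not the authors' network, features, or technology), the sketch below trains a small multi-layer perceptron to map sizing features to performance figures. The feature names, ranges, and toy response functions are invented for the example; in practice the targets would come from circuit simulations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: sizing/operating-point features (toy: gm/ID, channel length, bias current)
# Y: performance figures (toy stand-ins for gain and bandwidth)
rng = np.random.default_rng(1)
X = rng.uniform([5, 0.1, 10e-6], [25, 2.0, 200e-6], size=(2000, 3))
Y = np.column_stack([20 * np.log10(X[:, 0] * X[:, 1] * 50),  # toy "gain" [dB]
                     X[:, 2] * 1e6 / X[:, 1]])               # toy "bandwidth"

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64),
                                       max_iter=2000, random_state=0))
surrogate.fit(X, Y)
# once trained, evaluating the surrogate replaces a circuit simulator call
pred = surrogate.predict([[15.0, 0.5, 50e-6]])
```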
Natural wood colors occur within a wide range from almost white (e.g., white poplar), various yellowish, reddish, and brownish hues to almost black (e.g., ebony). The intrinsic color of wood is basically defined by its chemical composition. However, other factors such as specific anatomical formations or physical properties further affect the optical impression. Starting with the chemical composition of wood and anatomical basics, wood color and its modifications are discussed in this chapter. The classic method of coloring or re-coloring wood-based material surfaces is the application of a coating containing appropriate dyes or pigments. Different concepts for wood coating and coloration are presented. Another method uses dyes to color the wood structure itself. As alternative techniques, physical methods, for example, drying, steaming, ammoniation, bleaching, enzyme treatment, as well as treatment with electromagnetic irradiation (e.g., UV), are explained in this chapter.
Facing ever-looming climate change, studying the drivers of individuals' Information Systems (IS) Use to reduce environmental harm is gaining momentum. While extant research on the antecedents of sustainable IS Use has focused on specific theories, interventions, contexts, and technologies, a holistic understanding has become increasingly elusive, with a synthesis remaining absent. We employ a systematic literature review methodology to shed light on the driving antecedents of sustainable IS Use among individual consumers. Our results build on the findings of 29 empirical studies drawn from 598 articles retrieved from premier outlets and a forward/backward search. The analysis reveals six salient complementary antecedents: Relief, Empowerment, Default, User-centricity, Salience, and Encouragement. We recommend considering these concepts when developing, deploying, promoting, or regulating digital technologies to mitigate individual consumers' emissions. Along with memorable and implementable concepts, our theoretical framework offers a novel conceptualization and four promising avenues for researchers on sustainable IS Use.
This article presents a modified method of performing power flow calculations as an alternative to purely energy-based simulations of off-grid hybrid systems. The enhancement consists of transforming the scenario-based power flow method into a discrete time-dependent algorithm that includes bus and controller dynamics.
Smart cities are considered data factories that generate an enormous amount of data from various sources. In fact, data is the backbone of any smart service. Therefore, the strategic and beneficial handling of this digital capital is crucial for cities. Some smart city pioneers have already written down their approach to data in the form of data strategies, but what should a city's data strategy include, and how can the goals and measures defined in the strategies be operationalized? This paper addresses these questions by looking closely at the data strategies of cities in Germany and the top three countries in the EU Digital Economy and Society Index. The in-depth analysis of 8 city data strategies has yielded 11 dimensions that cities should consider in their data strategy: relevance of data, principles, methods, data sharing, technology, data culture, data ethics, organizational structure, data security and privacy, collaborations, and data literacy. In addition, data governance is a concept for putting these 11 strategic dimensions into practice through standardization measures, training programs, and the definition of roles and responsibilities, for example by developing a data catalog.
Platforms feature increasingly complex architectures with regard to interconnecting with other digital platforms as well as with a variety of devices and services. This development also impacts the structure of digital platform ecosystems and forces the providers of these platforms, devices, and services to incorporate this complexity in their decision-making. To contribute to the existing body of knowledge on measuring ecosystem complexity, the present research proposes two key artefacts based on ecosystem intelligence: On the one hand, complementarity graphs represent ecosystems with an ecosystem's functional modules as vertices and complementarities as edges; the nodes carry information about the category membership of the module. On the other hand, a process is suggested that can collect important information for ecosystem intelligence using proxies and web scraping. Our approach allows substituting for data that is largely unavailable today for competitive reasons. We demonstrate the use of the artefacts in category-oriented complementarity maps that aggregate the information from complementarity graphs and support decision-making: they show which combinations of module categories create strong and weak complementarities. The paper evaluates the complementarity maps and the data collection process by creating category-oriented complementarity graphs on the Alexa skill ecosystem and concludes with a call for more research based on functional ecosystem intelligence.
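A complementarity graph of the described kind can be represented directly with standard graph tooling. The following sketch, with entirely hypothetical module names, categories, and weights, builds such a graph and aggregates it into a category-oriented complementarity map.

```python
from collections import defaultdict
import networkx as nx

# complementarity graph: functional modules as vertices (with a category
# attribute), complementarities as weighted edges (all values hypothetical)
G = nx.Graph()
G.add_node("weather_skill", category="Information")
G.add_node("light_control", category="Smart Home")
G.add_node("morning_routine", category="Productivity")
G.add_edge("weather_skill", "morning_routine", weight=0.8)
G.add_edge("light_control", "morning_routine", weight=0.6)

# aggregate edge weights per category pair -> category-oriented map
cat_map = defaultdict(float)
for u, v, d in G.edges(data=True):
    key = tuple(sorted((G.nodes[u]["category"], G.nodes[v]["category"])))
    cat_map[key] += d["weight"]
print(dict(cat_map))  # strength of complementarity per category pair
```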
Project managers still face management problems in interorganizational Research and Development (R&D) projects due to their limited authority. Fostering a project culture that is conducive to cooperation and innovation in interorganizational R&D project management demands the commitment of individual project members and thus balances this limited authority. However, the relational collaboration level at which project culture manifests itself is not addressed by current project management approaches, or it is addressed only at a late stage. Consequently, project culture develops within a predefined framework of project organization and organized contents and thus is not actively targeted. Therefore, a shift of focus towards project culture becomes necessary. This can be achieved by project-culture-aware management. The method CLIPS actively supports interorganizational project members in this kind of management. It should be integrable into common project management approaches so that, with its application, all collaboration levels are addressed in interorganizational R&D project management. The goal of this paper is to demonstrate the integrability of the method CLIPS and show how it can be integrated into common project management approaches. This enriches interorganizational R&D project management with a project culture focus.
The chemical recycling of used motor oil via catalytic cracking to convert it into secondary diesel-like fuels is a sustainable and technically attractive solution for managing environmental concerns associated with traditional disposal. In this context, this study was conducted to screen basic and acidic aluminum silicate catalysts doped with different metals, including Mg, Zn, Cu, and Ni. The catalysts were thoroughly characterized using various techniques such as N2 adsorption–desorption isotherms, FT-IR spectroscopy, and TG analysis. The liquid and gaseous products were identified using GC, and their characteristics were compared with acceptable ranges from ASTM characterization methods for diesel fuel. The results showed that metal doping improved the performance of the catalysts, resulting in higher conversion rates of up to 65%, compared to thermal cracking (15%) and undoped aluminum silicates (≈20%). Among all catalysts, basic aluminum silicates doped with Ni showed the best catalytic performance, with conversions and yields three times higher than aluminum silicate catalysts. These findings significantly contribute to developing efficient and eco-friendly processes for the chemical recycling of used motor oil. This study highlights the potential of basic aluminum silicates doped with Ni as a promising catalyst for catalytic cracking and encourages further research in this area.
Polyurethane thermosets have a wide range of applications. In this study, alternative raw materials were used to enhance sustainability. In two newly developed biobased polyurethanes (PUs), the cross-linker content was varied, which caused phase separation and therefore affected the turbidity. To investigate this phenomenon, UV–Vis–NIR spectroscopy was utilized. Spectra were recorded from 200 to 2500 nm in transmittance mode, and multivariate data analysis was applied to the three UV, Vis, and NIR sections separately. For the two different PU classes, each with five different cross-linker contents, classification by principal component analysis combined with linear or quadratic discriminant analysis was possible with an accuracy between 93% and nearly 100%. The best separation was achieved in the NIR range. Partial least-squares regression models were built to predict the cross-linker content. Again, the model for the NIR range proved the most suitable, with the highest R2 (validation) of 0.99 for PU1 and 0.98 for PU2. The corresponding root-mean-square error of prediction values of the external validation were the lowest, with 0.82% (PU1) and 1.25% (PU2). Therefore, UV–Vis–NIR absorbance spectroscopy, especially NIR, is a suitable tool for monitoring the appropriate material composition of turbid PU thermosets in line.
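To illustrate the chemometric workflow (not the study's actual spectra or model settings), the sketch below fits a partial least-squares regression that predicts cross-linker content from absorbance spectra and reports an RMSEP-style error. The data are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# X: absorbance spectra (samples x wavelengths), y: cross-linker content in %
# synthetic stand-in data; real inputs would be the recorded NIR spectra
rng = np.random.default_rng(42)
y = rng.uniform(0, 20, size=100)                       # cross-linker content
X = np.outer(y, rng.normal(size=500)) \
    + rng.normal(scale=0.05, size=(100, 500))          # toy "spectra"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
rmsep = mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5
```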
In clothing e-commerce, the challenge of optimally recommending clothing that suits a user’s unique characteristics remains a pressing issue. Many platforms simply recommend best-selling or popular clothing, without taking into account important attributes like the user’s face color, pupil color, face shape, age, etc. To solve this problem, this paper proposes a personalized clothing recommendation algorithm that incorporates the established 4-Season Color System and user-specific biological characteristics. Firstly, the attributes and colors of clothing are classified by an Fnet network, which can learn disjoint label combinations and mitigate the issue of excessive labels. Secondly, on the basis of the 4-Season Color System, the user’s face color model is trained with a combined MobileNetV3_DTL, which ensures the model’s generalization and improves the training speed. Thirdly, the user’s face shape and age are divided into different categories by an Inception network. Finally, according to the user’s face color, age, face shape, and other information, personalized clothing is recommended in a coarse-to-fine manner. Experiments on five datasets demonstrate that the algorithm proposed in this paper achieves state-of-the-art results.
Determination of the gel point of formaldehyde-based wood adhesives by using a multiwave technique
(2023)
Determining the instant of gelation of formaldehyde-based wood adhesives as an assessment parameter for their curing rate is important for optimizing the curing behavior. Due to the stoichiometrically imbalanced networks of formaldehyde-based adhesives, the crossover point of storage G′ and loss modulus G″ cannot unconditionally be assumed as the gel point in oscillatory time sweeps as the material response is frequency-dependent. This study aims to determine the gel point of selected adhesives by the isothermal multiwave oscillatory shear test. A thorough comparison between the gel and the crossover point of G′ and G″ is performed. Rheokinetic analysis showed no significant difference between the activation energies calculated at the gel point determined by a multiwave test and the crossover point obtained by the time sweep test. Hence, for resins with similar curing reactions, a reliable determination of gel point by applying a multiwave test is needed for a comparison of their reactivity.
Sol−gel-controlled size and morphology of mesoporous silica microspheres using hard templates
(2023)
Mesoporous silica microspheres (MPSMs) represent a promising material as a stationary phase for HPLC separations. The use of hard templates provides a preparation strategy for producing such monodisperse silica microspheres. Here, 15 MPSMs were systematically synthesized by varying the sol–gel reaction parameters of water-to-precursor ratio and ammonia concentration in the presence of a porous p(GMA-co-EDMA) polymeric hard template. Changing the sol–gel process factors resulted in a wide range of MPSMs with particle sizes varying from smaller than one to several micrometers. The application of response surface methodology allowed quantitative predictive models to be derived based on the process factor effects on particle size, pore size, pore volume, and specific surface area of the MPSMs. A narrow size distribution of the silica particles was maintained over the entire experimental space. Two larger-scale batches of MPSMs were prepared, and the particles were functionalized with trimethoxy(octadecyl)silane for application as a stationary phase in reversed-phase liquid chromatography. The separation of proteins and amino acids was successfully accomplished, and the effect of the pore properties of the silica particles on separation was demonstrated.
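The response-surface step can be sketched as a second-order polynomial model over the two sol–gel factors. The design points and responses below are placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# factors: water-to-precursor ratio, ammonia concentration (coded units);
# response: particle size in µm (synthetic placeholder values)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], float)
y = np.array([1.2, 3.5, 0.8, 5.9, 2.4, 2.5, 1.5, 4.6, 2.0, 3.0])

# second-order RSM model: intercept, linear, interaction, quadratic terms
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, y)
predicted_size = rsm.predict([[0.5, -0.5]])  # interpolate in the design space
```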
The proliferation of smart technologies transforms the way individual consumers perform tasks. Considerable research suggests that smart technologies are often related to domestic energy consumption. However, it remains unclear how such technologies transform tasks and thereby impact our planet. We explore the role of technological smartness in personal day-to-day tasks that help create a more sustainable future. In the absence of theory, but facing extensive changes in everyday life enabled by smart technologies, we draw on phenomenon-based theorizing (PBT) guidelines. As anchor, we refer to task endogeneity related to task-technology fit theory (TTF). As infusion, we employ theory on public goods. Our model proposes novel relations between the concepts of smart autonomy and smart transparency and sustainable task outcomes, mediated by task convenience and task significance. We discuss implications, limitations, and future research opportunities.
Smart factories, driven by the integration of automation and digital technologies, have revolutionized industrial production by enhancing efficiency, productivity, and flexibility. However, the optimization and continuous improvement of these complex systems present numerous challenges, especially when real-world data collection is time-consuming, expensive, or limited. In this paper, we propose a novel method for the semi-automated improvement of smart factories using synthetic data and cause-and-effect relationships, while incorporating the aspect of self-organization. The method leverages the power of synthetic data generation techniques to create representative datasets that mimic the behaviour of real-world manufacturing systems. Together with the cause-and-effect relationships, these synthetic datasets serve as a valuable resource for factory optimization, as they enable extensive experimentation and analysis without the constraints of limited or costly real-world data. Furthermore, the method embraces the concept of self-organization within smart factories: by allowing the system to adapt and optimize itself based on feedback from the synthetic data and the cause-and-effect relationships, the factory can dynamically reconfigure and adjust its processes. To facilitate the improvement process, the method integrates the synthetic data and the cause-and-effect relationships with advanced analytics and machine learning algorithms. This synergy between human expertise and technological advancements represents a compelling path towards a truly optimized smart factory of the future.
Production planning and control are characterized by unplanned events, so-called turbulences. Turbulences can be external, originating outside the company (e.g., delayed delivery by a supplier), or internal, originating within the company (e.g., failures of production and intralogistics resources). Turbulences can have far-reaching consequences for companies and their customers, such as delivery delays due to process delays. For target-optimized handling of turbulences in production, forecasting methods incorporating process data, in combination with the use of existing flexibility corridors of flexible production systems, offer great potential. Probabilistic, data-driven forecasting methods allow determining the corresponding probabilities of potential turbulences. However, a parallel application of different forecasting methods is required to identify an appropriate one for the specific application. This requires a large database, which often is unavailable and, therefore, must be created first. A simulation-based approach to generating synthetic data is used and validated to create the necessary database of input parameters for the prediction of internal turbulences. To this end, a minimal system for conducting simulation experiments on turbulence scenarios was developed and implemented. A multi-method simulation of the minimal system synthetically generates the required process data, using agent-based modeling for the autonomously controlled system elements and event-based modeling for the stochastic turbulence events. Based on this generated synthetic data and the variation of the input parameters in the forecast, a comparative study of data-driven probabilistic forecasting methods was conducted using a data analytics tool. Forecasting methods of different types (including regression, Bayesian models, nonlinear models, decision trees, ensembles, deep learning) were analyzed in terms of prediction quality, standard deviation, and computation time. This resulted in the identification of appropriate forecasting methods and the required input parameters for the considered turbulences.
Mesoporous silica microspheres (MPSMs) find broad application as separation materials in high-performance liquid chromatography (HPLC). A promising preparation strategy uses p(GMA-co-EDMA) hard templates to control the pore properties and maintain a narrow size distribution of the MPSMs. Here, six hard templates were prepared that differ in their porosity and surface functionalization. This was achieved by altering the ratio of GMA to EDMA and by adjusting the proportion of monomer and porogen in the polymerization process. The varying amounts of GMA incorporated into the polymer network of P1-6 lead to different amounts of tetraethylene pentamine (TEPA) in the p(GMA-co-EDMA) template. This was established by a partial least squares regression (PLS-R) model based on FTIR spectra of the templates. Deposition of silica nanoparticles (SNPs) into the template under Stoeber conditions and subsequent removal of the polymer by calcination result in MPSM1-6. The size of the SNPs and their incorporation depend on the pore parameters of the template and the degree of TEPA functionalization. Moreover, the incorporated SNPs construct the silica network and control the pore parameters of the MPSMs. Functionalization of the MPSMs with trimethoxy(octadecyl)silane allows their use as a stationary phase for the separation of biomolecules. The pore characteristics and the functionalization of the template determine the pore structure of the silica particles and, consequently, their separation properties.
The fifth generation of mobile communication (5G) is a wireless technology developed to provide reliable, fast data transmission for industrial applications, such as autonomous mobile robots, and to connect cyber-physical systems using Internet of Things (IoT) sensors. In this context, private 5G networks enable the full performance of industrial applications built on dedicated 5G infrastructures. However, emerging wireless communication technologies such as 5G are a complex and challenging topic for training in learning factories, which often lack physical or visual interaction. Therefore, this paper aims to develop a real-time performance monitoring system for private 5G networks and different industrial 5G devices to visualise the performance of, and the impact factors influencing, 5G for students and future connectivity experts. Additionally, this paper presents the first long-term measurements of private 5G networks and shows the gap between the actual and targeted performance of private 5G networks.
Most question-answering (QA) systems rely on training data to reach their optimal performance. However, acquiring training data for supervised systems is both time-consuming and resource-intensive. To address this, we propose TFCSG, an unsupervised similar-question retrieval approach that leverages pre-trained language models and multi-task learning. Firstly, topic keywords in question sentences are extracted sequentially based on a latent topic-filtering algorithm to construct an unsupervised training corpus. Then, the multi-task learning method is used to build the question retrieval model. Three tasks are designed: the first is a short-sentence contrastive learning task; the second judges the similarity between a question sentence and its corresponding topic sequence; the third generates the corresponding topic sequence from a question sentence. The three tasks are used to train the language model in parallel. Finally, similar questions are obtained by calculating the cosine similarity between sentence vectors. Comparison experiments on public question datasets show that TFCSG outperforms the unsupervised baseline methods. Moreover, no manual labeling is needed, which greatly saves human resources.
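The final retrieval step reduces to cosine similarity between sentence vectors. A minimal sketch, assuming embeddings already produced by a trained sentence encoder (the dimensions and data here are random placeholders):

```python
import numpy as np

def retrieve_similar(query_vec, corpus_vecs, top_k=3):
    """Rank corpus questions by cosine similarity to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    C = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = C @ q
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

# toy example with random 384-dim "sentence embeddings"
rng = np.random.default_rng(7)
corpus = rng.normal(size=(1000, 384))
query = rng.normal(size=384)
idx, scores = retrieve_similar(query, corpus)
```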
Since its first publication in 2015, the learning factory morphology has been frequently used to design new learning factories and to classify existing ones. The structuring supports the concretization of ideas and promotes exchange between stakeholders.
However, since the implementation of the first learning factories, the learning factory concept has constantly evolved.
Therefore, in the Working Group "Learning Factory Design" of the International Association of Learning Factories, the existing morphology has been revised and extended based on an analysis of the trends observed in the evolution of learning factory concepts. On the one hand, new design elements were added to the previous seven design dimensions; on the other hand, new design dimensions were introduced. The revised version of the morphology thus provides even more targeted support for the design of new learning factories in the future.
The market for indoor positioning systems for a variety of applications has grown strongly in recent years. A wide range of systems is available, varying considerably in terms of accuracy, price and technology used. The suitability of the systems is highly dependent on the intended application. This paper presents a concept to use a single low-cost PTZ camera in combination with fiducial markers for indoor position and orientation determination. The intended use case is to capture a plant layout consisting of position, orientation and unique identity of individual facilities. Important factors to consider for the selection of a camera have been identified and the transformation of the marker pose in camera coordinates into a selectable plant coordinate system is described. The concept is illustrated by an exemplary practical implementation and its results.
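The described transformation of a marker pose from camera coordinates into a selectable plant coordinate system is a chain of homogeneous transforms. A minimal sketch with identity rotations and invented translations, not the paper's calibration procedure:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation matrix R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# T_cam_marker: marker pose in camera coordinates (from fiducial detection)
# T_plant_cam: calibrated camera pose in the chosen plant coordinate system
T_cam_marker = pose_to_matrix(np.eye(3), np.array([0.5, 0.2, 3.0]))
T_plant_cam = pose_to_matrix(np.eye(3), np.array([10.0, 4.0, 2.8]))

# marker (i.e., facility) pose in plant coordinates by chaining transforms
T_plant_marker = T_plant_cam @ T_cam_marker
position = T_plant_marker[:3, 3]       # facility position in plant coordinates
```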
The benefits of urban data cannot be realized without a political and strategic view of data use. A core concept within this view is data governance, which aligns strategy in data-relevant structures and entities with data processes, actors, architectures, and overall data management. Data governance is not a new concept and has long been addressed by scientists and practitioners from an enterprise perspective. In the urban context, however, data governance has only recently attracted increased attention, despite the unprecedented relevance of data in the advent of smart cities. Urban data governance can create semantic compatibility between heterogeneous technologies and data silos and connect stakeholders by standardizing data models, processes, and policies. This research provides a foundation for developing a reference model for urban data governance, identifies challenges in dealing with data in cities, and defines factors for the successful implementation of urban data governance. To obtain the best possible insights, the study carries out qualitative research following the design science research paradigm, conducting semi-structured expert interviews with 27 municipalities from Austria, Germany, Denmark, Finland, Sweden, and the Netherlands. The subsequent data analysis based on cognitive maps provides valuable insights into urban data governance. The interview transcripts were transferred and synthesized into comprehensive urban data governance maps to analyze entities and complex relationships with respect to the current state, challenges, and success factors of urban data governance. The findings show that each municipal department defines data governance separately, with no uniform approach. Given cultural factors, siloed data architectures have emerged in cities, leading to interoperability and integrability issues. A city-wide data governance entity in a cross-cutting function can be instrumental in breaking down silos in cities and creating a unified view of the city’s data landscape. The further identified concepts and their mutual interaction offer a powerful tool for developing a reference model for urban data governance and for the strategic orientation of cities on their way to data-driven organizations.
While driving, stress is caused by situations in which drivers judge the driving demands to exceed their abilities or lose the capability to handle the situation. This leads to increased numbers of driver mistakes and traffic violations. Additional stressing factors are time pressure, road conditions, or a dislike for driving. Therefore, stress affects the driver and road safety. Stress is classified into two categories depending on its duration and its effects on body and psyche: short-term eustress and constantly present distress, which causes degenerative effects. In this work, we focus on distress. Wearable sensors are handy tools for collecting biosignals like heart rate, activity, etc. Easy installation and their non-intrusive nature make them convenient for estimating stress. This study focuses on the investigation of stress and its implications. Specifically, the research analyzes stress within a select group of individuals from both Spain and Germany. The primary objective is to examine the influence of recognized psychological factors, including personality traits such as neuroticism, extroversion, and psychoticism, on stress and road safety. The estimation of stress levels was accomplished through the collection of physiological parameters (R-R intervals) using a Polar H10 chest strap. We observed that personality traits such as extroversion exhibited similar trends during relaxation, with an average heart rate 6% higher in Spain and 3% higher in Germany. However, while driving, introverts on average experienced more stress, with rates 4% and 1% lower than extroverts in Spain and Germany, respectively.
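Stress estimation from R-R intervals typically starts from time-domain heart rate variability features. The sketch below computes common ones (SDNN, RMSSD, pNN50); it is a generic illustration, not the study's analysis pipeline, and the interval values are invented.

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Common time-domain HRV features from R-R intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),          # average heart rate
        "sdnn_ms": rr.std(ddof=1),                   # overall variability
        "rmssd_ms": np.sqrt(np.mean(diff ** 2)),     # short-term variability
        "pnn50_pct": np.mean(np.abs(diff) > 50) * 100,  # diffs > 50 ms
    }

# e.g., intervals streamed from a chest-strap sensor (placeholder values)
print(hrv_metrics([812, 798, 825, 840, 810, 795, 830]))
```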
The fierce market competition environment makes employees feel insecure at work. While it is difficult for enterprises to provide employees with a sense of security, they have to rely on employees’ innovative behavior to seek competitive advantage. Therefore, this study focuses on how employees engage in innovative behavior when they face job insecurity.
Methods: Using a variable-centered approach, this study examines the mediating effects of intrinsic and impression management motivation in the relationship between quantitative and qualitative job insecurity and innovative behavior, including proactive and reactive innovative behavior. In addition, a person-centered approach is used to investigate whether distinct combinations of quantitative and qualitative job insecurity can be distinguished, and to examine the effect of these job insecurity profiles on motivation and innovative behavior. We included 503 data sets collected via the Credamo platform in China in the data analysis.
Results: The study found that quantitative job insecurity affects proactive and reactive innovative behavior through impression management motivation, and that qualitative job insecurity affects proactive and reactive innovative behavior through intrinsic and impression management motivation. In addition, three job insecurity profiles were identified: balanced high job insecurity, balanced low job insecurity, and a profile dominated by high quantitative job insecurity, all of which have significantly different effects on motivation and innovative behavior.
Discussion: This study provides new insights into the relationship between job insecurity and innovative behavior and compensates for the limitation of the traditional variable-centered approach, which cannot capture heterogeneity within the workforce.
Mobile assistance systems (MAS) promise to overcome personnel shortages in operating theatres worldwide. A literature review inspired by the PRISMA 2020 method determines the state of the art of MAS, and identifies a lack of application areas for MAS in the operating theatre. Interviews with subject-matter experts aim to investigate application areas for MAS. The results show that most operational tasks refer to material management and patient management. MAS, with their potential to reduce the time needed for material and patient management, and the physical and mental strain of patient management, have great potential in the operating theatre.
In the past, plant layouts were regarded as highly static structures. With increasing internal and external factors causing turbulence in operations, it has become more necessary for companies to adapt to new conditions in order to maintain optimal performance. One possible way for such an adaptation is the adjustment of the plant layout by rearranging the individual facilities within the plant. Since the information about the plant layout is considered as master data and changes have a considerable impact on interconnected processes in production, it is essential that this data remains accurate and up-to-date. This paper presents a novel approach to create a digital shadow of the plant layout, which allows the actual state of the physical layout to be continuously represented in virtual space. To capture the spatial positions and orientations of the individual facilities, a pan-tilt-zoom camera in combination with fiducial markers is used. With the help of a prototypically implemented system, the real plant layout was captured and converted into different data formats for further use in exemplary external software systems. This enabled the automatic updating of the plant layout for simulation, analysis and routing tasks in a case study and showed the benefits of using the proposed system for layout capturing in terms of accuracy and effort reduction.
The increase in product variance and shorter product lifecycles result in higher production ramp-up frequencies and promote the usage of mixed-model lines. The ramp-up is considered a critical step in the product life cycle, and in the automotive industry phases of the ramp-up are often executed on separate production lines (pilot lines) or in separate factories (pilot plants) to verify processes and qualify employees without affecting the production of other products in the mixed-model line. The financial funds required for planning and maintaining dedicated pilot lines prevent small and medium-sized enterprises (SMEs) from applying them. Hence, SMEs require different tools for piloting and training during the production ramp-up. Learning islands, on which employees can be trained through induced and autonomous learning, offer a solution. In this work, a concept for their development and application, containing the required organization, activities, and materials, is developed through expert interviews. The results of a case study with a medium-sized automotive manufacturer show that learning islands are a viable tool for employee qualification and process verification during the ramp-up of mixed-model lines.
The presented research is dedicated to estimating the correlation between the level of renewable energy sources and the costs of congestion management in electric networks in selected European countries. Data from six countries in the North-West European area (Italy, Spain, Germany, France, Poland, and Austria) were investigated. Factors considered included grid congestion costs (comprising re-dispatching as well as countertrading costs), gross electricity generation, installed capacity of electric generating facilities, installed capacity of non-dispatchable renewable energy sources, and total electricity consumption. Special attention is paid to the share of renewable energy sources. It is found that grid congestion costs are not clearly affected by the penetration of non-dispatchable renewables in all the analysed countries, and therefore a clear mathematical correlation cannot be extrapolated with the available data. The results of this research show, in general, a loose dependency of the grid congestion costs on the penetration of renewables and a strong dependency on the total electricity consumption of the country.
Assistant platforms
(2023)
Many assistant systems have evolved toward assistant platforms. These platforms combine a range of resources from various actors via a declarative and generative interface. Among the examples are voice-oriented assistant platforms like Alexa and Siri, as well as text-oriented assistant platforms like ChatGPT and Bard. They have emerged as valuable tools for handling tasks without requiring deeper domain expertise and have received considerable attention with the recent advances in generative artificial intelligence. In view of their growing popularity, this Fundamental outlines the key characteristics and capabilities that define assistant platforms. The former comprise a multi-platform architecture, a declarative interface, and a multi-platform ecosystem, while the latter include capabilities for composition, integration, prediction, and generativity. Based on this framework, a research agenda is proposed along the capabilities and affordances of assistant platforms.
Determinants of customer recovery in retail banking — lessons from a German banking case study
(2023)
Due to the increased willingness of retail banking customers to switch and churn their banking relationships, a question arises: Is it possible to win back lost customers, and if so, is such a recovery even desirable once all economic factors have been considered? To answer these questions, this paper examines selected determinants of the recovery of terminated customer–bank relationships from the perspective of former customers. This study therefore evaluates for the first time, empirically and systematically with reference to a German Sparkasse as a case-study setting, whether lost customers have a sufficient general willingness to return (GWR) to a retail banking relationship. Our results show a correlation between the GWR to a banking relationship and specific determinants: variety seeking, attractiveness of alternatives, and customer satisfaction with the former business relationship. In addition, we show that a customer’s GWR varies depending on the reason for churn and is, surprisingly, greater when the customer defected for reasons that lie within the customer’s own sphere. Despite the case-study character, our results provide relevant insights for other banks, particularly in countries with a comparable banking system.
Monitoring heart rate and breathing is essential to understanding the physiological processes in sleep analysis. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can help make sleep monitoring more portable in someone's home. This study conducted a series of experiments to investigate the use of pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. Subsequent sets of experiments involved the addition of small rubber domes (transparent and black) that were glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 breaths per minute and 3.24 bpm for respiration and heart rate, respectively. However, the addition of the small rubber hemispheres did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 bpm for respiration rate and heart rate, respectively. The findings of this study suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep. These sensors provide a more comfortable and non-invasive method of sleep monitoring. However, since the addition of the small rubber domes did not significantly enhance the accuracy of the readings, they may not be a worthwhile addition to the pressure sensor system.
The Covid-19 virus triggered a worldwide pandemic, and many employees were therefore required to work from home, which caused numerous challenges. With the Covid-19 pandemic now in its third year, several studies are already available on the subject of working from home. To investigate the impact of remote work on employee satisfaction and trust, this quantitative study reviews existing results and formulates hypotheses based on a conceptual model created through a qualitative study and an extensive literature review. The research question is as follows: Does working from home during Covid-19 affect employee satisfaction and trust? To test the hypotheses, a structural equation model was constructed and analyzed. A culture of trust and flexibility are identified as the biggest influencing factors in this study.
Sleep is an essential part of human existence, as we spend approximately a third of our lives in this state. Sleep disorders are common conditions that can affect many aspects of life. They are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation by performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate the use of a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during breathing. The presented processing algorithm filters the obtained signals and detects the presence of apnea events. The performance of the developed system and apnea event detection algorithm (average accuracy, specificity, and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
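The filtering stage of such a pipeline can be sketched with a zero-phase Butterworth band-pass restricted to the respiration band. The cutoffs, sampling rate, and toy signal below are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low=0.1, high=0.5, order=4):
    """Zero-phase Butterworth band-pass keeping the respiration band
    (roughly 6-30 breaths/min, i.e. 0.1-0.5 Hz)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

# toy accelerometer trace: 0.25 Hz breathing plus noise, sampled at 50 Hz
fs = 50
t = np.arange(0, 60, 1 / fs)
raw = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(t.size)
resp = bandpass(raw, fs)   # cleaned respiratory oscillation
```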
Healthy sleep is one of the prerequisites for a good human body and brain condition, including general well-being. Unfortunately, there are several sleep disorders that can negatively affect this. One of the most common is sleep apnoea, in which breathing is impaired. Studies have shown that this disorder often remains undiagnosed. To avoid this, developing a system that can be widely used in a home environment to detect apnoea and monitor the changes once therapy has been initiated is essential. The conceptualisation of such a system is the main aim of this research. After a thorough analysis of the available literature and state of the art in this area of knowledge, a concept of the system was created, which includes the following main components: data acquisition (including two parts), storage of the data, apnoea detection algorithm, user and device management, data visualisation. The modules are interchangeable, and interfaces have been defined for data transfer, most of which operate using the MQTT protocol. System diagrams and detailed component descriptions, including signal requirements and visualisation mockups, have also been developed. The system's design includes the necessary concepts for the implementation and can be realised in a prototype in the next phase.
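Since most of the described interfaces operate using MQTT, the data flow between modules can be illustrated with a short publish example. The broker address, topic layout, and payload schema are hypothetical, and the client construction follows the paho-mqtt 1.x style.

```python
import json
import paho.mqtt.client as mqtt

# hypothetical broker and topic layout; paho-mqtt 1.x style constructor
client = mqtt.Client()
client.connect("broker.local", 1883)

sample = {
    "device_id": "bed-sensor-01",            # invented device identifier
    "timestamp": "2023-05-01T23:14:02Z",
    "respiration_raw": [512, 498, 530],      # placeholder sensor readings
}

# the data-acquisition module publishes; storage and the apnoea detection
# algorithm would subscribe to the same topic
client.publish("sleep/acquisition/bed-sensor-01", json.dumps(sample), qos=1)
client.disconnect()
```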
Entrepreneurship plays a role both for the development of African countries and for foreign companies with market entry plans. The infrastructural and institutional conditions for entrepreneurship are still difficult, but the advancing digitization leads to an increasingly active start-up scene in many African countries. There is still a mismatch between the areas where start-ups are created and the areas where foreign companies are looking for partners for market entry. Thus, despite positive developments in entrepreneurship, it remains difficult to find suitable partners in the foreseeable future.
The influence of sleep on human health is enormous, and sleep disorders can accordingly have a negative impact on it. To avoid this, they should be identified and treated in time. For this purpose, objective (device-based) or subjective (based on perceived values) measurement methods are used for sleep analysis to understand the problem. The aim of this work is to find out whether the two methods are interchangeable and can provide reliable results. In accordance with this goal, a study was conducted with people aged over 65 (a total of 154 night-time recordings) in which both measurement methods were compared. Sleep questionnaires and electronic devices for sleep assessment placed under the mattress were applied to achieve the study aims. The obtained results indicated that a correlation between the two measurement methods could be observed for sleep characteristics such as total sleep time, total time in bed, and sleep efficiency. However, there are also significant differences in the absolute values of the two measurement approaches for some subjects/nights, which leads us to conclude that substitution is more likely to be an option for long-term monitoring, where trends matter more than the absolute values of individual nights.
Development of an expert system to overpass citizens technological barriers on smart home and living
(2023)
Adopting new technologies can be overwhelming, even for people with experience in the field. For the general public, keeping up with new implementations, releases, brands, and enhancements can cause them to lose interest. There is a clear need for point sources and platforms that provide helpful information about novel and smart technologies, assisting users, technicians, and providers with products and technologies. The purpose of these platforms is twofold, as they can gather and share information on interests common to manufacturers and vendors. This paper presents the "Finde-Dein-SmartHome" tool, developed in association with the Smart Home & Living competence center [5] to help users learn about, understand, and purchase available technologies that meet their home automation needs. This tool aims to lower the usability barrier and guide potential customers in clearing their doubts about privacy and pricing. Communities can use the information provided by this tool to identify market trends that could eventually lower costs for providers and incentivize access to innovative home technologies and devices supporting long-term care.
Managerial accountants spend a large part of their working time on more operational activities in cost accounting, reporting, and operational planning and budgeting. In all these areas, there has been increasing discussion in recent years, both in theory and practice, about using more digital technologies. For reporting, this means not only an intensified discussion of technologies such as RPA and AI but also more intensive changes to existing reporting systems. In particular, management information systems (MIS), which are maintained by managerial accountants and used by managers for corporate management, should be mentioned here. Based on an empirical survey in a large German company, this article discusses the requirements and assessments of users when switching from a regular MIS to a cloud-based system.
Measuring cardiorespiratory parameters during sleep using non-contact sensors and the ballistocardiography technique has received much attention because the method is low-cost, unobtrusive, and non-invasive. Designing a user-friendly, simple-to-use, and easy-to-deploy system that remains less error-prone is still open and challenging due to the complex morphology of the signal. In this work, using four force-sensitive resistor sensors, we conducted a study with four sensor distributions in order to simplify the complexity of the system by identifying the region of interest for heartbeat and respiration measurement. The sensors are deployed under the mattress and attached to the bed frame without any interference with the subjects. The four distributions comprise two linear horizontal, one linear vertical, and one square arrangement, covering the region that influences cardiorespiratory activities. We recruited four subjects and acquired data in four regular sleeping positions, each for a duration of 80 seconds. The signal processing was performed using the discrete wavelet transform (bior3.9) with a smoothing level of 4, as well as bandpass filtering. The results indicate a mean absolute error of 2.35 and 4.34 for respiration and heartbeat, respectively. The results suggest the efficiency of a triangle-shaped arrangement of three sensors for measuring heartbeat and respiration parameters in all four regular sleeping positions.
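The named decomposition (bior3.9, level 4) maps directly onto PyWavelets. The sketch below decomposes a toy force-sensor trace and reconstructs a smoothed version from the approximation band; the signal and the reconstruction strategy are illustrative, not the authors' exact processing.

```python
import numpy as np
import pywt

# toy force-sensor trace: 1.2 Hz heartbeat on a 0.3 Hz breathing baseline
fs = 100
t = np.arange(0, 80, 1 / fs)
sig = 0.2 * np.sin(2 * np.pi * 1.2 * t) + np.sin(2 * np.pi * 0.3 * t)

# multilevel DWT with the biorthogonal 3.9 wavelet, as named in the abstract
coeffs = pywt.wavedec(sig, "bior3.9", level=4)

# crude smoothing: keep the level-4 approximation, zero the detail bands
denoised = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                        "bior3.9")[: sig.size]
```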
In increasingly complex production environments, tremendous efforts are being made to optimize the efficiency of production systems. An important efficiency factor is industrial maintenance, which both influences cost and secures the technical availability of machines and components. Maintenance managers are required to deliver the necessary availability of the production system while minimizing the resources needed to do so. To make this possible, a method to evaluate the dependency between the technical availability of an entire production system and maintenance resources is necessary. This paper presents a systematic literature review of such methods. In order to assess the methods proposed in the literature, requirements are first developed, including a necessary focus on maintenance strategies within these methods. Including maintenance strategies is necessary since they provide the foundation for both the availability of a component and the maintenance resources needed. In total, 13 requirements are developed, and 21 different methods are evaluated. Only one of the proposed methods addresses all requirements, with others lacking possible combinations of maintenance strategies and the resulting influences on the production system.
Introduction: Telemedicine reduces greenhouse gas emissions (CO2eq); however, reported results vary considerably depending on the setting. This is the first study to focus on the effects of telemedicine on the CO2 footprint of primary care.
Methods: We conducted a comprehensive retrospective study to analyze the total CO2eq emissions of the kilometers (km) saved by telemedical consultations. We categorized prevented and provoked patient journeys, including pharmacy visits. We calculated the CO2eq emission savings of primary care telemedical consultations in comparison to the journeys that would have occurred without telemedicine. We used a comprehensive footprint approach, including all telemedical cases and the CO2eq emissions of the telemedicine center infrastructure. To determine the net amount of CO2eq emissions avoided by the telemedical center, we calculated the emissions associated with the provision of telemedical consultations (including the total power consumption of physicians’ workstations) and subtracted them from the total avoided CO2eq emissions. Furthermore, our calculation also considered patient cases that required an in-person visit after the telemedical consultation. We calculated the savings taking into account the source of the consumed energy (renewable or not).
Results: Overall, 433 890 telemedical consultations saved 1 800 391 km of travel. On average, one telemedical consultation saved 4.15 km of individual transport and consumed 0.15 kWh. We detected savings in almost every patient cluster. After subtracting the CO2eq emissions caused by the telemedical center, the data reveal total savings of 247.1 net tons of CO2eq, or 0.57 kg CO2eq per telemedical consultation. The comprehensive footprint approach thus indicates a reduced footprint due to telemedicine in primary care.
Discussion: Integrating a telemedical center into the health care system reduces the CO2 footprint of primary care; this holds even in a densely populated country with comparatively low car use such as Switzerland. The insights of this study complement previous studies that focused on narrower aspects of telemedical consultations.
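As a quick plausibility check, the per-consultation figures follow directly from the reported totals. In the sketch below, only the first four constants come from the study; the two emission factors are hypothetical placeholders used solely to illustrate the structure of the net-savings calculation.

```python
# Back-of-the-envelope check of the reported figures. Only the first four
# constants come from the study; the two emission factors further down are
# hypothetical placeholders illustrating the net-savings calculation.
consultations = 433_890
km_saved = 1_800_391
net_tons_saved = 247.1
kwh_per_consultation = 0.15

print(km_saved / consultations)               # ~4.15 km saved per consultation
print(net_tons_saved * 1000 / consultations)  # ~0.57 kg CO2eq per consultation

# Structure of the calculation (factors are assumptions, not study values):
G_PER_KM = 170   # hypothetical g CO2eq per km of individual transport
G_PER_KWH = 30   # hypothetical g CO2eq per kWh (largely renewable mix)
gross_g = km_saved * G_PER_KM
center_g = consultations * kwh_per_consultation * G_PER_KWH
print((gross_g - center_g) / 1e6, "net tons under the assumed factors")
```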
The development of automatic solutions for the detection of physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data make these data accessible faster and more efficiently. As a result, the development of artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, the development of deep learning models for detecting obstructive sleep apnea (OSA) events is at the forefront. Numerous scientific studies focus on the architecture of the models and the results these models can provide in terms of OSA classification and Apnea-Hypopnea Index (AHI) calculation. However, little attention is paid to other aspects that are crucial for the training and performance of the models, among them the set of physiological signals used and the preprocessing performed before model training. This paper covers the essential requirements that must be considered before training a deep learning model for obstructive sleep apnea detection and surveys the solutions that currently exist in the scientific literature by analyzing the preprocessing tasks performed prior to training.
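For illustration, a typical preprocessing pipeline for one physiological channel (e.g., airflow) might look like the following sketch: bandpass filtering, resampling to a common rate, z-normalization, and segmentation into fixed-length epochs. The window length, target rate, and filter band are common choices in the literature, not requirements prescribed by this paper.

```python
# Illustrative preprocessing for a single physiological channel before
# deep-learning-based OSA detection. Window length, target rate and filter
# band are common defaults in the literature, not values from this paper.
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess(signal, fs, target_fs=10, lowcut=0.05, highcut=2.0):
    """Bandpass-filter, resample and z-normalize one channel."""
    sos = butter(3, [lowcut, highcut], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    resampled = resample_poly(filtered, target_fs, fs)
    return (resampled - resampled.mean()) / (resampled.std() + 1e-8)

def segment(signal, fs, window_s=30):
    """Split into non-overlapping windows (e.g. 30 s epochs) for the model."""
    n = int(window_s * fs)
    usable = (len(signal) // n) * n
    return signal[:usable].reshape(-1, n)

fs = 100
airflow = np.random.randn(fs * 3600)       # stand-in for 1 h of airflow data
epochs = segment(preprocess(airflow, fs), fs=10)
print(epochs.shape)                         # (120, 300): 120 epochs of 30 s
```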
Through their procyclical behavior, loan loss provisions have been identified as one of the factors that contribute to financial instability during a crisis. IFRS 9 was introduced in 2018 with an expected credit loss model replacing the incurred loss model of IAS 39 to mitigate this effect in the future. Our study analyzes the loan loss provisions of major banks in the Eurozone to determine for the first time whether the implementation of IFRS 9, as intended by regulators, has a dampening effect on procyclicality, especially during the stressed situation under COVID‐19. We analyze 51 banks from 12 countries of the European Monetary Union using 2856 firm‐year observations. While no robust evidence of reduced procyclicality can be found from the implementation of IFRS 9 until the pandemic, we find evidence that loan loss provisions moved countercyclically during 2020, indicating an alleviating effect at the beginning of the exogenous shock.
During the first years of the last decade, Egypt faced recurrent electricity cut-offs in summer, and in the past few years the electricity tariff has increased dramatically. Radiative cooling to the clear night sky is a renewable resource that offers a partial solution, and the dry desert climate favors nocturnal radiative cooling applications. This study investigates the potential of nocturnal radiative cooling systems (RCSs) to reduce the energy consumption of the residential building sector in Egypt. The system technology proposed in this work is based on uncovered solar thermal collectors integrated into the building's hydronic system. By implementing different control strategies, the same system can be used for both cooling and heating. The goal of this paper is to analyze the performance of RCSs in residential buildings in Egypt. The dynamic simulation program TRNSYS was used to simulate the thermal behavior of the system. The relevant issues of Egypt as a case study are first reviewed. The paper then introduces the work done to develop a building model that represents a typical residential apartment in Egypt. Typical occupancy profiles were developed to define the internal thermal gains, and the control strategy adopted to optimize the system operation is presented as well. To fully understand and evaluate the operation of the proposed RCS, four simulation cases were considered: 1. a reference case (fully passive), 2. stand-alone operation of the RCS, 3. ideal heating and cooling operation (fully active), and 4. hybrid operation (the active cooling system supported by the proposed RCS). The analysis considered the three main distinct climates in Egypt, represented by the cities of Alexandria, Cairo and Asyut. Hotter and drier weather conditions resulted in a higher cooling potential and larger temperature differences. The simulated cooling power in Asyut was 28.4 W/m² for a 70 m² absorber field; for a smaller field area of 10 m², the cooling power reached 109 W/m², but with modest temperature differences. To meet rigorous thermal comfort conditions, the proposed sensible RCS cannot fully replace conventional air-conditioning units, especially in humid areas like Alexandria. Operating in a hybrid system, a 10% reduction in the active cooling energy demand could be achieved in Asyut while keeping the cooling set-point at 24 °C; this reduction nearly doubled when the thermal comfort set-point was raised by two degrees (to 26 °C). In a sensitivity analysis, external shading devices as a passive measure as well as the implementation of the Egyptian code for buildings (ECP306/1–2005) were also investigated. The analysis also raised further relevant aspects for discussion, e.g. system sizing, environmental effects, limitations and recommendations.
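For orientation, the net cooling power per unit area of such an unglazed radiator is commonly estimated from the balance between thermal radiation to the sky and convective gains; the textbook form below is a general estimate, not necessarily the exact TRNSYS component model used in the study:

```latex
% General textbook estimate of net nocturnal radiative cooling power per unit
% area; not necessarily the exact TRNSYS component model used in the study.
q_{\mathrm{net}} \;=\; \varepsilon\,\sigma\left(T_p^{4} - T_{\mathrm{sky}}^{4}\right)
\;-\; h_c\left(T_{\mathrm{air}} - T_p\right)
```

Here ε is the absorber emissivity, σ the Stefan-Boltzmann constant, T_p the plate temperature, T_sky the effective sky temperature, and h_c the convective heat-transfer coefficient. Drier, clearer skies lower T_sky and thus raise q_net, which is consistent with the higher cooling potential reported for Asyut.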
This study introduces a straightforward approach to constructing three-dimensional (3D) surface-enhanced Raman spectroscopy (SERS) substrates using chemically modified silica particles as microcarriers and attaching metal nanoparticles (NPs) onto their surfaces. Tollens’ reagent and sputtering techniques are utilized to prepare the SERS substrates from mercapto-functionalized silica particles. Treatment with Tollens’ reagent generates a variety of silver NPs, ranging from approximately 10 to 400 nm, while sputtering with gold (Au) yields uniformly distributed NPs with an island-like morphology. Both substrates display wide plasmon resonances in the scattering spectra, making them effective for SERS in the visible spectral range, with enhancement factors (the ratio of the analyte’s intensity at the hotspot to that on the substrate in the absence of metal nanoparticles) of up to 25. These 3D substrates have a significant advantage over traditional SERS substrates because their active surface area is not limited to a 2D plane but is much greater due to the 3D arrangement of the NPs. This feature may enable much higher SERS intensities from within streaming liquids or inside cells and tissues.
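Written as a formula, the enhancement factor defined parenthetically above is simply an intensity ratio:

```latex
% Enhancement factor as defined in the abstract: intensity at the hotspot
% relative to the same analyte on the metal-free substrate.
\mathrm{EF} \;=\; \frac{I_{\mathrm{hotspot}}}{I_{\mathrm{no\ metal}}}
```

with values of up to 25 reported for these substrates.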
Development of an IoT-based inventory management solution and training module using smart bins
(2023)
Flexibility, transparency and changeability of warehouse environments play an increasingly important role in achieving cost-efficient production of small batch sizes. This results in increasing requirements for warehouses in terms of flexibility, scalability, reconfigurability and transparency of material and information flows in order to deal with a large number of different components and with material and information flows that vary due to small batch sizes. Therefore, an IoT-based inventory management solution and training module has been developed, implemented and validated at Werk150 – the Factory on campus of the ESB Business School. Key elements of the developed solution are smart bins that use weight mats to track the bin’s content, together with additional sensors and buttons connected to an IoT hub to collect data on material consumption and manual handling operations. The use of weight mats for the smart bins makes it possible to measure the container content independently of the specific component geometry, and thus for a variety of components, based on the specific component weights. The developed solution synchronizes the flow of materials and information, resulting in increased flexibility and significantly higher transparency of the material flow. AI-based algorithms are applied to analyse the gathered data and to initiate process optimizations by providing logistics decision makers with a profound and transparent basis for decision making. In order to provide students and industry visitors of the learning factory with the necessary competences and to support the transfer into practice, a training module on IoT-based inventory management was developed and implemented.
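The core of the weight-mat approach is a simple relation: the item count follows from the measured gross weight, the bin's tare weight, and the per-item weight. The sketch below illustrates this; all names, weights, and thresholds are illustrative, not taken from the Werk150 implementation.

```python
# Minimal sketch of weight-based bin content estimation as used by smart bins
# with weight mats. All names, weights and thresholds are illustrative, not
# taken from the Werk150 implementation.
from dataclasses import dataclass

@dataclass
class SmartBin:
    tare_g: float         # weight of the empty bin
    unit_weight_g: float  # weight of one component
    reorder_point: int    # minimum stock before replenishment is triggered

    def count(self, gross_weight_g: float) -> int:
        """Estimate the number of items currently in the bin."""
        return max(0, round((gross_weight_g - self.tare_g) / self.unit_weight_g))

    def needs_replenishment(self, gross_weight_g: float) -> bool:
        return self.count(gross_weight_g) <= self.reorder_point

bin_ = SmartBin(tare_g=250.0, unit_weight_g=12.5, reorder_point=10)
print(bin_.count(475.0))                # -> 18 items
print(bin_.needs_replenishment(475.0))  # -> False
```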
The circular economy aims to support reuse and extend product life cycles through repair, remanufacturing, upgrades and retrofits, as well as to close material cycles through recycling. To successfully manage the necessary transformation processes towards a circular economy, manufacturing enterprises rely on the competency of their employees. The definition of competency requirements for circular-economy-oriented production networks will contribute to the operationalization of the circular economy. The International Association of Learning Factories (IALF) states in its mission the development of learning systems addressing these challenges for the training of students and the further education of industry employees. To identify the required competencies for the circular economy, the major changes in the product life cycle phases have been investigated based on the state of the science and compared to the socio-technical infrastructure and thematic fields of the learning factories considered in this paper. To operationalize the circular economy approach in the product design and production phases in learning factories, an approach for a cross-learning-factory network (the so-called "Cross Learning Factory Product Production System (CLFPPS)") has been developed. The proposed CLFPPS represents a network along the design dimensions of learning factories. This approach contributes to the promotion of the circular economy in learning factories, as it makes use of and combines the focus areas of different learning factories. This enables the CLFPPS to offer a holistic view of the product life cycle in production networks.
Large critical systems, such as those created in the space domain, are usually developed by a large number of organizations and, furthermore, have to comply with standards. Yet, the different stakeholders often do not have a common understanding of the required quality of requirements specifications. Achieving such a common understanding is a laborious process that is currently not sufficiently supported. Moreover, such a common understanding must be aligned with the standards. In this paper, we present an approach that can be used to align the different stakeholder perceptions regarding the quality of requirements specifications. Existing quality models for requirements specifications are analyzed for equivalences and transferred into a common representation, the so-called Aligned Quality Map (AQM). Furthermore, a process is defined that supports the alignment of different stakeholder perspectives with regard to the quality of requirements specifications using the AQM, which is validated in a case study in the context of European space projects. The AQM has been created and populated with an initial set of quality models. It is designed in such a way that it can be extended to include further quality models. The case study has shown that an alignment of different stakeholder perspectives and the quality model of the European Cooperation for Space Standardization using the AQM is feasible. The approach allows for aligning different stakeholder perspectives towards a common understanding of the quality of requirements specifications in the context of standards. Furthermore, the AQM supports the assessment of requirements specifications.
The efficient production and utilization of green hydrogen is vital to succeed in the global pursuit of a sustainable future. To provide the necessary amount of green hydrogen, a large number of electrolyzers will be connected to the grid as decentralized power consumers, while a large number of decentralized renewable power sources provide the energy. In such a system, a control method is necessary to dispatch the available power most efficiently; in particular, the shutdown of renewable energy sources due to temporary overproduction must be avoided. This paper presents a new decentralized tertiary control algorithm, creating a flexible, robust and easily scalable system. The operation of each participant within this grid-connected microgrid is optimized for maximum financial profit, while minimizing the exchange of power with the mains grid and reducing the shutdown of renewable power sources.
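The underlying dispatch idea can be illustrated with a simple greedy rule: an electrolyzer absorbs the local renewable surplus before any power is exchanged with the mains grid, so curtailment is avoided. The sketch below is an illustration of this principle under assumed parameters, not the paper's tertiary control algorithm.

```python
# Conceptual sketch of the dispatch idea: an electrolyzer absorbs local
# renewable surplus before power is exchanged with the mains grid, so
# curtailment of renewable sources is avoided. This greedy rule is an
# illustration, not the paper's tertiary control algorithm.
def dispatch(renewable_kw: float, load_kw: float,
             ely_min_kw: float, ely_max_kw: float):
    """Return (electrolyzer setpoint, grid exchange; + = import, - = export)."""
    surplus = renewable_kw - load_kw
    if surplus <= 0:
        return 0.0, -surplus  # deficit: import from the mains grid
    # Run the electrolyzer within its operating window to soak up the surplus.
    ely = min(max(surplus, ely_min_kw), ely_max_kw) if surplus >= ely_min_kw else 0.0
    return ely, ely - surplus  # remainder is exported (negative), if any

print(dispatch(renewable_kw=900, load_kw=400, ely_min_kw=50, ely_max_kw=450))
# -> (450.0, -50.0): electrolyzer at full power, 50 kW exported
```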
In the context of digital transformation, a data-driven organizational culture has been recognized as an important factor for the data analytics capabilities, innovativeness and competitive advantage of firms. However, the current literature on data-driven culture (DDC) is fragmented, lacking both a synthesis of findings and a theoretical foundation. The aim of this work has therefore been to develop a comprehensive framework for understanding DDC and the mechanisms that can be used to embed such a culture in organizations, as well as to structure the previously dispersed findings on the topic. Building on organizational culture theory, we employed a Design Science Research (DSR) approach, using a systematic literature review and expert interviews to build and evaluate a transformation-oriented framework. This research contributes to knowledge by synthesizing previously dispersed knowledge in a holistic framework and by providing a conceptual framework to guide the transformation towards a DDC.
The performance and scalability of modern data-intensive systems are limited by massive data movement of growing datasets across the whole memory hierarchy to the CPUs. Such traditional processor-centric DBMS architectures are bandwidth- and latency-bound. Processing-in-Memory (PIM) designs seek to overcome these limitations by integrating memory and processing functionality on the same chip. PIM targets near- or in-memory data processing, leveraging the greater in-situ parallelism and bandwidth.
In this paper, we introduce pimDB and provide an initial comparison of processor-centric and PIM-DBMS approaches under different aspects, such as scalability and parallelism, cache-awareness, or PIM-specific compute/bandwidth tradeoffs. The evaluation is performed end-to-end on a real PIM hardware system from UPMEM.
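A first-order analytical model illustrates why a bandwidth-bound operator such as a table scan can benefit from PIM: execution time is roughly data volume divided by available bandwidth, and the aggregate bandwidth of many in-memory processing units dwarfs the single shared host memory channel. All bandwidth figures and unit counts below are rough, hypothetical ballpark values, not measurements of the UPMEM system or pimDB.

```python
# First-order model of a bandwidth-bound table scan illustrating the
# processor-centric vs. PIM trade-off. The bandwidth figures are rough,
# hypothetical ballpark values, not measurements of the UPMEM system or pimDB.
GB = 1e9

def scan_seconds(bytes_scanned: float, bandwidth_gb_s: float) -> float:
    return bytes_scanned / (bandwidth_gb_s * GB)

table_bytes = 64 * GB
host_dram_gb_s = 50   # hypothetical host memory bandwidth
dpus = 2048           # hypothetical number of PIM processing units
per_dpu_gb_s = 0.6    # hypothetical per-unit internal memory bandwidth

cpu_time = scan_seconds(table_bytes, host_dram_gb_s)
pim_time = scan_seconds(table_bytes / dpus, per_dpu_gb_s)  # scanned in parallel
print(f"CPU-centric: {cpu_time:.2f} s, PIM: {pim_time:.3f} s")
# The aggregate PIM bandwidth (dpus * per_dpu_gb_s) exceeds the single shared
# host channel -- as long as the operator is simple enough to run in memory.
```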
Software development teams have to face stress caused by deadlines, staff turnover, or individual differences in commitment, expertise, and time zones. While students are typically taught the theory of software project management, their exposure to such stress factors is usually limited. However, preparing students for the stress they will have to endure once they work in project teams is important for their own sake, as well as for the sake of team performance under stress. Team performance has been linked to the diversity of software development teams, but little is known about how diversity influences the stress experienced in teams. To shed light on this aspect, we provided students with the opportunity to experience first-hand the basics of project management in self-organizing teams, and studied the impact of six diversity dimensions on team performance, coping with stressors, and positively perceived learning effects. Three controlled experiments at two universities with a total of 65 participants suggest that social background impacts the perceived stressors the most, while age and work experience have the highest impact on perceived learning effects. Most diversity dimensions show a medium correlation with the quality of work, yet no significant relation to team performance. This lays the foundation for improving students’ training for software engineering teamwork based on their diversity-related needs, and for creating diversity-sensitive awareness among educators, employers and researchers.