This paper presents the first part of a research work conducted at the University of Applied Sciences HFT Stuttgart. The aim of the research was to investigate the potential of low-cost renewable energy systems to reduce the energy demand of the building sector in hot and dry areas. Radiative cooling to the night sky represents a low-cost renewable energy source, and dry desert climate conditions promote radiative cooling applications. The system technology adopted in this work is based on uncovered solar thermal collectors integrated into the building's hydronic system. By implementing different control strategies, the same system can be used for cooling as well as for heating applications. This paper focuses on identifying the collector parameters that are required as coefficients to configure such an unglazed collector and to calibrate its mathematical model within the simulation environment. The parameter identification process involves testing the collector for its thermal performance. This paper attempts to provide insight into the dynamic testing of uncovered solar thermal collectors (absorbers), taking into account their prospective operation at nighttime for radiative cooling applications. In this study, the main parameters characterizing the performance of the absorbers for radiative cooling applications are identified and obtained from a standardized testing protocol. For this purpose, a number of plastic solar absorbers of different designs were tested on the outdoor test-stand facility at HFT Stuttgart to characterize their thermal performance. The testing process was based on the quasi-dynamic test method of the international standard for solar thermal collectors, EN ISO 9806. The test database was then used within a mathematical optimization tool (GenOpt) to determine the optimal parameter settings of each absorber under test. These performance parameters were essential for comparing the thermal performance of the tested absorbers.
The coefficients (identified parameters) were then used to plot the thermal efficiency curves of all absorbers, for both the heating and cooling modes of operation. Based on the intended main scope of system utilization (heating or cooling), the tested absorbers could be benchmarked. Hence, one of the absorbers was selected for use in the subsequent simulation phase, as planned in the research project.
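The efficiency curves mentioned above follow the steady-state collector model underlying EN ISO 9806. A minimal sketch of such a curve is given below; the coefficient values are hypothetical placeholders, not the parameters identified in the study:

```python
# Sketch of a solar-collector efficiency curve in the standard form
# eta = eta0 - a1*(Tm - Ta)/G - a2*(Tm - Ta)^2/G.
# Coefficient values are hypothetical, for illustration only.

def collector_efficiency(tm, ta, g, eta0=0.75, a1=3.5, a2=0.015):
    """Instantaneous thermal efficiency for mean fluid temperature tm,
    ambient temperature ta (both in degC) and irradiance g (W/m^2)."""
    dt = tm - ta
    return eta0 - a1 * dt / g - a2 * dt ** 2 / g

# Example: efficiency at a 20 K temperature difference under 800 W/m^2
eta = collector_efficiency(tm=40.0, ta=20.0, g=800.0)
```

Plotting eta over a range of (Tm - Ta)/G values yields the efficiency curves used to benchmark the absorbers.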
During the first years of the last decade, Egypt faced recurrent electricity cut-offs in summer, and in the past few years the electricity tariff has increased dramatically. Radiative cooling to the clear night sky is a renewable energy source that can offer a partial solution, and the dry desert climate promotes nocturnal radiative cooling applications. This study investigates the potential of nocturnal radiative cooling systems (RCSs) to reduce the energy consumption of the residential building sector in Egypt. The system technology proposed in this work is based on uncovered solar thermal collectors integrated into the building's hydronic system. By implementing different control strategies, the same system can be used for both cooling and heating applications. The goal of this paper is to analyze the performance of RCSs in residential buildings in Egypt. The dynamic simulation program TRNSYS was used to simulate the thermal behavior of the system. The relevant issues of Egypt as a case study are first reviewed. The paper then introduces the work done to develop a building model that represents a typical residential apartment in Egypt. Typical occupancy profiles were developed to define the internal thermal gains, and the control strategy adopted to optimize the system operation is presented as well. To fully understand and evaluate the operation of the proposed RCS, four simulation cases were considered: 1. a reference case (fully passive), 2. the stand-alone operation of the RCS, 3. ideal heating and cooling operation (fully active), and 4. hybrid operation (the active cooling system supported by the proposed RCS). The analysis considered the three main distinct climates of Egypt, represented by the cities of Alexandria, Cairo and Asyut. Hotter and drier weather conditions resulted in a higher cooling potential and larger temperature differences. The simulated cooling power in Asyut was 28.4 W/m² for a 70 m² absorber field.
For a smaller field area of 10 m², the cooling power reached 109 W/m², but with modest temperature differences. To meet rigorous thermal comfort conditions, the proposed sensible RCS cannot fully replace conventional air-conditioning units, especially in humid areas like Alexandria. When working in a hybrid system, a 10% reduction in the active cooling energy demand could be achieved in Asyut while keeping the cooling set-point at 24 °C. This percentage reduction nearly doubled when the thermal comfort set-point was increased by two degrees (to 26 °C). In a sensitivity analysis, external shading devices as a passive measure as well as the implementation of the Egyptian code for buildings (ECP306/1–2005) were also investigated. The analysis of this study raised other relevant aspects for discussion, e.g., system sizing, environmental effects, limitations and recommendations.
As fuel prices climb and the global automotive sector migrates to more sustainable vehicle technologies, the future of South Africa's minibus taxis is in flux. The authors' previous research found that battery electric technology struggles to meet all the mobility requirements of minibus taxis; here they investigate the technical feasibility of powering taxis with hydrogen fuel cells instead. The following results are projected using a custom-built simulator and tracking data of taxis based in Stellenbosch, South Africa. Each taxi requires around 12 kg of hydrogen gas per day to travel an average distance of 360 km. 465 kWh of electricity, or 860 m² of solar panels, would electrolyse the required green hydrogen. An economic analysis was conducted on the capital and operational expenses of a system of ten hydrogen taxis and an electrolysis plant. Such a pilot project requires a minimum investment of €3.8 million (R75 million) over a 20-year period. Although such a small-scale roll-out is technically feasible and would meet taxis' performance requirements, the investment cost is too high, making it financially unfeasible. The authors conclude that a large-scale solution would need to be investigated to improve financial feasibility; however, South Africa's limited electrical generation capacity poses a threat to its technical feasibility. The simulator is available at: https://gitlab.com/eputs/ev-fleet-sim-fcv-model.
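The figures quoted above can be cross-checked with a few lines of arithmetic. The values are taken from the abstract; the roughly 39 kWh/kg electrolysis intensity is derived here under the assumption that the 465 kWh refers to one taxi per day:

```python
# Consistency check of the quoted per-taxi figures (from the abstract).
# The kWh-per-kg figure is derived, not stated in the source.
h2_per_day_kg = 12        # hydrogen demand per taxi per day
distance_km = 360         # average daily distance per taxi
electricity_kwh = 465     # stated daily electrolysis electricity demand

kwh_per_kg = electricity_kwh / h2_per_day_kg   # electrolysis intensity
km_per_kg = distance_km / h2_per_day_kg        # driving range per kg H2
```

This gives about 38.8 kWh per kg of hydrogen and 30 km of driving per kg, which is in the plausible range for fuel-cell vehicles and electrolysis.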
Most question-answering (QA) systems rely on training data to reach their optimal performance. However, acquiring training data for supervised systems is both time-consuming and resource-intensive. To address this, we propose TFCSG, an unsupervised similar-question retrieval approach that leverages pre-trained language models and multi-task learning. First, topic keywords in question sentences are extracted sequentially based on a latent topic-filtering algorithm to construct an unsupervised training corpus. Then, a multi-task learning method is used to build the question retrieval model. Three tasks are designed: the first is a short-sentence contrastive learning task; the second is a similarity judgment task between a question sentence and its corresponding topic sequence; the third is a task of generating the corresponding topic sequence from a question sentence. The three tasks are used to train the language model in parallel. Finally, similar questions are obtained by calculating the cosine similarity between sentence vectors. Comparison experiments on public question datasets show that TFCSG outperforms the comparative unsupervised baseline methods. Moreover, no manual labeling is needed, which greatly saves human resources.
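The final retrieval step described above, ranking candidates by cosine similarity between sentence vectors, can be sketched as follows. The vectors here are toy placeholders; in TFCSG they would come from the multi-task-trained language model:

```python
# Ranking candidate questions by cosine similarity between sentence
# vectors (toy 3-dimensional vectors, not real model embeddings).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

query = [0.2, 0.7, 0.1]
candidates = {"q1": [0.1, 0.8, 0.0], "q2": [0.9, 0.1, 0.3]}

# The most similar question is the one with the highest cosine score.
best = max(candidates, key=lambda k: cosine(query, candidates[k]))
```

In practice the candidate vectors are precomputed once, so retrieval reduces to a nearest-neighbour search in the embedding space.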
UV hyperspectral imaging (225 nm–410 nm) was used to identify and quantify the honeydew content of real cotton samples. Honeydew contamination causes losses of millions of dollars annually. This study presents the implementation and application of UV hyperspectral imaging as a non-destructive, high-resolution, and fast imaging modality. For this novel approach, a reference sample set consisting of sugar and protein solutions adapted to honeydew was set up. In total, 21 samples with different amounts of added sugars/proteins were measured to calculate multivariate models at each pixel of a hyperspectral image to predict and classify the amount of sugar and honeydew. The principal component analysis (PCA) models enabled a general differentiation between different concentrations of sugar and honeydew. A partial least squares regression (PLS-R) model was built based on the cotton samples soaked in different sugar and protein concentrations. The result showed reliable performance, with R²cv = 0.80 and a low RMSECV = 0.01 g for the validation. The PLS-R reference model was able to predict the honeydew content in grams, laterally resolved for each pixel, on real cotton samples with light, strong, and very strong honeydew contamination. Therefore, inline UV hyperspectral imaging combined with chemometric models can be an effective tool in the future for the quality control of industrial processing of cotton fibers.
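The reported validation metrics are the standard cross-validated R² and RMSE. For reference, their generic definitions are sketched below with toy numbers, not the study's data:

```python
# Generic definitions of the R^2 and RMSE validation metrics reported
# in chemometric model evaluation (toy data, for illustration only).
import math

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean squared error of the predictions."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Perfect predictions give R^2 = 1 and RMSE = 0.
score = r2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
error = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

In the cross-validated variants (R²cv, RMSECV), the predictions come from models fitted on held-out folds of the calibration data.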
Flame-retardant finishing of cotton fabrics using DOPO functionalized alkoxy- and amido alkoxysilane
(2023)
In the present study, DOPO-based alkoxysilane (DOPO-ETES) and amido alkoxysilane (DOPO-AmdPTES) were synthesized in a one-step reaction without by-products as halogen-free flame retardants. The flame retardants were applied to cotton fabric using the sol–gel method and a pad-dry-cure finishing process. The flame retardancy, thermal stability and combustion behaviour of the treated cotton were evaluated by surface and bottom-edge ignition flame tests (according to EN ISO 15025), thermogravimetric analysis (TGA) and micro-scale combustion calorimetry (MCC). Unlike the CO/DOPO-ETES sample, cotton treated with DOPO-AmdPTES nanosols exhibits self-extinguishing behaviour with high char residue, an improved LOI value and a significant reduction of the PHRR, HRC and THR compared to pristine cotton. Cotton finished with DOPO-AmdPTES reveals semi-durability after ten laundering cycles, keeping the flame-retardant properties unchanged. According to the results obtained from TGA-FTIR, Py-GC/MS and XPS, the major activity of the flame retardant occurs in the condensed phase via catalytically induced char formation acting as a physical barrier, along with activity in the gas phase derived mainly from the dilution effect. The early degradation of CO/DOPO-AmdPTES compared to CO/DOPO-ETES, triggered by the cleavage of the weak bond between P and C=O, as the DFT study indicated, provides the beneficial effect of this flame retardant on the fire resistance of cellulose.
The chemical recycling of used motor oil via catalytic cracking to convert it into secondary diesel-like fuels is a sustainable and technically attractive solution for managing the environmental concerns associated with traditional disposal. In this context, this study was conducted to screen basic and acidic aluminum silicate catalysts doped with different metals, including Mg, Zn, Cu, and Ni. The catalysts were thoroughly characterized using various techniques such as N2 adsorption–desorption isotherms, FT-IR spectroscopy, and TG analysis. The liquid and gaseous products were identified using GC, and their characteristics were compared with acceptable ranges from ASTM characterization methods for diesel fuel. The results showed that metal doping improved the performance of the catalysts, resulting in higher conversion rates of up to 65%, compared to thermal cracking (15%) and undoped aluminum silicates (≈20%). Among all catalysts, basic aluminum silicates doped with Ni showed the best catalytic performance, with conversions and yields three times higher than those of the aluminum silicate catalysts. These findings significantly contribute to developing efficient and eco-friendly processes for the chemical recycling of used motor oil. This study highlights the potential of basic aluminum silicates doped with Ni as a promising catalyst for catalytic cracking and encourages further research in this area.
Facing ever-looming climate change, studying the drivers for individuals' Information Systems (IS) Use to reduce environmental harm gains momentum. While extant research on the antecedents of sustainable IS Use has focused on specific theories, interventions, contexts, and technologies, a holistic understanding has become increasingly elusive, with a synthesis remaining absent. We employ a systematic literature review methodology to shed light on the driving antecedents for sustainable IS Use among individual consumers. Our results build on findings of 29 empirical studies drawn from 598 articles retrieved from our premier outlets and a forward/backward search. The analysis reveals six salient complementary antecedents: Relief, Empowerment, Default, User-centricity, Salience, and Encouragement. We recommend considering these concepts when developing, deploying, promoting, or regulating digital technologies to mitigate individual consumers' emissions. Along with memorable and implementable concepts, our theoretical framework offers a novel conceptualization and four promising avenues for researchers on sustainable IS Use.
The proliferation of smart technologies transforms the way individual consumers perform tasks. Considerable research suggests that smart technologies are often related to domestic energy consumption. However, it remains unclear how such technologies transform tasks and thereby impact our planet. We explore the role of technological smartness in personal day-to-day tasks that help create a more sustainable future. In the absence of theory, but facing extensive changes in everyday life enabled by smart technologies, we draw on phenomenon-based theorizing (PBT) guidelines. As anchor, we refer to task endogeneity related to task-technology fit theory (TTF). As infusion, we employ theory on public goods. Our model proposes novel relations between the concepts of smart autonomy and smart transparency with sustainable task outcomes, mediated by task convenience and task significance. We discuss implications, limitations, and future research opportunities.
The properties of polyelectrolyte multilayers are ruled by the process parameters employed during self-assembly. This is the first study in which a design-of-experiment approach was used to validate and control the production of ultrathin polyelectrolyte multilayer coatings by identifying the ranges of critical process parameters (polyelectrolyte concentration, ionic strength and pH) within which coatings with reproducible properties (thickness, refractive index and hydrophilicity) are created. Mathematical models describing the combined impact of key process parameters on coating properties were developed, demonstrating that only ionic strength and pH affect the coating thickness, while polyelectrolyte concentration does not. While the electrolyte concentration had a linear effect, the pH contribution was described by a quadratic polynomial. A significant contribution of this study is the development of a new approach to estimate the thickness of polyelectrolyte multilayer nanofilms by quantitative rhodamine B staining, which may be useful whenever ellipsometry is not feasible due to the shape complexity or small size of the coated substrate. The novel approach proposed here overcomes the limitations of known methods, as it offers a low spatial sampling size and the ability to analyse a wide area without restrictions on the chemical composition and shape of the substrate.
Monitoring heart rate and breathing is essential to understanding the physiological processes underlying sleep analysis. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can help make sleep monitoring more portable in someone's home. This study conducted a series of experiments to investigate the use of pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. Subsequent sets of experiments involved the addition of small rubber domes, transparent and black, that were glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 and 3.24 for respiration and heart rate, respectively. However, the addition of small rubber hemispheres did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 bpm for respiration rate and heart rate, respectively. The findings of this study suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep. These sensors provide a more comfortable and non-invasive method of sleep monitoring. However, the addition of small rubber domes did not significantly enhance the accuracy of the readings, indicating that it may not be a worthwhile addition to the pressure sensor system.
Cyber-Physical Production Systems increasingly use semantic information to meet the grown flexibility requirements. Ontologies are often used to represent and use this semantic information. Existing systems focus on mapping knowledge and less on the exchange with other relevant IT systems (e.g., ERP systems) in which crucial semantic information, often implicit, is contained. This article presents an approach that enables the exchange of semantic information via adapters. The approach is demonstrated by a use case utilizing an MES system and an ERP system.
Military organizations have special features, such as following different organizational laws in times of peace and war and their specific embeddedness in society and politics. Especially the latter aspect has made the military an important object of study since the beginnings of modern sociology. In the wake of establishing specific sociological accounts, military sociology has been developed, dedicated to the different facets of the military. This research is based on different theoretical perspectives, but has hardly embraced the frameworks of the economics and sociology of conventions (EC/SC) so far. The aim of the chapter is to explore and demonstrate the potentials of this approach. In a first step, the state of the art of military sociology research is outlined, and potential avenues for analyzing military forces based on EC/SC are identified. It is argued that especially the connection to organizational theory (the military as organization) and civil-military relations, including leadership and professionalism, offer starting points. After introducing existing studies addressing military-related topics with reference to EC/SC, relevant concepts and approaches of convention theory that prove particularly enriching for military research are discussed. An outlook on possible further fields and topics of research is given to concretize what an inclusion of the EC/SC perspective could look like.
The performance and scalability of modern data-intensive systems are limited by massive data movement of growing datasets across the whole memory hierarchy to the CPUs. Such traditional processor-centric DBMS architectures are bandwidth- and latency-bound. Processing-in-Memory (PIM) designs seek to overcome these limitations by integrating memory and processing functionality on the same chip. PIM targets near- or in-memory data processing, leveraging the greater in-situ parallelism and bandwidth.
In this paper, we introduce pimDB and provide an initial comparison of processor-centric and PIM-DBMS approaches under different aspects, such as scalability and parallelism, cache-awareness, or PIM-specific compute/bandwidth tradeoffs. The evaluation is performed end-to-end on a real PIM hardware system from UPMEM.
Motivation
In order to enable context-aware behavior of surgical assistance systems, the acquisition of various information about the current intraoperative situation is crucial. To achieve this, the complex task of situation recognition can be delegated to a specialized system. Consequently, a standardized interface is required for the seamless transfer of the recognized contextual information to the assistance systems, enabling them to adapt accordingly.
Methods
Our group analyzed four medical interface standards to determine their suitability for exchanging intraoperative contextual information. The assessment was based on a harmonized data and service model derived from the requirements of expected context-aware use cases. The Digital Imaging and Communications in Medicine (DICOM) and IEEE 11073 for Service-oriented Device Connectivity (SDC) were identified as the most appropriate standards.
Results
We specified how DICOM Unified Procedure Steps (UPS) can be used to effectively communicate contextual information. We proposed the inclusion of attributes to formalize different granularity levels of the surgical workflow.
Conclusions
DICOM UPS SOP classes can be used for the exchange of intraoperative contextual information between a situation recognition system and surgical assistance systems. This can pave the way for vendor-independent context awareness in the OR, leading to targeted assistance of the surgical team and an improvement of the surgical workflow.
Large critical systems, such as those created in the space domain, are usually developed by a large number of organizations and, furthermore, have to comply with standards. Yet, the different stakeholders often do not have a common understanding of the needed quality of requirements specifications. Achieving such a common understanding is a laborious process that is currently not sufficiently supported. Moreover, such a common understanding must be aligned with the standards. In this paper, we present an approach that can be used to align the different stakeholder perceptions regarding the quality of requirements specifications. Existing quality models for requirements specifications are analyzed for equivalences and transferred into a common representation, the so-called Aligned Quality Map (AQM). Furthermore, a process is defined that supports the alignment of different stakeholder perspectives with regard to the quality of requirements specifications using the AQM, which is validated in a case study in the context of European space projects. The AQM has been created and populated with an initial set of quality models. It is designed in such a way that it can be extended to include further quality models. The case study has shown that an alignment of different stakeholder perspectives and the quality model of the European Cooperation for Space Standardization using the AQM is feasible. The approach allows for aligning different stakeholder perspectives toward a common understanding of the quality of requirements specifications in the context of standards. Furthermore, the AQM supports the assessment of requirements specifications.
It is widely recognized that Education for Sustainable Development (ESD) plays a critical role in creating a more sustainable world by fostering the development of the knowledge, skills, understanding, values, and actions necessary for such change (UNESCO, 2020). In this context, ESD represents a holistic approach that focuses on lifelong learning to create informed people who can make decisions today and in the future. Related to the textile and fashion industry, ESD is an appropriate approach to continuously implement sustainability aspects in education and training. To achieve this goal, the European project "Sustainable Fashion Curriculum at Textile Universities in Europe - Development, Implementation and Evaluation of a Teaching Module for Educators" (Fashion DIET) has developed a digital teaching module in a partnership between a University of Education and universities with textile departments. The main objective of the project is to elaborate an ESD module for university lecturers in order to introduce a sustainable fashion curriculum in textile universities in Europe and implement it in educational systems. The project therefore aims to train educators along the textile supply chain, to inform the young generation about the latest aspects of sustainability and raise awareness by implementing ESD in textile education. This paper presents the learning outcomes of the modules on sustainable fashion design and related production technologies developed by the technical university partners, as part of the total of 42 courses covering didactic-methodological approaches and the sustainable orientation of the fashion market, offered at the consortium level. The project content is made available as Open Educational Resources through Glocal Campus, an open-access e-learning platform that enables virtual collaboration between universities.
Context
Web APIs are one of the most used ways to expose application functionality on the Web, and their understandability is important for efficiently using the provided resources. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking.
Objective
We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) if rule violations are also perceived as more difficult to understand, and 3) if demographic attributes like REST-related experience have an influence on this.
Method
We conducted a controlled Web-based experiment with 105 participants, from both industry and academia and with different levels of experience. Based on a hybrid between a crossover and a between-subjects design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a rule and one that was a violation of this rule. Participants answered comprehension questions and rated the perceived difficulty.
Results
For 11 of the 12 rules, we found that violation performed significantly worse than rule for the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance for violation.
Conclusions
Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
Sleep is extremely important for physical and mental health. Although polysomnography is an established approach in sleep analysis, it is quite intrusive and expensive. Consequently, developing a non-invasive and non-intrusive home sleep monitoring system with minimal influence on patients, that can reliably and accurately measure cardiorespiratory parameters, is of great interest. The aim of this study is to validate a non-invasive and unobtrusive cardiorespiratory parameter monitoring system based on an accelerometer sensor. This system includes a special holder to install the system under the bed mattress. An additional aim is to determine the optimum relative system position (in relation to the subject) at which the most accurate and precise values of the measured parameters can be achieved. The data were collected from 23 subjects (13 males and 10 females). The obtained ballistocardiogram signal was sequentially processed using a sixth-order Butterworth bandpass filter and a moving average filter. As a result, an average error (compared to reference values) of 2.24 beats per minute for heart rate and 1.52 breaths per minute for respiratory rate was achieved, regardless of the subject's sleep position. For males and females, the errors were 2.28 bpm and 2.19 bpm for heart rate, and 1.41 and 1.30 breaths per minute for respiratory rate, respectively. We determined that placing the sensor and system at chest level is the preferred configuration for cardiorespiratory measurement. Further studies of the system's performance in larger groups of subjects are required, despite the promising results of the current tests in healthy subjects.
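The smoothing stage of the signal-processing chain described above can be sketched as a simple centered moving average. The preceding sixth-order Butterworth bandpass stage, typically implemented with scipy.signal.butter and filtfilt, is omitted here to keep the sketch self-contained:

```python
# Centered moving-average filter, as applied to the (already
# bandpass-filtered) ballistocardiogram signal. Toy input data.

def moving_average(signal, window):
    """Centered moving average; at the edges, only the available
    samples within the window are averaged."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A single spike gets spread over the window width.
smoothed = moving_average([0.0, 0.0, 10.0, 0.0, 0.0], window=3)
```

The window length would be chosen relative to the sampling rate and the frequency band of interest (heartbeats vs. breathing cycles).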
Sleep is an essential part of human existence, as we spend approximately a third of our lives in this state. Sleep disorders are common conditions that can affect many aspects of life. They are diagnosed in specialized laboratories with a polysomnography system, a costly procedure requiring considerable effort from the patient. Several systems have been proposed to address this situation, performing the examination and analysis at the patient's home using sensors that detect physiological signals, which are automatically analysed by algorithms. This work aims to evaluate the use of a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during breathing. The presented processing algorithm filters the obtained signals and determines the presence of apnea events. The performance of the developed system and apnea event detection algorithm (average accuracy, specificity and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
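The reported accuracy, specificity and sensitivity are standard confusion-matrix metrics; for reference, their definitions are sketched below with toy counts, not the study's data:

```python
# Standard confusion-matrix metrics used to evaluate apnea event
# detection (toy counts, for illustration only).

def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

acc, sens, spec = metrics(tp=45, tn=50, fp=2, fn=3)
```

Reporting all three together, as the study does, guards against a detector that scores well on accuracy simply because apnea events are rare.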
Sleep is essential to physical and mental health. However, the traditional approach to sleep analysis—polysomnography (PSG)—is intrusive and expensive. Therefore, there is great interest in the development of non-contact, non-invasive, and non-intrusive sleep monitoring systems and technologies that can reliably and accurately measure cardiorespiratory parameters with minimal impact on the patient. This has led to the development of alternative approaches that allow greater freedom of movement and do not require direct contact with the body, i.e., that are non-contact. This systematic review discusses the relevant methods and technologies for non-contact monitoring of cardiorespiratory activity during sleep. Taking into account the current state of the art in non-intrusive technologies, we identify the methods of non-intrusive monitoring of cardiac and respiratory activity, the technologies and types of sensors used, and the physiological parameters available for analysis. To do this, we conducted a literature review and summarised current research on the use of non-contact technologies for non-intrusive monitoring of cardiac and respiratory activity. The inclusion and exclusion criteria for the selection of publications were established prior to the start of the search. Publications were assessed using one main question and several specific questions. We obtained 3774 unique articles from four literature databases (Web of Science, IEEE Xplore, PubMed, and Scopus) and checked them for relevance, resulting in 54 articles that were analysed in a structured way.
The result was 15 different types of sensors and devices (e.g., radar, temperature sensors, motion sensors, cameras) that can be installed in hospital wards and departments or in the home environment. The ability to detect heart rate, respiratory rate, and sleep disorders such as apnoea was among the characteristics examined to investigate the overall effectiveness of the systems and technologies considered for cardiorespiratory monitoring. In addition, the advantages and disadvantages of the considered systems and technologies were identified by answering the defined research questions. The results obtained allow current trends and the direction of development of medical technologies in sleep medicine to be determined for future researchers.
The article pleads for Education for Sustainable Development (ESD) in the textile and fashion sector and shows how it can be implemented from elementary school to higher education and vocational training. It begins by highlighting the non-sustainable practices and deficits found in the fashion and textile sector worldwide and explains the sustainability goals in the context of the UN Roadmap ESD for 2030. Education is needed to raise awareness of sustainability and implement these goals. The article introduces the concept of ESD as a guiding principle with design competence as its core element, implemented through the interdisciplinary method of Design Thinking (DT). Successfully teaching the ESD-relevant design competence requires various didactic principles, which can be shown to be very similar to the principles and phases of DT. Within a research project, DT and its potential for implementing ESD have been investigated in teaching-learning situations at elementary schools as well as in an interdisciplinary seminar for student teachers. These findings were transferred to the EU project Fashion DIET, which pursues the goal of implementing ESD in the textile and fashion sector. By means of an online pilot workshop, the methods and principles of DT were presented and explained to lecturers, teachers, and educators, who gave feedback on the potential of DT as a method to implement ESD as a guiding principle in their curricula.
The increasing complexity and need for availability of automated guided vehicles (AGVs) pose challenges to companies, leading to a focus on new maintenance strategies. In this paper, a smart maintenance architecture based on a digital twin is presented to optimize the technical and economic effectiveness of AGV maintenance activities. To realize this, a literature review was conducted to identify the necessary requirements for Smart Maintenance and Digital Twins. The identified requirements were combined into modules and then integrated into an architecture. The architecture was evaluated on a real AGV, using the battery as one of its critical components.
Smart cities are considered data factories that generate an enormous amount of data from various sources. In fact, data is the backbone of any smart service. Therefore, the strategic, beneficial handling of this digital capital is crucial for cities. Some smart city pioneers have already written down their approach to data in the form of data strategies, but what should a city's data strategy include, and how can the goals and measures defined in the strategies be operationalized? This paper addresses these questions by looking closely at the data strategies of cities in Germany and the top three countries in the EU Digital Economy and Society Index. The in-depth analysis of 8 city data strategies has yielded 11 dimensions that cities should consider in their data strategy: relevance of data, principles, methods, data sharing, technology, data culture, data ethics, organizational structure, data security and privacy, collaborations, and data literacy. In addition, data governance is a concept for putting these 11 strategic dimensions into practice through standardization measures, training programs, the definition of roles and responsibilities, and the development of a data catalog.
The benefits of urban data cannot be realized without a political and strategic view of data use. A core concept within this view is data governance, which aligns strategy in data-relevant structures and entities with data processes, actors, architectures, and overall data management. Data governance is not a new concept and has long been addressed by scientists and practitioners from an enterprise perspective. In the urban context, however, data governance has only recently attracted increased attention, despite the unprecedented relevance of data in the advent of smart cities. Urban data governance can create semantic compatibility between heterogeneous technologies and data silos and connect stakeholders by standardizing data models, processes, and policies. This research provides a foundation for developing a reference model for urban data governance, identifies challenges in dealing with data in cities, and defines factors for the successful implementation of urban data governance. To obtain the best possible insights, the study carries out qualitative research following the design science research paradigm, conducting semi-structured expert interviews with 27 municipalities from Austria, Germany, Denmark, Finland, Sweden, and the Netherlands. The subsequent data analysis based on cognitive maps provides valuable insights into urban data governance. The interview transcripts were transferred and synthesized into comprehensive urban data governance maps to analyze entities and complex relationships with respect to the current state, challenges, and success factors of urban data governance. The findings show that each municipal department defines data governance separately, with no uniform approach. Given cultural factors, siloed data architectures have emerged in cities, leading to interoperability and integrability issues. 
A city-wide data governance entity in a cross-cutting function can be instrumental in breaking down silos in cities and creating a unified view of the city’s data landscape. The further identified concepts and their mutual interaction offer a powerful tool for developing a reference model for urban data governance and for the strategic orientation of cities on their way to data-driven organizations.
Patterns are virtually simulated in 3D CAD programs before production to check the fit. However, achieving lifelike representations of human avatars, especially regarding soft tissue dynamics, remains challenging. This is mainly because conventional avatars in garment CAD programs are simulated with a continuous hard surface that does not correspond to the physical and mechanical properties of human soft tissue. In the real world, the human body's natural shape is affected by the contact pressure of tight-fitting textiles. To verify the fit of a simulated garment, the interactions between the individual body shape and the garment must be considered. This paper introduces an innovative approach to digitising the softness of human tissue using 4D scanning technology. The primary objective of this research is to explore the interactions between tissue softness and different compression levels of apparel, which exert pressure on the tissue, in order to capture the changes in the natural shape. To generate data and model an avatar with soft body physics, it is essential to capture the deformability and elasticity of the soft tissue and map it into the modification options for a simulation. To this end, various methods from different fields were researched and compared, and 4D scanning was evaluated as the most suitable method for capturing tissue deformability in vivo. In particular, it should be considered that the human body has different deformation capabilities depending on age and the amount of muscle and body fat. In addition, different tissue zones have different mechanical properties, so it is essential to identify and classify them in order to store these properties for the simulation. It has been shown that by digitising the obtained data for the different defined applied pressure levels, a prediction of the tissue deformation of the specific person becomes possible.
As technology advances and data sets grow, this approach has the potential to reshape how we verify fit digitally with soft avatars and leverage their realistic soft tissue properties for various practical purposes.
Human recognition is an important part of perception systems, such as those used in autonomous vehicles or robots. These systems often use deep neural networks for this purpose, which rely on large amounts of data that ideally cover various situations, movements, visual appearances, and interactions. However, obtaining such data is typically complex and expensive. In addition to raw data, labels are required to create training data for supervised learning. Thus, manual annotation of bounding boxes, keypoints, orientations, or actions performed is frequently necessary. This work addresses whether the laborious acquisition and creation of data can be simplified through targeted simulation. If data are generated in a simulation, information such as positions, dimensions, orientations, surfaces, and occlusions is already known, and appropriate labels can be generated automatically. A key question is whether deep neural networks trained with simulated data can be applied to real data. This work explores the use of simulated training data using examples from the field of pedestrian detection for autonomous vehicles. On the one hand, it is shown how existing systems can be improved by targeted retraining with simulation data, for example to better recognize corner cases. On the other hand, the work focuses on the generation of data that occur rarely or not at all in real standard datasets. It is demonstrated how training data containing finely graded action labels can be generated by the targeted acquisition and combination of motion data and 3D models, allowing even complex pedestrian situations to be recognized. Through the diverse annotation data that simulations provide, it becomes possible to train deep neural networks for a wide variety of tasks with one dataset.
In this work, such simulated data is used to train a novel deep multitask network that brings together diverse, previously mostly independently considered but related, tasks such as 2D and 3D human pose recognition and body and orientation estimation.
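The automatic label generation described above can be illustrated for 2D bounding boxes: since a simulated pedestrian's 3D extent and the camera parameters are known, the 2D box follows directly from projection. A minimal sketch assuming a simple pinhole camera; the focal length, principal point, and object dimensions are made-up values:

```python
import numpy as np

def project_bbox(corners_3d, f, cx, cy):
    """Project 3D bounding-box corners (camera coordinates, z forward)
    through a pinhole model and return the enclosing 2D box
    (u_min, v_min, u_max, v_max) in pixels."""
    X, Y, Z = corners_3d[:, 0], corners_3d[:, 1], corners_3d[:, 2]
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return u.min(), v.min(), u.max(), v.max()

# A pedestrian-sized box (0.6 m wide, 1.8 m tall, 0.6 m deep)
# standing 10 m in front of the camera
corners = np.array([[x, y, z]
                    for x in (-0.3, 0.3)
                    for y in (-0.9, 0.9)
                    for z in (10.0, 10.6)])
label = project_bbox(corners, f=800.0, cx=640.0, cy=360.0)
```

In a full pipeline, the same known geometry also yields keypoint, orientation, and occlusion labels without any manual annotation.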
The Industry 4.0 paradigm requires concepts for integrating intelligent/smart IoT solutions into manufacturing. Such intelligent solutions are envisioned to increase flexibility and adaptability in smart factories. Especially autonomous cobots capable of adapting to changing conditions are a key enabler for changeable factory concepts. However, identifying the requirements and solution scenarios incorporating intelligent products challenges the manufacturing industry, especially in the SME sector. In pick-and-place scenarios, changing coordinate systems of workpiece carriers cause errors in the placing process. Using the IPIDS framework, this paper describes the development of a tool-center-point positioning method to improve the process stability of a collaborative robot in a changeable assembly workstation. Applying the framework identifies the requirement for an intelligent workpiece carrier as part of the solution. Implementing and evaluating the solution within a changeable factory validates the IPIDS framework.
Framework for integrating intelligent product structures into a flexible manufacturing system
(2023)
Increasing individualisation of products with a high variety and shorter product lifecycles result in smaller lot sizes, increasing order numbers, and rising data and information processing for manufacturing companies. To cope with these trends, integrated management of the products and manufacturing information is necessary through a “product-driven” manufacturing system. Intelligent products that are integrated as an active element within the controlling and planning of the manufacturing process can represent flexibility advantages for the system. However, there are still challenges regarding system integration and evaluation of product intelligence structures. In light of these trends, this paper proposes a conceptual framework for defining, analysing, and evaluating intelligent products using the example of an assembly system. This paper begins with a classification of the existing problems in the assembly and a definition of the intelligence level. In contrast to previous approaches, the analysis of products is expanded to five dimensions. Based on this, a structured evaluation method for a use case is presented. The structure of solving the assembly problem is provided by the use case-specific ontology model. Results are presented in terms of an assignment of different application areas, linking the problem with the target intelligence class and, depending on the intelligence class of the product, suggesting requirements for implementation. The conceptual framework is evaluated by utilising a case study in a learning factory. Here, the model-mix assembly is controlled actively by the workpiece carrier in terms of transferring the variant-specific work instructions to the operator and the collaborative robot (cobot) at the workstations. The resulting system thus enables better exploitation of the potentials through less frequent errors and shorter search times.
Such an implementation has demonstrated that the intelligent workpiece carrier represents an additional part for realising a cyber-physical production system (CPPS).
The Covid-19 virus triggered a worldwide pandemic, and many employees were therefore required to work from home, which caused numerous challenges. With the Covid-19 pandemic now in its third year, several studies are already available on the subject of working from home. To investigate the impact of remote work on employee satisfaction and trust, this quantitative study reviews existing results and formulates hypotheses based on a conceptual model created through a qualitative study and an extensive literature review. The research question is as follows: Does working from home during Covid-19 affect employee satisfaction and trust? To test the hypotheses, a structural equation model was constructed and analyzed. A culture of trust and flexibility are identified as the biggest influencing factors in this study.
Do Chinese subordinates trust their German supervisors? A model of inter-cultural trust development
(2023)
In this qualitative study based on 95 interviews with Chinese subordinates and their German supervisors, we inductively develop a model which advances theoretical understanding by showing how inter-cultural trust development in hierarchical relationships is the result of six distinct elements: the subordinate trustor’s cultural profile (cosmopolitans, hybrids, culturally bounds), the psychological mechanisms operating within the trustor (role expectations and cultural accommodation), and contextual moderators (e.g., country context, time spent in foreign culture, and third-party influencers), which together influence the trust forms (e.g., presumptive trust, relational trust) and trust dynamics (e.g., trust breakdown and repair) within relationship phases over time (initial contact, trust continuation, trust disillusionment, separation, and acculturation). Our findings challenge the assumption that cultural differences result in low levels of initial trust and highlight the strong role the subordinate’s cultural profile can have on the dynamics and trajectory of trust in hierarchical relationships. Our model highlights that inter-cultural trust development operates as a variform universal, following the combined universalistic-particularistic paradigm in cross-cultural management, with both culturally generalizable etic dynamics, as well as culturally specific emic manifestations.
Purpose
In recognising the key role of business intelligence and big data analytics in influencing companies’ decision-making processes, this paper aims to codify the main phases through which companies can approach, develop and manage big data analytics.
Design/methodology/approach
By adopting a research strategy based on case studies, this paper depicts the main phases and challenges that companies “live” through in approaching big data analytics as a way to support their decision-making processes. The analysis of case studies has been chosen as the main research method because it offers the possibility for different data sources to describe a phenomenon and subsequently to develop and test theories.
Findings
This paper provides a possible depiction of the main phases and challenges through which the approach(es) to big data analytics can emerge and evolve over time with reference to companies’ decision-making processes.
Research limitations/implications
This paper recalls the attention of researchers to defining clear patterns through which technology-based approaches should be developed. In its depiction of the main phases of the development of big data analytics in companies’ decision-making processes, this paper highlights the possible domains in which to define and renovate approaches to value. The proposed conceptual model derives from the adoption of an inductive approach; its validity is discussed and tested through multiple case studies. In addition, its generalisability requires further discussion and analysis in the light of alternative interpretative perspectives.
Practical implications
The reflections herein offer practitioners interested in company management the possibility to develop performance measurement tools that can evaluate how each phase can contribute to companies’ value creation processes.
Originality/value
This paper contributes to the ongoing debate about the role of digital technologies in influencing managerial and social models. This paper provides a conceptual model that is able to support both researchers and practitioners in understanding through which phases big data analytics can be approached and managed to enhance value processes.
Artificial intelligence (AI) is one of the most promising technologies of the post-pandemic era. Cloud computing technology can simplify the process of developing AI applications by offering a variety of services, including ready-to-use tools to train machine learning (ML) algorithms. However, comparing the vast amount of services offered by different providers and selecting a suitable cloud service can be a major challenge for many firms. In academia, too, suitable criteria to evaluate this type of service remain largely unclear. Therefore, the overall aim of this work has been to develop a framework to evaluate cloud-based ML services. We use Design Science Research as our methodology and conduct a hermeneutic literature review, a vendor analysis, and expert interviews. Based on our research, we present a novel framework for the evaluation of cloud-based ML services consisting of six categories and 22 criteria that are operationalized with the help of various metrics. We believe that our results will help organizations by providing specific guidance on how to compare and select service providers from the vast number of potential suppliers.
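A framework of weighted categories and criteria is typically operationalized as a weighted-sum score per provider. A minimal sketch; the category names, weights, and scores below are invented for illustration and do not correspond to the paper's actual six categories and 22 criteria:

```python
# Each category carries a weight reflecting its importance to the firm;
# weights sum to 1.0. Scores are normalised to the range 0..1.
weights = {"functionality": 0.3, "costs": 0.2, "performance": 0.2,
           "usability": 0.15, "support": 0.1, "compliance": 0.05}

scores = {  # illustrative per-category scores for two fictitious vendors
    "Provider A": {"functionality": 0.9, "costs": 0.6, "performance": 0.8,
                   "usability": 0.7, "support": 0.8, "compliance": 0.9},
    "Provider B": {"functionality": 0.7, "costs": 0.9, "performance": 0.7,
                   "usability": 0.8, "support": 0.6, "compliance": 0.8},
}

def weighted_score(provider):
    """Weighted-sum utility of one provider across all categories."""
    return sum(weights[c] * scores[provider][c] for c in weights)

ranking = sorted(scores, key=weighted_score, reverse=True)
```

In practice, each category score would itself be aggregated from several criteria, each measured by the metrics the framework defines.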
Twitter and citations
(2023)
Social media, especially Twitter, plays an increasingly important role among researchers in showcasing and promoting their research. Does Twitter affect academic citations? Making use of Twitter activity about columns published on VoxEU, a renowned online platform for economists, we develop an instrumental variable strategy to show that Twitter activity about a research paper has a causal effect on the number of citations that this paper will receive. We find that the existence of at least one tweet, as opposed to none, increases citations by 16-25%. Doubling overall Twitter engagement boosts citations by up to 16%.
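The instrumental-variable logic can be illustrated with a small two-stage least squares (2SLS) simulation; the data-generating process, variable names, and coefficient values below are invented for illustration and are not the paper's actual instrument or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Unobserved paper quality drives both tweets and citations (confounding),
# while an exogenous instrument shifts tweets only.
quality = rng.normal(size=n)
instrument = rng.normal(size=n)            # e.g. exogenous exposure shocks
tweets = 0.8 * instrument + 0.5 * quality + rng.normal(size=n)
citations = 0.4 * tweets + 0.7 * quality + rng.normal(size=n)

def two_stage_least_squares(y, x, z):
    """2SLS: regress x on z (first stage), then y on fitted x-hat."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = np.polyfit(tweets, citations, 1)[0]     # biased upward by quality
causal = two_stage_least_squares(citations, tweets, instrument)
```

The naive OLS slope absorbs the quality confounder, while the 2SLS estimate recovers the true causal coefficient of 0.4, which is the kind of separation the paper's instrumental-variable strategy aims for.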
Silicon neurons (SiNs) represent different levels of biological detail and accuracy as a trade-off between complexity and power consumption. With respect to this trade-off and their high similarity to neuron behaviour models, relaxation-type oscillator circuits often offer a good compromise for emulating neurons. In this chapter, two exemplary relaxation-type silicon neurons are presented that emulate neural behaviour with an energy consumption below the nJ/spike scale. The first proposed fully CMOS relaxation SiN is based on the mathematical Izhikevich model and can mimic a broad range of physiologically observable spike patterns. Results for various biologically plausible output patterns and for the coupling of two SiNs are presented in 0.35 μm CMOS technology. The second type is a novel ultra-low-frequency hybrid CMOS-memristive SiN based on relaxation oscillators and analog memristive devices. The hybrid SiN directly emulates neuron behaviour in the range of physiological spiking frequencies (less than 100 Hz). The relaxation oscillator is implemented and fabricated in 0.13 μm CMOS technology. An autonomous neuronal synchronization process is demonstrated in measurements with two relaxation oscillators coupled by an analog memristive device, emulating the synchronous behaviour of spiking neurons.
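The Izhikevich model underlying the first SiN is compact enough to sketch directly. The following forward-Euler integration uses the regular-spiking parameter set from Izhikevich's 2003 paper; the step size and input current are illustrative choices:

```python
def izhikevich(a, b, c, d, I, T=500.0, dt=0.5):
    """Integrate the Izhikevich neuron model:
        v' = 0.04 v^2 + 5 v + 140 - u + I
        u' = a (b v - u)
    with reset v <- c, u <- u + d when v >= 30 mV.
    Returns spike times [ms] and the membrane-potential trace."""
    v, u = -65.0, b * -65.0
    spikes, trace = [], []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike threshold and reset
            v, u = c, u + d
            spikes.append(step * dt)
        trace.append(v)
    return spikes, trace

# Regular-spiking parameter set with a constant input current
spikes, _ = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0)
```

Varying (a, b, c, d) reproduces the other physiologically observed patterns (fast spiking, bursting, chattering) that the CMOS circuit mimics in hardware.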
In today’s education, healthcare, and manufacturing sectors, organizations and information societies are discussing new enhancements to corporate structure and process efficiency that can be achieved using digital platforms and tools. Industry 5.0 and Society 5.0 offer several potentials for businesses to enhance the adaptability and efficacy of their industrial processes, paving the way for new business models facilitated by digital platforms. Society 5.0 can contribute to a super-intelligent society that includes the healthcare industry. In the past decade, the Internet of Things, big data analytics, neural networks, deep learning, and artificial intelligence (AI) have revolutionized our approach to various job sectors, from manufacturing and finance to consumer products. AI is developing quickly and efficiently, as demonstrated by the latest artificial intelligence chatbot, ChatGPT, created by OpenAI, which has taken the internet by storm. We tested the effectiveness of this large language model on four critical questions concerning “Society 5.0”, “Healthcare 5.0”, “Industry”, and “Future Education” from the perspective of Age 5.0.
Development of an IoT-based inventory management solution and training module using smart bins
(2023)
Flexibility, transparency, and changeability of warehouse environments play an increasingly important role in achieving cost-efficient production of small batch sizes. This results in increasing requirements for warehouses in terms of flexibility, scalability, reconfigurability, and transparency of material and information flows in order to deal with a large number of different components and with variable material and information flows due to small batch sizes. Therefore, an IoT-based inventory management solution and training module has been developed, implemented, and validated at Werk150, the factory on campus of the ESB Business School. Key elements of the developed solution are smart bins that use weight mats to track the bin's content, as well as additional sensors and buttons connected to an IoT hub to collect data on material consumption and manual handling operations. The use of weight mats for the smart bins makes it possible to measure the container content independently of the specific component geometry, and thus for a variety of components, based on the specific component weights. The developed solution enables focusing on the key success elements of the system to synchronize the flow of materials and information, resulting in increased flexibility and significantly higher transparency of the material flow. AI-based algorithms are applied to analyse the gathered data and to initiate process optimizations by providing logistics decision makers with a profound and transparent basis for decision making. In order to provide students and industry visitors of the learning factory with the necessary competences and to support the transfer into practice, a training module on IoT-based inventory management was developed and implemented.
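The weight-mat principle reduces to a simple calculation: subtract the bin's tare weight from the reading and divide by the component's unit weight. A minimal sketch; all weights and the reorder logic are illustrative assumptions, not Werk150's actual calibration:

```python
def estimate_count(total_weight_g, tare_weight_g, unit_weight_g):
    """Infer a bin's part count from the weight-mat reading.
    Real deployments calibrate tare and unit weights per bin
    and per component type."""
    net = max(total_weight_g - tare_weight_g, 0.0)
    return round(net / unit_weight_g)

def reorder_needed(count, reorder_point):
    """Trigger replenishment once stock falls to the reorder point."""
    return count <= reorder_point

# A bin weighing 1480 g with a 250 g tare, holding 41 g components
count = estimate_count(total_weight_g=1480.0, tare_weight_g=250.0,
                       unit_weight_g=41.0)
```

Because the calculation only needs the component's unit weight, the same mat works for any component geometry, which is the geometry independence the abstract highlights.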
The presented research is dedicated to estimating the correlation between the share of renewable energy sources and the costs of congestion management in the electric networks of selected European countries. Data from six countries in the North-West European area (Italy, Spain, Germany, France, Poland, and Austria) were investigated. The factors considered included grid congestion costs (comprising re-dispatching as well as countertrading costs), gross electricity generation, installed capacity of electric generating facilities, installed capacity of non-dispatchable renewable energy sources, and total electricity consumption. Special attention is paid to the share of renewable energy sources. It is found that the grid congestion costs are not clearly affected by the penetration of non-dispatchable renewables in all the analysed countries, and therefore a clear mathematical correlation cannot be extrapolated from the available data. Overall, the results of this research show a loose dependency of the grid congestion costs on the penetration of renewables and a strong dependency on the total electricity consumption of the country.
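The kind of correlation analysis described can be sketched as follows, with invented yearly figures for a single hypothetical country (not the paper's data), chosen so that they illustrate the reported finding of a weak dependence on renewable share and a strong dependence on consumption:

```python
import numpy as np

# Illustrative yearly figures: share of non-dispatchable renewables [%],
# total electricity consumption [TWh], and congestion costs [M EUR]
renewable_share = np.array([18, 30, 21, 33, 25, 27])
consumption_twh = np.array([510, 495, 520, 530, 545, 560])
congestion_cost = np.array([220, 205, 240, 255, 290, 320])

# Pearson correlation of congestion costs against each candidate driver
r_share = np.corrcoef(renewable_share, congestion_cost)[0, 1]
r_consumption = np.corrcoef(consumption_twh, congestion_cost)[0, 1]
```

With these figures, costs track consumption closely while the renewable share shows no clear relationship, mirroring the qualitative conclusion of the study.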
Distributed Ledger Technologies for the energy sector: facilitating interoperability analysis
(2023)
The use of distributed data storage and management structures, such as Distributed Ledger Technologies (DLT), in the energy sector has gained great interest in recent times. This opens up new possibilities in, for example, microgrid management, aggregation of distributed resources, peer-to-peer trading, integration of electromobility, or proof-of-origin strategies. However, in order to benefit from those new possibilities, new challenges have to be overcome. This work focuses on one of these challenges: the need to ensure interoperability when integrating DLT-enabled devices in energy use cases. First, the use of DLTs in the energy sector is analyzed and the main use cases are presented, followed by a proposed classification of DLT-energy use cases. Second, the need for a common reference architecture framework to analyze those use cases with a focus on interoperability is discussed, and current research and standardization activities in this field are presented. Finally, a new common reference architecture framework based on current standardization activities is presented.
Introduction to the special issue on self‑managing and hardware‑optimized database systems 2022
(2023)
Data management systems have evolved in terms of functionality, performance characteristics, complexity, and variety during the last 40 years. Particularly, the relational database management systems and the big data systems (e.g., Key-Value stores, Document stores, Graph stores and Graph Computation Systems, Spark, MapReduce/Hadoop, or Data Stream Processing Systems) have evolved with novel additions and extensions. However, the systems administration and tasks have become highly complex and expensive, especially given the simultaneous and rapid hardware evolution in processors, memory, storage, or networking. These developments present new open problems and challenges to data management systems as well as new opportunities.
The SMDB (International Workshop on Self-Managing Database Systems) and HardBD&Active (Joint International Workshop on Big Data Management on Emerging Hardware and Data Management on Virtualized Active Systems) workshops organized in conjunction with the IEEE ICDE (International Conference on Data Engineering) offered two distinct platforms for examining the above system-related challenges from different perspectives. The SMDB workshop looks into developing autonomic or self-* features in database and data management systems to tackle complex administrative tasks, while the HardBD&Active workshop focuses on harnessing hardware technologies to enhance efficiency and performance of data processing and management tasks. As a result of these workshops, we are delighted to present the third special issue of DAPD titled “Self-Managing and Hardware-Optimized Database Systems 2022,” which showcases the best contributions from the SMDB 2021/2022 and HardBD&Active 2021/2022 workshops.
Recent work on database application development platforms has sought to include a declarative formulation of a conceptual data model in the application code, using annotations or attributes. Some recent work has used metadata to include the details of such formulations in the physical database, and this approach brings significant advantages in that the model can be enforced across a range of applications for a single database. In previous work, we have discussed the advantages for enterprise integration of typed graph data models (TGM), which can play a similar role in graph databases, leveraging the existing support for the Unified Modelling Language (UML). Ideally, the integration of systems designed with different models, for example graph and relational databases, should also be supported. In this work, we implement this approach, using metadata in a relational database management system (DBMS).
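One way to picture the metadata approach: the TGM description itself is stored in ordinary relational tables, so the model travels with the database and any application can recover and enforce it. A minimal sketch using SQLite; the table layout and example types are invented for illustration and are not the paper's actual schema:

```python
import sqlite3

# Persist a typed-graph-model description as metadata tables
# inside the relational database itself.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tgm_node_type (
        name TEXT PRIMARY KEY
    );
    CREATE TABLE tgm_edge_type (
        name         TEXT PRIMARY KEY,
        source_type  TEXT NOT NULL REFERENCES tgm_node_type(name),
        target_type  TEXT NOT NULL REFERENCES tgm_node_type(name),
        multiplicity TEXT NOT NULL   -- UML-style, e.g. '1..*'
    );
""")
con.executemany("INSERT INTO tgm_node_type VALUES (?)",
                [("Customer",), ("Order",)])
con.execute("INSERT INTO tgm_edge_type VALUES (?, ?, ?, ?)",
            ("places", "Customer", "Order", "1..*"))

# Any application can now read the model back from the database
edge = con.execute("SELECT source_type, target_type FROM tgm_edge_type "
                   "WHERE name = 'places'").fetchone()
```

Because the node and edge types mirror UML class and association concepts, the same metadata can mediate between a graph view and a relational view of the data.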
Modern wide-bandgap power devices promise higher power conversion performance, provided the device can be operated reliably. As switching speed increases, the effects of parasitic ringing become more prominent, causing potentially damaging overvoltages during device turn-off. Estimating the additional voltage caused by such ringing enables more reliable designs. In this paper, we present an analytical expression to calculate the expected overvoltage caused by parasitic ringing based on parasitic element values and operating-point parameters. Simulations and measurements confirm that the expression can be used to find the smallest rise time of the switches’ drain-source voltage for minimum overvoltage. The given expression also allows predicting the trade-off in overvoltage amplitude when faster rise times are required.
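The paper's analytical expression is not reproduced here, but the underlying mechanism can be sketched with the textbook first-order estimate, in which the switched current rings through the parasitic loop inductance into the device output capacitance; the component values are illustrative:

```python
import math

def ringing_overvoltage(i_off, l_par, c_oss):
    """First-order (undamped) estimate of turn-off overshoot:
        dV ~= I_off * sqrt(L_par / C_oss)
    i.e. the switched current times the characteristic impedance of
    the parasitic LC loop. The paper's expression additionally
    accounts for the drain-source voltage rise time and damping."""
    return i_off * math.sqrt(l_par / c_oss)

# 20 A turned off through 15 nH of loop inductance into 200 pF of C_oss
dv = ringing_overvoltage(i_off=20.0, l_par=15e-9, c_oss=200e-12)
```

Even this crude bound makes the design trade-off visible: halving the loop inductance, or slowing the transition so less current is commutated abruptly, directly reduces the overshoot margin the device must survive.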
In recent years, the demand for accurate and efficient 3D body scanning technologies has increased, driven by the growing interest in personalised textile development and health care. This position paper presents the implementation of a novel 3D body scanner that integrates multiple RGB cameras and image stitching techniques to generate detailed point clouds and 3D mesh models. Our system significantly enhances the scanning process, achieving higher resolution and fidelity while reducing the cost, time and effort required for data acquisition and processing. Furthermore, we evaluate the potential use cases and applications of our 3D body scanner, focusing on the textile technology and health sectors. In textile development, the 3D scanner contributes to bespoke clothing production, allowing designers to construct made-to-measure garments, thus minimising waste and enhancing customer satisfaction through well-fitting clothing. In mental health care, the 3D body scanner can be employed as a tool for body image analysis, providing valuable insights into the psychological and emotional aspects of self-perception. By exploring the synergy between the 3D body scanner and these fields, we aim to foster interdisciplinary collaborations that drive advancements in personalisation, sustainability, and well-being.
Advancing mental health diagnostics: AI-based method for depression detection in patient interviews
(2023)
In this paper, we present a novel artificial intelligence (AI) application for depression detection, using advanced transformer networks to analyse clinical interviews. By incorporating simulated data to enhance traditional datasets, we overcome limitations in data protection and privacy, consequently improving the model’s performance. Our methodology employs BERT-based models, GPT-3.5, and ChatGPT-4, demonstrating state-of-the-art results in detecting depression from linguistic patterns and contextual information that significantly outperform previous approaches. Utilising the DAIC-WOZ and Extended-DAIC datasets, our study showcases the potential of the proposed application in revolutionising mental health care through early depression detection and intervention. Empirical results from various experiments highlight the efficacy of our approach and its suitability for real-world implementation. Furthermore, we acknowledge the ethical, legal, and social implications of AI in mental health diagnostics. Ultimately, our study underscores the transformative potential of AI in mental health diagnostics, paving the way for innovative solutions that can facilitate early intervention and improve patient outcomes.
AI-based prediction and recommender systems are widely used in various industry sectors. However, the general acceptance of AI-enabled systems is still widely uninvestigated. Therefore, we first conducted a survey with 559 respondents. Findings suggested that AI-enabled systems should be fair, transparent, consider personality traits and perform tasks efficiently. Secondly, we developed a system for the Facial Beauty Prediction (FBP) benchmark that automatically evaluates facial attractiveness. As our previous experiments have shown, these results are usually highly correlated with human ratings; consequently, they also reflect human bias in the annotations. An upcoming challenge for scientists is to provide training data and AI algorithms that can withstand distorted information. In this work, we introduce AntiDiscriminationNet (ADN), a superior attractiveness prediction network. We propose a new method to generate an unbiased convolutional neural network (CNN) to improve the fairness of machine learning on facial datasets. To train unbiased networks, we generate synthetic images and weight training data for anti-discrimination assessments towards different ethnicities. Additionally, we introduce an approach with entropy penalty terms to reduce the bias of our CNN. Our research provides insights into how to train and build fair machine learning models for facial image analysis by minimising implicit biases. Our AntiDiscriminationNet finally outperforms all competitors in the FBP benchmark by achieving a Pearson correlation coefficient of PCC = 0.9601.
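The entropy-penalty idea the abstract mentions can be sketched in a few lines. The following is a minimal illustration, not the paper's actual implementation: the function names, the grouping of predictions by ethnicity into mean scores, and the weighting factor `lam` are all assumptions made for the example.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def fairness_penalty(group_scores):
    """Entropy-based penalty (illustrative): normalise the mean predicted
    attractiveness score per demographic group into a distribution and
    reward high entropy, i.e. scores spread evenly across groups.
    The result is >= 0 and is ~0 only when every group receives the
    same mean score."""
    total = sum(group_scores)
    p = [s / total for s in group_scores]
    max_entropy = math.log(len(group_scores))  # entropy of the uniform case
    return max_entropy - entropy(p)

def total_loss(prediction_loss, group_scores, lam=0.1):
    """Hypothetical training objective: task loss plus weighted penalty."""
    return prediction_loss + lam * fairness_penalty(group_scores)
```

In a real training loop the penalty term would be differentiated alongside the prediction loss, pushing the CNN towards equal mean scores across groups.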
The aim of this work is the development of an artificial intelligence (AI) application to support the recruiting process, elevating the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing, i.e. active recruitment, pre-sorting, evaluating structured video interviews and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground truth data. Together, these approaches represent important steps forward in creating ethical and unbiased machine learning systems.
In recent years, 3D facial reconstructions from single images have garnered significant interest. Most of the approaches are based on 3D Morphable Model (3DMM) fitting to reconstruct the 3D face shape. Concurrently, the adoption of Generative Adversarial Networks (GAN) has been gaining momentum to improve the texture of reconstructed faces. In this paper, we propose a fundamentally different approach to reconstructing the 3D head shape from a single image by harnessing the power of GAN. Our method predicts three maps of normal vectors of the head’s frontal, left, and right poses. We are thus presenting a model-free method that does not require any prior knowledge of the object’s geometry to be reconstructed.
The key advantage of our proposed approach is the substantial improvement in reconstruction quality compared to existing methods, particularly in the case of facial regions that are self-occluded in the input image. Our method is not limited to 3D face reconstruction. It is generic and applicable to multiple kinds of 3D objects. To illustrate the versatility of our method, we demonstrate its efficacy in reconstructing the entire human body.
By delivering a model-free method capable of generating high-quality 3D reconstructions, this paper not only advances the field of 3D facial reconstruction but also provides a foundation for future research and applications spanning multiple object types. The implications of this work have the potential to extend far beyond facial reconstruction, paving the way for innovative solutions and discoveries in various domains.
Over the last 50 years, neoclassical financial theory has been dominating our perception of what is happening in financial markets. It has spurred numerous valuable theories and concepts all based on the concept of Homo Economicus, the strictly rational economic man. However, humans do not always act in a strictly rational manner. For students and practitioners alike, our book aims at opening the door to another perspective on financial markets: a behavioral perspective based on a Homo Oeconomicus Humanus. This agent acts with limited rationality when making decisions. He/she uses heuristics and shortcuts and is prone to the influence of emotions. This sounds familiar in real life and can be transferred to what happens in financial markets, too.
Because of high product and technology complexity, companies involve external partners in their research and development (R&D) processes. The result is interorganizational projects, which represent temporary organizations in which heterogeneous organizations work closely together. Since project work is always teamwork, these projects face, due to their characteristics, major challenges on an organizational, relational, and content-related collaboration level. Thus, this paper raises the following research question: “How can a project team be supported on an organizational, relational, and content-related level in an interorganizational new product development setting?” To answer this research question, an explorative expert study was set up with two digital workshops using the interactive presentation tool Mentimeter. The results show that a cooperative innovation culture could support project teams on an organizational and relational level in minimizing predominant problems; it also supports project teams, for example, in functional communication. Furthermore, 18 values of a cooperative innovation culture emerge, for example openness and transparency, risk and failure tolerance, or respect. On a content-related level, the results show that an adaptable tool which promotes creativity and collaboration methods, as well as content-related input support, could be beneficial for problem-solving in an interorganizational new product development setting, because such a tool can guide product developers through the process with suitable creativity and collaboration methods, give content-related input, and enable interactive interchange on a table-top. Future research could focus mainly on the connection between the cooperative innovation culture and the tool, since these potentially influence each other.
Project managers still face management problems in interorganizational research and development (R&D) projects due to their limited authority. Addressing a project culture that is conducive to cooperation and innovation in interorganizational R&D project management demands the commitment of individual project members and thus balances this limited authority. However, the relational collaboration level at which project culture manifests itself is not addressed by current project management approaches, or is addressed only at a late stage. Consequently, project culture develops within a predefined framework of project organization and organized contents and thus is not actively targeted. Therefore, a shift of focus towards project culture becomes necessary. This can be achieved by project-culture-aware management. The method CLIPS actively supports interorganizational project members in this kind of management. It should be integrable into common project management approaches so that, with its application, all collaboration levels are addressed in interorganizational R&D project management. The goal of this paper is to demonstrate the integrability of the method CLIPS and to show how it can be integrated into common project management approaches. This enriches interorganizational R&D project management with a project culture focus.
Supply chains have evolved into dynamic, interconnected supply networks, which increases the complexity of achieving end-to-end traceability of object flows and their experienced events. With its capability to ensure a secure, transparent, and immutable environment without relying on a trusted third party, the emerging blockchain technology shows strong potential to enable end-to-end traceability in such complex multitiered supply networks. However, as the dissertation’s systematic literature review reveals, the currently available blockchain-based traceability solutions lack the ability to map object-related supply chain events holistically, which involves mapping objects’ creation and deletion, aggregation and disaggregation, transformation, and transaction. Therefore, this dissertation proposes a novel blockchain-based traceability architecture that integrates governance and token concepts to overcome the limitations of existing architectures. While the governance concept manages the supply chain structure on an application level, the token concept includes all functions to conduct object-related supply chain events. For this to be possible, this dissertation’s token concept introduces token ‘blueprints’, which allow clients to group tokens into different types, where tokens of the same type are non-fungible. Furthermore, blueprints can include minting conditions, which are, for example, necessary when mapping assembly or delivery processes. In addition, the token concept contains logic for reflecting all conducted object-related events in an integrated token history. This ultimately leads to end-to-end traceability of tokens and their physical or abstract representatives on the blockchain. For validation purposes, this dissertation implements the architecture’s components and their update and request relationships in code and proves its applicability based on the Ethereum blockchain. 
Finally, this dissertation provides a scenario-based evaluation based on two industrial case studies from a manufacturing and logistics perspective to validate the architecture’s capabilities when applied in real-world industrial settings. The proposed blockchain-based traceability architecture thus covers all object-related supply chain events derived from the two industrial case studies and therefore proves its general-purpose end-to-end traceability capabilities of object flows.
The fifth mobile communications generation (5G) can lead to a substantial change in companies by enabling the full capability of wireless industrial communication. 5G, with its key features of providing Enhanced Mobile Broadband, Ultra-Reliable and Low-Latency Communication, and Massive Machine Type Communication, will support the implementation of Industry 4.0 applications. In particular, the possibility to set up Non-Public Networks enables 5G communication in factories and ensures sole access to the 5G infrastructure, offering new opportunities for companies to implement innovative mobile applications. Currently, various concepts, ideas, and projects for 5G applications in an industrial environment exist. However, the global rollout of 5G systems is a continuous process based on various stages defined by the 3rd Generation Partnership Project, the global initiative that develops and specifies the 5G telecommunication standard. Accordingly, some services are currently still far from their final performance capability or are not yet implemented. Additionally, research has yet to clarify the general suitability of 5G for frequently mentioned 5G use cases. This paper aims to identify relevant 5G use cases for intralogistics and evaluates their technical requirements regarding their practical feasibility throughout the upcoming 5G specifications.
Supply chains have evolved into dynamic, interconnected supply networks, which increases the complexity of achieving end-to-end traceability of object flows and their experienced events. With its capability of ensuring a secure, transparent, and immutable environment without relying on a trusted third party, the emerging blockchain technology shows strong potential to enable end-to-end traceability in such complex multitiered supply networks. This paper aims to overcome the limitations of existing blockchain-based traceability architectures regarding their object-related event mapping ability, which involves mapping the creation and deletion of objects, their aggregation and disaggregation, transformation, and transaction, in one holistic architecture. Therefore, this paper proposes a novel ‘blueprint-based’ token concept, which allows clients to group tokens into different types, where tokens of the same type are non-fungible. Furthermore, blueprints can include minting conditions, which, for example, are necessary when mapping assembly processes. In addition, the token concept contains logic for reflecting all conducted object-related events in an integrated token history. Finally, for validation purposes, this article implements the architecture’s components in code and proves its applicability based on the Ethereum blockchain. As a result, the proposed blockchain-based traceability architecture covers all object-related supply chain events and proves its general-purpose end-to-end traceability capabilities of object flows.
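The blueprint-based token concept described above can be illustrated with a minimal in-memory sketch. This is not the dissertation's Solidity implementation; the class names, the burn-on-assembly rule, and the event log structure are assumptions chosen to mirror the described behaviour (typed non-fungible tokens, minting conditions, and an integrated token history).

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """A token type ('blueprint'); tokens of one blueprint are non-fungible."""
    name: str
    # minting condition: blueprint names that must be consumed as inputs,
    # e.g. an assembly step burning its component tokens
    required_inputs: list = field(default_factory=list)

class TokenRegistry:
    """In-memory stand-in for what would be a smart contract on Ethereum."""
    def __init__(self):
        self.next_id = 0
        self.tokens = {}   # token id -> (blueprint name, alive?)
        self.history = []  # append-only event log -> end-to-end traceability

    def mint(self, blueprint, input_ids=()):
        inputs = [self.tokens[i][0] for i in input_ids]
        # enforce the minting condition: exactly the required inputs consumed
        if sorted(inputs) != sorted(blueprint.required_inputs):
            raise ValueError("minting condition not met")
        for i in input_ids:                     # burn consumed component tokens
            self.tokens[i] = (self.tokens[i][0], False)
        tid = self.next_id
        self.next_id += 1
        self.tokens[tid] = (blueprint.name, True)
        self.history.append(("mint", tid, blueprint.name, tuple(input_ids)))
        return tid
```

Minting a "bike" token would then require burning two "wheel" tokens, and every creation and consumption remains queryable in `history`, which is the traceability property the architecture targets.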
Blockchain technology represents a decentralised database that stores information securely in immutable data blocks. For supply chain management, these characteristics offer potential for increasing supply chain transparency, visibility, automation, and efficiency. In this context, initial token-based mapping approaches exist to transfer certain manufacturing processes to the blockchain, such as the creation or assembly of parts as well as their transfer of ownership. This paper proposes a prototypical blockchain application that adopts an authority concept and a concept of smart non-fungible tokens. The application enables the mapping of complex products in dynamic supply chains that require the auditability of changeable assembly processes on the blockchain. Finally, the paper demonstrates the practical feasibility of the proposed application based on a prototypical implementation created on the Ethereum blockchain.
Condition monitoring supported with artificial intelligence, cloud computing, and industrial internet of things (IIoT) technologies increases the feasibility of predictive maintenance. However, the cost of traditional sensors, data acquisition systems, and the required information technology expert-knowledge challenge the industry. This paper presents a hybrid condition monitoring system (CMS) architecture consisting of a distributed, low-cost IIoT-sensor solution. The CMS uses micro-electro-mechanical system (MEMS) microphones for data acquisition, edge computing for signal preprocessing, and cloud computing, including artificial neural networks (ANN) for higher-level information processing. The system's feasibility is validated using a testbed for reciprocating linear-motion axes.
The Commitment of Traders (CoT) report has been around for over 30 years, consistently revealing the futures positions of key market players. This study's primary aim is to use the comprehensive data from the Commitment of Traders reports to develop a short-term reversal trading strategy. Against the benchmark, an S&P 500 buy-and-hold approach with a Sharpe ratio of 1.07, the CoT long-only strategy generated significant results in six individual markets. Extending the strategy to long-and-short, two markets outperformed the benchmark significantly. However, a scenario analysis indicated underperformance of the CoT strategy when traded in a portfolio, confirming that the chosen strategy parameters could not generate excess Sharpe ratios. Our results indicate that the Commodity Futures Trading Commission, and more specifically the CoT report, contributes to an efficient derivatives market.
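The Sharpe ratio that serves as the benchmark metric above is straightforward to compute. A minimal sketch, assuming daily returns and the usual annualisation by the square root of 252 trading days (the study's exact conventions, e.g. its risk-free rate, are not stated in the abstract):

```python
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualised Sharpe ratio from a series of periodic returns:
    mean excess return per unit of return volatility, scaled by
    sqrt(periods_per_year). The study benchmarks CoT strategies
    against an S&P 500 buy-and-hold Sharpe ratio of 1.07."""
    excess = [r - risk_free_rate / periods_per_year for r in returns]
    mean = statistics.mean(excess)
    std = statistics.stdev(excess)
    return (mean / std) * periods_per_year ** 0.5
```

A strategy "outperforming the benchmark" in the abstract's sense means its value from this computation exceeds 1.07 over the sample period.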
For optimization of production processes and product quality, often knowledge of the factors influencing the process outcome is compulsory. Thus, process analytical technology (PAT) that allows deeper insight into the process and results in a mathematical description of the process behavior as a simple function based on the most important process factors can help to achieve higher production efficiency and quality. The present study aims at characterizing a well-known industrial process, the transesterification reaction of rapeseed oil with methanol to produce fatty acid methyl esters (FAME) for usage as biodiesel in a continuous micro reactor set-up. To this end, a design of experiment approach is applied, where the effects of two process factors, the molar ratio and the total flow rate of the reactants, are investigated. The optimized process target response is the FAME mass fraction in the purified nonpolar phase of the product as a measure of reaction yield. The quantification is performed using attenuated total reflection infrared spectroscopy in combination with partial least squares regression. The data retrieved during the conduction of the DoE experimental plan were used for statistical analysis. A non-linear model indicating a synergistic interaction between the studied factors describes the reactor behavior with a high coefficient of determination (R²) of 0.9608. Thus, we applied a PAT approach to generate further insight into this established industrial process.
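The kind of two-factor interaction model a DoE study of this sort yields can be fitted by ordinary least squares. The sketch below is illustrative only (the study's actual model form and coefficients are not given in the abstract); the two factors stand in for molar ratio and total flow rate, and R² is computed exactly as the reported goodness-of-fit measure.

```python
def fit_interaction_model(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b12*x1*x2, a
    two-factor model with a synergistic interaction term. Returns the
    coefficients and the coefficient of determination R^2."""
    rows = [[1.0, a, b, a * b] for a, b in zip(x1, x2)]
    n, p = len(rows), 4
    # normal equations: (X^T X) beta = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        pivot = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, p))) / xtx[r][r]
    fitted = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    mean_y = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return beta, 1.0 - ss_res / ss_tot
```

Feeding the FAME mass fractions measured over the factorial plan into such a fit would yield the reported R² of 0.9608 for the study's model.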
Ecuador, traditionally an agriculture-based economy, has great potential for valorizing its industrial residues. This study presents a techno-economic analysis of applying a novel biomass oxidation method to produce formic and acetic acids from coffee husk residues in Machala, Ecuador. The analysis determined that the return-on-investment time was lower than 5 years, making this project economically feasible when producing approximately 1000 tons of formic acid per year, which is enough to supply the Ecuadorian market. This production would reduce import costs and develop the chemical industry in the country.
Mobile assistance systems (MAS) promise to overcome personnel shortages in operating theatres worldwide. A literature review inspired by the PRISMA 2020 method determines the state of the art of MAS, and identifies a lack of application areas for MAS in the operating theatre. Interviews with subject-matter experts aim to investigate application areas for MAS. The results show that most operational tasks refer to material management and patient management. MAS, with their potential to reduce the time needed for material and patient management, and the physical and mental strain of patient management, have great potential in the operating theatre.
The pH value of the human skin is not in the neutral range but is slightly acidic, with values of 3.5 to 6 depending on the body part. This provides a suitable habitat for the commensal skin flora but has a killing effect on some pathogenic micro-organisms and an inactivating effect on some viruses. This protective acid mantle of the skin thus represents a first external protective layer against infestation by pathogens. An appropriate surface pH on textiles can help to minimize the transmission of pathogens through the clothing of healthcare workers while at the same time not exerting a negative influence on the skin’s own flora. In addition, the colonization of, e.g., bed linen by pathogenic microorganisms can be reduced. This can also have a positive influence on bacteria-associated odor formation on functional clothing.
High-performance liquid chromatography is one of the most important analytical tools for the identification and separation of substances. The efficiency of this method is largely determined by the stationary phase of the columns. Although monodisperse mesoporous silica microspheres (MPSMs) represent a commonly used material as stationary phase, their tailored preparation remains challenging. Here we report on the synthesis of four MPSMs via the hard template method. Silica nanoparticles (SNPs), which form the silica network of the final MPSMs, were generated in situ from tetraethyl orthosilicate (TEOS) in the presence of (3-aminopropyl)triethoxysilane (APTES)-functionalized p(GMA-co-EDMA) as hard template. Methanol, ethanol, 2-propanol, and 1-butanol were applied as solvents to control the size of the SNPs in the hybrid beads (HBs). After calcination, MPSMs with different sizes, morphologies and pore properties were obtained and characterized by scanning electron microscopy, nitrogen adsorption and desorption measurements, thermogravimetric analysis, solid-state NMR and DRIFT IR spectroscopy. Interestingly, the 29Si NMR spectra of the HBs show T and Q group species, which suggests that there is no covalent linkage between the SNPs and the template. The MPSMs were functionalized with trimethoxy(octadecyl)silane and used as stationary phases in reversed-phase chromatography to separate a mixture of eleven different amino acids. The separation characteristics of the MPSMs strongly depend on their morphology and pore properties, which are controlled by the solvent during the preparation of the MPSMs. Overall, the separation behavior of the best phases is comparable with that of commercially available columns. The phases even achieve faster separation of the amino acids without loss of quality.
Mesoporous silica microspheres (MPSMs) find broad application as separation materials in high-performance liquid chromatography (HPLC). A promising preparation strategy uses p(GMA-co-EDMA) as hard templates to control the pore properties and achieve a narrow size distribution of the MPSMs. Here six hard templates were prepared which differ in their porosity and surface functionalization. This was achieved by altering the ratio of GMA to EDMA and by adjusting the proportion of monomer and porogen in the polymerization process. The various amounts of GMA incorporated into the polymer network of P1-6 lead to different numbers of tetraethylenepentamine (TEPA) groups in the p(GMA-co-EDMA) template. This was established by a partial least squares regression (PLS-R) model based on FTIR spectra of the templates. Deposition of silica nanoparticles (SNPs) into the template under Stoeber conditions and subsequent removal of the polymer by calcination result in MPSM1-6. The size of the SNPs and their incorporation depend on the pore parameters of the template and the degree of TEPA functionalization. Moreover, the incorporated SNPs construct the silica network and control the pore parameters of the MPSMs. Functionalization of the MPSMs with trimethoxy(octadecyl)silane allows their use as a stationary phase for the separation of biomolecules. The pore characteristics and the functionalization of the template determine the pore structure of the silica particles and, consequently, their separation properties.
The hard template method for the preparation of monodisperse mesoporous silica microspheres (MPSMs) has been established in recent years. In this process, in situ-generated silica nanoparticles (SNPs) enter the porous organic template and control the size and pore parameters of the final MPSMs. Here, the sizes of the deposited SNPs are determined by the hydrolysis and condensation rates of different alkoxysilanes in a base catalyzed sol–gel process. Thus, tetramethyl orthosilicate (TMOS), tetraethyl orthosilicate (TEOS), tetrapropyl orthosilicate (TPOS) and tetrabutyl orthosilicate (TBOS) were sol–gel processed in the presence of amino-functionalized poly (glycidyl methacrylate-co-ethylene glycol dimethacrylate) (p(GMA-co-EDMA)) templates. The size of the final MPSMs covers a broad range of 0.5–7.3 µm and a median pore size distribution from 4.0 to 24.9 nm. Moreover, the specific surface area can be adjusted between 271 and 637 m2 g−1. Also, the properties and morphology of the MPSMs differ according to the SNPs. Furthermore, the combination of different alkoxysilanes allows the individual design of the morphology and pore parameters of the silica particles. Selected MPSMs were packed into columns and successfully applied as stationary phases in high-performance liquid chromatography (HPLC) in the separation of various water-soluble vitamins.
Blockchains have become increasingly important in recent years and have expanded their applicability to many domains beyond finance and cryptocurrencies. This adoption has particularly increased with the introduction of smart contracts, which are immutable, user-defined programs directly deployed on blockchain networks. However, many scenarios require business transactions to simultaneously access smart contracts on multiple, possibly heterogeneous blockchain networks while ensuring the atomicity and isolation of these transactions, which is not natively supported by current blockchain systems. Therefore, in this work, we introduce the Transactional Cross-Chain Smart Contract Invocation (TCCSCI) approach that supports such distributed business transactions while ensuring their global atomicity and serializability. The approach introduces the concept of Resource Manager Smart Contracts, and 2PC for Blockchains (2PC4BC), a client-driven Atomic Commit Protocol (ACP) specialized for blockchain-based distributed transactions. We validate our approach using a prototypical implementation, evaluate its introduced overhead, and prove its correctness.
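The client-driven atomic commit the abstract describes can be sketched with mock resource managers. This is a simplified illustration of the 2PC pattern underlying 2PC4BC, not the paper's actual protocol: the class names, the lock-based prepare semantics, and the in-memory state are assumptions for the example.

```python
class ResourceManagerContract:
    """Mock of a resource-manager smart contract on one blockchain:
    prepares (locks) a tentative update, then commits or aborts it."""
    def __init__(self):
        self.state, self.pending, self.locked = {}, {}, False

    def prepare(self, updates):
        if self.locked:          # cannot vote yes while another tx holds the lock
            return False
        self.pending, self.locked = dict(updates), True
        return True              # vote: ready to commit

    def commit(self):
        self.state.update(self.pending)
        self.pending, self.locked = {}, False

    def abort(self):
        self.pending, self.locked = {}, False

def two_phase_commit(managers, updates_per_chain):
    """Client-driven atomic commit across several chains: phase 1
    collects prepare votes; phase 2 commits only if every chain voted
    yes, otherwise aborts everywhere, preserving global atomicity."""
    prepared = []
    for rm, upd in zip(managers, updates_per_chain):
        if rm.prepare(upd):
            prepared.append(rm)
        else:                    # any 'no' vote aborts the global transaction
            for p in prepared:
                p.abort()
            return False
    for rm in managers:
        rm.commit()
    return True
```

The lock held between prepare and commit is also what provides isolation: a second transaction touching the same resource manager cannot prepare until the first one resolves.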
This study examines the underexplored areas of customer success management, focusing on the impact of leadership and companywide collaboration, and the role of customer success in overall firm performance. A qualitative research approach was utilized, which involved reviewing relevant literature and conducting an interview with the Vice President of Customer Success Management in B2B at a case company. Findings revealed that both leadership and pervasive collaboration greatly enhance the customer journey experience. Given that 75% of Annual Recurring Revenue is derived from existing customers, the substantial role of customer success in propelling business growth is affirmed. The study also demonstrated the importance of proactive customer engagement, assimilating customer feedback into products and services, and nurturing personal relationships with customers for fostering innovation. It further stressed the need for service provision and decision-making at various levels, as well as the implementation of a range of communication channels, to ensure customer success.
This study examines the phenomenon of Virtual Influencer (VI) marketing and its impact on customer purchase behavior. The aim is to understand the scope and impact of VI marketing. The study compares VI marketing to traditional Human Influencer (HI) marketing and identifies the unique benefits and challenges associated with VIs. A survey was conducted to gain insight into consumer attitudes and behaviors toward VIs. Key findings reveal varying levels of trust and acceptance of VIs among consumers. While some participants expressed openness to buying products promoted by VIs, others had reservations about their authenticity. The study also explores the potential role of VIs in the metaverse, highlighting business opportunities and challenges in this evolving digital landscape. Overall, this research sheds light on the growing influence of VIs and the need for further research in the field of marketing.
Purpose
The authors study the valuation effect of corporate diversification in the initial phase of the COVID-19 pandemic in 2020 in Europe.
Design/methodology/approach
Applying a cross-sectional regression model to a sample of public companies headquartered in the European Union, the authors investigate the existence of and the change in a diversification discount between 2018 and 2020. By applying the Excess Q methodology, the authors make an industry adjustment of diversified companies to measure the value effect of corporate diversification.
Findings
The authors find an economically and statistically significant diversification discount that increases from an average Excess Q of −0.05 in 2019 to −0.10 in 2020. The diversified companies' inferior fundamental financial performance in 2020 accompanies the discount. The results deviate from those of previous research, which mostly show a decrease in the diversification discount in economic crises, and thereby, shed doubt on whether diversification provides insurance against pandemic-induced adverse value effects.
Originality/value
The study distinguishes the role of corporate diversification during recessionary periods by establishing that the valuation effect of diversification depends on the nature of the crisis. The analysis incorporates criticism of previous studies concerning a biased methodology and uniform data source by applying the Excess Q methodology and using FactSet industry segment data.
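The Excess Q measure used in this study can be sketched as follows. This is a generic Berger/Ofek-style formulation assumed for illustration; the study's exact implementation (e.g. its treatment of FactSet segment data) is not specified in the abstract.

```python
import math

def excess_q(firm_q, segments, industry_median_q):
    """Excess Q: log of the firm's actual Tobin's Q over an imputed Q,
    where the imputed Q is the sales-weighted average of the
    industry-median Q of each business segment. segments is a list of
    (industry, sales) pairs. Negative values indicate a
    diversification discount, as in the reported -0.05 and -0.10."""
    total_sales = sum(s for _, s in segments)
    imputed = sum(industry_median_q[ind] * s / total_sales
                  for ind, s in segments)
    return math.log(firm_q / imputed)
```

A single-segment firm valued exactly at its industry median scores zero by construction, which is what makes the measure an industry adjustment.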
The present study investigated the possibilities and limitations of using a low-cost NIR spectrometer for the verification of the presence of the declared active pharmaceutical ingredients (APIs) in tablet formulations, especially for medicine screening studies in low-resource settings. Spectra from 950 to 1650 nm were recorded for 170 pharmaceutical products representing 41 different APIs, API combinations or placebos. Most of the products, including 20 falsified medicines, had been collected in medicine quality studies in African countries. After exploratory principal component analysis, models were built using data-driven soft independent modelling of class analogy (DD-SIMCA), a one-class classifier algorithm, for tablet products of penicillin V, sulfamethoxazole/trimethoprim, ciprofloxacin, furosemide, metronidazole, metformin, hydrochlorothiazide, and doxycycline. Spectra of amoxicillin and amoxicillin/clavulanic acid tablets were combined into a single model. Models were tested using Procrustes cross-validation and by projection of spectra of tablets containing the same or different APIs. Tablets containing no or different APIs could be identified with 100 % specificity in all models. A separation of the spectra of amoxicillin and amoxicillin/clavulanic acid tablets was achieved by partial least squares discriminant analysis. 15 out of 19 external validation products (79 %) representing different brands of the same APIs were correctly identified as members of the target class; three of the four rejected samples showed an API mass percentage of the total tablet weight that was out of the range covered in the respective calibration set. Therefore, in future investigations larger and more representative spectral libraries are required for model building. Falsified medicines containing no API, incorrect APIs, or grossly incorrect amounts of the declared APIs could be readily identified. 
Variation between different NIR-S-G1 spectroscopic devices led to a loss of accuracy if spectra recorded with different devices were pooled. Therefore, piecewise direct standardization was applied for calibration transfer. The investigated method is a promising tool for medicine screening studies in low-resource settings.
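The calibration transfer step can be illustrated with a simplified sketch. Full piecewise direct standardization regresses each master channel on a window of neighbouring slave channels; the version below uses a window of one (a per-wavelength slope/offset), which is an assumption made to keep the example short, not the method as applied in the study.

```python
def fit_channelwise_standardization(master_spectra, slave_spectra):
    """Fit a per-wavelength slope/offset mapping slave-device spectra
    into master-device space, from a small set of transfer samples
    measured on both devices. A window-size-one simplification of
    piecewise direct standardization."""
    n_samples, n_channels = len(master_spectra), len(master_spectra[0])
    params = []
    for j in range(n_channels):
        s = [spec[j] for spec in slave_spectra]
        m = [spec[j] for spec in master_spectra]
        s_mean, m_mean = sum(s) / n_samples, sum(m) / n_samples
        cov = sum((a - s_mean) * (b - m_mean) for a, b in zip(s, m))
        var = sum((a - s_mean) ** 2 for a in s)
        slope = cov / var
        params.append((slope, m_mean - slope * s_mean))
    return params

def standardize(spectrum, params):
    """Map one spectrum recorded on the slave device into master space."""
    return [a * x + b for x, (a, b) in zip(spectrum, params)]
```

After this mapping, spectra from different NIR-S-G1 devices can be projected onto the same DD-SIMCA models without the accuracy loss described above.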
The influence of sleep on human health is enormous. Accordingly, sleep disorders can have a negative impact on it. To avoid this, they should be identified and treated in time. For this purpose, objective (with an appropriate device) or subjective (based on perceived values) measurement methods are used for sleep analysis to understand the problem. The aim of this work is to find out whether an exchange of the two methods is possible and can provide reliable results. In accordance with this goal, a study was conducted with people aged over 65 (a total of 154 night-time recordings) in which both measurement methods were compared. Sleep questionnaires and electronic devices for sleep assessment placed under the mattress were applied to achieve the study aims. The obtained results indicated that a correlation between the two measurement methods could be observed for sleep characteristics such as total sleep time, total time in bed and sleep efficiency. However, there are also significant differences in the absolute values of the two measurement approaches for some subjects/nights, which leads us to conclude that substitution is more likely to be feasible for long-term monitoring, where trends are more important than the absolute values for individual nights.
In order to ensure sufficient recovery of the human body and brain, healthy sleep is indispensable. For this purpose, appropriate therapy should be initiated at an early stage in the case of sleep disorders. For some sleep disorders (e.g., insomnia), a sleep diary is essential for diagnosis and therapy monitoring. However, subjective measurement with a sleep diary has several disadvantages, requiring regular action from the user and leading to decreased comfort and potential data loss. To automate sleep monitoring and increase user comfort, one could consider replacing a sleep diary with an automatic measurement, such as a smartwatch, which would not disturb sleep. To obtain accurate results on the evaluation of the possibility of such a replacement, a field study was conducted with a total of 166 overnight recordings, followed by an analysis of the results. In this evaluation, objective sleep measurement with a Samsung Galaxy Watch 4 was compared to a subjective approach with a sleep diary, which is a standard method in sleep medicine. The focus was on comparing four relevant sleep characteristics: falling asleep time, waking up time, total sleep time (TST), and sleep efficiency (SE). After evaluating the results, it was concluded that a smartwatch could replace subjective measurement to determine falling asleep and waking up time, considering some level of inaccuracy. In the case of SE, substitution was also proved to be possible. However, some individual recordings showed a higher discrepancy in results between the two approaches. For its part, the evaluation of the TST measurement currently does not allow us to recommend substituting the measurement method for this sleep parameter. The appropriateness of replacing sleep diary measurement with a smartwatch depends on the acceptable levels of discrepancy. We propose four levels of similarity of results, defining ranges of absolute differences between objective and subjective measurements. 
By considering the values in the provided table and knowing the required accuracy, it is possible to determine the suitability of substitution in each individual case. The introduction of a “similarity level” parameter increases the adaptability and reusability of study findings in individual practical cases.
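The proposed "similarity level" lookup reduces to a simple threshold check on the absolute difference between the two measurements; a sketch with placeholder thresholds in minutes (the actual level boundaries are defined in the paper's table, not here):

```python
def similarity_level(objective, subjective, thresholds=(10, 20, 30)):
    """Assign one of four similarity levels to a measurement pair (e.g. TST in
    minutes) based on the absolute difference. The threshold values are
    illustrative placeholders, not the ones from the paper's table."""
    diff = abs(objective - subjective)
    for level, limit in enumerate(thresholds, start=1):
        if diff <= limit:
            return level
    return 4  # differences beyond the last boundary fall into the worst level
```

A practitioner who can tolerate, say, level 2 discrepancies would accept substitution only for those sleep parameters whose typical differences fall within the first two ranges.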
The scoring of sleep stages is one of the essential tasks in sleep analysis. Since a manual procedure requires considerable human and financial resources, and incorporates some subjectivity, an automated approach could result in several advantages. There have been many developments in this area, and in order to provide a comprehensive overview, it is essential to review relevant recent works and summarise the characteristics of the approaches, which is the main aim of this article. To this end, we examined articles published between 2018 and 2022 that dealt with the automated scoring of sleep stages. In the final selection for in-depth analysis, 125 articles were included after reviewing a total of 515 publications. The results revealed that automatic scoring demonstrates good quality (with Cohen's kappa above 0.80 and accuracy above 90%) in analysing EEG/EEG + EOG + EMG signals. At the same time, it should be noted that there has been no breakthrough in the quality of results using these signals in recent years. Systems involving other signals that could potentially be acquired more conveniently for the user (e.g. respiratory, cardiac or movement signals) remain more challenging to implement with a high level of reliability but have considerable innovation capability. In general, automatic sleep stage scoring has excellent potential to assist medical professionals while providing an objective assessment.
Healthy sleep is one of the prerequisites for a good human body and brain condition, including general well-being. Unfortunately, there are several sleep disorders that can negatively affect this. One of the most common is sleep apnoea, in which breathing is impaired. Studies have shown that this disorder often remains undiagnosed. To avoid this, developing a system that can be widely used in a home environment to detect apnoea and monitor the changes once therapy has been initiated is essential. The conceptualisation of such a system is the main aim of this research. After a thorough analysis of the available literature and state of the art in this area of knowledge, a concept of the system was created, which includes the following main components: data acquisition (including two parts), storage of the data, apnoea detection algorithm, user and device management, data visualisation. The modules are interchangeable, and interfaces have been defined for data transfer, most of which operate using the MQTT protocol. System diagrams and detailed component descriptions, including signal requirements and visualisation mockups, have also been developed. The system's design includes the necessary concepts for the implementation and can be realised in a prototype in the next phase.
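Most of the system's interfaces transfer data over MQTT, i.e. as messages published to topics. A minimal sketch of how a data-acquisition module might package one chunk of samples (the topic layout and field names below are our illustrative assumptions, not taken from the concept paper):

```python
import json
import time

def make_sample_message(device_id, signal, values, fs_hz):
    """Build the MQTT topic and JSON payload for one data-acquisition message.
    Topic layout and field names are illustrative, not from the paper."""
    topic = f"apnoea/{device_id}/{signal}"
    payload = json.dumps({
        "device": device_id,
        "signal": signal,        # e.g. "respiration", "heart_rate"
        "fs_hz": fs_hz,          # sampling rate in Hz
        "ts": time.time(),       # acquisition timestamp (epoch seconds)
        "values": values,        # raw samples for this chunk
    })
    return topic, payload
```

A data-acquisition component would publish such payloads via any MQTT client, while the storage, detection and visualisation modules subscribe to the matching topics; because only the message contract is fixed, the modules remain interchangeable.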
Acting like a startup - using corporate startup structures to manage the digital transformation
(2023)
Digital transformation is proving to be a significant challenge for firms and companies when it comes to maintaining their market position. It is evident that many companies are struggling to find their particular way through this transformation. A corporate startup structure is one way to find a suitable solution quickly. Therefore, we present a model for corporate startup activities, which we will instantiate in an appropriate tool to support the management of corporate startups by their parent firms. We have derived the first requirements and design principles from a comprehensive problem analysis and literature study. In addition, we present a first artifact, which realizes the design principles in a practical tool. A cooperation with an automotive firm has enabled us to gain access to real-world data for the design and evaluation of the artifact.
Student-faculty interactions that promote learning are essential contributors to student retention, academic success and satisfaction. But the factors that causally initiate and frame these interactions are not well understood. Only if students evaluate these interactions as positive will they seek them. We conducted a survey experiment with students (n = 375) from a tuition-fee-free German business school, using conditional process analysis to assess which factors frame effective interactions. We focus on out-of-classroom standard and non-standard requests that students make to faculty, then investigate how faculty and student gender and students’ academic entitlement influence the interaction. Our study examines how students evaluate the interaction with faculty: when they seek interaction, their expectations of getting their requests approved, and their disappointment when their requests are declined. We find a significant influence of the request type along with moderating effects of faculty gender, student gender and student entitlement, particularly for non-standard work requests. We conclude with policy implications for university management: developing target-group-specific measures that facilitate the desired and positively evaluated student-faculty interactions might benefit all university stakeholders.
Software development teams have to face stress caused by deadlines, staff turnover, or individual differences in commitment, expertise, and time zones. While students are typically taught the theory of software project management, their exposure to such stress factors is usually limited. However, preparing students for the stress they will have to endure once they work in project teams is important for their own sake, as well as for the sake of team performance in the face of stress. Team performance has been linked to the diversity of software development teams, but little is known about how diversity influences the stress experienced in teams. In order to shed light on this aspect, we provided students with the opportunity to self-experience the basics of project management in self-organizing teams, and studied the impact of six diversity dimensions on team performance, coping with stressors, and positive perceived learning effects. Three controlled experiments at two universities with a total of 65 participants suggest that the social background impacts the perceived stressors the most, while age and work experience have the highest impact on perceived learnings. Most diversity dimensions have a medium correlation with the quality of work, yet no significant relation to the team performance. This lays the foundation to improve students’ training for software engineering teamwork based on their diversity-related needs and to create diversity-sensitive awareness among educators, employers and researchers.
The members of the European TRIZ Campus (ETC) have been learning from and working together with many honorable members of MATRIZ Official for many years and feel very connected to the official International TRIZ Association.
To further spread the TRIZ methodology and TRIZ teaching in Europe, the ETC has, over the past 12 months, put a lot of thought into how to make TRIZ accessible to a broader audience; getting more professionals in touch with the methodology was one of the focal points.
To this end, we have developed new formats such as the "Trainer Day" to support trainers on their way into practice. We have drawn up detailed quality guidelines for the teaching of the TRIZ methodology, which are intended to provide orientation for the design of training classes and documentation. We strive for exchange with representatives of "neighbouring" methods such as Six Sigma, Lean, DFMA and Design Thinking to indicate synergies and added value among methods and approaches of different kinds. We are testing formats for community building in order to connect users everywhere more strongly with the TRIZ methodology through communication and information offers. If TRIZ users feel alone in their organizations, exchange outside their organization helps them to keep up with the TRIZ methodology. Moreover, the ETC strives to increase the ability to communicate the benefits of TRIZ usage inside organizations. We discuss how to reach teachers and students of all ages in order to make this unique way of inventive thinking accessible to them.
In our paper we want to give other MATRIZ Official members insights and share our experiences and best practices with our fellow MO members.
The COVID-19 pandemic necessitated significant changes in foreign language education, forcing teachers to reconstruct their identities and redefine their roles as language educators. To better understand these adaptations and perspectives, it is crucial to study how the pandemic has influenced teaching practices. This mixed-methods study focused on the less-explored aspects of foreign language teaching during the pandemic, specifically examining how language teachers adapted and perceived their practices, including rapport building and learner autonomy, during emergency remote teaching (ERT) in higher education institutions. It also explored teachers’ intentions for their teaching in the post-pandemic era. An online survey was conducted, involving 118 language educators primarily from Germany, with a smaller representation from New Zealand, the United States, and the United Kingdom. The analysis of participants’ responses revealed issues and opportunities regarding lesson formats, tool usage, rapport, and learner autonomy. Our findings offer insights into the desired changes participants envisioned for the post-pandemic era. The results highlight the opportunities ERT had created in terms of teacher development, and we offer suggestions to enhance professional development programmes based on these findings.
This article explores current debate on the use of soft power in international higher education, highlighting existing tensions between competing political and academic discourses. It draws on examples from practice and relevant insights in soft power scholarship to capture varying paradoxes and dilemmas that emerge as nations try to leverage the power of international tertiary education to enhance their brand and attract foreign audiences in the name of public diplomacy. Whilst exposing cases of hubris and hidden agendas, this study also addresses issues of inequality and responds to a growing call for knowledge diplomacy aimed at tackling common global problems.
In modern collaborative production environments where industrial robots and humans are supposed to work hand in hand, it is mandatory to observe the robot’s workspace at all times. Such observation is even more crucial when the robot’s main position is also dynamic, e.g., because the system is mounted on a movable platform. As current solutions, like physically secured areas in which a robot can perform actions potentially dangerous for humans, become unfeasible in such scenarios, novel, more dynamic, and situation-aware safety solutions need to be developed and deployed.
This thesis mainly contributes to the bigger picture of such a collaborative scenario by presenting a data-driven convolutional neural network-based approach to estimate the two-dimensional kinematic-chain configuration of industrial robot arms within raw camera images. This thesis also provides the information needed to generate and organize the mandatory data basis and presents frameworks that were used to realize all involved subsystems. The robot arm’s extracted kinematic chain can also be used to estimate the extrinsic camera parameters relative to the robot’s three-dimensional origin. Further, a tracking system based on a two-dimensional kinematic-chain descriptor is presented to accumulate a proper movement history, which enables the prediction of future target positions within the given image plane. The combination of the extracted robot’s pose with a simultaneous human pose estimation system delivers a consistent data flow that can be used in higher-level applications.
This thesis also provides a detailed evaluation of all involved subsystems and gives a broad overview of their particular performance, based on newly generated, semi-automatically annotated real datasets.
The increase in distributed energy generation, such as photovoltaic systems (PV) or combined heat and power plants (CHP), poses new challenges to almost every distribution network operator (DNO). In the low-voltage (LV) grids, where installed PV capacity approaches the magnitude of household load, reverse power flow occurs at the secondary substations. High PV penetration leads to voltage rise, flicker and loading problems. These problems have been addressed by the application of various techniques amongst which is the deployment of step voltage regulators (SVR). SVR can solve the voltage problem, but do not prevent or reduce reverse power flows. Therefore, the application of SVR in low voltage grids can result in significant power losses upstream. In this paper we present part of a research project investigating the application of remote-controlled cable cabinets (CC) with metering units in a low-voltage network as a possible alternative for SVR. A new generation of custom-made remote-control cable cabinets has been deployed and dynamic network reconfigurations (NR) have been realized with the following objectives: (i) reduction of reverse power flow through the secondary substation to the upstream network and therefore a reduction of upstream losses, (ii) reduction of the voltage rise caused by distributed energy resources and (iii) load balancing in the low-voltage grid. Secondary objectives are to improve the DNO's insight into the state of the network and to provide further information on future smart grid integration.
Sleep disorders can impact daily life, affecting physical, emotional, and cognitive well-being. Due to the time-consuming, highly obtrusive, and expensive nature of using the standard approaches such as polysomnography, it is of great interest to develop a noninvasive and unobtrusive in-home sleep monitoring system that can reliably and accurately measure cardiorespiratory parameters while causing minimal discomfort to the user’s sleep. We developed a low-cost Out of Center Sleep Testing (OCST) system with low complexity to measure cardiorespiratory parameters. We tested and validated two force-sensitive resistor strip sensors under the bed mattress covering the thoracic and abdominal regions. Twenty subjects were recruited, including 12 males and 8 females. The ballistocardiogram signal was processed using the 4th smooth level of the discrete wavelet transform and a 2nd-order Butterworth bandpass filter to measure the heart rate and respiration rate, respectively. We reached a total error (with respect to the reference sensors) of 3.24 beats per minute for heart rate and 2.32 breaths per minute for respiration rate. For males and females, heart rate errors were 3.47 and 2.68 beats per minute, and respiration rate errors were 2.32 and 2.33 breaths per minute, respectively. We developed and verified the reliability and applicability of the system. It showed a minor dependency on sleeping position, one of the major difficulties in sleep measurement. We identified the sensor under the thoracic region as the optimal configuration for cardiorespiratory measurement. Although testing the system with healthy subjects and regular patterns of cardiorespiratory parameters showed promising results, further investigation of the bandwidth frequency and validation of the system with larger groups of subjects, including patients, are required.
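The respiration-rate pipeline (bandpass the under-mattress strip signal around breathing frequencies, then count breath cycles) can be sketched as follows. Note the hedges: the paper uses a 2nd-order Butterworth bandpass, while this NumPy-only stand-in uses moving-average filters, and the band edges and window lengths are assumed typical values, not taken from the text.

```python
import numpy as np

def respiration_rate(strip_signal, fs):
    """Estimate respiration rate (breaths/min) from an under-mattress strip
    sensor signal sampled at fs Hz. Moving averages stand in for the paper's
    Butterworth bandpass so the sketch needs only NumPy."""
    # Low-pass: ~1 s moving average suppresses cardiac components (~1 Hz)
    w = int(fs)
    lp = np.convolve(strip_signal, np.ones(w) / w, mode="same")
    # High-pass: subtract a ~10 s moving average to remove baseline drift
    w2 = int(10 * fs)
    baseline = np.convolve(lp, np.ones(w2) / w2, mode="same")
    band = lp - baseline
    # Each upward zero crossing of the band-limited signal is one breath cycle
    crossings = np.sum((band[:-1] < 0) & (band[1:] >= 0))
    return crossings / (len(strip_signal) / fs / 60)
```

On a synthetic 0.25 Hz breathing waveform with a superimposed cardiac component, the estimate lands near the true 15 breaths/min, which is the sanity check one would run before touching real sensor data.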
Through their procyclical behavior, loan loss provisions have been determined as one of the factors that contribute to financial instability during a crisis. IFRS 9 was introduced in 2018 with an expected credit loss model replacing the incurred loss model of IAS 39 to mitigate the effect in the future. Our study aims to analyze loan loss provisions of major banks in the Eurozone to determine for the first time if the implementation of IFRS 9, as intended by regulators, has a dampening effect on procyclicality, especially during the stressed situation under COVID‐19. We analyze 51 banks from 12 countries of the European Monetary Union using 2856 firm‐year observations. While no robust evidence of less procyclicality can be found after the implementation of IFRS 9 until the pandemic, we find evidence that loan loss provisions moved countercyclical during 2020, indicating an alleviating effect at the beginning of the exogenous shock.
In countries such as Germany, where municipalities have planning sovereignty, problems of urban sprawl often arise. As the dynamics of land development have not substantially subsided over the last years, the national government decided to test the instrument of ‘Tradable Planning Permits’ (TPP) in a nationwide field experiment with 87 municipalities involved. The field experiment was able to implement the key features of a TPP system in a laboratory setting with approximated real socioeconomic and planning conditions. In a TPP system allocated planning permits must be used by municipalities for developing land. The permits can be traded between local jurisdictions, so that they have flexibility in deciding how to comply with the regulation. In order to evaluate the performance of such a system, specific field data about future building areas and their impact on community budgets for the period 2014–2028 were collected. The field experiment contains several sessions with representatives of the municipalities and with students. The participants were confronted with two (municipalities) and four (students) schemes. The results show that a trading system can curb land development in an effective and also efficient manner. However, depending on the regulatory framework, the trading schemes show different price developments and distributional effects. The inexperienced representatives of the local authorities can easily handle the permits in the administration and in the established market. A trading scheme sets very high incentives to save open space and to direct development activities to areas within existing planning boundaries. It is therefore a promising instrument for Germany and also other regions or countries with an established land-use planning system.
Development of an indoor positioning system to create a digital shadow of production plant layouts
(2023)
The objective of this dissertation is to develop an indoor positioning system that allows the creation of a digital shadow of the plant layout in order to continuously represent the actual state of the physical layout in the virtual space. To define the requirements for such a system, potential stakeholders who could benefit from a digital shadow in the context of the plant layout were analysed, and the requirements were derived from their perspective in order to generate added value for their work. As the core of an indoor positioning system is the sensory aspect of capturing the physical layout parameters, different potential technologies were compared and evaluated in terms of their suitability for this particular application. Derived from this analysis, the selected concept is based on the use of a pan-tilt-zoom (PTZ) camera in combination with fiducial markers. To determine specific camera parameters, a series of experiments was conducted, which was necessary to develop the measurement method as well as the mathematical calculation method and coordinate transformation for determining the poses (positions and angular orientations) of the respective facilities in the plant. In addition, an experimental validation was performed to ensure that the limit values for individual parameters determined in the requirements analysis can be met.
The market for indoor positioning systems for a variety of applications has grown strongly in recent years. A wide range of systems is available, varying considerably in terms of accuracy, price and technology used. The suitability of the systems is highly dependent on the intended application. This paper presents a concept to use a single low-cost PTZ camera in combination with fiducial markers for indoor position and orientation determination. The intended use case is to capture a plant layout consisting of position, orientation and unique identity of individual facilities. Important factors to consider for the selection of a camera have been identified and the transformation of the marker pose in camera coordinates into a selectable plant coordinate system is described. The concept is illustrated by an exemplary practical implementation and its results.
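The described transformation of a marker pose from camera coordinates into a selectable plant coordinate system is, at its core, a chain of homogeneous transforms. A minimal sketch follows (the frame names are ours, and the PTZ-specific pan/tilt geometry and marker detection are omitted):

```python
import numpy as np

def to_homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def marker_in_plant(T_plant_cam, T_cam_marker):
    """Chain the transforms: plant <- camera <- marker.
    T_plant_cam is the camera pose in plant coordinates (assumed known from
    calibration); T_cam_marker is the marker pose measured in the camera frame."""
    return T_plant_cam @ T_cam_marker
```

The facility's position and orientation in the plant frame can then be read directly from the translation column and rotation block of the resulting matrix.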
In the past, plant layouts were regarded as highly static structures. With increasing internal and external factors causing turbulence in operations, it has become more necessary for companies to adapt to new conditions in order to maintain optimal performance. One possible way for such an adaptation is the adjustment of the plant layout by rearranging the individual facilities within the plant. Since the information about the plant layout is considered as master data and changes have a considerable impact on interconnected processes in production, it is essential that this data remains accurate and up-to-date. This paper presents a novel approach to create a digital shadow of the plant layout, which allows the actual state of the physical layout to be continuously represented in virtual space. To capture the spatial positions and orientations of the individual facilities, a pan-tilt-zoom camera in combination with fiducial markers is used. With the help of a prototypically implemented system, the real plant layout was captured and converted into different data formats for further use in exemplary external software systems. This enabled the automatic updating of the plant layout for simulation, analysis and routing tasks in a case study and showed the benefits of using the proposed system for layout capturing in terms of accuracy and effort reduction.
This article provides a stochastic agent-based model to exhibit the role of aggregation metrics in mitigating polarization in a complex society. Our sociophysics model is based on interacting and nonlinear Brownian agents, which allow us to study the emergence of collective opinions. The opinion of an agent, x_i(t), is a continuous value in the interval [0, 1]. We find that (i) most agent metrics display similar outcomes; (ii) the middle metric and the noisy metric yield new opinion dynamics towards either assimilation or fragmentation; and (iii) a developed two-stage metric provides new insights about convergence and equilibria. In summary, our simulation demonstrates the power of institutions, which affect the emergence of collective behavior. Consequently, opinion formation in a decentralized complex society relies on individual information processing and the rules of collective behavior.
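A stripped-down version of such an agent update can be sketched as follows. This is an illustrative stand-in, not the paper's exact equations: the aggregation metric here is the population mean, the pull strength eta and noise level are arbitrary, and opinions are clipped to stay in [0, 1].

```python
import numpy as np

def step(opinions, eta=0.3, noise=0.02, rng=None):
    """One update of a simple Brownian-agent opinion model: each agent moves
    a fraction eta towards an aggregate of the population (here, the mean)
    plus Gaussian noise, clipped to the opinion interval [0, 1]."""
    rng = rng or np.random.default_rng()
    aggregate = opinions.mean()            # aggregation metric: population mean
    drift = eta * (aggregate - opinions)   # deterministic pull towards the aggregate
    diffusion = noise * rng.normal(size=opinions.shape)
    return np.clip(opinions + drift + diffusion, 0.0, 1.0)
```

Swapping `opinions.mean()` for the median (a "middle metric") or for a mean with added institutional noise changes the aggregation rule, which is exactly the lever the article studies.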
The aim of this article is to establish a stochastic search algorithm for neural networks based on the fractional stochastic processes {B_t^H, t ≥ 0} with the Hurst parameter H ∈ (0, 1). We define and discuss the properties of fractional stochastic processes, {B_t^H, t ≥ 0}, which generalize a standard Brownian motion. Fractional stochastic processes capture useful yet different properties in order to simulate real-world phenomena. This approach provides new insights into stochastic gradient descent (SGD) algorithms in machine learning. We exhibit convergence properties for fractional stochastic processes.
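A fractional Brownian motion path with a given Hurst parameter can be sampled, for illustration, by Cholesky factorization of its exact covariance (one of several standard methods; the article's algorithmic details may differ). For H = 0.5 the covariance reduces to min(s, t) and the path is a standard Brownian motion.

```python
import numpy as np

def fbm(n, H, T=1.0, rng=None):
    """Sample a fractional Brownian motion path B^H at n grid points in (0, T]
    via Cholesky factorization of the exact covariance
    Cov(B^H_s, B^H_t) = 0.5 * (s^(2H) + t^(2H) - |t - s|^(2H))."""
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)           # cov = L @ L.T
    return t, L @ rng.normal(size=n)      # correlated Gaussian path
```

H > 0.5 gives positively correlated (persistent) increments and H < 0.5 anti-persistent ones, which is the property that distinguishes a fractional search from plain SGD noise.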
This article examines the risks and societal costs associated with flexible average inflation targeting in the United States and symmetric inflation targeting in the Eurozone. Employing an empirical approach, we analyze monthly cumulative inflation gaps over a monetary policy horizon of 36 months. By investigating the trajectories of the cumulative inflation gaps, we find a heavy-tailed distribution and a 20 percent probability of over- and undershooting the inflation target. We exhibit that the offsetting mechanism introduced in the revised monetary strategies lacks credibility in ensuring price stability during a period of persistent inflation. Consequently, the credibility of central banks may be compromised. The policy implications are the integration of an escape clause and prompt monetary corrections in cases where the inflation goal is not achieved. This study provides insights for policymakers and central banks, emphasizing challenges in maintaining credibility and price stability within the new monetary strategies.
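The windowed cumulative gap described above can be computed directly from a monthly inflation series; a sketch follows (the exact construction of the article's metric may differ, e.g. in how the window is anchored or whether gaps are compounded rather than summed):

```python
import numpy as np

def cumulative_gap(inflation_yoy, target=2.0, horizon=36):
    """Rolling cumulative inflation gap over a policy horizon: for each window
    of `horizon` months, the sum of monthly deviations of year-on-year
    inflation from the target (in percentage points)."""
    gap = np.asarray(inflation_yoy) - target
    csum = np.concatenate([[0.0], np.cumsum(gap)])
    return csum[horizon:] - csum[:-horizon]  # sum over each 36-month window
```

A trajectory that persistently overshoots by 1 percentage point accumulates a 36-point gap over the horizon, which an offsetting ("make-up") strategy would then have to work off with below-target inflation.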
The paper “focuses on the critique of economic rationality” (p. 2). The author analyses the work by Amartya Sen with a somewhat interdisciplinary approach. The author concludes that Sen has greatly shifted our paradigm of economic rationality. The nexus of ethics and economics as well as the two types of rationality (consistency versus optimization) are major contributions of Sen, according to the author. In a nutshell, Sen’s work is reconfiguring economic rationality until today.
Smart factories, driven by the integration of automation and digital technologies, have revolutionized industrial production by enhancing efficiency, productivity, and flexibility. However, the optimization and continuous improvement of these complex systems present numerous challenges, especially when real-world data collection is time-consuming, expensive, or limited. In this paper, we propose a novel method for semi-automated improvement of smart factories using synthetic data and cause-effect relations, while incorporating the aspect of self-organization. The method leverages the power of synthetic data generation techniques to create representative datasets that mimic the behaviour of real-world manufacturing systems. These synthetic datasets, together with the cause-effect relations, serve as a valuable resource for factory optimization, as they enable extensive experimentation and analysis without the constraints of limited or costly real-world data. Furthermore, the method embraces the concept of self-organization within smart factories. By allowing the system to adapt and optimize itself based on feedback from the synthetic data and cause-effect relations, the factory can dynamically reconfigure and adjust its processes. To facilitate the improvement process, the method integrates the synthetic data and the cause-effect relations with advanced analytics and machine learning algorithms. This synergy between human expertise and technological advancements represents a compelling path towards a truly optimized smart factory of the future.
Determinants of customer recovery in retail banking - lessons from a German banking case study
(2023)
Due to the increased willingness of retail banking customers to switch and churn their banking relationships, a question arises: Is it possible to win back lost customers, and if so, is such a possibility even desirable after all economic factors have been considered? To answer these questions, this paper examines selected determinants for the recovery of terminated customer–bank relationships from the perspective of former customers. This study therefore evaluates for the first time, empirically and systematically with reference to a German Sparkasse as a case-study setting, whether lost customers have a sufficient general willingness to return (GWR) to a retail banking relationship. From our results, a correlation is shown between the GWR and some specific determinants: seeking variety, attractiveness of alternatives and customer satisfaction with the former business relationship. In addition, we show that a customer’s GWR varies depending on the reason for churn and is surprisingly greater when the customer defected for reasons that lie within the scope of the customers themselves. Despite the case-study character, our results provide relevant insights for other banks; in particular, this applies to countries with a comparable banking system.
Purpose
For the modeling, execution, and control of complex, non-standardized intraoperative processes, a modeling language is needed that reflects the variability of interventions. As the established Business Process Model and Notation (BPMN) reaches its limits in terms of flexibility, the Case Management Model and Notation (CMMN) was considered as it addresses weakly structured processes.
Methods
To analyze the suitability of the modeling languages, BPMN and CMMN models of a Robot-Assisted Minimally Invasive Esophagectomy and Cochlea Implantation were derived and integrated into a situation recognition workflow. Test cases were used to contrast the differences and compare the advantages and disadvantages of the models concerning modeling, execution, and control. Furthermore, the impact on transferability was investigated.
Results
Compared to BPMN, CMMN allows flexibility for modeling intraoperative processes while remaining understandable. Although more effort and process knowledge are needed for execution and control within a situation recognition system, CMMN enables better transferability of the models and therefore of the system. In conclusion, CMMN should be chosen either as a supplement to BPMN for flexible process parts that BPMN covers only insufficiently, or as a replacement for the entire process.
Conclusion
CMMN offers the flexibility for variable, weakly structured process parts, and is thus suitable for surgical interventions. A combination of both notations could allow optimal use of their advantages and support the transferability of the situation recognition system.
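The core difference the study evaluates can be sketched compactly: BPMN prescribes a fixed sequence, whereas CMMN tasks become available when their entry criteria (sentries) are satisfied by the case file. The surgical task names and conditions below are illustrative assumptions, not the paper's actual models.

```python
# Illustrative contrast (not the paper's implementation): a fixed BPMN-like
# sequence vs. CMMN-like tasks enabled by entry criteria ("sentries").

bpmn_sequence = ["incision", "dissection", "anastomosis", "closure"]

# CMMN-style: each task lists the case-file facts that must hold before it
# may start; beyond that, the order is free.
cmmn_sentries = {
    "incision":         set(),
    "dissection":       {"incision_done"},
    "anastomosis":      {"dissection_done"},
    "closure":          {"anastomosis_done"},
    "bleeding_control": set(),  # exception task, available at any time
}

def enabled_tasks(case_file, completed):
    """Return CMMN tasks whose entry criteria are satisfied and not yet done."""
    return sorted(t for t, cond in cmmn_sentries.items()
                  if cond <= case_file and t not in completed)

# At case start, both the planned first task and the always-available
# exception task are enabled -- something a strict BPMN sequence cannot express.
print(enabled_tasks(set(), set()))
```

This is the flexibility the abstract attributes to CMMN: weakly structured parts (such as handling unexpected bleeding) need no predefined position in the flow.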
The basis for developing future products in the automotive industry is finding creative and innovative solutions. Ideas can be found by means of creativity methods that support product developers throughout the creative process. Product developers are provided with a variety of different and new methods. This leads to a “method jungle” in which it is difficult for product developers to find the most suitable path. The successful use of methods in product development goes hand in hand with their acceptance and implementation. Despite the added value, only low usage is observed in the development process. The field of Creativity Support Tools (CSTs) also offers a wide variety of tools that support the creative process. However, a chasm exists between the many CSTs that are developed and what creative practitioners actually use. Therefore, previous studies iteratively developed a user-centered tool called “IDEA” that aims to respond to users’ needs. The question arises how IDEA performs in a real-life setting regarding its UX and usability as well as creativity-method acceptance and the level of mental workload.
Plasmonics and nanophotonics both deal with the interaction of light with structures of typically sub-wavelength size in one or more dimensions. Over the past decade or two, interest in these topics has grown significantly. This includes basic research towards a detailed understanding of light–matter interaction and the manipulation of light on the nanometer scale, as well as the search for applications ranging from quantum information processing, data storage, solar cells, spectroscopy, and microscopy to (bio-)sensors and biomedical devices. Key enablers for this development are advanced materials and the variety of techniques to structure them with nanometer precision on the one hand, and progress in theoretical description and numerical implementations on the other. Besides the traditional metals Au, Ag, Al, and Cu, compounds such as refractory metal nitrides with much higher durability, as well as semiconductors, dielectrics, and hybrid structures, have become of interest. Structuring techniques aim not only at the fabrication of individual elements with highest precision for detailed interaction analysis, but also at large-scale, low-cost nanofabrication, mostly for sensor applications. In the former case, mostly electron beam lithography and focused ion beam milling are employed, while for high throughput various forms of nanoimprint and self-assembly based techniques are favored. Thin film deposition and pattern transfer techniques are mostly derived from those developed for nano-electronics; more recently, however, methods such as electroless plating, atomic layer deposition or etching, and 3-D additive techniques have been appearing. Thus, highly specialized expertise has been acquired in the different disciplines, and successful research and technology transfer will draw from this pool of knowledge.
This project aims to evaluate existing big data infrastructures for their applicability in the operating room to support medical staff with context-sensitive systems. Requirements for the system design were generated. The project compares different data mining technologies, interfaces, and software system infrastructures with a focus on their usefulness in the peri-operative setting. The lambda architecture was chosen for the proposed system design, which will provide data for both postoperative analysis and real-time support during surgery.
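The serving idea behind the chosen lambda architecture can be sketched briefly: a precomputed batch view is merged with an incremental real-time view so that queries reflect both historical analysis and live intraoperative events. The metric names and numbers below are invented for illustration and are not from the project.

```python
# Minimal lambda-architecture serving sketch (illustrative names and values):
# a batch view computed offline plus a speed-layer view updated from the
# live stream, merged at query time.

batch_view = {"instrument_usage": {"scalpel": 120, "forceps": 85}}  # nightly batch job
realtime_view = {"instrument_usage": {"scalpel": 3}}                # live increments

def serve(metric, key):
    """Serving layer: combine batch-layer and speed-layer results for a query."""
    batch = batch_view.get(metric, {}).get(key, 0)
    speed = realtime_view.get(metric, {}).get(key, 0)
    return batch + speed

print(serve("instrument_usage", "scalpel"))  # historical count plus live updates
```

The design choice matches the abstract: the batch path supports postoperative analysis, while the speed path keeps answers current enough for real-time support during surgery.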
In recent years, both Artificial Intelligence (AI) and the integration of Variable Renewable Energy (VRE) have received increasing attention in scientific research. This article’s purpose is therefore to investigate the potential of Deep Learning (DL)-based applications for VRE and, as such, to provide an introduction to and structured overview of the field. First, we conduct a systematic literature review of the application of AI, especially DL, to the integration of VRE. Subsequently, we provide a comprehensive overview of specific DL-based solution approaches and evaluate their applicability, including a survey of the most applied and best-suited DL architectures. We identify ten DL-based approaches to support the integration of VRE in modern power systems. We find (I) solar PV and wind power generation forecasting, (II) system scheduling and grid management, and (III) intelligent condition monitoring to be three high-potential application areas.
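Application area (I), generation forecasting, can be made concrete with a tiny baseline. The sketch below is not a DL model and not from the survey: it implements the naive "persistence" forecast (tomorrow's value equals today's), which is the standard reference that learned forecasters are expected to beat; the hourly PV values are made up.

```python
# Illustrative PV forecasting baseline (assumed data, not from the article):
# the persistence forecast predicts y[t] = y[t-1] and serves as the
# reference error that DL forecasters must improve upon.

pv_output = [0.0, 0.1, 0.4, 0.9, 1.2, 1.1, 0.7, 0.2]  # kW, hourly samples

def persistence_forecast(series, horizon=1):
    """Forecast each step as the value observed `horizon` steps earlier."""
    return series[:-horizon]

def mae(actual, forecast):
    """Mean absolute error between aligned actual and forecast values."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(forecast)

forecast = persistence_forecast(pv_output)
error = mae(pv_output[1:], forecast)  # compare each hour with the previous one
print(round(error, 3))
```

A DL architecture from the survey (e.g. a recurrent or convolutional forecaster) would be evaluated with the same error metric against this baseline.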