Flame-retardant finishing of cotton fabrics using DOPO functionalized alkoxy- and amido alkoxysilane
(2023)
In the present study, DOPO-based alkoxysilane (DOPO-ETES) and amido alkoxysilane (DOPO-AmdPTES) were synthesized in a single step and without by-products as halogen-free flame retardants. The flame retardants were applied to cotton fabric using the sol–gel method and a pad-dry-cure finishing process. The flame retardancy, thermal stability and combustion behaviour of the treated cotton were evaluated by surface and bottom edge ignition flame tests (according to EN ISO 15025), thermogravimetric analysis (TGA) and micro-scale combustion calorimetry (MCC). Unlike the CO/DOPO-ETES sample, cotton treated with DOPO-AmdPTES nanosols exhibits self-extinguishing behaviour with a high char residue, an improved LOI value and a significant reduction of PHRR, HRC and THR compared to pristine cotton. Cotton finished with DOPO-AmdPTES proves semi-durable, keeping its flame-retardant properties unchanged after ten laundering cycles. According to the results obtained from TGA-FTIR, Py-GC/MS and XPS, the flame retardant acts mainly in the condensed phase via catalytically induced char formation as a physical barrier, along with gas-phase activity derived chiefly from a dilution effect. The early degradation of CO/DOPO-AmdPTES compared to CO/DOPO-ETES, triggered by the cleavage of the weak bond between P and C=O as indicated by the DFT study, explains the beneficial effect of this flame retardant on the fire resistance of cellulose.
In our initial DaMoN paper, we set out to revisit the results of “Staring into the Abyss [...] of Concurrency Control with [1000] Cores” (Yu in Proc. VLDB Endow 8: 209-220, 2014). Contrary to their assumption, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware is prevalent and in fact offers over 1000 cores. Hence, we evaluated concurrency control (CC) schemes on a real (Intel-based) multi-socket platform. To our surprise, we made interesting findings opposing the results of the original analysis, which we discussed in our initial DaMoN paper. In this paper, we broaden our analysis further, detailing the effect of hardware and workload characteristics via additional real hardware platforms (IBM Power8 and 9) and the full TPC-C transaction mix. Among others, we identified clear connections between the performance of the CC schemes and hardware characteristics, especially concerning NUMA and the CPU cache. Overall, we conclude that no CC scheme can efficiently make use of large multi-socket hardware in a robust manner, and we suggest several directions for how CC schemes and OLTP DBMS overall should evolve in the future.
Glioblastoma WHO grade IV belongs to a group of brain tumors that are still incurable. A promising treatment approach applies photodynamic therapy (PDT) with hypericin as a photosensitizer. To generate a comprehensive understanding of the photosensitizer-tumor interactions, the first part of our study focuses on investigating the distribution and penetration behavior of hypericin in glioma cell spheroids by fluorescence microscopy. In the second part, fluorescence lifetime imaging microscopy (FLIM) was used to correlate fluorescence lifetime (FLT) changes of hypericin with environmental effects inside the spheroids. In this context, 3D tumor spheroids are an excellent model system since they reproduce 3D cell–cell interactions and an extracellular matrix similar to tumors in vivo. Our analytical approach considers hypericin as a probe molecule for FLIM and as a photosensitizer for PDT at the same time, making it possible to draw direct conclusions about the state and location of the drug in a biological system. Knowing both the state and the location of hypericin makes a fundamental understanding of the impact of hypericin PDT in brain tumors possible. Following different incubation conditions, the hypericin distribution in peripheral and central cryosections of the spheroids was analyzed. Both fluorescence microscopy and FLIM revealed a hypericin gradient towards the spheroid core for short incubation periods or small concentrations, whereas a homogeneous hypericin distribution was observed for long incubation times and high concentrations. The observed FLT change is especially crucial for the PDT efficiency, since the triplet yield, and hence the O2 activation, is directly proportional to the FLT. Based on the FLT increase inside spheroids, an incubation time of 30 min is required to achieve the most suitable conditions for an effective PDT.
The early detection of head and neck cancer remains a challenging task. It requires precise and accurate identification of tissue alterations as well as a clear discrimination of cancerous from healthy tissue areas. A novel approach for this purpose uses microspectroscopic techniques with a special focus on hyperspectral imaging (HSI) methods. Our proof-of-principle study presents the implementation and application of darkfield elastic light scattering spectroscopy (DF ELSS) as a non-destructive, high-resolution, and fast imaging modality to distinguish healthy from altered lingual tissue regions in a mouse model. The main aspect of the study is the comparison of two HSI detection principles, point-by-point and line scanning imaging, and whether one might be more appropriate for differentiating several tissue types. Statistical models are formed by applying a principal component analysis (PCA) with Bayesian discriminant analysis (DA) to the elastic light scattering (ELS) spectra. Overall accuracy, sensitivity, and precision values of 98% are achieved for both models, while the overall specificity reaches 99%. An additional classification of model-unknown ELS spectra is performed. The predictions are verified with histopathological evaluations of identical HE-stained tissue areas to prove the models' capability of tissue distinction. In the context of our proof-of-principle study, we assess the pushbroom PCA-DA model to be more suitable for tissue type differentiation and thus tissue classification. In addition to the HE examination in head and neck cancer diagnosis, the use of HSI-based statistical models might be conceivable in daily clinical routine.
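As a rough illustration of such a spectra-classification pipeline, the sketch below reduces spectra to their leading principal component (via power iteration) and classifies by the nearest class centroid. This is a deliberately simplified stand-in for the study's PCA with Bayesian DA; all spectra, dimensions, and class labels here are synthetic assumptions, not the study's data.

```python
# Simplified PCA + nearest-centroid discriminant, pure standard library.
# The real pipeline keeps several components and a Bayesian decision rule.

def mean_center(X):
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    return [[row[j] - mu[j] for j in range(d)] for row in X], mu

def first_pc(X, iters=200):
    # Power iteration on X^T X yields the leading principal component.
    d = len(X[0])
    v = [1.0] * d
    for _ in range(iters):
        Xv = [sum(row[j] * v[j] for j in range(d)) for row in X]
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(row, mu, v):
    # Score of one spectrum along the leading component.
    return sum((row[j] - mu[j]) * v[j] for j in range(len(v)))

def fit(X, y):
    Xc, mu = mean_center(X)
    v = first_pc(Xc)
    scores = {}
    for xi, yi in zip(X, y):
        scores.setdefault(yi, []).append(project(xi, mu, v))
    centroids = {c: sum(s) / len(s) for c, s in scores.items()}
    return mu, v, centroids

def predict(row, mu, v, centroids):
    s = project(row, mu, v)
    return min(centroids, key=lambda c: abs(s - centroids[c]))
```

On synthetic "healthy" vs. "altered" spectra separated by a constant intensity offset, the one-component projection already separates the classes cleanly.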
Der Girlboss Mythos : die gesellschaftlichen und ökonomischen Perspektiven der Gender-Debatte
(2019)
De facto, women today have equal rights. They have the same opportunities, rights and possibilities as men. Nevertheless, authoritative studies indicate that the number of women at all management levels is stagnating or growing only at a snail's pace. In the media discussion around the topic of women in management, the world appears at first glance to be divided into two camps. One camp soberly concludes that women themselves are to blame for their situation; here, successful women in particular are often quoted denying their female peers the necessary will to succeed or readiness to make sacrifices. The other camp seems to judge the situation in exactly the opposite way: everywhere, well-educated, highly motivated women hitting glass ceilings or finding doors blocked by society in general and men in particular. This book contributes to a scientifically sober discussion, examining the current socio-political situation in a more differentiated way, beyond well-worn dogmas.
Most innovation projects in companies fail not for lack of ideas, creativity or the will to implement them, but because of many small hurdles that massively slow the projects down. Initiatives thus lose the momentum that ensures quick successes. One field in which results are achieved unconventionally, agilely and quickly is guerrilla marketing. What can innovation, research and project managers learn from its toolbox? How can concrete tactics from marketing also give innovation projects more virality and momentum, making the initiatives' own dynamic "unstoppable"? This essential provides the answers.
Hybrid work models are considered the future of work. This research therefore examines hybrid work models in German small and medium-sized enterprises (SMEs) compared to large companies. Using a multi-method study consisting of a survey and qualitative expert interviews, it evaluates to what extent hybrid work models are already established in SMEs and which challenges they have to overcome. In addition, it considers whether sociodemographic factors such as age, gender or role in the company influence hybrid working. The results show that the establishment of hybrid work models is less advanced in SMEs than in large companies. SMEs face diverse challenges, attributable for example to insufficient digitalization or more traditional structures. The corporate culture, the role within the company and the influence of managers play a particularly important part. Practical relevance: most of the existing literature on New Work and hybrid work focuses on all company sizes taken together or on large companies. Owing to SMEs' specific characteristics, such as limited access to resources, results from large companies can hardly be transferred to SMEs. This thesis therefore offers guidance on how hybrid work models can be implemented sensibly and profitably in SMEs and which challenges arise.
Forecasting demand is challenging. Different products exhibit different demand patterns: while demand may be constant and regular for one product, it may be sporadic for another, and where demand does occur, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today a large number of models are run against a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper we show that a machine learning classification algorithm can predict the best possible model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The classification algorithm achieves a mean ROC-AUC of 89%, which underlines the skill of the model.
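The idea of choosing a forecast model from time-series characteristics can be sketched as follows. This is not the paper's trained classifier: it uses the well-known Syntetos-Boylan demand-pattern classification (ADI and CV² with the standard cut-offs 1.32 and 0.49), and the pattern-to-model mapping at the end is purely illustrative.

```python
def demand_features(series):
    # ADI: average interval between non-zero demands; CV^2: squared
    # coefficient of variation of the non-zero demand sizes.
    nonzero = [x for x in series if x > 0]
    if not nonzero:
        return float("inf"), 0.0
    adi = len(series) / len(nonzero)
    mean = sum(nonzero) / len(nonzero)
    var = sum((x - mean) ** 2 for x in nonzero) / len(nonzero)
    return adi, var / (mean ** 2)

def classify_pattern(series):
    # Standard Syntetos-Boylan cut-offs: ADI 1.32, CV^2 0.49.
    adi, cv2 = demand_features(series)
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"

RECOMMENDED_MODEL = {  # illustrative mapping, not the paper's learned one
    "smooth": "exponential smoothing",
    "erratic": "gradient-boosted trees",
    "intermittent": "Croston",
    "lumpy": "Croston (SBA variant)",
}
```

A regular series maps to "smooth", a series with many zero periods to "intermittent", and each pattern then selects its candidate model without running every model over a test period.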
The basic idea behind a wearable robotic grasp assistance system is to support people who suffer from severe motor impairments in daily activities. Such a system needs to act mostly autonomously and according to the user's intent. Vision-based hand pose estimation could be an integral part of a larger control and assistance framework. In this paper we evaluate the performance of egocentric monocular hand pose estimation for a robot-controlled hand exoskeleton in a simulation. For hand pose estimation we adopt a convolutional neural network (CNN). We train and evaluate this network on computer graphics created by our own data generator. To guide further design decisions, our experiments focus on two egocentric camera viewpoints tested on synthetic data with the help of a 3D-scanned hand model, with and without an exoskeleton attached to it. We observe that hand pose estimation with a wrist-mounted camera is more accurate than with a head-mounted camera in the context of our simulation. Further, a grasp assistance system attached to the hand alters the visual appearance and can improve hand pose estimation. Our experiment provides useful insights for the integration of sensors into a context-sensitive analysis framework for intelligent assistance.
Additive manufacturing (AM) is increasingly used in the industrial sector as a result of continuous development. Within the production planning and control (PPC) system, AM enables an agile response in detailed and process planning, especially for a large number of plants. For this purpose, a concept for a PPC system for AM is presented that takes into account the requirements for integration into the operational enterprise software system. Its technical applicability is demonstrated by individually implemented sections. The presented approach promises more efficient utilization of the plants and more flexible use.
Military organizations have special features, such as following different organizational laws in times of peace and war and their specific embeddedness in society and politics. Especially the latter aspect has made the military an important object of study since the beginnings of modern sociology. In the wake of establishing specific sociological accounts, military sociology has developed, dedicated to the different facets of the military. This research draws on different theoretical perspectives but has so far hardly embraced the frameworks of the economics and sociology of conventions (EC/SC). The aim of this chapter is to explore and demonstrate the potential of this approach. In a first step, the state of the art of military sociology research is outlined, and potential avenues for analyzing military forces based on EC/SC are identified. It is argued that especially the connection to organizational theory (the military as an organization) and to civil-military relations, including leadership and professionalism, offers starting points. After introducing existing studies that address military-related topics with reference to EC/SC, relevant concepts and approaches of convention theory that prove particularly enriching for military research are discussed. An outlook on possible further fields and topics of research is given to illustrate how the perspective of EC/SC could be included.
Purpose
Injury or inflammation of the middle ear often results in persistent tympanic membrane (TM) perforations, leading to conductive hearing loss (HL). However, in some cases the magnitude of HL exceeds that attributable to the TM perforation alone. The aim of this study is to better understand the effects of the location and size of TM perforations on the sound transmission properties of the middle ear.
Methods
The middle ear transfer functions (METF) of six human temporal bones (TB) were compared before and after perforating the TM at different locations (anterior or posterior lower quadrant) and to different degrees (1 mm, ¼ of the TM, ½ of the TM, and full ablation). The sound-induced velocity of the stapes footplate was measured using single-point laser-Doppler-vibrometry (LDV). The METF were correlated with a Finite Element (FE) model of the middle ear, in which similar alterations were simulated.
Results
The measured and calculated METF showed frequency- and perforation-size-dependent losses at all perforation locations. Starting at low frequencies, the loss extended to higher frequencies with increasing perforation size. In direct comparison, posterior TM perforations affected the transmission properties to a larger degree than anterior perforations. The asymmetry of the TM causes the malleus-incus complex to rotate, resulting in larger deflections in the posterior than in the anterior TM quadrants. Simulations in the FE model with a sealed cavity show that small perforations lead to a decrease in TM rigidity and thus to an increase in the oscillation amplitude of the TM, mainly above 1 kHz.
Conclusion
Size and location of TM perforations have a characteristic influence on the METF. The correlation of the experimental LDV measurements with an FE model contributes to a better understanding of the pathologic mechanisms of middle-ear diseases. If small perforations with significant HL are observed in daily clinical practice, additional middle ear pathologies should be considered. Further investigations on the loss of TM pretension due to perforations may be informative.
Kennzahlen zur Liquidität
(2016)
Wege der Gewinnermittlung
(2017)
If a company makes a profit, this does not necessarily mean that everything is settled. The decisive question is how the profit was determined, for only the right method yields the appropriate perspective: on the success of an individual transaction, on the profit of a period, on the operating assets, on liquidity, or on the balance sheet.
EBIT & Co.
(2017)
A whole range of key figures is used in business administration to determine and manage corporate profit. Yet not all of them are suited to the same purpose. Depending on the question at hand, different key figures should be used, and their interpretation must, not least, be industry-specific.
Whoever wants to bring about change through arguments must win over their counterparts to their proposed solutions. Whether this succeeds is nowadays no longer a question of rhetorical talent and charisma: storylining and storytelling techniques make the professionalization of business argumentation and reasoning possible for everyone.
In a networked world, companies depend on fast and smart decisions, especially when it comes to reacting to external change. With the wealth of data available today, smart decisions can increasingly be based on data analysis and be supported by IT systems that leverage AI. A global pandemic brings external change to an unprecedented level of unpredictability and severity of impact. Resilience therefore becomes an essential factor in most decisions when aiming at making and keeping them smart. In this chapter, we study the characteristics of resilient systems and test them with four use cases in a wide-ranging set of application areas. In all use cases, we highlight how AI can be used for data analysis to make smart decisions and contribute to the resilience of systems.
Prior to the introduction of AI-based forecast models in the procurement department of an industrial retail company, we assessed the digital skills of the procurement employees and surveyed their attitudes toward a new digital technology. The aim of the survey was to ascertain important contextual factors that are likely to influence the acceptance and successful use of the new forecast tool. We find that the employees' digital skills are at an intermediate level and that their attitudes toward key aspects of new digital technologies are largely positive. Thus, the conditions for high acceptance and successful use of the models are good, as evidenced by the procurement staff's high intention to use them. In line with previous research, we find that the perceived usefulness of a new technology and its perceived ease of use are significant drivers of the willingness to use the new forecast tool.
Due to the consequential impact of technological breakdowns, companies have to be prepared to deal with breakdowns or, even better, prevent them. In today's information technology, several methods and tools exist to mitigate this concern. This paper therefore deals with the initial determination of a resilient enterprise architecture supporting predictive maintenance in the information technology domain and, furthermore, concerns several mechanisms for reactively and proactively securing the state of resiliency on several abstraction levels. The objective of this paper is to give an overview of existing mechanisms for resiliency and to describe the foundation of an optimized approach combining infrastructure and process mining techniques.
Context
Web APIs are one of the most used ways to expose application functionality on the Web, and their understandability is important for efficiently using the provided resources. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking.
Objective
We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) whether rule violations are also perceived as more difficult to understand, and 3) whether demographic attributes like REST-related experience have an influence on this.
Method
We conducted a controlled Web-based experiment with 105 participants, from both industry and academia and with different levels of experience. Based on a hybrid between a crossover and a between-subjects design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a rule and one that was a violation of this rule. Participants answered comprehension questions and rated the perceived difficulty.
Results
For 11 of the 12 rules, we found that the violation performed significantly worse than the rule-adhering version in the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance for the violations.
Conclusions
Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
While several service-based maintainability metrics have been proposed in the scientific literature, reliable approaches to automatically collect these metrics are lacking. Since static analysis is complicated for decentralized and technologically diverse microservice-based systems, we propose a dynamic approach that calculates such metrics from runtime data gathered via distributed tracing. The approach focuses on simplicity, extensibility, and broad applicability. As a first prototype, we implemented a Java application with a Zipkin integrator, 23 different metrics, and five export formats. We demonstrated the feasibility of the approach by analyzing the runtime data of an example microservice-based system. During an exploratory study with six participants, 14 of the 18 services were invoked via the system's web interface. For these services, all metrics were calculated correctly from the generated traces.
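To illustrate how such metrics can be derived from tracing data, the hypothetical sketch below computes one coupling-style metric, the number of distinct downstream services each service calls, from simplified spans. The field names (`id`, `parentId`, `localEndpoint.serviceName`) follow Zipkin's JSON v2 span format, but the span dictionaries are reduced toy examples, not real Zipkin output, and this is not the paper's Java prototype.

```python
def service_calls(spans):
    # Map span id -> owning service, then count, per service, the distinct
    # downstream services reached via parent/child span relations.
    by_id = {s["id"]: s["localEndpoint"]["serviceName"] for s in spans}
    calls = {}
    for s in spans:
        parent = s.get("parentId")
        if parent and parent in by_id:
            caller = by_id[parent]
            callee = s["localEndpoint"]["serviceName"]
            if caller != callee:  # ignore intra-service child spans
                calls.setdefault(caller, set()).add(callee)
    return {svc: len(targets) for svc, targets in calls.items()}
```

For a trace in which a gateway calls two services and one of those calls a third, the metric reports an outgoing coupling of 2 for the gateway and 1 for the intermediate service.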
Software evolvability is an important quality attribute, yet one that is difficult to grasp. A certain base level of it is allegedly provided by service- and microservice-based systems, but many software professionals lack a systematic understanding of the reasons and preconditions for this. We address this issue via the proxy of architectural modifiability tactics. By qualitatively mapping principles and patterns of Service-Oriented Architecture (SOA) and microservices onto tactics and analyzing the results, we can not only generate insights into service-oriented evolution qualities but also provide a modifiability comparison of the two popular service-based architectural styles. The results suggest that both SOA and microservices possess several inherent qualities beneficial for software evolution. While both focus strongly on loose coupling and encapsulation, there are also differences in the way they strive for modifiability (e.g. governance vs. evolutionary design). To leverage the insights of this research, however, it is necessary to find practical ways to incorporate the results as guidance into the software development process.
While many maintainability metrics have been explicitly designed for service-based systems, tool-supported approaches to automatically collect these metrics are lacking. Especially in the context of microservices, decentralization and technological heterogeneity may pose challenges for static analysis. We therefore propose the modular and extensible RAMA approach (RESTful API Metric Analyzer) to calculate such metrics from machine-readable interface descriptions of RESTful services. We also provide prototypical tool support, the RAMA CLI, which currently parses the formats OpenAPI, RAML, and WADL and calculates 10 structural service-based metrics proposed in scientific literature. To make RAMA measurement results more actionable, we additionally designed a repeatable benchmark for quartile-based threshold ranges (green, yellow, orange, red). In an exemplary run, we derived thresholds for all RAMA CLI metrics from the interface descriptions of 1,737 publicly available RESTful APIs. Researchers and practitioners can use RAMA to evaluate the maintainability of RESTful services or to support the empirical evaluation of new service interface metrics.
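As a toy illustration of computing structural metrics from machine-readable interface descriptions (not the RAMA CLI itself, and far simpler than its 10 metrics), the sketch below counts operations and average path depth in a minimal OpenAPI-style paths object:

```python
# HTTP methods that count as operations in an OpenAPI path item.
HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def interface_metrics(openapi):
    # `openapi` is a minimal dict mimicking an OpenAPI document; only the
    # `paths` object is inspected here.
    paths = openapi.get("paths", {})
    ops = sum(1 for item in paths.values()
              for method in item if method.lower() in HTTP_METHODS)
    depths = [len([seg for seg in p.split("/") if seg]) for p in paths]
    avg_depth = sum(depths) / len(depths) if depths else 0.0
    return {"operations": ops, "avg_path_depth": avg_depth}
```

Against such metric values, quartile-based thresholds like those the paper derives from 1,737 public APIs could then flag interfaces as green, yellow, orange, or red.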
Gamification is a recognized method of motivating people in various life processes, and it has spread to many spheres of life, including healthcare. This article proposes a system design for long-term care patients using this method. The proposed system aims to increase patient engagement in the treatment and rehabilitation process via gamification. Literature research on available and previously proposed systems was conducted to develop a suitable system design. The primary target group comprises bedridden patients and patients with a sedentary lifestyle (predominantly lying in bed). One of the main criteria for selecting a suitable option was its contactless realization for the mentioned target groups in long-term care. As a result, we developed a system design for hardware and software that could prevent bedsores and other health problems caused by low activity. The proposed design can be tested in hospitals, nursing homes, and rehabilitation centers.
The article analyzes experimentally and theoretically the influence of microscope parameters on pinhole-assisted Raman depth profiles in uniform and composite refractive media. The main objective is the reliable mapping of deep sample regions. The results easiest to interpret are obtained with low magnification, low aperture, and small pinholes. Here, the intensities and shapes of the Raman signals are independent of the location of the emitter relative to the sample surface. Theoretically, the results are well described by a simple analytical equation containing the axial depth resolution of the microscope and the position of the emitter. The smallest determinable object size is limited to 2–4 μm. If sub-micrometer resolution is desired, high magnification, usually combined with high aperture, becomes necessary. The signal intensities and shapes then depend, in refractive media, on the position relative to the sample surface. This aspect is investigated on a number of uniform and stacked polymer layers, 2–160 μm thick, with the best available transparency. The experimental depth profiles are numerically fitted with excellent accuracy by inserting a Gaussian excitation beam of variable waist and fill fraction through the focusing lens area, and by treating the Raman emission with geometric optics as a spontaneous isotropic process through the lens and the variable pinhole, respectively. The intersection of these two solid angles yields the leading factor in understanding confocal (pinhole-assisted) Raman depth profiles.
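The low-magnification case, where the profile depends only on the axial depth resolution and the emitter position, can be illustrated numerically: a layer's ideal box profile convolved with a Gaussian axial response of a given FWHM. This is a generic confocal-microscopy textbook model sketched under assumed parameters (layer boundaries and resolution in micrometers), not the article's fitting code.

```python
import math

def gaussian_response(z, fwhm):
    # Axial instrument response with depth resolution given as FWHM.
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-z * z / (2.0 * sigma * sigma))

def depth_profile(layer_top, layer_bottom, fwhm, z_values):
    # Measured intensity at focus depth z: the true box profile (1 inside
    # the layer, 0 outside) convolved with the Gaussian axial response,
    # integrated numerically over a window well beyond the layer.
    profile = []
    dz = 0.1
    for z in z_values:
        acc, norm = 0.0, 0.0
        u = layer_top - 5.0 * fwhm
        while u <= layer_bottom + 5.0 * fwhm:
            w = gaussian_response(z - u, fwhm)
            norm += w
            if layer_top <= u <= layer_bottom:
                acc += w
            u += dz
        profile.append(acc / norm)
    return profile
```

With a 10 μm layer and a 2 μm depth resolution, the modeled signal is essentially 1 at the layer center and 0 far outside, with edges smeared over roughly the resolution length, which is why objects much smaller than the resolution cannot be sized reliably.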
Gender pay gaps are commonly studied in populations with already completed educational careers. We focus on an earlier stage by investigating the gender pay gap among university students working alongside their studies. With data from five cohorts of a large-scale student survey from Germany, we use regression and wage decomposition techniques to describe gender pay gaps and potential explanations. We find that female students earn about 6% less on average than male students, which reduces to 4.1% when accounting for a rich set of explanatory variables. The largest explanatory factor is the type of jobs male and female students pursue.
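The decomposition logic referred to above can be sketched in miniature. The following is a two-fold Oaxaca-Blinder-style decomposition with a single covariate and one group's coefficients as the reference structure, a drastically simplified, hypothetical version of a wage decomposition with a rich set of explanatory variables; the data in the usage example are invented.

```python
def ols_slope_intercept(x, y):
    # Simple one-covariate OLS fit.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    slope = cov / var
    return my - slope * mx, slope

def oaxaca_twofold(x_m, y_m, x_f, y_f):
    # Two-fold decomposition: raw gap = explained part (differences in the
    # covariate, priced at the reference group's coefficient) + unexplained
    # part. Here the male equation serves as the reference structure.
    _, b_m = ols_slope_intercept(x_m, y_m)
    gap = sum(y_m) / len(y_m) - sum(y_f) / len(y_f)
    mean_xm = sum(x_m) / len(x_m)
    mean_xf = sum(x_f) / len(x_f)
    explained = b_m * (mean_xm - mean_xf)
    return gap, explained, gap - explained
```

If wages depend only on the covariate (say, a job-type score) and male students simply hold higher-scored jobs, the whole gap is "explained" and the unexplained component is zero, mirroring the paper's finding that job type is the largest explanatory factor.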
The aim of this contribution is to work out sensory-aesthetic modes of access to the world, using design for pedagogical contexts as an example. At the same time, it seeks to set in motion a reflection on the potential that transdisciplinary design processes hold for learning and teaching. First, the transdisciplinary nature of design is derived historically and various processes are examined. The following chapter then works out, under the notion of Gestaltung, the characteristics and qualities of professional design that underlie all of these processes. Finally, the concept of design thinking is discussed against the background of experience-based learning in school education and substantiated with examples from empirical studies.
This study examines the short-term influence of family day care on child development compared to care in a day-care center (Kita). International studies suggest that attending family day care tends to have more negative effects on children than attending a day-care center. Using the newborn cohort of the NEPS, we can evaluate whether this also holds in the German context. We use two different methodological approaches to estimate the effect of family day care. Our results show that, for the majority of the development indicators examined, family day care has no statistically significantly worse influence on child development, except in the area of habituation.
Effective risk management should consider not only quantifiable, known risks but also events that have either already occurred in similar form or are fundamentally conceivable. To identify these "grey swans", institutional-organizational preconditions must be created and analytical-conceptual instruments must be provided.
Geopolitical risks have been highly relevant to the success and survival of companies, and not only since the outbreak of the war in Ukraine. Only by building up the methodological competence to identify these particular risks do companies create the necessary preconditions for successfully managing geopolitical events.
A realistic risk assessment is the basis of responsible business decisions. But how can risks be assessed correctly? Various risk management instruments make it possible to systematically identify, quantify, evaluate and document risks.
Risks are not bad per se, as long as the return achieved is adequate for the risk taken. This relationship is, however, not always understood, which was one of the reasons for the financial crisis of 2008/09. The key figures presented in this contribution show how risks can be related to achieved or potential returns.
The purpose of this paper is to give an overview of the links between fashion businesses and film from a fashion business perspective. It focuses on the idea that digitalization has brought much more use of film to the fashion industry and that this development has only just begun. This change also has an intense impact on the fashion industry, as fashion companies nowadays are content producers with films, too. The resulting closer connection with viewers via social media exposes fashion companies, but on the other hand gives new potential for influence to the fashion system. In-depth future research on the fashion and film system is therefore required to develop answers to the current situation. This article should be read as a personal viewpoint of the author on this topic rather than as a research paper based on the usual methodological criteria.
Today, digitalization is firmly anchored in society and business. It is also recognized to have a significant impact on the retailing sector. The in-store display of moving images has, however, so far gained little attention from researchers. The aim of this research is to provide a first estimation of the current state of moving image distribution in stationary retail stores. A store check was the basis for analysis and evaluation. In sum, 152 stores were analyzed in Stuttgart, Germany. Out of the 152 observed stores, 62 showed 177 moving images. Detailed analyses of the content, mood, color and actors of the motion pictures showed that all aspects are very well harmonized with the target group of the store. The chapter provides a basic estimation of the in-store diffusion of moving images. Thereby, avenues for further research are opened up.
This chapter provides insights into the future of fashion film with respect to augmented reality and virtual reality technologies. The question considered is therefore: How do augmented reality and virtual reality influence the future of fashion film? It is important to analyze the influence of those technologies on fashion films to assess the potential for fashion retailers and, in the best case, gain first-mover advantages. To answer the stated research question, a literature review was conducted to gain insights into the topic and its influence on fashion filming. An explanation of augmented reality and virtual reality is provided, as well as implications for the retail sector regarding fashion films. Moreover, examples of companies already using this approach have been compiled. Furthermore, an empirical research part was conducted, using a survey method based on an online survey design. The questionnaire builds on what was revealed in the literature to gain in-depth insights and approval. The data gained indicated that augmented reality and virtual reality influence the future of fashion film in various ways. The findings highlight how important those technologies can be for enhancing customer experience and engagement. Regarding the research question, the conclusion can be drawn that it is highly important for fashion managers to take future developments like augmented reality and virtual reality into account to stay competitive and satisfy the requirements of modern consumers.
This chapter discusses German television as a platform for fashion content and, in that context, streaming services as possible alternatives. Three German television channels were monitored over the period of one month, as well as the two most popular streaming services in Germany and the online media library of one German television channel over six months, regarding length, fashion connection, transmission time and success. Additionally, fashion advertisement was analyzed for three channels. Broadcasting the most contributions with a fashion connection in one month, VOX was the most fashionable channel. As television primarily aims to entertain, informative contributions about fashion form a minority. Streaming services offer more flexibility, which users are asking for. All three television stations show fashion brand spots during prime time. ProSieben and sixx in particular cooperate closely with several fashion brands. Fashion advertising therefore seems to be preferably inserted into fashion-related series.
Based on new ways of watching series via streaming platforms and a change in buying behavior, advertising needs to focus on new strategies. Branded entertainment gives brands the opportunity to integrate their product placements more deeply into television show plots. From a managerial perspective, this increases advertising effectiveness. The series ‘Sex and the City’ exemplifies successful branded entertainment and shows how series influence fashion nowadays. The placements are outstanding when it comes to storytelling around the brand or product, setting trends and creating a character connection plus a desire through identification. This chapter shows success factors and opportunities of placements for the fashion industry.
Hip-hop culture defines itself through four central pillars: DJing, MCing, breakdancing and graffiti, but a fifth one, fashion, may be emerging. Hip-hop has become the most popular music genre, and the influence it has on society is undebatable. But as hip-hop artists increasingly underpin their music with visual components like music videos, the question arises whether that has an influence on the fashion industry. This chapter clarifies which factors may determine a fashion business impact and discusses differences between mainstream hip-hop artists and those who are also active in the fashion industry. The focus lies on the way and the extent to which fashion is presented in the music videos. 24 music videos were analyzed: 15 popular records from the past three years and nine from artists who are already considered fashion-influential. Additionally, a fashion influence index was created to compare the degree of fashion between the music videos. The numbers of styles, recognized brands, fashion-related song verses, fashion-related description box mentions and articles about the fashion in the music video were noted. Findings reveal that the number of outfits shown in a video did not have a direct link to the amount of traffic it produced in fashion media. The artists considered influential in the fashion industry name brands in their song lyrics more often and show brand logos more frequently in their music videos than others. Over the observed years, though, a rise in fashion awareness can be seen for the mainstream hip-hop artists through a higher number of styles, recognizable brands and fashion-related verses in the lyrics.
An event film is a successful marketing and communication instrument, which can be used by companies alongside social media. By reaching the target group and potential customers, companies can benefit from increasing brand awareness. It is striking that there is a lack of information about how event films are used with regard to showing fashion. To establish the subject further, the purpose of this paper is to enrich the existing findings and analyze the influence event films have. In an empirical study, the performance of two events and the two related fast fashion retailers H&M and Zara on Instagram and YouTube regarding event- and fashion-connected films is analyzed. Identified stylistic elements of event fashion are searched for and found in their online shops. Since emotions are transferred especially well through event films, there is an indication that they contribute to the shaping of fashion trends.
A case study with four German fashion retail brands was conducted in order to measure the performance of their Omnichannel services. In detail, their Click & Collect service was analyzed. Click & Collect was one of the first Omnichannel services introduced in fashion retailing. Omnichannel services integrate different sales and communication channels so that offline, online, and mobile app touchpoints provide a seamless customer journey experience. The Omnichannel performance of the four retailers Decathlon, Hunkemöller, Massimo Dutti and Galeria Kaufhof was measured via mystery shopping. A seamless customer journey experience is not yet a standard in German fashion retailing. The four companies differ in many process details. The biggest market potential, and the recommendation for further research, emerges from deficits in the offline store Omnichannel customer experience. Here, all four case companies have room to improve. The best overall results regarding the integration of offline, online and mobile shops were found with Hunkemöller, followed by Decathlon, Massimo Dutti, and Galeria Kaufhof.
The purpose of this paper is to determine the relevance of social media for luxury brand management. It employs a multi-methodological approach: after analyzing the online performance of the three luxury brands Burberry, Louis Vuitton and Gucci, the empirical research includes a survey as well as an eye-tracking test executed with Tobii Studio. The findings reveal that online and social media have given luxury fashion businesses the opportunity to establish sustainable interaction with their customers and distinguish themselves from the competition. Still, the online business holds many challenges for luxury companies to overcome. This paper gives instructions as to how social media can be effectively incorporated into a luxury company.
Instagram fashion videos
(2020)
Instagram is one of the most used social media platforms for sharing photos and videos. It can therefore be seen as a helpful opportunity for companies to use the platform as a marketing tool in order to spread information to a wide range of potential customers. Ever since its launch, Instagram has been strongly connected to fashion, which makes the platform particularly interesting for fashion brands. According to the screened literature, most brands use Instagram for marketing purposes, and the utilization of videos plays a decisive role. Following up on this, the question arises how brands use videos on Instagram for marketing purposes. This chapter therefore aims to investigate the extent to which brands make use of videos on Instagram, what the goals of the videos are and which videos are most effective in terms of user engagement. More specifically, this chapter includes an empirical study which examines the Instagram profiles of nine selected brands from the categories lifestyle, luxury and fashion, and sportswear with respect to the underlying research question. A subsequent evaluation and discussion of the results depicts differences and similarities within and between the categories. All in all, the results of the study show that fashion brands use films as a marketing tool on Instagram. The content and types of films thereby heavily depend on the brand category.
The purpose of this paper is to investigate how motion pictures are currently used for the product presentation of fashion articles. An explorative approach was chosen for the literature section. This study shows that moving images can be used for the presentation of fashion articles in online shops in numerous different ways. In order to use product presentation videos meaningfully, one should consider exactly what the purpose of these videos is. Different goals require different means. However, retailers should obtain enough information in advance to assess whether they can afford the production and post-processing of these videos.
The purpose of this paper is to investigate how motion pictures are currently used for the product presentation of fashion articles in online shops in the German, American and British markets. This study shows that the use of moving images for the presentation of fashion articles in online shops is underutilized. With the amount of data that was manageable within the scope of this chapter, no valid generalizations can be made; all described results must be understood as an indication. In order to use product presentation videos meaningfully, one should consider in advance exactly what the purpose of these videos is. Different goals require different means. However, retailers should obtain enough information in advance to assess whether they can afford the production and post-processing of these videos.
This chapter looks at the usage of image films produced by fashion brands about themselves. It focuses on analyzing important film parameters, the content and the way such films can influence brand image. A list of 70 fashion brands from different categories was gathered through a survey and confirmed by comparing the results with relevant literature. All 70 brands were examined for relevant self-referencing films. The films had to be produced by the brands themselves; videos for advertisement or for promoting collections were not considered. In total, 22 films from 17 brands were analyzed. Results show that most brands seem to have recognized videos as a powerful marketing tool in the social media age. Many brands, however, seem to struggle with the compliance of certain parameters such as length and the use of the brand logo. In general, the content of the videos is focused on the four topics recruitment, values, history and behind the brand. As for the intent, the videos can be classified into the three categories learning, emotion and doing something. This paper not only analyzes this special film category, but also gives recommendations to improve the videos.
The connection of fashion and film seems symbiotic at first sight, and the two influence each other. Yet there are differences, including a different understanding of clothing by costume designers and fashion businesses. This article focuses on the two successful movies „The Hunger Games“ and „The Great Gatsby“ in order to explore the role of film in fashion and vice versa. The findings suggest that there are various collections in the fashion world based on both movies. Movies therefore indeed have an influence on the development of seasonal fashion. However, this connection is not natural, but rather artificially created by both industries. Through today’s organized co-operation, the lines between costume designers and fashion designers become blurred. Furthermore, fashion today does not trickle down to an audience naturally, but is promoted using the film and its broad reach.
Fashion show films
(2020)
Due to technological developments, fashion show films provide fashion brands with the opportunity to communicate their brand concepts, attract attention and gain more brand awareness by publishing them on the Internet. The purpose of this research paper is to investigate how fashion brands communicate their brand concept and personality through fashion show films. For this purpose, ten fashion show films of brands from the categories luxury, premium, high-street and active wear are investigated. The results indicate that the investigated brands use different ways to attract attention and to communicate their brand concept and personality. The design of the setting, the presentation of the collection, the visualization of the brand concept through the brand name, logo, colors or symbols, and the camera work all play an important role in creating an effective and exciting fashion show film that communicates the brand concept and promotes the brand image. Mostly luxury and premium brands use fashion show films for branding; for high-street and active wear brands, the analysis indicates that fashion show films are less important. The limitations of this research are related to the fact that only ten fashion show films were analyzed. This gives an overview but cannot provide a comprehensive breakdown of the topic.
YouTube fashion videos
(2020)
YouTube is the most widely adopted and successful video sharing platform. It works as a marketing instrument and money-making tool for companies while reaching the target group. After considering the relevant literature on YouTube, it is striking that there is a lack of information about YouTube’s benefits as a video marketing instrument for fashion brands. To establish this subject further, the purpose of this study is to enrich the existing findings on social video marketing on YouTube in the apparel industry. The findings indicate the importance of YouTube as a social network for fashion marketers. The second part is an empirical study, which makes the YouTube channel performance of nine fashion brands the subject of discussion. Three brands each from the lifestyle, sports and luxury sectors are analyzed through comparative aspects. Accordingly, the differences and similarities within and between the sectors are analyzed and evaluated.
Public transport maps are typically designed to support route-finding tasks for passengers, while they also provide an overview of stations, metro lines, and city-specific attractions. Most of those maps are designed as static representations, perhaps placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views of the dynamic passenger data on different levels of temporal granularity. Moreover, we also offer extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We also integrated a graph-based view of user-selected routes, a way to interactively compare those routes, an attribute- and property-driven automatic computation of specific routes for one map as well as for all available maps in our repertoire, and, finally, the most important sights in each city as extra information to include in a user-selected route. We illustrate the usefulness of our interactive visualization and map navigation system by applying it to the railway system of Hamburg in Germany while also taking into account the extra passenger data. As a further indication of the usefulness of the interactively enhanced metro maps, we conducted a controlled user experiment with 20 participants.
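The automatic route computation described in this abstract typically rests on a shortest-path search over the metro graph. A minimal sketch in Python using Dijkstra's algorithm, with hypothetical station names and travel times; the paper's actual system and data structures are not shown here.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: fastest route between two stations."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:                  # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Toy network, travel times in minutes (station names are only illustrative):
metro = {
    "Hauptbahnhof": [("Jungfernstieg", 2), ("Berliner Tor", 3)],
    "Jungfernstieg": [("Landungsbruecken", 3)],
    "Berliner Tor": [("Landungsbruecken", 1)],
}
print(shortest_route(metro, "Hauptbahnhof", "Landungsbruecken"))
# (['Hauptbahnhof', 'Berliner Tor', 'Landungsbruecken'], 4)
```

An attribute-driven variant would simply change the edge weights, e.g. to passenger density or number of sights along a segment.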
This study describes a non-contact measuring and system identification procedure for evaluating inhomogeneous stiffness and damping characteristics of the annular ligament in the physiological amplitude and frequency range without the application of large static external forces that can cause unnatural displacements of the stapes. To verify the procedure, measurements were first conducted on a steel beam. Then, measurements on an individual human cadaveric temporal bone sample were performed. The estimated results support the inhomogeneous stiffness and damping distribution of the annular ligament and are in good agreement with the multiphoton microscopy results, which show that the posterior-inferior corner of the stapes footplate is the stiffest region of the annular ligament.
A configuration-management-database driven approach for fabric-process specification and automation
(2014)
In this paper we describe an approach that integrates a Configuration-Management-Database into fabric-process specification and automation in order to consider different conditions regarding cloud services. By implementing our approach, the complexity of fabric processes is reduced. We developed a prototype by using formal prototyping principles as research methods and integrated the Configuration-Management-Database Command into the Workflow-Management-System Activiti. We used this prototype to evaluate our approach: we implemented three different fabric processes and show that using our approach reduces the complexity of all three.
The digital transformation is today’s dominant business transformation, strongly influencing how digital services and products are designed in a service-dominant way. A popular underlying theory of value creation and economic exchange, known as the service-dominant (S-D) logic, can be connected to many successful digital business models. However, S-D logic by itself is abstract, and companies cannot easily use it directly as an instrument for business model innovation and design. To address this, a comprehensive ideation method based on S-D logic is proposed, called service-dominant design (SDD). SDD is aimed at supporting firms in the transition to a service- and value-oriented perspective. The method provides a simplified way to structure the ideation process based on four model components. Each component consists of practical implications, auxiliary questions and visualization techniques that were derived from a literature review, a use case evaluation of digital mobility and a focus group discussion. SDD represents a first step toward a toolset that can support established companies in the process of service and value orientation as part of their digital transformation efforts.
Werttreiber Lean Production
(2013)
Do companies that use lean production methods increase their enterprise value, and if so, by how much? The team of authors from Reutlingen University examined the interplay of the management concepts working capital management and value orientation and presents the encouraging results using one scenario each for a large company and an SME.
In the automotive sector, suppliers were hit considerably harder by the Covid-19 restrictions than the vehicle manufacturers. The development of working capital in the first year of the pandemic proved particularly critical. This article gives an overview of possible solutions for stable supply chain financing in future crises that is more advantageous for all parties.
Autonomous navigation is one of the main areas of research in mobile robots and intelligent connected vehicles. In this context, we are interested in presenting a general view of robotics, the progress of research, and advanced methods related to this field to improve autonomous robots’ localization. We seek to evaluate algorithms and techniques that give robots the ability to move safely and autonomously in a complex and dynamic environment. Under these constraints, we focused our work in this paper on a specific problem: to evaluate a simple, fast and light SLAM algorithm that can minimize localization errors. We present and validate a FastSLAM 2.0 system combining scan matching and loop closure detection. To allow the robot to perceive the environment and detect objects, we studied one of the best deep learning techniques using convolutional neural networks (CNN), and validated our object detection using the YOLOv3 algorithm.
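FastSLAM 2.0 builds on particle filtering. A minimal, self-contained sketch of one predict-weight-resample cycle for 1-D localization against a single known landmark, with invented values; the paper's full system additionally performs scan matching and loop closure detection over real sensor data.

```python
import math
import random

random.seed(42)

def particle_filter_step(particles, control, measurement, landmark, noise=0.5):
    """One predict-weight-resample cycle of a 1-D particle filter."""
    # Predict: shift every particle by the control input plus motion noise.
    moved = [p + control + random.gauss(0, 0.1) for p in particles]
    # Weight: likelihood of the observed range to the known landmark.
    def likelihood(p):
        err = abs(landmark - p) - measurement
        return math.exp(-err * err / (2 * noise ** 2))
    weights = [likelihood(p) for p in moved]
    # Resample proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Robot somewhere on a 10 m line, repeatedly measuring 4 m to a landmark at 9 m:
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, control=0.0,
                                     measurement=4.0, landmark=9.0)
est = sum(particles) / len(particles)
print(round(est, 2))  # converges close to the true position of 5 m
```

FastSLAM extends this idea by attaching a small map estimate (landmark positions) to every particle.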
Power line communications (PLC) reuse the existing power-grid infrastructure for the transmission of data signals. As power line communication technology does not require a dedicated network setup, it can be used to connect a multitude of sensors and Internet of Things (IoT) devices. Those IoT devices could be deployed in homes, streets, or industrial environments for sensing and control applications. The key challenge faced by future IoT-oriented narrowband PLC networks is to provide a high quality of service (QoS). The power line channel has traditionally been considered too hostile; combined with the scarcity of spectrum and interference from other users, this calls for means to radically increase spectral efficiency and to improve link reliability. However, the research activities carried out in the last decade have shown that PLC is a suitable technology for a large number of applications. Motivated by the relevant impact of PLC on IoT, this paper proposes a cooperative spectrum allocation in IoT-oriented narrowband PLC networks using an iterative water-filling algorithm.
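The water-filling principle behind the proposed allocation can be sketched for a single user. This is a generic textbook single-user version with made-up channel gains; the paper's iterative variant repeats such an allocation per user, treating the other users' transmissions as noise.

```python
def water_filling(gains, total_power):
    """Single-user water-filling: split power over subchannels to maximize rate.
    gains[i] is the gain-to-noise ratio of subchannel i."""
    inv = sorted(1.0 / g for g in gains)      # "vessel floor" heights
    n = len(inv)
    for k in range(n, 0, -1):                 # try pouring over the k best floors
        level = (total_power + sum(inv[:k])) / k
        if level > inv[k - 1]:                # all k floors are under water
            break
    return [max(0.0, level - 1.0 / g) for g in gains]

alloc = water_filling([1.0, 0.5, 0.1], total_power=10)
print(alloc)  # [5.5, 4.5, 0.0]: the weakest subchannel gets no power
```

Good subchannels sit low and receive more "water" (power); subchannels whose floor lies above the water level are switched off entirely.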
The digitization of factories will be a significant issue for the 2020s. New scenarios are emerging to increase the efficiency of production lines inside the factory, based on a new generation of robots’ collaborative functions. Manufacturers are moving towards data-driven ecosystems by leveraging product lifecycle data from connected goods. Energy-efficient communication schemes, as well as scalable data analytics, will support these various data collection scenarios. With augmented reality, new remote services are emerging that facilitate the efficient sharing of knowledge in the factory. Future communication solutions should generally ensure transparent, real-time, and secure connectivity between the various production sites spread worldwide and new players in the value chain (e.g., suppliers, logistics). Industry 4.0 brings more intelligence and flexibility to production, resulting in more lightweight equipment and thus better ergonomics. 5G will guarantee real-time transmissions with latencies of less than 1 ms. This will provide manufacturers with new possibilities to collect data and trigger actions automatically.
An autonomous vehicle is a robotic vehicle with decision and action capability, able to perform assigned tasks without or with minimal human intervention. Autonomous cars have been in development for many years. The Society of Automotive Engineers (SAE International) published in 2014 a classification into six levels of driving automation, with level 0 corresponding to completely manual driving and level 5 to the ideal of a vehicle able to navigate entirely autonomously for all missions and in all environments. This work addresses the navigation of an autonomous vehicle in general. We focus on one of the most complex scenarios of the road network: the crossing of road intersections. In this paper, the critical features of autonomous intelligent vehicles are reviewed. Furthermore, the associated problems are presented, and the most advanced solutions are derived. This article aims to allow a novice in this field to understand the different facets of localization and perception problems for autonomous vehicles.
Rotating machinery occupies a predominant place in many industrial applications. However, rotating machines often encounter severe vibration problems. The measurement of these machines’ vibration signals is of particular importance, since it plays a crucial role in predictive maintenance. When vibrations are too high, they often cause fatigue failure; they announce an unexpected stop or breakdown and, consequently, a significant loss of productivity or a threat to personnel safety. Therefore, fault identification at early stages will significantly enhance the machine’s health and reduce maintenance costs. Although considerable efforts have been made to master the field of machine diagnostics, the usual signal processing methods still present several drawbacks. This paper examines rotating machinery condition monitoring in the time and frequency domains. It also provides a framework for the diagnosis process based on machine learning by analyzing the vibratory signals.
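Frequency-domain condition monitoring of the kind described often starts by locating dominant spectral peaks in the vibration signal. A small illustrative sketch using a synthetic signal; the paper's data and diagnosis framework are not reproduced here.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) of a vibration signal via the FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

fs = 1000                                # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)              # 1 s observation window
# Shaft rotation at 50 Hz plus a weaker 120 Hz component (e.g. a bearing fault):
vib = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)
print(dominant_frequency(vib, fs))       # 50.0
```

In practice, the amplitudes at known fault frequencies (bearing, gear mesh, imbalance) are tracked over time and can serve as features for the machine learning stage.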
Silicon neurons (SiNs) represent different levels of biological detail and accuracy as a trade-off between complexity and power consumption. With respect to this trade-off and their high similarity to neuron behaviour models, relaxation-type oscillator circuits often yield a good compromise for emulating neurons. In this chapter, two exemplary relaxation-type silicon neurons are presented that emulate neural behaviour with an energy consumption below the scale of nJ/spike. The first proposed fully CMOS relaxation SiN is based on the mathematical Izhikevich model and can mimic a broad range of physiologically observable spike patterns. Results for various biologically plausible output patterns and the coupling process of two SiNs are presented in 0.35 μm CMOS technology. The second type is a novel ultra-low-frequency hybrid CMOS-memristive SiN based on relaxation oscillators and analog memristive devices. The hybrid SiN directly emulates neuron behaviour in the range of physiological spiking frequencies (less than 100 Hz). The relaxation oscillator is implemented and fabricated in 0.13 μm CMOS technology. An autonomous neuronal synchronization process is demonstrated in measurements, with two relaxation oscillators coupled by an analog memristive device to emulate the synchronous behaviour between spiking neurons.
In today’s education, healthcare, and manufacturing sectors, organizations and information societies are discussing new enhancements to corporate structure and process efficiency using digital platforms. These enhancements can be achieved using digital tools. Industry 5.0 and Society 5.0 give businesses several potentials to enhance the adaptability and efficacy of their industrial processes, paving the way for developing new business models facilitated by digital platforms. Society 5.0 can contribute to a super-intelligent society that includes the healthcare industry. In the past decade, the Internet of Things, big data analytics, neural networks, deep learning, and artificial intelligence (AI) have revolutionized our approach to various job sectors, from manufacturing and finance to consumer products. AI is developing quickly and efficiently; a recent example is the artificial intelligence chatbot ChatGPT, created by OpenAI, which has taken the internet by storm. We tested the effectiveness of this large language model on four critical questions concerning “Society 5.0”, “Healthcare 5.0”, “Industry,” and “Future Education” from the perspective of Age 5.0.
IOS 2.0: new aspects on inter-organizational integration through enterprise 2.0 technologies
(2015)
This special theme of „Electronic Markets“ focuses on research concerned with the use of social technologies and "2.0" principles in the interaction between organizations (i.e., with "inter-organizational systems (IOS) 2.0"). This theme falls within the larger space of Enterprise 2.0 research, but focuses in particular on inter-organizational use (between enterprises), not intra-organizational use (within a single enterprise). While there is great interest in practice regarding the use of 2.0 technologies to support inter-organizational communication, collaboration and interaction, information systems (IS) research has largely been oblivious to this important use of social technologies.
Personalized remote healthcare monitoring is in continuous development due to technological improvements in sensors and wearable electronic systems. This paper presents a state of the art of research on wearable sensors for healthcare applications, as well as a state of the art of wearable devices available on the market for health and sport monitoring: chest and wrist bands and smartwatches. Many activity trackers are commercially available; their prices are continuously falling and their performance is improving, but commercial devices do not provide raw data and are therefore not useful for research purposes.
Introduction to the special issue on self‑managing and hardware‑optimized database systems 2022
(2023)
Data management systems have evolved in terms of functionality, performance characteristics, complexity, and variety during the last 40 years. In particular, relational database management systems and big data systems (e.g., key-value stores, document stores, graph stores and graph computation systems, Spark, MapReduce/Hadoop, or data stream processing systems) have evolved with novel additions and extensions. However, systems administration tasks have become highly complex and expensive, especially given the simultaneous and rapid hardware evolution in processors, memory, storage, and networking. These developments present new open problems and challenges to data management systems as well as new opportunities.
The SMDB (International Workshop on Self-Managing Database Systems) and HardBD&Active (Joint International Workshop on Big Data Management on Emerging Hardware and Data Management on Virtualized Active Systems) workshops organized in conjunction with the IEEE ICDE (International Conference on Data Engineering) offered two distinct platforms for examining the above system-related challenges from different perspectives. The SMDB workshop looks into developing autonomic or self-* features in database and data management systems to tackle complex administrative tasks, while the HardBD&Active workshop focuses on harnessing hardware technologies to enhance efficiency and performance of data processing and management tasks. As a result of these workshops, we are delighted to present the third special issue of DAPD titled “Self-Managing and Hardware-Optimized Database Systems 2022,” which showcases the best contributions from the SMDB 2021/2022 and HardBD&Active 2021/2022 workshops.
The aim of this work is the development of artificial intelligence (AI) applications to support the recruiting process, elevating the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing (i.e. active recruitment), pre-sorting, evaluating structured video interviews and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground truth data. Together, these approaches represent important steps forward in creating ethical and unbiased machine learning systems.
In recent years, 3D facial reconstructions from single images have garnered significant interest. Most of the approaches are based on 3D Morphable Model (3DMM) fitting to reconstruct the 3D face shape. Concurrently, the adoption of Generative Adversarial Networks (GAN) has been gaining momentum to improve the texture of reconstructed faces. In this paper, we propose a fundamentally different approach to reconstructing the 3D head shape from a single image by harnessing the power of GAN. Our method predicts three maps of normal vectors of the head’s frontal, left, and right poses. We are thus presenting a model-free method that does not require any prior knowledge of the object’s geometry to be reconstructed.
The key advantage of our proposed approach is the substantial improvement in reconstruction quality compared to existing methods, particularly in the case of facial regions that are self-occluded in the input image. Our method is not limited to 3D face reconstruction. It is generic and applicable to multiple kinds of 3D objects. To illustrate the versatility of our method, we demonstrate its efficacy in reconstructing the entire human body.
By delivering a model-free method capable of generating high-quality 3D reconstructions, this paper not only advances the field of 3D facial reconstruction but also provides a foundation for future research and applications spanning multiple object types. The implications of this work have the potential to extend far beyond facial reconstruction, paving the way for innovative solutions and discoveries in various domains.
Sleep is an important aspect in the life of every human being. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate a person's physical and psychological state. Bad sleep quality has a major impact on health status and can lead to various diseases. This paper presents an approach that uses long-term monitoring of vital data, gathered by a body sensor during the day and night and supported by a mobile application connected to an analysis system, to estimate the sleep quality of its user and to give real-time recommendations for improving it. Actimetry and historical data are used to improve the individual recommendations, based on common techniques from machine learning and big data analysis.
Stress is becoming an important topic in modern life. Its influence results in a higher rate of health disorders such as burnout, heart problems, obesity, asthma, diabetes, depression and many others. Furthermore, an individual's behaviour and capabilities can be directly affected, leading to altered cognition and impaired decision-making and problem-solving skills. In a dynamic and unpredictable environment, such as driving, this can result in a higher risk of accidents. Several papers have addressed the estimation and prediction of a driver's stress level while driving. An equally important question concerns not only the stress level of the individual driver, but also the influence on and of a group of other drivers in the nearby area. This paper proposes a system that identifies groups of drivers in a nearby area as clusters and derives their individual stress levels. This information is analysed to generate a stress map, a graphical view of road sections with higher stress influence. The aggregated data can be used to generate navigation routes with lower stress influence, reducing stress-influenced driving and improving road safety.
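As a rough illustration of the clustering-and-aggregation idea described above, the following sketch groups driver positions with density-based clustering and averages a stress value per cluster. All coordinates, stress values and parameters (eps, min_samples) are invented for illustration; the paper does not specify a particular clustering algorithm.

```python
# Illustrative sketch: cluster drivers by position and aggregate stress per
# cluster (the basis of a "stress map"). Data and parameters are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
# Two road sections with 10 drivers each (x/y positions in metres).
positions = np.vstack([rng.normal((0, 0), 30, (10, 2)),
                       rng.normal((500, 500), 30, (10, 2))])
stress = np.concatenate([rng.uniform(0.2, 0.4, 10),   # calm section
                         rng.uniform(0.7, 0.9, 10)])  # stressful section

# Drivers within ~100 m of each other form one cluster.
labels = DBSCAN(eps=100, min_samples=3).fit_predict(positions)
for cluster in sorted(set(labels) - {-1}):
    mean_stress = stress[labels == cluster].mean()
    print(f"cluster {cluster}: mean stress {mean_stress:.2f}")
```

The per-cluster mean stress values could then be rendered on a map and fed into stress-aware route planning.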
Because of high product and technology complexity, companies involve external partners in their research and development (R&D) processes. The result is interorganizational projects, which represent temporary organizations in which heterogeneous organizations work closely together. Since project work is always teamwork, these projects face, owing to their characteristics, major challenges on the organizational, relational, and content-related levels of collaboration. This paper therefore raises the following research question: "How can a project team be supported on an organizational, relational, and content-related level in an interorganizational new product development setting?" To answer this question, an explorative expert study was set up with two digital workshops using the interactive presentation tool Mentimeter. The results show that a cooperative innovation culture could support project teams on the organizational and relational levels in minimizing predominant problems, for example by supporting functional communication. Furthermore, 18 values of a cooperative innovation culture emerge, for example openness and transparency, risk and failure tolerance, and respect. On the content-related level, the results show that an adaptable tool promoting creativity and collaboration methods, together with content-related input, could support problem-solving in an interorganizational new product development setting, because such a tool can guide product developers through the process with suitable creativity and collaboration methods, give content-related input, and enable interactive interchange on a table-top. Future research could focus mainly on the connection between the cooperative innovation culture and the tool, since these potentially influence each other.
The fifth mobile communications generation (5G) can lead to a substantial change in companies by enabling the full capability of wireless industrial communication. 5G, with its key features of Enhanced Mobile Broadband, Ultra-Reliable and Low-Latency Communication, and Massive Machine Type Communication, will support the implementation of Industry 4.0 applications. In particular, the possibility to set up Non-Public Networks enables 5G communication in factories and ensures sole access to the 5G infrastructure, offering companies new opportunities to implement innovative mobile applications. Various concepts, ideas, and projects for 5G applications in industrial environments already exist. However, the global rollout of 5G systems is a continuous process based on stages defined by the 3rd Generation Partnership Project, the global initiative that develops and specifies the 5G telecommunication standard. Accordingly, some services are currently still far from their final performance capability or are not yet implemented. Additionally, research has yet to clarify the general suitability of 5G for frequently mentioned use cases. This paper aims to identify relevant 5G use cases for intralogistics and evaluates their technical requirements regarding practical feasibility throughout the upcoming 5G specifications.
The blockchain technology represents a decentralised database that stores information securely in immutable data blocks. Regarding supply chain management, these characteristics offer potential for increasing supply chain transparency, visibility, automation, and efficiency. In this context, initial token-based mapping approaches exist to transfer certain manufacturing processes to the blockchain, such as the creation or assembly of parts as well as their transfer of ownership. This paper proposes a prototypical blockchain application that adopts an authority concept and a concept of smart non-fungible tokens. The application enables the mapping of complex products in dynamic supply chains that require the auditability of changeable assembly processes on the blockchain. Finally, the paper demonstrates the practical feasibility of the proposed application based on a prototypical implementation created on the Ethereum blockchain.
The blockchain technology represents a decentralized database that stores information securely in immutable data blocks. Regarding supply chain management, these characteristics offer potential for increasing supply chain transparency, visibility, automation, and efficiency. In this context, initial token-based mapping approaches exist to transfer certain manufacturing processes to the blockchain, such as the creation or assembly of parts as well as their transfer of ownership. However, the decentralized and immutable structure of blockchain technology also creates challenges when applying these token-based approaches to dynamic manufacturing processes. As a first step, this paper investigates existing mapping approaches and exemplifies their weaknesses regarding suitability for products with changeable configurations. Secondly, a concept is proposed to overcome these weaknesses by introducing logically coupled tokens embedded in a flexible smart contract structure. Finally, a concept for a token-based architecture is introduced to map the manufacturing processes of products with changeable configurations.
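To make the notion of logically coupled tokens for changeable configurations concrete, the following sketch models part tokens with an auditable assemble/disassemble history in plain Python. The class and method names are invented; an actual implementation would be a smart contract (e.g. on Ethereum) rather than this in-memory model.

```python
# Hypothetical off-chain model of logically coupled tokens: each part token
# records its current parent (assembly state) and an auditable change history.
class Token:
    def __init__(self, token_id, owner):
        self.token_id = token_id
        self.owner = owner
        self.parent = None     # token this part is currently mounted on
        self.history = []      # auditable log of configuration changes

    def assemble_into(self, parent):
        assert self.parent is None, "part is already assembled"
        self.parent = parent
        self.history.append(("assemble", parent.token_id))

    def disassemble(self):
        assert self.parent is not None, "part is not assembled"
        self.history.append(("disassemble", self.parent.token_id))
        self.parent = None

chassis = Token("chassis-1", owner="OEM")
battery = Token("battery-7", owner="OEM")
battery.assemble_into(chassis)   # changeable configuration: mount a part...
battery.disassemble()            # ...and later swap it out again
print(battery.history)
```

The append-only history list plays the role that immutable blockchain entries would play on-chain: the current configuration can change, but every change remains auditable.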
Companies are becoming aware of the potential risks arising from sustainability aspects in supply chains. These risks can affect ecological, economic or social aspects. One important element in managing those risks is improved transparency in supply chains by means of digital transformation. Innovative technologies like blockchain technology can be used to enforce transparency. In this paper, we present a smart contract-based Supply Chain Control Solution to reduce risks. Technological capabilities of the solution will be compared to a similar technology approach and evaluated regarding their benefits and challenges within the framework of supply chain models. As a result, the proposed solution is suitable for the dynamic administration of complex supply chains.
Distributed ledger technologies such as the blockchain technology offer an innovative solution to increase visibility and security to reduce supply chain risks. This paper proposes a solution to increase the transparency and auditability of manufactured products in collaborative networks by adopting smart contract-based virtual identities. Compared with existing approaches, this extended smart contract-based solution offers manufacturing networks the possibility of involving privacy, content updating, and portability approaches to smart contracts. As a result, the solution is suitable for the dynamic administration of complex supply chains.
The critical process parameters cell density and viability during mammalian cell cultivation are assessed by UV/VIS spectroscopy in combination with multivariate data analysis methods. This direct optical detection technique uses a commercial optical probe to acquire spectra in a label-free way without signal enhancement. For the cultivation, an inverse cultivation protocol is applied, which simulates the exponential growth phase by exponentially replacing cells and metabolites of a growing Chinese hamster ovary cell batch with fresh medium. For the simulation of the death phase, a batch of growing cells is progressively replaced by a batch of completely starved cells. Thus, the most important parts of an industrial batch cultivation are easily imitated. The cell viability was determined by the well-established method of partial least squares regression (PLS). To further improve process knowledge, the viability has also been determined from the spectra based on a multivariate curve resolution (MCR) model. With this approach, the progress of the cultivations can be continuously monitored solely based on a UV/VIS sensor. Thus, the monitoring of critical process parameters, especially the viable cell density, is possible inline within a mammalian cell cultivation process. In addition, the beginning of cell death can be detected by this method, which allows us to determine the cell viability with acceptable error. The combination of inline UV/VIS spectroscopy with multivariate curve resolution generates additional process knowledge complementary to PLS and is considered a suitable process analytical tool for monitoring industrial cultivation processes.
In rural areas, public transport incurs high costs per passenger and kilometre, as scheduled buses run infrequently; many people therefore avoid using public transport. With the trend of moving from urban regions to the countryside, individual traffic will further increase. To tackle the issues of emissions and of mobility for young and elderly people, and to provide economically meaningful public transport, a new concept was elaborated in Germany: (partly) autonomous, remote-controlled shuttle buses. For implementation, rural districts in Germany have worked together and set up a three-phase plan consisting of a publicly funded project, a highly frequented pilot region, and industrial partners with the commitment and means for the necessary investments. The concept promises economic value with respect to installation, service and maintenance costs, lowers the barriers to public transport for young and elderly people, and ultimately reduces emissions and congestion.
Digitalisation and mediatisation shape society, and adult and continuing education as well. This contribution examines the question of how digitalisation succeeds in adult and continuing education offerings, with a focus on the use of digital media. To this end, programme development for addressees and participants, media-related content, teaching and learning arrangements with digital media, the use of digital media, and the accessibility of teaching and learning materials are identified as relevant characteristics. Overall, the analysed interview data show that the use of digital media in offerings extends the didactic tasks involved, since offerings with digital media must be precisely tailored to the needs and possibilities of addressees and participants.
Ecuador, traditionally an agriculture-based economy, has great potential for valorizing its industrial residues. This study presents a techno-economic analysis of applying a novel biomass oxidation method to produce formic and acetic acids from coffee husk residues in Machala, Ecuador. The analysis determined that the return-on-investment time was below 5 years, making the project economically feasible when producing approximately 1000 tons of formic acid per year, which is enough to supply the Ecuadorian market. This production would reduce import costs and develop the chemical industry in the country.
The analysis of exhaled metabolites has become a promising field of research in recent decades. Several volatile organic compounds reflecting metabolic disturbance and nutrition status have already been reported. These are particularly important for long-term measurements, as needed in medical research for detection of disease progression and therapeutic efficacy. In this context, it has become urgent to investigate the effect of fasting and glucose treatment for breath analysis. In the present study, we used a model of ventilated rats that fasted for 12 h prior to the experiment. Ten rats per group were randomly assigned to continuous intravenous infusion without glucose or an infusion including 25 mg glucose per 100 g per hour during an observation period of 12 h. Exhaled gas was analysed using multicapillary column ion-mobility spectrometry. Analytes were identified by the BS-MCC/IMS database (version 1209; B & S Analytik, Dortmund, Germany). Glucose infusion led to a significant increase in blood glucose levels (p<0.05 at 4 h and thereafter) and cardiac output (p<0.05 at 4 h and thereafter). During the observation period, 39 peaks were found collectively. There were significant differences between groups in the concentration of ten volatile organic compounds: p<0.001 at 4 h and thereafter for isoprene, cyclohexanone, acetone, p-cymol, 2-hexanone, phenylacetylene, and one unknown compound, and p<0.001 at 8 h and thereafter for 1-pentanol, 1-propanol, and 2-heptanol. Our results indicate that for long-term measurement, fasting and the withholding of glucose could contribute to changes of volatile metabolites in exhaled air.
Social and environmental risk management in supply chains : a survey in the clothing industry
(2015)
Almost daily, news indicates that there are environmental and social problems in globally fragmented supply chains. Even though conceptualisations of sustainable supply chain management suggest supplier-related risk management for sustainable products and processes as substantial for companies, research on how risk management for environmental and social issues in supply chains is performed has so far been neglected. This study aims at analysing both why companies in the clothing industry are performing management of social and environmental risks in their supply chain and what kind of action they are taking. Based on the literature on sustainable supply chain management and supply chain risk management as well as 10 expert interviews, a conceptual model for risk management in sustainable supply chains was developed. This model was tested in an empirical study in the clothing industry. The data were analysed by structural equation modelling. Results of the research show high statistical significance for the respective conceptual model. The main driver to perform risk management in environmental and social affairs is pressures and incentives from stakeholders. While companies’ corporate orientation mainly drives social actions, top management drives environmental affairs for differentiating themselves from competitors.
Sustainability is a development that meets the needs of the present without compromising the ability of future generations to meet their own needs.
Business Model is a plan for the successful operation of a business, identifying sources of revenue, the intended customer base, products, and details of financing.
Circular economy is an approach to how a company creates, captures and delivers value, with a value creation logic designed to improve resource efficiency by contributing to extending the useful life of products and parts (e.g., through long-life design, repair and remanufacturing) and closing material loops.
Children undergoing systemic chemotherapy often suffer from severe immunosuppression, usually associated with severe neutropenia (neutrophils < 0.5 × 10^9/l). Clinical courses during these periods range from asymptomatic to septic general conditions. The development of septic symptoms can be very fast and life-threatening, so swift detection of risk factors in these patients is needed. So far, no early, rapid and reliable marker or tool exists. Ion mobility spectrometry coupled with a multi-capillary column (IMS-MCC) can analyze more than 600 volatile components from exhaled air within a few minutes and is hence a potential rapid detection tool. As a proof of concept, we measured the exhaled breath of 11 patients with neutropenia and 10 healthy controls ranging from 3 to 18 years of age at the time of measurement. Ten-milliliter breath samples were taken at the outpatient clinic and analyzed with an onsite IMS-MCC (BreathDiscovery, B&S Analytik, Dortmund, Germany). The dead-space volume was adapted to two groups (small 250 ml, large 500 ml). Interestingly, 59 differing peaks were measured, eleven of which were significantly different (p ≤ 0.05) and three of which were highly significantly different (p ≤ 0.01) in Mann-Whitney rank-sum testing. The analytes used in the decision tree are 2-propanol, D-limonene and acetone; the analytes with the lowest rank sum are 2-hexanone, isopropylamine and 1-butanol. Finally, we derived a three-step decision tree that correctly classifies the 21 samples except for one from each group. Sensitivity was 90 % and specificity was 91 %. Naturally, these findings need further confirmation in a larger population. Our pilot study shows that ion mobility spectrometry coupled with a multi-capillary column is a feasible rapid diagnostic tool in the setting of a pediatric oncology outpatient clinic for patients 3 years and older.
Our first results furthermore encourage additional analysis of whether patients at risk for septic events during immunosuppression can be diagnosed in advance by rapidly assessing risk factors such as neutropenia via exhaled breath.
Sleep quality, and behaviour in bed in general, can be detected using a sleep state analysis. The results can help a subject regulate sleep and recognise different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison with polysomnography (PSG), the leading standard measuring system, the proposed system is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable, and they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor, a low-cost pressure sensor suitable for commercial purposes, and in a non-invasive way. The system was tested in an experiment recording the sleep of a subject; the recordings showed the potential for classifying breathing rate and body movements. Although previous research shows the use of pressure sensors for recognising posture and breathing, the sensors have mostly been positioned between the mattress and the bedsheet. This project, however, shows an innovative way of positioning the sensors under the mattress.
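The breathing-rate classification mentioned above can be sketched as peak counting on a single pressure signal. The sampling rate, signal model and peak-detection thresholds below are assumptions for illustration, not the system's actual parameters.

```python
# Sketch: estimating breathing rate from a synthetic under-mattress pressure
# signal by counting respiratory peaks. All parameters are hypothetical.
import numpy as np
from scipy.signal import find_peaks

fs = 10                                   # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)              # one minute of data
breaths_per_min = 14
# Synthetic pressure trace: respiratory oscillation plus sensor noise.
signal = np.sin(2 * np.pi * (breaths_per_min / 60) * t)
signal += np.random.default_rng(3).normal(0, 0.1, t.size)

# Enforce a minimum peak distance of 2 s (shorter than any plausible breath
# cycle here) and a prominence threshold to suppress noise peaks.
peaks, _ = find_peaks(signal, distance=fs * 2, prominence=0.5)
print("estimated breathing rate (bpm):", len(peaks))
```

In a real system the raw signal would additionally be band-pass filtered and large body movements would be detected and masked before peak counting.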
In many cases, continuous monitoring of vital signals is required, and low intrusiveness is an important requirement. Incorporating monitoring systems into the hospital or home bed could benefit patients and caregivers. The objective of this work is the definition of a measurement protocol and the creation of a data set of measurements using commercial devices and low-cost prototypes to estimate heart rate and breathing rate. The experimental data will be used to compare the results achieved by the devices and to develop algorithms for feature extraction from vital signals.
The recovery of our body and brain from fatigue directly depends on the quality of sleep, which can be determined from the results of a sleep study. The classification of sleep stages is the first step of this study and includes the measurement of vital data and their further processing. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors providing sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus. A significant difference between this system and other approaches is the innovative way in which the sensors are placed under the mattress. This feature facilitates the continuous use of the system without any noticeable influence on the sleeping person. The system was tested by conducting experiments that recorded the sleep of various healthy young people. Results indicate the potential to capture respiratory rate and body movement.
The main aim of the research presented in this manuscript is to compare the results of objective and subjective measurements of sleep quality for older adults (65+) in the home environment. A total of 73 nights was evaluated in this study. A device placed under the mattress was used to obtain objective measurement data, and a common question on perceived sleep quality was asked to collect the subjective sleep quality level. The achieved results confirm the correlation between objective and subjective measurements of sleep quality, with an average standard deviation of 2 out of 10 possible quality points.
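The objective/subjective comparison above amounts to correlating two per-night quality scores on a 0-10 scale. The sketch below illustrates this with fabricated numbers (the deviation magnitude mirrors the reported ~2-point spread, but the data are not the study's).

```python
# Sketch: correlating fabricated objective and subjective sleep-quality
# scores over 73 nights, mirroring the study design conceptually.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
nights = 73
objective = rng.uniform(3, 10, nights)
# Subjective rating: objective score plus a per-night deviation (sd ~2 points),
# clipped to the 0-10 rating scale.
subjective = np.clip(objective + rng.normal(0, 2, nights), 0, 10)

r, p = pearsonr(objective, subjective)
print(f"r = {r:.2f}, p = {p:.3g}")
```

With a 2-point deviation on a 0-10 scale, a moderate-to-strong positive correlation is expected, which is consistent with the study's conclusion that the two measurements agree.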
Identification of sleep and wake states by evaluating respiratory and movement signals
(2021)
Research question: The clinical standard procedure and reference for sleep measurement and the classification of individual sleep stages is polysomnography (PSG). Alternative approaches to this elaborate procedure could offer several advantages if the measurements are carried out in a more comfortable way. The main objective of this research study is to develop an algorithm for the automatic classification of sleep stages that uses only movement and respiratory signals [1].
Patients and methods: After analysing current research, we chose multinomial logistic regression as the basis for the approach [2]. To increase the accuracy of the evaluation, four features derived from movement and respiratory signals were developed. For the evaluation, the nocturnal recordings of 35 persons, provided by Charité-Universitätsmedizin Berlin, were used. The average age of the participants was 38.6 +/- 14.5 years and the average BMI was 24.4 +/- 4.9 kg/m2. Since the algorithm works with three stages, stages N1, N2 and N3 were merged into the NREM stage. The available data set was strictly split into a training data set of about 100 h and a test data set of about 160 h of nocturnal recordings. Both data sets had a similar ratio of men to women, and the average BMI showed no significant deviation.
Results: The algorithm was implemented and delivered successful results: the accuracy of detecting wake/NREM/REM phases is 73 %, with a Cohen's kappa of 0.44 for the 19,324 analysed sleep epochs of 30 s each. The observed overestimation of the NREM phase can partly be explained by its prevalence in a typical sleep pattern. Even the use of a balanced training data set could not completely solve this problem.
Conclusions: The achieved results confirmed the suitability of the approach in principle. Its advantage is that only movement and respiratory signals are used, which can be recorded with less effort and more comfortably for users than, for example, cardiac or EEG signals. The new system therefore represents a clear improvement over existing approaches. Merging the algorithmic software described here with the hardware system described in [1] for measuring respiratory and body movement signals into an autonomous, contactless system for continuous sleep monitoring is a possible direction for future work.
Recognition of sleep and wake states is one of the relevant parts of sleep analysis, and performing this measurement in a contactless way increases comfort for the users. We present an approach that evaluates only movement and respiratory signals, which can be measured non-obtrusively. The algorithm is based on multinomial logistic regression and analyses features extracted from the signals mentioned above. These features were identified and developed after fundamental research on the characteristics of vital signals during sleep. The achieved accuracy of 87% with a Cohen's kappa of 0.40 demonstrates the appropriateness of the chosen method and encourages continuing research on this topic.
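The classification scheme described above can be sketched as logistic regression over per-epoch features. The feature definitions and data below are invented for illustration only; they assume (plausibly, but without confirmation from the abstract) that movement activity and respiratory variability are higher during wake epochs.

```python
# Sketch: sleep/wake recognition from synthetic movement and respiratory
# features with logistic regression. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(1)
n = 600
labels = rng.integers(0, 2, n)              # 0 = sleep, 1 = wake

# Toy per-epoch features, assumed higher during wake.
movement = rng.normal(loc=labels * 1.5, scale=1.0)
resp_var = rng.normal(loc=labels * 1.0, scale=1.0)
X = np.column_stack([movement, resp_var])

clf = LogisticRegression().fit(X[:400], labels[:400])
pred = clf.predict(X[400:])
print("accuracy:", accuracy_score(labels[400:], pred))
print("kappa:", cohen_kappa_score(labels[400:], pred))
```

Extending this to wake/NREM/REM, as in the related three-stage work, means fitting the same model with three classes, for which scikit-learn applies multinomial logistic regression automatically.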
The scoring of sleep stages is one of the essential tasks in sleep analysis. Since a manual procedure requires considerable human and financial resources and incorporates some subjectivity, an automated approach could offer several advantages. There have been many developments in this area, and to provide a comprehensive overview it is essential to review relevant recent works and summarise the characteristics of the approaches, which is the main aim of this article. To achieve this, we examined articles published between 2018 and 2022 that dealt with the automated scoring of sleep stages. After reviewing a total of 515 publications, 125 articles were included in the final selection for in-depth analysis. The results revealed that automatic scoring demonstrates good quality (with Cohen's kappa up to over 0.80 and accuracy up to over 90%) in analysing EEG/EEG + EOG + EMG signals. At the same time, it should be noted that there has been no breakthrough in the quality of results using these signals in recent years. Systems involving other signals that could potentially be acquired more conveniently for the user (e.g. respiratory, cardiac or movement signals) remain more challenging to implement with a high level of reliability, but offer considerable potential for innovation. In general, automatic sleep stage scoring has excellent potential to assist medical professionals while providing an objective assessment.
In recent decades, a steady increase in the volume of tourism has been a stable trend. To offer travel opportunities to all groups, it is also necessary to prepare offers for people in need of long-term care or people with disabilities. One way to improve accessibility could be digital technologies, which can help both in planning and in carrying out trips. In the work presented, a study of barriers was first conducted, and its analysis led to the selection of technologies for a test setup. The main focus was on a mobile app with travel information and 360° tours. The evaluation results showed that both technologies can increase accessibility, but some essential aspects (such as usability, completeness and relevance) need to be considered when implementing them.
This book investigates and highlights the most critical challenges the pharmaceutical industry faces in an increasingly competitive environment of inflationary R&D investments and tightening cost control pressures. The authors present three sources of pharmaceutical innovation: new management methods in the drug development pipeline; new technologies as enablers for cutting-edge R&D; and new forms of cooperation and internationalization, such as open innovation in the early phases of R&D. New models and methods are illustrated with cases from Europe, the US, and Asia. This third fully revised edition was expanded to reflect the latest updates in open and collaborative innovation, the greater strategic importance of venture capital and early stage investments, and the new range of emerging technologies now being put to use in pharmaceutical innovation.
The broad acceptance of finite-element-based analysis of structural problems and the increased availability of CAD systems for structural tasks, which help to generate meshes of non-trivial geometries, have set the standard for the evaluation of designs in mechanical engineering over the last few decades. The development of automated or semi-automated optimizers, whether integrated into Computer-Aided Engineering (CAE) packages or working as outer-loop tools that call the solver to analyse each candidate design, has likewise been accepted by most advanced users in the simulation community. Inexpensive computer processing power continues to become more widely available, with no limits foreseen in the coming years. There is little doubt that virtual product development will continue using the tools that have proved so successful and so easy to handle.
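The outer-loop pattern mentioned above, an optimizer that treats the FE solver purely as a black box, can be sketched as follows. This is a generic illustration, not a specific CAE product's API; `fe_solver` and its plate model are hypothetical stand-ins for launching a real analysis run:

```python
def fe_solver(thickness_mm):
    """Placeholder for a black-box FE analysis of a 200 x 100 mm steel
    plate: returns (mass in kg, max stress in MPa). A real outer-loop
    optimizer would launch the CAE solver here and parse its results."""
    mass = 7.85e-6 * 200 * 100 * thickness_mm   # density * volume
    max_stress = 1200.0 / thickness_mm          # stress falls with thickness
    return mass, max_stress

def outer_loop_optimize(candidates, stress_limit):
    """Evaluate each candidate design with the solver and keep the
    lightest feasible one; the solver internals are never inspected."""
    best = None
    for t in candidates:
        mass, stress = fe_solver(t)
        if stress <= stress_limit and (best is None or mass < best[1]):
            best = (t, mass)
    return best

best_t, best_mass = outer_loop_optimize(
    [2.0, 4.0, 6.0, 8.0, 10.0], stress_limit=250.0)
```

In practice the candidate loop would be replaced by a gradient-free or response-surface optimizer, but the division of labour is the same: the outer loop proposes designs, the solver evaluates them.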
Virtual prototyping of integrated mixed-signal smart sensor systems requires high-performance co-simulation of analog frontend circuitry with complex digital controller hardware and embedded real-time software. We use SystemC/TLM 2.0 in conjunction with a cycle-count accurate temporal decoupling approach (TD) to simulate digital components and firmware code execution at high speed while preserving clock-cycle accuracy and, thus, real-time behavior at time quantum boundaries. Optimal time quanta ensuring real-time capability can be calculated and set automatically during simulation, provided the simulation engine has access to exact timing information about upcoming inter-process communication events. These methods fail in the case of non-deterministic, asynchronous events, resulting in potentially invalid simulation results. In this paper, we propose an extension to the case of asynchronous events generated by black-box sources for which a priori event timing information is not available, such as coupled analog simulators or hardware-in-the-loop. Additional event processing latency or rollback effort caused by temporal decoupling is minimized by calculating optimal time quanta dynamically in a SystemC model using a linear prediction scheme. We analyze the theoretical performance of the presented predictive temporal decoupling approach (PTD) by deriving a cost model that expresses the expected simulation effort in terms of key parameters such as time quantum size and CPU time per simulation cycle. For an exemplary smart-sensor system model, we show that quasi-periodic events that trigger activities in TD processes are handled accurately after the predictor has settled.
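The idea of choosing a time quantum by linearly predicting the next asynchronous event can be sketched as follows. This is a minimal illustration of the principle, not the paper's actual PTD implementation; `next_time_quantum` and the cycle timestamps are hypothetical:

```python
def next_time_quantum(event_times, now, floor=1):
    """Choose a temporal-decoupling quantum so the decoupled process does
    not run past the predicted next asynchronous event. The next
    inter-event interval is linearly extrapolated from the last two
    observed intervals (quasi-periodic black-box source assumed)."""
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(intervals) < 2:
        return floor                    # not enough history: fall back
    # First-order linear extrapolation of the inter-event interval
    predicted_interval = max(floor, 2 * intervals[-1] - intervals[-2])
    predicted_event = event_times[-1] + predicted_interval
    return max(floor, predicted_event - now)

# Hypothetical quasi-periodic event source (timestamps in clock cycles):
# observed intervals are 100, 105, 107, so the predictor extrapolates 109
events = [0, 100, 205, 312]
quantum = next_time_quantum(events, now=312)
```

If the prediction overshoots the true event time, the simulator pays rollback or event-latency cost; if it undershoots, quanta stay needlessly small, which is exactly the trade-off the paper's cost model captures.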
Indoor localization systems are becoming increasingly important with the digitalization of the industrial sector. Sensor data such as the current position of machines, transport vehicles, goods or tools are an essential component of cyber-physical production systems (CPPS). However, due to the high cost of these sensors, they are not widespread and are used mainly in special scenarios. Optical indoor positioning systems (OIPS) based on cameras, in particular, offer certain advantages due to their technological characteristics. In this paper, the application scenarios and requirements as well as their characteristics are presented, and a classification approach for OIPS is introduced.