Patterns are virtually simulated in 3D CAD programs before production to check the fit. However, achieving lifelike representations of human avatars, especially regarding soft tissue dynamics, remains challenging. This is mainly because conventional avatars in garment CAD programs are simulated with a continuous hard surface that does not correspond to the physical and mechanical properties of human soft tissue. In the real world, the natural shape of the human body is affected by the contact pressure of tight-fitting textiles. To verify the fit of a simulated garment, the interactions between the individual body shape and the garment must be considered. This paper introduces an innovative approach to digitising the softness of human tissue using 4D scanning technology. The primary objective of this research is to explore the interactions between tissue softness and different compression levels of apparel, exerting pressure on the tissue to capture the changes in its natural shape. To generate data and model an avatar with soft-body physics, it is essential to capture the deformability and elasticity of the soft tissue and map it into the modification options of a simulation. To this end, various methods from different fields were researched and compared, and 4D scanning was evaluated as the most suitable method for capturing tissue deformability in vivo. In particular, it should be considered that the human body has different deformation capabilities depending on age, muscle mass and body fat. In addition, different tissue zones have different mechanical properties, so it is essential to identify and classify them in order to store these properties for the simulation. It has been shown that by digitising the data obtained at the different defined pressure levels, a prediction of the tissue deformation of that specific person becomes possible. As technology advances and data sets grow, this approach has the potential to reshape how we verify fit digitally with soft avatars and leverage their realistic soft tissue properties for various practical purposes.
With Industry 4.0, the entire value chain has the chance to undergo a fundamental technological transformation, the realisation of which, however, requires the commitment of every company for its own benefit. The new approaches of Industry 4.0 are often hardly evaluated, let alone proven, so that SMEs in particular often cannot properly estimate the potentials and risks and often wait too long before migrating towards Industry 4.0. In addition, they often do not pursue an integrated concept to identify possible potentials through changes in their business models. As part of the research project "GEN-I 4.0 – Geschäftsmodell-Entwicklung für die Industrie 4.0", the ESB Business School at Reutlingen University of Applied Sciences and the Fraunhofer Institute for Industrial Engineering and Organization IAO were engaged by the Baden-Württemberg Foundation from 2016 to 2018 to develop tools and an approach by which the local economy can develop digital business models for itself in a methodical, beneficial and targeted manner. Through international analyses and interviews, GEN-I 4.0 gained and concretised the knowledge required for evaluating and selecting solutions and approaches for the transfer towards developing digital business models. Together with the project partners' know-how on Industry 4.0 and business model development, the findings were incorporated into the development of two software tools that show SMEs the potentials of Industry 4.0 for their individual business model, online and in self-assessment, and provide a comprehensively structured, concrete approach to development as well as their individual risk. Users of the tools are supported by the selected platform for networking different players to implement innovative business models, accompanied by coaching concepts for the companies in the follow-up and implementation of the assessment results.
Cotton contamination by honeydew is considered one of the significant quality problems in textiles, as it causes stickiness during manufacturing. Therefore, millions of dollars in losses are attributed to honeydew contamination each year. This work presents the use of UV hyperspectral imaging (225–300 nm) to characterize honeydew contamination on raw cotton samples. As reference samples, cotton samples were soaked in solutions containing sugar and proteins at different concentrations to mimic honeydew. Multivariate techniques such as principal component analysis (PCA) and partial least squares regression (PLS-R) were used to predict and classify the amount of honeydew at each pixel of a hyperspectral image of raw cotton samples. The results show that the PCA model was able to differentiate cotton samples based on their sugar concentrations. The first two principal components (PCs) explain nearly 91.0% of the total variance. A PLS-R model was built, showing a coefficient of determination in cross-validation (R2cv) of 0.91 and a root mean square error of cross-validation (RMSECV) of 0.036 g. This PLS-R model was able to predict the honeydew content in grams on raw cotton samples for each pixel. In conclusion, UV hyperspectral imaging, in combination with multivariate data analysis, shows high potential for quality control in textiles.
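A hedged sketch of such a chemometric pipeline with scikit-learn is shown below; the file names, component count and cross-validation scheme are assumptions for illustration, not the study's actual settings.

```python
# Illustrative sketch (not the authors' code): per-pixel PLS-R on UV
# hyperspectral data. Assumes X holds pixel spectra (n_pixels x n_bands,
# 225-300 nm) and y the honeydew mass in grams per reference sample.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

X = np.load("cotton_pixel_spectra.npy")   # hypothetical file name
y = np.load("honeydew_grams.npy")         # hypothetical file name

# Exploratory PCA: in the study, the first two PCs separated sugar levels.
scores = PCA(n_components=2).fit_transform(X)

# PLS regression with cross-validation, as described in the abstract.
pls = PLSRegression(n_components=5)       # component count is an assumption
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
print("R2cv   =", r2_score(y, y_cv))
print("RMSECV =", mean_squared_error(y, y_cv) ** 0.5)
```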
Military organizations have special features, such as following different organizational laws in times of peace and war and their specific embeddedness in society and politics. Especially the latter aspect has made the military an important object of study since the beginnings of modern sociology. In the wake of establishing specific sociological accounts, military sociology has developed, dedicated to the different facets of the military. This research is based on different theoretical perspectives but has so far hardly embraced the frameworks of the economics and sociology of conventions (EC/SC). The aim of the chapter is to explore and demonstrate the potential of this approach. In a first step, the state of the art of military sociology research is outlined, and potential avenues for analyzing military forces based on EC/SC are identified. It is argued that especially the connection to organizational theory (military as organization) and civil-military relations, including leadership and professionalism, offers starting points. After introducing existing studies addressing military-related topics with reference to EC/SC, relevant concepts and approaches of convention theory that prove particularly enriching for military research are discussed. An outlook on possible further fields and topics of research is given to concretize what an inclusion of the EC/SC perspective could look like.
Enterprises and societies currently face crucial challenges, while Industry 4.0 is becoming ever more important in the global manufacturing industry. Industry 4.0 offers a range of opportunities for companies to increase the flexibility and efficiency of production processes. The development of new business models can be promoted with digital platforms and architectures for Industry 4.0. Therefore, products from the healthcare sector can increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0 is expected to promote and implement digital platforms and robotics for healthcare and medical communities efficiently. In this paper, we propose various digital platforms and robotics designed and evaluated for digital healthcare as well as for the manufacturing industry with Industry 4.0. We argue that the design of an open healthcare platform, "Open Healthcare Platform 2030 - OHP2030", for medical product design and robotics can be developed with AIDAF. The vision of AIDAF applications to enable Industry 4.0 in the OHP2030 research initiative is explained and referenced, and extended in the context of Society 5.0.
Enterprises and societies currently face essential challenges, and digital transformation can contribute to their resolution. Enterprise architecture (EA) is useful for promoting digital transformation in global companies and information societies covering ecosystem partners. The advancement of new business models can be promoted with digital platforms and architectures for Industry 4.0 and Society 5.0. Therefore, products from sectors such as healthcare, manufacturing and energy can increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0, combined with the design thinking approach, is expected to promote and implement digital platforms and digital products for the healthcare, manufacturing and energy communities more efficiently. In this paper, we propose various cases of digital transformation in which digital platforms and products are designed and evaluated for digital IT, digital manufacturing and digital healthcare with Industry 4.0 and Society 5.0. The vision of AIDAF applications to perform digital transformation in global companies is explained and referenced, and extended toward digitalized ecosystems such as Society 5.0 and Industry 4.0.
Knowledge transfer is very important to our knowledge-based society, and many approaches have been proposed to describe this transfer. However, these approaches take a rather abstract view of knowledge transfer, which makes implementation difficult. To address this issue, we introduce a layered model for knowledge transfer that structures the individual steps of knowledge transfer in more detail. This paper describes the process and also gives an example of the application of the layered model for knowledge transfer. The example is located in the area of business process modelling. Business processes contain the important knowledge describing the procedures a company uses to produce products and services. Knowledge transfer is the fundamental basis of the modelling and usage of business processes, which makes it an interesting use case for the layered model for knowledge transfer.
This paper develops a linear and tractable model of financial bubbles. I demonstrate the application of the linear model and study the root causes of financial bubbles. Moreover, I derive leading properties of bubbles. This model enables investors and regulators to react to market dynamics in a timely manner. In conclusion, the linear model is helpful for the empirical verification and detection of financial bubbles.
This paper analyzes governance mechanisms for different group sizes. The European sovereign debt crisis has demonstrated the need for efficient governance for different group sizes. First, I find that self-governance only works for sufficiently homogeneous and small neighbourhoods. Second, as the union expands, the effect of credible self-governance decreases. Third, spill-over effects amplify the size effect. Fourth, I show that sufficiently large monetary unions are better off with costly but external governance or a free market mechanism. Finally, intermediate-size unions are the most difficult to govern efficiently.
Applied mathematical theory for monetary-fiscal interaction in a supranational monetary union
(2014)
I utilize a differentiable dynamical system à la Lotka-Volterra to explain monetary and fiscal interaction in a supranational monetary union. The paper demonstrates an applied mathematical approach that provides useful insights into the interaction mechanisms in theoretical economics in general and in a monetary union in particular. I find that a common central bank is necessary but not sufficient to tackle the new interaction problems in a supranational monetary union, such as the free-riding behaviour of fiscal policies. Moreover, I show that supranational institutions, rules or laws are essential to mitigate violations by decentralized fiscal policies.
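For orientation, the generic Lotka-Volterra form such a model builds on is (the paper's exact specification may differ, and the interpretation below is only illustrative):

\[ \dot{x} = x(\alpha - \beta y), \qquad \dot{y} = y(\delta x - \gamma), \]

where, in this setting, \(x\) and \(y\) could stand for the fiscal stance of member states and the monetary stance of the common central bank, and \(\alpha, \beta, \gamma, \delta > 0\) are interaction parameters.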
Application to CAE systems
(2016)
Due to the broad acceptance of CAD systems based on 3D solids, the geometric data of all common CAE (Computer-Aided Engineering) software, at least in mechanical engineering, are based on these solids. We use solid models, where the space filled by material is defined in a simple and easily usable way. Solid models allow for the development of automated meshers that transform solid volumes into finite elements. Even after some unacceptable initial trials, users are able to generate meshes of non-trivial geometries within minutes to hours, instead of days or weeks. Once meshing was no longer the cost-limiting factor of finite element studies, numerical simulation became a tool for smaller industries as well.
In the early days of automated meshing development, there were discussions over the use of tetrahedral (Fig. 4.1) or hexahedral meshes. But after a short period of time, it became evident that there were and will always be many problems in using automated meshers to generate hexahedral elements. So today nearly all automated 3D-meshing systems use tetrahedral elements.
Knowledge-intensive processes are characterized by participants deciding on the next process activities on the basis of the available information and their expert knowledge. The decisions of these knowledge workers are in general non-deterministic. It is not possible to model these processes in advance and to automate them using the process engine of a BPM system. Hence, in this context a process instance is called a case, because there is no predefined model that could be instantiated. Domain-specific or general case management systems are used to support the knowledge workers. These systems provide all case information and enable users to define the next activities, but they have no or only limited activity recommendation capabilities. In the following paper, we present a general concept for a self-learning system based on process mining that suggests the next best activity for a given case based on quantitative and qualitative data. As a proof of concept, it was applied to the area of insurance claims settlement.
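As a hedged illustration of the core idea, the sketch below learns activity transitions from an event log and recommends the most frequent successors; the paper's actual process-mining approach is more elaborate (it also weighs qualitative criteria), and the log is a made-up claims example.

```python
# Minimal frequency-based next-activity recommender over an event log,
# given as lists of activity names per case (a toy stand-in for the
# self-learning system described in the abstract).
from collections import Counter, defaultdict

def train(event_log):
    transitions = defaultdict(Counter)
    for trace in event_log:
        for current, following in zip(trace, trace[1:]):
            transitions[current][following] += 1
    return transitions

def recommend(transitions, last_activity, k=3):
    return [a for a, _ in transitions[last_activity].most_common(k)]

log = [["claim received", "assess damage", "approve", "pay out"],
       ["claim received", "assess damage", "reject"],
       ["claim received", "request documents", "assess damage", "approve", "pay out"]]
model = train(log)
print(recommend(model, "assess damage"))  # e.g. ['approve', 'reject']
```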
Like many others, fashion companies have to deal with a global and very competitive environment. Thus, companies rely on accurate sales forecasts as a key success factor for efficient supply chain management. However, forecasters have to take into account some specificities of the fashion industry. To respond to these constraints, a variety of different forecasting methods exists, including new, computer-based predictive analytics. After an evaluation of different methods, their application to the fashion industry is investigated through semi-structured expert interviews. Despite several benefits, predictive analytics is not yet frequently used in practice. This research does not only reflect an industry profile but also gives important insights into the future potential and obstacles of predictive analytics.
Literature reviews are essential for any scientific work, both as part of a dissertation and as stand-alone work. Scientists benefit from the fact that more and more literature is available in electronic form, and finding and accessing relevant literature has become easier through scientific databases. However, the traditional literature review method is characterized by a highly manual process, while technologies and methods in big data, machine learning, and text mining have advanced. Especially in areas where research streams are rapidly evolving and topics are becoming more comprehensive, complex, and heterogeneous, it is challenging to provide a holistic overview and identify research gaps manually. Therefore, we have developed a framework that supports the traditional approach of conducting a literature review using machine learning and text mining methods. The framework is particularly suitable in cases where a large amount of literature is available and a holistic understanding of the research area is needed. The framework consists of several steps in which the critical mind of the scientist is supported by machine learning. The unstructured text data is transformed into a structured form through data preparation realized with text mining, making it applicable for various machine learning techniques. A concrete example in the field of smart cities makes the framework tangible.
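A hedged sketch of the kind of text-mining step such a framework could use is shown below: clustering abstracts by TF-IDF similarity to surface research streams. The corpus and cluster count are placeholders, not taken from the paper.

```python
# Toy text-mining step: vectorize abstracts with TF-IDF, then cluster
# them to group related literature (stand-in for one framework step).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = ["smart city traffic sensing with IoT devices",
             "citizen participation platforms for urban planning",
             "urban energy grid optimisation and smart metering"]  # placeholders

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
for text, label in zip(abstracts, labels):
    print(label, text[:40])
```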
Development work within an experimental environment, in which certain properties are investigated and optimized, requires many test runs and is therefore often associated with long execution times, costs and risks. This can affect product, material and technology development in industry and research. New digital enabling technologies offer the possibility to automate complex manual work steps in a cost-effective way, to increase the relevance of the results and to accelerate the processes many times over. In this context, this article presents a low-cost, modular and open-source machine vision system for test execution and evaluates it on the basis of a real industrial application. For this purpose, a methodology was developed for the automated execution of the load intervals, for process documentation, and for the evaluation of the generated data by means of machine learning to classify wear levels. The software and the mechanical structure are designed to be adaptable to different conditions and components and to a variety of tasks in industry and research. The mechanical structure is required for tracking the test object and represents a motion platform positioned independently by machine vision operators or machine learning. The state of the test object is evaluated by transfer learning after the initial documentation run. The manual procedure for classifying the visually recorded data on the state of the test object is described for the training material. This leads to increased resource efficiency on the material as well as on the personnel side, since on the one hand the significance of the tests performed is increased by the continuous documentation, and on the other hand the responsible experts can be assigned time-efficiently. The presence and know-how of the experts are therefore only required for defined and decisive events during the execution of the experiments. Furthermore, the generated data are suitable for later use as an additional data source for predictive maintenance of the developed object.
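A hedged sketch of the transfer-learning step follows: a pretrained CNN backbone with a new head for wear-level classes. The class count, data pipeline and training loop are placeholders; the article's actual setup is not reproduced here.

```python
# Transfer learning for wear-level classification: freeze a pretrained
# ResNet-18 backbone and train only a new classification head.
import torch
import torch.nn as nn
from torchvision import models

NUM_WEAR_LEVELS = 4                       # assumption, not from the article
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():              # freeze pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_WEAR_LEVELS)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)      # dummy batch of inspection images
labels = torch.randint(0, NUM_WEAR_LEVELS, (8,))
loss = criterion(model(images), labels)   # one illustrative training step
loss.backward()
optimizer.step()
print("batch loss:", float(loss))
```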
This paper reports an analysis of the application and impact of FMEA on the susceptibility of generic IT networks. It is not new that in communication systems the frequency and the data transmission rate play a very important role. The rapid increase in the miniaturization of electronic devices leads to high sensitivity to electromagnetic interference. Since IT networks, with their data transfer rates, make a huge contribution to this development, it is very important to monitor their functionality. Therefore, tests are performed to observe and ensure the data transfer rate of IT networks under IEMI. A fault tree model is presented, and the effects observed during the radiation of disturbances onto a complex system by HPEM interference sources are described using a continuous and consistent model from the physical layer to the application layer.
To illustrate the power and the pitfalls of Bionic Optimization, we show some examples spanning classes of applications and employing various strategies. These applications cover a broad range of engineering tasks. Nevertheless, there is no guarantee that our experiences and our examples will be sufficient to deal with all questions and issues in a comprehensive way. As a general rule, it might be stated that for each class of problems, novices should begin with a learning phase. So, in this introductory phase, we use simple and quick examples, e.g., small FE models, linear load cases, short time intervals and simple material models. Here, beginners within the Bionic Optimization community can learn which parameter combinations to use. In Sect. 3.3 we discuss strategies for accelerating optimization studies. Making use of these parameters as starting points is one way to set the specific ranges, e.g., the number of parents and kids, crossing, mutation radii and the number of generations. On the other hand, these trial runs will doubtless indicate that Bionic Optimization needs large numbers of individual designs, and considerable time and computing power. We recommend investing enough time preparing each task in order to avoid the frustration should large jobs fail after long calculation times.
The aim of this paper is to show to what extent Artificial Intelligence can be used to optimize forecasting capability in procurement, and to compare AI with traditional statistical methods. At the same time, this article presents the current status of the research project ANIMATE. The project applies Artificial Intelligence to forecast customer orders in medium-sized companies.
Precise forecasts are essential for companies for planning, decision-making and controlling. Forecasts are applied, e.g., in the areas of supply chain, production and purchasing. Medium-sized companies face major challenges in choosing suitable methods to improve their forecasting ability.
Companies often use proven methods of classical statistics, such as the ARIMA algorithm. However, simple statistics often fail when applied to complex, non-linear predictions.
Initial results show that even a simple MLP ANN produces better results than traditional statistical methods. Furthermore, a baseline of the company (Implicit Sales Expectation) was used to compare performance. This comparison also shows that the proposed AI method is superior.
Before the developed method can become part of corporate practice, it must be further optimized. The model has difficulties with strong declines, for example due to holidays. The authors are confident that the model can be further improved, for example through more advanced methods, such as a FilterNet, but also through more data, such as external data on holiday periods.
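As a hedged illustration of the kind of MLP baseline described above, the following sketch trains a small regressor on lagged values of a synthetic order series; the window size, architecture and data are assumptions, not project details.

```python
# Toy MLP forecaster: predict the next value of a series from its last
# `window` values (stand-in for the order-forecasting setup above).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = 100 + 10 * np.sin(np.arange(300) / 12) + rng.normal(0, 2, 300)

window = 12                                   # lag features (assumption)
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X[:-50], y[:-50])
print("test MAE:", np.abs(model.predict(X[-50:]) - y[-50:]).mean())
```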
In this paper, a method for the generation of gSPMs with ontology-based generalization was presented. The resulting gSPM was modeled with BPMN/BPMNsix in an efficient way and can be executed with BPMN workflow engines. In the next step, the implementation of resource concepts, anatomical structures and transition probabilities for workflow execution will be realized.
Purpose: Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail.
Methods: We combined two modeling systems, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new standards Case Management Model and Notation (CMMN) and Decision Model and Notation (DMN).
Results: First, we explain how CMMN, DMN and BPMN could be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention.
Conclusion: Effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination makes it possible to depict complex processes with complex decisions and thus offers a significant advantage for modeling perioperative processes.
An apparatus and method for analyzing a flow of material having an inlet region, a measurement range and an outlet region, and having a first diverter and a second diverter, and a deflection area, wherein in a first state of operation, the two diverters form a continuous first material flow space from the inlet region via the first diverter through the measurement range, via the second diverter to the outlet region, and in a second state of operation, form a continuous second material flow space from the inlet region via the first diverter through the deflection area, via the second diverter to the outlet region.
Today, many scientific works use deep learning algorithms on time series to detect physiological events of interest. In sleep medicine, this is particularly relevant for detecting sleep apnea, specifically obstructive sleep apnea events. Deep learning algorithms with different architectures are used to achieve decent results in accuracy, sensitivity, etc. Although there are models that can reliably determine apnea and hypopnea events, another essential aspect to consider is the explainability of these models, i.e., why a model makes a particular decision. Another critical factor is how these deep learning models determine how severe obstructive sleep apnea is in patients based on the apnea-hypopnea index (AHI). Deep learning models trained by two approaches to AHI determination are presented in this work. The approaches vary in the data format the models are fed: full time series and window-based time series.
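A hedged sketch of the window-based approach follows, under stated assumptions: a per-window apnea/hypopnea detector (here a trivial placeholder, not a trained network) is applied to fixed windows of a night recording, and the AHI is computed as events per hour of sleep.

```python
# Window-based AHI estimation: split a signal into fixed windows, count
# detected events, divide by hours of sleep. Sampling rate, window length
# and the toy detector are assumptions for illustration.
import numpy as np

def estimate_ahi(signal, fs, window_s, detector, sleep_hours):
    n = int(fs * window_s)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    events = sum(detector(w) for w in windows)   # detector returns 1 per event
    return events / sleep_hours

fs = 32                                          # Hz (assumption)
night = np.random.randn(fs * 3600 * 8)           # 8 h of dummy signal
toy_detector = lambda w: int(w.std() < 0.95)     # stand-in for a trained model
print("AHI ~", estimate_ahi(night, fs, 60, toy_detector, sleep_hours=8.0))
```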
In this paper, we propose a radical new approach for scale-out distributed DBMSs. Instead of hard-baking an architectural model, such as a shared-nothing architecture, into the distributed DBMS design, we aim for a new class of so-called architecture-less DBMSs. The main idea is that an architecture-less DBMS can mimic any architecture on a per-query basis on-the-fly without any additional overhead for reconfiguration. Our initial results show that our architecture-less DBMS AnyDB can provide significant speedup across varying workloads compared to a traditional DBMS implementing a static architecture.
Application-neutral and provisional cabling has existed for more than 25 years. The subject matter has become increasingly complex. The concept, originally intended for the IT networking of offices, has expanded over the years to further areas of application, e.g. data centres and industrially or privately used areas. In addition to a general requirements profile, each area of application has its own specific set of rules. Due to advancing digitalization, constant technological adaptation and further development of performance capability are also necessary. Against this background, it is becoming increasingly difficult to read the extensive sets of standards, to understand how they interact and to apply them optimally.
This is exactly where the book comes in! In this book, the communication cabling system is explained clearly and in context, from the idea through planning, specification, implementation and commissioning to maintenance. Its centrepiece is the presentation and description of the current standards series DIN EN 50173 (VDE 0800-173) and DIN EN 50174 (VDE 0800-174). After first addressing the site prerequisites, the general and specific requirements for information technology cabling follow, then the components used, cables and connectors, and finally the planning, specification, implementation and metrological evaluation of the installation. The authors' aim is not only to convey a basic understanding of the relevant requirement profiles, but also to keep an eye on the overall context, for example regarding future-proofing and the influence of different environmental conditions on the design of the cabling components.
Steady-state efficiency optimization techniques for induction motors are state of the art, and various methods have already been developed. This paper provides new insights into efficiency-optimized operation in the dynamic regime. The paper proposes an anticipative flux modification in order to decrease losses during torque and speed transients. These trajectories are analyzed based on a numerical study for different motors. Measurement results for one motor are given as well.
When companies want to become more socially and ecologically sustainable, it usually begins with announcements: We will get more employees to come to work by bicycle! We will abolish the currywurst in the canteen! We will do more to support disadvantaged young people! In research on environment, social and governance (ESG), such announcements are referred to as "aspirational talk". They express a company's aspiration: "We acknowledge the challenges and want to master them." The announcements should, of course, then be followed by action. But what happens when employees perceive a gap between what was announced and what is actually done?
Porous silica materials are often used for drug delivery. However, systems for simultaneous delivery of multiple drugs are scarce. Here we show that anisotropic and amphiphilic dumbbell core–shell silica microparticles with chemically selective environments can entrap and release two drugs simultaneously. The dumbbells consist of a large dense lobe and a smaller hollow hemisphere. Electron microscopy images show that the shells of both parts have mesoporous channels. In a simple etching process, the properly adjusted stirring speed and the application of ammonium fluoride as etching agent determine the shape and the surface anisotropy of the particles. The surface of the dense lobe and the small hemisphere differ in their zeta potentials consistent with differences in dye and drug entrapment. Confocal Raman microscopy and spectroscopy show that the two polyphenols curcumin (Cur) and quercetin (QT) accumulate in different compartments of the particles. The overall drug entrapment efficiency of Cur plus QT is high for the amphiphilic particles but differs widely between Cur and QT compared to controls of core–shell silica microspheres and uniformly charged dumbbell microparticles. Furthermore, Cur and QT loaded microparticles show different cancer cell inhibitory activities. The highest activity is detected for the dual drug loaded amphiphilic microparticles in comparison to the controls. In the long term, amphiphilic particles may open up new strategies for drug delivery.
As part of the scientific specialization at Reutlingen University, this thesis examines the requirements for and feasibility of computer-aided recognition of German Sign Language (DGS) and the German finger alphabet. The findings of this work serve as a basis for developing a system for translating signs of DGS or the finger alphabet into written German. First, basic information on the history, structure and grammar of DGS and the finger alphabet is presented. The signs are to be recognized using optical motion sensors. For this purpose, different sensor types are considered and compared. Subsequently, the user-specific and technical requirements are analyzed. The former are based on a survey of a focus group of deaf and hearing people from the field of education for the deaf, hard of hearing and speech impaired. From the information of the requirements analysis, the feasibility from a technical and user-specific point of view follows to a certain degree. Finally, the requirements placed on the system to be developed are summarized, and a recommendation for the development of a prototype is given.
As a consequence of the current digitalization in the manufacturing industry, applications and services with potentially positive effects on factors such as effectiveness and quality of work are being developed. Gamification can be a suitable approach for strengthening motivating aspects in the work context. This paper presents the initial conception and evaluation of a gamification approach for users of an AI service for machine optimization and extracts possible requirements for a concept to increase motivation.
Requirements for the human-machine interface in the automobile on the way to autonomous driving
(2017)
In recent decades, more and more driver assistance systems have found their way into the automobile, paving the way to the fully autonomous vehicles of the future. Many manufacturers already offer equipment variants of their vehicles that are prepared for the transition to a fully autonomous future. To take people along on this path, several requirements are placed on the human-machine interface (HMI) of the automobile. For the partially autonomous vehicles of the next generation, the handover between manual and autonomous driving must be designed as well as possible for humans. This thesis looks at selected approaches for future HMI systems and evaluates them on the basis of the handover times between human and machine. A transformation of the automotive HMI is recommended in order to familiarize people with the new technologies.
This thesis develops and explains requirements for a digital reference model of the cell and gene therapy (CGT) supply chain by means of a systematic literature review, partially applying the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 method. The results of the literature review substantiate that the CGT supply chain requires standardized and automated processes, must meet certain transport requirements and must be able to guarantee complete traceability. The requirements for the reference model are partly based on the requirements of the classic Supply Chain Operations Reference (SCOR) model, but need to be modified and further developed, taking into account the specific characteristics of the CGT supply chain. On the basis of a reference model for the CGT supply chain that takes into account the requirements identified in this work, an overarching management platform can be built. With the digital mapping and networking of all activities, the foundation is laid for integration into an enterprise resource planning (ERP) system for effective data and process mining. With increasingly better data quality and quantity along the processes of the CGT supply chain, more and more information about the processes themselves can be generated, from which further improvement approaches emerge. A CGT management platform thus forms the basis for all processes within the CGT supply chain for a continuous improvement process.
This paper provides new evidence on the formation and anchoring of inflation expectations. I conduct a game experiment and analyze the adjustment as well as the impact of credible targets on expectations. In addition, I evaluate the idiosyncratic determinants of the formation of expectations. The analysis reveals six results: First, I find evidence that long-term inflation expectations are firmly anchored to a credible target. Second, a temporary deviation due to unexpected monetary policy might trigger a decline in credibility, and third, a de-anchoring of expectations due to uncertainty. Fourth, I find that people change their expectations little if a credible target exists. Fifth, expectations exhibit a large degree of time-variance only in environments without a target. Sixth, the dynamic adjustment to an 'incomplete' equilibrium, which is theoretically unstable, is nevertheless rapid and persistent in the case of credible targets. All in all, I demonstrate a unique game setup with contributions to both experimental and monetary economics.
Information technology systems that support clinical workflows are currently limited to organizational processes. This work presents a first approach to how such a system can be introduced into the perioperative area. For this purpose, a workflow engine was linked to a perioperative process visualization. The system was implemented according to the model-view-controller principle. The workflow engine serves as the "controller"; the "model" is a process model with the required clinical data. The "view" was realized as a decoupled application based on web technologies. Three visualizations, the workflow engine and the connection of both via a database interface were successfully implemented. The three visualizations comprise one view for the OR coordinator, one for the circulating nurse and one providing an overview of an operation.
To bring a pattern-based perspective to the SOA vs. microservices discussion, we qualitatively analyzed a total of 118 SOA patterns from two popular catalogs for their (partial) applicability to microservices. Patterns had to hold up to five derived microservices principles to be applicable. 74 patterns (63%) were categorized as fully applicable, 30 (25%) as partially applicable, and 14 (12%) as not applicable. The most frequently violated microservices characteristics were Decentralization and Single System. The findings suggest that microservices and SOA share a large set of architectural principles and solutions in the general space of service-based systems, while only having a small set of differences in specific areas.
This paper provides a quantitative approach to measuring the effectiveness of ambush marketing by using Google data. To our knowledge, it is one of the first studies that develops an empirical approach directly measuring the attention effect of ambush marketing in sports. The data set consists of 14 ambushers (treatment group) and 26 official sponsors (control group) and covers the period from 2004 to 2012. These firms conducted marketing activities during the past football World Cups and European Championships. The innovation in our paper is the measurement of attention by means of Google. The results are as follows: First, ambush marketing increases product attention significantly. Second, the product awareness of ambushers is greater than or equal to that of official sponsors. Finally, we demonstrate that ambush marketing has positive impacts on a company's performance. Overall, we conclude that Google provides new insights for the analysis of ambush marketing.
A major lesson of the recent financial crisis is that money market freezes have major macroeconomic implications. This paper develops a tractable model in which we analyze the microeconomic and macroeconomic implications of a systemic banking crisis. In particular, we consider how the systemic crisis affects the optimal allocation of funding for businesses. We show that a central bank should reduce the interest rate to manage a systemic shock and hence smooth the macroeconomic consequences. Moreover, the analysis offers insight into the rationale of bank behavior and the role of markets in a systemic crisis. We find that the failure to adopt the optimal policy can lead to economic fragility.
Data analysis is becoming increasingly important for pursuing organizational goals, especially in the context of Industry 4.0, where a wide variety of data is available. Numerous challenges arise here, especially when using unstructured data. However, this subject has not been the focus of research so far. This research paper addresses this gap, which is interesting for science and practice alike. In a study, three major challenges of using unstructured data were identified: analytical know-how, data issues, and variety. Additionally, measures to improve the analysis of unstructured data in the Industry 4.0 context are described. The paper thus provides empirical insights into challenges and potential measures when analyzing unstructured data. The findings are also presented in a framework. Hence, the next steps of the research project and future research topics become apparent.
Modern wide-bandgap power devices promise higher power conversion performance if the device can be operated reliably. As switching speed increases, the effects of parasitic ringing become more prominent, causing potentially damaging overvoltages during device turn-off. Estimating the expected additional voltage caused by such ringing enables more reliable designs. In this paper, we present an analytical expression to calculate the expected overvoltage caused by parasitic ringing, based on parasitic element values and operating point parameters. Simulations and measurements confirm that the expression can be used to find the smallest rise time of the switches' drain-source voltage for minimum overvoltage. The given expression also allows predicting the trade-off in overvoltage amplitude when faster rise times are required.
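The paper derives its own expression; as a point of reference (an assumption here, not the authors' formula), the standard first-order estimate for the turn-off overvoltage of an undamped parasitic LC loop is

\[ V_{\mathrm{peak}} \;\approx\; V_{\mathrm{bus}} + I_{\mathrm{off}}\,\sqrt{\frac{L_{\sigma}}{C_{\mathrm{oss}}}}, \]

where \(L_{\sigma}\) is the parasitic loop inductance, \(C_{\mathrm{oss}}\) the switch output capacitance, and \(I_{\mathrm{off}}\) the current being switched off; the square root is the characteristic impedance of the ringing loop.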
While driving, stress is caused by situations in which drivers estimate their ability to manage the driving demands as insufficient or lose the capability to handle the situation. This leads to increased numbers of driver mistakes and traffic violations. Additional stressing factors are time pressure, road conditions, or dislike of driving. Therefore, stress affects the driver and road safety. Stress is classified into two categories depending on its duration and its effects on body and psyche: short-term eustress and constantly present distress, which causes degenerative effects. In this work, we focus on distress. Wearable sensors are handy tools for collecting biosignals like heart rate, activity, etc. Easy installation and their non-intrusive nature make them convenient for estimating stress. This study focuses on the investigation of stress and its implications. Specifically, the research analyzes stress within a select group of individuals from both Spain and Germany. The primary objective is to examine the influence of recognized psychological factors, including personality traits such as neuroticism, extroversion and psychoticism, on stress and road safety. Stress levels were estimated by collecting physiological parameters (R-R intervals) using a Polar H10 chest strap. We observed that personality traits such as extroversion exhibited similar trends during relaxation, with an average heart rate 6% higher in Spain and 3% higher in Germany. However, while driving, introverts on average experienced more stress, with rates 4% and 1% lower than extroverts in Spain and Germany, respectively.
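As a hedged illustration, the sketch below computes RMSSD, a common heart rate variability measure derived from R-R intervals such as those a Polar H10 records. The abstract does not state which HRV metric the study used; RMSSD is shown only as a typical choice, and the interval values are made up.

```python
# RMSSD: root mean square of successive R-R interval differences, a
# standard short-term HRV metric often used as a stress proxy.
import numpy as np

def rmssd(rr_ms):
    """RMSSD in ms from a sequence of R-R intervals in ms."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

rr = [812, 795, 780, 821, 790, 776, 805]   # example R-R intervals (ms)
print(f"RMSSD = {rmssd(rr):.1f} ms")        # lower values suggest more stress
```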
Today's logistics systems are characterized by uncertainty and constantly changing requirements. Rising demand for customized products, short product life cycles and a large number of variants increase the complexity of these systems enormously. In particular, intralogistics material flow systems must be able to adapt to changing conditions at short notice, with little effort and at low cost. To fulfil these requirements, the material flow system needs to be flexible in three important parameters, namely layout, throughput and product. While the scope of the flexibility parameters is described in the literature, their respective effects on an intralogistics material flow system and the influencing factors are mostly unknown. This paper describes how the flexibility parameters of an intralogistics system can be determined using a multi-method simulation. The study was conducted in the learning factory "Werk150" on the campus of Reutlingen University, with its different means of transport and processes, and validated in practical experiments.
This work is a comparative study of survey tools intended to help developers select a suitable tool for application in an AAL environment. The first step was to identify the basic functionality required of survey tools used for AAL technologies and to compare these tools by their functionality and features. The comparative study was derived from the data obtained, previous literature studies and further technical data. A list of requirements was compiled and ordered in terms of relevance to the target application domain. With the help of an integrated assessment method, a generalized estimate value was calculated, and the result is explained. Finally, the planned application of this tool in a running project is described.
Today, digitalization is firmly anchored in society and business. It is also recognized as having a significant impact on the retailing sector. The in-store display of moving images has so far, however, gained little attention from researchers. The aim of this research is to provide a first estimation of the current state of moving-image distribution in stationary retail stores. A store check was the basis for analysis and evaluation. In sum, 152 stores were analyzed in Stuttgart, Germany. Of the 152 observed stores, 62 showed a total of 177 moving images. Detailed analyses of the content, mood, colour and actors of the motion pictures showed that all aspects are very well harmonized with the target group of the store. The chapter provides a basic estimation of the in-store diffusion of moving images. Thereby, avenues for further research are opened up.
The possibility of bringing the interference source close to the potential target is characterized by the property of the source as stationary, portable, mobile, very mobile or highly mobile [3]. Starting from the existing and well-known IEME interference, or IEMI (intentional electromagnetic interference), and the already existing classifications, an analysis of methods is carried out based on a comparative study of the methods used to classify the intentional EM environment, taking into account the frequency, the cost, the amplitude of the noise signal, the radiated power and the energy of a radiation pulse.
There are several intra-operative use cases that require the surgeon to interact with medical devices. I used the Leap Motion Controller as an input device for three use cases: 2D interaction (e.g. advancing EPR data), selection of a value (e.g. room illumination brightness) and a point-and-click application scenario. I evaluated the Palm Mouse as the most suitable gesture solution for coordinating the mouse and advise using the implementation that uses all fingers to perform a click. This small case study introduces the implementations and methods that led to those recommendations.
In medicine, various maturity models exist that can support the digitalization of hospitals. The requirements for a maturity model for this purpose comprise aspects from general and specific areas of the hospital. The analysis of the maturity models HIN, CCMM, EMRAM and O-EMRAM reveals large gaps in the area of the operating room as well as missing aspects in the emergency department. No comprehensive maturity model was found. A combination of HIN and CCMM could cover almost all areas sufficiently. Additional supplements through specialized maturity models, or even the development of a comprehensive maturity model, would be sensible.
This paper showed how a PLC program written in the ladder diagram programming language can be analyzed with the help of methods for the analysis of Petri nets. The aim of the method is not verification in the strict sense but the detection of forbidden or undesired states. Rules for transforming a sequence implemented in ladder diagram into a Petri net were given, and the capability of the approach was demonstrated by analyzing an incorrectly implemented sequence. The example shows that program errors can be detected even before a test on the real plant. In the further development of the method, one focus is on the generalization to program organization units developed in ladder diagram that implement not only pure sequences. Another important development step is graphical support for troubleshooting in the reachability graph, so that, overall, a powerful tool for supporting the implementation of sequence controls in ladder diagram becomes available.
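A hedged sketch of the analysis idea follows: breadth-first exploration of the reachability graph of a small place/transition net, flagging forbidden markings. The transformation rules from ladder diagram to Petri net are not reproduced; the net below is a made-up toy example.

```python
# Reachability analysis of a small Petri net: enumerate all reachable
# markings and report those that violate a safety condition.
from collections import deque

def fire(marking, pre, post):
    """Return the successor marking if the transition is enabled, else None."""
    if all(marking[p] >= n for p, n in pre.items()):
        m = dict(marking)
        for p, n in pre.items():
            m[p] -= n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
        return m
    return None

def reachable(m0, transitions):
    """Breadth-first enumeration of the reachability graph's markings."""
    seen, queue = {tuple(sorted(m0.items()))}, deque([m0])
    while queue:
        m = queue.popleft()
        yield m
        for pre, post in transitions:
            m2 = fire(m, pre, post)
            if m2 and tuple(sorted(m2.items())) not in seen:
                seen.add(tuple(sorted(m2.items())))
                queue.append(m2)

# Toy net with a forbidden state: both actuators active at the same time.
ts = [({"idle": 1}, {"a1": 1}), ({"idle": 1}, {"a2": 1})]
for m in reachable({"idle": 2, "a1": 0, "a2": 0}, ts):
    if m["a1"] and m["a2"]:
        print("forbidden marking reached:", m)
```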
Research question: Polysomnography (PSG) is the clinical standard procedure and reference for sleep measurement and the classification of individual sleep stages. Alternative approaches to this elaborate procedure could offer several advantages if the measurements are carried out in a more comfortable way. The main goal of this research study is to develop an algorithm for the automatic classification of sleep stages that uses only movement and respiration signals [1].
Patients and methods: After analyzing current research, we chose multinomial logistic regression as the basis for the approach [2]. To increase the accuracy of the evaluation, four features derived from movement and respiration signals were developed. For the evaluation, the nocturnal recordings of 35 persons, provided by Charité-Universitätsmedizin Berlin, were used. The average age of the participants was 38.6 +/- 14.5 years, and the average BMI was 24.4 +/- 4.9 kg/m². Since the algorithm works with three stages, the stages N1, N2 and N3 were merged into one NREM stage. The available data set was strictly split into a training data set of about 100 h and a test data set of about 160 h of nocturnal recordings. Both data sets had a similar ratio of men to women, and the average BMI showed no significant deviation.
Results: The algorithm was implemented and delivered successful results: the accuracy of detecting wake/NREM/REM phases is 73%, with a Cohen's kappa of 0.44 for the 19,324 analyzed sleep epochs of 30 s each. The observed overestimation of the NREM phase can be partly explained by its prevalence in a typical sleep pattern. Even the use of a balanced training data set could not completely solve this problem.
Conclusions: The results achieved have confirmed the suitability of the approach in principle. It has the advantage that only movement and respiration signals are used, which can be recorded with less effort and more comfortably for users than, e.g., cardiac or EEG signals. Therefore, the new system represents a significant improvement compared to existing approaches. Merging the algorithmic software described here with the hardware system for measuring respiration and body movement signals described in [1] into an autonomous, contactless system for continuous sleep monitoring is a possible direction of future work.
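A hedged sketch of the modelling core follows: a multinomial logistic regression over per-epoch features. The feature values are random placeholders; the study's four movement/respiration features are not reproduced here.

```python
# Three-class sleep staging (wake/NREM/REM) with logistic regression over
# four features per 30-s epoch. With the lbfgs solver, scikit-learn fits
# a multinomial model for multiclass targets by default.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1200, 4))            # 4 features per epoch (dummy)
y_train = rng.choice(["wake", "NREM", "REM"], size=1200, p=[0.15, 0.65, 0.2])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.predict(rng.normal(size=(3, 4))))     # predicted stage per new epoch
```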
This work presents the possibilities of 3D controllers for use in interventional radiology and, in particular, for controlling real-time magnetic resonance imaging (MRI). This is interesting with regard to controlled navigation into a target tissue. With real-time imaging, the interventionalist can follow the course of the intervention; however, up to now they cannot control the MRI scanner themselves while performing the intervention, as this is done by an assistant in the adjacent room. Given the high noise level, however, communication is very difficult. This work addresses this issue and analyzes 3D controllers for their suitability for real-time control of an MRI scanner. Tracking-based and tracking-less devices were considered. As a result, tracking-based methods proved less suitable because of the insufficient interpretation of the inputs. The tracking-less devices, in contrast, are suitable due to the correct interpretation of all inputs and their intuitive operation.
The power supply of electronic control units in the automotive sector is increasingly provided by switching regulators. The SEPIC (single-ended primary inductance converter) can convert a voltage both up and down and could thus replace classic buck and boost converters. This contribution examines the SEPIC regarding its suitability for automotive applications. For this purpose, a large-signal and a small-signal analysis of the converter were carried out, reproduced with suitable simulation models and compared with measurements. The main advantages of the SEPIC are:
1. a seamless transition between buck and boost operation, 2. low input ripple, 3. DC short-circuit robustness. The SEPIC is also an interesting alternative in terms of efficiency and EMC behaviour. The capacitor between input and output continuously carries current; the associated failure risk is discussed on the basis of the RMS currents.
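For context, the ideal steady-state conversion ratio of a SEPIC in continuous conduction mode, which underlies the seamless buck/boost transition mentioned above, is

\[ \frac{V_{\mathrm{out}}}{V_{\mathrm{in}}} = \frac{D}{1-D}, \]

where \(D\) is the duty cycle: \(D < 0.5\) yields step-down and \(D > 0.5\) step-up operation.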