Refine
Document Type
- Journal article (1244)
- Conference proceeding (1039)
- Book chapter (381)
- Book (223)
- Doctoral Thesis (54)
- Working Paper (38)
- Anthology (32)
- Report (25)
- Patent / Standard / Guidelines (24)
- Issue of a journal (19)
Is part of the Bibliography
- yes (3088)
Institute
- ESB Business School (1107)
- Informatik (875)
- Technik (509)
- Life Sciences (343)
- Texoversum (220)
- Zentrale Einrichtungen (16)
Publisher
- Springer (473)
- IEEE (252)
- Elsevier (243)
- Hochschule Reutlingen (191)
- MDPI (99)
- Wiley (72)
- Gesellschaft für Informatik e.V. (69)
- Universität Tübingen (65)
- De Gruyter (60)
- VDE Verlag (48)
This book contains exercise problems covered in the Fluid Mechanics lecture in the third semester of the bachelor's degree program in Mechanical Engineering at the Faculty of Engineering (Fakultät Technik) of Hochschule Reutlingen. Most of these exercises are former examination problems and relate to four chapters of the Fluid Mechanics lecture: "fluid statics", "fluid dynamics", "momentum theorem", and "flows with losses". Using the detailed solutions provided, students are able to verify the solution path of individual problems and can thus prepare optimally for the examination.
This collection contains practice and examination problems for the Turbomachinery (Strömungsmaschinen) lecture (three semester hours per week including tutorials), which is offered as part of the module Power and Work Machines (Kraft- und Arbeitsmaschinen) in the fourth semester of the bachelor's degree program in Mechanical Engineering at the Faculty of Engineering of Hochschule Reutlingen. The bachelor's program is designed as general mechanical engineering without a particular specialization, so content from all essential areas of mechanical engineering is taught. Sound knowledge from the modules Mathematics I, Mathematics II, Physics, Engineering Thermodynamics, and Fluid Mechanics is a necessary prerequisite for achieving the learning objectives of the Turbomachinery lecture.
Planning is complex and costly. The speed and efficiency of planning can be optimized by orienting it toward 16 success-critical factors. These criteria ensure that strategic and operational aspects are integrated, that the effort stays within bounds, and that data quality is adequate for the objectives.
Auf dem Weg zu einer neuen Normalität in Schule und Bildung?! : Empfehlungen der Beitragenden
(2023)
The studies and findings presented in this volume show the deep marks the pandemic has left on schools and education. In light of these experiences, numerous researchers and experts, as well as committed parents, children, and adolescents, wish for a "new" normality for schools and education: a normality in which educational inequity is countered more effectively, which is more digital, … What might the path there look like?
Because of its importance for process-oriented companies and the resulting requirements regarding internal organization and audits, Business Process Management (BPM) is a central component. Introducing and maintaining BPM, however, involves considerable effort, since processes must be captured, modeled, and kept up to date. Empirical evidence shows that successful process modeling is a particular challenge that often does not succeed in a satisfactorily sustainable way. A key success factor for sustainable process orientation in companies is therefore consistent and up-to-date process modeling, as well as its adaptation to external and internal change. By means of a literature review, the relevant dimensions for sustainable process orientation based on process modeling are identified. On this basis, an adaptive, action-oriented framework for practical application in companies is derived.
Research question: How can companies position themselves as attractive employers when they are barely known to young Generation Y applicants and their products are perceived as unattractive?
Methodology: scenario study with data from a written survey
Practical implications: Such companies should rely on the value propositions of an employer brand to position themselves as attractive employers for this target group. A prerequisite is precise communication about the relevant attractiveness attributes, tailored to the target group.
Purpose – This paper explores which employer attractiveness attributes Generation Z (Gen Z) talents prioritize. Comparing the findings for female and male participants, this study examines whether gender-specific work value orientations prevail among Gen Z talents and impact their expectations toward employers.
Design/methodology/approach – A survey was conducted among 308 students of business, economics and management in Germany. Data were collected using the employer attractiveness scale of Berthon and colleagues (2005) complemented by an additional dimension focusing on work–life balance.
Findings – Findings indicate that Gen Z talents primarily expect a fun work environment, a positive team atmosphere and supportive relations with colleagues and superiors. Application aspects and work–life balance enabling services are expected the least. Expectations of four of the six attributes measured differ significantly among women and men, indicating that traditional gender assumptions continue to be reflected in the work value orientations of Gen Z talents.
Research limitations/implications – The sample was limited to business, economics and management students in Germany. Additional research should include a wider variety of respondents of different disciplines and countries.
Practical implications – Practical implications refer to emphasizing the social value of employment in the employee value proposition and customizing employer branding activities by gender.
Originality/value – This study contributes to the literature by empirically determining which employer attractiveness attributes Gen Z talents expect and whether and how these expectations vary by gender.
Based on a survey among customers of seven German municipal utilities, we estimate two regression models to identify the most prospective customer segments and their preferences and motivations for participating in peer-to-peer (P2P) electricity trading, and we develop implications for decision-makers in the energy sector and policy-makers for this currently relatively unknown product. Our results show a large general openness of private households towards P2P electricity trading, which is also the main predictor of respondents' intention to participate. It is mainly influenced by individuals' environmental attitude, technical interest, and independence aspiration. Respondents with the highest willingness to participate in P2P electricity trading are mainly motivated by the ability to share electricity, and to a lesser extent by economic reasons. They also have stronger preferences for innovative pricing schemes (service bundles, time-of-use tariffs). Differences between individuals can be observed depending on their current ownership of a microgeneration unit (prosumers) or their probability of installing one (consumers, planners). Planners willing to install microgeneration in the foreseeable future, rather than current prosumers, are considered the most promising target group for P2P electricity trading. Finally, our results indicate that P2P electricity trading could be a promising niche option in the German energy transition.
Based on a survey among customers of seven German municipal utilities, we estimate hierarchical multiple regression models to identify consumer motivations for participating in P2P electricity trading and develop implications for marketing strategies for this currently relatively unknown product. Our results show a low importance of socio-demographics in explaining differences between consumer groups, but a high influence of attitudes, knowledge, and the likelihood to purchase related products. The most valuable target groups that P2P electricity trading marketing strategies of municipal utilities should first and foremost aim at are innovators, especially prosumers. They are well informed about and open-minded concerning electricity sharing and highly environmentally aware. They ask for transparency and are willing to purchase related products. They are attracted by the ability to share generation and consumption and, to a lesser extent, by economic reasons. Our results indicate that marketing efforts should take peer effects into account to a special degree, as they are found to wield great influence on general openness towards, and purchase intention for, P2P electricity products. Finally, municipal utilities should build on the high level of satisfaction and trust of consumers and use P2P electricity trading as a measure to keep customers and win over customers willing to change their supplier.
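The hierarchical (blockwise) regression approach described above can be sketched in a few lines; the predictor blocks, coefficients, and data below are synthetic illustrations, not the study's survey data:

```python
import numpy as np

def r_squared(X, y):
    # In-sample R^2 of an ordinary-least-squares fit with an intercept term.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 200
demo = rng.normal(size=(n, 2))       # block 1: socio-demographic predictors
attitudes = rng.normal(size=(n, 2))  # block 2: attitudes / knowledge
# synthetic purchase intention driven mainly by attitudes, barely by demographics
y = 0.1 * demo[:, 0] + 0.8 * attitudes[:, 0] + rng.normal(size=n)

r2_step1 = r_squared(demo, y)                        # demographics only
r2_step2 = r_squared(np.hstack([demo, attitudes]), y)  # add attitude block
delta_r2 = r2_step2 - r2_step1   # incremental variance explained by block 2
```

The hierarchical step compares R² before and after adding a predictor block; a small increment for socio-demographics and a large one for attitudes would mirror the pattern reported above.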
Atemloses Pfeifkonzert : warum Helene Fischer beim DFB-Pokalfinale gnadenlos ausgepfiffen wurde
(2017)
The merciless whistles against Helene Fischer during her performance in Berlin's Olympiastadion at half-time of the 2017 DFB-Pokal final between Borussia Dortmund and Eintracht Frankfurt reverberated in the media for quite some time. Many people (football fans, Schlager fans, and even complete bystanders) asked the question: did she really deserve that? This article presents and discusses four explanatory approaches as to why Helene Fischer was booed at the cup final and why the Schlager star's performance was out of place there.
While Microservices promise several beneficial characteristics for sustainable long-term software evolution, little empirical research covers what concrete activities industry applies for the evolvability assurance of Microservices and how technical debt is handled in such systems. Since insights into the current state of practice are very important for researchers, we performed a qualitative interview study to explore applied evolvability assurance processes, the usage of tools, metrics, and patterns, as well as participants’ reflections on the topic. In 17 semi-structured interviews, we discussed 14 different Microservice-based systems with software professionals from 10 companies and how the sustainable evolution of these systems was ensured. Interview transcripts were analyzed with a detailed coding system and the constant comparison method.
We found that especially systems for external customers relied on central governance for the assurance. Participants saw guidelines like architectural principles as important to ensure a base consistency for evolvability. Interviewees also valued manual activities like code review, even though automation and tool support were described as very important. Source code quality was the primary target for the usage of tools and metrics. Although most reported issues were related to Architectural Technical Debt (ATD), our participants did not apply any architectural or service-oriented tools and metrics. While participants generally saw their Microservices as evolvable, service cutting and finding an appropriate service granularity with low coupling and high cohesion were reported as challenging. Future Microservices research in the areas of evolution and technical debt should take these findings and industry sentiments into account.
The purpose of this research paper is to illuminate the subject of assortment policy in the German fashion e-commerce market. A short literature review is conducted in order to set up a system of characteristics for contemplating assortments on a strategic level. In a second step, structured observations are conducted to quantitatively analyze and compare the assortments of the leading online fashion retailers within Germany. Based on the literature, the following characteristics for a classification of assortments can be identified: assortment structure, assortment size, assortment width, assortment depth, assortment consistency and rotation, price level, quality mix, fashion degree, as well as the mix of private labels and manufacturer brands. Furthermore, the results of the empirical analysis show that there are currently five leaders within the analyzed market: Amazon, Otto, Zalando, Baur and About You. Among these five market leaders, Amazon positions itself as a retailer that not only offers an enormous assortment size, but also the lowest entry prices as well as the broadest price dispersion. Through the development of the system of characteristics for assortment analysis and the examination of the current market environment, the findings of this paper contribute to the current state of the art in both theoretical and practical respects.
Today many vertical retailers operate different sales channels at the same time and are responsible for the range of products in all sales channels. The purpose of this paper is to examine whether a format-specific assortment policy can be observed for vertical fashion retailers on the German market. To investigate this topic, quantitative primary data was collected through structured observation by conducting store checks, in addition to secondary data from a literature review. The combination provides insights into the research topic, allows hypotheses to be built, and yields a current and specific answer to the research question. The study revealed that all vertical retailers exploit the advantage of the unlimited capacity of the online shop by offering mainly the broadest and deepest assortment in this channel. Within the retail store, the vertical retailers focus on offering full-price goods for the current season in full size sets. Compared to the online shop, fewer styles are presented here, but in a more sophisticated manner, and they are adjusted on the sales floor. For the outlet channel, all brands showed a higher density of products and a price reduction of at least 30 per cent. The present paper is limited by the time frame, depth, and language of the secondary data collection. As the study only collected quantitative data within a limited number of observations, additional visual data over a longer period is necessary.
Home health applications have evolved over the last few decades. Assistive systems such as a data platform in connection with health devices can allow health-related data to be automatically transmitted to a database. However, there remain significant challenges concerning intermodular communication. Central among them is the challenge of achieving interoperability, the ability of devices to communicate and share data with each other. A major goal of this project was to extend an existing data platform (COMES®) and establish working interoperability by connecting assistive devices with differing approaches. We describe this process for a sleep monitoring and a physical exercise device. Furthermore, we aimed to test this setup and the implementation with a data platform in both a laboratory and an in-home setting with 11 elderly participants. The platform modification was realized, and the relevant changes were made so that the incoming data could be processed by the data platform, as well as visually displayed in real time. Data was recorded by the respective device and transmitted to the data server with minor disruptions. Our observations affirmed that difficulties and data loss are far more likely to occur with increasing technical complexity, in the event of an unstable internet connection, or when the device setup requires (elderly) subjects to take specific steps for proper functioning. We emphasize the importance of testing and evaluating home health technologies in real-life circumstances.
Assistant platforms
(2023)
Many assistant systems have evolved toward assistant platforms. These platforms combine a range of resources from various actors via a declarative and generative interface. Among the examples are voice-oriented assistant platforms like Alexa and Siri, as well as text-oriented assistant platforms like ChatGPT and Bard. They have emerged as valuable tools for handling tasks without requiring deeper domain expertise and have received large attention with the present advances in generative artificial intelligence. In view of their growing popularity, this Fundamental outlines the key characteristics and capabilities that define assistant platforms. The former comprise a multi-platform architecture, a declarative interface, and a multi-platform ecosystem, while the latter include capabilities for composition, integration, prediction, and generativity. Based on this framework, a research agenda is proposed along the capabilities and affordances for assistant platforms.
The influence of sleep on human health is enormous. Accordingly, sleep disorders can have a negative impact on it. To avoid this, they should be identified and treated in time. For this purpose, objective (with an appropriate device) or subjective (based on perceived values) measurement methods are used for sleep analysis to understand the problem. The aim of this work is to find out whether an exchange of the two methods is possible and can provide reliable results. In accordance with this goal, a study was conducted with people aged over 65 (a total of 154 night-time recordings) in which both measurement methods were compared. Sleep questionnaires and electronic devices for sleep assessment placed under the mattress were applied to achieve the study aims. The obtained results indicated that a correlation between both measurement methods could be observed for sleep characteristics such as total sleep time, total time in bed and sleep efficiency. However, there are also significant differences in the absolute values of the two measurement approaches for some subjects/nights, which leads us to conclude that substitution is more likely an option in the case of long-term monitoring, where the trends are of more importance than the absolute values for individual nights.
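The comparison described above amounts to correlating paired per-night values from the two methods and inspecting their absolute differences; a minimal sketch with hypothetical total-sleep-time values, not the study's data:

```python
import numpy as np

# Hypothetical total sleep time (minutes) for the same six nights, measured
# objectively (under-mattress sensor) and subjectively (sleep questionnaire).
objective  = np.array([420, 390, 450, 400, 380, 430], dtype=float)
subjective = np.array([430, 380, 470, 390, 400, 425], dtype=float)

r = np.corrcoef(objective, subjective)[0, 1]  # agreement in trend across nights
abs_diff = np.abs(objective - subjective)     # per-night absolute deviation
```

A high correlation with occasional large per-night deviations is exactly the pattern that favors substitution for trend monitoring rather than for single-night absolute values.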
Physicians in interventional radiology are exposed to high physical stress. To avoid negative long-term effects resulting from unergonomic working conditions, we demonstrated the feasibility of a system that gives feedback about unergonomic situations arising during the intervention, based on the Azure Kinect camera. The overall feasibility of the approach could be shown.
The promise of EVs is twofold: first, rejuvenating a transport sector that still heavily depends on fossil fuels, and second, integrating intermittent renewable energies into the power mix. However, it is still not clear how electricity networks will cope with the predicted increase in EVs and their charging demand, especially in combination with conventional energy demand. This paper proposes a methodology that allows predicting the impact of EV charging behavior on the electricity grid. Moreover, this model simulates the driving and charging behavior of heterogeneous EV drivers, who differ in their mobility patterns, decision-making heuristics and charging strategies. The simulations show that uncoordinated charging results in charging load clustering. In contrast, decentralized coordination makes it possible to fill the valleys of the conventional load curve and to integrate EVs without the need for a costly expansion of the electricity grid.
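The valley-filling effect of coordinated charging can be illustrated with a simple greedy scheduler; the load curve, EV energy demand, and grid capacity below are made-up numbers, not the paper's simulation model:

```python
import numpy as np

def valley_fill(base_load, ev_energy, capacity):
    # Greedy valley-filling: repeatedly assign a small slice of EV charging
    # energy to the hour where the total load is currently lowest.
    load = base_load.astype(float)
    remaining = float(ev_energy)
    while remaining > 0:
        h = int(np.argmin(load))
        step = min(1.0, remaining, capacity - load[h])
        if step <= 0:          # grid capacity exhausted everywhere
            break
        load[h] += step
        remaining -= step
    return load

base = np.array([50, 40, 30, 35, 60, 80], dtype=float)  # stylized hourly load
coordinated = valley_fill(base, ev_energy=30, capacity=80)
```

All 30 units of charging energy land in the off-peak hours, so the peak of the combined load curve stays at the conventional peak and no grid expansion is triggered in this toy example.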
In order to ensure sufficient recovery of the human body and brain, healthy sleep is indispensable. For this purpose, appropriate therapy should be initiated at an early stage in the case of sleep disorders. For some sleep disorders (e.g., insomnia), a sleep diary is essential for diagnosis and therapy monitoring. However, subjective measurement with a sleep diary has several disadvantages, requiring regular action from the user and leading to decreased comfort and potential data loss. To automate sleep monitoring and increase user comfort, one could consider replacing a sleep diary with an automatic measurement, such as a smartwatch, which would not disturb sleep. To obtain accurate results on the evaluation of the possibility of such a replacement, a field study was conducted with a total of 166 overnight recordings, followed by an analysis of the results. In this evaluation, objective sleep measurement with a Samsung Galaxy Watch 4 was compared to a subjective approach with a sleep diary, which is a standard method in sleep medicine. The focus was on comparing four relevant sleep characteristics: falling asleep time, waking up time, total sleep time (TST), and sleep efficiency (SE). After evaluating the results, it was concluded that a smartwatch could replace subjective measurement to determine falling asleep and waking up time, considering some level of inaccuracy. In the case of SE, substitution was also shown to be possible. However, some individual recordings showed a higher discrepancy in results between the two approaches. The evaluation of the TST measurement, for its part, currently does not allow us to recommend substituting the measurement method for this sleep parameter. The appropriateness of replacing sleep diary measurement with a smartwatch depends on the acceptable levels of discrepancy. We propose four levels of similarity of results, defining ranges of absolute differences between objective and subjective measurements.
By considering the values in the provided table and knowing the required accuracy, it is possible to determine the suitability of substitution in each individual case. The introduction of a “similarity level” parameter increases the adaptability and reusability of study findings in individual practical cases.
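The proposed "similarity level" parameter can be sketched as a lookup over the absolute difference between objective and subjective values; the four bands below use illustrative thresholds, since the table's actual values are not reproduced here:

```python
def similarity_level(diff_minutes):
    # Map an absolute objective-vs-subjective difference (minutes) to one of
    # four similarity bands. Thresholds are hypothetical placeholders.
    bands = [(10, "high"), (20, "moderate"), (40, "low")]
    for limit, label in bands:
        if abs(diff_minutes) <= limit:
            return label
    return "insufficient"
```

A practitioner would pick the band matching the accuracy a given use case tolerates and accept substitution only when the observed differences stay within it.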
Electric freight vehicles have the potential to mitigate local urban road freight transport emissions, but their numbers are still insignificant. Logistics companies often consider electric vehicles as too costly compared to vehicles powered by combustion engines. Research within the body of the current literature suggests that increasing the driven mileage can enhance the competitiveness of electric freight vehicles. In this paper we develop a numeric simulation approach to analyze the cost-optimal balance between a high utilization of medium-duty electric vehicles – which often have low operational costs – and the common requirement that their batteries will need expensive replacements. Our work relies on empirical findings of the real-world energy consumption from a large German field test with medium-duty electric vehicles. Our results suggest that increasing the range to the technical maximum by intermediate (quick) charging and multi-shift usage is not the most cost-efficient strategy in every case. A low daily mileage is more cost-efficient at high energy prices or consumptions, relative to diesel prices or consumptions, or if the battery is not safeguarded by a long warranty. In practical applications our model may help companies to choose the most suitable electric vehicle for the application purpose or the optimal trip length from a given set of options. For policymakers, our analysis provides insights on the relevant parameters that may either reduce the cost gap at lower daily mileages, or increase the utilization of medium-duty electric vehicles, in order to abate the negative impact of urban road freight transport on the environment.
New storage technologies, such as Flash and Non-Volatile Memories, with fundamentally different properties are appearing. Leveraging their performance and endurance requires a redesign of existing architectures and algorithms in modern high-performance databases. Multi-Version Concurrency Control (MVCC) approaches in database systems maintain multiple timestamped versions of a tuple. Once a transaction reads a tuple, the database system tracks and returns the respective version, eliminating lock requests. Hence, under MVCC reads are never blocked, which leverages well the excellent read performance (high throughput, low latency) of new storage technologies. Upon tuple updates, however, established implementations of MVCC approaches (such as Snapshot Isolation) lead to multiple random writes – caused by (i) creation of the new and (ii) in-place invalidation of the old version – thus generating suboptimal access patterns for the new storage media. The combination of an append-based storage manager operating with tuple granularity and snapshot isolation addresses asymmetry and in-place updates. In this paper, we highlight novel aspects of log-based storage in multi-version database systems on new storage media. We claim that multi-versioning and append-based storage can be used to effectively address asymmetry and endurance. We identify multi-versioning as the approach to address data placement in complex memory hierarchies. We focus on: version handling, (physical) version placement, compression and collocation of tuple versions on Flash storage and in complex memory hierarchies. We identify possible read- and cache-related optimizations.
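The append-based multi-versioning idea can be illustrated with a toy version chain; this is a conceptual sketch of the general technique, not the paper's storage manager:

```python
from dataclasses import dataclass

@dataclass
class Version:
    value: object
    begin_ts: int  # timestamp of the transaction that created this version

class AppendOnlyTuple:
    # Updates append a new timestamped version instead of invalidating the old
    # one in place: snapshot readers are never blocked, and the storage layer
    # sees only sequential (append) writes, which suits Flash-like media.
    def __init__(self):
        self.versions = []  # append-only version chain, newest last

    def write(self, value, ts):
        self.versions.append(Version(value, ts))

    def read(self, snapshot_ts):
        # Return the newest version visible to a transaction at snapshot_ts.
        for v in reversed(self.versions):
            if v.begin_ts <= snapshot_ts:
                return v.value
        return None

t = AppendOnlyTuple()
t.write("a", ts=1)
t.write("b", ts=5)
```

A transaction that took its snapshot at timestamp 3 still sees "a" even after the update at timestamp 5, without any lock or in-place write.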
With the availability of powerful computers, computer-based simulation methods have found their way into all areas of science and engineering. Model-based simulation as a "virtual experiment" is an effective and long indispensable tool, particularly in the design of technical systems, for validating development results with respect to desired properties. The capabilities of today's simulation methods are fascinating, which is why beginners in particular (but not only they) run the risk of accepting their results uncritically. Teaching therefore plays a special role here. Beyond applying simulation tools, it is important to also familiarize students with their theoretical foundations and thereby sharpen their awareness of the limits of simulation. The workshop of the ASIM/GI special interest groups "Simulation technischer Systeme" and "Grundlagen und Methoden in Modellbildung und Simulation" brings together experts from industry and academia to exchange experience on simulation, covering all aspects from foundations and methods to tools and application examples.
Painting galleries typically provide a wealth of data comprising several data types. These multivariate data are too complex for laypeople such as museum visitors who first want to get an overview of all paintings and then look for specific categories. The goal, finally, is to guide the visitor to a specific painting that he or she wishes to look at more closely. In this paper we describe an interactive visualization tool that provides such an overview and lets people experiment with the more than 41,000 paintings collected in the Web Gallery of Art. To generate such an interactive tool, our technique is composed of several stages: data handling, algorithmic transformations, visualizations, interactions, and the human user working with the tool with the goal of detecting insights in the provided data. We illustrate the usefulness of the visualization tool by applying it to such characteristic data and show how one can get from an overview of all paintings to specific paintings.
Forecasting intermittent demand time series is a challenging business problem, and companies have difficulties forecasting this particular form of demand pattern. On the one hand, it is characterized by many non-demand periods, so classical statistical forecasting algorithms, such as ARIMA, only work to a limited extent. On the other hand, companies often cannot meet the requirements for good forecasting models, such as providing sufficient training data. The recent major advances of artificial intelligence in applications are largely based on transfer learning. In this paper, we investigate whether this method, originating from computer vision, can improve the forecasting quality of intermittent demand time series using deep learning models. Our empirical results show that, in total, transfer learning can reduce the mean square error by 65 percent. We also show that especially short (65 percent reduction) and medium-long (91 percent reduction) time series benefit from this approach.
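The transfer-learning idea, pre-training on a data-rich series and fine-tuning on a short intermittent one, can be sketched with a deliberately tiny autoregressive model; the series and the model are synthetic stand-ins for the deep learning models used in the paper:

```python
import numpy as np

def fit_ar1(series, w=0.0, lr=0.05, epochs=300):
    # Fit y[t] ~ w * y[t-1] by gradient descent, starting from weight w.
    x, y = series[:-1], series[1:]
    for _ in range(epochs):
        grad = -2.0 * np.mean((y - w * x) * x)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
# data-rich "source" series with a regular demand pattern
source = np.abs(np.sin(np.arange(200) / 5.0)) + 0.1 * rng.random(200)
# short intermittent "target" series with many zero-demand periods
target = source[:20] * (rng.random(20) > 0.5)

w_scratch  = fit_ar1(target, w=0.0, epochs=10)            # train from scratch
w_transfer = fit_ar1(target, w=fit_ar1(source), epochs=10)  # warm start
```

The warm start transfers the parameter learned on the long source series and only briefly fine-tunes it on the scarce target data, which is the mechanism the paper evaluates at deep-learning scale.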
Addressing the high complexity of brand image measurement, the present research paper investigates the use of artificial neural networks in this particular application context. Since profound insights into the image of a brand are essential for management, the deployment of this learning algorithm is considered as it allows modeling of complex non-linear and multilayered relationships. The conceptual approach presented in the paper is illustrated with the empirical example of the sportswear manufacturer adidas. By using quantitative survey data, a multilayer artificial neural network is created to link the evaluations of specific brand attributes with the overall evaluation of the brand. Based on an analysis of the connection weights between neurons of the artificial neural network, the importance of different brand attributes for the brand evaluation is quantified. This results in concrete implications for brand management practice and potential for further investigations on the use of artificial intelligence in marketing analytics.
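Deriving attribute importance from the connection weights of a one-hidden-layer network, as described above, is commonly done with Garson's algorithm; the weight matrices below are hypothetical illustrations, not the adidas study's network:

```python
import numpy as np

def garson_importance(w_in, w_out):
    # Garson's algorithm: relative importance of each input, derived from the
    # absolute connection weights input->hidden (w_in) and hidden->output (w_out).
    contrib = np.abs(w_in) * np.abs(w_out).reshape(1, -1)  # inputs x hidden
    contrib /= contrib.sum(axis=0, keepdims=True)          # share per hidden unit
    importance = contrib.sum(axis=1)
    return importance / importance.sum()                   # normalize to 1

# hypothetical weights: 3 brand attributes -> 2 hidden units -> 1 overall score
w_in = np.array([[0.8, 0.1],
                 [0.2, 0.9],
                 [0.1, 0.1]])
w_out = np.array([0.7, 0.5])
imp = garson_importance(w_in, w_out)
```

The resulting shares quantify how strongly each brand attribute drives the overall brand evaluation, which is the kind of implication the approach above extracts for brand management.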
Artificial Intelligence (AI) in der Markenführung: Künstliche Neuronale Netze zur Markenimagemessung
(2023)
Since artificial neural networks allow the modeling of non-linear and multilayered relationships, this article addresses their potential use for the methodologically demanding analysis and measurement of brand image. To illustrate the conceptual approach, a multilayer artificial neural network linking the evaluations of specific brand attributes with the overall evaluation of the brand is created using the empirical example of the sportswear manufacturer adidas. Based on an analysis of the connection weights of the artificial neural network, the importance of individual brand attributes for the brand evaluation is measured, from which concrete implications for brand management practice can be derived.
Artefaktkorrektur und verfeinerte Metriken für ein EEG-basiertes System zur Müdigkeitserkennung
(2019)
Research question: Fatigue is an often underestimated yet major problem in road traffic. Of around 2.5 million traffic accidents in Germany in 2015, 2,898 accidents, with a total of 59 fatalities (~1.7% of deaths), were attributable to overtiredness. Estimates assume an unreported rate of up to 20%. In a first study of our own, we examined whether a mobile EEG can reliably detect fatigue states in a driving simulator. The detection rate was only 61%. The aim of this work is to improve the measurement system used. To that end, accuracy is increased through artifact correction and refined quality metrics. Detected overtiredness is then indicated to the driver in an appropriate way so that he can react accordingly.
Patients and methods: Independent component analysis (ICA) is a multivariate method for analyzing several random variables. To decide whether a driver is currently tired or awake, the feature vector created for each sequence is classified with ICA. For this, a trained machine-learning algorithm is used that is able to assign even unknown data sets to classes. To obtain the required frequency values, a Fourier transform was performed for each EEG channel. In the next step, the resulting feature vector is classified by an artificial neural network. For training, previously created feature vectors are labeled with the classes "awake" and "tired". These data are randomly shuffled and split in a 2:1 ratio into a training and a test set. The experiment was conducted with eight subjects, each completing two 45-minute test drives.
Results: The complete data set consists of 150,000 signal values, which are combined into about 7,000 sequences. After applying the quality metric, 4,370 sequences remain for training. There are clear differences in the number of sequences invalidated by EEG artifacts: in the "awake" state, three times as many sequences are discarded as in the "tired" state. Overall, on average about 50% of the sequences of awake subjects are discarded, but only 25% for tired ones. On average, the system achieves a detection rate of 73% across both states. Comparing only "awake" against "tired" and leaving out "slight tiredness", the results exceed 90%.
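The effect of such a quality metric can be sketched with a toy amplitude check; the threshold, artifact rates, and signal parameters below are invented purely for illustration of the discard asymmetry, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def is_valid(seq, amp_limit=100.0):
    """Toy quality metric: reject a sequence whose peak amplitude (in µV)
    exceeds amp_limit -- a simple stand-in for the paper's artifact checks."""
    return np.max(np.abs(seq)) <= amp_limit

# Synthetic sequences; "awake" subjects produce movement/blink artifacts
# (here: strongly scaled sequences) more often than "tired" ones
awake = [rng.normal(0, 30, 128) * (4 if rng.random() < 0.5 else 1)
         for _ in range(100)]
tired = [rng.normal(0, 30, 128) * (4 if rng.random() < 0.25 else 1)
         for _ in range(100)]

kept_awake = sum(is_valid(s) for s in awake)
kept_tired = sum(is_valid(s) for s in tired)
print(kept_awake, kept_tired)   # more awake sequences are discarded
```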
Conclusions: The results show that attention decreases, i.e. fatigue increases, during the experiment. This is demonstrated on the one hand by subjective and objective observations of signs of fatigue, and on the other hand by measurable and classifiable differences in the EEG signal. The theta waves used as features showed a lower amplitude toward the end of the experiment. Extending the binary classification further stabilizes the results. Artifact correction and quality metrics increase the quality of the data further. The developed fatigue detection application identifies measurable signs of fatigue and can make a sound decision about fitness to drive.
Airports have largely outgrown their sole purpose of being travel hubs; by connecting millions of passengers to their destinations each year on an international scale, they have become increasingly interesting for business and related marketing opportunities. In fact, passengers are easily segmented and can be reached effectively throughout specific airport areas, making some areas more suitable for advertising than others. Emotional states, roaming time, and freedom of movement strongly influence how much information passengers are able to absorb from their immediate surroundings. Our research shows that some areas are indeed more suitable than others. A careful selection of airport locations for communication is therefore key to securing the impact and improving the effectiveness of communication measures. With these insights, advertisers can deliberately choose the areas that are most effective for displaying their ads.
For area reasons, NMOS transistors are preferred over PMOS for the pull-up path in gate drivers. Bootstrapping has to ensure sufficient NMOS gate overdrive. Especially in high-current gate drivers with large transistors, the bootstrap capacitor is too large for integration. This paper proposes three options for fully integrated bootstrap circuits. The key idea is that the main bootstrap capacitor is supported by a second bootstrap capacitor, which is charged to a higher voltage and ensures high charge allocation when the driver turns on. A capacitor sizing guideline and the overall driver implementation, including a suitable charge pump for permanent driver activation, are provided. A linear regulator supplies the bootstrap circuit and also compensates for the voltage drop of the bootstrap diode. Measurements from a test chip in 180 nm high-voltage BiCMOS confirm the benefit of high-voltage charge storage. The fully integrated bootstrap circuit with two stacked 75.8 pF and 18.9 pF capacitors results in an expected voltage dip of less than 1 V. Both bootstrap capacitors require 70% less area compared to a conventional bootstrap circuit. Besides drivers, the proposed bootstrap can also be applied directly to power stages to achieve fully integrated switched-mode power supplies or class-D output stages.
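The benefit of a higher-voltage support capacitor can be estimated with a simplified parallel charge-redistribution model. Only the capacitor values (75.8 pF and 18.9 pF) come from the abstract; the voltages and gate charge below are assumptions, and the real stacked topology is more involved than this sketch:

```python
# Charge-redistribution estimate of the bootstrap voltage dip at turn-on.
C_boot = 75.8e-12      # main bootstrap capacitor [F] (from the abstract)
C_sup  = 18.9e-12      # support capacitor [F] (from the abstract)
V_boot = 5.0           # initial bootstrap voltage [V] (assumed)
V_sup  = 10.0          # support capacitor pre-charge voltage [V] (assumed)
Q_gate = 150e-12       # gate charge drawn at turn-on [C] (assumed)

# Without support: the gate charge is drawn from C_boot alone
dip_plain = Q_gate / C_boot

# With support: the higher-voltage capacitor donates charge; after
# redistribution, both capacitors share the remaining charge
V_final = (C_boot * V_boot + C_sup * V_sup - Q_gate) / (C_boot + C_sup)
dip_supported = V_boot - V_final

print(f"dip without support: {dip_plain:.2f} V")
print(f"dip with support:    {dip_supported:.2f} V")
```

With these illustrative numbers the supported circuit stays within the sub-1 V dip the abstract reports, while the plain capacitor alone would dip by almost 2 V.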
Student-faculty interactions that promote learning are essential contributors to student retention, academic success and satisfaction. But the factors that causally initiate and frame these interactions are not well understood. Only if students evaluate these interactions as positive will they seek them. We conducted a survey experiment with students (n = 375) from a tuition-fee-free German business school, using conditional process analysis to assess which factors frame effective interactions. We focus on out-of-classroom standard and non-standard requests that students make to faculty, then investigate how faculty and student gender and students’ academic entitlement influence the interaction. Our study examines how students evaluate the interaction with faculty: when they seek interaction, their expectations of getting their requests approved, and their disappointment when their requests are declined. We find a significant influence of the request type along with moderating effects of faculty gender, student gender and student entitlement, particularly for non-standard work requests. We conclude with policy implications for university management: developing target-group-specific measures that facilitate the desired and positively evaluated student-faculty interactions might benefit all university stakeholders.
Enterprises are currently transforming their strategy, their processes, and their information systems to increase their degree of digitalization. The potential of the Internet and related digital technologies, such as the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems, both drives and enables new business models. Digitalization profoundly disrupts existing enterprises, technologies, and economies and fosters the architecture of digital environments with many rather small and distributed structures. This has a strong impact on new value-creation opportunities and on the design of digital services and products, guided by a service-dominant logic. The main result of the book chapter extends methods for integral digital strategies with value-oriented models for digital products and services, defined within a multi-perspective digital enterprise architecture reference model.
This research-oriented book contains key contributions on architecting the digital transformation. It comprises the following main sections across 20 chapters:
- Digital Transformation
- Digital Business
- Digital Architecture
- Decision Support
- Digital Applications
It focuses on digital architectures for smart digital products and services and is a valuable resource for researchers, doctoral students, postgraduates, graduates, undergraduates, academics, and practitioners interested in digital transformation.
The digitization of our society changes the way we live, work, learn, communicate, and collaborate. The Internet of Things, enterprise social networks, adaptive case management, mobility systems, analytics for big data, and cloud services environments are emerging to support smart connected products and services and the digital transformation. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and service-oriented enterprise architectures. Our aim is to support flexibility and agile transformations for both business domains and related information technology. The present research paper investigates mechanisms for decision analytics in the context of multi-perspective explorations of enterprise services and their digital enterprise architectures by extending original architecture reference models with state-of-the-art elements for agile architectural engineering for digitization and collaborative architectural decision support. The paper's context focuses on digital transformations of business and IT and integrates fundamental mappings between adaptable digital enterprise architectures and service-oriented information systems. We put a spotlight on the example domain of the Internet of Things.
This research-oriented book presents key contributions on architecting the digital transformation. It includes the following main sections covering 20 chapters:
- Digital Transformation
- Digital Business
- Digital Architecture
- Decision Support
- Digital Applications
Focusing on digital architectures for smart digital products and services, it is a valuable resource for researchers, doctoral students, postgraduates, graduates, undergraduates, academics and practitioners interested in digital transformation.
This chapter presents an introduction to the emerging trends for architecting the digital transformation having a strong focus on digital products, intelligent services, and related systems together with methods, models and architectures. The primary aim of this book is to highlight some of the most recent research results in the field. We are providing a focused set of brief descriptions of the chapters included in the book.
Presently, many companies are transforming their strategy and product base, as well as their culture, processes, and information systems, to become more digital or to move toward digital leadership. In recent years, new business opportunities have appeared that use the potential of the Internet and related digital technologies, like the Internet of Things, services computing, cloud computing, edge and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. This has a strong impact on architecting digital services and products. The change from a closed-world modeling perspective to more flexible open-world composition and evolution of micro-granular system architectures defines the moving context for adaptable systems. We focus on a continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, as part of a new digital enterprise architecture for service-dominant digital products.
The current advancement of Artificial Intelligence (AI), combined with other digitalization efforts, significantly impacts service ecosystems. Artificial intelligence has a substantial impact on new opportunities for the co-creation of value and the development of intelligent service ecosystems. Motivated by experiences and observations from digitalization projects, this paper presents new methodological perspectives and experiences from academia and practice on architecting intelligent service ecosystems and explores the impact of artificial intelligence through real cases supporting an ongoing validation. Digital enterprise architecture models serve as an integral representation of business, information, and technological perspectives of intelligent service-based enterprise systems to support management and development. The paper focuses on architectural models for intelligent service ecosystems, showing the fundamental business mechanism of AI-based value co-creation, the corresponding digital architecture, and management models, and presents the key architectural model perspectives for the development of intelligent service ecosystems.
Our paper gives first answers to a fundamental question: how can the design of architectures of intelligent digital systems and services be accomplished methodologically? Intelligent systems and services are the goals of many current digitalization efforts and part of massive digital transformation efforts based on digital technologies. Digital systems and services are the foundation of digital platforms and ecosystems. Digitalization disrupts existing businesses, technologies, and economies and promotes the architecture of open environments. This has a strong impact on new value-added opportunities and the development of intelligent digital systems and services. Digital technologies such as artificial intelligence, the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, and social enterprise network systems are important enablers of digitalization. The current publication presents our research on the architecture of intelligent digital ecosystems, products, and services influenced by the service-dominant logic. We present original methodological extensions and a new reference model for digital architectures with an integral service and value perspective to model intelligent systems and services that effectively align digital strategies and architectures, with artificial intelligence as a main element to support intelligent digitalization.
Enterprises are currently transforming their strategy, processes, and information systems to extend their degree of digitalization. The potential of the Internet and related digital technologies, like the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems, both drives and enables new business designs. Digitalization deeply disrupts existing businesses, technologies, and economies and fosters the architecture of digital environments with many rather small and distributed structures. This has a strong impact on new value-producing opportunities and on architecting digital services and products, guiding their design through a service-dominant logic. The main result of the book chapter extends methods for integral digital strategies with value-oriented models for digital products and services, which are defined in the framework of a multi-perspective digital enterprise architecture reference model.
To deliver on a digital value proposition, companies must fundamentally re-architect. In other words, they must redesign their processes, systems, roles, data, and habits to allow them to iteratively create, enhance, and replace digital offerings. This briefing examines how Royal Philips is transforming its value proposition - and its entire company - to seize the opportunities presented by digital technologies.
Developing work environments strategically: with the DigiTraIn instruments toward digital transformation
(2021)
The path to the digital working world is a challenging and complex transformation for many companies. To take this path successfully, companies need working management instruments. In the DigiTraIn 4.0 project, four instruments for a successful transformation to digital work were developed and tested in corporate practice. Starting from the project's objectives, this article provides an introductory presentation of these instruments. It also gives an overview of the further contributions in this book, in which the instruments are explained in detail and specific aspects of the shift to the digital working world are examined.
Employment law for executives is tricky: on the one hand, executives perform day-to-day management tasks within the framework of the established corporate policy. On the other hand, many legal systems regard executives as employees and integrate them into employment law with its social protection provisions. In Germany, for example, a special body of employment law was created for employees with management responsibility to do justice to this dual role.
Especially when executives are posted abroad, or before decisions are made about acquiring foreign companies, it is crucial to understand how the various legal systems are structured.
The new handbook provides key guidance on the employment-law status of executives in Belgium, Brazil, Great Britain, France, Germany, Italy, Japan, Spain, and the USA.
Future assembly workstations must meet changing challenges, such as the increasing number of human-robot collaborations. Virtual reality (VR) technology offers new possibilities in workplace design to meet these changed planning challenges. This paper presents a method for assessing whether VR technology can be usefully applied to a specific workstation. It also shows how VR technology can be integrated into the workplace design process.
The evolution of Services Oriented Architectures (SOA) presents many challenges due to their complex, dynamic and heterogeneous nature. We describe how SOA design principles can facilitate SOA evolvability and examine several approaches to support SOA evolution. SOA evolution approaches can be classified based on the level of granularity they address, namely, service code level, service interaction level and model level. We also discuss emerging trends, such as microservices and knowledge-based support, which can enhance the evolution of future SOA systems.
The early involvement of experience gained through intelligence and data analysis is becoming increasingly important for developing new products, leading to a completely different conception of product creation, development, and engineering processes that exploits the advantages a dedicated digital twin entails. A novel stage-gate process is introduced that is holistically anchored in learning factories, adopting idea generation and idea screening at an early stage, beta testing of first prototypes, technical implementation in real production scenarios, business analysis, market evaluation, pricing, service models, and innovative social media portals. Corresponding product modelling in the sense of sustainability, circular economy, and data analytics forecasts the product on the market both before and after market launch, with data interpretation interlinked in near real time. The digital twin represents the link between the digital model and the digital shadow. Additionally, connecting the digital twin with the product provides constantly updated operating status and process data as well as a mapping of technical properties and real-world behaviour. A future networked product, equipped through embedded information technology with the ability to initiate and carry out its own further development, is able to interact with people and environments and is thus relevant to the way of life of future generations. In today's development work for this new product creation approach, "Werk150" is on the one hand the object of the development itself and on the other hand the validation environment. In a next step, new learning modules and scenarios for training at master level will be derived from these findings.
It has not yet been possible to achieve the desired aim of decoupling economic growth from global material demand. Small and medium-sized enterprises (SMEs) represent the backbone of most industrialized economies. Although material efficiency is of vital importance for many SMEs, few of them actually treat it as their top priority. There is a cornucopia of tools and methods available which can be used for material efficiency purposes. These, however, have gained little ground in the SME field. This work deals with the enabling factors for material efficiency improvements in manufacturing SMEs and projections towards aspects of supply chain and circular economy. A multi-disciplinary decoupling approach for manufacturing SMEs and an implementation roadmap for further practical development are proposed. The approach combines appropriate complexity of technology and socio-economic considerations. It enables a connection to existing methods and the implementation of established information technologies.
Patterns are virtually simulated in 3D CAD programs before production to check the fit. However, achieving lifelike representations of human avatars, especially regarding soft tissue dynamics, remains challenging. This is mainly because conventional avatars in garment CAD programs are simulated with a continuous hard surface that does not correspond to the physical and mechanical properties of human soft tissue. In the real world, the human body's natural shape is affected by the contact pressure of tight-fitting textiles. To verify the fit of a simulated garment, the interactions between the individual body shape and the garment must be considered. This paper introduces an innovative approach to digitising the softness of human tissue using 4D scanning technology. The primary objective of this research is to explore the interactions between tissue softness and different compression levels of apparel, exerting pressure on the tissue to capture the changes in the natural shape. To generate data and model an avatar with soft body physics, it is essential to capture the deformability and elasticity of the soft tissue and map it into the modification options for a simulation. To achieve this, various methods from different fields were researched and compared, and 4D scanning was evaluated as the most suitable method for capturing tissue deformability in vivo. In particular, it should be considered that the human body has different deformation capabilities depending on age and the amount of muscle and body fat. In addition, different tissue zones have different mechanical properties, so it is essential to identify and classify them in order to store these properties for the simulation. It has been shown that, by digitising the data obtained at the different defined pressure levels, a prediction of the tissue deformation of the specific person becomes possible.
As technology advances and data sets grow, this approach has the potential to reshape how we verify fit digitally with soft avatars and leverage their realistic soft tissue properties for various practical purposes.
Due to Industry 4.0, the full value chain has the chance to undergo a fundamental technological transformation, the realisation of which, however, requires the commitment of every company for its own benefit. The new approaches of Industry 4.0 are often hardly evaluated, let alone proven, so that SMEs in particular often cannot properly estimate the potentials and risks and often wait too long before migrating towards Industry 4.0. In addition, they often do not pursue an integrated concept to identify possible potentials through changes in their business models. As part of the research project "GEN-I 4.0 – Geschäftsmodell-Entwicklung für die Industrie 4.0", the ESB Business School at Reutlingen University and the Fraunhofer Institute for Industrial Engineering IAO were engaged by the Baden-Württemberg Foundation from 2016 to 2018 to develop tools and an approach by which the local economy can develop digital business models for itself in a methodical, beneficial, and targeted manner. Through international analyses and interviews, GEN-I 4.0 gained and concretized the knowledge required for evaluating and selecting solutions and approaches to transfer into the development of digital business models. Together with the project partners' know-how on Industry 4.0 and business model development, the findings were incorporated into two software tools that show SMEs, online and in self-assessment, the potentials of Industry 4.0 for their individual business model and their individual risk, and give them a comprehensive, structured, and concrete approach to development. Users of the tools are supported by the selected platform for networking different players to implement innovative business models, accompanied by coaching concepts for the companies in the follow-up and implementation of the assessment results.
Cotton contamination by honeydew is considered one of the significant problems for quality in textiles as it causes stickiness during manufacturing. Therefore, millions of dollars in losses are attributed to honeydew contamination each year. This work presents the use of UV hyperspectral imaging (225–300 nm) to characterize honeydew contamination on raw cotton samples. As reference samples, cotton samples were soaked in solutions containing sugar and proteins at different concentrations to mimic honeydew. Multivariate techniques such as a principal component analysis (PCA) and partial least squares regression (PLS-R) were used to predict and classify the amount of honeydew at each pixel of a hyperspectral image of raw cotton samples. The results show that the PCA model was able to differentiate cotton samples based on their sugar concentrations. The first two principal components (PCs) explain nearly 91.0% of the total variance. A PLS-R model was built, showing a performance with a coefficient of determination for the validation (R2cv) = 0.91 and root mean square error of cross-validation (RMSECV) = 0.036 g. This PLS-R model was able to predict the honeydew content in grams on raw cotton samples for each pixel. In conclusion, UV hyperspectral imaging, in combination with multivariate data analysis, shows high potential for quality control in textiles.
Military organizations have special features, such as following different organizational laws in times of peace and war and their specific embeddedness in society and politics. Especially the latter aspect has made the military an important object of study since the beginnings of modern sociology. In the wake of establishing specific sociological accounts, military sociology has developed, dedicated to the different facets of the military. This research is based on different theoretical perspectives, but has hardly embraced the frameworks from the economics and sociology of conventions (EC/SC) so far. The aim of the chapter is to explore and demonstrate the potentials of this approach. In a first step, the state of the art of military sociology research is outlined, and potential avenues for analyzing military forces based on EC/SC are identified. It is argued that especially the connection to organizational theory (the military as organization) and civil-military relations, including leadership and professionalism, offer starting points. After introducing existing studies addressing military-related topics with reference to EC/SC, relevant concepts and approaches of convention theory that prove particularly enriching for military research are discussed. An outlook on possible further fields and topics of research is given to concretize what an inclusion of the EC/SC perspective could look like.
Enterprises and societies currently face crucial challenges, while Industry 4.0 is becoming ever more important in the global manufacturing industry. Industry 4.0 offers a range of opportunities for companies to increase the flexibility and efficiency of production processes. The development of new business models can be promoted with digital platforms and architectures for Industry 4.0. Products from the healthcare sector can thereby increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0 is expected to promote and implement digital platforms and robotics for healthcare and medical communities efficiently. In this paper, we propose that various digital platforms and robotics be designed and evaluated for digital healthcare as for the manufacturing industry with Industry 4.0. We argue that the design of an open healthcare platform, "Open Healthcare Platform 2030 - OHP2030", for medical product design and robotics can be developed with AIDAF. The vision of AIDAF applications to enable Industry 4.0 in the OHP2030 research initiative is explained and referenced, extended in the context of Society 5.0.
Enterprises and societies currently face essential challenges, and digital transformation can contribute to their resolution. Enterprise architecture (EA) is useful for promoting digital transformation in global companies and information societies covering ecosystem partners. The advancement of new business models can be promoted with digital platforms and architectures for Industry 4.0 and Society 5.0. Products from sectors such as healthcare, manufacturing, and energy can thereby increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0 together with the design thinking approach is expected to promote and implement digital platforms and digital products for healthcare, manufacturing, and energy communities more efficiently. In this paper, we present various cases of digital transformation where digital platforms and products are designed and evaluated for digital IT, digital manufacturing, and digital healthcare with Industry 4.0 and Society 5.0. The vision of AIDAF applications to perform digital transformation in global companies is explained and referenced, extended toward digitalized ecosystems such as Society 5.0 and Industry 4.0.
Knowledge transfer is very important to our knowledge-based society, and many approaches have been proposed to describe this transfer. However, these approaches take a rather abstract view of knowledge transfer, which makes implementation difficult. To address this issue, we introduce a layered model for knowledge transfer that structures the individual steps of knowledge transfer in more detail. This paper gives a description of the process and also an example of the application of the layered model for knowledge transfer. The example is located in the area of business process modelling. Business processes contain the important knowledge describing a company's procedures for producing products and services. Knowledge transfer is the fundamental basis of the modelling and usage of business processes, which makes it an interesting use case for the layered model for knowledge transfer.
This paper develops a linear and tractable model of financial bubbles. I demonstrate the application of the linear model and study the root causes of financial bubbles. Moreover, I derive leading properties of bubbles. This model enables investors and regulators to react to market dynamics in a timely manner. In conclusion, the linear model is helpful for the empirical verification and detection of financial bubbles.
This paper analyzes governance mechanisms for different group sizes. The European sovereign debt crisis has demonstrated the need for efficient governance of groups of different sizes. First, I find that self-governance only works for sufficiently homogeneous and small neighbourhoods. Second, as the union expands, the effect of credible self-governance decreases. Third, spill-over effects amplify the size effect. Fourth, I show that sufficiently large monetary unions are better off with costly but external governance or a free-market mechanism. Finally, intermediate-size unions are the most difficult to govern efficiently.
Applied mathematical theory for monetary-fiscal interaction in a supranational monetary union
(2014)
I utilize a differentiable dynamical system à la Lotka-Volterra to explain monetary and fiscal interaction in a supranational monetary union. The paper demonstrates an applied mathematical approach that provides useful insights into the interaction mechanisms of theoretical economics in general and a monetary union in particular. I find that a common central bank is necessary but not sufficient to tackle the new interaction problems in a supranational monetary union, such as the free-riding behaviour of fiscal policies. Moreover, I show that supranational institutions, rules or laws are essential to mitigate violations by decentralized fiscal policies.
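The Lotka-Volterra class of dynamical systems the abstract refers to can be illustrated with a minimal numerical sketch. The coefficients, the starting point, and the Euler integration below are purely illustrative assumptions, not the paper's calibration or interpretation.

```python
# Minimal Euler-integration sketch of a Lotka-Volterra-style interaction
# system: dx/dt = x(a - b*y), dy/dt = y(-c + d*x).
# All coefficients are illustrative assumptions, not the paper's model.

def simulate(a=1.0, b=0.5, c=1.0, d=0.3, x0=2.0, y0=1.0, dt=0.001, steps=20000):
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = x * (a - b * y)
        dy = y * (-c + d * x)
        x, y = x + dt * dx, y + dt * dy
        trajectory.append((x, y))
    return trajectory

traj = simulate()
# Both quantities stay strictly positive and cycle around the interior
# equilibrium (c/d, a/b), the hallmark of this interaction structure.
```

The closed orbits around the interior equilibrium are what make this family of systems attractive for modelling two policy actors that neither dominate nor eliminate each other.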
Application to CAE systems
(2016)
Due to the broad acceptance of CAD systems based on 3D solids, the geometric data of all common CAE (computer-aided engineering) software, at least in mechanical engineering, are based on these solids. We use solid models, in which the space filled by material is defined in a simple and easily usable way. Solid models allow for the development of automated meshers that transform solid volumes into finite elements. Even after some unsatisfactory initial trials, users are able to generate meshes of non-trivial geometries within minutes to hours instead of days or weeks. Once meshing was no longer the cost-limiting factor of finite element studies, numerical simulation became a tool for smaller industries as well.
In the early days of automated meshing development, there were discussions over the use of tetrahedral (Fig. 4.1) or hexahedral meshes. After a short time, however, it became evident that there were, and always will be, many problems in using automated meshers to generate hexahedral elements. So today nearly all automated 3D meshing systems use tetrahedral elements.
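One reason tetrahedral elements won out is that they tile arbitrary volumes easily; even a single hexahedral cell decomposes cleanly into tetrahedra. The sketch below, with an assumed vertex-numbering convention not taken from the book, splits a unit cube into five tetrahedra and checks that their volumes fill the cube exactly.

```python
# Split a unit cube into 5 tetrahedra and verify that their volumes
# sum to the cube's volume. Vertex numbering is an assumed convention.

# Cube vertices: the index bits encode the (x, y, z) coordinates.
V = [((i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1) for i in range(8)]

# A standard 5-tetrahedron decomposition: four corner tets + one central tet.
TETS = [(0, 1, 3, 5), (0, 3, 2, 6), (0, 5, 4, 6), (3, 5, 6, 7), (0, 3, 5, 6)]

def tet_volume(a, b, c, d):
    """Volume of a tetrahedron: |det([b-a, c-a, d-a])| / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

total = sum(tet_volume(*(V[i] for i in tet)) for tet in TETS)
# total == 1.0: the five tetrahedra exactly fill the cube
```

No comparably simple, fully general rule exists for filling arbitrary solids with hexahedra, which is the practical difficulty the text alludes to.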
Knowledge-intensive processes are characterized by participants deciding the next process activities on the basis of the information at hand and their expert knowledge. The decisions of these knowledge workers are in general non-deterministic. It is not possible to model such processes in advance and to automate them using the process engine of a BPM system. Hence, in this context a process instance is called a case, because there is no predefined model that could be instantiated. Domain-specific or general case management systems are used to support the knowledge workers. These systems provide all case information and enable users to define the next activities, but they have no or only limited activity recommendation capabilities. In this paper, we present a general concept for a self-learning system based on process mining that suggests the next best activity for a given case based on quantitative and qualitative data. As a proof of concept, it was applied to the area of insurance claims settlement.
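A minimal sketch of the quantitative half of such a recommender: mine direct-succession frequencies from completed cases and suggest the historically most frequent successor activity. The event log and activity names below are invented for illustration, not taken from the insurance case study.

```python
from collections import Counter, defaultdict

# Invented event log: each completed case is an ordered list of activities.
completed_cases = [
    ["claim received", "check coverage", "assess damage", "pay claim"],
    ["claim received", "check coverage", "reject claim"],
    ["claim received", "check coverage", "assess damage", "pay claim"],
]

# Mine direct-succession frequencies (the core of simple process mining).
successors = defaultdict(Counter)
for case in completed_cases:
    for current, nxt in zip(case, case[1:]):
        successors[current][nxt] += 1

def recommend_next(activity):
    """Suggest the historically most frequent successor activity."""
    counts = successors.get(activity)
    return counts.most_common(1)[0][0] if counts else None

recommend_next("check coverage")  # → "assess damage" (2 of 3 cases)
```

A production system would additionally weight successors by qualitative case attributes, as the concept above describes; this sketch shows only the frequency-based backbone.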
Like many others, fashion companies have to deal with a global and highly competitive environment. They therefore rely on accurate sales forecasts as a key success factor of efficient supply chain management. However, forecasters have to take into account some specificities of the fashion industry. To respond to these constraints, a variety of forecasting methods exists, including new, computer-based predictive analytics. After an evaluation of the different methods, their application to the fashion industry is investigated through semi-structured expert interviews. Despite several benefits, predictive analytics is not yet frequently used in practice. This research not only reflects an industry profile but also gives important insights into the future potential and obstacles of predictive analytics.
Literature reviews are essential for any scientific work, whether as part of a dissertation or as a stand-alone work. Scientists benefit from the fact that more and more literature is available in electronic form, and finding and accessing relevant literature has become easier through scientific databases. However, the traditional literature review method is characterized by a highly manual process, while technologies and methods in big data, machine learning, and text mining have advanced. Especially in areas where research streams are rapidly evolving and topics are becoming more comprehensive, complex, and heterogeneous, it is challenging to provide a holistic overview and identify research gaps manually. Therefore, we have developed a framework that supports the traditional approach to conducting a literature review with machine learning and text mining methods. The framework is particularly suitable where a large amount of literature is available and a holistic understanding of the research area is needed. It consists of several steps in which the critical mind of the scientist is supported by machine learning. The unstructured text data is transformed into a structured form through data preparation realized with text mining, making it applicable to various machine learning techniques. A concrete example in the field of smart cities makes the framework tangible.
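The data-preparation step the framework relies on can be sketched in a few lines: turn free-text abstracts into term counts so that downstream clustering or topic-modelling techniques become applicable. The toy corpus and stop-word list are illustrative assumptions, not the framework's actual pipeline.

```python
import re
from collections import Counter

# Toy corpus standing in for a set of abstracts (illustrative only).
abstracts = [
    "Smart city platforms integrate sensor data for traffic management.",
    "Machine learning supports text mining of scientific literature.",
    "Urban sensor networks enable smart city traffic analysis.",
]
STOP = {"for", "of", "the", "a", "and"}  # minimal stop-word list

def term_counts(text):
    """Lowercase, tokenize and count terms: the structured form that
    machine learning techniques can work with."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in STOP)

corpus_counts = sum((term_counts(a) for a in abstracts), Counter())
corpus_counts.most_common(3)  # frequent terms hint at dominant topics
```

Real pipelines would add TF-IDF weighting and lemmatization, but the transformation from unstructured text to a countable representation is the essential move.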
Development work within an experimental environment, in which certain properties are investigated and optimized, requires many test runs and is therefore often associated with long execution times, costs and risks. This affects product, material and technology development in industry and research alike. New digital driver technologies offer the possibility to automate complex manual work steps cost-effectively, to increase the relevance of the results and to accelerate the processes many times over. In this context, this article presents a low-cost, modular and open-source machine vision system for test execution and evaluates it on the basis of a real industrial application. For this purpose, a methodology is presented for the automated execution of the load intervals, for process documentation and for the evaluation of the generated data by means of machine learning to classify wear levels. The software and the mechanical structure are designed to be adaptable to different conditions, components and a variety of tasks in industry and research. The mechanical structure is required for tracking the test object and represents a motion platform positioned independently by machine vision operators or machine learning. The state of the test object is evaluated by transfer learning after the initial documentation run. The manual procedure for classifying the visually recorded data on the state of the test object is described for the training material. This leads to increased resource efficiency on the material as well as the personnel side: on the one hand, the significance of the tests performed is increased by the continuous documentation; on the other hand, the responsible experts can be assigned time-efficiently. The presence and know-how of the experts are therefore only required for defined and decisive events during the execution of the experiments.
Furthermore, the generated data are suitable for later use as an additional data source for predictive maintenance of the developed object.
This paper reports an analysis of the application and impact of FMEA on the susceptibility of generic IT networks. It is well known that in communication systems the frequency and the data transmission rate play a very important role. The rapid increase in the miniaturization of electronic devices leads to high sensitivity to electromagnetic interference. Since IT networks with their data transfer rates make a huge contribution to this development, it is very important to monitor their functionality. Therefore, tests are performed to observe and ensure the data transfer rate of IT networks under IEMI. A fault tree model is presented, and the effects observed during the radiation of disturbances onto a complex system by HPEM interference sources are described using a continuous and consistent model from the physical layer to the application layer.
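The fault-tree part of such an analysis reduces to combining basic-event probabilities through AND/OR gates. The gate arithmetic for independent events is sketched below; the event probabilities and the example tree are invented for illustration, not measured susceptibility data from the paper.

```python
# Fault-tree gate arithmetic for independent basic events (illustrative values).

def and_gate(probs):
    """Output event occurs only if ALL inputs occur: product of probabilities."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Output event occurs if ANY input occurs: 1 - product of complements."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical tree: the link fails if the cable couples interference OR
# both redundant transceivers are disturbed (all probabilities assumed).
p_link_failure = or_gate([0.01, and_gate([0.1, 0.1])])
# → 1 - 0.99 * 0.99 = 0.0199
```

Layering such gates from physical-layer basic events up to application-layer top events mirrors the continuous, consistent model the abstract describes.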
To illustrate the power and the pitfalls of Bionic Optimization, we show some examples spanning classes of applications and employing various strategies. These applications cover a broad range of engineering tasks. Nevertheless, there is no guarantee that our experiences and examples will be sufficient to deal with all questions and issues in a comprehensive way. As a general rule, for each class of problems novices should begin with a learning phase. In this introductory phase, we use simple and quick examples, e.g. small FE models, linear load cases, short time intervals and simple material models. Here, beginners in the Bionic Optimization community can learn which parameter combinations to use. In Sect. 3.3 we discuss strategies for accelerating optimization studies. Using these parameters as starting points is one way to set the specific ranges, e.g. the number of parents and kids, crossing, mutation radii and the number of generations. On the other hand, these trial runs will doubtless indicate that Bionic Optimization needs large numbers of individual designs, and considerable time and computing power. We recommend investing enough time in preparing each task to avoid the frustration should large jobs fail after long calculation times.
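The parameters just mentioned (parents, kids, mutation radius, generations) can be seen at work in a deliberately tiny evolutionary sketch that minimizes a cheap two-parameter test function. This is a generic illustration under assumed settings, not one of the book's examples; real runs would wrap expensive FE evaluations instead of the toy objective.

```python
import random

random.seed(42)  # reproducible toy run

def objective(x):
    # Cheap stand-in for an expensive FE evaluation; optimum at (0, 0).
    return sum(xi * xi for xi in x)

def evolve(n_parents=3, n_kids=12, radius=0.5, generations=60, dim=2):
    population = [[random.uniform(-5, 5) for _ in range(dim)]
                  for _ in range(n_parents)]
    for _ in range(generations):
        kids = []
        for _ in range(n_kids):
            parent = random.choice(population)
            kids.append([xi + random.gauss(0, radius) for xi in parent])
        # Elitist selection: keep the best of parents + kids.
        population = sorted(population + kids, key=objective)[:n_parents]
        radius *= 0.95  # slowly shrink the mutation radius
    return population[0]

best = evolve()
# objective(best) ends up close to the known optimum f(0, 0) = 0
```

Even this toy needs 720 objective evaluations, which hints at why the text warns that Bionic Optimization demands large numbers of individual designs when each evaluation is a full FE solve.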
The aim of this paper is to show to what extent artificial intelligence can be used to optimize forecasting capability in procurement, and to compare AI with traditional statistical methods. At the same time, this article presents the status quo of the research project ANIMATE, which applies artificial intelligence to forecast customer orders in medium-sized companies.
Precise forecasts are essential for companies' planning, decision making and controlling. Forecasts are applied, for example, in the areas of supply chain, production and purchasing. Medium-sized companies face major challenges in using suitable methods to improve their forecasting ability.
Companies often use proven methods of classical statistics such as the ARIMA algorithm. However, simple statistics often fail when applied to complex non-linear predictions.
Initial results show that even a simple MLP ANN produces better results than traditional statistical methods. Furthermore, a baseline of the company (the Implicit Sales Expectation) was used to compare performance. This comparison also shows that the proposed AI method is superior.
Before the developed method can become part of corporate practice, it must be further optimized. The model has difficulties with strong declines, for example due to holidays. The authors are confident that the model can be further improved, for example through more advanced methods such as a FilterNet, but also through more data, such as external data on holiday periods.
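The kind of baseline comparison described above can be sketched with a toy series: measure the mean absolute error (MAE) of a naive last-value baseline against a simple fitted predictor. Both the invented series and the stand-in predictor (a least-squares AR(1) fit) are illustrative only; the project itself compares an MLP ANN against ARIMA and the Implicit Sales Expectation.

```python
def mae(actual, predicted):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Invented demand series with a steady upward trend.
series = [10, 12, 14, 16, 18, 20, 22, 24]
x, y = series[:-1], series[1:]

# Naive baseline: tomorrow equals today.
naive_mae = mae(y, x)  # → 2.0 on this trending series

# Stand-in "learned" predictor: least-squares AR(1) fit y_t = a*y_{t-1} + b.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
b = my - a * mx
ar_mae = mae(y, [a * xi + b for xi in x])  # ≈ 0.0: the fit captures the trend
```

The point of such a comparison is the same as in the project: a method only counts as an improvement if it beats both the classical model and the company's own expectation on the same error metric.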
In this paper, a method for the generation of gSPM with ontology-based generalization was presented. The resulting gSPM was modeled efficiently with BPMN/BPMNsix and can be executed with BPMN workflow engines. In a next step, resource concepts, anatomical structures, and transition probabilities for workflow execution will be implemented.
Purpose: Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail.
Methods: We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new standards Case Management Model and Notation (CMMN) and Decision Model and Notation (DMN).
Results: First, we explain how CMMN, DMN and BPMN could be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention.
Conclusion: Effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination makes it possible to depict complex processes with complex decisions and thus offers a significant advantage for modeling perioperative processes.
An apparatus and method for analyzing a flow of material having an inlet region, a measurement range and an outlet region, and having a first diverter and a second diverter, and a deflection area, wherein in a first state of operation, the two diverters form a continuous first material flow space from the inlet region via the first diverter through the measurement range, via the second diverter to the outlet region, and in a second state of operation, form a continuous second material flow space from the inlet region via the first diverter through the deflection area, via the second diverter to the outlet region.
Today, many scientific works use deep learning algorithms on time series to detect physiological events of interest. In sleep medicine, this is particularly relevant for detecting sleep apnea, specifically obstructive sleep apnea events. Deep learning algorithms with different architectures achieve decent results in accuracy, sensitivity, etc. Although there are models that can reliably determine apnea and hypopnea events, another essential aspect to consider is the explainability of these models, i.e., why a model makes a particular decision. Another critical factor is how these deep learning models determine the severity of obstructive sleep apnea in patients based on the apnea-hypopnea index (AHI). This work presents deep learning models trained by two approaches for AHI determination. The approaches vary in the data format the models are fed: full time series and window-based time series.
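The AHI itself is a simple derived quantity, respiratory events per hour of sleep, commonly graded with the cut-offs <5 normal, 5-15 mild, 15-30 moderate, ≥30 severe. The sketch below uses invented event counts; the grading thresholds are the widely used clinical convention, not specific to this work.

```python
def ahi(n_apneas, n_hypopneas, sleep_hours):
    """Apnea-hypopnea index: respiratory events per hour of sleep."""
    return (n_apneas + n_hypopneas) / sleep_hours

def severity(index):
    """Commonly used AHI severity grading for adults."""
    if index < 5:
        return "normal"
    if index < 15:
        return "mild"
    if index < 30:
        return "moderate"
    return "severe"

# Invented example: 40 apneas + 20 hypopneas over 6 hours of sleep.
grade = severity(ahi(n_apneas=40, n_hypopneas=20, sleep_hours=6.0))
# → "mild" (AHI = 10 events/hour)
```

Whether a model predicts the AHI from the full night at once or aggregates per-window event detections, the final severity statement passes through exactly this arithmetic.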
In this paper, we propose a radical new approach for scale-out distributed DBMSs. Instead of hard-baking an architectural model, such as a shared-nothing architecture, into the distributed DBMS design, we aim for a new class of so-called architecture-less DBMSs. The main idea is that an architecture-less DBMS can mimic any architecture on a per-query basis on-the-fly without any additional overhead for reconfiguration. Our initial results show that our architecture-less DBMS AnyDB can provide significant speedup across varying workloads compared to a traditional DBMS implementing a static architecture.
Application-neutral, provisioned cabling has existed for more than 25 years, and the subject matter has become increasingly complex. The concept, originally intended for the information-technology networking of offices, has expanded over the years to further application areas, e.g. data centres and industrially or privately used premises. Besides a general requirements profile, each application area has its own specific set of rules. Moreover, advancing digitalization demands constant technological adaptation and further development of performance capability. Against this background, it is becoming increasingly difficult to read the extensive bodies of standards, to understand their interplay, and to apply them optimally.
This is precisely where the book comes in. It explains the communication cabling system clearly and in context, from the idea through planning, specification, realization and commissioning to maintenance. Its core is the presentation and description of the current standards series DIN EN 50173 (VDE 0800-173) and DIN EN 50174 (VDE 0800-174). After first addressing site prerequisites, the book covers the general and specific requirements for information-technology cabling and the components used, cables and connectors, and finally the planning, specification, implementation and metrological evaluation of the installation. The authors aim not only to convey a basic understanding of the relevant requirement profiles, but also to keep the overall context in view, for example future-proofing and the influence of different environmental conditions on the design of the cabling components.
Steady-state efficiency optimization techniques for induction motors are state of the art, and various methods have already been developed. This paper provides new insights into efficiency-optimized operation in the dynamic regime. It proposes an anticipative flux modification in order to decrease losses during torque and speed transients. These trajectories are analyzed in a numerical study for different motors. Measurement results for one motor are given as well.
When companies want to become more socially and ecologically sustainable, it usually starts with announcements: We will get more employees to commute by bicycle! We will take the currywurst off the canteen menu! We will do more to support disadvantaged young people! In research on environment, social and governance (ESG), such announcements are called "aspirational talk". They express a company's ambition: "We acknowledge the challenges and want to master them." The announcements should, of course, then be followed by action. But what happens when employees perceive a gap between what was announced and what is actually done?
Porous silica materials are often used for drug delivery. However, systems for simultaneous delivery of multiple drugs are scarce. Here we show that anisotropic and amphiphilic dumbbell core–shell silica microparticles with chemically selective environments can entrap and release two drugs simultaneously. The dumbbells consist of a large dense lobe and a smaller hollow hemisphere. Electron microscopy images show that the shells of both parts have mesoporous channels. In a simple etching process, the properly adjusted stirring speed and the application of ammonium fluoride as etching agent determine the shape and the surface anisotropy of the particles. The surface of the dense lobe and the small hemisphere differ in their zeta potentials consistent with differences in dye and drug entrapment. Confocal Raman microscopy and spectroscopy show that the two polyphenols curcumin (Cur) and quercetin (QT) accumulate in different compartments of the particles. The overall drug entrapment efficiency of Cur plus QT is high for the amphiphilic particles but differs widely between Cur and QT compared to controls of core–shell silica microspheres and uniformly charged dumbbell microparticles. Furthermore, Cur and QT loaded microparticles show different cancer cell inhibitory activities. The highest activity is detected for the dual drug loaded amphiphilic microparticles in comparison to the controls. In the long term, amphiphilic particles may open up new strategies for drug delivery.
As part of the scientific specialization at Reutlingen University, this thesis examines the requirements for and feasibility of computer-aided recognition of German Sign Language (DGS) and the German finger alphabet. The findings serve as a basis for developing a system that translates DGS signs or finger-alphabet signs into written German. First, basic information on the history, structure and grammar of DGS and the finger alphabet is presented. The signs are to be recognized by optical motion sensors; different sensor types are examined and compared for this purpose. The user-specific and technical requirements are then analyzed. The former are based on a survey of a focus group of deaf and hearing people from the field of education for the deaf, hard of hearing and speech impaired. From the requirements analysis, the feasibility from a technical and user-specific perspective can be derived to a certain degree. Finally, the requirements for the system to be developed are summarized and a recommendation for the development of a prototype is given.
As a result of the ongoing digitalization of the manufacturing industry, applications and services are being developed with potentially positive effects on factors such as effectiveness and quality of work. Gamification can be a suitable approach for strengthening motivating aspects in the work context. This paper presents the initial design and evaluation of a gamification approach for users of an AI service for machine optimization and extracts possible requirements for a concept to increase motivation.
Requirements for the human-machine interface in the automobile on the way to autonomous driving
(2017)
In recent decades, more and more driver assistance systems have found their way into the automobile, paving the way for the fully autonomous vehicles of the future. Many manufacturers already offer equipment variants of their vehicles that are prepared for the transition to a fully autonomous future. To take humans along on this path, several requirements are placed on the automobile's human-machine interface (HMI). For the partially autonomous vehicles of the next generation, the handover between manual and autonomous driving must be designed as well as possible for the driver. This paper looks at selected approaches for future HMI systems and evaluates them on the basis of the handover times between human and machine. A transformation of the automotive HMI is recommended in order to familiarize people with the new technologies.
This paper derives and explains requirements for a digital reference model of the cell and gene therapy (CGT) supply chain by means of a systematic literature review, partially applying the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 method. The results of the literature review underpin that the CGT supply chain requires standardized and automated processes, must meet certain transport requirements, and must be able to guarantee complete traceability. The requirements for the reference model are partly based on those of the classic Supply Chain Operations Reference (SCOR) model, but need to be modified and further developed in view of the specific characteristics of the CGT supply chain. On the basis of a reference model for the CGT supply chain that takes the requirements identified in this paper into account, an overarching management platform can be built. With the digital mapping and networking of all activities, the foundation is laid for integration into an enterprise resource planning (ERP) system for effective data and process mining. Increasingly better data quality and quantity along the processes of the CGT supply chain make it possible to generate more information about the processes themselves, from which further improvement approaches emerge. A CGT management platform thus forms the basis for a continuous improvement process across all processes within the CGT supply chain.
This paper provides new evidence on the formation and anchoring of inflation expectations. I conduct a game experiment and analyze the adjustment as well as the impact of credible targets on expectations. In addition, I evaluate the idiosyncratic determinants on the formation of expectations. The analysis reveals six results: First, I find evidence that long-term inflation expectations are firmly anchored to a credible target. Second, a temporary deviation due to unexpected monetary policy might trigger a decline in credibility, and third a de-anchoring of expectations due to uncertainty. Fourth, I find that people change their expectations little if a credible target exists. Fifth, expectations exhibit a large degree of time-variance only in environments without a target. Sixth, the dynamic adjustment to an ‘incomplete’ equilibrium, which is theoretically unstable, is nevertheless rapid and persistent in case of credible targets. All in all, I demonstrate a unique game setup with contributions to both experimental and monetary economics.
Information technology systems that support the clinical workflow are currently limited to organizational processes. This paper presents a first approach to introducing such a system into the perioperative area. To this end, a workflow engine was linked to a perioperative process visualization. The system was implemented according to the model-view-controller principle. The workflow engine serves as the "controller"; a process model with the required clinical data serves as the "model". The "view" was realized as a decoupled application based on web technologies. Three visualizations, the workflow engine, and the connection of both via a database interface were successfully implemented. The three visualizations comprise a view for the OR coordinator, a view for the circulating nurse, and an overview of a single operation.
To bring a pattern-based perspective to the SOA vs. microservices discussion, we qualitatively analyzed a total of 118 SOA patterns from two popular catalogs for their (partial) applicability to microservices. Patterns had to hold up to five derived microservices principles to be applicable. 74 patterns (63%) were categorized as fully applicable, 30 (25%) as partially applicable, and 14 (12%) as not applicable. The most frequently violated microservices characteristics were Decentralization and Single System. The findings suggest that microservices and SOA share a large set of architectural principles and solutions in the general space of service-based systems, with only a small set of differences in specific areas.
This paper provides a quantitative approach to measuring the effectiveness of ambush marketing by using Google data. To our knowledge, it is one of the first studies to develop an empirical approach that directly measures the attention effect of ambush marketing in sports. The new data set consists of 14 ambushers (treatment group) and 26 official sponsors (control group) and covers the period from 2004 to 2012. These firms conducted marketing activities during the past football World Cups and European Championships. The innovation in our paper is the measurement of attention by means of Google. The results are as follows: First, ambush marketing increases product attention significantly. Second, the product awareness of ambushers is greater than or equal to that of official sponsors. Finally, we demonstrate that ambush marketing has positive impacts on a company's performance. Overall, we conclude that Google provides new insights for the analysis of ambush marketing.
A major lesson of the recent financial crisis is that money market freezes have major macroeconomic implications. This paper develops a tractable model in which we analyze the microeconomic and macroeconomic implications of a systemic banking crisis. In particular, we consider how the systemic crisis affects the optimal allocation of funding for businesses. We show that a central bank should reduce the interest rate to manage a systemic shock and hence smooth the macroeconomic consequences. Moreover, the analysis offers insight into the rationale of bank behavior and the role of markets in a systemic crisis. We find that the failure to adopt the optimal policy can lead to economic fragility.
Data analysis is becoming increasingly important for pursuing organizational goals, especially in the context of Industry 4.0, where a wide variety of data is available. Numerous challenges arise here, especially when using unstructured data. However, this subject has received little attention in research so far. This research paper addresses this gap, which is of interest for science and practice alike. In a study, three major challenges of using unstructured data were identified: analytical know-how, data issues, and variety. Additionally, measures to improve the analysis of unstructured data in the Industry 4.0 context are described. The paper thus provides empirical insights into challenges and potential measures when analyzing unstructured data. The findings are also presented in a framework. Hence, the next steps of the research project and future research directions become apparent.
Modern wide bandgap power devices promise higher power conversion performance if the device can be operated reliably. As switching speed increases, the effects of parasitic ringing become more prominent, causing potentially damaging overvoltages during device turn-off. Estimating the expected additional voltage caused by such ringing enables more reliable designs. In this paper, we present an analytical expression to calculate the expected overvoltage caused by parasitic ringing based on parasitic element values and operating-point parameters. Simulations and measurements confirm that the expression can be used to find the smallest rise time of the switches' drain-source voltage for minimum overvoltage. The given expression also allows the prediction of the trade-off in overvoltage amplitude when faster rise times are required.
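As a rough, textbook-style orientation (explicitly not the analytical expression derived in the paper): for an undamped parasitic LC loop, the first turn-off overshoot scales with the switched current and the characteristic impedance of the loop. The component values below are invented for illustration.

```python
import math

def ringing_overvoltage(i_load, l_par, c_par):
    """First-order estimate of turn-off overshoot for an undamped
    parasitic LC loop: dV = I * sqrt(L/C). This is the generic
    textbook approximation, not the paper's derived expression."""
    return i_load * math.sqrt(l_par / c_par)

# Illustrative operating point: 10 A switched against 20 nH of loop
# inductance and 200 pF of device output capacitance.
dv = ringing_overvoltage(10.0, 20e-9, 200e-12)
# sqrt(20 nH / 200 pF) = 10 ohms, so roughly 100 V of overshoot
```

Even this crude estimate shows why fast wide-bandgap switching is sensitive to layout: halving the parasitic inductance reduces the characteristic impedance, and with it the overshoot, by a factor of sqrt(2).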
While driving, stress arises in situations in which drivers judge their ability to manage the driving demands as insufficient or lose the capability to handle the situation. This leads to increased numbers of driver mistakes and traffic violations. Additional stress factors are time pressure, road conditions, or dislike of driving. Stress therefore affects driver and road safety. Stress is classified into two categories depending on its duration and its effects on body and psyche: short-term eustress and constantly present distress, which causes degenerative effects. In this work, we focus on distress. Wearable sensors are handy tools for collecting biosignals like heart rate, activity, etc. Their easy installation and non-intrusive nature make them convenient for estimating stress. This study investigates stress and its implications, analyzing stress within a select group of individuals from Spain and Germany. The primary objective is to examine the influence of recognized psychological factors, including personality traits such as neuroticism, extroversion and psychoticism, on stress and road safety. Stress levels were estimated by collecting physiological parameters (R-R intervals) with a Polar H10 chest strap. We observed that personality traits such as extroversion exhibited similar trends during relaxation, with an average heart rate 6% higher in Spain and 3% higher in Germany. However, while driving, introverts on average experienced more stress, with heart rates 4% and 1% lower than extroverts in Spain and Germany, respectively.
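The physiological parameter used here, the R-R interval, maps directly to heart rate via HR = 60000 / RR(ms). A minimal sketch with invented interval data (illustrative values, not the study's recordings):

```python
def mean_heart_rate(rr_intervals_ms):
    """Mean heart rate in bpm from R-R intervals given in milliseconds."""
    beats = len(rr_intervals_ms)
    total_ms = sum(rr_intervals_ms)
    return 60000.0 * beats / total_ms

# Invented samples: relaxed vs. driving recordings from a chest strap.
relaxed = [1000, 990, 1010, 1000]   # → 60.0 bpm
driving = [750, 760, 740, 750]      # → 80.0 bpm

# A ratio above 1 indicates an elevated heart rate while driving, the
# kind of between-condition comparison underlying the percentages above.
stress_indicator = mean_heart_rate(driving) / mean_heart_rate(relaxed)
```

Research-grade stress estimation additionally uses heart-rate variability features of the R-R series, but the condition-to-condition comparison follows the same pattern.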
Today's logistics systems are characterized by uncertainty and constantly changing requirements. Rising demand for customized products, short product life cycles and a large number of variants increase the complexity of these systems enormously. In particular, intralogistics material flow systems must be able to adapt to changing conditions at short notice, with little effort and at low cost. To fulfil these requirements, the material flow system needs to be flexible in three important parameters, namely layout, throughput and product. While the scope of these flexibility parameters is described in the literature, their respective effects on an intralogistics material flow system and the influencing factors are mostly unknown. This paper describes how the flexibility parameters of an intralogistics system can be determined using a multi-method simulation. The study was conducted in the learning factory "Werk150" on the campus of Reutlingen University with its different means of transport and processes, and was validated through practical experiments.
This work is a comparative study of survey tools intended to help developers select a suitable tool for use in an AAL environment. The first step was to identify the basic functionality required of survey tools used for AAL technologies and to compare these tools by their functionality and intended assignments. The comparison was derived from the data obtained, previous literature studies and further technical data. A list of requirements was compiled and ranked by relevance to the target application domain. Using an integrated assessment method, a generalized estimate value was calculated and the result is explained. Finally, the planned application of the selected tool in a running project is described.
Today, digitalization is firmly anchored in society and business, and it is recognized to have a significant impact on the retailing sector. The in-store display of moving images has, however, so far gained little attention from researchers. The aim of this research is to provide a first estimation of the current state of moving-image distribution in stationary retail stores. A store check was the basis for analysis and evaluation. In total, 152 stores were analyzed in Stuttgart, Germany. Of these 152 observed stores, 62 displayed 177 moving images. Detailed analyses of content, mood, color and the actors in the motion pictures showed that all aspects are very well harmonized with the target group of the store. The chapter provides a basic estimation of the in-store diffusion of moving images, thereby opening up avenues for further research.
The possibility of bringing the interference source close to the potential target is characterized by classifying the source as stationary, portable, mobile, very mobile or highly mobile [3]. Starting from the existing and well-known IEMI (Intentional Electromagnetic Interference) threat and the already existing classifications, a comparative analysis of the methods used to classify the intentional EM environment is carried out, taking into account the frequency, the cost, the amplitude of the noise signal, the radiated power and the energy of a radiation pulse.
There are several intra-operative use cases that require the surgeon to interact with medical devices. I used the Leap Motion Controller as an input device for three use cases: 2D interaction (e.g. advancing EPR data), selection of a value (e.g. room illumination brightness) and a point-and-click application scenario. I evaluated the Palm Mouse as the most suitable gesture solution for controlling the mouse and advise using the implementation that employs all fingers to perform a click. This small case study introduces the implementations and methods that led to these recommendations.
In medicine, various maturity models exist that can support the digitalization of hospitals. The requirements for a maturity model for this purpose cover aspects from both general and specific areas of the hospital. An analysis of the maturity models HIN, CCMM, EMRAM and O-EMRAM reveals large gaps in the operating-room domain as well as missing aspects in the emergency department. No comprehensive maturity model was found. A combination of HIN and CCMM could cover almost all areas sufficiently. Additional supplements from specialized maturity models, or even the development of a comprehensive maturity model, would be worthwhile.
This paper showed how a PLC program written in the ladder-diagram programming language can be analyzed using Petri-net analysis techniques. The goal of the method is not verification in the strict sense but the detection of forbidden or undesired states. The paper presented rules for transforming a sequence implemented in ladder diagram into a Petri net, and demonstrated the capability of the approach through the analysis of an incorrectly implemented sequence. The example shows that programming errors can be detected even before testing on the real plant. A focus of further development is the generalization to program organization units developed in ladder diagram that do not implement pure sequences. Another important development step is graphical support for fault localization in the reachability graph, so that altogether a powerful tool becomes available to support the implementation of sequence controls in ladder diagram.
Research question: Polysomnography (PSG) is the clinical standard procedure and the reference for sleep measurement and the classification of individual sleep stages. Alternative approaches to this elaborate procedure could offer several advantages if the measurements were performed in a more comfortable way. The main goal of this research study is to develop an algorithm for the automatic classification of sleep stages that uses only movement and respiration signals [1].
Patients and methods: After analyzing current research, we chose multinomial logistic regression as the basis of the approach [2]. To increase the accuracy of the evaluation, four features derived from movement and respiration signals were developed. Night-time recordings of 35 subjects, provided by Charité-Universitätsmedizin Berlin, were used for the evaluation. The average age of the participants was 38.6 +/- 14.5 years and the average BMI was 24.4 +/- 4.9 kg/m2. Since the algorithm works with three stages, the stages N1, N2 and N3 were merged into a single NREM stage. The available data set was strictly split into a training set of about 100 h and a test set of about 160 h of night-time recordings. Both sets had a similar ratio of men to women, and the average BMI showed no significant deviation.
Results: The algorithm was implemented and delivered successful results: the accuracy of detecting wake/NREM/REM phases is 73%, with a Cohen's kappa of 0.44 for the 19,324 analyzed sleep epochs of 30 s each. The observed slight overestimation of the NREM phase can be partly explained by its prevalence in a typical sleep pattern. Even the use of a balanced training data set could not fully resolve this issue.
Conclusions: The results achieved confirm the suitability of the approach in principle. Its advantage is that only movement and respiration signals are used, which can be recorded with less effort and more comfortably for users than, for example, cardiac or EEG signals. The new system therefore represents a clear improvement over existing approaches. Merging the described algorithmic software with the hardware system for measuring respiration and body-movement signals described in [1] into an autonomous, contactless system for continuous sleep monitoring is a possible direction for future work.
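The multinomial logistic regression step described in the methods can be sketched as follows. The stage weights and the four-feature vector are hypothetical placeholders (the trained coefficients are not published in the abstract); only the model form, one linear score per stage followed by a softmax, matches the description.

```python
import math

STAGES = ["wake", "NREM", "REM"]

def softmax(scores):
    """Turn raw per-stage scores into probabilities (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_epoch(features, weights, biases):
    """Score one 30-s epoch with multinomial logistic regression.

    features : the four movement/respiration features of the epoch
    weights  : one coefficient vector per stage (hypothetical values)
    biases   : one intercept per stage
    Returns the most probable stage and the full probability vector.
    """
    scores = [sum(w * x for w, x in zip(w_k, features)) + b_k
              for w_k, b_k in zip(weights, biases)]
    probs = softmax(scores)
    return STAGES[probs.index(max(probs))], probs
```

In practice the coefficients would be fitted on the ~100 h training set and the classifier evaluated epoch by epoch on the ~160 h test set.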
This work presents the possibilities of 3D controllers for use in interventional radiology, and in particular for controlling real-time magnetic resonance imaging (MRI). This is of interest with respect to controlled navigation into a target tissue. Real-time imaging lets the interventionalist follow the course of the procedure, but so far they cannot control the MRI scanner themselves while performing the intervention, since this is done by an assistant in the adjoining room, and communication is very difficult given the high noise level. This work addresses this point and analyzes 3D controllers for their suitability for real-time control of an MRI scanner. Both tracking-based and tracking-free devices were considered. The result was that tracking-based methods are less suitable because of their insufficient interpretation of the inputs. The tracking-free devices, in contrast, are suitable due to the correct interpretation of all inputs and their intuitive operation.
The power supply of electronic control units in the automotive sector is increasingly provided by switching regulators. The SEPIC (Single Ended Primary Inductance Converter) can convert a voltage both up and down and could thus replace classic buck and boost converters. This paper examines the SEPIC with regard to its suitability for automotive applications. For this purpose, a large-signal and a small-signal analysis of the converter were carried out, reproduced with suitable simulation models and compared with measurements. The main advantages of the SEPIC are:
1. a seamless transition between buck and boost operation, 2. low input ripple, 3. DC short-circuit robustness. The SEPIC is also an interesting alternative with regard to efficiency and EMC behavior. The capacitor between input and output carries a continuous current; the associated failure risk is discussed on the basis of the RMS currents.
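The buck/boost capability mentioned above follows from the ideal SEPIC steady-state relation V_out = V_in · D/(1 − D): duty cycles below 0.5 step the voltage down, above 0.5 step it up, with no mode change in between. A minimal sketch of the duty-cycle calculation (the optional diode-drop term is an illustrative refinement, not from the paper):

```python
def sepic_duty_cycle(v_in, v_out, v_diode=0.0):
    """Ideal SEPIC duty cycle in continuous conduction mode (CCM).

    From V_out = V_in * D / (1 - D), solving for D gives
    D = (V_out + V_d) / (V_in + V_out + V_d),
    where v_diode optionally accounts for the output-diode forward drop.
    """
    return (v_out + v_diode) / (v_in + v_out + v_diode)
```

For a 14 V automotive rail regulated to 5 V this gives D ≈ 0.26 (buck region); boosting 6 V to 12 V gives D ≈ 0.67, with a smooth transition through D = 0.5 as the battery voltage crosses the output voltage.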