Organizations have identified the opportunities of big data analytics to support the business with problem-specific insights by exploiting generated data. Sociotechnical solutions are developed in big data projects to achieve competitive advantage. Although these projects are aligned with specific business needs, common architectural challenges are not addressed in a comprehensive manner. Enterprise architecture (EA) management is a holistic approach to tackling complex business and IT architectures. The transformation of an organization's EA is influenced by big data transformation processes and their data-driven approach on all layers. In this paper, we review big data literature to analyze which requirements for the EA management discipline are proposed. Based on a systematic literature identification, conceptual categories of requirements for EA management are elicited using an inductive category formation. These conceptual categories of requirements constitute a category system that facilitates a new perspective on EA management and fosters the innovation-driven evolution of the EA management discipline.
An ongoing challenge today is to lower, through individual support, the impact of dysfunctionality on quality of life. Against the background of an aging society and continuously rising costs of care, a holistic solution is needed. This solution must integrate individual needs and preferences, locally available possibilities, regional conditions, and professional and informal caregivers, and provide the flexibility to accommodate future requirements. The proposed model is the result of a joint initiative to overcome the major obstacles and to center a solution on the individual needs caused by dysfunctionality.
The increasing slew rate of modern power switches can increase the efficiency and reduce the size of power electronic applications. This requires a fast and robust signal transmission to the gate driver of the high-side switch. This work proposes a galvanically isolated capacitive signal transmission circuit to increase common mode transient immunity (CMTI). An additional signal path is introduced to significantly improve the transmission robustness for small duty cycles to assure a safe turn-off of the power switch. To limit the input voltage range at the comparator on the secondary side during fast high-side transitions, a clamping structure is implemented. A comparison between a conventional and the proposed signal transmission is performed using transistor-level simulations. A propagation delay of about 2 ns over a wide range of voltage transients of up to 300 V/ns at input voltages up to 600 V is achieved.
In practice, the use of layout PCells for analog IC design has not advanced beyond primitive devices and simple modules. This paper introduces a Constraint-Administered PCell-Applying Block-level Layout Engine (CAPABLE), which permits PCells to access their context, thus enabling a true "bottom-up" development of complex parameterized modules. These modules are integrated into the design flow with design constraints and applied by an execution cockpit via an automatically built layout script. The practical purpose of CAPABLE is to easily generate full-custom block layouts for given schematic circuits. Looking ahead, our results inspire a whole new conception of PCells that can not only act (on demand), but also react (to environmental changes) and interact (with each other).
Measuring the power losses of power electronic components and overall systems by means of electrical quantities is sometimes difficult and, in a few applications, not even possible. Calorimetric power loss measurement is an established method to identify the overall system losses with suitable accuracy. This paper presents a novel method with an open-chamber calorimeter under accurate air mass flow, air pressure, and humidity measurement and temperature control. The benefits are an approximately halved measurement time compared to established systems and the ability to control the chamber temperature. This makes it possible to measure the power losses at different ambient temperatures.
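The underlying energy balance of such a calorimeter can be sketched as follows (a minimal illustration, not the paper's model; the fixed specific heat capacity is an assumption, since the method described above explicitly accounts for humidity and pressure):

```python
# Illustrative energy balance for an open-chamber calorimeter: in steady
# state, the device's losses heat the air stream flowing through the chamber,
#   P_loss ≈ m_dot * c_p * (T_out - T_in)
# The real c_p of air varies with humidity and pressure; a fixed dry-air
# value is assumed here for simplicity.

def calorimetric_loss(m_dot_kg_s: float, t_in_c: float, t_out_c: float,
                      c_p_j_kg_k: float = 1005.0) -> float:
    """Steady-state power loss in watts from the air-side energy balance."""
    return m_dot_kg_s * c_p_j_kg_k * (t_out_c - t_in_c)

# Example (invented values): 0.05 kg/s of air heated by 4 K carries ~201 W.
p = calorimetric_loss(0.05, 25.0, 29.0)
print(round(p, 1))  # 201.0
```

Accurate air mass flow and temperature measurement matter precisely because both enter this product linearly: a 1 % error in either quantity is a 1 % error in the measured loss.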
Even though near-data processing (NDP) can provably reduce data transfers and increase performance, current NDP is utilized solely in read-only settings. Synchronization and invalidation mechanisms between host and smart storage that are slow or tedious to implement make NDP support for data-intensive update operations difficult. In this paper, we introduce a low-latency cache-coherent shared lock table for update NDP settings in disaggregated memory environments. It utilizes the novel CCIX interconnect technology and is integrated in neoDBMS, a near-data processing DBMS for smart storage. Our evaluation indicates end-to-end lock latencies of ~80-100 ns and robust performance under contention.
Continuous monitoring of individual vital parameters can provide information for the assessment of one's health and indications of medical problems in the context of personalized medicine. Correlations between parameters and health issues are to be evaluated. As one project in this topic area, a telemedicine platform is implemented to gather data from outpatients via wearables and accumulate them for physicians and researchers to review. This work extracts requirements, draws use case scenarios, and shows the current system architecture consisting of a patient application, a physician application with a web server, and a backend server application. In further work, the prototype will assist in developing a vendor-free and open monitoring solution. Functionality and usability will be evaluated in an imminent first study.
Non-fungible tokens (NFTs) are unique digital assets that have recently gained significant popularity, particularly in the digital art sector. The success of NFTs and other blockchain-based innovations depends on their acceptance and use by consumers. This study aims to understand the impact of moral values on the acceptance of NFTs. Based on a quantitative survey with over 800 complete responses, the analysis shows that moral aspects of NFTs are indeed important for potential users. However, there is an attitude-behavior gap, as the positive impact of moral values on the intention to use NFTs is not reflected in the actual current usage of NFTs by the respondents. This study contributes to knowledge by providing new empirical data on the acceptance of NFTs and highlighting the role of moral values in the acceptance decision.
In 2013, Royal Philips was two years into a daunting transformation. Following declining financial performance, CEO Frans van Houten aimed to turn the Dutch icon into a "high-performing Company" by 2017. This case study examines the challenges of the business-driven IT transformation at Royal Philips, a diversified technology company. The case discusses three crucial issues. First, the case reflects on Philips’ aim at creating value from combining locally relevant products and services while also leveraging its global scale and scope. Rewarded and unrewarded business complexity is analyzed. Second, the case identifies the need to design and align multiple elements of an enterprise (organizational, cultural, technical) to balance local responsiveness with global scale. Third, the case explains the role of IT (as an asset instead of a liability) in Philips’ transformation and discusses the new IT landscape with its digital platforms, and the new practices to create effective business-IT partnerships.
The energy sector in Germany, as in many other countries, is undergoing a major transformation. To achieve the climate targets, numerous measures to implement smart energy and resource efficiency are necessary. Therefore, energy companies are experiencing increasing pressure from politics and society to transform their business areas in a sustainable manner and implement smart and sustainable business models. Consequently, numerous resources are expected to flow into the development and implementation of new business models. But often these efforts remain unsuccessful in practice. There is a large amount of literature on barriers and drivers of smart and sustainable business models in the energy sector. But what are the factors that companies struggle with most when developing and implementing new business models in practice? To answer this question, the results of a systematic literature review were evaluated by conducting semi-structured interviews with experts of the German energy sector. Six categories of transformation barriers were identified: Organizational, Financial, Legal, Partner-Network, Societal and Technological barriers. To overcome these barriers, recommendations for action and key success factors are outlined by the experts interviewed. The interview study validates key barriers and drivers in terms of their significance in practice in the German energy sector and makes recommendations to advance the smart and sustainable transformation of the energy sector.
In a time of upheaval and digitalization, new business models for companies play an important role. Decentralized power generation and energy efficiency indicators to achieve climate goals and to reduce global warming are currently forcing energy companies to develop new business models. In recent years, many methods of business model development have been introduced to create new business ideas. But what are the obstacles in implementing these business models in the energy sector to develop new business opportunities? And what challenges do companies face in this respect? To answer this question, a systematic literature review was conducted in this paper. As a result, eight categories were identified which summarise the main barriers for the implementation of new business models in the energy domain.
The energy turnaround, digitalization, and decreasing revenues force enterprises in the energy domain to develop new business models. Following a Design Science Research approach, we showed in two action research projects that business models in the energy domain result in complex ecosystems with multiple actors. We also identified that municipal utilities struggle with the systematic development of business models. To address this problem, we captured the requirements together with the enterprise partners in a second phase. We then developed a method consisting of the following components: a method for the creative development of a new business model in the form of a Business Model Canvas (BMC); a mapping between the e3value ontology and the BMC for modelling a business ecosystem; and the Business Model Configurator (BMConfig) prototype for modelling and simulating the e3value ontology. The business model can be quantified and analyzed for its viability. We demonstrate the feasibility of our approach on the business model of a power community.
This paper investigates the impact of dynamic capabilities (DC) on brand love. From a resource-based view, there is little clarity vis-à-vis the specific capabilities that drive the ability to create brand love. This paper focuses on three research questions: Firstly, which dynamic capabilities are relevant for brand love? Secondly, how strong is the impact of certain dynamic capabilities on brand love? Thirdly, which conditions mediate and moderate the impact of specific dynamic capabilities on brand love? Data from a multi-method research approach have been used to identify the specific capabilities that corporations need to enhance brand love. Furthermore, a standardized online survey was conducted among marketing executives and evaluated by structural equation modeling. The results indicate that customer expertise plays a major role in the relationship between dynamic capabilities and brand love. Furthermore, this relationship is more important in markets with low competitive differentiation in products and services.
As "the most international company on earth", DHL Express promised to deliver packages between almost any pair of countries within a defined time-frame. To fulfill this promise, the company had introduced a set of global business and technology standards. While standardization had many advantages (improving service for multinational customers, faster response to changes in import/export regulations, sharing of best practices etc.), it created impediments to local innovation and responsiveness in DHL Express' network of 220 countries/territories. Reconciling standardization-innovation tradeoffs is a critical management issue for global companies in the digital economy.
This case describes one large, successful company's approach to the tradeoff of standardization versus innovation.
Enterprise Architecture (EA) management is an activity that seeks to foster the alignment of business and IT, and pursues various goals further operationalizing this alignment. Key to effective EA management is a framework that defines the roles, activities, and viewpoints used for EA management in accordance with the concerns that the stakeholders aim to address. Consensus holds that such frameworks are organization-specific and hence they are designed in governance activities for EA management. As of today, top-down approaches to governance are used to derive organization-specific frameworks. These usually lack systematic mechanisms for improving the framework based on the feedback of the responsible stakeholders. We outline a bottom-up approach to EA management governance that systematically observes the behavior of the actors to learn user concerns and recommend appropriate viewpoints. With this approach, we complement traditional top-down governance activities.
Bootstrap circuits are mainly used for supplying a gate driver circuit to provide the gate overdrive voltage for a high-side NMOS transistor. The required charge has to be provided by a bootstrap capacitor, which is often too large for integration if an acceptable voltage dip at the capacitor has to be guaranteed. Three options for an area-efficient bootstrap circuit for a high-side driver with an output stage of two NMOS transistors are proposed. The key idea is that the main bootstrap capacitor is supported by a second bootstrap capacitor, which is charged to a higher voltage and connected when the gate driver turns on. A high voltage swing at the second capacitor leads to a high charge allocation. Both bootstrap capacitors require up to 70% less area compared to a conventional bootstrap circuit. This enables compact power management systems with fewer discrete components and smaller die size. A calculation guideline for optimum bootstrap capacitor sizing is given. The circuit was manufactured in a 180 nm high-voltage BiCMOS technology as part of a high-voltage gate driver. Measurements confirm the benefit of high-voltage charge storing. The fully integrated bootstrap circuit, including two stacked 75.8 pF and 18.9 pF capacitors, results in a voltage dip lower than 1 V. This matches well with the theory of the calculation guideline.
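The basic sizing relation behind such a guideline can be sketched as follows (the textbook single-capacitor form only; the guideline for the stacked two-capacitor circuit described above is more involved, and the numbers below are invented for illustration):

```python
# Textbook bootstrap sizing: the capacitor must deliver the gate charge Q_g
# while its voltage dips by at most dV, so C_boot >= Q_g / dV. A higher
# pre-charge voltage (the paper's key idea) lets the same charge be stored
# in a smaller capacitor, because more voltage swing can be spent.

def min_bootstrap_cap(q_gate_c: float, max_dip_v: float) -> float:
    """Minimum bootstrap capacitance in farads for a given gate charge
    and allowed voltage dip."""
    return q_gate_c / max_dip_v

# Example (invented values): 20 nC of gate charge with a 1 V allowed dip
# requires at least 20 nF.
c = min_bootstrap_cap(20e-9, 1.0)
print(c)  # 2e-08
```

The relation makes the area problem concrete: halving the allowed dip doubles the required capacitance, which is why allowing a larger swing on a higher-voltage auxiliary capacitor saves area.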
The main challenge when driving heat pumps with PV electricity is balancing differing electrical and thermal demands. In this article, a heuristic method for the optimal operation of a heat pump driven by a maximum share of PV electricity is presented. For this purpose, the thermal storages, including the one for domestic hot water (DHW), are activated in order to shift the operation of the heat pump to times of PV generation. The system under consideration covers the thermal and electrical demands of a single-family house. It consists of a heat pump, a thermal energy storage for DHW, and a grid-connected PV system. For space heating and the generation of domestic hot water, the heat pump runs with two different supply temperatures, thereby achieving a maximum overall COP. Within the optimization algorithm, a set of heuristic rules is developed such that the operational characteristics of the heat pump in terms of minimum running and stopping times are met, as are the constraints on upper and lower limits of room temperature and storage energy content. Depending on the amount of electricity generated, a varying number of heat pump schedules fulfilling the boundary conditions is created. Finally, the schedule offering the maximum on-site utilization of PV electricity with a minimum number of heat pump starts, which serves as a secondary condition, is selected. Yearly simulations of this combination have been carried out. Initial results of this method indicate a significant rise in on-site consumption of PV electricity and in the share of heating demand met by renewable electricity, with no need for a massive TES for the heating system in the form of a big water tank.
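The final selection step of such a heuristic can be sketched as follows (an assumed toy structure, not the authors' implementation; the candidate schedules, PV profile, and power values are invented):

```python
# Toy schedule selection: from candidate on/off schedules that already
# satisfy the boundary conditions, pick the one maximizing on-site PV use;
# the number of heat-pump starts acts as the secondary (tie-breaking)
# criterion, implemented here via tuple comparison in max().

def count_starts(schedule):
    """A start is any 0 -> 1 transition in the on/off schedule."""
    return sum(1 for a, b in zip([0] + schedule, schedule) if a == 0 and b == 1)

def pv_self_consumption(schedule, pv_kw, hp_kw):
    """PV energy (kWh at 1 h steps) directly used by the running heat pump."""
    return sum(min(pv, hp_kw) for on, pv in zip(schedule, pv_kw) if on)

def select_schedule(candidates, pv_kw, hp_kw):
    return max(candidates,
               key=lambda s: (pv_self_consumption(s, pv_kw, hp_kw),
                              -count_starts(s)))

pv = [0.0, 1.0, 2.0, 2.0, 0.5]               # hourly PV generation in kW
cands = [[1, 0, 1, 0, 1], [0, 1, 1, 1, 0]]   # feasible on/off schedules
best = select_schedule(cands, pv, hp_kw=1.5)
print(best)  # [0, 1, 1, 1, 0]
```

The second candidate wins on both criteria here: it concentrates operation in the PV peak (4.0 kWh of direct PV use versus 2.0) and needs only one start instead of three.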
Size and cost of a boost converter can be minimized by reducing the voltage overshoot and speeding up the transient response in case of load transients. The presented technique improves the transient response of a current-mode controlled boost converter, which usually suffers from bandwidth limitation because of its right-half-plane zero (RHPZ). The proposed technique comprises a load current estimation which works as part of a digital controller without any additional measurements. Based on the latest load estimation, the controller parameters are adapted, achieving small voltage overshoot and fast transient response. The presented technique was implemented in a digital control circuit consisting of an ASIC in a 110 nm technology, a Xilinx Spartan-6 field programmable gate array (FPGA), and a TI ADS8422 analog-to-digital converter (ADC). Simulations and measurements of a 4 V-to-6.3 V, 500 mA boost converter show an improvement of 50% in voltage overshoot and response time to load transients.
A new method for computing temperatures in bond wires surrounded by a finite mold is presented. It is faster than the usual finite element method (FEM) while producing comparable results. For some parameters our method works where the FEM fails. The algorithm is implemented in the so-called Bondrechner (bond calculator), which provides an easy-to-use interface for designers of microelectronic systems. Its application has the potential to improve the reliability of bond wires. A non-ideal parameter for the heat transfer from the bond wire to the mold package is also taken into account. This parameter is likely to change under aging influences and is therefore very important for reliability estimates. Our method also accounts for the interaction of neighboring wires. This is becoming increasingly important because the diameter and mutual spacing of bond wires are shrinking with the ongoing miniaturization of chip packages. Our program can also compute temperatures for transient currents and determine the current corresponding to a given maximum temperature.
We introduce bloomRF as a unified method for approximate membership testing that supports both point and range queries. As a first core idea, bloomRF introduces novel prefix hashing to efficiently encode range information in the hash code of the key itself. As a second key concept, bloomRF proposes novel piecewise-monotone hash functions that preserve local order and support fast range lookups with fewer memory accesses. bloomRF has near-optimal space complexity and constant query complexity. Although bloomRF is designed for integer domains, it supports floating points and can serve as a multi-attribute filter. The evaluation in RocksDB and in a standalone library shows that it is more efficient and outperforms existing point-range filters by up to 4x across a range of settings and distributions, while keeping the false-positive rate low.
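The idea of encoding range information via key prefixes can be illustrated with a toy dyadic-prefix filter (this is not bloomRF itself: a plain set stands in for the Bloom bit array, and the naive range check below enumerates far more prefixes than the few dyadic covers a real filter would use):

```python
# Toy prefix filter for 8-bit integer keys: every inserted key is recorded
# at several prefix granularities (dyadic levels). A range query then only
# needs membership checks on dyadic intervals fully contained in the range,
# instead of probing every individual value.

LEVELS = 8  # prefix granularities for 8-bit keys

class ToyRangeFilter:
    def __init__(self):
        self.bits = set()  # stands in for a Bloom filter's bit array

    def insert(self, key: int):
        for lvl in range(LEVELS + 1):
            self.bits.add((lvl, key >> lvl))  # key's prefix at each level

    def may_contain_range(self, lo: int, hi: int) -> bool:
        # Naive decomposition: per level, test every prefix whose dyadic
        # interval [p*2^lvl, (p+1)*2^lvl - 1] lies fully inside [lo, hi].
        for lvl in range(LEVELS + 1):
            for p in range(lo >> lvl, (hi >> lvl) + 1):
                if p << lvl >= lo and ((p + 1) << lvl) - 1 <= hi:
                    if (lvl, p) in self.bits:
                        return True
        return False

f = ToyRangeFilter()
f.insert(42)
print(f.may_contain_range(40, 50))    # True
print(f.may_contain_range(100, 120))  # False
```

As with any Bloom-style structure, "True" only means "possibly present"; the point of prefix encoding is that a range lookup touches a logarithmic number of intervals rather than every value in the range.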
Operating combined heat and power (CHP) plants can save a notable amount of primary energy. CHP plants are therefore subsidized under various laws and directives. For a CHP plant to operate economically, the largest possible share of the electricity generated must either be consumed on site or sold to third parties (tenants, apartment owners, etc.). With the KWKG 2016, larger CHP plants become attractive, and plants with fewer annual operating hours can even prove more economical than pure base-load plants.
This paper contributes to the automatic detection of perioperative workflow by developing a binary endoscope localization. Automated situation recognition in the context of an intelligent operating room requires the automatic conversion of low-level cues into more abstract high-level information. Imagery from a laparoscope delivers rich content that is easy to obtain but hard to process. We introduce a system which detects whether the endoscope's distal tip is inside or outside the patient based on the endoscope video. This information can be used as one parameter in a situation recognition pipeline. Our localization performs in real time at a video resolution of 1280x720, and 5-fold cross-validation yields mean F1-scores of up to 0.94 on videos of 7 laparoscopies.
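The reported F1-scores follow the usual definition for a binary inside/outside classification; a minimal sketch with invented labels:

```python
# F1 for a binary classifier: harmonic mean of precision and recall over
# the positive class (here, e.g., "tip is inside the patient" = 1).

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Invented per-frame labels for illustration:
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
print(round(f1_score(y_true, y_pred), 3))  # 0.8
```

For 5-fold cross-validation, this score is computed once per held-out fold and the five values are averaged to give the reported mean.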
Creativity, problem-solving competence, and collaborative work are defined as key competences of the 21st century in numerous international studies as well as by the OECD (2017). Nevertheless, many teaching and learning methods are still oriented towards conveying predefined solution paths. Studies at the secondary level in the USA, Germany, and Asia show that Design Thinking, through its creative and collaborative elements, can lead to more lasting learning success among learners and, on the part of teachers, to higher satisfaction in conveying the content.
The core elements of Design Thinking are: the iterative process with its phases of understanding, observing, defining points of view, ideating, prototyping, and testing; work in multidisciplinary teams; and user orientation in defining the task (Brown, 2009). The phases of the iterative process show a high congruence with the process-oriented competences of the subjects art/crafts and general studies according to the curriculum for primary schools (Ministerium für Kultus, Jugend und Sport Baden-Württemberg, 2016). As part of an interdisciplinary doctoral project at the PH Freiburg, a qualitative research design will be used to investigate the extent to which Design Thinking is suitable for fostering the creativity, problem-solving competence, and collaborative work of primary school children in art/crafts and general studies from the teachers' perspective. Preliminary studies with teachers and teacher trainers, in which questionnaires were administered after participation in a Design Thinking workshop, as well as two pilot teaching units at primary schools with participant observation, expert interviews, and interviews with children in small groups, show first results.
The efficiencies ("standard utilization rates") according to DIN 4709 reflect the practical operation of micro-CHP units more accurately. For the thermal efficiencies in particular, DIN 4709 yields lower values than stationary measurements due to start-up/shutdown losses and storage losses. Operating the auxiliary boiler reduces the primary energy savings of the overall system.
Job advertisements are an important means of communicating role expectations for management accountants to the labor market. They provide information about which roles of management accountants are sought or expected by companies. However, which roles are communicated in job advertisements has been unknown so far. Using a large sample of 889 job ads and a text-mining approach, we show an apparent mix of different role types with a strong focus on a rather classic role: the watchdog role. However, individuals with business partner characteristics are more often sought for leadership positions or in family businesses and small and medium-sized enterprises (SMEs). The results challenge the current role discussion for management accountants as business partners in practice and in some academic fields.
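A dictionary-based tagging step such a text-mining pipeline might contain can be sketched as follows (the role labels and keyword lists here are invented for illustration and are not the authors' coding scheme):

```python
# Toy role tagging for job ads: each ad is scored against per-role keyword
# dictionaries and labeled with the best-matching role type.

ROLE_KEYWORDS = {
    "watchdog": ["compliance", "controls", "variance", "reporting"],
    "business partner": ["strategy", "advisory", "decision support", "partner"],
}

def tag_role(ad_text: str) -> str:
    text = ad_text.lower()
    scores = {role: sum(text.count(kw) for kw in kws)
              for role, kws in ROLE_KEYWORDS.items()}
    return max(scores, key=scores.get)  # highest keyword count wins

ad = "Seeking a controller for variance analysis, monthly reporting and controls."
print(tag_role(ad))  # watchdog
```

A real pipeline would add tokenization, stemming, and a validated coding scheme; the sketch only shows how role frequencies across 889 ads could be derived mechanically from text.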
In cooperation with the medical device manufacturer ulrich medical, a user experience and usability study is being conducted on the software of the contrast media injectors currently in use. The company wants to develop a new variant of a contrast media injector based on an improved version of this software. User studies can be conducted with a wide variety of methods. The appropriate procedure must be defined, and the test persons must be determined in relation to the method used. For medical devices, strict requirements laid down in standards and laws must additionally be observed. The basis for the method selection is research into usability and user experience requirements for medical devices. The study is evaluated using quantitative data from a usability test in the laboratory, user experience questionnaires, and qualitative post-test interviews. Primarily, this study serves to identify possible improvements, which will be elaborated and implemented in the subsequent master's thesis.
Excellence in IT is a key enabler for the digital transformation of enterprises. To realize the vision of digital enterprises, it is necessary to cope with changing business requirements and to align business and IT. In order to evaluate the contribution of enterprise architecture management (EAM) to these goals, our paper explores the impact of various factors on the perceived benefit of EAM in enterprises. Based on the literature, we build an empirical research model. It is tested with empirical data from European EAM experts using a structural equation modelling approach. It is shown that changing business requirements, IT-business alignment, the complexity of the information technology infrastructure, and the enterprise architecture knowledge of information technology employees are crucial factors influencing the perceived benefit of EAM in enterprises.
Fundamental changes in today's working world pose considerable challenges for people, systems, processes, and entire organizations. In all areas of this interdependent system, the human factor makes an essential contribution to the competitive advantage of many manufacturing companies located in Germany. The shift from automation to self-steering enterprises does not leave the most adaptable link in this system, the human being, unaffected. Types of stress are changing, and singular coping strategies are no longer sufficient to achieve an optimal strain state for each individual while at the same time tapping the highest possible potential. The stress and strain cockpit (Belastungs- und Beanspruchungscockpit) offers a solution approach for the systematic and end-to-end assessment of stress states and the individual strain of employees at assembly workstations. It delivers real-time information on the employee's stress and strain state and can be linked with ergonomic assessment methods. The aspect of multidimensionality comprises the evaluation of various indicators in consideration of their interdependencies.
The demand-oriented control of decentralized thermal energy systems, such as combined heat and power (CHP) units and heat pumps, can make a decisive contribution to covering or reducing the residual load and thus reduce conventional residual power supply and the associated greenhouse gas emissions. To this end, a forecast-based control algorithm was developed at Reutlingen University over several years of research. This contribution presents this control algorithm as well as its practical implementation variants: a version executable purely locally on a programmable logic controller (PLC), and a web service application for the parallel operation of several units from a central server. Tests at the CHP test bench at Reutlingen University confirm the reliable operation of the algorithm in the different implementation variants. At the same time, the advantage of demand-oriented control over heat-led operation, which is standard particularly in the micro-CHP segment, is demonstrated in the form of an increase in self-supplied electricity of up to 27%. Beyond demand-oriented control, the developed algorithm also serves another field of application: predictable CHP operation, as required, for example, in the form of daily feed-in forecasts under Redispatch 2.0. The CHP operation can be predicted in two ways: as a first option, heat-led operation can be modeled and forecast directly by the algorithm. Alternatively, the unit can be controlled in a demand-oriented way; the calculated optimal schedule then simultaneously constitutes the operating forecast of the CHP unit. The developed control algorithm is thus able to contribute in several ways to the success of the energy transition.
It is assumed that more education leads to a better understanding of complex systems. Some researchers, however, find indications that simple mechanisms like stocks and flows are not well understood even by people who have completed higher education. In this paper, we test people's understanding of complex systems with the widely studied stock-and-flow (SF) tasks (Booth Sweeney and Sterman 2000). SF tasks assess people's understanding of the interplay between stocks and flows. We investigate SF failure of domain experts and novices in different knowledge domains. In particular, we compare performance on the original study's Bathtub task with the square wave pattern (Booth Sweeney and Sterman 2000) against two alternative cover stories from the engineering and business domains, using different groups of business and engineering students from different semesters. Further, we show that, while engineering students perform better than business students, students seem to lose, as they progress in higher education, the capability of dealing with simple SF tasks from domains other than their own field. We thus find hints of déformation professionnelle in higher education.
SF-failure, the inability of people to correctly determine the behavior of simple stock-and-flow structures, has been the subject of a long research stream. SF-failure can be attributed to different causes, one of them being a lack of domain-specific experience, i.e., familiarity with the problem context. In this article we present a continuation of an experiment examining the role of educational background in SF-performance. We base the question set on the Bathtub Dynamics tasks introduced by Booth Sweeney and Sterman (2000) and vary the cover stories. In this paper we describe how we developed and tested a new cover story for the engineering domain and implemented the recommendations from a prior study. We test three sets of questions with engineering students, which enables us to compare the results to a previous study in which we tested the questions with business students. The results mainly support our hypothesis that context familiarity increases SF-performance. With our findings we further develop the methodology of research on SF-failure.
Prior studies ascribed people’s poor performance in dealing with basic systems concepts to different causes. While results indicate that, among other things, domain specific experience and familiarity with the problem context play a role in this stock-flow-(SF-)performance, this has not yet been fully clarified. In this article, we present an experiment that examines the role of educational background in SF-performance. We hypothesize that SF-performance increases when the problem context is embedded in the problem solver’s knowledge domain, indicated by educational background. Using the square wave pattern and the sawtooth pattern tasks from the initial study by Booth Sweeney and Sterman (2000), we design two additional cover stories for the former, the Vehicle story from the engineering domain and the Application story from the business domain, next to the original Bathtub story. We then test the three sets of questions on business students. Results mainly support our hypothesis. Interestingly, participants even do better on a more complex behavioral pattern from their knowledge domain than on a simpler pattern from more distant domains. Although these findings have to be confirmed by further studies, they contribute both to the methodology of future surveys and the context familiarity discussion.
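The stock-and-flow logic underlying these tasks is easy to reproduce numerically: the stock is simply the accumulation of inflow minus outflow. A minimal sketch (flow values chosen for illustration, close to the square-wave pattern of the original bathtub task):

```python
# Bathtub dynamics: the stock at each time step is the previous stock plus
# net flow (inflow - outflow). With a square-wave inflow around a constant
# outflow, the stock rises while inflow exceeds outflow and falls while it
# is below -- the relationship SF tasks ask participants to infer.

def simulate_stock(inflow, outflow, initial=100.0):
    stock, trajectory = initial, [initial]
    for i, o in zip(inflow, outflow):
        stock += i - o                  # accumulate net flow per time step
        trajectory.append(stock)
    return trajectory

square_inflow = [75] * 4 + [25] * 4     # square wave, 4-step half-periods
constant_outflow = [50] * 8
traj = simulate_stock(square_inflow, constant_outflow)
print(traj)  # [100.0, 125.0, 150.0, 175.0, 200.0, 175.0, 150.0, 125.0, 100.0]
```

The correct answer to the task is the triangle-wave trajectory this produces; the typical error is to sketch the stock mirroring the square-wave inflow pattern itself.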
Virtual Reality (VR) technology has the potential to support knowledge communication in several sectors. Still, when educators use immersive VR technology to present their knowledge, their audience in the same room may no longer be able to see them because of the head-mounted displays (HMDs) they wear. In this paper, we propose the Avatar2Avatar system and design, which augments the visual aspect of such a knowledge presentation. Avatar2Avatar enables users to see both a realistic representation of their respective counterpart and the virtual environment at the same time. We point out several design aspects of such a system and address design challenges and possibilities that arose during implementation. We specifically explore opportunities of a system design for integrating 2D video avatars into existing room-scale VR setups. An additional user study indicates a positive impact on spatial presence when using Avatar2Avatar.
The EU-funded project RobLog recently developed a system able to autonomously unload coffee sacks from a standard container. Being the first of its kind, the system needs further development in order to be competitive with manual labor. Financing this development entails a risk, and hence a justified skepticism, which can be overcome by a long-sighted view of the existing market potential. This paper presents a method to estimate the market potential of autonomous unloading systems for heavy deformable goods. Starting from an analysis of the coffee trade, the current coffee traffic is first investigated in order to calculate the number of autonomous systems needed to handle the imported sacks. The results are then validated, and the method is extended to calculate the potential of other market segments in which the same unloading technology can be applied.
The provisioning tool automaIT was extended with a prototypical data discovery capability, with the goal of connecting and controlling systems that are not managed by automaIT. Data from the discovery process is collected with the tool Facter and can be dynamically integrated into executable automaIT models and evaluated there. This makes it possible to steer subsequent provisioning steps without any manual intervention.
According to several surveys and statistics, the great majority of companies previously not accustomed to automation are piloting solutions to automate business processes. Those accustomed to automation attempt to introduce more of it, focusing on automation-unfriendly processes that have remained manual. However, since the decision of what, and whether, to automate is non-trivial for evident reasons, even industry leaders may get stuck on an overwhelming question: where to begin automating? Too often the question remains unanswered, as state-of-the-art methods fail to consider the whole picture. This paper introduces a holistic approach to decision-making for investments in automation. The method supports the iterative analysis and evaluation of operative processes, providing tools for a quantitative approach to decision-making. With the method, a large pool of processes can first be considered and then filtered in order to select the one that yields the best value for automation in the specific context. After introducing the method, a case study is reported for validation, followed by the discussion.
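The quantitative filtering of a large process pool can be sketched as a weighted scoring model; the criteria, weights, and example processes below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: rank candidate processes for automation by a
# weighted score over assumed criteria (all scores normalized to [0, 1]).

CRITERIA_WEIGHTS = {          # assumed weights, summing to 1.0
    "volume": 0.4,            # how often the process runs
    "rule_based": 0.3,        # how deterministic the process is
    "error_rate": 0.2,        # manual error rate (automation benefit)
    "stability": 0.1,         # how rarely the process changes
}

def automation_score(process):
    """Weighted sum of the normalized criterion scores."""
    return sum(w * process[c] for c, w in CRITERIA_WEIGHTS.items())

def select_best(processes):
    """Filter the pool down to the single best-scoring process."""
    return max(processes, key=automation_score)

processes = [  # hypothetical example data
    {"name": "invoice entry", "volume": 0.9, "rule_based": 0.8,
     "error_rate": 0.6, "stability": 0.9},
    {"name": "contract review", "volume": 0.3, "rule_based": 0.2,
     "error_rate": 0.4, "stability": 0.5},
]

best = select_best(processes)
print(best["name"])  # the process yielding the best automation value
```

In an iterative analysis, the weights would be re-calibrated per context and the pool re-scored until a clear candidate emerges.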
The high system flexibility necessary for the full automation of complex and unstructured tasks leads to increased technological complexity, and thus to higher costs and lower performance. In this paper, after an introduction to the different dimensions of flexibility, a method for the flexible modular configuration and evaluation of systems of systems is introduced. The method starts from process requirements and, considering factors such as feasibility, development costs, market potential, and effective impact on current processes, enables the evaluation of a flexible system of systems equipped with the needed functionalities before its actual development. This allows setting the focus on those aspects of flexibility that add market value to the system, thus promoting the efficient development of systems addressed to interested customers in intralogistics. An example application of the method is given and discussed.
Physical analog IC design has not been automated to the same degree as digital IC design. This shortfall is primarily rooted in the analog IC design problem itself, which is considerably more complex even for small problem sizes. Significant progress has been made in analog automation in several R&D target areas in recent years. Constraint engineering and generator-based module approaches are among the innovations that have emerged. Our paper will first present a brief review of the state of the art of analog layout automation. We will then introduce active and open research areas and present two visions – a “continuous layout design flow” and a “bottom-up meets top-down design flow” – which could significantly push analog design automation towards its goal of analog synthesis.
In a time of digital transformation, the ability to quickly and efficiently adapt software systems to changed business requirements becomes more important than ever. Measuring the maintainability of software is therefore crucial for the long-term management of such products. With service-based systems (SBSs) being a very important form of enterprise software, we present a holistic overview of maintainability metrics specifically designed for this type of system, since traditional metrics, e.g., object-oriented ones, are not fully applicable in this case. The metric candidates selected from the literature review were mapped to four dominant design properties: size, complexity, coupling, and cohesion. Microservice-based systems (μSBSs) emerge as an agile and fine-grained variant of SBSs. While the majority of identified metrics are also applicable to this specialization (with some limitations), the large number of services, in combination with technological heterogeneity and decentralization of control, significantly impacts automatic metric collection in such a system. Our research therefore suggests that specialized tool support is required to guarantee the practical applicability of the presented metrics to μSBSs.
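Coupling metrics for service-based systems typically operate on the service dependency graph. As a minimal sketch (the example system and the simple in-degree-plus-out-degree definition are our illustrative assumptions, not the survey's exact metrics):

```python
# Illustrative sketch: absolute coupling of each service, counted as
# outgoing plus incoming call dependencies in a hypothetical
# service dependency graph.

calls = {  # service -> services it invokes (assumed example system)
    "ui":        ["orders"],
    "orders":    ["billing", "inventory"],
    "billing":   ["inventory"],
    "inventory": [],
}

def coupling(service):
    """Out-degree (services called) plus in-degree (callers)."""
    out_degree = len(calls.get(service, []))
    in_degree = sum(service in targets for targets in calls.values())
    return out_degree + in_degree

for s in calls:
    print(s, coupling(s))
```

Collecting such a graph automatically across many heterogeneous services is exactly where the tool-support gap noted above appears.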
DMOS transistors often suffer from substantial self-heating during high power dissipation, which can lead to thermal destruction if the device temperature reaches excessive values. A successfully demonstrated method to reduce the peak temperature is the redistribution of power dissipation density from the hotter to the cooler device areas by careful layout modification. However, this is very tedious and time-consuming if complex-shaped devices, as often found in industrial applications, are considered.
This paper presents an approach for fully automatic layout optimization which requires only a few hours of processing time. The approach is applied to complex-shaped test structures which are investigated by measurements and electro-thermal simulations. Results show a significantly lower peak temperature and an energy capability gain of 84 %, offering potential for an 18 % size reduction of the active area.
Switched reluctance motors are particularly attractive due to their simple structure. Controlling this machine type requires the instants at which to switch the currents in the motor phases in an appropriate sequence. These switching instants are determined either with a position sensor or from signals generated by a sensorless method. A very simple sensorless method uses the switching frequency of the hysteresis controllers employed for phase current control. This paper presents, first, an automatic commissioning method for this sensorless approach and, second, a startup procedure, thus advancing it towards industrial application.
Automatic classification of rotating machinery defects using Machine Learning (ML) algorithms
(2020)
Electric machines and motors have been the subject of enormous development. New concepts in design and control allow their applications to expand into different fields, and vast amounts of data are collected in almost every domain of interest. These data can be static, that is to say, they represent real-world processes at a fixed point in time. Vibration analysis and vibration monitoring, including the detection and monitoring of anomalies in vibration data, are widely used techniques for predictive maintenance of high-speed rotating machines. However, accurately identifying the presence of a bearing fault can be challenging in practice, especially when the failure is still at an incipient stage and the signal-to-noise ratio of the monitored signal is small. The main objective of this work is to design a system that analyzes the vibration signals of a rotating machine in the time/frequency domain, based on data recorded from sensors. Reflecting the substantial interest in this problem, there has been a dramatic increase in the application of Machine Learning (ML) algorithms to it. An ML system is used to classify and detect abnormal behavior and to recognize the different levels of machine operation modes. The proposed solution can be deployed as predictive maintenance for Industry 4.0.
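The core idea of frequency-domain fault classification can be sketched in a few lines. The synthetic signals, the fault-harmonic frequencies, and the nearest-centroid rule below are assumptions for demonstration, not the paper's actual system:

```python
# Illustrative sketch (pure Python, no ML library): classify a
# vibration signal as "healthy" or "faulty" from frequency-domain
# features obtained with a naive DFT.
import math

def dft_magnitudes(signal):
    """Naive DFT: magnitude of each frequency bin (feature vector)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        mags.append(math.hypot(re, im) / n)
    return mags

def make_signal(fault_amplitude, n=64):
    """Rotation frequency at bin 4 plus a fault harmonic at bin 12."""
    return [math.sin(2 * math.pi * 4 * t / n)
            + fault_amplitude * math.sin(2 * math.pi * 12 * t / n)
            for t in range(n)]

# "Training": one labeled reference spectrum (centroid) per class.
centroids = {
    "healthy": dft_magnitudes(make_signal(0.0)),
    "faulty":  dft_magnitudes(make_signal(0.8)),
}

def classify(signal):
    """Nearest-centroid classification in the spectral feature space."""
    feats = dft_magnitudes(signal)
    return min(centroids, key=lambda c: math.dist(centroids[c], feats))

print(classify(make_signal(0.7)))  # a signal with a strong fault harmonic
```

A production system would replace the centroids with a trained classifier and the naive DFT with an FFT, but the feature-then-classify pipeline is the same.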
The state of the art proposes the microservices architectural style to build applications. Additionally, container virtualization and container management systems have evolved into the perfect fit for developing, deploying, and operating microservices in line with the DevOps paradigm. Container virtualization facilitates deployment by ensuring independence from the runtime environment. However, microservices store their configuration in the environment. Therefore, software developers have to wire their microservice implementation with technologies provided by the target runtime environment, such as configuration stores and service registries. These technological dependencies counteract the portability benefit of using container virtualization. In this paper, we present AUTOGENIC – a model-based approach to assist software developers in building microservices as self-configuring containers without being bound to operational technologies. We provide developers with a simple configuration model to specify configuration operations of containers and automatically generate a self-configuring microservice tailored to the targeted runtime environment. Our approach is supported by a method which describes the steps to automate the generation of self-configuring microservices. Additionally, we present and evaluate a prototype which leverages the emerging TOSCA standard.
Combined heat and power (CHP) plants can be operated economically if units with good electrical efficiency and low acquisition and maintenance costs are used and the largest possible share of the electricity generated by the CHP unit is consumed within the building. The buffer storage of a CHP plant should be sized generously (flexibility, optimization of self-consumption, ...). A larger CHP unit is not automatically less economical because of its shorter operating time; on the contrary, it offers greater potential for demand-oriented electricity feed-in to the grid.
Several studies have shown that perceiving one's own body in a virtual environment has a positive effect on the perception of the environment as a whole. In these studies, a person's body, or parts of it, was displayed as an animated avatar from a first-person perspective. In this work, presented in the context of the computer science conference Informatics Inside 2014 at Reutlingen University, a different form of representation is examined. In a prototypical augmented virtuality application, the virtual environment is extended with real content: a person is enabled to perceive parts of their own body not as an avatar, but as a realistic representation based on a camera image. The paper describes the goals, the design and operation of the prototypical application, and its current state.
Business Process Management (BPM) is a central component for process-oriented companies because of its importance and the resulting requirements regarding internal organization and audits. However, introducing and maintaining BPM involves considerable effort, since processes must be captured, modeled, and kept up to date. Empirical evidence shows that successful process modeling is a particular challenge that often does not succeed in a satisfactorily sustainable way. A key success factor for sustainable process orientation in companies is therefore consistent and up-to-date process modeling, as well as its adaptation to external and internal changes. By means of a literature review, the relevant dimensions for sustainable process orientation based on process modeling are identified. On this basis, an adaptive, action-oriented framework for practical application in companies is derived.
While Microservices promise several beneficial characteristics for sustainable long-term software evolution, little empirical research covers what concrete activities industry applies for the evolvability assurance of Microservices and how technical debt is handled in such systems. Since insights into the current state of practice are very important for researchers, we performed a qualitative interview study to explore applied evolvability assurance processes, the usage of tools, metrics, and patterns, as well as participants’ reflections on the topic. In 17 semi-structured interviews, we discussed 14 different Microservice-based systems with software professionals from 10 companies and how the sustainable evolution of these systems was ensured. Interview transcripts were analyzed with a detailed coding system and the constant comparison method.
We found that especially systems for external customers relied on central governance for the assurance. Participants saw guidelines like architectural principles as important to ensure a base consistency for evolvability. Interviewees also valued manual activities like code review, even though automation and tool support were described as very important. Source code quality was the primary target for the usage of tools and metrics. Despite most reported issues being related to Architectural Technical Debt (ATD), our participants did not apply any architectural or service-oriented tools and metrics. While participants generally saw their Microservices as evolvable, service cutting and finding an appropriate service granularity with low coupling and high cohesion were reported as challenging. Future Microservices research in the areas of evolution and technical debt should take these findings and industry sentiments into account.
Physicians in interventional radiology are exposed to high physical stress. To avoid negative long-term effects resulting from unergonomic working conditions, we built a system, based on the Azure Kinect camera, that gives feedback about unergonomic situations arising during an intervention. The overall feasibility of the approach could be shown.
The promise of electric vehicles (EVs) is twofold: first, rejuvenating a transport sector that still depends heavily on fossil fuels, and second, integrating intermittent renewable energies into the power mix. However, it is still not clear how electricity networks will cope with the predicted increase in EVs and their charging demand, especially in combination with conventional energy demand. This paper proposes a methodology for predicting the impact of EV charging behavior on the electricity grid. The model simulates the driving and charging behavior of heterogeneous EV drivers, who differ in their mobility patterns, decision-making heuristics, and charging strategies. The simulations show that uncoordinated charging results in clustering of the charging load. In contrast, decentralized coordination makes it possible to fill the valleys of the conventional load curve and to integrate EVs without the need for a costly expansion of the electricity grid.
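The contrast between load clustering and valley filling can be illustrated with a toy load curve. The base load shape, fleet size, and per-vehicle charging energy below are assumed numbers, not the paper's calibrated model:

```python
# Illustrative sketch: effect of EV charging coordination on the total
# load curve (arbitrary units, 24 hourly values).

HOURS = list(range(24))
# Assumed conventional base load: evening peak at 18:00, night valley.
base_load = [30, 28, 27, 27, 28, 32, 40, 50, 55, 54, 52, 50,
             50, 51, 53, 56, 62, 70, 75, 72, 60, 48, 40, 34]
N_EVS, CHARGE = 20, 2  # 20 EVs, each adding 2 units in its charging hour

def uncoordinated(load):
    """All drivers plug in after the commute: charging clusters at 18:00."""
    result = load[:]
    result[18] += N_EVS * CHARGE
    return result

def valley_filling(load):
    """Decentralized coordination: each EV, in turn, picks the currently
    lowest-load hour, filling the valleys of the curve."""
    result = load[:]
    for _ in range(N_EVS):
        h = min(HOURS, key=lambda hour: result[hour])
        result[h] += CHARGE
    return result

print(max(uncoordinated(base_load)))   # peak with clustered charging
print(max(valley_filling(base_load)))  # peak with coordinated charging
```

With the assumed numbers, clustering pushes the evening peak well above the conventional maximum, while valley filling leaves the peak untouched, which is the mechanism behind avoiding grid expansion.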
New storage technologies, such as Flash and Non-Volatile Memories, with fundamentally different properties are appearing. Leveraging their performance and endurance requires a redesign of existing architectures and algorithms in modern high-performance databases. Multi-Version Concurrency Control (MVCC) approaches in database systems maintain multiple timestamped versions of a tuple. Once a transaction reads a tuple, the database system tracks and returns the respective version, eliminating lock requests. Hence, under MVCC, reads are never blocked, which leverages well the excellent read performance (high throughput, low latency) of new storage technologies. Upon tuple updates, however, established implementations of MVCC approaches (such as Snapshot Isolation) lead to multiple random writes – caused by (i) creation of the new version and (ii) in-place invalidation of the old version – thus generating suboptimal access patterns for the new storage media. The combination of an append-based storage manager operating with tuple granularity and snapshot isolation addresses asymmetry and in-place updates. In this paper, we highlight novel aspects of log-based storage in multi-version database systems on new storage media. We claim that multi-versioning and append-based storage can be used to effectively address asymmetry and endurance. We identify multi-versioning as the approach to address data placement in complex memory hierarchies. We focus on version handling, (physical) version placement, and the compression and collocation of tuple versions on Flash storage and in complex memory hierarchies, and we identify possible read- and cache-related optimizations.
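The interplay of multi-versioning and append-based storage can be sketched as follows; the class and its API are our own minimal illustration, not the paper's storage manager:

```python
# Illustrative sketch: an append-only multi-version store. Each update
# appends a new timestamped version instead of overwriting in place, so
# readers at any snapshot are never blocked and old versions are never
# rewritten - avoiding the random in-place writes that hurt Flash.

class AppendOnlyMVCC:
    def __init__(self):
        self.log = []          # append-only storage: (key, begin_ts, value)
        self.ts = 0            # monotonically increasing timestamp

    def write(self, key, value):
        """Create a new version; the old one stays untouched on the log."""
        self.ts += 1
        self.log.append((key, self.ts, value))
        return self.ts

    def read(self, key, snapshot_ts):
        """Return the newest version of key visible at snapshot_ts."""
        visible = [(ts, v) for k, ts, v in self.log
                   if k == key and ts <= snapshot_ts]
        return max(visible)[1] if visible else None

db = AppendOnlyMVCC()
t1 = db.write("tuple-a", "v1")
t2 = db.write("tuple-a", "v2")
print(db.read("tuple-a", t1))  # reader at the old snapshot still sees "v1"
print(db.read("tuple-a", t2))  # a newer snapshot sees "v2"
```

Note that invalidating the old version is implicit here (a newer timestamp shadows it), so no in-place write of the old version is ever needed.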
With the availability of powerful computers, computer-based simulation methods have found their way into all areas of science and engineering. Model-based simulation as a "virtual experiment" is an effective and long indispensable tool, particularly in the design of technical systems, for verifying that development results exhibit the desired properties. The possibilities of today's simulation methods are fascinating, which is why beginners in particular (but not only they) run the risk of accepting their results uncritically. Teaching therefore plays a special role here: besides the application of simulation tools, it is important to also convey their theoretical foundations to students and thereby sharpen their awareness of the limits of simulation. The workshop of the ASIM/GI special interest groups "Simulation technischer Systeme" and "Grundlagen und Methoden in Modellbildung und Simulation" brings together experts from industry and academia to exchange experiences on simulation, addressing all aspects from foundations and methods to tools and application examples.
Painting galleries typically provide a wealth of data composed of several data types. These multivariate data are too complex for laypeople such as museum visitors, who first need an overview of all paintings before looking for specific categories. The goal, finally, is to guide the visitor to a specific painting that he or she wishes to examine more closely. In this paper, we describe an interactive visualization tool that provides such an overview and lets people experiment with the more than 41,000 paintings collected in the Web Gallery of Art. Our technique is composed of several steps – data handling, algorithmic transformations, visualizations, and interactions – with the human user working with the tool to detect insights in the provided data. We illustrate the usefulness of the visualization tool by applying it to such characteristic data and show how one can get from an overview of all paintings to specific ones.
Demand forecasting for intermittent time series is a challenging business problem, and companies have difficulties forecasting this particular form of demand pattern. On the one hand, it is characterized by many non-demand periods, so classical statistical forecasting algorithms, such as ARIMA, work only to a limited extent. On the other hand, companies often cannot meet the requirements for good forecasting models, such as providing sufficient training data. The recent major advances of artificial intelligence in applications are largely based on transfer learning. In this paper, we investigate whether this method, originating from computer vision, can improve the forecasting quality of intermittent demand time series using deep learning models. Our empirical results show that, in total, transfer learning can reduce the mean square error by 65 percent. We also show that especially short (65 percent reduction) and medium-length (91 percent reduction) time series benefit from this approach.
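The transfer-learning idea, pretrain on long source series and fine-tune on a short target series, can be sketched with a deliberately tiny model. The AR(1) forecaster, toy data, and learning rates below are our own assumptions; the paper uses deep learning models:

```python
# Illustrative sketch: pretrain a one-step-ahead AR(1) forecaster
# y_t ~ w*y_{t-1} + b on long "source" series, then fine-tune on a
# short intermittent "target" series, versus training from scratch.

def sgd_fit(series, w, b, epochs, lr):
    """Fit the AR(1) model by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for t in range(1, len(series)):
            pred = w * series[t - 1] + b
            err = pred - series[t]
            w -= lr * err * series[t - 1]
            b -= lr * err
    return w, b

def mse(series, w, b):
    """Mean squared one-step-ahead forecast error on a series."""
    errs = [(w * series[t - 1] + b - series[t]) ** 2
            for t in range(1, len(series))]
    return sum(errs) / len(errs)

# Assumption: long source series share the target's demand dynamics.
source = [0, 0, 5, 0, 0, 5, 0, 0, 5] * 30
target = [0, 0, 5, 0, 0, 5, 0]   # short intermittent target series

# Transfer: pretrain on the source data, then fine-tune briefly.
w, b = sgd_fit(source, w=0.0, b=0.0, epochs=5, lr=0.01)
transfer_error = mse(target, *sgd_fit(target, w, b, epochs=2, lr=0.01))

# Baseline: train from scratch on the short target series only.
scratch_error = mse(target, *sgd_fit(target, 0.0, 0.0, epochs=2, lr=0.01))
print(transfer_error, scratch_error)
```

With the pretrained starting point, the few fine-tuning steps that the short series affords already sit near a good solution, which is the mechanism behind the reported error reductions.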
Artificial Intelligence (AI) in der Markenführung: Künstliche Neuronale Netze zur Markenimagemessung
(2023)
Since artificial neural networks make it possible to model nonlinear and multilayered relationships, this paper examines their potential for the methodologically demanding analysis and measurement of brand image. To illustrate the conceptual approach, a multilayer artificial neural network between the ratings of specific brand attributes and the overall rating of the brand is trained on the empirical example of the sporting goods manufacturer adidas. Based on an analysis of the connection weights of the artificial neural network, the importance of the various brand attributes for the brand rating is measured, from which concrete implications for brand management practice can be derived.
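Connection-weight analysis can be sketched with a simplified weight-product measure (related to Garson's algorithm); the attribute names and weight values below are made up for illustration and are not from the adidas study:

```python
# Illustrative sketch: estimate each input attribute's importance for
# the output of a trained two-layer network from its absolute
# connection weights.

attributes = ["quality", "design", "price"]
# w_ih[i][h]: weight from input attribute i to hidden neuron h
w_ih = [[0.8, 0.1],   # quality
        [0.5, 0.4],   # design
        [0.1, 0.2]]   # price
w_ho = [0.9, 0.3]     # weight from hidden neuron h to the output

def importances(w_ih, w_ho):
    """Relative importance of each input, normalized to sum to 1."""
    raw = [sum(abs(wi) * abs(wo) for wi, wo in zip(row, w_ho))
           for row in w_ih]
    total = sum(raw)
    return [r / total for r in raw]

for name, imp in zip(attributes, importances(w_ih, w_ho)):
    print(f"{name}: {imp:.2f}")
```

Attributes whose paths to the output carry large absolute weights receive high importance, which is how the attribute ranking for brand management is derived.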
Presently, many companies are transforming their strategy and product base, as well as their culture, processes, and information systems, to become more digital or to approach digital leadership. In recent years, new business opportunities have appeared that use the potential of the Internet and related digital technologies, such as the Internet of Things, services computing, cloud computing, edge and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. This has a strong impact on architecting digital services and products. The change from a closed-world modeling perspective to the more flexible open-world composition and evolution of micro-granular system architectures defines the moving context for adaptable systems. We focus on a continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, as part of a new digital enterprise architecture for service-dominant digital products.
The current advancement of Artificial Intelligence (AI) combined with other digitalization efforts significantly impacts service ecosystems, opening substantial new opportunities for the co-creation of value and the development of intelligent service ecosystems. Motivated by experiences and observations from digitalization projects, this paper presents new methodological perspectives and experiences from academia and practice on architecting intelligent service ecosystems and explores the impact of artificial intelligence through real cases supporting an ongoing validation. Digital enterprise architecture models serve as an integral representation of the business, information, and technological perspectives of intelligent service-based enterprise systems to support management and development. The paper focuses on architectural models for intelligent service ecosystems, showing the fundamental business mechanism of AI-based value co-creation together with the corresponding digital architecture and management models, and presents the key architectural model perspectives for developing such ecosystems.
Our paper gives first answers to a fundamental question: how can the design of architectures for intelligent digital systems and services be accomplished methodologically? Intelligent systems and services are the goal of many current digitalization efforts and part of massive digital transformation efforts based on digital technologies. Digital systems and services are the foundation of digital platforms and ecosystems. Digitalization disrupts existing businesses, technologies, and economies and promotes the architecture of open environments. This has a strong impact on new value-added opportunities and the development of intelligent digital systems and services. Digital technologies such as artificial intelligence, the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, and social enterprise network systems are important enablers of digitalization. The current publication presents our research on the architecture of intelligent digital ecosystems, products, and services, influenced by the service-dominant logic. We present original methodological extensions and a new reference model for digital architectures with an integral service and value perspective to model intelligent systems and services that effectively align digital strategies and architectures, with artificial intelligence as a main element to support intelligent digitalization.
The early involvement of experience gained through intelligence and data analysis is becoming increasingly important for developing new products, leading to a completely different conception of product creation, development, and engineering processes that exploits the advantages the digital twin entails. We introduce a novel stage-gate process, holistically anchored in learning factories, that adopts idea generation and idea screening at an early stage, beta testing of first prototypes, technical implementation in real production scenarios, business analysis, market evaluation, pricing, service models, and innovative social media portals. Corresponding product modeling in the sense of sustainability, circular economy, and data analytics forecasts the product on the market both before and after market launch, with data interpretation interlinked in near real time. The digital twin represents the link between the digital model and the digital shadow. Additionally, the connection of the digital twin with the product provides constantly updated operating status and process data as well as a mapping of technical properties and real-world behaviors. A future networked product, with embedded information technology and the ability to initiate and carry out its own further development, is able to interact with people and environments and is thus relevant to the way of life of future generations. In today's development work for this new product creation approach, "Werk150" is, on the one hand, the object of the development itself and, on the other hand, the validation environment. In the next step, new learning modules and scenarios for trainings at master level will be derived from these findings.
Patterns are virtually simulated in 3D CAD programs before production to check the fit. However, achieving lifelike representations of human avatars, especially regarding soft tissue dynamics, remains challenging. This is mainly because conventional avatars in garment CAD programs are simulated with a continuous hard surface that does not correspond to the physical and mechanical properties of human soft tissue. In the real world, the natural shape of the human body is affected by the contact pressure of tight-fitting textiles. To verify the fit of a simulated garment, the interactions between the individual body shape and the garment must be considered. This paper introduces an innovative approach to digitising the softness of human tissue using 4D scanning technology. The primary objective of this research is to explore the interactions between tissue softness and different compression levels of apparel, which exert pressure on the tissue, in order to capture the changes in the natural shape. To generate data and model an avatar with soft-body physics, it is essential to capture the deformability and elasticity of the soft tissue and map it into the modification options for a simulation. To this end, various methods from different fields were researched and compared, and 4D scanning was evaluated as the most suitable method for capturing tissue deformability in vivo. In particular, it should be considered that the human body has different deformation capabilities depending on age and the amount of muscle and body fat. In addition, different tissue zones have different mechanical properties, so it is essential to identify and classify them in order to back up these properties for the simulation. It has been shown that by digitising the obtained data for the different defined applied pressure levels, a prediction of the tissue deformation of the specific person becomes possible.
As technology advances and data sets grow, this approach has the potential to reshape how we verify fit digitally with soft avatars and leverage their realistic soft tissue properties for various practical purposes.
Due to Industry 4.0, value creation as a whole has the chance to undergo a fundamental technological transformation, the realisation of which, however, requires the commitment of every company for its own benefit. The new approaches of Industry 4.0 are often hardly evaluated, let alone proven, so that SMEs in particular often cannot properly estimate the potentials and risks and wait too long to migrate towards Industry 4.0. In addition, they often do not pursue an integrated concept in order to identify possible potentials through changes to their business models. As part of the research project "GEN-I 4.0 – Geschäftsmodell-Entwicklung für die Industrie 4.0", the ESB Business School at Reutlingen University and the Fraunhofer Institute for Industrial Engineering IAO were engaged by the Baden-Württemberg Foundation from 2016 to 2018 to develop tools and an approach by which the local economy can develop digital business models for itself in a methodical, beneficial, and targeted manner. Through international analyses and interviews, GEN-I 4.0 gained and concretised the knowledge required for evaluating and selecting solutions and approaches for developing digital business models. Together with the project partners' know-how on Industry 4.0 and business model development, the findings were incorporated into the development of two software tools that show SMEs, online and in self-assessment, the potentials of Industry 4.0 for their individual business model, a comprehensively structured, concrete approach to development, as well as their individual risk. Users of the tools are supported by the selected platform for networking different players to implement innovative business models, accompanied by coaching concepts for the companies in the follow-up and implementation of the assessment results.
Enterprises and societies currently face crucial challenges, while Industry 4.0 is becoming all the more important in the global manufacturing industry. Industry 4.0 offers a range of opportunities for companies to increase the flexibility and efficiency of production processes. The development of new business models can be promoted with digital platforms and architectures for Industry 4.0. Therefore, products from the healthcare sector can increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0 is expected to promote and implement digital platforms and robotics for the healthcare and medical communities efficiently. In this paper, we propose that various digital platforms and robotics be designed and evaluated for digital healthcare as well as for the manufacturing industry with Industry 4.0. We argue that an open healthcare platform, "Open Healthcare Platform 2030 – OHP2030", for medical product design and robotics can be developed with AIDAF. The vision of AIDAF applications to enable Industry 4.0 in the OHP2030 research initiative is explained and referenced, extended in the context of Society 5.0.
Enterprises and societies currently face essential challenges, and digital transformation can contribute to their resolution. Enterprise architecture (EA) is useful for promoting digital transformation in global companies and information societies covering ecosystem partners. The advancement of new business models can be promoted with digital platforms and architectures for Industry 4.0 and Society 5.0; products from sectors such as healthcare, manufacturing and energy can thereby increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0, together with the design thinking approach, is expected to promote and implement digital platforms and digital products for the healthcare, manufacturing and energy communities more efficiently. In this paper, we present various cases of digital transformation in which digital platforms and products are designed and evaluated for digital IT, digital manufacturing and digital healthcare with Industry 4.0 and Society 5.0. The vision of AIDAF applications to drive digital transformation in global companies is explained and referenced, extended toward digitalized ecosystems such as Society 5.0 and Industry 4.0.
Knowledge-intensive processes are characterized by participants deciding on the next process activities based on the information at hand and their expert knowledge. The decisions of these knowledge workers are in general non-deterministic. It is not possible to model these processes in advance and to automate them using the process engine of a BPM system. Hence, in this context a process instance is called a case, because there is no predefined model that could be instantiated. Domain-specific or general case management systems are used to support the knowledge workers. These systems provide all case information and enable users to define the next activities, but they have no or only limited activity recommendation capabilities. In the following paper, we present a general concept for a self-learning system based on process mining that suggests the next best activity for a given case based on quantitative and qualitative data. As a proof of concept, it was applied to the area of insurance claims settlement.
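The recommendation idea described above can be illustrated with a deliberately simplified sketch. This is not the authors' system, which also weighs qualitative data; it only shows the basic process-mining step of mining an event log for activity-successor frequencies and ranking candidates for a running case. All activity names are hypothetical.

```python
from collections import Counter, defaultdict

def build_model(event_log):
    """Count how often each activity directly follows another
    across all completed cases in the event log."""
    followers = defaultdict(Counter)
    for case in event_log:  # each case is an ordered list of activities
        for current, nxt in zip(case, case[1:]):
            followers[current][nxt] += 1
    return followers

def recommend_next(followers, last_activity, k=3):
    """Suggest the k most frequent follow-up activities."""
    ranked = followers[last_activity].most_common(k)
    return [activity for activity, _ in ranked]

# hypothetical claims-settlement log
log = [
    ["claim_received", "check_coverage", "assess_damage", "approve", "pay"],
    ["claim_received", "check_coverage", "reject"],
    ["claim_received", "check_coverage", "assess_damage", "approve", "pay"],
]
model = build_model(log)
print(recommend_next(model, "check_coverage"))  # most frequent successors first
```

A real system would retrain such a model continuously as new cases complete, which is where the self-learning aspect comes in.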
This paper reports an analysis of the application and impact of FMEA on the susceptibility of generic IT networks. It is not new that frequency and data transmission rate play a very important role in communication systems. The rapid miniaturization of electronic devices leads to high sensitivity to electromagnetic interference. Since IT networks and their data transfer rates contribute substantially to this development, it is important to monitor their functionality. Therefore, tests are performed to observe and ensure the data transfer rate of IT networks under IEMI. A fault tree model is presented, and effects observed while irradiating a complex system with HPEM interference sources are described using a continuous and consistent model from the physical layer to the application layer.
The aim of this paper is to show to what extent artificial intelligence can be used to optimize forecasting capability in procurement and to compare AI with traditional statistical methods. At the same time, this article presents the status quo of the research project ANIMATE. The project applies artificial intelligence to forecast customer orders in medium-sized companies.
Precise forecasts are essential for companies: for planning, decision making and controlling. Forecasts are applied, e.g., in the areas of supply chain, production or purchasing. Medium-sized companies face major challenges in using suitable methods to improve their forecasting ability.
Companies often rely on proven methods of classical statistics such as the ARIMA algorithm. However, simple statistics often fail when applied to complex non-linear predictions.
Initial results show that even a simple MLP ANN produces better results than traditional statistical methods. Furthermore, a baseline (implicit sales expectation) of the company was used to compare the performance. This comparison also shows that the proposed AI method is superior.
Before the developed method can become part of corporate practice, it must be further optimized. The model has difficulties with strong declines, for example due to holidays. The authors are certain that the model can be further improved, for example through more advanced methods such as a FilterNet, but also through more data, such as external data on holiday periods.
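The value of comparing against a baseline, and the difficulty with abrupt declines mentioned above, can be illustrated with invented numbers (these are not ANIMATE results). A mean-absolute-error comparison on a toy series with a holiday dip shows that even a naive last-value forecast can beat a smoothing method:

```python
def mae(actual, predicted):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical monthly order counts; the 40 is a holiday-related dip.
history = [120, 132, 128, 140, 135, 150, 40, 145, 155]

# Naive baseline: next month equals the last observed month.
naive_pred = history[:-1]
naive_err = mae(history[1:], naive_pred)

# 3-month moving average as a simple classical stand-in.
w = 3
ma_pred = [sum(history[i - w:i]) / w for i in range(w, len(history))]
ma_err = mae(history[w:], ma_pred)

print(round(naive_err, 2), round(ma_err, 2))
```

The dip penalizes the moving average twice (at the drop and at the recovery), which mirrors why holiday-aware external data is expected to help.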
Many scientific works today use deep learning algorithms on time series to detect physiological events of interest. In sleep medicine, this is particularly relevant for detecting sleep apnea, specifically obstructive sleep apnea events. Deep learning algorithms with different architectures are used to achieve decent results in accuracy, sensitivity, etc. Although there are models that can reliably determine apnea and hypopnea events, another essential aspect to consider is the explainability of these models, i.e., why a model makes a particular decision. Another critical factor is how these deep learning models determine how severe obstructive sleep apnea is in patients based on the apnea-hypopnea index (AHI). Deep learning models trained by two approaches for AHI determination are presented in this work. The approaches vary depending on the data format the models are fed: full time series and window-based time series.
In this paper, we propose a radical new approach for scale-out distributed DBMSs. Instead of hard-baking an architectural model, such as a shared-nothing architecture, into the distributed DBMS design, we aim for a new class of so-called architecture-less DBMSs. The main idea is that an architecture-less DBMS can mimic any architecture on a per-query basis on-the-fly without any additional overhead for reconfiguration. Our initial results show that our architecture-less DBMS AnyDB can provide significant speedup across varying workloads compared to a traditional DBMS implementing a static architecture.
Steady-state efficiency optimization techniques for induction motors are state of the art, and various methods have already been developed. This paper provides new insights into efficiency-optimized operation in the dynamic regime. The paper proposes an anticipative flux modification in order to decrease losses during torque and speed transients. These trajectories are analyzed based on a numerical study for different motors. Measurement results for one motor are given as well.
As part of the scientific specialization at Reutlingen University, this work examines the requirements for and feasibility of computer-aided recognition of German Sign Language (DGS) and the German finger alphabet. The findings serve as a basis for developing a system that translates signs of DGS or the finger alphabet into written German. First, basic information on the history, structure and grammar of DGS and the finger alphabet is presented. The signs are to be recognized using optical motion sensors; for this purpose, different sensor types are examined and compared. Subsequently, the user-specific and technical requirements are analyzed. The former are based on a survey of a focus group of deaf and hearing people from the field of education for the deaf, hard of hearing and speech impaired. From the information gathered in the requirements analysis, the feasibility from a technical and user-specific perspective can be derived to a certain degree. Finally, the requirements placed on the system to be developed are summarized, and a recommendation for the development of a prototype is given.
As a result of the ongoing digitalization in the manufacturing industry, applications and services are being developed that can have a positive effect on factors such as effectiveness and work quality. Gamification can be a suitable approach for strengthening motivational aspects in the work context. This work presents the initial design and evaluation of a gamification approach for users of an AI service for machine optimization and extracts possible requirements for a concept to increase motivation.
Requirements for the human-machine interface in the automobile on the way to autonomous driving
(2017)
In recent decades, more and more driver assistance systems have found their way into the automobile, paving the way for the fully autonomous vehicles of the future. Many manufacturers already offer equipment variants of their vehicles that are prepared for the transition to a fully autonomous future. To bring people along on this path, several requirements are placed on the automobile's human-machine interface (HMI). For the partially autonomous vehicles of the next generation, the handover between manual and autonomous driving must be designed as well as possible for the driver. This work looks at selected approaches for future HMI systems and evaluates them based on the handover times between human and machine. A transformation of the automotive HMI is recommended in order to familiarize people with the new technologies.
Information technology systems that support clinical workflows are currently limited to organizational processes. This work presents a first approach for introducing such a system into the perioperative area. For this purpose, a workflow engine was linked to a perioperative process visualization. The system was implemented according to the model-view-controller principle: the workflow engine serves as the "controller", a process model with the required clinical data as the "model", and the "view" was realized as a decoupled application based on web technologies. Three visualizations, the workflow engine, and the connection of both via a database interface were successfully implemented. The three visualizations comprise one view for the OR coordinator, one for the circulating nurse, and one providing an overview of an operation.
To bring a pattern-based perspective to the SOA vs. microservices discussion, we qualitatively analyzed a total of 118 SOA patterns from two popular catalogs for their (partial) applicability to microservices. Patterns had to hold up to five derived microservices principles to be applicable. 74 patterns (63%) were categorized as fully applicable, 30 (25%) as partially applicable, and 14 (12%) as not applicable. The most frequently violated microservices characteristics were Decentralization and Single System. The findings suggest that microservices and SOA share a large set of architectural principles and solutions in the general space of service-based systems while only having a small set of differences in specific areas.
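The reported shares follow directly from the pattern counts and can be checked in a few lines:

```python
# Counts from the study: 118 SOA patterns in three applicability categories.
total = 118
counts = {"fully": 74, "partially": 30, "not": 14}
assert sum(counts.values()) == total  # categories partition all patterns

# Percentage share of each category, rounded to whole percent.
shares = {k: round(100 * v / total) for k, v in counts.items()}
print(shares)
```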
Data analysis is becoming increasingly important for pursuing organizational goals, especially in the context of Industry 4.0, where a wide variety of data is available. Numerous challenges arise here, especially when using unstructured data, yet this subject has received little attention in research so far. This research paper addresses this gap, which is relevant to both science and practice. In a study, three major challenges of using unstructured data were identified: analytical know-how, data issues, and variety. Additionally, measures to improve the analysis of unstructured data in the Industry 4.0 context are described. The paper thus provides empirical insights into challenges and potential measures when analyzing unstructured data. The findings are also presented in a framework. Hence, the next steps of the research project and future research points become apparent.
Modern wide-bandgap power devices promise higher power conversion performance if the device can be operated reliably. As switching speed increases, the effects of parasitic ringing become more prominent, causing potentially damaging overvoltages during device turn-off. Estimating the expected additional voltage caused by such ringing enables more reliable designs. In this paper, we present an analytical expression to calculate the expected overvoltage caused by parasitic ringing based on parasitic element values and operating point parameters. Simulations and measurements confirm that the expression can be used to find the smallest rise time of the switches' drain-source voltage for minimum overvoltage. The given expression also allows predicting the trade-off in overvoltage amplitude when faster rise times are required.
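The paper's exact expression is not reproduced here, but the standard first-order estimate for an undamped turn-off loop illustrates the dependence on the parasitic element values: the interrupted current rings in the loop inductance against the device output capacitance, so the overshoot scales with the characteristic impedance of that resonant loop:

```latex
\Delta V_{\mathrm{os}} \approx I_{\mathrm{off}} \sqrt{\frac{L_{\sigma}}{C_{\mathrm{oss}}}}
```

Here $I_{\mathrm{off}}$ is the current at turn-off, $L_{\sigma}$ the parasitic loop inductance, and $C_{\mathrm{oss}}$ the switch output capacitance; damping and finite rise times, which the paper accounts for, reduce this worst-case value.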
Today's logistics systems are characterized by uncertainty and constantly changing requirements. Rising demand for customized products, short product life cycles and a large number of variants increase the complexity of these systems enormously. In particular, intralogistics material flow systems must be able to adapt to changing conditions at short notice, with little effort and at low cost. To fulfil these requirements, the material flow system needs to be flexible in three important parameters, namely layout, throughput and product. While the scope of the flexibility parameters is described in the literature, the respective effects on an intralogistics material flow system and the influencing factors are mostly unknown. This paper describes how the flexibility parameters of an intralogistics system can be determined using a multi-method simulation. The study was conducted in the learning factory "Werk150" on the campus of Reutlingen University with its different means of transport and processes and was validated in practical experiments.
How easily an interference source can be brought close to a potential target is characterized by classifying the source as stationary, portable, mobile, very mobile or highly mobile [3]. Starting from the existing and well-known IEMI (Intentional Electromagnetic Interference) and the already existing classifications, a comparative study of the methods used to classify the intentional EM environment is carried out. The study takes into account the frequency, the cost, the amplitude of the noise signal, the radiated power, and the energy of a radiated pulse.
There are several intra-operative use cases that require the surgeon to interact with medical devices. I used the Leap Motion Controller as an input device for three use cases: 2D interaction (e.g., advancing EPR data), selection of a value (e.g., room illumination brightness), and a point-and-click application scenario. I evaluated the Palm Mouse as the most suitable gesture solution for coordinating the mouse and advise using the implementation that uses all fingers to perform a click. This small case study introduces the implementations and methods that led to these recommendations.
In medicine, various maturity models exist that can support the digitalization of hospitals. The requirements for a maturity model for this purpose cover aspects from both general and specific areas of the hospital. An analysis of the maturity models HIN, CCMM, EMRAM and O-EMRAM reveals large gaps in the operating room area as well as missing aspects in the emergency department. No comprehensive maturity model was found. A combination of HIN and CCMM could cover almost all areas sufficiently. Additional supplements from specialized maturity models, or even the development of a comprehensive maturity model, would be worthwhile.
This contribution showed how a PLC program written in the ladder diagram programming language can be analyzed using methods for the analysis of Petri nets. The goal of the method is not verification in the strict sense, but the detection of forbidden or undesired states. Rules for transforming a sequence implemented in ladder diagram into a Petri net were given, and the capability of the approach was demonstrated by analyzing an incorrectly implemented sequence. The example shows that programming errors can be detected even before testing on the real plant. In the further development of the method, one focus is on generalizing it to program organization units developed in ladder diagram that implement more than pure sequences. Another important development step is graphical support for troubleshooting in the reachability graph, so that overall a powerful tool becomes available to support the implementation of sequence controls in ladder diagram.
This work presents the possibilities of 3D controllers for use in interventional radiology, in particular for controlling real-time magnetic resonance imaging (MRI). This is of interest with regard to controlled navigation to a target tissue. Real-time imaging allows the interventionalist to follow the course of the procedure; however, the interventionalist cannot yet control the MRI scanner during the procedure, since this is done by an assistant in an adjacent room, and communication is very difficult given the high noise level. This work addresses this issue and analyzes 3D controllers for their suitability for real-time control of an MRI scanner. Both tracking-based and tracking-less devices were considered. The result was that tracking-based methods are less suitable due to insufficient interpretation of the inputs, whereas tracking-less devices are suitable thanks to the correct interpretation of all inputs and their intuitive operation.
The power supply of electronic control units in the automotive sector is increasingly provided by switching regulators. The SEPIC (Single Ended Primary Inductance Converter) can convert a voltage both up and down and could thus replace classical buck and boost converters. This contribution examines the SEPIC with regard to its suitability for automotive applications. For this purpose, large-signal and small-signal analyses of the converter were carried out, reproduced with suitable simulation models, and compared with measurements. The main advantages of the SEPIC are: 1. a seamless transition between buck and boost operation, 2. low input ripple, 3. DC short-circuit immunity. The SEPIC is also an interesting alternative with regard to efficiency and EMC behavior. A current permanently flows through the capacitor between input and output; the associated failure risk is discussed on the basis of the RMS currents.
Requirements engineering (RE) comprises all systematic steps in the development of a system to fulfill the needs of its users and the specifications placed on it. The RE of a selected manufacturer of clinical information systems (CIS) was examined and turned out to be opaque and, in part, insufficient. The extent to which systematic procedures and RE methods are used was analyzed at the selected CIS manufacturer. The analysis shows that RE is widely established but practiced in varying ways.
The aim of this work is to determine the state of the art of RE for CIS development. Important factors of RE for the development of CIS are described. The results of this work will serve as a first step toward optimizing the RE of the selected CIS manufacturer.
This paper enhances SWARM, a novel deterministic analog layout automation approach based on the idea of cellular automata. SWARM implements a decentralized interaction model in which responsive layout modules, covering basic circuit types, autonomously move, rotate and deform themselves to let constraint-compliant, compact layout solutions emerge from a synergetic flow of self-organization. With the ability to consider design constraints both implicitly and explicitly, SWARM joins the layout quality of procedural generators with the flexibility of optimization algorithms, combining these two kinds of automation into a “bottom-up meets top-down” flow. The new enhancements are demonstrated in an OTA example, depicting the power of SWARM and its enormous potential for future developments.
The supply of customer-specific products is leading to increasing technical complexity of machines and plants in the manufacturing process. To ensure the availability of the machines and plants, maintenance is an essential key. The application of cyber-physical systems enables this complexity to be mastered by improving the availability of information, implementing predictive maintenance strategies, and providing all relevant information in real time. The present research project deals with the development of a cost-effective and retrofittable smart maintenance system for the application of ultraviolet (UV) lamps. UV lamps are used in a variety of applications such as curing of materials and water disinfection, where UV lamps are still used instead of UV LEDs due to their higher effectiveness. The smart maintenance system enables continuous condition monitoring of the UV lamp through the integration of sensors. The data obtained are compared with data from existing lifetime models of UV lamps to provide information about the remaining useful lifetime of the UV lamp. This ensures needs-based maintenance measures and more efficient use of UV lamps. Accurate information on the remaining useful lifetime of a UV lamp is important, as the unplanned breakdown of a UV lamp can have far-reaching consequences. The key element is the functional model of the envisioned cyber-physical system, describing the dependencies between the sensors and actuator, the condition monitoring system, and the IoT platform. Based on the requirements developed and the functional model, the necessary hardware and software are selected. Finally, the system is developed and retrofitted to a simulated curing process of a 3D printer to validate its functional capability. The developed system leads to improved availability of information on the condition of UV lamps, predictive maintenance measures, and context-related provision of information.
An ultra-low power capacitance extrema and ratio detector for electrostatic energy harvesters
(2015)
The power supply is one of the major challenges for applications like the Internet of Things (IoT) and smart home. The maintenance issue of batteries and the limited power level of energy harvesting are addressed by the integrated micro power supply presented in this paper. Connected to the 120/230 Vrms mains, which is one of the most reliable energy sources and available almost anywhere indoors, it provides a 3.3 V DC output voltage. The micro power supply consists of a fully integrated AC-DC and DC-DC converter with one external low-voltage SMD buffer capacitor. It is fabricated in a low-cost 0.35 μm 700 V CMOS technology and covers a die size of 7.7 mm². The use of only one external low-voltage SMD capacitor results in an extremely compact form factor. The AC-DC stage is a direct-coupled full-wave rectifier with a subsequent bipolar shunt regulator, which provides an output voltage around 17 V. The DC-DC stage is a fully integrated 4:1 SC DC-DC converter with an input voltage as high as 17 V and a peak efficiency of 45%. The power supply achieves an overall output power of 3 mW, resulting in a power density of 390 μW/mm². This exceeds prior art by a factor of 11.
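The reported power density follows directly from the output power and die size:

```python
# Cross-check of the reported figures: 3 mW total output over a 7.7 mm^2 die.
output_power_uW = 3000.0      # 3 mW expressed in microwatts
die_area_mm2 = 7.7
power_density = output_power_uW / die_area_mm2   # in uW/mm^2
print(round(power_density))   # matches the reported 390 uW/mm^2
```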
This paper presents a compact four-arm spiral antenna, which may be used in direction-finding applications but also mobile communication systems. The antenna is fed sequentially at its outside-ends using a sequential phase network embedded in grounded multilayer dielectric media. Sequential rotation is applied to generate the axial mode M1 but also the conical mode M2 in the same frequency band. The antenna exhibits good radiation characteristics in the frequency band of interest.
This work presents a disconnected transaction model able to cope with the increased complexity of long-living, hierarchically structured, and disconnected transactions. We combine an open and closed nested transaction model with optimistic concurrency control and interrelate flat transactions with the aforementioned complex nature. Despite temporary inconsistencies during a transaction's execution, our model ensures consistency.
Intelligent Tutoring Systems (ITSs) are increasingly used in modern education to automatically give students individual feedback on their performance. The advantage for students is fast individual feedback on their answers to asked questions, while lecturers benefit from considerable time savings and easy delivery of educational material. Of course, it is important that the provided feedback is as effective as direct feedback from the lecturer. However, in digital teaching, lecturers cannot assess the student’s knowledge precisely but can only provide information on which questions were answered correctly and incorrectly. Therefore, this paper presents a concept for integrating ITS elements into the gamified e-learning platform IT-REX so that the feedback quality can be improved to support students in the best possible way.
This paper presents a laboratory experiment integrating the fields of electronics design, power electronics and drive control. The aim of this experiment is first to illustrate the need for a deep knowledge and the challenges in power electronics and its applications, in this particular case for drive control. The different tasks in this experiment are executed on a complete setup for a brushless dc motor test bench. The tasks assigned to the students are designed such that, in some tasks the knowledge from a particular field, power electronics, electronic design or drive control is deepened, whereas in other tasks the knowledge from more than one of these fields is needed to solve the given problem. Thus, the experiment trains students in the particular domains but illustrates as well the links between power electronics, electronic design and drive control.
IGBT modules with anti-parallel FWDs are widely used in inductive load switching power applications, such as motor drive applications. Nowadays there is a continuous effort to increase the efficiency of such systems by decreasing their switching losses. This paper addresses the problems arising in the turn-on process of an IGBT working in hard-switching conditions. A method is proposed which achieves – contrary to most other approaches – a high switching speed and, at the same time, a low peak reverse-recovery current. This is done by applying an improved gate current waveform that is briefly lowered during the turn-on process. The proposed method achieves low switching losses. Its effectiveness is demonstrated by experimental results with IGBT modules for 600V and 1200V.
An experimental study of a zero voltage switching SiC boost converter with an active snubber network
(2015)
This paper presents a quasi-resonant, zero voltage switching (ZVS) SiC boost converter for an output power of up to 10 kW. The converter is realized with an easily controllable active snubber network that allows a reduction of switching losses by minimizing the voltage stress applied to the active switch. With this approach, an increase of the switching frequency is possible, allowing a reduction of the system size. Experiments show a maximum converter efficiency up to 99.2% for a switching frequency of 100 kHz. A second version of the converter enables a further size reduction by increasing the switching frequency to 300 kHz while still reaching a high efficiency up to 98.4%.
For a long time, most discrete accelerators have been attached to host systems using various generations of the PCI Express interface. However, with its lack of support for coherency between accelerator and host caches, fine-grained interactions require frequent cache flushes, or even the use of inefficient uncached memory regions. The Cache Coherent Interconnect for Accelerators (CCIX) was the first multi-vendor standard for enabling cache-coherent host-accelerator attachments, and is already indicative of the capabilities of upcoming standards such as Compute Express Link (CXL). In our work, we compare and contrast the use of CCIX with PCIe when interfacing an ARM-based host with two generations of CCIX-enabled FPGAs. We provide both low-level throughput and latency measurements for accesses and address translation, and examine an application-level use case of using CCIX for fine-grained synchronization in an FPGA-accelerated database system. We show that especially smaller reads from the FPGA to the host can benefit from CCIX by having roughly 33% shorter latency than PCIe. Small writes to the host have a latency roughly 32% higher than PCIe, though, since they carry a higher coherency overhead. For the database use case, CCIX made it possible to maintain a constant synchronization latency even with heavy host-FPGA parallelism.
In recent years, the cloud has become an attractive execution environment for parallel applications, which introduces novel opportunities for versatile optimizations. Particularly promising in this context is the elasticity characteristic of cloud environments. While elasticity is well established for client-server applications, it is a fundamentally new concept for parallel applications. However, existing elasticity mechanisms for client-server applications can be applied to parallel applications only to a limited extent. Efficient exploitation of elasticity for parallel applications requires novel mechanisms that take into account the particular runtime characteristics and resource requirements of this application type. To tackle this issue, we propose an elasticity description language. This language facilitates users to define elasticity policies, which specify the elasticity behavior at both cloud infrastructure level and application level. Elasticity at the application level is supported by an adequate programming and execution model, as well as abstractions that comply with the dynamic availability of resources. We present the underlying concepts and mechanisms, as well as the architecture and a prototypical implementation. Furthermore, we illustrate the capabilities of our approach through real-world scenarios.
This paper presents an efficient implementation of a reconfigurable battery stack which allows full exploitation of the capacity of every single cell. Contrary to most other approaches, it is possible to electrically remove one or more cells from the battery stack. Therefore, the overall capacity of the system is not restricted by the weaker cells, and cells with very different states of health can be used, making the system very attractive for refurbished batteries. For the required switches, low-voltage high-current MOSFETs are used. A demonstrator has been built with a total capacity of up to 3.5 kWh, a nominal voltage of 35 V, and currents of up to 200 A.
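The core idea, electrically removing weak cells so that they no longer limit the stack, can be sketched as follows. The threshold, the data layout, and the selection rule are illustrative assumptions, not the paper's switching logic:

```python
def select_active_cells(cells, min_soc=0.05):
    """Bypass cells whose state of charge is below a threshold;
    the stack voltage is then the sum of the remaining cells' voltages."""
    active = [c for c in cells if c["soc"] >= min_soc]
    stack_voltage = sum(c["voltage"] for c in active)
    return active, stack_voltage

# hypothetical stack with one depleted (e.g. aged) cell
cells = [
    {"id": 0, "voltage": 3.6, "soc": 0.80},
    {"id": 1, "voltage": 3.2, "soc": 0.02},  # weak cell: bypassed
    {"id": 2, "voltage": 3.7, "soc": 0.90},
]
active, v = select_active_cells(cells)
print([c["id"] for c in active], round(v, 1))
```

In the real system this decision is made by per-cell MOSFET switches, so the stack keeps delivering current while the weak cell rests.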