Purpose: The purpose of this paper is to describe and discuss the current state of fashion business academic education worldwide. This is motivated by the wish to develop recommendations for the fashion business bachelor program of Reutlingen University.
Design/methodology/approach: This paper is based on a systematic review of relevant fashion business academic programs. A qualitative comparison is conducted through a categorization of the programs’ content and a score system evaluating the programs’ concepts.
Findings: Key findings were that several factors ensure successful fashion business education: Industry connections, international networks, project-based work, personalized career services and innovative approaches in teaching that include all steps along the fashion value chain.
Research limitations/implications: The research was primarily constrained by the small number of schools assessed. As a result of the restricted time frame, the schools presented could only be analyzed with respect to a few aspects. Future research should focus on a more in-depth analysis and further-reaching comparisons, e.g. with teaching concepts outside the fashion business area or with requirements set by fashion companies.
This paper introduces the vision of the Internet of Things (IoT) and examines both the opportunities for its use and the potential threats to user security. The smart home use case is considered in particular, and serious weaknesses of these devices are demonstrated using ZigBee as an example.
Industrie 4.0 enables the customized production of small batch sizes at low cost. To this end, all plants must be networked with one another so that they can exchange data and communicate. This networking can give rise to new risks and threats. This paper evaluates IT security in Industrie 4.0 on the basis of possible threat scenarios, challenges, and countermeasures. It examines what options industrial companies have to prevent hacker attacks, and whether already established security concepts can simply be adopted for industrial plants.
The goal of this work is to examine the security of the infrastructure of modern vehicle-to-vehicle communication. To this end, the security standards for radio communication are described in detail and then tested against possible attack models. With this understanding of the VANET architecture, various attacks become easier to follow. In this way, the weaknesses are exposed and countermeasures at suitable points in the architecture are illustrated.
This paper deals with the new electronic identity card. First, it explains the security objectives of the ID card and the technical implementation of its architecture and protocols. The process of an online identification by a user with the card is presented. Risks and weaknesses of the technology in both software and hardware are discussed, and the hacking attacks carried out so far are described. The paper sets out ways in which users can protect themselves against attacks. It gives the reasons why the new ID card has found little acceptance online, and why raising awareness of the available applications, reducing the price of card readers, and the eIDAS regulation enacted by the European Parliament and the Council will not help to drive adoption; a user study provides the evidence for this. Finally, ideas are presented for how the use of the card's electronic functions could be promoted instead.
In this research paper, IT risk management is evaluated on the basis of existing approaches. The question of how IT risk management can support a company is addressed and then illustrated using two case studies.
The use of technical aids for analysis purposes in sport has become an integral part of the daily training routine of coaches and athletes. In almost every sport, video recordings are used to document and analyze movement execution. However, recordings from a single static position are often no longer sufficient. Virtual reality (VR) can offer a solution to this problem: VR adds another layer to the recorded scene, allowing movement sequences to be assessed anew and in greater detail. To reproduce movements in a virtual environment, they must be recorded by means of motion capturing (MoCap). The goal of this work is to determine whether the MoCap system Perception Neuron is capable of capturing high-speed movements.
In recent years, researchers and car manufacturers have been working on the prerequisites for introducing autonomous driving. For innovations and business models in intelligent mobility, and within the digital value chain in general, the reliability and quality of digital data transmission play a decisive role. Before autonomous driving is fully introduced, the requirements for the digital infrastructure must be determined, and at the same time the threat landscape for autonomous driving must be analyzed.
The following work analyzes these requirements and threats and proposes general recommendations for action.
A heavily researched area of computer vision is the detection of salient facial feature points, such as the corners of the mouth or the chin. A large number of published methods can therefore be found, but they differ considerably in detection accuracy, robustness, and speed. Many methods are only partially real-time capable or deliver satisfactory results only with high-resolution image sources. In recent years, methods have been developed that attempt to solve these problems. This work examines three of these state-of-the-art methods: Constrained Local Neural Fields (CLNF), Discriminative Response Map Fitting (DRMF), and Structured Output SVM (SO-SVM), as well as their implementations. An empirical comparison of their detection accuracy is carried out.
Background. The application of lean management is standard in many companies all over the world. It is used to continuously optimise existing production processes and to reduce the complexity of administrative processes. Unfortunately, in higher education, the awareness of lean management as a highly effective methodology is quite low.
Research aims. The research aim is to show how the lean strategy can be applied in university environments. Finally, this paper addresses the question of why it is so difficult to implement lean in a university environment and how an institution of higher education can move towards becoming a lean university.
Methodology. Based on a literature review, five key lean principles are presented and examples of their implementation are discussed using short case studies from our own institution. We also compare our findings with those in the literature.
Key findings. Lean offers the chance to improve the management of higher education institutions. This requires commitment from the university's top management to convincing all stakeholders that a lean culture helps the institution adapt to the rapidly changing environment of higher education.
The following article deals with wearables for horses. The goal is to increase the animals' safety if they break out of a pasture and thus to minimize personal injury and property damage. To this end, the state of the art in outdoor positioning is compiled, and a classification of the different approaches is used to determine which positioning method appears suitable for horses. In addition, a questionnaire is to be designed to determine characteristics and functionalities for a prototype.
In medicine, various maturity models exist that can support the digitalization of hospitals. The requirements for a maturity model for this purpose cover both general and specific areas of the hospital. An analysis of the maturity models HIN, CCMM, EMRAM, and O-EMRAM reveals large gaps in the operating room area as well as missing aspects in the emergency department. No comprehensive maturity model was found. A combination of HIN and CCMM could cover almost all areas sufficiently. Additional supplements from specialized maturity models, or even the development of a comprehensive maturity model, would be advisable.
This work presents the possibilities of 3D controllers for use in interventional radiology, in particular for controlling real-time magnetic resonance imaging (MRI). This is of interest for controlled navigation to a target tissue. Real-time imaging lets the interventionalist follow the course of the procedure, but so far they cannot control the MRI scanner themselves during the intervention, since this is done by an assistant in an adjacent room, and communication is very difficult given the high noise level. This work addresses this issue and analyzes 3D controllers for their suitability for the real-time control of an MRI scanner. Both tracking-based and tracking-free devices were considered. The result was that tracking-based devices are less suitable, because their inputs are not interpreted sufficiently well; the tracking-free devices, by contrast, are suitable thanks to the correct interpretation of all inputs and their intuitive operation.
Defining and shaping the digital future is on everyone's lips, in industry, in teaching, and thus also in the focus of this year's Informatics Inside conference. This includes, on the one hand, the opportunities digitalization brings, described for example in the hospital environment or in horse breeding, and on the other hand the interface between the real and virtual worlds, illustrated by examples of face and movement recognition. It is striking that students are also focusing ever more strongly on the security and privacy of personal data in a digital world. This includes fundamental security analyses for selected domains, e.g. Industrie 4.0 or the smart home, as well as the examination of concrete application scenarios such as autonomous driving, vehicle-to-vehicle communication, and the new electronic identity card. In addition, the students present their master's projects in short contributions.
The participants not only meet the requirement of presenting the results of their work clearly in written form, but also of defending and discussing it interactively in front of their audience. Informatics Inside thus offers students a forum in which, during their studies, they can present the results of their work professionally to an interested audience, pick up ideas from other areas of specialization, and critically question the work of others.
Reconstructing 3D face shape from a single 2D photograph, as well as from video, is an inherently ill-posed problem with many ambiguities. One way to resolve some of the ambiguities is to use a 3D face model to aid the task. 3D morphable face models (3DMMs) are among the state-of-the-art methods for 3D face reconstruction, so-called 3D model fitting. However, currently existing methods have severe limitations, and most of them have not been trialled on in-the-wild data. Current analysis-by-synthesis methods form complex nonlinear optimisation processes, and optimisers often get stuck in local optima. Further, most existing methods are slow, requiring on the order of minutes to process one photograph.
This thesis presents an algorithm to reconstruct 3D face shape from a single image, as well as from sets of images or video frames, in real-time. We introduce a solution for the linear fitting of a PCA shape identity model and expression blendshapes to 2D facial landmarks. To improve the accuracy of the shape, a fast face contour fitting algorithm is introduced. These different components of the algorithm are run in iteration, resulting in a fast, linear shape-to-landmarks fitting algorithm. Specifically designed to fit to landmarks obtained from in-the-wild images by tackling imaging conditions such as facial expressions and the mismatch of 2D–3D contour correspondences, the algorithm achieves the shape reconstruction accuracy of much more complex, nonlinear state-of-the-art methods, while being multiple orders of magnitude faster.
Second, we address the problem of fitting to sets of multiple images of the same person, as well as monocular video sequences. We extend the proposed shape-to-landmarks fitting to multiple frames by using the knowledge that all images are from the same identity. To recover facial texture, the approach uses texture from the original images, instead of employing the often-used PCA albedo model of a 3DMM. We employ an algorithm that merges texture from multiple frames in real-time based on a weighting of each triangle of the reconstructed shape mesh.
Last, we make the proposed real-time 3D morphable face model fitting algorithm available as open-source software. In contrast to the ubiquitously available 2D-based face models and code, there is a general lack of software for 3D morphable face model fitting, hindering widespread adoption. The library thus constitutes a significant contribution to the community.
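The linear shape-to-landmarks fitting described above can be illustrated with a minimal sketch. Assuming a known orthographic camera and noiseless landmarks, the PCA shape coefficients have a closed-form regularized least-squares solution; all names, dimensions, and data below are illustrative stand-ins, not the thesis' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_landmarks, n_coeffs = 68, 10

mean_shape = rng.normal(size=3 * n_landmarks)         # mean 3D shape, (x, y, z) stacked per point
basis = rng.normal(size=(3 * n_landmarks, n_coeffs))  # stand-in PCA shape basis

# Orthographic projection: keep the x and y coordinate of each 3D point.
P = np.zeros((2 * n_landmarks, 3 * n_landmarks))
for i in range(n_landmarks):
    P[2 * i, 3 * i] = 1.0
    P[2 * i + 1, 3 * i + 1] = 1.0

true_alpha = rng.normal(size=n_coeffs)
landmarks_2d = P @ (mean_shape + basis @ true_alpha)  # synthetic 2D observations

# Closed-form Tikhonov-regularized linear least squares for the coefficients.
A = P @ basis
lam = 1e-3  # regularization weight
alpha = np.linalg.solve(A.T @ A + lam * np.eye(n_coeffs),
                        A.T @ (landmarks_2d - P @ mean_shape))
```

In the thesis, such a linear solve is iterated together with expression blendshapes and contour correspondence updates; the sketch covers only a single identity-shape step.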
Thematic issue on human-centred ambient intelligence: cognitive approaches, reasoning and learning
(2017)
This editorial presents advances in human-centred Ambient Intelligence applications which take cognitive issues into account when modelling users (e.g. stress, attention disorders) and which learn users' activities and preferences and adapt to them (e.g. at home or while driving a car). These papers also show AmI applications in health and education, which make them even more valuable for society at large.
The invention relates to a wheelchair with a wheeled frame, a seat, and two footplates displaceable relative to the seat, and to a training device for the movement therapy of the lower extremities of a person sitting in the wheelchair. To simplify the design of the training device, it contains an electric machine that can be operated independently of the wheelchair's travel motion, can be attached to the frame, is controlled by a control unit, and is mechanically coupled to the two footplates in order to alternately force their displacement.
The main challenge when driving heat pumps with PV electricity is balancing differing electrical and thermal demands. In this article, a heuristic method for the optimal operation of a heat pump driven by a maximum share of PV electricity is presented. For this purpose, the available thermal storage capacities, including the domestic hot water (DHW) storage, are activated in order to shift the operation of the heat pump to times of PV generation. The system under consideration covers the thermal and electrical demands of a single-family house. It consists of a heat pump, a thermal energy storage for DHW, and a grid-connected PV system. For heating and the generation of domestic hot water, the heat pump runs with two different supply temperatures, thereby achieving a maximum overall COP. Within the optimization algorithm, a set of heuristic rules is developed such that the operational characteristics of the heat pump in terms of minimum running and stopping times are met, as well as the limiting constraints of upper and lower room temperature and storage energy content. Based on the electricity generated, a varying number of heat pump schedules fulfilling the boundary conditions are created. Finally, the schedule offering the maximum on-site utilization of PV electricity with a minimum number of heat pump starts, which serves as a secondary condition, is selected. Yearly simulations of this combination have been carried out. Initial results of this method indicate a significant rise in the on-site consumption of PV electricity and in the share of the heating demand covered by renewable electricity, with no need for a massive TES for the heating system in the form of a big water tank.
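The schedule-selection step described above can be sketched, under invented numbers and deliberately simplified constraints (only a minimum run time and a fixed number of on-hours), as an exhaustive search over on/off schedules that scores on-site PV use first and the number of heat pump starts second. This is an illustration of the idea, not the paper's actual rule set.

```python
from itertools import product

pv_kw = [0.0, 0.2, 1.5, 2.0, 1.8, 0.4]  # invented hourly PV generation profile
hp_kw = 1.5                              # heat pump electrical power when on
min_run = 2                              # minimum consecutive on-hours

def valid(schedule):
    """Reject schedules containing on-blocks shorter than min_run hours."""
    run = 0
    for on in list(schedule) + [0]:
        if on:
            run += 1
        else:
            if 0 < run < min_run:
                return False
            run = 0
    return True

def pv_share(schedule):
    """PV energy (kWh) directly consumed by the heat pump."""
    return sum(min(pv, hp_kw) for pv, on in zip(pv_kw, schedule) if on)

def starts(schedule):
    """Number of off-to-on transitions."""
    return sum(1 for a, b in zip([0] + list(schedule), schedule) if b and not a)

# Require exactly 3 on-hours to cover the (invented) heat demand.
candidates = [s for s in product([0, 1], repeat=len(pv_kw))
              if valid(s) and sum(s) == 3]
# Primary criterion: PV self-consumption; secondary: fewest starts.
best = max(candidates, key=lambda s: (pv_share(s), -starts(s)))
```

With this profile the search places the single three-hour run on the midday PV peak.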
Painting galleries typically provide a wealth of data comprising several data types. These multivariate data are too complex for laypeople such as museum visitors who first want to get an overview of all paintings and then look for specific categories. Ultimately, the goal is to guide the visitor to a specific painting that they wish to look at more closely. In this paper we describe an interactive visualization tool that provides such an overview and lets people explore the more than 41,000 paintings collected in the Web Gallery of Art. The technique behind this interactive tool is composed of several steps: data handling, algorithmic transformations, visualizations, interactions, and the human user working with the tool with the goal of detecting insights in the provided data. We illustrate the usefulness of the visualization tool by applying it to this characteristic data set and show how one can move from an overview of all paintings to specific paintings.
How can the skin be protected from sunburn? The sun can damage the skin, for example by causing skin cancer, but it also has positive effects on humans. The time spent in the sun and its intensity are the key factors that separate enjoying a sunbath from harming the skin. A smart device such as a UV flower can help: it measures the ambient UV index and passes this information to a smartphone app. This paper describes the development steps of such a device. The UV flower is made of textile fabrics.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. An essential component is the development of user interfaces for mobile medical applications. The conceptual process is crucial for the subsequent main development process: inconsistencies or errors in the conceptual phase have a serious impact on all areas and could prevent certification for market approval.
This paper presents a guide to support developers in this process. It was developed based on an analysis of the legal requirements for publishing a medical device.
A sleep study is a test used to diagnose sleep disorders and is usually conducted in sleep laboratories. The gold standard for the evaluation of sleep is overnight polysomnography (PSG). Unfortunately, in-lab sleep studies are expensive and complex procedures. Furthermore, with a minimum of 22 wire attachments to the patient for sleep recording, this medical procedure is invasive and unfamiliar to the subjects. To solve this problem, low-cost home diagnostic systems based on noninvasive recording methods require further research.
For this purpose it is important to find suitable vital parameters for classifying the sleep phases WAKE, REM, light sleep, and deep sleep without any physical impairment of the subject. We decided to analyse body movement (BM), respiration rate (RR), and heart rate variability (HRV) from existing sleep recordings in order to develop an algorithm able to classify the sleep phases automatically. The preliminary results of this project show that BM, RR, and HRV are suitable for identifying the WAKE, REM, and NREM stages.
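As a hypothetical illustration of this classification idea, an epoch-wise rule cascade over BM, RR, and HRV might look like the following. The thresholds and feature values are invented for the sketch and are not taken from the project.

```python
def classify_epoch(bm, rr, hrv):
    """Map one epoch's features to a coarse sleep stage.

    bm: movement events per epoch, rr: breaths/min, hrv: RMSSD in ms.
    Thresholds are illustrative only.
    """
    if bm > 5:                 # frequent body movement -> awake
        return "WAKE"
    if hrv > 60 and rr > 16:   # high HRV with fast breathing -> REM
        return "REM"
    return "NREM"              # otherwise light or deep sleep

# Invented example epochs: (BM, RR, HRV)
epochs = [(8, 15, 40), (0, 18, 75), (1, 12, 30)]
stages = [classify_epoch(*e) for e in epochs]
```

A real classifier would be trained and validated against PSG-scored recordings rather than hand-set thresholds.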
To analyze human sleep, it is necessary to identify the sleep stages occurring during sleep, their durations, and the sleep cycles. The gold standard procedure for this is polysomnography (PSG), which classifies the sleep stages based on the Rechtschaffen and Kales (R-K) method. Besides advantages such as high accuracy, this method has some disadvantages, among them being time-consuming and uncomfortable for the patient. The development of further methods for sleep classification in addition to PSG is therefore a promising topic for investigation, and this work aims to present possible ways and goals for this development.
Asymmetric read/write storage technologies such as Flash are becoming a dominant trend in modern database systems. They introduce hardware characteristics and properties which are fundamentally different from those of traditional storage technologies such as HDDs.
Multi-Versioning Database Management Systems (MV-DBMSs) and Log-based Storage Managers (LbSMs) are concepts that can effectively address the properties of these storage technologies but are designed for the characteristics of legacy hardware. A critical component of MV-DBMSs is the invalidation model: commonly, transactional timestamps are assigned to the old and the new version, resulting in two independent (physical) update operations. Those entail multiple random writes as well as in-place updates, sub-optimal for new storage technologies in terms of both performance and endurance. Traditional page-append LbSM approaches alleviate random writes and immediate in-place updates, hence reducing the negative impact of the Flash read/write asymmetry. Nevertheless, they entail significant mapping overhead, leading to write amplification.
In this work we present an approach called Snapshot Isolation Append Storage Chains (SIAS-Chains) that employs a combination of multi-versioning, append storage management at tuple granularity, and a novel singly-linked (chain-like) version organization. SIAS-Chains features simplified buffer management and multi-version indexing, and introduces read/write optimizations for data placement on modern storage media. SIAS-Chains algorithmically avoids the small in-place updates caused by in-place invalidation and converts them into appends. Every modification operation is executed as an append, and recently inserted tuple versions are co-located.
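A toy sketch of the chain-like version organization described above (not the actual SIAS-Chains implementation): every modification is an append to the log, and each new tuple version stores a link back to its predecessor, so no in-place invalidation write is needed.

```python
class AppendStore:
    """Append-only store with singly-linked version chains per tuple id."""

    def __init__(self):
        self.log = []   # append-only storage: (tuple_id, value, prev_index)
        self.head = {}  # tuple id -> log index of the newest version

    def put(self, tid, value):
        """Every modification is an append; the old version is untouched."""
        prev = self.head.get(tid)           # link to predecessor, if any
        self.log.append((tid, value, prev))
        self.head[tid] = len(self.log) - 1

    def versions(self, tid):
        """Walk the singly-linked chain from newest to oldest version."""
        idx, out = self.head.get(tid), []
        while idx is not None:
            _, value, prev = self.log[idx]
            out.append(value)
            idx = prev
        return out

store = AppendStore()
store.put("t1", "v1")
store.put("t1", "v2")   # appended, chained to "v1"; no in-place update
store.put("t2", "a")
```

In a real MV-DBMS the chain entries would carry transactional timestamps for snapshot visibility checks; the sketch only shows the append-and-link mechanics.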
The goal of a current research project at Reutlingen University, carried out jointly with the engineering firm Ganssloser and the University of Tübingen, is to identify and exploit flexibilities in companies that act together as a virtual power plant on the electricity market. To this end, a control box for industrial and commercial enterprises is to be developed that, on the one hand, communicates with the central control room of the virtual power plant and, on the other hand, controls the company's plants so that the available flexibilities are used as optimally as possible. Within the project, Reutlingen University is working on the identification and description of flexibilities in companies.
IT Governance (ITG) is crucial due to its significant impact on enabling innovation and enhancing firm performance. Hence, in the last decade ITG has become important in both academic and practical research. Although several studies have investigated individual aspects of ITG success and its impact on single determinants, the causal relationship of how ITG promotes firm performance remains unclear. Thus, a more comprehensive understanding of the link between ITG and firm performance is needed. To address this gap, this research aims at understanding how ITG and firm performance are related. Therefore, we conducted a systematic literature review (1) to create an overview of how current research structures the link between ITG mechanisms and firm performance, (2) to uncover key constructs as potential mediators or moderators of the general link between ITG and performance, and (3) to set the basis for future studies on the ITG-firm performance relationship.
We were able to identify a set of specific capabilities corporations need to develop in order to enhance brand love. Furthermore, the effects of most dynamic capabilities on brand love correlate strongly with the degree of customer orientation. Other results are relevant to the proposed moderation and mediation hypotheses. Firstly, the impact of customer orientation on brand love varies under specific market conditions, supporting our central moderation hypothesis (β = .259, p = .001). To be precise, the impact of customer orientation is strongest in markets with low competitive differentiation in products and services. Other control variables such as age, gender, or market form (B2B versus B2C) lead to no significant heterogeneity in the data set. Finally, mediation analyses show no significant “direct effect” of the existing DC constructs on brand love, supporting the mediating role of customer orientation.
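A moderation hypothesis of this kind is typically tested by adding an interaction term to the regression: a significant interaction coefficient indicates that the moderator changes the predictor's effect. The following minimal sketch uses simulated data and invented effect sizes, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
cust_orient = rng.normal(size=n)      # predictor (e.g. customer orientation)
differentiation = rng.normal(size=n)  # moderator (e.g. competitive differentiation)

# Simulated outcome: the predictor's effect weakens as the moderator rises.
brand_love = (0.5 - 0.3 * differentiation) * cust_orient \
             + rng.normal(scale=0.5, size=n)

# OLS with intercept, main effects, and the interaction term.
Z = np.column_stack([np.ones(n), cust_orient, differentiation,
                     cust_orient * differentiation])
beta, *_ = np.linalg.lstsq(Z, brand_love, rcond=None)
interaction = beta[3]  # should recover roughly -0.3
```

A negative interaction here mirrors the finding that the predictor's impact is strongest when the moderator is low.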
Development of a non-yellowing, fiber-based bra using innovative FIM technology
(2017)
Royal Philips' goal was to use innovation to improve the lives of three billion people a year by 2025. To reach that goal, the company was shifting from selling medical products in a transactional manner to providing integrated healthcare solutions based on digital health technology ("HealthTech").
This shift required a dual transformation. On one hand, the company needed to transform how healthcare was conducted. Healthcare professionals would have to change the way they worked and reimbursement schemes needed to change to incentivize payers, providers, and patients in vastly different ways. On the other hand, Philips needed to redesign how it worked internally. The company componentized its business, introduced digital platforms, and co-created solutions with the various stakeholders of the healthcare industry.
In other words: Royal Philips was transforming itself in order to reinvent healthcare in the digital age.
It is shown how, in the case of remote powering, the prediction of heating can be improved with appropriate modelling, and how the material and shape of the cable duct influence the heating and the temperature profile of the bundle. It is also shown that the increased heating in metal cable ducts is due to their lower emissivity, and how this can be improved.
In 2016, German car manufacturer the Audi Group (AUDI AG) was working on an expanding array of digital innovations. The goals of these innovations varied, and included strengthening customer- and employee-facing processes, digitally enhancing existing products, and developing new, potentially disruptive business models. Audi’s IT unit was critical to each of these efforts. Based on personal interviews with 11 IT- and non-IT executives at Audi, this case examines the different ways in which digitization can help to enhance and transform an organization’s processes, products, and business models. The case also highlights the challenges that arise as large companies “digitize.”
Recent digital technologies like the Internet of Things and Augmented Reality have brought IT into companies’ core products. What were previously purely physical products are becoming hybrid or digitized. Despite receiving a lot of recent attention, digitized products have only seen a slow uptake in businesses so far. In this paper, we study the challenges that keep companies from realizing the desired impacts of digitized products and the practices they employ to address these challenges. To do so, we looked at companies from a set of industries that are highly affected by digital transformation, but at the same time hesitant to move to a more digitized world: the creative industries. Based on a literature review and twelve interviews in creative industries, we developed a conceptual model that can serve as a basis for formulating testable hypotheses for further research in this area.
Electronic word-of-mouth (eWoM) communication has received a lot of attention from the academic community. As multiple research papers focus on specific facets of eWoM, there is a need to integrate current research results systematically. Thus, this paper presents a scientific literature analysis in order to determine the current state-of-the-art in the field of eWoM.
This paper examines the efficacy of social media systems in customer complaint handling. The emergence of social media, as a useful complement and (possibly) a viable alternative to the traditional channels of service delivery, motivates this research. The theoretical framework, developed from literature on social media and complaint handling, is tested against data collected from two different channels (hotline and social media) of a German telecommunication services provider, in order to gain insights into channel efficacy in complaint handling. We contribute to the understanding of firm’s technology usage for complaint handling in two ways:
(a) by conceptualizing and evaluating complaint handling quality across traditional and social media channels and (b) by comparing the impact of complaint handling quality on key performance outcomes such as customer loyalty, positive word-of-mouth, and cross-purchase intentions across traditional and social media channels.
Pokémon Go was the first mobile augmented reality (AR) game to reach the top of the download charts of mobile applications. However, little is known about this new generation of mobile online AR games. Existing theories provide limited applicability for user understanding. Against this background, this research provides a comprehensive framework based on uses and gratification theory, technology risk research, and flow theory. The proposed framework aims to explain the drivers of attitudinal and intentional reactions, such as continuance in gaming or willingness to invest money in in-app purchases. A survey among 642 Pokémon Go players provides insights into the psychological drivers of mobile AR games. The results show that hedonic, emotional, and social benefits and social norms drive consumer reactions while physical risks (but not data privacy risks) hinder consumer reactions. However, the importance of these drivers differs depending on the form of user behavior.
How to separate the wheat from the chaff: improved variable selection for new customer acquisition
(2017)
Steady customer losses create pressure for firms to acquire new accounts, a task that is both costly and risky. Lacking knowledge about their prospects, firms often use a large array of predictors obtained from list vendors, which in turn rapidly creates massive high-dimensional data problems. Selecting the appropriate variables and their functional relationships with acquisition probabilities is therefore a substantial challenge. This study proposes a Bayesian variable selection approach to optimally select targets for new customer acquisition. Data from an insurance company reveal that this approach outperforms nonselection methods and selection methods based on expert judgment as well as benchmarks based on principal component analysis and bootstrap aggregation of classification trees. Notably, the optimal results show that the Bayesian approach selects panel-based metrics as predictors, detects several nonlinear relationships, selects very large numbers of addresses, and generates profits. In a series of post hoc analyses, the authors consider prospects’ response behaviors and cross-selling potential and systematically vary the number of predictors and the estimated profit per response. The results reveal that more predictors and higher response rates do not necessarily lead to higher profits.
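The Bayesian variable selection procedure itself is not spelled out in this summary. Purely as an illustrative sketch (not the authors' model), the following toy example enumerates all predictor subsets on synthetic data and approximates posterior inclusion probabilities with BIC-based model weights. All data and parameter values are invented.

```python
import itertools
import numpy as np

def bic_linear(X, y):
    """BIC of an OLS fit (Gaussian likelihood), used as a cheap
    approximation to the log marginal likelihood of the model."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def inclusion_probabilities(X, y):
    """Enumerate all predictor subsets, weight each model by
    exp(-BIC/2), and return per-predictor inclusion probabilities."""
    n, p = X.shape
    ones = np.ones((n, 1))
    weights, subsets = [], []
    for r in range(p + 1):
        for subset in itertools.combinations(range(p), r):
            Xm = np.hstack([ones, X[:, list(subset)]])
            weights.append(np.exp(-0.5 * bic_linear(Xm, y)))
            subsets.append(set(subset))
    weights = np.array(weights) / np.sum(weights)
    return np.array([sum(w for w, s in zip(weights, subsets) if j in s)
                     for j in range(p)])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(size=200)   # only predictor 0 matters
probs = inclusion_probabilities(X, y)
```

Exhaustive enumeration only works for a handful of predictors; the high-dimensional setting of the study requires stochastic search over models instead.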
Characterisation of porous knitted titanium for replacement of intervertebral disc nucleus pulposus
(2017)
Effective restoration of degenerated human intervertebral discs is challenged by numerous limitations of the currently available spinal fusion and arthroplasty treatment strategies. Consequently, the use of artificial biomaterial implants is gaining attention as a potential therapeutic strategy. Our study aims at investigating and characterizing a novel knitted titanium (Ti6Al4V) implant for the replacement of the nucleus pulposus to treat early stages of chronic intervertebral disc degeneration. A specific knitted geometry of the scaffold with a porosity of 67.67 ± 0.824% was used to overcome tissue integration failures. Furthermore, to improve wear resistance without impairing the original mechanical strength, an electro-polishing step was employed. Electro-polishing reduced the surface roughness from 15.22 ± 3.28 to 4.35 ± 0.87 μm without affecting wettability, which remained at 81.03 ± 8.5°. Subsequently, the cellular responses of human mesenchymal stem cells (SCP1 cell line) and human primary chondrocytes were investigated; both showed positive responses in terms of adherence and viability. Surface wettability was further enhanced to a superhydrophilic state by oxygen plasma treatment, which caused a substantial increase in the proliferation of SCP1 cells and primary chondrocytes. Our study implies that, owing to the scaffold's physicochemical and biocompatible properties, it could improve the clinical performance of nucleus pulposus replacement.
A wide variety of cell types exhibit substrate topography-based behavior, also known as contact guidance. However, the precise cellular mechanisms underlying this process are still unknown. In this study, we investigated contact guidance by studying the reaction of human endothelial cells (ECs) to well-defined microgroove topographies, both during and after initial cell spreading. As the cytoskeleton plays a major role in cellular adaptation to topographical features, two methods were used to perturb cytoskeletal structures. Inhibition of actomyosin contractility with the chemical inhibitor blebbistatin demonstrated that initial contact guidance events are independent of traction force generation. However, cell alignment to the grooved substrate was altered at later time points, suggesting an initial ‘passive’ phase of contact guidance, followed by a contractility-dependent ‘active’ phase that relies on mechanosensitive feedback. The actin cytoskeleton was also perturbed in an indirect manner by culturing cells upside down, resulting in decreased levels of contact guidance and suggesting that a possible loss of contact between the actin cytoskeleton and the substrate could lead to cytoskeleton impairment. The process of contact guidance at the microscale was found to be primarily lamellipodia driven, as no bias in filopodia extension was observed on micron-scale grooves.
Intermediate filament reorganization dynamically influences cancer cell alignment and migration
(2017)
The interactions between a cancer cell and its extracellular matrix (ECM) have been the focus of an increasing amount of investigation. The role of the intermediate filament keratin in cancer has also been coming into focus of late, but more research is needed to understand how this piece fits in the puzzle of cytoskeleton-mediated invasion and metastasis. In Panc-1 invasive pancreatic cancer cells, keratin phosphorylation in conjunction with actin inhibition was found to be sufficient to reduce cell area more than either treatment alone. We then analyzed intersecting keratin and actin fibers in the cytoskeleton of cyclically stretched cells and found no directional correlation. The role of keratin organization in Panc-1 cellular morphological adaptation and directed migration was then analyzed by culturing cells on cyclically stretched polydimethylsiloxane (PDMS) substrates, nanoscale grates, and rigid pillars. In general, the reorganization of the keratin cytoskeleton allows the cell to become more ‘mobile’, exhibiting faster and more directed migration and orientation in response to external stimuli. By combining keratin network perturbation with a variety of physical ECM signals, we demonstrate the interconnected nature of the architecture inside the cell and the scaffolding outside of it, and highlight the key elements facilitating cancer cell-ECM interactions.
AUDI AG has historically focused on producing and selling premium vehicles but has begun to experiment with providing mobility services, built around car sharing. Its response to the so-called sharing economy addressed strategic and transformational challenges. Strategically, the company pursued additional sources of revenue from targeted, premium mobility services, rather than the less segmented services provided by competitors such as BMW and Zipcar. AUDI AG also transformed its organizational structure, processes and architecture to balance autonomy for innovation and integration for competitiveness.
Cell-cell and cell-extracellular matrix (ECM) adhesion regulates fundamental cellular functions and is crucial for cell-material contact. Adhesion is influenced by many factors, such as the affinity and specificity of the receptor-ligand interaction or the overall ligand concentration and density. To investigate molecular details of cell-ECM and cadherin-mediated (cell-cell) interactions in vascular cells, functionalized nanostructured surfaces were used. Ligand-functionalized gold nanoparticles (AuNPs) with 6-8 nm diameter are precisely immobilized on a surface and separated by non-adhesive regions so that individual integrins or cadherins can specifically interact with the ligands on the AuNPs. Using distances of 40 nm and 90 nm between the AuNPs, functionalized either with peptide motifs of the extracellular matrix (RGD or REDV) or with vascular endothelial cadherin (VEC), we investigated the influence of distance and ligand specificity on the spreading and adhesion of endothelial cells (ECs) and smooth muscle cells (SMCs). We demonstrate that RGD-dependent adhesion of vascular cells is similar to that of other cell types and that the distance dependence of integrin binding to ECM peptides also holds for the REDV motif. VEC ligands decrease adhesion significantly at the tested ligand distances. These results may be helpful for future improvements in vascular tissue engineering and for the development of implant surfaces.
According to the invention, a method is provided for optimizing the operation of a digital controller (30) arranged in a control loop for a boost converter. The method comprises the following steps: evaluating (S1) at least one output quantity of the digital controller during operation of the boost converter; estimating (S2) the instantaneous load resistance value (RL) in the plant of the control loop on the basis of the at least one evaluated output quantity; and adjusting (S3) at least one controller coefficient of the digital controller on the basis of the estimated instantaneous load resistance value (RL) during operation of the boost converter. According to the invention, a change in the setting of the at least one controller coefficient causes a change in the crossover frequency of the control loop. Furthermore, a control loop for a boost converter with a digital controller is provided, which is configured to carry out the steps of the method according to the invention. In addition, a computer program product with computer-executable program code for carrying out the method according to the invention is provided.
This work presents a fully integrated GaN gate driver in a 180 nm HV BCD technology that utilizes high-voltage energy storing (HVES) in an on-chip resonant LC tank, without the need for any external capacitor. It delivers up to 11 nC of gate charge at a 5 V GaN gate, which exceeds prior art by a factor of 45-83, supporting a broad range of GaN transistor types. The stacked LC tank covers an area of only 1.44 mm², which corresponds to a superior value of 7.6 nC/mm².
In recent years, significant progress has been made on switched-capacitor (SC) DC-DC converters, as they enable fully integrated on-chip power management. New converter topologies have overcome the fixed input-to-output voltage limitation and achieved high efficiency at high power densities. SC converters are attractive not only for mobile handheld devices with small input and output voltages, but also for power conversion in IoT, industrial, and automotive applications. Such applications need to be capable of handling high input voltages of more than 10 V. This talk highlights the challenges of the required supporting circuits and high-voltage techniques that arise for high-Vin SC converters, including level shifters, charge pumps, and back-to-back switches. High-Vin conversion is demonstrated in a 4:1 SC DC-DC converter with an input voltage as high as 17 V and a peak efficiency of 45%, and in a buck-boost SC converter with an input voltage range from 2 V up to 13 V, which utilizes a total of 17 ratios and achieves a peak efficiency of 81.5%. Furthermore, a highly integrated micro power supply approach is introduced, which is connected directly to the 120/230 Vrms mains, with an output power of 3 mW, resulting in a power density >390 μW/mm², which exceeds prior art by a factor of 11.
Managing decentralized corporate energy systems is a challenging task for enterprises. Integrating energy objectives into business strategy is difficult, however, and often results in inefficient decisions. To improve this, practice-proven methods such as the balanced scorecard and enterprise architecture management are transferred to the energy domain. The methods are evaluated based on a case study. Managing multi-dimensionality and high complexity are the main drivers for an effective and efficient energy management system. Both methods show a positive impact on managing decentralized corporate energy systems and are adaptable to the energy domain.
This paper presents a control strategy for optimal utilization of photovoltaic (PV) generated power in conjunction with an Energy Storage System (ESS). The ESS is specifically designed to be retrofitted into existing PV systems in an end-user application. It can be attached in parallel to the PV system and connects to existing DC/AC inverters. In particular, the study covers the impact such a modification has on the output power of existing PV panels. A distinct degradation of PV output power was found, due to the differing power characteristics of the PV panel and the ESS. To overcome this degradation, a novel feedback system is proposed. The feedback system continuously modifies the power characteristic of the ESS to match the PV panel and thus achieves optimal power utilization. The impact on PV and power point tracking performance is analyzed. A simulation of the proposed system is performed in MATLAB/Simulink, and the results are found to be satisfactory.
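The abstract does not disclose the tracking algorithm used. Purely to illustrate the kind of power point tracking being analyzed, the sketch below runs a generic perturb-and-observe maximum power point tracker on an invented PV curve; the panel model and all parameter values are assumptions, not the paper's system.

```python
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV panel curve: current falls off exponentially near V_oc.
    Parameters are invented for illustration only."""
    i = i_sc * (1.0 - np.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def perturb_and_observe(v0=20.0, step=0.2, iters=300):
    """Generic perturb-and-observe MPPT: keep stepping the operating
    voltage in the direction that last increased output power."""
    v, direction = v0, +1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()   # settles into oscillation around the MPP
```

A retrofitted ESS that distorts the panel's apparent power curve would mislead exactly this kind of tracker, which is the degradation the proposed feedback system compensates for.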
A novel configuration of the dual active bridge (DAB) DC/DC converter is presented, enabling more efficient wide-voltage-range conversion at light loads. A third phase leg as well as a center-tapped transformer are introduced on one side of the converter. This concept provides two different turn ratios, thus extending the zero voltage switching operation and resulting in higher efficiency. A laboratory prototype was built converting an input voltage of 40 V to an output voltage in the range of 350 V to 650 V. Measurements show a significant efficiency increase of up to 20% in light-load operation.
Multilevel-cell (MLC) flash is commonly deployed in today’s high density NAND memories, but low latency and high reliability requirements make it barely used in automotive embedded flash applications. This paper presents a time domain voltage sensing scheme that applies a dynamic voltage ramp at the cells’ control gate (CG) in order to achieve fast and reliable sensing suitable for automotive applications.
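As a hedged illustration of the general idea of time-domain voltage sensing (not the circuit presented in the paper), the sketch below converts a cell's threshold voltage into a sensing time under a linear control-gate ramp and maps that time to one of four MLC states. All voltages, ramp rates, and level boundaries are invented.

```python
def sense_time_us(v_threshold, ramp_rate_v_per_us=0.5, v_start=0.0):
    """With a linear voltage ramp applied at the control gate, the cell
    starts to conduct once the ramp crosses its threshold voltage; the
    elapsed time therefore encodes the threshold voltage."""
    return (v_threshold - v_start) / ramp_rate_v_per_us

def classify_mlc_level(t_us, boundaries_us=(2.0, 5.0, 8.0)):
    """Map the sensed time to one of four MLC states (2 bits/cell) by
    comparing against invented time-domain decision boundaries."""
    level = 0
    for b in boundaries_us:
        if t_us > b:
            level += 1
    return level

level = classify_mlc_level(sense_time_us(3.0))   # a cell with Vth = 3.0 V
```

The paper's contribution is making the ramp dynamic so that this time measurement stays fast and reliable across automotive conditions; the static mapping above only shows the underlying principle.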
This publication gives a short introduction and overview of the European project SCOUT and introduces a methodology for a holistic approach to recording the state of the art in the technical enablers (vehicle and connectivity, human factors on the physiological and ergonomic level) and non-technical enablers (societal, economic, legal, regulatory, and policy level) of connected and automated driving in Europe. Besides the technical topics of environmental perception, E/E architecture, actuators, and security, the paper addresses the state of the art of the legal framework in the context of connected and automated driving.
In any autonomous driving system, the map used for localization plays a vital part that is often underestimated. The map describes the world around the vehicle beyond the sensor view and is a main input to the decision-making process in highly complicated scenarios. There are therefore strict requirements on the accuracy and timeliness of the map. We present a robust and reliable approach to crowd-based mapping using a GraphSLAM framework based on radar sensors. We show on a parking lot that the localization results are very accurate and reliable even in dynamically changing environments and in unexplored terrain without any map data. This is achieved by collaborative map updates from multiple vehicles. To support these claims experimentally, the joint graph optimization is compared to ground truth on an industrial parking space. Mapping performance is evaluated using a dense map from a total station as reference, and localization results are compared with a deeply coupled DGPS/INS system.
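The GraphSLAM formulation is not given in this summary. As an illustrative sketch with invented numbers, the following one-dimensional pose graph combines noisy odometry constraints with a more trusted loop closure and solves the weighted least-squares problem that graph optimization reduces to in the linear case.

```python
import numpy as np

# Poses x0..x3 on a line; each constraint says x_j - x_i should equal
# a measured offset, weighted by its information (inverse variance).
constraints = [            # (i, j, measured offset, information weight)
    (0, 1, 1.05, 1.0),     # odometry
    (1, 2, 0.95, 1.0),     # odometry
    (2, 3, 1.10, 1.0),     # odometry
    (0, 3, 3.00, 10.0),    # loop closure (more trusted)
]

n = 4
A = np.zeros((len(constraints) + 1, n))
b = np.zeros(len(constraints) + 1)
for row, (i, j, z, w) in enumerate(constraints):
    s = np.sqrt(w)                       # scale row by sqrt(information)
    A[row, i], A[row, j], b[row] = -s, s, s * z
A[-1, 0] = 1.0                           # prior: anchor x0 at 0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The heavily weighted loop closure pulls the accumulated odometry drift (1.05 + 0.95 + 1.10 = 3.10) back toward the measured 3.00; real radar GraphSLAM does the same with nonlinear 2D/3D constraints and iterative relinearization.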
The European Economic and Monetary Union (EMU) has been in turmoil for more than six years. The present governance rules do not seem to solve the problems either permanently or effectively, and there is no vision for the future of Europe in the 21st century. This article describes a realignment of economic governance that does not necessarily lead to a transfer union or a political union, yet solves the current and future challenges. In fact, the redesign of the present rules is today the most likely option, as well as the most viable one legally and economically. The key idea is the detachment from the compulsive idea of an ever closer union. This vision, however, requires boldness towards greater flexibility, together with an exit clause or a state insolvency procedure for non-compliant member states.
This paper models the political budget cycle with stochastic differential equations and highlights the development of the future volatility of the budget cycle. In fact, I confirm the proposition of a less volatile budget cycle in the future. Moreover, I show that this trend is even amplified by higher transparency. These findings provide new evidence in the literature on electoral cycles. I calibrate a rigorous stochastic model on public deficit-to-GDP data for several countries from 1970 to 2012.
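The paper's calibrated model is not reproduced here. As an invented illustration of the modeling idea only, the sketch below simulates a mean-reverting deficit process with time-decaying volatility (a crude proxy for rising transparency) via the Euler-Maruyama scheme; every parameter value is made up.

```python
import numpy as np

def simulate_budget_cycle(kappa=1.5, theta=-2.0, sigma0=1.0, decay=0.5,
                          x0=0.0, T=40.0, dt=0.01, seed=42):
    """Euler-Maruyama simulation of a mean-reverting deficit-to-GDP
    process dX = kappa*(theta - X) dt + sigma(t) dW, where the
    volatility sigma(t) = sigma0 * exp(-decay * t) shrinks over time.
    All parameter values are invented for illustration."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        sigma = sigma0 * np.exp(-decay * k * dt)
        x[k + 1] = (x[k] + kappa * (theta - x[k]) * dt
                    + sigma * np.sqrt(dt) * rng.normal())
    return x

path = simulate_budget_cycle()
early, late = path[:1000], path[-1000:]   # volatility shrinks over time
```

Comparing the dispersion of the early and late windows reproduces, in toy form, the qualitative claim of a less volatile cycle in the future.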
Database management systems (DBMS) are critical performance components in large-scale applications under modern update-intensive workloads. Additional access paths accelerate look-up performance in a DBMS for frequently queried attributes, but the required maintenance slows down update performance. The ubiquitous B+-tree is a commonly used key-indexed access path that supports many required functionalities with logarithmic access time to requested records. Modern processing and storage technologies and their characteristics require a reconsideration of matured indexing approaches for today's workloads. Partitioned B-trees (PBTs) leverage the characteristics of modern hardware technologies and complex memory hierarchies, as well as high update rates and changing workloads, by maintaining partitions within one single B+-tree. This paper includes an experimental evaluation of the PBT's optimized write pattern and its performance improvements. With PBTs, transactional throughput under TPC-C increases by 30%; PBTs produce beneficial sequential write patterns even in the presence of updates and maintenance operations.
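As a hedged sketch of the partitioned B-tree idea (not the paper's implementation), the following toy index keeps one sorted run per partition, sends all inserts to the newest partition, and probes partitions newest-first on look-up. A real PBT keeps all partitions inside a single B+-tree by prefixing keys with an artificial partition identifier; the class name and structure here are invented.

```python
import bisect

class PartitionedIndexSketch:
    """Minimal sketch of the partitioned B-tree idea: new entries go
    only into the newest partition (sequential, write-friendly), and
    look-ups probe partitions from newest to oldest."""

    def __init__(self):
        self.partitions = [[]]          # list of sorted (key, value) runs

    def insert(self, key, value):
        bisect.insort(self.partitions[-1], (key, value))

    def seal_partition(self):
        """Freeze the current partition and start a new empty one."""
        self.partitions.append([])

    def lookup(self, key):
        for run in reversed(self.partitions):   # newest entry wins
            i = bisect.bisect_left(run, (key,))
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None

    def merge_all(self):
        """Background-style reorganization: merge all runs into one.
        (Deduplication of shadowed entries is omitted here.)"""
        merged = sorted(kv for run in self.partitions for kv in run)
        self.partitions = [merged]

idx = PartitionedIndexSketch()
idx.insert(5, "old")
idx.insert(3, "a")
idx.seal_partition()
idx.insert(5, "new")        # the newer partition shadows the old entry
```

Appending to the newest partition is what yields the sequential write pattern the evaluation measures; merging partitions is the deferred maintenance cost.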
The characteristics of modern computing and storage technologies fundamentally differ from those of traditional hardware, and there is a need to optimally leverage their performance, endurance, and energy consumption characteristics. Therefore, existing architectures and algorithms in modern high-performance database management systems have to be redesigned and advanced. Multi-version concurrency control (MVCC) approaches in database management systems maintain multiple physically independent tuple versions. Snapshot isolation approaches enable high parallelism and concurrency in workloads with an almost serializable consistency level, and modern hardware technologies benefit from multi-version approaches. Indexing multi-version data on modern hardware, however, is still an open research area. In this paper, we provide a survey of popular multi-version indexing approaches, extended in scope with high-performance single-version approaches. An optimal multi-version index structure balances look-up efficiency for the tuple versions visible to transactions against the effort of index maintenance, for different workloads on modern hardware technologies.
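To make the notion of version visibility concrete, here is a minimal snapshot-isolation visibility check, sketched under assumed conventions (commit-id stamping on each version, no tracking of in-flight transactions). It is an illustration of the general MVCC mechanism, not taken from any of the surveyed systems.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TupleVersion:
    """One physical tuple version, stamped with the commit ids of the
    transactions that created and (optionally) deleted it."""
    value: str
    created_by: int
    deleted_by: Optional[int] = None

def visible(version, snapshot_id):
    """Classic snapshot test: a version is visible if it was committed
    at or before the snapshot and not yet deleted by then."""
    if version.created_by > snapshot_id:
        return False
    return version.deleted_by is None or version.deleted_by > snapshot_id

def read(chain, snapshot_id):
    """Scan a version chain and return the first visible value."""
    for v in chain:
        if visible(v, snapshot_id):
            return v.value
    return None

# Version chain of one logical tuple: created at txn 10, updated at txn 20.
chain = [
    TupleVersion("v1", created_by=10, deleted_by=20),
    TupleVersion("v2", created_by=20),
]
```

A multi-version index must answer exactly this question efficiently: which of possibly many physical versions of a key is visible to the reading transaction's snapshot.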
This book describes in detail the prerequisites and the process flow of business cases. In practice, business cases are the most important instrument for analyzing whether entrepreneurial decisions are advantageous. However, mere mastery of the relevant capital budgeting methods is not sufficient to create an adequate business case. Andreas Taschner provides helpful guidance and tips on the choice of methods and the presentation of results, and discusses further questions such as accounting for uncertainty or incorporating non-monetary factors. The orientation toward an idealized process helps readers develop their own business cases and provides a guideline for their first independent work. Application-oriented questions and answers deepen the subject matter. "Business Cases" is aimed at practitioners in the areas of investment, controlling, planning, and corporate management. Business students at universities and universities of applied sciences, especially those specializing in controlling and corporate management, benefit from the compact presentation of the material.
The invention relates to a device (100) and a method for electrically connecting and disconnecting two electrical potentials (1, 2). The invention further relates to a use of the device (100). The device (100) comprises: a first module comprising a first and a second transistor (10a, 10b), wherein the first transistor (10a) is connected in anti-series with the second transistor (10b); and a second module comprising a third and a fourth transistor (10c, 10d), wherein the third transistor (10c) is connected in anti-series with the fourth transistor (10d); wherein the first module and the second module are connected in parallel.
Using measurement and simulation for understanding distributed development processes in the Cloud
(2017)
Organizations increasingly develop software in a distributed manner. The Cloud provides an environment to create and maintain software-based products and services. Currently, it is widely unknown which software processes are suited for Cloud-based development and what their effects in specific contexts are. This paper presents a process simulation to study distributed development in the Cloud. We contribute a simulation model that helps analyze different project parameters and their impact on projects carried out in the Cloud. The simulator reproduces activities, developers, issues, and events in the project, and it generates statistics, e.g., on throughput, total time, and lead and cycle time. The aim of this simulation model is thus to analyze the tradeoffs regarding throughput, total time, project size, and team size. Furthermore, the modified simulation model aims to help project managers select the most suitable planning alternative. Based on observed projects in Finland and Spain, we simulated a distributed project using artificial and real data. In particular, we studied the variables project size, team size, throughput, and total project duration. A comparison of the real project data with the results obtained from the simulation shows that the simulation produces results close to the real data, and we could successfully replicate a distributed software project. By improving the understanding of distributed development processes, our simulation model thus supports project managers in their decision-making.
The business landscape is changing radically because of software. Companies in all industry sectors are continuously finding new flexibilities in this programmable world. They are able to deliver new functionality even after the product is already in the customer's hands. But success is far from guaranteed if they cannot validate their assumptions about what their customers actually need. A competitor with better knowledge of customer needs can disrupt the market in an instant.
This book introduces continuous experimentation, an approach to continuously and systematically test assumptions about the company's product or service strategy and to verify customers' needs through experiments. By observing how customers actually use the product or early versions of it, companies can make better development decisions and avoid potentially expensive and wasteful activities. The book explains the cycle of continuous experimentation, demonstrates its use through industry cases, provides advice on how to conduct experiments with recipes, tools, and models, and lists some common pitfalls to avoid. Use it to get started with continuous experimentation and make better product and service development decisions that are in line with your customers' needs.
Due to rapidly changing technologies and business contexts, many products and services are developed under high uncertainties. It is often impossible to predict customer behaviors and outcomes upfront. Therefore, product and service developers must continuously find out what customers want, requiring a more experimental mode of management and appropriate support for continuously conducting experiments. We have analytically derived an initial model for continuous experimentation from prior work and matched it against empirical case study findings from two startup companies. We examined the preconditions for setting up an experimentation system for continuous customer experiments. The resulting RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing) illustrates the building blocks required for such a system and the necessary infrastructure. The major findings are that a suitable experimentation system requires the ability to design, manage, and conduct experiments, create so-called minimum viable products or features, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and integration of experiment results in the product development cycle, software development process, and business strategy. This summary refers to the article The RIGHT Model for Continuous Experimentation, published in the Journal of Systems and Software [Fa17].
First International Workshop on Hybrid dEveLopmENt Approaches in Software Systems Development
(2017)
A software process is the game plan for organizing project teams and running projects. Yet, it is still a challenge to select the appropriate development approach for the respective context. A multitude of development approaches compete for the users' favor, but there is no silver bullet serving all possible setups. Moreover, recent research as well as experience from practice shows companies utilizing different development approaches to assemble the best-fitting approach for the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The first HELENA workshop aims to bring together the community to discuss recent findings and to steer future work.
The ability to develop and deploy high-quality software at high speed is increasingly relevant to the competitiveness of car manufacturers. Agile practices have shown benefits such as faster time to market in several application domains. It therefore seems promising to carefully adopt agile practices in the automotive domain as well. This article presents findings from an interview-based qualitative survey. It aims at understanding the perceived forces that support agile adoption, focusing in particular on embedded software development for electronic control units in the automotive domain.
Software and system development faces numerous challenges of rapidly changing markets. To address such challenges, companies and projects design and adopt specific development approaches by combining well-structured comprehensive methods and flexible agile practices. Yet, the number of methods and practices is large, and available studies argue that the actual process composition is carried out in a fairly ad-hoc manner. The present paper reports on a survey on hybrid software development approaches. We study which approaches are used in practice, how different approaches are combined, and what contextual factors influence the use and combination of hybrid software development approaches. Our results from 69 study participants show a variety of development approaches used and combined in practice. We show that most combinations follow a pattern in which a traditional process model serves as framework in which several fine-grained (agile) practices are plugged in. We further show that hybrid software development approaches are independent from the company size and external triggers. We conclude that such approaches are the results of a natural process evolution, which is mainly driven by experience, learning, and pragmatism.
The digital transformation of the automotive industry has a significant impact on how development processes need to be organized in the future. Dynamic market and technological environments require capabilities to react to changes and to learn fast. Agile methods are a promising approach to address these needs, but they are not tailored to the specific characteristics of the automotive domain, such as product line development. Although there have been efforts to apply agile methods in the automotive domain for many years, significant and widespread adoption has not yet taken place. The goal of this literature review is to gain an overview and a better understanding of agile methods for embedded software development in the automotive domain, especially with respect to product line development. A mapping study was conducted to analyze the relation between agile software development, embedded software development in the automotive domain, and software product line development. Three research questions were defined and 68 papers were evaluated. The study shows that agile and product line development approaches tailored to the automotive domain are not yet fully explored in the literature. In particular, literature on the combination of agile and product line development is rare. Most of the examined combinations are customizations of generic approaches or of approaches stemming from other domains. Although only a few approaches for combining agile and software product line development in the automotive domain were found, these findings were valuable for identifying research gaps and provide insights into how existing approaches can be combined, extended, and tailored to suit the characteristics of the automotive domain.
For more than 50 years, neoclassical capital market theory has dominated our understanding of how financial markets operate. It has produced a multitude of theories and concepts (e.g., portfolio theory, the Capital Asset Pricing Model, and Value-at-Risk) and is based on the assumption of a strictly rational homo oeconomicus.
This book aims to open the door for practitioners to an emerging behavioral view of financial markets, in which a more realistic homo oeconomicus humanus acts in the markets. This actor employs boundedly rational heuristics in decision-making and is guided by emotional influences.
The authors first draw the arc from the neoclassical view of financial markets to behavioral finance. Speculative bubbles, from the tulip mania to the subprime mortgage bubble, are then presented in detail as indications of bounded rationality in financial markets. After that, the focus is on heuristics in investment decisions on securities markets. The biases they trigger are classified according to how damaging they are to risk and return within the RRS-Index®. Finally, examples of applying behavioral finance insights in wealth management and corporate governance are discussed, and a look is taken at current developments in neuro-finance and emotional finance.
New to this edition is financial nudging, a particularly promising application of behavioral finance insights.
Owing to above-average economic growth rates and its status as the world's last untapped markets, Africa has been a popular topic in business circles for several years. German companies, however, are very cautious about engaging in African markets. The share of German exports to Africa in total German exports has hovered around two percent for ten years; looking at Sub-Saharan Africa alone, it was a mere 0.5% in 2014 (Allafi and Koch 2015, p. 3). With respect to foreign direct investment (equity capital only, excluding intra-company loans), Africa plays an even smaller role, with only 1.5% of all German investments in 2014, virtually all of which flowed to North Africa and South Africa (Deutsche Bundesbank 2015, pp. 12 f.). Beyond the standard reasons such as political risks, poor infrastructure, weak institutional frameworks, and governance problems (see, for example, World Bank 2016a), a common market-entry problem is the lack of available local partners for distribution, logistics, and in some cases production (see, for example, Carlowitz and Röndigs 2016). At present, entering an African market without a local partner is almost impossible because of the completely different and difficult framework conditions.
Incubators in multinational corporations : development of a corporate incubator operator model
(2017)
This paper analyzes the components of a corporate incubator operator model in multinational companies. Three relevant phases were identified: pre-incubation, incubation, and exit. Each phase contains different criteria that represent critical success factors for a corporate incubator, based on theoretical findings and lessons learned from practice. During the pre-incubation phase, companies should define their need for a corporate incubator, the origin of ideas and the selection criteria for incubator tenants. The actual incubation phase refers to the incubator program, which should be flexible with respect to each tenant. Furthermore, resource allocation plays an important role during the incubator program. Exit options after a successful incubation differ according to internal ideas and external start-ups, as well as the objective of the incubator. The research is based on a comprehensive screening of the existing incubator literature and a qualitative content analysis of statements from eight experts from international corporate incubators.
Gallium nitride high electron mobility transistors (GaN-HEMTs) have low capacitances and can achieve low switching losses in applications where hard turn-on is required. Low switching losses imply fast switching; consequently, fast voltage and current transients occur. However, these transients can be limited by package and layout parasitics even in highly optimized systems. Furthermore, fast switching requires fast charging of the input capacitance and hence a high gate current.
In this paper, the switching speed limitations of GaN-HEMTs due to the common source inductance and the gate driver supply voltage are discussed. The turn-on behavior of a GaN-HEMT is simulated and the impact of the parasitics and the gate driver supply voltage on the switching losses is described in detail. Furthermore, measurements are performed with an optimized layout for a drain-source voltage of 500 V and a drain-source current up to 60 A.
A device including a first and second monitoring unit, the first monitoring unit detecting a first voltage potential and the second monitoring unit detecting a second voltage potential, the monitoring units comparing the first voltage potential and the second voltage potential to the value of the supply voltage and activate a control unit as a function of the comparisons, the control unit determining a switching point in time of a second power transistor, and an arrangement being present which generates current when the second power transistor is being switched on, the current changing the first voltage potential, and the control unit activates a first power transistor when the first voltage potential has the same value as the supply voltage, so that the first power transistor is de-energized.
Modern power semiconductor devices have low capacitances and can therefore achieve very fast switching transients under hard-switching conditions. However, these transients are often limited by parasitic elements, especially by the source inductance and the parasitic capacitances of the power semiconductor. These limitations cannot be compensated by conventional gate drivers. To overcome this, a novel gate driver approach for power semiconductors was developed. It uses a transformer which accelerates the switching by transferring energy from the source path to the gate path.
Experimental results of the novel gate driver approach show a turn-on energy reduction of 78% (from 80 μJ down to 17 μJ) at a drain-source voltage of 500 V and a drain current of 60 A. Furthermore, the efficiency improvement is demonstrated for a hard-switching boost converter. For a switching frequency of 750 kHz with an input voltage of 230 V and an output voltage of 400 V, it was possible to extend the output power range by 35% (from 2.3 kW to 3.1 kW) due to the reduction of the turn-on losses, thereby lowering the junction temperature of the GaN-HEMT.
Within the scope of the present cumulative doctoral thesis, six scientific papers were published which illustrate that modern reaction-model-free (= isoconversional) kinetic analysis (ICKA) methods represent a universal and effective tool for the controlled processing of thermosetting materials. In order to demonstrate the universal applicability of ICKA methods, the thermal cure of different thermosetting materials with a very broad range of chemical compositions (melamine-formaldehyde resins, epoxy resins, polyester-epoxy resins, and acrylate/epoxy resins) was analyzed and mathematically modelled. Some of the materials were based on renewable resources (an epoxy resin was made from hempseed oil; linseed oil was modified into an acrylate/epoxy resin). With the aid of ICKA methods, not only single-step but also complex multi-step reactions were modelled precisely. The analyzed thermosetting materials were combined with wood, wood-based products, paper, and plant fibers, which were processed into various final products. Some of the thermosetting materials were applied as coatings (in the form of impregnated décor papers or powder and wet coatings, respectively) on wood substrates, and the epoxy resin from hempseed oil was mixed with plant fibers and processed into bio-based composites for lightweight applications. Mechanical, thermal, and surface properties of the final products were determined. The activation energy as a function of cure conversion derived from ICKA methods was utilized to accurately predict the thermal curing over the course of time for arbitrary cure conditions. Furthermore, the cure models were used to establish correlations between the cross-linking during processing and the properties of the final products. Therewith it was possible to derive the process time and temperature that guarantee optimal cross-linking as well as optimal product properties.
The presented wide-Vin step-down converter introduces a parallel-resonant converter (PRC), comprising an integrated 5-bit capacitor array and a 300 nH resonant coil, placed in parallel to a conventional buck converter. Unlike conventional resonant concepts, the implemented soft-switching control eliminates input-voltage-dependent losses over a wide operating range. This ensures high efficiency across a wide range of Vin = 12-48 V, 100-500 mA load and 5 V output at up to 15 MHz switching frequency. The peak efficiency of the converter is 76.3%. Thanks to the low output current ripple, the output capacitor can be as small as 50 nF, while the inductor tolerates a larger ESR, resulting in small component size. The proposed PRC architecture is also suitable for future power electronics applications using fast-switching GaN devices.
Saving energy and road safety have become increasingly important in recent decades; hence, several driving assistance systems have been developed that help to improve driving behaviour. However, these systems address either energy efficiency or safety. Furthermore, they consider neither the driver's reaction to a shown recommendation nor the driver's stress level. In this paper, the decision process for showing a recommendation to the driver in an energy-efficient and safety-relevant driving system is presented. The decision process takes the driver's reaction to a shown recommendation and the driver's stress into account in order to increase user acceptance and road safety. The evaluation showed that the driving system was able to show recommendations when needed, while suppressing recommendations when the driver ignored a recommendation repeatedly or when the driver was under stress.
More and more power electronics applications utilize GaN transistors as they enable higher switching frequencies in comparison to conventional Si devices. Faster switching shrinks the size of passives and enables compact solutions in applications like renewable energy, electric cars and home appliances. GaN transistors benefit from a ~10× smaller gate charge QG and gate drive voltages of typically 5 V vs. ~15 V for Si.
Modern power transistors are able to switch at very high transition speeds, which can cause EMC violations and overshoot. This is addressed by a gate driver with variable gate current, which is able to control the transition speed. The key idea is that the gate driver can influence the di/dt and dv/dt transitions separately and optimize whichever transition promises the highest improvement while keeping switching losses low. To account for changes in the load current, supply voltage, etc., a control loop is required in the driver to ensure optimized switching. In this paper, an efficient control scheme for an automotive gate driver with variable output current capability is presented. The effectiveness of the control loop is demonstrated for a MOSFET bridge consisting of OptiMOS-T2™ devices with a total gate charge of 39 nC. This bridge setup shows dv/dt transitions between 50 and 1000 ns, depending on the driving current. The driver is able to switch between gate current levels of 1 to 500 mA in 10/15 ns (rising/falling transition). With the implemented control loop, the driver is measured to significantly reduce the ringing, and thereby device stress and electromagnetic emissions, while keeping switching losses 52% lower than with a constant-current driver.
A concept for a slope-shaping gate driver IC is proposed, used to establish control over the slew rates of current and voltage during the turn-on and turn-off switching transients.
It combines the high speed and linearity of a fully integrated closed-loop analog gate driver, which is able to perform real-time regulation, with the advantages of digital control, such as flexibility and parameter independence, operating in a predictive cycle-by-cycle regulation. In this work, the analog gate driver integrated circuit is partitioned into functional blocks and modeled in the small-signal domain, which also includes the non-linearity of parameters. An analytical stability analysis has been performed in order to ensure full functionality of the system controlling a modern-generation IGBT and a superjunction MOSFET. Major parameters of influence, such as the gate resistor and the summing node capacitance, are investigated to achieve stable control. The large-signal behavior, investigated by simulations of a transistor-level design, verifies the correct operation of the circuit. Hence, the gate driver can be designed for robust operation.
In a digitally controlled slope-shaping system, reliable detection of both the voltage and the current slope is required to enable closed-loop control for various power switches independent of system parameters. In most state-of-the-art works, this is realized by monitoring the absolute voltage and current values. Better accuracy at lower DC power loss is achieved by sensing techniques for reliable passive detection, which avoid DC paths from the high-voltage network into the sensing network. Using a high-speed analog-to-digital converter, the whole waveform of the transient derivative can be stored digitally and prepared for a predictive cycle-by-cycle regulation, without requiring high-precision digital differentiation algorithms. To gain an accurate representation of the voltage and current derivative waveforms, system parasitics are investigated and classified in three sections: (1) component parasitics, which are identified by s-parameter measurements and extraction of equivalent circuit models, (2) PCB design issues related to the sensing circuit, and (3) interconnections between adjacent boards.
The contribution of this paper is an optimized sensing network on the basis of the experimental study, supporting fast transition slopes up to 100 V/ns and 1 A/ns and beyond, making the sensing technique attractive for slope shaping of fast switching devices like modern-generation IGBTs, CoolMOS™ and SiC MOSFETs. Measurements of the optimized dv/dt and di/dt setups are demonstrated for a hard-switched IGBT power stage.
Introducing continuous experimentation in large software-intensive product and service organisations
(2017)
Software development in highly dynamic environments imposes high risks to development organizations. One such risk is that the developed software may be of only little or no value to customers, wasting the invested development efforts. Continuous experimentation, as an experiment-driven development approach, may reduce such development risks by iteratively testing product and service assumptions that are critical to the success of the software. Although several experiment-driven development approaches are available, there is little guidance available on how to introduce continuous experimentation into an organization. This article presents a multiple-case study that aims at better understanding the process of introducing continuous experimentation into an organization with an already established development process. The results from the study show that companies are open to adopting such an approach and learning throughout the introduction process. Several benefits were obtained, such as reduced development efforts, deeper customer insights, and better support for development decisions. Challenges included complex stakeholder structures, difficulties in defining success criteria, and building experimentation skills. Our findings indicate that organizational factors may limit the benefits of experimentation. Moreover, introducing continuous experimentation requires fundamental changes in how companies operate, and a systematic introduction process can increase the chances of a successful start.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. The work focuses on the analysis of requirements and the challenges arising from designing mobile medical applications with respect to the user interface. The paper describes the current status in the development of mobile medical apps and illustrates the development of the e-health market. The author explains the requirements, illustrates the hurdles and problems, and refers to the German market, which is similar to the European one, comparing it with the market in the USA.
To assess the quality of a person's sleep, it is essential to examine the sleep behaviour by identifying the several sleep stages, their durations and sleep cycles. The established gold-standard procedure for sleep stage scoring is overnight polysomnography (PSG) with the Rechtschaffen and Kales (R-K) method. Unfortunately, conducting PSG is time-consuming and unfamiliar to the subjects and might have an impact on the recorded data. To avoid the disadvantages of PSG, it is important to make further investigations into low-cost home diagnostic systems. For this purpose, it is necessary to find vital parameters suitable for classifying sleep stages without causing any physical impairments. Due to the promising results in several publications, we analyse existing methods for sleep stage classification based on the parameters body movement, heartbeat and respiration. Our aim was to find different behaviour patterns in the several sleep stages. Therefore, the average values of 15 whole-night PSG recordings, obtained from the 'DREAMS Subjects Database', were analysed with respect to heartbeat, body movement and respiration using 10 different methods.
Sleep quality, and in general behavior in bed, can be detected using a sleep state analysis. These results can help a subject to regulate sleep and recognize different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison to the leading standard measuring system, polysomnography (PSG), the system proposed in this project is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides this fact, they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor and in a non-invasive way. The sensor used is a pressure sensor, which is low cost and can be used for commercial purposes. The system was tested in an experiment that recorded the sleep process of a subject. The recordings showed the potential for classification of breathing rate and body movements. Although previous research shows the use of pressure sensors in recognizing posture and breathing, the sensors have mostly been positioned between the mattress and the bedsheet. This project, however, shows an innovative way to position the sensors under the mattress.
To evaluate the quality of sleep, it is important to determine how much time was spent in each sleep stage during the night. The gold standard in this domain is an overnight polysomnography (PSG). But the recording of the necessary electrophysiological signals is extensive and complex and the environment of the sleep laboratory, which is unfamiliar to the patient, might lead to distorted results. In this paper, a sleep stage detection algorithm is proposed that uses only the heart rate signal, derived from electrocardiogram (ECG), as a discriminator. This would make it possible for sleep analysis to be performed at home, saving a lot of effort and money. From the heart rate, using the fast Fourier transformation (FFT), three parameters were calculated in order to distinguish between the different sleep stages. ECG data along with a hypnogram scored by professionals was used from Physionet database, making it easy to compare the results. With an agreement rate of 41.3%, this approach is a good foundation for future research.
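The FFT-based parameter extraction described above can be sketched as follows. The band edges and sampling rate are illustrative assumptions (the abstract does not specify them), not the values used in the paper:

```python
import numpy as np

def hr_band_powers(hr, fs=1.0):
    """Compute three spectral-band powers of a heart-rate series.

    hr: heart-rate samples (bpm), fs: sampling rate in Hz.
    The band edges below are illustrative placeholders only.
    """
    x = np.asarray(hr, dtype=float)
    x = x - x.mean()                        # remove DC so bands reflect variability
    spec = np.abs(np.fft.rfft(x)) ** 2      # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    bands = [(0.003, 0.04), (0.04, 0.15), (0.15, 0.4)]  # VLF/LF/HF-like bands
    return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
```

Such per-epoch band powers could then serve as the discriminating parameters for a sleep-stage classifier.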
Future assembly workplaces must meet changing challenges, such as the increasing number of human-robot collaborations. Virtual reality (VR) technology offers new possibilities in workplace design to meet these changed planning challenges. This paper presents a method for assessing whether the use of VR technology makes sense for a specific workplace. It also shows how VR technology can be integrated into the workplace design process.
46 percent of jobs in the automotive industry are threatened by automation and digitalization by 2030; these tasks will then no longer be performed by humans but by intelligent robots and systems. That is the central finding of our study "Digitale Transformation – Der Einfluss der Digitalisierung auf die Workforce in der Automobilindustrie", which we prepared together with the Herman Hollerith teaching and research center at Reutlingen University.
LDMOS transistors in integrated power technologies are often subject to thermo-mechanical stress, which degrades the on-chip metallization and eventually leads to a short. This paper investigates small sense lines embedded in the LDMOS metallization. It will be shown that their resistance depends strongly on the stress cycle number. Thus, they can be used as aging sensors and predict impending failures. Different test structures have been investigated to identify promising layout configurations. Such sensors are key components for resilient systems that adaptively reduce stress to allow aggressive LDMOS scaling without increasing the risk of failure.
A gate driver approach is presented for the reduction of turn-on losses in hard switching applications. A significant turn-on loss reduction of up to 55% has been observed for SiC MOSFETs. The gate driver approach uses a transformer which couples energy from the power path back into the gate path during switching events, providing increased gate driver current and thereby faster switching speed.
The gate driver approach was tested on a boost converter running at a switching frequency of up to 300 kHz. With an input voltage of 300 V and an output voltage of 600 V, it was possible to reduce the converter losses by 8% at full load. Moreover, the output power range could be extended by 23% (from 2.75 kW to 3.4 kW) due to the reduction of the turn-on losses.
The persistently high debt levels in some member states of the European Economic and Monetary Union continue to raise fears of sovereign insolvencies. To cope with the problems that have arisen, but also to prevent such a situation from occurring in the first place, the author considers a sovereign insolvency regime necessary, with bail-outs by the other member states only in emergencies. He proposes a resolution mechanism for over-indebted euro countries based on a 2016 concept of the German Council of Economic Experts.
IT platforms as the foundation of digitized processes and products are vital in a digital economy. However, many companies’ platforms are liabilities, not strategic assets because of their complexity. Consequently, companies initiate IT complexity reduction programs. But these technology-centric programs at best provide temporary relief. Soon after, companies’ platforms become just as complex as before. Based on four case studies, we identify three non-technical drivers of platform complexity: (1) Lacking awareness of consequences business decisions have on platform complexity, (2) Lacking motivation to avoid platform complexity, (3) Lacking authority to protect platforms from complexity. We propose measures to address these drivers that can help achieve more sustainable impact on platform complexity: (1) Removing information asymmetries between those creating complexity and those dealing with complexity, (2) Redefining incentives to include long-term effects on platform complexity, (3) Redressing power imbalances between those who create complexity and those who have to manage it.
Electric freight vehicles have the potential to mitigate local urban road freight transport emissions, but their numbers are still insignificant. Logistics companies often consider electric vehicles as too costly compared to vehicles powered by combustion engines. Research within the body of the current literature suggests that increasing the driven mileage can enhance the competitiveness of electric freight vehicles. In this paper we develop a numeric simulation approach to analyze the cost-optimal balance between a high utilization of medium-duty electric vehicles – which often have low operational costs – and the common requirement that their batteries will need expensive replacements. Our work relies on empirical findings of the real-world energy consumption from a large German field test with medium-duty electric vehicles. Our results suggest that increasing the range to the technical maximum by intermediate (quick) charging and multi-shift usage is not the most cost-efficient strategy in every case. A low daily mileage is more cost-efficient at high energy prices or consumptions, relative to diesel prices or consumptions, or if the battery is not safeguarded by a long warranty. In practical applications our model may help companies to choose the most suitable electric vehicle for the application purpose or the optimal trip length from a given set of options. For policymakers, our analysis provides insights on the relevant parameters that may either reduce the cost gap at lower daily mileages, or increase the utilization of medium-duty electric vehicles, in order to abate the negative impact of urban road freight transport on the environment.
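The trade-off described above, where fixed costs favor high mileage while battery wear adds a per-kilometre floor, can be illustrated with a minimal cost model. All parameter values (`energy_price`, `battery_life_km`, etc.) are hypothetical placeholders, not figures from the field test:

```python
def annual_cost_per_km(daily_km, energy_price=0.30, consumption=0.8,
                       battery_price=20000.0, battery_life_km=150000.0,
                       fixed_cost=8000.0, workdays=250):
    """Illustrative total cost per driven km (all values hypothetical):
    per-km energy cost + amortized battery replacement + spread fixed costs."""
    annual_km = daily_km * workdays
    energy = consumption * energy_price            # EUR per km for electricity
    battery = battery_price / battery_life_km      # EUR per km, amortized wear
    return energy + battery + fixed_cost / annual_km
```

Raising the daily mileage spreads the fixed costs over more kilometres, but the battery term sets a lower bound; with different (relative) energy and battery prices, the cost-optimal mileage shifts, which is the effect the simulation study analyzes.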
Though bioprinting is a forward-looking approach in bone tissue engineering, the development of bioinks which are on the one hand processable with the chosen printing technique and on the other hand possess the relevant mechanical as well as osteoconductive features remains a challenge. In the present study, polymer solutions based on methacrylated gelatin and methacrylated hyaluronic acid modified with hydroxyapatite (HAp) particles (5 wt%) were prepared. Encapsulation of primary human adipose-derived stem cells in the HAp-containing gels and culture for 28 d resulted in a storage modulus significantly increased to 126% ± 9.6% of the value on day 1 through the sole influence of the HAp. Additional use of osteogenic media components increased the storage modulus to up to 199% ± 27.8%. Similarly, the loss modulus was increased to 370% ± 122.1% under the influence of osteogenic media components and HAp. Those changes in rheological material characteristics indicate a distinct change in elastic and viscous hydrogel properties and are attributed to extensive matrix production in the hydrogels by the encapsulated cells, which was also demonstrated by staining of bone matrix components like collagen I, fibronectin, alkaline phosphatase and osteopontin. When the cell-laden polymer solutions were used as bioinks to build up relevant geometries, the ink showed excellent printability and the printed grid structure's integrity remained intact over a culture time of 28 d. Again, intense matrix formation as well as upregulation of osteogenic markers by the encapsulated cells could be shown.
In conclusion, we demonstrated that our HAp-containing bioinks and hydrogels based on methacrylated gelatin and hyaluronic acid are on the one hand highly suitable for building up relevant three-dimensional geometries with microextrusion bioprinting, and on the other hand exhibit a significant positive effect on bone matrix development and remodeling in the hydrogels, as indicated by rheological measurements and staining of bone components. This makes the developed composite hydrogels an excellent material for bone bioprinting approaches.
Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms, yet so far very few studies have investigated them. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation, and the influence of image point location uncertainty. It is shown that the latter is very critical, while the others all play important roles. Experiments with a well-known semi-dense visual SLAM approach, used in a monocular visual odometry mode, are also presented. The experiments show that not including sensor bias and scale factor uncertainty is very detrimental to the accuracy of the simulation results.
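The role of the baseline length in triangulation accuracy can be illustrated with a first-order error-propagation sketch. The stereo model and the numeric values (focal length, pixel noise) are illustrative assumptions, not taken from the paper's simulation model:

```python
def depth_error(baseline, depth, focal_px=500.0, pixel_sigma=0.5):
    """First-order depth uncertainty for stereo triangulation.

    From Z = f*b/d follows sigma_Z ~= (Z/d) * sigma_d = Z^2/(f*b) * sigma_d,
    so depth error grows quadratically with depth and shrinks
    inversely with the baseline. Parameter values are illustrative.
    """
    disparity = focal_px * baseline / depth   # disparity in pixels
    return depth / disparity * pixel_sigma    # propagate pixel noise into depth
```

This matches the qualitative finding that image point location uncertainty is critical: for a short baseline the disparity is small, so the same pixel noise produces a much larger depth error.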
In this work we investigate the behavior of MIS- and Schottky-gate AlGaN/GaN HEMTs under high-power pulse stress. A special setup capable of applying pulses of constant power is used to evaluate the electro-thermal response in different operating points. For both types of devices, the time to failure was found to decrease with increasing drain-source voltage. Overall, the Schottky-gate device displays a higher pulse robustness. The pulse withstand time of the MIS-gate device is limited by the occurrence of a thermal instability at approximately 240 °C, while the Schottky-gate device displays a rapid increase of the gate leakage current prior to failure. The mechanism responsible for this gate current was further investigated by static and transient temperature measurements, which yielded activation energies of 0.6 eV and 0.84 eV.
This paper studies whether a monetary union can be managed solely by a rule based approach. The Five Presidents’ Report of the European Union rejects this idea. It suggests a centralisation of powers. We analyse the philosophy of policy rules from the vantage point of the German economic school of thought. There is evidence that a monetary union consisting of sovereign states is well organised by rules, together with the principle of subsidiarity. The root cause of the euro crisis is rather the weak enforcement of rules, compounded by structural problems. Therefore, we suggest a genuine rule-based paradigm for a stable future of the Economic and Monetary Union.
Under update intensive workloads (TPC, LinkBench) small updates dominate the write behavior, e.g. 70% of all updates change less than 10 bytes across all TPC OLTP workloads. These are typically performed as in-place updates and result in random writes in page-granularity, causing major write-overhead on Flash storage, a write amplification of several hundred times and lower device longevity.
In this paper we propose an approach that transforms those small in-place updates into small update deltas that are appended to the original page. We utilize the commonly ignored fact that modern Flash memories (SLC, MLC, 3D NAND) can handle appends to already programmed physical pages by using various low-level techniques such as ISPP to avoid expensive erases and page migrations. Furthermore, we extend the traditional NSM page-layout with a delta-record area that can absorb those small updates. We propose a scheme to control the write behavior as well as the space allocation and sizing of database pages.
The proposed approach has been implemented under Shore-MT and evaluated on real Flash hardware (OpenSSD) and a Flash emulator. Compared to In-Page Logging it performs up to 62% less reads and writes and up to 74% less erases on a range of workloads. The experimental evaluation indicates: (i) significant reduction of erase operations resulting in twice the longevity of Flash devices under update-intensive workloads; (ii) 15%-60% lower read/write I/O latencies; (iii) up to 45% higher transactional throughput; (iv) 2x to 3x reduction in overall write amplification.
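The delta-record idea, where small updates are appended to a reserved area instead of rewriting the whole page, can be sketched with a toy page model. Class and field names are hypothetical and the model ignores Flash-level details such as ISPP and the FTL:

```python
class Page:
    """Toy NSM page with a delta-record area (illustrative names).

    Small updates are appended as (offset, bytes) deltas; readers replay
    them over the base image. Only when the delta area overflows is a full
    page rewrite needed, mimicking how in-place appends avoid erases.
    """
    def __init__(self, data: bytes, delta_capacity: int = 64):
        self.base = bytearray(data)
        self.deltas = []                       # appended update records
        self.delta_capacity = delta_capacity
        self.full_rewrites = 0                 # counts expensive page writes

    def update(self, offset: int, new_bytes: bytes):
        cost = len(new_bytes) + 4              # payload + header (illustrative)
        used = sum(len(b) + 4 for _, b in self.deltas)
        if used + cost <= self.delta_capacity:
            self.deltas.append((offset, new_bytes))  # cheap in-place append
        else:
            self.base = bytearray(self.read())  # fold deltas into base image
            self.base[offset:offset + len(new_bytes)] = new_bytes
            self.deltas.clear()
            self.full_rewrites += 1             # out-of-place page rewrite

    def read(self) -> bytes:
        img = bytearray(self.base)
        for off, b in self.deltas:              # replay deltas in order
            img[off:off + len(b)] = b
        return bytes(img)
```

In this sketch, a stream of small updates mostly costs appends; only the occasional overflow triggers the kind of page-granularity write that causes write amplification on Flash.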
In the present paper we demonstrate a novel technique applying the recently proposed approach of In-Place Appends (IPA), i.e. overwrites on Flash without a prior erase operation. IPA can be applied selectively: only to DB-objects that have frequent and relatively small updates. To do so we couple IPA to the concept of NoFTL regions, allowing the DBA to place update-intensive DB-objects into special IPA-enabled regions. The decision about region configuration can be (semi-)automated by an advisor analyzing DB-log files in the background.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware. During the demonstration we allow the users to interact with the system and gain hands-on experience under different demonstration scenarios.
In the present paper we demonstrate a novel approach to handling small updates on Flash called In-Place Appends (IPA). It allows the DBMS to revisit the traditional write behavior on Flash. Instead of writing whole database pages upon an update in an out-of-place manner on Flash, we transform those small updates into update deltas and append them to a reserved area on the very same physical Flash page. In doing so we utilize the commonly ignored fact that under certain conditions Flash memories can support in-place updates to Flash pages without a preceding erase operation.
The approach was implemented under Shore-MT and evaluated on real hardware. Under standard update-intensive workloads we observed 67% less page invalidations resulting in 80% lower garbage collection overhead, which yields a 45% increase in transactional throughput, while doubling Flash longevity at the same time. The IPA outperforms In-Page Logging (IPL) by more than 50%.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware, the OpenSSD Flash research platform. During the demonstration we allow the users to interact with the system and gain hands-on experience of its performance under different demonstration scenarios. These involve various workloads such as TPC-B, TPC-C or TATP.