Informatik
Year of publication: 2017 (88)
Document Type: Conference proceeding (72), Journal article (12), Working Paper (2), Book chapter (1), Patent / Standard / Guidelines (1)
Institute: Informatik (88)
Smart meter based business models for the electricity sector : a systematical literature research
(2017)
The Act on the Digitization of the Energy Transition forces German industries and households to introduce smart meters in order to save energy, to obtain individualized electricity tariffs, and to digitize the energy data flow. Smart meters can be regarded as the advancement of the traditional meter. Utilizing this new technology enables a wide range of innovative business models that provide additional value for electricity suppliers as well as for their customers. In this study, we followed a two-step approach. First, we provide a state-of-the-art comparison of the business models found in the literature and identify structural differences in the way they add value to the offered products and services. Second, the business models are grouped into categories with respect to customer segments and the added value to the smart grid. Findings indicate that most business models focus on the end-customer as their main customer.
Towards a practical maintainability quality model for service- and microservice-based systems
(2017)
Although current literature mentions a lot of different metrics related to the maintainability of service-based systems (SBSs), there is no comprehensive quality model (QM) with automatic evaluation and a practical focus. To fill this gap, we propose a Maintainability Model for Services (MM4S), a layered maintainability QM consisting of service properties (SPs) associated with automatically collectable service metrics (SMs). This research artifact, created within an ongoing Design Science Research (DSR) project, is the first version ready for detailed evaluation and critical feedback. The goal of MM4S is to serve as a simple and practical tool for basic maintainability estimation and control in the context of SBSs and their specialization, microservice-based systems (μSBSs).
In a time of digital transformation, the ability to quickly and efficiently adapt software systems to changed business requirements becomes more important than ever. Measuring the maintainability of software is therefore crucial for the long-term management of such products. With service-based systems (SBSs) being a very important form of enterprise software, we present a holistic overview of maintainability metrics specifically designed for this type of system, since traditional metrics – e.g. object-oriented ones – are not fully applicable in this case. The selected metric candidates from the literature review were mapped to four dominant design properties: size, complexity, coupling, and cohesion. Microservice-based systems (μSBSs) emerge as an agile and fine-grained variant of SBSs. While the majority of the identified metrics are also applicable to this specialization (with some limitations), the large number of services in combination with technological heterogeneity and decentralization of control significantly impacts automatic metric collection in such a system. Our research therefore suggests that specialized tool support is required to guarantee the practical applicability of the presented metrics to μSBSs.
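The abstract above names size, complexity, coupling, and cohesion as the dominant design properties. As a hedged illustration of what automatic collection of such structural metrics can look like in the simplest case – the dependency map, metric definitions, and service names below are invented for illustration, not taken from the study – consider:

```python
# Toy sketch of collecting two structural maintainability metrics for a
# service-based system from a service dependency map. The metric
# definitions are simplified assumptions, not the catalogued ones.
from typing import Dict, List

# Which services each service calls (hypothetical example system).
dependencies: Dict[str, List[str]] = {
    "order":     ["payment", "inventory", "shipping"],
    "payment":   ["fraud"],
    "inventory": [],
    "shipping":  ["inventory"],
    "fraud":     [],
}

def coupling(service: str) -> int:
    """Outgoing coupling: number of distinct services this service calls."""
    return len(set(dependencies.get(service, [])))

def size() -> int:
    """System size: total number of services."""
    return len(dependencies)

for s in sorted(dependencies):
    print(f"{s}: coupling={coupling(s)}")
print(f"system size: {size()} services")
```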
Anforderungen an die Mensch-Maschine-Schnittstelle im Automobil auf dem Weg zum autonomen Fahren
(2017)
In recent decades, more and more driver assistance systems have found their way into the automobile, thereby paving the way for the fully autonomous vehicles of the future. Many manufacturers already offer equipment variants of their vehicles that are prepared for the transition to a fully autonomous future. To take humans along on this path, several requirements are placed on the human-machine interface (HMI) of the automobile. For the semi-autonomous vehicles of the next generation, the handover between manual and autonomous driving must be designed as well as possible for the human driver. This work looks at selected approaches for future HMI systems and evaluates them on the basis of the handover times between human and machine. A transformation of the automotive HMI is recommended in order to familiarize humans with the new technologies.
Painting galleries typically provide a wealth of data composed of several data types. Such multivariate data are too complex for laypeople like museum visitors to first get an overview of all paintings and then look for specific categories. Ultimately, the goal is to guide visitors to a specific painting that they wish to look at more closely. In this paper we describe an interactive visualization tool that provides such an overview and lets people experiment with the more than 41,000 paintings collected in the Web Gallery of Art. To build such an interactive tool, our technique is composed of different steps: data handling, algorithmic transformations, visualizations, interactions, and the human user working with the tool with the goal of detecting insights in the provided data. We illustrate the usefulness of the visualization tool by applying it to this characteristic data set and show how one can get from an overview of all paintings to specific paintings.
Clinical reading centers provide expertise for consistent, centralized analysis of medical data gathered in a distributed context. Accordingly, appropriate software solutions are required for the involved communication and data management processes. In this work, an analysis of general requirements and essential architectural and software design considerations for reading center information systems is provided. The identified patterns have been applied to the implementation of the reading center platform which is currently operated at the Center of Ophthalmology of the University Hospital of Tübingen.
Data integration of heterogeneous data sources relies either on periodically transferring large amounts of data to a physical Data Warehouse or on retrieving data from the sources on request only. The latter results in the creation of what is referred to as a virtual Data Warehouse, which is preferable when the use of the latest data is paramount. The downside, however, is that it adds network traffic and suffers from performance degradation when the amount of data is high. In this paper, we propose the use of a readCheck validator to ensure the timeliness of the queried data and to reduce data traffic. It is further shown that the readCheck allows transactions to update data in the data sources while obeying full Atomicity, Consistency, Isolation, and Durability (ACID) properties.
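The abstract does not spell out the validator's mechanics, so the following is only a minimal sketch of the general idea – re-reading a source only when a cached result may be stale – with all names (`SourceRelation`, `read_check`) and the staleness rule being assumptions:

```python
# Hypothetical sketch of a readCheck-style timeliness validation for a
# virtual Data Warehouse; names and the staleness rule are assumptions,
# not the paper's actual protocol.
import time

class SourceRelation:
    """A relation in a remote data source with a last-modification timestamp."""
    def __init__(self, name: str):
        self.name = name
        self.last_modified = time.time()
        self.cached_rows = None
        self.cached_at = None

    def fetch(self):
        # Placeholder for an actual remote query.
        self.cached_rows = [("row",)]
        self.cached_at = time.time()
        return self.cached_rows

def read_check(relation: SourceRelation) -> list:
    """Return cached rows only if still current; otherwise re-read the source."""
    if relation.cached_at is not None and relation.cached_at >= relation.last_modified:
        return relation.cached_rows   # cache is current: no network traffic
    return relation.fetch()           # stale or empty: one remote round trip
```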
Integrated circuits (ICs) are an integral part of many devices such as smartphones, computers, and televisions. More and more functions are being integrated on these circuits. In order to be able to cope with this work within the given time in the future, a means for the simultaneous collaboration of developers is therefore required. Under the working title eCEDA (eCollaboration for Electronic Design Automation), a concept for a web application is being developed that is intended to enable real-time collaboration of developers in chip design. This concept, as well as various aspects of collaboration, is covered in this work.
The increasing number of connected mobile devices such as fitness trackers and smartphones generates new data for health insurers, enabling them to gain deeper insights into the health of their customers. These additional data sources plus the trend towards an interconnected health community, including doctors, hospitals, and insurers, lead to challenges regarding data filtering, organization, and dissemination. First, we analyze what kind of information is relevant for a digital health insurance. Second, functional and non-functional requirements for storing and managing health data in an interconnected environment are defined. Third, we propose a data architecture for a digitized health insurance, consisting of a data model and an application architecture.
Steadily growing research material in a variety of databases, repositories, and clouds makes academic content harder than ever to discover. Finding adequate material for one's own research, however, is essential for every researcher. Based on recent developments in the field of artificial intelligence and the identified digital capabilities of future universities, a change in the basic work of academic research is predicted. This study outlines how artificial intelligence could simplify academic research at a digital university. Today's studies in the field of AI showcase its true potential and its commanding impact on academic research.
Software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. Educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. Industry training has similar requirements of relevance as companies seek to keep their workforce up to date with technological advances. Real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. A way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. In this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. We give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers in including empirical studies in software engineering courses. Furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to the awareness of the advantages of applying software engineering principles. Having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life.
Context: Development of software intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want by direct customer feedback and usage behaviour observation. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and the integration of experiment results in both the product development cycle and the software development process.
Thematic issue on human-centred ambient intelligence: cognitive approaches, reasoning and learning
(2017)
This editorial presents advances in human-centred Ambient Intelligence applications which take into account cognitive issues when modelling users (e.g. stress, attention disorders), and which learn users' activities and preferences and adapt to them (e.g. at home, while driving a car). These papers also show AmI applications in health and education, which makes them even more valuable for society at large.
A heavily researched area of computer vision is the detection of salient facial feature points (facial feature detection), such as the corners of the mouth or the chin. Accordingly, a large number of published methods can be found, which, however, differ considerably in terms of detection accuracy, robustness, and speed. Many methods are only conditionally capable of real-time operation or deliver satisfactory results only with high-resolution image sources. In recent years, methods have therefore been developed that attempt to solve these problems. This work examines three of these state-of-the-art methods – Constrained Local Neural Fields (CLNF), Discriminative Response Map Fitting (DRMF), and Structured Output SVM (SO SVM) – as well as their implementations, and provides an empirical comparison of their detection accuracy.
The use of technical aids for analysis purposes in sports has become an established part of the daily training routine of coaches and athletes. In almost every sport, video recordings are used to document and analyze the execution of movements. However, recordings from a single static position are often no longer sufficient. This is where virtual reality (VR) can offer a solution: VR adds another layer to the recorded scene, allowing movement sequences to be assessed anew and in greater detail. To reproduce movements in a virtual environment, they must be recorded by means of motion capturing (MoCap). The aim of this work is to find out whether the MoCap system Perception Neuron is capable of capturing movements at high speed.
Sleep quality and, more generally, behavior in bed can be assessed using sleep state analysis. These results can help a subject to regulate sleep and recognize different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison to the leading standard measuring system, polysomnography (PSG), the system proposed in this project is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides this fact, they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor, and also in a non-invasive way. The sensor used is a pressure sensor. This sensor is low cost and can be used for commercial purposes. The system was tested by carrying out an experiment that recorded the sleep process of a subject. These recordings showed the potential for classification of breathing rate and body movements. Although previous research shows the use of pressure sensors in recognizing posture and breathing, they have mostly been used by positioning the sensors between the mattress and the bedsheet. This project, however, shows an innovative way to position the sensors under the mattress.
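As a rough illustration of how a breathing rate could be derived from such a pressure signal, the following sketch applies simple peak detection; the sampling rate, detrending window, and peak spacing are illustrative assumptions, not values from the paper:

```python
# Minimal sketch: estimating breathing rate from a raw pressure signal by
# peak detection. Sampling rate, smoothing window, and peak distance are
# illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def breathing_rate_bpm(pressure: np.ndarray, fs: float = 10.0) -> float:
    """Estimate breaths per minute from a pressure signal sampled at fs Hz."""
    # Remove slow baseline drift with a moving average.
    window = int(5 * fs)
    baseline = np.convolve(pressure, np.ones(window) / window, mode="same")
    detrended = pressure - baseline
    # Resting respiration is roughly 0.1-0.5 Hz, so peaks are at least 2 s apart.
    peaks, _ = find_peaks(detrended, distance=int(2 * fs))
    duration_min = len(pressure) / fs / 60.0
    return len(peaks) / duration_min
```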
To analyze human sleep, it is necessary to identify the sleep stages occurring during sleep, their durations, and the sleep cycles. The gold standard procedure for this is polysomnography (PSG), which classifies the sleep stages based on the Rechtschaffen and Kales (R-K) method. Apart from advantages such as high accuracy, this method has some disadvantages; among others, the procedure is time-consuming and uncomfortable for the patient. Therefore, the development of further methods for sleep classification in addition to PSG is a promising topic for investigation, and this work aims to present possible approaches and goals for this development.
Asymmetric read/write storage technologies such as Flash are becoming a dominant trend in modern database systems. They introduce hardware characteristics and properties which are fundamentally different from those of traditional storage technologies such as HDDs.

Multi-Versioning Database Management Systems (MV-DBMSs) and Log-based Storage Managers (LbSMs) are concepts that can effectively address the properties of these storage technologies, but they are designed for the characteristics of legacy hardware. A critical component of MV-DBMSs is the invalidation model: commonly, transactional timestamps are assigned to the old and the new version, resulting in two independent (physical) update operations. Those entail multiple random writes as well as in-place updates, which are sub-optimal for new storage technologies both in terms of performance and endurance. Traditional page-append LbSM approaches alleviate random writes and immediate in-place updates, hence reducing the negative impact of the Flash read/write asymmetry. Nevertheless, they entail significant mapping overhead, leading to write amplification.

In this work we present an approach called Snapshot Isolation Append Storage Chains (SIAS-Chains) that employs a combination of multi-versioning, append storage management in tuple granularity, and a novel singly-linked (chain-like) version organization. SIAS-Chains features simplified buffer management and multi-version indexing, and introduces read/write optimizations to data placement on modern storage media. SIAS-Chains algorithmically avoids small in-place updates, caused by in-place invalidation, and converts them into appends. Every modification operation is executed as an append, and recently inserted tuple versions are co-located.
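To make the chain idea concrete, the following is a hedged sketch of singly-linked tuple versions over an append-only store; the field names and visibility rule are simplifications, not the actual SIAS-Chains algorithm:

```python
# Illustrative sketch of tuple versions organized as a singly-linked chain
# over an append-only log. Field names and the visibility rule are
# simplified assumptions, not the actual SIAS-Chains design.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TupleVersion:
    tuple_id: int
    value: str
    created_ts: int            # timestamp of the creating transaction
    prev_slot: Optional[int]   # link to the predecessor version in the log

log: list = []                 # append-only storage: no in-place updates
head: dict = {}                # tuple_id -> log slot of the newest version

def update(tuple_id: int, value: str, ts: int) -> None:
    """Every modification is an append; the old version is never touched."""
    log.append(TupleVersion(tuple_id, value, ts, head.get(tuple_id)))
    head[tuple_id] = len(log) - 1

def read(tuple_id: int, snapshot_ts: int) -> Optional[str]:
    """Walk the chain from the newest version to the one visible at snapshot_ts."""
    slot = head.get(tuple_id)
    while slot is not None:
        v = log[slot]
        if v.created_ts <= snapshot_ts:
            return v.value
        slot = v.prev_slot
    return None
```

The detail the abstract emphasizes is that the old version requires no explicit invalidation write: its supersession is implied by the newer version linking back to it, so every modification remains a single append.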
Software startups often make assumptions about the problems and customers they are addressing as well as the market and the solutions they are developing. Testing the right assumptions early is a means to mitigate risks. Approaches such as Lean Startup foster this kind of testing by applying experimentation as part of a constant build-measure-learn feedback loop. The existing research on how software startups approach experimentation is very limited. In this study, we focus on understanding how software startups approach experimentation and identify challenges and advantages with respect to conducting experiments. To achieve this, we conducted a qualitative interview study. The initial results show that startups often spend a disproportionate amount of time focusing on creating solutions without testing critical assumptions. The main reasons are a lack of awareness that these assumptions can be tested early, and a lack of knowledge and support on how to identify, prioritize, and test them. However, startups understand the need for testing risky assumptions and are open to conducting experiments.
As part of this academic in-depth study, IT risk management is to be evaluated on the basis of existing approaches. The question of the extent to which IT risk management can provide assistance to a company is addressed and subsequently illustrated by means of two case studies.
In the present paper we demonstrate a novel approach to handling small updates on Flash called In-Place Appends (IPA). It allows the DBMS to revisit the traditional write behavior on Flash. Instead of writing whole database pages upon an update in an out-of-place manner on Flash, we transform those small updates into update deltas and append them to a reserved area on the very same physical Flash page. In doing so we utilize the commonly ignored fact that under certain conditions Flash memories can support in-place updates to Flash pages without a preceding erase operation.
The approach was implemented under Shore-MT and evaluated on real hardware. Under standard update-intensive workloads we observed 67% fewer page invalidations, resulting in 80% lower garbage collection overhead, which yields a 45% increase in transactional throughput while doubling Flash longevity at the same time. IPA outperforms In-Page Logging (IPL) by more than 50%.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware – the OpenSSD Flash research platform. During the demonstration we allow the users to interact with the system and gain hands-on experience of its performance under different demonstration scenarios. These involve various workloads such as TPC-B, TPC-C, or TATP.
In the present paper we demonstrate a novel technique for applying the recently proposed approach of In-Place Appends (IPA) – overwrites on Flash without a prior erase operation. IPA can be applied selectively: only to DB-objects that have frequent and relatively small updates. To do so, we couple IPA to the concept of NoFTL regions, allowing the DBA to place update-intensive DB-objects into special IPA-enabled regions. The decision about the region configuration can be (semi-)automated by an advisor analyzing DB log files in the background.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware. During the demonstration we allow the users to interact with the system and gain hands-on experience under different demonstration scenarios.
Under update intensive workloads (TPC, LinkBench) small updates dominate the write behavior, e.g. 70% of all updates change less than 10 bytes across all TPC OLTP workloads. These are typically performed as in-place updates and result in random writes in page-granularity, causing major write-overhead on Flash storage, a write amplification of several hundred times and lower device longevity.
In this paper we propose an approach that transforms those small in-place updates into small update deltas that are appended to the original page. We utilize the commonly ignored fact that modern Flash memories (SLC, MLC, 3D NAND) can handle appends to already programmed physical pages by using various low-level techniques such as ISPP to avoid expensive erases and page migrations. Furthermore, we extend the traditional NSM page-layout with a delta-record area that can absorb those small updates. We propose a scheme to control the write behavior as well as the space allocation and sizing of database pages.
The proposed approach has been implemented under Shore-MT and evaluated on real Flash hardware (OpenSSD) and a Flash emulator. Compared to In-Page Logging it performs up to 62% fewer reads and writes and up to 74% fewer erases on a range of workloads. The experimental evaluation indicates: (i) a significant reduction of erase operations, resulting in twice the longevity of Flash devices under update-intensive workloads; (ii) 15%-60% lower read/write I/O latencies; (iii) up to 45% higher transactional throughput; (iv) 2x to 3x reduction in overall write amplification.
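As a rough sketch of the page-layout idea – an NSM page extended with a delta-record area that absorbs small updates as appends – the following simplification may help; sizes, the record format, and the fallback policy are assumptions, not the paper's actual scheme:

```python
# Simplified sketch of a page with a reserved delta-record area: small
# updates are appended as deltas instead of rewriting the whole page.
# Sizes and record layout are illustrative assumptions.

PAGE_SIZE = 8192
DELTA_AREA = 1024   # reserved tail of the page for update deltas

class Page:
    def __init__(self):
        self.records = {}      # slot -> record payload (bytes)
        self.deltas = []       # (slot, offset, new bytes)
        self.delta_bytes = 0

    def insert(self, slot: int, payload: bytes) -> None:
        self.records[slot] = payload

    def small_update(self, slot: int, offset: int, data: bytes) -> bool:
        """Append an update delta in-page; signal fallback if the area is full."""
        if self.delta_bytes + len(data) > DELTA_AREA:
            return False   # delta area exhausted: page must be rewritten out-of-place
        self.deltas.append((slot, offset, data))
        self.delta_bytes += len(data)
        return True

    def read(self, slot: int) -> bytes:
        """Reconstruct the current record by replaying its deltas."""
        rec = bytearray(self.records[slot])
        for s, off, data in self.deltas:
            if s == slot:
                rec[off:off + len(data)] = data
        return bytes(rec)
```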
In this paper we build on our research in data management on native Flash storage. In particular, we demonstrate the advantages of intelligent data placement strategies. To effectively manage physical Flash space and organize the data on it, we utilize novel storage structures such as regions and groups. These are coupled to common DBMS logical structures and thus require no extra overhead for the DBA. The experimental results indicate an improvement of up to 2x, which doubles the longevity of the Flash SSD. During the demonstration the audience can experience the advantages of the proposed approach on real Flash hardware.
Incubators in multinational corporations : development of a corporate incubator operator model
(2017)
This paper analyzes the components of a corporate incubator operator model in multinational companies. Three relevant phases were identified: pre-incubation, incubation, and exit. Each phase contains different criteria that represent critical success factors for a corporate incubator, based on theoretical findings and lessons learned from practice. During the pre-incubation phase, companies should define their need for a corporate incubator, the origin of ideas, and the selection criteria for incubator tenants. The actual incubation phase refers to the incubator program, which should be flexible with respect to each tenant. Furthermore, resource allocation plays an important role during the incubator program. Exit options after a successful incubation differ according to internal ideas and external start-ups, as well as the objective of the incubator. The research is based on a comprehensive screening of the existing incubator literature and a qualitative content analysis of statements from eight experts of international corporate incubators.
The digital transformation of the automotive industry has a significant impact on how development processes need to be organized in the future. Dynamic market and technological environments require capabilities to react to changes and to learn fast. Agile methods are a promising approach to address these needs, but they are not tailored to the specific characteristics of the automotive domain such as product line development. Although there have been efforts to apply agile methods in the automotive domain for many years, significant and widespread adoption has not yet taken place. The goal of this literature review is to gain an overview and a better understanding of agile methods for embedded software development in the automotive domain, especially with respect to product line development. A mapping study was conducted to analyze the relation between agile software development, embedded software development in the automotive domain, and software product line development. Three research questions were defined and 68 papers were evaluated. The study shows that agile and product line development approaches tailored for the automotive domain are not yet fully explored in the literature. In particular, literature on the combination of agile and product line development is rare. Most of the examined combinations are customizations of generic approaches or approaches stemming from other domains. Although only few approaches for combining agile and software product line development in the automotive domain were found, these findings were valuable for identifying research gaps and provide insights into how existing approaches can be combined, extended, and tailored to suit the characteristics of the automotive domain.
Context: The current situation and future scenarios of the automotive domain require a new strategy to develop high-quality software at a fast pace. In the automotive domain, it is assumed that a combination of agile development practices and software product lines is beneficial in order to be capable of handling a high frequency of improvements. This assumption is based on the understanding that agile methods introduce more flexibility in short development intervals, while software product lines help to manage the high number of variants and to improve quality through the reuse of software in long-term development.
Goal: This study derives a better understanding of the expected benefits of such a combination. Furthermore, it identifies the automotive-specific challenges that prevent the adoption of agile methods within a software product line.
Method: A survey based on 16 semi-structured interviews from the automotive domain, an internal workshop with 40 participants, and a discussion round at the ESE Congress 2016. The results are analyzed by means of thematic coding.
The ability to develop and deploy high-quality software at high speed is increasingly relevant for the competitiveness of car manufacturers. Agile practices have shown benefits such as faster time to market in several application domains. Therefore, it seems promising to carefully adopt agile practices in the automotive domain as well. This article presents findings from an interview-based qualitative survey. It aims at understanding the perceived forces that support agile adoption. In particular, it focuses on embedded software development for electronic control units in the automotive domain.
Most of the touch surfaces found in everyday life today are realized using complex and cost-intensive technologies. Especially for the application scenario of a touch floor, where an exceptionally large touch surface is usually desired, more cost-effective implementation options are sought. This paper serves as a starting point for the implementation of a low-cost touch floor intended to support the collaborative work of a project team. Based on an analysis of the state of the art in touch technologies and a subsequent evaluation, the touch technology best suited for realizing this low-cost touch floor is derived. The evaluation shows that optical touch technologies in particular, especially vision-based ones, are suitable for implementing large, cost-effective touch surfaces.
In times of dynamic markets, enterprises have to be agile to be able to quickly react to market influences. Due to the increasing digitization of products, the enterprise IT is often affected when business models change. Enterprise Architecture Management (EAM) targets a holistic view of the enterprise's IT and its relations to the business. However, Enterprise Architectures (EAs) are complex structures consisting of many layers, artifacts, and relationships between them. Thus, analyzing EAs is a very complex task for stakeholders. Visualizations are common vehicles to support analysis, yet in practice visualization capabilities lack flexibility and interactivity. A solution to improve the support of stakeholders in analyzing EAs might be the application of visual analytics. Starting from a systematic literature review, this article investigates the features of visual analytics relevant in the context of EAM.
In medicine, various maturity models exist that can support the digitization of hospitals. The requirements for a maturity model for this purpose comprise aspects from both general and specific areas of the hospital. An analysis of the maturity models HIN, CCMM, EMRAM, and O-EMRAM reveals large gaps in the area of the operating room as well as missing aspects in the emergency department. No comprehensive maturity model was found. A combination of HIN and CCMM could cover almost all areas sufficiently. Additional supplements from specialized maturity models, or even the development of a comprehensive maturity model, would be worthwhile.
Industrie 4.0 enables the individual production of small batch sizes at low cost. To this end, all production facilities must be networked with one another in order to exchange data and communicate. This networking can give rise to new risks and threats. In this work, IT security in Industrie 4.0 is evaluated on the basis of possible threat scenarios, challenges, and countermeasures. It is examined which options industrial companies have for preventing hacker attacks and whether already established security concepts can simply be adopted for industrial plants.
46 percent of the jobs in the automotive industry are threatened by automation and digitization by 2030 – the tasks will then no longer be performed by humans but by intelligent robots and systems. This is the central finding of our study "Digitale Transformation – Der Einfluss der Digitalisierung auf die Workforce in der Automobilindustrie", which we prepared together with the Herman Hollerith Lehr- und Forschungszentrum at Hochschule Reutlingen.
With the Internet of Things (IoT) being one of the most discussed trends in the computer world lately, many organizations find themselves struggling with the great paradigm shift and thus the implementation of IoT on a strategic level. The Ignite methodology, as part of the Enterprise-IoT project, promises to support organizations with these strategic issues, as it combines best practices with expert knowledge from diverse industries, helping to create a better understanding of how to transform into an IoT-driven business. A framework introduced within the context of IoT business model development is the Bosch IoT Business Model Builder. In this study, the provided framework is compared to the Osterwalder Business Model Canvas and the St. Gallen Business Model Navigator, the most commonly used and referenced frameworks according to a quantitative literature analysis.
A sleep study is a test used to diagnose sleep disorders and is usually done in sleep laboratories. The gold standard for the evaluation of sleep is overnight polysomnography (PSG). Unfortunately, in-lab sleep studies are expensive and complex procedures. Furthermore, with a minimum of 22 wire attachments to the patient for sleep recording, this medical procedure is invasive and unfamiliar to the subjects. To solve this problem, low-cost home diagnostic systems based on non-invasive recording methods require further research.
To this end, it is important to find suitable vital parameters for classifying the sleep phases WAKE, REM, light sleep, and deep sleep without causing any physical impairment. We decided to analyse body movement (BM), respiration rate (RR), and heart rate variability (HRV) from existing sleep recordings in order to develop an algorithm which is able to classify the sleep phases automatically. The preliminary results of this project show that BM, RR, and HRV are suitable for identifying the WAKE, REM, and NREM stages.
To assess the quality of a person's sleep, it is essential to examine the sleep behaviour by identifying the individual sleep stages, their durations, and the sleep cycles. The established gold standard procedure for sleep stage scoring is overnight polysomnography (PSG) with the Rechtschaffen and Kales (R-K) method. Unfortunately, the conduct of PSG is time-consuming and unfamiliar to the subjects and might have an impact on the recorded data. To avoid the disadvantages of PSG, it is important to investigate low-cost home diagnostic systems further. For this purpose, it is necessary to find suitable vital parameters for classifying sleep stages without causing any physical impairment. Due to the promising results in several publications, we analyse existing methods for sleep stage classification based on the parameters body movement, heartbeat, and respiration. Our aim was to find different behaviour patterns in the individual sleep stages. Therefore, the average values of 15 whole-night PSG recordings – obtained from the 'DREAMS Subjects Database' – were analysed in the light of heartbeat, body movement, and respiration with 10 different methods.
Managing decentralized corporate energy systems is a challenging task for enterprises. However, the integration of energy objectives into business strategy creates difficulties resulting in inefficient decisions. To improve this, practice-proven methods such as the balanced scorecard and enterprise architecture management are transferred to the energy domain. The methods are evaluated based on a case study. Managing multi-dimensionality and high complexity are the main drivers for an effective and efficient energy management system. Both methods show a positive impact on managing decentralized corporate energy systems and are adaptable to the energy domain.
Digitization in the energy sector is a necessity to enable energy savings and energy efficiency potentials. Managing decentralized corporate energy systems is hindered by the absence of established management methods. The required integration of energy objectives into business strategy creates difficulties, resulting in inefficient decisions. To improve this, practice-proven methods such as the Balanced Scorecard, Enterprise Architecture Management, and the Value Network approach are transferred to the energy domain. The methods are evaluated based on a case study. Managing multi-dimensionality, high complexity, and multiple actors are the main drivers for an effective and efficient energy management system. The underlying basis for gaining the positive impacts of these methods on decentralized corporate energy systems is the digitization of energy data and processes.
Software and system development is complex and diverse, and a multitude of development approaches is used and combined with each other to address the manifold challenges companies face today. To study the current state of the practice and to build a sound understanding of the utility of different development approaches and their application to modern software system development, in 2016, we launched the HELENA initiative. This paper introduces the 2nd HELENA workshop and provides an overview of the current project state. In the workshop, six teams present initial findings from their regions, impulse talks are given, and further steps of the HELENA roadmap are discussed.
First International Workshop on Hybrid dEveLopmENt Approaches in Software Systems Development
(2017)
A software process is the game plan to organize project teams and run projects. Yet, it is still a challenge to select the appropriate development approach for the respective context. A multitude of development approaches compete for the users' favor, but there is no silver bullet serving all possible setups. Moreover, recent research as well as experience from practice shows companies utilizing different development approaches to assemble the best-fitting approach for the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The first HELENA workshop aims to bring together the community to discuss recent findings and to steer future work.
Software and system development faces numerous challenges of rapidly changing markets. To address such challenges, companies and projects design and adopt specific development approaches by combining well-structured comprehensive methods and flexible agile practices. Yet, the number of methods and practices is large, and available studies argue that the actual process composition is carried out in a fairly ad-hoc manner. The present paper reports on a survey on hybrid software development approaches. We study which approaches are used in practice, how different approaches are combined, and what contextual factors influence the use and combination of hybrid software development approaches. Our results from 69 study participants show a variety of development approaches used and combined in practice. We show that most combinations follow a pattern in which a traditional process model serves as framework in which several fine-grained (agile) practices are plugged in. We further show that hybrid software development approaches are independent from the company size and external triggers. We conclude that such approaches are the results of a natural process evolution, which is mainly driven by experience, learning, and pragmatism.
Rapid prototyping platforms reduce development time: an idea can quickly be tested in the form of a prototype, leaving more time for the actual application development with user interfaces. This approach has long been pursued with technical platforms such as the Arduino. To transfer this form of prototyping to wearables, this paper presents WearIT. As a wearable prototyping platform, WearIT consists of four components: a vest, sensor and actuator shields, a dedicated library, and a mainboard comprising an Arduino, a Raspberry Pi, a breadboard, and a GPS module. As a result, a wearable prototype can be developed quickly by attaching sensor and actuator shields to the WearIT vest. These shields can then be programmed through the WearIT library. To this end, the screen contents of the Raspberry Pi can be accessed from a remote computer via Virtual Network Computing (VNC) and the Arduino can be programmed.
Using measurement and simulation for understanding distributed development processes in the Cloud
(2017)
Organizations increasingly develop software in a distributed manner. The Cloud provides an environment to create and maintain software-based products and services. Currently, it is widely unknown which software processes are suited for Cloud-based development and what their effects in specific contexts are. This paper presents a process simulation to study distributed development in the Cloud. We contribute a simulation model, which helps analyze different project parameters and their impact on projects carried out in the Cloud. The simulator helps reproduce activities, developers, issues, and events in the project, and it generates statistics, e.g., on throughput, total time, and lead and cycle time. The aim of this simulation model is thus to analyze the trade-offs regarding throughput, total time, project size, and team size. Furthermore, the modified simulation model aims to help project managers select the most suitable planning alternative. Based on observed projects in Finland and Spain, we simulated a distributed project using artificial and real data. In particular, we studied the variables project size, team size, throughput, and total project duration. A comparison of the real project data with the results obtained from the simulation shows that the simulation produces results close to the real data, and we could successfully replicate a distributed software project. By improving the understanding of distributed development processes, our simulation model thus supports project managers in their decision-making.
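As a toy analogue of such a process simulation – issues flow through a team and the run reports throughput, lead time, and total time – the following sketch uses invented arrival and effort distributions and is not the paper's simulation model:

```python
# Toy process simulation: issues arrive, developers pick them up, and the
# run reports throughput and lead time. All parameters and distributions
# are invented for illustration.
import random

def simulate(num_issues=100, team_size=4, mean_effort_h=6.0, seed=1):
    random.seed(seed)
    free_at = [0.0] * team_size   # hour at which each developer becomes free
    lead_times = []
    arrival = 0.0
    for _ in range(num_issues):
        arrival += random.expovariate(1.0)           # ~1 new issue per hour
        effort = random.expovariate(1.0 / mean_effort_h)
        dev = min(range(team_size), key=free_at.__getitem__)
        start = max(arrival, free_at[dev])           # wait until a developer is free
        free_at[dev] = start + effort
        lead_times.append(free_at[dev] - arrival)    # waiting + processing time
    total = max(free_at)
    return {"throughput_per_h": num_issues / total,
            "avg_lead_time_h": sum(lead_times) / len(lead_times),
            "total_time_h": total}

print(simulate())
```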
In this academic in-depth study, different empirical research methods are discussed.
In a first step, the foundations of empirical research methods are identified and classified. After the classification of the research methods, two of them are applied: the quantitative and the qualitative research method.
These research methods are used during the analysis phase to determine the worldwide current state at the subsidiaries. The subject is the analysis of the import and export process at the subsidiaries of HUGO BOSS AG. The goal is to use the results for the master's thesis. Based on the results, commonalities as well as deviations in the customs clearance process can be identified, which will later be taken into account in the concept design.
For this purpose, the qualitative method is used, which provides the basis for the design of the survey. Finally, the qualitative method was used for the interviews to verify the results.
Scheduled flexibility and individualization of knowledge transfer in foundations of computer science
(2017)
The opening of the German higher education system for new target groups involves a more heterogeneous composition of students than ever before and confronts universities with new challenges. Due to different educational biographies, students do not show a homogeneous level of knowledge. Furthermore, their access to course content and their individual learning methods are very diverse. The existing lack of knowledge and the very unequal study speed have a significant influence on learning behavior and learning motivation. During the first semesters, the dropout rate is appreciably higher. The reform project gives an overview of a didactic restructuring from a formerly conventional teaching and learning concept to a stronger combination of digital offers with classical lectures in the basic modules of computer science. The teaching content is adjusted to individual requirements and knowledge. Students with different previous knowledge get the possibility to increase their knowledge at different levels of abstraction. The aim of the reform project is to point out the possibilities as well as the challenges of digitization in higher education. At the same time, the question is explored to what extent accompanied, self-directed learning at one's own pace and individual depth of knowledge has a positive impact on a learner's motivation and study success.
The business landscape is changing radically because of software. Companies in all industry sectors are continuously finding new flexibilities in this programmable world. They are able to deliver new functionalities even after the product is already in the customer's hands. But success is far from guaranteed if they cannot validate their assumptions about what their customers actually need. A competitor with better knowledge of customer needs can disrupt the market in an instant.
This book introduces continuous experimentation, an approach to continuously and systematically test assumptions about the company's product or service strategy and verify customers' needs through experiments. By observing how customers actually use the product or early versions of it, companies can make better development decisions and avoid potentially expensive and wasteful activities. The book explains the cycle of continuous experimentation, demonstrates its use through industry cases, provides advice on how to conduct experiments with recipes, tools, and models, and lists some common pitfalls to avoid. Use it to get started with continuous experimentation and make better product and service development decisions that are in line with your customers' needs.
Automated analysis of review data deals with the possibilities of analyzing free text and extracting relevant information from it. The work engages with methods of unsupervised learning, with topic modelling at its center. Methods that are well known in the field of text-based information retrieval are considered: Latent Semantic Indexing (LSI), probabilistic LSI (pLSI), and Latent Dirichlet Allocation (LDA) are explained and compared. The work shows how LDA was used to obtain an overview of the content of a corpus of one million reviews and to examine it at a finer level of detail. The topic-based analysis is used to generate insights for an opinion mining system that will perform a deeper analysis. The entire process is designed to be fully automated and unsupervised.
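As a minimal illustration of LDA-based topic modelling on review text (using scikit-learn; the example corpus and parameters are invented, and the study's own tooling is not specified in the abstract):

```python
# Minimal sketch of LDA topic modelling on review texts with scikit-learn;
# corpus, topic count, and vectorizer settings are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "great battery life and solid build quality",
    "battery died after a week, poor quality",
    "fast shipping, item arrived well packaged",
]

# Bag-of-words representation of the corpus.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

# Fit an LDA model with a small number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per topic as a coarse content overview.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```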
Due to rapidly changing technologies and business contexts, many products and services are developed under high uncertainties. It is often impossible to predict customer behaviors and outcomes upfront. Therefore, product and service developers must continuously find out what customers want, requiring a more experimental mode of management and appropriate support for continuously conducting experiments. We have analytically derived an initial model for continuous experimentation from prior work and matched it against empirical case study findings from two startup companies. We examined the preconditions for setting up an experimentation system for continuous customer experiments. The resulting RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing) illustrates the building blocks required for such a system and the necessary infrastructure. The major findings are that a suitable experimentation system requires the ability to design, manage, and conduct experiments, create so-called minimum viable products or features, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and integration of experiment results in the product development cycle, software development process, and business strategy. This summary refers to the article The RIGHT Model for Continuous Experimentation, published in the Journal of Systems and Software [Fa17].