005 Computer programming, programs, data
While several service-based maintainability metrics have been proposed in the scientific literature, reliable approaches to collect these metrics automatically are lacking. Since static analysis is complicated for decentralized and technologically diverse microservice-based systems, we propose a dynamic approach that calculates such metrics from runtime data gathered via distributed tracing. The approach focuses on simplicity, extensibility, and broad applicability. As a first prototype, we implemented a Java application with a Zipkin integration, 23 different metrics, and five export formats. We demonstrated the feasibility of the approach by analyzing the runtime data of an example microservice-based system. During an exploratory study with six participants, 14 of the 18 services were invoked via the system’s web interface. For these services, all metrics were calculated correctly from the generated traces.
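The core idea — deriving service-level coupling metrics from trace data rather than source code — can be illustrated with a minimal sketch. The span fields (`service`, `parent_service`) and the two metrics shown (number of distinct consumers of a service, number of distinct services it calls) are illustrative assumptions, not the paper's exact data model or metric set:

```python
# Minimal sketch: deriving simple service-coupling metrics from
# Zipkin-style trace spans. Field names and metric definitions are
# assumptions for illustration, not the paper's actual implementation.
from collections import defaultdict

def service_dependencies(spans):
    """Count distinct callers and callees per service from trace spans.

    Each span is assumed to carry 'service' (the callee) and
    'parent_service' (the caller), as one might extract from traces.
    """
    callers = defaultdict(set)
    callees = defaultdict(set)
    for span in spans:
        caller, callee = span.get("parent_service"), span["service"]
        if caller and caller != callee:
            callers[callee].add(caller)
            callees[caller].add(callee)
    services = set(callers) | set(callees)
    # consumers: how many distinct services depend on s;
    # dependencies: how many distinct services s calls.
    return {s: {"consumers": len(callers[s]), "dependencies": len(callees[s])}
            for s in services}

traces = [
    {"service": "orders", "parent_service": "gateway"},
    {"service": "billing", "parent_service": "orders"},
    {"service": "billing", "parent_service": "gateway"},
]
metrics = service_dependencies(traces)
```

A dynamic approach like this only sees services that were actually invoked during the observation window, which is why the study below reports metrics for the 14 exercised services rather than all 18.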
Virtual prototyping of integrated mixed-signal smart-sensor systems requires high-performance co-simulation of analog frontend circuitry with complex digital controller hardware and embedded real-time software. We use SystemC/TLM 2.0 in combination with a cycle-count-accurate temporal decoupling approach to simulate digital components and firmware code execution at high speed while preserving clock-cycle accuracy and, thus, real-time behavior at time quantum boundaries. Optimal time quanta ensuring real-time capability can be calculated and set automatically during simulation if the simulation engine has access to exact timing information about upcoming communication events. These methods fail in the case of non-deterministic, asynchronous events, possibly leading to an invalid simulation result. In this paper, we propose an extension of this method to asynchronous events generated by black-box sources for which a priori event timing information is not available, such as coupled analog simulators or hardware in the loop. Additional event-processing latency and/or rollback effort caused by temporal decoupling is minimized by calculating optimal time quanta dynamically in a SystemC model using a linear prediction scheme. For an example smart-sensor system model, we show that quasi-periodic events that trigger activities in temporally decoupled processes are handled accurately after the predictor has settled.
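The prediction idea can be sketched abstractly: estimate the period of a quasi-periodic event stream from observed timestamps and bound the time quantum by the predicted next event time. The exponential-smoothing filter and all names below are illustrative assumptions, not the paper's exact prediction scheme:

```python
# Sketch of the idea only: predict the next asynchronous event from
# observed inter-event intervals and size the time quantum so the
# simulator does not overrun it. The smoothing-filter form is an
# assumption, not the paper's actual predictor.
class EventPredictor:
    def __init__(self, alpha=0.5):
        self.alpha = alpha      # smoothing weight for the period estimate
        self.last_time = None
        self.period = None

    def observe(self, t):
        """Record an event timestamp and update the period estimate."""
        if self.last_time is not None:
            interval = t - self.last_time
            if self.period is None:
                self.period = interval
            else:
                # weighted update of the estimated event period
                self.period = self.alpha * interval + (1 - self.alpha) * self.period
        self.last_time = t

    def next_quantum(self, now, default):
        """Quantum = time until the predicted next event, or a default
        while the predictor has not settled yet."""
        if self.period is None:
            return default
        remaining = self.last_time + self.period - now
        return remaining if remaining > 0 else default

p = EventPredictor()
for t in [0.0, 10.0, 20.0, 30.0]:   # quasi-periodic events, period ~10
    p.observe(t)
```

Before the first two events the predictor falls back to a default quantum; once settled, the quantum tracks the event period, which mirrors the paper's observation that accuracy is reached "after the predictor has settled".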
Software startups often make assumptions about the problems and customers they are addressing as well as the market and the solutions they are developing. Testing the right assumptions early is a means to mitigate risks. Approaches such as Lean Startup foster this kind of testing by applying experimentation as part of a constant build-measure-learn feedback loop. Existing research on how software startups approach experimentation is very limited. In this study, we focus on understanding how software startups approach experimentation and identify challenges and advantages with respect to conducting experiments. To achieve this, we conducted a qualitative interview study. The initial results show that startups often spend a disproportionate amount of time creating solutions without testing critical assumptions. The main reasons are a lack of awareness that these assumptions can be tested early, and a lack of knowledge and support on how to identify, prioritize, and test them. However, startups understand the need for testing risky assumptions and are open to conducting experiments.
While the value contribution of IT to business success was still disputed at the beginning of the millennium, today only very few managing directors deny it. How value is created by aligning corporate and IT strategy through suitable IT architectures, however, still seems mysterious to small and medium-sized enterprises (SMEs) across a wide range of industries. This gap is particularly damaging in SMEs of the cultural and creative industries, which serve classical industrial sectors as suppliers of innovation. This is where the present report comes in. It builds on the results of the research project KonfIT-SSC, which in recent years explored the possibility of using product configurators to achieve the "strategic fit" between business and IT structures. The central challenge of this endeavor was to collect data about information system structures and the ecosystems that shape them in such a way that they become amenable to the formal modeling of rule sets and the configuration of business architectures. The present report answers the questions of how suitable IT service strategies can be achieved for companies in the cultural and creative industries, what contribution product configurators can make, and with which methods data can be obtained in order to define generic IT architectures for SMEs in the creative sector. Along the way, in addition to answering the research questions, the results of the individual steps toward solving the task are documented in the form of an off-the-shelf configurator. The methods employed for data collection comprise a classical literature review, an online survey, five case studies in small and medium-sized advertising companies, and interviews with experts.
The data analysis draws on the modeling of value networks (e3value and i*) as well as the reference modeling of enterprise architectures. Finally, the procedure for developing the configuration models (rule sets) and the implementation is explained.
Context: The current situation and future scenarios of the automotive domain require a new strategy to develop high-quality software at a fast pace. In the automotive domain, it is assumed that a combination of agile development practices and software product lines is beneficial in order to handle a high frequency of improvements. This assumption is based on the understanding that agile methods introduce more flexibility in short development intervals. Software product lines help to manage the large number of variants and to improve quality through reuse of software in long-term development.
Goal: This study derives a better understanding of the benefits expected from such a combination. Furthermore, it identifies the automotive-specific challenges that prevent the adoption of agile methods within the software product line.
Method: A survey based on 16 semi-structured interviews from the automotive domain, an internal workshop with 40 participants, and a discussion round at the ESE Congress 2016. The results are analyzed by means of thematic coding.
Context: Software product lines are widely used in automotive embedded software development. This software paradigm improves the quality of software variants through reuse. The combination of agile software development practices with software product lines promises faster delivery of high-quality software. However, setting up an agile software product line is still challenging, especially in the automotive domain. Goal: This publication aims to evaluate to what extent agility fits automotive product line engineering. Method: Based on previous work and two workshops, agility is mapped to software product line concerns. Results: This publication presents important principles of software product lines and examines how agile approaches fit those principles. Additionally, the principles are related to one of the four major concerns of software product line engineering: Business, Architecture, Process, and Organization. Conclusion: Agile software product line engineering is promising and can add value to existing development approaches. The identified commonalities and hindering factors need to be considered when defining a combined agile product line engineering approach.
Due to frequently changing requirements, the internal structure of cloud services is highly dynamic. To ensure flexibility, adaptability, and maintainability for dynamically evolving services, modular software development has become the dominant paradigm. Following this approach, services can be rapidly constructed by composing existing, newly developed, and publicly available third-party modules. However, newly added modules might be unstable, resource-intensive, or untrustworthy. Thus, satisfying non-functional requirements such as reliability, efficiency, and security while ensuring rapid release cycles is a challenging task. In this paper, we discuss how to tackle these issues by employing container virtualization to isolate modules from each other according to a specification of isolation constraints. We satisfy non-functional requirements for cloud services by automatically transforming their constituent modules into a container-based system. To deal with the increased overhead caused by isolating modules from each other, we calculate the minimum set of containers required to satisfy the specified isolation constraints. Moreover, we present and report on a prototypical transformation pipeline that automatically transforms cloud services developed with the Java Platform Module System into container-based systems.
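Minimizing the number of containers under pairwise isolation constraints amounts to graph coloring on the conflict graph. The greedy heuristic below is only one possible way to approach it; the module names, constraint format, and heuristic are illustrative assumptions, not the paper's actual optimization method:

```python
# Sketch: group modules into as few containers as possible such that no
# two modules with an isolation constraint share a container. This is
# graph coloring on the conflict graph; a greedy largest-degree-first
# heuristic is shown (an assumption, not the paper's exact algorithm).
def assign_containers(modules, isolation_pairs):
    conflicts = {m: set() for m in modules}
    for a, b in isolation_pairs:
        conflicts[a].add(b)
        conflicts[b].add(a)
    containers = []   # each container is a set of co-located modules
    # Place highly constrained modules first, then reuse containers
    # whenever no isolation constraint is violated.
    for m in sorted(modules, key=lambda m: -len(conflicts[m])):
        for c in containers:
            if conflicts[m].isdisjoint(c):
                c.add(m)
                break
        else:
            containers.append({m})
    return containers

mods = ["auth", "payment", "thirdparty-ads", "logging"]
pairs = [("thirdparty-ads", "auth"), ("thirdparty-ads", "payment")]
groups = assign_containers(mods, pairs)   # two containers suffice here
```

Greedy coloring gives no optimality guarantee in general, but it shows why co-locating unconstrained modules directly reduces the per-container overhead the paper is concerned with.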
Companies are constantly changing their business process models. In team environments, different versions of a process model are created at the same time. These versions of a process model need to be merged from time to time to consolidate changes and create a new common version.
In this short paper, we propose a solution for modifying a merge result. The goal is to create a meaningful merge result by adding connector nodes to the model at specific locations. This increases the number of possible result models and reduces additional implementation effort.
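The role of such connector nodes can be illustrated with a toy sketch: where two model versions give a node different successors, an inserted connector (e.g., an XOR split) lets both variants survive in the merged model. The node representation and connector naming are assumptions made for illustration only:

```python
# Toy sketch of connector insertion during a merge: if two versions
# disagree on a node's successors, route both alternatives through a
# new XOR connector instead of discarding one. Representation and
# naming are illustrative assumptions, not the paper's notation.
def merge_successors(node, succ_v1, succ_v2):
    """Return merged outgoing edges for `node` across two versions."""
    if succ_v1 == succ_v2:
        return {node: succ_v1}          # no conflict, keep as-is
    connector = f"xor_after_{node}"     # hypothetical connector node
    return {node: [connector], connector: sorted(set(succ_v1 + succ_v2))}

# Version 1 routes straight to approval; version 2 adds an escalation.
merged = merge_successors("check_order", ["approve"], ["approve", "escalate"])
```

The inserted connector preserves both behaviors, which is what makes the merge result "meaningful" without manual rework at that location.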
Software and system development is complex and diverse, and a multitude of development approaches are used and combined with each other to address the manifold challenges companies face today. To study the current state of the practice and to build a sound understanding of the utility of different development approaches and their application to modern software system development, we launched the HELENA initiative in 2016. This paper introduces the 2nd HELENA workshop and provides an overview of the current project state. In the workshop, six teams present initial findings from their regions, impulse talks are given, and further steps of the HELENA roadmap are discussed.
In this presentation the audience will be: (a) introduced to the aims and objectives of the DBTechNet initiative, (b) briefed on the DBTech EXT virtual laboratory workshops (VLW), i.e. the educational and training (E&T) content which is freely available over the internet and includes vendor-neutral hands-on laboratory training sessions on key database technology topics, and (c) informed about some of the practical problems encountered and the way they have been addressed. Last but not least, the audience will be invited to consider incorporating some or all of the DBTech EXT VLW content into their higher education (HE), vocational education and training (VET), and/or lifelong learning/training type course curricula. This comes at no cost and with no commitment on the part of the teacher/trainer; the latter is only expected to provide his/her feedback on the pedagogical value and the quality of the E&T content received/used.