Informatik
Organizations have identified the opportunity of big data analytics to support the business with problem-specific insights by exploiting the data they generate. Sociotechnical solutions are developed in big data projects to achieve competitive advantage. Although these projects are aligned with specific business needs, common architectural challenges are not addressed in a comprehensive manner. Enterprise architecture (EA) management is a holistic approach to tackling complex business and IT architectures. The transformation of an organization's EA is influenced by big data transformation processes and their data-driven approach on all layers. In this paper, we review the big data literature to analyze which requirements for the EA management discipline are proposed. Based on a systematic literature identification, conceptual categories of requirements for EA management are elicited using inductive category formation. These conceptual categories of requirements constitute a category system that facilitates a new perspective on EA management and fosters the innovation-driven evolution of the EA management discipline.
Due to frequently changing requirements, the internal structure of cloud services is highly dynamic. To ensure flexibility, adaptability, and maintainability for dynamically evolving services, modular software development has become the dominant paradigm. Following this approach, services can be rapidly constructed by composing existing, newly developed, and publicly available third-party modules. However, newly added modules might be unstable, resource-intensive, or untrustworthy. Thus, satisfying non-functional requirements such as reliability, efficiency, and security while ensuring rapid release cycles is a challenging task. In this paper, we discuss how to tackle these issues by employing container virtualization to isolate modules from each other according to a specification of isolation constraints. We satisfy non-functional requirements for cloud services by automatically transforming a service's constituent modules into a container-based system. To deal with the increased overhead caused by isolating modules from each other, we calculate the minimum set of containers required to satisfy the specified isolation constraints. Moreover, we present and report on a prototypical transformation pipeline that automatically transforms cloud services developed with the Java Platform Module System into container-based systems.
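The core of the overhead-reduction step can be illustrated as a grouping problem: place modules into as few containers as possible while never co-locating two modules that an isolation constraint separates. The following is a minimal sketch under that reading, using a greedy first-fit grouping; the class and method names are illustrative, and the paper's actual optimization may differ.

```java
import java.util.*;

/** Greedily groups modules into containers such that no two modules
 *  connected by an isolation constraint end up in the same container.
 *  (Illustrative first-fit grouping; the paper's exact algorithm may differ.) */
public class ContainerGrouping {

    public static List<Set<String>> group(List<String> modules,
                                          Set<Set<String>> isolationPairs) {
        List<Set<String>> containers = new ArrayList<>();
        for (String module : modules) {
            Set<String> target = null;
            for (Set<String> container : containers) {
                boolean conflict = false;
                for (String other : container) {
                    if (isolationPairs.contains(Set.of(module, other))) {
                        conflict = true;
                        break;
                    }
                }
                if (!conflict) { target = container; break; }
            }
            if (target == null) {          // no compatible container: open a new one
                target = new HashSet<>();
                containers.add(target);
            }
            target.add(module);
        }
        return containers;
    }

    public static void main(String[] args) {
        List<String> modules = List.of("auth", "billing", "thirdparty-lib", "ui");
        // Hypothetical constraint: the untrusted third-party module must be
        // isolated from the auth and billing modules.
        Set<Set<String>> constraints = Set.of(
                Set.of("thirdparty-lib", "auth"),
                Set.of("thirdparty-lib", "billing"));
        System.out.println(group(modules, constraints));
        // e.g. [{auth, billing, ui}, {thirdparty-lib}]: two containers suffice
    }
}
```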
Serverless computing is an emerging cloud computing paradigm with the goal of freeing developers from resource management issues. As of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other. These workloads benefit from on-demand and elastic compute resources as well as per-function billing. However, it is still an open research question to what extent parallel applications, which most often comprise complex coordination and communication patterns, can benefit from serverless computing.
In this paper, we introduce serverless skeletons for parallel cloud programming to free developers from both parallelism and resource management issues. In particular, we investigate the well-known and widely used farm skeleton, which supports the implementation of a wide range of applications. To evaluate our concepts, we present a prototypical development and runtime framework and implement two applications on top of it: numerical integration and hyperparameter optimization, a technique commonly applied in machine learning. We report on performance measurements for both applications and discuss the usefulness of our approach.
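A farm skeleton can be sketched independently of any concrete platform: a master distributes independent tasks to workers and collects the results. The sketch below uses local threads as stand-in workers; in the framework described, each worker invocation would be a serverless function call. All names are illustrative and not taken from the paper.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

/** Minimal farm-skeleton sketch: a master dispatches independent tasks to
 *  workers and gathers the results. Workers are plain threads here; in a
 *  serverless setting, worker.apply would be a remote function invocation. */
public class Farm<T, R> {

    private final Function<T, R> worker;   // stands in for a serverless function

    public Farm(Function<T, R> worker) { this.worker = worker; }

    public List<R> map(List<T> tasks, int parallelism) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        try {
            List<Future<R>> futures = new ArrayList<>();
            for (T task : tasks) {
                futures.add(pool.submit(() -> worker.apply(task)));
            }
            List<R> results = new ArrayList<>();
            for (Future<R> f : futures) results.add(f.get());
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Numerical integration of f(x) = x^2 on [0, 1], one farm task per
        // sub-interval (trapezoidal rule); the exact result is 1/3.
        int chunks = 8, steps = 100_000;
        Farm<Integer, Double> farm = new Farm<>(chunk -> {
            double a = chunk / (double) chunks, b = (chunk + 1) / (double) chunks;
            double h = (b - a) / steps, sum = 0;
            for (int i = 0; i < steps; i++) {
                double x0 = a + i * h, x1 = x0 + h;
                sum += (x0 * x0 + x1 * x1) / 2 * h;
            }
            return sum;
        });
        double integral = farm
                .map(java.util.stream.IntStream.range(0, chunks).boxed().toList(), 4)
                .stream().mapToDouble(Double::doubleValue).sum();
        System.out.println(integral); // ~0.3333
    }
}
```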
To evaluate the quality of a person's sleep, it is essential to identify the sleep stages and their durations. Currently, the gold standard of sleep analysis is overnight polysomnography (PSG), during which several signals are recorded, such as the EEG (electroencephalogram), EOG (electrooculogram), EMG (electromyogram), ECG (electrocardiogram), SpO2 (blood oxygen saturation), and, for example, respiratory airflow and respiratory effort. These expensive and complex procedures, applied in sleep laboratories, are invasive and unfamiliar to the subjects, which may affect the recorded data. These are the main reasons why low-cost home diagnostic systems are likely to be advantageous. Their aim is to reach a larger population by reducing the number of parameters recorded. Nowadays, many wearable devices promise to measure sleep quality using only ECG and body-movement signals. This work presents an Android application developed to verify the accuracy of an algorithm published in the sleep literature. The algorithm uses ECG and body-movement recordings to estimate sleep stages. The pre-recorded signals fed into the algorithm were taken from the PhysioNet online database. The obtained results were compared with those of the standard method used in PSG. The mean agreement ratios for the sleep stages REM, Wake, NREM-1, NREM-2, and NREM-3 were 38.1%, 14%, 16%, 75%, and 54.3%, respectively.
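The reported agreement ratios can be computed per stage once both the algorithm's output and the PSG reference are available as epoch-wise stage labels. The following is a minimal sketch under that assumption; the label encoding and epoch granularity are illustrative, not taken from the paper.

```java
import java.util.*;

/** Computes, per sleep stage, the percentage of reference (PSG) epochs of
 *  that stage which the estimation algorithm labeled identically. Labels
 *  per epoch are assumed; the string encoding is illustrative. */
public class StageAgreement {

    public static Map<String, Double> agreement(String[] reference, String[] estimated) {
        Map<String, Integer> total = new HashMap<>(), matches = new HashMap<>();
        for (int i = 0; i < reference.length; i++) {
            total.merge(reference[i], 1, Integer::sum);
            if (reference[i].equals(estimated[i])) {
                matches.merge(reference[i], 1, Integer::sum);
            }
        }
        Map<String, Double> ratios = new HashMap<>();
        for (String stage : total.keySet()) {
            ratios.put(stage, 100.0 * matches.getOrDefault(stage, 0) / total.get(stage));
        }
        return ratios;
    }

    public static void main(String[] args) {
        String[] psg = {"Wake", "NREM-1", "NREM-2", "NREM-2", "NREM-3", "REM"};
        String[] est = {"NREM-1", "NREM-1", "NREM-2", "NREM-2", "NREM-2", "REM"};
        System.out.println(agreement(psg, est));
        // e.g. {Wake=0.0, NREM-1=100.0, NREM-2=100.0, NREM-3=0.0, REM=100.0}
    }
}
```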
In this paper, we introduce an approach that uses reinforcement learning to achieve interoperability between heterogeneous Internet of Things (IoT) components. More specifically, we model an HTTP REST service as a Markov Decision Process and adapt Q-learning to the properties of REST so that an agent in the role of an HTTP REST client can learn the semantics of the service and, in particular, an optimal sequence of service calls to achieve an application-specific goal. With our approach, we want to open up and facilitate a discussion in the community, as we see the utilization of artificial intelligence techniques as the key to achieving interoperability in the IoT.
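The abstract does not spell out the encoding of states, actions, or rewards, so the following is only a minimal sketch of how tabular Q-learning could drive a REST client: states abstract the last observed resource representation, actions are the HTTP calls currently available (e.g., derived from hypermedia links), and the reward function is application-specific. All identifiers are illustrative.

```java
import java.util.*;

/** Tabular Q-learning for a REST client: states abstract the observed
 *  resource representation, actions are HTTP calls such as "GET /orders".
 *  Reward signal and state abstraction are application-specific assumptions. */
public class RestQLearning {

    private final Map<String, Map<String, Double>> q = new HashMap<>();
    private final double alpha = 0.1, gamma = 0.9, epsilon = 0.2;
    private final Random rnd = new Random();

    double getQ(String state, String action) {
        return q.getOrDefault(state, Map.of()).getOrDefault(action, 0.0);
    }

    /** Standard Q-learning update after executing one HTTP call. */
    void update(String state, String action, double reward,
                String nextState, List<String> nextActions) {
        double maxNext = nextActions.stream()
                .mapToDouble(a -> getQ(nextState, a)).max().orElse(0.0);
        double updated = getQ(state, action)
                + alpha * (reward + gamma * maxNext - getQ(state, action));
        q.computeIfAbsent(state, s -> new HashMap<>()).put(action, updated);
    }

    /** Epsilon-greedy choice among the service calls currently available. */
    String chooseAction(String state, List<String> actions) {
        if (rnd.nextDouble() < epsilon) {
            return actions.get(rnd.nextInt(actions.size()));
        }
        return actions.stream()
                .max(Comparator.comparingDouble(a -> getQ(state, a)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        RestQLearning agent = new RestQLearning();
        List<String> actions = List.of("GET /orders", "POST /orders");
        String a = agent.chooseAction("start", actions);
        // Hypothetical reward: 1 if the call advances the application goal.
        agent.update("start", a, a.equals("POST /orders") ? 1.0 : 0.0,
                "orderCreated", actions);
        System.out.println(agent.q);
    }
}
```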
OpenAPI, WADL, RAML, and API Blueprint are popular formats for documenting Web APIs. Although these formats are in general both human- and machine-readable, only the part of a format describing the syntax of a Web API is machine-understandable. Descriptions, which explain the meaning and purpose of Web API elements, are embedded as natural language text snippets into the documents and target human readers, not machines. To enable machines to read and process such state-of-practice Web API documentation, we propose a Transformer model that solves the generic task of identifying the Web API element within a syntax structure that matches a natural language query. For our first prototype, we focus on the Web API integration task of matching output with input parameters and fine-tuned a pre-trained CodeBERT model on the downstream task of question answering with samples from 2,321 OpenAPI documents. We formulate the original question answering problem as a multiple-choice task: given a semantic natural language description of an output parameter (question) and the syntax of the input schema (paragraph), the model chooses the input parameter (answer) in the schema that best matches the description. The paper describes the data preparation, tokenization, and fine-tuning process and discusses possible applications of our model as part of a recommender system. Furthermore, we evaluate the generalizability and robustness of our fine-tuned model, which chooses the correct parameter with an accuracy of 81.46%.
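Stripped of the model itself, the multiple-choice formulation reduces to scoring each candidate input parameter against the output parameter's description and returning the argmax. In the sketch below, the scorer is a deliberately naive placeholder; in the paper, this role is played by the fine-tuned CodeBERT model applied to the (question, paragraph, choice) inputs.

```java
import java.util.*;
import java.util.function.BiFunction;

/** Multiple-choice formulation of the parameter-matching task: given the
 *  description of an output parameter (the "question") and the candidate
 *  input parameters of a schema (the "choices"), pick the best match.
 *  The scorer is a placeholder for the fine-tuned model. */
public class ParameterMatcher {

    public static String match(String description, List<String> inputParameters,
                               BiFunction<String, String, Double> scorer) {
        return inputParameters.stream()
                .max(Comparator.comparingDouble(p -> scorer.apply(description, p)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        // Toy scorer: token overlap between description and parameter name.
        BiFunction<String, String, Double> overlap = (desc, param) -> {
            Set<String> tokens = new HashSet<>(Arrays.asList(desc.toLowerCase().split("\\W+")));
            long hits = Arrays.stream(param.toLowerCase().split("[._]"))
                    .filter(tokens::contains).count();
            return (double) hits;
        };
        String chosen = match("unique id of the customer",
                List.of("order.id", "customer.id", "customer.name"), overlap);
        System.out.println(chosen); // customer.id
    }
}
```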
In networked operating room environments, there is an emerging trend toward standardized, non-proprietary communication protocols, which make it possible to build new integration solutions and flexible human-machine interaction concepts. The most prominent endeavor is the IEEE 11073 SDC protocol. For some use cases, it would be helpful if not just medical devices could be controlled via SDC, but also building automation systems such as lights, shutters, and air conditioning. For those systems, the KNX protocol is widely used. We built an SDC-to-KNX gateway that allows the SDC protocol to be used for sending commands to connected KNX devices. The first prototype was successfully implemented in the demonstration operating room at Reutlingen University. This is a first step toward the integration of a broader variety of KNX devices.
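The forwarding step of such a gateway can be sketched as a lookup from SDC operation handles to KNX group addresses, followed by a group write. Since the abstract names neither the SDC nor the KNX stack used, all types and identifiers below are hypothetical stand-ins, not the actual gateway's API.

```java
import java.util.*;

/** Skeleton of the gateway's forwarding step: an incoming SDC set-operation
 *  is looked up in a handle-to-group-address mapping and forwarded as a KNX
 *  group write. All types are hypothetical stand-ins for real stacks. */
public class SdcKnxGateway {

    /** Hypothetical abstraction of a KNX bus connection. */
    interface KnxBus {
        void groupWrite(String groupAddress, byte[] value);
    }

    private final Map<String, String> handleToGroupAddress;
    private final KnxBus bus;

    public SdcKnxGateway(Map<String, String> mapping, KnxBus bus) {
        this.handleToGroupAddress = mapping;
        this.bus = bus;
    }

    /** Called by the SDC stack when a remote-control operation is invoked. */
    public void onSdcSetOperation(String operationHandle, byte[] value) {
        String ga = handleToGroupAddress.get(operationHandle);
        if (ga == null) {
            throw new IllegalArgumentException("No KNX mapping for " + operationHandle);
        }
        bus.groupWrite(ga, value);
    }

    public static void main(String[] args) {
        // Example mapping: a hypothetical SDC operation "or-light.brightness"
        // controls the KNX dimmer listening on group address 1/0/7.
        SdcKnxGateway gw = new SdcKnxGateway(
                Map.of("or-light.brightness", "1/0/7"),
                (ga, v) -> System.out.println("KNX write " + ga + " <- " + Arrays.toString(v)));
        gw.onSdcSetOperation("or-light.brightness", new byte[]{(byte) 128});
    }
}
```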
For decades, Software Process Improvement (SPI) programs have been implemented, inter alia, to improve the quality and speed of software development. To set up, guide, and carry out SPI projects, and to measure SPI state, impact, and success, a multitude of different SPI approaches and considerable experience are available. SPI addresses many aspects, ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state of the art in SPI from a general perspective, we observed Global Software Engineering (GSE) becoming a topic of interest in recent years. Therefore, in this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of GSE. From the main study's result set, 30 papers dealing with GSE were selected for an in-depth analysis using the systematic review instrument, to study their contributions and to develop an initial picture of how GSE is considered from the perspective of SPI. Our findings show that the analyzed papers deliver a substantial discussion of cultural models and of how such models can be used to better address and align SPI programs with multi-national environments. Furthermore, they share experience on how agile approaches can be implemented in companies working at a global scale. Finally, success factors and barriers are studied to help companies implement SPI in a GSE context.
Software development consists to a large extent of human-based processes, with continuously increasing demands on interdisciplinary teamwork. Understanding the dynamics of software teams is highly important for successful project execution. Hence, for future project managers, knowledge about non-technical processes in teams is essential. In this paper, we present a course unit that provides an environment in which students can learn and experience the role of different communication patterns in distributed agile software development. In particular, students gain awareness of the importance of communication by experiencing the impact of limited communication channels on collaboration and team performance. The course unit uses the controlled experiment instrument to provide the basic organization of a small software project carried out in virtual teams. We provide a detailed design of the course unit to allow for implementation in further courses. Furthermore, we report experiences from implementing this course unit with 16 graduate students. We observed students struggling with technical aspects and team coordination in general, while not realizing the importance of communication channels (or their absence). Furthermore, we could show the students that a lack of communication protocols impacts team coordination and performance regardless of the communication channels used.