Informatik
Year of publication
- 2019 (104)
Document Type
- Conference proceeding (86)
- Journal article (15)
- Book (1)
- Doctoral Thesis (1)
- Patent / Standard / Guidelines (1)
Is part of the Bibliography
- yes (104)
Institute
- Informatik (104)
Publisher
- Springer (21)
- IEEE (19)
- Hochschule Reutlingen (16)
- Deutsche Gesellschaft für Computer- und Roboterassistierte Chirurgie e.V. (5)
- Fac. of Organization & Informatics, Univ. of Zagreb (4)
- ACM (3)
- Elsevier (3)
- SCITEPRESS (3)
- Association for Information Systems (AIS) (2)
- Curran Associates Inc. (2)
- GMDS e.V. (2)
- University of Hawaii (2)
- AIP Publishing (1)
- Association for Computing Machinery (1)
- Circle International (1)
- Cuvillier Verlag (1)
- Deutsche Gesellschaft für die Computer- und Roboterassistierte Chirurgie e.V. (1)
- Emerald (1)
- GITO-Verl. (1)
- IBM Research Division (1)
- IOS Press (1)
- PeerJ (1)
- Riga Technical University Press (1)
- SPIE (1)
- Science and Technology Publications (1)
- Shaker Verlag (1)
- Smart Home & Living Baden-Württemberg e.V. (1)
- Springer International Publishing (1)
- Springer Science + Business Media B.V. (1)
- Universität Tübingen (1)
- Wiley-Blackwell (1)
- World Scientific (1)
Urban platforms are essential for smart and sustainable city planning and operation. Today, they are mostly designed to handle and connect large urban data sets from very different domains. Modelling and optimisation functionalities are usually not part of a city's software infrastructure, yet they are considered crucial for developing transformation scenarios and for optimised smart city operation. This work discusses software architecture concepts for such urban platforms and presents case study results on building sector modelling, including urban data analysis and visualisation. Results from a case study in New York demonstrate the implementation status.
An important shift in software delivery is the definition of a cloud service as an independently deployable unit, following the microservices architectural style. Container virtualization facilitates development and deployment by ensuring independence from the runtime environment. Thus, cloud services are built as container-based systems - a set of containers that control the lifecycle of software and middleware components. However, using containers leads to a new paradigm for service development and operation: self-service environments enable software developers to deploy and operate container-based systems on their own - "you build it, you run it". Following this approach, more and more operational aspects are transferred into the responsibility of software developers. In this work, we propose a concept for self-adaptive cloud services based on container virtualization in line with the microservices architectural style and present a model-based approach that assists software developers in building these services. Based on operational models specified by developers, the mechanisms required for self-adaptation are generated automatically. As a result, each container adapts itself in a reactive, decentralized manner. We evaluate a prototype that leverages the emerging TOSCA standard to specify operational behavior in a portable manner.
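The reactive, decentralized adaptation described above can be pictured as a rule-driven control loop per container. The following minimal sketch is illustrative only: the rule format, thresholds, and action names are assumptions, not the paper's API; the actual prototype generates such mechanisms from TOSCA-based operational models.

```python
# Sketch of a decentralized, reactive self-adaptation loop for one container.
# Thresholds and action names are hypothetical examples, not the paper's model.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    condition: Callable[[Dict], bool]  # predicate over observed metrics
    action: str                        # adaptation action to trigger

def adapt(metrics: Dict, rules: List[Rule]) -> List[str]:
    """Return the actions whose conditions hold for the current metrics."""
    return [r.action for r in rules if r.condition(metrics)]

# Example operational model: restart on failure, scale out under high load.
rules = [
    Rule(lambda m: not m["healthy"], "restart-container"),
    Rule(lambda m: m["cpu"] > 0.8, "scale-out"),
]
actions = adapt({"healthy": True, "cpu": 0.9}, rules)
```

Each container would evaluate its own rules against locally observed metrics, which is what makes the adaptation decentralized rather than driven by a central controller.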
In this paper, we describe an interactive web-based visual analysis tool for Formula One races. It first provides an overview of all races on a yearly basis in a calendar-like representation. From this starting point, races can be selected and visually inspected in detail. We support a dynamic race position diagram as well as a more detailed lap times line plot for comparing the drivers' lap times. Many interaction techniques are supported, such as selection, filtering, highlighting, color coding, and details-on-demand. We illustrate the usefulness of our visualization tool by applying it to a Formula One dataset and describe the dynamic visual racing patterns for a number of selected races and drivers.
In recent years, the parallel computing community has shown increasing interest in leveraging cloud resources for executing parallel applications. Clouds exhibit several fundamental features of economic value, such as on-demand resource provisioning and a pay-per-use model. Additionally, several cloud providers offer their resources at significant discounts, albeit with limited availability. Such volatile resources are an auspicious opportunity to reduce the costs of computations and thus achieve higher cost efficiency. In this paper, we propose a cost model for quantifying the monetary costs of executing parallel applications in cloud environments that leverage volatile resources. Using this cost model, one can determine a configuration of a cloud-based parallel system that minimizes the total costs of executing an application.
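The core idea of such a cost model can be sketched in a few lines. All prices, discounts, and runtimes below are hypothetical assumptions for illustration; the paper's actual model is more detailed.

```python
# Hypothetical sketch of a cloud cost model for a parallel application.
# Assumes a known runtime per configuration and a fixed hourly instance
# price, optionally reduced by a volatile-resource discount.

def total_cost(n_instances, runtime_hours, price_per_hour, discount=0.0):
    """Monetary cost of running on n_instances for the given runtime."""
    effective_price = price_per_hour * (1.0 - discount)
    return n_instances * runtime_hours * effective_price

def cheapest_configuration(runtimes, price_per_hour, discount=0.0):
    """Pick the instance count that minimizes total cost.

    runtimes: dict mapping instance count -> runtime in hours.
    """
    return min(
        runtimes,
        key=lambda n: total_cost(n, runtimes[n], price_per_hour, discount),
    )

# Example: with sub-linear speedup, a larger cluster finishes faster but
# costs more overall, so the single-instance run is cheapest here.
runtimes = {1: 8.0, 2: 4.5, 4: 2.6, 8: 1.7}
best = cheapest_configuration(runtimes, price_per_hour=0.10, discount=0.6)
```

The example makes the trade-off concrete: unless speedup is near-linear, adding instances raises total instance-hours and therefore total cost.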
Information and communication technologies support telemedicine in lowering health access barriers and providing better health care. While the potential of Active Assisted Living (AAL) is increasing, it is difficult to evaluate its benefits for the user, and coordinated actions are required to launch it. The European Commission's action plan 2012–2020 provides a roadmap towards patient empowerment and healthcare, towards linking up devices and technologies, and towards investing in research on the personalized medicine of the future. As a quickly developing area of medicine, telemonitoring is a demanding field of research and development. Telemonitoring is an essential component of personalized medicine: health providers can obtain precise information on outpatients or chronic patients to improve diagnosis and therapy, and it can also support healthy persons with prevention. Telemonitoring combines mobile and wearable devices with the personal AAL home environment - a private or (partly) supervised home, most often called a 'smart home'. The focus of this workshop is on new hardware and software solutions specifically designed to be applicable in AAL environments to empower patients. The workshop presents system-oriented solutions covering wearable and AAL-embedded devices, computer science infrastructure at both the users' and the medical premises for handling the data, and decision support systems for diagnosis and treatment.
Integrating tools and applications into a clinically useful system for individual continuous health data surveillance requires an architecture that considers all relevant medical and technical conditions. The requirements of an integrated system, including a health app that collects and monitors sensor data to support personalized medicine, are therefore analyzed. The structure and behavior of the system are defined with regard to the specific health use cases and scenarios. A vendor-independent architecture that enables the collection of vital data from arbitrary wearables via a smartphone is presented. The data is centrally managed and processed by the attending physicians. The modular architecture allows the system to be extended to new scenarios, data formats, etc. A prototypical implementation of the system shows the feasibility of the approach.
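The vendor-independence idea can be sketched as an adapter that maps each wearable's proprietary payload onto one common vital-data record. The class names, payload fields, and units below are illustrative assumptions, not the paper's actual interfaces.

```python
# Hedged sketch: a per-vendor adapter normalizes proprietary sensor payloads
# into a common format before central management. Names are hypothetical.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class VitalSample:
    sensor_type: str  # e.g. "heart_rate"
    value: float
    unit: str

class WearableAdapter(ABC):
    """Common interface every vendor-specific adapter implements."""
    @abstractmethod
    def read(self) -> VitalSample: ...

class VendorXHeartRateAdapter(WearableAdapter):
    """Converts a hypothetical vendor payload into the common format."""
    def __init__(self, raw_payload: dict):
        self.raw = raw_payload

    def read(self) -> VitalSample:
        return VitalSample("heart_rate", float(self.raw["hr_bpm"]), "bpm")

sample = VendorXHeartRateAdapter({"hr_bpm": 72}).read()
```

Supporting a new wearable then only requires a new adapter class, which matches the extensibility claim of the modular architecture.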
In this paper, we deal with optimizing the monetary costs of executing parallel applications in cloud-based environments. Specifically, we investigate how the scalability characteristics of parallel applications impact the total costs of computations. We focus on a specific class of irregularly structured problems, where scalability typically depends on the input data. Consequently, dynamic optimization methods are required for minimizing the costs of computation. To quantify the total monetary costs of individual parallel computations, the paper presents a cost model that considers both the costs of the parallel infrastructure employed and the costs caused by delayed results. We discuss a method for dynamically finding the number of processors for which the total costs, based on our cost model, are minimal. Our extensive experimental evaluation gives detailed insights into the performance characteristics of our approach.
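The interplay of infrastructure costs and delayed-results costs can be illustrated with a toy objective function. The runtimes, prices, deadline, and penalty rate below are invented for illustration and are not values from the paper.

```python
# Illustrative sketch: total cost = infrastructure cost + delay penalty.
# Unlike a pure pay-per-use model, late results now carry a cost, which can
# make a larger processor count the cheaper choice overall.

def total_cost(n, runtime_hours, price_per_hour, deadline_hours,
               penalty_per_hour):
    infra = n * runtime_hours * price_per_hour
    delay = max(0.0, runtime_hours - deadline_hours) * penalty_per_hour
    return infra + delay

def optimal_processors(runtimes, price_per_hour, deadline_hours,
                       penalty_per_hour):
    """Choose the processor count minimizing infrastructure + delay costs."""
    return min(
        runtimes,
        key=lambda n: total_cost(n, runtimes[n], price_per_hour,
                                 deadline_hours, penalty_per_hour),
    )

# Example: the delay penalty pushes the optimum toward more processors,
# but only up to the point where extra infrastructure stops paying off.
runtimes = {1: 10.0, 2: 5.5, 4: 3.2, 8: 2.1}
best = optimal_processors(runtimes, price_per_hour=0.5,
                          deadline_hours=4.0, penalty_per_hour=20.0)
```

Because the runtimes of irregularly structured problems depend on the input, such an optimization has to be repeated dynamically per computation rather than fixed once.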
Parallel applications are the computational backbone of major industry trends and grand challenges in science. Whereas these applications are typically constructed for dedicated High Performance Computing clusters and supercomputers, the cloud emerges as an attractive execution environment, providing on-demand resource provisioning and a pay-per-use model. However, cloud environments require specific application properties that may restrict parallel application design. As a result, design trade-offs are required to simultaneously maximize parallel performance and benefit from cloud-specific characteristics.
In this paper, we present a novel approach to assessing the cloud readiness of parallel applications based on the design decisions made. By discovering and understanding the implications of these parallel design decisions for an application's cloud readiness, our approach supports the migration of parallel applications to the cloud. We introduce an assessment procedure, its underlying meta model, and a corresponding instantiation to structure this multi-dimensional design space. For evaluation purposes, we present an extensive case study comprising three parallel applications and discuss their cloud readiness based on our approach.
Data analytics tasks on large datasets are computationally intensive and often demand the compute power of cluster environments. Data cleansing, preparation, dataset characterization, and statistics or metrics computation steps are frequent; they are mostly performed ad hoc, in an explorative manner, and mandate low response times. However, such steps are I/O intensive and typically very slow due to low data locality and inadequate interfaces and abstractions along the stack. These typically result in prohibitively expensive scans of the full dataset and transformations at interface boundaries.
In this paper, we examine R as an analytical tool managing large persistent datasets in Ceph, a widespread cluster file system. We propose nativeNDP - a framework for Near Data Processing that pushes down primitive R tasks and executes them in situ, directly within the storage device of a cluster node. Across a range of data sizes, we show that nativeNDP is more than an order of magnitude faster than other pushdown alternatives.
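The pushdown principle behind Near Data Processing can be sketched in a few lines: instead of shipping the full dataset to the client and filtering there, a primitive task is evaluated where the data resides, so only the much smaller result crosses the interface boundary. The function names and data below are hypothetical; nativeNDP executes such primitives inside Ceph storage nodes, not in plain Python.

```python
# Illustrative sketch of near-data pushdown vs. client-side processing.
# Names and data are hypothetical; the point is that both paths compute the
# same result, but the pushed-down path moves far less data off the node.

def storage_node_scan(dataset, predicate):
    """Runs in-situ on the storage node: only matching rows leave the node."""
    return [row for row in dataset if predicate(row)]

def client_side_scan(fetch_all, predicate):
    """Baseline: transfer the entire dataset, then filter on the client."""
    return [row for row in fetch_all() if predicate(row)]

dataset = [{"id": i, "value": i * 10} for i in range(1000)]
predicate = lambda r: r["value"] > 9900

pushed_down = storage_node_scan(dataset, predicate)
client_side = client_side_scan(lambda: dataset, predicate)
# Both yield the same rows, but the pushed-down scan transfers 9 rows
# instead of 1000 across the storage interface.
```

The order-of-magnitude speedups reported for nativeNDP stem from exactly this reduction in data movement across interface boundaries.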