To remain relevant and mitigate disruption, traditional companies have to engage in multiple fast-paced experiments in digital offerings: revenue-generating solutions to what customers want and are willing to pay for, inspired by what is possible with digital technologies. After launching several digital offering initiatives, reinsurance giant Munich Re noticed that many of these initiatives experienced similar challenges. This case describes how Munich Re addressed these common challenges by building a foundation to help its digital offerings succeed. The foundation provided prioritized and staged funding; dedicated, hands-on expertise; and a digital platform of shared services. By 2020, this foundation was helping to support over seventy initiatives, including several that were in the market generating new sources of revenue for the company by enabling its clients, insurance companies, to better serve their own customers.
The digitalization of the working world and the New Work movement are changing the way we collaborate. Accordingly, there is currently much talk about leadership, possibly too much. The central thesis of this article is that we can better understand how leaders and employees collaborate in a future-oriented way if we talk less directly about leadership and instead start from the situation in which actors coordinate. Leadership is then considered not in isolation but together with various forms of coordination (such as autonomy, self-organization, management, and leadership itself). In practice, this can help to shape the transformation of collaboration, including the new kind of leadership, in a more reflective and goal-oriented manner.
In modern working environments, workplace-related digital technologies are used to an increasing extent. While this offers numerous opportunities, it can also have negative consequences for employees' health. For many companies, these challenges are further exacerbated by the current coronavirus crisis. Stress that arises directly or indirectly from the use of technologies is referred to as "technostress". Important levers for avoiding it include the design of the technologies themselves as well as the consideration of various individual and situational factors during technological change processes.
Errors, manipulation, and rationality: how reporting influences decision-makers' behavior
(2020)
The purpose of management reporting is to satisfy the information needs of executives. However, both the producers and the users of reports act with only bounded rationality. Reports therefore do not have a precisely targeted effect but trigger a variety of unintended reactions among those involved. This article shows how the "human factor" affects the preparation and use of management reports and how effective and efficient management reporting can minimize undesirable effects.
Problem: More and more companies are adopting lean principles but find that traditional cost accounting does not adequately cover their requirements for suitable cost information.
Objective: A cost accounting approach oriented toward lean thinking introduces new cost allocation objects and provides cost information that has so far been neglected.
Method: Common cost accounting approaches are compared with an integrated "accounting for lean" approach, and commonalities and overlaps are identified.
Cloud resources can be dynamically provisioned according to application-specific requirements and are paid for on a per-use basis. This gives rise to a new concept for parallel processing: elastic parallel computations. However, it is still an open research question to what extent parallel applications can benefit from elastic scaling, which requires resource adaptation at runtime and corresponding coordination mechanisms. In this work, we analyze how to address these system-level challenges in the context of developing and operating elastic parallel tree search applications. Based on our findings, we discuss the design and implementation of TASKWORK, a cloud-aware runtime system specifically designed for elastic parallel tree search, which enables the implementation of elastic applications by means of higher-level development frameworks. We show how to implement an elastic parallel branch-and-bound application based on an exemplary development framework and report on our experimental evaluation, which also considers several benchmarks for parallel tree search.
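The core of such an application, independent of the elastic scaling itself, is a task-parallel tree search with bound-based pruning. The following minimal sketch (a sequential 0/1 knapsack branch-and-bound with an explicit task pool; purely illustrative, not the TASKWORK implementation) shows the kind of independent tasks that an elastic runtime could distribute across a varying number of workers:

```python
from collections import deque

def knapsack_bnb(items, capacity):
    """Branch-and-bound over a binary decision tree for 0/1 knapsack.
    items: list of (value, weight) tuples. Each entry in the task pool
    is a partial assignment; in an elastic runtime these tasks would be
    spread over a dynamically sized worker pool instead of one deque."""
    # Sort by value density so the fractional bound is tight.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)

    def bound(idx, weight, value):
        # Optimistic estimate: fill the remaining capacity fractionally.
        est, cap = value, capacity - weight
        for v, w in items[idx:]:
            if w <= cap:
                est, cap = est + v, cap - w
            else:
                est += v * cap / w
                break
        return est

    best = 0
    pool = deque([(0, 0, 0)])  # task = (next item index, weight used, value)
    while pool:
        idx, weight, value = pool.pop()
        best = max(best, value)
        if idx == len(items) or bound(idx, weight, value) <= best:
            continue  # prune: this subtree cannot beat the incumbent
        v, w = items[idx]
        if weight + w <= capacity:
            pool.append((idx + 1, weight + w, value + v))  # take the item
        pool.append((idx + 1, weight, value))              # skip the item
    return best
```

Pruning via the shared incumbent `best` is exactly what makes elastic scaling non-trivial here: adding or removing workers at runtime requires coordinating both the task pool and the globally best known solution.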
In recent years, the cloud has become an attractive execution environment for parallel applications, which introduces novel opportunities for versatile optimizations. Particularly promising in this context is the elasticity characteristic of cloud environments. While elasticity is well established for client-server applications, it is a fundamentally new concept for parallel applications. However, existing elasticity mechanisms for client-server applications can be applied to parallel applications only to a limited extent. Efficient exploitation of elasticity for parallel applications requires novel mechanisms that take into account the particular runtime characteristics and resource requirements of this application type. To tackle this issue, we propose an elasticity description language. This language enables users to define elasticity policies, which specify the elasticity behavior at both the cloud infrastructure level and the application level. Elasticity at the application level is supported by an adequate programming and execution model, as well as abstractions that comply with the dynamic availability of resources. We present the underlying concepts and mechanisms, as well as the architecture and a prototypical implementation. Furthermore, we illustrate the capabilities of our approach through real-world scenarios.
High Performance Computing (HPC) enables significant progress in both science and industry. Whereas traditionally parallel applications have been developed to address the grand challenges in science, as of today, they are also heavily used to speed up the time-to-result in the context of product design, production planning, financial risk management, medical diagnosis, as well as research and development efforts. However, purchasing and operating HPC clusters to run these applications requires huge capital expenditures as well as operational knowledge and thus is reserved to large organizations that benefit from economies of scale. More recently, the cloud evolved into an alternative execution environment for parallel applications, which comes with novel characteristics such as on-demand access to compute resources, pay-per-use, and elasticity. Whereas the cloud has been mainly used to operate interactive multi-tier applications, HPC users are also interested in the benefits offered. These include full control of the resource configuration based on virtualization, fast setup times by using on-demand accessible compute resources, and eliminated upfront capital expenditures due to the pay-per-use billing model. Additionally, elasticity allows compute resources to be provisioned and decommissioned at runtime, which allows fine-grained control of an application's performance in terms of its execution time and efficiency as well as the related monetary costs of the computation. Whereas HPC-optimized cloud environments have been introduced by cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, existing parallel architectures are not designed to make use of elasticity. This thesis addresses several challenges in the emergent field of High Performance Cloud Computing. In particular, the presented contributions focus on the novel opportunities and challenges related to elasticity. 
First, the principles of elastic parallel systems as well as related design considerations are discussed in detail. On this basis, two exemplary elastic parallel system architectures are presented, each of which includes (1) an elasticity controller that controls the number of processing units based on user-defined goals, (2) a cloud-aware parallel execution model that handles coordination and synchronization requirements in an automated manner, and (3) a programming abstraction to ease the implementation of elastic parallel applications. To automate application delivery and deployment, novel approaches are presented that generate the required deployment artifacts from developer-provided source code in an automated manner while considering application-specific non-functional requirements. Throughout this thesis, a broad spectrum of design decisions related to the construction of elastic parallel system architectures is discussed, including proactive and reactive elasticity control mechanisms as well as cloud-based parallel processing with virtual machines (Infrastructure as a Service) and functions (Function as a Service). To evaluate these contributions, extensive experimental evaluations are presented.
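One of the simplest instances of the reactive elasticity control mentioned above is a rule that sizes the worker pool from the backlog of pending tasks. The sketch below is a hypothetical illustration of that idea; the parameter names and the tasks-per-worker heuristic are assumptions for this example, not the controller design from the thesis:

```python
def scale_decision(pending_tasks, workers,
                   tasks_per_worker=4, min_workers=1, max_workers=64):
    """Reactive elasticity rule: aim for roughly `tasks_per_worker`
    pending tasks per processing unit, clamped to a configured range.
    Returns the delta: positive = provision, negative = decommission."""
    target = -(-pending_tasks // tasks_per_worker)  # ceiling division
    target = max(min_workers, min(max_workers, target))
    return target - workers
```

A proactive controller would replace the instantaneous backlog with a forecast of future task arrivals; the trade-off between the two is one of the design decisions the thesis discusses.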
The objective of the project presented here is to develop an intelligent control algorithm for an energy system consisting of a biogas CHP (combined heat and power) unit, various storage technologies, such as thermal energy storages (TES) and gas storages, and other renewable energy sources, such as photovoltaics. A corresponding algorithm based on the Monte-Carlo method has already been developed at Reutlingen University for CHP units running on natural gas and for heat pumps. The project presented here concentrates on the further development of this algorithm for application to biogas CHP units. In this context, an adequate implementation of the gas storage is of primary importance, as it mainly determines the flexibility of the plant. In the course of the validation of the new optimization algorithm, simulations were carried out based on data from the Lower Lindenhof, an agricultural experimental station of the University of Hohenheim. Both an optimization with regard to onsite electricity utilization and an optimization driven by residual load were investigated. Preliminary results show that the optimization algorithm can improve the operation of the biogas CHP unit depending on the selected target function.
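The Monte-Carlo idea behind such an algorithm can be illustrated with a toy sketch: sample random hourly on/off schedules for the CHP unit, reject any schedule that violates the gas-storage bounds, and keep the feasible schedule that covers the most onsite electricity demand. All units, parameter names, and the simple storage model here are hypothetical; this is not the actual Reutlingen algorithm:

```python
import random

def optimize_chp_schedule(demand, chp_power, gas_inflow, storage_cap,
                          gas_per_hour, n_samples=5000, seed=1):
    """Monte-Carlo search over hourly on/off schedules for a biogas CHP.
    The gas storage fills at a constant biogas inflow and is drained
    while the unit runs; schedules that over- or under-run the storage
    are rejected. Objective: maximize electricity covered onsite."""
    rng = random.Random(seed)
    hours = len(demand)
    best_sched, best_score = None, -1.0
    for _ in range(n_samples):
        sched = [rng.random() < 0.5 for _ in range(hours)]
        level, feasible = storage_cap / 2, True  # start half full
        for on in sched:
            level += gas_inflow - (gas_per_hour if on else 0)
            if not 0 <= level <= storage_cap:
                feasible = False  # storage empty or overflowing
                break
        if not feasible:
            continue
        # Onsite coverage: the CHP can serve at most its rated power.
        score = sum(min(chp_power, d)
                    for on, d in zip(sched, demand) if on)
        if score > best_score:
            best_sched, best_score = sched, score
    return best_sched, best_score
```

A residual-load-driven variant would simply swap the objective function, scoring schedules against the grid's residual load instead of the onsite demand.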
The term "innovation enabling" denotes, in what follows, a concept for the holistic support of interdisciplinary teams in creative and innovative problem solving. This concept supports moderators and participants alike, and a system realized with it remains in the background for the user through implicit interaction. A central role is played by the concept of the awareness pipeline for implementing implicit interaction on the basis of a sensor-actuator system, which is presented in this article. Support for the accompanying moderation and administration tasks, such as automated documentation of the session, is intended in the future to offer clear added value over a classic brainstorming session.