Using the example of two companies with strongly differing electricity and heat demand profiles, it is shown that, with a payback period of roughly two years in the most favorable case, the use of combined heat and power (CHP) units is economically worthwhile in any case. It also becomes clear that the sizing of the CHP unit depends strongly on the electricity and heat demand values, and that the buffer storage tank should by no means be undersized. This good economic result already holds for the standard heat-led operation of the CHP unit; an intelligent, electricity-optimized control scheme with peak-load management improves the economics further. In principle, it must be kept in mind that CHP units are designed for long-term operation: at annual operating times of 4,000 to 8,000 hours, the CHP unit will run for 6 to 12 years.
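The stated figures can be reproduced with simple arithmetic. The sketch below assumes a design lifetime of 48,000 operating hours, which is an assumption consistent with the stated 6-to-12-year range at 4,000 to 8,000 hours per year; the investment and savings figures are purely illustrative.

```python
# Back-of-the-envelope CHP arithmetic; all figures are illustrative assumptions.

def payback_years(investment_eur: float, annual_savings_eur: float) -> float:
    """Simple (static) payback period: investment divided by annual savings."""
    return investment_eur / annual_savings_eur

def service_life_years(design_life_hours: float, annual_hours: float) -> float:
    """Years of operation until the design lifetime is reached."""
    return design_life_hours / annual_hours

# Assumed design lifetime of 48,000 h, consistent with the 6-12 year range.
DESIGN_LIFE_HOURS = 48_000

print(service_life_years(DESIGN_LIFE_HOURS, 8_000))  # 6.0 years at high utilization
print(service_life_years(DESIGN_LIFE_HOURS, 4_000))  # 12.0 years at low utilization
```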
Social selling is becoming increasingly important in B2B sales. To keep up in the global competitive arena, it is no longer enough to identify and reach customers through traditional sales channels alone. For this purpose, LinkedIn offers the sales tool Sales Navigator. But where does this tool fit within social selling? Can it generate added value in the sales process? These questions are discussed below with the help of expert interviews.
Special-purpose machine building is characterized by a high variety of variants and complex material flows (Reinhart, Bredow & Pohl, 2009, p. 131). This publication presents the approaches to material flow optimization in special-purpose machine building that were applied in the course of a practical project, and works out the lessons learned and the obstacles encountered.
For 26 years, Michael Wörz held the office and fulfilled the duties of advisor for engineering and business ethics at the universities of applied sciences of the state of Baden-Württemberg. The interdisciplinarity that these duties demand was written into his professional record from the start: an engineer who studied philosophy and earned his doctorate in it, and who moreover advanced his scholarly work from the perspective of a sociologist's theories, had learned to think and work across disciplines. It fell to him to carry the cross-disciplinary knowledge necessary for the survival of future generations into the academic curricula of the universities of applied sciences, and over the years of his tenure Michael Wörz truly did so. Alongside these duties of a committed and innovative university teacher, he was also a combative advocate of ethics and sustainable development in politics, helping to pave the way for those teaching at the universities.
The flexible and easy-to-use integration of production equipment and IT systems on the shop floor is increasingly a success factor for manufacturers that must adapt rapidly to changing situations. The Manufacturing Integration Assistant (MIALinx) aims to simplify this challenge. The integration steps range from connecting sensors, through collecting and rule-based processing of sensor information, to executing the required actions. This paper presents the implementation of MIALinx to retrofit legacy machines for Industry 4.0 in a manufacturing environment and focuses on the concept and implementation of the easy-to-use user interface as a key element.
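The rule-based processing step described above can be sketched as a minimal if-condition-then-action engine over sensor snapshots. The sensor names, thresholds, and actions below are hypothetical illustrations; MIALinx's actual rule model is not detailed in this abstract.

```python
# Minimal if-condition-then-action rule engine over sensor readings.
# Sensor names, thresholds, and actions are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, float]], bool]  # evaluated on a sensor snapshot
    action: Callable[[], str]                      # fired when the condition holds

def evaluate(rules: List[Rule], readings: Dict[str, float]) -> List[str]:
    """Run every rule whose condition matches the current readings."""
    return [rule.action() for rule in rules if rule.condition(readings)]

rules = [
    Rule("overtemperature",
         lambda r: r.get("spindle_temp_c", 0.0) > 80.0,
         lambda: "notify_maintenance"),
    Rule("low_pressure",
         lambda r: r.get("air_pressure_bar", 10.0) < 5.0,
         lambda: "stop_machine"),
]

print(evaluate(rules, {"spindle_temp_c": 85.0, "air_pressure_bar": 6.2}))
# ['notify_maintenance']
```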
Design thinking is inherently and invariably oriented towards the future, in that all design is for products, services or events that will exist in the future and be used by people in the future. This creates an overlap between the domains of design thinking and strategic foresight. A small but significant literature has grown up in the strategic foresight field on how design thinking may be used to improve its processes. This paper considers the other side of the relationship: how methods from the strategic foresight field may advance design thinking, improving insight into the needs and preferences of users of tomorrow, including how contextual change may suddenly and fundamentally reshape these. A side-by-side comparison of representative models from each field is presented, and it is shown how they may be combined to create foresight-informed design-based innovation.
In smart factories, maintenance remains an important aspect of safeguarding production performance. Especially in the case of machine component failures, diagnosis is a time-consuming task. This paper presents an approach for a cyber-physical failure management system that uses information from machines, such as programmable logic controller (PLC) or sensor data, and from IT systems to support the diagnosis and repair process. The key element is a model that combines the different information sources to detect deviations and to determine the probable failed component. Furthermore, the approach is prototypically implemented for leakage detection in compressed air networks.
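The deviation-detection idea can be illustrated with a toy model: compare measured pressures against model-expected values per network segment and flag the most deviant segment as the probable failure location. Segment names and the tolerance are hypothetical; the paper's actual model combining PLC and IT-system data is necessarily richer.

```python
# Toy deviation detection for a compressed-air network: compare measured
# pressures against model-expected values and flag the most deviant segment.
# Segment names and the tolerance value are hypothetical illustrations.
from typing import Dict, Optional

def probable_failed_segment(expected: Dict[str, float],
                            measured: Dict[str, float],
                            tolerance_bar: float = 0.2) -> Optional[str]:
    """Return the segment deviating most from the model, if beyond tolerance."""
    deviations = {seg: abs(expected[seg] - measured[seg]) for seg in expected}
    worst = max(deviations, key=deviations.get)
    return worst if deviations[worst] > tolerance_bar else None

expected = {"segment_a": 6.0, "segment_b": 6.0, "segment_c": 6.0}
measured = {"segment_a": 5.95, "segment_b": 5.1, "segment_c": 5.9}
print(probable_failed_segment(expected, measured))  # segment_b (probable leak)
```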
The complexity of supply chains is increasing, especially due to the geographical spread of supplier and customer networks. In the connected and automated supply chains of Industry 4.0, even more nodes are incorporated into supply chains. This paper discusses the possible improvement of process quality in Industry 4.0 through different blockchain and distributed ledger technologies. We derived hypotheses from a literature review and asked German blockchain experts from industry to validate and discuss them. We find that the different blockchain technologies and consensus algorithms have different strengths with regard to quality improvement. One central finding is that IOTA, developed especially for the IoT and deemed the 'next evolutionary step', is scalable and hence may increase process efficiency, but at the same time is more vulnerable than other blockchain implementations, which in turn may reduce overall process quality.
With the capability of employing virtually unlimited compute resources, the cloud has evolved into an attractive execution environment for applications from the High Performance Computing (HPC) domain. By means of elastic scaling, compute resources can be provisioned and decommissioned at runtime. This gives rise to a new concept in HPC: elasticity of parallel computations. However, it is still an open research question to what extent HPC applications can benefit from elastic scaling and how to leverage elasticity of parallel computations. In this paper, we discuss how to address these challenges for HPC applications with dynamic task parallelism and present TASKWORK, a cloud-aware runtime system based on our findings. TASKWORK enables the implementation of elastic HPC applications by means of higher-level development frameworks and solves the corresponding coordination problems based on Apache ZooKeeper. For evaluation purposes, we discuss a development framework for parallel branch-and-bound based on TASKWORK, show how to implement an elastic HPC application, and report on measurements with respect to parallel efficiency and elastic scaling.
Due to frequently changing requirements, the internal structure of cloud services is highly dynamic. To ensure flexibility, adaptability, and maintainability for dynamically evolving services, modular software development has become the dominant paradigm. Following this approach, services can be rapidly constructed by composing existing, newly developed, and publicly available third-party modules. However, newly added modules might be unstable, resource-intensive, or untrustworthy. Thus, satisfying non-functional requirements such as reliability, efficiency, and security while ensuring rapid release cycles is a challenging task. In this paper, we discuss how to tackle these issues by employing container virtualization to isolate modules from each other according to a specification of isolation constraints. We satisfy non-functional requirements for cloud services by automatically transforming a service's modules into a container-based system. To deal with the increased overhead caused by isolating modules from each other, we calculate the minimum set of containers required to satisfy the specified isolation constraints. Moreover, we present a prototypical transformation pipeline that automatically transforms cloud services developed with the Java Platform Module System into container-based systems, and report on it.
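The grouping step can be viewed as graph coloring on a conflict graph: two modules joined by an isolation constraint must not share a container, and minimizing containers corresponds to minimizing colors. The greedy sketch below is a heuristic, not the paper's exact minimization; module names and constraints are hypothetical.

```python
# Greedy grouping of modules into containers: two modules joined by an
# isolation constraint must end up in different containers. This is greedy
# graph coloring, a heuristic rather than an exact minimization.
from typing import Dict, List, Set, Tuple

def group_modules(modules: List[str],
                  isolation: Set[Tuple[str, str]]) -> Dict[int, List[str]]:
    # Build the conflict graph from the pairwise isolation constraints.
    conflicts: Dict[str, Set[str]] = {m: set() for m in modules}
    for a, b in isolation:
        conflicts[a].add(b)
        conflicts[b].add(a)
    # Assign each module the lowest-numbered container with no conflict.
    assignment: Dict[str, int] = {}
    for m in modules:
        used = {assignment[n] for n in conflicts[m] if n in assignment}
        container = 0
        while container in used:
            container += 1
        assignment[m] = container
    # Invert the assignment into container -> modules.
    containers: Dict[int, List[str]] = {}
    for m, c in assignment.items():
        containers.setdefault(c, []).append(m)
    return containers

mods = ["auth", "billing", "thirdparty-analytics", "ui"]
constraints = {("thirdparty-analytics", "auth"), ("thirdparty-analytics", "billing")}
print(group_modules(mods, constraints))
# {0: ['auth', 'billing', 'ui'], 1: ['thirdparty-analytics']}
```

Only the untrusted third-party module is isolated into its own container; the mutually compatible modules share one, keeping the container count (and thus the virtualization overhead) low.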