In modern collaborative production environments where industrial robots and humans are supposed to work hand in hand, it is mandatory to observe the robot’s workspace at all times. Such observation is even more crucial when the robot’s main position is also dynamic, e.g., because the system is mounted on a movable platform. As current solutions, such as physically secured areas in which a robot can perform actions that are potentially dangerous for humans, become unfeasible in such scenarios, novel, more dynamic, and situation-aware safety solutions need to be developed and deployed.
This thesis mainly contributes to the bigger picture of such a collaborative scenario by presenting a data-driven, convolutional neural network-based approach to estimate the two-dimensional kinematic-chain configuration of industrial robot arms within raw camera images. It also provides the information needed to generate and organize the required data basis and presents the frameworks that were used to realize all involved subsystems. The robot arm’s extracted kinematic chain can also be used to estimate the extrinsic camera parameters relative to the robot’s three-dimensional origin. Furthermore, a tracking system based on a two-dimensional kinematic-chain descriptor is presented that accumulates a movement history, enabling the prediction of future target positions within the given image plane. The combination of the extracted robot pose with a simultaneous human pose estimation system delivers a consistent data flow that can be used in higher-level applications.
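To make the extrinsic-calibration step concrete, the following minimal sketch relates a set of 2D joint detections to known 3D joint positions via the Perspective-n-Point problem in OpenCV; the joint coordinates, camera intrinsics, and pixel values are illustrative assumptions and do not reproduce the thesis’ actual pipeline.

```python
# Minimal sketch: estimating extrinsic camera parameters from detected 2D joint
# positions of a robot arm, assuming the corresponding 3D joint coordinates in the
# robot's base frame and the camera intrinsics are known. Values are illustrative.
import numpy as np
import cv2

# 3D joint positions in the robot base frame (metres), e.g. from forward kinematics
object_points = np.array([
    [0.00, 0.00, 0.00],   # base
    [0.00, 0.00, 0.40],   # shoulder
    [0.35, 0.00, 0.40],   # elbow
    [0.70, 0.00, 0.40],   # wrist
], dtype=np.float64)

# Corresponding 2D keypoints predicted by the CNN in the image (pixels)
image_points = np.array([
    [612.0, 480.0],
    [605.0, 310.0],
    [742.0, 305.0],
    [880.0, 300.0],
], dtype=np.float64)

# Assumed pinhole intrinsics and no lens distortion
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Solve the Perspective-n-Point problem for rotation and translation
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # camera rotation w.r.t. the robot origin
print("R =\n", R, "\nt =", tvec.ravel())
```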
This thesis also provides a detailed evaluation of all involved subsystems and gives a broad overview of their individual performance, based on newly generated, semi-automatically annotated real datasets.
Advancements in Internet of Things (IoT), cloud and mobile computing have fostered the digital enrichment—or “digitization”—of physical products, which are gaining increasing relevance in practice. According to recent studies, global IoT spending will exceed USD 1 Trillion by 2021 and there will be over 25 billion IoT connections (KPMG, 2018). Porter and Heppelmann (2014) state that IT is “revolutionizing products [as …] IT is becoming an integral part of the product itself.” Senior business executives like GE’s former CEO Jeff Immelt (2015) are even proposing that “every industrial company in the coming age is also going to become a software and analytics company.” This reflects the increasing relevance of IT components’ (i.e., software, data analytics, cloud computing) integration into previously purely physical products. We call IT-enriched physical products “digitized” products to differentiate them from purely intangible “digital” products, such as digital music, e-books, and software. Examples of digitized products include the Philips Hue smartphone-controllable lightbulb, Audi Connect internet-connected cars, or Rolls-Royce’s sensor-enabled pay-per-use jet engines.
Digitized products provide their producers with a wide range of opportunities to offer new functionality and product capabilities (e.g., autonomy) that traditional, physical products do not exhibit (Porter and Heppelmann, 2014). In addition, the digitization of products allows producers to continuously repurpose their offerings, by extending and/or changing the product functionality and, thus, enabling new value creation opportunities. Based on their re-programmability and connectivity, digitized products “remain essentially incomplete […] throughout their lifetime as users continue to add and delete […] and change […] functional capabilities” (Yoo, 2013). For instance, the Philips Hue connected lightbulb enables remote control of basic functions (e.g., switching on and off the light) as well as setting more advanced light scenes for day-to-day tasks (e.g., relax, read) via Amazon’s Alexa artificial intelligence assistant (Signify, 2019), offerings that were not intended use cases when Signify (previously known as Philips Lighting) created Hue in 2012. Thus, digitized products present limitless potentials for new functionality and unforeseen use cases, which provides them with a huge innovation capacity.
Despite the limitless potentials offered by digitized products, there has been a slow uptake of digitized products by businesses so far (Jernigan et al., 2016; Mocker et al., 2019). According to a 2016 MIT Sloan Management Review report (Jernigan et al., 2016), only 24% of the investigated firms were actively using IoT technologies – a key technology for digitized products. In a more recent study, Mocker et al. (2019) found that the median revenue share from digital offerings (i.e., solutions based on IT-enriched products) in large companies only accounted for 5% of the total revenue of the investigated companies.
The slow uptake of digitized products might be explained by the challenges that firms face regarding the changing nature of digitized products. Pervasive digital technologies (such as IoT) change the nature of products by adding new functionality that was previously not part of the value proposition of the products/services (e.g., a pair of shoes embedded with sensors and connectivity allows joggers to have access to data regarding their run distance, speed, etc.) (Yoo et al., 2012). The addition of new functionality and use cases of digitized products makes it harder for producers to design and develop relevant products (Hui 2014). As described in the paper ‘Do Your Customers Actually Want a “Smart” Version of Your Product?’, “just because [firms] can make something with IoT technology doesn’t mean people will want it.” (Smith, 2017).
The shift in digitized products’ nature poses new challenges for producers along the entire product development process (Porter and Heppelmann, 2015; Yoo et al., 2012) and creates a paradox in product digitization, described by Yoo et al. (2012) as the paradox of pace: while technology accelerates the rate of innovation, companies need to spend more time to digitize their products, extending time to market. The production of these digitized products also becomes more challenging, e.g., as companies need to deal with different clock-speeds of software and hardware development (Porter and Heppelmann, 2015). The above-mentioned challenges suggest that producers need to better understand how they can generate value from their digitized products’ generative potentials.
The body of literature on digitized products has been growing in recent years. For instance, Herterich et al. (2016) investigate how digitized product affordances (i.e., potentials) enable industrial service innovation; Nicolescu et al. (2018) explore the emerging meanings of value associated with IoT; and Benbunan-Fich (2019) studies the impact of basic wearable sensors on the quality of the user experience. However, it remains unclear what it takes for firms to generate value with their digitized product potentials. This dissertation investigates this research gap.
The digital age makes it possible to be globally networked at any time. Digital communication is therefore an important aspect of today’s world, and its further development and expansion are becoming increasingly important. Even within a wireless system, copper channels are important as part of the overall network. Given the need to keep pushing the current limitations, careful design of the cables in combination with an adapted coding of the bits is essential to transmit ever more data.
One of the most popular and widespread cabling technologies is symmetrical copper cabling [1, pp. 8-15]. It is also known as Twisted Pair and it is of immense importance for the cabling of communication networks.
At the time of writing this thesis, data rates of up to 10 GBit/s over a transmission distance of 100 m and 40 GBit/s over a transmission distance of 30 m are standardized for symmetrical copper cabling [2]. Other lengths are not standardized. Short lengths in particular are of great interest for copper cables, because copper cables are usually used for short distances, such as between computers and the campus network or within data centres.
This work has focused on the transmission of higher-order Pulse Amplitude Modulation and the associated transmission performance. The central research question is: “How well can we optimize the transmission technique in order to maximise the data bandwidth over Ethernet cable and, given that remote powering is also a significant application of these cables, how much will the resulting heating affect this transmission and what can be done to mitigate it?”
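As a back-of-the-envelope illustration of why higher-order PAM is attractive here, the following sketch relates gross bit rate, modulation order, and required symbol rate; the figures are examples, not measurement results from this work.

```python
# Illustrative relation between PAM order, symbol rate and gross bit rate.
import math

def pam_symbol_rate(bit_rate_gbps: float, levels: int) -> float:
    """Gross symbol rate in GBaud needed to carry bit_rate_gbps with PAM-<levels>."""
    bits_per_symbol = math.log2(levels)
    return bit_rate_gbps / bits_per_symbol

for levels in (2, 4, 8, 16):
    print(f"PAM-{levels}: {pam_symbol_rate(10.0, levels):.2f} GBaud for 10 Gbit/s")
# Higher-order PAM lowers the required bandwidth but shrinks the eye opening,
# i.e. the signal-to-noise margin per decision level.
```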
To answer this question, the cable parameters are first examined. A series of spectral measurements, such as Insertion Loss, Return Loss, Near End Crosstalk and Far End Crosstalk, provide information about the electromagnetic interference and the influence of the ohmic resistance on the signal. Based on these findings, the first theoretical statements and calculations can be made. In the next step, data transmissions over different transmission lengths are realized. The examination of the eye diagrams of the different transmission approaches ultimately provides information about the signal quality of the transmissions. An overview of the maximum transmission rate depending on the transmission distance shows the potential for different applications.
Furthermore, the simultaneous transmission of energy and data is a significant advantage of copper. However, the resulting heat development has an influence on the data transmission. Therefore, the influence of the ambient temperature of cables is investigated in the last part and changes in the signal quality are clarified.
The targeted design of monodisperse, mesoporous silica microspheres (MPSMs) as HPLC separation phases is still a challenge. MPSMs can be generated via a multi-step template-assisted method. However, this method and the factors affecting the individual process steps and resulting material properties are scarcely understood, and specific control of the complex multi-step process has hardly been discussed. In this work, the key synthesis steps were systematically investigated by means of statistical Design of Experiments (DoE). In particular, three steps were considered in detail: 1) the synthesis of porous poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) (p(GMA-co-EDMA)) particles, which, as template particles, determine the structure of the final MPSMs. In this context, functional models were generated, which allow control of the template properties pore volume, pore size, and specific surface area. 2) In the presence of amino-functionalized template particles, the sol-gel process was carried out under Stöber process conditions. The water to tetraethyl orthosilicate (TEOS) ratio, as well as the concentration of ammonia as basic catalyst, were varied according to a face-centered central composite design (FCD). The incorporation of silica nanoparticles (SNPs) into the pore network of the porous polymers was investigated by scanning electron microscopy (SEM), evaluation of the pore properties by nitrogen sorption measurements, and determination of the inorganic content by thermogravimetric analysis (TGA). Here, the material properties of the resulting organic/silica hybrid material (hybrid beads, HBs), such as the amount of attached silica, can be specifically controlled. Furthermore, depending on the sol-gel conditions, three, potentially four, reaction regimes were identified, leading to different HBs. These range from porous polymer particles coated with a thin protective silica layer, to interpenetrating networks of polymer and silica, to potential particles consisting of a porous polymer core coated with a silica shell. Also, the effects of different precursors and solvents on silica incorporation were investigated. 3) To obtain MPSMs from the HBs, the organic polymer template was removed by calcination. The effects of the sol-gel process conditions on the resulting MPSMs were evaluated, and relationships between process conditions and material properties were captured in predictive models. Fully porous, spherical, monodisperse silica particles with sizes ranging from 0.5 µm to 7.8 µm and pore sizes from 3.5 nm to 72.4 nm can be prepared specifically. Following organo-functionalization, the prepared MPSMs were applied as reversed-phase HPLC column materials. Here, the columns were successfully applied for the separation of proteins and amino acids. The separation performance of the materials depends largely on the property profile of the MPSMs, which is predetermined during the preparation of the HBs.
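As an illustration of the experimental-design step, the following sketch constructs the coded runs of a face-centered central composite design for two factors in plain NumPy; the factor ranges shown are hypothetical placeholders, not the actual synthesis conditions used in this work.

```python
# Minimal sketch of a face-centered central composite design (FCD) for two sol-gel
# factors (water/TEOS ratio and ammonia concentration). Coded levels are -1, 0, +1;
# the mapping to real values below is an illustrative assumption.
import numpy as np

def face_centered_ccd(n_center: int = 3) -> np.ndarray:
    """Coded design matrix for two factors: factorial, axial (alpha = 1), centre runs."""
    factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
    axial = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])   # points on the cube faces
    center = np.zeros((n_center, 2))
    return np.vstack([factorial, axial, center])

def decode(coded: np.ndarray, low: float, high: float) -> np.ndarray:
    """Map a coded column from [-1, 1] to real factor settings."""
    mid, half = (high + low) / 2.0, (high - low) / 2.0
    return mid + half * coded

design = face_centered_ccd()
water_teos = decode(design[:, 0], low=4.0, high=12.0)    # hypothetical ratio range
ammonia = decode(design[:, 1], low=0.1, high=0.5)        # hypothetical mol/L range
for run, (x, y) in enumerate(zip(water_teos, ammonia), 1):
    print(f"run {run:2d}: H2O/TEOS = {x:5.1f}, NH3 = {y:4.2f} mol/L")
```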
After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. This book presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), a novel interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic approach, similar to the roundup of a sheep herd, is to let autonomous layout modules interact with each other inside a successively tightened layout zone. Considering various principles of self-organization, remarkable overall solutions can result from the individual, local, selfish actions of the modules. Displaying this fascinating phenomenon of emergence, examples demonstrate SWARM’s suitability for floorplanning purposes and its application to practical place-and-route problems. From an academic point of view, SWARM combines the strengths of procedural generators with the assets of optimization algorithms, thus paving the way for a new automation paradigm called bottom-up meets top-down.
A distinctive highlight of the dissertation at hand is the investigation of multiple apparel supply chain actors incorporating the views of a global apparel retailer in Europe and multiple suppliers in Vietnam and Indonesia.
More specifically, the dissertation presents a coherent investigation starting with the depiction of a conceptual framework for social management strategies as a means for social risk management (SRM), aimed exclusively at the apparel industry. In accordance with the identified research gaps and suggested research directions from the conceptual framework, the role of the apparel sourcing agent for social management strategies was analysed by conducting a multiple case study approach with evidence from Vietnam and Europe, ultimately suggesting ten propositions. A further multiple case study data collection in Vietnam, Indonesia and Europe allowed for the investigation of buyer-supplier relationships with regard to social compliance strategies, using core tenets of agency theory to interpret the findings and outline ten propositions. Based on the development of a conceptual framework on social SSCM in the apparel industry, the formulation of 20 related propositions with evidence from crucial developing (apparel sourcing) countries, and the application of agency theory, which has been declared a shortfall in this context, this thesis provides further grounding for SSCM theory and substantially contributes to the debate by addressing numerous research gaps.
Reconstructing 3D face shape from a single 2D photograph as well as from video is an inherently ill-posed problem with many ambiguities. One way to resolve some of the ambiguities is to use a 3D face model to aid the task. 3D morphable face models (3DMMs) are amongst the state-of-the-art methods for 3D face reconstruction, also called 3D model fitting. However, current existing methods have severe limitations, and most of them have not been trialled on in-the-wild data. Current analysis-by-synthesis methods form complex non-linear optimisation processes, and optimisers often get stuck in local optima. Further, most existing methods are slow, requiring in the order of minutes to process one photograph.
This thesis presents an algorithm to reconstruct 3D face shape from a single image as well as from sets of images or video frames in real time. We introduce a solution for linear fitting of a PCA shape identity model and expression blendshapes to 2D facial landmarks. To improve the accuracy of the shape, a fast face contour fitting algorithm is introduced. These different components of the algorithm are run in iteration, resulting in a fast, linear shape-to-landmarks fitting algorithm. The algorithm is specifically designed to fit to landmarks obtained from in-the-wild images, tackling imaging conditions that occur in such images, like facial expressions and the mismatch of 2D–3D contour correspondences. It achieves the shape reconstruction accuracy of much more complex, non-linear state-of-the-art methods while being multiple orders of magnitude faster.
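The following sketch illustrates the core idea of linear shape-to-landmarks fitting under a fixed affine camera, solved as a regularised linear least-squares problem; the dimensions, camera model, and regularisation weight are illustrative assumptions rather than the exact formulation used in the thesis.

```python
# Minimal sketch: solve for PCA shape coefficients given 2D landmarks and a fixed
# (here scaled orthographic) camera, as one iteration of an alternating scheme.
import numpy as np

def fit_shape_coefficients(landmarks_2d, mean_shape, basis, P, lam=30.0):
    """
    landmarks_2d : (L, 2)  detected 2D landmarks
    mean_shape   : (3L,)   stacked 3D vertices of the mean face at the landmark points
    basis        : (3L, K) PCA shape basis restricted to the landmark vertices
    P            : (2, 3)  fixed affine camera (e.g. scaled orthographic projection)
    Returns the K coefficients alpha minimising
        || P (mean + basis * alpha) - landmarks ||^2 + lam * ||alpha||^2
    """
    L = landmarks_2d.shape[0]
    P_full = np.kron(np.eye(L), P)                  # block-diagonal projection, (2L, 3L)
    A = P_full @ basis                              # (2L, K)
    b = landmarks_2d.reshape(-1) - P_full @ mean_shape
    K = basis.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ b)

# Toy usage with random data of plausible dimensions (68 landmarks, 10 components)
rng = np.random.default_rng(0)
alpha = fit_shape_coefficients(rng.normal(size=(68, 2)),
                               rng.normal(size=68 * 3),
                               rng.normal(size=(68 * 3, 10)),
                               np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))
print(alpha.shape)  # (10,)
```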
Second, we address the problem of fitting to sets of multiple images of the same person, as well as monocular video sequences. We extend the proposed shape-to-landmarks fitting to multiple frames by using the knowledge that all images are from the same identity. To recover facial texture, the approach uses texture from the original images, instead of employing the often-used PCA albedo model of a 3DMM. We employ an algorithm that merges texture from multiple frames in real-time based on a weighting of each triangle of the reconstructed shape mesh.
Last, we make the proposed real-time 3D morphable face model fitting algorithm available as open-source software. In contrast to ubiquitously available 2D-based face models and code, there is a general lack of software for 3D morphable face model fitting, hindering widespread adoption. The library thus constitutes a significant contribution to the community.
Product-Service Systems (PSS) in the fashion industry : an analysis of intra-organizational factors
(2018)
The fashion industry is a vast industry that has grown tremendously over the last decades. This growth causes significant environmental impact since the production of clothes involves a high input of energy, water, and chemicals and generates great volumes of waste. Even though fashion firms have started to address this challenge by adopting environmental standards, it has turned out that the sole use of eco-friendly material and new manufacturing techniques is insufficient. Instead, sustainable business models are increasingly gaining attention as a means to solve the environmental problems. Offers to rent, swap, repair or redesign clothes are among the most prominent and promising examples. For analytical purposes, these concepts can be assigned to the growing research stream of Product-Service Systems (PSS) that shift the focus from the pure sale of a product toward complementary or substitutional service offers. This decouples customer satisfaction from material consumption, prolongs the garments' lifetime and thus diminishes both material input and appertaining waste. Besides environmental sustainability, PSS imply potential economic benefits for organizations. Particularly in highly competitive industries like the fashion industry, PSS allow firms to differentiate, better cope with cost pressure and mitigate the risk of being imitated by rivals since service is more difficult to replicate. However, fashion PSS are still mainly operated in a niche market by small firms and have yet to be anchored in the mainstream fashion industry.
This thesis studies concurrency control and composition of transactions in computing environments with long-living transactions where local data autonomy of transactions is indispensable. This kind of computing architecture is referred to as a Disconnected System, where reads are segregated – disconnected – from writes, enabling local data autonomy. Disconnecting reads from writes is inspired by Bertrand Meyer's "Command Query Separation" pattern. This thesis provides a simple yet precise definition for a Disconnected System with a focus on transaction management. Concerning concurrency control, transaction management frameworks implement a 'one concurrency control mechanism fits all needs' strategy. This strategy, however, does not consider specific characteristics of data access. The thesis shows the limitations of this strategy if transaction load increases, transactions are long lived, local data autonomy is required, and serializability is the targeted isolation level. For example, in optimistic mechanisms the number of aborts suddenly increases if load increases. In pessimistic mechanisms locking causes long blocking times and is prone to deadlocks. These findings are not new, and a common solution used by database vendors is to reduce the isolation level. This thesis proposes a novel approach: choosing the concurrency control mechanism according to the semantics of data access of a certain data item. As a result, a transaction may execute under several concurrency control mechanisms. The idea is to introduce lanes similar to a motorway, where each lane is dedicated to a certain class of vehicle with the same characteristics. Whereas disconnecting reads and writes sets the traffic's direction, the semantics of data access defines the lanes. This thesis introduces four concurrency control classes capturing the semantics of data access, each with an associated tailored concurrency control mechanism. Class O (the optimistic class) implements a first-committer-wins strategy, class R (the reconciliation class) implements a first-n-committers-win strategy, class P (the pessimistic class) implements a first-reader-wins strategy, and class E (the escrow class) implements a first-n-readers-win strategy. In contrast to solutions that adapt the concurrency control mechanism during runtime, the idea is to classify data during the design phase of the application and adapt the classification only in certain cases at runtime. The result of the thesis is a transaction management framework called O|R|P|E. A performance study based on the TPC-C benchmark shows that O|R|P|E has a better performance and a considerably higher commit rate than other solutions. Moreover, the thesis shows that in O|R|P|E aborts are due to application-specific limitations, i.e., constraint violations, and not due to serialization conflicts. This is a result of considering the semantics of data access.
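The following sketch illustrates the classification idea behind O|R|P|E: data items are assigned to one of the four classes at design time and a transaction touching an item runs under the associated mechanism. The item names and the dispatch structure are illustrative assumptions, not the framework's actual API.

```python
# Conceptual sketch of design-time data classification into the four O|R|P|E
# concurrency control classes. Class semantics follow the abstract above.
from enum import Enum

class CCClass(Enum):
    O = "optimistic: first committer wins"
    R = "reconciliation: first n committers win"
    P = "pessimistic: first reader wins (locking)"
    E = "escrow: first n readers win (bounded counters)"

# Hypothetical design-time classification by data-access semantics
CLASSIFICATION = {
    "customer.address":   CCClass.O,  # rarely contended, validate at commit
    "order.comments":     CCClass.R,  # concurrent appends can be reconciled
    "order.status":       CCClass.P,  # conflicting writes must be serialised by locks
    "warehouse.quantity": CCClass.E,  # decrements within a quota may run concurrently
}

def mechanism_for(item: str) -> CCClass:
    """A transaction touching `item` runs under the mechanism of the item's class."""
    return CLASSIFICATION[item]

for item in CLASSIFICATION:
    print(f"{item:20s} -> {mechanism_for(item).value}")
```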
Context: Fast moving markets and the age of digitization require that software can be quickly changed or extended with new features. The associated quality attribute is referred to as evolvability: the degree of effectiveness and efficiency with which a system can be adapted or extended. Evolvability is especially important for software with frequently changing requirements, e.g. internet-based systems. Several evolvability-related benefits were arguably gained with the rise of service-oriented computing (SOC) that established itself as one of the most important paradigms for distributed systems over the last decade. The implementation of enterprise-wide software landscapes in the style of service-oriented architecture (SOA) prioritizes loose coupling, encapsulation, interoperability, composition, and reuse. In recent years, microservices quickly gained in popularity as an agile, DevOps-focused, and decentralized service-oriented variant with fine-grained services. A key idea here is that small and loosely coupled services that are independently deployable should be easy to change and to replace. Moreover, one of the postulated microservices characteristics is evolutionary design.
Problem Statement: While these properties provide a favorable theoretical basis for evolvable systems, they offer no concrete and universally applicable solutions. As with each architectural style, the implementation of a concrete microservice-based system can be of arbitrary quality. Several studies also report that software professionals trust in the foundational maintainability of service orientation and microservices in particular. A blind belief in these qualities without appropriate evolvability assurance can lead to violations of important principles and therefore negatively impact software evolution. In addition to this, very little scientific research has covered the areas of maintenance, evolution, or technical debt of microservices.
Objectives: To address this, the aim of this research is to support developers of microservices with appropriate methods, techniques, and tools to evaluate or improve evolvability and to facilitate sustainable long-term development. In particular, we want to provide recommendations and tool support for metric-based as well as scenario-based evaluation. In the context of service-based evolvability, we furthermore want to analyze the effectiveness of patterns and collect relevant antipatterns. Methods: Using empirical methods, we analyzed the industry state of the practice and the academic state of the art, which helped us to identify existing techniques, challenges, and research gaps. Based on these findings, we then designed new evolvability assurance techniques and used additional empirical studies to demonstrate and evaluate their effectiveness. Applied empirical methods included, for example, surveys, interviews, (systematic) literature studies, and controlled experiments.
Contributions: In addition to our analyses of industry practice and scientific literature, we provide contributions in three different areas. With respect to metric-based evolvability evaluation, we identified a set of structural metrics specifically designed for service orientation and analyzed their value for microservices. Subsequently, we designed tool-supported approaches to automatically gather a subset of these metrics from machine-readable RESTful API descriptions and via a distributed tracing mechanism at runtime. In the area of scenario-based evaluation, we developed a tool-supported lightweight method to analyze the evolvability of a service-based system based on hypothetical evolution scenarios. We evaluated the method with a survey (N=40) as well as hands-on interviews (N=7) and improved it further based on the findings. Lastly with respect to patterns and antipatterns, we collected a large set of service-based patterns and analyzed their applicability for microservices. From this initial catalogue, we synthesized a set of candidate evolvability patterns via the proxy of architectural modifiability tactics. The impact of four of these patterns on evolvability was then empirically tested in a controlled experiment (N=69) and with a metric-based analysis. The results suggest that the additional structural complexity introduced by the patterns as well as developers' pattern knowledge have an influence on their effectiveness. As a last contribution, we created a holistic collection of service-based antipatterns for both SOA and microservices and published it in a collaborative repository.
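To give a flavour of metric gathering from machine-readable API descriptions, the following sketch counts the operations exposed in an OpenAPI document as a simple size/granularity indicator; the metric choice and file names are assumptions for illustration and do not reproduce the thesis tooling.

```python
# Illustrative sketch: derive a basic structural metric (number of exposed operations)
# from a RESTful service's OpenAPI description. File names are hypothetical.
import json

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def count_operations(openapi_path: str) -> int:
    """Count the operations (path/method pairs) exposed by one service."""
    with open(openapi_path, encoding="utf-8") as f:
        spec = json.load(f)
    return sum(1
               for path_item in spec.get("paths", {}).values()
               for method in path_item
               if method.lower() in HTTP_METHODS)

if __name__ == "__main__":
    for service in ("orders-service.json", "billing-service.json"):  # hypothetical specs
        try:
            print(service, "->", count_operations(service), "operations")
        except FileNotFoundError:
            print(service, "-> file not found (example only)")
```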
Conclusion: Our contributions provide first foundations for a holistic view on the evolvability assurance of microservices and address several perspectives. Metric- and scenario-based evaluation as well as service-based antipatterns can be used to identify "hot spots" while service-based patterns can remediate them and provide means for systematic evolvability construction. All in all, researchers and practitioners in the field of microservices can use our artifacts to analyze and improve the evolvability of their systems as well as to gain a conceptual understanding of service-based evolvability assurance.
With the objective of multimodal, spatially resolved optical spectroscopy for the label-free characterization of biological materials with respect to morphology and chemistry, four main topics are addressed.
1. Theory of elastic/inelastic light scattering and lateral resolution in microscopy
2. Extension of a Raman microscope to a multimodal spectral imaging system (MSIS) with photon-migration technology
3. Extension of the MSIS to super-resolution Raman microscopy with a solid immersion lens
4. Application of the developed MSIS to biological materials
Service robots need to be aware of persons in their vicinity in order to interact with them. People tracking enables the robot to perceive persons by fusing the information of several sensors. Most robots rely on laser range scanners and RGB cameras for this task. The thesis focuses on the detection and tracking of heads. This allows the robot to establish eye contact, which makes interactions feel more natural.
Developing a fast and reliable pose-invariant head detector is challenging. The head detector that is proposed in this thesis works well on frontal heads but is not fully pose-invariant. This thesis further explores adaptive tracking to keep track of heads that do not face the robot. Finally, head detector and adaptive tracker are combined within a new people tracking framework, and experiments show its effectiveness compared to a state-of-the-art system.
Melamine formaldehyde (MF) resins are thermosetting synthetic materials. The present work deals with the evaluation of the impregnation process, the modification of the resin structure, and abrasion-resistant applications. During the industrial process, paper is impregnated with aqueous oligomers. The drying procedure and the corresponding residual volatile content constitute a crucial step during production because of their influence on the later surface quality. Standard measurement routines do not differentiate between volatiles of physical and chemical origin. Using TGA and DSC methods, the release of water could be characterized with a clear separation between solvent evaporation and water released during condensation. The method could be used to upgrade current quality control as well as the tuning of reaction conditions. In line with the characteristics of thermosetting materials, the formed network is very dense but also brittle. Challenging applications require highly modified resins in order to decrease the network density. Substances from bio-renewable resources offer chemical possibilities for covalent crosslinking. Several substance classes have been tested for compatibility via hydroxyl groups or amines. The addition of polyols under appropriate reaction conditions showed chemical incorporation into the MF prepolymer. NMR methods have been used to characterize the resins. The synthesized polymers represent a suitable alternative for use in challenging furniture and flooring laminate applications. MF applications for scratch- and wear-resistant surfaces are commonly reinforced by multiple-layer setups with inorganic particles. To fulfil normative requirements, a one-sheet setup of decorative paper has been developed and tested. The incorporation of special corundum particles directly on the decoratively printed paper, combined with a new coating system, resulted in surfaces of the required quality for wear-resistant applications.
After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. In contrast to the highly automated synthesis tools in the digital domain (coping with the quantitative difficulty of packing more and more components onto a single chip – a desire well known as More Moore), analog layout automation struggles with the many diverse and heavily correlated functional requirements that turn the analog design problem into a More than Moore challenge. Facing this qualitative complexity, seasoned layout engineers rely on their comprehensive expert knowledge to consider all design constraints that uncompromisingly need to be satisfied. This usually involves both formally specified and nonformally communicated pieces of expert knowledge, which entails an explicit and implicit consideration of design constraints, respectively.
Existing automation approaches can be basically divided into optimization algorithms (where constraint consideration occurs explicitly) and procedural generators (where constraints can only be taken into account implicitly). As investigated in this thesis, these two automation strategies follow two fundamentally different paradigms denoted as top-down automation and bottom-up automation. The major trait of top-down automation is that it requires a thorough formalization of the problem to enable a self-intelligent solution finding, whereas a bottom-up automatism –controlled by parameters– merely reproduces solutions that have been preconceived by a layout expert in advance. Since the strengths of one paradigm may compensate for the weaknesses of the other, it is assumed that a combination of both paradigms –called bottom-up meets top-down– has much more potential to tackle the analog design problem in its entirety than either optimization-based or generator-based approaches alone.
Against this background, the thesis at hand presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic principle, similar to the roundup of a sheep herd, is to let responsive mobile layout modules (implemented as context-aware procedural generators) interact with each other inside a user-defined layout zone. Each module is allowed to autonomously move, rotate and deform itself, while a supervising control organ successively tightens the layout zone to steer the interaction towards increasingly compact (and constraint compliant) layout arrangements. Considering various principles of self-organization and incorporating ideas from existing decentralized systems, SWARM is able to evoke the phenomenon of emergence: although each module only has a limited viewpoint and selfishly pursues its personal objectives, remarkable overall solutions can emerge on the global scale.
Several examples exhibit this emergent behavior in SWARM, and it is particularly interesting that even optimal solutions can arise from the module interaction. Further examples demonstrate SWARM’s suitability for floorplanning purposes and its application to practical place-and-route problems. The latter illustrates how the interacting modules take care of their respective design requirements implicitly (i.e., bottom-up) while simultaneously paying respect to high level constraints (such as the layout outline imposed top-down by the supervising control organ). Experimental results show that SWARM can outperform optimization algorithms and procedural generators both in terms of layout quality and design productivity. From an academic point of view, SWARM’s grand achievement is to tap fertile virgin soil for future works on novel bottom-up meets top-down automatisms. These may one day be the key to close the automation gap in analog layout design.
Knowledge is an important resource, whose transfer is still not completely understood. The underlying belief of this thesis is that knowledge cannot be transferred directly from one person to another but must be converted for the transfer and therefore is subject to loss of knowledge and misunderstanding. This thesis proposes a new model for knowledge transfer and empirically evaluates this model. The model is based on the belief that knowledge must be encoded by the sender to transfer it to the receiver, who has to decode the message to obtain knowledge.
To prepare for the model this thesis provides an overview about models for knowledge transfer and factors that influence knowledge transfer. The proposed theoretical model for knowledge transfer is implemented in a prototype to demonstrate its applicability. The model describes the influence of the four layers, namely code, syntactic, semantic, and pragmatic layers, on the encoding and decoding of the message. The precise description of the influencing factors and the overlapping knowledge from sender and receiver facilitate its implementation.
The application area of the layered model for knowledge transfer was chosen to be business process modelling. Business processes incorporate an important knowledge resource of an organisation as they describe the procedures for the production of products and services. The implementation in a software prototype allows a precise description of the process by adding semantics to the simple business process modelling language used.
This thesis contributes to the body of knowledge by providing a new model for knowledge transfer, which shows the process of knowledge transfer in greater detail and highlights influencing factors. The implementation in the area of business process modelling reveals the support provided by the model. An expert evaluation indicates that the implementation of the proposed model supports knowledge transfer in business process modelling. The results of the qualitative evaluation are supported by the findings of a quantitative evaluation, performed as a quasi-experiment with a pre-test/post-test design and two experimental groups and one control group. Mann-Whitney U tests indicated that the group that used the tool that implemented the layered model performed significantly better in terms of completeness (the degree of completeness achieved in the transfer) in comparison with the group that used a standard BPM tool (Z = 3.057, p = 0.002, r = 0.59) and the control group that used pen and paper (Z = 3.859, p < 0.001, r = 0.72). The experiment indicates that the implementation of the layered model supports the creation of a business process and facilitates a more precise representation.
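For readers unfamiliar with the reported statistics, the following sketch shows how such a Mann-Whitney U comparison and the effect size r can be computed with SciPy; the data is synthetic and only illustrates the procedure, not the experimental results.

```python
# Sketch of a Mann-Whitney U test between two independent groups' completeness scores,
# with an approximate effect size r derived from the normal approximation of U.
# The scores below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
layered_tool_group = rng.normal(loc=0.85, scale=0.08, size=14)   # hypothetical scores
standard_bpm_group = rng.normal(loc=0.70, scale=0.10, size=14)

u_stat, p_value = stats.mannwhitneyu(layered_tool_group, standard_bpm_group,
                                     alternative="two-sided")
n1, n2 = len(layered_tool_group), len(standard_bpm_group)
mu_u = n1 * n2 / 2.0
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
z = (u_stat - mu_u) / sigma_u
r = abs(z) / np.sqrt(n1 + n2)
print(f"U = {u_stat:.1f}, p = {p_value:.4f}, r = {r:.2f}")
```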
IT governance: current state of and future perspectives on the concept of agility in IT governance
(2020)
Digital transformation has changed corporate reality and, with that, corporates’ IT environments and IT governance (ITG). As such, the perspective of ITG has shifted from the design of a relatively stable, closed and controllable system of a self-sufficient enterprise to a relatively fluid, open, agile and transformational system of networked co-adaptive entities. Related to the paradigm shift in ITG, this thesis aims to conceptualize a framework to integrate the concept of agility into the traditional ITG framework and to test the effects of such an extended ITG framework on corporate performance.
To do so, the thesis uses literature research and a mixed method design by blending both qualitative and quantitative research methods. Given the poorly understood situation of the agile mechanisms within the ITG framework, the building process of this thesis’ research model requires an adaptive and flexible approach which involves four different research phases. The initial a priori research model based on a comprehensive review of the extant literature is critically examined and refined at the end of each research phase, which later forms the basis of a subsequent research phase. As a result, the final research model provides guidance on how the conceptualized framework leads to better business/IT alignment as well as how business/IT alignment can mediate the effectiveness of such an extended ITG framework on corporate performance.
The first research phase explores the current state of literature with a focus on the ITG-corporate performance association. This analysis identifies five perspectives with respect to the relationship between ITG and corporate performance. The main variables lead to the perspectives of business/IT alignment, IT leadership, IT capability and process performance, resource relatedness and culture. Furthermore, the analysis presents core aspects explored within the identified perspectives that could act as potential mediators or moderators in the relationship between ITG and corporate performance.
The second research phase investigates the agile aspect of an effective ITG framework in the dynamic contemporary world through a qualitative study. Gleaned from 46 semi-structured interviews across various industries with governance experts, the study identifies 25 agile ITG mechanisms and 22 traditional ITG mechanisms that corporations use to master digital transformation projects. Moreover, the research offers two key patterns indicating a call for ambidextrous ITG, with corporations alternating between stability and agility in their ITG mechanisms.
In research phase three, a scale development process is conducted in order to develop the agile items explored in research phase two. Through 56 qualitative interviews with professionals, the evaluation uncovers 46 agile governance mechanisms. Moreover, these dimensions are rated by 29 experts to identify the most effective ones. This leads to the identification of six structure elements, eight processes, and eight relational mechanisms.
Finally, in research phase four a quantitative research approach through a survey of 400 respondents is established to test and predict the formulated relationships by using the partial least squares structural equation modelling (PLS-SEM) method. The results provide evidence for a strong causal relationship among an expanded ITG concept, business/IT alignment, and corporate performance. These findings reveal that the agile ITG mechanisms within an effective ITG framework seem critical in today’s digital age.
This research is unique in exploring the combination of traditional and agile ITG mechanisms. It contributes to the theoretical base by integrating and extending the literature on ITG, business/IT alignment, ambidexterity and agility, all of which have long been recognized as critical for achieving organizational goals. In summary, this work presents an original analysis of an effective ITG framework for digital transformation by including the agile aspect within the ITG construct. It highlights that it is not enough to apply only traditional mechanisms to achieve effective business/IT alignment in today’s digital age; agile ITG mechanisms are also needed. Therefore, a novel ITG framework following an ambidextrous approach is provided, consisting of traditional ITG mechanisms as well as newly developed agile ITG practices. This thesis also demonstrates that agile ITG mechanisms can be measured independently of traditional ITG mechanisms within one causal model. This is an important theoretical outcome that allows the current state of ITG to be assessed in two distinct dimensions, offering various pathways for further research on the different antecedents and effects of traditional and agile ITG mechanisms. Furthermore, this thesis makes practical contributions by highlighting the need to develop a basic governance framework powered by traditional ITG mechanisms and simultaneously increase agility in ITG mechanisms. The results imply that corporations might be even more successful if they include both traditional and agile mechanisms in their ITG framework. In this way, the uncovered agile ITG practices may provide a template for CIOs to derive their own mechanisms in following an ambidextrous approach that is suitable for their corporation.
In today’s marketplace, the consumption of luxury goods is at a peak due to increasing global wealth and low interest rates, resulting in a vast supply of goods and services for which customer experiences are more relevant than ever before. One of the most recent developments in this field shows that consumers no longer simply purchase a product or service based on the fact sheet; they are also interested in the experience around the product. Successful brands must develop and maintain individual images to sustain their competitive advantage and build brand equity that is beneficial for customers and firms. Ideally, these will lead to satisfaction and loyalty between a brand, its products, and its customers. Existing research about brand experience and brand equity has mainly focused on functional aspects, which seem to differ for high-value luxury goods. Most studies have focused on industries like retail and fashion brands, sampling university students or visitors to shopping malls, and some have even mixed different types of industries together. This underpins the need for research within a single luxury industry with actual luxury customers who have a solid background with brand experiences.
The purpose of this study was to explore the brand experience spectrum within the automotive industry in Germany, particularly in the affordable luxury sport car sector. Identifying the factors and components that constitute, influence, or leverage/drive a brand experience from their perspective was a clear aim of the study. To achieve this, the study collected data from in-depth interviews with German (n=60) respondents who had experience with affordable and luxury sport cars. The conceptual framework was based on two empirically tested models guiding this exploratory consumer research. The first model to build on was the consumer-based brand equity model, empirically tested by Çifci et al. (2016) and Nam et al. (2011). The second conceptual framework was Lemon and Verhoef’s (2016) customer journey model consisting of relevant touchpoints along the following three stages: pre-purchase, purchase, and post-purchase.
The findings of the research demonstrate that, although the six brand equity concepts – brand awareness, physical quality, staff behaviour, self-congruence, brand identification, and lifestyle – are broadly applicable in understanding customer experience in the affordable luxury car industry, the content of these dimensions differs from that suggested by the previous authors. The research established that cognitive and affective (or symbolic) components build the foundation of customer brand experience, supporting Çifci et al.’s (2016) and Nam et al.’s (2011) study results. The study also identified brand trust as an important and highly relevant concept for customer brand experience in the luxury automotive car industry. Brand trust influences customer satisfaction and loyalty, therefore improving and complementing the existing model. Furthermore, the study confirmed Lemon and Verhoef’s (2016) process model of the customer journey and experience; however, it suggested two different customer journeys depending on the customers’ previous experience (first-time and experienced buyers). The differences between the two groups and the relevance of the journey touchpoints within the three purchase stages vary significantly and are distinct. Identified key touchpoints for both groups are the contact with a dealer as well as information gathering online. Differences have been found in the length of purchase stages and across the customer journey. The study highlights the importance of trust, identification, and product quality for customer brand experience. Moreover, the findings of this study complement the brand equity model of Çifci et al. (2016) by adding the new concept of trust, which is highly relevant. The current knowledge is complemented by a new understanding and mapping of the customer journey for luxury sports cars in Germany. This study can assist practitioners and managers by providing a compass indicating which touchpoints are relevant to which customer group. Social value can be achieved by encouraging interactions between brand and consumer (e.g. central product launch events) and through brand-oriented interactions among consumers (e.g. dealer events, clubs, or communities). Customers are motivated to express their distinctiveness through product experience and brand identification (belonging/distinction) and to develop a loyal link to brands.
Compared to diesel or gasoline, using compressed natural gas as a fuel allows for significantly decreased carbon dioxide emissions. With the benefits of this technology fully exploited, substantial increases of engine efficiency can be expected in the near future. However, this will lead to exhaust gas temperatures well below the range required for the catalytic removal of residual methane, which is a strong greenhouse gas. By combination with a countercurrent heat exchanger, the temperature level of the catalyst can be raised significantly in order to achieve sufficient levels of methane conversion with minimal additional fuel penalty. This thesis provides fundamental theoretical background of these so-called heat-integrated exhaust purification systems. On this basis, prototype heat exchangers and appropriate operating strategies for highly dynamic operation in passenger cars are developed and evaluated.
Ever since the 1980s, researchers in computer science and robotics have been working on making autonomous cars. Due to recent breakthroughs in research and development, such as the Bertha Benz Project [ZBS+14], the goal of fully autonomous vehicles seems closer than ever before. Yet a lot of questions remain unanswered. Especially now that the automotive industry moves towards autonomous systems in series production vehicles, the task of precise localization has to be solved with automotive-grade sensors while keeping memory and processing consumption at a minimum. This thesis investigates the Simultaneous Localization and Mapping (SLAM) problem for autonomous driving scenarios on a parking lot using low-cost automotive sensors. The main focus is hereby devoted to the RAdio Detection And Ranging (RADAR) sensor, which has not been widely analyzed in an autonomous driving scenario so far, even though it is abundant in the automotive industry for applications such as Adaptive Cruise Control (ACC). Due to the high noise floor, the RADAR sensor has widely been disregarded in the Intelligent Transportation Systems and Robotics communities with regard to SLAM applications. However, in this thesis it is shown that the RADAR sensor proves to be an affordable, robust and precise sensor when its physical properties are modeled correctly. In this regard, a GraphSLAM-based framework is introduced, which extracts features from the RADAR sensor and generates an optimized map of the surroundings using the RADAR sensor alone. This framework is used to enable crowd-based localization, which is not limited to the RADAR sensor alone. By integrating an automotive Light Detection and Ranging (LiDAR) and stereo camera sensor, a robust and precise localization system can be built that is suitable for autonomous driving even in complex parking lot scenarios. It is thereby shown that the RADAR sensor contributes strongly to obtaining good results in a sensor fusion setup. These results were obtained on an extensive dataset recorded on a parking lot over the course of several months. It contains different weather conditions, different configurations of parked cars and a multitude of different trajectories to validate the approaches described in this thesis and to support the conclusion that the RADAR sensor is a reliable sensor for series autonomous driving systems, both in a multi-sensor framework and as a single component for localization.
So far, only few authors have addressed the serum-free, defined differentiation of adipocytes, and there are hardly any studies available on the defined maintenance of adipocytes. This study aimed at the development of a defined culture medium for the adipogenic differentiation of primary human adipose-derived stem cells (ASCs). Based on the addition of specific factors for the replacement of serum, ASCs were differentiated into viable and characteristic adipocytes for 14 days, which was proven by the accumulation of lipids, the expression of perilipin A, and the release of leptin and glycerol. Furthermore, a defined maintenance medium was developed, which supported the maturation and stability of the cells for an additional 42 days, until day 56.
High Performance Computing (HPC) enables significant progress in both science and industry. Whereas traditionally parallel applications have been developed to address the grand challenges in science, as of today, they are also heavily used to speed up the time-to-result in the context of product design, production planning, financial risk management, medical diagnosis, as well as research and development efforts. However, purchasing and operating HPC clusters to run these applications requires huge capital expenditures as well as operational knowledge and thus is reserved to large organizations that benefit from economies of scale. More recently, the cloud evolved into an alternative execution environment for parallel applications, which comes with novel characteristics such as on-demand access to compute resources, pay-per-use, and elasticity. Whereas the cloud has been mainly used to operate interactive multi-tier applications, HPC users are also interested in the benefits offered. These include full control of the resource configuration based on virtualization, fast setup times by using on-demand accessible compute resources, and eliminated upfront capital expenditures due to the pay-per-use billing model. Additionally, elasticity allows compute resources to be provisioned and decommissioned at runtime, which allows fine-grained control of an application's performance in terms of its execution time and efficiency as well as the related monetary costs of the computation. Whereas HPC-optimized cloud environments have been introduced by cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, existing parallel architectures are not designed to make use of elasticity. This thesis addresses several challenges in the emergent field of High Performance Cloud Computing. In particular, the presented contributions focus on the novel opportunities and challenges related to elasticity. First, the principles of elastic parallel systems as well as related design considerations are discussed in detail. On this basis, two exemplary elastic parallel system architectures are presented, each of which includes (1) an elasticity controller that controls the number of processing units based on user-defined goals, (2) a cloud-aware parallel execution model that handles coordination and synchronization requirements in an automated manner, and (3) a programming abstraction to ease the implementation of elastic parallel applications. To automate application delivery and deployment, novel approaches are presented that generate the required deployment artifacts from developer-provided source code in an automated manner while considering application-specific non-functional requirements. Throughout this thesis, a broad spectrum of design decisions related to the construction of elastic parallel system architectures is discussed, including proactive and reactive elasticity control mechanisms as well as cloud-based parallel processing with virtual machines (Infrastructure as a Service) and functions (Function as a Service). To evaluate these contributions, extensive experimental evaluations are presented.
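The following minimal sketch illustrates the idea of a reactive elasticity controller that provisions processing units against a user-defined time budget; the scaling rule and parameter names are assumptions for illustration, not the architectures presented in the thesis.

```python
# Minimal sketch of a reactive elasticity controller: given the remaining work and the
# observed per-unit processing rate, choose how many processing units to provision so
# that the work can finish within a user-defined time budget.
import math
from dataclasses import dataclass

@dataclass
class ElasticityController:
    target_seconds: float          # user-defined goal for the remaining execution time
    min_units: int = 1
    max_units: int = 64

    def decide(self, remaining_tasks: int, tasks_per_unit_per_s: float) -> int:
        """Return the number of processing units to provision for the next interval."""
        if remaining_tasks <= 0:
            return self.min_units
        required_rate = remaining_tasks / self.target_seconds
        units = math.ceil(required_rate / tasks_per_unit_per_s)
        return max(self.min_units, min(self.max_units, units))

controller = ElasticityController(target_seconds=600.0)
# e.g. 90,000 tasks left and each unit processes ~25 tasks/s -> scale out to 6 units
print(controller.decide(remaining_tasks=90_000, tasks_per_unit_per_s=25.0))
```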
Customer orientation should be the core engine of every organisation. Information technology can be considered as the enabler to generate competitive advantages through customer processes in marketing, sales and service. The impact of information technologies is the biggest risk and at the same time a huge opportunity for any organisation. Research shows that Customer Relationship Management (CRM) enables organisations to perform better and focus more on their customers (e.g. market capitalisation of Amazon). While global enterprises are shaping the future of customer centricity and information technology, the question arises how German B2B organisations can shift their value contribution from product-centric to customer-centric. Therefore, these organisations are attempting to implement CRM software and putting their customers more into focus. However, the question remains, how organisations are approaching the implementation of CRM and if these attempts are paying off in terms of business performance.
Contributing to this highly topical discussion, this thesis adds to the body of knowledge about the implementation of CRM in the German B2B sector and how it impacts business performance. First, theoretical frameworks have been developed based on an extensive literature review. Hereby, different aspects of CRM are worked out and mapped against three dimensions of business performance, namely process efficiency, customer satisfaction and financial performance. Based on the theory, a conceptual framework was developed to test the relationships between CRM and Business Performance (BP). Therefore, a survey with 500 participants has been conducted. Based on this, a measurement model was developed to test five main hypotheses.
The findings suggest that the implementation of CRM positively impacts business performance. Specifically, the usage of analytical CRM and the establishment of a dedicated CRM success measurement correlate with the performance of German B2B organisations. In addition to these main findings, various key statements could be derived from the research, and a measurement model was developed that can be used to assess BP across different organisational characteristics. As a result, CRM implementations can be enhanced and business performance improved.
Increasing concerns regarding the world's natural resources and sustainability continue to be a major issue for global development. As a result, several political initiatives and strategies for green or resource-efficient growth, on both national and international levels, have been proposed. A core element of these initiatives is the promotion of an increase in resource or material productivity. This dissertation examines material productivity developments in the OECD and BRICS countries between 1980 and 2008. By applying the concept of convergence, which stems from economic growth theory, to material productivity, the analysis provides insights into two aspects: material productivity developments in general as well as potentials for accelerated improvements in material productivity, which may consequently allow a global reduction of material use. The results of the convergence analysis underline the importance of policy-making with regard to technology and innovation policy that enables the production of resource-efficient products and services as well as technology transfer and diffusion.
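The notion of convergence borrowed from growth theory can be made concrete with a standard beta-convergence regression, in which average productivity growth is regressed on the initial productivity level. The Python sketch below uses synthetic, illustrative data (not the thesis' OECD/BRICS dataset) and estimates the convergence coefficient with ordinary least squares:

# Illustrative beta-convergence estimation on synthetic data; the numbers
# are assumptions and do not reproduce the thesis' results.
import numpy as np

rng = np.random.default_rng(0)
n_countries = 30
log_p0 = rng.uniform(1.0, 3.0, n_countries)      # log initial material productivity
true_beta = -0.02                                # negative beta => convergence
growth = 0.05 + true_beta * log_p0 + rng.normal(0, 0.005, n_countries)

# OLS: (1/T) * ln(p_T / p_0) = alpha + beta * ln(p_0) + error
X = np.column_stack([np.ones(n_countries), log_p0])
alpha_hat, beta_hat = np.linalg.lstsq(X, growth, rcond=None)[0]
print(f"estimated beta = {beta_hat:.4f}  (beta < 0 indicates convergence)")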
Development of an indoor positioning system to create a digital shadow of production plant layouts
(2023)
The objective of this dissertation is to develop an indoor positioning system that allows the creation of a digital shadow of the plant layout in order to continuously represent the actual state of the physical layout in the virtual space. To define the requirements for such a system, potential stakeholders who could benefit from a digital shadow of the plant layout were analysed, and the requirements were derived from their perspective so that the system generates added value for their work. As the core of an indoor positioning system is the sensory capture of the physical layout parameters, different candidate technologies were compared and evaluated in terms of their suitability for this particular application. Based on this analysis, the selected concept uses a pan-tilt-zoom (PTZ) camera in combination with fiducial markers. To determine specific camera parameters, a series of experiments was conducted, which was necessary to develop the measurement method as well as the mathematical calculation method and coordinate transformation for determining the poses (positions and angular orientations) of the respective facilities in the plant. In addition, an experimental validation was performed to ensure that the limit values for individual parameters determined in the requirements analysis can be met.
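As a rough illustration of the coordinate transformation involved, the following Python sketch chains homogeneous transforms to map a fiducial-marker pose measured in the camera frame into the plant coordinate system. The rotation and translation values are made-up placeholders, not the thesis' calibration data:

# Minimal sketch of chaining homogeneous transforms: plant <- camera <- marker.
# The numeric values are illustrative placeholders, not measured data.
import numpy as np


def homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


# Pose of the PTZ camera in the plant frame (e.g. from extrinsic calibration).
T_plant_camera = homogeneous(np.eye(3), np.array([10.0, 4.0, 6.0]))

# Pose of a fiducial marker as detected in the camera frame.
T_camera_marker = homogeneous(np.eye(3), np.array([0.5, -0.2, 7.5]))

# Marker (and thus facility) pose in plant coordinates.
T_plant_marker = T_plant_camera @ T_camera_marker
print(T_plant_marker[:3, 3])   # position of the marker in the plant frame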
Supply chains have evolved into dynamic, interconnected supply networks, which increases the complexity of achieving end-to-end traceability of object flows and the events they experience. With its capability to ensure a secure, transparent, and immutable environment without relying on a trusted third party, the emerging blockchain technology shows strong potential to enable end-to-end traceability in such complex multi-tiered supply networks. However, as the dissertation's systematic literature review reveals, the currently available blockchain-based traceability solutions lack the ability to map object-related supply chain events holistically, which involves mapping objects' creation and deletion, aggregation and disaggregation, transformation, and transaction. Therefore, this dissertation proposes a novel blockchain-based traceability architecture that integrates governance and token concepts to overcome the limitations of existing architectures. While the governance concept manages the supply chain structure on an application level, the token concept includes all functions needed to conduct object-related supply chain events. To make this possible, the dissertation's token concept introduces token 'blueprints', which allow clients to group tokens into different types, where tokens of the same type are non-fungible. Furthermore, blueprints can include minting conditions, which are necessary, for example, when mapping assembly or delivery processes. In addition, the token concept contains logic for reflecting all conducted object-related events in an integrated token history. This ultimately leads to end-to-end traceability of tokens and their physical or abstract representatives on the blockchain. For validation purposes, this dissertation implements the architecture's components and their update and request relationships in code and proves its applicability based on the Ethereum blockchain. Finally, this dissertation provides a scenario-based evaluation based on two industrial case studies from a manufacturing and a logistics perspective to validate the architecture's capabilities when applied in real-world industrial settings. The proposed blockchain-based traceability architecture covers all object-related supply chain events derived from the two industrial case studies and thus demonstrates its general-purpose capability for end-to-end traceability of object flows.
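To make the blueprint/token idea more tangible, here is a highly simplified, off-chain Python sketch of the data model. Class and field names are assumptions for illustration only; the thesis realizes this concept as smart contracts on the Ethereum blockchain. Blueprints define a token type and an optional minting condition, and each token records its object-related event history:

# Simplified, illustrative data model of the blueprint/token concept.
# Names and fields are assumptions; the thesis implements this on-chain.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Blueprint:
    type_name: str                                    # groups tokens into a type
    minting_condition: Optional[Callable[[Dict], bool]] = None


@dataclass
class Token:
    token_id: str
    blueprint: Blueprint
    history: List[str] = field(default_factory=list)  # object-related events


def mint(blueprint: Blueprint, token_id: str, context: Dict) -> Token:
    """Create a token if the blueprint's minting condition (if any) holds."""
    if blueprint.minting_condition and not blueprint.minting_condition(context):
        raise ValueError("minting condition not satisfied")
    token = Token(token_id, blueprint)
    token.history.append("created")
    return token


# Example: an 'assembly' blueprint that requires at least two input components.
gearbox = Blueprint("gearbox", lambda ctx: len(ctx.get("inputs", [])) >= 2)
t = mint(gearbox, "GB-001", {"inputs": ["housing-17", "shaft-42"]})
t.history.append("aggregated into vehicle VIN-123")
print(t.history)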
Intracranial brain tumors are among the ten most common malignant cancers and account for substantial morbidity and mortality. The largest histological category of primary brain tumors is the gliomas, which have a highly heterogeneous appearance and can be challenging to discern radiologically from other brain lesions. Neurosurgery is usually the standard of care for newly diagnosed glioma patients and may be followed by radiation therapy and adjuvant temozolomide chemotherapy.
However, brain tumor surgery faces fundamental challenges in achieving maximal tumor removal while avoiding postoperative neurologic deficits. Two of these neurosurgical challenges are addressed here. First, manual delineation of a glioma, including its sub-regions, is considered difficult due to its infiltrative nature and the presence of heterogeneous contrast enhancement. Second, the brain deforms during surgery, a phenomenon called "brain shift," in response to surgical manipulation, swelling due to osmotic drugs, and anesthesia, which limits the utility of pre-operative imaging data for guiding the surgery.
Image-guided systems provide physicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided toolkits are mainly computer-based systems that employ computer vision methods to facilitate peri-operative surgical procedures. However, surgeons still need to mentally fuse the surgical plan from pre-operative images with real-time information while manipulating the surgical instruments inside the body and monitoring target delivery. Hence, the need for image guidance during neurosurgical procedures has always been a significant concern for physicians.
This research aims to develop a novel peri-operative image-guided neurosurgery (IGN) system, namely DeepIGN, that can achieve the expected outcomes of brain tumor surgery, thus maximizing the overall survival rate and minimizing post-operative neurologic morbidity. In the scope of this thesis, novel methods are first proposed for the core parts of the DeepIGN system of brain tumor segmentation in MRI and multimodal pre-operative MRI to the intra-operative US (iUS) image registration using the recent developments in deep learning. Then, the output prediction of the employed deep learning networks is further interpreted and examined by providing human-understandable explainable maps. Finally, open-source packages have been developed and integrated into widely endorsed software, which is responsible for integrating information from tracking systems, image visualization, image fusion, and displaying real-time updates of the instruments relative to the patient domain.
The components of DeepIGN have been validated in the laboratory and evaluated in the simulated operating room. For the segmentation module, DeepSeg, a generic decoupled deep learning framework for automatic glioma delineation in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance improvements were observed when employing advancements in deep learning approaches such as 3D convolutions over all slices, region-based training, on-the-fly data augmentation techniques, and ensemble methods.
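For reference, the Dice coefficient reported for the segmentation module measures the overlap between predicted and ground-truth tumor masks. A minimal Python sketch of how it is typically computed on binary masks is shown below (illustrative only, not the thesis' evaluation code):

# Minimal Dice coefficient on binary masks (illustrative).
import numpy as np


def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |pred AND truth| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)


pred = np.zeros((64, 64), dtype=bool)
pred[10:40, 10:40] = True
truth = np.zeros((64, 64), dtype=bool)
truth[15:45, 15:45] = True
print(f"Dice = {dice(pred, truth):.2f}")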
To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering pre-operative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments have been conducted on two multi-location databases: the BITE and the RESECT. Two expert neurosurgeons conducted additional qualitative validation of this study through overlaying MRI-iUS pairs before and after the deformable registration. Experimental findings show that the proposed iRegNet is fast and achieves state-of-the-art accuracies. Furthermore, the proposed iRegNet can deliver competitive results, even in the case of non-trained images, as proof of its generality and can therefore be valuable in intra-operative neurosurgical guidance.
For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in applying AI techniques and deep neural networks. The NeuroXAI includes seven explanation methods providing visualization maps to help make deep learning models transparent. Experimental findings showed that the proposed XAI framework achieves good performance in extracting both local and global contexts in addition to generating explainable saliency maps to help understand the prediction of the deep network. Further, visualization maps are obtained to realize the flow of information in the internal layers of the encoder-decoder network and understand the contribution of MRI modalities in the final prediction. The explainability process could provide medical professionals with additional information about tumor segmentation results and therefore aid in understanding how the deep learning model is capable of processing MRI data successfully.
Furthermore, an interactive neurosurgical display has been developed for interventional guidance, which supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multi-modality DeepIGN system were established with the ability to incorporate: (1) pre-operative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The system's accuracy was tested using a custom agar phantom model, and its use in a pre-clinical operating room was simulated. The results of the clinical simulation confirmed that system assembly was straightforward, achievable in a clinically acceptable time of 15 min, and performed with a clinically acceptable level of accuracy.
In this thesis, a multimodality IGN system has been developed using recent advances in deep learning to accurately guide neurosurgeons, incorporating pre- and intra-operative patient image data and interventional devices into the surgical procedure. DeepIGN is developed as open-source research software to accelerate research in the field, enable easy sharing between multiple research groups, and allow continuous development by the community. The experimental results hold great promise for applying deep learning models to assist interventional procedures, a crucial step towards improving the surgical treatment of brain tumors and the corresponding long-term post-operative outcomes.
Over the last decades, our society has seen a tremendous shift toward using information technology in almost every daily routine of our lives, entailing an incredible growth of the data collected day by day by Web, IoT, and AI applications.
At the same time, magneto-mechanical HDDs are being replaced by semiconductor storage such as SSDs, equipped with modern Non-Volatile Memories, like Flash, which yield significantly lower access latencies and higher levels of parallelism. Likewise, the execution speed of processing units has increased considerably, as nowadays server architectures comprise up to multiple hundreds of independently working CPU cores along with a variety of specialized computing co-processors such as GPUs or FPGAs.
However, the burden of moving the continuously growing data to the best-fitting processing unit is inherently linked to today's computer architecture, which is based on the data-to-code paradigm. In the light of Amdahl's Law, this leads to the conclusion that even with today's powerful processing units, the speedup of systems is limited since the fraction of parallel work is largely I/O-bound.
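The argument that a non-parallelizable (e.g. I/O-bound) share of the work caps achievable speedup follows directly from Amdahl's Law, S(n) = 1 / ((1 - p) + p/n), where p is the parallelizable fraction and n the number of processing units. The short Python example below uses illustrative numbers only:

# Amdahl's Law: speedup with n processing units when a fraction p of the
# work is parallelizable. Numbers below are illustrative only.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)


for n in (8, 64, 1024):
    # With 10 % serial (e.g. I/O-bound) work, speedup saturates near 10x.
    print(n, round(amdahl_speedup(p=0.9, n=n), 2))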
Therefore, throughout this cumulative dissertation, we investigate the paradigm shift toward code-to-data, formally known as Near-Data Processing (NDP), which relieves the contention on the I/O bus by offloading processing to intelligent computational storage devices, where the data is originally located.
Firstly, we identify Native Storage Management as the essential foundation for NDP due to its direct control of physical storage management within the database. Building on this, the interface is extended to propagate address mapping information and to invoke NDP functionality on the storage device. As the address mapping information can become very large, we introduce Physical Page Pointers as one novel NDP abstraction for self-contained, immutable database objects.
Secondly, the on-device navigation and interpretation of data are elaborated. To this end, we introduce cross-layer Parsers and Accessors as another NDP abstraction that can be executed on the heterogeneous processing capabilities of modern computational storage devices. Thereby, the compute placement and resource configuration per NDP request are identified as major performance criteria. Our experimental evaluation shows an improvement in execution durations of 1.4x to 2.7x compared to traditional systems. Moreover, we propose a framework for the automatic generation of Parsers and Accessors on FPGAs to ease their application in NDP.
Thirdly, we investigate the interplay of NDP and modern workload characteristics like HTAP. To this end, we present different offloading models and focus on an intervention-free execution. By propagating the Shared State with the latest modifications of the database to the computational storage device, it is able to process data with transactional guarantees. Thus, we extend the design space of HTAP with NDP by providing a solution that optimizes for performance isolation, data freshness, and the reduction of data transfers. In contrast to traditional systems, we experience no significant drop in performance when an OLAP query is invoked, but rather a steady throughput that is 30% higher.
Lastly, in-situ result-set management and consumption as well as NDP pipelines are proposed to achieve flexibility in processing data on heterogeneous hardware. As these produce final and intermediary results, we investigate their management and identify that on-device materialization comes at a low cost while enabling novel consumption modes and reuse semantics. Thereby, we achieve significant performance improvements of up to 400x by reusing once-materialized results multiple times.
Human recognition is an important part of perception systems, such as those used in autonomous vehicles or robots. These systems often use deep neural networks for this purpose, which rely on large amounts of data that ideally cover various situations, movements, visual appearances, and interactions. However, obtaining such data is typically complex and expensive. In addition to raw data, labels are required to create training data for supervised learning. Thus, manual annotation of bounding boxes, keypoints, orientations, or performed actions is frequently necessary. This work addresses whether the laborious acquisition and creation of data can be simplified through targeted simulation. If data are generated in a simulation, information such as positions, dimensions, orientations, surfaces, and occlusions is already known, and appropriate labels can be generated automatically. A key question is whether deep neural networks trained with simulated data can be applied to real data. This work explores the use of simulated training data using examples from the field of pedestrian detection for autonomous vehicles. On the one hand, it is shown how existing systems can be improved by targeted retraining with simulation data, for example to better recognize corner cases. On the other hand, the work focuses on the generation of data that occur rarely or not at all in real standard datasets. It is demonstrated how training data with finely graded action labels can be generated by the targeted acquisition and combination of motion data and 3D models, enabling the recognition of even complex pedestrian situations. Through the diverse annotation data that simulations provide, it becomes possible to train deep neural networks for a wide variety of tasks with one dataset. In this work, such simulated data are used to train a novel deep multitask network that brings together diverse, previously mostly independently considered but related tasks, such as 2D and 3D human pose recognition and body and orientation estimation.
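As a rough sketch of such a multitask setup, a shared backbone can feed separate heads for 2D pose, 3D pose, and orientation estimation. The architecture, layer sizes, number of keypoints, and head layout below are assumptions for illustration, not the thesis' actual network:

# Illustrative multitask network: shared backbone, task-specific heads.
# Layer sizes, keypoint count, and heads are assumptions for illustration.
import torch
import torch.nn as nn


class MultiTaskPoseNet(nn.Module):
    def __init__(self, num_keypoints: int = 17):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head_2d = nn.Linear(64, num_keypoints * 2)   # 2D keypoints (x, y)
        self.head_3d = nn.Linear(64, num_keypoints * 3)   # 3D keypoints (x, y, z)
        self.head_orientation = nn.Linear(64, 1)          # body orientation angle

    def forward(self, x):
        features = self.backbone(x)
        return {
            "pose_2d": self.head_2d(features),
            "pose_3d": self.head_3d(features),
            "orientation": self.head_orientation(features),
        }


out = MultiTaskPoseNet()(torch.randn(1, 3, 128, 128))
print({k: tuple(v.shape) for k, v in out.items()})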
Within the scope of the present cumulative doctoral thesis, six scientific papers were published which illustrate that modern reaction-model-free (isoconversional) kinetic analysis (ICKA) methods represent a universal and effective tool for the controlled processing of thermosetting materials. In order to demonstrate the universal applicability of ICKA methods, the thermal cure of different thermosetting materials with a very broad range of chemical compositions (melamine-formaldehyde resins, epoxy resins, polyester-epoxy resins, and acrylate/epoxy resins) was analyzed and mathematically modelled. Some of the materials were based on renewable resources (an epoxy resin was made from hempseed oil; linseed oil was modified into an acrylate/epoxy resin). With the aid of ICKA methods, not only single-step but also complex multi-step reactions were modelled precisely. The analyzed thermosetting materials were combined with wood, wood-based products, paper, and plant fibers and processed into various final products. Some of the thermosetting materials were applied as coatings (in the form of impregnated décor papers or powder and wet coatings, respectively) on wood substrates, and the epoxy resin from hempseed oil was mixed with plant fibers and processed into bio-based composites for lightweight applications. Mechanical, thermal, and surface properties of the final products were determined. The activation energy as a function of cure conversion derived from ICKA methods was utilized to accurately predict the thermal curing over the course of time for arbitrary cure conditions. Furthermore, the cure models were used to establish correlations between the cross-linking during processing and the properties of the final products. Thereby it was possible to derive the process time and temperature that guarantee optimal cross-linking as well as optimal product properties.
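To illustrate how an activation energy determined isoconversionally can be used for prediction, the Python sketch below applies one commonly used model-free prediction formula, t_alpha(T0) = [integral from 0 to T_alpha of exp(-E_alpha/(R*T)) dT] / [beta * exp(-E_alpha/(R*T0))], to estimate the isothermal time needed to reach a given conversion from non-isothermal data. All numerical values are made-up placeholders, not the thesis' data:

# Illustrative model-free (isoconversional) prediction of isothermal cure time
# from non-isothermal data; all values are placeholder assumptions.
import numpy as np

R = 8.314            # gas constant, J/(mol K)
beta = 10.0 / 60.0   # heating rate: 10 K/min expressed in K/s
E_alpha = 80_000.0   # activation energy at this conversion, J/mol (assumed)
T_alpha = 450.0      # temperature (K) at which this conversion was reached
T_iso = 400.0        # isothermal cure temperature to predict for (K)

T = np.linspace(300.0, T_alpha, 10_000)              # integration grid
integral = np.trapz(np.exp(-E_alpha / (R * T)), T)   # temperature integral
t_alpha = integral / (beta * np.exp(-E_alpha / (R * T_iso)))
print(f"predicted time to reach this conversion at {T_iso} K: {t_alpha/60:.1f} min")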
Corporate entrepreneurship in the public sector: exploring the peculiarities of public enterprises
(2021)
Entrepreneurship is predominantly treated as a private-sector phenomenon, and consequently its increasing importance in the public sector goes largely unremarked. This impedes the ability of entrepreneurship research to span multiple sectors. Accordingly, recent research calls for the study of corporate entrepreneurship (CE) as it manifests in the public sector, where it can be labeled public entrepreneurship (PE). This dissertation considers government an essential entrepreneurial actor and is led by the central research question: What are the peculiarities of the public sector and how do they impact public enterprises' entrepreneurial orientation (EO)?
Accordingly, this dissertation includes three studies focusing on public enterprises. Two of the studies set the scope of this thesis by investigating a specific type of organization in a specific context: German majority-government-owned energy suppliers. These enterprises operate in a liberalized market and face environmental uncertainties such as competitiveness and business transformation.
The aims and results of the studies included in this dissertation can be summarized as follows: The systematic literature review illuminates the stimuli of and barriers to entrepreneurial activities in public enterprises and the potential outcomes of such activities discussed so far. The review reveals that research on EO has tended to focus on the private sector and consequently that barriers to and outcomes of entrepreneurial activities in the public sector remain under-researched. Building on these findings, the qualitative study focuses on the interrelated barriers affecting entrepreneurship in public enterprises and the outcomes of entrepreneurial activities being inhibited. The study adopts an explorative comparative causal mapping approach to address the above-mentioned research goal and the lack of clarity around how barriers identified in the public sphere are interrelated. Furthermore, the study bases its investigation on the different business segments of sales (competitive market) and the distribution grid (natural monopoly) to account for recent calls for fine-grained research on PE. Results were compared with prior findings in the public and private sector. That comparison indicates that the barriers revealed align with aspects discussed in prior research findings relating to both sectors. Examples include barriers associated with the external environment such as legal constraints and barriers originating from within the organization such as employee behavior linked to a value system that hampers entrepreneurial action. However, the most important finding is that a public enterprise’s supervisory board can hinder its progress, a finding running counter to those of previous private-sector research and one that underscores the widespread prejudice that the involvement of a public shareholder and its nominated board of directors has a negative effect on EO. The third study is quantitative (data collection via a questionnaire) and builds on both its predecessors to examine the little understood topic of board behavior and public enterprises’ social orientation as predictors of EO. The study’s results indicate that social orientation represses EO, whereas board strategy control (BSC) does not seem to predict EO. Regarding BSC, we find that the local government owners in our sample are less involved in BSC. The third study also examines board networking and finds its relationship with EO depends on the ownership structure of the public-sector organization. An important finding is that minority shareholders, such as majority privately-owned enterprises and hub firms, repress EO when engaging in board networking.
In summary, this doctoral thesis contributes to the under-researched topic of CE in the public sector. In the quantitative study, it investigates the peculiarities of this sector by focusing on the supervisory board and socially oriented activities and their impact on the enterprise's EO. The thesis addresses institutional questions regarding ownership, and the last study in particular contributes to expanding resource dependence theory and invites a nuanced perspective: the original perspective suggests that interorganizational arrangements such as interfirm network ties and equity holdings reduce external resource dependency and consequently improve firm performance, whereas the findings of this thesis show that resource delivery can have contrary effects, extending the understanding of interorganizational action with important implications for practice.
Data collected from internet applications are mainly stored in the form of transactions. All transactions of one user form a sequence, which shows the user's behaviour on the site. Nowadays, it is important to be able to classify this behaviour in real time for various reasons: e.g. to increase the conversion rate of customers while they are in the store or to prevent fraudulent transactions before they are placed. However, this is difficult due to the complex structure of the data sequences (i.e. a mix of categorical and continuous data types, constant data updates) and the large amounts of data that are stored. Therefore, this thesis studies the classification of complex data sequences. It surveys the fields of time series analysis (temporal data mining), sequence data mining, and standard classification algorithms. It turns out that these algorithms are either difficult to apply to data sequences or do not deliver a classification: time series methods need a predefined model and are not able to handle complex data types; sequence classification algorithms such as the Apriori algorithm family are not able to utilize the time aspect of the data. The strengths and weaknesses of the candidate algorithms are identified and used to build a new approach to solve the problem of classifying complex data sequences. The problem is solved by a two-step process. First, feature construction is used to create and discover suitable features in a training phase. Then, the blueprints of the discovered features are used in a formula during the classification phase to perform the real-time classification. The features are constructed by combining and aggregating the original data over the span of the sequence, including the elapsed time by using a calculated time axis. Additionally, a combination of features and feature selection are used to simplify complex data types. This makes it possible to catch behavioural patterns that occur in the course of time. The proposed approach combines techniques from several research fields. Part of the algorithm originates from the field of feature construction and is used to reveal behaviour over time and express this behaviour in the form of features. A combination of the features is used to highlight relations between them. The blueprints of these features can then be used to achieve classification in real time on an incoming data stream. An automated framework is presented that allows the features to adapt iteratively to a change in the underlying patterns of the data stream. This core feature of the presented work is achieved by separating the feature application step from the computationally costly feature construction step and by iteratively restarting the feature construction step on the new incoming data. The algorithm and the corresponding models are described in detail and applied to three case studies (customer churn prediction, bot detection in computer games, credit card fraud detection). The case studies show that the proposed algorithm is able to find distinctive information in data sequences and use it effectively for classification tasks. The promising results indicate that the suggested approach can be applied to a wide range of other application areas that incorporate data sequences.
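A minimal illustration of the first step, constructing aggregate features over a transaction sequence including its elapsed-time axis, could look like the following Python sketch. Field names, aggregations, and the toy sequence are assumptions, not the thesis' actual features; the resulting feature blueprint would then be re-applied to incoming data in real time:

# Illustrative feature construction over one user's transaction sequence.
from datetime import datetime
from statistics import mean

transactions = [  # (timestamp, amount, category)
    (datetime(2024, 5, 1, 10, 0), 19.99, "books"),
    (datetime(2024, 5, 1, 10, 7), 5.49, "books"),
    (datetime(2024, 5, 3, 22, 30), 499.00, "electronics"),
]


def build_features(seq):
    """Aggregate the raw sequence into a fixed-length feature vector."""
    t0 = seq[0][0]
    elapsed_h = [(t - t0).total_seconds() / 3600 for t, _, _ in seq]
    amounts = [a for _, a, _ in seq]
    return {
        "n_events": len(seq),
        "mean_amount": mean(amounts),
        "max_amount": max(amounts),
        "span_hours": elapsed_h[-1],   # uses the calculated time axis
        "share_electronics": sum(c == "electronics" for _, _, c in seq) / len(seq),
    }


print(build_features(transactions))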
Saving energy and protecting the environment have become fundamental concerns for society and politics, which is why several laws were enacted to increase energy efficiency. Furthermore, the growing number of vehicles and drivers led to more accidents and fatalities on the roads, so road safety became an important factor as well. Due to the increasing importance of energy efficiency and safety, car manufacturers started to optimise vehicles in terms of energy efficiency and safety. However, energy efficiency and road safety can also be increased by adapting the driving behaviour to the given driving situation. This thesis presents the concept of an adaptive, rule-based driving system that tries to educate the driver in energy-efficient and safe driving by showing recommendations on time. Unlike existing driving systems, the presented driving system considers energy-efficiency and safety-relevant driving rules, the individual driving behaviour, and the driver's condition. This avoids distracting the driver and increases the acceptance of the driving system while improving the driving behaviour in terms of energy efficiency and safety. A prototype of the driving system was developed and evaluated. The evaluation was done on a driving simulator with 42 test drivers, who tested the effect of the driving system on the driving behaviour and the effect of the system's adaptiveness on user acceptance. The evaluation showed that energy efficiency and safety increase when the driving system is used. Furthermore, user acceptance of the driving system increases when the adaptive feature is turned on. A high user acceptance of the driving system allows for steady usage and, thus, a steady improvement of the driving behaviour in terms of energy efficiency and safety.
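A minimal sketch of such a rule-based recommendation step could look as follows; the rules, thresholds, and signal names are purely illustrative assumptions, not the thesis' actual rule set:

# Illustrative rule-based driving recommendations; thresholds are assumptions.
def recommend(speed_kmh: float, speed_limit: float, rpm: int,
              driver_distracted: bool) -> list[str]:
    recommendations = []
    if driver_distracted:
        return recommendations            # suppress hints to avoid distraction
    if speed_kmh > speed_limit:
        recommendations.append("Reduce speed to the posted limit.")
    if rpm > 2500:
        recommendations.append("Shift up to save fuel.")
    return recommendations


print(recommend(speed_kmh=62, speed_limit=50, rpm=2800, driver_distracted=False))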
The extracellular matrix (ECM) is the non-cellular part of tissues and represents the natural environment of the cells. Next to structural stability, it provides various physical, chemical, and mechanical cues that strongly regulate and influence cellular behavior and are required for tissue morphogenesis, differentiation, and homeostasis. Due to its promising characteristics, ECM is used in a wide range of tissue engineering and regenerative medicine approaches as a biomaterial for coatings and scaffolds. To date, there are two sources of ECM material. First, native ECM is generated by the removal of the residing cells of a tissue or organ (decellularized ECM; dECM). Secondly, cell-derived ECM (cdECM) can be generated by and isolated from in vitro cultured cells. Although both types of ECM have been used intensively for tissue engineering and regenerative medicine approaches, studies directly characterizing and comparing them are rare. Hence, in the first part of this thesis, dECM from adipose tissue and cdECM from stem cells and adipogenic differentiated stem cells from adipose tissue (ASCs) were characterized with regard to their macromolecular composition, structural features, and biological purity. The dECM was found to exhibit higher levels of collagens and lower levels of sulfated glycosaminoglycans compared to cdECMs. Structural characteristics revealed an immature state of collagen fibers in cdECM samples. The obtained results revealed differences between the two ECMs that can relevantly impact cellular behavior and subsequently the experimental outcome and should therefore be considered when choosing a biomaterial for a specific application. The establishment of a functional vascular system in tissue constructs to realize an adequate nutrient supply remains challenging. In the second part, the supporting effect of cdECM on the self-assembled formation of prevascular-like structures by microvascular endothelial cells (mvECs) was investigated. It could be observed that cdECM, especially adipogenic differentiated cdECM, enhanced the formation of prevascular-like structures. An increased concentration of proangiogenic factors was found in cdECM substrates. The demonstration of cdECM's capability to induce the spontaneous formation of prevascular-like structures by mvECs highlights cdECM as a promising biomaterial for adipose tissue engineering. Depending on the purpose of the ECM material, chemical modification might be necessary. In the third and last part, the chemical functionalization of cdECM with dienophiles (terminal alkenes, cyclopropene) by metabolic glycoengineering (MGE) was demonstrated. MGE allows the chemical functionalization of cdECM via the natural metabolism of the cells and without affecting the chemical integrity of the cdECM. The incorporated dienophile chemical groups can be specifically addressed via a catalyst-free, cell-friendly inverse electron-demand Diels-Alder reaction. Using this system, the successful modification of cdECM from ASCs with an active enzyme could be shown. The possibility to modify cdECM via a cell-friendly chemical reaction opens up a wide range of possibilities to improve cdECM depending on the purpose of the material. Altogether, this thesis highlighted the differences between adipose dECM and cdECM from ASCs and demonstrated cdECM as a promising alternative to native dECM for application in tissue engineering and regenerative medicine approaches.
Intralogistics operations in automotive OEMs increasingly confront problems of overcomplexity caused by a customer-centred production that requires customisation and, thus, high product variability, short-notice changes in orders and the handling of an overwhelming number of parts. To alleviate the pressure on intralogistics without sacrificing performance objectives, the speed and flexibility of logistical operations have to be increased. One approach to this is to utilise three-dimensional space through drone technology. This doctoral thesis aims at establishing a framework for implementing aerial drones in automotive OEM logistic operations.
As yet, there is no research on implementing drones in automotive OEM logistic operations. To contribute to filling this gap, this thesis develops a framework for Drone Implementation in Automotive Logistics Operations (DIALOOP) that allows for close interaction between the strategic and the operational level and can lead automotive companies through a decision and selection process regarding drone technology.
A preliminary version of the framework was developed on a theoretical basis and was then revised using qualitative-empirical data from semi-structured interviews with two groups of experts, i.e. drone experts and automotive experts. The drone expert interviews contributed a current overview of drone capabilities. The automotive expert interviews were used to identify intralogistics operations in which drones can be implemented, along with the performance measures that can be improved by drone usage.
Furthermore, all interviews explored developments and changes with a foreseeable influence on drone implementation.
The revised framework was then validated using participant validation interviews with automotive experts.
The finalised framework defines a step-by-step process that leads from strategic decisions and considerations, through the identification of logistics processes suitable for drone implementation and the relevant performance measures, to the choice of appropriate drone types based on a drone classification developed specifically in this thesis for an automotive context.