This work addresses the preparation and characterization of uniform, mesoporous silica microparticles (MPSMs) in the micrometer range with tailored particle and pore design for high-performance liquid chromatography. The synthesis comprises the incorporation of silica nanoparticles (SNPs) into porous organic templates, which are subsequently decomposed at 600 °C. Seeded suspension polymerization of polystyrene particles, using glycidyl methacrylate, ethylene glycol dimethacrylate, and porogens, enables the preparation of highly uniform, porous p(GMA-co-EDMA) templates. The influence of key factors, including the monomer-to-porogen ratio, the monomer ratio, and the porogen composition, is systematically investigated, and their effects on pore size, pore volume, and specific surface area are discussed. Amino-functionalized substances are attached via ring opening of the epoxide group. In the subsequent basic sol-gel process, the silica nanoparticles are incorporated into the functionalized p(GMA-co-EDMA) templates owing to the charge differences. The particle size of the SNPs substantially influences the pore properties of the MPSMs and depends on three factors: (i) the growth rate in the continuous phase, controlled by the sol-gel process settings; (ii) the diffusion rate, regulated by electrostatic attraction and dependent on the degree of functionalization; and (iii) the porosity of the polymer template. Tailoring the pore properties through the process settings allows the precise preparation of MPSMs adapted to specific separation challenges, thereby improving the quality of HPLC. Owing to its stepwise molecular build-up, the presented synthesis strategy enables better adaptation of the stationary phase to specific separation challenges.
The targeted design of monodisperse, mesoporous silica microspheres (MPSMs) as HPLC separation phases is still a challenge. MPSMs can be generated via a multi-step template-assisted method. However, this method, the factors affecting the individual process steps, and the resulting material properties are scarcely understood, and specific control of the complex multi-step process has hardly been discussed. In this work, the key synthesis steps were systematically investigated by means of statistical Design of Experiments (DoE). In particular, three steps were considered in detail: 1) the synthesis of porous poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) (p(GMA-co-EDMA)) particles, which, as template particles, determine the structure of the final MPSMs. In this context, functional models were generated that allow control of the template properties pore volume, pore size, and specific surface area. 2) In the presence of amino-functionalized template particles, the sol-gel process was carried out under Stöber process conditions. The water to tetraethyl orthosilicate (TEOS) ratio, as well as the concentration of ammonia as basic catalyst, were varied according to a face-centered central composite design (FCD). The incorporation of silica nanoparticles (SNPs) into the pore network of the porous polymers was investigated by scanning electron microscopy (SEM), evaluation of the pore properties by nitrogen sorption measurements, and determination of the inorganic content by thermogravimetric analysis (TGA). Here, material properties, such as the amount of attached silica, can be specifically controlled in the resulting organic/silica hybrid material (hybrid beads, HBs). Furthermore, depending on the sol-gel conditions, three (potentially four) reaction regimes were identified, leading to different HBs.
These range from porous polymer particles coated with a thin protective silica layer, through interpenetrating networks of polymer and silica, to particles potentially consisting of a porous polymer core coated with a silica shell. The effects of different precursors and solvents on silica incorporation were also investigated. 3) To obtain MPSMs from the HBs, the organic polymer template was removed by calcination. The effects of the sol-gel process conditions on the resulting MPSMs were evaluated, and the relationships between process conditions and material properties were captured in predictive models. Fully porous, spherical, monodisperse silica particles with sizes ranging from 0.5 µm to 7.8 µm and pore sizes from 3.5 nm to 72.4 nm can be prepared in a targeted manner. Following organo-functionalization, the prepared MPSMs were applied as reversed-phase HPLC column materials. These columns were successfully used for the separation of proteins and amino acids. The separation performance of the materials depends largely on the property profile of the MPSMs, which is predetermined during the preparation of the HBs.
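The face-centered central composite design (FCD) used to vary the sol-gel factors can be sketched in code. The following minimal Python snippet is a hedged illustration only: run order, center-point replication, and the actual factor levels are not taken from the dissertation, which varied the water/TEOS ratio and the ammonia concentration.

```python
from itertools import product

# Hedged sketch: runs of a face-centered central composite design (FCD)
# in coded units (-1 / 0 / +1). The two factors stand for the varied
# sol-gel parameters (water/TEOS ratio, ammonia concentration); the
# real, uncoded levels are an assumption left to the experimenter.

def face_centered_ccd(n_factors):
    """All runs of an FCD: full-factorial corners, axial points placed
    on the face centers (alpha = 1), and a single center point."""
    corners = list(product((-1, 1), repeat=n_factors))
    axial = []
    for i in range(n_factors):
        for level in (-1, 1):
            point = [0] * n_factors
            point[i] = level
            axial.append(tuple(point))
    center = [(0,) * n_factors]
    return corners + axial + center

runs = face_centered_ccd(2)  # 4 corners + 4 face points + 1 center = 9 runs
```

Because the axial points sit on the cube faces, every run stays inside the feasible factor cube, which is convenient when extreme factor combinations (e.g., very high ammonia concentrations) are physically problematic.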
Development of an indoor positioning system to create a digital shadow of production plant layouts
(2023)
The objective of this dissertation is to develop an indoor positioning system that allows the creation of a digital shadow of the plant layout in order to continuously represent the actual state of the physical layout in the virtual space. To define the requirements for such a system, potential stakeholders who could benefit from a digital shadow in the context of the plant layout were analysed, and the requirements were derived from their perspective in order to generate added value for their work. As the core of an indoor positioning system is the sensory capture of the physical layout parameters, different candidate technologies were compared and evaluated in terms of their suitability for this particular application. Based on this analysis, the selected concept uses a pan-tilt-zoom (PTZ) camera in combination with fiducial markers. To determine specific camera parameters, a series of experiments was conducted, which was necessary to develop the measurement method as well as the mathematical calculation method and coordinate transformation for determining the poses (positions and angular orientations) of the respective facilities in the plant. In addition, an experimental validation was performed to ensure that the limit values for individual parameters determined in the requirements analysis can be met.
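As a hedged illustration of the kind of coordinate transformation involved, the following sketch composes 2D poses (position plus heading) to express a marker-carrying facility in world coordinates from a known camera pose. The planar simplification and all names are assumptions for illustration, not the dissertation's actual calculation method, which also handles tilt and zoom.

```python
import math

# Hedged sketch: planar pose math for a marker-based indoor positioning
# setup. A pose is (x, y, theta); the thesis's full 3D method is not
# reproduced here.

def compose(a, b):
    """Chain pose b (given in a's frame) onto pose a (in the world frame)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            (at + bt) % (2 * math.pi))

def invert(p):
    """Return the inverse transformation of pose p."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) - y * math.cos(t),
            (-t) % (2 * math.pi))

# Camera pose in the world frame, and a fiducial marker seen by the camera:
world_T_camera = (2.0, 1.0, math.pi / 2)   # camera at (2, 1), rotated 90 deg
camera_T_marker = (3.0, 0.0, 0.0)          # marker 3 m straight ahead

# Pose of the facility carrying the marker, in world coordinates:
world_T_marker = compose(world_T_camera, camera_T_marker)
```

The same composition run in reverse (using `invert`) is what allows the camera's own extrinsic pose to be recovered from a marker whose world pose is already known.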
Intracranial brain tumors are among the ten most common malignant cancers and account for substantial morbidity and mortality. The largest histological category of primary brain tumors is the gliomas, which present with a highly heterogeneous appearance and can be challenging to discern radiologically from other brain lesions. Neurosurgery is usually the standard of care for newly diagnosed glioma patients and may be followed by radiation therapy and adjuvant temozolomide chemotherapy.
However, brain tumor surgery faces fundamental challenges in achieving maximal tumor removal while avoiding postoperative neurologic deficits. Two of these neurosurgical challenges are presented as follows. First, manual glioma delineation, including its sub-regions, is considered difficult due to its infiltrative nature and the presence of heterogeneous contrast enhancement. Second, the brain deforms its shape, called “brain shift,” in response to surgical manipulation, swelling due to osmotic drugs, and anesthesia, which limits the utility of pre-operative imaging data for guiding the surgery.
Image-guided systems provide physicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided toolkits are mainly computer-based systems employing computer vision methods to facilitate peri-operative surgical procedures. However, surgeons still need to mentally fuse the surgical plan from pre-operative images with real-time information while manipulating the surgical instruments inside the body and monitoring target delivery. Hence, the need for image guidance during neurosurgical procedures has always been a significant concern for physicians.
This research aims to develop a novel peri-operative image-guided neurosurgery (IGN) system, DeepIGN, that can achieve the expected outcomes of brain tumor surgery, thus maximizing the overall survival rate and minimizing post-operative neurologic morbidity. In the scope of this thesis, novel methods are first proposed for the core components of the DeepIGN system: brain tumor segmentation in MRI and registration of multimodal pre-operative MRI to intra-operative US (iUS) images, both using recent developments in deep learning. Then, the output predictions of the employed deep learning networks are further interpreted and examined by providing human-understandable explainable maps. Finally, open-source packages have been developed and integrated into widely endorsed software, which is responsible for integrating information from tracking systems, image visualization, image fusion, and displaying real-time updates of the instruments relative to the patient domain.
The components of DeepIGN have been validated in the laboratory and evaluated in the simulated operating room. For the segmentation module, DeepSeg, a generic decoupled deep learning framework for automatic glioma delineation in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance improvements were observed when employing advancements in deep learning approaches such as 3D convolutions over all slices, region-based training, on-the-fly data augmentation techniques, and ensemble methods.
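The Dice coefficient reported above is the standard overlap measure between a predicted segmentation and the expert ground truth, dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch on binary masks follows; flat Python lists stand in for 3D MRI label volumes, and this is not DeepSeg's actual evaluation code.

```python
# Dice similarity of two equal-length binary masks (1 = tumor voxel).
# A score of 1.0 means perfect overlap, 0.0 means no overlap at all.

def dice_coefficient(pred, truth):
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy 8-voxel "volume": the prediction misses one tumor voxel and adds one.
prediction   = [0, 1, 1, 1, 0, 0, 1, 0]
ground_truth = [0, 1, 1, 1, 1, 0, 0, 0]
score = dice_coefficient(prediction, ground_truth)  # 2*3 / (4+4) = 0.75
```

Because both false positives and false negatives shrink the score symmetrically, Dice is well suited to small structures such as tumor sub-regions, where plain voxel accuracy would be dominated by background.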
To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering pre-operative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments were conducted on two multi-location databases: BITE and RESECT. Two expert neurosurgeons provided additional qualitative validation of this study by overlaying MRI-iUS pairs before and after deformable registration. Experimental findings show that the proposed iRegNet is fast and achieves state-of-the-art accuracy. Furthermore, iRegNet delivers competitive results even on non-trained images, as proof of its generality, and can therefore be valuable in intra-operative neurosurgical guidance.
For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in applying AI techniques and deep neural networks. The NeuroXAI includes seven explanation methods providing visualization maps to help make deep learning models transparent. Experimental findings showed that the proposed XAI framework achieves good performance in extracting both local and global contexts in addition to generating explainable saliency maps to help understand the prediction of the deep network. Further, visualization maps are obtained to realize the flow of information in the internal layers of the encoder-decoder network and understand the contribution of MRI modalities in the final prediction. The explainability process could provide medical professionals with additional information about tumor segmentation results and therefore aid in understanding how the deep learning model is capable of processing MRI data successfully.
Furthermore, an interactive neurosurgical display has been developed for interventional guidance, which supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multi-modality DeepIGN system were established with the ability to incorporate: (1) pre-operative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The system's accuracy was tested using a custom agar phantom model, and its use in a pre-clinical operating room was simulated. The results of the clinical simulation confirmed that system assembly was straightforward, achievable in a clinically acceptable time of 15 min, and performed with a clinically acceptable level of accuracy.
In this thesis, a multimodality IGN system has been developed using recent advances in deep learning to accurately guide neurosurgeons, incorporating pre- and intra-operative patient image data and interventional devices into the surgical procedure. DeepIGN is developed as open-source research software to accelerate research in the field, enable easy sharing between multiple research groups, and allow continuous development by the community. The experimental results hold great promise for applying deep learning models to assist interventional procedures, a crucial step towards improving the surgical treatment of brain tumors and the corresponding long-term post-operative outcomes.
Human recognition is an important part of perception systems, such as those used in autonomous vehicles or robots. These systems often use deep neural networks for this purpose, which rely on large amounts of data that ideally cover various situations, movements, visual appearances, and interactions. However, obtaining such data is typically complex and expensive. In addition to raw data, labels are required to create training data for supervised learning. Thus, manual annotation of bounding boxes, keypoints, orientations, or actions performed is frequently necessary. This work addresses whether the laborious acquisition and creation of data can be simplified through targeted simulation. If data are generated in a simulation, information such as positions, dimensions, orientations, surfaces, and occlusions is already known, and appropriate labels can be generated automatically. A key question is whether deep neural networks trained with simulated data can be applied to real data. This work explores the use of simulated training data using examples from the field of pedestrian detection for autonomous vehicles. On the one hand, it is shown how existing systems can be improved by targeted retraining with simulation data, for example to better recognize corner cases. On the other hand, the work focuses on the generation of data that occur rarely or not at all in real standard datasets. It is demonstrated how training data containing finely graded action labels can be generated by targeted acquisition and combination of motion data and 3D models, so that even complex pedestrian situations can be recognized. Through the diverse annotation data that simulations provide, it becomes possible to train deep neural networks for a wide variety of tasks with one dataset.
In this work, such simulated data is used to train a novel deep multitask network that brings together diverse, previously mostly independently considered but related, tasks such as 2D and 3D human pose recognition and body and orientation estimation.
In modern collaborative production environments, where industrial robots and humans are supposed to work hand in hand, it is mandatory to observe the robot's workspace at all times. Such observation is even more crucial when the robot's main position is itself dynamic, e.g., because the system is mounted on a movable platform. As current solutions, like physically secured areas in which a robot can perform actions potentially dangerous for humans, become unfeasible in such scenarios, novel, more dynamic, and situation-aware safety solutions need to be developed and deployed.
This thesis mainly contributes to the bigger picture of such a collaborative scenario by presenting a data-driven, convolutional neural network-based approach to estimate the two-dimensional kinematic-chain configuration of industrial robot arms within raw camera images. It also provides the information needed to generate and organize the mandatory data basis and presents the frameworks used to realize all involved subsystems. The robot arm's extracted kinematic chain can also be used to estimate the extrinsic camera parameters relative to the robot's three-dimensional origin. Furthermore, a tracking system based on a two-dimensional kinematic-chain descriptor is presented, which accumulates a proper movement history and thereby enables the prediction of future target positions within the given image plane. The combination of the extracted robot pose with a simultaneous human pose estimation system delivers a consistent data flow that can be used in higher-level applications.
This thesis also provides a detailed evaluation of all involved subsystems and a broad overview of their performance, based on newly generated, semi-automatically annotated real datasets.
Supply chains have evolved into dynamic, interconnected supply networks, which increases the complexity of achieving end-to-end traceability of object flows and their experienced events. With its capability to ensure a secure, transparent, and immutable environment without relying on a trusted third party, the emerging blockchain technology shows strong potential to enable end-to-end traceability in such complex multitiered supply networks. However, as the dissertation’s systematic literature review reveals, the currently available blockchain-based traceability solutions lack the ability to map object-related supply chain events holistically, which involves mapping objects’ creation and deletion, aggregation and disaggregation, transformation, and transaction. Therefore, this dissertation proposes a novel blockchain-based traceability architecture that integrates governance and token concepts to overcome the limitations of existing architectures. While the governance concept manages the supply chain structure on an application level, the token concept includes all functions to conduct object-related supply chain events. For this to be possible, this dissertation’s token concept introduces token ‘blueprints’, which allow clients to group tokens into different types, where tokens of the same type are non-fungible. Furthermore, blueprints can include minting conditions, which are, for example, necessary when mapping assembly or delivery processes. In addition, the token concept contains logic for reflecting all conducted object-related events in an integrated token history. This ultimately leads to end-to-end traceability of tokens and their physical or abstract representatives on the blockchain. For validation purposes, this dissertation implements the architecture’s components and their update and request relationships in code and proves its applicability based on the Ethereum blockchain. 
Finally, this dissertation provides a scenario-based evaluation based on two industrial case studies from a manufacturing and logistics perspective to validate the architecture's capabilities when applied in real-world industrial settings. The proposed blockchain-based traceability architecture covers all object-related supply chain events derived from the two case studies, demonstrating its general-purpose end-to-end traceability of object flows.
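The token concept described in this abstract can be sketched in ordinary code. The following minimal Python model is a hedged illustration only: all class and function names are assumptions, and the actual architecture runs as smart contracts on the Ethereum blockchain. Blueprints type the tokens, minting conditions gate transformation events such as assembly, and every object-related event is appended to an integrated token history.

```python
# Hedged sketch of a blueprint-typed token with an integrated event history.
# Illustrative only; not the dissertation's smart contract interface.

class Blueprint:
    def __init__(self, type_name, minting_condition=None):
        self.type_name = type_name
        # Optional predicate over input tokens, e.g. "exactly the required
        # parts are present" when mapping an assembly process.
        self.minting_condition = minting_condition or (lambda inputs: True)

class Token:
    _next_id = 0

    def __init__(self, blueprint, owner):
        Token._next_id += 1
        self.token_id = Token._next_id          # tokens of one type are non-fungible
        self.blueprint = blueprint
        self.owner = owner
        self.history = [("created", owner)]     # integrated event history

    def transact(self, new_owner):
        """Transaction event: ownership moves along the supply network."""
        self.owner = new_owner
        self.history.append(("transacted", new_owner))

def transform(blueprint, inputs, owner):
    """Transformation event: mint an output token if the blueprint's
    minting condition holds, recording the consumed input tokens."""
    if not blueprint.minting_condition(inputs):
        raise ValueError("minting condition not met")
    output = Token(blueprint, owner)
    output.history.append(("transformed_from", [t.token_id for t in inputs]))
    return output

# A gearbox may only be minted from exactly two input parts:
gearbox_bp = Blueprint("gearbox", minting_condition=lambda ins: len(ins) == 2)
part_bp = Blueprint("part")
parts = [Token(part_bp, "supplier"), Token(part_bp, "supplier")]
gearbox = transform(gearbox_bp, parts, "manufacturer")
gearbox.transact("logistics-provider")
```

Reading `gearbox.history` back yields the end-to-end trace of the object: its creation, the parts it was transformed from, and every ownership transaction since.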
Over the last decades, a tremendous shift toward using information technology in almost every daily routine of our lives can be perceived in our society, entailing an enormous growth of data collected day by day by Web, IoT, and AI applications.
At the same time, magneto-mechanical HDDs are being replaced by semiconductor storage such as SSDs, equipped with modern non-volatile memories like Flash, which yield significantly lower access latencies and higher levels of parallelism. Likewise, the execution speed of processing units has increased considerably, as today's server architectures comprise up to several hundred independently working CPU cores along with a variety of specialized computing co-processors such as GPUs or FPGAs.
However, the burden of moving the continuously growing data to the best-fitting processing unit is inherently linked to today's computer architecture, which is based on the data-to-code paradigm. In light of Amdahl's Law, this leads to the conclusion that even with today's powerful processing units, the speedup of systems is limited, since the fraction of parallel work is largely I/O-bound.
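Amdahl's argument can be made concrete: if I/O leaves only a fraction p of the work parallelizable, the overall speedup is 1 / ((1 - p) + p / s) for a parallel acceleration factor s, and is capped at 1 / (1 - p) no matter how many cores are added. A small sketch follows; the value p = 0.5 is an illustrative assumption, not a figure from the dissertation.

```python
# Amdahl's Law: overall speedup when a fraction p of the work is
# accelerated by factor s (e.g., s CPU cores), while the remaining
# (1 - p) stays serial, here standing for the I/O-bound share.

def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# With p = 0.5, even unlimited parallelism cannot exceed a 2x speedup:
speedups = [amdahl_speedup(0.5, s) for s in (2, 16, 1024)]
```

This ceiling on the serial share is exactly what offloading work to computational storage attacks: by shrinking the I/O-bound fraction itself rather than adding more cores.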
Therefore, throughout this cumulative dissertation, we investigate the paradigm shift toward code-to-data, known as Near-Data Processing (NDP), which relieves the contention on the I/O bus by offloading processing to intelligent computational storage devices, where the data is originally located.
Firstly, we identified Native Storage Management as the essential foundation for NDP due to its direct control of physical storage management within the database. Upon this, the interface is extended to propagate address mapping information and to invoke NDP functionality on the storage device. As the former can become very large, we introduce Physical Page Pointers as one novel NDP abstraction for self-contained immutable database objects.
Secondly, the on-device navigation and interpretation of data are elaborated. To this end, we introduce cross-layer Parsers and Accessors as another NDP abstraction that can be executed on the heterogeneous processing capabilities of modern computational storage devices. Thereby, the compute placement and resource configuration per NDP request are identified as a major performance criterion. Our experimental evaluation shows execution-time improvements of 1.4x to 2.7x compared to traditional systems. Moreover, we propose a framework for the automatic generation of Parsers and Accessors on FPGAs to ease their application in NDP.
Thirdly, we investigate the interplay of NDP and modern workload characteristics like HTAP. To this end, we present different offloading models and focus on an intervention-free execution. By propagating the Shared State with the latest modifications of the database to the computational storage device, it is able to process data with transactional guarantees. Thus, we extend the design space of HTAP with NDP by providing a solution that optimizes for performance isolation, data freshness, and the reduction of data transfers. In contrast to traditional systems, we experience no significant drop in performance when an OLAP query is invoked, but rather a steady throughput that is 30% higher.
Lastly, in-situ result-set management and consumption as well as NDP pipelines are proposed to achieve flexibility in processing data on heterogeneous hardware. As these produce final and intermediary results, we investigate their management and find that on-device materialization comes at a low cost but enables novel consumption modes and reuse semantics. Thereby, we achieve significant performance improvements of up to 400x by reusing once-materialized results multiple times.
Advancements in Internet of Things (IoT), cloud, and mobile computing have fostered the digital enrichment—or “digitization”—of physical products, which are gaining increasing relevance in practice. According to recent studies, global IoT spending will exceed USD 1 trillion by 2021 and there will be over 25 billion IoT connections (KPMG, 2018). Porter and Heppelmann (2014) state that IT is “revolutionizing products [as …] IT is becoming an integral part of the product itself.” Senior business executives like GE’s former CEO Jeff Immelt (2015) even propose that “every industrial company in the coming age is also going to become a software and analytics company.” This reflects the increasing relevance of integrating IT components (i.e., software, data analytics, cloud computing) into previously purely physical products. We call such IT-enriched physical products “digitized” products to differentiate them from purely intangible “digital” products, such as digital music, e-books, and software. Examples of digitized products include the Philips Hue smartphone-controllable lightbulb, Audi Connect internet-connected cars, and Rolls-Royce’s sensor-enabled pay-per-use jet engines.
Digitized products provide their producers with a wide range of opportunities to offer new functionality and product capabilities (e.g., autonomy) that traditional, physical products do not exhibit (Porter and Heppelmann, 2014). In addition, the digitization of products allows producers to continuously repurpose their offerings, by extending and/or changing the product functionality and, thus, enabling new value creation opportunities. Based on their re-programmability and connectivity, digitized products “remain essentially incomplete […] throughout their lifetime as users continue to add and delete […] and change […] functional capabilities” (Yoo, 2013). For instance, the Philips Hue connected lightbulb enables remote control of basic functions (e.g., switching on and off the light) as well as setting more advanced light scenes for day-to-day tasks (e.g., relax, read) via Amazon’s Alexa artificial intelligence assistant (Signify, 2019), offerings that were not intended use cases when Signify (previously known as Philips Lighting) created Hue in 2012. Thus, digitized products present limitless potentials for new functionality and unforeseen use cases, which provides them with a huge innovation capacity.
Despite the limitless potentials offered by digitized products, there has been a slow uptake of digitized products by businesses so far (Jernigan et al., 2016; Mocker et al., 2019). According to a 2016 MIT Sloan Management Review report (Jernigan et al., 2016), only 24% of the investigated firms were actively using IoT technologies, a key technology for digitized products. In a more recent study, Mocker et al. (2019) found that the median revenue share from digital offerings (i.e., solutions based on IT-enriched products) in large companies accounted for only 5% of the total revenue of the investigated companies.
The slow uptake of digitized products might be explained by the challenges that firms face regarding the changing nature of digitized products. Pervasive digital technologies (such as IoT) change the nature of products by adding new functionality that was previously not part of the value proposition of the products/services (e.g., a pair of shoes embedded with sensors and connectivity allows joggers to have access to data regarding their run distance, speed, etc.) (Yoo et al., 2012). The addition of new functionality and use cases of digitized products makes it harder for producers to design and develop relevant products (Hui 2014). As described in the paper ‘Do Your Customers Actually Want a “Smart” Version of Your Product?’, “just because [firms] can make something with IoT technology doesn’t mean people will want it.” (Smith, 2017).
The shift in digitized products’ nature poses new challenges for producers along the entire product development process (Porter and Heppelmann, 2015; Yoo et al., 2012) and creates a paradox in product digitization, described by Yoo et al. (2012) as the paradox of pace: while technology accelerates the rate of innovation, companies need to spend more time to digitize their products, extending time to market. The production of these digitized products also becomes more challenging, e.g., as companies need to deal with the different clock speeds of software and hardware development (Porter and Heppelmann, 2015). These challenges suggest that producers need to better understand how they can generate value from their digitized products’ generative potentials.
The body of literature on digitized products has been growing in recent years. For instance, Herterich et al. (2016) investigate how digitized product affordances (i.e., potentials) enable industrial service innovation; Nicolescu et al. (2018) explore the emerging meanings of value associated with IoT; and Benbunan-Fich (2019) studies the impact of basic wearable sensors on the quality of the user experience. However, it remains unclear what it takes for firms to generate value with their digitized product potentials. This dissertation investigates this research gap.
The extracellular matrix (ECM) is the non-cellular part of tissues and represents the natural environment of the cells. Next to structural stability, it provides various physical, chemical, and mechanical cues that strongly regulate and influence cellular behavior and are required for tissue morphogenesis, differentiation, and homeostasis. Due to its promising characteristics, ECM is used in a wide range of tissue engineering and regenerative medicine approaches as a biomaterial for coatings and scaffolds. To date, there are two sources of ECM material. First, native ECM is generated by removing the residing cells of a tissue or organ (decellularized ECM; dECM). Second, cell-derived ECM (cdECM) can be generated by and isolated from in vitro cultured cells. Although both types of ECM have been used intensively for tissue engineering and regenerative medicine approaches, studies directly characterizing and comparing them are rare. Hence, in the first part of this thesis, dECM from adipose tissue and cdECM from adipose-derived stem cells (ASCs) and adipogenically differentiated ASCs were characterized with regard to their macromolecular composition, structural features, and biological purity. The dECM was found to exhibit higher levels of collagens and lower levels of sulfated glycosaminoglycans compared to cdECMs. Structural characterization revealed an immature state of collagen fibers in cdECM samples. The obtained results revealed differences between the two ECMs that can relevantly impact cellular behavior and, subsequently, experimental outcomes, and should therefore be considered when choosing a biomaterial for a specific application. The establishment of a functional vascular system in tissue constructs to realize an adequate nutrient supply remains challenging. In the second part, the supporting effect of cdECM on the self-assembled formation of prevascular-like structures by microvascular endothelial cells (mvECs) was investigated.
It could be observed that cdECM, especially adipogenically differentiated cdECM, enhanced the formation of prevascular-like structures. An increased concentration of proangiogenic factors was found in cdECM substrates. The demonstrated capability of cdECM to induce the spontaneous formation of prevascular-like structures by mvECs highlights cdECM as a promising biomaterial for adipose tissue engineering. Depending on the purpose of the ECM material, chemical modification might be necessary. In the third and last part, the chemical functionalization of cdECM with dienophiles (terminal alkenes, cyclopropene) by metabolic glycoengineering (MGE) was demonstrated. MGE allows the chemical functionalization of cdECM via the natural metabolism of the cells and without affecting the chemical integrity of the cdECM. The incorporated dienophile groups can be specifically addressed via a catalyst-free, cell-friendly inverse electron-demand Diels-Alder reaction. Using this system, the successful modification of cdECM from ASCs with an active enzyme could be shown. The possibility to modify cdECM via a cell-friendly chemical reaction opens up a wide range of possibilities to improve cdECM depending on the purpose of the material. Altogether, this thesis highlighted the differences between adipose dECM and cdECM from ASCs and demonstrated cdECM as a promising alternative to native dECM for applications in tissue engineering and regenerative medicine.
In today’s marketplace, the consumption of luxury goods is at a peak due to increasing global wealth and low interest rates, resulting in a vast supply of goods and services for which customer experiences are more relevant than ever before. One of the most recent developments in this field shows that consumers no longer simply purchase a product or service based on the fact sheet; they are also interested in the experience around the product. Successful brands must develop and maintain individual images to sustain their competitive advantage and build brand equity that is beneficial for customers and firms. Ideally, these will lead to satisfaction and loyalty between a brand, its products, and its customers. Existing research on brand experience and brand equity has mainly focused on functional aspects, which seem to differ for high-value luxury goods. Most studies have focused on industries like retail and fashion brands, sampling university students or visitors to shopping malls, and some have even mixed different types of industries together. This underpins the need for research within a single luxury industry with actual luxury customers who have a solid background in brand experiences.
The purpose of this study was to explore the brand experience spectrum within the automotive industry in Germany, particularly in the affordable luxury sports car sector. A clear aim of the study was to identify the factors and components that constitute, influence, or drive a brand experience from the customers’ perspective. To achieve this, the study collected data from in-depth interviews with German respondents (n=60) who had experience with affordable and luxury sports cars. The conceptual framework was based on two empirically tested models guiding this exploratory consumer research. The first model to build on was the consumer-based brand equity model, empirically tested by Çifci et al. (2016) and Nam et al. (2011). The second was Lemon and Verhoef’s (2016) customer journey model, consisting of relevant touchpoints along three stages: pre-purchase, purchase, and post-purchase.
The findings of the research demonstrate that, although the six brand equity concepts – brand awareness, physical quality, staff behaviour, self-congruence, brand identification, and lifestyle – are broadly applicable in understanding customer experience in the affordable luxury car industry, the content of these dimensions differs from that suggested by the previous authors. The research established that cognitive and affective (or symbolic) components build the foundation of customer brand experience, supporting Çifci et al.’s (2016) and Nam et al.’s (2011) study results. The study also identified brand trust as an important and highly relevant concept for customer brand experience in the luxury automotive industry. Brand trust influences customer satisfaction and loyalty, thereby improving and complementing the existing model. Furthermore, the study confirmed Lemon and Verhoef’s (2016) process model of the customer journey and experience; however, it suggested two different customer journeys depending on the customers’ previous experience (first-time and experienced buyers). The relevance of the journey touchpoints within the three purchase stages differs significantly between the two groups. Key touchpoints identified for both groups are contact with a dealer and online information gathering, while differences were found in the length of the purchase stages and across the customer journey. The study highlights the importance of trust, identification, and product quality for customer brand experience. Moreover, the findings complement the brand equity model of Çifci et al. (2016) by adding the new, highly relevant concept of trust. The current knowledge is further complemented by a new understanding and mapping of the customer journey for luxury sports cars in Germany.
This study can assist practitioners and managers by providing a compass indicating which touchpoints are relevant to which customer group. Social value can be achieved by encouraging interactions between brand and consumer (e.g. central product launch events) and through brand-oriented interactions among consumers (e.g. dealer events, clubs, or communities). Customers are motivated to express their distinctiveness through product experience and brand identification (belonging/distinction) and to develop a loyal link to brands.
Intralogistics operations in automotive OEMs increasingly confront problems of overcomplexity caused by a customer-centred production that requires customisation and, thus, high product variability, short-notice changes in orders and the handling of an overwhelming number of parts. To alleviate the pressure on intralogistics without sacrificing performance objectives, the speed and flexibility of logistical operations have to be increased. One approach to this is to utilise three-dimensional space through drone technology. This doctoral thesis aims at establishing a framework for implementing aerial drones in automotive OEM logistic operations.
As of yet, there is no research on implementing drones in automotive OEM logistic operations. To contribute to filling this gap, this thesis develops a framework for Drone Implementation in Automotive Logistics Operations (DIALOOP) that allows for a close interaction between the strategic and the operative level and can lead automotive companies through a decision and selection process regarding drone technology.
A preliminary version of the framework was developed on a theoretical basis and was then revised using qualitative-empirical data from semi-structured interviews with two groups of experts, i.e. drone experts and automotive experts. The drone expert interviews contributed a current overview of drone capabilities. The automotive expert interviews were used to identify intralogistics operations in which drones can be implemented, along with the performance measures that can be improved by drone usage.
Furthermore, all interviews explored developments and changes with a foreseeable influence on drone implementation.
The revised framework was then validated using participant validation interviews with automotive experts.
The finalised framework defines a step-by-step process leading from strategic decisions and considerations, via the identification of logistics processes suitable for drone implementation and the relevant performance measures, to the choice of appropriate drone types based on a drone classification developed in this thesis specifically for the automotive context.
Corporate entrepreneurship in the public sector: exploring the peculiarities of public enterprises
(2021)
Entrepreneurship is predominantly treated as a private-sector phenomenon, and consequently its increasing importance in the public sector goes largely unremarked. This prevents the research field of entrepreneurship from spanning multiple sectors. Accordingly, recent research calls for the study of corporate entrepreneurship (CE) as it manifests in the public sector, where it can be labeled public entrepreneurship (PE). This dissertation considers government an essential entrepreneurial actor and is led by the central research question: What are the peculiarities of the public sector and how do they impact public enterprises’ entrepreneurial orientation (EO)?
Accordingly, this dissertation includes three studies focusing on public enterprises. Two of the studies set the scope of this thesis by investigating a specific type of organization in a specific context—German majority-government-owned energy suppliers. These enterprises operate in a liberalized market experiencing environmental uncertainties like competitiveness and business transformation.
The aims and results of the studies included in this dissertation can be summarized as follows: The systematic literature review illuminates the stimuli of and barriers to entrepreneurial activities in public enterprises and the potential outcomes of such activities discussed so far. The review reveals that research on EO has tended to focus on the private sector and consequently that barriers to and outcomes of entrepreneurial activities in the public sector remain under-researched. Building on these findings, the qualitative study focuses on the interrelated barriers affecting entrepreneurship in public enterprises and the outcomes of entrepreneurial activities being inhibited. The study adopts an explorative comparative causal mapping approach to address the above-mentioned research goal and the lack of clarity around how barriers identified in the public sphere are interrelated. Furthermore, the study bases its investigation on the different business segments of sales (competitive market) and the distribution grid (natural monopoly) to account for recent calls for fine-grained research on PE. Results were compared with prior findings in the public and private sector. That comparison indicates that the barriers revealed align with aspects discussed in prior research findings relating to both sectors. Examples include barriers associated with the external environment such as legal constraints and barriers originating from within the organization such as employee behavior linked to a value system that hampers entrepreneurial action. However, the most important finding is that a public enterprise’s supervisory board can hinder its progress, a finding running counter to those of previous private-sector research and one that underscores the widespread prejudice that the involvement of a public shareholder and its nominated board of directors has a negative effect on EO. 
The third study is quantitative (data collection via a questionnaire) and builds on both its predecessors to examine the little-understood topic of board behavior and public enterprises’ social orientation as predictors of EO. The study’s results indicate that social orientation represses EO, whereas board strategy control (BSC) does not seem to predict EO. Regarding BSC, we find that the local government owners in our sample are less involved in BSC. The third study also examines board networking and finds that its relationship with EO depends on the ownership structure of the public-sector organization. An important finding is that minority shareholders, such as majority privately-owned enterprises and hub firms, repress EO when engaging in board networking.
In summary, this doctoral thesis contributes to the under-researched topic of CE in the public sector. It investigates the peculiarities of this sector by focusing, in the quantitative study, on the supervisory board and socially oriented activities and their impact on the enterprise’s EO. The thesis addresses institutional questions regarding ownership, and the last study in particular contributes to expanding resource dependence theory and invites a nuanced perspective: the original perspective suggests that interorganizational arrangements like interfirm network ties and equity holdings reduce external resource dependency and consequently improve firm performance. The findings of this thesis show that resource delivery can also have contrary effects, thereby extending the understanding of interorganizational action with important implications for practice.
IT governance: current state of and future perspectives on the concept of agility in IT governance
(2020)
Digital transformation has changed corporate reality and, with that, corporates’ IT environments and IT governance (ITG). As such, the perspective of ITG has shifted from the design of a relatively stable, closed and controllable system of a self-sufficient enterprise to a relatively fluid, open, agile and transformational system of networked co-adaptive entities. Related to the paradigm shift in ITG, this thesis aims to conceptualize a framework to integrate the concept of agility into the traditional ITG framework and to test the effects of such an extended ITG framework on corporate performance.
To do so, the thesis uses literature research and a mixed method design by blending both qualitative and quantitative research methods. Given the poorly understood situation of the agile mechanisms within the ITG framework, the building process of this thesis’ research model requires an adaptive and flexible approach which involves four different research phases. The initial a priori research model based on a comprehensive review of the extant literature is critically examined and refined at the end of each research phase, which later forms the basis of a subsequent research phase. As a result, the final research model provides guidance on how the conceptualized framework leads to better business/IT alignment as well as how business/IT alignment can mediate the effectiveness of such an extended ITG framework on corporate performance.
The first research phase explores the current state of literature with a focus on the ITG-corporate performance association. This analysis identifies five perspectives with respect to the relationship between ITG and corporate performance. The main variables lead to the perspectives of business/IT alignment, IT leadership, IT capability and process performance, resource relatedness and culture. Furthermore, the analysis presents core aspects explored within the identified perspectives that could act as potential mediators or moderators in the relationship between ITG and corporate performance.
The second research phase investigates the agile aspect of an effective ITG framework in the dynamic contemporary world through a qualitative study. Drawing on 46 semi-structured interviews with governance experts across various industries, the study identifies 25 agile ITG mechanisms and 22 traditional ITG mechanisms that corporations use to master digital transformation projects. Moreover, the research reveals two key patterns pointing to a need for ambidextrous ITG, with corporations alternating between stability and agility in their ITG mechanisms.
In research phase three, a scale development process is conducted to operationalize the agile mechanisms explored in research phase two. Through 56 qualitative interviews with professionals, the evaluation uncovers 46 agile governance mechanisms. These mechanisms are then rated by 29 experts to identify the most effective ones, leading to six structural elements, eight processes, and eight relational mechanisms.
Finally, in research phase four a quantitative research approach through a survey of 400 respondents is established to test and predict the formulated relationships by using the partial least squares structural equation modelling (PLS-SEM) method. The results provide evidence for a strong causal relationship among an expanded ITG concept, business/IT alignment, and corporate performance. These findings reveal that the agile ITG mechanisms within an effective ITG framework seem critical in today’s digital age.
This research is unique in exploring the combination of traditional and agile ITG mechanisms. It contributes to the theoretical base by integrating and extending the literature on ITG, business/IT alignment, ambidexterity and agility, all of which have long been recognized as critical for achieving organizational goals. In summary, this work presents an original analysis of an effective ITG framework for digital transformation by including the agile aspect within the ITG construct. It highlights that it is not enough to apply only traditional mechanisms to achieve effective business/IT alignment in today’s digital age; agile ITG mechanisms are also needed. Therefore, a novel ITG framework following an ambidextrous approach is provided, consisting of traditional ITG mechanisms as well as newly developed agile ITG practices. This thesis also demonstrates that agile ITG mechanisms can be measured independently of traditional ITG mechanisms within one causal model. This is an important theoretical outcome that allows the current state of ITG to be assessed in two distinct dimensions, offering various pathways for further research on the different antecedents and effects of traditional and agile ITG mechanisms. Furthermore, this thesis makes practical contributions by highlighting the need to develop a basic governance framework powered by traditional ITG mechanisms and simultaneously increase agility in ITG mechanisms. The results imply that corporations might be even more successful if they include both traditional and agile mechanisms in their ITG framework. In this way, the uncovered agile ITG practices may provide a template for CIOs to derive their own mechanisms, following an ambidextrous approach that is suitable for their corporation.
After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. This book presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), a novel interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic approach, similar to the roundup of a sheep herd, is to let autonomous layout modules interact with each other inside a successively tightened layout zone. Considering various principles of self-organization, remarkable overall solutions can result from the individual, local, selfish actions of the modules. Displaying this fascinating phenomenon of emergence, examples demonstrate SWARM’s suitability for floorplanning purposes and its application to practical place-and-route problems. From an academic point of view, SWARM combines the strengths of procedural generators with the assets of optimization algorithms, thus paving the way for a new automation paradigm called bottom-up meets top-down.
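The self-organization principle behind such a decentralized approach can be illustrated with a deliberately simplified sketch. This is not SWARM itself: the point-like "modules", the separation rule, and all parameters below are illustrative assumptions meant only to show how local, selfish actions inside a successively tightened zone can yield an ordered overall arrangement.

```python
import random

random.seed(42)

def separation_move(me, others, min_dist=1.0):
    """Selfish local rule: move away from neighbours that are too close."""
    dx = dy = 0.0
    for ox, oy in others:
        ddx, ddy = me[0] - ox, me[1] - oy
        d = (ddx * ddx + ddy * ddy) ** 0.5
        if 0 < d < min_dist:
            dx += ddx / d
            dy += ddy / d
    return dx, dy

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

# Eight autonomous "modules" scattered in a 10x10 layout zone
modules = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(8)]
zone = 10.0

for step in range(50):
    zone = max(4.0, zone * 0.98)  # successively tighten the layout zone
    updated = []
    for i, m in enumerate(modules):
        others = modules[:i] + modules[i + 1:]
        dx, dy = separation_move(m, others)
        # each module acts only on local information, bounded by the zone
        updated.append((clamp(m[0] + 0.3 * dx, 0, zone),
                        clamp(m[1] + 0.3 * dy, 0, zone)))
    modules = updated
```

After the run, all modules sit inside the final 4x4 zone while the separation rule has spread them apart, a toy instance of the emergence phenomenon the book describes.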
High Performance Computing (HPC) enables significant progress in both science and industry. Whereas traditionally parallel applications have been developed to address the grand challenges in science, as of today, they are also heavily used to speed up the time-to-result in the context of product design, production planning, financial risk management, medical diagnosis, as well as research and development efforts. However, purchasing and operating HPC clusters to run these applications requires huge capital expenditures as well as operational knowledge and thus is reserved to large organizations that benefit from economies of scale. More recently, the cloud evolved into an alternative execution environment for parallel applications, which comes with novel characteristics such as on-demand access to compute resources, pay-per-use, and elasticity. Whereas the cloud has been mainly used to operate interactive multi-tier applications, HPC users are also interested in the benefits offered. These include full control of the resource configuration based on virtualization, fast setup times by using on-demand accessible compute resources, and eliminated upfront capital expenditures due to the pay-per-use billing model. Additionally, elasticity allows compute resources to be provisioned and decommissioned at runtime, which allows fine-grained control of an application's performance in terms of its execution time and efficiency as well as the related monetary costs of the computation. Whereas HPC-optimized cloud environments have been introduced by cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, existing parallel architectures are not designed to make use of elasticity. This thesis addresses several challenges in the emergent field of High Performance Cloud Computing. In particular, the presented contributions focus on the novel opportunities and challenges related to elasticity. 
First, the principles of elastic parallel systems as well as related design considerations are discussed in detail. On this basis, two exemplary elastic parallel system architectures are presented, each of which includes (1) an elasticity controller that controls the number of processing units based on user-defined goals, (2) a cloud-aware parallel execution model that handles coordination and synchronization requirements in an automated manner, and (3) a programming abstraction to ease the implementation of elastic parallel applications. To automate application delivery and deployment, novel approaches are presented that generate the required deployment artifacts from developer-provided source code in an automated manner while considering application-specific non-functional requirements. Throughout this thesis, a broad spectrum of design decisions related to the construction of elastic parallel system architectures is discussed, including proactive and reactive elasticity control mechanisms as well as cloud-based parallel processing with virtual machines (Infrastructure as a Service) and functions (Function as a Service). To evaluate these contributions, extensive experimental evaluations are presented.
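A reactive elasticity controller of the kind described above can be sketched as a simple feedback loop. The goal metric (a target task-completion rate), the thresholds, and the doubling/halving policy below are illustrative assumptions, not the actual implementation of the presented architectures.

```python
def control_step(current_units, measured_rate, target_rate,
                 min_units=1, max_units=64, tolerance=0.1):
    """One reactive control iteration: scale out if the measured rate
    falls short of the user-defined goal, scale in if it overshoots
    the goal by more than the tolerance band."""
    if measured_rate < target_rate * (1 - tolerance):
        return min(max_units, current_units * 2)    # scale out
    if measured_rate > target_rate * (1 + tolerance):
        return max(min_units, current_units // 2)   # scale in
    return current_units                            # goal met: keep size

# Simulated monitoring samples (tasks/s) driving the controller
units = 4
for rate in [50, 70, 120, 210, 95]:
    units = control_step(units, rate, target_rate=100)
```

In a real elastic parallel system, the measured rate would come from runtime monitoring and the returned unit count would trigger provisioning or decommissioning of cloud resources, trading execution time against monetary cost.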
Customer orientation should be the core engine of every organisation. Information technology can be considered the enabler for generating competitive advantages through customer processes in marketing, sales and service. The impact of information technologies is the biggest risk and at the same time a huge opportunity for any organisation. Research shows that Customer Relationship Management (CRM) enables organisations to perform better and focus more on their customers (e.g. the market capitalisation of Amazon). While global enterprises are shaping the future of customer centricity and information technology, the question arises as to how German B2B organisations can shift their value contribution from product-centric to customer-centric. These organisations are therefore attempting to implement CRM software and to put their customers more into focus. However, the question remains how organisations are approaching the implementation of CRM and whether these attempts are paying off in terms of business performance.
Joining this highly topical discussion, this thesis contributes to the body of knowledge about the implementation of CRM in the German B2B sector and its impact on business performance. First, theoretical frameworks were developed based on an extensive literature review. Different aspects of CRM are worked out and mapped against three dimensions of business performance, namely process efficiency, customer satisfaction and financial performance. Based on the theory, a conceptual framework was developed to test the relationships between CRM and Business Performance (BP). To this end, a survey with 500 participants was conducted, and a measurement model was developed to test five main hypotheses.
The findings suggest that the implementation of CRM positively impacts business performance. Specifically, the usage of analytical CRM and the establishment of a dedicated CRM success measurement correlate with the performance of German B2B organisations. In addition to these main findings, various key statements could be derived from the research, and a measurement model was developed that can be used to assess BP across different organisational characteristics. As a result, CRM implementations can be enhanced, and business performance can be improved.
Context: Fast moving markets and the age of digitization require that software can be quickly changed or extended with new features. The associated quality attribute is referred to as evolvability: the degree of effectiveness and efficiency with which a system can be adapted or extended. Evolvability is especially important for software with frequently changing requirements, e.g. internet-based systems. Several evolvability-related benefits were arguably gained with the rise of service-oriented computing (SOC) that established itself as one of the most important paradigms for distributed systems over the last decade. The implementation of enterprise-wide software landscapes in the style of service-oriented architecture (SOA) prioritizes loose coupling, encapsulation, interoperability, composition, and reuse. In recent years, microservices quickly gained in popularity as an agile, DevOps-focused, and decentralized service-oriented variant with fine-grained services. A key idea here is that small and loosely coupled services that are independently deployable should be easy to change and to replace. Moreover, one of the postulated microservices characteristics is evolutionary design.
Problem Statement: While these properties provide a favorable theoretical basis for evolvable systems, they offer no concrete and universally applicable solutions. As with each architectural style, the implementation of a concrete microservice-based system can be of arbitrary quality. Several studies also report that software professionals trust in the foundational maintainability of service orientation and microservices in particular. A blind belief in these qualities without appropriate evolvability assurance can lead to violations of important principles and therefore negatively impact software evolution. In addition to this, very little scientific research has covered the areas of maintenance, evolution, or technical debt of microservices.
Objectives: To address this, the aim of this research is to support developers of microservices with appropriate methods, techniques, and tools to evaluate or improve evolvability and to facilitate sustainable long-term development. In particular, we want to provide recommendations and tool support for metric-based as well as scenario-based evaluation. In the context of service-based evolvability, we furthermore want to analyze the effectiveness of patterns and collect relevant antipatterns. Methods: Using empirical methods, we analyzed the industry state of the practice and the academic state of the art, which helped us to identify existing techniques, challenges, and research gaps. Based on these findings, we then designed new evolvability assurance techniques and used additional empirical studies to demonstrate and evaluate their effectiveness. Applied empirical methods were for example surveys, interviews, (systematic) literature studies, or controlled experiments.
Contributions: In addition to our analyses of industry practice and scientific literature, we provide contributions in three different areas. With respect to metric-based evolvability evaluation, we identified a set of structural metrics specifically designed for service orientation and analyzed their value for microservices. Subsequently, we designed tool-supported approaches to automatically gather a subset of these metrics from machine-readable RESTful API descriptions and via a distributed tracing mechanism at runtime. In the area of scenario-based evaluation, we developed a tool-supported lightweight method to analyze the evolvability of a service-based system based on hypothetical evolution scenarios. We evaluated the method with a survey (N=40) as well as hands-on interviews (N=7) and improved it further based on the findings. Lastly, with respect to patterns and antipatterns, we collected a large set of service-based patterns and analyzed their applicability for microservices. From this initial catalogue, we synthesized a set of candidate evolvability patterns via the proxy of architectural modifiability tactics. The impact of four of these patterns on evolvability was then empirically tested in a controlled experiment (N=69) and with a metric-based analysis. The results suggest that the additional structural complexity introduced by the patterns as well as developers' pattern knowledge have an influence on their effectiveness. As a last contribution, we created a holistic collection of service-based antipatterns for both SOA and microservices and published it in a collaborative repository.
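As a rough illustration of gathering a structural metric from a machine-readable RESTful API description, the following sketch counts the HTTP operations a service exposes in an OpenAPI-like structure. Both the metric (a basic size/granularity indicator) and the example description are simplified assumptions, not the thesis' actual metric suite or tooling.

```python
# HTTP methods that mark an operation in an OpenAPI-style path item
HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def count_operations(api_description):
    """Count the exposed HTTP operations over all paths of one
    service's machine-readable API description (dict form)."""
    ops = 0
    for path_item in api_description.get("paths", {}).values():
        ops += sum(1 for method in path_item if method in HTTP_METHODS)
    return ops

# Hypothetical minimal description of one microservice's API
orders_api = {
    "paths": {
        "/orders": {"get": {}, "post": {}},
        "/orders/{id}": {"get": {}, "delete": {}},
    }
}

operations = count_operations(orders_api)  # 4 operations in this example
```

Collected across all services of a system, such per-service counts can flag unusually coarse-grained services as candidate "hot spots" for closer evolvability inspection.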
Conclusion: Our contributions provide first foundations for a holistic view on the evolvability assurance of microservices and address several perspectives. Metric- and scenario-based evaluation as well as service-based antipatterns can be used to identify "hot spots" while service-based patterns can remediate them and provide means for systematic evolvability construction. All in all, researchers and practitioners in the field of microservices can use our artifacts to analyze and improve the evolvability of their systems as well as to gain a conceptual understanding of service-based evolvability assurance.