Intralogistics operations in automotive OEMs increasingly confront problems of overcomplexity caused by a customer-centred production that requires customisation and, thus, high product variability, short-notice changes in orders and the handling of an overwhelming number of parts. To alleviate the pressure on intralogistics without sacrificing performance objectives, the speed and flexibility of logistical operations have to be increased. One approach to this is to utilise three-dimensional space through drone technology. This doctoral thesis aims to establish a framework for implementing aerial drones in automotive OEM logistics operations.
To date, there is no research on implementing drones in automotive OEM logistics operations. To contribute to filling this gap, this thesis develops a framework for Drone Implementation in Automotive Logistics Operations (DIALOOP) that allows for a close interaction between the strategic and the operative level and can lead automotive companies through a decision and selection process regarding drone technology.
A preliminary version of the framework was developed on a theoretical basis and was then revised using qualitative-empirical data from semi-structured interviews with two groups of experts, i.e. drone experts and automotive experts. The drone expert interviews contributed a current overview of drone capabilities. The automotive expert interviews were used to identify intralogistics operations in which drones can be implemented, along with the performance measures that can be improved by drone usage.
Furthermore, all interviews explored developments and changes with a foreseeable influence on drone implementation.
The revised framework was then validated using participant validation interviews with automotive experts.
The finalised framework defines a step-by-step process leading from strategic decisions and considerations, through the identification of logistics processes suitable for drone implementation and the relevant performance measures, to the choice of appropriate drone types based on a drone classification developed in this thesis specifically for an automotive context.
The extracellular matrix (ECM) is the non-cellular part of tissues and represents the natural environment of the cells. In addition to structural stability, it provides various physical, chemical, and mechanical cues that strongly regulate and influence cellular behavior and are required for tissue morphogenesis, differentiation, and homeostasis. Due to its promising characteristics, ECM is used in a wide range of tissue engineering and regenerative medicine approaches as a biomaterial for coatings and scaffolds. To date, there are two sources of ECM material. First, native ECM can be generated by removing the residing cells of a tissue or organ (decellularized ECM; dECM). Second, cell-derived ECM (cdECM) can be generated by and isolated from in vitro cultured cells. Although both types of ECM have been used intensively in tissue engineering and regenerative medicine approaches, studies directly characterizing and comparing them are rare. Hence, in the first part of this thesis, dECM from adipose tissue and cdECM from adipose-derived stem cells (ASCs) and from adipogenically differentiated ASCs were characterized with regard to their macromolecular composition, structural features, and biological purity. The dECM was found to exhibit higher levels of collagens and lower levels of sulfated glycosaminoglycans compared to cdECMs. Structural characterization revealed an immature state of collagen fibers in cdECM samples. The results revealed differences between the two ECMs that can substantially impact cellular behavior and, subsequently, experimental outcomes, and should therefore be considered when choosing a biomaterial for a specific application. The establishment of a functional vascular system in tissue constructs to realize an adequate nutrient supply remains challenging. In the second part, the supporting effect of cdECM on the self-assembled formation of prevascular-like structures by microvascular endothelial cells (mvECs) was investigated.
It could be observed that cdECM, especially adipogenically differentiated cdECM, enhanced the formation of prevascular-like structures. An increased concentration of proangiogenic factors was found in cdECM substrates. The demonstration of cdECM's capability to induce the spontaneous formation of prevascular-like structures by mvECs highlights cdECM as a promising biomaterial for adipose tissue engineering. Depending on the purpose of the ECM material, chemical modification might be necessary. In the third and last part, the chemical functionalization of cdECM with dienophiles (terminal alkenes, cyclopropene) by metabolic glycoengineering (MGE) was demonstrated. MGE allows the chemical functionalization of cdECM via the natural metabolism of the cells and without affecting the chemical integrity of the cdECM. The incorporated dienophile chemical groups can be specifically addressed via a catalyst-free, cell-friendly inverse electron-demand Diels-Alder reaction. Using this system, the successful modification of cdECM from ASCs with an active enzyme could be shown. The possibility to modify cdECM via a cell-friendly chemical reaction opens up a wide range of possibilities to improve cdECM depending on the purpose of the material. Altogether, this thesis highlighted the differences between adipose dECM and cdECM from ASCs and demonstrated cdECM as a promising alternative to native dECM for application in tissue engineering and regenerative medicine approaches.
Saving energy and protecting the environment have become fundamental concerns for society and politics, which is why several laws were enacted to increase energy efficiency. Furthermore, the growing number of vehicles and drivers has led to more accidents and fatalities on the roads, making road safety an important factor as well. Due to the increasing importance of energy efficiency and safety, car manufacturers started to optimise vehicles in both respects. However, energy efficiency and road safety can also be increased by adapting driving behaviour to the given driving situation. This thesis presents a concept of an adaptive, rule-based driving system that tries to educate the driver in energy-efficient and safe driving by showing recommendations at the right time. Unlike existing driving systems, the presented system considers energy-efficiency- and safety-relevant driving rules, the individual driving behaviour, and the driver's condition. This makes it possible to avoid distracting the driver and to increase the acceptance of the driving system while improving driving behaviour in terms of energy efficiency and safety. A prototype of the driving system was developed and evaluated on a driving simulator with 42 test drivers, who tested the effect of the system on driving behaviour and the effect of its adaptiveness on user acceptance. The evaluation showed that energy efficiency and safety increased when the driving system was used, and that user acceptance increased when the adaptive feature was turned on. High user acceptance allows steady usage of the driving system and, thus, a steady improvement of driving behaviour in terms of energy efficiency and safety.
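The rule-based recommendation logic described above can be sketched as a simple rule check over the current driving state. The concrete rules, thresholds, and state fields below are hypothetical illustrations for the general idea, not the rule set used in the thesis.

```python
from dataclasses import dataclass

@dataclass
class DrivingState:
    speed_kmh: float            # current vehicle speed
    speed_limit_kmh: float      # posted speed limit
    rpm: float                  # engine revolutions per minute
    distance_to_lead_m: float   # gap to the vehicle ahead

def check_rules(state: DrivingState) -> list[str]:
    """Evaluate illustrative energy-efficiency and safety rules.

    Returns the recommendations that apply to the given state; an
    adaptive system would additionally filter these by driver
    behaviour and condition before displaying them.
    """
    recommendations = []
    if state.speed_kmh > state.speed_limit_kmh:
        recommendations.append("Reduce speed to the posted limit")
    if state.rpm > 2500:
        recommendations.append("Shift up to save fuel")
    # Two-second rule, approximated at the current speed (m/s * 2 s)
    if state.distance_to_lead_m < (state.speed_kmh / 3.6) * 2:
        recommendations.append("Increase following distance")
    return recommendations
```

A real system would rank these by urgency and suppress recommendations the individual driver already follows, which is the adaptiveness the thesis evaluates.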
Data collected from internet applications are mainly stored in the form of transactions. All transactions of one user form a sequence, which reflects the user's behaviour on the site. Nowadays, it is important to be able to classify this behaviour in real time for various reasons: e.g. to increase the conversion rate of customers while they are in the store or to prevent fraudulent transactions before they are placed. However, this is difficult due to the complex structure of the data sequences (i.e. a mix of categorical and continuous data types, constant data updates) and the large amounts of data that are stored. Therefore, this thesis studies the classification of complex data sequences. It surveys the fields of time series analysis (temporal data mining), sequence data mining, and standard classification algorithms. It turns out that these algorithms are either difficult to apply to data sequences or do not deliver a classification: time series methods need a predefined model and are not able to handle complex data types; sequence classification algorithms such as the apriori algorithm family are not able to utilize the time aspect of the data. The strengths and weaknesses of the candidate algorithms are identified and used to build a new approach to the classification of complex data sequences. The problem is solved by a two-step process. First, feature construction is used to create and discover suitable features in a training phase. Then, the blueprints of the discovered features are used in a formula during the classification phase to perform the real-time classification. The features are constructed by combining and aggregating the original data over the span of the sequence, including the elapsed time by using a calculated time axis. Additionally, a combination of features and feature selection are used to simplify complex data types. This makes it possible to capture behavioural patterns that emerge in the course of time.
This newly proposed approach combines techniques from several research fields. Part of the algorithm originates from the field of feature construction and is used to reveal behaviour over time and express this behaviour in the form of features. A combination of the features is used to highlight relations between them. The blueprints of these features can then be used to achieve classification in real time on an incoming data stream. An automated framework is presented that allows the features to adapt iteratively to a change in underlying patterns in the data stream. This core feature of the presented work is achieved by separating the feature application step from the computationally costly feature construction step and by iteratively restarting the feature construction step on the new incoming data. The algorithm and the corresponding models are described in detail and applied to three case studies (customer churn prediction, bot detection in computer games, credit card fraud detection). The case studies show that the proposed algorithm is able to find distinctive information in data sequences and use it effectively for classification tasks. The promising results indicate that the suggested approach can be applied to a wide range of other application areas that incorporate data sequences.
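The first of the two steps, constructing aggregate features over a transaction sequence and its elapsed time, can be illustrated with a minimal sketch; the specific aggregations and field names below are assumptions for illustration, not the thesis's actual feature blueprints.

```python
import statistics

def build_features(sequence):
    """Aggregate a transaction sequence into a fixed-length feature vector.

    `sequence` is a list of (elapsed_seconds, amount) tuples for one user.
    Count, mean amount, time span, and transaction rate are simplified
    stand-ins for the discovered feature blueprints; at classification
    time the same aggregations would be reapplied to incoming streams.
    """
    times = [t for t, _ in sequence]
    amounts = [a for _, a in sequence]
    span = max(times) - min(times) if len(times) > 1 else 0.0
    return {
        "n_tx": len(sequence),
        "mean_amount": statistics.fmean(amounts),
        "time_span": span,
        # rate over the calculated time axis; falls back to the raw
        # count when the sequence spans no measurable time
        "tx_rate": len(sequence) / span if span > 0 else float(len(sequence)),
    }
```

Because only the cheap aggregation runs per incoming event, the expensive discovery of which aggregations to use can be restarted periodically in the background, which is the separation the framework relies on.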
Corporate entrepreneurship in the public sector: exploring the peculiarities of public enterprises
(2021)
Entrepreneurship is predominantly treated as a private-sector phenomenon and consequently its increasing importance in the public sector goes largely unremarked. This impedes the development of entrepreneurship into a research field capable of spanning multiple sectors. Accordingly, recent research calls for the study of corporate entrepreneurship (CE) as it manifests in the public sector, where it can be labeled public entrepreneurship (PE). This dissertation considers government an essential entrepreneurial actor and is led by the central research question: What are the peculiarities of the public sector and how do they impact public enterprises’ entrepreneurial orientation (EO)?
Accordingly, this dissertation includes three studies focusing on public enterprises. Two of the studies set the scope of this thesis by investigating a specific type of organization in a specific context—German majority-government-owned energy suppliers. These enterprises operate in a liberalized market experiencing environmental uncertainties like competitiveness and business transformation.
The aims and results of the studies included in this dissertation can be summarized as follows: The systematic literature review illuminates the stimuli of and barriers to entrepreneurial activities in public enterprises and the potential outcomes of such activities discussed so far. The review reveals that research on EO has tended to focus on the private sector and consequently that barriers to and outcomes of entrepreneurial activities in the public sector remain under-researched. Building on these findings, the qualitative study focuses on the interrelated barriers affecting entrepreneurship in public enterprises and the outcomes of entrepreneurial activities being inhibited. The study adopts an explorative comparative causal mapping approach to address the above-mentioned research goal and the lack of clarity around how barriers identified in the public sphere are interrelated. Furthermore, the study bases its investigation on the different business segments of sales (competitive market) and the distribution grid (natural monopoly) to account for recent calls for fine-grained research on PE. Results were compared with prior findings in the public and private sector. That comparison indicates that the barriers revealed align with aspects discussed in prior research findings relating to both sectors. Examples include barriers associated with the external environment such as legal constraints and barriers originating from within the organization such as employee behavior linked to a value system that hampers entrepreneurial action. However, the most important finding is that a public enterprise’s supervisory board can hinder its progress, a finding running counter to those of previous private-sector research and one that underscores the widespread prejudice that the involvement of a public shareholder and its nominated board of directors has a negative effect on EO. 
The third study is quantitative (data collection via a questionnaire) and builds on both its predecessors to examine the little understood topic of board behavior and public enterprises’ social orientation as predictors of EO. The study’s results indicate that social orientation represses EO, whereas board strategy control (BSC) does not seem to predict EO. Regarding BSC, we find that the local government owners in our sample are less involved in BSC. The third study also examines board networking and finds its relationship with EO depends on the ownership structure of the public-sector organization. An important finding is that minority shareholders, such as majority privately-owned enterprises and hub firms, repress EO when engaging in board networking.
In summary, this doctoral thesis contributes to the under-researched topic of CE in the public sector. It investigates the peculiarities of this sector by focusing, in the quantitative study, on the supervisory board and socially oriented activities and their impact on the enterprise’s EO. The thesis addresses institutional questions regarding ownership, and the last study in particular contributes to expanding resource dependence theory and invites a nuanced perspective: the original perspective suggests that interorganizational arrangements like interfirm network ties and equity holdings reduce external resource dependency and consequently improve firm performance. The findings of this thesis show that resource delivery can also have contrary effects, extending the understanding of interorganizational action with important implications for practice.
Within the scope of the present cumulative doctoral thesis, six scientific papers were published which illustrate that modern reaction-model-free (isoconversional) kinetic analysis (ICKA) methods represent a universal and effective tool for the controlled processing of thermosetting materials. To demonstrate the universal applicability of ICKA methods, the thermal cure of different thermosetting materials covering a very broad range of chemical compositions (melamine-formaldehyde resins, epoxy resins, polyester-epoxy resins, and acrylate/epoxy resins) was analyzed and mathematically modelled. Some of the materials were based on renewable resources (an epoxy resin was made from hempseed oil; linseed oil was modified into an acrylate/epoxy resin). With the aid of ICKA methods, not only single-step but also complex multi-step reactions were modelled precisely. The analyzed thermosetting materials were combined with wood, wood-based products, paper, and plant fibers and processed into various final products. Some of the thermosetting materials were applied as coatings (in the form of impregnated décor papers, or powder and wet coatings, respectively) on wood substrates, and the epoxy resin from hempseed oil was mixed with plant fibers and processed into bio-based composites for lightweight applications. Mechanical, thermal, and surface properties of the final products were determined. The activation energy as a function of cure conversion derived from ICKA methods was utilized to predict accurately the thermal curing over the course of time for arbitrary cure conditions. Furthermore, the cure models were used to establish correlations between the cross-linking during processing and the properties of the final products. Thereby it was possible to derive the process times and temperatures that guarantee optimal cross-linking as well as optimal product properties.
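The core of an isoconversional analysis, estimating the activation energy at a fixed conversion level from rate measurements at several temperatures, can be sketched with the textbook Friedman method; this is a generic formulation under the assumption ln(dα/dt) = const − E/(RT) at fixed conversion, not the specific tooling used in the thesis.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def friedman_activation_energy(temps_K, rates):
    """Friedman isoconversional estimate of the activation energy.

    temps_K: temperatures at which a fixed conversion level is reached
             in several experiments (e.g. different heating rates)
    rates:   measured conversion rates d(alpha)/dt at those points

    Fits ln(rate) = const - E/(R*T) by least squares; the slope of
    ln(rate) versus 1/T equals -E/R, so E = -slope * R (in J/mol).
    """
    x = [1.0 / T for T in temps_K]
    y = [math.log(r) for r in rates]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R
```

Repeating this fit over a grid of conversion levels yields the activation energy as a function of conversion, which is the quantity the abstract describes as the basis for predicting cure behaviour under arbitrary conditions.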
Human recognition is an important part of perception systems, such as those used in autonomous vehicles or robots. These systems often use deep neural networks for this purpose, which rely on large amounts of data that ideally cover various situations, movements, visual appearances, and interactions. However, obtaining such data is typically complex and expensive. In addition to raw data, labels are required to create training data for supervised learning. Thus, manual annotation of bounding boxes, keypoints, orientations, or performed actions is frequently necessary. This work addresses whether the laborious acquisition and creation of data can be simplified through targeted simulation. If data are generated in a simulation, information such as positions, dimensions, orientations, surfaces, and occlusions is already known, and appropriate labels can be generated automatically. A key question is whether deep neural networks trained with simulated data can be applied to real data. This work explores the use of simulated training data using examples from the field of pedestrian detection for autonomous vehicles. On the one hand, it is shown how existing systems can be improved by targeted retraining with simulation data, for example to better recognize corner cases. On the other hand, the work focuses on the generation of data that rarely or never occur in real standard datasets. It is demonstrated how training data containing finely graded action labels can be generated by targeted acquisition and combination of motion data and 3D models, enabling the recognition of even complex pedestrian situations. Through the diverse annotation data that simulations provide, it becomes possible to train deep neural networks for a wide variety of tasks with one dataset.
In this work, such simulated data are used to train a novel deep multitask network that brings together diverse, previously mostly independently considered but related tasks such as 2D and 3D human pose recognition and body and orientation estimation.
Over the last decades, our society has changed tremendously toward using information technology in almost every daily routine, entailing an enormous growth of data collected day by day in Web, IoT, and AI applications.
At the same time, magneto-mechanical HDDs are being replaced by semiconductor storage such as SSDs, equipped with modern non-volatile memories like Flash, which yield significantly lower access latencies and higher levels of parallelism. Likewise, the execution speed of processing units has increased considerably, as today's server architectures comprise up to multiple hundreds of independently working CPU cores along with a variety of specialized computing co-processors such as GPUs or FPGAs.
However, the burden of moving the continuously growing data to the best fitting processing unit is inherently linked to today’s computer architecture that is based on the data-to-code paradigm. In the light of Amdahl's Law, this leads to the conclusion that even with today's powerful processing units, the speedup of systems is limited since the fraction of parallel work is largely I/O-bound.
Therefore, throughout this cumulative dissertation, we investigate the paradigm shift toward code-to-data, known as Near-Data Processing (NDP), which relieves the contention on the I/O bus by offloading processing to intelligent computational storage devices, where the data is originally located.
Firstly, we identified Native Storage Management as the essential foundation for NDP due to its direct control of physical storage management within the database. Upon this, the interface is extended to propagate address mapping information and to invoke NDP functionality on the storage device. As the former can become very large, we introduce Physical Page Pointers as one novel NDP abstraction for self-contained immutable database objects.
Secondly, the on-device navigation and interpretation of data are elaborated. To this end, we introduce cross-layer Parsers and Accessors as another NDP abstraction that can be executed on the heterogeneous processing capabilities of modern computational storage devices. Thereby, the compute placement and resource configuration per NDP request is identified as a major performance criterion. Our experimental evaluation shows an improvement in execution duration of 1.4x to 2.7x compared to traditional systems. Moreover, we propose a framework for the automatic generation of Parsers and Accessors on FPGAs to ease their application in NDP.
Thirdly, we investigate the interplay of NDP and modern workload characteristics like HTAP. We present different offloading models and focus on an intervention-free execution. By propagating the Shared State with the latest modifications of the database to the computational storage device, it is able to process data with transactional guarantees. Thus, we extend the design space of HTAP with NDP by providing a solution that optimizes for performance isolation, data freshness, and the reduction of data transfers. In contrast to traditional systems, we observe no significant drop in performance when an OLAP query is invoked, but a steady and 30% higher throughput.
Lastly, in-situ result-set management and consumption as well as NDP pipelines are proposed to achieve flexibility in processing data on heterogeneous hardware. As these produce final and intermediary results, we investigate their management and identify that on-device materialization comes at a low cost but enables novel consumption modes and reuse semantics. Thereby, we achieve significant performance improvements of up to 400x by reusing once-materialized results multiple times.
Intracranial brain tumors are among the ten most common malignant cancers and account for substantial morbidity and mortality. The largest histological category of primary brain tumors is the gliomas, which exhibit a highly heterogeneous appearance and can be challenging to discern radiologically from other brain lesions. Neurosurgery is usually the standard of care for newly diagnosed glioma patients and may be followed by radiation therapy and adjuvant temozolomide chemotherapy.
However, brain tumor surgery faces fundamental challenges in achieving maximal tumor removal while avoiding postoperative neurologic deficits. Two of these neurosurgical challenges are as follows. First, manual glioma delineation, including its sub-regions, is considered difficult due to its infiltrative nature and the presence of heterogeneous contrast enhancement. Second, the brain deforms during surgery, a phenomenon called “brain shift,” in response to surgical manipulation, swelling due to osmotic drugs, and anesthesia, which limits the utility of pre-operative imaging data for guiding the surgery.
Image-guided systems provide physicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided toolkits are mainly computer-based systems, employing computer vision methods to facilitate the performance of peri-operative surgical procedures. However, surgeons still need to mentally fuse the surgical plan from pre-operative images with real-time information while manipulating the surgical instruments inside the body and monitoring target delivery. Hence, the need for image guidance during neurosurgical procedures has always been a significant concern for physicians.
This research aims to develop a novel peri-operative image-guided neurosurgery (IGN) system, namely DeepIGN, that can achieve the expected outcomes of brain tumor surgery, thus maximizing the overall survival rate and minimizing post-operative neurologic morbidity. In the scope of this thesis, novel methods are first proposed for the core parts of the DeepIGN system of brain tumor segmentation in MRI and multimodal pre-operative MRI to the intra-operative US (iUS) image registration using the recent developments in deep learning. Then, the output prediction of the employed deep learning networks is further interpreted and examined by providing human-understandable explainable maps. Finally, open-source packages have been developed and integrated into widely endorsed software, which is responsible for integrating information from tracking systems, image visualization, image fusion, and displaying real-time updates of the instruments relative to the patient domain.
The components of DeepIGN have been validated in the laboratory and evaluated in the simulated operating room. For the segmentation module, DeepSeg, a generic decoupled deep learning framework for automatic glioma delineation in brain MRI, achieved a Dice coefficient of 0.84 for the gross tumor volume. Performance improvements were observed when employing advancements in deep learning approaches such as 3D convolutions over all slices, region-based training, on-the-fly data augmentation techniques, and ensemble methods.
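The Dice coefficient used as the segmentation accuracy measure is a standard overlap score, Dice = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect overlap). A minimal reference implementation on binary masks given as sets of voxel indices:

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks.

    `pred` and `truth` are iterables of voxel indices marked positive
    (e.g. tumor) in the predicted and ground-truth segmentations.
    """
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))
```

On dense volumes the same score is usually computed on boolean arrays rather than index sets, but the formula is identical.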
To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering pre-operative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments have been conducted on two multi-location databases: the BITE and the RESECT. Two expert neurosurgeons conducted additional qualitative validation of this study through overlaying MRI-iUS pairs before and after the deformable registration. Experimental findings show that the proposed iRegNet is fast and achieves state-of-the-art accuracies. Furthermore, the proposed iRegNet can deliver competitive results, even in the case of non-trained images, as proof of its generality and can therefore be valuable in intra-operative neurosurgical guidance.
For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in applying AI techniques and deep neural networks. The NeuroXAI includes seven explanation methods providing visualization maps to help make deep learning models transparent. Experimental findings showed that the proposed XAI framework achieves good performance in extracting both local and global contexts in addition to generating explainable saliency maps to help understand the prediction of the deep network. Further, visualization maps are obtained to realize the flow of information in the internal layers of the encoder-decoder network and understand the contribution of MRI modalities in the final prediction. The explainability process could provide medical professionals with additional information about tumor segmentation results and therefore aid in understanding how the deep learning model is capable of processing MRI data successfully.
Furthermore, an interactive neurosurgical display has been developed for interventional guidance, which supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multi-modality DeepIGN system were established with the ability to incorporate: (1) pre-operative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The system's accuracy was tested using a custom agar phantom model, and its use in a pre-clinical operating room was simulated. The results of the clinical simulation confirmed that system assembly was straightforward, achievable in a clinically acceptable time of 15 min, and performed with a clinically acceptable level of accuracy.
In this thesis, a multimodality IGN system has been developed using the recent advances in deep learning to accurately guide neurosurgeons, incorporating pre- and intra-operative patient image data and interventional devices into the surgical procedure. DeepIGN is developed as open-source research software to accelerate research in the field, enable ease of sharing between multiple research groups, and continuous developments by the community. The experimental results hold great promise for applying deep learning models to assist interventional procedures - a crucial step towards improving the surgical treatment of brain tumors and the corresponding long-term post-operative outcomes.
Supply chains have evolved into dynamic, interconnected supply networks, which increases the complexity of achieving end-to-end traceability of object flows and their experienced events. With its capability to ensure a secure, transparent, and immutable environment without relying on a trusted third party, the emerging blockchain technology shows strong potential to enable end-to-end traceability in such complex multitiered supply networks. However, as the dissertation’s systematic literature review reveals, the currently available blockchain-based traceability solutions lack the ability to map object-related supply chain events holistically, which involves mapping objects’ creation and deletion, aggregation and disaggregation, transformation, and transaction. Therefore, this dissertation proposes a novel blockchain-based traceability architecture that integrates governance and token concepts to overcome the limitations of existing architectures. While the governance concept manages the supply chain structure on an application level, the token concept includes all functions to conduct object-related supply chain events. For this to be possible, this dissertation’s token concept introduces token ‘blueprints’, which allow clients to group tokens into different types, where tokens of the same type are non-fungible. Furthermore, blueprints can include minting conditions, which are, for example, necessary when mapping assembly or delivery processes. In addition, the token concept contains logic for reflecting all conducted object-related events in an integrated token history. This ultimately leads to end-to-end traceability of tokens and their physical or abstract representatives on the blockchain. For validation purposes, this dissertation implements the architecture’s components and their update and request relationships in code and proves its applicability based on the Ethereum blockchain. 
Finally, this dissertation provides a scenario-based evaluation based on two industrial case studies from a manufacturing and logistics perspective to validate the architecture’s capabilities when applied in real-world industrial settings. The proposed blockchain-based traceability architecture covers all object-related supply chain events derived from the two industrial case studies and thereby demonstrates its general-purpose capability for end-to-end traceability of object flows.
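The token-blueprint idea described above, typed tokens whose minting conditions consume input tokens and whose history records object-related events, can be sketched off-chain in a few lines. The class names, the minting rule, and the event encoding below are illustrative assumptions, not the dissertation's actual contract interface.

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """A token type; tokens of the same blueprint are non-fungible individuals."""
    name: str
    # Hypothetical minting condition: blueprint names of input tokens that
    # must be consumed (e.g. parts assembled into a product) before minting.
    required_inputs: list = field(default_factory=list)

@dataclass
class Token:
    token_id: int
    blueprint: str
    history: list = field(default_factory=list)  # recorded supply chain events

class Ledger:
    """Minimal in-memory stand-in for the on-chain token registry."""

    def __init__(self):
        self.tokens = {}
        self._next_id = 0

    def mint(self, blueprint: Blueprint, consumed_ids=()):
        consumed = [self.tokens[i] for i in consumed_ids]
        # Enforce the minting condition: the consumed tokens' types must
        # match the blueprint's required input types exactly.
        if sorted(t.blueprint for t in consumed) != sorted(blueprint.required_inputs):
            raise ValueError("minting condition not met")
        token = Token(self._next_id, blueprint.name)
        token.history.append(("created", [t.token_id for t in consumed]))
        self._next_id += 1
        self.tokens[token.token_id] = token
        return token
```

In the actual architecture, aggregation, transformation, and transaction events would append further entries to each token's history, yielding the integrated, end-to-end traceable event record on chain.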