In modern collaborative production environments, where industrial robots and humans are supposed to work hand in hand, it is mandatory to observe the robot's workspace at all times. Such observation is even more crucial when the robot's main position is also dynamic, e.g. because the system is mounted on a movable platform. As current solutions, such as physically secured areas in which a robot can perform actions potentially dangerous for humans, become unfeasible in such scenarios, novel, more dynamic, and situation-aware safety solutions need to be developed and deployed.
This thesis mainly contributes to the bigger picture of such a collaborative scenario by presenting a data-driven, convolutional neural network-based approach to estimate the two-dimensional kinematic-chain configuration of industrial robot arms within raw camera images. It also provides the information needed to generate and organize the mandatory data basis and presents the frameworks that were used to realize all involved subsystems. The robot arm's extracted kinematic chain can also be used to estimate the extrinsic camera parameters relative to the robot's three-dimensional origin. Further, a tracking system based on a two-dimensional kinematic-chain descriptor is presented to allow for the accumulation of a proper movement history, which enables the prediction of future target positions within the given image plane. The combination of the extracted robot pose with a simultaneous human pose estimation system delivers a consistent data flow that can be used in higher-level applications.
This thesis also provides a detailed evaluation of all involved subsystems and a broad overview of their particular performance, based on newly generated, semi-automatically annotated, real datasets.
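The two-dimensional kinematic-chain representation mentioned above can be illustrated with a minimal planar forward-kinematics sketch. The function name, link lengths, and joint angles below are hypothetical illustrations, not taken from the thesis:

```python
import math

def forward_kinematics_2d(link_lengths, joint_angles, origin=(0.0, 0.0)):
    """Compute 2D joint positions of a planar kinematic chain.

    Each joint angle is relative to the previous link; the returned
    list contains the base position followed by every joint position.
    """
    x, y = origin
    theta = 0.0
    points = [(x, y)]
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

# A two-link chain, both links 1.0 long: first joint rotated 90 degrees,
# second joint rotated back by -90 degrees.
pts = forward_kinematics_2d([1.0, 1.0], [math.pi / 2, -math.pi / 2])
```

A pose-estimation network as described above would predict the joint keypoints directly from the image; this sketch only shows the geometric model such a 2D kinematic-chain descriptor encodes.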
The sound of brands
(2019)
The aim of this research paper is to examine and conceptualise audio branding. Audio branding is an important part of overall brand management and corporate identity. Strong brands ease the choice for customers and convey values and a certain quality promise. Branding is therefore of vital importance. It needs to be acknowledged that only 0.004% of all external stimuli reach the human consciousness. Audio branding is thus a way to further strengthen overall brand awareness, which leads to an emotional connection with a brand.
This study strives to determine the characteristics of audio branding and to analyse the corporate audio branding of Audi. The result of this research study is the suggestion of the use of audio branding in a way that fits the overall brand picture. Otherwise, the brand communication is inconsistent, and this could lead to a misunderstanding of the brand values for customers. The analysis of the Audi corporate sound design might be beneficial for practitioners. The overall evaluation of the concept of audio branding contributes to the existing body of literature in branding.
This study analyses the impact of Basel III on the fair pricing of bank guarantee facilities. Guarantees are an important risk mitigation instrument between exporters and importers in international trade and regularly a prerequisite for cross-border sales contracts to be closed. Basel III, which shall be introduced from 2013 onwards, is a new regulation stipulating higher capital requirements for banks compared to its predecessor Basel II. It will therefore have an impact on the pricing of guarantee facilities which banks provide to exporting companies, making it also a crucial regulation for the overall cost of exportation. The study compares those contents of Basel III and Basel II which are particularly relevant for guarantees in order to identify and crystallize pricing-relevant changes in the regulations and their respective impact potential. The Basel frameworks are analyzed part by part and reviewed in terms of relevance for guarantees. In cases of ambiguity, the analysis is verified by complementary expert interviews. References and examples mainly focus on the German banking system, but the basic conclusions can be generalized to those countries adopting Basel III. As a result, a case study expresses the quantitative outcomes of different scenarios and the impact of the different price-determining factors on the overall fair pricing of bank guarantee facilities.
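The pricing mechanics at stake can be sketched with a toy calculation: under the Basel rules, a guarantee ties up regulatory capital roughly proportional to exposure, credit conversion factor, risk weight, and required capital ratio, and the fee must earn the bank's target return on that capital. The helper function and all figures below are illustrative assumptions, not values from the study:

```python
def guarantee_fee(exposure, ccf, risk_weight, capital_ratio, roe_target,
                  expected_loss_rate, operating_cost_rate):
    """Toy fair-pricing sketch for a bank guarantee facility.

    The bank holds capital = exposure * CCF * risk weight * capital
    ratio; the annual fee must cover the target return on that capital
    plus expected loss and operating costs (all rates per annum).
    """
    capital = exposure * ccf * risk_weight * capital_ratio
    capital_cost = capital * roe_target
    fee = capital_cost + exposure * (expected_loss_rate + operating_cost_rate)
    return fee

# A higher capital ratio (illustratively 10.5% vs 8%) raises the fee directly:
fee_basel2 = guarantee_fee(1_000_000, 0.5, 1.0, 0.08, 0.12, 0.002, 0.001)
fee_basel3 = guarantee_fee(1_000_000, 0.5, 1.0, 0.105, 0.12, 0.002, 0.001)
```

The actual Basel capital charge depends on the approach (standardized vs. IRB) and counterparty; this sketch only shows why a higher capital ratio feeds through to the guarantee price.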
In a recent publication, Novy-Marx (2013) finds evidence that the variable gross profitability has a strong statistical influence on the common variation of stock returns. He also points out that there is common variation in stock returns related to firm profitability that is not captured by the three-factor model of Fama and French (1993). Thus, this thesis augments the three-factor model by the factor gross profitability and examines whether a profitability-based four-factor model is able to better explain monthly portfolio excess returns on the German stock market compared to the three-factor model of Fama and French (1993) and the Capital Asset Pricing Model (CAPM). Based on monthly stock returns of the CDAX over the period July 2008 to June 2014, this thesis documents four main findings. First, a significant positive market risk premium and a significant positive value premium can be identified. No evidence is found for a size or a profitability effect. Second, all included factors have a strong significant effect on monthly portfolio excess returns. Third, the four-factor model clearly outperforms both the three-factor model of Fama and French (1993) and the CAPM in capturing the common variation in monthly portfolio excess returns. The CAPM performs worst. Finally, the results indicate that the three-factor model of Fama and French (1993) is somewhat better in explaining the cross-section of portfolio excess returns than the four-factor model. Again, the CAPM performs worst. Nevertheless, the four-factor model is considered to be an improvement over the three-factor model of Fama and French (1993) and the CAPM in determining stock returns on the German stock market.
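The augmented time-series regression described above can be written out explicitly. The profitability factor is labeled here as PMU ("profitable minus unprofitable"), following Novy-Marx's convention; the exact label used in the thesis may differ:

```latex
R_{it} - R_{ft} = \alpha_i
  + \beta_i \,(R_{mt} - R_{ft})
  + s_i \, SMB_t
  + h_i \, HML_t
  + p_i \, PMU_t
  + \varepsilon_{it}
```

Dropping the $PMU_t$ term recovers the Fama-French three-factor model, and dropping $SMB_t$ and $HML_t$ as well recovers the CAPM, which is what makes the nested model comparison in the thesis possible.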
The targeted design of monodisperse, mesoporous silica microspheres (MPSMs) as HPLC separation phases is still a challenge. MPSMs can be generated via a multi-step template-assisted method. However, this method and the factors affecting the individual process steps and the resulting material properties are scarcely understood, and specific control of the complex multi-step process has hardly been discussed. In this work, the key synthesis steps were systematically investigated by means of statistical Design of Experiments (DoE). In particular, three steps were considered in detail: 1) the synthesis of porous poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) (p(GMA-co-EDMA)) particles, which, as template particles, determine the structure of the final MPSMs. In this context, functional models were generated, which allow the control of the template properties pore volume, pore size and specific surface area. 2) In the presence of amino-functionalized template particles, the sol-gel process was carried out under Stöber process conditions. The water to tetraethyl orthosilicate (TEOS) ratio, as well as the concentration of ammonia as a basic catalyst, were varied according to a face-centered central composite design (FCD). The incorporation of silica nanoparticles (SNPs) into the pore network of the porous polymers was investigated by scanning electron microscopy (SEM), by evaluation of the pore properties assessed via nitrogen sorption measurements, and by determination of the inorganic content via thermogravimetric analysis (TGA). Here, material properties such as the amount of attached silica can be specifically controlled in the resulting organic/silica hybrid material (hybrid beads, HBs). Furthermore, depending on the sol-gel conditions, three, potentially four, reaction regimes were identified, leading to different HBs.
These range from porous polymer particles coated with a thin protective silica layer, to interpenetrating networks of polymer and silica, to potential particles consisting of a porous polymer core coated with a silica shell. Also, the effects of the use of different precursors and solvents on silica incorporation were investigated. 3) To obtain MPSMs from the HBs, the organic polymer template was removed by calcination. The effects of sol-gel process conditions on the resulting MPSMs were evaluated and relationships between process conditions and material properties were shown in predictive models. Fully porous, spherical, monodisperse silica particles with sizes ranging from 0.5 µm to 7.8 µm and pore sizes from 3.5 nm to 72.4 nm can be prepared specifically. Subsequent to organo-functionalization, prepared MPSMs were applied as reversed-phase HPLC column materials. Here, the columns were successfully applied for the separation of proteins and amino acids. The separation performance of the materials depends largely on the property profile of the MPSMs, which is predetermined during the preparation of the HBs.
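The face-centered central composite design used for the sol-gel step can be sketched in a few lines: for k factors in coded units it combines the 2^k factorial corners, 2k axial points on the cube faces (star distance alpha = 1), and a center point. The helper below is a generic illustration, not the software used in this work:

```python
from itertools import product

def face_centered_ccd(n_factors):
    """Face-centered central composite design (alpha = 1) in coded units.

    Returns the 2^k factorial corners, 2k axial (face-center) points,
    and one center point; center-point replicates are left to the user.
    """
    corners = [list(p) for p in product((-1, 1), repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for level in (-1, 1):
            point = [0] * n_factors
            point[i] = level
            axial.append(point)
    center = [[0] * n_factors]
    return corners + axial + center

# Two factors, e.g. water/TEOS ratio and ammonia concentration (coded -1..1):
design = face_centered_ccd(2)  # 4 corners + 4 face centers + 1 center = 9 runs
```

Each coded run is then mapped back to real factor levels, and a quadratic response-surface model is fitted to the measured responses, which is what yields the predictive models mentioned above.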
In order to decouple economic growth from global material consumption, it is necessary to implement material efficiency strategies at the level of single enterprises and their supply chains, and to implement circular economy aspects. Manufacturing firms face multiple implementation challenges like cost limitations, competition, innovation and stakeholder pressure, and supplier and customer relationships, among others. An extended evaluation of triggers and barriers to improving material efficiency in manufacturing companies, along the supply chain and concerning circular economy considerations, is provided. This paper delivers an extended literature review and a critical discussion of the current situation and resulting challenges concerning material efficiency approaches in manufacturing supply chains. Finally, a conclusion and an outlook on further research directions are given.
Indicators of disruption potentials - analysis of the blockchain technology’s potential impact
(2019)
The goal of this paper was to answer the question whether blockchain has the potential to become a disruption according to Clayton Christensen’s disruption theory. Therefore, the theory and the five characteristics that define the process of disruption were outlined in the first part of the paper. That and the following explanation of the blockchain technology served as the basis for the analysis and evaluation in chapters four to seven. For the analysis, three applications of the DLT, namely payment methods, intermediaries, as well as data storage and transfer, were considered. The fulfillment of the five characteristics of disruption was assessed using an example for each of the three applications.
Additionally, the paper might serve as a basis for future research on the topic, once the technology develops further, since it is generally hard to tell whether the fourth and fifth characteristics are fulfilled by blockchain at this point. Therefore, the results of the paper also back criticism of Christensen’s theory regarding its usefulness for predictions.
This paper suggests that, in the financial services industry, too, the impact of blockchain will be significant. However, given the manifold services that are part of the industry, it cannot be generally concluded whether the DLT will disrupt the industry as a whole. For example, in services related to payment methods, blockchain is unlikely to follow a disruptive pattern, despite the recent hype surrounding blockchain-based cryptocurrencies. However, regarding data storage and transfer, the technology may well follow a disruptive pattern in the financial services industry, just as the application of blockchain solutions has been doing in the healthcare industry.
Ever since the 1980s, researchers in computer science and robotics have been working on making autonomous cars. Due to recent breakthroughs in research and development, such as the Bertha Benz Project [ZBS+14], the goal of fully autonomous vehicles seems closer than ever before. Yet a lot of questions remain unanswered. Especially now that the automotive industry moves towards autonomous systems in series production vehicles, the task of precise localization has to be solved with automotive-grade sensors while keeping memory and processing consumption at a minimum. This thesis investigates the Simultaneous Localization and Mapping (SLAM) problem for autonomous driving scenarios on a parking lot using low-cost automotive sensors. The main focus is hereby devoted to the RAdio Detection And Ranging (RADAR) sensor, which has not been widely analyzed in an autonomous driving scenario so far, even though such sensors are abundant in the automotive industry for applications such as Adaptive Cruise Control (ACC). Due to its high noise floor, the RADAR sensor has widely been disregarded in the Intelligent Transportation Systems and Robotics communities with regard to SLAM applications. However, in this thesis it is shown that the RADAR sensor proves to be an affordable, robust and precise sensor when its physical properties are modeled correctly. In this regard, a GraphSLAM-based framework is introduced, which extracts features from the RADAR sensor and generates an optimized map of the surroundings using the RADAR sensor alone. This framework is used to enable crowd-based localization, which is not limited to the RADAR sensor alone. By integrating an automotive Light Detection and Ranging (LiDAR) and a stereo camera sensor, a robust and precise localization system can be built that is suitable for autonomous driving even in complex parking lot scenarios. It is thereby shown that the RADAR sensor strongly contributes to obtaining good results in a sensor fusion setup.
These results were obtained on an extensive dataset recorded on a parking lot over the course of several months. It contains different weather conditions, different configurations of parked cars and a multitude of different trajectories to validate the approaches described in this thesis and to come to the conclusion that the RADAR sensor is a reliable sensor for series-production autonomous driving systems, both in a multi-sensor framework and as a single component for localization.
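The GraphSLAM idea underlying such a framework can be illustrated with a deliberately tiny one-dimensional pose graph: odometry constraints and a loop-closure constraint are combined into one least-squares problem over the poses. The toy solver below uses plain gradient descent and invented measurements; a real implementation would optimize 2D/3D poses with a sparse nonlinear solver:

```python
def optimize_pose_graph_1d(num_poses, constraints, iters=200, lr=0.1):
    """Least-squares optimization of a toy 1D pose graph.

    constraints: list of (i, j, z) meaning x[j] - x[i] should equal the
    measurement z. Pose 0 is anchored at the origin. Gradient descent on
    the sum of squared residuals distributes the loop-closure error.
    """
    x = [0.0] * num_poses
    for _ in range(iters):
        grad = [0.0] * num_poses
        for i, j, z in constraints:
            err = (x[j] - x[i]) - z
            grad[j] += err
            grad[i] -= err
        for k in range(1, num_poses):  # keep pose 0 fixed
            x[k] -= lr * grad[k]
    return x

# Odometry claims each step moves 1.0, but a loop closure measures
# pose 3 at only 2.7 from pose 0; the 0.3 discrepancy is spread out.
poses = optimize_pose_graph_1d(
    4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.7)])
```

The same structure scales up: RADAR feature matches and odometry each contribute residual terms, and the optimized graph yields both the trajectory and the map.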
Human recognition is an important part of perception systems, such as those used in autonomous vehicles or robots. These systems often use deep neural networks for this purpose, which rely on large amounts of data that ideally cover various situations, movements, visual appearances, and interactions. However, obtaining such data is typically complex and expensive. In addition to raw data, labels are required to create training data for supervised learning. Thus, manual annotation of bounding boxes, keypoints, orientations, or performed actions is frequently necessary. This work addresses whether the laborious acquisition and creation of data can be simplified through targeted simulation. If data are generated in a simulation, information such as positions, dimensions, orientations, surfaces, and occlusions is already known, and appropriate labels can be generated automatically. A key question is whether deep neural networks trained with simulated data can be applied to real data. This work explores the use of simulated training data using examples from the field of pedestrian detection for autonomous vehicles. On the one hand, it is shown how existing systems can be improved by targeted retraining with simulation data, for example to better recognize corner cases. On the other hand, the work focuses on the generation of data that occur rarely or not at all in real standard datasets. It is demonstrated how training data containing finely graded action labels can be generated by the targeted acquisition and combination of motion data and 3D models, enabling the recognition of even complex pedestrian situations. Through the diverse annotation data that simulations provide, it becomes possible to train deep neural networks for a wide variety of tasks with one dataset.
In this work, such simulated data is used to train a novel deep multitask network that brings together diverse, related tasks that were previously mostly considered independently, such as 2D and 3D human pose recognition as well as body and orientation estimation.
The intention of this paper is to show that the statistical approach to risk is not enough to explain the behavior of investors. It furthermore proposes ideas and alternative approaches on how to deal with risk. Psychological findings are of particular interest as they might enhance our understanding of risk perception and assessment. The chapter "From the normal distribution to fat tails" starts with the rejection of the normal distribution as a simplifying basis for risk and return. This rejection is supported by several empirical observations like clustering of volatility and fat tails. This leads to a two-step approach for modeling risk and return based on the distinction of conditional and unconditional changes. Conditional time series models (ARMA, ARCH, GARCH) and alternative distributions (Stable Paretian, Student's t, EVT) are presented as a way to improve the art of risk and return modeling beyond the normal distribution assumption. The chapter ends with the conclusion that each model is only a statistical approximation and never encompasses the unpredictability of black swans and the nature of human behavior in the financial markets. After having discussed the limitations of the purely statistical approach to risk and return, this paper goes beyond the standard theory of finance for two purposes. Firstly, behavioral finance provides some arguments for the limitation of statistics in assessing risk. Secondly, an alternative approach to risk perception is presented. This alternative is called Prospect Theory, a rather psychology-based approach using preferences to explain investors' actions by human behavior in decision making processes. Starting point is the utility function and the value function, followed by a description of the two phases: framing and evaluation. The value function is then clearly distinguished from the utility function by elaborating certain effects like reference points, loss aversion or the weighting function.
In this section the paper enters the arena of human risk perception which is far from being monetarily rational in the sense of the homo oeconomicus. With Cumulative Prospect Theory there exists an extension to multiple outcome scenarios where risk does not necessarily have to be known. In such a situation, besides risk, there also exists immeasurable uncertainty. Current research confirms and rejects parts of (Cumulative) Prospect Theory which is not necessarily a bad sign as human behavior is rarely exactly replicable and the complexity does not really allow generalizations. Therefore, even if the theory is not completely correct it still enhances our understanding of risk perception and human decision making which can be a very valuable input for agent-based models. The next chapter analyses in more detail possible distortions from psychological biases in the assessment of risk. In this context the law of small numbers, overconfidence and feelings/experience are discussed. Knowing these biases complicates the idea of developing a risk model even further. However, this is again another step to better understand the underlying processes and motives of decision making in the context of financial markets. The last chapter is an attempt to link the different aspects to get a holistic view on risk behavior. Two possibilities are discussed: Hedonic psychology, with the distinction between blow up and bleeding strategy, and heuristic-based explanations for real observations like clustering of expectations and trust in experts. This leaves space for further research as we do not have a tool that is based on current findings and can actually help us in explaining and predicting behavior in financial markets. One possibility would be to link all these aspects in the approach of computational finance to develop agent-based models in which market observations, psychological findings and the situational context can be integrated.
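The value function at the heart of Prospect Theory, concave for gains and convex but steeper for losses, can be sketched directly. The parameter values below are the commonly cited estimates from Tversky and Kahneman (alpha = beta = 0.88, lambda = 2.25), used here purely for illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect Theory value function over gains/losses from a reference point.

    Concave (x**alpha) for gains, convex and steeper (-lam * (-x)**beta)
    for losses; lam > 1 encodes loss aversion.
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Loss aversion: a 100-unit loss weighs more than twice a 100-unit gain.
gain = prospect_value(100)    # about 57.5
loss = prospect_value(-100)   # about -129.5
```

Note how the asymmetry alone already departs from the symmetric treatment of upside and downside deviations in variance-based risk measures.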
Based on a survey among customers of seven German municipal utilities, we estimate hierarchical multiple regression models to identify consumer motivations for participating in P2P electricity trading and develop implications for marketing strategies for this currently relatively unknown product. Our results show a low importance of socio-demographics in explaining differences between consumer groups, but a high influence of attitudes, knowledge and the likelihood to purchase related products. The most valuable target groups that municipal utilities' marketing strategies for P2P electricity trading should aim at first and foremost are innovators, especially prosumers. They are well informed about and open-minded concerning electricity sharing and highly environmentally aware. They ask for transparency and are willing to purchase related products. They are attracted by the ability to share generation and consumption and, to a lesser extent, by economic reasons. Our results indicate that marketing efforts should take peer effects into account to a special degree, as these are found to wield great influence on general openness towards, and purchase intention for, P2P electricity products. Finally, municipal utilities should build on the high level of satisfaction and trust of consumers and use P2P electricity trading as a measure to keep and win customers willing to change their supplier.
The extracellular matrix (ECM) is the non-cellular part of tissues and represents the natural environment of the cells. Next to structural stability, it provides various physical, chemical, and mechanical cues that strongly regulate and influence cellular behavior and are required for tissue morphogenesis, differentiation, and homeostasis. Due to its promising characteristics, ECM is used in a wide range of tissue engineering and regenerative medicine approaches as a biomaterial for coatings and scaffolds. To date, there are two sources for ECM material. First, native ECM is generated by the removal of the residing cells of a tissue or organ (decellularized ECM; dECM). Secondly, cell-derived ECM (cdECM) can be generated by and isolated from in vitro cultured cells. Although both types of ECM were intensively used for tissue engineering and regenerative medicine approaches, studies directly characterizing and comparing them are rare. Hence, in the first part of this thesis, dECM from adipose tissue and cdECM from stem cells and adipogenic differentiated stem cells from adipose tissue (ASCs) were characterized towards their macromolecular composition, structural features, and biological purity. The dECM was found to exhibit higher levels of collagens and lower levels of sulfated glycosaminoglycans compared to cdECMs. Structural characteristics revealed an immature state of collagen fibers in cdECM samples. The obtained results revealed differences between the two ECMs that can relevantly impact cellular behavior and subsequently experimental outcome and should therefore be considered when choosing a biomaterial for a specific application. The establishment of a functional vascular system in tissue constructs to realize an adequate nutrient supply remains challenging. In the second part, the supporting effect of cdECM on the self‐assembled formation of prevascular‐like structures by microvascular endothelial cells (mvECs) was investigated. 
It could be observed that cdECM, especially adipogenic differentiated cdECM, enhanced the formation of prevascular-like structures. An increased concentration of proangiogenic factors was found in cdECM substrates. The demonstration of cdECM's capability to induce the spontaneous formation of prevascular-like structures by mvECs highlights cdECM as a promising biomaterial for adipose tissue engineering. Depending on the purpose of the ECM material, chemical modification might be necessary. In the third and last part, the chemical functionalization of cdECM with dienophiles (terminal alkenes, cyclopropene) by metabolic glycoengineering (MGE) was demonstrated. MGE allows the chemical functionalization of cdECM via the natural metabolism of the cells and without affecting the chemical integrity of the cdECM. The incorporated dienophile groups can be specifically addressed via a catalyst-free, cell-friendly inverse electron-demand Diels-Alder reaction. Using this system, the successful modification of cdECM from ASCs with an active enzyme could be shown. The possibility to modify cdECM via a cell-friendly chemical reaction opens up a wide range of possibilities to improve cdECM depending on the purpose of the material. Altogether, this thesis highlighted the differences between adipose dECM and cdECM from ASCs and demonstrated cdECM as a promising alternative to native dECM for applications in tissue engineering and regenerative medicine.