The digital age makes it possible to be globally networked at any time, so digital communication is a central aspect of today's world, and its further development and expansion are becoming increasingly important. Even within a largely wireless system, copper channels remain important parts of the overall network. Given the need to keep pushing current limits, careful cable design combined with an adapted coding of the bits is essential to transmit ever more data.
One of the most popular and widespread cabling technologies is symmetrical copper cabling [1, pp. 8-15]. Also known as Twisted Pair, it is of immense importance for the cabling of communication networks.
At the time of writing this thesis, data rates of up to 10 GBit/s over a transmission distance of 100 m and 40 GBit/s over a transmission distance of 30 m are standardized for symmetrical copper cabling [2]. Other lengths are not standardized. Short lengths are of particular interest because copper cables are typically deployed over short distances, such as between computers and the campus network or within data centres.
This work has focused on the transmission of higher-order Pulse Amplitude Modulation and the associated transmission performance. The central research question is: "How well can we optimize the transmission technique in order to maximise the data bandwidth over Ethernet cable and, given that remote powering is also a significant application of these cables, how much will the resulting heating affect this transmission and what can be done to mitigate it?"
To answer this question, the cable parameters are first examined. A series of spectral measurements, such as Insertion Loss, Return Loss, Near End Crosstalk and Far End Crosstalk, provide information about the electromagnetic interference and the influence of the ohmic resistance on the signal. Based on these findings, the first theoretical statements and calculations can be made. In the next step, data transmissions over different transmission lengths are realized. The examination of the eye diagrams of the different transmission approaches ultimately provides information about the signal quality of the transmissions. An overview of the maximum transmission rate depending on the transmission distance shows the potential for different applications.
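The link between the measured channel attenuation and the resulting signal quality can be illustrated with a hedged sketch: it maps bits to PAM-4 symbols, applies an assumed insertion loss, adds noise, and estimates one vertical eye opening. All parameter values (loss, noise level, symbol count) are illustrative assumptions, not measurements from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def bits_to_pam4(bits):
    """Gray-map pairs of bits to the four PAM-4 levels -3, -1, +1, +3."""
    gray = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
    return np.array([gray[tuple(p)] for p in bits.reshape(-1, 2)], dtype=float)

bits = rng.integers(0, 2, size=2000)
symbols = bits_to_pam4(bits)

insertion_loss_db = 6.0                    # assumed channel loss at the symbol rate
gain = 10 ** (-insertion_loss_db / 20)     # dB -> linear amplitude factor
noise_sigma = 0.05                         # assumed additive noise level
received = symbols * gain + rng.normal(0, noise_sigma, symbols.size)

# Vertical eye opening between two adjacent levels (+1 and +3): the gap
# between the lowest "+3" sample and the highest "+1" sample.
upper = received[symbols == 3].min()
lower = received[symbols == 1].max()
print(f"eye opening between +1 and +3: {upper - lower:.3f}")
```

Because PAM-4 squeezes four levels into the same voltage range, each additional dB of insertion loss shrinks all three eye openings proportionally, which is why the spectral measurements above directly bound the achievable data rate.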
Furthermore, the simultaneous transmission of energy and data is a significant advantage of copper. However, the resulting heating influences the data transmission. Therefore, the last part investigates the influence of the cables' ambient temperature and clarifies the resulting changes in signal quality.
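The physical mechanism behind this temperature effect can be sketched with the standard linear approximation for copper resistance, R(T) = R20 · (1 + α · (T − 20 °C)) with α ≈ 0.0039 1/K; a higher loop resistance raises insertion loss and so degrades the eye. The loop-resistance value below is an illustrative assumption, not a figure from the thesis.

```python
ALPHA_CU = 0.0039   # temperature coefficient of copper, 1/K (textbook value)
R20 = 13.4          # assumed loop resistance of a 100 m pair at 20 C, ohms

def copper_resistance(temp_c, r20=R20, alpha=ALPHA_CU):
    """Loop resistance at temp_c (degrees C), linear approximation around 20 C."""
    return r20 * (1 + alpha * (temp_c - 20.0))

for t in (20, 40, 60):
    print(f"{t:3d} C -> {copper_resistance(t):6.2f} ohms")
```

A 40 K rise in ambient temperature thus adds roughly 15 % to the conductor resistance, which is why remote powering (and the self-heating it causes) cannot be ignored when maximising the data rate.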
High Performance Computing (HPC) enables significant progress in both science and industry. Whereas parallel applications have traditionally been developed to address the grand challenges in science, today they are also heavily used to speed up the time-to-result in the context of product design, production planning, financial risk management, medical diagnosis, as well as research and development efforts. However, purchasing and operating HPC clusters to run these applications requires huge capital expenditures as well as operational knowledge and is thus reserved for large organizations that benefit from economies of scale. More recently, the cloud has evolved into an alternative execution environment for parallel applications, which comes with novel characteristics such as on-demand access to compute resources, pay-per-use, and elasticity. Whereas the cloud has mainly been used to operate interactive multi-tier applications, HPC users are also interested in the benefits offered. These include full control of the resource configuration based on virtualization, fast setup times by using on-demand accessible compute resources, and eliminated upfront capital expenditures due to the pay-per-use billing model. Additionally, elasticity allows compute resources to be provisioned and decommissioned at runtime, which allows fine-grained control of an application's performance in terms of its execution time and efficiency as well as the related monetary costs of the computation. Whereas HPC-optimized cloud environments have been introduced by cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, existing parallel architectures are not designed to make use of elasticity. This thesis addresses several challenges in the emergent field of High Performance Cloud Computing. In particular, the presented contributions focus on the novel opportunities and challenges related to elasticity.
First, the principles of elastic parallel systems as well as related design considerations are discussed in detail. On this basis, two exemplary elastic parallel system architectures are presented, each of which includes (1) an elasticity controller that controls the number of processing units based on user-defined goals, (2) a cloud-aware parallel execution model that handles coordination and synchronization requirements in an automated manner, and (3) a programming abstraction to ease the implementation of elastic parallel applications. To automate application delivery and deployment, novel approaches are presented that generate the required deployment artifacts from developer-provided source code in an automated manner while considering application-specific non-functional requirements. Throughout this thesis, a broad spectrum of design decisions related to the construction of elastic parallel system architectures is discussed, including proactive and reactive elasticity control mechanisms as well as cloud-based parallel processing with virtual machines (Infrastructure as a Service) and functions (Function as a Service). These contributions are assessed by means of extensive experimental evaluations.
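The idea of an elasticity controller that scales processing units toward a user-defined goal can be sketched as follows. This is not the thesis' actual controller: the function, its parameters, and the assumption that remaining work is evenly divisible with constant per-unit throughput are all hypothetical simplifications of a deadline-driven, reactive control policy.

```python
import math

def target_units(remaining_work, unit_throughput, seconds_to_deadline,
                 min_units=1, max_units=64):
    """Processing units needed to finish remaining_work before the deadline,
    clamped to the [min_units, max_units] range the cloud account allows."""
    if seconds_to_deadline <= 0:
        return max_units                      # deadline missed: scale out fully
    needed = remaining_work / (unit_throughput * seconds_to_deadline)
    return max(min_units, min(max_units, math.ceil(needed)))

# Example: 120,000 work items left, 50 items/s per unit, 600 s to deadline.
print(target_units(120_000, 50, 600))  # -> 4
```

A real controller would be invoked periodically, compare this target with the currently provisioned units, and trigger provisioning or decommissioning, trading execution time against the pay-per-use cost of extra units.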
Product-Service Systems (PSS) in the fashion industry: an analysis of intra-organizational factors
(2018)
The fashion industry is a vast industry that has grown tremendously over the last decades. This growth causes significant environmental impact, since the production of clothes involves a high input of energy, water and chemicals and generates great volumes of waste. Even though fashion firms have started to address this challenge by adopting environmental standards, it has turned out that the sole use of eco-friendly material and new manufacturing techniques is insufficient. Instead, sustainable business models are increasingly gaining attention as a way to solve the environmental problems. Offers to rent, swap, repair or redesign clothes are among the most prominent and promising examples. For analytical purposes, these concepts can be assigned to the growing research stream of Product-Service Systems (PSS), which shift the focus from the pure sale of a product toward complementary or substitutional service offers. This decouples customer satisfaction from material consumption, prolongs the garments' lifetime and thus diminishes both material input and appertaining waste. Besides environmental sustainability, PSS imply potential economic benefits for organizations. Particularly in highly competitive industries like the fashion industry, PSS allow firms to differentiate themselves, cope better with cost pressure and mitigate the risk of being imitated by rivals, since services are more difficult to replicate. However, fashion PSS are still mainly operated in a niche market by small firms and have yet to be anchored in the mainstream fashion industry.
A distinctive highlight of the dissertation at hand is its investigation of multiple apparel supply chain actors, incorporating the views of a global apparel retailer in Europe and of multiple suppliers in Vietnam and Indonesia.
More specifically, the dissertation presents a coherent investigation, starting with a conceptual framework for social management strategies as a means of social risk management (SRM), aimed exclusively at the apparel industry. In line with the research gaps and research directions identified in the conceptual framework, the role of the apparel sourcing agent in social management strategies was then analysed through a multiple case study with evidence from Vietnam and Europe, ultimately suggesting ten propositions. A further multiple case study, with data collected in Vietnam, Indonesia and Europe, allowed buyer-supplier relationships to be investigated with regard to social compliance strategies, using core tenets of agency theory to interpret the findings and outline ten further propositions. Based on the development of a conceptual framework on social SSCM in the apparel industry, the formulation of 20 related propositions with evidence from crucial developing (apparel sourcing) countries, and the application of agency theory, which had been identified as a shortfall in this context, this thesis provides further grounding for SSCM theory and contributes substantially to the debate by addressing numerous research gaps.
After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. In contrast to the highly automated synthesis tools in the digital domain (coping with the quantitative difficulty of packing more and more components onto a single chip, a desire well known as More Moore), analog layout automation struggles with the many diverse and heavily correlated functional requirements that turn the analog design problem into a More than Moore challenge. Facing this qualitative complexity, seasoned layout engineers rely on their comprehensive expert knowledge to consider all design constraints that uncompromisingly need to be satisfied. This usually involves both formally specified and informally communicated pieces of expert knowledge, which entails an explicit and implicit consideration of design constraints, respectively.
Existing automation approaches can basically be divided into optimization algorithms (where constraint consideration occurs explicitly) and procedural generators (where constraints can only be taken into account implicitly). As investigated in this thesis, these two automation strategies follow two fundamentally different paradigms, denoted as top-down automation and bottom-up automation. The major trait of top-down automation is that it requires a thorough formalization of the problem to enable self-intelligent solution finding, whereas a bottom-up automatism, controlled by parameters, merely reproduces solutions that have been preconceived by a layout expert in advance. Since the strengths of one paradigm may compensate for the weaknesses of the other, it is assumed that a combination of both paradigms, called bottom-up meets top-down, has much more potential to tackle the analog design problem in its entirety than either optimization-based or generator-based approaches alone.
Against this background, the thesis at hand presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic principle, similar to the roundup of a sheep herd, is to let responsive mobile layout modules (implemented as context-aware procedural generators) interact with each other inside a user-defined layout zone. Each module is allowed to autonomously move, rotate and deform itself, while a supervising control organ successively tightens the layout zone to steer the interaction towards increasingly compact (and constraint compliant) layout arrangements. Considering various principles of self-organization and incorporating ideas from existing decentralized systems, SWARM is able to evoke the phenomenon of emergence: although each module only has a limited viewpoint and selfishly pursues its personal objectives, remarkable overall solutions can emerge on the global scale.
Several examples exhibit this emergent behavior in SWARM, and it is particularly interesting that even optimal solutions can arise from the module interaction. Further examples demonstrate SWARM's suitability for floorplanning purposes and its application to practical place-and-route problems. The latter illustrates how the interacting modules take care of their respective design requirements implicitly (i.e., bottom-up) while simultaneously respecting high-level constraints (such as the layout outline imposed top-down by the supervising control organ). Experimental results show that SWARM can outperform optimization algorithms and procedural generators both in terms of layout quality and design productivity. From an academic point of view, SWARM's grand achievement is to tap fertile virgin soil for future works on novel bottom-up meets top-down automatisms. These may one day be the key to close the automation gap in analog layout design.
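The bottom-up-meets-top-down interplay can be caricatured in a few lines: selfish circular "modules" locally repel overlapping neighbours (bottom-up), while a supervising control organ steadily tightens the layout zone and clamps the modules inside it (top-down). This toy is not SWARM itself (SWARM's modules are context-aware procedural generators that also rotate and deform); every number below is made up.

```python
import random

random.seed(1)
N, R = 8, 1.0                  # number of circular modules, module radius
zone = 20.0                    # half-width of the square layout zone
pos = [[random.uniform(-zone, zone), random.uniform(-zone, zone)] for _ in range(N)]

def overlaps(pos):
    """Count overlapping module pairs (centre distance below two radii)."""
    return sum(1 for i in range(N) for j in range(i + 1, N)
               if (pos[i][0]-pos[j][0])**2 + (pos[i][1]-pos[j][1])**2 < (2*R)**2)

for step in range(400):
    # bottom-up: each module selfishly nudges away from overlapping neighbours
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dx, dy = pos[i][0]-pos[j][0], pos[i][1]-pos[j][1]
            d2 = dx*dx + dy*dy
            if 0 < d2 < (2*R)**2:
                d = d2 ** 0.5
                pos[i][0] += 0.5 * dx / d
                pos[i][1] += 0.5 * dy / d
    # top-down: the control organ tightens the zone and keeps modules inside
    zone = max(zone * 0.99, 3.0)
    for p in pos:
        p[0] = max(-zone + R, min(zone - R, p[0]))
        p[1] = max(-zone + R, min(zone - R, p[1]))

print("final zone half-width:", round(zone, 2), "remaining overlaps:", overlaps(pos))
```

No module ever sees the global picture, yet the shrinking zone and the local repulsion together drive the ensemble toward a compact arrangement, which is the emergence the abstract describes.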
Service robots need to be aware of persons in their vicinity in order to interact with them. People tracking enables the robot to perceive persons by fusing the information of several sensors. Most robots rely on laser range scanners and RGB cameras for this task. The thesis focuses on the detection and tracking of heads. This allows the robot to establish eye contact, which makes interactions feel more natural.
Developing a fast and reliable pose-invariant head detector is challenging. The head detector proposed in this thesis works well on frontal heads, but is not fully pose-invariant. The thesis therefore further explores adaptive tracking to keep track of heads that do not face the robot. Finally, the head detector and the adaptive tracker are combined within a new people tracking framework, and experiments show its effectiveness compared to a state-of-the-art system.
So far, only a few authors have addressed the serum-free, defined differentiation of adipocytes, and there are hardly any studies available on the defined maintenance of adipocytes. This study aimed at the development of a defined culture medium for the adipogenic differentiation of primary human adipose-derived stem cells (ASCs). Based on the addition of specific factors replacing serum, ASCs were differentiated into viable, characteristic adipocytes over 14 days, as proven by the accumulation of lipids, the expression of perilipin A, and the release of leptin and glycerol. Furthermore, a defined maintenance medium was developed, which supported the maturation and stability of the cells for an additional 42 days, until day 56.
This work is framed within the broad context of Smart Cities and focuses on the area of intelligent vehicle driving, in both urban and interurban zones, by collecting real-time sensor data gathered by the drivers themselves, as well as data captured through simulation.
The objective of this work is twofold. On the one hand, it studies and applies different techniques and methods for detecting outliers in multivariate databases, and compares them through tests carried out with real traffic data. On the other hand, it establishes a relationship between anomalous traffic situations, such as congestion or accidents, and the multivariate outliers found.
Outlier detection is one of the most important tasks in any data analysis, whatever the domain or area of study, since one of its primary functions is to uncover useful, highly valuable information that generally remains hidden by the high dimensionality of the data.
By combining outlier detection mechanisms with supervised classification methods, it becomes possible to recognise elements of the urban road infrastructure such as roundabouts, zebra crossings, intersections and traffic lights.
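One classical multivariate outlier detector of the kind such a comparison would include is the Mahalanobis distance, sketched below; the thesis compares several methods, and this is only an illustration on synthetic data, not the real traffic measurements.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(0, 1, size=(500, 3))        # 500 synthetic "normal" driving samples
X = np.vstack([X, [[8.0, 8.0, 8.0]]])      # one injected anomaly (e.g. a jam)

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
# squared Mahalanobis distance of every row to the sample centre
d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

# Under normality, d2 is roughly chi-square with 3 degrees of freedom;
# 16.27 is the 99.9 % quantile, so rows above it are flagged as outliers.
threshold = 16.27
outliers = np.where(d2 > threshold)[0]
print("flagged rows:", outliers)
```

Unlike per-variable thresholds, the Mahalanobis distance accounts for correlations between the variables, which is precisely why multivariate methods can surface anomalies that stay hidden in each dimension taken alone.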
Drawing on three empirical studies, Christina Kühnl shows how companies can optimize their innovation process and how they can ensure the adoption of an innovation within their organization. She compares the success factors of product and service innovations while accounting for non-linear effects, identifies company-internal success factors, and clarifies the influence the social environment exerts on individual adoption decisions.
This thesis presents an approach to supporting workers, foremen and maintenance staff that enables them to access, ad hoc from the situation at hand, the currently required information and its interrelations in high-variant series production. Its core is the company-neutral overall concept of a shop-floor context information system, which consists of a production environment model and a system architecture. The production environment model describes and links the information and interrelations contained in a high-variant series production. Its main ordering criteria are membership of a particular group (type), the identity of an object, and its location and operating state over time. The system architecture is modular: the modules are divided into acquisition modules, context management modules, function modules for automatic and manual information filtering, and presentation modules, and they communicate via a uniform interface.