The fascination of the "touchpoint airport" and the trend towards ever greater mobility make airports increasingly attractive to the advertising industry. This is evidenced by the growth rates of out-of-home advertising at airports and by the airports' investments in analogue as well as innovative digital media. The reason is simple: airport advertising reaches target groups with above-average purchasing power across their entire customer journey, addressing them emotionally in a distinctive environment. For strong brands, airports offer the ideal stage for lasting brand presentation. But how exactly does airport advertising work, which companies benefit from an advertising presence at an airport, which target groups can be reached there, and who are the competent contacts for airport advertising? These and many other questions are answered by the Jahrbuch Airport Marketing 2020.
Nowadays, the demand for a MEMS development/design kit (MDK) is greater than ever before. In order to achieve high quality and cost effectiveness in the development process for automotive and consumer applications, an advanced design flow for the MEMS (micro-electro-mechanical systems) element is urgently required. In this paper, such a development methodology and flow for parasitic extraction of active semiconductor devices is presented. The methodology performs a geometrical extraction, links the electrically active pn junctions to SPICE standard library models, and subsequently extracts the netlist. An example for a typical pressure sensor is presented and discussed. Finally, the results of the parasitic extraction are compared with fabricated devices in terms of accuracy and capability.
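The netlist-extraction step described above can be illustrated with a minimal sketch: each geometrically extracted pn junction is emitted as a SPICE diode instance referencing a standard library model. The node names, the model name `DPN_STD`, and the junction areas below are invented for illustration and are not taken from the paper.

```python
# Hypothetical result of the geometrical extraction: one record per
# electrically active pn junction found in the MEMS layout.
junctions = [
    {"anode": "psub", "cathode": "nwell_1", "area_um2": 120.0},
    {"anode": "pdiff_1", "cathode": "nwell_1", "area_um2": 35.0},
]

def to_spice(junctions, model="DPN_STD"):
    """Map each extracted junction to a SPICE diode card:
    Dname anode cathode model AREA=<area in m^2>."""
    lines = []
    for i, j in enumerate(junctions):
        area_m2 = j["area_um2"] * 1e-12  # convert um^2 to m^2
        lines.append(f"D{i} {j['anode']} {j['cathode']} {model} AREA={area_m2:.3e}")
    return "\n".join(lines)

netlist = to_spice(junctions)
```

The emitted netlist fragment can then be included in a standard SPICE simulation alongside the library model it references.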
Due to the lack of sophisticated component libraries for microelectromechanical systems (MEMS), highly optimized MEMS sensors are currently designed using a polygon driven design flow. The advantage of this design flow is its accurate mechanical simulation, but it lacks a method for an efficient and accurate electrostatic analysis of parasitic effects of MEMS. In order to close this gap in the polygon-driven design flow, we present a customized electrostatic analysis flow for such MEMS devices. Our flow features a 2.5D fabrication-process simulation, which simulates the three typical MEMS fabrication steps (namely deposition of materials including topography, deep reactive-ion etching, and the release etch by vapor-phase etching) very fast and on an acceptable abstraction level. Our new 2.5D fabrication-process simulation can be combined with commercial field-solvers such as they are commonly used in the design of integrated circuits. The new process simulation enables a faster but nevertheless satisfactory analysis of the electrostatic parasitic effects, and hence simplifies the electrical optimization of MEMS.
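The three fabrication steps named above (deposition, deep reactive-ion etching, and the release etch) can be mimicked in a toy 2.5D representation where each lateral cell holds a stack of (material, thickness) layers. The materials, thicknesses, and etch mask below are invented; this is a sketch of the abstraction level, not the authors' simulator.

```python
# Toy 2.5D process representation: grid maps (x, y) -> layer stack,
# each layer a mutable [material, thickness] pair (bottom to top).

def deposit(grid, material, thickness):
    """Conformal deposition: add a layer on top of every cell."""
    for col in grid.values():
        col.append([material, thickness])

def drie(grid, mask_open, depth):
    """Anisotropic etch: remove `depth` from the top of unmasked cells."""
    for xy in mask_open:
        remaining, col = depth, grid[xy]
        while remaining > 0 and col:
            if col[-1][1] <= remaining:
                remaining -= col.pop()[1]
            else:
                col[-1][1] -= remaining
                remaining = 0

def release_etch(grid, sacrificial="oxide"):
    """Simplified vapor-phase release: remove all sacrificial layers."""
    for xy, col in grid.items():
        grid[xy] = [layer for layer in col if layer[0] != sacrificial]

# Example run on a 2x2 grid: oxide, then structural poly, then etch and release.
grid = {(x, y): [] for x in range(2) for y in range(2)}
deposit(grid, "oxide", 1.0)
deposit(grid, "poly", 2.0)
drie(grid, mask_open=[(0, 0)], depth=2.0)
release_etch(grid)
```

The resulting per-cell stacks could then be handed to a field solver as stacked prisms, which is the kind of geometry commercial IC field solvers accept.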
The rapidly growing population and the increasing number of shipments induced by e-commerce are two of the main reasons for constantly rising urban freight traffic. Cities are therefore overwhelmed by a growing stream of goods, and the available infrastructure, shared between passenger and goods traffic, has often reached its maximum capacity. Phenomena such as traffic congestion, pollution and lack of space are direct consequences of this trend, and their impact on the quality of life in cities is not negligible. City administrations are keen to evaluate innovative city-logistics concepts and to adopt alternative solutions to overcome the challenges posed by such a dynamic environment constrained within the existing infrastructure. In this paper, a heuristic method based on utility analysis is presented. Thanks to a modular approach accounting for stakeholders' requirements, possible scenarios and available technologies, the development of new city-logistics concepts is supported. The proposed method is then applied to a case study concerning the city of Reutlingen (Germany). Results are presented and a brief discussion leads to the conclusion.
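A utility analysis of the kind mentioned above can be sketched as a weighted scoring of candidate scenarios against stakeholder criteria, with the highest weighted sum ranking first. The criteria, weights, and scores below are invented for illustration; they are not the paper's data.

```python
# Hypothetical stakeholder criteria with weights summing to 1.0.
criteria_weights = {"emissions": 0.3, "cost": 0.25, "space_use": 0.2, "delivery_time": 0.25}

# Hypothetical city-logistics scenarios scored 0-10 per criterion.
scenarios = {
    "cargo_bikes":      {"emissions": 9, "cost": 6, "space_use": 8, "delivery_time": 5},
    "micro_depots":     {"emissions": 7, "cost": 5, "space_use": 6, "delivery_time": 7},
    "night_deliveries": {"emissions": 4, "cost": 8, "space_use": 7, "delivery_time": 8},
}

def utility(scores, weights):
    """Weighted sum of criterion scores: the scenario's utility value."""
    return sum(weights[c] * s for c, s in scores.items())

ranking = sorted(scenarios,
                 key=lambda name: utility(scenarios[name], criteria_weights),
                 reverse=True)
```

The modularity lies in the fact that criteria, weights, and scenarios can be swapped independently as stakeholder requirements or available technologies change.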
The metric and qualitative analysis of models of the upper and lower dental arches is an important aspect of orthodontic treatment planning. Currently available eLearning systems for dental education only allow access to digital learning materials and do not interactively support the learning progress. Moreover, to date no study has compared the efficiency of learning methods based on physical versus digital study models. For this pilot study, 18 dental students were separated into two groups to investigate whether the learning success in study model analysis with an interactive eLearning system is higher based on digital models or on conventional plaster models. The results show that the digital method requires less time per model analysis. Moreover, the digital approach leads to higher total scores than the one based on plaster models. We conclude that interactive eLearning using digital dental arch models is a promising tool for dental education.
OR-Pad - development of a prototype for sterile information display at the surgical site : meeting abstract
(2019)
Background: Information from the patient record or from imaging procedures is often displayed only on monitors located quite far from the operating field, outside the surgeon's ergonomic line of sight. As a result, relevant information is overlooked or its potential cannot be fully exploited. Notes brought along on paper remain outside the sterile area during surgery and are therefore not readily accessible to the surgeon. For intraoperative entries in the surgical documentation, the surgeon likewise depends on the help of the assisting staff. The additional communication paths increase personnel and time requirements and raise the potential for errors. The application-oriented research project OR-Pad - use of portable information displays in the operating room - aims to improve the surgeon's information flow. The idea arose from the clinical routine of the Anatomy and Urology departments of the University Hospital Tübingen and is now being developed into a high-fidelity prototype at Reutlingen University, funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the European Regional Development Fund.
Objective: The goal of the OR-Pad project is to present clinically relevant, up-to-date information in the immediate vicinity of the surgeon during an operation. The system is intended to optimize the information flow between the intervention and its preparation and follow-up. The surgeon should be able to preselect relevant information, such as current X-ray images or personal notes, for intraoperative display on a sterile information screen at the surgical site. Its positioning should enable an ergonomic line of sight as well as direct interaction with the system. Context-relevant information is to be provided automatically, based on the current course of the operation, through the development of a situation-recognition component. Optimizing the information flow also includes supporting the surgical documentation: during the intervention, entries such as timestamps or intraoperative images are to be created manually by the surgeon and automatically by the system. From these, the surgical documentation is to be generated after the intervention, making the process both higher in quality and more time-efficient.
Methods: To reach this goal, the clinical requirements are first specified and transferred into a requirements document. For this purpose, interviews and observations during several interventions are conducted. Following the user-centered design process, personas and usage scenarios are drafted and evaluated with the clinical project partners in several iterations. An information architecture is to be built that allows the integration of clinical information systems as well as of image and device data from the OR network. A situation-recognition component based on process models is to be developed to estimate surgical progress. Suitable mounting mechanisms are to be used to attach the information display. The OR-Pad system is to be tested continuously in the teaching and research OR of Reutlingen University and coordinated with the clinical project partners in the spirit of agile product development. The final functional prototype will then be tested and evaluated in the experimental ORs of the Anatomy department in Tübingen.
Results: An initial data collection by means of contextual inquiry captured first requirements for the OR-Pad system, resulting in a low-fidelity prototype. Its evaluation through expert interviews led to the second iteration, in which the concept was adapted according to the findings. Further data were collected during observations at the University Hospital Tübingen to create scenarios for the intraoperative use cases. Based on the requirements, a concept for the user interface was drafted, which will be evaluated with the clinical project partners as the project proceeds.
Powered by e-commerce and vital in the manufacturing industry, intralogistics has become an increasingly important and labour-intensive process. In highly standardized, automation-friendly environments such as the automotive sector, most efficiently automatable intralogistics tasks have already been automated. Due to the aging population in the EU and to ergonomic regulations, the urge to automate intralogistics tasks has become consistent even where product and process standardization is lower. That is the case for the production line or cell material supply process, where an increasing number of product variants and individually customized products, combined with the necessary ability to react to changes in market conditions, has led to smaller and more frequent replenishment of the points of use in the production plant and to the chaotic addition of production cells to the shop-floor layout. This in turn has led to inevitable traffic growth, with unforeseeable delays and an increased level of safety threats and accidents. In this paper, we use the structured approach of the Quality Interaction Function Deployment to analyse the supply process of assembly lines, seeking the most efficient combination of automation and manual labour that satisfies all stakeholders' requirements. Results are presented and discussed.
In standardized sectors such as automotive, the cost-benefit ratio of automation solutions is high, as they contribute to increasing capacity, decreasing costs and improving product quality. In less standardized application fields, the contribution of automation to improvements in capacity, cost and quality blurs. The automation of complex and unstructured tasks requires sophisticated, expensive and low-performing systems, whose impact on product quality is often not directly perceived by customers. As a result, the full automation of process chains in general manufacturing or in the logistics sector is often a suboptimal solution. Departing from the false notion that a process should be either fully automated or fully manual, this paper presents a novel heuristic method for the design of lean human-robot interaction, the Quality Interaction Function Deployment, with the objective of finding the "right level of automation". Functions are divided among human and automated agents, and several automation scenarios are created and evaluated with respect to their compliance with the requirements of all process stakeholders. As a result, synergies between operators (manual tasks) and machines (automated tasks) are improved, thus reducing time losses and increasing productivity.
It is expected that ongoing digitalisation will drive the merger between the manufacturing world and the internet world, possibly leading to a next industrial revolution, currently called "Industry 4.0". The driving forces behind this development are new business opportunities and competitive advantages arising from mass production customisation as well as rapid individual product development and manufacturing. Key factors of the development towards Industry 4.0 are discussed, along with the threats and opportunities these developments pose for future production. Actual examples from real-time customized manufacturing of consumer products are given. As mechatronic systems and industrial robots are widely used in manufacturing, and in particular in assembly, it is discussed how they can be connected to and used in digitalised industrial systems. Different examples of remote-controlled systems are presented, such as a remote-controlled KUKA robot for handling and quality control, PLC-controlled equipment, drive systems, a FESTO handling system and others. The architecture of an assembly cell is presented in which industrial robots are set up for batch-of-one production or can directly receive control/production information online and in real time over the factory network. Methods for remote maintenance and monitoring of systems over the internet, as well as production operator support over the internet, are presented.
Natural extracellular matrix (ECM) represents an ideal biomaterial for tissue engineering and regenerative medicine approaches. For further functionalization, there is a need for specific addressable functional groups within this biomaterial. Metabolic glycoengineering (MGE) provides a technique to incorporate modified monosaccharide derivatives into the ECM during their assembly, which was shown by us earlier for the production of a modified fibroblast-derived dermal ECM.
Social selling - an innovative sales approach that applies the principles of digital marketing to sales - is receiving increasing attention in corporate practice. Research, however, especially on how to design social selling, is still in its infancy. In an exploratory study using data from the Facebook and LinkedIn channels of two industrial-goods companies, we find that a direct connection request is an efficient way to expand the network, and that social-selling posts published at the beginning and end of a week, preferably in the morning and in a visual format (photos, videos), are the most promising in terms of clicks, likes, shares and comments.
Potentials of smart contracts-based disintermediation in additive manufacturing supply chains
(2019)
We investigate which potentials are created by using smart contracts for disintermediation in supply chains for additive manufacturing. Using a qualitative, critical realist research approach, we analyzed three case studies with companies active in additive manufacturing. Based on interviews with experts from these companies, we identified eight key requirements for disintermediation and associated four potentials of smart contracts-based disintermediation.
Digital transformation: can you still hear the term without rolling your eyes? Even though it is on everyone's lips, there is still great confusion about what is actually supposed to be new about it. After all, companies have been using (digital) information technology (IT) for decades to improve business processes.
Background/Aim: The aim of this study was the development of a new osteoconductivity index to determine the bone healing capacities of bone substitute materials (BSM) on the basis of 3D microcomputed tomographic (μ-CT) data. Materials and Methods: Sinus biopsies were used for the comparative analysis of the integration behavior of two xenogeneic BSM (cerabone® and Bio Oss®). 3D μ-CT data and data sets from histomorphometrical measurements based on 2D histological slices were used to measure the bone-material contact and the tissue distribution within the biopsies. The tissue reactions to both BSM were microscopically analyzed. Results: The 3D and 2D results of the osteoconductivity measurements showed comparable material-bone contacts for both BSM, but the 2D values were significantly lower. The same results were found when tissue distribution was measured in both groups. The histopathological analysis showed comparable tissue reactions to both BSM. Conclusion: The osteoconductivity index is a reliable measurement parameter for determining the healing capacities of BSM. The observed differences between the two measurement methods could be assigned to the resolution capacity of the μ-CT data, which did not allow for a precise interface distinction between the BSM and bone tissue. Histomorphometrical data based on histological slides still allow for a more exact evaluation.
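A voxel-based bone-material-contact measure of the kind underlying such an index can be sketched as the fraction of material surface voxels that touch bone in a labelled 3D volume. The label encoding and the 6-connected neighbourhood below are assumptions for illustration, not the authors' definition.

```python
# Assumed voxel labels in a segmented μ-CT volume:
# 0 = soft tissue, 1 = bone, 2 = bone substitute material.
BONE, MATERIAL = 1, 2

def contact_index(vol):
    """Fraction of material surface voxels with at least one bone neighbour.
    `vol` is a nested list indexed as vol[z][y][x]."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    surface = contact = 0
    offsets = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if vol[z][y][x] != MATERIAL:
                    continue
                labels = [vol[a][b][c]
                          for dz, dy, dx in offsets
                          for a, b, c in [(z + dz, y + dy, x + dx)]
                          if 0 <= a < nz and 0 <= b < ny and 0 <= c < nx]
                if any(l != MATERIAL for l in labels):  # voxel lies on the surface
                    surface += 1
                    if BONE in labels:
                        contact += 1
    return contact / surface if surface else 0.0

# Tiny 1x1x4 example: soft tissue, bone, then two material voxels.
index = contact_index([[[0, 1, 2, 2]]])
```

On real μ-CT data the same counting would be run over the full segmented volume; resolving the bone-material interface then depends on the scan resolution, as the abstract notes.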
Vitamin E (VitE) additives are important in treating osteoarthritis inclusive cartilage regeneration due to their antioxidant and anti-inflammatory properties. The present research study focuses on the ability of biological antioxidant VitE (alpha-tocopherol isoform) to reduce or minimize oxidative degradation of soft implantable polyurethane (PU) elastomers after extended periods of time (5 months) in vitro. The effect of the oxidation storage media on the morphology of the segmented PUs was evaluated by mechanical softening, crystallization and melting behavior of both soft and hard segments (SS, HS) using dynamic mechanical analysis (DMA). Bulk mechanical properties of the potential implant materials during ageing were predicted from comprehensive mechanical testing of the biomaterials under tension and compression cyclic loads. 5-months in vitro data suggest that the prepared siloxane-poly(carbonate urethane) formulations have sufficient resistance against degradation to be suitable materials for chondral long term bio-stable implants. Most importantly, the positive effect of incorporating VitE (0.5 or 1.0% w/w) as bio-antioxidant and lubricant on the bio-stability was observed for all PU types. VitE-additives protected the surface layer from erosion and cracking during chemical oxidation in vitro as well as from thermal oxidation during extrusion re-processing.
The aim of this study was to predefine the pore structure of β-tricalcium phosphate (β-TCP) scaffolds with different macro pore sizes (500, 750, and 1000 µm), to characterize the β-TCP scaffolds, and to investigate the growth behavior of cells within these scaffolds. The lead structures for directional bone growth (sacrificial structures) were produced from polylactide (PLA) using the fused deposition modeling technique. The molds were then filled with β-TCP slurry and sintered at 1250 °C, whereby the lead structures (voids) were burnt out. The scaffolds were mechanically characterized (native and after incubation in simulated body fluid (SBF) for 28 days). In addition, biocompatibility was investigated by live/dead, cell proliferation and lactate dehydrogenase assays.
The present publication reports the purification effort of two natural bone blocks, that is, an allogeneic bone block (maxgraft®, botiss biomaterials GmbH, Zossen, Germany) and a xenogeneic block (SMARTBONE®, IBI S.A., Mezzovico Vira, Switzerland), in addition to previously published results based on histology. Furthermore, specialized scanning electron microscopy (SEM) and in vitro analyses (XTT, BrdU, LDH) for testing of the cytocompatibility based on ISO 10993-5/-12 have been conducted. The microscopic analyses showed that both bone blocks possess a trabecular structure with a lamellar subarrangement. In the case of the xenogeneic bone block, only minor remnants of collagenous structures were found, while in contrast high amounts of collagen were found associated with the allogeneic bone matrix. Furthermore, in the case of the xenogeneic bone substitute, only island-like remnants of the polymer coating seemed to be detectable. Finally, no remaining cells or cellular remnants were found in either bone block. The in vitro analyses showed that both bone blocks are biocompatible. Altogether, the purification level of both bone blocks seems to be favorable for bone tissue regeneration without the risk of inflammatory responses or graft rejection. Moreover, the analysis of the maxgraft® bone block showed that the underlying purification process allows for preserving not only the calcified bone matrix but also high amounts of the intertrabecular collagen matrix.
Artifact correction and refined metrics for an EEG-based drowsiness detection system
(2019)
Research question: Drowsiness is an often underestimated yet major problem in road traffic. Of around 2.5 million traffic accidents in Germany in 2015, 2898 accidents, with a total of 59 fatalities (~1.7% of all traffic deaths), were attributable to overtiredness; estimates assume an unreported rate of up to 20%. A first study of our own examined whether a mobile EEG can reliably detect drowsiness states in a driving simulator; the detection rate was only 61%. The goal of this work is to improve the measurement system: accuracy is increased through artifact correction and with the help of refined quality metrics. Detected drowsiness is then indicated to the driver in an appropriate manner so that he can react accordingly.
Patients and methods: Independent component analysis (ICA) is a multivariate method for analyzing several random variables. To decide whether a driver is currently drowsy or awake, the feature vector created for each sequence is classified using ICA; a trained machine-learning algorithm is employed that is also able to assign unknown data sets to classes. To obtain the required frequency values, a Fourier transform was performed for each EEG channel. The resulting feature vector is then classified by an artificial neural network. For training, previously created feature vectors are labelled with the classes "awake" and "drowsy". These data are shuffled randomly and split 2:1 into a training and a test set. The experiment was conducted with eight subjects, each completing two 45-minute test drives.
Results: The complete data set consists of 150,000 signal values, which are aggregated into roughly 7000 sequences. After applying the quality metric, 4370 sequences remain for training. There are marked differences in the number of sequences invalidated by EEG artifacts: in the "awake" state, three times as many sequences are discarded as in the "drowsy" state; overall, about 50% of the sequences of awake subjects are discarded on average, versus only 25% for drowsy subjects. On average, the system achieves a detection rate of 73% for both states. Comparing only the "awake" and "drowsy" states and leaving "slight drowsiness" aside, the results exceed 90%.
Conclusions: The results show that attention decreases, i.e. drowsiness increases, over the course of the experiment. This is illustrated on the one hand by subjective and objective observations of signs of drowsiness, and on the other hand by measurable and classifiable differences in the EEG signal: the theta waves used as features showed a lower amplitude towards the end of the experiment. Extending the binary classification further stabilizes the results, and artifact correction and quality metrics further improve data quality. The developed drowsiness-detection application determines measurable signs of drowsiness and can make a sound decision about fitness to drive.
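The pipeline this abstract describes (per-channel Fourier features, labelled sequences, a 2:1 train/test split, a trained classifier) can be sketched at toy scale. The sampling rate, theta-band limits, the synthetic signals, and the threshold classifier standing in for the artificial neural network are all assumptions for illustration.

```python
import math
import cmath
import random

FS = 128  # assumed sampling rate in Hz; each sequence is one second long

def band_power(signal, f_lo=4.0, f_hi=8.0):
    """Mean squared DFT magnitude in the [f_lo, f_hi] Hz (theta) band."""
    n = len(signal)
    total, bins = 0.0, 0
    for k in range(n // 2):
        if f_lo <= k * FS / n <= f_hi:
            x = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(signal))
            total += abs(x) ** 2 / n
            bins += 1
    return total / bins

def make_sequence(drowsy, rng):
    # Synthetic stand-in: drowsy sequences carry a stronger 6 Hz component.
    amp = 3.0 if drowsy else 0.5
    return [amp * math.cos(2 * math.pi * 6 * t / FS) + rng.uniform(-0.5, 0.5)
            for t in range(FS)]

rng = random.Random(42)
data = [(band_power(make_sequence(d, rng)), d) for d in [True, False] * 30]
rng.shuffle(data)

cut = len(data) * 2 // 3  # 2:1 train/test split, as in the study design
train_set, test_set = data[:cut], data[cut:]

# Threshold classifier (stand-in for the neural network): midpoint of the
# class means of the training features.
drowsy_mean = sum(p for p, d in train_set if d) / sum(1 for _, d in train_set if d)
awake_mean = sum(p for p, d in train_set if not d) / sum(1 for _, d in train_set if not d)
threshold = (drowsy_mean + awake_mean) / 2

accuracy = sum((p > threshold) == d for p, d in test_set) / len(test_set)
```

On synthetic data the classes separate cleanly; on real EEG the artifact correction and quality metrics the abstract describes are what keep such features usable.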
The SDGs give an overview of the world's development challenges of the present and the coming decades and set a new global agenda for more inclusive and sustainable development and growth. These challenges also represent opportunities for social innovations and the creation of scalable and financially self-sustaining solutions by businesses and (social) entrepreneurs. Examples of solutions to social and ecological challenges are for instance providing low-income communities with access to affordable, quality products and services in areas such as water and sanitation, energy, health, education and finance. New business models can meet customer demands by providing solutions and thereby create opportunities for low-income people as employees, suppliers and distributors.
The girlboss myth : the societal and economic perspectives of the gender debate
(2019)
De facto, women today have equal rights: the same opportunities, rights and possibilities as men. Nevertheless, authoritative studies indicate that the number of women at all management levels is stagnating or growing only at a snail's pace. In the media discussion around women in management, the world at first glance appears divided into two camps. One camp soberly concludes that women themselves are to blame for their situation; successful women in particular are often quoted denying their fellow women the necessary will to succeed or the willingness to make sacrifices. The other camp seems to judge the situation in exactly the opposite way: everywhere there are well-educated, highly motivated women who hit glass ceilings or whose doors are blocked by society in general and by men in particular. This book contributes to a scientifically sober discussion, examining the current socio-political situation in a more differentiated way and away from well-worn dogmas.
The use of gamification in workplace learning to encourage employee motivation and engagement
(2019)
When we think about playing a game, be it a card game, board game, sport, or video game, we generally associate the act of playing with a positive experience like having fun, enjoying the interaction with others, or feeling a greater motivation to reach a certain goal. By contrast, workplace learning is often perceived as being dull. Employees are likely at some point in their career to find themselves stuck in a rigidly defined seminar for a long period of time or in front of their computer navigating through a mandatory e-learning course on a dry topic such as standards of business conduct or safety policies.
In recent years, organizations have tried to leverage the motivating quality of games for more serious learning contexts. Gamification entails transferring those elements and principles from games to nongaming contexts that improve user experience and engagement. In this chapter, we will specifically focus on the context of workplace learning.
Many researchers have explored the phenomenon of intercultural communication since Edward T. Hall first brought it to light in the late 1950s. Although the literature is quite extensive, the ongoing sociopolitical struggles are evidence that even in the twenty-first century, society has limited intercultural as well as intracultural communication competence. This limited understanding continues to bring about discord in every facet of life, including work.
The modern workforce is expected to possess certain knowledge, skills, and attitudes that are inherently different from those expected from previous generations. Due to globalization, intercultural competence and highly effective communication skills are at the top of the list - a working knowledge of English as the lingua franca of today's business world can be considered as a first step.
There is no denying that organizations, whether domestic or global, whether educational, governmental, or business, are undergoing rapid transformation. However, what is causing it? Prompted by the need to remain relevant and competitive, organizations constantly try to reinvent themselves. Those that do not, according to the laws of economics, will simply serve no purpose and will eventually cease to exist. Regardless of sector or industry, an organization's success pivots around its human talent. Hence, it is crucial to manage it and cultivate certain traits, knowledge, and skills. In today's global economy, organizations are more interconnected than ever before and thus the challenges they face require that employees possess not only expert knowledge, problem-solving, cross-cultural, and cross-functional teaming skills, but also good communications skills and agile thinking.
Indicators of disruption potentials - analysis of the blockchain technology’s potential impact
(2019)
The goal of this paper was to answer the question whether blockchain has the potential to become a disruption according to Clayton Christensen’s disruption theory. Therefore, the theory and the five characteristics that define the process of disruption were outlined in the first part of the paper. That and the following explanation of the blockchain technology served as the basis for the analysis and evaluation in chapters four to seven. For the analysis, three applications of the DLT, namely payment methods, intermediaries, as well as data storage and transfer, were considered. The fulfillment of the five characteristics of disruption was assessed using an example for each of the three applications.
Additionally, the paper might serve as a basis for future research on the topic, once the technology develops further, since it is generally hard to tell whether the fourth and fifth characteristics are fulfilled by blockchain at this point. Therefore, the results of the paper also back criticism of Christensen’s theory regarding its usefulness for predictions.
This paper suggests that, in the financial services industry, too, the impact of blockchain will be significant. However, given the manifoldness of the services that make up the industry, it cannot be concluded in general whether the DLT will disrupt the industry. For example, in services related to payment methods, blockchain is unlikely to follow a disruptive pattern, despite the recent hype surrounding blockchain-based cryptocurrencies. Regarding data storage and transfer, however, the technology might well follow a disruptive pattern in the financial services industry, just as blockchain solutions have been doing in the healthcare industry.
Recently, practitioners have begun appraising an effective customer journey design (CJD) as an important source of customer value in increasingly complex and digitalized consumer markets. Research, however, has neither investigated what constitutes the effectiveness of CJD from a consumer perspective nor empirically tested how it affects important variables of consumer behavior. The authors define an effective CJD as the extent to which consumers perceive multiple brand-owned touchpoints as designed in a thematically cohesive, consistent, and context-sensitive way. Analyzing consumer data from studies in two countries (4814 consumers in total), they provide evidence of the positive influence of an effective CJD on customer loyalty through brand attitude — over and above the effects of brand experience. Importantly, an effective CJD more strongly influences utilitarian brand attitudes, while brand experience more strongly affects hedonic brand attitudes. These underlying mechanisms are also prevalent when testing for the contingency factors services versus goods, perceived switching costs, and brand involvement.
Among the multitude of software development processes available, hardly any is used by the book. Regardless of company size or industry sector, a majority of project teams and companies use customized processes that combine different development methods, so-called hybrid development methods. Even though such hybrid development methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this paper, we take a first step towards devising such guidelines. Grounded in 1,467 data points from a large-scale online survey among practitioners, we study the current state of practice in process use to answer the question: What are hybrid development methods made of? Our findings reveal that only eight methods and few practices build the core of modern software development. This small set allows for statistically constructing hybrid development methods. Using an 85% agreement level in the participants' selections, we provide two examples illustrating how hybrid development methods are characterized by the practices they are made of. Our evidence-based analysis approach lays the foundation for devising hybrid development methods.
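The agreement-level idea can be illustrated with a minimal sketch: a practice counts as part of the statistically constructed core only if at least 85% of respondents selected it. All practice names and respondent data below are invented for illustration; the survey's actual items and counts are not reproduced here.

```python
# Sketch: select the "core" practices that at least `threshold` of
# respondents selected. Names and counts are illustrative only.

def core_practices(selections, threshold=0.85):
    """selections: list of sets, one set of chosen practices per respondent."""
    n = len(selections)
    counts = {}
    for chosen in selections:
        for practice in chosen:
            counts[practice] = counts.get(practice, 0) + 1
    return {p for p, c in counts.items() if c / n >= threshold}

respondents = [
    {"code review", "ci", "daily standup"},
    {"code review", "ci"},
    {"code review", "ci", "pair programming"},
    {"code review", "ci", "daily standup"},
]
print(core_practices(respondents))  # 'code review' and 'ci' pass the 85% bar
```

Lowering the threshold admits less widely shared practices, which is how such an analysis can trade core size against agreement strength.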
Context: Organizations are increasingly challenged by high market dynamics, rapidly evolving technologies and shifting user expectations. As a consequence, many organizations are struggling with their ability to provide reliable product roadmaps using traditional roadmapping approaches. Currently, many companies are seeking opportunities to improve their product roadmapping practices and strive for new roadmapping approaches. A typical first step towards advancing the roadmapping capabilities of an organization is to assess the current situation. Therefore, the authors developed and published the so-called DEEP maturity model for assessing the product roadmapping capabilities of companies operating in dynamic and uncertain environments.
Objective: The aim of this article is to conduct an initial validation of the DEEP model in order to understand its applicability better and to see if important concepts are missing. In addition, the aim of this article is to evolve the model based on the findings from the initial validation.
Method: The model was given to practitioners, such as product managers, who were asked to perform a self-assessment of the current product roadmapping practices in their company. Afterwards, interviews were conducted with each participant in order to gain insights.
Results: The initial validation revealed that some stages of the model need to be rearranged, and minor usability issues were found. The overall structure of the model was well received. The study resulted in version 1.1 of the DEEP product roadmap maturity model, which is also presented in this article.
Owing to increasing market dynamics, rapidly evolving technologies and shifting user expectations, coupled with the adoption of lean and agile practices, companies are struggling with their ability to provide reliable product roadmaps using traditional approaches. Currently, most companies are seeking opportunities to improve their product roadmapping practices. As a first challenge, they have to assess their current product roadmapping capabilities in order to better understand how to improve their practices and how to switch to a new approach. The aim of this article is to provide an initial maturity model for product roadmapping practices that is especially suited for assessing the roadmapping capabilities of companies operating in dynamic and uncertain market environments. Based on interviews with 15 experts from 13 different companies, the current state of practice regarding product roadmapping was identified. Afterwards, the model was developed in expert workshops with Robert Bosch GmbH and researchers. The study results in the so-called DEEP 1.0 product roadmap maturity model, which allows companies to conduct a self-assessment of their product roadmapping practice.
Context: Organizations are increasingly challenged by dynamic and technical market environments. Traditional product roadmapping practices, such as detailed and fixed long-term planning, typically fail in such environments. Therefore, companies are actively seeking ways to improve their product roadmapping approach. Goal: This paper aims at identifying problems and challenges with respect to product roadmapping. In addition, it aims at understanding how companies succeed in improving their roadmapping practices in their respective company contexts. The study focuses on mid-sized and large companies developing software-intensive products in dynamic and technical market environments. Method: We conducted semi-structured expert interviews with 15 experts from 13 German companies and performed a thematic data analysis. Results: The analysis showed that a significant number of companies are still struggling with traditional feature-based product roadmapping and opinion-based prioritization of features. The most promising areas for improvement are stating the outcomes a company is trying to achieve and making them part of the roadmap, sharing or co-developing the roadmap with stakeholders, and establishing discovery activities.
Context: Companies in highly dynamic markets increasingly struggle with their ability to plan product development and to create reliable roadmaps. A main reason is the decreasing predictability of markets, technologies, and customer behaviors. New approaches for product roadmapping seem to be necessary in order to cope with today's highly dynamic conditions. Little research is available with respect to such new approaches. Objective: In order to better understand the state of the art and to identify research gaps, this article presents a review of the scientific literature with respect to product roadmapping. Method: We performed a systematic literature review (SLR) to identify papers in the field of computer science. Results: After filtering, the search resulted in a set of 23 relevant papers. The identified papers focus on different aspects such as roadmap types, processes for creating and updating roadmaps, problems and challenges with roadmapping, approaches to visualize roadmaps, generic frameworks, and specific aspects such as the combination of roadmaps with business modeling. Overall, the scientific literature covers many important aspects of roadmapping but provides only little knowledge on how to create product roadmaps under highly dynamic conditions. Research gaps address, for instance, the inclusion of goals or outcomes into product roadmaps, the alignment of a roadmap with a product vision, and the inclusion of product discovery activities in product roadmaps. In addition, the transformation from traditional roadmapping processes to new ways of roadmapping is not sufficiently addressed in the scientific literature.
Software process improvement (SPI) has been around for decades, but remains a critically discussed topic. In several waves, different aspects of SPI have been discussed in the past, e.g., large-scale company-level SPI programs, maturity models, success factors, and in-project SPI. It is hard to find new streams or a consensus in the community, but there is a trend coming along with agile and lean software development. Apparently, practitioners reject extensive and prescriptive maturity models and move towards smaller, faster and continuous project-integrated SPI. Based on data from two survey studies conducted in Germany (2012) and Europe (2016), we analyze the process customization for projects and the practices for implementing SPI in the participating companies. Our findings indicate that, even in regulated industry sectors, companies increasingly adopt in-project SPI activities, primarily with the goal of continuously optimizing specific processes. Therefore, with this paper, we want to stimulate a discussion on how to evolve traditional SPI towards a continuous learning environment.
The emergence of agile methods and practices has not only changed the development processes but might also have affected how companies conduct software process improvement (SPI). Through a set of complementary studies, we aim to understand how SPI has changed in times of agile software development. Specifically, we aim (a) to identify and characterize the set of publications that connect elements of agility to SPI, (b) to explore to which extent agile methods/practices have been used in the context of SPI, and (c) to understand whether the topics addressed in the literature are relevant and useful for industry professionals. To study these questions, we conducted an in-depth analysis of the literature identified in a previous mapping study, an interview study, and an analysis of the responses given by industry professionals to SPI-related questions stemming from an independently conducted survey study. Regarding the first question, we identified 55 publications that focus on both SPI and agility, of which 48 present and discuss how agile methods/practices are used to steer SPI initiatives. Regarding the second question, we found that the two most frequently mentioned agile methods in the context of SPI are Scrum and Extreme Programming (XP), while the most frequently mentioned agile practices are integrate often, test-first, daily meeting, pair programming, retrospective, on-site customer, and product backlog. Regarding the third question, we found that a majority of the interviewed and surveyed industry professionals see SPI as a continuous activity. They agree with the agile SPI literature that agile methods/practices play an important role in SPI activities, but that the importance given to specific agile methods/practices does not always coincide with the frequency with which these methods/practices are mentioned in the literature.
Efficient and robust 3D object reconstruction based on monocular SLAM and CNN semantic segmentation
(2019)
Various applications implement SLAM technology, especially in the field of robot navigation. We show the advantage of SLAM technology for independent 3D object reconstruction. To obtain a point cloud of every object of interest free of its environment, we leverage deep learning. We utilize recent CNN deep learning research for accurate semantic segmentation of objects. In this work, we propose two fusion methods for CNN-based semantic segmentation and SLAM for the 3D reconstruction of objects of interest in order to obtain more robustness and efficiency. As a major novelty, we introduce CNN-based masking to focus SLAM only on feature points belonging to each single object. Noisy, complex or even non-rigid features in the background are filtered out, improving the estimation of the camera pose and the 3D point cloud of each object. Our experiments are constrained to the reconstruction of industrial objects. We present an analysis of the accuracy and performance of each method and compare the two methods, describing their pros and cons.
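The masking idea can be sketched as follows, assuming a binary segmentation mask per object (the image size, mask region, and keypoints below are synthetic stand-ins, not data from the paper): feature points falling outside the CNN mask are discarded before they enter the SLAM pipeline.

```python
import numpy as np

# Sketch: keep only feature points that lie inside the CNN
# segmentation mask of the object of interest.

def filter_keypoints(keypoints, mask):
    """keypoints: (N, 2) array of (x, y) pixel coordinates.
    mask: (H, W) boolean array, True where the object is segmented."""
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    h, w = mask.shape
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)   # bounds check
    keep = np.zeros(len(keypoints), dtype=bool)
    keep[inside] = mask[ys[inside], xs[inside]]             # mask lookup
    return keypoints[keep]

mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:400] = True                # synthetic object region
pts = np.array([[250.0, 150.0], [50.0, 50.0], [399.0, 299.0]])
print(filter_keypoints(pts, mask))           # keeps the two on-object points
```

In a real pipeline the mask would come from the segmentation network per frame, and the surviving points would feed the pose estimation and mapping stages.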
As part of this work, a software architecture was developed that allows interactions between autonomous vehicles and pedestrians in road traffic to be studied in a simulated environment. The autonomous vehicle is controlled by an external driving simulator. A motion capture system enables the movement data of pedestrian and driver to be recorded and transferred into the virtual environment. Head-mounted displays are used so that the actors perceive the virtual environment as realistically as possible. Based on the developed software architecture, a simulation environment was implemented in which interactions between a pedestrian and an autonomous vehicle can be studied. The project is intended to demonstrate the potential of motion-capture-based simulation environments for the design and development of autonomous driving systems.
Over recent years, traffic accidents between cars and bicycles or motorcycles have kept increasing. The cause is usually a carelessly opened vehicle door with which an approaching two-wheeler collides. These accidents can have devastating consequences for everyone involved. For this reason, the technological progress that runs through the entire automotive industry is to be extended by an additional component: a system for two-wheeler riders that detects an opening vehicle door early, can thus warn the rider in time and, if necessary, initiate accident-avoidance strategies. This concept is intended to guide the development of the system and explain its fundamentals.
Digitalization of products and services commonly causes substantial changes in business models, operations, organization structures and IT infrastructures of enterprises. Motivated by experiences and observations from digitalization projects, the paper investigates the effects of digitalization on enterprise architectures (EA). EA models serve as representations of business, information system and technical aspects of an enterprise to support management and development. By comparing EA models before and after digitalization, the paper analyzes the kinds of changes visible in the EA model. The most important finding is that newly created digitized products and the associated product and enterprise architecture are no longer properly integrated into the overall architecture and may even exist in parallel. Thus, the focus of this work is on showing these parallel architectures and proposing derivations for a better integration.
The Dual Active Bridge (DAB) is a very promising topology for future power converters. However, careless operation can lead to a DC component in the transformer current. The problem is further exacerbated when the phase shift changes during operation. This work presents a study of DC bias effects on the DAB with special regard to transient effects introduced by sudden shifts in the output load. We present a simple yet effective approach to avoid DC bias entirely.
Navigating with an e-bike should be a positive user experience. Therefore, a multimodal smart user interface (MSUI) was developed in this work within the context of Bosch E-Bike Systems. The concept comprises visual turn-by-turn signals, tactile vibration signals in the handlebar, and auditory voice output. The goal of the work is a prototype suitable for evaluating user needs with respect to different multimodal feedback options.
Simulating human crowd behavior can support capacity, risk and evacuation planning for buildings, be used in film production for impressive crowd scenes, or bring virtual settings to life in real-time applications. The main challenges lie in a realistic appearance of the virtual crowd, believable behavior within a social group, lifelike animations, and preserving the real-time capability of interactive applications. This work presents the current state of the art, evaluates technologies, and implements a crowd simulation prototype with the Unity engine.
The importance and position of information technology has undergone continuous change over the last 60 years. Its initially purely supporting character increasingly developed into an important part of the organizational and operational structure of companies. A well-defined IT service management function now sees itself on an equal footing with the other departments, acts as a service provider, and regards the departments as its "customers". New technologies and innovations, and the resulting redefinition of existing requirements, are expected to show positive effects in the course of corporate digitalization. The IT Infrastructure Library (ITIL) is used as a framework for IT service management in industry and the public sector. The ITIL approach supported the cultural change and sensitized management and employees to think in a service-oriented way. Since this approach follows a predefined, cyclic procedure, rapidly arriving customer requirements may not be implemented on time, which is why agile methods such as the DevOps approach are coming to the fore. The challenge is to foster the cultural change when introducing DevOps into existing ITIL structures in companies.
Sonography is one of the most common imaging techniques in medicine. However, the reproducibility of ultrasound diagnostics is still a problem today, leading to misdiagnoses. The prototype system presented in this paper, which supports medical students in ultrasound seminars, is intended to define requirements for the reproducibility of an ultrasound examination. Expert interviews provided insights into clinical workflows and everyday hospital routine, and into which contents are relevant for enabling the reproducibility of ultrasound examinations.
With the steady growth of new technologies and possibilities, hardly anything stands in the way of merging technology with humans. Examining such implants and the associated risks is part of this work, with a focus on their functionality and IT security aspects. All implants presented in this work require communication with the outside world. This communication channel entails risks that are not limited to the wearers' data but also include health risks.
Segmentation of polyps in colonoscopy image data: a potential analysis of deep learning methods
(2018)
Colorectal carcinomas have a high mortality rate when detected late. However, early removal of the malignant polyps in the gastrointestinal tract that form their precursors offers high chances of survival. In colonoscopies, small polyps in particular are overlooked quite frequently. Reliable image processing systems that not only detect polyps in a colonoscopy frame but segment them with pixel accuracy could assist physicians in colorectal cancer screenings. This work analyzes the current state of polyp segmentation in the gastrointestinal tract and further investigates to what extent the recently very successful methods of deep learning offer advantages here.
Enterprises are transforming their strategy, culture, processes, and information systems to enlarge their digitalization efforts or to strive for digital leadership. The digital transformation profoundly disrupts existing enterprises and economies. In recent times, many new business opportunities have appeared that use the potential of the Internet and related digital technologies: the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. Architecting micro-granular structures has a substantial impact on architecting digital services and products. The change from a closed-world modeling perspective to the more flexible open world of living software and system architectures defines the context for flexible and evolutionary software approaches, which are essential to enable the digital transformation. In this paper, we reveal multiple perspectives of digital enterprise architecture and decisions to effectively support value- and service-oriented software systems for intelligent digital services and products.
Presently, many companies are transforming their strategy and product base, as well as their culture, processes and information systems, to become more digital or to strive for digital leadership. In recent years, new business opportunities have appeared that use the potential of the Internet and related digital technologies, like the Internet of Things, services computing, cloud computing, edge and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. This has a strong impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open-world composition and evolution of micro-granular system architectures defines the moving context for adaptable systems. We focus on a continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, as part of a new digital enterprise architecture for service-dominant digital products.
New business opportunities have appeared that use the potential of the Internet and related digital technologies, like the Internet of Things, services computing, artificial intelligence, cloud, edge, and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Companies are transforming their strategy and product base, as well as their culture, processes and information systems, to adopt digital transformation or to strive for digital leadership. Digitalization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. Digitalization has a substantial impact on architecting the open and complex world of highly distributed digital services and products as part of a new digital enterprise architecture, which structures and directs service-dominant digital products and services. The present research paper investigates mechanisms for supporting the evolution of digital enterprise architectures with user-friendly methods and instruments for interaction, visualization, and intelligent decision management during the exploration of multiple and interconnected perspectives by an architecture management cockpit.
Enterprise Governance, Risk and Compliance (GRC) systems are key to managing the risks threatening modern enterprises from many different angles. A key constituent of GRC systems is the definition of controls that are implemented on the different layers of an Enterprise Architecture (EA). Controls become part of a "concern" of the EA, which allows an EA viewpoint to be used to cover control compliance assessments. In this article we explore this relationship further, derive a metamodel linking controls and EA, and elicit how this linkage gives rise to a hierarchical understanding of the viewpoint concept for EAs. We complement these considerations with an expository instantiation in a cockpit for control compliance applied in an international enterprise in the insurance industry.
The "Heat4SmartGrid" project investigates whether and how heat pumps can increase the share of renewable energies in the heat supply of Baden-Württemberg (BW) while simultaneously relieving the distribution grid through intelligent control of the heat pump systems.
To this end, work package 1 (AP 1) calculated a heat demand in BW of 35 TWh for the year 2050, compared with 40 TWh in 2030. Relative to 2015, this corresponds to a decline of 30% by 2030 and of 40% by 2050. Furthermore, due to the energetic refurbishment of the building stock, the technical potential for heat pumps grows from 8 TWh in 2015 to 20 TWh by 2030 and to 23 TWh by 2040. In total, 63% of all residential units in BW could thus be supplied with thermal energy by heat pumps. The use of heat pump systems is therefore an important building block for the success of the heat transition.
To control the heat pumps, operating modes depending on application and building type were developed in AP 2. These are determined by means of correlation functions for the heating capacity of air-to-water and brine-to-water heat pumps. Building on this, the achievable seasonal performance factor of both heat pump technologies was determined for the building types identified in AP 1.
The intelligent, system- and grid-friendly control of these heat pump systems requires forecasts of local generation and local consumption, which are developed in AP 5. Depending on the forecasting application, both univariate (electrical load and thermal domestic hot water load) and multivariate forecasting models (PV generation and thermal space heating load) were implemented.
Recognizing human actions is a core challenge for autonomous systems as they directly share the same space with humans. Systems must be able to recognize and assess human actions in real-time. To train the corresponding data-driven algorithms, a significant amount of annotated training data is required. We demonstrate a pipeline to detect humans, estimate their pose, track them over time and recognize their actions in real-time with standard monocular camera sensors. For action recognition, we transform noisy human pose estimates into an image-like format we call Encoded Human Pose Image (EHPI). This encoded information can then be classified using standard methods from the computer vision community. With this simple procedure, we achieve competitive state-of-the-art performance in pose-based action detection and can ensure real-time performance. In addition, we show a use case in the context of autonomous driving to demonstrate how such a system can be trained to recognize human actions using simulation data.
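A minimal sketch of such an image-like pose encoding is shown below. The normalization and layout here are illustrative assumptions, not the paper's exact EHPI definition: joint coordinates over a time window are scaled to [0, 1] and arranged as a joints-by-frames "image" with x/y as channels, so that a standard image classifier can consume them.

```python
import numpy as np

# Sketch of an EHPI-style encoding (illustrative, simplified):
# normalize a window of 2D joint positions and lay them out as
# (joints, frames, 2) so the result resembles a small image.

def encode_pose_sequence(poses):
    """poses: (frames, joints, 2) array of (x, y) joint positions."""
    mins = poses.min(axis=(0, 1))                 # per-axis minimum over window
    maxs = poses.max(axis=(0, 1))
    span = np.where(maxs > mins, maxs - mins, 1.0)
    normalized = (poses - mins) / span            # scale to [0, 1]
    # one row per joint, one column per frame, x/y as two channels
    return normalized.transpose(1, 0, 2)

seq = np.random.default_rng(0).uniform(0, 640, size=(32, 15, 2))
ehpi = encode_pose_sequence(seq)
print(ehpi.shape)  # (15, 32, 2)
```

The appeal of such an encoding is that noisy per-frame pose estimates become a fixed-size tensor, so off-the-shelf CNN classifiers can be reused for the action recognition step.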
RoPose-Real: real world dataset acquisition for data-driven industrial robot arm pose estimation
(2019)
It is necessary to employ smart sensory systems in dynamic and mobile workspaces where industrial robots are mounted on mobile platforms. Such systems should be aware of flexible and non-stationary workspaces and able to react autonomously to changing situations. Building upon our previously presented RoPose system, which employs a convolutional neural network architecture trained on purely synthetic data to estimate the kinematic chain of an industrial robot arm, we now present RoPose-Real. RoPose-Real extends the prior system with a comfortable and targetless extrinsic calibration tool to allow for the production of automatically annotated datasets for real robot systems. Furthermore, we use the novel datasets to train the estimation network with real-world data. The extracted pose information is used to automatically estimate the pose of the observing sensor relative to the robot system. Finally, we evaluate the performance of the presented subsystems in a real-world robotic scenario.
Learning to translate between real world and simulated 3D sensors while transferring task models
(2019)
Learning-based vision tasks are usually specialized to the sensor technology for which data has been labeled. The knowledge of a learned model is simply useless when it comes to data that differs from the data on which the model was initially trained, or if the model should be applied to a totally different imaging or sensor source. New labeled data has to be acquired on which a new model can be trained. Depending on the sensor, this can get even more complicated when the sensor data becomes more abstract and hard for humans to interpret and label. Enabling the reuse of models trained for a specific task across different sensors minimizes the data acquisition effort. Therefore, this work focuses on learning sensor models and translating between them, thus aiming for sensor interoperability. We show that even for the complex task of human pose estimation from 3D depth data recorded with different sensors, i.e. a simulated and a Kinect 2™ depth sensor, performance can greatly improve by translating between sensor models without modifying the original task model. This process especially benefits sensors and applications for which labels and models are difficult, if at all possible, to retrieve from raw sensor data.
The investigation of stress requires distinguishing between stress caused by physical activity and stress caused by psychosocial factors. The heart's response to stress and to physical activity is very similar when the set of monitored parameters is reduced to one. Currently, the differentiation remains difficult, and methods that use only the heart rate are not able to differentiate between stress and physical activity without additional sensor data input. The approach focuses on methods that generate signals providing characteristics useful for detecting stress, physical activity, inactivity and relaxation.
Autism spectrum disorders (ASD) in children are often diagnosed too late, and accompanying this chronic condition is difficult. The presented approach allows children to be treated in their familiar home environment and attempts to work out the relationships between sleep and behavior. The insights gained are intended to improve the patients' quality of life and to support the parents. The necessary infrastructural support is provided by medical professionals, who can rely on a web-based service that accompanies all processes (diagnostics, data acquisition and recording, training, etc.). The anonymized data are stored centrally in a diagnostic system and can thus be used for future treatment strategies. The comprehensive solution builds on central elements of smart homes and AAL.
Fatigue and drowsiness are responsible for a significant percentage of road traffic accidents. There are several approaches to monitoring a driver's drowsiness, ranging from the driver's steering behavior to the analysis of the driver, e.g. eye tracking, blinking, yawning, or electrocardiogram (ECG). This paper describes the development of a low-cost ECG sensor to derive heart rate variability (HRV) data for drowsiness detection. The work includes hardware and software design. The hardware was implemented on a printed circuit board (PCB) designed so that the board can be used as an extension shield for an Arduino. The PCB contains a double, inverted ECG channel including low-pass filtering and provides two analog outputs to the Arduino, which combines them and performs the analog-to-digital conversion. The digital ECG signal is transferred to an NVidia embedded PC where the processing takes place, including QRS-complex, heart rate, and HRV detection as well as visualization features. The resulting compact sensor provides good results in the extraction of the main ECG parameters. The sensor is being used in a larger framework, where facial-recognition-based drowsiness detection is combined with ECG-based detection to improve the recognition rate under unfavorable light or occlusion conditions.
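The HRV step of such a pipeline can be sketched as follows. The R-peak timestamps below are synthetic example values, and SDNN and RMSSD are standard time-domain HRV measures, not necessarily the exact ones computed on the described board.

```python
import math

# Sketch: from QRS-detector R-peak times, derive heart rate and two
# common time-domain HRV measures (SDNN, RMSSD). Timestamps are synthetic.

def hrv_features(r_peaks_s):
    """r_peaks_s: R-peak timestamps in seconds, ascending."""
    rr = [b - a for a, b in zip(r_peaks_s, r_peaks_s[1:])]    # RR intervals (s)
    mean_rr = sum(rr) / len(rr)
    heart_rate = 60.0 / mean_rr                               # beats per minute
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / len(rr))
    diffs = [b - a for a, b in zip(rr, rr[1:])]               # successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return heart_rate, sdnn * 1000.0, rmssd * 1000.0          # SDNN/RMSSD in ms

peaks = [0.0, 0.80, 1.62, 2.40, 3.22, 4.00]  # ~75 bpm with some variability
hr, sdnn, rmssd = hrv_features(peaks)
print(round(hr, 1), round(sdnn, 1), round(rmssd, 1))
```

Lowered HRV over a sliding window is one of the features typically associated with drowsiness, which is why such measures are useful inputs to a detector.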
Haptic softness is a central product attribute for many fabric-related retailers. Can those retailers use music - an easy-to-implement in-store atmospheric cue - to influence consumers' perception of this central product attribute? Across four studies, this research shows that high (vs. low) music softness enhances consumers' haptic softness perceptions. We argue that this cross-modal effect occurs owing to a transfer of softness-related associations from the auditory to the haptic modality. To better inform retail practice, we examine three managerially relevant boundary conditions at the product and store levels.
For more than 160 years, Reutlingen University has been a driver of innovation in the region. Over this time, the weaving school founded to promote the textile industry has developed into one of the largest universities of applied sciences in the state. Across five faculties, more than 5,500 future specialists and leaders study with a practice-oriented, international focus.
This documentation, written by a contemporary witness who took part in the formative phase of the "Reutlinger Betriebswirtschaft" almost from the beginning, is also meant to show younger colleagues how the faculty came into being and grew over its first two decades. Time horizon: from its founding on 1 October 1971 to the publication of the first university ranking in issue 1/1995 of "manager-magazin", a reporting period of just under 25 years. This time frame is intended to show how the seeds sown at the founding and during the formative phase bore their first visible fruit.
How well our body and brain recover from fatigue depends directly on sleep quality, which can be determined from the results of a sleep study. Classifying the sleep stages is the first step of such a study and involves measuring vital signs and processing them further. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors that enables sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus with address arbitration. A key difference between this system and other approaches is the innovative placement of the sensors underneath the mattress. This property facilitates continuous use of the system without any perceptible impact on the familiar bed. The system was tested in experiments recording the sleep of several healthy young subjects. The first results indicate the potential to capture not only respiratory rate and body movement but also heart rate.
This paper presents an algorithm for the nonobtrusive recognition of Sleep/Wake states using signals derived from ECG, respiration, and body movement captured while lying in a bed. Multinomial logistic regression techniques were chosen as the mathematical core of the system's data analytics. Parameters derived from the three signals are used as the input for the proposed method. The overall accuracy achieved is 84% for Wake/Sleep stages, with a Cohen's kappa value of 0.46. The presented algorithm should support experts in analyzing sleep quality in more detail. The results confirm the potential of this method and disclose several ways for its improvement.
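Cohen's kappa, reported above alongside accuracy, corrects the observed agreement for the agreement expected by chance. A minimal sketch of its computation for the binary Wake/Sleep case, with invented confusion-matrix counts (not the study's data):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a binary (Wake/Sleep) confusion matrix."""
    n = tp + fp + fn + tn
    p_o = (tp + tn) / n                          # observed agreement (accuracy)
    # expected chance agreement, from the marginal label frequencies
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)

# Invented example: 84% accuracy but a kappa well below it,
# because the class distribution is imbalanced
print(round(cohens_kappa(70, 6, 10, 14), 2))
```

The gap between accuracy and kappa is exactly why the paper reports both: with imbalanced Wake/Sleep classes, accuracy alone overstates performance.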
The goal of this paper is to show how a bed system with an embedded sensor system can analyze a person's movement and breathing and recognize the positions in which the subject lies on the bed during the night, without any additional physical contact. The measurements are performed with sensors placed between the mattress and the frame. An Intel Edison board was used as an endpoint serving as a communication node from the mesh network to an external service. Two nodes and the Intel Edison are attached to the bottom of the bed frame and connected to the sensors.
Measuring the power losses of power electronic components and overall systems via electrical quantities is sometimes difficult and, in a few applications, not possible at all. Calorimetric power loss measurement is an established method for identifying overall system losses with suitable accuracy. This paper presents a novel method using an open-chamber calorimeter with accurate air mass flow, air pressure, and humidity measurement as well as temperature control. The benefits are a measurement time roughly halved compared to established systems and the ability to control the chamber temperature, which makes it possible to measure the power losses at different ambient temperatures.
This paper presents the preliminary results of a set of research projects being developed at the distributed energy resources laboratory at the University of Reutlingen. The main aim of these projects is to couple distributed ledger technologies (DLTs) with the distributed control of microgrids. Firstly, a DLT-based solution for a local market platform has been developed. It enables end customers to participate in new local micro-energy markets by providing them with a distributed, decentralized, transparent, and secure peer-to-peer (P2P) payment system. Secondly, this solution has been integrated with an autonomous (agent-based) grid management. The integrated solution, comprising both the market platform and the agent-based control, has been implemented and tested in a real microgrid with various distributed components such as a PV system, a CHP unit, and different kinds of controllable loads. This microgrid is located in the distributed energy resources laboratory at the University of Reutlingen. Thirdly, the resulting solution is being turned into an easy-to-customize market product by AC2SG Software Oy, a Finland-based software company developing solutions for the Indian market. In a next phase, the solution will be tested in a real off-grid environment in India.
This study estimates the reproducibility of locating the palpation points of three anatomical landmarks of the human body (the xiphoid process and the two iliac crests) to support a navigated ultrasound application. On six test subjects with different body mass indices, the three palpation points were located five times each by two examiners. The deviation from the target position was calculated and correlated with the fat thickness above each palpation point. The measurements were reproducible with a mean error of ≈13.5 mm ± 4 mm, which appears sufficient for the intended application field.
In an effort to make the cultural and institutional aspects of energy efficiency in industrial organizations more visible, this article introduces a theoretical framework of decision-making processes. Taking a sociological perspective and viewing organizations as cultural systems embedded in wider social contexts, I have developed a multilevel framework addressing the institutional, organizational, and individual dimensions shaping decisions on energy efficiency. The framework's development is based on qualitative empirical fieldwork and integrates insights from organizational theory, neo-institutional theory, the attention-based view of the firm, and organizational culture theories. I conclude that decisions on energy efficiency result from problematization and theorization processes. These processes emerge between the institutional issue field, the organization, and its members. The model explains decisions shaped by the environment (external and material), organizational processes (energy-efficiency practices, climate, and culture), and individuals' characteristics. The framework serves several purposes: introducing a meta-theory of decision making, providing a concept for empirical analysis, and enabling connectivity to the research on barriers.
In the machining of metal workpieces on machine tools, the productivity and the quality of the produced workpieces are the key criteria for economic efficiency. Achieving these goals requires precise knowledge of the performance and properties of the production equipment in use. Various methods for investigating, for example, the static and dynamic machine properties have long been known, including the measurement of static and dynamic compliance and the recording of natural vibrations by means of experimental modal analysis. These methods are often applied only under laboratory conditions. This contribution presents criteria that must additionally be taken into account when transferring the analysis to real operation so that the results can be interpreted.
In orthopedics, robotic systems have been used successfully in a supporting role for several years. This approach requires the prior creation of a digital model based on medical image datasets. The models are to be created and reviewed in a browser-based client-server application, which requires the display of two-dimensional and three-dimensional datasets. The basis of this paper is the development of an approach for the interactive, browser-based three-dimensional display of medical planning data. The application is a proof of concept for whether the existing desktop applications for displaying planning data can be replaced. The application was implemented using the AMI.js framework; it fulfills all defined requirements and can therefore replace the current desktop applications.
The Internet of Things is changing the customer experience lastingly, for example through new services that reduce consumers' cognitive load. Management should exploit the new ways of interacting with consumers, develop powerful user interfaces, and build up analytical know-how and partnerships.
To support the surgeon, a near-patient information display is being developed that can provide context-relevant information according to the current situation. For this purpose, a situation recognition is to be designed that can be transferred to different intraoperative processes. The goal of the adaptive situation recognition is to detect specific situations from intraoperative information of different data sources in the operating room. During data collection and analysis, use cases for the situation recognition were defined and surgical process models were created that map intraoperative events. Based on this information, a concept was designed that initially focuses on recognizing abstract, generalized phases independently of the intervention and can be specified step by step down to granular process steps. This flexibility is intended to make the concept transferable to intraoperative processes and thus to support the surgeon with targeted, context-relevant information. The concept will be developed further in future steps.
Mammography systems are used in the diagnosis of breast carcinomas. Over recent years the original technology has evolved from analog X-ray film to digitally integrated systems. Tomosynthesis, a cross-sectional imaging technique in which several layers of the organism can be examined, also makes superimposed structures visible. To serve as an adequate basis for diagnosing malignant tumors, a number of qualitative requirements must be fulfilled. So far there is little literature that systematically describes the requirements and the design of such devices. In this work, the qualitative requirements are identified on the basis of the literature and existing systems. The basic design of such systems is shown by means of the individual system components in the semi-formal notation language SysML. The fundamental functioning of a tomosynthesis-capable mammography device is described in summary and by means of the individual system components. This work is intended to convey a comprehensive understanding of digital mammography as a basis for documenting qualitative requirements.
This study describes a non-contact measuring and parameter identification procedure designed to evaluate the inhomogeneous stiffness and damping characteristics of the annular ligament in the physiological amplitude and frequency range, without applying large static external forces that can cause unnatural displacements of the stapes. To verify the procedure, measurements were first conducted on a steel beam. Then, measurements were performed on an individual human cadaveric temporal bone sample. The estimated results support the inhomogeneous stiffness and damping distribution of the annular ligament and are in good agreement with the multiphoton microscopy results, which show that the posterior-inferior corner of the stapes footplate is the stiffest region of the annular ligament. This method can potentially help to establish a correlation between the stiffness and damping characteristics of the annular ligament and the inertia properties of the stapes and thus help to reduce the number of independent parameters in model-based hearing diagnosis.
Due to the large interindividual variances and the poor optical accessibility of the ear, the specificity of hearing diagnostics today, with respect to a particular clinical picture and its quantitative assessment, is severely restricted. Often only a yes-or-no decision is possible, which depends strongly on the subjective assessment of the ENT physician. A novel approach, in which objectively obtainable, non-invasive audiometric measurements are evaluated using a numerical middle ear model, makes it possible to make the hidden middle ear properties visible and quantifiable. The central topic of this paper is a novel parameter identification algorithm that combines inverse fuzzy arithmetic with an artificial neural network in order to achieve a coherent overall diagnostic picture when comparing model and measurement. Its usage is demonstrated on a pathological pattern called malleus fixation, in which the upper ligament of the malleus is pathologically stiffened.
5-hydroxymethyl-furfural (HMF) and furfural are interesting as potential platform chemicals for a bio-based chemical production economy. Within the scope of this work, the process routes under technical development for the production of these platform chemicals were investigated. For two selected processes, the material and energy flows, as well as the carbon footprint, were examined in detail. The possible production process optimizations, further development potentials, and the research demand against the background of the reduction of the primary energy expenditure were worked out.
Telemetry and home monitoring are already being used successfully in many areas of healthcare. Through telemetric data transmission, modern cardiac pacemakers enable home monitoring of current health and device status data by patients and physicians. Further developing existing products requires a fundamental understanding of the requirements for and the design of such systems. So far, no manufacturer-independent analyses of these systems exist. Using SysML as a semi-formal notation language, the combined pacemaker and home monitoring system is modeled. The requirements for such a system can be derived from existing products. The present work describes the system architecture of such systems, which is used to show the connection to information systems via the home monitoring system and the functions implemented thereby.
Private equity (PE) firms are investment firms that acquire equity shares in companies. The goal of PE firms is to exit the investment after a few years with a substantial increase in value. PE firms often claim to outperform the market, i.e. to create alpha.
The overall aim of this paper is to unravel the mystery of value creation in the PE industry. First, the author presents a conceptual framework for value creation in the PE industry based on a multiple valuation model that breaks down value creation into different elements. Second, the paper evaluates whether PE firms really create value by analysing and combining results from prior empirical studies based on the conceptual framework.
The results show that the existing empirical evidence is mixed, but that there is indeed a tendency toward positive evidence that PE firms create economic value on average. However, there are methodological difficulties in measuring value creation, and studies are often subject to bias. Finally, it is pointed out that the question of whether PE firms really create value has to be viewed from different perspectives, such as those of the PE firm, the investors, and the portfolio companies.
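A multiple valuation model of the kind mentioned above is commonly decomposed into EBITDA growth, multiple expansion, and deleveraging. A minimal sketch of that arithmetic, with invented entry/exit figures (not data from the reviewed studies):

```python
def pe_value_creation(ebitda_0, mult_0, debt_0, ebitda_1, mult_1, debt_1):
    """Break equity value growth into the classic PE levers.

    Equity value = EBITDA * multiple - net debt,
    at entry (index 0) and at exit (index 1)."""
    equity_0 = ebitda_0 * mult_0 - debt_0
    equity_1 = ebitda_1 * mult_1 - debt_1
    return {
        "equity_gain": equity_1 - equity_0,
        "ebitda_growth": (ebitda_1 - ebitda_0) * mult_0,     # operational lever
        "multiple_expansion": (mult_1 - mult_0) * ebitda_0,  # pricing lever
        "combined": (ebitda_1 - ebitda_0) * (mult_1 - mult_0),
        "deleveraging": debt_0 - debt_1,                     # financial lever
    }

# Entry: EBITDA 100 at 8x with 400 net debt; exit: EBITDA 130 at 9x with 250
print(pe_value_creation(100, 8, 400, 130, 9, 250))
```

The four components sum exactly to the total equity gain, which is what makes this decomposition useful for attributing value creation to distinct sources.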
Companies are constantly changing their business process models. In team environments, different versions of a process model are created at the same time. These versions of a process model need to be merged from time to time to consolidate changes and create a new common version.
In this short paper, we propose a solution for modifying a merge result. The goal is to create a meaningful merge result by adding connector nodes to the model at specific locations. This increases the amount of possible result models and reduces additional implementation effort.
While several service-based maintainability metrics have been proposed in the scientific literature, reliable approaches to collect these metrics automatically are lacking. Since static analysis is complicated for decentralized and technologically diverse microservice-based systems, we propose a dynamic approach that calculates such metrics from runtime data via distributed tracing. The approach focuses on simplicity, extensibility, and broad applicability. As a first prototype, we implemented a Java application with a Zipkin integrator, 23 different metrics, and five export formats. We demonstrated the feasibility of the approach by analyzing the runtime data of an example microservice-based system. During an exploratory study with six participants, 14 of the 18 services were invoked via the system's web interface. For these services, all metrics were calculated correctly from the generated traces.
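As an illustration of deriving a metric from runtime data, a simple trace-based coupling measure (the number of distinct services each service calls) might be computed as follows; the span representation and service names here are invented for illustration and do not reflect the prototype's actual Zipkin data model:

```python
from collections import defaultdict

def coupling_from_traces(calls):
    """Number of distinct services each service depends on, derived
    from (caller, callee) pairs observed in distributed traces."""
    deps = defaultdict(set)
    for caller, callee in calls:
        if caller != callee:          # ignore internal spans
            deps[caller].add(callee)
    return {svc: len(targets) for svc, targets in deps.items()}

# Hypothetical trace data: repeated calls count only once per dependency
calls = [("web", "orders"), ("web", "users"),
         ("orders", "payments"), ("web", "orders")]
print(coupling_from_traces(calls))
```

The appeal of the dynamic approach is visible even in this sketch: the metric needs no access to source code, only to the call relationships recorded at runtime.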
Microservices are a topic driven mainly by practitioners, and academia is only starting to investigate them. Hence, there is no clear picture of the usage of Microservices in practice. In this paper, we contribute a qualitative study with insights into the industry adoption and implementation of Microservices. Contrary to existing quantitative studies, we conducted interviews to gain a more in-depth understanding of the current state of practice. During 17 interviews with software professionals from 10 companies, we analyzed 14 service-based systems. The interviews focused on applied technologies, Microservices characteristics, and the perceived influence on software quality. We found that companies generally rely on well-established technologies for service implementation, communication, and deployment. Most systems, however, did not exhibit the high degree of technological diversity commonly expected with Microservices. Decentralization and product character differed for systems built for external customers. Applied DevOps practices and automation were still on a mediocre level, and only very few companies strictly followed the "you build it, you run it" principle. The impact of Microservices on software quality was mainly rated as positive. While maintainability received the most positive mentions, some major issues were associated with security. We present a description of each case and summarize the most important findings across companies of different domains and sizes. Researchers may build upon our findings and take them into account when designing industry-focused methods.
While the concepts of object-oriented antipatterns and code smells are prevalent in scientific literature and have been popularized by tools like SonarQube, the research field for service-based antipatterns and bad smells is not as cohesive and organized. The description of these antipatterns is distributed across several publications with no holistic schema or taxonomy. Furthermore, there is currently little synergy between documented antipatterns for the architectural styles SOA and Microservices, even though several antipatterns may hold value for both. We therefore conducted a Systematic Literature Review (SLR) that identified 14 primary studies. 36 service-based antipatterns were extracted from these studies and documented with a holistic data model. We also categorized the antipatterns with a taxonomy and implemented relationships between them. Lastly, we developed a web application for convenient browsing and implemented a GitHub-based repository and workflow for the collaborative evolution of the collection. Researchers and practitioners can use the repository as a reference, for training and education, or for quality assurance.
Background: Design patterns are supposed to improve various quality attributes of software systems. However, there is controversial quantitative evidence of this impact. Especially for younger paradigms such as service- and microservice-based systems, there is a lack of empirical studies.
Objective: In this study, we focused on the effect of four service-based patterns - namely process abstraction, service façade, decomposed capability, and event-driven messaging - on the evolvability of a system from the viewpoint of inexperienced developers.
Method: We conducted a controlled experiment with Bachelor students (N = 69). Two functionally equivalent versions of a service-based web shop - one with patterns (treatment group), one without (control group) - had to be changed and extended in three tasks. We measured evolvability by the effectiveness and efficiency of the participants in these tasks. Additionally, we compared both system versions with nine structural maintainability metrics for size, granularity, complexity, cohesion, and coupling.
Results: Both experiment groups were able to complete a similar number of tasks within the allowed 90 min. Median effectiveness was 1/3. Mean efficiency was 12% higher in the treatment group, but this difference was not statistically significant. Only for the third task, we found statistical support for accepting the alternative hypothesis that the pattern version led to higher efficiency. In the metric analysis, the pattern version had worse measurements for size and granularity while simultaneously having slightly better values for coupling metrics. Complexity and cohesion were not impacted.
Interpretation: For the experiment, our analysis suggests that the difference in efficiency is stronger with more experienced participants and increased from task to task. With respect to the metrics, the patterns introduce additional volume in the system, but also seem to decrease coupling in some areas.
Conclusions: Overall, there was no clear evidence for a decisive positive effect of using service-based patterns, neither for the student experiment nor for the metric analysis. This effect might only be visible in an experiment setting with higher initial effort to understand the system or with more experienced developers.
Software evolvability is an important quality attribute, yet one that is difficult to grasp. A certain base level of it is allegedly provided by service- and microservice-based systems, but many software professionals lack a systematic understanding of the reasons and preconditions for this. We address this issue via the proxy of architectural modifiability tactics. By qualitatively mapping principles and patterns of Service-Oriented Architecture (SOA) and microservices onto tactics and analyzing the results, we can not only generate insights into service-oriented evolution qualities but also provide a modifiability comparison of the two popular service-based architectural styles. The results suggest that both SOA and microservices possess several inherent qualities beneficial for software evolution. While both focus strongly on loose coupling and encapsulation, there are also differences in the way they strive for modifiability (e.g. governance vs. evolutionary design). To leverage the insights of this research, however, it is necessary to find practical ways to incorporate the results as guidance into the software development process.
Purpose – Many start-ups are in search of cooperation partners to develop their innovative business models. In response, incumbent firms are introducing more and more cooperation systems to engage with start-ups. However, many of these cooperations end in failure. Although qualitative studies on cooperation models have tried to improve the effectiveness of incumbents' start-up strategies, only a few have empirically examined start-up cooperation behavior. The paper aims to discuss these issues.
Design/methodology/approach – The findings draw on a series of qualitative and quantitative studies. The scale dimensions are identified in an interview-based qualitative study. Subsequent workshops and questionnaire-based studies identify factors and rank them. These ranked factors are then used to build a measurement scale that is integrated into a standardized online questionnaire addressed to start-ups. The gathered data are then analyzed using PLS-SEM.
Findings – The research was able to build a multi-item scale for start-ups' cooperation behavior that can be used in future research. The paper also provides a causal analysis of the impact of cooperation behavior on start-up performance. The research finds that the identified dimensions are suitable for measuring cooperation behavior and shows a minor positive effect on start-up performance.
Originality/value – The research fills the gap of lacking empirical research on the cooperation between start-ups and established firms. Moreover, most past studies addressing these cooperations focus on organizational structures and their performance. Although past studies identified start-up behavior as a relevant factor, no empirical research has been conducted on this topic yet.
Coupling the heat and power sectors is required as supply and demand in the German electricity mix, with its high share of renewable energy, drift further and further apart. Heat pumps in combination with thermal energy storage systems can be a useful way to couple the two sectors. This paper presents a hardware-in-the-loop test bench for the experimental investigation of optimized control strategies for heat pumps. 24-hour experiments are carried out to test whether the heat pump is able to follow optimized schedules generated by a MATLAB algorithm. The results show that the heat pump is capable of following the generated schedules; the maximum deviation of the operational time between schedule and experiment is only 3%. Additionally, the system can serve the demand for space heating and DHW at any time.
A distinctive highlight of the dissertation at hand is the investigation of multiple apparel supply chain actors incorporating the views of a global apparel retailer in Europe and multiple suppliers in Vietnam and Indonesia.
More specifically, the dissertation presents a coherent investigation, starting with a conceptual framework for social management strategies as a means of social risk management (SRM) aimed exclusively at the apparel industry. In line with the research gaps and research directions identified by the conceptual framework, the role of the apparel sourcing agent in social management strategies was analyzed through a multiple case study with evidence from Vietnam and Europe, ultimately suggesting ten propositions. A further multiple case study data collection in Vietnam, Indonesia, and Europe then allowed the investigation of buyer-supplier relationships with regard to social compliance strategies, using core tenets of agency theory to interpret the findings and outline ten further propositions. By developing a conceptual framework of social SSCM in the apparel industry, formulating the related 20 propositions with evidence from crucial developing (apparel sourcing) countries, and applying agency theory, which had been identified as a shortfall in this context, this thesis further grounds SSCM theory and contributes substantially to the debate by addressing numerous research gaps.
A large body of literature is concerned with models of presence, the sensory illusion of being part of a virtual scene, but there is still no general agreement on how to measure it objectively and reliably. For the presented study, we applied contemporary theory to measure presence in virtual reality. Thirty-seven participants explored an existing commercial game in order to complete a collection task. Two startle events were naturally embedded in the game progression to evoke physical reactions, and head tracking data was collected in response to these events. Subjective presence was recorded using a post-study questionnaire and real-time assessments. Our novel implementation of behavioral measures led to insights that could inform future presence research: we propose a measure in which startle reflexes are evoked through specific events in the virtual environment, and head tracking data is compared to the range and speed of baseline interactions.
In recent years, the parallel computing community has shown increasing interest in leveraging cloud resources for executing parallel applications. Clouds exhibit several fundamental features of economic value, like on-demand resource provisioning and a pay-per-use model. Additionally, several cloud providers offer their resources with significant discounts, albeit with limited availability. Such volatile resources are an auspicious opportunity to reduce the cost of computations and thus achieve higher cost efficiency. In this paper, we propose a cost model for quantifying the monetary costs of executing parallel applications in cloud environments that leverage volatile resources. Using this cost model, one can determine a configuration of a cloud-based parallel system that minimizes the total cost of executing an application.
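A minimal sketch of such a cost model, with hypothetical parameters (per-hour price, spot-style discount, and recomputation overhead after interruptions); this is an illustrative assumption, not the paper's actual model:

```python
def execution_cost(runtime_h, n_instances, price_per_h,
                   discount=0.0, interruption_overhead_h=0.0):
    """Total monetary cost of a parallel run: instance-hours times the
    (possibly discounted) price, plus recomputation time lost to
    interruptions of volatile (e.g. spot) resources."""
    effective_price = price_per_h * (1.0 - discount)
    billed_hours = (runtime_h + interruption_overhead_h) * n_instances
    return billed_hours * effective_price

# 8 on-demand instances for 2 h vs. the same job on 70%-discounted
# volatile instances that cost an extra 0.5 h of recomputation
on_demand = execution_cost(2.0, 8, 1.00)
volatile = execution_cost(2.0, 8, 1.00, discount=0.70, interruption_overhead_h=0.5)
print(on_demand, round(volatile, 2))
```

Even with interruption overhead, the discounted configuration is cheaper here, which is the kind of trade-off such a cost model lets one evaluate per configuration.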
Mystery shopping (MS) is a widely used tool to monitor the quality of service and personal selling. In consultative retail settings, the assessments of mystery shoppers are supposed to capture the most relevant aspects of salespeople's service and sales behavior. Given the important conclusions managers draw from MS results, the standard assumption seems to be that the assessments of mystery shoppers are strongly related to customer satisfaction and sales performance. However, surprisingly scant empirical evidence supports this assumption. We test the relationship between MS assessments, customer evaluations, and sales performance with large-scale data from three service retail chains. Surprisingly, we do not find a substantial correlation. The results show that mystery shoppers are not good proxies for real customers. While MS assessments are not related to sales, our findings confirm the established correlation between customer satisfaction measurements and sales results.
In this paper we describe an interactive web-based visual analysis tool for Formula One races. It first provides an overview of all races on a yearly basis in a calendar-like representation. From this starting point, races can be selected and visually inspected in detail. We support a dynamic race position diagram as well as a more detailed lap times line plot for comparing the drivers' lap times. Many interaction techniques are supported, like selection, filtering, highlighting, color coding, or details-on-demand. We illustrate the usefulness of our visualization tool by applying it to a Formula One dataset, describing the different dynamic visual racing patterns for a number of selected races and drivers.
Polyurethane-based block copolymers (TPCUs) are block copolymers with systematically varied soft and hard segments. They have been suggested as a material for chondral implants in joint regeneration. Such applications may require the adhesion of chondrocytes to the implant surface, facilitating cell growth while preserving their phenotype. The aims of this work were therefore (1) to modify the surface of soft, biostable polyurethane-based model implants (TPCU and TSiPCU) with high-molecular-weight hyaluronic acid (HA) using an optimized multistep immobilization strategy, and (2) to evaluate the bioactivity of the modified TPCUs in vitro. Our results show no cytotoxic potential of the TPCUs. Bioactive HA molecules (Mw = 700 kDa) were immobilized onto the polyurethane surface via polyethylenimine (PEI) spacers, and the modifications were confirmed by several characterization methods. Tests with porcine chondrocytes indicated the potential of the TPCU-HA for inducing enhanced cell proliferation.
Purpose – The purpose of this paper is to examine the mediating effect of psychological contract breach on the relationship between job insecurity and counterproductive workplace behavior (CWB) and the moderating effect of employment status in this relationship.
Design/methodology/approach – Data were collected from 212 supervisor–subordinate dyads in a large Chinese state-owned air transportation group. AMOS 17.0 software was used to examine the hypothesized predictions and the theoretical model.
Findings – The results showed that psychological contract breach partially mediates the effect of job insecurity on CWB, including organizational counterproductive workplace behavior and interpersonal counterproductive workplace behavior. In addition, the relationships between job insecurity, psychological contract breach and CWB differ significantly between permanent workers and contract workers.
Originality/value – The present study provides new insight into the linkage between job insecurity and negative work behaviors, as well as suggestions to managers on minimizing the harmful effects of job insecurity.
Organisationslernen
(2019)
Through organizational learning, organizations adapt to changing environmental demands (digitalization, political reforms, etc.). Organizations can increase their learning capacity by strengthening their dynamic capabilities through a low division of labor, scrutinizing their knowledge-absorption process, and creating structural and temporal ambidexterity. They can orient themselves toward the model of the learning organization and promote flat organizational structures and teamwork. Organizational learning offers useful starting points especially for public administrations, which currently lack sufficient learning capacity.
Data analytics tasks on large datasets are computationally intensive and often demand the compute power of cluster environments. Yet data cleansing, preparation, dataset characterization, and statistics or metrics computation are frequent steps. These are mostly performed ad hoc, in an explorative manner, and demand low response times. However, such steps are I/O-intensive and typically very slow due to low data locality and inadequate interfaces and abstractions along the stack, which result in prohibitively expensive scans of the full dataset and transformations at interface boundaries.
In this paper, we examine R as an analytical tool managing large persistent datasets in Ceph, a widespread cluster file system. We propose nativeNDP – a framework for Near Data Processing that pushes down primitive R tasks and executes them in situ, directly within the storage device of a cluster node. Across a range of data sizes, we show that nativeNDP is more than an order of magnitude faster than other pushdown alternatives.
Promoting combined heat and power (CHP) in small and medium-sized electroplating enterprises is a declared goal of the state of Baden-Württemberg and of the research project GalvanoFlex_BW. As a complex energy-efficiency measure, CHP places increased demands on the companies and their professional environment (consulting, service, trades, contracting). Barriers to implementing the technology are therefore found both within and outside the companies. The barriers to implementing CHP in electroplating stem from various causes, such as the high complexity of CHP technology, the difficulty of assessing its overall benefit within the company, insufficient staffing, or a lack of entrepreneurial decisions. Recommendations by the research partners for overcoming these barriers can be derived from the results of the accompanying social-science research.
Whether in private or professional everyday life, digital media accompany us almost everywhere today. They serve not only for entertainment but also help us carry out work processes more efficiently and productively. Yet human work has by no means become superfluous. Owing to rising requirements, the demand for qualified specialists is higher than ever, and employees must be able to keep pace with the rapid development of new products and technologies. High-quality education and training is therefore indispensable. From building media literacy in schools to vocational and professional education and continuing training, the use of digital technologies must be taught. Moreover, these technologies offer new potential for improving educational concepts and can also help increase learning success.
This thesis evaluates a VR-based learning environment and investigates possible effects of an embodied representation of a virtual instructor on learning success. To this end, a collaborative learning environment was implemented and subsequently used in a series of experiments with 16 participants. With regard to a possible increase in efficiency in independently completing assembly tasks after different types of instruction, no significant performance improvements were found.
Cleanable bag filters are used to separate dusts and dust-like substances. Owing to typical process conditions, they are subject to thermal, chemical, and mechanical stress during operation. The IGF project no. 18307, "Investigation of the chemical and thermal degradation of cleanable filter media and improvement of their resistance through surface modification", compared several test methods.
In recent years, the field of additive manufacturing has experienced an incredible boost. Low-cost consumer printers have spread through the so-called "maker scene", offering a wide range of new manufacturing possibilities. Exaggerated reporting on these possibilities has triggered a veritable hype. Despite the new opportunities for manufacturing products, the achievable results often fall short of expectations. On the one hand, this is due to part quality and the available materials; on the other hand, consumer printers predominantly use the FDM or DLP process, which results in a very limited range of processes.