A MATLAB toolbox was developed both for teachers performing quick experimental demonstrations during lectures and for students practicing measurement and frequency analysis procedures. The conceptual purpose was to support fundamental acoustics courses with contents defined by the DEGA recommendation 102. All implemented functions and parameters are visible at once and can be adjusted quickly via a GUI without submenus. A user manual explains how to get started and how all implemented functions can be applied. The toolbox probably still contains bugs; all users are welcome to inform the author about their experiences and proposals for improvement. In the future, it is planned to convert "Acoustics" to the MATLAB App Designer format, as MathWorks has announced that GUIDE will be replaced. Useful extensions would be additional tabs containing animations of sound propagation phenomena or of sound fields caused by different sources.
Real estate markets are known to fluctuate. The real estate market in Stuttgart, Germany, has been booming for more than a decade: square-meter prices have hit record levels, and real estate agents claim that market prices will continue to increase. In this paper, we test this market understanding by developing and analyzing a system dynamics model that depicts the Stuttgart real estate market. Simulating the model explains oscillating behavior as arising from significant time delays and endogenous feedback structures, and not necessarily from oscillating interest rates, as market experts assume. Scenarios provide insights into how the system behaves in response to changes exogenous to the model. The first scenario tests market development under increasing interest rates. The second scenario deals with possible effects on the real estate market if the regional automotive economy suffers from intense competition with new market players entering with alternative-fuel vehicles and new technologies. With a policy run, we test changes to the market structure intended to eliminate cyclical effects. The paper confirms that the business cycle in the Stuttgart real estate market arises from within the system’s underlying structure, thus emphasizing the importance of understanding feedback structures.
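The core mechanism the abstract describes, cycles generated by a delayed supply response inside a feedback loop rather than by external shocks, can be illustrated with a deliberately minimal stock-and-flow sketch. All structure and parameter values here are hypothetical stand-ins, not the authors' model:

```python
# Minimal system dynamics sketch: price responds to scarcity, construction
# starts respond to price, and completions lag starts by a fixed delay.
def simulate(years=60, delay=4):
    demand = 100.0             # households seeking housing (held constant)
    stock = 100.0              # housing units
    price = 1.0                # price index, 1.0 = balanced market
    pipeline = [0.0] * delay   # units under construction, one slot per year
    prices = []
    for _ in range(years):
        scarcity = demand / stock
        price += 0.5 * (scarcity - 1.0)          # price integrates excess demand
        starts = max(0.0, 5.0 * (price - 1.0))   # developers react to price
        pipeline.append(starts)
        stock += pipeline.pop(0) - 0.02 * stock  # delayed completions, 2% attrition
        prices.append(price)
    return prices

prices = simulate()
# With the construction delay in place, price overshoots and undershoots
# instead of settling smoothly: the cycle comes from the structure itself.
peaks = [i for i in range(1, len(prices) - 1)
         if prices[i - 1] < prices[i] > prices[i + 1]]
print(len(peaks))
```

Removing the delay (e.g. `delay=1`) damps the cycle noticeably, which mirrors the paper's point that the oscillation is endogenous.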
Entrepreneurs and small and medium-sized enterprises often struggle to develop new prototypes, explore new ideas, or test new techniques. To help them, academic Software Factories, a new concept of collaboration between universities and companies, have been developed in recent years. Software Factories provide a unique environment for students and companies. Students benefit from the possibility of working in a realistic work environment, learning how to apply state-of-the-art techniques and showing their skills to entrepreneurs. Companies benefit from a risk-free, protected environment in which they can develop new ideas. Universities, finally, benefit from this setup as an ideal setting for empirical studies in an industry-like environment. In this paper, we present the network of academic Software Factories in Europe, showing how companies have already benefited from existing Software Factories and reporting success stories. The results of this paper can help grow the network of factories and help other universities and companies set up similar environments to boost their local economies.
In daily life, people tend to use mental shortcuts to simplify and speed up their decision-making processes. A halo effect exists if the impression created by a dominant attribute influences how other attributes of an object or subject are judged. It involves a cognitive bias that leads to distorted assessments. However, the halo effect has barely been researched in a sports-related context, although it can substantially contribute to understanding how sport fans think and behave. The objective of this paper is to answer a question that is of interest for both the theory and practice of sports marketing: Is there a halo effect in sports? Does the sporting success or failure of a professional soccer team radiate onto or even outshine other sports-related and non-sports aspects and influence or distort how the club is perceived by its fans? Fans of six soccer clubs selected from the first German soccer league, the Bundesliga, were interviewed. This paper presents the results of an empirical study based on a data set consisting of a total of 4,180 cases. The results of the analyses substantiate that the sporting success or failure of their favorite club triggers a distortion of the fans’ perception with regard to a very diverse range of aspects.
Today, 40 Gbps transmission over four-pair balanced cabling is in development in IEEE 802.3bq. In this paper, we describe a 25 Gbps transmission experiment enabling either single-pair transmission of 25 Gbps over a 30-meter balanced cabling channel or 100 Gbps transmission via a four-pair balanced channel. A scalable matrix modeling tool is introduced which allows the prediction of the transmission characteristics of a channel, taking mode conversion into account. We applied this tool to characterize PCB channels, including the magnetics and the PCB, for a four-pair 100 Gbps transmission. We evaluated prototype cables and connecting hardware for frequencies up to 2 GHz and beyond. Finally, we investigated possible line encoding schemes and provide measurement results of a transmission over 30 m with a data rate of 25 Gbps per twisted pair.
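The kind of cascading such a matrix modeling tool performs can be sketched with ideal two-port ABCD matrices: each channel segment gets a matrix, the segments are multiplied together, and the cascade is converted to an S-parameter. The values below (100 Ω pair, 1 GHz, lossless lines, no mode conversion) are illustrative assumptions, not the paper's tool:

```python
import numpy as np

def abcd_line(z0, beta, length):
    """ABCD matrix of an ideal, lossless transmission-line segment."""
    bl = beta * length
    return np.array([[np.cos(bl), 1j * z0 * np.sin(bl)],
                     [1j * np.sin(bl) / z0, np.cos(bl)]])

z0 = 100.0                    # differential impedance of the pair, ohms
beta = 2 * np.pi * 1e9 / 2e8  # phase constant at 1 GHz, v = 2e8 m/s

# Cascade three 10 m segments of a hypothetical 30 m channel (matrix product).
chan = abcd_line(z0, beta, 10) @ abcd_line(z0, beta, 10) @ abcd_line(z0, beta, 10)

# Convert the cascaded ABCD matrix to S21 in a z0 reference system.
A, B, C, D = chan.ravel()
s21 = 2 / (A + B / z0 + C * z0 + D)
print(abs(s21))   # ~1.0: a lossless, matched channel passes everything
```

A real tool would use measured or modeled lossy, frequency-dependent segment matrices (and mixed-mode matrices to capture mode conversion), but the cascading step is the same matrix product.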
The 21st century: an era where emojis and hashtags find their way into every sentence, where taking selfies, live tweeting and mining bitcoin are the norm, and where Insta-culture dictates what we say and do. This is the era into which the digital native was born. With so many changes in every aspect of our lives, how is it that one of the most influential aspects, our education, has remained unchanged? Our education system not only fails to appeal to today’s students but, more importantly, fails to equip them with the skills required in the 21st century. It is thus no surprise that industries feel graduates entering the workplace lack skills in critical thinking, problem solving and self-directed learning. AI, machine learning and big data: tools and mechanisms we so eagerly incorporate to create smart factories, yet are hesitant to use elsewhere. Gamification and games have shown great results in education and training, with most research suggesting a stronger focus on personalization and adaptation. When combined with analytics and machine learning, the potential of games is yet to be realized. A real-time adaptive game would not only always present an appropriate degree of challenge for the individual but would also allow for a shift in focus from the recitation of facts to the application of information filtered to solve the particular problem at hand. South Africa, a country faced with a severe skills gap, could benefit greatly from games. If used correctly, they may just offer a desperately needed contribution toward equipping both current and future employees with the skills needed to survive in the 21st century. This paper explores the feasibility of using such games for enhanced knowledge dissemination and the upskilling of the workforce.
To date, special interest has been paid to composite scaffolds based on polymers enriched with hydroxyapatite (HA). However, the role of HA containing different trace elements such as silicate in the structure of a polymer scaffold has not yet been fully explored. Here, we report the potential use of silicate-containing hydroxyapatite (SiHA) microparticles and microparticle aggregates, in the predominant range from 2.23 to 12.40 μm, in combination with polycaprolactone (PCL) as a hybrid scaffold with randomly oriented and well-aligned microfibers for the regeneration of bone tissue. Chemical and mechanical properties of the developed 3D scaffolds were investigated with XRD, FTIR, EDX and tensile testing. Furthermore, the internal structure and surface morphology of the scaffolds were analyzed using synchrotron X-ray μCT and SEM. Upon culturing human mesenchymal stem cells (hMSC) on PCL-SiHA scaffolds, we found that both SiHA inclusion and microfiber orientation affected cell adhesion. The best hMSC viability was revealed at day 10 for the PCL-SiHA scaffolds with well-aligned structure (~82%). It is expected that these novel hybrid PCL scaffolds will improve tissue ingrowth in vivo due to the hydrophilic SiHA microparticles in combination with randomly oriented and well-aligned PCL microfibers, which mimic the structure of the extracellular matrix of bone tissue.
For a holistic assessment of the interaction between the human body and tight-fitting clothing, it is necessary to consider the mechanical properties of the body. Default avatars in CAD software are usually solid and do not take this interaction into account. For this purpose, a solid avatar is converted into a deformable one using the soft-body physics implementation in the simulation program Blender. The fit of a 3D garment on both avatars is compared, which allows a first evaluation of the differences between these approaches.
University graduates are one of the most important sources of young talent for companies. But how do you best reach these young students? Merely advertising a position on the company website is no longer enough. We show you how to become present in students' everyday reality (their relevant set) before the application process even begins, so that you are considered as an employer at all.
Managers recognize that software development project teams need to be developed and guided. Although technical skills are necessary, non-technical (NT) skills are equally, if not more, necessary for project success. Currently, there are no proven tools to measure the NT skills of software developers or software development teams. Behavioral markers (observable behaviors that have positive or negative impacts on individual or team performance) are beginning to be used successfully by the airline and medical industries to measure NT skill performance. The purpose of this research is to develop and validate a behavior marker system tool that can be used by different managers or coaches to measure the NT skills of software development individuals and teams. This paper presents an empirical study conducted at the Software Factory where users of the behavior marker tool rated video clips of software development teams. The initial results show that the behavior marker tool can be used reliably with minimal training.
This article analyses and compares the performance of regulators in the fields of finance and sport, especially cycling. I hypothesize that the course of crises or scandals is the best time to study the lessons of regulatory response. First, I take into account the differences between finance and cycling by looking at the nature of the rules and institutions governing each field. Second, I estimate the attention effect on new regulation in response to crises or scandals. The interest of the paper lies in the alignment of incentives to prevent regulatory capture and to ensure accountability and enforceability. The paper concludes that the differences hold important lessons that call for the reform of the rules and institutions governing finance and cycling alike.
A closed-loop control for a cooperative innovation culture in interorganizational R&D projects
(2022)
Since project managers have only limited authority in interorganizational R&D projects, a cooperative innovation culture is essential for team cohesion and thus for achieving the project scope on time and within cost. For its development, different factors depending on underlying values are essential. These factors must be learned iteratively by the project members so that they live the values of a cooperative innovation culture. Hence, this paper raises the following research question: “How can living the values of a cooperative innovation culture be controlled in interorganizational R&D projects?” To answer this question, a closed-loop control for a cooperative innovation culture is developed. The developed closed-loop control system includes several functional units, which represent essential roles, and several variables, which show what to consider and design in the control system. In addition, the developed closed-loop control system is generalized for other types of projects, such as intraorganizational projects.
Telemedicine is becoming an increasingly important approach to diagnosing, treating, or preventing diseases. However, the use of information and communication technologies in healthcare results in a considerable amount of data that must be transmitted efficiently and securely. Many manufacturers provide telemedicine platforms without regard for interoperability, mobility, and collaboration. This paper describes a collaborative mobile telemonitoring platform that can use the IEEE 11073 and HL7 communication standards or adapt proprietary protocols. The proposed platform also covers security and modularity aspects. Furthermore, this work introduces an Android-based prototype implementation.
Towards a sustainable future, looking beyond the system boundaries of a single manufacturing company is necessary to promote meaningful collaborations in terms of circular economy principles. In this context, digital data processing technologies that connect potential collaborators are seen as enablers for making use of proven collaborative circular business models (CCBMs). Since most such data processing technologies rely on features to describe the entities involved, it is essential to provide guidance for identifying and selecting the most relevant and appropriate ones. Defining critical success factors (CSFs) is considered a suitable instrument to describe these decisive factors. A systematic literature review (SLR), followed by a qualitative synthesis, investigates two scientific fields of work, namely (1) the generally relevant features of CCBMs and (2) methodologies for determining CSFs. This results in a conceptual framework that provides guidance for digital applications performing further digital processing based on the relevant CSFs relating to the specific CCBM.
Assistant platforms are becoming a key element of the business model of many companies. They have evolved from assistance systems that provide support when using information (or other) systems to platforms in their own right. Alexa, Cortana or Siri may be used with literally thousands of services. Against this background, this paper develops the notion of assistant platforms and elaborates a conceptual model that supports businesses in developing appropriate strategies. The model consists of three main building blocks: an architecture that depicts the components as well as the possible layers of an assistant platform, the mechanism that determines value creation on assistant platforms, and the ecosystem with its network effects, which emerge from the multi-sided nature of assistant platforms. The model has been derived from a literature review and is illustrated with examples of existing assistant platforms. Its main purpose is to advance the understanding of assistant platforms and to trigger future research.
A fast-transient current-mode buck-boost DC-DC converter for portable devices is presented. Running at 1 MHz, the converter provides a stable 3 V from a 2.7 V to 4.2 V Li-Ion battery. Small voltage under- and overshoot is achieved by two fast-transient techniques: (1) adaptive pulse skipping (APS) and (2) adaptive compensation capacitance (ACC). The proposed converter was implemented in a 0.25 μm CMOS technology. Load transient simulations confirm the effectiveness of APS and ACC. The improvements in voltage undershoot and response time at a light-to-heavy load step (100 mA to 500 mA) are 17 % and 59 %, respectively, in boost mode and 40 % and 49 %, respectively, in buck mode. Similar results are achieved for overshoot and response time at a heavy-to-light load step.
In smart factories, maintenance remains an important aspect of safeguarding production performance. Especially in the case of failures of machine components, diagnosis is a time-consuming task. This paper presents an approach for a cyber-physical failure management system that uses information from machines, such as programmable logic controller (PLC) or sensor data, and from IT systems to support the diagnosis and repair process. The key element is a model combining the different information sources to detect deviations and to determine the most probably failed component. Furthermore, the approach is prototypically implemented for leakage detection in compressed-air networks.
Type 1 diabetes is a chronic and life-threatening disease: an adjusted treatment and proper management of the disease are crucial to prevent or delay the complications of diabetes. Although the development of the artificial pancreas has brought great advances in diabetes care during the last decade, multiple daily injections therapy still represents the most widely used treatment option for type 1 diabetes. This work presents the proposal and first development stages of an application focused on guiding patients in using continuous glucose monitors and smart pens, together with insulin and carbohydrate recommendations. Our proposal aims to develop a platform that integrates a series of rigorously tested, innovative machine learning models and tools with the latest IoT devices to manage type 1 diabetes. The resulting system actually closes the loop, like the artificial pancreas, but in an intermittent way.
Modern markets are very dynamic. This situation requires agile enterprises that are able to react quickly to market influences. An enterprise’s IT is especially affected, because new or changed business models have to be realized. However, enterprise architectures (EA) are complex structures consisting of many artifacts and the relationships between them. Thus, analyzing an EA becomes a complex task for stakeholders. In addition, many stakeholders are involved in decision-making processes, because Enterprise Architecture Management (EAM) aims to provide a holistic view of the enterprise. In this article, we use concepts of Adaptive Case Management (ACM) to design a decision-making case that combines different analysis techniques to support stakeholders in decision-making. We exemplify the case with a scenario of a fictitious enterprise.
A methodology for designing planar spiral antennas with a feeding network embedded within a dielectric is presented. To avoid a purely academic result that could not be manufactured with available standard technologies, the approach takes manufacturing process requirements into account through the choice of materials used in the simulation. General design rules are provided. They encompass, amongst others, selection criteria for the dielectric material, aspects to consider when sketching the radiating element design, and aspects concerning the implementation of the feeding network. A rule of thumb has been found which may be helpful in determining the height of the antenna’s supporting substrate. The appeal of the method resides in the fact that it eases the design process and helps to minimize errors, saving time and money. The approach also enables the design of a compact, small-size spiral antenna as an antenna-in-package (AiP) and provides the opportunity to assemble the antenna with other RF components/systems on the same layer stack or on the same integration platform.
The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM (Expectation Maximization) clustering and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step, and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased, high-throughput use of the technology.
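The three-stage structure of such a pipeline can be sketched with off-the-shelf building blocks: `scipy.signal.find_peaks` as a simple stand-in for the SGLTR/LM peak-identification step, DBSCAN for the clustering step, and a random forest for the classification step. The synthetic spectrum and all parameters below are illustrative assumptions, not the evaluated implementations:

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# (i) Peak identification on a synthetic spectrum with two known peaks.
x = np.linspace(0, 10, 1000)
spectrum = (np.exp(-(x - 3) ** 2 / 0.01) + np.exp(-(x - 7) ** 2 / 0.01)
            + rng.normal(0, 0.01, x.size))
peaks, _ = find_peaks(spectrum, height=0.5, distance=50)

# (ii) Cluster the detected peak positions across measurements.
labels = DBSCAN(eps=0.5, min_samples=1).fit_predict(x[peaks].reshape(-1, 1))

# (iii) Multivariate classification on (synthetic) per-sample peak features.
X = rng.normal(0, 1, (100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

print(len(peaks), len(set(labels)), clf.score(X, y))
```

In a real workflow, step (ii) would aggregate peaks from many spectra so that cluster membership becomes the feature vector passed to step (iii).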
This paper presents the concept of the system architecture of a flexible cyber-physical factory control system. The system allows the automation of process structures using cyber-physical fractal nodes. These nodes have a functional and independent form and can be clustered to larger structures. This makes it possible to equip the factory with a flexible, freely scalable, modular system. The description of this system architecture and the associated rules and conditions is outlined in the concept.
The maintenance of railway infrastructure remains a challenge. Data acquisition technologies have evolved because of Industry 4.0, expanding the capabilities of predictive maintenance. Despite the advances, the potential of these emerging technologies has not been fully realised. This paper presents a technology selection framework in support of railway infrastructure predictive maintenance, which is based on qualitative methods. It consists of three stages, including the mapping of the infrastructure characteristics with the identified technologies, the evaluation of the most appropriate technologies, and the sourcing thereof. This presents the collective decision support output of the framework.
Intralogistics operations in automotive OEMs increasingly confront problems of overcomplexity caused by customer-centred production that requires customisation and, thus, high product variability, short-notice changes in orders and the handling of an overwhelming number of parts. To alleviate the pressure on intralogistics without sacrificing performance objectives, the speed and flexibility of logistical operations have to be increased. One approach to this is to utilise three-dimensional space through drone technology. This doctoral thesis aims at establishing a framework for implementing aerial drones in automotive OEM logistics operations.
As of yet, there is no research on implementing drones in automotive OEM logistics operations. To contribute to filling this gap, this thesis develops a framework for Drone Implementation in Automotive Logistics Operations (DIALOOP) that allows for close interaction between the strategic and the operative level and can lead automotive companies through a decision and selection process regarding drone technology.
A preliminary version of the framework was developed on a theoretical basis and then revised using qualitative-empirical data from semi-structured interviews with two groups of experts, i.e. drone experts and automotive experts. The drone expert interviews contributed a current overview of drone capabilities. The automotive expert interviews were used to identify intralogistics operations in which drones can be implemented, along with the performance measures that can be improved by drone usage.
Furthermore, all interviews explored developments and changes with a foreseeable influence on drone implementation.
The revised framework was then validated using participant validation interviews with automotive experts.
The finalised framework defines a step-by-step process that leads from strategic decisions and considerations, through the identification of logistics processes suitable for drone implementation and of the relevant performance measures, to the choice of appropriate drone types based on a drone classification developed specifically in this thesis for the automotive context.
Maintenance is an increasingly complex and knowledge-intensive field. In order to address these challenges, assistance systems based on augmented, mixed, or virtual reality can be applied. Therefore, the objective of this paper is to present a framework that can be used to identify, select, and implement an assistance system based on reality technology in the maintenance environment. The development of the framework is based on a systematic literature review and subject matter expert interviews. The framework provides the best technological and economic solution in several steps. The validation of the framework is carried out through a case study.
This work presents a fully integrated GaN gate driver in a 180 nm HV BCD technology that utilizes high-voltage energy storing (HVES) in an on-chip resonant LC tank, without the need for any external capacitor. It delivers up to 11 nC of gate charge at a 5 V GaN gate, which exceeds prior art by a factor of 45-83, supporting a broad range of GaN transistor types. The stacked LC tank occupies an area of only 1.44 mm², which corresponds to a superior value of 7.6 nC/mm².
Purpose
Supporting the surgeon during surgery is one of the main goals of intelligent ORs. The OR-Pad project aims to optimize the information flow within the perioperative area. A shared information space should enable appropriate preparation and provision of relevant information at any time before, during, and after surgery.
Methods
Based on previous work on an interaction concept and system architecture for the sterile OR-Pad system, we designed a user interface for mobile and intraoperative (stationary) use, focusing on the most important functionalities like clear information provision to reduce information overload. The concepts were transferred into a high-fidelity prototype for demonstration purposes. The prototype was evaluated from different perspectives, including a usability study.
Results
The prototype’s central element is a timeline displaying all available case information chronologically, such as radiological images, laboratory findings, or notes. This information space can be adapted for individual purposes (e.g., highlighting a tumor, filtering for one’s own material). With the mobile and intraoperative modes of the system, relevant information can be added, preselected, viewed, and extended during the perioperative process. Overall, the evaluation showed good results and confirmed the vision of the information system.
Conclusion
The high-fidelity prototype of the information system OR-Pad focuses on supporting the surgeon via a timeline making all available case information accessible before, during, and after surgery. The information space can be personalized to enable targeted support. Further development is reasonable to optimize the approach and address missing or insufficient aspects, like the holding arm and sterility concept or new desired features.
This paper presents a new European initiative to support the sustainable empowerment of the ageing society. Empowerment in this context represents the capability to lead a self-determined, autonomous and healthy life. The paper justifies the need for such an initiative and highlights the role that telemedicine and ambient assisted living can play in this environment.
Condition monitoring supported with artificial intelligence, cloud computing, and industrial internet of things (IIoT) technologies increases the feasibility of predictive maintenance. However, the cost of traditional sensors, data acquisition systems, and the required information technology expert-knowledge challenge the industry. This paper presents a hybrid condition monitoring system (CMS) architecture consisting of a distributed, low-cost IIoT-sensor solution. The CMS uses micro-electro-mechanical system (MEMS) microphones for data acquisition, edge computing for signal preprocessing, and cloud computing, including artificial neural networks (ANN) for higher-level information processing. The system's feasibility is validated using a testbed for reciprocating linear-motion axes.
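The split the abstract describes, lightweight feature extraction at the edge and an ANN for higher-level processing, can be sketched end to end with synthetic signals in place of MEMS microphone data. The signal model, band choices, and network size below are all hypothetical illustrations, not the validated CMS:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
fs = 8000
t = np.arange(0, 0.5, 1 / fs)          # 0.5 s frames at 8 kHz

def frame(faulty):
    # Hypothetical acoustics: a 50 Hz fundamental; a fault adds 120 Hz.
    s = np.sin(2 * np.pi * 50 * t)
    if faulty:
        s = s + 0.8 * np.sin(2 * np.pi * 120 * t)
    return s + rng.normal(0, 0.1, t.size)

def edge_features(s):
    # Edge-side preprocessing: energy in two coarse FFT bands.
    spec = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(s.size, 1 / fs)
    return [spec[(freqs > 30) & (freqs < 70)].sum(),
            spec[(freqs > 100) & (freqs < 140)].sum()]

X = np.array([edge_features(frame(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# "Cloud-side" ANN: a small multilayer perceptron on the compact features.
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0)).fit(Xtr, ytr)
print(ann.score(Xte, yte))
```

The point of the split is bandwidth: only the few band-energy features, not raw audio, need to leave the edge device.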
Data governance has been relevant for companies for a long time. Yet, in the broad discussion on smart cities, research on data governance in particular is scant, even though data governance plays an essential role in an environment with multiple stakeholders, complex IT structures and heterogeneous processes. Indeed, not only can a city benefit from the existing body of knowledge on data governance, but it can also make the appropriate adjustments for its digital transformation. Therefore, this literature review aims to spark research on urban data governance by providing an initial perspective for future studies. It provides a comprehensive overview of data governance and the relevant facets embedded in this strand of research. Furthermore, it provides a fundamental basis for future research on the development of an urban data governance framework.
The members of the European TRIZ Campus (ETC) have been learning from and working together with many honorable members of MATRIZ Official for many years and feel very connected to the official International TRIZ Association.
To further spread the TRIZ methodology and TRIZ teaching in Europe, the ETC has put a lot of thought over the past 12 months into how to make TRIZ accessible to a broader audience; getting more professionals in touch with the methodology was one of the focal points.
To this end, we have developed new formats such as the "Trainer Day" to support trainers on their way into practice. We have drawn up detailed quality guidelines for the teaching of the TRIZ methodology, which are intended to provide orientation for the design of training classes and documentation. We strive for exchange with representatives of "neighbouring" methods such as Six Sigma, Lean, DFMA and Design Thinking to indicate synergies and added value among methods and approaches of different kinds. We are testing formats for community building in order to connect users everywhere more strongly with the TRIZ methodology through communication and information offers. If TRIZ users feel alone in their organizations, exchange outside their organization helps them to keep up with the TRIZ methodology. Moreover, the ETC strives to increase the ability to communicate the benefits of TRIZ usage inside organizations. We discuss how to reach teachers and students of all ages to make the unique way of inventive thinking accessible to them.
In our paper, we want to give other MATRIZ Official members insights and share our experiences and best practices with our fellow MO members.
Enterprise Governance, Risk and Compliance (GRC) systems are key to managing the risks threatening modern enterprises from many different angles. A key constituent of GRC systems is the definition of controls that are implemented on the different layers of an Enterprise Architecture (EA). As part of the compliance aspect of GRC, the effectiveness of these controls is assessed and reported to the relevant management bodies within the enterprise. In this paper, we present a metamodel which links controls to the affected elements of an EA and supplies a way of expressing associated assessment techniques and results. We complement the metamodel with an expository instantiation in a cockpit for control compliance applied in an international enterprise in the insurance industry.
In recent years, machine learning algorithms have made huge advances in performance and applicability in industry, and especially in maintenance. Their application enables predictive maintenance and thus offers efficiency increases. However, a successful implementation of such solutions still requires high effort in data preparation to obtain the right information, interdisciplinary teams, and good communication with employees. Here, small and medium-sized enterprises (SME) often lack experience, competence and capacity. This paper presents a systematic and practice-oriented method for implementing machine learning solutions for predictive maintenance in SME, which has already been validated.
Parallel grippers offer multiple applications thanks to their flexibility. Their application field ranges from aerospace and automotive to medicine and communication technologies. However, grippers exhibit wear and errors during operation, which affects their performance. In this context, the remaining useful life (RUL) is the remaining lifespan until failure of an asset at a particular point in its operation. The exact lifespan of an asset is uncertain, so the RUL model and estimate must be derived from the available sources of information. This paper presents a method for estimating the RUL of a two-jaw parallel gripper. After an introduction to the topic, an overview of the existing literature and RUL methods is presented. Subsequently, the method for estimating the RUL of grippers is explained. Finally, the results are summarized and discussed before the outlook and further challenges are presented.
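A common baseline among the RUL methods such an overview would cover is linear extrapolation of a degradation trend to a failure threshold. The sketch below illustrates only this generic baseline idea; the health indicator, threshold, and least-squares fit are illustrative assumptions, not the paper's actual gripper model.

```python
def estimate_rul(times, health, failure_threshold):
    # Fit a linear degradation trend (ordinary least squares) to the
    # observed health indicator and extrapolate to the failure threshold.
    n = len(times)
    t_mean = sum(times) / n
    h_mean = sum(health) / n
    slope = sum((t - t_mean) * (h - h_mean) for t, h in zip(times, health)) / \
            sum((t - t_mean) ** 2 for t in times)
    if slope >= 0:
        return float("inf")  # no degradation trend observed yet
    intercept = h_mean - slope * t_mean
    t_fail = (failure_threshold - intercept) / slope
    # RUL = predicted failure time minus the current operating time
    return max(0.0, t_fail - times[-1])
```

For example, a health indicator dropping linearly from 1.0 by 0.1 per cycle reaches a threshold of 0.2 at cycle 8, so after 3 observed cycles the estimated RUL is 5 cycles.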
A millimeter-wave power amplifier concept in an advanced silicon germanium (SiGe) BiCMOS technology is presented. The goal of the concept is to investigate the impact of physical limitations of the used heterojunction bipolar transistors (HBTs) on the performance of a 77 GHz power amplifier. High-current behavior, collector-base breakdown and transistor saturation can be forced with the presented design. The power amplifier is manufactured in an advanced SiGe BiCMOS technology at Infineon Technologies AG with a maximum transit frequency fT of around 250 GHz for npn HBTs [1]. The simulation results of the power amplifier show a saturated output power of 16 dBm at a power added efficiency of 13%. The test chip is designed for a supply voltage of 3.3 V and requires a chip size of 1.448 x 0.930 mm².
Introduction
Despite its high accuracy, polysomnography (PSG) has several drawbacks for diagnosing obstructive sleep apnea (OSA). Consequently, multiple portable monitors (PMs) have been proposed.
Objective
This systematic review aims to investigate the current literature to analyze the sets of physiological parameters captured by a PM to select the minimum number of such physiological signals while maintaining accurate results in OSA detection.
Methods
Inclusion and exclusion criteria for the selection of publications were established prior to the search. The evaluation of the publications was made based on one central question and several specific questions.
Results
The abilities to detect hypopneas, sleep time, or awakenings were some of the features studied to investigate the full functionality of the PMs and to select the most relevant set of physiological signals. Based on the number of physiological parameters collected (one to six), the PMs were classified into sets according to the level of evidence. The advantages and disadvantages of each possible set of signals were explained by answering the research questions proposed in the methods.
Conclusions
The minimum number of physiological signals detected by PMs for the detection of OSA depends mainly on the purpose and context of the sleep study. The set of three physiological signals showed the best results in the detection of OSA.
Heat pumps are a vital element for reaching the greenhouse gas (GHG) reduction targets in the heating sector, but their system integration requires smart control approaches. In this paper, we first offer a comprehensive literature review and a definition of the term control for the described context. Additionally, we present a control approach which consists of an optimal scheduling module coupled with a detailed energy system simulation module. The aim of this integrated two-part control approach is to improve the performance of an energy system equipped with a heat pump, while recognizing the technical boundaries of the energy system in full detail. By applying this control approach to a typical family household situation, we illustrate that the integrated approach results in a more realistic heat pump operation and thus a more realistic assessment of the control performance, while still achieving lower operational costs.
In recent years, the number of hybrid work systems using human-robot collaboration (HRC) in industrial production environments has increased, enhancing productivity while reducing work-related strain. Despite the growing availability of HRC-suitable manipulation and safety technology, tools and techniques facilitating the design, planning and implementation process are still lacking. System engineers who strive to implement technically feasible, ergonomically meaningful and economically beneficial HRC applications need to make design and technology decisions in various subject areas, each offering numerous design alternatives. In this work, morphological analysis is applied to establish a description model that can serve both as a supporting design guideline for future HRC applications of value-adding, industrial quality and as a tool to characterize and compare existing applications. It focuses on HRC within assembly processes and illustrates the complexity of HRC applications in a comprehensible manner through its multi-dimensional structure. The morphology has been validated through its application to various existing industrial HRC applications and research demonstrators as well as through interviews with experts from academia.
3D morphable face models are a powerful tool in computer vision. They consist of a PCA model of face shape and colour information and allow a 3D face to be reconstructed from a single 2D image. 3D morphable face models are used for 3D head pose estimation, face analysis, face recognition, and, more recently, facial landmark detection and tracking. However, they are not as widely used as 2D methods - the process of building and using a 3D model is much more involved.
In this paper, we present the Surrey Face Model, a multi-resolution 3D morphable model that we make available to the public for non-commercial purposes. The model contains different mesh resolution levels and landmark point annotations as well as metadata for texture remapping. Accompanying the model is a lightweight open-source C++ library designed with simplicity and ease of integration as its foremost goals. In addition to basic functionality, it contains pose estimation and face frontalisation algorithms. With the tools presented in this paper, we aim to close two gaps. First, by offering different model resolution levels and fast fitting functionality, we enable the use of a 3D morphable model in time-critical applications like tracking. Second, the software library makes it easy for the community to adopt the 3D morphable face model in their research, and it offers a public place for collaboration.
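The core of any such morphable model is a linear PCA shape model: a face shape is the mean shape plus a weighted combination of principal components. A minimal sketch of this generic idea (not the Surrey Face Model's data layout or the accompanying C++ library's API):

```python
import numpy as np

def reconstruct_shape(mean_shape, shape_basis, coefficients):
    # 3D morphable model reconstruction: the flattened vertex vector is
    # the mean shape plus a linear combination of PCA shape components.
    # mean_shape: (3N,), shape_basis: (3N, k), coefficients: (k,)
    return mean_shape + shape_basis @ coefficients
```

Fitting then amounts to searching for the coefficients (and pose) whose reconstructed, projected shape best matches detected 2D landmarks.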
A new planar compact antenna composed of two crossed Cornu spirals is presented. Each Cornu spiral is fed from the center of the linear part of the curvature between the two spirals, which forms the clothoid. Sequential rotation is applied using a sequential phase network to obtain circular polarization and increase the effective bandwidth. Signal integrity issues have been addressed in the design to ensure high-quality signal propagation. As a result, the antenna shows good radiation characteristics in the bandwidth of interest. Compared to antennas of the same size in the literature, it is broadband and of high gain. Although the proposed antenna has been designed for K- and Ka-band operation, it can also be adapted to lower and higher frequencies because of the linearity of Maxwell's equations.
This article presents a modified method of performing power flow calculations as an alternative to purely energy-based simulations of off-grid hybrid systems. The enhancement consists of transforming the scenario-based power flow method into a discrete time-dependent algorithm that includes bus and controller dynamics.
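A discrete time-dependent power flow of this kind can be pictured as a per-timestep power balance at a bus combined with simple controller logic. The sketch below is an illustrative simplification (a single bus, one battery, a constant efficiency), not the authors' algorithm:

```python
def simulate(load_kw, pv_kw, soc0_kwh, cap_kwh, dt_h=1.0, eff=0.95):
    # At each timestep, balance generation against load at the bus:
    # surplus charges the battery, deficit discharges it; energy that
    # cannot be served within SOC limits is recorded as unserved.
    soc, unserved = soc0_kwh, []
    for p_load, p_pv in zip(load_kw, pv_kw):
        balance = p_pv - p_load            # bus surplus (+) / deficit (-)
        if balance >= 0:
            charge = min(balance * eff * dt_h, cap_kwh - soc)
            soc += charge
            unserved.append(0.0)
        else:
            discharge = min(-balance * dt_h / eff, soc)
            soc -= discharge
            unserved.append(max(0.0, -balance * dt_h - discharge * eff))
    return soc, unserved
```

The time loop is what distinguishes this from a scenario-based calculation: the battery state of charge carries information from one step to the next, so controller dynamics influence the result.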
Continuous manufacturing is becoming more important in the biopharmaceutical industry. This processing strategy is favorable as it is more efficient and flexible and has the potential to deliver higher and more consistent product quality. At the same time, it faces some challenges, especially in cell culture. As a steady state has to be maintained over a prolonged time, it is unavoidable to implement advanced process analytical technologies to control the relevant process parameters in a fast and precise manner. One such analytical technology is Raman spectroscopy, which has proven its advantages for process monitoring and control mostly in (fed-)batch cultivations. In this study, an in-line flow cell for Raman spectroscopy was included in the cell-free harvest stream of a perfusion process. Quantitative models for glucose and lactate were generated based on five cultivations originating from varying bioreactor scales. After successfully validating the glucose model (Root Mean Square Error of Prediction (RMSEP) of ∼0.2 g/L), it was employed for control of an external glucose feed in a cultivation with a glucose-free perfusion medium. The generated model was successfully applied to perform process control at 4 g/L and 1.5 g/L glucose, respectively, over several days, with a variability of ±0.4 g/L. The results demonstrate the high potential of Raman spectroscopy for advanced process monitoring and control of a perfusion process with a bioreactor- and scale-independent measurement method.
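The external glucose feed control described above can be pictured as a setpoint controller acting on the Raman-predicted concentration. A minimal sketch, assuming a simple proportional law with a rate limit; the study's actual control law is not specified in this abstract:

```python
def feed_rate(glucose_pred_g_l, setpoint_g_l, k_p=0.5, max_rate=10.0):
    # Proportional feed control on the Raman-predicted glucose value:
    # feed only when the predicted concentration drops below the
    # setpoint, capped at the pump's maximum rate. k_p and max_rate
    # are illustrative placeholders, not parameters from the study.
    error = setpoint_g_l - glucose_pred_g_l
    if error <= 0:
        return 0.0
    return min(k_p * error, max_rate)
```

In a running process, each new Raman spectrum would be converted to a glucose prediction by the quantitative model and passed to such a controller to update the feed pump.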
This paper presents a novel emulation concept for testing smart contracts and Distributed Ledger Technologies (DLT) in distributed control or energy economy tasks and use cases. The concept uses state-of-the-art behavioral modeling tools such as MATLAB Simulink and presents a possible way to overcome Simulink's inability to communicate with DLT nodes directly. This is solved through a middleware solution. After this, an example used to verify the test bed is presented and the target demonstration object is described. Finally, possible expansions of the system are discussed and presented.
A new two-dimensional fluorescence sensor system was developed for in-line monitoring of mammalian cell cultures. Fluorescence spectroscopy allows for the detection and quantification of naturally occurring intra- and extracellular fluorophores in the cell broth. The fluorescence signals correlate with the cells' current redox state and other relevant process parameters. Cell culture pretests with twelve different excitation wavelengths showed that only three wavelengths account for the vast majority of spectral variation. Accordingly, the newly developed device utilizes three high-power LEDs as excitation sources in combination with a back-thinned CCD spectrometer for fluorescence detection.
Cardiovascular diseases are directly or indirectly responsible for up to 38.5% of all deaths in Germany and thus represent the most frequent cause of death. At present, heart diseases are mainly discovered by chance during routine visits to the doctor or when acute symptoms occur. However, there is no practical method to proactively detect diseases or abnormalities of the heart in the daily environment and to take preventive measures for the person concerned. Long-term ECG devices, as currently used by physicians, are simply too expensive, impractical, and not widely available for everyday use. This work aims to develop an ECG device suitable for everyday use that can be worn directly on the body. For this purpose, an already existing hardware platform will be analyzed, and the corresponding potential for improvement will be identified. A precise picture of the existing data quality is obtained by metrological examination, and corresponding requirements are defined. Based on these identified optimization potentials, a new ECG device is developed. The revised ECG device is characterized by a high integration density and combines all components directly on one board except the battery and the ECG electrodes. The compact design allows the device to be attached directly to the chest. An integrated microcontroller allows digital signal processing without the need for an additional computer. Central features of the evaluation are a peak detection for detecting R-peaks and a calculation of the current heart rate based on the RR interval. To ensure the validity of the detected R-peaks, a model of the anatomical conditions is used. Thus, unrealistic RR-intervals can be excluded. The wireless interface allows continuous transmission of the calculated heart rate. Following the development of hardware and software, the results are verified, and appropriate conclusions about the data quality are drawn. 
As a result, a very compact and wearable ECG device with different wireless technologies, data storage, and evaluation of RR intervals was developed. Some tests yielded runtimes of up to 24 hours with wireless LAN activated and streaming.
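The R-peak detection and RR plausibility check described above can be sketched as a threshold-based peak detector with a refractory period, followed by physiological limits on the resulting RR intervals. The thresholds and limits below are illustrative assumptions, not the device's actual parameters:

```python
def detect_r_peaks(samples, fs, threshold, refractory_ms=200):
    # Local-maximum detector with an amplitude threshold; the refractory
    # period suppresses double detections of the same R-peak.
    peaks = []
    refractory = int(fs * refractory_ms / 1000)
    for i in range(1, len(samples) - 1):
        if (samples[i] > threshold
                and samples[i] >= samples[i - 1]
                and samples[i] > samples[i + 1]
                and (not peaks or i - peaks[-1] > refractory)):
            peaks.append(i)
    return peaks

def heart_rate_bpm(rr_intervals_ms, min_rr=300, max_rr=2000):
    # Discard physiologically implausible RR intervals (shorter than
    # 300 ms or longer than 2000 ms) before averaging, mirroring the
    # model-based plausibility check of the anatomical conditions.
    valid = [rr for rr in rr_intervals_ms if min_rr <= rr <= max_rr]
    if not valid:
        return None
    return 60000.0 / (sum(valid) / len(valid))
```

For an R-peak spacing of 800 ms, the averaged plausible RR intervals yield 75 bpm; an artefactual 100 ms interval in between is simply dropped.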
The chemical synthesis of polysiloxanes from monomeric starting materials involves a series of hydrolysis, condensation and modification reactions with complex monomeric and oligomeric reaction mixtures. Real-time monitoring and precise process control of the synthesis are of great importance to ensure reproducible intermediates and products and can readily be performed by optical spectroscopy. In chemical reactions involving rapid and simultaneous functional group transformations and complex reaction mixtures, however, the spectroscopic signals are often ambiguous due to overlapping bands, shifting peaks and changing baselines. The univariate analysis of individual absorbance signals is hence often only of limited use. In contrast, batch modelling based on the multivariate analysis of the time course of principal components (PCs) derived from the reaction spectra provides a more efficient tool for real-time monitoring. In batch modelling, not only single absorbance bands are used; instead, information over a broad range of wavelengths is extracted from the evolving spectral fingerprints and used for analysis. Thereby, process control can be based on the numerous chemical and morphological changes taking place during synthesis. "Bad" (or abnormal) batches can quickly be distinguished from "normal" ones by comparing the respective reaction trajectories in real time. In this work, FTIR spectroscopy was combined with multivariate data analysis for the in-line process characterization and batch modelling of polysiloxane formation. The synthesis was conducted under different starting conditions using various reactant concentrations. The complex spectral information was evaluated using chemometrics (principal component analysis, PCA). Specific spectral features at different stages of the reaction were assigned to the corresponding reaction steps. Reaction trajectories were derived based on batch modelling using a wide range of wavelengths.
Subsequently, complexity was reduced again to the most relevant absorbance signals in order to derive a concept for a low-cost process spectroscopic set-up which could be used for real-time process monitoring and reaction control.
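Batch modelling of this kind can be sketched as PCA score trajectories compared against the envelope of a "golden" reference batch. A minimal illustration using an SVD-based PCA; the chemometric software and the exact envelope criterion used in the study are not specified here, so the 3-sigma rule below is an assumption:

```python
import numpy as np

def fit_pca(spectra, n_components=2):
    # spectra: (n_times, n_wavelengths) matrix of reaction spectra.
    # PCA via SVD of the mean-centered data.
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean, vt[:n_components]

def trajectory(spectra, mean, components):
    # PC scores over reaction time form the batch trajectory.
    return (spectra - mean) @ components.T

def out_of_envelope(traj, ref_mean, ref_std, k=3.0):
    # Flag time points deviating more than k standard deviations from
    # the reference ("golden batch") trajectory on any component.
    return np.any(np.abs(traj - ref_mean) > k * ref_std, axis=1)
```

A new batch is then monitored by projecting each incoming spectrum onto the fitted components and checking its score against the reference envelope in real time.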
Documentation of clinical processes, especially in the perioperative area, is a basic requirement for quality of service. Nonetheless, documentation is a burden for the medical staff since it distracts from the clinical core process. An intuitive and user-friendly documentation system could increase documentation quality and reduce documentation workload. The optimal system solution would know what happened, and the person documenting the step would only need a single "confirm" button. In many cases, such a linear flow of activities is given as long as only one profession (e.g. anaesthesiology, scrub nurse) is considered, but even in such cases there might be deviations from the linear process flow, and further interaction is required.
Multilevel-cell (MLC) flash is commonly deployed in today's high-density NAND memories, but low latency and high reliability requirements mean it is rarely used in automotive embedded flash applications. This paper presents a time domain voltage sensing scheme that applies a dynamic voltage ramp at the cells' control gate (CG) in order to achieve fast and reliable sensing suitable for automotive applications.
This article is a review of the book "Brain Computation as Hierarchical Abstraction" by Dana H. Ballard, published by MIT Press in 2015. The book series Computational Neuroscience familiarizes the reader with the computational aspects of brain functions based on neuroscientific evidence. The book provides an excellent introduction to the functioning, i.e. the structure, the networks and the routines, of the brain in our daily life. The final chapters even discuss behavioral elements such as decision-making, emotions and consciousness. These topics are of high relevance in other sciences such as economics and philosophy. Overall, Ballard's book stimulates a scientifically well-founded debate and, more importantly, reveals the need for an interdisciplinary dialogue with the social sciences.
This paper is a brief review of the book 'Capital in the Twenty-First Century' by the French scholar Thomas Piketty. The book has started a new debate about inequality and capital taxation in Europe. It provides interesting empirical facts and develops a theory of the functioning of capitalist economies. However, I personally think the book is less convincing than recognized in the public debate. The theory of economic growth presented in the book is elusive and lacks a psychological and behavioral underpinning. In fact, I do think that increasing inequality and economic divergence are caused by capitalism, but the psychological and behavioral aspects of humans are of similar or greater significance. Therefore, Piketty's argument does not stimulate an open and scientifically founded debate in all aspects.
This paper is a commentary on the book 'Probability and Stochastic Processes' by Ionut Florescu. The book is an excellent introduction to both probability theory and stochastic processes. It provides a comprehensive discussion of the main statistical concepts, including the theorems and proofs. The introduction to probability theory is easily accessible and a perfect starting point for undergraduate students, even those majoring in subjects other than science, such as business or engineering. The book is also up to date because it includes programming code for simulations. However, the book has some weaknesses. It is less convincing in more advanced topics of stochastic theory, and it does not include solutions to exercises or recent research trends.
A seamless convergence of the digital and physical factory aiming at a personalized Product Emergence Process (PPEP) for smart products within the ESB Logistics Learning Factory at Reutlingen University.
A completely new business model with reference to Industrie 4.0, facilitated by 3D experience software in today's networked society, in which customers expect immediate responses, delightful experiences and simple solutions, is one of the mission scenarios in the ESB Logistics Learning Factory at ESB Business School (Reutlingen University).
The business experience platform provides software solutions for every organizational unit in the company and in the factory. An interface with dashboards, project management apps, 3D design and construction apps with high-end visualization, manufacturing and simulation apps, as well as intelligence and social network apps in a collaborative interactive environment helps the user learn an end-to-end value creation process for a personalized, first virtually and later physically produced product.
Instead of following traditional ways of working in a conventionally operating factory, real workers and robots work together semi-intuitively. The centerpiece of the self-planned interim factory is the smart personalized product, uniquely identifiable and locatable at all times during the production process: a scooter with an individually colored mobile phone holder for any smartphone, produced with a 3D printer in lot size one. Smart products will in future incorporate internet-based services, designed and manufactured at the cost of mass products. Additionally, the scooter is equipped with a retrievable declarative product memory. Monitoring and control are handled by sensor tags and a Raspberry Pi positioned on the product. The engineering design and implementation of a changeable production system are guided by a self-execution system that independently finds, among other things, suitable workplaces.
The competences imparted to students and professionals include the project management method SCRUM, the customization of workflows according to Industrie 4.0 principles, the enhancement of products with new personalized intelligent parts and self-programmed electrical and electronic components, the control of access to the product memory information, planning in a digital engineering environment, and the set-up of the physical factory to produce customer orders. The action-oriented experience gained relates to the chances and requirements of holistic digital and physical systems.
During the first years of the last decade, Egypt faced recurrent electricity cut-offs in summer. In the past few years, the electricity tariff has increased dramatically. Radiative cooling to the clear night sky is a renewable energy source that represents a potential remedy. The dry desert climate favors nocturnal radiative cooling applications. This study investigates the potential of nocturnal radiative cooling systems (RCSs) to reduce the energy consumption of the residential building sector in Egypt. The system technology proposed in this work is based on uncovered solar thermal collectors integrated into the building hydronic system. By implementing different control strategies, the same system could be used for both cooling and heating applications. The goal of this paper is to analyze the performance of RCSs in residential buildings in Egypt. The dynamic simulation program TRNSYS was used to simulate the thermal behavior of the system. The relevant issues of Egypt as a case study are first reviewed. The paper then introduces the work done to develop a building model that represents a typical residential apartment in Egypt. Typical occupancy profiles were developed to define the internal thermal gains. The control strategy adopted to optimize the system operation is presented as well. To fully understand and hence evaluate the operation of the proposed RCS, four simulation cases were considered: 1. a reference case (fully passive), 2. stand-alone operation of the RCS, 3. ideal heating and cooling operation (fully active), and 4. hybrid operation (the active cooling system supported by the proposed RCS). The analysis considered the three main distinct climates in Egypt, represented by the cities of Alexandria, Cairo and Asyut. Hotter and drier weather conditions resulted in a higher cooling potential and larger temperature differences. The simulated cooling power in Asyut was 28.4 W/m² for a 70 m² absorber field.
For a smaller field area of 10 m², the cooling power reached 109 W/m², but with modest temperature differences. To meet rigorous thermal comfort conditions, the proposed sensible RCS cannot fully replace conventional air-conditioning units, especially in humid areas like Alexandria. When working in a hybrid system, a 10% reduction in the active cooling energy demand could be achieved in Asyut to keep the cooling set-point at 24 °C. This percentage reduction was nearly doubled when the thermal comfort set-point was increased by two degrees (26 °C). In a sensitivity analysis, external shading devices as a passive measure as well as the implementation of the Egyptian code for buildings (ECP306/1–2005) were also investigated. The analysis of this study raised other relevant aspects to discuss, e.g. system sizing, environmental effects, limitations and recommendations.
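The order of magnitude of such nocturnal cooling powers can be checked against the idealized net radiative exchange between the absorber surface and the sky. A back-of-the-envelope sketch; convection and humidity effects are neglected, and the emissivity and temperatures below are illustrative assumptions, not values from the study:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiative_cooling(t_surface_k, t_sky_k, emissivity=0.9):
    # Idealized net radiative heat loss of an uncovered collector to the
    # night sky: q = eps * sigma * (T_surface^4 - T_sky^4).
    return emissivity * SIGMA * (t_surface_k ** 4 - t_sky_k ** 4)
```

For a 300 K surface facing a 285 K effective sky, this yields on the order of 75 W/m², i.e. the same range as the simulated cooling powers reported above, with the actual value depending strongly on sky conditions and flow configuration.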
Webinars have witnessed a resurgence in popularity, especially since the COVID-19 pandemic, and are now revealing themselves to be an actual shift in how buyers and sellers will do business in the future. Therefore, this study aims to investigate whether webinars will continue to be a valuable tool in B2B sales. Specifically, it aims to gain a deeper understanding of how the role of webinars has evolved in recent years and to analyze their future potential.
Webinars are an essential tool for a wide variety of use cases. While they have been around for over a decade, webinars have recently seen a resurgence in popularity. As the COVID-19 pandemic strictly limited contact between people and made working from home the only option for many, hosting and participating in webinars has become more common than ever - whether in business, education, or leisure.
Webinars can be effective for various purposes as they are held in real time and allow multiple engagement opportunities between attendees and hosts. Moreover, because of their remote nature, webinars are more cost-effective and time-saving to organize and supervise. Consequently, it is cheaper and more convenient to reach one's desired target group as a webinar host than by holding a seminar in physical form. Among other reasons, convenience and interaction seem to be the most potent aspects cementing webinars as a tool in the digital world. Nevertheless, where exactly are they used, and how do they create value in their respective usage fields?
Using predictive maintenance, more efficient processes can be implemented, leading to lower maintenance costs and increased availability. The development of a predictive maintenance solution currently requires high effort in time and capacity as well as, often, interdisciplinary cooperation. This paper presents a standardized model to describe a predictive maintenance use case. The description model is used to collect, present, and document the information required for the implementation of predictive maintenance use cases by and for different stakeholders. Based on this model, predictive maintenance solutions can be introduced more efficiently. The method is validated across departments in the automotive sector.
This paper examines the determinants of Google search in the banking area. The weekly Google data from 2004 to 2013 used for this study cover the 30 largest banks, the Federal Reserve, and the European Central Bank. To my knowledge, this is the first study on the determinants of Google data. First, the paper shows that Google searches are correlated with several performance variables and market data, such as asset prices and trading volume. Second, it demonstrates that banks' internal performance data have a major influence, whereas market data are rather insignificant. Moreover, it is shown that Google search for central banks is largely determined by the level of interest rates as well as the inflation and output gap. This is evidence that central bank attention is primarily driven by the policy targets. Accordingly, Google data can be applied to analyze the timely impact of monetary policy.
Additive manufacturing is a key technology which applies the ideas of Industry 4.0 in order to enable the economical production of personalized and highly customized products. However, small and medium-sized companies in particular often lack the competence and experience to objectively and thoroughly evaluate the potential of additive manufacturing technologies. The method presented here addresses this gap and has been validated in a small medical technology company by evaluating the additive manufacturing potential of an existing surgery tool.
The cloud has evolved into an attractive execution environment for parallel applications from the High Performance Computing (HPC) domain. Existing research recognized that parallel applications require architectural refactoring to benefit from cloud-specific properties (most importantly elasticity). However, architectural refactoring comes with many challenges and cannot be applied to all applications due to fundamental performance issues. Thus, during the last years, different cloud migration strategies have been considered for different classes of parallel applications. In this paper, we provide a survey of HPC cloud migration research. We investigate the approaches applied and the parallel applications considered. Based on our findings, we identify and describe three cloud migration strategies.
The development of automatic solutions for the detection of physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data allow access to these data faster and more efficiently. As a result, the development of artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, developing deep learning models for detecting obstructive sleep apnea (OSA) events is at the forefront. Numerous scientific studies focus on the architecture of the models and the results that these models can provide in terms of OSA classification and Apnea-Hypopnea Index (AHI) calculation. However, little focus is put on other aspects of great relevance that are crucial for the training and performance of the models. These include the set of physiological signals used and the preprocessing tasks prior to model training. This paper covers the essential requirements that must be considered before training a deep learning model for obstructive sleep apnea detection, and reviews the solutions that currently exist in the scientific literature by analyzing their preprocessing tasks prior to training.
The acquisition of data for reality mining applications is a critical factor, since many mobile devices, e.g. smartphones, must be capable of capturing the required data. Otherwise, only a small target group would be able to use the reality mining application. In the course of a survey, we have identified smartphone features which might be relevant for various reality mining applications. The survey classifies these features and shows how the support for each feature has changed over the years by analyzing 143 smartphones released between 2004 and 2015. All analyzed devices can be ranked by their number of provided features. Furthermore, this paper deals with quality issues which occurred while carrying out the survey.
Many organizations have identified the opportunities of big data analytics to support the business with problem-specific insights through the exploitation of generated data. Socio-technical solutions are developed in big data projects to reach competitive advantage. Although these projects are aligned to specific business needs, common architectural challenges are not addressed in a comprehensive manner. Enterprise architecture (EA) management is a holistic approach to tackling the complex business and IT architecture. The transformation of an organization's EA is influenced by big data projects and their data-driven approach on all layers. To enable strategy-oriented development of the EA, it is essential to synchronize these projects, supported by EA management. In this paper, we conduct a systematic review of big data literature to analyze which requirements for the EA management discipline are proposed. Thereby, a broad overview of existing research is presented to facilitate a more detailed exploration and to foster the evolution of the EA management discipline.
Blockchain is a technology for the secure processing and verification of data transactions based on a distributed peer-to-peer network that uses cryptographic processes, consensus algorithms, and backward-linked blocks to make transactions virtually immutable. Within supply chain management, blockchain technology offers potential for increasing supply chain transparency, visibility, automation, and efficiency. However, its complexity requires future employees to have comprehensive knowledge of the functionality of blockchain-based applications in order to be able to apply their benefits to scenarios in supply chains and production. Learning factories represent a suitable environment allowing learners to experience new technologies and to apply them to virtual and physical processes throughout value chains. This paper presents a concept to practically transfer knowledge about the technical functionality of blockchain technology to future engineers and software developers working within supply chains and production operations, in order to sensitize them to the advantages of decentralized applications. First, the concept proposes methods to playfully convey immutable backward-linked blocks and the embedding of blockchain smart contracts. Subsequently, the students use this knowledge to develop blockchain-based application scenarios by means of an exemplary product in a learning factory environment. Finally, the developed solutions are implemented with the help of a prototypical decentralized application, which enables a holistic mapping of supply chain events.
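The immutable backward-linked blocks conveyed in such a concept can be demonstrated with a minimal hash chain: each block stores the hash of its predecessor, so tampering with any block invalidates the rest of the chain. A teaching-oriented sketch (no consensus or networking, not the concept's actual implementation):

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    # Deterministic hash over the block's contents.
    payload = json.dumps([index, data, prev_hash]).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(index, data, prev_hash):
    return {"index": index, "data": data, "prev_hash": prev_hash,
            "hash": block_hash(index, data, prev_hash)}

def build_chain(events):
    # Backward linking: every block embeds its predecessor's hash.
    chain = [make_block(0, "genesis", "0" * 64)]
    for i, event in enumerate(events, start=1):
        chain.append(make_block(i, event, chain[-1]["hash"]))
    return chain

def is_valid(chain):
    # Verify both the backward links and each block's own hash;
    # altering any earlier block breaks validation downstream.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
        if cur["hash"] != block_hash(cur["index"], cur["data"],
                                     cur["prev_hash"]):
            return False
    return True
```

Recording supply chain events (e.g. "goods received") as block data and then editing one entry makes the chain fail validation, which is exactly the immutability property learners are meant to experience.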
Introduction: Telemedicine reduces greenhouse gas emissions (CO2eq); however, study results vary widely depending on the setting. This is the first study to focus on the effects of telemedicine on the CO2 footprint of primary care.
Methods: We conducted a comprehensive retrospective study to analyze total CO2eq emissions of kilometers (km) saved by telemedical consultations. We categorized prevented and provoked patient journeys, including pharmacy visits. We calculated CO2eq emission savings through primary care telemedical consultations in comparison to those that would have occurred without telemedicine. We used the comprehensive footprint approach, including all telemedical cases and the CO2eq emissions of the telemedicine center infrastructure. In order to determine the net CO2eq emissions avoided by the telemedical center, we calculated the emissions associated with the provision of telemedical consultations (including the total consumption of physicians' workstations) and subtracted them from the total of avoided CO2eq emissions. Furthermore, our calculation also considered patient cases that required an in-person visit after the telemedical consultation. We calculated the savings taking into account the source of the consumed energy (renewable or not).
Results: 433 890 telemedical consultations overall helped save 1 800 391 km in travel. On average, 1 telemedical consultation saved 4.15 km of individual transport and consumed 0.15 kWh. We detected savings in almost every cluster of patients. After subtracting the CO2eq emissions caused by the telemedical center, the data reveal savings of 247.1 net tons of CO2eq emissions in total and of 0.57 kg CO2eq per telemedical consultation. The comprehensive footprint approach thus indicated a reduced footprint due to telemedicine in primary care.
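The per-consultation figures follow directly from the reported totals; a quick arithmetic check using only the numbers stated above:

```python
# Totals reported in the results section.
consultations = 433_890        # telemedical consultations
km_saved_total = 1_800_391     # km of travel saved
net_savings_tons = 247.1       # net tons CO2eq after subtracting center emissions

km_per_consultation = km_saved_total / consultations
kg_per_consultation = net_savings_tons * 1000 / consultations

print(round(km_per_consultation, 2))   # 4.15 km saved per consultation
print(round(kg_per_consultation, 2))   # 0.57 kg CO2eq saved per consultation
```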
Discussion: Integrating a telemedical center into the health care system reduces the CO2 footprint of primary care medicine; this holds even in a densely populated country with comparatively low car use, such as Switzerland. The insights of this study complement previous studies that focused on narrower aspects of telemedical consultations.
This white paper builds a new financial theory of euro area sovereign bond markets under stress. The theory explains the abnormal bond pricing and increasing spreads during the recent market turmoil. We find that the strong disconnect of bond spreads from the respective bonds’ underlying fundamental values in 2010 was triggered by an increase in asymmetric information and weak reputation of government policies. Both factors cause a normal bond market to switch into a crisis mode. Finally, those markets are prone to self-fulfilling bubbles in which the economic effects are amplified by herding behaviour arising from animal spirits. Altogether, this produces contagious effects and multiple equilibria. Thus, we argue that government bond markets in a monetary union are more fragile and vulnerable to liquidity and solvency crises. Consequently, the systemic mispricing of sovereign debt creates more macroeconomic instability and bubbles in the euro area than in a single country. In other words, financial markets are partly blind to national default risks in a currency union. Therefore, the current European institutional framework puts the wrong incentives in place and needs structural changes soon. To tackle the root causes we suggest more market incentives via consistent rules, pre-emptive austerity measures in good economic times, and a resolution scheme for heavily indebted countries. In summary, our paper enhances the bond market theory and provides new insights into the recent bond market turmoil in Europe.
This project aims to evaluate existing big data infrastructures for their applicability in the operating room to support medical staff with context-sensitive systems. Requirements for the system design were generated. The project compares different data mining technologies, interfaces, and software system infrastructures with a focus on their usefulness in the peri-operative setting. The lambda architecture was chosen for the proposed system design, which will provide data for both postoperative analysis and real-time support during surgery.
This paper compares the influence that a video self-avatar and the lack of a visual body representation have on height estimation when standing at a virtual visual cliff. A height estimation experiment was conducted using a custom augmented reality Oculus Rift hardware and software prototype, also described in this paper. The results are consistent with previous research, demonstrating that the presence of a visual body influences height estimates, just as it has been shown to influence distance and affordance estimates.
In this work, a web-based software architecture and framework for the management and diagnosis of large amounts of medical data in an ophthalmologic reading center is proposed. Data management for multi-center studies requires merging master data with repeatedly gathered clinical evidence such as vital signs and raw data. Where ophthalmologic questions are involved, data acquisition is often performed by non-medical staff at the point of care or a study center, whereas the medical findings are mostly provided by an ophthalmologist in a specialized reading center. The study data, such as participants, cohorts, and measured values, are administered at a single data center for the entire study. Since a specialized reading center maintains several studies, the medical staff must learn a different data administration for each data center. With respect to the increasing number and size of clinical studies, two aspects must be considered. First, an efficient software framework is required to support data management, processing, and diagnosis by medical experts at the reading center. Second, this software needs a standardized user interface that does not have to be retrained, tailored, or adapted for each new study. Furthermore, different aspects of quality and security controls have to be included. Therefore, the objective of this work is to establish a multi-purpose ophthalmologic reading center that can be connected to different data centers via configurable data interfaces in order to treat various topics simultaneously.
Workflow-driven support systems in the peri-operative area have the potential to optimize clinical processes and to enable new situation-adaptive support systems. We started to develop a workflow management system supporting all involved actors in the operating theatre, with the goal of synchronizing the tasks of the different stakeholders by giving relevant information to the right team members. Using the OMG standards BPMN, CMMN, and DMN gives us the opportunity to bring established methods from other industries into the medical field. The system shows each addressed actor their information in the right place at the right time, ensuring that every member can execute their task in time for a smooth workflow. The system maintains the overall view of all tasks. Accordingly, we use a workflow management system comprising the Camunda BPM workflow engine to run the models, a middleware that connects different systems to the workflow engine, and graphical user interfaces to display necessary information and to interact with the system. The complete pipeline is implemented as a RESTful web service. The system is designed to integrate different systems, such as a hospital information system (HIS), via the RESTful web service easily and without loss of data. The first prototype has been implemented and will be expanded.
Three established test methods for evaluating the abrasion or wear resistance of textile materials were compared to gain deeper insight into the specific damage mechanisms and to assess the comparability of the results of the different tests. Knowledge of these mechanisms is necessary for the systematic development of finishing agents that improve the wear resistance of textiles. Martindale, Schopper, and Einlehner tests were used to analyze two different fabrics made of natural (cotton) or synthetic (polyethylene terephthalate) fibers, respectively. Samples were investigated by digital microscopy and scanning electron microscopy to visualize the damage. Damage symptoms are compared and discussed with respect to differences in the damage mechanisms.
Sleep is an essential part of human existence, as we are in this state for approximately a third of our lives. Sleep disorders are common conditions that can affect many aspects of life. Sleep disorders are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation, including performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate the use of a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during breathing. The presented processing algorithm filters the obtained signals and detects the presence of apnea events. The performance of the developed system and apnea detection algorithm (average accuracy, specificity, and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
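The general filter-then-threshold idea behind such respiratory processing can be illustrated on a synthetic trace. This is not the authors' algorithm; it is a minimal sketch in which a breathing pause is flagged when the filtered signal stays below an amplitude threshold for a minimum duration, with all parameters chosen arbitrarily for illustration.

```python
import math

def moving_average(signal, window):
    """Simple low-pass filter: average over a sliding window."""
    half = window // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def detect_pauses(signal, fs, threshold, min_duration_s):
    """Flag stretches where the amplitude stays below threshold long enough."""
    min_samples = int(min_duration_s * fs)
    events, run = [], 0
    for i, v in enumerate(signal):
        if abs(v) < threshold:
            run += 1
        else:
            if run >= min_samples:
                events.append((i - run, i))
            run = 0
    if run >= min_samples:
        events.append((len(signal) - run, len(signal)))
    return events

# Synthetic respiratory trace at 10 Hz: 30 s of breathing at 0.25 Hz,
# a 15 s pause (only sensor noise), then 30 s of breathing again.
fs = 10
breathing = [math.sin(2 * math.pi * 0.25 * t / fs) for t in range(30 * fs)]
pause = [0.02] * (15 * fs)
signal = breathing + pause + breathing

filtered = moving_average(signal, window=3)
events = detect_pauses(filtered, fs, threshold=0.1, min_duration_s=10)
print(len(events))  # 1 detected pause lasting at least 10 s
```

Real apnea scoring additionally needs artifact rejection and adaptive thresholds; this sketch only shows the structure of the detection step.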
Lean management has found its way into many companies. Lean concepts place new demands on the type and structure of the required cost information, which traditional cost accounting systems do not directly meet. Proponents of "lean accounting" therefore propose partly radical changes and a simplification of cost accounting. This article discusses the limitations of traditional cost accounting in implementing lean management and presents selected approaches to "accounting for lean". The analysis shows that lean accounting approaches are too narrowly focused and cannot adequately reflect the plurality of cost accounting functions found in practice. A radical redesign of existing cost accounting systems is therefore rejected as unrealistic and unfounded. The article develops alternative proposals for how lean management concepts and the cost information they require can be integrated into traditional cost accounting systems.
Glioblastoma WHO IV belongs to a group of brain tumors that are still incurable. A promising treatment approach applies photodynamic therapy (PDT) with hypericin as a photosensitizer. To generate a comprehensive understanding of the photosensitizer-tumor interactions, the first part of our study focuses on investigating the distribution and penetration behavior of hypericin in glioma cell spheroids by fluorescence microscopy. In the second part, fluorescence lifetime imaging microscopy (FLIM) was used to correlate fluorescence lifetime (FLT) changes of hypericin with environmental effects inside the spheroids. In this context, 3D tumor spheroids are an excellent model system since they feature 3D cell–cell interactions and an extracellular matrix similar to tumors in vivo. Our analytical approach considers hypericin as a probe molecule for FLIM and as a photosensitizer for PDT at the same time, making it possible to directly draw conclusions about the state and location of the drug in a biological system. Knowing both the state and the location of hypericin enables a fundamental understanding of the impact of hypericin PDT in brain tumors. Following different incubation conditions, the hypericin distribution in peripheral and central cryosections of the spheroids was analyzed. Both fluorescence microscopy and FLIM revealed a hypericin gradient towards the spheroid core for short incubation periods or small concentrations. On the other hand, a homogeneous hypericin distribution is observed for long incubation times and high concentrations. The observed FLT change is especially crucial for the PDT efficiency, since the triplet yield, and hence the O2 activation, is directly proportional to the FLT. Based on the FLT increase inside spheroids, an incubation time of 30 min is required to achieve the most suitable conditions for an effective PDT.
Estimating molar solubility from the Hildebrand-Scott relation employing Hansen solubility parameters (HSP) is widely presumed to be a valid semi-quantitative approach. To test this presumption and to quantitatively determine the inherent accuracy of such a solubility prognosis, l-ascorbic acid (LAA) was treated as an example of a commercially important solute. Analytical calculus and Monte Carlo (MC) simulation were performed for 20 common solvents with total HSP ranging from 14.5 to 33.0 (MPa)^0.5, utilizing validated material data. It was found that, due to the uncertainty of the material data used in the calculations, the solubility prediction showed large scatter and thus low precision.
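The MC approach of propagating material-data uncertainty can be sketched generically: perturb the solubility parameters with assumed noise and observe the scatter of the resulting activity term. This sketch uses the regular-solution form ln(gamma) = V * phi^2 * (d1 - d2)^2 / (R*T); all numeric values are illustrative placeholders, not the paper's LAA data.

```python
import math
import random

R = 8.314          # J/(mol*K), gas constant
T = 298.15         # K
V = 1.0e-4         # m^3/mol, molar volume of the solute (placeholder)
phi = 0.9          # solvent volume fraction (placeholder)
d_solvent = 26.0e3 # (J/m^3)^0.5, i.e. 26 MPa^0.5 (placeholder)
d_solute = 29.0e3  # placeholder
sigma = 0.5e3      # assumed uncertainty of each solubility parameter

random.seed(42)

def ln_gamma(d1, d2):
    """Regular-solution activity-coefficient term."""
    return V * phi**2 * (d1 - d2)**2 / (R * T)

# Draw solubility parameters from their assumed error distributions.
samples = [ln_gamma(random.gauss(d_solvent, sigma),
                    random.gauss(d_solute, sigma))
           for _ in range(10_000)]

mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean)**2 for s in samples) / len(samples))
print(std > 0.1 * mean)  # parameter noise alone produces noticeable scatter
```

Because ln(gamma) depends quadratically on the parameter difference, even moderate input uncertainty yields a wide spread, which mirrors the paper's finding of low prediction precision.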
This paper presents a modular and scalable power electronics concept for motor control with continuous output voltage. In contrast to multilevel concepts, modules with continuous output voltage are connected in series. The continuous output voltage of each module is obtained by using gallium nitride (GaN) high electron mobility transistors (HEMTs) as switches inside the modules, with a switching frequency in the range between 500 kHz and 1 MHz. Due to this high switching frequency, an LC filter is integrated into the module, resulting in a continuous output voltage. A main topic of the paper is the active damping of this LC output filter for each module and the analysis of the damping behaviour of the series connection. The results are illustrated with simulations and measurements.
This paper addresses the question of which current consumer lifestyle segmentation methods exist for particular European countries and for Europe as a whole. This is important for corporations seeking to position their products accurately through consumer-oriented marketing in view of the constant change of values and attitudes. Based on research of current literature, internet sources, and documents, the state of the science is presented through a detailed description of the most popular lifestyle segmentation methods used in European countries. In addition, these instruments are discussed individually and then compared to each other. All instruments (the Sinus-Milieus, Euro-Socio-Styles, Roper-Consumer-Styles, RISC, and Mosaic) serve the same purpose, even though they differ considerably from each other. Each market research company has its own method of generating its model, as well as its own segments and definitions for them. Furthermore, every segmentation method is illustrated in a different way. This paper presents all these instruments in detail and shows their advantages and disadvantages. Summing up the literature research concerning the main research question: there are several models segmenting consumers into different lifestyle groups, e.g., in Germany, France, or Great Britain, but still fewer models referring to the entire European market.
The fast-moving process of digitization demands flexibility in order to adapt to rapidly changing business requirements and newly emerging business opportunities. New features have to be developed and deployed to the production environment much faster. To cope with this increased velocity and pressure, many software-developing companies have switched to a Microservice Architecture (MSA) approach. Applications built this way consist of several fine-grained and heterogeneous services that are independently scalable and deployable. However, the technological and business architectural impacts of microservice-based applications directly affect their integration into the digital enterprise architecture. As a consequence, traditional Enterprise Architecture Management (EAM) approaches are not able to handle the extreme distribution, diversity, and volatility of micro-granular systems and services. We are therefore researching mechanisms for dynamically integrating large numbers of microservices into an adaptable digital enterprise architecture.
Adaptation of the business model canvas template to develop business models for the circular economy
(2021)
The Business Model Canvas, as a template for strategic management, supports the development of new and the documentation of existing linear business models. However, the change towards a Circular Economy requires new value creation structures and thus changed business models. To develop business models for circular economies, it is necessary to adapt the existing template, since the actors involved along the value chain take on changed roles. In this paper, a template based on the existing Business Model Canvas is presented, which makes it possible to develop and document business models for a Circular Economy.
Big data and cloud systems are increasingly used by mobile, user-centered, and agilely changeable information systems in the context of digital social networks. Metaphors from biology for living and self-healing systems and environments provide the basis for intelligent adaptive information systems and for the associated service-oriented digital enterprise architectures. We report on our research into the structures and mechanisms of adaptive digital enterprise architectures for the development and evolution of service-oriented ecosystems and their technologies, such as big data, services & cloud computing, web services, and semantic support. For our current research, we use practice-relevant SmartLife scenarios for the development, maintenance, and evolution of future-proof service-oriented information systems. These systems use a rapidly growing number of external and internal services and focus on the specifics of evolving information systems for integrated big data and cloud contexts. Our research approach deals with the systematic and holistic modeling of adaptive digital enterprise architectures, following standardized reference models and standards-based reference architectures that can be adapted more easily to special usage scenarios, smaller application contexts, or new contexts. To enable semantics-supported analyses for the decision support of system and enterprise architects, we extend our existing reference model for IT enterprise architectures, ESARC (Enterprise Services Architecture Reference Cube), with agile mechanisms for adaptation and consistency handling, as well as the associated metamodels and ontologies for digital enterprise architectures, covering new aspects such as big data and cloud contexts.
Enterprise architecture (EA) is useful for effectively structuring digital platforms in the course of digital transformation in information societies. Moreover, digital platforms in the healthcare industry accelerate and increase the efficiency of drug discovery and development processes. However, knowledge about the relationship between EA and digital platforms is lacking, despite the need for it. In this paper, we investigated and analyzed the process of drug design and development within the healthcare industry, together with related work on an enterprise architecture framework for the digital era named the Adaptive Integrated Digital Architecture Framework (AIDAF), which specifically supports the design of digital platforms in this domain. Based on this analysis, we evaluate a method and propose a new reference architecture for promoting digital platforms in the healthcare industry, with future aspects making effective use of Artificial Intelligence (AI). The practical and theoretical contributions include: (1) streamlined processes through digital platforms in organizations; (2) informal knowledge supply and sharing among organizational members through digital platforms; (3) efficiency and effectiveness in planning production and business for drug development. The findings indicate that EA with digital platforms using the AIDAF contributes to digital transformation with effectiveness for new drugs in the healthcare industry.
On 12 June, the 2014 FIFA World Cup in Brazil kicks off with the opening match between Brazil and Croatia. But another contest began much earlier: the battle of the kit suppliers. Adidas and Nike are engaged in a fierce fight over the jerseys and boots of the stars, and fairness is often a minor concern. This article provides an insight into the "war of boots and jerseys".
The extracellular matrix (ECM) is the non-cellular part of tissues and represents the natural environment of the cells. Next to structural stability, it provides various physical, chemical, and mechanical cues that strongly regulate and influence cellular behavior and are required for tissue morphogenesis, differentiation, and homeostasis. Due to its promising characteristics, ECM is used in a wide range of tissue engineering and regenerative medicine approaches as a biomaterial for coatings and scaffolds. To date, there are two sources of ECM material. First, native ECM is generated by removing the residing cells of a tissue or organ (decellularized ECM; dECM). Second, cell-derived ECM (cdECM) can be generated by and isolated from in vitro cultured cells. Although both types of ECM have been used intensively for tissue engineering and regenerative medicine approaches, studies directly characterizing and comparing them are rare. Hence, in the first part of this thesis, dECM from adipose tissue and cdECM from stem cells and adipogenically differentiated stem cells from adipose tissue (ASCs) were characterized with respect to their macromolecular composition, structural features, and biological purity. The dECM was found to exhibit higher levels of collagens and lower levels of sulfated glycosaminoglycans compared to cdECMs. Structural characterization revealed an immature state of collagen fibers in cdECM samples. The obtained results revealed differences between the two ECMs that can relevantly impact cellular behavior and subsequently the experimental outcome and should therefore be considered when choosing a biomaterial for a specific application. The establishment of a functional vascular system in tissue constructs to realize an adequate nutrient supply remains challenging. In the second part, the supporting effect of cdECM on the self-assembled formation of prevascular-like structures by microvascular endothelial cells (mvECs) was investigated.
It could be observed that cdECM, especially adipogenically differentiated cdECM, enhanced the formation of prevascular-like structures. An increased concentration of proangiogenic factors was found in cdECM substrates. The demonstration of cdECM's capability to induce the spontaneous formation of prevascular-like structures by mvECs highlights cdECM as a promising biomaterial for adipose tissue engineering. Depending on the purpose of the ECM material, chemical modification might be necessary. In the third and last part, the chemical functionalization of cdECM with dienophiles (terminal alkenes, cyclopropene) by metabolic glycoengineering (MGE) was demonstrated. MGE allows the chemical functionalization of cdECM via the natural metabolism of the cells and without affecting the chemical integrity of the cdECM. The incorporated dienophile groups can be specifically addressed via a catalyst-free, cell-friendly inverse electron-demand Diels-Alder reaction. Using this system, the successful modification of cdECM from ASCs with an active enzyme could be shown. The possibility of modifying cdECM via a cell-friendly chemical reaction opens up a wide range of possibilities to improve cdECM depending on the purpose of the material. Altogether, this thesis highlighted the differences between adipose dECM and cdECM from ASCs and demonstrated that cdECM is a promising alternative to native dECM for application in tissue engineering and regenerative medicine approaches.
Tissue constructs of physiologically relevant scale require a vascular system to maintain cell viability. However, in vitro vascularization of engineered tissues is still a major challenge. Successful approaches are based on a feeder layer (FL) to support vascularization. Here, we investigated whether the supporting effect on the self‐assembled formation of prevascular‐like structures by microvascular endothelial cells (mvECs) originates from the FL itself or from its extracellular matrix (ECM). Therefore, we compared the influence of ECM, either derived from adipose‐derived stem cells (ASCs) or adipogenically differentiated ASCs, with the classical cell‐based FL. All cell‐derived ECM (cdECM) substrates enabled mvEC growth with high viability. Prevascular‐like structures were visualized by immunofluorescence staining of endothelial surface protein CD31 and could be observed on all cdECM and FL substrates but not on control substrate collagen I. On adipogenically differentiated ECM, longer and higher branched structures could be found compared with stem cell cdECM. An increased concentration of proangiogenic factors was found in cdECM substrates and FL approaches compared with controls. Finally, the expression of proteins associated with tube formation (E‐selectin and thrombomodulin) was confirmed. These results highlight cdECM as promising biomaterial for adipose tissue engineering by inducing the spontaneous formation of prevascular‐like structures by mvECs.
The aim of this work is to establish and generalize a relationship between fractional partial differential equations (fPDEs) and stochastic differential equations (SDEs) for a wider class of stochastic processes, including fractional Brownian motions and sub-fractional Brownian motions with Hurst parameter H ∈ (1/2,1). We start by establishing the connection between a fPDE and an SDE via the Feynman-Kac theorem, which provides a stochastic representation of a general Cauchy problem. Subsequently, we extend this connection by assuming SDEs driven by fractional and sub-fractional Brownian motions and prove generalized Feynman-Kac formulas under a (sub-)fractional Brownian motion. An application of the theorem demonstrates, as a by-product, the solution of a fractional integral, which has relevance in probability theory.
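The classical connection being generalized can be stated as follows (standard Brownian case, in the usual textbook notation, which need not coincide with the paper's symbols). For the SDE $dX_s = \mu(X_s,s)\,ds + \sigma(X_s,s)\,dW_s$ with standard Brownian motion $W$, the Feynman-Kac theorem says that the Cauchy problem

```latex
\frac{\partial u}{\partial t}
  + \mu(x,t)\,\frac{\partial u}{\partial x}
  + \frac{1}{2}\,\sigma^2(x,t)\,\frac{\partial^2 u}{\partial x^2}
  - r(x,t)\,u = 0,
\qquad u(x,T) = \psi(x),
```

admits the stochastic representation

```latex
u(x,t) = \mathbb{E}\!\left[
  \exp\!\Big(-\int_t^T r(X_s,s)\,ds\Big)\,\psi(X_T)
  \,\Big|\, X_t = x \right].
```

The work replaces $W$ by (sub-)fractional Brownian motions, which turns the spatial operator into a time- and Hurst-parameter-dependent one and yields the fPDE counterpart.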
Bone tissue is highly vascularized. The crosstalk of vascular and osteogenic cells is not only responsible for the formation of the strongly divergent tissue types but also for their physiological maintenance and repair. Extrusion-based bioprinting presents a promising fabrication method for bone replacement. It allows for the production of large-volume constructs, which can be tailored to individual tissue defect geometries. In this study, we used the all-gelatin-based toolbox of methacryl-modified gelatin (GM), non-modified gelatin (G) and acetylated GM (GMA) to tailor both the properties of the bioink towards improved printability, and the properties of the crosslinked hydrogel towards enhanced support of vascular network formation by simple blending. The vasculogenic behavior of human dermal microvascular endothelial cells (HDMECs) and human adipose-derived stem cells (ASCs) was evaluated in the different hydrogel formulations for 14 days. Co-culture constructs including a vascular component and an osteogenic component (i.e. a bone bioink based on GM, hydroxyapatite and ASCs) were fabricated via extrusion-based bioprinting. Bioprinted co-culture constructs exhibited functional tissue-specific cells whose interplay positively affected the formation and maintenance of vascular-like structures. The setup further enabled the deposition of bone matrix associated proteins like collagen type I, fibronectin and alkaline phosphatase within the 30-day culture.
Nowadays, software development plays an important role in the entire value chain of production machine and plant engineering. An important component for the rapid development of high-quality software is virtual commissioning: the real machine is described on the basis of simulation models, so the control software can be verified at an early stage using these models. Since production machines are produced highly individually or in very small series, the challenge of virtual commissioning is to reduce the effort of developing simulation models. A systematic reuse of the simulation models and the control software for different variants of a machine is therefore essential for economic use. This necessarily requires a consideration of the variability which may occur between the production machines. This paper analyzes the question of how to systematically deal with software-related variability in the context of virtual commissioning. For this purpose, the characteristics of virtual commissioning and variability handling are first considered. Subsequently, the requirements for a so-called variant infrastructure for virtual commissioning are analyzed and possible solutions are discussed.
With the digital transformation, companies will experience a change that focuses on shaping the organization into an agile organizational form. In today's competitive and fast-moving business environment, it is necessary to react quickly to changing market conditions. Agility represents a promising option for overcoming these challenges. The path to an agile organization is a development process that requires consideration of countless levels of the enterprise. This paper examines the impact of digital transformation on agile working practices and the benefits that can be achieved through technology. To cope with today's so-called VUCA (Volatility, Uncertainty, Complexity and Ambiguity) world, agile ways of working can be applied, but project management requires adaptation. In the qualitative study, expert interviews were conducted and analyzed using the grounded theory method. As a result, a model can be presented that shows the influencing factors and potentials of agile management in the context of the digital transformation of medium-sized companies.
Digitalization changes manufacturing dramatically. In view of employees' demands, global trends, and the technological vision of future factories, automotive manufacturing faces a huge number of diverse challenges. Currently, research focuses on the technological aspects of future factories in terms of digitalization. New ways of working and new organizational models for future factories have not been described yet. There are assumptions on how to develop the organization of work in a future factory, but up to now, the literature shows deficits in scientifically substantiated answers in this research area. Consequently, the objective of this paper is to present an approach to work organization design for automotive Industry 4.0 manufacturing. Future requirements were analyzed and condensed into criteria that determine future agile organization design. These criteria were then transformed into functional mechanisms, which define the approach for shopfloor organization design.
With significant advancements in digital technologies, firms find themselves competing in an increasingly dynamic business environment. It is of paramount importance that organizations undertake proper governance mechanisms with respect to their business and IT strategies. Therefore, IT governance (ITG) has become an important factor for firm performance. In recent years, agility has evolved as a core concept for governance, especially in the area of software development. However, the impact of agility on ITG and firm performance has not been analyzed by the broad scientific community. This paper focuses on the question, how the concept of agility affects the ITG–firm performance relationship. The conceptual model for this question was tested by a quantitative research process with 400 executives responding to a standardized survey. Findings show that the adoption of agile principles, values, and best practices to the context of ITG leads to meaningful results for governance, business/IT alignment, and firm performance.
Companies that rigidly refuse to make any adaptations to their structures cannot survive the intensified competition of day-to-day business. This article explains in more detail why agility is a suitable tool for implementing innovative adaptation and why it acts as a catalyst in future-oriented companies.
The production environment faces numerous challenges, but it also offers many new opportunities. To meet the new requirements arising from the trend toward mass customization, human-robot cooperation (HRC) has been identified as a key technology and is becoming increasingly important. HRC combines the strengths of robots, such as reliability, endurance, and repeatability, with the strengths of humans, such as flexibility and decision-making skills. Despite the high potential of HRC applications, the technology has not yet achieved a breakthrough in production. Studies have shown that one of the biggest obstacles to implementing HRC is the allocation of tasks. Another key technology that offers various opportunities to improve the production environment is Artificial Intelligence (AI). This paper therefore describes an AI-supported method to improve work organization in HRC with regard to task allocation. The aim of this method is to build a dynamic, semi-autonomous group work environment that keeps not only employee motivation high but also product quality, thanks to a reduced failure rate. The AI helps to detect the condition in which the employee delivers the best performance and to identify the moment when the worker leaves this optimal state. As soon as the employee reaches this trigger event, the allocation of tasks adapts based on the identified stress. This adaptation aims to return the employee to the state of optimal performance. To realize such a dynamic allocation, the method describes the creation of a pool of various interaction scenarios as well as the AI-supported recognition of the defined trigger event.
The workshop aims to discuss leading-edge contributions to the interdisciplinary research area of ambient intelligence (AmI) applied to the domains of telemedicine and driving assistance. AmI refers to human-centered environments equipped with sensors. The development of AmI in the workshop's two application domains shares several commonalities: the extensive usage of networked devices and sensors, the design of artificial intelligence algorithms for diagnosis, including recommendation systems and qualitative reasoning, and the application of mobile and wireless communication to their distributed systems. Alongside presenting common aspects of ambient intelligence, a further goal of the workshop is to stimulate synergies between the two application domains and present examples. The telemedicine domain can benefit from methodologies for designing complex devices, real-time-conformant system design, and audiovisual or computer-vision system design used in automotive driving assistance. Conversely, the automotive domain can benefit from the user-centric view, biometric sensor data design, and multi-user databases for aggregation and diagnosis using big data, as used in telemedicine. The German Government supports these research lines in its Hightech-Strategie under the domains “Health and Nutrition” and “Climate and Energy”. In Spain, the corresponding framework is the “Spanish Program for R&D Challenge-Oriented Society – Challenge in safe, efficient and clean energy & Challenge in sustainable, smart and integrated transport”. Scientific contributions to the event are peer-reviewed by a suitable program committee with members from Germany and Spain. The same committee has served the JARCA workshop (Jornadas sobre Sistemas cualitativos y sus Aplicaciones en Diagnosis, Robótica e Inteligencia Ambiental – Conference on Qualitative Systems and their Applications in Diagnosis, Robotics and Ambient Intelligence) for 15 years.
This workshop is sponsored by the German Academic Exchange Service (DAAD) under contract number 57070010.
To stand out in the competition for communication and to minimize wastage, companies increasingly rely on so-called "non-classical" communication instruments. Sponsorship represents a promising approach here, since it takes place in an attractive, emotionally charged, non-commercial environment. However, given the increasing sensory overload of consumers, achieving defined sponsorship goals through mere visibility no longer appears satisfactory. This article addresses current trends in sports sponsorship. The analysis of current developments shows that the conditions for sponsorship effectiveness have changed over time. New and innovative activation measures are needed to overcome consumers' sensory overload and to exploit the potential of sponsorship. Practical examples from sports marketing show that the actors involved have recognized the new challenges of sponsorship. Current developments regarding digitalization, internationalization, professionalization, and unconventional activation are presented.
This paper addresses the current state of digitalization in the textile industry. It serves as the basis for a Master's thesis and aims to answer the question of whether an information system accompanying the textile process chain is needed. To this end, the individual process steps are briefly explained. The paper also examines the connection between the textile industry and the new opportunities offered by the Internet of Things.
In the face of major global challenges such as climate change, development cooperation is undergoing transformation and is increasingly confronted with the question of its effectiveness. This article discusses what contribution sustainable entrepreneurship can make to improving that effectiveness. First, the ethics of Albert Schweitzer and their relation to concepts of sustainable development are examined; in a next step, it is discussed along defined criteria to what extent Albert Schweitzer can be regarded as a prototype of a sustainable entrepreneur and what this could mean for the conception and orientation of development cooperation. The article also discusses to what extent sustainable development, in particular the United Nations Sustainable Development Goals (SDGs), can serve as a frame of orientation for development cooperation. Based on the finding that financing their activities is a considerable bottleneck for sustainable entrepreneurs, it is discussed how sustainable finance instruments can help improve the financing conditions for sustainable entrepreneurs and thus the impact of their activities. Finally, a current case study illustrates the impact that sustainable entrepreneurs acting in the tradition of Albert Schweitzer can achieve.
There is a growing consensus in research and practice that value-creating networks and ecosystems are supplementing the traditional distinction between the internal firm and market perspectives. To achieve joint value in ecosystems, it is crucial to align the various interests of independently acting ecosystem actors and to create a common vision. In this paper, we argue that the ecosystem-wide use of product roadmaps may help with this. To better understand how roadmapping is conducted in the dynamic ecosystem environment, we systematize the main characteristics of product roadmaps and perform a conceptual comparison with the known challenges of ecosystem management. Comparing the two concepts of ecosystems and product roadmaps, we highlight the fit between the characteristics and objectives of roadmaps and the challenges of ecosystem management. Hence, we propose experimenting with the ecosystem-wide use of product roadmaps, empirically studying the challenges that emerge in the process, and redesigning the roadmaps accordingly.
A full understanding of the relationship between surface properties, protein adsorption, and immune responses is lacking but is of great interest for the design of biomaterials with desired biological profiles. In this study, polyelectrolyte multilayer (PEM) coatings with gradient changes in surface wettability were developed to shed light on how this impacts protein adsorption and immune response in the context of material biocompatibility. The analysis of immune responses by peripheral blood mononuclear cells to PEM coatings revealed an increased expression of proinflammatory cytokines tumor necrosis factor (TNF)-α, macrophage inflammatory protein (MIP)-1β, monocyte chemoattractant protein (MCP)-1, and interleukin (IL)-6 and the surface marker CD86 in response to the most hydrophobic coating, whereas the most hydrophilic coating resulted in a comparatively mild immune response. These findings were subsequently confirmed in a cohort of 24 donors. Cytokines were produced predominantly by monocytes with a peak after 24 h. Experiments conducted in the absence of serum indicated a contributing role of the adsorbed protein layer in the observed immune response. Mass spectrometry analysis revealed distinct protein adsorption patterns, with more inflammation-related proteins (e.g., apolipoprotein A-II) present on the most hydrophobic PEM surface, while the most abundant protein on the hydrophilic PEM (apolipoprotein A-I) was related to anti-inflammatory roles. The pathway analysis revealed alterations in the mitogen-activated protein kinase (MAPK)-signaling pathway between the most hydrophilic and the most hydrophobic coating. The results show that the acute proinflammatory response to the more hydrophobic PEM surface is associated with the adsorption of inflammation-related proteins. 
Thus, this study provides insights into the interplay between material wettability, protein adsorption, and inflammatory response and may act as a basis for the rational design of biomaterials.
At major sporting events such as this year's UEFA European Football Championship or the Summer Olympics, millions are at stake for federations and official sponsors, and they defend their advertising rights accordingly fiercely. Burger King shows how this "monopoly" can be circumvented creatively. This article presents two examples of ambush marketing activities by Burger King during the 2016 UEFA European Championship. Non-sponsor Burger King deployed ambush marketing purposefully and creatively during the tournament to score points against the official UEFA sponsor and market leader McDonald's.