Tech hubs (THs) and cognate structures are nowadays ubiquitous in the innovation ecosystems of Sub-Saharan African (SSA) countries. However, the concept of THs is fuzzy due to the lack of a clear and universally accepted definition. This ambiguity is further compounded by the diverse range of organizations that self-identify as hubs or are categorized as such by others. As a result, research on THs in SSA has remained limited. Against the backdrop of established research on the interconnectedness of technology, innovation, and entrepreneurship in different organizational forms, this paper provides fresh insights into the study of THs in SSA. To advance future research, it first reveals what is special about THs in SSA and how they relate to existing concepts; I argue in particular that they contour a fourth-wave model of incubation. Second, four main categories are unfolded to delineate THs in SSA, which form the cornerstone for future research.
Radiofrequency ablation is an ablation technique that treats tumors with focused heat. Computed tomography, ultrasound, and magnetic resonance imaging (MRI) are imaging modalities that can be used for image-guided procedures. MRI offers several advantages over the other modalities, such as radiation-free fluoroscopic imaging, temperature mapping, high soft-tissue contrast, and free selection of imaging planes. This work addresses the use of 3D controllers for controlling interventional, fluoroscopic MR sequences in the scenario of MR-guided radiofrequency ablation of hepatic malignancies. During this procedure, the interventionalist can monitor the targeting of the tumor with near-real-time fluoroscopic sequences. In general, adjustments of the imaging planes are necessary during tumor targeting, which are performed by an assistant in the control room. Communication between the interventionalist in the scanner room and the assistant in the control room is therefore essential. However, verbal communication is impaired by the loud scanning noise, and non-verbal communication between the two is limited to a few gestures and susceptible to misunderstandings. This work analyzes different 3D controllers to enable the interventionalist to control interventional MR sequences directly during MR-guided procedures. Leap Motion, Wii Remote, SpaceNavigator, Phantom Omni, and a foot switch were selected, and a simulation was built in C++ with VTK to mimic the real scenario for test purposes. Previous results showed that the Leap Motion is not suitable for this application, while the Wii Remote and the foot switch are possible input devices. The final evaluation showed a general reduction in time with the use of 3D controllers; the best result, 34 seconds, was achieved with the Wii Remote. Handheld input devices such as the Wii Remote have further potential for integration into the real environment to reduce intervention time.
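The control loop described above, controller input adjusting the MR imaging plane, can be sketched as follows. This is a hypothetical Python illustration (the original simulation was built in C++ with VTK); the gain value, axis convention, and class names are invented:

```python
# Hypothetical sketch: map 3D-controller input to adjustments of an MR
# imaging plane (origin + normal). Gains and axis names are invented for
# illustration; the original work used C++ with VTK.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def rotate_about_z(v, angle_rad):
    # Rotate a 3D vector about the z-axis (enough for a tilt demo).
    x, y, z = v
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y, z)

class ImagingPlane:
    def __init__(self):
        self.origin = [0.0, 0.0, 0.0]   # plane centre in scanner coords (mm)
        self.normal = (0.0, 0.0, 1.0)   # slice normal

    def apply_controller(self, translate, tilt_deg, gain_mm=2.0):
        # translate: (dx, dy, dz) in controller units; tilt about z in degrees
        self.origin = [o + gain_mm * d for o, d in zip(self.origin, translate)]
        self.normal = normalize(rotate_about_z(self.normal,
                                               math.radians(tilt_deg)))

plane = ImagingPlane()
plane.apply_controller(translate=(1.0, 0.0, -0.5), tilt_deg=0.0)
```

A real implementation would feed the updated origin and normal to the scanner's sequence-control interface after every controller event.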
This thesis presents the possibilities of 3D controllers for use in interventional radiology, in particular for controlling real-time magnetic resonance imaging (MRI). This is of interest for controlled navigation towards a target tissue. The interventionalist can follow the course of the procedure via real-time imaging, but so far cannot control the MRI scanner himself during the intervention, as this is done by an assistant in the adjacent control room. Communication, however, is very difficult given the high noise level. This work addresses this issue and analyzes 3D controllers for their suitability for real-time control of an MRI scanner, considering both tracking-based and tracking-free devices. The result was that tracking-based methods are less suitable because the inputs are not interpreted reliably enough, whereas the tracking-free devices are suitable owing to the correct interpretation of all inputs and their intuitive operation.
The Internet of Things (IoT) fundamentally influences today's digital strategies, with disruptive business operating models and fast-changing markets. New business information systems integrate emerging Internet of Things infrastructures and components. Given the huge diversity of Internet of Things technologies and products, organizations have to leverage and extend previous enterprise architecture efforts to enable business value by integrating the Internet of Things into their evolving Enterprise Architecture Management environments. Both the engineering and the management of current enterprise architectures are complex and must integrate, besides the Internet of Things, synergistic disciplines such as Enterprise Architecture Management (EAM), services and cloud computing, semantics-based decision support through ontologies and knowledge-based systems, big data management, and mobility and collaboration networks. To provide adequate decision support for complex business/IT environments, it is necessary to identify changes in Internet of Things environments and their related, fast-adapting architectures. The impact of these changes must be made transparent across the entire landscape of affected EAM capabilities, such as directly and transitively impacted IoT objects, business categories, processes, applications, services, platforms, and infrastructures. The paper describes a new metamodel-based approach for integrating partial Internet of Things objects, which are semi-automatically federated into a holistic Enterprise Architecture Management environment.
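The federation step described in the last sentence can be illustrated with a toy sketch. The model structures, element ids, and attribute names below are invented; this is not the paper's metamodel, only the merge-by-identifier idea with conflicts left for manual resolution (the "semi-automatic" part):

```python
# Hypothetical sketch: federate partial IoT architecture models into one
# EAM repository. Elements are merged by id; conflicting attribute values
# are collected for manual resolution rather than silently overwritten.
def federate(partial_models):
    repository, conflicts = {}, []
    for model in partial_models:
        for elem_id, attrs in model.items():
            if elem_id not in repository:
                repository[elem_id] = dict(attrs)
                continue
            for key, value in attrs.items():
                existing = repository[elem_id].get(key)
                if existing is None:
                    repository[elem_id][key] = value
                elif existing != value:
                    conflicts.append((elem_id, key, existing, value))
    return repository, conflicts

# Invented example: an IoT-side partial model and an EAM-side partial model
# describing the same element from different perspectives.
iot_model = {"sensor-1": {"type": "IoT-object", "protocol": "MQTT"}}
eam_model = {"sensor-1": {"type": "IoT-object", "owner": "plant-2"},
             "billing": {"type": "application"}}
repo, open_issues = federate([iot_model, eam_model])
```

Complementary attributes merge automatically; only genuinely contradictory values end up in `open_issues` for an architect to resolve.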
The digital transformation of our society changes the way we live, work, learn, communicate, and collaborate. This disruptive change drives current and next-generation information processes and systems, which have been important business enablers in the context of digitization for years. Our aim is to support flexibility and agile transformations for both business domains and the related information technology, with more flexible enterprise information systems, through the adaptation and evolution of digital architectures. The present research paper investigates the continuous bottom-up integration of micro-granular architectures for a large number of dynamically growing systems and services, such as microservices and the Internet of Things, as part of a newly composed digital architecture. To integrate micro-granular architecture models into living architectural model versions, we extend enterprise architecture reference models with state-of-the-art elements for agile architectural engineering to support digital products, services, and processes.
New business opportunities have appeared that use the potential of the Internet and related digital technologies, such as the Internet of Things, services computing, artificial intelligence, cloud, edge, and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Companies are transforming their strategy and product base, as well as their culture, processes, and information systems, to adopt digital transformation or to strive for digital leadership. Digitalization fosters the development of IT environments with many rather small and distributed structures, such as the Internet of Things, microservices, or other micro-granular elements. Digitalization has a substantial impact on architecting the open and complex world of highly distributed digital services and products as part of a new digital enterprise architecture, which structures and directs service-dominant digital products and services. The present research paper investigates mechanisms for supporting the evolution of digital enterprise architectures with user-friendly methods and instruments of interaction, visualization, and intelligent decision management during the exploration of multiple, interconnected perspectives by an architecture management cockpit.
Enterprises are transforming their strategy, culture, processes, and information systems to enlarge their digitalization efforts or to strive for digital leadership. The digital transformation profoundly disrupts existing enterprises and economies. Many new business opportunities have appeared that use the potential of the Internet and related digital technologies: the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, such as the Internet of Things, microservices, or other micro-granular elements. Architecting micro-granular structures has a substantial impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open world of living software and system architectures defines the context for flexible and evolutionary software approaches, which are essential to enable the digital transformation. In this paper, we reveal multiple perspectives of digital enterprise architecture and decisions to effectively support value- and service-oriented software systems for intelligent digital services and products.
Artificial-intelligence-based assistants (AIAs) are spreading quickly in both homes and offices. They have already left their original habitat of "intelligent speakers" providing easy access to music collections: they initiated a multitude of new devices and are already populating devices such as TV sets. Characteristic of these intelligent digital assistants is the formation of platforms around their core functionality. Thus, the AI capabilities of the assistants are used to offer new services and to create new interfaces for business processes. There are positive network effects between the assistants and the services, as well as among the services themselves. Many companies therefore see the need to get involved in the field of digital assistants but lack a framework to align their initiatives with their corporate strategies. In order to lay the foundation for a comprehensive method, we investigate intelligent digital assistants. Based on this analysis, we develop a framework of strategic opportunities and challenges.
Local clothing retail is under ever-increasing competitive pressure from mail-order companies. In addition, historically grown architectures pose a number of obstacles to growth. This paper therefore presents a number of approaches for designing data-centric enterprise architectures for the clothing retail sector. They are based on the use of RFID to obtain customer profiles in the stores and on the use of big-data-based evaluation and analysis mechanisms. With the presented concepts, clothing retail companies can develop individual customer approaches and offers, similar to mail-order companies.
The digital transformation of our society changes the way we live, work, learn, communicate, and collaborate. This disruptive change interacts with all information processes and systems, which have been important business enablers for the digital transformation for years. The Internet of Things, social collaboration systems for adaptive case management, and mobility systems and services for big data in cloud-services environments are emerging to support intelligent user-centered and social community systems. They will shape future trends of business innovation and the next wave of information and communication technology. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and related distributed information systems with service-oriented enterprise architectures. The present research investigates mechanisms for the flexible adaptation and evolution of digital enterprise architectures in the context of integrated synergistic disciplines such as distributed service-oriented architectures and information systems, Enterprise Architecture Management (EAM), metamodeling, semantic technologies, web services, cloud computing, and big data technology. Our aim is to support flexibility and agile transformations for both business domains and the related enterprise systems through the adaptation and evolution of digital enterprise architectures. The paper investigates digital transformations of business and IT and integrates fundamental mappings between adaptable digital enterprise architectures and service-oriented information systems.
Digitization is the use of digital technologies for creating innovative digital business models and transforming existing business models, processes and systems. Digitization creates profound changes in the economy and society. Information is often captured and processed without human intervention using digital means. Digitization impacts nearly all products and services as well as the customer and the value-creation perspective.
Big data and cloud systems are increasingly used by mobile, user-centered, and agilely changeable information systems in the context of digital social networks. Metaphors from biology for living and self-healing systems and environments provide the basis for intelligent adaptive information systems and for the associated service-oriented digital enterprise architectures. We report on our research into the structures and mechanisms of adaptive digital enterprise architectures for the development and evolution of service-oriented ecosystems and their technologies, such as big data, services and cloud computing, web services, and semantic support. For our current research we use practice-relevant SmartLife scenarios for the development, maintenance, and evolution of future-proof service-oriented information systems. These systems use a strongly growing number of external and internal services and focus on the specifics of evolving information systems for integrated big data and cloud contexts. Our research approach deals with the systematic and holistic modeling of adaptive digital enterprise architectures, in accordance with standardized reference models and standards-based reference architectures that can be adapted more easily for special deployment scenarios, smaller application contexts, or new contexts. To enable semantics-supported analyses for the decision support of system and enterprise architects, we extend our existing reference model for IT enterprise architectures, ESARC (Enterprise Services Architecture Reference Cube), with agile mechanisms for adaptation and consistency handling, as well as the associated metamodels and ontologies for digital enterprise architectures, with new aspects such as big data and cloud contexts.
Handling complexity in modern software engineering: editorial introduction to issue 32 of CSIMQ (2022)
The potential of the Internet and related digital technologies, such as the Internet of Things (IoT), cognition and artificial intelligence, data analytics, services computing, cloud computing, mobile systems, collaboration networks, and cyber-physical systems, is both a strategic driver and an enabler of modern digital platforms with fast-evolving ecosystems of intelligent services for digital products. This issue of CSIMQ presents three recent articles on modern software engineering. First, we focus on continuous software development and place it in the context of software architectures and digital transformation. The first contribution is followed by a description of specific security requirements and adequate digital monitoring mechanisms. Finally, we present a practical example of the digital management of livestock farming.
The digitization of our society changes the way we live, work, learn, communicate, and collaborate. This disruptive change interacts with all information processes and systems, which have been important business enablers in the context of digitization for years. Our aim is to support flexibility and agile transformations for both business domains and the related information technology and enterprise systems through the adaptation and evolution of digital enterprise architectures. The present research paper investigates collaborative decision mechanisms for adaptive digital enterprise architectures by extending original architecture reference models with state-of-the-art elements for agile architectural engineering, for digitization and collaborative architectural decision support.
The digitization of our society changes the way we live, work, learn, communicate, and collaborate. The Internet of Things, enterprise social networks, adaptive case management, mobility systems, analytics for big data, and cloud-services environments are emerging to support smart connected products and services and the digital transformation. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and service-oriented enterprise architectures. Our aim is to support flexibility and agile transformations for both business domains and the related information technology. The present research paper investigates mechanisms for decision analytics in the context of multi-perspective explorations of enterprise services and their digital enterprise architectures by extending original architecture reference models with state-of-the-art elements for agile architectural engineering, for digitization and collaborative architectural decision support. The paper focuses on digital transformations of business and IT, integrates fundamental mappings between adaptable digital enterprise architectures and service-oriented information systems, and puts a spotlight on the Internet of Things as its example domain.
The digitization of our society changes the way we live, work, learn, communicate, and collaborate. This disruptive change interacts with all information processes and systems, which have been important business enablers in the context of digitization for years. Our aim is to support flexibility and agile transformations for both business domains and the related information technology, with more flexible enterprise information systems, through the adaptation and evolution of digital enterprise architectures. The present research paper investigates the continuous bottom-up integration of micro-granular architectures for a large number of dynamically growing systems and services, such as microservices and the Internet of Things, as part of a new digital enterprise architecture. To integrate micro-granular architecture models into living architectural model versions, we extend more traditional enterprise architecture reference models with state-of-the-art elements for agile architectural engineering to support the digitization of products, services, and processes.
The coined term "virtual reality" describes the representation of artificial worlds and the interaction with them. It is usually associated with expensive game and film productions. However, current developments allow small development studios and end users to make use of motion-detection systems as well. This paper presents two prototypes that rely on precisely these systems. The prototypes are intended to enable interaction with the environment and a feeling of immersion in the context of serious games.
Background
A central task of electrocardiographic examinations is to increase the reliability of diagnosing the condition of the heart. Within the framework of this task, an important direction is the solution of the inverse problem of electrocardiography, based on the processing of electrocardiographic signals from multichannel cardio leads with known electrode coordinates in these leads (Titomir et al., Noninvasive Electrocardiotopography, 2003; Macfarlane et al., Comprehensive Electrocardiology, 2nd ed., Chapter 9, 2011).
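For orientation, the inverse problem mentioned above is commonly posed as a regularized least-squares estimate of heart-surface sources from torso potentials; the following is the standard Tikhonov formulation, not necessarily the authors' exact discretization:

```latex
% Forward problem: torso potentials v arise from equivalent heart-surface
% sources s via a transfer matrix A (geometry- and conductivity-dependent):
%   v = A s
% The inverse problem is ill-posed, so s is estimated with regularization:
\mathbf{v} = A\,\mathbf{s}, \qquad
\hat{\mathbf{s}} = \arg\min_{\mathbf{s}}
  \left( \lVert A\mathbf{s} - \mathbf{v} \rVert^{2}
       + \lambda \lVert \mathbf{s} \rVert^{2} \right)
  = \left( A^{\top} A + \lambda I \right)^{-1} A^{\top} \mathbf{v}
```

Here \(\lambda > 0\) is the regularization parameter that trades data fit against the stability of the reconstructed source distribution.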
Results
In order to obtain more detailed information about the electrical activity of the heart, we reconstruct the distribution of equivalent electrical sources on the heart surface. We perform this reconstruction of the equivalent sources over the cardiac cycle at relatively low hardware cost. ECG maps of electrical potentials on the torso surface (TSPM) and of electrical sources on the heart surface (HSSM) were studied for different times of the cardiac cycle. We carried out a visual and quantitative comparison of these maps in the presence of pathological regions of different localization. For this purpose we used a model of the heart's electrical activity based on cellular automata.
Conclusions
The model of cellular automata allows us to consider the processes of heart excitation in the presence of pathological regions of various sizes and localization. It is shown that changes in the distribution of electrical sources on the surface of the epicardium in the presence of pathological areas with disturbed conduction of heart excitation are much more noticeable than changes in ECG maps on the torso surface.
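A minimal cellular automaton of excitable tissue can be sketched as a Greenberg-Hastings automaton; this generic excitable-media model is only an illustration of the approach, not the authors' heart model, and the conduction-block handling is an invented stand-in for a "pathological region":

```python
# Minimal Greenberg-Hastings excitable-media automaton (illustrative only;
# not the authors' heart model). States: 0 = resting, 1 = excited,
# 2 = refractory. A "pathological" region is modelled as cells that are
# permanently refractory, i.e. a conduction block.
def step(grid, blocked=frozenset()):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if (r, c) in blocked:
                new[r][c] = 2          # permanent block: never conducts
            elif grid[r][c] == 1:
                new[r][c] = 2          # excited -> refractory
            elif grid[r][c] == 2:
                new[r][c] = 0          # refractory -> resting
            else:
                # resting cell fires if any 4-neighbour is excited
                neighbours = [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                if any(0 <= rr < rows and 0 <= cc < cols
                       and grid[rr][cc] == 1 for rr, cc in neighbours):
                    new[r][c] = 1
    return new

# A single excited cell launches a wave along a 1x5 fibre.
fibre = [[1, 0, 0, 0, 0]]
fibre = step(fibre)   # wave front moves right: [[2, 1, 0, 0, 0]]
```

Iterating `step` propagates the excitation wave; placing cells in `blocked` shows how a conduction disturbance reshapes the wavefront, which is the effect compared on the source and torso maps above.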
Purpose
Computerized medical image processing assists neurosurgeons in localizing tumours precisely and plays a key role in recent image-guided neurosurgery. Hence, we developed a new open-source toolkit, namely Slicer-DeepSeg, for efficient and automatic brain tumour segmentation based on deep learning methodologies, to aid clinical brain research.
Methods
Our developed toolkit consists of three main components. First, Slicer-DeepSeg extends the 3D Slicer application and thus provides support for multiple input/output data formats and 3D visualization libraries. Second, Slicer core modules offer powerful image processing and analysis utilities. Third, the Slicer-DeepSeg extension provides a customized GUI for brain tumour segmentation using deep learning-based methods.
Results
The developed Slicer-DeepSeg was validated using a public dataset of high-grade glioma patients. The results showed that our proposed platform considerably outperforms other 3D Slicer cloud-based approaches.
Conclusions
The developed Slicer-DeepSeg allows the development of novel AI-assisted medical applications in neurosurgery. Moreover, it can enhance the outcomes of computer-aided diagnosis of brain tumours. The open-source Slicer-DeepSeg is available at github.com/razeineldin/Slicer-DeepSeg.
Intraoperative imaging can assist neurosurgeons in defining brain tumours and other surrounding brain structures. Interventional ultrasound (iUS) is a convenient modality with fast scan times. However, iUS data may suffer from noise and artefacts which limit their interpretation during brain surgery. In this work, we use two deep learning networks, namely UNet and TransUNet, for automatic and accurate segmentation of the brain tumour in iUS data. Experiments were conducted on a dataset of 27 iUS volumes. The outcomes show that combining a transformer with UNet is advantageous, providing efficient segmentation by modelling long-range dependencies within each iUS image. In particular, the enhanced TransUNet was able to predict cavity segmentation in iUS data with an inference rate of more than 125 FPS. These promising results suggest that deep learning networks can be successfully deployed to assist neurosurgeons in the operating room.
Intraoperative brain deformation, so-called brain shift, affects the applicability of preoperative magnetic resonance imaging (MRI) data for intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on a 3D convolutional neural network architecture, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the retrospective evaluation of cerebral tumors (RESECT) dataset. This study showed that our proposed method outperforms registration methods from previous studies with an average mean squared error (MSE) of 85. Moreover, the method can register three 3D MRI-iUS pairs in less than a second, improving the expected outcomes of brain surgery.
Purpose: Gliomas are the most common and aggressive type of brain tumors due to their infiltrative nature and rapid progression. The process of distinguishing tumor boundaries from healthy cells is still a challenging task in the clinical routine. Fluid attenuated inversion recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, namely DeepSeg, for fully automated detection and segmentation of the brain lesion using FLAIR MRI data.
Methods: The developed DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding and decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is fed into the decoder part to obtain the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as residual neural network (ResNet), dense convolutional network (DenseNet), and NASNet have been utilized in this study.
Results: The proposed deep learning architectures have been successfully tested and evaluated online on the MRI datasets of the brain tumor segmentation (BraTS 2019) challenge, comprising 336 cases as training data and 125 cases as validation data. The Dice and Hausdorff distance scores of the obtained segmentation results are about 0.81 to 0.84 and 9.8 to 19.7, respectively.
Conclusion: This study showed successful feasibility and comparative performance of applying different deep learning models in a new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is open source and freely available at https://github.com/razeineldin/DeepSeg/.
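For reference, the Dice score reported above is defined as 2|A∩B| / (|A| + |B|) over the predicted and ground-truth masks. A minimal sketch; the binary mask values are made up:

```python
def dice_coefficient(pred, truth):
    """Dice similarity 2|A∩B| / (|A|+|B|) for binary masks given as
    flat 0/1 sequences (illustrative; real pipelines use 3-D arrays)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]    # predicted tumour voxels
truth = [0, 1, 1, 0, 0, 1]    # ground-truth tumour voxels
score = dice_coefficient(pred, truth)   # 2*2 / (3+3)
```

A score of 1.0 means perfect overlap; the 0.81–0.84 range above therefore indicates substantial but imperfect agreement with the reference annotation.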
Accurate and safe neurosurgical intervention can be affected by intra-operative tissue deformation, known as brain shift. In this study, we propose an automatic, fast, and accurate deformable method, called iRegNet, for registering pre-operative magnetic resonance images to intra-operative ultrasound volumes to compensate for brain shift. iRegNet is a robust end-to-end deep learning approach for the non-linear registration of MRI-iUS images in the context of image-guided neurosurgery. The pre-operative MRI (as moving image) and the iUS (as fixed image) are first fed into our convolutional neural network, which estimates a non-rigid transformation field. The MRI image is then transformed into the iUS coordinate system using the output displacement field. Extensive experiments have been conducted on two multi-location databases, BITE and RESECT. Quantitatively, iRegNet reduced the mean landmark errors from pre-registration values of 4.18 ± 1.84 mm and 5.35 ± 4.19 mm to 1.47 ± 0.61 mm and 0.84 ± 0.16 mm for the BITE and RESECT datasets, respectively. Two expert neurosurgeons additionally validated this study qualitatively by overlaying MRI-iUS pairs before and after the deformable registration. Experimental findings show that the proposed iRegNet is fast and outperforms state-of-the-art approaches in accuracy. Furthermore, iRegNet delivers competitive results even on non-trained images, as proof of its generality, and can therefore be valuable in intra-operative neurosurgical guidance.
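The mean landmark error used above to quantify registration accuracy is simply the average Euclidean distance between corresponding landmark positions in the two images. A minimal sketch; the 3-D landmark coordinates are invented:

```python
import math

def mean_landmark_error(points_a, points_b):
    """Mean Euclidean distance (in mm, say) between corresponding
    3-D landmarks, as used to report registration accuracy."""
    dists = [math.dist(a, b) for a, b in zip(points_a, points_b)]
    return sum(dists) / len(dists)

# Hypothetical landmark pairs after registration: off by 1 mm and 2 mm.
mri_landmarks = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
ius_landmarks = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0)]
err = mean_landmark_error(mri_landmarks, ius_landmarks)   # (1 + 2) / 2
```

Registration quality is then the drop from the pre-registration error (4.18 mm and 5.35 mm above) to the post-registration error (1.47 mm and 0.84 mm).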
Purpose
Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal obstacle to applying these methods in clinical practice.
Methods
In this study, we propose a NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent.
Results
NeuroXAI has been applied to two applications of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN.
Conclusion
Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI.
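Gradient-style saliency, one of the families of explanation methods NeuroXAI implements, attributes importance to the inputs whose perturbation changes the model output most. As a framework-free illustration only (a toy scoring function and finite differences, not NeuroXAI's actual implementation):

```python
def saliency(f, x, eps=1e-6):
    """Absolute finite-difference sensitivity |df/dx_i| per input
    feature: a toy stand-in for gradient-based saliency maps."""
    base = f(x)
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        grads.append(abs(f(xp) - base) / eps)
    return grads

# Toy "classifier" score that depends strongly on feature 1.
score = lambda v: 0.1 * v[0] + 5.0 * v[1]
sal = saliency(score, [1.0, 1.0])   # feature 1 dominates the map
```

In a real CNN the same idea is applied per voxel via backpropagated gradients, producing the visual attention maps described above.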
Recent advances in artificial intelligence have enabled promising applications in neurosurgery that can enhance patient outcomes and minimize risks. This paper presents a novel system that utilizes AI to aid neurosurgeons in precisely identifying and localizing brain tumors. The system was trained on a dataset of brain MRI scans and utilized deep learning algorithms for segmentation and classification. Evaluation of the system on a separate set of brain MRI scans demonstrated an average Dice similarity coefficient of 0.87. The system was also evaluated through a user experience test involving the Department of Neurosurgery at the University Hospital Ulm, with results showing significant improvements in accuracy, efficiency, and reduced cognitive load and stress levels. Additionally, the system has demonstrated adaptability to various surgical scenarios and provides personalized guidance to users. These findings indicate the potential for AI to enhance the quality of neurosurgical interventions and improve patient outcomes. Future work will explore integrating this system with robotic surgical tools for minimally invasive surgeries.
Purpose
Artificial intelligence (AI), in particular deep learning (DL), has achieved remarkable results for medical image analysis in several applications. Yet the lack of human-like explanations of such systems is considered the principal obstacle to utilizing these methods in clinical practice (Yang, Ye, & Xia, 2022).
Methods
Explainable Artificial Intelligence (XAI) provides a human-explainable and interpretable description of the “black-box” nature of DL (Gulum, Trombley, & Kantardzic, 2021). An effective XAI diagnosis generator, namely NeuroXAI (refer to Fig. 1), has been developed to extract 3D explanations from convolutional neural networks (CNN) models of brain gliomas (Zeineldin et al., 2022). By providing visual justification maps, NeuroXAI can help make DL models transparent and thus increase the trust of medical experts.
Results
NeuroXAI has been applied to two applications of the most widely investigated problems in brain imaging analysis, i.e. image classification and segmentation using magnetic resonance imaging (MRI). Visual attention maps of multiple XAI methods have been generated and compared for both applications, which could help to provide transparency about the performance of DL systems.
Conclusion
NeuroXAI helps to understand the prediction process of 3D CNN networks for brain glioma using human-understandable explanations. Results revealed that the investigated DL models behave in a logical human-like manner and can improve the analytical process of the MRI images systematically. Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist medical professionals in the detection and diagnosis of brain tumors. NeuroXAI code is publicly accessible at https://github.com/razeineldin/NeuroXAI
Intracranial brain tumors are among the ten most common malignant cancers and account for substantial morbidity and mortality. The largest histological category of primary brain tumors is the gliomas, which present with a highly heterogeneous appearance and can be challenging to discern radiologically from other brain lesions. Neurosurgery is usually the standard of care for newly diagnosed glioma patients and may be followed by radiation therapy and adjuvant temozolomide chemotherapy.
However, brain tumor surgery faces fundamental challenges in achieving maximal tumor removal while avoiding postoperative neurologic deficits. Two of these neurosurgical challenges are presented as follows. First, manual glioma delineation, including its sub-regions, is considered difficult due to its infiltrative nature and the presence of heterogeneous contrast enhancement. Second, the brain deforms its shape, called “brain shift,” in response to surgical manipulation, swelling due to osmotic drugs, and anesthesia, which limits the utility of pre-operative imaging data for guiding the surgery.
Image-guided systems provide physicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and Ultrasound (US). The image-guided toolkits are mainly computer-based systems, employing computer vision methods to facilitate the performance of peri-operative surgical procedures. However, surgeons still need to mentally fuse the surgical plan from pre-operative images with real-time information while manipulating the surgical instruments inside the body and monitoring target delivery. Hence, the need for image guidance during neurosurgical procedures has always been a significant concern for physicians.
This research aims to develop a novel peri-operative image-guided neurosurgery (IGN) system, namely DeepIGN, that can achieve the expected outcomes of brain tumor surgery, thus maximizing the overall survival rate and minimizing post-operative neurologic morbidity. In the scope of this thesis, novel methods are first proposed for the core parts of the DeepIGN system of brain tumor segmentation in MRI and multimodal pre-operative MRI to the intra-operative US (iUS) image registration using the recent developments in deep learning. Then, the output prediction of the employed deep learning networks is further interpreted and examined by providing human-understandable explainable maps. Finally, open-source packages have been developed and integrated into widely endorsed software, which is responsible for integrating information from tracking systems, image visualization, image fusion, and displaying real-time updates of the instruments relative to the patient domain.
The components of DeepIGN have been validated in the laboratory and evaluated in the simulated operating room. For the segmentation module, DeepSeg, a generic decoupled deep learning framework for automatic glioma delineation in brain MRI, achieved an accuracy of 0.84 in terms of the dice coefficient for the gross tumor volume. Performance improvements were observed when employing advancements in deep learning approaches such as 3D convolutions over all slices, region-based training, on-the-fly data augmentation techniques, and ensemble methods.
To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering pre-operative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments have been conducted on two multi-location databases: the BITE and the RESECT. Two expert neurosurgeons conducted additional qualitative validation of this study through overlaying MRI-iUS pairs before and after the deformable registration. Experimental findings show that the proposed iRegNet is fast and achieves state-of-the-art accuracies. Furthermore, the proposed iRegNet can deliver competitive results, even in the case of non-trained images, as proof of its generality and can therefore be valuable in intra-operative neurosurgical guidance.
For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in applying AI techniques and deep neural networks. The NeuroXAI includes seven explanation methods providing visualization maps to help make deep learning models transparent. Experimental findings showed that the proposed XAI framework achieves good performance in extracting both local and global contexts in addition to generating explainable saliency maps to help understand the prediction of the deep network. Further, visualization maps are obtained to realize the flow of information in the internal layers of the encoder-decoder network and understand the contribution of MRI modalities in the final prediction. The explainability process could provide medical professionals with additional information about tumor segmentation results and therefore aid in understanding how the deep learning model is capable of processing MRI data successfully.
Furthermore, an interactive neurosurgical display has been developed for interventional guidance, which supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multi-modality DeepIGN system were established with the ability to incorporate: (1) pre-operative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The system's accuracy was tested using a custom agar phantom model, and its use in a pre-clinical operating room was simulated. The results of the clinical simulation confirmed that system assembly was straightforward, achievable in a clinically acceptable time of 15 min, and performed with a clinically acceptable level of accuracy.
In this thesis, a multimodality IGN system has been developed using the recent advances in deep learning to accurately guide neurosurgeons, incorporating pre- and intra-operative patient image data and interventional devices into the surgical procedure. DeepIGN is developed as open-source research software to accelerate research in the field, enable ease of sharing between multiple research groups, and continuous developments by the community. The experimental results hold great promise for applying deep learning models to assist interventional procedures - a crucial step towards improving the surgical treatment of brain tumors and the corresponding long-term post-operative outcomes.
Vehicles have so far been improved in terms of energy-efficiency and safety mainly by optimising the engine and the power train. However, there are further opportunities to increase energy-efficiency and safety by adapting the individual driving behaviour to the given driving situation. In this paper, an improved rule-match algorithm is introduced, which is used in the expert system of a human-centred driving system. The goal of the driving system is to optimise the driving behaviour in terms of energy-efficiency and safety by giving recommendations to the driver. The improved rule-match algorithm checks the incoming information against the driving rules to detect any violation of a driving rule. The needed information is obtained by monitoring the driver, the current driving situation, and the car using in-vehicle sensors and serial-bus systems. On the basis of the detected rule violations, the expert system creates individual recommendations in terms of energy-efficiency and safety, which help eliminate bad driving habits while considering the driver's needs.
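The rule-match step described above can be sketched as a predicate check over the current sensor facts. The rule names, thresholds, and recommendation texts below are hypothetical, not the system's actual rule base:

```python
# Hypothetical rules: (name, violation predicate over sensor facts,
# recommendation shown to the driver when the rule is broken).
RULES = [
    ("high_rpm", lambda f: f["rpm"] > 2500 and f["gear"] < 5,
     "Shift up earlier to save fuel."),
    ("tailgating", lambda f: f["gap_s"] < 1.8,
     "Increase the distance to the vehicle ahead."),
]

def match_rules(facts, rules=RULES):
    """Return the recommendations of all rules violated by the current
    driving state: a toy stand-in for the expert system's matcher."""
    return [advice for _, violated, advice in rules if violated(facts)]

# Example driving state from in-vehicle sensors: high revs in 3rd gear,
# but a safe 2.4 s gap to the car ahead.
state = {"rpm": 3100, "gear": 3, "gap_s": 2.4}
advice = match_rules(state)    # only the high-rpm rule fires
```

The real system additionally weighs the driver's condition and habits before deciding whether and when to display a recommendation, to avoid distraction.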
Energy-efficiency and safety have become important factors for car manufacturers. Thus, cars have been optimised regarding energy consumption and safety, for example by optimising the power train or the engine. Besides optimising the car itself, energy-efficiency and safety can also be increased by adapting the individual driving behaviour to the current driving situation. This paper introduces a driving system that is in development. Its goal is to optimise the driving behaviour in terms of energy-efficiency and safety by giving recommendations to the driver. To create a recommendation, the driving system monitors the driver, the current driving situation, and the car using in-vehicle sensors and serial-bus systems. On the basis of the acquired data, the driving system gives individual energy-efficiency and safety recommendations in real time. This helps eliminate bad driving habits while considering the driver's needs.
Saving energy and protecting the environment have become fundamental concerns for society and politics, so several laws were enacted to increase energy-efficiency. Furthermore, the growing number of vehicles and drivers led to more accidents and fatalities on the roads, so road safety became an important factor as well. Due to the increasing importance of energy-efficiency and safety, car manufacturers started to optimise vehicles in these respects. However, energy-efficiency and road safety can also be increased by adapting the driving behaviour to the given driving situation. This thesis presents the concept of an adaptive, rule-based driving system that tries to educate the driver in energy-efficient and safe driving by showing recommendations at the right time. Unlike existing driving systems, the presented system considers energy-efficiency and safety-relevant driving rules, the individual driving behaviour, and the driver's condition. This avoids distracting the driver and increases the acceptance of the driving system, while improving the driving behaviour in terms of energy-efficiency and safety. A prototype of the driving system was developed and evaluated. The evaluation was done on a driving simulator with 42 test drivers, who tested the effect of the driving system on the driving behaviour and the effect of its adaptiveness on user acceptance. The evaluation showed that energy-efficiency and safety increased when the driving system was used. Furthermore, it showed that user acceptance of the driving system increases when the adaptive feature is turned on. High user acceptance allows steady usage of the driving system and thus a steady improvement of the driving behaviour in terms of energy-efficiency and safety.
This work presents an optimized bandgap reference for generating a temperature-stable voltage and a reference current. For low-power applications, the bandgap reference, based on the Brokaw cell, was implemented with minimal current consumption and optimized chip area through a multi-emitter layout of the bipolar transistors. An additional feature is a widened supply voltage range of 2.5 to 5.5 V. Simulations show that a stable output voltage of 1.218 V and a reference current of 1.997 μA are achieved. Over the temperature range of −40 °C to 50 °C and the entire supply voltage range, the accuracy of the reference voltage is ±0.04 % with a total current consumption between 3.5 and 10 μA. A temperature drift of 2.18 ppm/K is achieved. Electronic trimming of resistors adjusts the output voltage offset caused by manufacturing tolerances to ±3.5 mV. The reference is implemented in a 0.18 μm BiCMOS technology.
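A first-order textbook model of the Brokaw-cell principle behind such a reference: the output sums a CTAT base-emitter voltage and a PTAT term G·V_T·ln(N), with the gain G chosen so the two temperature slopes cancel. The numbers below (V_BE, its slope, the emitter ratio) are illustrative, not the implemented circuit's values:

```python
import math

K_B_OVER_Q = 8.617e-5   # k/q: slope of the thermal voltage V_T in V/K

def bandgap_voltage(temp_k, vbe_25=0.60, dvbe_dt=-2.0e-3,
                    emitter_ratio=8, gain=None):
    """First-order Brokaw-cell output voltage (textbook model):
    V_out = V_BE(T) + G * V_T * ln(N).  If `gain` is None, G is set
    to the zero-temperature-coefficient value that cancels the CTAT
    slope of V_BE, so V_out is ideally flat over temperature."""
    vt_slope = K_B_OVER_Q * math.log(emitter_ratio)   # d(V_T ln N)/dT
    if gain is None:
        gain = -dvbe_dt / vt_slope                    # zero-TC condition
    vbe = vbe_25 + dvbe_dt * (temp_k - 298.15)        # linearized V_BE
    return vbe + gain * K_B_OVER_Q * temp_k * math.log(emitter_ratio)

v_cold = bandgap_voltage(233.15)   # -40 degC
v_hot = bandgap_voltage(323.15)    # +50 degC, identical to first order
```

In this idealized model the output is exactly flat; the 2.18 ppm/K drift reported above comes from the higher-order curvature of the real V_BE that the first-order cancellation cannot remove.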
In vitro, hydrogel-based ECMs for functionalizing the surfaces of various materials have played an essential role in mimicking the native tissue matrix. Polydimethylsiloxane (PDMS) is widely used to build microfluidic or organ-on-chip devices compatible with cells due to its easy handling in cast replication. Despite such advantages, a limitation of PDMS is its hydrophobic surface. To improve the wettability of PDMS-based devices, alginate, a naturally derived polysaccharide, was covalently bound to the PDMS surface. This alginate then crosslinked further hydrogel onto the PDMS surface at the desired layer thickness. Hydrogel-modified PDMS was used for coating a topography chip system and for in vitro investigation of cell growth on the surfaces. Moreover, such hydrophilic hydrogel-coated PDMS was utilized in a microfluidic device to prevent unspecific absorption of organic solutions. Hence, in both exemplary studies, PDMS surface properties were modified, leading to improved devices.
This paper aims to model wind speed time series at multiple sites. The five-parameter Johnson distribution is deployed to relate the wind speed at each site to a Gaussian time series, and the resultant m-dimensional Gaussian stochastic vector process Z(t) is employed to model the temporal-spatial correlation of wind speeds at m different sites. In general, it is computationally tedious to obtain the autocorrelation functions (ACFs) and cross-correlation functions (CCFs) of Z(t), which differ from those of the wind speed time series. To circumvent this correlation-distortion problem, the rank ACF and rank CCF are introduced to characterize the temporal-spatial correlation of wind speeds, whereby the ACFs and CCFs of Z(t) can be obtained analytically. Then, Fourier transformation is implemented to establish the cross-spectral density matrix of Z(t), and an analytical approach is proposed to generate samples of wind speeds at m different sites. Finally, simulation experiments are performed to check the proposed methods, and the results verify that the five-parameter Johnson distribution accurately matches the distribution functions of wind speeds and that the spectral representation method reproduces the temporal-spatial correlation of wind speeds well.
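The rank ACF used above to sidestep correlation distortion is a Spearman-type autocorrelation: the series is correlated with its lagged copy after replacing values by their ranks, which makes the statistic invariant under the monotone Johnson transform. A minimal sketch (ties broken arbitrarily; the wind-speed values are made up):

```python
def rank_acf(series, lag):
    """Spearman-style rank autocorrelation at a given positive lag:
    Pearson correlation between the ranks of the series and the ranks
    of its lagged copy (illustrative sketch of the rank ACF idea)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    a, b = series[:-lag], series[lag:]
    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var_a = sum((x - ma) ** 2 for x in ra)
    var_b = sum((y - mb) ** 2 for y in rb)
    return cov / (var_a * var_b) ** 0.5

# Made-up hourly wind speeds at one site (m/s): smooth, so the
# lag-1 rank autocorrelation is clearly positive.
wind = [3.1, 3.4, 3.9, 4.2, 4.0, 3.6, 3.2, 3.0, 3.3, 3.8]
r1 = rank_acf(wind, 1)
```

Because ranks are preserved by any monotone marginal transform, this statistic is the same for the wind speeds and for the underlying Gaussian process Z(t), which is exactly why it avoids the distortion problem.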
The coronavirus pandemic has had Germany firmly in its grip since spring 2020. From the beginning, a central measure to slow the spread of the coronavirus was the closure of schools. A first study quantified the learning-time losses caused by the corona-related school closures in spring 2020 (Wößmann, Freundl, Lergetporer, Grewenig, Werner & Zierow, 2020). It showed that students' learning time was halved by the school closures and that the losses were particularly large among lower-performing students. In spring 2020, the reduction in learning time was not compensated by the schools: only a small proportion of students had regular distance learning and daily contact with teachers during this phase. During the summer and autumn months following the first school closures, school administrations, schools, and teachers had time to adapt to distance learning and digital teaching methods in order to counteract learning losses during any renewed school closures. However, the extent to which this actually led students to spend more time learning during the school closures in early 2021 than in spring 2020 has so far remained largely unknown.
To find out how schoolchildren spent the period of school closures in early 2021, a Germany-wide survey was again conducted, this time among more than 2,000 parents of schoolchildren. The results provide comprehensive insights into the everyday life of schoolchildren, parents, and schools during the school closures in early 2021. They show how many hours schoolchildren spent on learning and on other creative and passive activities during this phase, which concrete measures schools took to maintain school operations, how effective learning at home was, and how parents assess the home learning environment. We compare the activities during the school closures in early 2021 with those during the first corona-related school closures in spring 2020 as well as with activities before corona (cf. Wößmann et al., 2020). We also report results on children's socio-emotional well-being after one year of the pandemic and on parents' assessments of the broader effects of the school closures on various areas of their children's lives. The survey thus provides new empirical evidence on the possible consequences of the corona crisis for the educational success of children in Germany. In doing so, we also examine the extent to which the effects of the school closures differ between higher- and lower-performing students and between children of academics and children of non-academics.
Here, we report the mechanical and water sorption properties of a green composite based on Typha latifolia fibres. The composite was prepared either completely binder-less or bonded with 10% (w/w) of a bio-based resin, a mixture of an epoxidized linseed oil and a tall-oil-based polyamide. The flexural modulus of elasticity, the flexural strength, and the water absorption of hot-pressed Typha panels were measured, and the influence of pressing time and panel density on these properties was investigated. The cure kinetics of the bio-based resin were analyzed by differential scanning calorimetry (DSC) in combination with the iso-conversional kinetic analysis method of Vyazovkin to derive the curing conditions required for a completely cured resin. For the binder-less Typha panels, the best technological properties were achieved at high panel density. By adding 10% of the binder resin, the flexural strength and especially the water absorption were improved significantly.
Within the scope of the present cumulative doctoral thesis, six scientific papers were published which illustrate that modern reaction-model-free (= isoconversional) kinetic analysis (ICKA) methods represent a universal and effective tool for the controlled processing of thermosetting materials. To demonstrate the universal applicability of ICKA methods, the thermal cure of different thermosetting materials covering a very broad range of chemical compositions (melamine-formaldehyde resins, epoxy resins, polyester-epoxy resins, and acrylate/epoxy resins) was analyzed and mathematically modelled. Some of the materials were based on renewable resources (an epoxy resin was made from hempseed oil; linseed oil was modified into an acrylate/epoxy resin). With the aid of ICKA methods, not only single-step but also complex multi-step reactions were modelled precisely. The analyzed thermosetting materials were combined with wood, wood-based products, paper, and plant fibers and processed into various final products. Some of the thermosetting materials were applied as coatings (in the form of impregnated décor papers or powder and wet coatings, respectively) on wood substrates, and the epoxy resin from hempseed oil was mixed with plant fibers and processed into bio-based composites for lightweight applications. Mechanical, thermal, and surface properties of the final products were determined. The activation energy as a function of cure conversion derived from ICKA methods was utilized to accurately predict the thermal curing over time for arbitrary cure conditions. Furthermore, the cure models were used to establish correlations between the cross-linking during processing and the properties of the final products. It was thereby possible to derive the process time and temperature that guarantee optimal cross-linking as well as optimal product properties.
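The core of an isoconversional analysis is fitting ln(dα/dt) against 1/T at a fixed conversion α across several runs; the slope gives −E/R, so the activation energy E(α) falls out of a linear regression. The sketch below uses the simpler Friedman differential form as a stand-in for the Vyazovkin method applied in the thesis, on synthetic Arrhenius data:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def friedman_activation_energy(points):
    """Friedman isoconversional estimate at one fixed conversion:
    least-squares fit of ln(rate) = const - E/(R*T) across runs,
    returning E in J/mol.  `points` is a list of (T_kelvin, rate)
    pairs.  (Simpler stand-in for the Vyazovkin analysis.)"""
    xs = [1.0 / t for t, _ in points]
    ys = [math.log(r) for _, r in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R

# Synthetic cure rates generated from E = 80 kJ/mol as a self-check:
# the fit should recover the activation energy exactly.
E_true = 80e3
data = [(T, math.exp(20.0 - E_true / (R * T))) for T in (400.0, 420.0, 440.0)]
E_est = friedman_activation_energy(data)
```

Repeating this fit over a grid of conversions yields the E(α) curve that the thesis uses to predict curing for arbitrary temperature programs.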
A millimeter-wave power amplifier concept in an advanced silicon-germanium (SiGe) BiCMOS technology is presented. The goal of the concept is to investigate the impact of physical limitations of the used heterojunction bipolar transistors (HBTs) on the performance of a 77 GHz power amplifier. High-current behavior, collector-base breakdown, and transistor saturation can be forced with the presented design. The power amplifier is manufactured in an advanced SiGe BiCMOS technology at Infineon Technologies AG with a maximum transit frequency fT of around 250 GHz for npn HBTs [1]. The simulation results of the power amplifier show a saturated output power of 16 dBm at a power-added efficiency of 13%. The test chip is designed for a supply voltage of 3.3 V and requires a chip size of 1.448 x 0.930 mm².
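For orientation, the reported power-added efficiency relates output, input, and DC power as PAE = (P_out − P_in) / P_DC. A quick plausibility check of the quoted figures; the input power and supply current below are assumptions, not values from the paper:

```python
def dbm_to_watt(dbm):
    """Convert power in dBm to watts: P[W] = 1 mW * 10^(dBm/10)."""
    return 1e-3 * 10 ** (dbm / 10.0)

def power_added_efficiency(p_out_dbm, p_in_dbm, v_supply, i_supply):
    """PAE = (P_out - P_in) / P_DC for a power amplifier stage."""
    p_dc = v_supply * i_supply
    return (dbm_to_watt(p_out_dbm) - dbm_to_watt(p_in_dbm)) / p_dc

# 16 dBm out (~40 mW), an assumed 0 dBm drive, 3.3 V supply:
# a ~13 % PAE implies roughly 90 mA of supply current.
pae = power_added_efficiency(16.0, 0.0, 3.3, 0.090)
```

Working backwards like this from PAE and supply voltage is a common sanity check on reported amplifier figures.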
The world population is growing, and alternative ways of satisfying the increasing demand for meat are being explored, such as using animal cells for the fabrication of cultured meat. Edible biomaterials are required as supporting structures. Hence, we chose agarose, gellan, and a xanthan-locust bean gum blend (XLB) as support materials with pea and soy protein additives and analyzed them regarding material properties and biocompatibility. We successfully built stable hydrogels containing up to 1% pea or soy protein. Higher amounts of protein resulted in poor handling properties and unstable gels. The gelation temperature range for agarose and gellan blends is between 23–30 °C, but for XLB blends it is above 55 °C. A change in viscosity and a decrease in the swelling behavior were observed in the polysaccharide-protein gels compared to the pure polysaccharide gels. None of the leachates of the investigated materials had cytotoxic effects on the myoblast cell line C2C12. All polysaccharide-protein blends evaluated turned out to be potential candidates for cultured meat. For cell-laden gels, the gellan blends were the most suitable in terms of processing and uniform distribution of cells, followed by agarose blends, whereas no stable cell-laden gels could be formed with XLB blends.
Using predictive maintenance, more efficient processes can be implemented, leading to lower maintenance costs and increased availability. The development of a predictive maintenance solution currently requires considerable time and capacity as well as, frequently, interdisciplinary cooperation. This paper presents a standardized model to describe a predictive maintenance use case. The description model is used to collect, present, and document the information required for the implementation of predictive maintenance use cases by and for different stakeholders. Based on this model, predictive maintenance solutions can be introduced more efficiently. The method is validated across departments in the automotive sector.
The size and cost of a switched-mode power supply can be reduced by increasing the switching frequency. Especially at high input voltages, however, this decreases efficiency because of switching losses. Conventional calculations are not suitable for predicting the efficiency, as parasitic capacitances contribute significantly to the losses. This paper presents an analytical efficiency model that considers parasitic capacitances separately and calculates the power-loss contribution of each capacitance to any resistive element. The proposed model is used for efficiency optimization of converters with switching frequencies above 10 MHz and input voltages up to 40 V. For experimental evaluation, a DC-DC converter was manufactured in a 180 nm HV BiCMOS technology. The model matches a transistor-level simulation and measurement results with an accuracy better than 3.5%. The accuracy of the parasitic capacitances of the high-voltage transistor determines the overall accuracy of the efficiency model. Experimental capacitance measurements can be fed into the model. Based on the model, different architectures have been studied.
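As a rough illustration of the loss mechanism such a model accounts for, the sketch below sums the frequency-proportional loss P = C·V²·f of a few parasitic capacitances and derives an efficiency estimate. All component values are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of the loss mechanism: each parasitic capacitance C charged to
# a voltage swing V at switching frequency f dissipates P = C * V^2 * f.
# All component values below are illustrative assumptions, not from the paper.

def capacitive_loss(c_farads, v_swing, f_sw):
    """C*V^2/2 is lost per charge and again per discharge -> P = C * V^2 * f."""
    return c_farads * v_swing ** 2 * f_sw

parasitics = {            # hypothetical parasitic capacitances in farads
    "C_gd": 5e-12,
    "C_ds": 20e-12,
    "C_node": 10e-12,
}
v_in, f_sw, p_out = 40.0, 10e6, 1.0   # 40 V input, 10 MHz, 1 W output power
p_cap = sum(capacitive_loss(c, v_in, f_sw) for c in parasitics.values())
p_cond = 0.05                          # assumed conduction loss in watts
efficiency = p_out / (p_out + p_cap + p_cond)
print(f"capacitive loss: {p_cap * 1e3:.0f} mW, efficiency: {efficiency:.1%}")
```

The per-capacitance terms make visible why, at 40 V and above 10 MHz, even tens of picofarads dominate the loss budget.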
Socially interactive robots with human-like speech synthesis and recognition, coupled with a humanoid appearance, are an important subject of robotics and artificial intelligence research. Modern solutions have matured enough to provide simple services to human users. To make the interaction with them as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. Because facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose a way to implement a program that believably animates changing facial expressions and allows them to be influenced via inter-process communication based on an emotion model. This can be used to create a screen-based virtual face for a robotic system with an inviting appearance that stimulates users to seek interaction with the robot.
In recent years, robotic systems have matured enough to perform simple home or office tasks, guide visitors in environments such as museums or stores, and aid people in their daily lives. To make the interaction with service and even industrial robots as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. As facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose an emotion model to parameterize a screen-based facial animation via inter-process communication. The software animates transitions and adds further animations to make the digital face appear "alive", equipping a robotic system with a virtual face. The result is an inviting appearance intended to motivate potential users to seek interaction with the robot.
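A minimal sketch of how an emotion model might drive animated transitions between expressions. The valence/arousal state space, the easing rate, and all values are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): an emotion state in a
# valence/arousal space eases toward a target expression each animation frame,
# as a screen-based face might do between inter-process-communication updates.

class EmotionState:
    def __init__(self, valence=0.0, arousal=0.0):
        self.valence, self.arousal = valence, arousal

    def step_toward(self, target, rate=0.2):
        """Move a fraction of the remaining distance -> smooth transition."""
        self.valence += rate * (target.valence - self.valence)
        self.arousal += rate * (target.arousal - self.arousal)

joy = EmotionState(valence=0.8, arousal=0.6)   # assumed target from an IPC message
face = EmotionState()                           # neutral starting expression
for _ in range(20):                             # 20 animation frames
    face.step_toward(joy)
print(round(face.valence, 3), round(face.arousal, 3))
```

Exponential easing of this kind is a common choice because the face never jumps, even if a new target expression arrives mid-transition.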
The simulation of human crowd behavior can be helpful in capacity, risk, and evacuation planning for buildings, can be used in film production for impressive crowd scenes, or can bring virtual settings in real-time applications to life. The main challenges lie in a realistic appearance of the virtual crowd, believable behavior within a social group, lifelike animations, and maintaining the real-time capability of interactive applications. This work presents the current state of the art, evaluates technologies, and implements a crowd simulation prototype with the Unity engine.
Railway operators are challenged by increasing complexity and by the need to safeguard the availability of passenger rolling stock, which brings maintenance, and especially emerging technologies, into focus. This paper presents a model for the selection and implementation of Industry 4.0 technologies in rolling stock maintenance. The model consists of several stages and considers the main components of rolling stock, the appropriate maintenance strategies for them, and Industry 4.0 technologies, taking the maturity level of the railway operator into account. Relevant criteria and main prerequisites of the technologies were identified. The model proposes relevant activities and was validated by industry experts.
Simulation of a decentralized control system for the grid-supportive production of green hydrogen
(2023)
Hydrogen will make a significant contribution to the transformation of industry and society toward a climate-neutral future. The central challenges are building up a hydrogen infrastructure and using it in an ecologically and economically sensible way. A necessary building block is the efficient provision of green electricity and the green hydrogen produced from it. This paper presents a decentralized control and communication system that reconciles the supply and demand of green electricity and hydrogen in a system of decentralized actors. A simulation environment developed for this purpose illustrates the function and benefit of this decentralized approach.
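The matching of supply and demand among decentralized actors could be sketched, in a strongly simplified form, as follows. Actor names and power figures are invented for illustration and do not reflect the paper's simulation environment.

```python
# Minimal illustrative matching step (assumed structure, not the paper's system):
# decentralized producers announce their green-power surplus, electrolyzers
# request power, and a greedy allocation balances supply against demand.

producers = {"pv_plant": 120.0, "wind_farm": 80.0}        # kW surplus, assumed
electrolyzers = {"h2_site_a": 150.0, "h2_site_b": 90.0}   # kW demand, assumed

def match(supply, demand):
    """Allocate the available surplus to the requests in order, greedily."""
    allocation, available = {}, sum(supply.values())
    for name, requested in demand.items():
        granted = min(requested, available)
        allocation[name] = granted
        available -= granted
    return allocation, available

alloc, leftover = match(producers, electrolyzers)
print(alloc, leftover)
```

A real grid-supportive controller would of course negotiate over time and respect grid constraints; the point here is only the supply/demand reconciliation step.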
The metric and qualitative analysis of models of the upper and lower dental arches is an important aspect of orthodontic treatment planning. Currently available eLearning systems for dental education only provide access to digital learning materials and do not interactively support learning progress. Moreover, to date no study has compared the efficiency of learning methods based on physical versus digital study models. For this pilot study, 18 dental students were divided into two groups to investigate whether learning success in study model analysis with an interactive eLearning system is higher with digital models or with conventional plaster models. The results show that the digital method requires less time per model analysis. Moreover, the digital approach leads to higher total scores than the plaster-model approach. We conclude that interactive eLearning using digital dental arch models is a promising tool for dental education.
This publication gives a short introduction to and overview of the European project SCOUT and introduces a methodology for a holistic approach to recording the state of the art in the technical enablers (vehicle and connectivity; human factors at the physiological and ergonomic level) and non-technical enablers (societal, economic, legal, regulatory, and policy level) of connected and automated driving in Europe. In addition to the technical topics of environmental perception, E/E architecture, actuators, and security, the paper addresses the state of the art of the legal framework in the context of connected and automated driving.
Successful transitions to a sustainable bioeconomy require novel technologies, processes, and practices as well as a general agreement about the overarching normative direction of innovation. Both requirements necessarily involve collective action by those individuals who purchase, use, and co-produce novelties: the consumers. Based on theoretical considerations borrowed from evolutionary innovation economics and consumer social responsibility, we explore to what extent consumers’ scope of action is addressed in the scientific bioeconomy literature. We do so by systematically reviewing bioeconomy-related publications according to (i) the extent to which consumers are regarded as passive vs. active, and (ii) different domains of consumer responsibility (depending on their power to influence economic processes). We find all aspects of active consumption considered to varying degrees but observe little interconnection between domains. In sum, our paper contributes to the bioeconomy literature by developing a novel coding scheme that allows us to pinpoint different aspects of consumer activity, which have been considered in a rather isolated and undifferentiated manner. Combined with our theoretical considerations, the results of our review reveal a central research gap which should be taken up in future empirical and conceptual bioeconomy research. The system-spanning nature of a sustainable bioeconomy demands an equally holistic exploration of the consumers’ prospective and shared responsibility for contributing to its coming of age, ranging from the procurement of information on bio-based products and services to their disposal.
This work provides a concept design that ensures the integration of various systems with process-relevant clinical services. Surgical procedures are modeled as processes. The choice of notation and the way these processes are modeled play a central role in current research in this field. Once these processes are modeled, they can be executed automatically in a workflow engine. In the development of a workflow management system, the question arises of how this workflow engine should be connected to other systems. In this work, interfaces are defined abstractly in the Web Services Description Language (WSDL). From these definitions, artifacts are generated automatically, and the systems are integrated on the basis of these artifacts. The workflow engine communicates with the respective systems via SOAP (Simple Object Access Protocol) messages. This approach was implemented and validated with a prototype.
Information technology systems that support the workflow in the clinical domain are currently limited to organizational processes. This work presents a first approach to bringing such a system into the perioperative area. For this purpose, a workflow engine was coupled with a perioperative process visualization. The system was implemented according to the model-view-controller principle. The workflow engine serves as the "controller"; the "model" is a process model containing the required clinical data. The "view" was realized as a decoupled application based on web technologies. Three visualizations, the workflow engine, and the connection of both via a database interface were successfully implemented. The three visualizations comprise a view for the OR coordinator, a view for the circulating nurse, and an overview of an operation.
An operating room (OR) is a stressful work environment. Nevertheless, everyone involved has to work safely, as there is no room for mistakes. To ensure a high level of concentration and seamless interaction, all involved persons have to know their own tasks and those of their colleagues. The entire team must work synchronously at all times. However, the OR is a noisy environment, and the actors have to focus on their work. To optimize the overall workflow, a task manager supporting the team was developed. Each actor is equipped with a client terminal showing a summary of their own tasks; in addition, a big screen displays the tasks of all actors. The architecture is a distributed system based on a communication framework that supports the interaction of all clients with the task manager. A prototype of the task manager and several clients was developed and implemented. The system represents a proof of concept for further development. This paper describes the concept of the task manager.
Workflow-driven support systems in the perioperative area have the potential to optimize clinical processes and to enable new situation-adaptive support systems. We have started to develop a workflow management system that supports all actors involved in the operating theatre, with the goal of synchronizing the tasks of the different stakeholders by providing relevant information to the right team members. Using the OMG standards BPMN, CMMN, and DMN allows us to bring established methods from other industries into the medical field. The system shows each addressed actor their information in the right place at the right time, so that every member can execute their task in time and a smooth workflow is ensured; the system maintains the overall view of all tasks. Accordingly, the workflow management system comprises the Camunda BPM workflow engine to run the models, a middleware to connect different systems to the workflow engine, and several graphical user interfaces to display necessary information or to interact with the system. The complete pipeline is implemented as a RESTful web service, designed to integrate systems such as the hospital information system (HIS) easily and without loss of data. The first prototype has been implemented and will be expanded.
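As an illustration of the RESTful coupling, a middleware can start a process instance through the Camunda 7 REST API by POSTing to /process-definition/key/{key}/start. The sketch below only builds such a request; the process key "or_workflow" and the variables are invented examples, not the system's actual model.

```python
# Hedged sketch: how a middleware might prepare a request that starts a Camunda
# process instance. The endpoint and payload shape follow Camunda 7 REST API
# conventions; the process key and variables are illustrative assumptions.
import json

def build_start_request(base_url, process_key, variables):
    """Return URL and JSON body for POST /process-definition/key/{key}/start."""
    url = f"{base_url}/process-definition/key/{process_key}/start"
    body = {
        "variables": {
            name: {"value": value, "type": "String"}
            for name, value in variables.items()
        }
    }
    return url, json.dumps(body)

url, payload = build_start_request(
    "http://localhost:8080/engine-rest", "or_workflow",
    {"patientId": "12345", "orRoom": "OR-2"},
)
print(url)
print(payload)
```

An actual middleware would send this with an HTTP client and route the engine's task events on to the client GUIs.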
In the past, material flow has usually grown along with production. With increasing product individualization, the number of variants to be manufactured rises, and with it the complexity of the material flows. This work examines possibilities and methods for recording and optimizing material flows in the context of high variant diversity.
With the digital transformation, companies experience a change that focuses on shaping the organization into an agile form. In today's competitive and fast-moving business environment, it is necessary to react quickly to changing market conditions, and agility represents a promising option for overcoming these challenges. The path to an agile organization is a development process that requires consideration of numerous levels of the enterprise. This paper examines the impact of digital transformation on agile working practices and the benefits that can be achieved through technology. To cope with today's so-called VUCA (volatility, uncertainty, complexity, and ambiguity) world, agile ways of working can be applied, but project management requires adaptation. In the qualitative study, expert interviews were conducted and analyzed using the grounded theory method. The result is a model that shows the influencing factors and potentials of agile management in the context of the digital transformation of medium-sized companies.
Adoption of artificial intelligence (AI) has risen sharply in recent years, but many firms do not realize the expected benefits or even terminate projects before completion. While several previous studies highlight challenges in AI projects, the critical factors that lead to project failure are largely unknown. The aim of this study is therefore to identify distinct factors that are critical for the failure of AI projects. To address this, interviews with AI experts from different industries were conducted and the results analyzed using qualitative analysis methods. The results show that both organizational and technological issues can cause project failure. Our study contributes to knowledge by reviewing previously identified challenges in terms of their criticality for project failure based on new empirical data, and by identifying previously unknown factors.
For five decades, research into the life, work, and legacy of Friedrich List (1789–1846) has been at the center of Eugen Wendler's scholarly work. Over time, some 30 monographs and a considerable number of scholarly essays and journalistic articles have emerged. In doing so, Eugen Wendler built on the invaluable groundwork of the editors of the complete edition of List's works published from 1925 to 1935.
This essay provides an overview of Eugen Wendler's book publications on List research. With his impressive oeuvre, he acknowledges being the last living fossil in the succession of the FLG and thereby pays the editors the due and long-overdue appreciation and respect.
The Friedrich List Society (Friedrich-List-Gesellschaft, FLG) was founded in 1925 under the most adverse economic and political circumstances and continued until 1934. Its primary purpose was to gather the widely scattered, hard-to-access, and largely unknown writings, speeches, and letters of Friedrich List (1789–1846) and to publish them as a complete edition.
Neither this 10- or 12-volume complete edition nor the names of its editors have received the appreciation and attention they deserve in economics. This contribution repays that long-overdue debt of gratitude after nearly 100 years. Without the committed and courageous efforts of the editors, in particular Edgar Salin, List research would be unthinkable and German economics would be poorer by a glorious chapter.
At literally the last minute, the British government and the European Union agreed on a comprehensive treaty to prevent a disorderly Brexit. After the years-long, grinding negotiation marathon, the jubilation is muted, yet there is relief on both sides of the English Channel that a modus vivendi has been found on which future relations can be built and continued. Whether the hopes England attached to Brexit will be fulfilled remains to be seen.
The strategy and tactics of the British governments regarding Brexit and the withdrawal negotiations mirror the experiences Friedrich List had exactly 175 years ago in his efforts to forge a German-English alliance. Because of the insular and trade supremacy England strictly pursued even then, he had to concede that England would defend this position tenaciously, and, frustrated and disillusioned, he abandoned his plans. He therefore placed his hopes in a "continental alliance" of the European nations, such as has now come into being after the United Kingdom's withdrawal from the European Union. Perhaps we will now have to get used to the term "continental alliance" and be reminded of Friedrich List's foresight.
On the other hand, the motto of List's second Paris prize essay also applies to British policy: "Le monde marche" ("the world moves on"), albeit under completely different circumstances than 175 years ago. The axis of world trade has shifted from the western to the eastern hemisphere; the British Empire is history; the pace of global change has accelerated dramatically; and despite the lingua franca, England appears, especially from an Asian perspective, as no more than a small dot on the world map. Should the Scottish government prevail with its intention of achieving independence from the United Kingdom, Brexit would prove to be a fateful boomerang.
Automatic content creation system for augmented reality maintenance applications for legacy machines
(2024)
Augmented reality (AR) applications have great potential to assist maintenance workers in their operations. However, creating AR solutions is time-consuming and laborious, which limits their widespread adoption in industry. As a result, even with latest-generation machines, the user often receives only an electronic manual for equipment operation and maintenance instead of an AR solution; with legacy machines, this is commonplace. For this reason, solutions are required that simplify the creation of such AR applications. This paper presents an approach that uses an electronic manual as the basis for creating fast and cost-effective AR solutions for maintenance. As part of the approach, an application was developed that automatically identifies and subdivides the chapters of electronic manuals via the bookmarks in the table of contents. The contents are then automatically uploaded to a central server and indexed with a suitable marker to make the data retrievable. The prepared content can then be accessed via the marker to create context-related AR instructions. The application is characterized by the fact that no developers or experts are required to prepare the information. In addition to complying with common design criteria, the clear presentation of the contents and the intuitive use of the system offer added value for the performance of maintenance tasks. Together, these two elements form a novel way to retrofit legacy machines with AR maintenance instructions. The practical validation of the system took place in a factory environment, where content was created for a filter change on a CNC milling machine. The results show that inexperienced users can extract appropriate content with the software application. Furthermore, it is shown that maintenance workers can access the content with an AR application developed for the Microsoft HoloLens 2 and complete simple tasks described in the manufacturer's electronic manual.
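The chapter-splitting step can be sketched as follows: given bookmark titles and their start pages (as a PDF library would report them from the table of contents), derive each chapter's page range so its content can be indexed separately. The bookmark data below is a made-up example, not from a real manual.

```python
# Illustrative sketch of splitting a manual into chapters via its bookmarks.
# Input: (title, start_page) pairs as a PDF library such as pypdf would yield
# from the table-of-contents outline. Pages are 1-indexed; data is invented.

def chapter_ranges(bookmarks, total_pages):
    """Each chapter runs from its start page to just before the next chapter;
    the last chapter runs to the final page of the manual."""
    ranges = {}
    sentinel = bookmarks[1:] + [(None, total_pages + 1)]
    for (title, start), (_unused, next_start) in zip(bookmarks, sentinel):
        ranges[title] = (start, next_start - 1)
    return ranges

bookmarks = [("Safety", 1), ("Operation", 5), ("Maintenance", 12)]
print(chapter_ranges(bookmarks, total_pages=20))
```

Each resulting range can then be uploaded and associated with a marker for retrieval by the AR client.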
In recent years, machine learning algorithms have developed enormously in performance and applicability in industry, and especially in maintenance. Their application enables predictive maintenance and thus offers efficiency gains. However, a successful implementation of such solutions still requires substantial effort in data preparation to obtain the right information, interdisciplinary teams, and good communication with employees. Here, small and medium-sized enterprises (SMEs) often lack experience, competence, and capacity. This paper presents a systematic and practice-oriented method for implementing machine learning solutions for predictive maintenance in SMEs, which has already been validated.
In language-oriented programming (LOP), the developer creates a programming language to solve a problem or task in a specific domain. The language is designed so that it can express the developer's conceptual model directly, without mental translation. Such languages are called domain-specific languages (DSLs). So-called language workbenches (LWBs) are used to develop them. This work deals with the development of DSLs as a means of implementing LOP. Using LWBs, DSLs can be created and deployed with relatively little effort. The focus of this work is the development of "modular DSLs"; criteria and prerequisites for modularization are considered. Finally, three concepts of existing systems are examined and evaluated against these criteria.
The isothermal curing of melamine resin is investigated by in-line infrared spectroscopy at different temperatures. The infrared spectra are decomposed into time courses of characteristic spectral patterns using multivariate curve resolution (MCR). It was found that, depending on the applied curing temperature, melamine films with different spectral fingerprints and correspondingly different chemical network structures are formed. The network structures of fully cured resin films are specific to the applied curing temperatures and cannot simply be compensated by changes in the curing time. For industrial curing processes, this means that cure temperature is the main system-determining factor at constant M:F ratio. However, different MF resin networks can be specifically obtained from one and the same melamine resin by suitable selection of curing time and temperature profiles to design resin functionality. The spectral fingerprints after short as well as long curing times reflect the fundamental differences between the thermoset networks obtainable with industrial short-cycle and multi-daylight presses.
Here, we study resin cure and network formation of solid melamine-formaldehyde pre-polymer over a large temperature range via dynamic temperature curing profiles. Real-time infrared spectroscopy is used to analyze the chemical changes during network formation and network hardening. By applying chemometrics (multivariate curve resolution, MCR), the essential chemical functionalities that constitute the network at a given stage of curing are mathematically extracted and tracked over time. The three spectral components identified by MCR were methylol-rich, ether-linkage-rich, and methylene-linkage-rich resin entities. Based on the dynamic changes of their characteristic spectral patterns as a function of temperature, curing is divided into five phases: (I) a stationary phase with free methylols as the main chemical feature, (II) formation of a flexible network cross-linked by ether linkages, (III) formation of a rigid, ether-cross-linked network, (IV) further hardening via transformation of methylols and ethers into methylene cross-linkages, and (V) network consolidation via transformation of ether into methylene bridges. The presented spectroscopic/chemometric approach can be used as a methodological basis for the functionality design of MF-based surface films at the stage of laminate pressing, i.e., for tailoring the technological property profile of cured MF films using a causal understanding of the underlying chemistry based on molecular markers and spectroscopic fingerprints.
During the curing of thermosetting resins, the technologically relevant properties of binders and coatings develop. However, curing is difficult to monitor due to the multitude of chemical and physical processes taking place, and precise prediction of specific technological properties from molecular properties is very difficult. In this study, the potential of principal component analysis (PCA) and principal component regression (PCR) in the analysis of Fourier transform infrared (FTIR) spectra is demonstrated using the example of melamine-formaldehyde (MF) resin curing in the solid state. FTIR/PCA-based reaction trajectories are used to visualize the influence of temperature on isothermal cure. An FTIR/PCR model for predicting the hydrolysis resistance of cured MF resins from their spectral fingerprints is presented, illustrating the advantages of FTIR/PCR compared to the combination of differential scanning calorimetry and isoconversional kinetic analysis. The presented methodology is transferable to the curing reactions of any thermosetting resin and can be applied to model other technologically relevant final properties as well.
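A schematic PCR workflow on synthetic spectra might look like the following. The data are fabricated purely to illustrate the method (numpy only), not real FTIR measurements, and the "property" stands in for a target such as hydrolysis resistance.

```python
# Schematic principal component regression (PCR) on fabricated "spectra":
# project mean-centered spectra onto principal components via SVD, then
# regress a target property on the component scores. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 30, 200
latent = rng.normal(size=(n_samples, 2))             # hidden cure state
loadings = rng.normal(size=(2, n_wavenumbers))       # spectral signatures
spectra = latent @ loadings + 0.01 * rng.normal(size=(n_samples, n_wavenumbers))
prop = 3.0 * latent[:, 0] - 1.5 * latent[:, 1]       # stand-in target property

X = spectra - spectra.mean(axis=0)                   # mean-center the spectra
_, _, vt = np.linalg.svd(X, full_matrices=False)     # principal directions
scores = X @ vt[:2].T                                # first two PC scores
coef, *_ = np.linalg.lstsq(scores, prop - prop.mean(), rcond=None)
pred = scores @ coef + prop.mean()
r2 = 1 - np.sum((prop - pred) ** 2) / np.sum((prop - prop.mean()) ** 2)
print(f"PCR with 2 components, R^2 = {r2:.3f}")
```

Because the fabricated property depends linearly on a two-dimensional latent cure state, two principal components suffice here; real spectra usually require choosing the component count by cross-validation.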
With the steady growth of new technologies and possibilities, little now stands in the way of merging technology with humans. Examining such implants and the associated risks is one part of this work; of particular importance are their mode of operation and IT security aspects. All implants presented in this work require communication with the outside world. This communication channel entails risks that are not limited to the wearers' data but also include health risks.
The present study shows that the topic of smart innovation (the use of AI systems in the innovation process) is highly relevant and that there is approval for the use of AI in the innovation process. Both companies and students cite efficiency gains, faster processing of large data volumes, increased competitiveness, and cost savings as reasons for using AI in the innovation process. In Germany, AI technologies are already being applied selectively and across industries in the innovation process. Influencing factors such as university cooperations, innovation departments, and open innovation can promote adoption; SMEs in the early phases of industrialization in particular should make use of them. The secret to the most efficient possible innovation process lies in the interplay of human expertise and the fast, precise data processing of AI. It becomes clear that various enabling factors are required to make the application of smart innovation practicable. First, the technical prerequisites of a functioning IT infrastructure must be met. Equally important are open questions regarding data availability, data ownership, and data security: without a legal framework, few actors are willing to share their data and make it accessible. The use of AI is further hampered by the national shortage of IT specialists. Both companies and students see the greatest obstacle in the lack of AI-relevant know-how. On the one hand, this inhibits research; on the other, companies lack the specialists required to introduce AI. It is nevertheless necessary to convey the potential and opportunities of smart innovation to companies by showcasing application examples.
Application-oriented research must be promoted and a smooth transfer to industry ensured. This knowledge exchange also requires a greater entrepreneurial willingness to take risks, and the need to design company-specific AI strategies is growing. Technologies are developing rapidly, so companies must adapt to this progress in order not to fall behind and to secure their competitiveness. The greatest challenge lies in the fundamental transformation of business models, as the value creation of successful companies is increasingly based on digital assets. Data are generally regarded as the new resource, the raw material, including for smart innovations. The importance of smart innovation will continue to grow. In the short and medium term, weak AI primarily supports data collection and analysis, process automation, and the identification of needs and trends. Incremental changes in innovation management are also hoped for with the help of simulations and the random combination of technologies. In the long term, stronger AI will be able to partially replace humans in the innovation process. Whether autonomous innovation will be possible in the future depends first on the degree of novelty of an innovation, but above all on the possibility of a creative AI. It can be assumed that advances in AI will not only enable radical innovations but will also lead to a structural change in our current understanding of innovation management.
The desire to combine advanced, user-friendly interfaces with a product personality communicating environmental friendliness to customers poses new challenges for car interior designers, as little research has been carried out in this field to date. In this paper, the creation of three personas aimed at defining key German car users with pro-environmental behaviour is presented. After collecting ethnographic data on potential drivers through a literature review, information about generation and Euro car segment led to the definition of three key user groups. The resulting personas were applied to determine the most important interaction points in the car interior. Finally, present design cues of eco-friendly product personality developed in the field of automotive design were explored. Our work presents three strategic directions for the design development of future in-car user interfaces: a) foster multimodal mobility; b) emphasize the interlinkage between economy and sustainable driving; and c) highlight new technological developments. The presented results are meant as an impulse for developers to meet the needs of green customers and drivers when designing user-friendly HMI components.
For collision and obstacle avoidance as well as trajectory planning, robots usually generate and use a simple 2D costmap without any semantic information about the detected obstacles. Thus a robot's path planning will simply adhere to an arbitrarily large safety margin around obstacles. A more optimal approach is to adjust this safety margin according to the class of an obstacle. For class prediction, an image-processing convolutional neural network can be trained. One of the problems in the development and training of any neural network is the creation of a training dataset. The first part of this work describes methods and free open-source software that allow a fast generation of annotated datasets. Our pipeline can be applied to various objects and environment settings and makes it extremely easy for anyone to synthesize training data from 3D source data. We create a fully synthetic industrial environment dataset with 10k physically based rendered images and annotations. Our dataset and sources are publicly available at https://github.com/LJMP/synthetic-industrial-dataset. Subsequently, we train a convolutional neural network with our dataset for costmap safety class prediction. We analyse different class combinations and show that learning the safety classes end-to-end directly with a small dataset, instead of using a class lookup table, improves the quantity and precision of the predictions.
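The class lookup-table baseline that end-to-end learning is compared against could be sketched as follows; the obstacle classes, margins, and grid are assumed for illustration and are not the paper's actual configuration.

```python
# Illustrative lookup-table baseline: inflate each detected obstacle in a 2D
# costmap with a safety margin chosen by its semantic class. Classes, margins
# (in grid cells), and obstacle positions are invented for this sketch.
import numpy as np

SAFETY_MARGIN_CELLS = {"human": 4, "forklift": 3, "pallet": 1}  # assumed table

def inflate(costmap, obstacles):
    """obstacles: list of (row, col, class_name); mark a square neighborhood
    of class-dependent radius around each obstacle as occupied (cost 1)."""
    grid = costmap.copy()
    for r, c, cls in obstacles:
        m = SAFETY_MARGIN_CELLS[cls]
        r0, r1 = max(0, r - m), min(grid.shape[0], r + m + 1)
        c0, c1 = max(0, c - m), min(grid.shape[1], c + m + 1)
        grid[r0:r1, c0:c1] = 1
    return grid

costmap = np.zeros((20, 20), dtype=np.uint8)
inflated = inflate(costmap, [(5, 5, "human"), (15, 15, "pallet")])
print(int(inflated.sum()))   # number of cells the planner must now avoid
```

A human obstacle blocks a much larger neighborhood than a pallet; learning the safety classes end-to-end replaces this hand-tuned table with predictions from the network.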
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system designs, hurt their performance and scalability. Near-Data Processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible. The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ executions that optimally utilize the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under RocksDB and the COSMOS hardware platform.
The incudo-malleal joint (IMJ) in the human middle ear is a true diarthrodial joint, and it has been known that the flexibility of this joint does not contribute to better middle-ear sound transmission. Previous studies have proposed that a gliding motion between the malleus and the incus at this joint prevents the transmission of large displacements of the malleus to the incus and stapes and thus contributes to the protection of the inner ear as an immediate response to large static pressure changes. However, the dynamic behavior of this joint under static pressure changes has not been fully revealed. In this study, the effects of the flexibility of the IMJ on middle-ear sound transmission under a static pressure difference between the middle-ear cavity and the environment were investigated. Experiments were performed in human cadaveric temporal bones with static pressures in the range of +/- 2 kPa applied to the ear canal (relative to the middle-ear cavity). Vibrational motions of the umbo and the stapes footplate center in response to acoustic stimulation (0.2-8 kHz) were measured using a 3D laser Doppler vibrometer for (1) the natural IMJ and (2) the IMJ with experimentally reduced flexibility. In the natural condition of the IMJ, vibrations of the umbo and the stapes footplate center under static pressure loads were attenuated at low frequencies below the middle-ear resonance frequency, as observed in previous studies. After the flexibility of the IMJ was reduced, additional attenuations of vibrational motion were observed for the umbo under positive static pressures in the ear canal (EC) and for the stapes footplate center under both positive and negative static EC pressures. The additional attenuation of vibration reached 4-7 dB for the umbo under positive static EC pressures and for the stapes footplate center under negative EC pressures, and 7-11 dB for the stapes footplate center under positive EC pressures.
The results of this study indicate an adaptive mechanism of the flexible IMJ in the human middle ear to changes in static EC pressure by reducing the attenuation of middle-ear sound transmission. Such results are expected to be used for the diagnosis of IMJ stiffening and to be applied to the design of middle-ear prostheses.
Our paper investigates the response of acquiring firms' stock returns around the announcement date in cross-border mergers and acquisitions (M&A) between listed Chinese acquirers and German targets. We apply an event study methodology to examine the shareholder value effect based on a sample of M&A deals over the most recent period of 2012-2018. We apply a market-model event study based on the argumentation of Brown and Warner (1985) and use short-term observation periods according to Andrade, Mitchell, and Stafford (2001) as well as Hackbarth and Morellec (2008). The results indicate that the announcement of M&A involving German targets results in a positive cumulative abnormal return of 2.18% on average for Chinese acquirers' shareholders in a five-day symmetric event window. Furthermore, we found slight indications of possible information leakage prior to the formal announcement. Although the size of acquiring firms is not necessarily correlated with the positive abnormal returns in the short run, this study suggests that Chinese acquirers' shareholders gain higher abnormal returns when the German targets are non-listed companies.
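The market-model event study methodology can be sketched as follows: estimate alpha and beta by regressing stock returns on market returns over a pre-event estimation window, compute abnormal returns in the event window as realized minus model-implied returns, and sum them into the cumulative abnormal return (CAR). The function below is an illustrative sketch with synthetic return series, not the paper's code; all names are assumptions.

```python
import numpy as np

def car_market_model(stock_ret, market_ret, est_end, win_start, win_end):
    """Cumulative abnormal return over an event window [win_start, win_end],
    using a market-model OLS regression fitted on indices [0, est_end)."""
    # Fit R_stock = alpha + beta * R_market on the estimation window
    beta, alpha = np.polyfit(market_ret[:est_end], stock_ret[:est_end], 1)
    expected = alpha + beta * market_ret[win_start:win_end + 1]
    abnormal = stock_ret[win_start:win_end + 1] - expected
    return abnormal.sum()
```

For a five-day symmetric window around day t, `win_start = t - 2` and `win_end = t + 2`; CARs are then averaged across deals to obtain the reported cumulative average abnormal return.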
This paper surveys the current state of digitalization in the textile industry. It serves as the basis for a Master's thesis and is intended to answer the question of whether an information system accompanying the textile process chain is needed. To this end, the individual process steps are briefly explained. The paper also examines the connection between the textile industry and the new possibilities offered by the Internet of Things.
The Circular Economy aims to return the value of products to the economic cycle at the same value-chain level. While the activities of the Circular Economy are already well defined, there is a gap in how returned products are treated by industry. This study examines how a process should be designed to handle returned products in the context of the Circular Economy. To achieve this, a machine-learning-based algorithm is used to classify data and extract relevant information throughout the product life cycle. The focus of this research is limited to land transportation systems within the Sharing Economy sector.
Internet of Things innovations and the industrial internet are increasingly decisive factors for companies' future success. Manufacturing-oriented SMEs in particular will face the challenge of developing innovative technology-driven business models alongside technology innovations in this field, which will be essential for future competitiveness. Failing to develop these technology-driven business models in a highly competitive international environment will have a serious impact on both companies and society. Hence, securing the economic stability and success of these technology-driven business models is an indispensable task. To identify challenges for innovative industrial internet business models, it is first necessary to understand what the industrial internet means to the leading parties, the applying companies, and start-ups in the field. Second, challenges from general business model development are outlined. In a third step, risks and challenges in business model development are discussed with regard to the special characteristics of technology-driven business models in the context of the industrial internet and the important role of the business model's technological key component. In particular, the capability to deal with an integrated consideration of the inseparably linked economic and technological dimensions of these business models is questioned. Fourth, the specific challenges for industrial internet business models are derived. On the basis of these results, it is also discussed how these challenges might be handled successfully, with the goal of turning them into opportunities. The need for future research on integrating the risk management perspective into the development of these technology-driven business models is derived. This will help established companies and start-ups realize great technological innovations for the industrial internet in sound and successful innovative business models.
Metalworking fluids (MWFs) are widely used to cool and lubricate metal workpieces during processing to reduce heat and friction. Extending an MWF's service life is important from both economic and ecological points of view. Knowledge about the effects of processing conditions on the aging behavior and reliable analytical procedures are required to properly characterize the aging phenomena. So far, the only quantitative estimations of aging effects on MWFs described in the literature have been univariate ones based on single-parameter measurements. In the present study, we present a simple spectroscopy-based set-up for the simultaneous monitoring of three quality parameters of an MWF and a mathematical model relating them to the most influential process factors relevant during use. For this purpose, the effects of MWF concentration, pH and nitrite concentration on the droplet size during aging were investigated by means of a response surface modelling approach. Systematically varied model MWFs were characterized using simultaneous measurements of absorption coefficients µa and effective scattering coefficients µ's. Droplet size was determined via dynamic light scattering (DLS) measurements. Droplet size showed a non-linear dependence on MWF concentration and pH, but the nitrite concentration had no significant effect. pH and MWF concentration showed a strong synergistic effect, which indicates that MWF aging is a rather complex process. The observed effects were similar for the DLS and the µ's values, which shows the comparability of the methodologies. The correlations of the methods were R²c = 0.928 and R²P = 0.927, as calculated by a partial least squares regression (PLS-R) model. Furthermore, using µa, it was possible to generate a predictive PLS-R model for MWF concentration (R²c = 0.890, R²P = 0.924). Simultaneous determination of the pH based on the µ's is possible with good accuracy (R²c = 0.803, R²P = 0.732).
With prior knowledge of the MWF concentration using the µa-PLS-R model, the predictive capability of the µ's-PLS-R model for pH was refined (10 wt%: R²c = 0.998, R²P = 0.997). This highlights the relevance of the combined measurement of µa and µ's. Recognizing the synergistic nature of the effects of MWF concentration and pH on the droplet size is an important prerequisite for extending the service life of an MWF in the metalworking industry. The presented method can be applied as an in-process analytical tool that allows one to compensate for aging effects during use of the MWF by taking appropriate corrective measures, such as pH correction or adjustment of concentration.
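The calibration idea behind the PLS-R models above can be illustrated with a stripped-down sketch: fit a linear model from spectral predictors (e.g. µa values at several wavelengths) to a quality parameter, and report the calibration R². Ordinary least squares stands in for PLS here for brevity; all data and names are synthetic assumptions.

```python
import numpy as np

def calibrate_and_r2(X, y):
    """Fit a linear calibration y ≈ X·b + b0 by least squares and return
    the coefficient of determination R² on the calibration data."""
    A = np.column_stack([X, np.ones(len(X))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ coef
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

In the actual study, PLS-R additionally projects the correlated spectral variables onto a few latent components before regression, which is what makes it robust for collinear µa/µ's measurements.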
Two-streams hypothesis: adaptation effects in social interactions with avatars in virtual reality
(2015)
This paper presents an experiment on the two-streams hypothesis. First, the psychological and technical foundations required for the experiment are established. The research question is then defined and the experimental setup discussed. The experiment tests whether there are different adaptation effects in the recognition and execution of ambiguous social actions. An experimental setup is developed in which participants respond to the actions of virtual avatars either actively, through complementary actions, or passively, by pressing buttons. Finally, the results are evaluated and conclusions are drawn.
Turning students into Industry 4.0 entrepreneurs: design and evaluation of a tailored study program
(2022)
Startups in the field of Industry 4.0 could be a huge driver of innovation for many industry sectors such as manufacturing. However, there is a lack of education programs to ensure a sufficient number of well-trained founders and thus a supply of such startups. Therefore, this study presents the design, implementation, and evaluation of a university course tailored to the characteristics of Industry 4.0 entrepreneurship. Educational design-based research was applied with a focus on content and teaching concept. The study program was first implemented in 2021 at a German university of applied sciences with 25 students, of whom 22 participated in the evaluation. The evaluation of the study program was conducted with a pretest-posttest design targeting three areas: (1) knowledge about the application domain, (2) entrepreneurial intention and (3) psychological characteristics. Entrepreneurial intention was measured based on the theory of planned behavior. For measuring psychological characteristics, personality traits associated with entrepreneurship were used. Considering the study context and the limited external validity of the study, the results show that a university course can improve participants' knowledge of this particular area. In addition, perceived behavioral control of starting an Industry 4.0 startup was enhanced. However, the results showed no significant effects on psychological characteristics.
This study investigates how integrated reporting (IR) creates value for investors. It examines how providers of financial capital benefit from an improved firm information environment provided by IR. Specifically, this study investigates the effect of voluntary IR disclosure on analyst earnings forecast accuracy as well as on firm value. To do so, we use an international sample of 167 listed companies that voluntarily publish an integrated report. Our analysis shows no significant effect of a voluntary IR publication on analyst earnings forecast accuracy and no significant effect on firm value. We thus do not find evidence for the fulfillment of IR's promises regarding improved information environment and value creation of voluntary adopters. We conclude that such companies might already have a relatively high level of transparency leading to an absent additional effect of IR disclosure. Positive effects of IR appear to be more relevant in environments where IR is mandatory.
Background
Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integration of intraoperative surgical data and their analysis with machine learning methods to leverage the potential of this data in analogy to Radiomics and Genomics.
Methods
We defined Surgomics as the entirety of surgomic features that are process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources like endoscopic videos, vital sign monitoring, medical devices and instruments and respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers for rating the features’ clinical relevance and technical feasibility.
Results
In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was “surgical skill and quality of performance” for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was “Instrument” (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were “intraoperative adverse events”, “action performed with instruments”, “vital sign monitoring”, and “difficulty of surgery”.
Conclusion
Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
Monodisperse polystyrene spheres are functional materials with interesting properties, such as high cohesion strength, strong adsorptivity, and surface reactivity. They have shown a high application value in biomedicine, information engineering, chromatographic fillers, supercapacitor electrode materials, and other fields. To fully understand and tailor particle synthesis, the methods for characterization of their complex 3D morphological features need to be further explored. Here we present a chemical imaging study based on three-dimensional confocal Raman microscopy (3D-CRM), scanning electron microscopy (SEM), focused ion beam (FIB), diffuse reflectance infrared Fourier transform (DRIFT), and nuclear magnetic resonance (NMR) spectroscopy for individual porous swollen polystyrene/poly (glycidyl methacrylate-co-ethylene di-methacrylate) particles. Polystyrene particles were synthesized with different co-existing chemical entities, which could be identified and assigned to distinct regions of the same particle. The porosity was studied by a combination of SEM and FIB. Images of milled particles indicated a comparable porosity on the surface and in the bulk. The combination of standard analytical techniques such as DRIFT and NMR spectroscopies yielded new insights into the inner structure and chemical composition of these particles. This knowledge supports the further development of particle synthesis and the design of new strategies to prepare particles with complex hierarchical architectures.
Development of an expert system to overcome citizens' technological barriers to smart home and living
(2023)
Adopting new technologies can be overwhelming, even for people with experience in the field. For the general public, keeping up with new implementations, releases, brands, and enhancements can cause them to lose interest. There is a clear need to create central information sources and platforms that provide helpful information about novel smart technologies, assisting users, technicians, and providers with products and technologies. The purpose of these platforms is twofold, as they can gather and share information on interests common to manufacturers and vendors. This paper presents the "Finde-Dein-SmartHome" tool, developed in association with the Smart Home & Living competence center [5] to help users learn about, understand, and purchase available technologies that meet their home automation needs. The tool aims to lower the usability barrier and help potential customers resolve their doubts about privacy and pricing. Communities can use the information provided by this tool to identify market trends that could eventually lower costs for providers and incentivize access to innovative home technologies and devices supporting long-term care.
So-called system simulation, in which several physical domains are simulated jointly, allows the analysis of complex and thus realistic systems and plays an increasingly important role in the design of components. If the system contains parts that must be described by field quantities from different physical domains, co-simulations can be used, but these are time-consuming. For system design, however, it is necessary to be able to run system simulations quickly. For this purpose, faster reduced-order models (ROMs) can be used for selected components or domains. In this work, we present a reduced-order model for electromechanical components that accounts for eddy currents. Eddy-current effects depend not only on the current state but also on the history of the electromagnetic domain. The presented surrogate model is based on data generated in advance with a series of stationary field simulations. A convolution approach is used to model the history-dependent eddy-current effects. Comparisons with corresponding co-simulations in ANSYS Maxwell and Simplorer, using a solenoid plunger as an example, show that the surrogate model is able to reproduce the essential properties of the component in a physically correct manner.
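The convolution approach for the history-dependent eddy-current effects can be sketched as a discrete convolution of the excitation history with a precomputed impulse-response kernel; in the actual ROM, the kernel would be identified from the stationary field simulations. The function name and data below are illustrative assumptions.

```python
import numpy as np

def rom_output(input_history, kernel):
    """Evaluate a history-dependent quantity (e.g. an eddy-current
    contribution) at the latest time step as a discrete convolution
    y[t] = sum_k kernel[k] * u[t-k] over the excitation history u."""
    n = len(kernel)
    # Zero-pad so that histories shorter than the kernel are handled
    padded = np.concatenate([np.zeros(n - 1), input_history])
    # Align the most recent n samples with the reversed kernel
    return float(np.dot(padded[-n:], kernel[::-1]))
```

At each time step of the system simulation, only this dot product is evaluated, which is what makes the surrogate far cheaper than a transient field co-simulation.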
Completely defined co-culture of adipogenic differentiated ASCs and microvascular endothelial cells
(2018)
Vascularized adipose tissue models are in high demand as alternatives to animal models to elucidate the mechanisms of widespread diseases, screen for new drugs or assess drug safety levels. Animal-derived sera such as fetal bovine serum (FBS), which are commonly used in these models, are associated with ethical concerns, risk of contaminations and inconsistencies in their composition and impact on cells. In this study, we developed a serum-free, defined co-culture medium and implemented it in an adipocyte/endothelial cell (EC) co-culture model.
Human adipose-derived stem cells were differentiated under defined conditions (diffASCs) and, like human microvascular ECs (mvECs), cultured in a defined co-culture medium in mono-culture or in indirect or direct co-culture for 14 days. The defined co-culture medium was superior to mono-culture media and facilitated the functional maintenance and maturation of diffASCs, including perilipin A expression, lipid accumulation, and glycerol and leptin release. The medium also allowed mvEC maintenance, confirmed by the expression of CD31 and von Willebrand factor (vWF), and by acetylated low-density lipoprotein (acLDL) uptake. Thereby, mvECs showed a strong dependence on EC-specific factors. Additionally, mvECs formed vascular structures in direct co-culture with diffASCs.
The completely defined co-culture system allows for the serum-free culture of adipocyte/EC co-cultures and thereby represents a valuable and ethically acceptable tool for the culture and study of vascularized adipose tissue models.
So far, only a few authors have addressed the serum-free, defined differentiation of adipocytes, and there are hardly any studies available on the defined maintenance of adipocytes. This study aimed to develop a defined culture medium for the adipogenic differentiation of primary human adipose-derived stem cells (ASCs). Based on the addition of specific factors to replace serum, ASCs were differentiated into viable and characteristic adipocytes over 14 days, as proven by the accumulation of lipids, the expression of perilipin A and the release of leptin and glycerol. Furthermore, a defined maintenance medium was developed that supported the maturation and stability of the cells over a long-term period of an additional 42 days, until day 56.
Increasing flexibility, greater transparency and faster adaptability play a key role in the development of future intralogistics. Ever-changing environmental conditions require easy extensibility and modifiability of existing bin systems. This research project explores approaches to transfer the Internet of Things (IoT) paradigm to intralogistics, which allows a synchronization of the material and information flows. The bin is enabled by the implementation of adequate hardware and software components to capture, store, process and forward data to selected system subscribers. Monitoring the processes in intralogistics by means of the smart bin system ensures the implementation of appropriate actions in case of defined deviations. Using exploratory expert interviews with representatives from the automotive and pharmaceutical industries, seven practical application scenarios were defined. On this basis, the requirements of smart bin systems were examined. For each individual application case, a system model was created in order to obtain an overview of the system components and thus reveal similarities and differences. Based on the similarities of the system models, a general requirement profile was derived. After the hardware components of the bin system had been determined, a utility analysis was carried out to find adequate IoT software. The utility analysis was conducted with a focus on data acquisition and data transfer, data storage, data analysis, data presentation as well as authorization management and data security. The results show that there is great interest in easily expandable and modifiable bin systems, as in all cases the necessary information flow in the existing bin system has to be improved by means of new IoT hardware and software components.
Software is an integral part of new features in the automotive sector. Car manufacturers, organized in the Hersteller Initiative Software (HIS) consortium, defined metrics to determine software quality. Yet, problems with assigning metrics to quality attributes often occur in practice. The specified boundary values lead to discussions between contractors and clients, as different standards and metric sets are used. This paper studies metrics used in the automotive sector and the quality attributes they address. The HIS, ISO/IEC 25010:2011, and ISO/IEC 26262:2018 are utilized to draw a big picture illustrating (i) which metrics and boundary values are reported in the literature, (ii) how the metrics match the standards, (iii) which quality attributes are addressed, and (iv) how the metrics are supported by tools. Our findings from analyzing 38 papers include a catalog of 112 metrics, of which 17 define boundary values and 48 are supported by tools. Most of the metrics are concerned with source code, are generic, and are not specifically designed for automotive software development. We conclude that many metrics exist, but a clear definition of the metrics' context, notably regarding the construction of flexible and efficient measurement suites, is missing.
This paper presents the concept of the system architecture of a flexible cyber-physical factory control system. The system allows the automation of process structures using cyber-physical fractal nodes. These nodes have a functional and independent form and can be clustered to larger structures. This makes it possible to equip the factory with a flexible, freely scalable, modular system. The description of this system architecture and the associated rules and conditions is outlined in the concept.
Rapidly growing data volumes push today's analytical systems close to the feasible processing limit. Massive parallelism is one possible way to reduce the computational time of analytical algorithms. However, data transfer becomes a significant bottleneck, since moving data to code blocks system resources. Technological advances make it economical to place compute units close to storage and perform data processing operations close to the data, minimizing data transfers and increasing scalability. Hence the principle of Near-Data Processing (NDP) and the shift towards code-to-data. In the present paper we claim that the development of NDP system architectures will become inevitable in the future. Analytical DBMS like HPE Vertica offer multiple points of impact with major advantages, which are presented in this paper.
Near-data processing in database systems on native computational storage under HTAP workloads
(2022)
Today's Hybrid Transactional and Analytical Processing (HTAP) systems tackle ever-growing data volumes in combination with a mixture of transactional and analytical workloads. While optimizing for aspects such as data freshness and performance isolation, they build on the traditional data-to-code principle and may trigger massive cold-data transfers that impair the overall performance and scalability. Firstly, in this paper we show that Near-Data Processing (NDP) naturally fits in the HTAP design space. Secondly, we propose an NDP database architecture, allowing transactionally consistent in-situ executions of analytical operations in HTAP settings. We evaluate the proposed architecture in state-of-the-art key/value stores and multi-versioned DBMS. In contrast to traditional setups, our approach yields robust, resource- and cost-efficient performance.
Modern persistent Key/Value stores are designed to meet the demand for high transactional throughput and high data ingestion rates. Still, they rely on a backwards-compatible storage stack and abstractions to ease space management, foster seamless proliferation and simplify system integration. Their dependence on the traditional I/O stack has a negative impact on performance, causes unacceptably high write-amplification, and limits storage longevity.
In the present paper we present NoFTL-KV, an approach that results in a lean I/O stack, integrating physical storage management natively into the Key/Value store. NoFTL-KV eliminates backwards compatibility, allowing the Key/Value store to directly consume the characteristics of modern storage technologies. NoFTL-KV is implemented under RocksDB. The performance evaluation under LinkBench shows that NoFTL-KV improves transactional throughput by 33%, while response times improve by up to 2.3x. Furthermore, NoFTL-KV reduces write-amplification 19x and improves storage longevity by approximately the same factor.
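The claim that a 19x lower write-amplification improves longevity by roughly the same factor follows from a simple endurance estimate: the amount of host data a flash device can absorb before wear-out scales inversely with write-amplification. A back-of-the-envelope sketch (function name and parameter values are illustrative, not from the paper):

```python
def device_lifetime_writes(pe_cycles, capacity_bytes, write_amp):
    """Total host bytes a flash device can absorb before wear-out:
    rated program/erase cycles times capacity, divided by the
    write-amplification factor of the storage stack."""
    return pe_cycles * capacity_bytes / write_amp
```

Halving (or here, dividing by 19) the write-amplification therefore directly multiplies the usable device lifetime by the same factor, all else being equal.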
Over the last decades, a tremendous shift toward using information technology in almost every daily routine of our lives can be perceived in our society, entailing an incredible growth of data collected day by day in Web, IoT, and AI applications.
At the same time, magneto-mechanical HDDs are being replaced by semiconductor storage such as SSDs, equipped with modern Non-Volatile Memories, like Flash, which yield significantly faster access latencies and higher levels of parallelism. Likewise, the execution speed of processing units increased considerably as nowadays server architectures comprise up to multiple hundreds of independently working CPU cores along with a variety of specialized computing co-processors such as GPUs or FPGAs.
However, the burden of moving the continuously growing data to the best fitting processing unit is inherently linked to today’s computer architecture that is based on the data-to-code paradigm. In the light of Amdahl's Law, this leads to the conclusion that even with today's powerful processing units, the speedup of systems is limited since the fraction of parallel work is largely I/O-bound.
Therefore, throughout this cumulative dissertation, we investigate the paradigm shift toward code-to-data, formally known as Near-Data Processing (NDP), which relieves the contention on the I/O bus by offloading processing to intelligent computational storage devices, where the data is originally located.
Firstly, we identified Native Storage Management as the essential foundation for NDP due to its direct control of physical storage management within the database. Upon this, the interface is extended to propagate address mapping information and to invoke NDP functionality on the storage device. As the former can become very large, we introduce Physical Page Pointers as one novel NDP abstraction for self-contained immutable database objects.
Secondly, the on-device navigation and interpretation of data are elaborated. Therefore, we introduce cross-layer Parsers and Accessors as another NDP abstraction that can be executed on the heterogeneous processing capabilities of modern computational storage devices. Thereby, the compute placement and resource configuration per NDP request are identified as major performance criteria. Our experimental evaluation shows an improvement in execution durations of 1.4x to 2.7x compared to traditional systems. Moreover, we propose a framework for the automatic generation of Parsers and Accessors on FPGAs to ease their application in NDP.
Thirdly, we investigate the interplay of NDP and modern workload characteristics like HTAP. Therefore, we present different offloading models and focus on an intervention-free execution. By propagating the Shared State with the latest modifications of the database to the computational storage device, it is able to process data with transactional guarantees. Thus, we extend the design space of HTAP with NDP by providing a solution that optimizes for performance isolation, data freshness, and the reduction of data transfers. In contrast to traditional systems, we experience no significant drop in performance when an OLAP query is invoked, but rather a steady throughput that is 30% faster.
Lastly, in-situ result-set management and consumption as well as NDP pipelines are proposed to achieve flexibility in processing data on heterogeneous hardware. As these produce final and intermediary results, we continue investigating their management and identify that an on-device materialization comes at a low cost but enables novel consumption modes and reuse semantics. Thereby, we achieve significant performance improvements of up to 400x by reusing once-materialized results multiple times.
The field of breath analysis has attracted growing interest for medical diagnosis and patient monitoring. Its main advantages are that it is noninvasive, painless, and repeatable at flexible intervals. Even though breath analysis has been researched for several decades, many questions remain unanswered. Human breath contains volatile organic compounds emitted from inside the body. Some of these compounds can be assigned to specific sources, such as inflammation or cancer, but also to non-health-related origins. This paper gives an overview of breath analysis for the purpose of disease diagnosis and health monitoring. To this end, the literature on breath analysis in the medical field has been analyzed, from its early stages to the present.
Human retinal pigment epithelial (RPE) cells express the transmembrane Ca2+-dependent Cl− channel bestrophin-1 (hBest1) of the plasma membrane. Mutations in the hBest1 protein are associated with the development of distinct pathological conditions known as bestrophinopathies. The interactions between hBest1 and plasma membrane lipids (cholesterol (Chol), 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and sphingomyelin (SM)) determine its lateral organization and surface dynamics, i.e., their miscibility or phase separation. Using the surface pressure/mean molecular area (π/A) isotherms, hysteresis and compressibility moduli (Cs−1) of hBest1/POPC/Chol and hBest1/SM/Chol composite Langmuir monolayers, we established that the films are in a liquid-expanded (LE) or mixed liquid-expanded/liquid-condensed (LE-LC) state, that the components are well mixed, and that Ca2+ ions have a condensing effect on the surface molecular organization. Cholesterol causes a decrease in the elasticity of both films and a decrease in the ΔGmixπ values (reduction of phase separation) of hBest1/POPC/Chol films. For the hBest1/SM/Chol monolayers, the negative values of ΔGmixπ are retained and equalized with the ΔGmixπ values of the hBest1/POPC/Chol films. Shifts in phase separation/miscibility by cholesterol can lead to changes in the structure and localization of hBest1 in the lipid rafts and in its channel functions.
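For reference, the compressibility modulus invoked above is the standard quantity derived from the π/A isotherm (the textbook definition, not a relation specific to this study):

```latex
C_s^{-1} = -A \left( \frac{\partial \pi}{\partial A} \right)_T
```

Higher Cs−1 values indicate a more condensed, less compressible film; by convention, values of roughly 12.5–50 mN/m are associated with the liquid-expanded (LE) phase and values above about 100 mN/m with the liquid-condensed (LC) phase.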
Human bestrophin-1 protein (hBest1) is a transmembrane channel associated with the calcium-dependent transport of chloride ions in the retinal pigment epithelium, as well as with the transport of glutamate and GABA in nerve cells. Interactions between hBest1, sphingomyelins, phosphatidylcholines and cholesterol are crucial for hBest1 association with cell membrane domains and for its biological functions. As cholesterol plays a key role in the formation of lipid rafts, the motional ordering of lipids and the modeling/remodeling of the lateral membrane structure, we examined the effect of different cholesterol concentrations on the surface tension of hBest1/POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) and hBest1/SM Langmuir monolayers in the presence/absence of Ca2+ ions, using surface pressure measurements and Brewster angle microscopy. Here, we report that cholesterol: (1) has a negligible condensing effect on pure hBest1 monolayers, detected mainly in the presence of Ca2+ ions; and (2) induces a condensing effect on composite hBest1/POPC and hBest1/SM monolayers. These results offer evidence for the significance of intermolecular protein–lipid interactions for the conformational dynamics of hBest1 and its biological functions as a multimeric ion channel.
A considerable share of car accidents can be attributed to driver drowsiness. Several approaches to preventing fatigue-related accidents already exist, such as analyzing driving behavior. Within the IoT lab of the Human Centered Computing master's program at Reutlingen University, various driver assistance systems are being developed and tested to prevent accidents caused by drowsiness. This work addresses drowsiness detection via computer vision (CV) and the electrocardiogram (ECG). In this paper, CV-based drowsiness detection at the wheel is implemented using the open-source libraries OpenCV and Dlib on the Nvidia Jetson Nano embedded PC. ECG-based drowsiness is detected via the heartbeat and heart rate variability. In addition, an interface between CV and ECG was developed to merge the detection-relevant data from the Python scripts for CV-based and ECG-based drowsiness detection. These data are then evaluated to produce an overall result.
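CV-based drowsiness detection with Dlib landmarks typically rests on the eye aspect ratio (EAR): the ratio of vertical to horizontal eye-landmark distances drops sharply when the eye closes. A minimal sketch of that computation, assuming six eye landmarks in the usual Dlib ordering p1..p6; the landmark coordinates and the 0.25 threshold below are illustrative, not taken from this paper:

```python
import math

def eye_aspect_ratio(p):
    """EAR over six eye landmarks p[0]..p[5] (Dlib ordering p1..p6):
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Low values indicate a closed eye."""
    vertical = math.dist(p[1], p[5]) + math.dist(p[2], p[4])
    horizontal = math.dist(p[0], p[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmark positions of an open eye (pixel coordinates).
open_eye = [(0, 0), (1, 2), (5, 2), (6, 0), (5, -2), (1, -2)]
ear = eye_aspect_ratio(open_eye)   # 8 / 12, i.e. about 0.667
drowsy = ear < 0.25                # common (illustrative) threshold
```

In a full pipeline, the landmarks come from Dlib's shape predictor on each OpenCV video frame, and drowsiness is flagged only when the EAR stays below the threshold for several consecutive frames.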
Information technology (IT) plays an essential role in organizational innovation adoption. As such, IT governance (ITG) is paramount in steering IT to enable innovation. However, the traditional concept of ITG, which controls the formulation and implementation of IT strategy, is not fully equipped to deal with the changes occurring in the digital age. Today's ITG needs an agile approach that can respond to changing dynamics. Consequently, companies are relying heavily on agile strategies to secure better company performance. This paper aims to clarify how organizations can implement agile ITG. To do so, this study conducted 56 qualitative interviews with professionals from the banking industry to identify agile dimensions within the governance construct. The qualitative evaluation uncovered 46 agile governance dimensions. These dimensions were then rated by 29 experts to identify the most effective ones, leading to the identification of six structural elements, eight processes, and eight relational mechanisms.
With significant advancements in digital technologies, firms find themselves competing in an increasingly dynamic business environment. It is of paramount importance that organizations establish proper governance mechanisms for their business and IT strategies. IT governance (ITG) has therefore become an important factor for firm performance. In recent years, agility has evolved into a core concept for governance, especially in the area of software development. However, the impact of agility on ITG and firm performance has not been analyzed by the broad scientific community. This paper focuses on the question of how the concept of agility affects the ITG–firm performance relationship. The conceptual model for this question was tested in a quantitative research process with 400 executives responding to a standardized survey. Findings show that adapting agile principles, values, and best practices to the context of ITG leads to meaningful results for governance, business/IT alignment, and firm performance.
Digital transformation has changed corporate reality and, with that, firms' IT environments and IT governance (ITG). As such, the perspective of ITG has shifted from the design of a relatively stable, closed, and controllable system of a self-sufficient enterprise to a relatively fluid, open, agile, and transformational system of networked co-adaptive entities. In light of this paradigm shift in ITG, this paper aims to clarify how the concept of an effective ITG framework has changed in terms of the demand for agility in organizations. To this end, this study conducted 33 qualitative interviews with executives and senior managers from the banking industry in Germany, Switzerland and Austria. Analysis of the interviews focused on forming categories and assigning individual text passages (codings) to these categories, allowing a quantitative evaluation of the codings per category. Regarding traditional and agile ITG dimensions, 22 traditional and 25 agile dimensions in terms of structures, processes and relational mechanisms were identified. Moreover, agile strategies within the agile ITG construct and ten ITG patterns were identified from the interview data. The data reveal relevant perspectives on the implementation of traditional and new ITG dimensions and highlight ambidextrous aspects of ITG in the German-speaking banking industry.