Different network architectures are being used to build remote laboratories. Historically, it has been difficult to integrate industrial control systems with higher-level IT systems such as enterprise resource planning (ERP), manufacturing execution systems (MES), and manufacturing operations management (MOM). Getting these systems to communicate with one another has proven difficult due to the absence of shared protocols between them. The Open Platform Communications Unified Architecture (OPC UA) protocol was introduced as a remedy for this issue and is gaining popularity, but what if open-source protocols that are widely used in the IT industry could be used instead? This paper presents the development of an IT architecture for a cyber-physical industrial control systems laboratory that enables seamless interconnection and integration of its elements. The architecture utilises Node-RED, an open-source programming platform developed by IBM that focuses on making it simple to link physical components, APIs, and web services. This cyber-physical laboratory is intended for learning the principles of an industrial cascaded process control factory. Finally, this paper also discusses future work relating to digital twins (DT). A coupled-tank system is selected as a teaching factory to illustrate a range of fluid control applications in a typical chemical process factory.
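The open IT protocols the paper advocates typically pair hierarchical topics with JSON payloads, as in MQTT. The sketch below illustrates that style for a coupled-tank level reading; the topic scheme and field names are assumptions for illustration, not taken from the laboratory's actual architecture.

```python
import json

def tank_telemetry(lab_id, tank_id, level_cm, inflow_lpm):
    """Build an (topic, payload) pair for publishing one tank reading
    over an MQTT-style open protocol (hypothetical naming scheme)."""
    # Hierarchical topic: lab / <lab id> / tank / <tank id> / telemetry
    topic = f"lab/{lab_id}/tank/{tank_id}/telemetry"
    # JSON payload with sorted keys for reproducible messages
    payload = json.dumps(
        {"level_cm": round(level_cm, 2), "inflow_lpm": round(inflow_lpm, 2)},
        sort_keys=True,
    )
    return topic, payload

topic, payload = tank_telemetry("cps-lab", 1, 42.368, 3.5)
# topic -> "lab/cps-lab/tank/1/telemetry"
```

Any MQTT client (or Node-RED's built-in MQTT nodes) could then publish such a pair, which is what makes this style attractive for linking plant-floor devices to IT systems.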
With increasing dynamics in the research environment, driven by the digitalisation of product development, the technical requirements on future decision-making processes are rising alongside their complexity. The introduction of new IT systems for automating decisions entails adjustments to companies' current business processes. For a successful implementation of new IT information tools, possible effects on existing user systems must be examined closely in advance. New technologies, AI information systems, and new knowledge in general often emerge in science through the interpretation and synthesis of existing knowledge. For this reason, the quality of literature analyses is becoming increasingly relevant in engineering and computer science. Alongside the growing number of publications, the effort required for a structured literature analysis (SLA) is also increasing. In this paper, the authors present the research process and the results of an SLA. This work aims to determine the current state of research on decision support in product development at small and medium-sized enterprises as well as large companies in the automotive industry and, after analysis and evaluation, to identify possible research gaps concerning automated decision support systems (aEUS).
Impact of a large distribution network on radiation characteristics of planar spiral antenna arrays
(2023)
Designing antenna arrays with a central feed point has gained ground in antenna engineering. This approach, which is usually applied because of manufacturing costs, is difficult to achieve and leads to a large feeding network, the impact of which is numerically investigated in the present work. Upon comparing three different antennas, it is shown that the enlargement of the feed strongly affects the antenna's overall dimensions and its radiation characteristics. The antenna with the plug-in solution is not only smaller in size but also performs better than the antennas with a central feed point. Considering the high effort involved in designing a feed network with a central point and the influence of the resulting enlarged network on the dimensions and radiation characteristics of the antenna, the cost saving in production can be put into perspective.
Advancing mental health diagnostics: AI-based method for depression detection in patient interviews
(2023)
In this paper, we present a novel artificial intelligence (AI) application for depression detection, using advanced transformer networks to analyse clinical interviews. By incorporating simulated data to enhance traditional datasets, we overcome limitations in data protection and privacy, consequently improving the model’s performance. Our methodology employs BERT-based models, GPT-3.5, and ChatGPT-4, demonstrating state-of-the-art results in detecting depression from linguistic patterns and contextual information that significantly outperform previous approaches. Utilising the DAIC-WOZ and Extended-DAIC datasets, our study showcases the potential of the proposed application in revolutionising mental health care through early depression detection and intervention. Empirical results from various experiments highlight the efficacy of our approach and its suitability for real-world implementation. Furthermore, we acknowledge the ethical, legal, and social implications of AI in mental health diagnostics. Ultimately, our study underscores the transformative potential of AI in mental health diagnostics, paving the way for innovative solutions that can facilitate early intervention and improve patient outcomes.
In recent years, the demand for accurate and efficient 3D body scanning technologies has increased, driven by the growing interest in personalised textile development and health care. This position paper presents the implementation of a novel 3D body scanner that integrates multiple RGB cameras and image stitching techniques to generate detailed point clouds and 3D mesh models. Our system significantly enhances the scanning process, achieving higher resolution and fidelity while reducing the cost, time, and effort required for data acquisition and processing. Furthermore, we evaluate the potential use cases and applications of our 3D body scanner, focusing on the textile technology and health sectors. In textile development, the 3D scanner contributes to bespoke clothing production, allowing designers to construct made-to-measure garments, thus minimising waste and enhancing customer satisfaction through well-fitting clothing. In mental health care, the 3D body scanner can be employed as a tool for body image analysis, providing valuable insights into the psychological and emotional aspects of self-perception. By exploring the synergy between the 3D body scanner and these fields, we aim to foster interdisciplinary collaborations that drive advancements in personalisation, sustainability, and well-being.
Patterns are virtually simulated in 3D CAD programs before production to check the fit. However, achieving lifelike representations of human avatars, especially regarding soft tissue dynamics, remains challenging. This is mainly because conventional avatars in garment CAD programs are simulated with a continuous hard surface that does not correspond to the physical and mechanical properties of human soft tissue. In the real world, the human body's natural shape is affected by the contact pressure of tight-fitting textiles. To verify the fit of a simulated garment, the interactions between the individual body shape and the garment must be considered. This paper introduces an innovative approach to digitising the softness of human tissue using 4D scanning technology. The primary objective of this research is to explore the interactions between tissue softness and different compression levels of apparel, exerting pressure on the tissue to capture the changes in the natural shape. Therefore, to generate data and model an avatar with soft body physics, it is essential to capture the deformability and elasticity of the soft tissue and map it into the modification options for a simulation. To achieve this, various methods from different fields were researched and compared, with 4D scanning evaluated as the most suitable method for capturing tissue deformability in vivo. In particular, it should be considered that the human body has different deformation capabilities depending on age, muscle mass, and body fat. In addition, different tissue zones have different mechanical properties, so it is essential to identify and classify them in order to reproduce these properties in the simulation. It has been shown that, by digitising the data obtained at the defined applied pressure levels, the tissue deformation of a specific person can be predicted.
As technology advances and data sets grow, this approach has the potential to reshape how we verify fit digitally with soft avatars and leverage their realistic soft tissue properties for various practical purposes.
Analog integrated circuit sizing still relies heavily on human expert knowledge, as previous automation approaches have not found widespread acceptance in industry. One strand, optimization-based automation, is often discarded due to inflated constraining setups, infeasible results, or excessive run times. To address these deficits, this work proposes an alternative optimization flow that captures a designer's intuition for feasible design spaces by integrating expert knowledge based on the gm/ID-method. Moreover, the extensive run times of simulation-based optimization flows are overcome by incorporating computationally efficient machine learning methods. Neural network surrogate models predicting eleven performance parameters increase the evaluation speed by 3,400× on average compared to a simulator. Additionally, they enable the use of optimization algorithms that depend on automatic differentiation, which would otherwise be unavailable in this field. First, an up to 4× more efficient way of sampling training data based on the aforementioned design space is detailed. After presenting the architecture and training effort of the surrogate models, they are employed as part of the objective function for sizing three operational amplifiers with three different optimization algorithms. Additionally, the benefits of using the gm/ID-method become evident when considering technology migration, as previously found solutions may be reused for other technologies.
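The gm/ID step of such a flow can be illustrated with back-of-the-envelope arithmetic: pick an inversion level (gm/ID), derive the required gm from a gain-bandwidth spec, and obtain the width from a current-density lookup. The single-pole relation gm = 2π·GBW·C_L and the numeric values below are textbook placeholders, not figures from the paper; a real flow would use technology lookup tables or the surrogate models described.

```python
import math

def size_input_pair(gbw_hz, c_load_f, gm_over_id, id_over_w_a_per_m):
    """Illustrative gm/ID sizing of an OTA input pair (placeholder numbers)."""
    gm = 2 * math.pi * gbw_hz * c_load_f   # transconductance from GBW spec
    i_d = gm / gm_over_id                  # drain current for the chosen inversion level
    w = i_d / id_over_w_a_per_m            # width from a current-density lookup value
    return gm, i_d, w

# 10 MHz GBW into 1 pF, gm/ID = 15 1/V, current density 5 A/m (all assumed)
gm, i_d, w = size_input_pair(10e6, 1e-12, 15.0, 5.0)
```

The appeal for technology migration is visible here: the gm/ID choice is technology-independent, and only the current-density lookup changes when the process does.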
This article presents a modified method of performing power flow calculations as an alternative to purely energy-based simulations of off-grid hybrid systems. The enhancement consists of transforming the scenario-based power flow method into a discrete time-dependent algorithm that includes bus and controller dynamics.
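A minimal sketch of the time-stepped idea, assuming a single bus with PV generation, a load, and a simple battery controller; the dispatch rule and all names are illustrative and not the paper's algorithm.

```python
def simulate_offgrid(pv_kw, load_kw, soc_kwh, cap_kwh, dt_h=1.0):
    """Time-stepped power balance at one bus with a battery (illustrative)."""
    unmet = []
    for p_pv, p_load in zip(pv_kw, load_kw):
        surplus = p_pv - p_load
        if surplus >= 0:
            # Charge the battery with the surplus, limited by capacity
            soc_kwh = min(cap_kwh, soc_kwh + surplus * dt_h)
            unmet.append(0.0)
        else:
            # Discharge within the state-of-charge limit; track any deficit
            discharge = min(-surplus * dt_h, soc_kwh)
            soc_kwh -= discharge
            unmet.append(-surplus - discharge / dt_h)
    return soc_kwh, unmet
```

Evaluating the balance step by step, instead of aggregating energy over a whole scenario, is what allows controller behaviour (here, the charge/discharge rule) to enter the calculation at all.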
Online portal "MINTFabrik"
(2023)
The browser-based online portal "MINTFabrik" was created as part of the measures to reduce learning deficits, with the idea of closing a gap that often exists in large online bridging courses: a lack of practice exercises that are quickly accessible, easy to select, and well tailored to specific courses and their requirements. It was developed in a cooperation between Hochschule Reutlingen and the Tübingen software company "Let´s Make Sense GmbH". The portal deliberately dispenses with a lesson structure and consists exclusively of individual learning items, i.e. video tutorials, VisuApps, and exercises, which can be reached via a convenient filtered search and worked on directly. A special feature of MINTFabrik is its micro-courses, which can be created by teachers and students. These are small units made up of a few items that can be combined with one another as desired.
Most question-answering (QA) systems rely on training data to reach their optimal performance. However, acquiring training data for supervised systems is both time-consuming and resource-intensive. To address this, in this paper we propose TFCSG, an unsupervised similar-question retrieval approach that leverages pre-trained language models and multi-task learning. Firstly, topic keywords in question sentences are extracted sequentially based on a latent topic-filtering algorithm to construct an unsupervised training corpus. Then, a multi-task learning method is used to build the question retrieval model. Three tasks are designed: a short-sentence contrastive learning task; a task judging the similarity between a question sentence and its corresponding topic sequence; and a task generating the corresponding topic sequence from a question sentence. The three tasks are used to train the language model in parallel. Finally, similar questions are obtained by calculating the cosine similarity between sentence vectors. Comparison experiments on public question datasets show that TFCSG outperforms the comparative unsupervised baseline methods, and it requires no manual labelling, which greatly saves human resources.
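The final retrieval step, cosine similarity between sentence vectors, can be sketched in plain Python; the embedding vectors themselves would come from the trained language model, and the function names here are illustrative.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length sentence embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_similar(query_vec, candidate_vecs):
    """Return candidate indices sorted by descending similarity to the query."""
    scores = [cosine_similarity(query_vec, c) for c in candidate_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

In practice this ranking runs over precomputed embeddings of the question bank, so retrieval reduces to one embedding call for the query plus the similarity pass.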