The efficiency of pharmaceutical research and development (R&D), reflected in rising costs, long timelines, and low probabilities of technical and regulatory success, has decreased continuously in recent years. Today, the cost of discovering and developing a new drug is enormous, exceeding USD 2 billion per new molecular entity (NME), while the average overall probability that a research project yields an NME is in the single-digit percentage range, and the total R&D timeline easily exceeds 10 years, calling the return on investment (ROI) of pharmaceutical R&D into question. As a consequence, and also driven by numerous patent expirations of blockbuster drugs that increased the pressure to restore an acceptable ROI, the pharmaceutical industry has addressed this challenge and its causes and identified several actions that need to be taken to increase the output/input ratio of R&D. This book chapter reviews the pipeline sizes and R&D investments of multinational pharmaceutical companies, describes new processes that have been implemented to increase the reach and reduce the costs of pharmaceutical R&D, and illustrates new innovation models that were developed to increase R&D efficiency.
More than any other sensory organ, the ear is responsible for human speech and its development, and within the smallest of spaces it constitutes an anatomically and physiologically unique structure. It impresses above all through its large dynamic range.
While the just-audible sound pressure at the hearing threshold is about 20 μPa, pressures and amplitudes can be increased roughly two-million-fold up to the threshold of pain. Yet such physiological sound pressures are still vanishingly small compared to the static air-pressure fluctuations that also act on the ear, as they occur, for example, when climbing stairs, riding a train or car, flying, or blowing one's nose. Although these lead to altered perception, they do not damage the ear. This ability to perceive tiny physiological sound pressures despite large static pressure fluctuations in the environment is mainly due to the nonlinear, viscoelastic properties of the joints and ligaments of the middle ear and of the eardrum.
The aim of this work is to reproduce this nonlinear behavior of the middle ear under large loads and large displacements using nonlinear, spatial, mechanically based surrogate models. To characterize the nonlinear properties of the ossicular chain, static and dynamic measurements are performed on human temporal bones. Particular attention is paid to characterizing the nonlinear, viscoelastic properties of the annular ligament, the eardrum, the eardrum tendon, and the two middle-ear joints. With regard to clinical practice, measurement results are used to discuss the risk of damage in stapes surgery, and the effects of pretension in the middle-ear apparatus are examined using the active implant Carina as an example.
nKV in action: accelerating KV-stores on native computational storage with near-data processing
(2020)
Massive data transfers in modern data-intensive systems, resulting from low data-locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, have yet to see widespread use.
In this paper we demonstrate various NDP alternatives in nKV, which is a key/value store utilizing native computational storage and near-data processing. We showcase the execution of classical operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4x-2.7x better performance due to NDP. nKV runs on real hardware - the COSMOS+ platform.
Massive data transfers in modern key/value stores resulting from low data-locality and data-to-code system design hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution, which although not new, have yet to see widespread use.
In this paper we introduce nKV, a key/value store utilizing native computational storage and near-data processing. On the one hand, nKV can directly control data and computation placement on the underlying storage hardware. On the other hand, nKV propagates the data formats and layouts to the storage device, where software and hardware parsers and accessors are implemented. Both allow NDP operations to execute in a host-intervention-free manner, directly on physical addresses, and thus to better utilize the underlying hardware. Our performance evaluation is based on executing traditional KV operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4×-2.7× better performance on real hardware – the COSMOS+ platform.
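The data-movement argument behind nKV's in-situ execution can be caricatured in a few lines. The sketch below is a toy model under obvious assumptions — a Python dict stands in for device-resident storage and a value predicate stands in for a SCAN filter; the real system operates on physical flash addresses with hardware parsers and accessors:

```python
# Toy illustration of the NDP idea: instead of shipping every key/value pair
# to the host and filtering there, the predicate runs "on the device" and only
# matching pairs cross the interconnect. Store contents, predicate, and sizes
# are made up for illustration.

device_store = {f"key{i}": i * 10 for i in range(1000)}  # resident "on device"

def host_side_scan(predicate):
    """Classic design: transfer everything, then filter on the host."""
    transferred = list(device_store.items())          # full data movement
    return [kv for kv in transferred if predicate(kv[1])], len(transferred)

def ndp_scan(predicate):
    """NDP design: filter in-situ, transfer only the result set."""
    result = [kv for kv in device_store.items() if predicate(kv[1])]
    return result, len(result)                        # reduced data movement

pred = lambda v: v >= 9900
host_result, host_moved = host_side_scan(pred)
ndp_result, ndp_moved = ndp_scan(pred)
```

Both scans return the same result set, but the NDP variant moves two orders of magnitude fewer items in this toy setup — the same effect that drives the reported speedups.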
This article discusses the scientifically and industrially important problem of automating the process of unloading goods from standard shipping containers. We outline some of the challenges barring further adoption of robotic solutions to this problem, ranging from handling a vast variety of shapes, sizes, weights, appearances, and packing arrangements of the goods, through hard demands on unloading speed and reliability, to ensuring that fragile goods are not damaged. We propose a modular and reconfigurable software framework in an attempt to efficiently address some of these challenges. We also outline the general framework design and the basic functionality of the core modules developed. We present two instantiations of the software system on two different fully integrated demonstrators: 1) coping with an industrial scenario, i.e., the automated unloading of coffee sacks with an already economically interesting performance; and 2) a scenario used to demonstrate the capabilities of our scientific and technological developments in the context of medium- to long-term prospects of automation in logistics. We performed evaluations that allowed us to summarize several important lessons learned and to identify future directions of research on autonomous robots for the handling of goods in logistics applications.
Flash SSDs are omnipresent as database storage. HDD replacement is seamless since Flash SSDs implement the same legacy hardware and software interfaces to enable backward compatibility. Yet, the price paid is high as backward compatibility masks the native behaviour, incurs significant complexity and decreases I/O performance, making it non-robust and unpredictable. Flash SSDs are black-boxes. Although DBMS have ample mechanisms to control hardware directly and utilize the performance potential of Flash memory, the legacy interfaces and black-box architecture of Flash devices prevent them from doing so.
In this paper we demonstrate NoFTL, an approach that enables native Flash access and integrates parts of the Flash-management functionality into the DBMS, yielding a significant performance increase and a simplification of the I/O stack. NoFTL is implemented on real hardware based on the OpenSSD research platform. The contributions of this paper include: (i) a description of the NoFTL native Flash storage architecture; (ii) its integration in Shore-MT; and (iii) a performance evaluation of NoFTL on a real Flash SSD and on an online data-driven Flash emulator under TPC-B, TPC-C, TPC-E, and TPC-H workloads. The performance evaluation results indicate an improvement of at least 2.4x on real hardware over conventional Flash storage, as well as better utilisation of native Flash parallelism.
Modern persistent key/value stores are designed to meet the demand for high transactional throughput and high data ingestion rates. Still, they rely on a backwards-compatible storage stack and abstractions to ease space management, foster seamless proliferation, and simplify system integration. Their dependence on the traditional I/O stack has a negative impact on performance, causes unacceptably high write-amplification, and limits storage longevity.
In this paper we present NoFTL-KV, an approach that results in a lean I/O stack, integrating physical storage management natively into the key/value store. NoFTL-KV eliminates backwards compatibility, allowing the key/value store to directly exploit the characteristics of modern storage technologies. NoFTL-KV is implemented under RocksDB. The performance evaluation under LinkBench shows that NoFTL-KV improves transactional throughput by 33%, while response times improve by up to 2.3x. Furthermore, NoFTL-KV reduces write-amplification by 19x and improves storage longevity by approximately the same factor.
Sleep analysis using a polysomnography system is difficult and expensive, which is why we suggest a non-invasive and unobtrusive measurement. Very few people want cables or devices attached to their bodies during sleep. The proposed approach is to implement a monitoring system that does not bother the subject. The idea is therefore a non-invasive monitoring system based on detecting pressure distribution. This system should be able to measure, through the mattress, the pressure differences that occur during a single heartbeat and during breathing. The system consists of two blocks: signal acquisition and signal processing. The whole technology should be economical enough to be affordable for every user. As a result, preprocessed data is obtained for further detailed analysis, using different filters for heartbeat and respiration detection. In the initial filtering stage, Butterworth filters are used.
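The Butterworth filtering stage mentioned above can be sketched with a textbook second-order low-pass obtained via the bilinear transform. The cutoff and sampling rate below are illustrative assumptions, not the system's actual design values, and a real implementation would separate dedicated respiration and heartbeat bands:

```python
import math

# Sketch: 2nd-order Butterworth low-pass biquad (bilinear transform) applied
# sample-by-sample. fc and fs are hypothetical; the described system designs
# its filters for its own pressure-sensor data.

def butter2_lowpass(fc, fs):
    """Biquad coefficients (b, a) of a 2nd-order Butterworth low-pass."""
    k = math.tan(math.pi * fc / fs)
    norm = 1.0 / (1.0 + math.sqrt(2.0) * k + k * k)
    b0 = k * k * norm
    b = (b0, 2.0 * b0, b0)
    a = (2.0 * (k * k - 1.0) * norm, (1.0 - math.sqrt(2.0) * k + k * k) * norm)
    return b, a

def filtfwd(x, b, a):
    """Direct-form difference equation: y[n] = b·x[n..n-2] - a·y[n-1..n-2]."""
    y = []
    x1, x2, y1, y2 = 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

b, a = butter2_lowpass(fc=2.0, fs=100.0)   # keep slow pressure variations
dc = filtfwd([1.0] * 500, b, a)[-1]        # constant input -> gain ~1
hf = filtfwd([(-1.0) ** n for n in range(500)], b, a)[-1]  # Nyquist -> ~0
```

The two checks at the end show the intended behaviour: the slow (DC) component passes essentially unchanged while the fastest possible oscillation is suppressed, which is how heartbeat- and respiration-band content would be isolated from sensor noise.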
Respiratory diseases are leading causes of death and disability in the world. The recent COVID-19 pandemic also affects the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and a clinical environment. Most of the techniques used to date have also been invasive or expensive.
Some research groups are developing hardware devices and techniques to enable non-invasive or even remote respiratory sound acquisition. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, such as microphones, wearables, or Internet of Things devices, that can be used for this purpose.
Some interesting applications have been found. Some devices make sound acquisition easier in a clinical environment, while others enable daily monitoring outside that setting. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
Sleep studies can be used to assess sleep quality and general in-bed behavior. These results can be helpful for regulating sleep and recognizing different human sleep disorders. Compared to the leading standard measuring system, polysomnography (PSG), the system proposed in this work is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides, these methods not only reduce practicality due to the process of having to put them on, but they are also very expensive. The system proposed in this paper classifies respiration and body movement with only one type of sensor, and in a non-invasive way. The sensor used is a pressure sensor; it is low cost and can be used for commercial purposes. The system was tested in an experiment that recorded the sleep process of a subject. The recordings showed excellent results in the classification of breathing rate and body movements.
With the revision of the DIN EN 50173 (VDE 0800-173) series, the optical transmission link classes were, among other things, deleted without replacement. To close the resulting gap, the German committee DKE GUK 715.3 "Informationstechnische Verkabelung von Gebäudekomplexen" developed new classes, which were published in June 2019 in DIN VDE 0800-173-100 "Klassifizierung von Lichtwellenleiter-Übertragungsstrecken". The standard classifies optical fiber transmission links for application-neutral communication cabling systems according to DIN EN 50173-1.
It serves users by enabling a wide range of applications, simplifying the selection of the cabling system, generating a future-proof classification of fiber-optic cabling, and describing system requirements.
The classes defined in the standard describe the requirements for the transmission links and are based on a maximum permissible insertion loss in dB for maximum link lengths, with the bandwidth-length product additionally taken into account.
This contribution provides an overview of the standard and presents application examples.
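The two quantities such classes are built on — insertion loss over a link and the bandwidth-length product — follow standard link-budget arithmetic. The sketch below uses generic, assumed component values (attenuation coefficient, connector and splice losses, BLP); it does not reproduce any actual class limit from DIN VDE 0800-173-100:

```python
# Illustrative fiber-link budget check. All numeric values are hypothetical
# textbook figures, not limits taken from the standard.

def insertion_loss_db(length_km, atten_db_per_km, n_connectors, n_splices,
                      connector_loss_db=0.75, splice_loss_db=0.3):
    """Total link insertion loss: fiber attenuation plus connector/splice losses."""
    return (length_km * atten_db_per_km
            + n_connectors * connector_loss_db
            + n_splices * splice_loss_db)

def modal_bandwidth_mhz(blp_mhz_km, length_km):
    """Effective modal bandwidth of a multimode link from its bandwidth-length product."""
    return blp_mhz_km / length_km

# Example: 300 m of multimode fibre (3.0 dB/km assumed), 2 connectors, 1 splice
loss = insertion_loss_db(0.3, 3.0, 2, 1)       # dB over the whole link
bw = modal_bandwidth_mhz(2000.0, 0.3)          # MHz, assuming 2000 MHz*km BLP
```

A class in the sense described above would then pair a maximum permissible `loss` with a maximum length, and check the resulting `bw` against the application's bandwidth demand.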
Novel design for a coreless printed circuit board transformer realizing high bandwidth and coupling
(2019)
Rogowski coils offer galvanic isolation and can measure alternating currents with a high bandwidth. Coreless printed circuit board (PCB) transformers have been used as an alternative to limit the additional stray inductance when a Rogowski coil cannot be attached to the circuit. A new PCB transformer layout is proposed to reduce cost, decrease additional stray inductance, increase the bandwidth of current measurements, and simplify the integration into existing designs.
Near-Data Processing (NDP) is a key computing paradigm for reducing the ever-growing time and energy costs of data transport versus computation. With their flexibility, FPGAs are an especially suitable compute element for NDP scenarios. Even more promising is the exploitation of novel and future non-volatile memory (NVM) technologies for NDP, which aim to achieve DRAM-like latencies and throughputs while providing large-capacity non-volatile storage.
Experimentation in using FPGAs in such NVM-NDP scenarios has been hindered, though, by the fact that the NVM devices/FPGA boards are still very rare and/or expensive. It thus becomes useful to emulate the access characteristics of current and future NVMs using off-the-shelf DRAMs. If such emulation is sufficiently accurate, the resulting FPGA-based NDP computing elements can be used for actual full-stack hardware/software benchmarking, e.g., when employed to accelerate a database.
For this use, we present NVMulator, an open-source, easy-to-use hardware emulation module that can be seamlessly inserted between the NDP processing elements on the FPGA and a conventional DRAM-based memory system. We demonstrate that, with suitable parametrization, the emulated NVM can come very close to the performance characteristics of actual NVM technologies, specifically Intel Optane. We achieve 0.62% and 1.7% accuracy for cache-line-sized read and write accesses, respectively, while utilizing only 0.54% of LUT logic resources on a Xilinx/AMD AU280 UltraScale+ FPGA board. We consider both file-system and database access patterns, examining the operation of the RocksDB database when running on real or emulated Optane-technology memories.
Floor coverings made of porcelain stoneware and natural stone are polished, and microscopic depressions are created by laser treatment to ensure sufficient slip resistance indoors (slip-resistance rating groups R9 and R10). In laboratory tests, clear advantages in soiling behavior and cleaning were found for some materials. The aim was to investigate to what extent this actually enables, in practice, a reduction of the cleaning effort and the associated environmental impact, in contrast to other surface treatments that achieve rating group "R9". The surface treatment and the associated cleaning were to be optimized for minimal effort and minimal use of cleaning agents in routine cleaning and for the corresponding frequency of deep and intermediate cleaning. The goal was a reduction by up to a factor of 2. For this purpose, the spacing and size of the depressions were to be optimized and controlled for various typical and widely used floor tiles made of porcelain stoneware and natural stone. The dosages of the cleaning agents were to be reduced, starting from the current manufacturer specifications. The intervals between deep cleanings, which are associated with a high environmental impact, were to be extended. In parallel, a method for predicting and measuring soiling was to be developed. This method is used during development and, after the project, should be usable for optimizing other materials, e.g. PVC or polyolefin floors, and also serve as preparatory work for LEED certification of the floors. Until now there had been indications of cleaning advantages, but these were neither optimized nor verifiable, which is why the process had not yet become established. The market share for natural stone is < 10%, and far lower for porcelain stoneware.
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when using it outside the hospital or for daily use. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) using deep learning models that aim to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is placed on model explainability techniques, specifically on Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
There is no doubt that the amplification of channel integration towards an omnichannel structure is a powerful idea whose time has finally come. The digitally cross-linked world postulates all-encompassing, ubiquitous, and unobtrusive future services. In the concomitant, increasingly competitive market, retailers are starting to lay the foundation for omnichannel, meeting the expectations of a digitally savvy audience that wants its shopping experience to be as seamless and uncomplicated as possible. Nevertheless, recent research shows that there are still enough avenues for further research on omnichannel. Until now, the performance of companies has been considered by experts solely from a supplier's point of view. It would be rather interesting to find out whether the desire to meet increased customer expectations is also recognized by the customers themselves. This paper seeks to answer how purchasing behavior has changed and what customers demand. In addition, it elaborates the opportunities promoted by omnichannel. Having examined these effects, the paper arrives at a final step, where it can be assessed how the omnichannel performance of fashion and lifestyle retailers can be measured from a consumer's perspective by developing an exclusive index. The study is confined to four fashion and lifestyle retailers: Hugo Boss AG, Levi Strauss & Co, Pull and Bear, and COS. Using the scientific method of mystery shopping and a multi-item checklist of 54 key performance indicators, the paper examines to what extent the four selected retailers provide a seamless customer journey across the five decision-making phases.
The fiber deformations of once-dried, bleached and never-dried unbleached kraft pulps were studied with respect to their behavior in high- and low-consistency refining. The pulps were stained with congo red to experimentally highlight areas where the arrangement of the fibrils was altered by refining such as dislocated zones or slip planes. The stained fibers were analyzed with conventional Metso Fiberlab but also with a novel prototype measurement device utilizing a color imaging setup. The local intensity of the stain in the fiber was expressed as degree of overall damage (Overall fiber damage index, OFDI). The rewetted zero span tensile index (RWZSTI) was used to verify the OFDI with respect to the pulp strength. High consistency refining resulted in a clear increase in the number of kinks which negatively influenced the pulp strength. The OFDI which was used to detect the intensity of local fiber defects also responded accordingly. A higher OFDI resulted in a lower pulp strength. Low consistency refining removed a significant amount of kinks and resulted in an increase in fiber swelling. A slight increase in fibrillation and a significant increase in flake-like fines were also observed. The OFDI, however, was not reduced in low consistency refining as it would be expected by the removal of less severe dislocations. One reason proposed here is that low consistency refining created new fiber pores that allowed the dye to penetrate into the fiber wall similarly as it does in the zones of the dislocations.
This paper addresses what we call the investment question: under what plausible circumstances, if any, can variable renewable energy (VRE), and solar photovoltaic (PV) in particular, be a good investment? Although VRE has been growing rapidly world-wide, it is generally subsidized. Under what cost and market conditions can solar PV flourish without subsidy? We employ solar insolation and market price data from the U.S. and from Germany to gain insight into the investment question. We find that unsubsidized solar PV is, or may soon be, a justifiable investment, but that market arrangements may play a crucial role in determining success. We end by sketching a proposal that amounts to a reformed capacity market that would allow the participation of solar PV.
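The investment question ultimately reduces to a discounted cash-flow comparison. The toy calculation below uses invented round numbers (capital cost, yearly market revenue, discount rate, lifetime) purely to show the mechanics; it does not reflect the paper's U.S. or German data:

```python
# Toy net-present-value check for a PV project: an up-front investment
# followed by equal yearly market revenues. All figures are hypothetical.

def npv(capex, annual_cash_flow, rate, years):
    """NPV = -capex + sum of discounted yearly cash flows."""
    return -capex + sum(annual_cash_flow / (1 + rate) ** t
                        for t in range(1, years + 1))

project = npv(capex=1000.0, annual_cash_flow=90.0, rate=0.05, years=25)
viable = project > 0   # positive NPV -> investment is justifiable
```

Market arrangements enter this picture through `annual_cash_flow`: if midday prices collapse as PV penetration grows, the same plant earns less per year and the sign of the NPV can flip.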
Digitalization of products and services commonly causes substantial changes in the business models, operations, organizational structures, and IT infrastructures of enterprises. Motivated by experiences and observations from digitalization projects, this paper investigates the effects of digitalization on enterprise architectures (EA). EA models serve as representations of the business, information system, and technical aspects of an enterprise to support management and development. By comparing EA models before and after digitalization, the paper analyzes the kinds of changes visible in the EA model. The most important finding is that newly created digitized products and the associated product and enterprise architectures are no longer properly integrated into the overall architecture and even exist in parallel. Thus, the focus of this work is on exposing these parallel architectures and proposing derivations for better integration.
Context: Fast moving markets and the age of digitization require that software can be quickly changed or extended with new features. The associated quality attribute is referred to as evolvability: the degree of effectiveness and efficiency with which a system can be adapted or extended. Evolvability is especially important for software with frequently changing requirements, e.g. internet-based systems. Several evolvability-related benefits were arguably gained with the rise of service-oriented computing (SOC) that established itself as one of the most important paradigms for distributed systems over the last decade. The implementation of enterprise-wide software landscapes in the style of service-oriented architecture (SOA) prioritizes loose coupling, encapsulation, interoperability, composition, and reuse. In recent years, microservices quickly gained in popularity as an agile, DevOps-focused, and decentralized service-oriented variant with fine-grained services. A key idea here is that small and loosely coupled services that are independently deployable should be easy to change and to replace. Moreover, one of the postulated microservices characteristics is evolutionary design.
Problem Statement: While these properties provide a favorable theoretical basis for evolvable systems, they offer no concrete and universally applicable solutions. As with each architectural style, the implementation of a concrete microservice-based system can be of arbitrary quality. Several studies also report that software professionals trust in the foundational maintainability of service orientation and microservices in particular. A blind belief in these qualities without appropriate evolvability assurance can lead to violations of important principles and therefore negatively impact software evolution. In addition to this, very little scientific research has covered the areas of maintenance, evolution, or technical debt of microservices.
Objectives: To address this, the aim of this research is to support developers of microservices with appropriate methods, techniques, and tools to evaluate or improve evolvability and to facilitate sustainable long-term development. In particular, we want to provide recommendations and tool support for metric-based as well as scenario-based evaluation. In the context of service-based evolvability, we furthermore want to analyze the effectiveness of patterns and collect relevant antipatterns. Methods: Using empirical methods, we analyzed the industry state of the practice and the academic state of the art, which helped us to identify existing techniques, challenges, and research gaps. Based on these findings, we then designed new evolvability assurance techniques and used additional empirical studies to demonstrate and evaluate their effectiveness. Applied empirical methods were for example surveys, interviews, (systematic) literature studies, or controlled experiments.
Contributions: In addition to our analyses of industry practice and scientific literature, we provide contributions in three different areas. With respect to metric-based evolvability evaluation, we identified a set of structural metrics specifically designed for service orientation and analyzed their value for microservices. Subsequently, we designed tool-supported approaches to automatically gather a subset of these metrics from machine-readable RESTful API descriptions and via a distributed tracing mechanism at runtime. In the area of scenario-based evaluation, we developed a tool-supported lightweight method to analyze the evolvability of a service-based system based on hypothetical evolution scenarios. We evaluated the method with a survey (N=40) as well as hands-on interviews (N=7) and improved it further based on the findings. Lastly with respect to patterns and antipatterns, we collected a large set of service-based patterns and analyzed their applicability for microservices. From this initial catalogue, we synthesized a set of candidate evolvability patterns via the proxy of architectural modifiability tactics. The impact of four of these patterns on evolvability was then empirically tested in a controlled experiment (N=69) and with a metric-based analysis. The results suggest that the additional structural complexity introduced by the patterns as well as developers' pattern knowledge have an influence on their effectiveness. As a last contribution, we created a holistic collection of service-based antipatterns for both SOA and microservices and published it in a collaborative repository.
Conclusion: Our contributions provide first foundations for a holistic view on the evolvability assurance of microservices and address several perspectives. Metric- and scenario-based evaluation as well as service-based antipatterns can be used to identify "hot spots" while service-based patterns can remediate them and provide means for systematic evolvability construction. All in all, researchers and practitioners in the field of microservices can use our artifacts to analyze and improve the evolvability of their systems as well as to gain a conceptual understanding of service-based evolvability assurance.
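The metric-based evaluation described above operates on structural data such as a service dependency graph. The sketch below computes AIS/ADS-style coupling counts (how often a service is invoked vs. how many services it invokes) on a made-up four-service system; both the metric selection and the example graph are illustrative, not taken from the thesis:

```python
# Minimal sketch of metric-based evolvability evaluation: structural coupling
# counts over a hypothetical service dependency graph.

dependencies = {  # service -> set of services it calls (invented example)
    "web-shop-ui": {"catalog", "orders"},
    "orders": {"catalog", "payments"},
    "catalog": set(),
    "payments": set(),
}

def absolute_dependence(service):
    """ADS-style count: how many other services this service invokes."""
    return len(dependencies[service])

def absolute_importance(service):
    """AIS-style count: how many other services invoke this service."""
    return sum(1 for deps in dependencies.values() if service in deps)

coupling = {s: (absolute_importance(s), absolute_dependence(s))
            for s in dependencies}
```

A service with high importance and high dependence at once (a hub) would be flagged as an evolvability "hot spot": changes to it ripple to many consumers, and it is itself exposed to changes in many providers.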
Background: Design patterns are supposed to improve various quality attributes of software systems. However, there is controversial quantitative evidence of this impact. Especially for younger paradigms such as service- and microservice-based systems, there is a lack of empirical studies.
Objective: In this study, we focused on the effect of four service-based patterns - namely process abstraction, service façade, decomposed capability, and event-driven messaging - on the evolvability of a system from the viewpoint of inexperienced developers.
Method: We conducted a controlled experiment with Bachelor students (N = 69). Two functionally equivalent versions of a service-based web shop - one with patterns (treatment group), one without (control group) - had to be changed and extended in three tasks. We measured evolvability by the effectiveness and efficiency of the participants in these tasks. Additionally, we compared both system versions with nine structural maintainability metrics for size, granularity, complexity, cohesion, and coupling.
Results: Both experiment groups were able to complete a similar number of tasks within the allowed 90 min. Median effectiveness was 1/3. Mean efficiency was 12% higher in the treatment group, but this difference was not statistically significant. Only for the third task, we found statistical support for accepting the alternative hypothesis that the pattern version led to higher efficiency. In the metric analysis, the pattern version had worse measurements for size and granularity while simultaneously having slightly better values for coupling metrics. Complexity and cohesion were not impacted.
Interpretation: For the experiment, our analysis suggests that the difference in efficiency is stronger with more experienced participants and increased from task to task. With respect to the metrics, the patterns introduce additional volume in the system, but also seem to decrease coupling in some areas.
Conclusions: Overall, there was no clear evidence for a decisive positive effect of using service-based patterns, neither for the student experiment nor for the metric analysis. This effect might only be visible in an experiment setting with higher initial effort to understand the system or with more experienced developers.
On the influence of ground and substrate on the radiation characteristics of planar spiral antennas
(2022)
The unidirectional radiation of spiral antennas mounted on a substrate requires the presence of a ground plane. In this work, we successively illustrate the impact of the dielectric material and the ground plane on the key metrics of a planar equiangular spiral antenna (PESA). For this purpose, a PESA mounted on several substrates with different dielectric properties and thicknesses is modeled and simulated. We introduce the tertiary current flowing on the spiral arms when the antenna is backed by a ground plane.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible.
The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in a traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under NoFTL-KV and the COSMOS hardware platform.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible. The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in a traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under RocksDB and the COSMOS hardware platform.
In summary, we believe that current "sleep monitoring" consumer devices on the market must undergo a more robust validation process before being made available and distributed to the general public. This is especially noteworthy as there have been first reports in the literature that inaccurate feedback from such consumer devices can worry subjects and may even compromise the well-being of the user.
The proliferation of smart technologies transforms the way individual consumers perform tasks. Considerable research suggests that smart technologies are often related to domestic energy consumption. However, it remains unclear how such technologies transform tasks and thereby impact our planet. We explore the role of technological smartness in personal day-to-day tasks that help create a more sustainable future. In the absence of theory, but facing extensive changes in everyday life enabled by smart technologies, we draw on phenomenon-based theorizing (PBT) guidelines. As an anchor, we refer to task endogeneity related to task-technology fit theory (TTF). As an infusion, we employ theory on public goods. Our model proposes novel relations between the concepts of smart autonomy and smart transparency and sustainable task outcomes, mediated by task convenience and task significance. We discuss implications, limitations, and future research opportunities.
Software Process Improvement (SPI) programs have been implemented, inter alia, to improve the quality and speed of software development. SPI addresses many aspects, ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state of the art in SPI from a general perspective, we observed that Software Quality Management (SQM) is of particular relevance in SPI programs. In this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of SQM (including testing). From the main study's result set, 92 papers were selected for an in-depth systematic review to study the contributions and to develop an initial picture of how these topics are addressed in SPI. Our findings show a fairly pragmatic contribution set in which different solutions are proposed, discussed, and evaluated. Among other things, our findings indicate a certain reluctance towards standard quality or (test) maturity models and a strong focus on custom review, testing, and documentation techniques, whereas a set of five selected improvement measures is addressed almost equally.
Three different polyols (soluble starch, sucrose, and glycerol) were tested for their potential in the chemical modification of melamine formaldehyde (MF) resins for paper impregnation. MF-impregnated papers are widely used as finishing materials for engineered wood. These polyols were selected because the presence of multiple hydroxy groups in the molecules was suspected to facilitate cocondensation with the main MF framework, which should lead to good resin performance. Moreover, they are readily produced from natural feedstock, are available in large quantities, and may serve as economically feasible, environmentally harmless alternative co-monomers suitable to substitute a portion of the fossil-based starting material. In the presented work, a number of model resins were synthesized and tested for covalent incorporation of the natural polyol into the MF framework. Spectroscopic evidence of the chemical incorporation of glycerol was found by applying 1H, 13C, 1H/13C HSQC, 1H/13C HMBC, and 1H DOSY methods. It was furthermore found that covalent incorporation of glycerol into the network took place when glycerol was added at different stages during synthesis. Further, all resins were used to prepare decorative laminates, and the performance of the novel resins as surface finishes was evaluated using standard technological tests. The technological performance of the various modified thermosetting resins was assessed by determining flow viscosity, molar mass distribution, and storage stability, and, in a second step, by laminating impregnated paper onto particle boards and testing the resulting surfaces according to standardized quality tests. In most cases, the average board surface properties were of acceptable quality. Our findings demonstrate the possibility of replacing several percent of the petroleum-based product melamine by compounds obtained from renewable resources.
A software process is the game plan for organizing project teams and running projects. Yet it is still a challenge to select the appropriate development approach for the respective context. A multitude of development approaches compete for the users' favor, but there is no silver bullet serving all possible setups. Moreover, recent research as well as experience from practice shows that companies combine different development approaches to assemble the best-fitting approach for the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The paper at hand provides insights into the HELENA study, with which we aim to investigate the use of "Hybrid dEveLopmENt Approaches in software systems development". We present the survey design and initial findings from the survey's test runs. Furthermore, we outline the next steps towards the full survey.
Wave-like differential equations occur in many engineering applications. Here, the engineering setup is embedded into the framework of functional analysis of modern mathematical physics. After an overview, the Hilbert space approach to free Euler–Bernoulli bending vibrations of a beam in one spatial dimension is investigated. We analyze in detail the corresponding positive, self-adjoint differential operators of fourth order associated with the boundary conditions in statics. A comparison with the free vibration of a string is outlined.
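For orientation, the standard textbook form of the free Euler–Bernoulli beam equation and its associated fourth-order spatial operator (a generic sketch, not taken from the paper itself) can be written as:

```latex
% Free Euler--Bernoulli bending vibrations of a beam (standard form):
\rho A \,\frac{\partial^2 u}{\partial t^2}
  + E I \,\frac{\partial^4 u}{\partial x^4} = 0 ,
  \qquad 0 < x < L .
% The associated spatial operator on L^2(0,L) is of fourth order,
(\mathcal{A}u)(x) = \frac{EI}{\rho A}\, u''''(x),
% and becomes positive and self-adjoint once boundary conditions
% (e.g. clamped: u(0)=u'(0)=u(L)=u'(L)=0) fix its domain.
```

The paper's contribution lies in the detailed operator-theoretic analysis of such operators under the static boundary conditions.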
In many automotive applications, repetitive self-heating is the most critical operating condition for LDMOS transistors in smart power ICs. This is attributed to thermomechanical stress in the on-chip metallization, which results from the different thermal expansion coefficients of the metal and the intermetal dielectric. After many cycles, the accumulated strain in the metallization can lead to short circuits, thus limiting the lifetime. Increasing the LDMOS size can help to lower peak temperatures and therefore to reduce the stress; the downside is a higher cost. Hence, it has been suggested to use resilient systems that monitor the LDMOS metallization and lower the stress once a certain level of degradation is reached. Then, lifetime requirements can be fulfilled without oversizing LDMOS transistors, even though a certain performance loss has to be accepted. For such systems, suitable sensors for metal degradation are required. This work proposes a floating metal line embedded in the LDMOS metallization. The suitability of this approach has been investigated experimentally with test structures and shown to be promising. The obtained results are explained by means of numerical thermomechanical simulations.
The purpose of this study is to evaluate online German fashion shopping websites from a customer perspective, based on a two-dimensional conceptual framework covering shopping experience and shopping quality. As the research methodology, an exploratory mystery shopping approach was used in order to compare online shops. The results were as follows. First, four categories of online shops were identified: heroes, marketing winners, process winners, and underperformers. Second, three main levers for improvement were elaborated: emotionality of websites, reducing complexity, and the introduction of an industry standard of payments. From these results, it is possible to analyze and benchmark websites and to adapt online marketing decisions as well as general management strategies for online fashion shopping companies. The study has originality and value, as it is the first time that an evaluation of websites has combined the consumer's perspective before the purchase and its fulfillment (e.g., delivery) after the online purchase.
Online-Portal "MINTFabrik"
(2023)
The browser-based online portal "MINTFabrik" was created as part of the measures to mitigate learning deficits, with the idea of closing a gap that often exists in large online bridging courses: a lack of exercises that are quickly accessible, easy to select, and well tailored to specific courses and their requirements. It was developed in a cooperation between Hochschule Reutlingen and the Tübingen software company "Let's Make Sense GmbH". The portal deliberately dispenses with a lesson structure and consists exclusively of individual learning modules (items), i.e. video tutorials, VisuApps, and exercises, which can be reached via a convenient filtered search and worked on directly. A special feature of MINTFabrik are micro-courses, which can be created by lecturers and students. These are small units made up of a few items that can be combined with one another as desired.
Service-Oriented Architectures (SOA) have emerged as a useful framework for developing interoperable, large-scale systems, typically implemented using the Web Services (WS) standards. However, the maintenance and evolution of SOA systems present many challenges. SmartLife applications are intelligent user-centered systems and a special class of SOA systems that present even greater challenges for a software maintainer. Ontologies and ontological modeling can be used to support the evolution of SOA systems. This paper describes the development of a SOA evolution ontology and its use to develop an ontological model of a SOA system. The ontology is based on a standard SOA ontology. The ontological model can be used to provide semantic and visual support for software maintainers during routine maintenance tasks. We discuss a case study to illustrate this approach, as well as its strengths and limitations.
Historically, research and development (R&D) in the pharmaceutical sector has predominantly been an in-house activity. To enable investments in game-changing late-stage assets and to enable better and less costly go/no-go decisions, most companies have employed a fail-early paradigm through the implementation of clinical proof-of-concept organizations. To fuel their pipelines, some pioneers started to complement their internal R&D efforts through collaborations as early as the 1990s. In recent years, multiple extrinsic and intrinsic factors induced an opening for external sources of innovation and resulted in new models for open innovation, such as open sourcing, crowdsourcing, public–private partnerships, innovation centres, and the virtualization of R&D. Three factors seem to determine the breadth and depth of how companies approach external innovation: (1) the company's legacy, (2) the company's willingness and ability to take risks, and (3) the company's need to control IP and competitors. In addition, these factors often constitute the major hurdles to effectively leveraging external opportunities and assets. Conscious and differentiated choices of the R&D and business models for different companies, and for different divisions within the same company, seem to best allow a company to fully exploit the potential of both internal and external innovations.
The digital transformation of our society changes the way we live, work, learn, communicate, and collaborate. This disruptive change has for years been driving current and future information processes and systems, which are important business enablers in the context of digitization. Our aim is to support flexibility and agile transformations for both business domains and the related information technology with more flexible enterprise information systems through the adaptation and evolution of digital architectures. The present research paper investigates the continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, like microservices and the Internet of Things, as part of a newly composed digital architecture. To integrate micro-granular architecture models into living architectural model versions, we extend enterprise architecture reference models with state-of-the-art elements for agile architectural engineering to support digital products, services, and processes.
Through cyber-physical systems and application-oriented approaches to AI, conventional production systems are evolving more and more into "smart" production systems, which are characterized, among other things, by a high level of communication and integration of the individual components. The exchange of information between the systems is usually oriented only towards the data content, and semantics is usually considered only implicitly. The adaptability required by external and internal influences demands the integration of new components or the redesign of existing ones. Through an open, application-oriented ontology, the information and communication exchange is extended by explicit semantic information. This enables better integration of new components and easier reconfiguration of existing ones. The developed ontology and the derived application and use of the semantic information are evaluated by means of a practical use case.
A clinically useful system for individual continuous health data monitoring needs an architecture that takes into account all relevant medical and technical conditions. The requirements for a health app to support such a system are collected, and a vendor independent architecture is designed that allows the collection of vital data from arbitrary wearables using a smartphone. A prototypical implementation for the main scenario shows the feasibility of the approach.
Programmable nano-bio interfaces driven by tuneable vertically configured nanostructures have recently emerged as a powerful tool for cellular manipulations and interrogations. Such interfaces have strong potential for ground-breaking advances, particularly in cellular nanobiotechnology and mechanobiology. However, the opaque nature of many nanostructured surfaces makes non-destructive, live-cell characterization of cellular behavior on vertically aligned nanostructures challenging to observe. Here, a new nanofabrication route is proposed that enables harvesting of vertically aligned silicon (Si) nanowires and their subsequent transfer onto an optically transparent substrate, with high efficiency and without artefacts. We demonstrate the potential of this route for efficient live-cell phase contrast imaging and subsequent characterization of cells growing on vertically aligned Si nanowires. This approach provides the first opportunity to understand dynamic cellular responses to a cell-nanowire interface, and thus has the potential to inform the design of future nanoscale cellular manipulation technologies.
This paper discusses the optimal control problem for increasing the energy efficiency of induction machines in dynamic operation including field weakening regime. In an offline procedure optimal current and flux trajectories are determined such that the copper losses are minimized during transient operations. These trajectories are useful for a subsequent online implementation.
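The abstract does not give the loss model; as a hedged sketch, a standard steady-state copper-loss expression in rotor-flux coordinates (textbook form with the usual symbols R_s, R_r, L_m, L_r, psi_r; not necessarily the paper's exact formulation) reads:

```latex
% Steady-state copper losses of an induction machine in rotor-flux
% coordinates (standard textbook form, not necessarily the paper's model):
P_{\mathrm{cu}} \;=\; \tfrac{3}{2}\Bigl( R_s\,(i_d^2 + i_q^2)
      \;+\; R_r\,\Bigl(\tfrac{L_m}{L_r}\Bigr)^{2} i_q^2 \Bigr),
\qquad
T \;=\; \tfrac{3}{2}\, p\, \tfrac{L_m}{L_r}\, \psi_r\, i_q .
% For a given torque T, the flux-producing current i_d (and hence \psi_r)
% remains a degree of freedom that can be chosen to minimize P_cu;
% the paper extends this idea to dynamic operation and field weakening.
```

The optimal trajectories described in the abstract generalize this static trade-off between i_d and i_q to transient operation.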
This paper presents a control strategy for optimal utilization of photovoltaic (PV) generated power in conjunction with an Energy Storage System (ESS). The ESS is specifically designed to be retrofitted into existing PV systems in an end-user application. It can be attached in parallel to the PV system and connects to existing DC/AC inverters. In particular, the study covers the impact such a modification has on the output power of existing PV panels. A distinct degradation of PV output power was found due to the different power characteristics of PV panel and ESS. To overcome such degradation a novel feedback system is proposed. The feedback system continuously modifies the power characteristic of the ESS to match the PV panel and thus achieves optimal power utilization. Impact on PV and power point tracking performance is analyzed. Simulation of the proposed system is performed in MATLAB/Simulink. The results are found to be satisfactory.
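The paper's concrete feedback law is not given in the abstract; the following is a minimal sketch under assumed conditions (a quadratic PV power curve and a simple perturb-and-observe tracker) of how an operating point can be nudged toward the maximum power point:

```python
# Minimal sketch of matching an operating point to a PV panel's maximum
# power point (MPP).  The PV curve and the perturb-and-observe (P&O)
# tracker below are illustrative assumptions, not the paper's design.

def pv_power(v, v_mpp=30.0, p_max=100.0, k=0.5):
    """Toy PV power curve: a parabola peaking at v_mpp volts."""
    return p_max - k * (v - v_mpp) ** 2

def po_track(power, v0, step=0.5, iters=50):
    """Perturb-and-observe: step the voltage, keep the direction while
    power rises, reverse it when power falls."""
    v = v0
    p = power(v)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = power(v_next)
        if p_next < p:
            direction = -direction
        v, p = v_next, p_next
    return v

v_final = po_track(pv_power, v0=25.0)
print(v_final)  # settles within one step (0.5 V) of the MPP at 30 V
```

In the paper's setting, the ESS characteristic rather than the voltage step would be adapted continuously, but the feedback principle of tracking the panel's power optimum is the same.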
In the luxury fashion industry, consumers can be categorized into two groups: fashion leaders and fashion followers. Both groups purchase luxury fashion products to satisfy both their functional needs and their social needs (i.e., social influence). Thus, the demands of the two consumer groups are related. In this paper, we construct a model to examine the effects of pricing and online retail service in luxury fashion firms with social influence. To maximize profit, we identify the optimal prices and online retail service when the luxury fashion firms provide non-differentiated and differentiated online retail services, respectively. More insights are discussed.
In this paper, we examine the political gridlock in reforming the Economic and Monetary Union. We utilize a two-stage game with imperfect information in order to study the optimal sequencing. The main results are: first, optimal sequencing requires a default option in stage two for non-compliant Member States, which in principle is related to today's fiscal architecture (EMU-I). Second, we show that compliant countries prefer a reform equilibrium today if and only if they have a free choice about the preferred fiscal architecture at the end: either EMU-II with binding European coordination, or EMU-I related to Maastricht. Notably, our sequencing approach works for any design of the EMU-II architecture.
The goal is to provide a means for the safe reuse of circuits from the OTA circuit class. To this end, selected OTA circuit topologies are presented for the "copy-and-paste" method. It was shown in an industrial environment that, given a representative selection of topologies pre-dimensioned for the typical application range, they are already suitable for reuse in this form.
With the strong growth of car-sharing offerings and the large number of fleet vehicles in companies, the number of logbook apps is also increasing. In most mobile logbook applications, the user has to enter the mileage manually, which has a negative effect on usability and user experience. In addition, every minute the driver spends in the rented car is precious. For these reasons, a solution is presented here in which the mileage of a Mercedes-Benz A-Class is read out automatically via the OBD port using the CAN interface "ISI b2air" and sent via Bluetooth to the logbook app of Berger Elektronik GmbH. For this purpose, the communication between the diagnostic tester and the vehicle is recorded using the software "ISI b2app". The CAN messages are then analyzed and filtered with respect to the mileage. The corresponding request for obtaining the mileage is implemented in the program code of the Berger logbook, so that the app can read out the mileage on its own.
The objective of the project presented here is to develop and optimize an intelligent control algorithm for biogas combined heat and power plants (biogas CHP). This is followed by a test phase at a real biogas plant, where the algorithm is implemented in the plant control system for this purpose. To assess the extent to which the control algorithm can contribute to relieving power grids, the experiments consider not only the electrical demand of the farm where the plant is located, but also the residual load of the neighboring power grid. The latter is based on data from the nearest substation, scaled to represent a settlement that can be co-supplied by the plant's biogas CHP unit. The control algorithm is integrated into the plant control system via a communication structure with a database as the central interface. A first series of experiments, in which the biogas CHP unit is operated according to the schedules of the intelligent control algorithm, shows promising results. Throughout the entire series, the control algorithm reliably computes new schedules, which are for the most part implemented very well by the CHP unit. Furthermore, it can be demonstrated that the use of the algorithm relieves the upstream power grid.
The use of additive manufacturing technologies for industrial production is constantly growing. This technology differs from established production procedures. Scheduling, detailed planning, and sequence planning are particularly important for additive production due to the long print times and the flexible use of the production area. Therefore, production-relevant variables are considered and used for the production planning and control (PPC) of additive manufacturing machines. For this purpose, an optimization model is presented that supports time-oriented build space utilization. In the implementation, a nesting algorithm is used to check the combinability of different models for each individual print job.
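The abstract does not specify the nesting algorithm; as a hedged illustration of checking the combinability of parts on a build plate, a simple 2D shelf-packing heuristic (an assumption, far simpler than industrial nesting) could look like this. Dimensions and parts are invented for the example.

```python
# Illustrative sketch: greedy 2D shelf packing of part footprints onto
# build plates.  Real AM nesting is far more sophisticated; this only
# shows the idea of checking which parts can share one print job.

def pack_parts(parts, plate_w, plate_h):
    """Pack (width, height) footprints onto as few plates as possible
    using first-fit shelf packing; returns the number of plates (jobs)."""
    plates = []  # each plate: {"shelves": [{"x": used_width, "h": height}], "y": used_height}
    for w, h in sorted(parts, key=lambda p: -p[1]):  # tallest first
        placed = False
        for plate in plates:
            for shelf in plate["shelves"]:
                if shelf["x"] + w <= plate_w and h <= shelf["h"]:
                    shelf["x"] += w          # fits on an existing shelf
                    placed = True
                    break
            if not placed and plate["y"] + h <= plate_h:
                plate["shelves"].append({"x": w, "h": h})  # open a new shelf
                plate["y"] += h
                placed = True
            if placed:
                break
        if not placed:                       # no room anywhere: open a new build job
            plates.append({"shelves": [{"x": w, "h": h}], "y": h})
    return len(plates)

parts = [(100, 100)] * 4 + [(50, 50)]
print(pack_parts(parts, 200, 200))  # the 50x50 part forces a second job here
```

A time-oriented variant, as the abstract suggests, would additionally weigh each candidate combination by its estimated print duration.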
In a digitally controlled slope-shaping system, reliable detection of both the voltage and the current slope is required to enable closed-loop control for various power switches independent of system parameters. In most state-of-the-art works, this is realized by monitoring the absolute voltage and current values. Better accuracy at lower DC power loss is achieved by sensing techniques for reliable passive detection, which avoid DC paths from the high-voltage network into the sensing network. Using a high-speed analog-to-digital converter, the whole waveform of the transient derivative can be stored digitally and prepared for a predictive cycle-by-cycle regulation, without requiring high-precision digital differentiation algorithms. To gain an accurate representation of the voltage and current derivative waveforms, system parasitics are investigated and classified into three categories: (1) component parasitics, which are identified by S-parameter measurements and the extraction of equivalent circuit models, (2) PCB design issues related to the sensing circuit, and (3) interconnections between adjacent boards.
The contribution of this paper is an optimized sensing network, based on the experimental study, supporting fast transition slopes up to 100 V/ns and 1 A/ns and beyond, making the sensing technique attractive for slope shaping of fast-switching devices like modern-generation IGBTs, CoolMOS™ transistors, and SiC MOSFETs. Measurements of the optimized dv/dt and di/dt setups are demonstrated for a hard-switched IGBT power stage.
The vast majority of state-of-the-art integrated circuits are mixed-signal chips. While the design of the digital parts of these ICs is highly automated, the design of the analog circuitry is largely done manually; it is very time-consuming and prone to error. Among the reasons generally listed for this is the attitude of the analog designer: many analog designers are convinced that human experience and intuition are needed for good analog design, which is why they distrust automated synthesis tools. This observation is quite correct, but it is only a symptom of the real problem. This paper shows that the phenomenon is caused by very concrete technical (and thus very rational) issues. These issues lie in the mode of operation of the typical optimization processes employed for the synthesis tasks. I will show that the dilemma that arises in analog design with these optimizers is the root cause of the low level of automation in analog design. The paper concludes with a review of proposals for automating analog design.
Thermoplastic polymers like ethylene-octene copolymer (EOC) may be grafted with silanes via reactive extrusion to enable subsequent crosslinking for the manufacture of advanced biomaterials. However, this reactive extrusion process is difficult to control, and it is still challenging to reproducibly arrive at well-defined products. Moreover, high grafting degrees require a considerable excess of grafting reagent. A large proportion of the silane passes through the process without reacting and needs to be removed at great expense by subsequent purification. This results in unnecessarily high consumption of chemicals and a rather resource-inefficient process. It is thus desirable to reach target grafting degrees with optimum grafting efficiency by means of suitable process control. In this study, the continuous grafting of vinyltrimethoxysilane (VTMS) onto ethylene-octene copolymer (EOC) via reactive extrusion was investigated. Successful grafting was verified and quantified by 1H-NMR spectroscopy. The effects of five process parameters and their synergistic interactions on grafting degree and grafting efficiency were determined using a face-centered experimental design (FCD). Response surface methodology (RSM) was applied to derive a causal process model and to define process windows yielding arbitrary grafting degrees between <2% and >5% with minimal waste of grafting agent. It was found that the reactive extrusion process was strongly influenced by several second-order interaction effects, making this process difficult to control. Grafting efficiencies between 75 and 80% can be realized as long as grafting degrees <2% are accepted.
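As a hedged sketch of the response-surface idea (with synthetic coefficients and responses, not the study's measurements), a full quadratic model can be fitted to a two-factor face-centered design as follows:

```python
# Sketch of response surface methodology (RSM) on a face-centered design
# (FCD): fit a full quadratic model to coded factors in {-1, 0, +1}.
# All numbers below are synthetic, not the study's grafting data.
import numpy as np

# Two-factor FCD: factorial corners, face centers, and the center point.
levels = [-1.0, 0.0, 1.0]
X = np.array([(a, b) for a in levels for b in levels])  # 9 runs

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    # columns: intercept, x1, x2, x1*x2 (interaction), x1^2, x2^2
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Synthetic "grafting degree" response generated from known coefficients.
true_beta = np.array([3.0, 1.0, 0.5, 0.2, -0.3, 0.1])
y = design_matrix(X) @ true_beta

# Least-squares fit of the quadratic response surface.
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(beta, 6))  # recovers true_beta exactly (noise-free data)
```

With real, noisy measurements, the fitted coefficients and their significance would then be used, as in the study, to locate process windows for a desired grafting degree.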