Modern mixed (HTAP) workloads execute fast update transactions and long-running analytical queries on the same dataset and system. In multi-version (MVCC) systems, such workloads result in many short-lived versions and long version-chains, as well as in increased and frequent maintenance overhead.
Consequently, index pressure increases significantly. Firstly, the frequent modifications cause frequent creation of new versions, yielding a surge in index maintenance overhead. Secondly, and more importantly, index scans incur extra I/O overhead to determine which of the resulting tuple versions are visible to the executing transaction (visibility check), since current designs only store version/timestamp information in the base table, not in the index. Such an index-only visibility check is critical for HTAP workloads on large datasets.
In this paper we propose the Multi-Version Partitioned B-Tree (MV-PBT) as a version-aware index structure, supporting index-only visibility checks and flash-friendly I/O patterns. The experimental evaluation indicates a 2x improvement for analytical queries and 15% higher transactional throughput under HTAP workloads. MV-PBT offers 40% higher transactional throughput compared to WiredTiger's LSM-Tree implementation under YCSB.
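As a rough illustration of the index-only visibility check described above, the following sketch (not the MV-PBT implementation itself) keeps creation/invalidation timestamps directly in the index records, so a range scan can filter versions against a transaction's snapshot without touching the base table. The record layout, timestamp semantics, and all names are illustrative assumptions.

```python
# Hypothetical sketch of an index-only visibility check in an MVCC index.
# Record layout, timestamp semantics, and names are illustrative assumptions,
# not the MV-PBT implementation described in the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexRecord:
    key: int
    row_id: int
    created_ts: int                 # commit timestamp of the creating transaction
    invalidated_ts: Optional[int]   # commit timestamp of the deleting/updating tx, if any

def visible(rec: IndexRecord, snapshot_ts: int) -> bool:
    """A version is visible if it was created before the snapshot
    and not yet invalidated at snapshot time."""
    created_before = rec.created_ts <= snapshot_ts
    still_alive = rec.invalidated_ts is None or rec.invalidated_ts > snapshot_ts
    return created_before and still_alive

def index_only_scan(records, lo, hi, snapshot_ts):
    """Range scan that filters versions in the index itself,
    avoiding extra base-table I/O for the visibility check."""
    return [r for r in records if lo <= r.key <= hi and visible(r, snapshot_ts)]

# Example: two versions of key 42; only the older one is visible at ts=100.
recs = [IndexRecord(42, 7, created_ts=90, invalidated_ts=120),
        IndexRecord(42, 8, created_ts=120, invalidated_ts=None)]
print(index_only_scan(recs, 0, 100, snapshot_ts=100))
```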
The extracellular environment of vascular cells in vivo is complex in its chemical composition, physical properties, and architecture. Consequently, it has been a great challenge to study vascular cell responses in vitro, either to understand their interaction with their native environment or to investigate their interaction with artificial structures such as implant surfaces. New procedures and techniques from materials science to fabricate bio-scaffolds and surfaces have enabled novel studies of vascular cell responses under well-defined, controllable culture conditions. These advancements are paving the way for a deeper understanding of vascular cell biology and materials–cell interaction. Here, we review previous work focusing on the interaction of vascular smooth muscle cells (SMCs) and endothelial cells (ECs) with materials having micro- and nanostructured surfaces. We summarize fabrication techniques for surface topographies, materials, geometries, biochemical functionalization, and mechanical properties of such materials. Furthermore, various studies on vascular cell behavior and their biological responses to micro- and nanostructured surfaces are reviewed. Emphasis is given to studies of cell morphology and motility, cell proliferation, the cytoskeleton and cell-matrix adhesions, and signal transduction pathways of vascular cells. We conclude with a short outlook on potentially interesting future studies.
Thin radio-frequency magnetron sputter deposited nano-hydroxyapatite (HA) films were prepared on the surface of a Fe-tricalcium phosphate (Fe-TCP) bioceramic composite, which was obtained using a conventional powder injection moulding technique. The obtained nano-hydroxyapatite coated Fe-TCP biocomposites (nano-HA-Fe-TCP) were studied with respect to their chemical and phase composition, surface morphology, water contact angle, surface free energy and hysteresis. The deposition process resulted in a homogeneous, single-phase HA coating. The ability of the surface to support adhesion and the proliferation of human mesenchymal stem cells (hMSCs) was studied using biological short-term tests in vitro. The surface of the uncoated Fe-TCP bioceramic composite showed initial cell attachment after 24 h of seeding, but adhesion, proliferation and growth did not persist during 14 days of culture. However, the HA-Fe-TCP surfaces allowed cell adhesion and proliferation during 14 days. The deposition of the nano-HA films on the Fe-TCP surface resulted in higher surface energy, improved hydrophilicity and biocompatibility compared with the surface of the uncoated Fe-TCP. Furthermore, it is suggested that an increase in the polar component of the surface energy was responsible for the enhanced cell adhesion and proliferation in the case of the nano-HA-Fe-TCP biocomposites.
Nanocoatings based on sol–gel coatings are presented as a suitable tool for modifying polymer-based materials. The main focus is on textiles as the most common polymer materials. It is presented which types of functionalization can be achieved by modified sol–gel processes. A suitable categorization of functions is also given and related to common applications. Special attention is paid to antimicrobial, UV-protective, and flame-retardant functional properties. The concept of bifunctional coatings is discussed, and in particular the combination of water-repellent and antistatic functions is presented.
This paper is concerned with the study, optimization and control of the moisture sorption kinetics of agricultural products at temperatures typically found in processing and storage. A nonlinear autoregressive with exogenous inputs (NARX) neural network was developed to predict moisture sorption kinetics, and consequently equilibrium moisture contents, of shiitake mushrooms (Lentinula edodes (Berk.) Pegler) over a wide range of relative humidity and different temperatures. Sorption kinetic data of mushroom caps was separately generated using a continuous, gravimetric dynamic vapour sorption analyser at temperatures of 25-40 °C over a stepwise variation of relative humidity ranging from 0 to 85%. The predictive power of the neural network was based on physical data, namely relative humidity and temperature. The model was fed with a total of 4500 data points divided into three subsets: 70% of the data was used for training, 15% for testing and 15% for validation, randomly selected from the whole dataset. The NARX neural network was capable of precisely simulating equilibrium moisture contents of mushrooms derived from the dynamic vapour sorption kinetic data throughout the entire range of relative humidity.
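To illustrate the modelling idea, the following is a minimal sketch of a NARX-style regressor: lagged moisture values plus the exogenous inputs (relative humidity, temperature) feed a small neural network, with the 70/15/15 split mentioned in the abstract. The synthetic data, lag depth, and hyperparameters are placeholder assumptions, not the published model.

```python
# Minimal NARX-style sketch: predict the next moisture content from lagged
# moisture values plus exogenous inputs (relative humidity, temperature).
# Data, lag depth and hyperparameters are placeholder assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def make_narx_features(moisture, rh, temp, lags=3):
    """Build [y(t-1..t-lags), rh(t), temp(t)] -> y(t) training pairs."""
    X, y = [], []
    for t in range(lags, len(moisture)):
        X.append(np.concatenate([moisture[t-lags:t], [rh[t], temp[t]]]))
        y.append(moisture[t])
    return np.array(X), np.array(y)

# Synthetic stand-in for the sorption kinetics measurements.
rng = np.random.default_rng(0)
rh = np.repeat(np.linspace(0, 85, 10), 50)        # stepwise RH programme
temp = np.full_like(rh, 25.0)                     # one temperature level
moisture = 0.3 * rh / 85 + 0.01 * rng.standard_normal(rh.size)

X, y = make_narx_features(moisture, rh, temp)
# 70% training, 15% testing, 15% validation, as in the abstract.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.70, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("validation R^2:", model.score(X_val, y_val))
```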
In the present tutorial we perform a cross-cut analysis of database storage management from the perspective of modern storage technologies. We argue that the design of modern DBMS and the architecture of modern storage technologies are not aligned with each other. Moreover, the majority of systems rely on a complex, multi-layer and compatibility-oriented storage stack. The result is needlessly suboptimal DBMS performance, inefficient utilization, or significant write amplification due to outdated abstractions and interfaces. In the present tutorial we focus on the concept of native storage, i.e. storage that is operated without intermediate abstraction layers over an open native storage interface and is directly controlled by the DBMS.
Data analytics tasks on large datasets are computationally intensive and often demand the compute power of cluster environments. Yet, data cleansing, preparation, dataset characterization and statistics or metrics computation steps are frequent. These are mostly performed ad hoc, in an explorative manner, and mandate low response times. However, such steps are I/O-intensive and typically very slow due to low data locality and inadequate interfaces and abstractions along the stack. These typically result in prohibitively expensive scans of the full dataset and transformations on interface boundaries.
In this paper, we examine R as an analytical tool, managing large persistent datasets in Ceph, a widespread cluster file system. We propose nativeNDP – a framework for Near-Data Processing that pushes down primitive R tasks and executes them in-situ, directly within the storage device of a cluster node. Across a range of data sizes, we show that nativeNDP is more than an order of magnitude faster than other pushdown alternatives.
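The following conceptual sketch illustrates the pushdown idea behind nativeNDP under stated assumptions: a primitive task (here a column summary) is executed where a data partition resides, and only small partial results travel back to the analytics client. The classes and interface are illustrative, not the actual framework API.

```python
# Conceptual pushdown sketch: the task runs where the data lives and only the
# small summary result leaves the "storage node". Names are illustrative.
import statistics

class StorageNode:
    """Stands in for a cluster node holding one partition of the dataset."""
    def __init__(self, rows):
        self.rows = rows                      # list of dicts, kept "in storage"

    def execute_pushdown(self, task):
        return task(self.rows)                # run the task in-situ

def column_summary(column):
    def task(rows):
        values = [r[column] for r in rows]
        # Only count/min/max/mean leave the storage node.
        return {"count": len(values), "min": min(values),
                "max": max(values), "mean": statistics.fmean(values)}
    return task

nodes = [StorageNode([{"x": float(i)} for i in range(0, 1000)]),
         StorageNode([{"x": float(i)} for i in range(1000, 2000)])]

partials = [n.execute_pushdown(column_summary("x")) for n in nodes]
total = sum(p["count"] for p in partials)
mean = sum(p["mean"] * p["count"] for p in partials) / total
print({"count": total, "min": min(p["min"] for p in partials),
       "max": max(p["max"] for p in partials), "mean": mean})
```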
Hypermedia as the Engine of Application State (HATEOAS) is one of the core constraints of REST. It refers to the concept of embedding hyperlinks into the response of a queried or manipulated resource to show a client possible follow-up actions and transitions to related resources. Thus, this concept aims to provide a client with navigational support when interacting with a Web-based application. Although HATEOAS should be implemented by any Web-based API claiming to be RESTful, API providers tend to offer service descriptions in place of embedding hyperlinks into responses. Instead of relying on navigational support, a client developer has to read the service description and identify the resources and URIs that are relevant for the interaction with the API. In this paper, we introduce an approach that aims to identify transitions between resources of a Web-based API by systematically analyzing the service description only. We devise an algorithm that automatically derives a URI Model from the service description and then analyzes the payload schemas to identify feasible values for the substitution of path parameters in URI Templates. We implement this approach as a proxy application, which injects hyperlinks representing transitions into the response payload of a queried or manipulated resource. The result is HATEOAS-like navigational support through an API. Our first prototype operates on service descriptions in the OpenAPI format. We evaluate our approach using ten real-world APIs from different domains. Furthermore, we discuss the results as well as the observations captured in these tests.
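A hedged sketch of the core mechanism: derive URI templates from an OpenAPI-style description, substitute path parameters with values found in the response payload, and inject the resulting hyperlinks. The tiny description and the field-matching heuristic are illustrative assumptions, not the full algorithm from the paper.

```python
# Illustrative sketch: fill path parameters of OpenAPI-style URI templates with
# payload field values and inject the links as navigational support.
import re

openapi_paths = {            # trimmed, OpenAPI-like path listing (assumed)
    "/orders/{orderId}": ["GET", "DELETE"],
    "/orders/{orderId}/items": ["GET", "POST"],
    "/customers/{customerId}": ["GET"],
}

def inject_links(payload: dict) -> dict:
    """Add a '_links' section with every URI template whose path parameters
    can all be filled from fields of the payload (e.g. orderId -> 17)."""
    links = []
    for template, methods in openapi_paths.items():
        params = re.findall(r"\{(\w+)\}", template)
        if all(p in payload for p in params):
            uri = template
            for p in params:
                uri = uri.replace("{" + p + "}", str(payload[p]))
            links.append({"href": uri, "methods": methods})
    return {**payload, "_links": links}

# A queried order resource gains links to itself, its items, and its customer.
print(inject_links({"orderId": 17, "customerId": 3, "total": 42.0}))
```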
Rapidly growing data volumes push today's analytical systems close to the feasible processing limit. Massive parallelism is one possible solution to reduce the computational time of analytical algorithms. However, data transfer becomes a significant bottleneck, since it blocks system resources while moving data to code. Technological advances make it economically feasible to place compute units close to storage and perform data processing operations close to the data, minimizing data transfers and increasing scalability. Hence the principle of Near Data Processing (NDP) and the shift towards code-to-data. In the present paper we claim that the development of NDP system architectures will become an inevitable task in the future. Analytical DBMS like HPE Vertica offer multiple points of impact with major advantages, which we present in this paper.
Near-data processing in database systems on native computational storage under HTAP workloads
(2022)
Today's Hybrid Transactional and Analytical Processing (HTAP) systems tackle ever-growing data in combination with a mixture of transactional and analytical workloads. While optimizing for aspects such as data freshness and performance isolation, they build on the traditional data-to-code principle and may trigger massive cold data transfers that impair overall performance and scalability. Firstly, in this paper we show that Near-Data Processing (NDP) naturally fits in the HTAP design space. Secondly, we propose an NDP database architecture, allowing transactionally consistent in-situ executions of analytical operations in HTAP settings. We evaluate the proposed architecture in state-of-the-art key/value stores and multi-versioned DBMS. In contrast to traditional setups, our approach yields robust, resource- and cost-efficient performance.
Escherichia coli (E. coli) is considered the most common life-threatening infectious bacterium in our daily life and poses a major challenge to human health. However, the frequent overuse and misuse of antibiotics has triggered increased multidrug resistance, hindered therapeutic outcomes, and caused higher mortality. Herein, we report near-infrared (NIR) laser-excited, human serum albumin (HSA) mediated graphene oxide loaded with palladium nano-dots (HSA-GO-Pd) that can effectively combat Gram-negative E. coli in vitro. Electron spin-resonance (ESR) analysis shows that the designed hybrid material generates high levels of singlet oxygen and hydroxyl radicals under NIR laser excitation. Transmission electron microscope (TEM) images show small spherical PdNPs on the surface of GO nano-sheets. The zeta (ζ) potential study indicates that, in an aqueous medium, the average PdNP size and the surface charge imparted by the human body protein (HSA) of HSA-GO-Pd are 5–8 nm and +25 mV, respectively. The spectroscopic characterization reveals that, in the synthesized HSA-GO-Pd nanocomposite, the PdNPs are well dispersed on the surface of the graphene oxide. The as-synthesized HSA-GO-Pd shows excellent antibacterial activity against the Gram-negative pathogen, killing 95% of the bacteria within 5 h. HSA-GO-Pd is highly biocompatible and shows significant antibacterial activity. Owing to its intense photothermal conversion potential and low toxicity to normal cells, the hybrid material (HSA-GO-Pd) combined with NIR irradiation offers valuable insight into the effective ablation of pathogenic bacteria.
Multi-versioning and MVCC are the foundations of many modern DBMSs. Under mixed workloads and large datasets, the creation of the transactional snapshot can become very expensive, as long-running analytical transactions may request old versions, residing on cold storage, for reasons of transactional consistency. Furthermore, analytical queries operate on cold data, stored on slow persistent storage. Due to the poor data locality, snapshot creation may cause massive data transfers and thus lower performance. Given the current trend towards computational storage and near-data processing, it has become viable to perform such operations in-storage to reduce data transfers and improve scalability. neoDBMS is a DBMS designed for near-data processing and computational storage. In this paper, we demonstrate how neoDBMS performs snapshot computation in-situ. We showcase different interactive scenarios, where neoDBMS outperforms PostgreSQL 12 by up to 5×.
Neuromarketing is already relatively advanced when it comes to researching the principal effect of marketing in the brain. What is often still missing, however, is the transfer of these findings into practice. The reason for this is that research has so far primarily pursued the question of "why?". For practice, however, the question of "how?" is much more relevant. This article attempts to answer the latter question, i.e. to bridge the gap between research and practice in the field of retail marketing. Is there a buy button in the consumer's brain? And if so, how can it be activated? Neuromarketing is a young discipline at the interface of cognitive science, neuroscience and market research. Due to technological progress, neuromarketing can provide important insights for retail, especially insights to explain consumer behaviour. By looking into the customer's brain, retail companies can address their customers in a more targeted manner and thus gain an advantage over competitors. The influence of emotions and the unconscious in particular plays a major role in consumers' purchase decisions. Using the limbic map, customers can be clustered into types based on the characteristics of their emotional systems, for which specific marketing measures can be derived. Best-practice examples from the retail sector show that a targeted approach to specific shopping types in retail can lead to success.
New approaches to respiratory assist: bioengineering an ambulatory, miniaturized bioartificial lung
(2019)
Although state-of-the-art treatments of respiratory failure clearly have made some progress in terms of survival in patients suffering from severe respiratory system disorders, such as acute respiratory distress syndrome (ARDS), they have failed to significantly improve the quality of life in patients with acute or chronic lung failure, including severe acute exacerbations of chronic obstructive pulmonary disease or ARDS. Limitations of standard treatment modalities, which largely rely on conventional mechanical ventilation, emphasize the urgent, unmet clinical need for developing novel (bio)artificial respiratory assist devices that provide extracorporeal gas exchange with a focus on direct extracorporeal CO2 removal from the blood. In this review, we discuss some of the novel concepts and critical prerequisites for such respiratory lung assist devices that can be used with an adequate safety profile, in the intensive care setting, as well as for long-term domiciliary therapy in patients with chronic ventilatory failure. Specifically, we describe some of the pivotal steps, such as device miniaturization, passivation of the blood-contacting surfaces by chemical surface modifications, or endothelial cell seeding, all of which are required for converting current lung assist devices into ambulatory lung assist devices for long-term use in critically ill patients. Finally, we also discuss some of the risks and challenges for the long-term use of ambulatory miniaturized bioartificial lungs.
The efficiency of pharmaceutical research and development (R&D), reflected in increasing R&D costs, long timelines, and low probabilities of technical and regulatory success, has decreased continuously in recent years. Today, the costs of discovering and developing a new drug are enormously high, at more than USD 2 billion per new molecular entity (NME), while the average overall success rate of a research project delivering an NME is in the single-digit percentage range, and the total R&D timelines easily exceed 10 years, calling the return on investment (ROI) of pharmaceutical R&D into question. As a consequence, and also driven by numerous patent expirations of blockbuster drugs that increased the pressure to return to an acceptable ROI, the pharmaceutical industry has addressed this challenge and its related causes and identified several actions that need to be taken to increase the output/input ratio of R&D. This book chapter reviews the pipeline sizes and R&D investments of multinational pharmaceutical companies, describes new processes that have been implemented to increase the reach and reduce the costs of pharmaceutical R&D, and illustrates new innovation models that were developed to increase R&D efficiency.
Using the damage area as a quantification method for the Martindale test is a promising approach to compare textile finishes without the need to test to full destruction. In addition, it could be shown that the results of Martindale tests performed with different pressure loads can be scaled to an identical functional shape. If these results can be verified, this method would simplify abrasion testing for different application areas.
In the IGF project No. 19617 N, nitrogen- and phosphorus-substituted alkoxysilanes were prepared and their ability to inhibit fire growth and spread on fabrics was explored. To this end, a series of flame retardants were synthesized using different strategies, including click chemistry and nucleophilic substitution of commercial organophosphorus compounds with amino-based trialkoxysilanes and/or cyanuric chloride. The new halogen-free and aldehyde-free flame retardants were applied to different fabrics such as cotton (CO), polyethylene terephthalate (PET), polyamide (PA) and their blends using the well-known pad-dry-cure technique and the sol-gel method. The flame-retarding efficiencies were evaluated by EN ISO 15025 test methods (protective clothing – protection against heat and flame; method of test for limited flame spread). Good flame retardancy of the hybrid organic-inorganic materials was achieved with the addition of as small an amount as 3-5 wt.% for cotton fabrics. Moreover, the water solubility and the washing resistance could be controlled through the functional groups attached to the phosphorus atom or through the optimization of the curing temperature. Overall, the research project demonstrated that N-P-silanes are very good permanent flame retardants for textiles.
nKV in action: accelerating KV-stores on native computational storage with near-data processing
(2020)
Massive data transfers in modern data-intensive systems, resulting from low data-locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper we demonstrate various NDP alternatives in nKV, which is a key/value store utilizing native computational storage and near-data processing. We showcase the execution of classical operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4x-2.7x better performance due to NDP. nKV runs on real hardware - the COSMOS+ platform.
Massive data transfers in modern key/value stores resulting from low data-locality and data-to-code system design hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution, which although not new, have yet to see widespread use.
In this paper we introduce nKV, which is a key/value store utilizing native computational storage and near-data processing. On the one hand, nKV can directly control the data and computation placement on the underlying storage hardware. On the other hand, nKV propagates the data formats and layouts to the storage device, where software and hardware parsers and accessors are implemented. Both allow NDP operations to execute in a host-intervention-free manner, directly on physical addresses, and thus better utilize the underlying hardware. Our performance evaluation is based on executing traditional KV operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4×-2.7× better performance on real hardware – the COSMOS+ platform.
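To make the data-movement argument concrete, the following illustrative sketch (not the nKV code) contrasts a host-side read of an entire key/value blob with an NDP-style device-side SCAN that evaluates the predicate where the data lives, so only matching records cross the interface. The fixed record layout and names are assumptions for illustration.

```python
# Illustrative host-side vs. device-side (NDP) SCAN over a known record layout.
# The layout and class names are assumptions, not the nKV implementation.
import struct

RECORD = struct.Struct("<Q32s")          # fixed layout: 8-byte key, 32-byte value

class ComputationalStorage:
    """Stands in for the device; holds raw bytes and can run a scan in-situ."""
    def __init__(self):
        self.blob = b"".join(RECORD.pack(k, f"value-{k}".encode())
                             for k in range(10_000))

    def read_all(self) -> bytes:          # classical path: ship everything to the host
        return self.blob

    def ndp_scan(self, lo, hi):           # NDP path: filter where the data lives
        out = []
        for off in range(0, len(self.blob), RECORD.size):
            key, val = RECORD.unpack_from(self.blob, off)
            if lo <= key <= hi:
                out.append((key, val.rstrip(b"\x00")))
        return out

dev = ComputationalStorage()
host_bytes = len(dev.read_all())                  # bytes moved without NDP
ndp_result = dev.ndp_scan(100, 120)               # only 21 matching records move
print(host_bytes, len(ndp_result))
```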
This article discusses the scientifically and industrially important problem of automating the process of unloading goods from standard shipping containers. We outline some of the challenges barring further adoption of robotic solutions to this problem, ranging from handling a vast variety of shapes, sizes, weights, appearances, and packing arrangements of the goods, through hard demands on unloading speed and reliability, to ensuring that fragile goods are not damaged. We propose a modular and reconfigurable software framework in an attempt to efficiently address some of these challenges. We also outline the general framework design and the basic functionality of the core modules developed. We present two instantiations of the software system on two different fully integrated demonstrators: 1) coping with an industrial scenario, i.e., the automated unloading of coffee sacks with an already economically interesting performance; and 2) a scenario used to demonstrate the capabilities of our scientific and technological developments in the context of medium- to long-term prospects of automation in logistics. We performed evaluations that allowed us to summarize several important lessons learned and to identify future directions of research on autonomous robots for the handling of goods in logistics applications.
Flash SSDs are omnipresent as database storage. HDD replacement is seamless since Flash SSDs implement the same legacy hardware and software interfaces to enable backward compatibility. Yet, the price paid is high as backward compatibility masks the native behaviour, incurs significant complexity and decreases I/O performance, making it non-robust and unpredictable. Flash SSDs are black-boxes. Although DBMS have ample mechanisms to control hardware directly and utilize the performance potential of Flash memory, the legacy interfaces and black-box architecture of Flash devices prevent them from doing so.
In this paper we demonstrate NoFTL, an approach that enables native Flash access and integrates parts of the Flash-management functionality into the DBMS, yielding a significant performance increase and a simplification of the I/O stack. NoFTL is implemented on real hardware based on the OpenSSD research platform. The contributions of this paper include: (i) a description of the NoFTL native Flash storage architecture; (ii) its integration in Shore-MT and (iii) a performance evaluation of NoFTL on a real Flash SSD and on an on-line data-driven Flash emulator under TPC-B, TPC-C, TPC-E and TPC-H workloads. The performance evaluation results indicate an improvement of at least 2.4x on real hardware over conventional Flash storage, as well as better utilisation of native Flash parallelism.
Modern persistent Key/Value stores are designed to meet the demand for high transactional throughput and high data ingestion rates. Still, they rely on a backwards-compatible storage stack and abstractions to ease space management, foster seamless proliferation and system integration. Their dependence on the traditional I/O stack has a negative impact on performance, causes unacceptably high write-amplification, and limits storage longevity.
In the present paper we introduce NoFTL-KV, an approach that results in a lean I/O stack, integrating physical storage management natively into the Key/Value store. NoFTL-KV eliminates backwards compatibility, allowing the Key/Value store to directly consume the characteristics of modern storage technologies. NoFTL-KV is implemented under RocksDB. The performance evaluation under LinkBench shows that NoFTL-KV improves transactional throughput by 33%, while response times improve by up to 2.3x. Furthermore, NoFTL-KV reduces write-amplification by 19x and improves storage longevity by approximately the same factor.
Sleep analysis using a polysomnography system is difficult and expensive. That is why we suggest a non-invasive and unobtrusive measurement. Very few people want cables or devices attached to their bodies during sleep. The proposed approach is to implement a monitoring system so that the subject is not bothered. As a result, the idea is a non-invasive monitoring system based on detecting pressure distribution. This system should be able to measure, through the mattress, the pressure differences that occur during a single heartbeat and during breathing. The system consists of two blocks: signal acquisition and signal processing. The whole technology should be economical enough to be affordable for every user. As a result, preprocessed data is obtained for further detailed analysis using different filters for heartbeat and respiration detection. In the initial stage of filtration, Butterworth filters are used.
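A minimal sketch of the first filtering stage described above, assuming a mattress pressure signal sampled at 100 Hz: Butterworth band-pass filters (via SciPy) separate the respiration and heartbeat components. The cut-off frequencies and the synthetic signal are illustrative assumptions, not the exact parameters of the proposed system.

```python
# Separate respiration (~0.25 Hz) and heartbeat (~1.2 Hz) components of a
# synthetic pressure signal with Butterworth band-pass filters (assumed bands).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
signal = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t) \
         + 0.02 * np.random.default_rng(0).standard_normal(t.size)

def bandpass(x, low, high, fs, order=4):
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)                 # zero-phase filtering

respiration = bandpass(signal, 0.1, 0.5, fs)     # typical breathing band
heartbeat = bandpass(signal, 0.8, 2.5, fs)       # typical heart-rate band
print(respiration.std(), heartbeat.std())
```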
Respiratory diseases are leading causes of death and disability in the world. The recent COVID-19 pandemic is also affecting the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and the clinical environment. Most of the techniques used up to date were also invasive or expensive.
Some research groups are developing hardware devices and techniques to make possible a non-invasive or even remote respiratory sound acquisition. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, such as microphones, wearables, or Internet of Things devices, that can be used for this purpose.
Some interesting applications have been found. Some devices make sound acquisition easier in a clinical environment, while others enable daily monitoring outside that setting. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
Sleep studies can be used to assess sleep quality and general bed behavior. These results can be helpful for regulating sleep and recognizing different human sleep disorders. In comparison to the leading standard measuring system, which is polysomnography (PSG), the system proposed in this work is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides, these methods not only decrease practicality due to the process of having to put them on, but they are also very expensive. The system proposed in this paper classifies respiration and body movement with only one type of sensor and in a non-invasive way. The sensor used is a pressure sensor. This sensor is low cost and can be used for commercial purposes. The system was tested by carrying out an experiment that recorded the sleep process of a subject. These recordings showed excellent results in the classification of breathing rate and body movements.
This article provides a stochastic agent-based model to exhibit the role of aggregation metrics in mitigating polarization in a complex society. Our sociophysics model is based on interacting and nonlinear Brownian agents, which allow us to study the emergence of collective opinions. The opinion of an agent, x_i(t), is a continuous value in the interval [0, 1]. We find that (i) most agent-metrics display similar outcomes; (ii) the middle-metric and noisy-metric yield new opinion dynamics towards either assimilation or fragmentation; (iii) a newly developed 2-stage metric provides new insights about convergence and equilibria. In summary, our simulation demonstrates the power of institutions, which affect the emergence of collective behavior. Consequently, opinion formation in a decentralized complex society relies on individual information processing and the rules of collective behavior.
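A rough sketch in the spirit of the abstract, under stated assumptions: each opinion x_i(t) in [0, 1] drifts towards an aggregation metric of the group plus Brownian noise. The drift/noise form and the choice of metrics (mean vs. median as a "middle"-style metric) are illustrative, not the published model.

```python
# Illustrative interacting Brownian-agent opinion model: opinions drift towards
# an aggregation metric of the group plus noise. All parameters are assumptions.
import numpy as np

def simulate(n_agents=200, steps=2000, dt=0.01, coupling=1.0, noise=0.05,
             metric=np.mean, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_agents)           # initial opinions in [0, 1]
    for _ in range(steps):
        target = metric(x)                         # aggregation metric of the group
        drift = coupling * (target - x)
        x = x + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_agents)
        x = np.clip(x, 0.0, 1.0)                   # opinions stay in the interval
    return x

final_mean = simulate(metric=np.mean)
final_median = simulate(metric=np.median)          # a "middle"-style metric
print("spread (mean metric):  ", final_mean.std())
print("spread (median metric):", final_median.std())
```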
In this paper we claim that a competitive analysis with new players entering a market requires a specific and systems-based analysis. System dynamics provides such an approach. We infer from our study that established premium automobile manufacturers could have identified a possible threat by a newcomer like Tesla earlier by using system dynamics. In particular, we postulate that a feedback view supports decision makers in better understanding the significance of competitive information and in perceiving information faster and more reliably.
Novel design for a coreless printed circuit board transformer realizing high bandwidth and coupling
(2019)
Rogowski coils offer galvanic isolation and can measure alternating currents with a high bandwidth. Coreless printed circuit board (PCB) transformers have been used as an alternative to limit the additional stray inductance if a Rogowski coil cannot be attached to the circuit. A new PCB transformer layout is proposed to reduce cost, decrease additional stray inductance, increase the bandwidth of current measurements and simplify the integration into existing designs.
Near-Data Processing (NDP) is a key computing paradigm for reducing the ever-growing time and energy costs of data transport versus computations. With their flexibility, FPGAs are an especially suitable compute element for NDP scenarios. Even more promising is the exploitation of novel and future non-volatile memory (NVM) technologies for NDP, which aim to achieve DRAM-like latencies and throughputs, while providing large-capacity non-volatile storage.
Experimentation in using FPGAs in such NVM-NDP scenarios has been hindered, though, by the fact that the NVM devices/FPGA boards are still very rare and/or expensive. It thus becomes useful to emulate the access characteristics of current and future NVMs using off-the-shelf DRAMs. If such emulation is sufficiently accurate, the resulting FPGA-based NDP computing elements can be used for actual full-stack hardware/software benchmarking, e.g., when employed to accelerate a database.
For this use, we present NVMulator, an open-source easy-to-use hardware emulation module that can be seamlessly inserted between the NDP processing elements on the FPGA and a conventional DRAM-based memory system. We demonstrate that, with suitable parametrization, the emulated NVM can come very close to the performance characteristics of actual NVM technologies, specifically Intel Optane. We achieve 0.62% and 1.7% accuracy for cache line sized accesses for read and write operations, while utilizing only 0.54% of LUT logic resources on a Xilinx/AMD AU280 UltraScale+ FPGA board. We consider both file-system as well as database access patterns, examining the operation of the RocksDB database when running on real or emulated Optane-technology memories.
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when used outside the hospital or for daily use. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) using deep learning models that aim to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is placed on model explainability techniques, specifically Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
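For illustration only, the following is a hedged sketch of a 1D-CNN window classifier of the kind described above (apnea event vs. normal breathing per signal window). The window length, channel count, and layer sizes are assumptions; this is not the architecture that achieved 84.3% accuracy.

```python
# Hedged sketch of a 1D-CNN binary classifier for apnea-event detection per
# signal window. Window length, channels and layer sizes are assumed.
import tensorflow as tf

WINDOW = 30 * 32          # e.g. a 30-second window sampled at 32 Hz (assumed)
CHANNELS = 1              # e.g. a single airflow or SpO2 channel (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(apnea event)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would use labelled PSG windows, e.g.:
#   model.fit(x_train, y_train, validation_split=0.15, epochs=20)
# Per-window predictions can then be aggregated into an AHI estimate
# (detected events per hour of recording).
```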
Purpose: The purpose of this paper is to analyze if omni-channeling is a prerequisite for physical stores to create an emotional shopping experience.
Findings: Due to technological developments and changes in consumer behavior, retailers need to adopt digital tools and offer services that link on- and offline channels, ensuring an emotional shopping experience. Multi-channel retailers need to integrate their channels to satisfy the customer.
There is no doubt that the amplification of channel integration towards an omni-channel structure is a powerful idea whose time has finally come. The digitally cross-linked world postulates all-encompassing, ubiquitous, and unobtrusive future services. In the concomitant, increasingly competitive market, retailers are starting to lay the foundation for omnichannel, meeting the expectations of a digitally savvy audience that wants its shopping experience to be as seamless and uncomplicated as possible. Nevertheless, recent research shows that there are still enough avenues for further research on omnichannel. Until now, the performance of companies was solely considered by experts from a suppliers' point of view. It would be rather interesting to find out whether the desire to meet the increased customer expectations is also recognized by the customers themselves. This paper seeks to answer how purchasing behavior has changed and what customers demand. In addition, it elaborates the opportunities that are promoted by omni-channel. After examining all these effects, the paper arrives at a final step, where it can be attested how the omnichannel performance of fashion and lifestyle retailers can be measured from a consumers' perspective by developing an exclusive index. The study is confined to four fashion and lifestyle retailers: Hugo Boss AG, Levi Strauss & Co, Pull and Bear as well as COS. Using the scientific method of mystery shopping and a multi-item checklist including 54 key performance indicators, the paper aims to examine to what extent the four selected retailers provide a seamless customer journey, according to the five decision-making phases.
The fiber deformations of once-dried, bleached and never-dried unbleached kraft pulps were studied with respect to their behavior in high- and low-consistency refining. The pulps were stained with congo red to experimentally highlight areas where the arrangement of the fibrils was altered by refining such as dislocated zones or slip planes. The stained fibers were analyzed with conventional Metso Fiberlab but also with a novel prototype measurement device utilizing a color imaging setup. The local intensity of the stain in the fiber was expressed as degree of overall damage (Overall fiber damage index, OFDI). The rewetted zero span tensile index (RWZSTI) was used to verify the OFDI with respect to the pulp strength. High consistency refining resulted in a clear increase in the number of kinks which negatively influenced the pulp strength. The OFDI which was used to detect the intensity of local fiber defects also responded accordingly. A higher OFDI resulted in a lower pulp strength. Low consistency refining removed a significant amount of kinks and resulted in an increase in fiber swelling. A slight increase in fibrillation and a significant increase in flake-like fines were also observed. The OFDI, however, was not reduced in low consistency refining as it would be expected by the removal of less severe dislocations. One reason proposed here is that low consistency refining created new fiber pores that allowed the dye to penetrate into the fiber wall similarly as it does in the zones of the dislocations.
This paper addresses what we call the investment question: under what plausible circumstances, if any, can variable renewable energy (VRE, and solar photovoltaic (PV) in particular) be a good investment? Although VRE has been growing rapidly world-wide, it is generally subsidized. Under what cost and market conditions can solar PV flourish without subsidy? We employ solar insolation and market price data from the U.S. and from Germany to gain insight into the investment question. We find that unsubsidized solar PV is or may soon be a justifiable investment, but that market arrangements may play a crucial role in determining success. We end by sketching a proposal that amounts to a reformed capacity market that would afford participation of solar PV.
On the design of an urban data and modeling platform and its application to urban district analyses
(2020)
An integrated urban platform is the essential software infrastructure for smart, sustainable and resilient city planning, operation and maintenance. Today such platforms are mostly designed to handle and analyze large and heterogeneous urban data sets from very different domains. Modeling and optimization functionalities are usually not part of the software concepts. However, such functionalities are considered crucial by the authors to develop transformation scenarios and to optimize smart city operation. An urban platform needs to handle multiple scales in the time and spatial domains, ranging from long-term population and land use change to hourly or sub-hourly matching of renewable energy supply and urban energy demand.
Urban platforms are essential for smart and sustainable city planning and operation. Today they are mostly designed to handle and connect large urban data sets from very different domains. Modelling and optimisation functionalities are usually not part of a city's software infrastructure. However, they are considered crucial for transformation scenario development and optimised smart city operation. The work discusses software architecture concepts for such urban platforms and presents case study results on building sector modelling, including urban data analysis and visualisation. Results from a case study in New York are presented to demonstrate the implementation status.
Digitalization of products and services commonly causes substantial changes in business models, operations, organization structures and IT infrastructures of enterprises. Motivated by experiences and observations from digitalization projects, the paper investigates the effects of digitalization on enterprise architectures (EA). EA models serve as representation of business, information system and technical aspects of an enterprise to support management and development. By comparing EA models before and after digitalization, the paper analyzes the kinds of changes visible in the EA model. The most important finding is that newly created digitized products and the associated (product)- and enterprise architecture are no longer properly integrated into the overall architecture and even exist in parallel. Thus, the focus of this work is on showing these parallel architectures and proposing derivations for a better integration.
Context: Fast moving markets and the age of digitization require that software can be quickly changed or extended with new features. The associated quality attribute is referred to as evolvability: the degree of effectiveness and efficiency with which a system can be adapted or extended. Evolvability is especially important for software with frequently changing requirements, e.g. internet-based systems. Several evolvability-related benefits were arguably gained with the rise of service-oriented computing (SOC) that established itself as one of the most important paradigms for distributed systems over the last decade. The implementation of enterprise-wide software landscapes in the style of service-oriented architecture (SOA) prioritizes loose coupling, encapsulation, interoperability, composition, and reuse. In recent years, microservices quickly gained in popularity as an agile, DevOps-focused, and decentralized service-oriented variant with fine-grained services. A key idea here is that small and loosely coupled services that are independently deployable should be easy to change and to replace. Moreover, one of the postulated microservices characteristics is evolutionary design.
Problem Statement: While these properties provide a favorable theoretical basis for evolvable systems, they offer no concrete and universally applicable solutions. As with each architectural style, the implementation of a concrete microservice-based system can be of arbitrary quality. Several studies also report that software professionals trust in the foundational maintainability of service orientation and microservices in particular. A blind belief in these qualities without appropriate evolvability assurance can lead to violations of important principles and therefore negatively impact software evolution. In addition to this, very little scientific research has covered the areas of maintenance, evolution, or technical debt of microservices.
Objectives: To address this, the aim of this research is to support developers of microservices with appropriate methods, techniques, and tools to evaluate or improve evolvability and to facilitate sustainable long-term development. In particular, we want to provide recommendations and tool support for metric-based as well as scenario-based evaluation. In the context of service-based evolvability, we furthermore want to analyze the effectiveness of patterns and collect relevant antipatterns. Methods: Using empirical methods, we analyzed the industry state of the practice and the academic state of the art, which helped us to identify existing techniques, challenges, and research gaps. Based on these findings, we then designed new evolvability assurance techniques and used additional empirical studies to demonstrate and evaluate their effectiveness. Applied empirical methods were for example surveys, interviews, (systematic) literature studies, or controlled experiments.
Contributions: In addition to our analyses of industry practice and scientific literature, we provide contributions in three different areas. With respect to metric-based evolvability evaluation, we identified a set of structural metrics specifically designed for service orientation and analyzed their value for microservices. Subsequently, we designed tool-supported approaches to automatically gather a subset of these metrics from machine-readable RESTful API descriptions and via a distributed tracing mechanism at runtime. In the area of scenario-based evaluation, we developed a tool-supported lightweight method to analyze the evolvability of a service-based system based on hypothetical evolution scenarios. We evaluated the method with a survey (N=40) as well as hands-on interviews (N=7) and improved it further based on the findings. Lastly with respect to patterns and antipatterns, we collected a large set of service-based patterns and analyzed their applicability for microservices. From this initial catalogue, we synthesized a set of candidate evolvability patterns via the proxy of architectural modifiability tactics. The impact of four of these patterns on evolvability was then empirically tested in a controlled experiment (N=69) and with a metric-based analysis. The results suggest that the additional structural complexity introduced by the patterns as well as developers' pattern knowledge have an influence on their effectiveness. As a last contribution, we created a holistic collection of service-based antipatterns for both SOA and microservices and published it in a collaborative repository.
Conclusion: Our contributions provide first foundations for a holistic view on the evolvability assurance of microservices and address several perspectives. Metric- and scenario-based evaluation as well as service-based antipatterns can be used to identify "hot spots" while service-based patterns can remediate them and provide means for systematic evolvability construction. All in all, researchers and practitioners in the field of microservices can use our artifacts to analyze and improve the evolvability of their systems as well as to gain a conceptual understanding of service-based evolvability assurance.
Background: Design patterns are supposed to improve various quality attributes of software systems. However, there is controversial quantitative evidence of this impact. Especially for younger paradigms such as service- and microservice-based systems, there is a lack of empirical studies.
Objective: In this study, we focused on the effect of four service-based patterns - namely process abstraction, service façade, decomposed capability, and event-driven messaging - on the evolvability of a system from the viewpoint of inexperienced developers.
Method: We conducted a controlled experiment with Bachelor students (N = 69). Two functionally equivalent versions of a service-based web shop - one with patterns (treatment group), one without (control group) - had to be changed and extended in three tasks. We measured evolvability by the effectiveness and efficiency of the participants in these tasks. Additionally, we compared both system versions with nine structural maintainability metrics for size, granularity, complexity, cohesion, and coupling.
Results: Both experiment groups were able to complete a similar number of tasks within the allowed 90 min. Median effectiveness was 1/3. Mean efficiency was 12% higher in the treatment group, but this difference was not statistically significant. Only for the third task, we found statistical support for accepting the alternative hypothesis that the pattern version led to higher efficiency. In the metric analysis, the pattern version had worse measurements for size and granularity while simultaneously having slightly better values for coupling metrics. Complexity and cohesion were not impacted.
Interpretation: For the experiment, our analysis suggests that the difference in efficiency is stronger with more experienced participants and increased from task to task. With respect to the metrics, the patterns introduce additional volume in the system, but also seem to decrease coupling in some areas.
Conclusions: Overall, there was no clear evidence for a decisive positive effect of using service-based patterns, neither for the student experiment nor for the metric analysis. This effect might only be visible in an experiment setting with higher initial effort to understand the system or with more experienced developers.
On the influence of ground and substrate on the radiation characteristics of planar spiral antennas
(2022)
The unidirectional radiation of spiral antennas mounted on a substrate requires the presence of a ground plane. In this work, we successively illustrate the impact of dielectric material and ground plane on the key metrics of a planar equiangular spiral antenna (PESA). For this purpose, a PESA mounted on several substrates with different dielectric properties and thicknesses is modeled and simulated. We introduce the tertiary current flowing on spiral arms when backed by a ground plane.
Massive data transfers in modern data-intensive systems, resulting from low data-locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution as packaging combinations of storage and compute elements on the same device has become feasible.
The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically spread across multiple layers in traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ executions that optimally utilize the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under NoFTL-KV and the COSMOS hardware platform.
Massive data transfers in modern data-intensive systems, resulting from low data-locality and data-to-code system design, hurt their performance and scalability. Near-Data processing (NDP) and a shift to code-to-data designs may represent a viable solution as packaging combinations of storage and compute elements on the same device has become feasible. The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically spread across multiple layers in traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ executions that optimally utilize the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under RocksDB and the COSMOS hardware platform.
In summary, we believe that current “sleep monitoring” consumer devices on the market must undergo a more robust validation process before being made available and distributed in the general public. This is especially noteworthy as there have been first reports in the literature that inaccurate feedback of such consumer devices can worry subjects and may even lead to compromised well-being of the user.
The proliferation of smart technologies transforms the way individual consumers perform tasks. Considerable research suggests that smart technologies are often related to domestic energy consumption. However, it remains unclear how such technologies transform tasks and thereby impact our planet. We explore the role of technological smartness in personal day-to-day tasks that help create a more sustainable future. In the absence of theory, but facing extensive changes in everyday life enabled by smart technologies, we draw on phenomenon-based theorizing (PBT) guidelines. As an anchor, we refer to task endogeneity related to task-technology fit theory (TTF). As an infusion, we employ theory on public goods. Our model proposes novel relations between the concepts of smart autonomy and smart transparency and sustainable task outcomes, mediated by task convenience and task significance. We discuss implications, limitations, and future research opportunities.
Software Process Improvement (SPI) programs have been implemented, inter alia, to improve quality and speed of software development. SPI addresses many aspects ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state-of-the-art in SPI from a general perspective, we observed Software Quality Management (SQM) being of certain relevance in SPI programs. In this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of SQM (including testing). From the main study’s result set, 92 papers were selected for an in-depth systematic review to study the contributions and to develop an initial picture of how these topics are addressed in SPI. Our findings show a fairly pragmatic contribution set in which different solutions are proposed, discussed, and evaluated. Among others, our findings indicate a certain reluctance towards standard quality or (test) maturity models and a strong focus on custom review, testing, and documentation techniques, whereas a set of five selected improvement measures is almost equally addressed.
The increased availability of data gives rise to the use of machine learning methods for purposes like forecasting or quality control in operations management. Practitioners who want to employ these methods are faced with the task of choosing from a large number of available methods. We give an overview of classification methods and available implementations and present considerations for choosing appropriate methods.
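A small sketch matching this overview, under illustrative assumptions (dataset and candidate models chosen arbitrarily): comparing several off-the-shelf classification methods from scikit-learn with cross-validation, as a practitioner choosing a method might do.

```python
# Compare a few standard classifiers on one dataset via 5-fold cross-validation.
# Dataset and candidate models are illustrative choices, not recommendations.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC()),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:>20}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```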
Three different polyols (soluble starch, sucrose, and glycerol) were tested for their potential in the chemical modification of melamine formaldehyde (MF) resins for paper impregnation. MF impregnated papers are widely used as finishing materials for engineered wood. These polyols were selected because the presence of multiple hydroxy groups in the molecules was suspected to facilitate co-condensation with the main MF framework. This should lead to good resin performance. Moreover, they are readily produced from natural feedstock. They are available in large quantities and may serve as economically feasible, environmentally harmless alternative co-monomers suitable to substitute a portion of fossil-based starting material. In the presented work, a number of model resins were synthesized and tested for covalent incorporation of the natural polyol into the MF framework. Spectroscopic evidence of chemical incorporation of glycerol was found by applying 1H, 13C, 1H/13C HSQC, 1H/13C HMBC, and 1H DOSY methods. It was furthermore found that covalent incorporation of glycerol in the network took place when glycerol was added at different stages during synthesis. Further, all resins were used to prepare decorative laminates, and the performance of the novel resins as surface finishes was evaluated using standard technological tests. The technological performance of the various modified thermosetting resins was assessed by determining flow viscosity, molar mass distribution, and storage stability, and, in a second step, by laminating impregnated paper to particle boards and testing the resulting surfaces according to standardized quality tests. In most cases, the average board surface properties were of acceptable quality. Our findings demonstrate the possibility of replacing several percent of the petroleum-based product melamine by compounds obtained from renewable resources.
A software process is the game plan to organize project teams and run projects. Yet, it still is a challenge to select the appropriate development approach for the respective context. A multitude of development approaches compete for the users’ favor, but there is no silver bullet serving all possible setups. Moreover, recent research as well as experience from practice shows companies utilizing different development approaches to assemble the best-fitting approach for the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The paper at hand provides insights into the HELENA study with which we aim to investigate the use of “Hybrid dEveLopmENt Approaches in software systems development”. We present the survey design and initial findings from the survey’s test runs. Furthermore, we outline the next steps towards the full survey.