An important shift in software delivery is the definition of a cloud service as an independently deployable unit, following the microservices architectural style. Container virtualization facilitates development and deployment by ensuring independence from the runtime environment. Thus, cloud services are built as container-based systems: a set of containers that control the lifecycle of software and middleware components. However, using containers leads to a new paradigm for service development and operation: self-service environments enable software developers to deploy and operate container-based systems on their own ("you build it, you run it"). Following this approach, more and more operational aspects are transferred into the responsibility of software developers. In this work, we propose a concept for self-adaptive cloud services based on container virtualization in line with the microservices architectural style, and we present a model-based approach that assists software developers in building these services. Based on operational models specified by developers, the mechanisms required for self-adaptation are generated automatically. As a result, each container adapts itself in a reactive, decentralized manner. We evaluate a prototype that leverages the emerging TOSCA standard to specify operational behavior in a portable manner.
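The reactive, decentralized adaptation loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the rule structure, metric names, and the `scale_out` action are assumptions; a real system would evaluate TOSCA-derived operational models and call the container runtime instead of logging.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptationRule:
    # One entry of a (hypothetical) operational model: threshold rule -> action.
    metric: str
    threshold: float
    action: str

@dataclass
class ContainerAgent:
    # Each container runs its own agent, so adaptation stays decentralized.
    name: str
    rules: list
    log: list = field(default_factory=list)

    def observe(self, metrics):
        # Reactive check: compare each observed metric against the rules.
        for rule in self.rules:
            if metrics.get(rule.metric, 0.0) > rule.threshold:
                self.adapt(rule.action)

    def adapt(self, action):
        # In a real system this would call the container runtime; here we log it.
        self.log.append(action)

agent = ContainerAgent("web", [AdaptationRule("cpu", 0.8, "scale_out")])
agent.observe({"cpu": 0.95})   # exceeds the threshold, triggers adaptation
```

Because every agent only inspects its own metrics and rules, no central controller is needed — mirroring the decentralized manner of adaptation the abstract describes.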
Simulation models of the middle ear have rarely been used for diagnostic purposes due to their limited predictive ability with respect to pathologies. One big challenge is the large uncertainty and ambiguity in the choice of material parameters of the model.
Typically, the model parameters are determined by fitting simulation results to validation measurements. A previous study showed that fitting the parameters of a finite-element model using the middle-ear transfer function and various other measurable output variables from normal ears alone is not sufficient to obtain good predictive ability of the model for pathological middle-ear conditions. However, including validation measurements from one pathological case resulted in very good predictive ability for other pathological cases as well. Although the parameter set found was plausible in all aspects, it was not yet possible to draw conclusions about its uniqueness, accuracy, or uncertainty.
To answer these questions, statistical solution approaches are used in this study. Using the Monte Carlo method, a large number of plausible model data sets are generated that correctly represent the normal and pathological middle-ear characteristics in terms of various output variables such as impedance, reflectance, and the umbo and stapes transfer functions. Subsequent principal component analyses (PCA) allow conclusions to be drawn about correlations, quantitative limits, and the statistical density of parameter values.
Furthermore, applying inverse PCA yields numerous plausible parameterizations of the middle-ear model. These can be used for data augmentation and for training a neural network capable of distinguishing between a normal middle ear and pathologies such as otosclerosis, malleus fixation, and disarticulation, based on objectively measured quantities such as impedance, reflectance, and umbo velocity.
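The PCA / inverse-PCA step above can be sketched as follows. This is a schematic illustration under stated assumptions: the three-parameter Monte Carlo sample is synthetic stand-in data, not the paper's middle-ear parameters, and PCA is computed directly via SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for Monte Carlo output: accepted parameter sets (rows), e.g.
# three hypothetical stiffness/damping values with illustrative spreads.
accepted = rng.normal(loc=[1.0, 0.5, 2.0], scale=[0.1, 0.05, 0.2], size=(500, 3))

# PCA via SVD of the centered data; rows of Vt are the principal axes.
mean = accepted.mean(axis=0)
centered = accepted - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt.T            # principal-component scores

# Inverse PCA: sample new scores within the observed quantitative limits
# and map them back to parameter space -> new plausible parameterizations.
new_scores = rng.uniform(scores.min(axis=0), scores.max(axis=0), size=(10, 3))
new_params = new_scores @ Vt + mean
```

Sampling in score space rather than raw parameter space preserves the correlations the PCA found, which is what makes the generated sets plausible for data augmentation.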
Due to large interindividual variances and the poor optical accessibility of the ear, the specificity of hearing diagnostics today is severely restricted with respect to the clinical picture and quantitative assessment. Often only a yes/no decision is possible, which depends strongly on the subjective assessment of the ENT physician. A novel approach, in which objectively obtainable, non-invasive audiometric measurements are evaluated using a numerical middle-ear model, makes it possible to render the hidden middle-ear properties visible and quantifiable. The central topic of this paper is a novel parameter identification algorithm that combines inverse fuzzy arithmetic with an artificial neural network in order to achieve a coherent overall diagnostic picture when comparing model and measurement. Its use is demonstrated on a pathological pattern called malleus fixation, in which the upper ligament of the malleus is pathologically stiffened.
New or adapted digital business models have a huge impact on Enterprise Architectures (EA) and require them to become more agile, flexible, and adaptable. These changes happen frequently and are currently not well documented. An EA consists of many elements with manifold relationships between them; changing the business model may therefore have multiple impacts on other architectural elements. The EA engineering process deals with the development, change, and optimization of architectural elements and their dependencies. An EA thus provides a holistic view for both business and IT from the perspective of the many stakeholders who are involved in EA decision-making processes. Different stakeholders have specific concerns and today collaborate in often unclear decision-making processes. In our research, we investigate information from collaborative decision-making processes to support stakeholders in taking current decisions. In addition, we provide all information necessary to understand how and why decisions were taken. We collect the decision-related information automatically to minimize manual, time-intensive work as much as possible. The core contribution of our research extends a decisional metamodel that links basic decisions with architectural elements and extends them with an associated decisional case context. Our aim is to support a new integral method for multi-perspective and collaborative decision-making processes. We illustrate this with a practice-relevant decision-making scenario for Enterprise Architecture engineering.
Companies are continuously changing their strategy, processes, and information systems to benefit from the digital transformation. Controlling the digital architecture and its governance is a fundamental goal. Enterprise Governance, Risk and Compliance (GRC) systems are vital for managing the digital risks threatening modern enterprises from many different angles. The most significant constituent of GRC systems is the definition of controls that are implemented on different layers of a digital Enterprise Architecture (EA). As part of the compliance aspect of GRC, the effectiveness of these controls is assessed and reported to the relevant management bodies within the enterprise. In this paper, we present a metamodel that links controls to the affected elements of a digital EA and supplies a way of expressing associated assessment techniques and results. We complement the metamodel with an expository instantiation of a control compliance cockpit in an international insurance enterprise.
The energy turnaround, digitalization, and decreasing revenues force enterprises in the energy domain to develop new business models. Business models for renewable energy are based on a different logic than business models for large-scale power plants. Following a design science research approach, we first examined the business models of three enterprises in the energy domain. We identified that these business models result in complex ecosystems with multiple actors and difficult relationships between them. One cause is the fast-changing and complicated state regulation in Germany. To address the problem, we captured the requirements together with partners from the enterprises in a second phase. Furthermore, we developed the prototype Business Model Configurator (BMConfig), based on the e3Value ontology, on the metamodelling platform ADOxx. We demonstrate the feasibility of our approach on the business model of an energy-efficiency service based on smart meter data.
This paper models the political budget cycle with stochastic differential equations and highlights the development of the cycle's future volatility. In fact, I confirm the proposition of a less volatile budget cycle in the future. Moreover, I show that this trend is even amplified by higher transparency. These findings are new evidence in the literature on electoral cycles. I calibrate a rigorous stochastic model on public deficit-to-GDP data for several countries from 1970 to 2012.
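A stochastic-differential-equation model of a cycle with declining volatility can be illustrated with a simple Euler–Maruyama simulation. This is a generic sketch, not the paper's calibrated model: the mean-reverting form, all parameter values, and the exponential volatility decay are assumptions chosen only to mimic a "less volatile cycle in the future".

```python
import numpy as np

rng = np.random.default_rng(42)
dt, steps = 0.01, 4000
theta, mu = 0.5, 0.0            # mean-reversion speed and long-run level
sigma0, decay = 0.3, 0.0005     # initial volatility and its decay rate

# Euler-Maruyama discretization of
#   dX_t = theta * (mu - X_t) dt + sigma(t) dW_t,  sigma(t) = sigma0 * exp(-decay * t)
x = np.empty(steps + 1)
x[0] = 0.05                     # illustrative initial deficit-to-GDP value
for i in range(steps):
    sigma = sigma0 * np.exp(-decay * i)          # volatility shrinks over time
    x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
```

Because sigma decays, late fluctuations of the simulated path are visibly smaller than early ones — the qualitative pattern of a budget cycle becoming less volatile.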
One of the challenges in condition monitoring systems is residual lifetime prediction. This prediction is based on statistical methods, on physical knowledge about the considered process, or on a combination of these approaches. Physical knowledge of the system is a result of the long-term experience of process operators; however, it can also be gained by analyzing appropriately designed process models. The additional benefit of such models is that particular effects and their impact on the process behavior can be analyzed in detail, without plant operation, and in a shorter time. The present contribution, developed in the framework of the research project Model Based Hierarchic Condition Monitoring, presents such models for the condition monitoring of roller chains. First, existing high-order dynamic models of such chains, given by nonlinear differential equations, are extended to incorporate effects that occur due to a deterioration of the chain condition. Then, a simple model is developed and compared to the high-order model. Based on the two models, the change in process behavior due to a deterioration of the roller chain condition is analyzed, illustrating that these models can be used in future research within the above-mentioned project to better predict the residual lifetime of the considered roller chains.
Teaching at assembly workstations in production at SMEs (small and medium-sized enterprises) often does not take place at all, or only insufficiently. In addition to the lack of technical content, there are also, from an ergonomic point of view, incorrect movement sequences that "untrained" people usually acquire automatically. An AI-based approach is used to analyze a defined workflow for a specific assembly scope with regard to the behavior of several employees. Based on these different behaviors, the AI gives feedback on the points in time, work steps, and movements at which particularly dangerous incorrect postures occur. Motion capturing and digital human model simulation, in combination with the results of the AI, define the optimized workflow. Individual employees can be trained directly, because the AI identifies their most serious incorrect postures and provides them with a direct comparison of their "wrong" posture and a joint-friendly posture. With the assistance of various test persons, the AI can conduct a study in which the most frequently occurring incorrect postures are identified. This can be done in general or tailored to specific groups of people (e.g., "People over 1.90 m tall must be particularly careful not to make the following mistake..."). The approach will be tested and validated at Werk150, the factory of the ESB Business School on the campus of Reutlingen University. The newly gained knowledge will subsequently be used for training in SMEs.
Collaborative apparel consumption is proposed as a more sustainable alternative to conventional consumption. The purpose of this study is to explore consumers' motives to participate in collaborative apparel consumption. Findings suggest that consumers' intention to participate is mainly influenced by financial benefits, convenience, and sustainability awareness.
Multi-dimensional patient data, such as time-varying volume data, data from different imaging modalities, surface segmentations, etc., are of growing importance in the clinical routine. For many use cases, it is of major importance to replicate a certain visualization of a data set created on one machine on a different computer using different software tools. Up until now, no standardized methodology has existed for this consistent presentation. We propose an extension of the Digital Imaging and Communications in Medicine (DICOM) standard called "Multi-dimensional Presentation State" and outline the scope and first results of the standardization process.
The Internet of Things, enterprise social networks, adaptive case management, mobility systems, analytics for big data, and cloud environments are emerging to support smart, connected (i.e., digital) products and services and the digital transformation. Biological metaphors of living and adaptable ecosystems currently provide the logical foundation for resilient run-time environments with service-oriented digitization architectures and for self-optimizing intelligent business services and related distributed information systems. We investigate mechanisms for the flexible adaptation and evolution of information systems with digital architectures in the context of the ongoing digital transformation. The goal is to support flexible and agile transformations for both business and related information systems through the adaptation and dynamic evolution of their digital architectures. The present research paper investigates mechanisms of decision analytics for digitization architectures, putting a spotlight on micro-granular Internet of Things architectures, by extending original enterprise architecture reference models with digitization architectures and their multi-perspective architectural decision management.
Social networks, smart portable devices, and the Internet of Things (IoT), based on technologies such as big data analytics and cloud services, are emerging to support flexibly connected products and agile services as the new wave of digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing and resilient run-time environments for intelligent business services and related distributed information systems. We are extending Enterprise Architecture (EA) with mechanisms for the flexible adaptation and evolution of information systems with distributed IoT and other micro-granular digital architectures, to support the next generation of digitized products, services, and processes. Our aim is to support flexibility and agile transformation for both IT and business capabilities through adaptive digital enterprise architectures. The present research paper additionally investigates decision mechanisms in the context of multi-perspective explorations of enterprise services and Internet of Things architectures by extending original enterprise architecture reference models with state-of-the-art elements for architectural engineering and digitization.
The seamless fusion of the virtual world of information with the real, physical world of things is considered key to mastering the increasing complexity of production networks in the context of Industry 4.0. This fusion, widely referred to as the Internet of Things (IoT), is primarily enabled through the use of automatic identification (Auto-ID) technologies as an interface between the two worlds. Existing Auto-ID technologies almost exclusively rely on artificial features or identifiers that are attached to an object for the sole purpose of identification. However, using artificial features for identification causes additional effort and is not always applicable. This paper therefore follows an approach of using multiple natural object features, defined by the technical product information from computer-aided design (CAD) models, for direct identification. By extending optical instance-level 3D object recognition with additional non-optical sensors, a multi-sensor automatic identification system (AIS) is realised, capable of identifying unpackaged piece goods without the need for artificial identifiers. While the implementation of a prototype confirms the feasibility of the approach, first experiments show improved accuracy and distinctiveness in identification compared to optical instance-level 3D object recognition alone. This paper aims to introduce the concept of multi-sensor identification and to present the prototype multi-sensor AIS.
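The core idea — matching multiple natural features against CAD-derived product data — can be sketched as a nearest-neighbour lookup over feature vectors. All names, feature choices, and values below are illustrative assumptions; the prototype's actual sensors and matching method are not shown here.

```python
import math

# Hypothetical CAD-derived catalog: natural features per product variant.
# Two parts with identical geometry that differ only in weight, so a purely
# optical system could not tell them apart.
catalog = {
    "gear_A": {"length_mm": 40.0, "width_mm": 40.0, "weight_g": 120.0},
    "gear_B": {"length_mm": 40.0, "width_mm": 40.0, "weight_g": 180.0},
}

def identify(observation, catalog):
    # Nearest neighbour over normalized (relative) feature distances,
    # combining optical features with non-optical ones (here: weight).
    def distance(spec):
        return math.sqrt(sum(((observation[k] - spec[k]) / spec[k]) ** 2
                             for k in spec))
    return min(catalog, key=lambda name: distance(catalog[name]))

# Observation fusing 3D recognition output with a scale reading.
obs = {"length_mm": 40.2, "width_mm": 39.8, "weight_g": 176.0}
```

The example shows why the non-optical sensor matters: the two catalog entries are optically indistinguishable, and only the weight feature resolves the ambiguity.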
The characteristics of modern computing and storage technologies fundamentally differ from those of traditional hardware, and there is a need to optimally leverage their performance, endurance, and energy consumption characteristics. Therefore, existing architectures and algorithms in modern high-performance database management systems have to be redesigned and advanced. Multi-Version Concurrency Control (MVCC) approaches in database management systems maintain multiple physically independent tuple versions. Snapshot isolation approaches enable high parallelism and concurrency in workloads with an almost serializable consistency level, and modern hardware technologies benefit from multi-version approaches. Indexing multi-version data on modern hardware, however, is still an open research area. In this paper, we provide a survey of popular multi-version indexing approaches, with an extended scope covering high-performance single-version approaches. An optimal multi-version index structure balances look-up efficiency for the tuple versions visible to transactions against the index maintenance effort, for different workloads on modern hardware technologies.
Automatic segmentation is essential for brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic glioma boundary detection in pre-operative MRI. It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorff distances of 5.23, 13.54, and 12.05 for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 values of 2.66, 1.72, and 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively.
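A common way to combine segmentation masks from several models, used here purely as an illustrative sketch (the paper's actual ensembling strategy may differ), is per-voxel majority voting, scored with the Dice coefficient mentioned above. The tiny 2x2 masks are stand-in data.

```python
import numpy as np

def majority_vote(masks):
    # Stack binary masks from n models and keep voxels where more than
    # half of the models agree on "tumor".
    stacked = np.stack(masks)                      # (n_models, H, W)
    votes = stacked.sum(axis=0)
    return (votes > stacked.shape[0] / 2).astype(np.uint8)

def dice(a, b):
    # Dice coefficient: 2 * |A intersect B| / (|A| + |B|).
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Illustrative predictions from three hypothetical models.
m1 = np.array([[1, 1], [0, 0]], dtype=np.uint8)
m2 = np.array([[1, 0], [0, 1]], dtype=np.uint8)
m3 = np.array([[1, 1], [1, 0]], dtype=np.uint8)
ensemble = majority_vote([m1, m2, m3])
```

Majority voting suppresses voxels that only a single model predicts, which is one reason ensembles tend to be more robust across scanners and protocols than any individual model.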
This paper presents a novel multi-modal CNN architecture that exploits complementary input cues in addition to sole color information. The joint model implements a mid-level fusion that allows the network to exploit cross-modal interdependencies already at a medium feature level. The benefit of the presented architecture is shown for the RGB-D image understanding task. So far, state-of-the-art RGB-D CNNs have used network weights trained on color data. In contrast, a superior initialization scheme is proposed to pre-train the depth branch of the multi-modal CNN independently. In an end-to-end training, the network parameters are optimized jointly using the challenging Cityscapes dataset. Thorough experiments show the effectiveness of the proposed model: both the RGB GoogLeNet and further RGB-D baselines are outperformed by a significant margin on two different tasks, semantic segmentation and object detection. For the latter, this paper shows how to extract object-level ground truth from the instance-level annotations in Cityscapes in order to train a powerful object detector.
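Mid-level fusion itself is a simple structural idea: each modality is processed by its own branch up to a medium depth, then the feature maps are concatenated so later layers see both. The sketch below is a deliberately minimal stand-in (linear maps plus ReLU instead of convolutional layers; all shapes are illustrative), not the paper's GoogLeNet-based architecture.

```python
import numpy as np

def branch(x, w):
    # Stand-in for a few convolutional layers: linear map + ReLU.
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(1)
rgb = rng.normal(size=(4, 8))      # 4 spatial positions, 8 RGB input features
depth = rng.normal(size=(4, 8))    # matching depth input features

# Modality-specific branches with independent weights (the depth branch
# could be pre-trained separately, as the abstract proposes).
rgb_feat = branch(rgb, rng.normal(size=(8, 16)))
depth_feat = branch(depth, rng.normal(size=(8, 16)))

# Mid-level fusion: concatenate along the feature axis; subsequent layers
# can now learn cross-modal interdependencies.
fused = np.concatenate([rgb_feat, depth_feat], axis=1)
```

Compared with late fusion (averaging final predictions), concatenating at a medium feature level lets the joint layers weight one modality's features against the other's per location.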
Enterprises are transforming their strategy, culture, processes, and information systems to enlarge their digitalization efforts or to strive for digital leadership. The digital transformation profoundly disrupts existing enterprises and economies. In recent times, many new business opportunities have appeared that use the potential of the Internet and related digital technologies: the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT environments with many rather small and distributed structures, such as the Internet of Things, microservices, or other micro-granular elements. Architecting micro-granular structures has a substantial impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open world of living software and system architectures defines the context for flexible and evolutionary software approaches, which are essential to enable the digital transformation. In this paper, we reveal multiple perspectives of digital enterprise architecture and related decisions to effectively support value- and service-oriented software systems for intelligent digital services and products.
There are indicators that we are entering a new era of MTM research, moving beyond the structural approach that has characterized MTM research to date to focus on important and under-researched issues, such as the nature of employees' experiences in an MTM context. Although team research suggests that the experiences of members impact team functioning, these lines of reasoning have not, until recently, made their way into MTM research. To overcome this limitation, this symposium showcases five papers that use a variety of theoretical perspectives, research designs (i.e., qualitative, quantitative), contexts (e.g., healthcare, an automotive manufacturer, online panels), methodologies, and analytical methods (i.e., meta-analysis, content/thematic analysis). The symposium focuses on surfacing and advancing unanswered questions that extend theory by examining critical individual- and team-level outcomes (e.g., individual/team performance, individual counterproductive and organizational citizenship behavior, individual learning, individual turnover intentions, organizational commitment) in the experiences of MTM employees across their teams (e.g., goals, functions, roles). We hope to provide a forum to advance unanswered questions that offer fruitful directions for MTM research.
An index in a Multi-Version DBMS (MV-DBMS) has to reflect the different tuple versions of a single data item. Existing approaches follow the paradigm of logically separating the tuple version data from the data item; e.g., an index is only allowed to return at most one version of a single data item (while it may return multiple data items that match the search criteria). Hence, to determine the valid (and therefore visible) tuple version of a data item, the MV-DBMS first fetches all tuple versions that match the search criteria and subsequently filters the visible versions using visibility checks. This involves I/O storage accesses to tuple versions that would not have to be fetched. In this vision paper, we present the Multi-Version Index (MV-IDX) approach, which allows index-only visibility checks that significantly reduce the amount of I/O storage accesses as well as the index maintenance overhead. MV-IDX achieves significantly lower response times and higher transactional throughput on OLTP workloads.
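The idea of an index-only visibility check can be sketched as follows: if each index entry carries the version's validity interval (begin/end timestamps), the visible version can be selected inside the index, without fetching base-table tuples. This is a conceptual sketch, not the MV-IDX data structure; the entry layout and timestamp semantics are simplifying assumptions.

```python
from collections import namedtuple

# Hypothetical index entry that stores the version's validity interval.
IndexEntry = namedtuple("IndexEntry", "key version begin_ts end_ts")

def visible(entry, snapshot_ts):
    # A version is visible to a snapshot if it was created at or before the
    # snapshot and had not yet been invalidated at snapshot time.
    return entry.begin_ts <= snapshot_ts < entry.end_ts

def index_lookup(entries, key, snapshot_ts):
    # Index-only lookup: key match + visibility check, no base-table I/O.
    return [e for e in entries if e.key == key and visible(e, snapshot_ts)]

entries = [
    IndexEntry("a", 1, begin_ts=5, end_ts=10),            # superseded version
    IndexEntry("a", 2, begin_ts=10, end_ts=float("inf")), # current version
]
```

A transaction with snapshot timestamp 7 sees version 1; one with timestamp 12 sees version 2 — and in neither case does the lookup touch tuple versions that turn out to be invisible, which is exactly the I/O saving the abstract claims.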
Modern mixed (HTAP) workloads execute fast update transactions and long-running analytical queries on the same dataset and system. In multi-version (MVCC) systems, such workloads result in many short-lived versions and long version chains, as well as in increased and frequent maintenance overhead.
Consequently, the pressure on indices increases significantly. Firstly, the frequent modifications cause frequent creation of new versions, yielding a surge in index maintenance overhead. Secondly, and more importantly, index scans incur extra I/O overhead to determine which of the resulting tuple versions are visible to the executing transaction (visibility check), as current designs only store version/timestamp information in the base table, not in the index. An index-only visibility check is critical for HTAP workloads on large datasets.
In this paper, we propose the Multi-Version Partitioned B-Tree (MV-PBT), a version-aware index structure supporting index-only visibility checks and flash-friendly I/O patterns. The experimental evaluation indicates a 2x improvement for analytical queries and 15% higher transactional throughput under HTAP workloads. MV-PBT offers 40% higher transaction throughput compared to WiredTiger's LSM-Tree implementation under YCSB.
In the present tutorial, we perform a cross-cut analysis of database storage management from the perspective of modern storage technologies. We argue that neither the design of modern DBMS nor the architecture of modern storage technologies is aligned with the other. Moreover, the majority of systems rely on a complex, multi-layer, compatibility-oriented storage stack. The result is needlessly suboptimal DBMS performance, inefficient utilization, and significant write amplification due to outdated abstractions and interfaces. We focus on the concept of native storage: storage that is operated without intermediate abstraction layers, over an open native storage interface, and directly controlled by the DBMS.
Data analytics tasks on large datasets are computationally intensive and often demand the compute power of cluster environments. Data cleansing, preparation, dataset characterization, and the computation of statistics or metrics are frequent steps; they are mostly performed ad hoc, in an explorative manner, and mandate low response times. Yet such steps are I/O-intensive and typically very slow due to low data locality and inadequate interfaces and abstractions along the stack. These typically result in prohibitively expensive scans of the full dataset and transformations at interface boundaries.
In this paper, we examine R as an analytical tool managing large persistent datasets in Ceph, a widespread cluster file system. We propose nativeNDP, a framework for Near-Data Processing that pushes down primitive R tasks and executes them in situ, directly within the storage device of a cluster node. Across a range of data sizes, we show that nativeNDP is more than an order of magnitude faster than other pushdown alternatives.
Hypermedia as the Engine of Application State (HATEOAS) is one of the core constraints of REST. It refers to the concept of embedding hyperlinks into the response of a queried or manipulated resource in order to show a client possible follow-up actions and transitions to related resources. This concept thus aims to provide a client with navigational support when interacting with a Web-based application. Although HATEOAS should be implemented by any Web-based API claiming to be RESTful, API providers tend to offer service descriptions in place of embedding hyperlinks into responses. Instead of relying on navigational support, a client developer has to read the service description and identify the resources and URIs that are relevant for interacting with the API. In this paper, we introduce an approach that aims to identify transitions between resources of a Web-based API by systematically analyzing the service description alone. We devise an algorithm that automatically derives a URI model from the service description and then analyzes the payload schemas to identify feasible values for the substitution of path parameters in URI templates. We implement this approach as a proxy application that injects hyperlinks representing transitions into the response payload of a queried or manipulated resource. The result is HATEOAS-like navigational support through an API. Our first prototype operates on service descriptions in the OpenAPI format. We evaluate our approach using ten real-world APIs from different domains and discuss the results as well as the observations captured in these tests.
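The path-parameter substitution step can be illustrated with a small sketch: given URI templates (as derivable from an OpenAPI `paths` section) and a response payload, emit the follow-up links whose path parameters can all be filled from payload fields. The payload, templates, and field-to-parameter matching by identical names are illustrative assumptions, not the paper's full schema analysis.

```python
import re

def expand_templates(templates, payload):
    # For each URI template, substitute every {param} whose name appears as a
    # field in the payload; skip templates with unresolvable parameters.
    links = []
    for template in templates:
        params = re.findall(r"\{(\w+)\}", template)
        if params and all(p in payload for p in params):
            uri = template
            for p in params:
                uri = uri.replace("{" + p + "}", str(payload[p]))
            links.append(uri)
    return links

# Hypothetical response payload and OpenAPI-style URI templates.
payload = {"orderId": 42, "status": "open"}
templates = [
    "/orders/{orderId}",
    "/orders/{orderId}/items",
    "/customers/{customerId}",   # cannot be resolved from this payload
]
```

A proxy could attach the resolved URIs to the response as hypermedia links, giving the client the HATEOAS-like navigational support described above.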
Multi-versioning and MVCC are the foundations of many modern DBMSs. Under mixed workloads and large datasets, the creation of the transactional snapshot can become very expensive, as long-running analytical transactions may request old versions residing on cold storage for reasons of transactional consistency. Furthermore, analytical queries operate on cold data stored on slow persistent storage. Due to the poor data locality, snapshot creation may cause massive data transfers and thus lower performance. Given the current trend towards computational storage and near-data processing, it has become viable to perform such operations in-storage to reduce data transfers and improve scalability. neoDBMS is a DBMS designed for near-data processing and computational storage. In this paper, we demonstrate how neoDBMS performs snapshot computation in situ. We showcase different interactive scenarios in which neoDBMS outperforms PostgreSQL 12 by up to 5x.
Massive data transfers in modern key/value stores, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper, we introduce nKV, a key/value store utilizing native computational storage and near-data processing. On the one hand, nKV can directly control the placement of data and computation on the underlying storage hardware. On the other hand, nKV propagates the data formats and layouts to the storage device, where software and hardware parsers and accessors are implemented. Both allow NDP operations to execute in a host-intervention-free manner, directly on physical addresses, and thus to better utilize the underlying hardware. Our performance evaluation is based on executing traditional KV operations (GET, SCAN) and complex graph-processing algorithms (betweenness centrality) in situ, with 1.4x-2.7x better performance on real hardware, the COSMOS+ platform.
Flash SSDs are omnipresent as database storage. HDD replacement is seamless because Flash SSDs implement the same legacy hardware and software interfaces to enable backward compatibility. Yet, the price paid is high, as backward compatibility masks the native behaviour, incurs significant complexity, and decreases I/O performance, making it non-robust and unpredictable. Flash SSDs are black boxes. Although DBMS have ample mechanisms to control hardware directly and to utilize the performance potential of Flash memory, the legacy interfaces and black-box architecture of Flash devices prevent them from doing so.
In this paper, we demonstrate NoFTL, an approach that enables native Flash access and integrates parts of the Flash-management functionality into the DBMS, yielding a significant performance increase and a simplification of the I/O stack. NoFTL is implemented on real hardware based on the OpenSSD research platform. The contributions of this paper include: (i) a description of the NoFTL native Flash storage architecture; (ii) its integration into Shore-MT; and (iii) a performance evaluation of NoFTL on a real Flash SSD and on an online data-driven Flash emulator under TPC-B, TPC-C, TPC-E, and TPC-H workloads. The results indicate an improvement of at least 2.4x on real hardware over conventional Flash storage, as well as better utilisation of native Flash parallelism.
Modern persistent Key/Value stores are designed to meet the demand for high transactional throughput and high data ingestion rates. Still, they rely on a backwards-compatible storage stack and abstractions to ease space management and foster seamless proliferation and system integration. Their dependence on the traditional I/O stack has a negative impact on performance, causes unacceptably high write amplification, and limits storage longevity.
In this paper we present NoFTL-KV, an approach that results in a lean I/O stack, integrating physical storage management natively in the Key/Value store. NoFTL-KV eliminates backwards compatibility, allowing the Key/Value store to directly exploit the characteristics of modern storage technologies. NoFTL-KV is implemented under RocksDB. The performance evaluation under LinkBench shows that NoFTL-KV improves transactional throughput by 33%, while response times improve up to 2.3x. Furthermore, NoFTL-KV reduces write amplification 19x and improves storage longevity by approximately the same factor.
Sleep analysis using a polysomnography system is difficult and expensive, and very few people want cables or devices attached to their bodies during sleep. We therefore suggest a non-invasive and unobtrusive measurement: a monitoring system that does not disturb the subject. The idea is a non-invasive monitoring system based on detecting the pressure distribution through the mattress, able to measure the pressure differences that occur during a single heartbeat and during breathing. The system consists of two blocks: signal acquisition and signal processing. The whole technology should be economical enough to be affordable for every user. The result is preprocessed data for further detailed analysis, obtained using different filters for heartbeat and respiration detection. In the initial filtering stage, Butterworth filters are used.
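The initial Butterworth filtering stage can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the sampling rate, filter order, and cut-off frequencies are assumptions (typical respiration lies around 0.1-0.5 Hz and the ballistocardiographic heartbeat component roughly in 0.8-10 Hz), and the input is a synthetic pressure signal.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100.0                      # assumed sampling rate of the pressure sensor, Hz
t = np.arange(0, 30, 1 / fs)

# Synthetic mattress-pressure signal: respiration (~0.25 Hz),
# heartbeat (~1.2 Hz), and measurement noise.
signal = (2.0 * np.sin(2 * np.pi * 0.25 * t)
          + 0.3 * np.sin(2 * np.pi * 1.2 * t)
          + 0.05 * np.random.default_rng(0).standard_normal(t.size))

def bandpass(x, lo, hi, fs, order=2):
    """Zero-phase Butterworth band-pass filter (second-order sections)."""
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)],
                 btype="band", output="sos")
    return sosfiltfilt(sos, x)

respiration = bandpass(signal, 0.1, 0.5, fs)   # assumed breathing band
heartbeat = bandpass(signal, 0.8, 10.0, fs)    # assumed heartbeat band
```

Separating the two bands first makes the subsequent peak detection for breathing rate and heart rate considerably simpler, since each filtered channel is dominated by a single physiological rhythm.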
Sleep studies can be used to assess sleep quality and general bed behavior. The results can be helpful for regulating sleep and recognizing different human sleep disorders. In comparison to the leading standard measuring system, polysomnography (PSG), the system proposed in this work is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides, these methods not only lose practicality because they must be put on, they are also very expensive. The system proposed in this paper classifies respiration and body movement with only one type of sensor, and in a non-invasive way. The sensor used is a pressure sensor, which is low cost and suitable for commercial purposes. The system was tested in an experiment that recorded the sleep process of a subject. These recordings showed excellent results in the classification of breathing rate and body movements.
In this paper we claim that a competitive analysis with new players entering a market requires a specific, systems-based approach; system dynamics provides such an approach. We infer from our study that established premium automobile manufacturers could have identified the possible threat posed by a newcomer like Tesla earlier by using system dynamics. In particular, we postulate that a feedback view helps decision makers better understand the significance of competitive information and perceive such information faster and more reliably.
Novel design for a coreless printed circuit board transformer realizing high bandwidth and coupling
(2019)
Rogowski coils offer galvanic isolation and can measure alternating currents with a high bandwidth. Coreless printed circuit board (PCB) transformers have been used as an alternative to limit the additional stray inductance when a Rogowski coil cannot be attached to the circuit. A new PCB transformer layout is proposed to reduce cost, decrease additional stray inductance, increase the bandwidth of current measurements, and simplify the integration into existing designs.
Near-Data Processing (NDP) is a key computing paradigm for reducing the ever-growing time and energy costs of data transport versus computation. With their flexibility, FPGAs are an especially suitable compute element for NDP scenarios. Even more promising is the exploitation of novel and future non-volatile memory (NVM) technologies for NDP, which aim to achieve DRAM-like latencies and throughputs while providing large-capacity non-volatile storage.
Experimentation in using FPGAs in such NVM-NDP scenarios has been hindered, though, by the fact that the NVM devices/FPGA boards are still very rare and/or expensive. It thus becomes useful to emulate the access characteristics of current and future NVMs using off-the-shelf DRAMs. If such emulation is sufficiently accurate, the resulting FPGA-based NDP computing elements can be used for actual full-stack hardware/software benchmarking, e.g., when employed to accelerate a database.
For this use, we present NVMulator, an open-source, easy-to-use hardware emulation module that can be seamlessly inserted between the NDP processing elements on the FPGA and a conventional DRAM-based memory system. We demonstrate that, with suitable parametrization, the emulated NVM can come very close to the performance characteristics of actual NVM technologies, specifically Intel Optane. We achieve 0.62% and 1.7% accuracy for cache-line-sized read and write accesses, respectively, while utilizing only 0.54% of the LUT logic resources on a Xilinx/AMD AU280 UltraScale+ FPGA board. We consider both file-system and database access patterns, examining the operation of the RocksDB database when running on real or emulated Optane-technology memories.
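The parametrization idea behind such emulation can be illustrated in software. NVMulator itself is an FPGA hardware module; the toy class below (`EmulatedNVM`, a hypothetical name) merely wraps an ordinary byte array and injects configurable per-access delays, the way the hardware module stalls DRAM accesses to mimic NVM timing. The latency figures are rough ballpark assumptions for extra read/write delay, not numbers from the paper.

```python
import time

class EmulatedNVM:
    """Wraps a plain byte array and injects per-access delays."""
    def __init__(self, size, read_ns, write_ns):
        self.mem = bytearray(size)
        self.read_ns = read_ns      # extra latency per read access, ns
        self.write_ns = write_ns    # extra latency per write access, ns

    def _delay(self, ns):
        # Busy-wait, mimicking a stalled memory port in hardware.
        end = time.perf_counter_ns() + ns
        while time.perf_counter_ns() < end:
            pass

    def read(self, addr, n=64):
        self._delay(self.read_ns)
        return bytes(self.mem[addr:addr + n])

    def write(self, addr, data):
        self._delay(self.write_ns)
        self.mem[addr:addr + len(data)] = data

# Assumed Optane-like asymmetry: reads slower than writes to the buffer.
nvm = EmulatedNVM(4096, read_ns=300, write_ns=100)
nvm.write(0, b"persistent payload")
payload = nvm.read(0, 18)
```

In hardware, the same two knobs (read delay, write delay) are what allow one DRAM-backed module to stand in for different NVM technologies during full-stack benchmarking.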
This paper addresses what we call the investment question: under what plausible circumstances, if any, can variable renewable energy (VRE, and solar photovoltaic (PV) in particular) be a good investment? Although VRE has been growing rapidly worldwide, it is generally subsidized. Under what cost and market conditions can solar PV flourish without subsidy? We employ solar insolation and market price data from the U.S. and from Germany to gain insight into the investment question. We find that unsubsidized solar PV is or may soon be a justifiable investment, but that market arrangements may play a crucial role in determining success. We end by sketching a proposal that amounts to a reformed capacity market that would allow the participation of solar PV.
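The core of the investment question can be framed as a back-of-the-envelope comparison of a plant's levelized cost of electricity (LCOE) with the market price it captures. The sketch below uses standard annuity arithmetic; all figures (capex, discount rate, capacity factor, captured price) are illustrative assumptions, not data from the paper.

```python
capex_per_kw = 900.0        # assumed installed cost, EUR per kW
lifetime_years = 25
discount_rate = 0.05
capacity_factor = 0.13      # assumed for a German site
capture_price = 45.0        # assumed average captured market price, EUR/MWh

# Annuity factor converts the upfront cost into an equivalent annual cost.
annuity = discount_rate / (1 - (1 + discount_rate) ** -lifetime_years)
annual_cost = capex_per_kw * annuity                 # EUR per kW per year
annual_energy_mwh = 8760 * capacity_factor / 1000    # MWh per kW per year

lcoe = annual_cost / annual_energy_mwh               # EUR per MWh
investable = capture_price >= lcoe
print(f"LCOE ~ {lcoe:.0f} EUR/MWh vs. captured price {capture_price:.0f} EUR/MWh")
```

With these particular assumptions the captured price falls slightly short of the LCOE, which illustrates the paper's point that the economics are marginal and that market arrangements (which determine the captured price) can tip the outcome either way.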
Urban platforms are essential for smart and sustainable city planning and operation. Today they are mostly designed to handle and connect large urban data sets from very different domains. Modelling and optimisation functionalities are usually not part of cities' software infrastructures. However, they are considered crucial for transformation scenario development and optimised smart city operation. This work discusses software architecture concepts for such urban platforms and presents case study results on building-sector modelling, including urban data analysis and visualisation. Results from a case study in New York are presented to demonstrate the implementation status.
Digitalization of products and services commonly causes substantial changes in the business models, operations, organization structures, and IT infrastructures of enterprises. Motivated by experiences and observations from digitalization projects, this paper investigates the effects of digitalization on enterprise architectures (EA). EA models serve as representations of the business, information system, and technical aspects of an enterprise to support management and development. By comparing EA models before and after digitalization, the paper analyzes the kinds of changes visible in the EA model. The most important finding is that newly created digitized products and the associated product and enterprise architectures are no longer properly integrated into the overall architecture and even exist in parallel. Thus, the focus of this work is on exposing these parallel architectures and proposing measures for better integration.
On the influence of ground and substrate on the radiation characteristics of planar spiral antennas
(2022)
The unidirectional radiation of spiral antennas mounted on a substrate requires the presence of a ground plane. In this work, we successively illustrate the impact of the dielectric material and the ground plane on the key metrics of a planar equiangular spiral antenna (PESA). For this purpose, a PESA mounted on several substrates with different dielectric properties and thicknesses is modeled and simulated. We introduce the tertiary current flowing on the spiral arms when the antenna is backed by a ground plane.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system designs, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible.
The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under NoFTL-KV and the COSMOS hardware platform.
In summary, we believe that current “sleep monitoring” consumer devices on the market must undergo a more robust validation process before being made available and distributed to the general public. This is especially noteworthy as there have been first reports in the literature that inaccurate feedback from such consumer devices can worry subjects and may even compromise the well-being of the user.
The proliferation of smart technologies transforms the way individual consumers perform tasks. Considerable research suggests that smart technologies are often related to domestic energy consumption. However, it remains unclear how such technologies transform tasks and thereby impact our planet. We explore the role of technological smartness in personal day-to-day tasks that help create a more sustainable future. In the absence of theory, but facing extensive changes in everyday life enabled by smart technologies, we draw on phenomenon-based theorizing (PBT) guidelines. As an anchor, we refer to task endogeneity related to task-technology fit theory (TTF). As infusion, we employ theory on public goods. Our model proposes novel relations between the concepts of smart autonomy and smart transparency and sustainable task outcomes, mediated by task convenience and task significance. We discuss implications, limitations, and future research opportunities.
Software Process Improvement (SPI) programs have been implemented, inter alia, to improve the quality and speed of software development. SPI addresses many aspects ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state of the art in SPI from a general perspective, we observed Software Quality Management (SQM) to be of particular relevance in SPI programs. In this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of SQM (including testing). From the main study’s result set, 92 papers were selected for an in-depth systematic review to study the contributions and to develop an initial picture of how these topics are addressed in SPI. Our findings show a fairly pragmatic contribution set in which different solutions are proposed, discussed, and evaluated. Among others, our findings indicate a certain reluctance towards standard quality or (test) maturity models and a strong focus on custom review, testing, and documentation techniques, whereas a set of five selected improvement measures is almost equally addressed.
Three different polyols (soluble starch, sucrose, and glycerol) were tested for their potential in the chemical modification of melamine formaldehyde (MF) resins for paper impregnation. MF-impregnated papers are widely used as finishing materials for engineered wood. These polyols were selected because the presence of multiple hydroxy groups in the molecules was suspected to facilitate co-condensation with the main MF framework, which should lead to good resin performance. Moreover, they are readily produced from natural feedstock; they are available in large quantities and may serve as economically feasible, environmentally harmless co-monomers suitable to substitute a portion of the fossil-based starting material. In the presented work, a number of model resins were synthesized and tested for covalent incorporation of the natural polyol into the MF framework. Spectroscopic evidence of chemical incorporation of glycerol was found by applying 1H, 13C, 1H/13C HSQC, 1H/13C HMBC, and 1H DOSY methods. It was furthermore found that covalent incorporation of glycerol in the network took place when glycerol was added at different stages during synthesis. Further, all resins were used to prepare decorative laminates, and the performance of the novel resins as surface finishing was evaluated using standard technological tests. The technological performance of the various modified thermosetting resins was assessed by determining flow viscosity, molar mass distribution, and storage stability, and, in a second step, by laminating impregnated paper onto particle boards and testing the resulting surfaces according to standardized quality tests. In most cases, the average board surface properties were of acceptable quality. Our findings demonstrate the possibility of replacing several percent of the petrol-based product melamine by compounds obtained from renewable resources.
A software process is the game plan to organize project teams and run projects. Yet, it still is a challenge to select the appropriate development approach for the respective context. A multitude of development approaches compete for the users’ favor, but there is no silver bullet serving all possible setups. Moreover, recent research as well as experience from practice show companies utilizing different development approaches to assemble the best-fitting approach for the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The paper at hand provides insights into the HELENA study, with which we aim to investigate the use of “Hybrid dEveLopmENt Approaches in software systems development”. We present the survey design and initial findings from the survey’s test runs. Furthermore, we outline the next steps towards the full survey.
The digital transformation of our society changes the way we live, work, learn, communicate, and collaborate. This disruptive change drives current and future information processes and systems, which have been important business enablers of digitization for years. Our aim is to support flexibility and agile transformation for both business domains and the related information technology with more flexible enterprise information systems, through adaptation and evolution of digital architectures. The present research paper investigates the continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, like microservices and the Internet of Things, as part of a newly composed digital architecture. To integrate micro-granular architecture models into living architectural model versions, we extend enterprise architecture reference models with state-of-the-art elements for agile architectural engineering to support digital products, services, and processes.
A clinically useful system for individual continuous health data monitoring needs an architecture that takes into account all relevant medical and technical conditions. The requirements for a health app to support such a system are collected, and a vendor-independent architecture is designed that allows the collection of vital data from arbitrary wearables using a smartphone. A prototypical implementation for the main scenario shows the feasibility of the approach.
This paper discusses the optimal control problem of increasing the energy efficiency of induction machines in dynamic operation, including the field-weakening regime. In an offline procedure, optimal current and flux trajectories are determined such that the copper losses are minimized during transient operation. These trajectories are useful for a subsequent online implementation.
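The underlying loss-minimization idea can be shown in a deliberately simplified steady-state form (the paper treats the much harder dynamic, trajectory-based problem). Assuming a toy model where torque is proportional to the product of the flux-producing and torque-producing currents, T ~ k·i_d·i_q, the copper losses R·(i_d² + i_q²) are minimized for a given torque when the two currents are equal; all symbols and values here are illustrative assumptions.

```python
import math

def loss_minimal_currents(torque, k=1.0):
    """Current pair (i_d, i_q) minimizing i_d^2 + i_q^2 subject to
    the toy torque constraint i_d * i_q = torque / k."""
    # By the AM-GM inequality, i_d^2 + i_q^2 >= 2 * i_d * i_q,
    # with equality exactly when i_d == i_q.
    i = math.sqrt(torque / k)
    return i, i

def copper_loss(i_d, i_q, R=0.5):
    """Assumed copper-loss model: R times the squared current magnitudes."""
    return R * (i_d ** 2 + i_q ** 2)

i_d, i_q = loss_minimal_currents(torque=4.0)
best = copper_loss(i_d, i_q)

# Any other current split producing the same torque dissipates more,
# e.g. i_d = 4.0, i_q = 1.0 also gives i_d * i_q = 4.0:
worse = copper_loss(4.0, 1.0)
```

The offline optimization in the paper generalizes this trade-off to full current and flux trajectories over time, so that the balance also holds during transients and in field weakening, where additional voltage constraints apply.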
This paper presents a control strategy for optimal utilization of photovoltaic (PV) generated power in conjunction with an Energy Storage System (ESS). The ESS is specifically designed to be retrofitted into existing PV systems in an end-user application. It can be attached in parallel to the PV system and connects to existing DC/AC inverters. In particular, the study covers the impact such a modification has on the output power of existing PV panels. A distinct degradation of PV output power was found due to the different power characteristics of PV panel and ESS. To overcome such degradation a novel feedback system is proposed. The feedback system continuously modifies the power characteristic of the ESS to match the PV panel and thus achieves optimal power utilization. Impact on PV and power point tracking performance is analyzed. Simulation of the proposed system is performed in MATLAB/Simulink. The results are found to be satisfactory.
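Why operating-point tracking matters for the PV/ESS combination can be illustrated with a classic perturb-and-observe tracker on a toy power-voltage curve. This is not the paper's feedback system (which matches the ESS power characteristic to the panel); the curve, step size, and starting point below are all illustrative assumptions.

```python
def pv_power(v):
    """Toy PV P-V curve with a single maximum of 200 W at v = 30 V (assumed)."""
    return max(0.0, 200.0 - 0.5 * (v - 30.0) ** 2)

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    """Climb the P-V curve by perturbing the operating voltage and
    reversing direction whenever the observed power drops."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction   # power dropped: reverse the perturbation
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

A retrofitted ESS with a mismatched power characteristic effectively pulls the operating point away from this maximum, which is the output-power degradation the proposed feedback system counteracts.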
In the luxury fashion industry, consumers can be categorized into two groups: fashion leaders and fashion followers. Both groups of consumers purchase luxury fashion products to satisfy both their functional needs and their social needs (i.e., social influence). Thus, the demands of the two consumer groups are related. In this paper, we construct a model to examine the effects of pricing and online retail service in luxury fashion firms under social influence. To maximize profit, we identify the optimal prices and online retail service levels when the luxury fashion firms provide non-differentiated and differentiated online retail services, respectively. More insights are discussed.
In a digitally controlled slope shaping system, reliable detection of both the voltage and the current slope is required to enable closed-loop control for various power switches independently of system parameters. In most state-of-the-art works, this is realized by monitoring the absolute voltage and current values. Better accuracy at lower DC power loss is achieved by sensing techniques that detect the slopes passively, avoiding DC paths from the high-voltage network into the sensing network. Using a high-speed analog-to-digital converter, the whole waveform of the transient derivative can be stored digitally and prepared for predictive cycle-by-cycle regulation, without requiring high-precision digital differentiation algorithms. To gain an accurate representation of the voltage and current derivative waveforms, system parasitics are investigated and classified into three categories: (1) component parasitics, identified by S-parameter measurements and extraction of equivalent circuit models; (2) PCB design issues related to the sensing circuit; and (3) interconnections between adjacent boards.
The contribution of this paper is an optimized sensing network, based on this experimental study, supporting fast transition slopes up to 100 V/ns and 1 A/ns and beyond, making the sensing technique attractive for slope shaping of fast-switching devices like modern-generation IGBTs, CoolMOS™, and SiC MOSFETs. Measurements of the optimized dv/dt and di/dt setups are demonstrated for a hard-switched IGBT power stage.
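The basic passive sensing relation can be sketched numerically. A small coupling capacitor C feeding a sense resistor R yields v_sense = R·C·dv/dt while blocking any DC path from the high-voltage network into the sensing network; the component values below are illustrative assumptions, not the paper's design values.

```python
C = 0.5e-12      # assumed coupling capacitance, 0.5 pF
R = 50.0         # assumed sense/termination resistance, ohms

def sense_voltage(dv_dt):
    """Sensor output (V) for a given switching slope dv/dt in V/s,
    using the capacitive-coupling relation v_sense = R * C * dv/dt."""
    return R * C * dv_dt

# A 100 V/ns transition corresponds to 1e11 V/s:
v_sense = sense_voltage(100e9)
```

With these assumed values a 100 V/ns slope maps to a sense voltage of a few volts, i.e., a signal that a high-speed ADC can digitize directly without any differentiation in the digital domain.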