Most question-answering (QA) systems rely on training data to reach their optimal performance. However, acquiring training data for supervised systems is both time-consuming and resource-intensive. To address this, we propose TFCSG, an unsupervised similar-question retrieval approach that leverages pre-trained language models and multi-task learning. First, topic keywords in question sentences are extracted sequentially by a latent topic-filtering algorithm to construct an unsupervised training corpus. Then, the question retrieval model is built with multi-task learning. Three tasks are designed: a short-sentence contrastive learning task, a task judging the similarity between a question sentence and its corresponding topic sequence, and a task generating the corresponding topic sequence from a question sentence. The three tasks train the language model in parallel. Finally, similar questions are obtained by calculating the cosine similarity between sentence vectors. Comparison experiments on public question datasets show that TFCSG outperforms the unsupervised baseline methods. Moreover, no manual labeling is needed, which greatly saves human resources.
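As a minimal sketch of the final retrieval step, the following assumes sentence vectors have already been produced by a trained encoder (the TFCSG encoder itself is not reproduced here; the placeholder arrays stand in for its output). Similar questions are then ranked by cosine similarity:

```python
import numpy as np

def cosine_scores(query_vec: np.ndarray, corpus_vecs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and all corpus sentence vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return c @ q

# Placeholder embeddings; in practice these would come from the trained encoder.
corpus_vecs = np.random.rand(1000, 768)   # one vector per corpus question
query_vec = np.random.rand(768)           # vector of the incoming question

scores = cosine_scores(query_vec, corpus_vecs)
top_5 = np.argsort(scores)[::-1][:5]      # indices of the 5 most similar questions
```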
Analog/mixed-signal (AMS) design verification is one of the most challenging and time-consuming tasks of today's complex system-on-chip (SoC) designs. In contrast to digital system design, AMS designers have to deal with a continuous state space of conservative quantities, highly nonlinear relationships, non-functional influences, etc., enlarging the number of possibly critical scenarios to infinity. In this special session we demonstrate the verification of functional properties using simulative and formal methods. We combine different approaches, including automated abstraction and refinement of mixed-level models, state-space discretization, as well as affine arithmetic. To reach sufficient verification coverage with reasonable time and effort, we use enhanced simulation schemes to avoid conventional simulation drawbacks.
An integrated synchronous buck converter with high-resolution dead time control for input voltages up to 48 V and 10 MHz switching frequency is presented. The benefit of enhanced dead time control at light loads, enabling zero-voltage switching at both the high-side and low-side switch, is studied. This way, compact multi-MHz DC-DC converters can be implemented at high efficiency over a wide load current range. The concept also eliminates body diode forward conduction losses and minimizes reverse recovery losses. A dead time resolution of 125 ps is realized by an 8-bit differential delay chain. A further efficiency enhancement by soft switching at the high-side switch at light load is achieved with a voltage boost of the switching node by dead time control in forced continuous conduction mode. The monolithic converter is implemented in a 180 nm high-voltage BiCMOS technology. At V_IN = 48 V, V_OUT = 5 V, 50 mA load, 10 MHz switching frequency and 500 nH output inductance, the efficiency is measured to be increased by 14.4% compared to a conventional predictive dead time control. A peak efficiency of 80.9% is achieved at 12 V input.
Additive manufacturing (AM) is a promising manufacturing method for many industrial sectors. For industrial application, requirements such as high production volumes and coordinated implementation must be taken into account. These tasks of the internal management of production facilities are carried out by the Production Planning and Control (PPC) information system. A key factor in planning and scheduling is the exact calculation of manufacturing times. For this purpose, we investigate the use of Machine Learning (ML) for the prediction of manufacturing times of AM facilities.
Multi-versioning and MVCC are the foundations of many modern DBMSs. Under mixed workloads and large datasets, the creation of the transactional snapshot can become very expensive, as long-running analytical transactions may request old versions, residing on cold storage, for reasons of transactional consistency. Furthermore, analytical queries operate on cold data, stored on slow persistent storage. Due to the poor data locality, snapshot creation may cause massive data transfers and thus lower performance. Given the current trend towards computational storage and near-data processing, it has become viable to perform such operations in-storage to reduce data transfers and improve scalability. neoDBMS is a DBMS designed for near-data processing and computational storage. In this paper, we demonstrate how neoDBMS performs snapshot computation in-situ. We showcase different interactive scenarios, where neoDBMS outperforms PostgreSQL 12 by up to 5×.
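To make the cost drivers concrete, below is a minimal sketch (in the spirit of PostgreSQL-style MVCC, not neoDBMS's actual code) of the per-version visibility predicate a snapshot must evaluate; the point of near-data processing is to run this scan in-storage instead of shipping cold versions to the host:

```python
from dataclasses import dataclass

@dataclass
class Version:
    xmin: int          # ID of the transaction that created this version
    xmax: int | None   # ID of the deleting transaction, or None if still live

@dataclass
class Snapshot:
    xid: int           # ID of the reading transaction
    active: set[int]   # transactions still in flight when the snapshot was taken

def visible(v: Version, s: Snapshot) -> bool:
    """A version is visible if its creator committed before the snapshot
    was taken and no transaction that committed before it deleted it."""
    created_visible = v.xmin < s.xid and v.xmin not in s.active
    deleted = v.xmax is not None and v.xmax < s.xid and v.xmax not in s.active
    return created_visible and not deleted

print(visible(Version(xmin=90, xmax=None), Snapshot(xid=100, active={95})))  # True
```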
This work is a report on practical experiences with the issue of interoperability in German practice management systems (PMSs) from an ongoing clinical trial on teledermatology, the TeleDerm project. A proprietary and established web-platform for store-and-forward telemedicine is integrated with the IT in the GPs’ offices for automatic exchange of basic patient data. Most of the 19 different PMSs included in the study sample lack support of modern health data exchange standards, therefore the relatively old but widely available German health data exchange interface “Gerätedatentransfer” (GDT) is used. Due to the lack of enforcement and regulation of the GDT standard, several obstacles to interoperability are encountered. As a partial, but reusable working solution to cope with these issues, we present a custom middleware which is used in conjunction with GDT. We describe the design, technical implementation and observed hindrances with the existing infrastructure. A discussion on health care interfacing standards and the current state of interoperability in German PMS software is given.
The design process for a single-phase, smart, universal charger for light electric vehicles is presented. It consists of a step-up power factor correction circuit followed by a phase-shifted full-bridge converter with synchronous rectification on the secondary side. Due to the resistor-capacitor-diode snubber on the secondary side, the current peak at the start of power transfer leads to false triggering during light-load control with peak current mode control. The solution developed for light loads is to change from peak current control to voltage control. This is achieved by limiting the maximum phase shift instead of changing the reference value. For the power factor correction stage, measured and calculated efficiencies are compared as a function of the output power. The voltage and current waveforms are shown for the power factor correction circuit, and for the phase-shifted bridge, the measured current waveform is compared with simulation.
Large critical systems, such as those created in the space domain, are usually developed by a large number of organizations and, furthermore, they have to comply with standards. Yet, the different stakeholders often do not have a common understanding of the needed quality of requirements specifications. Achieving such a common understanding is a laborious process that is currently not sufficiently supported. Moreover, such a common understanding must be aligned with the standards. In this paper, we present an approach that can be used to align the different stakeholder perceptions regarding the quality of requirements specifications. Existing quality models for requirements specifications are analyzed for equivalences and transferred into a common representation, the so-called Aligned Quality Map (AQM). Furthermore, a process is defined that supports the alignment of different stakeholder perspectives with regard to the quality of requirements specifications using the AQM, which is validated in a case study in the context of European space projects. The AQM has been created and populated with an initial set of quality models. It is designed in such a way that it can be extended to include further quality models. The case study has shown that an alignment of different stakeholder perspectives and the quality model of the European Cooperation for Space Standardization using the AQM is feasible. The approach allows for aligning different stakeholder perspectives towards a common understanding of the quality of requirements specifications in the context of standards. Furthermore, the AQM supports the assessment of requirements specifications.
In the luxury fashion industry, consumers can be categorized into two groups: fashion leaders and fashion followers. Both groups purchase luxury fashion products to satisfy both their functional needs and their social needs (i.e., social influence). Thus, the demands of the two consumer groups are related. In this paper, we construct a model to examine the effects of pricing and online retail service in luxury fashion firms under social influence. To maximize profit, we identify the optimal prices and online retail service when the luxury fashion firms provide non-differentiated and differentiated online retail services, respectively. Further insights are discussed.
The imparting of knowledge and skills in STEM education, especially under the influence of the Covid-19 pandemic, is increasingly taking place online and through digital formats. The partially asynchronous instruction eliminates, on the one hand, the social relations in the learning process and, on the other hand, the direct experience with physical objects. Here, digital learning systems provide learning tools and controls to support the learning process on a general basis. Existing methods for simulating physical objects (digital twins) are also used only to a minimal extent. The following approach presents a learning system framework that enables individualized learning, including all dimensions (social, physical). Implementing a concept that uses a personalized assistance system to orchestrate the individual learning steps enables efficient and effective learning. The application of the learning system framework is exemplified by STEM education at Reutlingen University in the logistics learning factory Werk150.
While the concepts of object-oriented antipatterns and code smells are prevalent in scientific literature and have been popularized by tools like SonarQube, the research field for service-based antipatterns and bad smells is not as cohesive and organized. The description of these antipatterns is distributed across several publications with no holistic schema or taxonomy. Furthermore, there is currently little synergy between documented antipatterns for the architectural styles SOA and Microservices, even though several antipatterns may hold value for both. We therefore conducted a Systematic Literature Review (SLR) that identified 14 primary studies. 36 service-based antipatterns were extracted from these studies and documented with a holistic data model. We also categorized the antipatterns with a taxonomy and implemented relationships between them. Lastly, we developed a web application for convenient browsing and implemented a GitHub-based repository and workflow for the collaborative evolution of the collection. Researchers and practitioners can use the repository as a reference, for training and education, or for quality assurance.
Microservices are a topic driven mainly by practitioners and academia is only starting to investigate them. Hence, there is no clear picture of the usage of Microservices in practice. In this paper, we contribute a qualitative study with insights into industry adoption and implementation of Microservices. Contrary to existing quantitative studies, we conducted interviews to gain a more in-depth understanding of the current state of practice. During 17 interviews with software professionals from 10 companies, we analyzed 14 service-based systems. The interviews focused on applied technologies, Microservices characteristics, and the perceived influence on software quality. We found that companies generally rely on well established technologies for service implementation, communication, and deployment. Most systems, however, did not exhibit a high degree of technological diversity as commonly expected with Microservices. Decentralization and product character were different for systems built for external customers. Applied DevOps practices and automation were still on a mediocre level and only very few companies strictly followed the "you build it, you run it" principle. The impact of Microservices on software quality was mainly rated as positive. While maintainability received the most positive mentions, some major issues were associated with security. We present a description of each case and summarize the most important findings of companies across different domains and sizes. Researchers may build upon our findings and take them into account when designing industry-focused methods.
While Microservices promise several beneficial characteristics for sustainable long-term software evolution, little empirical research covers what concrete activities industry applies for the evolvability assurance of Microservices and how technical debt is handled in such systems. Since insights into the current state of practice are very important for researchers, we performed a qualitative interview study to explore applied evolvability assurance processes, the usage of tools, metrics, and patterns, as well as participants’ reflections on the topic. In 17 semi-structured interviews, we discussed 14 different Microservice-based systems with software professionals from 10 companies and how the sustainable evolution of these systems was ensured. Interview transcripts were analyzed with a detailed coding system and the constant comparison method.
We found that especially systems for external customers relied on central governance for the assurance. Participants saw guidelines like architectural principles as important to ensure a base consistency for evolvability. Interviewees also valued manual activities like code review, even though automation and tool support was described as very important. Source code quality was the primary target for the usage of tools and metrics. Despite most reported issues being related to Architectural Technical Debt (ATD), our participants did not apply any architectural or service-oriented tools and metrics. While participants generally saw their Microservices as evolvable, service cutting and finding an appropriate service granularity with low coupling and high cohesion were reported as challenging. Future Microservices research in the areas of evolution and technical debt should take these findings and industry sentiments into account.
IT environments that consist of a very large number of rather small structures like microservices, Internet of Things (IoT) components, or mobility systems are emerging to support flexible and agile products and services in the age of digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing, resilient run-time environments and distributed information systems. We are extending Enterprise Architecture (EA) methodologies and models that cover a high degree of heterogeneity and distribution to support the digital transformation and related information systems with micro-granular architectures. Our aim is to support flexibility and agile transformation for both IT and business capabilities within adaptable digital enterprise architectures. The present research paper investigates mechanisms for integrating Microservice Architectures (MSA) by extending original enterprise architecture reference models with elements for more flexible architectural metamodels and EA-mini-descriptions.
The respiratory rate is a vital sign indicating breathing illness. To obtain it, it is necessary to analyze the mechanical oscillations of the patient's body arising from chest movements. An inappropriate holder on which the sensor is mounted, or an inappropriate sensor position, are among the external factors that should be minimized during signal registration. This paper considers a non-invasive device placed under the bed mattress that evaluates the respiratory rate. The aim of the work is the development of an accelerometer sensor holder for this system. Normal and deep breathing signals were analyzed, corresponding to the relaxed state and to taking deep breaths. The evaluation criterion for the holder's model is its influence on the patient's respiratory signal amplitude for each state. As a result, we offer a non-invasive system for respiratory rate detection, including the mechanical component providing the most accurate values of the respiratory rate.
The high system flexibility necessary for the full automation of complex and unstructured tasks leads to increased complexity, thus higher costs. On the other hand, the effectiveness and performance of such systems decrease, explaining the unfulfilled potential of robotics in sectors such as intralogistics, where the benefits of a robotic solution rarely justify its costs. Moving away from the false idea that a task should be either fully automated or fully manual, this paper presents a method for the design of a lean human-robot interaction (HRI) with the objective of the "right level of automation", where functions are divided among human and automated agents so that the overall process gains in performance and/or costs. ... The 10 progressive steps of the method are presented and discussed with reference to their graphical tool: the House of Quality Interaction.
Compared to the automotive sector, where automation is the rule, in many other less standardized sectors automation is still the exception. This could soon hurt the productivity of industrialized countries, where unemployment is low and the population is aging. Phenomena like the recent downfall in productivity, due to lockdowns and social distancing for the prevention of health hazards during the COVID-19 pandemic, only add to the problem. For these reasons, the relevance, motivation and intention for more automation in less standardized sectors have probably never been higher. However, available statistics say that providers and users of technologies struggle to bring more automation into action in automation-unfriendly sectors. In this paper, we present a decision support method for investment in automation that tackles the problem: the STIC analysis. The method takes a holistic and quantitative approach, tying together technological, context-related and economic input parameters and synthesizing them in a final economic indicator. Thanks to the modelling of such parameters, it is possible to identify the technological and/or process adjustments that would have the highest impact on the efficiency of the automation, thereby delivering value for both technology users and technology providers.
This paper first identifies the trade-off among costs, flexibility and performance of autonomous robotic solutions for material handling processes, where adding value with automation is not as trivial as in production processes: hence the requirement for automated solutions to be simple, lean and efficient becomes even stricter. Then a method for modelling and comparing the differential performance and costs of manual and autonomous solutions is developed. As a result of the method, a smart man-machine collaborative interface is designed and its impact evaluated on a specific case study. Results are then generalized and support the conclusion that in unconstrained environments, where full standardization cannot be achieved, the risk of investing in autonomous solutions can only be mitigated by creating a fast and smart man-machine collaborative interface.
Planning of available resources considering ergonomics under deterministic highly variable demand
(2020)
In this paper, a method for hybrid short- to long-term planning of available resources for operations is presented, which is based on a known or deterministically forecasted but highly variable demand. The method considers quantitative measures such as the performance and the availability of resources, ergonomically relevant KPIs and ultimately process costs in order to serve as a pragmatic planning tool for operations managers in SMEs. Specifically, the method enables exploiting the ergonomic advantages of available flexible automation technology (e.g. AGVs or picking robots), while assuring that these do not represent a capacity bottleneck. After presenting the method along with the necessary assumptions, mainly concerning the availability of data for the calculations, we report a case study that quantifies the impact of throughput variability on the selection of different process alternatives, where different teams of resources are used.
In standardized sectors such as the automotive industry, the cost-benefit ratio of automation solutions is high, as they contribute to increasing capacity, decreasing costs and improving product quality. In less standardized application fields, the contribution of automation to improvements in capacity, cost and quality blurs. The automation of complex and unstructured tasks requires sophisticated, expensive and low-performing systems, whose impact on product quality is oftentimes not directly perceived by customers. As a result, the full automation of process chains in the general manufacturing or logistics sectors is often a suboptimal solution. Moving away from the false idea that a process should be either fully automated or fully manual, this paper presents a novel heuristic method for the design of lean human-robot interaction, the Quality Interaction Function Deployment, with the objective of the "right level of automation". Functions are divided among human and automated agents, and several automation scenarios are created and evaluated with respect to their compliance with the requirements of all process stakeholders. As a result, synergies among operators (manual tasks) and machines (automated tasks) are improved, thus reducing time losses and increasing productivity.
Methods for increasing the energy efficiency of induction motors by an appropriate control strategy have been a subject of research during the last years. Several methods for loss minimization have been developed for induction motors operated in steady state. In recent years, some solutions for the dynamic case have been given as well, using either an online or an offline optimization approach, implying a certain computational burden, which is undesired in practice. This paper shows that the appropriate application of steady-state techniques during transients due to a changing motor torque is a suboptimal strategy with acceptable performance for efficiency optimization, given an induction machine where saturation effects of the main inductance must be considered. The optimization problem is simplified such that a simple suboptimal solution is possible, and the quality of the suboptimal solution is investigated by simulations and measurements. The proposed solution is simple, easy to implement, and does not require online optimization. In addition, the influence of magnetizing inductance saturation is considered.
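For orientation only, the classical steady-state optimum under strong simplifications (rotor-flux orientation, copper losses only, equal loss weighting of both current components, no saturation) is sketched below; this is not the paper's formulation, and the saturation of the main inductance considered in the paper bends this square-root law:

```latex
P_{\mathrm{Cu}} \propto i_d^2 + i_q^2, \qquad T \propto i_d \, i_q
\;\Longrightarrow\; i_d^{\mathrm{opt}} = i_q^{\mathrm{opt}}
\;\Longrightarrow\; \psi_r^{\mathrm{opt}} \propto \sqrt{T}
```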
This paper introduces a novel placement methodology for a common-centroid (CC) pattern generator. It can be applied to various integrated circuit (IC) elements, such as transistors, capacitors, diodes, and resistors. The proposed method consists of a constructive algorithm, which generates an initial, close-to-optimum solution, and an iterative algorithm, which is used subsequently if the output of the constructive algorithm does not satisfy the desired criteria. The outcome of this work is an automatic CC placement algorithm for IC element arrays. Additionally, the paper presents a method for the evaluation of CC arrangements. It allows for evaluating the quality of an array and for a comparison of different placement methods.
Hotspot detection has received much attention in recent years due to a substantial mismatch between the lithography wavelength and the semiconductor technology feature size. This mismatch causes diffraction when transferring the layout from design onto a silicon wafer. As a result, open or short circuits (i.e., lithography hotspots) are more likely to be produced. Additionally, the increasing number of semiconductor devices on a wafer requires more time for the lithography hotspot detection analysis. In this work, we propose a fast and accurate solution based on a novel artificial neural network (ANN) architecture for precise lithography hotspot detection, using a convolutional neural network (CNN) adopting state-of-the-art techniques. The experimental results showed that the proposed model gained accuracy improvements over current state-of-the-art approaches. The final code has been made publicly available.
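As an illustration of the general approach only (the paper's actual network is not reproduced here), a minimal binary CNN classifier over rasterized layout clips could look as follows; all layer sizes and the 64x64 clip size are placeholders:

```python
import torch
import torch.nn as nn

class HotspotCNN(nn.Module):
    """Minimal hotspot/non-hotspot classifier for rasterized layout clips."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input clips

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)             # -> (N, 32, 16, 16)
        return self.classifier(x.flatten(1))

model = HotspotCNN()
clips = torch.rand(8, 1, 64, 64)         # batch of rasterized layout clips
logits = model(clips)                    # -> (8, 2) hotspot vs. non-hotspot scores
```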
The benefits of urban data cannot be realized without a political and strategic view of data use. A core concept within this view is data governance, which aligns strategy in data-relevant structures and entities with data processes, actors, architectures, and overall data management. Data governance is not a new concept and has long been addressed by scientists and practitioners from an enterprise perspective. In the urban context, however, data governance has only recently attracted increased attention, despite the unprecedented relevance of data in the advent of smart cities. Urban data governance can create semantic compatibility between heterogeneous technologies and data silos and connect stakeholders by standardizing data models, processes, and policies. This research provides a foundation for developing a reference model for urban data governance, identifies challenges in dealing with data in cities, and defines factors for the successful implementation of urban data governance. To obtain the best possible insights, the study carries out qualitative research following the design science research paradigm, conducting semi-structured expert interviews with 27 municipalities from Austria, Germany, Denmark, Finland, Sweden, and the Netherlands. The subsequent data analysis based on cognitive maps provides valuable insights into urban data governance. The interview transcripts were transferred and synthesized into comprehensive urban data governance maps to analyze entities and complex relationships with respect to the current state, challenges, and success factors of urban data governance. The findings show that each municipal department defines data governance separately, with no uniform approach. Given cultural factors, siloed data architectures have emerged in cities, leading to interoperability and integrability issues. A city-wide data governance entity in a cross-cutting function can be instrumental in breaking down silos in cities and creating a unified view of the city’s data landscape. The further identified concepts and their mutual interaction offer a powerful tool for developing a reference model for urban data governance and for the strategic orientation of cities on their way to data-driven organizations.
Human pose estimation (HPE) is integral to scene understanding in numerous safety-critical domains involving human-machine interaction, such as autonomous driving or semi-automated work environments. Avoiding costly mistakes is synonymous with anticipating failure in model predictions, which necessitates meta-judgments on the accuracy of the applied models. Here, we propose a straightforward human pose regression framework to examine the behavior of two established methods for simultaneous aleatoric and epistemic uncertainty estimation: maximum a-posteriori (MAP) estimation with Monte-Carlo variational inference and deep evidential regression (DER). First, we evaluate both approaches on the quality of their predicted variances and whether these truly capture the expected model error. The initial assessment indicates that both methods exhibit the overconfidence issue common in deep probabilistic models. This observation motivates our implementation of an additional recalibration step to extract reliable confidence intervals. We then take a closer look at deep evidential regression, which, to our knowledge, is applied comprehensively for the first time to the HPE problem. Experimental results indicate that DER behaves as expected in challenging and adverse conditions commonly occurring in HPE and that the predicted uncertainties match their purported aleatoric and epistemic sources. Notably, DER achieves smooth uncertainty estimates without the need for a costly sampling step, making it an attractive candidate for uncertainty estimation on resource-limited platforms.
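As a minimal sketch of such a recalibration step (the paper's exact procedure is not reproduced here), a single scale factor can be fitted on a held-out set so that the predicted standard deviations match the empirical error spread:

```python
import numpy as np

def fit_variance_scale(errors: np.ndarray, sigmas: np.ndarray) -> float:
    """Fit one scale factor s on held-out data so that s * sigma matches
    the observed error spread; s > 1 indicates an overconfident model."""
    z = errors / sigmas            # normalized residuals, ideally ~ N(0, 1)
    return float(np.std(z))

# Placeholder validation data: the model predicts sigma = 1.0 everywhere,
# but the true error spread is 2.0, i.e. the model is overconfident.
val_errors = np.random.randn(500) * 2.0
val_sigmas = np.ones(500)

s = fit_variance_scale(val_errors, val_sigmas)   # ~2.0
calibrated_sigmas = val_sigmas * s               # rescaled intervals at test time
```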
Reliable and accurate car driver head pose estimation is an important function for the next generation of advanced driver assistance systems that need to consider the driver state in their analysis. For optimal performance, head pose estimation needs to be non-invasive, calibration-free and accurate for varying driving and illumination conditions. In this pilot study we investigate a 3D head pose estimation system that automatically fits a statistical 3D face model to measurements of a driver’s face, acquired with a low-cost depth sensor on challenging real-world data. We evaluate the results of our sensor-independent, driver-adaptive approach to those of a state-of-the-art camera-based 2D face tracking system as well as a non-adaptive 3D model relative to own ground-truth data, and compare to other 3D benchmarks. We find large accuracy benefits of the adaptive 3D approach.
We present a multitask network that supports various deep neural network based pedestrian detection functions. Besides 2D and 3D human pose, it also supports body and head orientation estimation based on full body bounding box input. This eliminates the need for explicit face recognition. We show that the performance of 3D human pose estimation and orientation estimation is comparable to the state-of-the-art. Since very few data sets exist for 3D human pose and in particular body and head orientation estimation based on full body data, we further show the benefit of particular simulation data to train the network. The network architecture is relatively simple, yet powerful, and easily adaptable for further research and applications.
Model-guided Therapy and Surgical Workflow Systems are two interrelated research fields, which have been developed separately in the last years. To make full use of both technologies, it is necessary to integrate them and connect them to Hospital Information Systems. We propose a framework for the integration of Model-guided Therapy in Hospital Information Systems based on the Electronic Medical Record, and a task-based Workflow Management System, which is suitable for clinical end users. Two prototypes - one based on the Business Process Modeling Language, one based on the Scrum board - are presented. From the experience with these prototypes, we developed a novel personalized visualization system for Surgical Workflows and Model-guided Therapy. Key challenges for further development are automated situation detection and a common communication infrastructure.
The diversity of energy prosumer types makes it difficult to create appropriate incentive mechanisms that satisfy both prosumers and energy system operators alike. Meanwhile, European energy suppliers buy guarantees of origin (GoO), which allow them to sell green energy at premium prices while in reality delivering grey energy to their customers. Blockchain technology has proven itself to be a robust payment system in which users transact money without the involvement of a third party. Blockchain tokens can be used to represent a unit of energy and, just as GoOs, be submitted to the market. This paper focuses on simulating a marketplace using the Ethereum blockchain and smart contracts, where prosumers can sell tokenized GoOs to consumers willing to subsidize renewable energy producers. Such markets bypass energy providers by allowing consumers to obtain tokenized GoOs directly from the producers, which in turn benefit directly from the earnings. Two market strategies where tokens are sold as GoOs have been simulated. In the Fix Price Strategy, prosumers sell their tokens at the average GoO price of 2014. The Variable Price Strategy focuses on selling tokens at a price range defined by the difference between grey and green energy prices. The study finds that the Ethereum blockchain is robust enough to function as a platform for tokenized GoO trading. Simulation results have been compared, and the results indicate that prosumers earn significantly more money by following the Variable Price Strategy.
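To make the two strategies concrete, the sketch below contrasts them with placeholder prices; the actual 2014 GoO price, the grey/green price spread, and the simulated market dynamics on Ethereum are not reproduced here:

```python
# All prices are hypothetical placeholders (per token), not the paper's values.
FIX_PRICE = 0.25                        # stand-in for the 2014 average GoO price
GREY_PRICE, GREEN_PRICE = 0.28, 0.33    # stand-ins for grey/green energy prices

def revenue_fix(tokens_sold: int) -> float:
    """Fix Price Strategy: every token sells at the historical average price."""
    return tokens_sold * FIX_PRICE

def revenue_variable(tokens_sold: int, demand_factor: float) -> float:
    """Variable Price Strategy: the price floats within the grey/green spread,
    driven here by a demand factor in [0, 1]."""
    price = GREY_PRICE + demand_factor * (GREEN_PRICE - GREY_PRICE)
    return tokens_sold * price

print(revenue_fix(100))            # 25.0
print(revenue_variable(100, 0.8))  # 32.0
```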
Distributed Ledger Technologies for the energy sector: facilitating interoperability analysis
(2023)
The use of distributed data storage and management structures, such as Distributed Ledger Technologies (DLT), in the energy sector has gained great interest in recent times. This opens up new possibilities in, e.g., microgrid management, aggregation of distributed resources, peer-to-peer trading, integration of electromobility, or proof-of-origin strategies. However, in order to benefit from those new possibilities, new challenges have to be overcome. This work focuses on one of these challenges: the need to ensure interoperability when integrating DLT-enabled devices in energy use cases. Firstly, the use of DLTs in the energy sector is analyzed and the main use cases are presented. Then, a classification of DLT-energy use cases is proposed. Secondly, the need for a common reference architecture framework to analyze those use cases with a focus on interoperability is discussed, and the current activities in research and standardization in this field are presented. Finally, a new common reference architecture framework based on current activities in standardization is presented.
Ballistocardiography is a technique that measures the heart rate from the mechanical vibrations of the body caused by the heart's movement. In this work, a novel non-invasive device placed under the mattress of a bed estimates the heart rate using ballistocardiography. Different algorithms for heart rate estimation have been developed.
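One classical estimation route (peak detection on a band-passed signal) is sketched below; the sampling rate, filter band, and thresholds are assumptions for illustration, not the paper's algorithms:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100  # sampling rate in Hz (assumed)

def heart_rate_bpm(bcg: np.ndarray) -> float:
    """Band-pass the raw BCG signal around heartbeat frequencies, then
    estimate the rate from the intervals between detected peaks."""
    b, a = butter(4, [0.7, 10.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, bcg)
    peaks, _ = find_peaks(filtered, distance=int(FS * 0.4),
                          height=0.5 * filtered.max())
    intervals = np.diff(peaks) / FS          # seconds between beats
    return 60.0 / intervals.mean()

# Synthetic 1 Hz "heartbeat" (sharp spikes) plus noise, 30 s of data.
t = np.arange(0, 30, 1 / FS)
signal = np.sin(2 * np.pi * 1.0 * t) ** 15 + 0.1 * np.random.randn(t.size)
print(round(heart_rate_bpm(signal)))         # ~60 bpm
```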
A TLP system with a very low characteristic impedance of 1.5 Ω and a selectable pulse length from 0.5 to 6 μs is presented. It covers the entire operation region of many power semiconductors up to 700 V and 400 A. Its applicability is demonstrated by determining the output characteristics of two CoolMOS devices up to destruction.
Modern wide-bandgap power devices promise higher power conversion performance if the device can be operated reliably. As switching speed increases, the effects of parasitic ringing become more prominent, causing potentially damaging overvoltages during device turn-off. Estimating the expected additional voltage caused by such ringing enables more reliable designs. In this paper, we present an analytical expression to calculate the expected overvoltage caused by parasitic ringing based on parasitic element values and operating point parameters. Simulations and measurements confirm that the expression can be used to find the smallest rise time of the switches' drain-source voltage for minimum overvoltage. The given expression also allows the prediction of the trade-off in overvoltage amplitude in case faster rise times are required.
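The paper's exact expression is not reproduced in this abstract; as a first-order orientation, the undamped textbook estimate follows from the energy stored in the parasitic loop inductance ringing into the device output capacitance at the turn-off current:

```latex
\Delta V_{\mathrm{ring}} \approx I_{\mathrm{off}} \, \sqrt{\frac{L_{\mathrm{par}}}{C_{\mathrm{oss}}}},
\qquad
V_{\mathrm{DS,peak}} \approx V_{\mathrm{DC}} + \Delta V_{\mathrm{ring}}
```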
Advancing mental health diagnostics: AI-based method for depression detection in patient interviews
(2023)
In this paper, we present a novel artificial intelligence (AI) application for depression detection, using advanced transformer networks to analyse clinical interviews. By incorporating simulated data to enhance traditional datasets, we overcome limitations in data protection and privacy, consequently improving the model’s performance. Our methodology employs BERT-based models, GPT-3.5, and ChatGPT-4, demonstrating state-of-the-art results in detecting depression from linguistic patterns and contextual information that significantly outperform previous approaches. Utilising the DAIC-WOZ and Extended-DAIC datasets, our study showcases the potential of the proposed application in revolutionising mental health care through early depression detection and intervention. Empirical results from various experiments highlight the efficacy of our approach and its suitability for real-world implementation. Furthermore, we acknowledge the ethical, legal, and social implications of AI in mental health diagnostics. Ultimately, our study underscores the transformative potential of AI in mental health diagnostics, paving the way for innovative solutions that can facilitate early intervention and improve patient outcomes.
Free-floating e-scooter sharing is an upcoming trend in mobility, which has been spreading since 2015 in various German cities. Unlike the more scientifically explored car sharing, the usage patterns and behaviors of e-scooter sharing customers are yet to be analyzed. Such an analysis could reveal better ways to attract customers as well as adaptations of the business model in order to increase scooter utilization and therefore the profit of the e-scooter providers. As most of the customer's journey, from registration to scooter reservation and the ride itself, is digitally traceable, large datasets are available, allowing for an understanding of customers' needs and motivations. Based on these datasets of an e-scooter provider operating in a big German city, we propose a customer clustering that identifies four different customer segments, which enables multiple conclusions to be drawn for business development and for improving the problem-solution fit of the e-scooter sharing model.
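As a sketch of how such a segmentation could be computed (the paper's concrete features and algorithm are not given in the abstract), k-means over standardized per-customer usage features yields a fixed number of segments:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer features, one row per customer:
# rides per week, mean ride duration (min), weekend share, mean hour of day.
X = np.array([
    [5.0, 12.0, 0.2, 9.0],
    [1.0, 25.0, 0.8, 20.0],
    [8.0, 8.0, 0.1, 8.0],
    [0.5, 30.0, 0.9, 22.0],
    # ... one row per customer
])

X_scaled = StandardScaler().fit_transform(X)   # put features on a common scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)
print(kmeans.labels_)                          # segment assignment per customer
```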
Steady-state efficiency optimization techniques for induction motors are state of the art and various methods have already been developed. This paper provides new insights into efficiency-optimized operation in the dynamic regime. The paper proposes an anticipative flux modification in order to decrease losses during torque and speed transients. The resulting flux trajectories are analyzed based on a numerical study for different motors. Measurement results for one motor are given as well.
Energy efficient electric control of drives is more and more important for electric mobility and manufacturing industries. Online dynamic optimization of induction machines is challenging due to the computational complexity involved and the variable power losses during dynamic operation of induction machines. This paper proposes a simple technique for sub-optimal online loss optimization using rotor flux linkage templates for energy efficient dynamic operation of induction machines. Such a rotor flux linkage template is given by a rotor flux linkage trajectory which is optimal for a specific scenario. This template is calculated in an offline optimization process. For a specific scenario during real time operation the rotor flux linkage is calculated by appropriately scaling the given template.
This paper discusses the optimal control problem for increasing the energy efficiency of induction machines in dynamic operation including field weakening regime. In an offline procedure optimal current and flux trajectories are determined such that the copper losses are minimized during transient operations. These trajectories are useful for a subsequent online implementation.
Gallium nitride high electron mobility transistors (GaN-HEMTs) have low capacitances and can achieve low switching losses in applications where hard turn-on is required. Low switching losses imply a fast switching; consequently, fast voltage and current transients occur. However, these transients can be limited by package and layout parasitics even for highly optimized systems. Furthermore, a fast switching requires a fast charging of the input capacitance, hence a high gate current.
In this paper, the switching speed limitations of GaN-HEMTs due to the common source inductance and the gate driver supply voltage are discussed. The turn-on behavior of a GaN-HEMT is simulated and the impact of the parasitics and the gate driver supply voltage on the switching losses is described in detail. Furthermore, measurements are performed with an optimized layout for a drain-source voltage of 500 V and a drain-source current up to 60 A.
Modern power semiconductor devices have low capacitances and can therefore achieve very fast switching transients under hard-switching conditions. However, these transients are often limited by parasitic elements, especially by the source inductance and the parasitic capacitances of the power semiconductor. These limitations cannot be compensated by conventional gate drivers. To overcome this, a novel gate driver approach for power semiconductors was developed. It uses a transformer which accelerates the switching by transferring energy from the source path to the gate path.
Experimental results of the novel gate driver approach show a turn-on energy reduction of 78% (from 80 μJ down to 17 μJ) at a drain-source voltage of 500 V and a drain current of 60 A. Furthermore, the efficiency improvement is demonstrated for a hard-switching boost converter. For a switching frequency of 750 kHz with an input voltage of 230 V and an output voltage of 400 V, it was possible to extend the output power range by 35% (from 2.3 kW to 3.1 kW) due to the reduction of the turn-on losses, thereby lowering the junction temperature of the GaN-HEMT.
The experimental characterization of the thermal impedance Z_th of large power MOSFETs is commonly done by measuring the junction temperature T_j in the cooling phase after the device has been heated, preferably to a high junction temperature for increased accuracy. However, turning off a large heating current (as required by modern MOSFETs with low on-state resistances) takes some time because of parasitic inductances in the measurement system. Thus, most setups do not allow the characterization of the junction temperature in the time range below several tens of μs.
In this paper, an optimized measurement setup is presented which allows accurate T_j characterization already 3 μs after turn-off of the heating current. With this, it becomes possible to experimentally investigate the influence of thermal capacitances close to the active region of the device. Measurement results are presented for advanced power MOSFETs with very large heating currents up to 220 A. Three bonding variants are investigated and the observed differences are explained.
A gate driver approach is presented for the reduction of turn-on losses in hard-switching applications. A significant turn-on loss reduction of up to 55% has been observed for SiC MOSFETs. The gate driver approach uses a transformer which couples energy from the power path back into the gate path during switching events, providing increased gate driver current and thereby faster switching speed.
The gate driver approach was tested on a boost converter running at a switching frequency up to 300 kHz. With an input voltage of 300V and an output voltage of 600V, it was possible to reduce the converter losses by 8% at full load. Moreover, the output power range could be extended by 23% (from 2.75kW to 3.4 kW) due to the reduction of the turn-on losses.
The loss contribution of a 2.3 kW synchronous GaN-HEMT boost converter for an input voltage of 250 V and an output voltage of 500 V was analyzed. A simulation model which consists of two parts is introduced. First, a physics-based model is used to determine the switching losses. Then, a system simulation is applied to calculate the losses of the specific elements. This approach allows a fast and accurate system evaluation as required for further system optimization.
In this work, a hard turn-on and a zero-voltage turn-on switching converter are compared. Measurements were performed to verify the simulation model, showing good agreement. A peak efficiency of 99% was achieved at an output power of 1.4 kW. For output powers above 400 W, a system efficiency exceeding 98% was obtained.
This paper presents a compact 3 kW bidirectional GaN-HEMT DC/DC converter converting 360 V to 400-500 V. A very high efficiency has been reached by applying zero-voltage turn-on in conjunction with a negative gate-source voltage, even though normally-off HEMTs are used. Further improvements were achieved by adapting the switching frequency to the load current and output voltage, as explained by means of the loss contributions of the specific elements for a constant and an adaptive switching frequency. Measurements have shown a high converter efficiency exceeding 99% over a wide output power range of up to 3 kW.
Learning to translate between real world and simulated 3D sensors while transferring task models
(2019)
Learning-based vision tasks are usually specialized on the sensor technology for which data has been labeled. The knowledge of a learned model is simply useless when it comes to data which differs from the data on which the model was initially trained, or if the model should be applied to a totally different imaging or sensor source. New labeled data has to be acquired, on which a new model can be trained. Depending on the sensor, this can get even more complicated when the sensor data becomes more abstract and hard to interpret and label for humans. Enabling the reuse of models trained for a specific task across different sensors minimizes the data acquisition effort. Therefore, this work focuses on learning sensor models and translating between them, thus aiming for sensor interoperability. We show that even for the complex task of human pose estimation from 3D depth data recorded with different sensors, i.e. a simulated and a Kinect 2™ depth sensor, accuracy can greatly improve by translating between sensor models without modifying the original task model. This process especially benefits sensors and applications for which labels and models are difficult, if at all possible, to retrieve from raw sensor data.
This paper investigates the evaluation of dense 3D face reconstruction from a single 2D image in the wild. To this end, we organise a competition that provides a new benchmark dataset that contains 2000 2D facial images of 135 subjects as well as their 3D ground truth face scans. In contrast to previous competitions or challenges, the aim of this new benchmark dataset is to evaluate the accuracy of a 3D dense face reconstruction algorithm using real, accurate and high-resolution 3D ground truth face scans. In addition to the dataset, we provide a standard protocol as well as a Python script for the evaluation. Last, we report the results obtained by three state-of-the-art 3D face reconstruction systems on the new benchmark dataset. The competition is organised along with the 2018 13th IEEE Conference on Automatic Face & Gesture Recognition.
Microgrids often consist of energy generators, storages and consumers with controllers which are not prepared for their integration into communication networks for energy systems. This paper presents how standards from the field of energy automation can be applied in such controllers. The data for communication interfaces can be structured according to the IEC 61850 or the VHPready standard. It is investigated which requirements must be supported to implement such data models within the controllers. For the transmission of the data we propose the OPC UA protocol, which supports extensive security measures and which is today available for nearly all modern types of controllers and computers.
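As a minimal sketch of the proposed transmission path, reading one data point from a controller's OPC UA server with the python-opcua package could look as follows; the endpoint URL and node ID are hypothetical placeholders for identifiers derived from the IEC 61850 or VHPready data model:

```python
from opcua import Client

# Hypothetical controller endpoint and node ID (placeholders).
client = Client("opc.tcp://192.168.0.10:4840")
try:
    client.connect()
    node = client.get_node("ns=2;s=CHP1.ActivePower")
    print("Active power:", node.get_value())
finally:
    client.disconnect()
```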
To remain competitive in a fast changing environment, many companies started to migrate their legacy applications towards a Microservices architecture. Such extensive migration processes require careful planning and consideration of implications and challenges likewise. In this regard, hands-on experiences from industry practice are still rare. To fill this gap in scientific literature, we contribute a qualitative study on intentions, strategies, and challenges in the context of migrations to Microservices. We investigated the migration process of 14 systems across different domains and sizes by conducting 16 in-depth interviews with software professionals from 10 companies. Along with a summary of the most important findings, we present a separate discussion of each case. As primary migration drivers, maintainability and scalability were identified. Due to the high complexity of their legacy systems, most companies preferred a rewrite using current technologies over splitting up existing code bases. This was often caused by the absence of a suitable decomposition approach. As such, finding the right service cut was a major technical challenge, next to building the necessary expertise with new technologies. Organizational challenges were especially related to large, traditional companies that simultaneously established agile processes. Initiating a mindset change and ensuring smooth collaboration between teams were crucial for them. Future research on the evolution of software systems can in particular profit from the individual cases presented.
A wide-bandwidth galvanically isolated current sensing circuit with an integrated Rogowski coil in 180 nm CMOS is presented. Exploiting the high-frequency properties of an optimized on-chip Rogowski coil, currents can be measured up to a bandwidth of 75 MHz. The analog sensor front-end comprises a two-stage integrator, which allows a chopper frequency below the signal bandwidth, resulting in 2.2 mV_rms output noise. An additional integrated Hall sensor extends the measurement range towards DC.
A 20 V, 8 MHz resonant DCDC converter with predictive control for 1 ns resolution soft-switching
(2015)
Fast-switching power supplies allow reducing the size and cost of external passive components. However, the capacitive switching losses of the power stage increase and become the dominant part of the total losses. Therefore, resonant topologies are the known key to reducing the losses of the power stage. A power switch with an additional resonant circuit can be turned on under soft-switching conditions, ideally with zero-voltage switching (ZVS). As conventional resonant converters are only efficient at a constant load, this paper presents a predictive regulation loop to approach soft-switching conditions under varying load and component tolerances. A sample-and-hold based detection circuit is utilized to control the turn-on of the power switch by a digital regulation. The proposed design was fabricated in a 180 nm high-voltage BiCMOS technology. The efficiency of the converter was measured to be increased by up to 16% vs. worst-case timing and by 13% compared to a conventional hard-switching buck converter at 20 V input voltage and at approximately 8 MHz switching frequency.
The majority of people in sub-Saharan Africa (SSA) rely on so-called "paratransit" for their mobility needs. The term refers to a large informal transport sector that runs independently of government, of which 83% comprises minibus taxis (MBT). MBT technology is often old and contributes significantly to climate change through high carbon dioxide (CO2) emissions. Issues related to sustainability and climate change are becoming more important worldwide, yet hardly any attention is given to MBTs. Converting the MBTs from internal combustion engines (ICEs) to electric motors could be a possible solution. However, the existing power grid in SSA is largely based on fossil power plants and is unstable, as can be seen from frequent local power blackouts. To avoid further strain on the existing power grid, it would therefore make sense to charge the electric minibus taxis (eMBTs) through a grid consisting of renewable energies. A mobility map is created via simulations with collected data points of the MBTs. Using this mobility map, the energy demand of the eMBTs is calculated. Furthermore, a region-specific photovoltaic (PV) and wind simulation can be realised based on existing weather data, and a tool to size the supply system to charge the eMBTs is developed after all data has been collected. With the help of this work, it can be determined to what extent renewable energies such as PV and wind power can be used to support the transition from ICEs to electric engines in the MBT sector.
This document presents a new, complete standalone system for the recognition of sleep apnea using signals from pressure sensors placed under the mattress. The developed hardware part of the system is tuned to filter and amplify the signal. Its software part performs more accurate signal filtering and the identification of apnea events. The overall achieved accuracy of the recognition of apnea occurrence is 91%, with an average measured recognition delay of about 15 seconds, which confirms the suitability of the proposed method for future employment. The main aim of the presented approach is to support the healthcare system with a cost-efficient tool for the recognition of sleep apnea in the home environment.
The scoring of sleep stages is an essential part of sleep studies. The main objective of this research is to provide an algorithm for the automatic classification of sleep stages using signals that may be obtained in a non-obtrusive way. After reviewing the relevant research, the authors selected multinomial logistic regression as the basis for their approach. Several parameters were derived from movement and breathing signals, and their combinations were investigated to develop an accurate and stable algorithm. The implemented algorithm produced successful results: the accuracy of the recognition of Wake/NREM/REM stages is 73%, with a Cohen's kappa of 0.44 for the analyzed 19,324 sleep epochs of 30 seconds each. This approach has the advantage of using only movement and breathing signals, which can be recorded with less effort than heart or brainwave signals, and of requiring only four derived parameters for the calculations. Therefore, the new system is a significant improvement for non-obtrusive sleep stage identification compared to existing approaches.
This document presents an algorithm for the non-obtrusive recognition of Sleep/Wake states using signals derived from ECG, respiration, and body movement captured while lying in a bed. As the mathematical core of the system's data analytics, multinomial logistic regression techniques were chosen. Derived parameters of the three signals are used as the input for the proposed method. The overall achieved accuracy is 84% for Wake/Sleep stages, with a Cohen's kappa value of 0.46. The presented algorithm should support experts in analyzing sleep quality in more detail. The results confirm the potential of this method and disclose several ways for its improvement.
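As a minimal sketch of the mathematical core shared by the two approaches above, a multinomial logistic regression over a handful of derived parameters can be fitted as follows; the features and labels are random placeholders, not the studies' datasets:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for derived parameters, one row per 30 s epoch, e.g. movement
# count, breathing rate, breathing variability, signal amplitude.
X = rng.normal(size=(1000, 4))
y = rng.integers(0, 3, size=1000)     # placeholder labels: 0=Wake, 1=NREM, 2=REM

clf = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial by default
stages = clf.predict(X[:10])          # predicted stage per epoch
probs = clf.predict_proba(X[:10])     # class probabilities per epoch
```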
A generic, knowledge-based method for the automatic topology selection of analog circuits from a predefined analog reuse library is presented in this paper, using the OTA (Operational Transconductance Amplifier) as an example. Analog circuits of a given circuit class are classified in a topology tree, where each node represents a specific topology. Child nodes evolve from their parent nodes by an enhancement of the parent node's topological structure. Topology selection is performed by a depth-first search in the topology tree starting at the root node, thus checking topologies of increasing complexity. The decisions at each node are based on solving equations or - if this is not possible - on simulations. The search ends at the first (and thus the simplest) topology which can meet the specification after an adequate circuit sizing. The advantages of the generic, tree-based topology selection method presented in this paper are shown in comparison to a pool selection method and to heuristic approaches. The selection is based on a completed chip investigation.
We present a new methodology for automatic selection and sizing of analog circuits demonstrated on the OTA circuit class. The methodology consists of two steps: a generic topology selection method supported by a “part-sizing” process and subsequent final sizing. The circuit topologies provided by a reuse library are classified in a topology tree. The appropriate topology is selected by traversing the topology tree starting at the root node. The decision at each node is gained from the result of the part-sizing, which is in fact a node-specific set of simulations. The final sizing is a simulation-based optimization. We significantly reduce the overall simulation effort compared to a classical simulation-based optimization by combining the topology selection with the part-sizing process in the selection loop. The result is an interactive user friendly system, which eases the analog designer’s work significantly when compared to typical industrial practice in analog circuit design. The topology selection method and sizing process are implemented as a tool into a typical analog design environment. The design productivity improvement achievable by our method is shown by a comparison to other design automation approaches.
Virtual prototyping of integrated mixed-signal smart-sensor systems requires high-performance co-simulation of analog frontend circuitry with complex digital controller hardware and embedded real-time software. We use SystemC/TLM 2.0 in combination with a cycle-count accurate temporal decoupling approach to simulate digital components and firmware code execution at high speed while preserving clock cycle accuracy and, thus, real-time behavior at time quantum boundaries. Optimal time quanta ensuring real-time capability can be calculated and set automatically during simulation if the simulation engine has access to exact timing information about upcoming communication events. These methods fail in the case of non-deterministic, asynchronous events, resulting in a possibly invalid simulation result. In this paper, we propose an extension of this method to the case of asynchronous events generated by blackbox sources for which a priori event timing information is not available, such as coupled analog simulators or hardware in the loop. Additional event processing latency and/or rollback effort caused by temporal decoupling is minimized by calculating optimal time quanta dynamically in a SystemC model using a linear prediction scheme. For an example smart-sensor system model, we show that quasi-periodic events that trigger activities in temporally decoupled processes are handled accurately after the predictor has settled.
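A minimal sketch of the prediction idea, under our own simplifying assumptions (a least-squares linear fit over the recent inter-arrival history, which is not necessarily the paper's exact predictor):

```python
import numpy as np

def predict_next_event(event_times, order=4):
    """Extrapolate the next event timestamp from the last `order`
    inter-arrival intervals via a least-squares linear fit."""
    t = np.asarray(event_times[-(order + 1):], dtype=float)
    gaps = np.diff(t)
    if len(gaps) == 0:
        return None
    k = np.arange(len(gaps))
    a, b = np.polyfit(k, gaps, 1) if len(gaps) > 1 else (0.0, gaps[0])
    return t[-1] + max(a * len(gaps) + b, 0.0)

def next_quantum(now, event_times, default_quantum):
    """Shrink the decoupling time quantum so that a temporally decoupled
    process does not run past the predicted asynchronous event, which
    minimizes event processing latency and rollback effort."""
    nxt = predict_next_event(event_times)
    if nxt is None or nxt <= now:
        return default_quantum
    return min(default_quantum, nxt - now)

# Quasi-periodic events let the predictor settle quickly:
events = [0.0, 1.01, 2.0, 3.02, 4.01]
print(next_quantum(4.5, events, default_quantum=1.0))
```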
Reduction of the power consumption of digital systems is a major concern, especially in modern smart sensor systems. These systems are often only activated on request, and their power consumption is therefore dominated by the idle mode. Power reduction mechanisms such as clock or power gating reduce the activity or leakage in the purely digital circuits. We propose a novel adaptive clocking scheme that optimizes the energy demand using fine-grained oscillator control at cycle level. To evaluate our new approach, we analytically compare the power consumption of the regarded system with that of available methods. The benefit of our new adaptive clocking is shown in an integrated smart sensor for capacitive measurements working in a passive wireless sensor node. Using our methods, we show that the energy demand of the example system is reduced even in the case of continuous measurements that demand high activity in the digital circuitry.
Software development teams have to face stress caused by deadlines, staff turnover, or individual differences in commitment, expertise, and time zones. While students are typically taught the theory of software project management, their exposure to such stress factors is usually limited. However, preparing students for the stress they will have to endure once they work in project teams is important for their own sake, as well as for the sake of team performance in the face of stress. Team performance has been linked to the diversity of software development teams, but little is known about how diversity influences the stress experienced in teams. In order to shed light on this aspect, we provided students with the opportunity to experience the basics of project management in self-organizing teams, and studied the impact of six diversity dimensions on team performance, coping with stressors, and positively perceived learning effects. Three controlled experiments at two universities with a total of 65 participants suggest that social background impacts the perceived stressors the most, while age and work experience have the highest impact on perceived learning effects. Most diversity dimensions have a medium correlation with the quality of work, yet no significant relation to team performance. This lays the foundation to improve students' training for software engineering teamwork based on their diversity-related needs and to create diversity-sensitive awareness among educators, employers, and researchers.
Optimization-based design automation for analog ICs still remains behind the demands. A promising alternative is given by procedural approaches such as parameterized generators, also known as PCells. We are working on a complete analog design flow based on parameterized generators for entire circuits and corresponding layout modules. Because the conventional programming of such enhanced generators is far too complicated and costly, new methods are needed to ease their development. This paper presents gPCDS (graphical PCDS), a novel tool for the designer-oriented development of schematic module generators, integrated into a common schematic entry environment. The tool is based on PCDS (Parameterized Circuit Description Scheme), a meta-language for the creation of parameterized analog circuits. Schematic module generators are a very desirable complement to layout module generators in order to achieve a seamless schematic-driven layout design flow at module level. By facilitating a way of generator development that matches a design expert's mentality, gPCDS contributes to closing this gap in the analog design flow.
The automotive industry faces three major challenges: the shortage of fossil fuels, the politics of global warming, and rising competition from new markets. In order to remain competitive, companies have to develop more efficient and alternative-fuel vehicles that meet the individual requirements of the customers. Functional integration combined with new technologies and materials is the key to stable success in this industry. The sustained upward trend towards system innovations within the last ten years confirms this. The development of complex products like automobiles demands skills from various disciplines, e.g., engineering and chemistry. Furthermore, these skills are spread all over the supply chain. Hence, the only way to stay successful in the automotive industry is cooperation and collaborative innovation. Interdisciplinary and interorganizational development places high demands on cooperation models, especially in the automotive industry. In this case study, cooperation models are analyzed and evaluated according to their applicability to interdisciplinary, interorganizational development projects in the automotive industry. Subsequently, the research campus ARENA2036 is analyzed. ARENA2036 is an interdisciplinary, interorganizational development project housing automobile manufacturers, suppliers, research establishments, and university institutes. Finally, based on interviews with the partners and the preceding analysis of cooperation models, suggestions for implementation are given to ARENA2036.
In a digitally controlled slope shaping system, reliable detection of both the voltage and the current slope is required to enable closed-loop control for various power switches independent of system parameters. In most state-of-the-art works, this is realized by monitoring the absolute voltage and current values. Better accuracy at lower DC power loss is achieved by sensing techniques for reliable passive detection, realized by avoiding DC paths from the high-voltage network into the sensing network. Using a high-speed analog-to-digital converter, the whole waveform of the transient derivative can be stored digitally and prepared for a predictive cycle-by-cycle regulation, without requiring high-precision digital differentiation algorithms. To obtain an accurate representation of the voltage and current derivative waveforms, system parasitics are investigated and classified into three categories: (1) component parasitics, which are identified by S-parameter measurements and extraction of equivalent circuit models, (2) PCB design issues related to the sensing circuit, and (3) interconnections between adjacent boards.
The contribution of this paper is an optimized sensing network, based on an experimental study, supporting fast transition slopes up to 100 V/ns and 1 A/ns and beyond, making the sensing technique attractive for slope shaping of fast switching devices like modern-generation IGBTs, CoolMOS™ and SiC MOSFETs. Measurements of the optimized dv/dt and di/dt setups are demonstrated for a hard-switched IGBT power stage.
A concept for a slope shaping gate driver IC is proposed, used to establish control over the slew rates of current and voltage during the turn-on and turn-off switching transients. It combines the high speed and linearity of a fully integrated closed-loop analog gate driver, which is able to perform real-time regulation, with the advantages of digital control, such as flexibility and parameter independence, operating in a predictive cycle-by-cycle regulation. In this work, the analog gate driver integrated circuit is partitioned into functional blocks and modeled in the small-signal domain, including the non-linearity of parameters. An analytical stability analysis has been performed in order to ensure full functionality of the system when controlling a modern-generation IGBT and a superjunction MOSFET. Major parameters of influence, such as the gate resistor and the summing node capacitance, are investigated to achieve stable control. The large-signal behavior, investigated by simulations of a transistor-level design, verifies the correct operation of the circuit. Hence, the gate driver can be designed for robust operation.
RoPose-Real: real world dataset acquisition for data-driven industrial robot arm pose estimation
(2019)
It is necessary to employ smart sensory systems in dynamic and mobile workspaces where industrial robots are mounted on mobile platforms. Such systems should be aware of flexible and non-stationary workspaces and able to react autonomously to changing situations. Building upon our previously presented RoPose system, which employs a convolutional neural network architecture trained on purely synthetic data to estimate the kinematic chain of an industrial robot arm, we now present RoPose-Real. RoPose-Real extends the prior system with a comfortable and targetless extrinsic calibration tool that allows for the production of automatically annotated datasets for real robot systems. Furthermore, we use the novel datasets to train the estimation network with real-world data. The extracted pose information is used to automatically estimate the pose of the observing sensor relative to the robot system. Finally, we evaluate the performance of the presented subsystems in a real-world robotic scenario.
As production workspaces become more mobile and dynamic, it becomes increasingly important to reliably monitor the overall state of the environment. Therein, manipulators and other robotic systems will likely have to act autonomously together with humans and other systems within a joint workspace. Such interactions require that all components in non-stationary environments are able to perceive their state relative to each other. As vision sensors provide a rich source of information to accomplish this, we present RoPose, a convolutional neural network (CNN) based approach to estimate the two-dimensional joint configuration of a simulated industrial manipulator from a camera image. This pose information can further be used by a novel targetless calibration setup to estimate the pose of the camera relative to the manipulator's space. We present a pipeline to automatically generate synthetic training data and conclude with a discussion of the potential usage of the same pipeline to acquire real image datasets of physically existent robots.
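One common way to turn such 2D joint estimates into a camera pose is a perspective-n-point (PnP) solve against the 3D joint positions known from the robot's forward kinematics. The sketch below uses OpenCV with made-up correspondences and intrinsics; it illustrates the principle, not necessarily the exact calibration procedure of RoPose:

```python
import numpy as np
import cv2

# 3D joint positions in the robot base frame (from forward kinematics) ...
object_points = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.3],
                          [0.2, 0.0, 0.5], [0.4, 0.0, 0.5],
                          [0.5, 0.0, 0.4], [0.5, 0.1, 0.4]])
# ... and the matching 2D joint detections from the network (pixels).
image_points = np.array([[320.0, 400.0], [318.0, 300.0], [380.0, 240.0],
                         [450.0, 238.0], [480.0, 260.0], [485.0, 255.0]])

K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
# Invert the robot-to-camera transform to get the camera pose in the
# robot frame: orientation R.T and position -R.T @ tvec.
print(ok, (-R.T @ tvec).ravel())
```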
Early reduction of risks in a startup or an innovation project is highly important. Appropriate means for risk reduction, such as testing business models with different kinds of experiments, exist. However, deciding what to test and how to select the right test is challenging for many startups and innovation projects. This article presents the so-called Business Experiments Navigator (BEN), a toolkit to assist startup and innovation processes. It complements other tools such as the Business Model Canvas or the Lean Startup process. The main contribution of BEN is to bridge the gap between the riskiest assumptions of a business model and the multitude of available testing techniques by providing assumption templates. The Business Experiments Navigator has been validated in several workshops. Results show that it creates awareness among the workshop participants that a business model is based on assumptions which impose risks and need to be validated. Further, users of BEN were able to identify relevant assumptions and map different kinds of assumptions to appropriate testing techniques. The process applied in the workshops, as well as the assumption templates, helped the participants understand the main concepts and transfer their learnings to their own business ideas.
Many GaN power transistors contain a PN junction between the gate and the channel region close to the source. In order to maintain the on-state, current must continuously be supplied to the junction. Therefore, the commonly recommended approach uses a gate bias voltage of 12V to compensate for the Miller current through a boost circuit. For the same purpose, a novel gate driving method based on an inductive feed-forward has been presented. With this, stable turn-on can be achieved even with a bias voltage of only 5V. The effectiveness of this concept is demonstrated by double-pulse measurements, switching currents up to 27A at a voltage of 400V. For both approaches, a compact design with low source inductance is characterized. In addition to the significant reduction of the gate bias voltage and peak gate current, the new approach reduces the switching losses for load currents >23 A.
Improved inductive feed-forward for fast turn-on of power semiconductors during hard switching
(2019)
A transformer is used to increase the gate voltage during turn-on, thus reducing the necessary bias voltage of the gate driver. By counteracting the voltage dependency of the gate capacitance of high-voltage power devices, faster transitions are possible. The additional transformer only slightly increases the over-voltage during turn-off.
Novel design for a coreless printed circuit board transformer realizing high bandwidth and coupling
(2019)
Rogowski coils offer galvanic isolation and can measure alternating currents with a high bandwidth. Coreless printed circuit board (PCB) transformers have been used as an alternative to limit the additional stray inductance if a Rogowski coil cannot be attached to the circuit. A new PCB transformer layout is proposed to reduce cost, decrease additional stray inductance, increase the bandwidth of current measurements, and simplify the integration into existing designs.
Due to the lack of sophisticated component libraries for microelectromechanical systems (MEMS), highly optimized MEMS sensors are currently designed using a polygon-driven design flow. The advantage of this design flow is its accurate mechanical simulation, but it lacks a method for an efficient and accurate electrostatic analysis of parasitic effects of MEMS. In order to close this gap in the polygon-driven design flow, we present a customized electrostatic analysis flow for such MEMS devices. Our flow features a 2.5D fabrication-process simulation, which simulates the three typical MEMS fabrication steps (namely deposition of materials including topography, deep reactive-ion etching, and the release etch by vapor-phase etching) very fast and at an acceptable abstraction level. Our new 2.5D fabrication-process simulation can be combined with commercial field solvers as they are commonly used in the design of integrated circuits. The new process simulation enables a faster but nevertheless satisfactory analysis of the electrostatic parasitic effects and hence simplifies the electrical optimization of MEMS.
A new method for the analysis of movement dependent parasitics in full custom designed MEMS sensors
(2017)
Due to the lack of sophisticated microelectromechanical systems (MEMS) component libraries, highly optimized MEMS sensors are currently designed using a polygon-driven design flow. The strength of this design flow is the accurate mechanical simulation of the polygons by finite element (FE) modal analysis. The result of the FE modal analysis is included in the system model together with the data of the (mechanical) static electrostatic analysis. However, the system model lacks the dynamic parasitic electrostatic effects arising from the electric coupling between the wiring and the moving structures. In order to include these effects in the system model, we present a method which enables quasi-dynamic parasitic extraction with respect to in-plane movements of the sensor structures. The method is embedded in the polygon-driven MEMS design flow using standard EDA tools. In order to take the influences of the fabrication process into account, such as etching process variations, the method combines the FE modal analysis and the fabrication-process simulation data. This enables the analysis of dynamically changing electrostatic parasitic effects with respect to movements of the mechanical structures. Additionally, the result can be included in the system model, allowing the simulation of the positive feedback of the electrostatic parasitic effects on the mechanical structures.
In contrast to IC design, MEMS design still lacks sophisticated component libraries. Therefore, the physical design of MEMS sensors is mostly done by simply drawing polygons. Hence, the sensor structure is only given as plain graphic data, which hinders the identification and investigation of topology elements such as springs, anchors, masses, and electrodes. In order to solve this problem, we present a rule-based recognition algorithm which identifies the architecture and the topology elements of a MEMS sensor. In addition to graphic data, the algorithm makes use of only a few marking layers, as well as net and technology information. Our approach enables RC extraction with commercial field solvers and a subsequent synthesis of the sensor circuit. The mapping of the extracted RC values to the topology elements of the sensor enables a detailed analysis and optimization of actual MEMS sensors.
Nowadays, the demand for a MEMS development/design kit (MDK) is more in focus than ever before. In order to achieve high quality and cost effectiveness in the development process for automotive and consumer applications, an advanced design flow for the MEMS (microelectromechanical systems) element is urgently required. In this paper, such a development methodology and flow for the parasitic extraction of active semiconductor devices is presented. The methodology considers geometrical extraction, links the electrically active pn junctions to SPICE standard library models, and subsequently extracts the netlist. An example of a typical pressure sensor is presented and discussed. Finally, the results of the parasitic extraction are compared with fabricated devices in terms of accuracy and capability.
In the present paper we demonstrate a novel technique for applying the recently proposed approach of In-Place Appends (IPA) – overwrites on Flash without a prior erase operation. IPA can be applied selectively: only to DB objects that have frequent and relatively small updates. To do so, we couple IPA to the concept of NoFTL regions, allowing the DBA to place update-intensive DB objects into special IPA-enabled regions. The decision about the region configuration can be (semi-)automated by an advisor analyzing DB log files in the background.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware. During the demonstration we allow the users to interact with the system and gain hands-on experience under different demonstration scenarios.
The increasing share of renewable energy with volatile production results in higher variability of prices for electrical energy. Optimized operating schedules, e.g., for industrial units, can yield a considerable reduction of energy costs by shifting processes with high power consumption to times with low energy prices. We present a distributed control architecture for virtual power plants (VPPs) where VPP participants benefit from flexible adaptation of schedules to price forecasts while maintaining control of their operating schedule. An aggregator trades at the energy market on behalf of the participants and benefits from more detailed and reliable load profiles within the VPP.
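The cost-saving mechanism behind such schedules can be illustrated with a toy scheduler that shifts a single process with fixed duration and power draw to the cheapest feasible start hour of a price forecast. The function name and numbers are hypothetical; the actual architecture coordinates many participants through the aggregator.

```python
def cheapest_start(prices_eur_per_kwh, duration_h, power_kw):
    """Return (start_hour, energy_cost) minimizing the energy cost of one
    shiftable process under an hourly price forecast."""
    best = None
    for start in range(len(prices_eur_per_kwh) - duration_h + 1):
        cost = power_kw * sum(prices_eur_per_kwh[start:start + duration_h])
        if best is None or cost < best[1]:
            best = (start, cost)
    return best

forecast = [0.31, 0.28, 0.22, 0.18, 0.17, 0.21, 0.29, 0.35]  # EUR/kWh
print(cheapest_start(forecast, duration_h=3, power_kw=50))
```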
Incubators in multinational corporations: development of a corporate incubator operator model
(2017)
This paper analyzes the components of a corporate incubator operator model in multinational companies. Three relevant phases were identified: pre-incubation, incubation, and exit. Each phase contains different criteria that represent critical success factors for a corporate incubator, which are based on theoretical findings and lessons learned from practice. During the pre-incubation phase, companies should define their need for a corporate incubator, the origin of ideas, and the selection criteria for incubator tenants. The actual phase of incubation refers to the incubator program, which should be flexible with respect to each tenant. Furthermore, resource allocation plays an important role during the incubator program. Exit options after a successful incubation differ according to internal ideas and external start-ups, as well as the objective of the incubator. The research is based on a comprehensive screening of the existing incubator literature and a qualitative content analysis of statements from eight experts of international corporate incubators.
An assessment model to foster the adoption of agile software product lines in the automotive domain
(2018)
A software product line is commonly used for the software development in large automotive organizations. A strategic reuse of software is needed to handle the increasing complexity of the development and to maintain the quality of numerous software variants. However, the development process needs to be continuously adapted at a fast pace to satisfy changing market demands. Introducing agile software development methods promises the flexibility to react to customers' change requests and market demands and to deliver high-quality software. Despite this need, it is still challenging to combine agile software development and product lines. The maturity of an agile adoption is often hard to determine. Assessing the current situation regarding the combination is a first step towards a successful inclusion of agile methods into automotive software product lines. Based on an interview study with 16 participants and a literature review, we built the so-called ASPLA Model, which allows self-assessments within the team to determine the current state of agile software development in combination with software product lines. The model comprises seven areas of improvement and recommends ways to improve the current status.
Combining agile development and software product lines in automotive: challenges and recommendations
(2018)
Software product lines (SPLs) are used throughout the automotive industry. SPLs help to manage the large number of variants and to improve quality by reuse. In order to develop high-quality software faster, agile software development (ASD) practices are introduced. From both the research and the management point of view, it is still not clear how these two approaches can be combined. We derive recommendations to combine ASD and SPLs based on challenges identified for an automotive-specific model. This study combines the outcome of a literature review and a qualitative interview study with 16 practitioners from the automotive domain. We evaluate the results and analyze the relationship between ASD and SPLs in the automotive domain. Furthermore, we derive recommendations to combine ASD and SPLs based on the challenges identified in the automotive domain. This study identifies 86 individual challenges. Important challenges address supplier collaboration and faster software release cycles without loss of quality. The identified challenges and the derived recommendations show that the combination of ASD and SPLs in the automotive industry is promising but not trivial. There is a need for an automotive-specific approach that combines ASD and SPLs.
In this paper, we propose a novel fitting method that uses local image features to fit a 3D morphable face model to 2D images. To overcome the obstacle of optimising a cost function that contains a non-differentiable feature extraction operator, we use a learning-based cascaded regression method that learns the gradient direction from data. The method allows solving for shape and pose parameters simultaneously. Our method is thoroughly evaluated on data generated with the morphable model, and first results on real data are presented. Compared to traditional fitting methods, which use simple raw features like pixel colour or edge maps, local features have been shown to be much more robust against variations in imaging conditions. Our approach is unique in that we are the first to use local features to fit a 3D morphable model. Because of its speed, our method is applicable to real-time applications. Our cascaded regression framework is available as an open-source library at github.com/patrikhuber/superviseddescent.
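The cascade structure can be sketched as follows: each stage maps the features extracted at the current model estimate to a parameter update, with the update direction learned from data rather than derived analytically. All names, shapes, and the stand-in feature extractor are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_features(image, params):
    """Stand-in for the non-differentiable local feature extractor
    (e.g., descriptors sampled around the projected model points)."""
    return np.tanh(rng.normal(size=64) + params.sum())

class CascadeStage:
    def __init__(self, n_features, n_params):
        # During training, R and b would be fit by regression from pairs
        # of extracted features and ground-truth parameter updates.
        self.R = rng.normal(scale=0.01, size=(n_params, n_features))
        self.b = np.zeros(n_params)

    def update(self, features):
        return self.R @ features + self.b

def fit_model(image, initial_params, stages):
    params = initial_params.copy()
    for stage in stages:  # shape and pose parameters updated together
        params += stage.update(extract_features(image, params))
    return params

stages = [CascadeStage(64, 10) for _ in range(5)]
print(fit_model(None, np.zeros(10), stages)[:3])
```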
We present a fully automatic approach to real-time 3D face reconstruction from monocular in-the-wild videos. Using cascaded-regressor-based face tracking and 3D morphable face model shape fitting, we obtain a semi-dense 3D face shape. We further use the texture information from multiple frames to build a holistic 3D face representation from the video footage. Our system is able to capture facial expressions and does not require any person-specific training. We demonstrate the robustness of our approach on the challenging 300 Videos in the Wild (300-VW) dataset. Our real-time fitting framework is available as an open-source library at http://4dface.org.
Analysis and planning of Enterprise Architectures (EA) is a complex task for stakeholders. The change of one architecture element has an impact on multiple other elements because of the manifold relationships and interactions between them. The interactive cockpit approach presented in this paper supports stakeholders in planning and analyzing EAs and in tackling the intrinsic complexity. This approach supplies a cockpit with multiple viewpoints, combined with interaction functionality, to put relevant information side by side without losing the context. In this paper, we develop such a cockpit, starting with relevant use cases, describing a potential design based on well-established foundations in EA modeling, and outlining an exemplary usage scenario.
Equations for fast and exact calculation of a simple model for heat transfer from a bond wire to a cylindrical finite mold package, including non-ideal heat transfer from wire to mold, are presented. These allow for a characterization of an arbitrary mold/bond wire combination. The real mold geometry is approximated using the mold model cylinder radius and the thermal contact conductance of the mold/bond wire interface. For changes in bond and mold material, wire length, diameter, and current transient profiles, the resulting temperature transients can then be predicted. As the method is based on numerical integration of differential equations, arbitrary, industrially relevant pulse shapes can be calculated. Very high thermal contact conductance values (above 40,000 W/(m²K)) have been detected in real package/bond systems. The method was validated by successful comparison with finite element method simulations, alternative calculation methods, and measurements.
When a bonding wire becomes too hot, it fuses and fails. The ohmic heat that is generated in the wire can be partially dissipated to a mold package. For this cooling effect the thermal contact between wire and package is an important parameter. Because this parameter can degrade over lifetime, the fusing of a bonding wire can also occur as a long-term effect. Another important factor is the thermal power generated in the vicinity of the bond pads. Nowadays, the reliability of bond wires relies on robust dimensioning based on estimations. Smaller package sizes increase the need for better predictive methods.
The Bond Calculator, a new thermo-electrical simulation tool, is able to predict the temperature profiles along bond wires of arbitrary dimensions as a function of the applied arbitrary transient current profile, the mold surrounding the wire, and the thermal contact between wire and mold.
In this paper we closely investigate the spatial temperature profiles along different bond wires in air, as a first step towards the experimental verification of the simulation model. We use infrared microscopy to measure the thermal radiation generated along the bond wire. This is easier to perform quantitatively in air than in the mold package because of the non-negligible absorbance of the mold material in the infrared wavelength region.
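To make the modeling idea concrete, here is a deliberately simplified, lumped-element sketch that integrates a wire's heat balance (ohmic heating versus conduction into the surroundings through a contact conductance) over an arbitrary current transient. It ignores the spatial profile along the wire, and all material and geometry numbers are placeholders:

```python
import numpy as np

def wire_temperature(current_a, dt, t_ambient=25.0):
    """Explicit Euler integration of C*dT/dt = I**2*R(T) - G*(T - T_amb)."""
    length, diameter = 2e-3, 25e-6               # m, placeholder gold wire
    area = np.pi * (diameter / 2) ** 2
    rho20, alpha = 2.44e-8, 3.7e-3               # resistivity and tempco
    heat_cap = 2.49e6 * area * length            # J/K (volumetric c * volume)
    contact_g = 4e4 * np.pi * diameter * length  # W/K via contact conductance
    T, out = t_ambient, []
    for i in current_a:
        R = rho20 * (1 + alpha * (T - 20.0)) * length / area
        T += dt * (i**2 * R - contact_g * (T - t_ambient)) / heat_cap
        out.append(T)
    return np.array(out)

# 0.8 A pulse for 0.5 ms, then off; peak temperature of the transient:
pulse = np.where(np.arange(0, 1e-3, 1e-6) < 5e-4, 0.8, 0.0)
print(wire_temperature(pulse, dt=1e-6).max())
```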
This paper presents a new broadband antenna for satellite communications. It describes the procedure involved in the design of a microstrip antenna array and its multi-level passive feed network that together yield circular polarization and the necessary gain to be used in an earth-satellite link. The designed antenna is notable for its large bandwidth, circular polarization, high gain and small dimensions.
Organizations identified the opportunities of big data analytics to support the business with problem-specific insights through the exploitation of generated data. Sociotechnical solutions are developed in big data projects to reach competitive advantage. Although these projects are aligned to specific business needs, common architectural challenges are not addressed in a comprehensive manner. Enterprise architecture management is a holistic approach to tackle complex business and IT architectures. The transformation of an organization’s EA is influenced by big data transformation processes and their data-driven approach on all layers. In this paper, we review big data literature to analyze which requirements for the EA management discipline are proposed. Based on a systematic literature identification, conceptual categories of requirements for EA management are elicited utilizing an inductive category formation. These conceptual categories of requirements constitute a category system that facilitates a new perspective on EA management and fosters the innovation-driven evolution of the EA management discipline.
Due to frequently changing requirements, the internal structure of cloud services is highly dynamic. To ensure flexibility, adaptability, and maintainability for dynamically evolving services, modular software development has become the dominating paradigm. By following this approach, services can be rapidly constructed by composing existing, newly developed, and publicly available third-party modules. However, newly added modules might be unstable, resource-intensive, or untrustworthy. Thus, satisfying non-functional requirements such as reliability, efficiency, and security while ensuring rapid release cycles is a challenging task. In this paper, we discuss how to tackle these issues by employing container virtualization to isolate modules from each other according to a specification of isolation constraints. We satisfy non-functional requirements for cloud services by automatically transforming the comprised modules into a container-based system. To deal with the increased overhead caused by isolating modules from each other, we calculate the minimum set of containers required to satisfy the specified isolation constraints. Moreover, we present and report on a prototypical transformation pipeline that automatically transforms cloud services developed based on the Java Platform Module System into container-based systems.
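Reading the isolation constraints as a conflict graph (modules as nodes, "must not share a container" as edges), minimizing the number of containers becomes a graph coloring problem, which is NP-hard in general. The greedy sketch below, with invented module names, merely illustrates the grouping and is not the paper's exact calculation:

```python
def group_into_containers(modules, isolation_pairs):
    """Greedily pack modules into containers so that no container holds
    two modules connected by an isolation constraint."""
    conflicts = {m: set() for m in modules}
    for a, b in isolation_pairs:
        conflicts[a].add(b)
        conflicts[b].add(a)
    containers = []  # each container is a set of module names
    for m in sorted(modules, key=lambda m: -len(conflicts[m])):
        for c in containers:
            if conflicts[m].isdisjoint(c):
                c.add(m)
                break
        else:
            containers.append({m})
    return containers

mods = ["auth", "billing", "thirdparty-pdf", "ui", "report"]
pairs = [("thirdparty-pdf", "auth"), ("thirdparty-pdf", "billing"),
         ("report", "auth")]
print(group_into_containers(mods, pairs))  # two containers suffice here
```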
Serverless computing is an emerging cloud computing paradigm with the goal of freeing developers from resource management issues. As of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other. These workloads benefit from on-demand and elastic compute resources as well as per-function billing. However, it is still an open research question to which extent parallel applications, which most often comprise complex coordination and communication patterns, can benefit from serverless computing.
In this paper, we introduce serverless skeletons for parallel cloud programming to free developers from both parallelism and resource management issues. In particular, we investigate the well-known and widely used farm skeleton, which supports the implementation of a wide range of applications. To evaluate our concepts, we present a prototypical development and runtime framework and implement two applications based on our framework: numerical integration and hyperparameter optimization, a commonly applied technique in machine learning. We report on performance measurements for both applications and discuss the usefulness of our approach.
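A farm skeleton applies a worker function to independent tasks and collects the results. The sketch below mimics that interface with a thread pool as a local stand-in for per-request serverless function invocations, using numerical integration as in the paper's evaluation; it is an illustration under our own assumptions, not the presented framework:

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, max_workers=8):
    """Apply `worker` to each task in parallel and collect the results.
    On a serverless platform, each call would map to one independently
    billed function invocation instead of a local thread."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, tasks))

def integrate_chunk(bounds, n=10_000):
    """Midpoint-rule integral of f(x) = x**2 over one subinterval."""
    a, b = bounds
    h = (b - a) / n
    return sum(((a + (i + 0.5) * h) ** 2) * h for i in range(n))

chunks = [(i / 8, (i + 1) / 8) for i in range(8)]
print(sum(farm(integrate_chunk, chunks)))  # ~ 1/3
```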
This paper addresses what we call the investment question: under what plausible circumstances, if any, can variable renewable energy (VRE), and solar photovoltaic (PV) in particular, be a good investment? Although VRE has been growing rapidly worldwide, it is generally subsidized. Under what cost and market conditions can solar PV flourish without subsidy? We employ solar insolation and market price data from the U.S. and from Germany to gain insight into the investment question. We find that unsubsidized solar PV is, or may soon be, a justifiable investment, but that market arrangements may play a crucial role in determining success. We end by sketching a proposal that amounts to a reformed capacity market that would afford participation of solar PV.
Fitting 3D Morphable Face Models (3DMM) to a 2D face image allows the separation of face shape from skin texture, as well as correction for face expression. However, the recovered 3D face representation is not readily amenable to processing by convolutional neural networks (CNN). We propose a conformal mapping from a 3D mesh to a 2D image, which makes these machine learning tools accessible to 3D face data. Experiments with a CNN-based face recognition system designed using the proposed representation have been carried out to validate the advocated approach. The results obtained on standard benchmarking data sets show its promise.
To evaluate the quality of a person's sleep, it is essential to identify the sleep stages and their durations. Currently, the gold standard in terms of sleep analysis is overnight polysomnography (PSG), during which several signals like the EEG (electroencephalogram), EOG (electrooculogram), EMG (electromyogram), ECG (electrocardiogram), SpO2 (blood oxygen saturation) and, for example, respiratory airflow and respiratory effort are recorded. These expensive and complex procedures, applied in sleep laboratories, are invasive and unfamiliar to the subjects, which may have an impact on the recorded data. These are the main reasons why low-cost home diagnostic systems are likely to be advantageous. Their aim is to reach a larger population by reducing the number of parameters recorded. Nowadays, many wearable devices promise to measure sleep quality using only the ECG and body-movement signals. This work presents an Android application developed in order to verify the accuracy of an algorithm published in the sleep literature. The algorithm uses ECG and body movement recordings to estimate sleep stages. The pre-recorded signals fed into the algorithm have been taken from the PhysioNet online database. The obtained results have been compared with those of the standard method used in PSG. The mean agreement ratios for the sleep stages REM, Wake, NREM-1, NREM-2, and NREM-3 were 38.1%, 14%, 16%, 75%, and 54.3%, respectively.
In this paper, an approach is introduced for how reinforcement learning can be used to achieve interoperability between heterogeneous Internet of Things (IoT) components. More specifically, we model an HTTP REST service as a Markov Decision Process and adapt Q-learning to the properties of REST, so that an agent in the role of an HTTP REST client can learn the semantics of the service and, in particular, an optimal sequence of service calls to achieve an application-specific goal. With our approach, we want to open up and facilitate a discussion in the community, as we see the utilization of artificial intelligence techniques as the key to achieving interoperability in IoT.
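A bare-bones sketch of this adaptation: states are abstracted resource representations, actions are HTTP calls on discovered links, and tabular Q-learning learns the call sequence that reaches the goal. The toy environment below is entirely hypothetical; a real agent would issue actual HTTP requests and derive states from the responses.

```python
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] table
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

class ToyRestEnv:
    """Stub REST service: reach /order/confirmed via the right calls."""
    GRAPH = {"/start": {("POST", "/cart"): "/cart"},
             "/cart": {("POST", "/checkout"): "/order/confirmed",
                       ("DELETE", "/cart"): "/start"},
             "/order/confirmed": {}}
    def actions(self, s): return list(self.GRAPH[s]) or [("GET", s)]
    def step(self, s, a):
        nxt = self.GRAPH[s].get(a, s)
        done = nxt == "/order/confirmed"
        return nxt, (1.0 if done else -0.01), done

def learn(env, episodes=500):
    for _ in range(episodes):
        state = "/start"
        for _ in range(20):            # cap the episode length
            acts = env.actions(state)
            a = random.choice(acts) if random.random() < EPSILON \
                else max(acts, key=lambda x: Q[(state, x)])
            nxt, reward, done = env.step(state, a)
            best_next = max((Q[(nxt, b)] for b in env.actions(nxt)), default=0.0)
            Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
            state = nxt
            if done:
                break

env = ToyRestEnv()
learn(env)
print(max(env.actions("/cart"), key=lambda a: Q[("/cart", a)]))
```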
OpenAPI, WADL, RAML, and API Blueprint are popular formats for documenting Web APIs. Although these formats are in general both human- and machine-readable, only the part of the format describing the syntax of a Web API is machine-understandable. Descriptions, which explain the meaning and purpose of Web API elements, are embedded as natural language text snippets into the documents and target human readers, but not machines. To enable machines to read and process such state-of-practice Web API documentation, we propose a Transformer model that solves the generic task of identifying a Web API element within a syntax structure that matches a natural language query. For our first prototype, we focus on the Web API integration task of matching output with input parameters and fine-tuned a pre-trained CodeBERT model on the downstream task of question answering with samples from 2,321 OpenAPI documents. We formulate the original question answering problem as a multiple-choice task: given a semantic natural language description of an output parameter (question) and the syntax of the input schema (paragraph), the model chooses the input parameter (answer) in the schema that best matches the description. The paper describes the data preparation, tokenization, and fine-tuning process and discusses possible applications of our model as part of a recommender system. Furthermore, we evaluate the generalizability and the robustness of our fine-tuned model, which chooses the correct parameter with an accuracy of 81.46%.
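At inference time, the multiple-choice formulation can be sketched with the Hugging Face transformers library as below. The description and the candidate parameter names are invented, and loading codebert-base into a multiple-choice head leaves the classification weights randomly initialized, so the scores become meaningful only after fine-tuning as described in the paper:

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForMultipleChoice.from_pretrained("microsoft/codebert-base")

# Natural language description of an output parameter (the "question") and
# candidate input parameters from the input schema (the "choices").
question = "Unique identifier of the customer returned by the lookup call."
choices = ["order.orderId", "customer.customerId", "customer.createdAt"]

enc = tok([question] * len(choices), choices,
          return_tensors="pt", padding=True, truncation=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, n_choices, seq)

with torch.no_grad():
    logits = model(**inputs).logits    # shape (1, n_choices)
print(choices[logits.argmax(dim=-1).item()])
```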
In networked operating room environments, there is an emerging trend towards standardized, non-proprietary communication protocols which allow building new integration solutions and flexible human-machine interaction concepts. The most prominent endeavor is the IEEE 11073 SDC protocol. For some use cases, it would be helpful if not just medical devices could be controlled based on SDC, but also building automation systems like light, shutters, air conditioning, etc. For those systems, the KNX protocol is widely used. We built an SDC-to-KNX gateway which allows using the SDC protocol to send commands to connected KNX devices. The first prototype system was successfully implemented in the demonstration operating room at Reutlingen University. This is a first step towards the integration of a broader variety of KNX devices.
For decades, Software Process Improvement (SPI) programs have been implemented, inter alia, to improve the quality and speed of software development. To set up, guide, and carry out SPI projects, and to measure SPI state, impact, and success, a multitude of different SPI approaches and considerable experience are available. SPI addresses many aspects, ranging from individual developer skills to entire organizations. It comprises, for instance, the optimization of specific activities in the software lifecycle as well as the creation of organizational awareness and project culture. In the course of conducting a systematic mapping study on the state of the art in SPI from a general perspective, we observed Global Software Engineering (GSE) becoming a topic of interest in recent years. Therefore, in this paper, we provide a detailed investigation of those papers from the overall systematic mapping study that were classified as addressing SPI in the context of GSE. From the main study's result set, a set of 30 papers dealing with GSE was selected for an in-depth analysis using the systematic review instrument to study the contributions and to develop an initial picture of how GSE is considered from the perspective of SPI. Our findings show that the analyzed papers deliver a substantial discussion of cultural models and how such models can be used to better address and align SPI programs with multi-national environments. Furthermore, experience is shared on how agile approaches can be implemented in companies working at a global scale. Finally, success factors and barriers are studied to help companies implement SPI in a GSE context.
Software process improvement (SPI) has been around for decades, but it remains a critically discussed topic. In several waves, different aspects of SPI have been discussed in the past, e.g., large-scale company-level SPI programs, maturity models, success factors, and in-project SPI. It is hard to find new streams or a consensus in the community, but there is a trend coming along with agile and lean software development. Apparently, practitioners reject extensive and prescriptive maturity models and move towards smaller, faster, and continuous project-integrated SPI. Based on data from two survey studies conducted in Germany (2012) and Europe (2016), we analyze the process customization for projects and the practices for implementing SPI in the participating companies. Our findings indicate that, even in regulated industry sectors, companies increasingly adopt in-project SPI activities, primarily with the goal to continuously optimize specific processes. Therefore, with this paper, we want to stimulate a discussion on how to evolve traditional SPI towards a continuous learning environment.
Software engineering courses have to deliver theoretical and technical knowledge and skills while establishing links to practice. However, due to course goals or resource limitations, it is not always possible or even meaningful to set up complete projects and let students work on a real piece of software. For instance, if students are to understand the impact of group dynamics on productivity, a particular software to be developed is of less interest than an environment in which students can learn about team-related phenomena. To address this issue, we use experimentation as a teaching tool in software engineering courses. Experiments help to precisely characterize and study a problem in a systematic way, to observe phenomena, and to develop and evaluate solutions. Furthermore, experiments help establish short feedback and learning cycles, and they also allow for experiencing risk and failure scenarios in a controlled environment. In this paper, we report on three courses in which we implemented different experiments, and we share our experiences and lessons learned. Using these courses, we demonstrate how to use classroom experiments, and we provide a discussion of the feasibility based on formal and informal course evaluations. This experience report thus aims to help teachers integrate small- and medium-sized experiments into their courses.
Software development consists to a large extent of human-based processes with continuously increasing demands regarding interdisciplinary team work. Understanding the dynamics of software teams can be seen as highly important to successful project execution. Hence, for future project managers, knowledge about non-technical processes in teams is significant. In this paper, we present a course unit that provides an environment in which students can learn and experience the role of different communication patterns in distributed agile software development. In particular, students gain awareness about the importance of communication by experiencing the impact of limited communication channels and the effects on collaboration and team performance. The course unit presented uses the controlled experiment instrument to provide the basic organization of a small software project carried out in virtual teams. We provide a detailed design of the course unit to allow for implementation in further courses. Furthermore, we share the experiences obtained from implementing this course unit with 16 graduate students. We observed students struggling with technical aspects and team coordination in general, while not realizing the importance of communication channels (or their absence). Furthermore, we could show the students that a lack of communication protocols impacts team coordination and performance regardless of the communication channels used.
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to the development methods and practices used. Our findings suggest that only a small number of participants operate their projects in a purely traditional or agile manner (under 15%). That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
Modern web-based applications are often built as multi-tier architectures using persistence middleware. Middleware technology providers recommend the use of Optimistic Concurrency Control (OCC) mechanisms to avoid the risk of blocked resources. However, most vendors of relational database management systems implement only locking schemes for concurrency control. As a consequence, a form of OCC has to be implemented at the client or middleware side.
A simple Row Version Verification (RVV) mechanism has been proposed to implement OCC at the client side. For performance reasons, the middleware uses buffers (caches) of its own to avoid network traffic and possible disk I/O. This caching, however, complicates the use of RVV because the data in the middleware cache may be stale (outdated). We investigate various data access technologies, including the new Java Persistence API (JPA) and Microsoft's LINQ technologies, for their ability to use the RVV programming discipline.
The use of persistence middleware that tries to relieve the programmer from low-level transaction programming turns out to even complicate the situation in some cases. Programmed examples show how to use SQL data access patterns to solve the problem.
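A minimal sketch of the RVV pattern, with sqlite3 standing in for the RDBMS and hypothetical table and column names: the UPDATE only matches if the previously read row version is still current, so a row count of zero signals a stale read that must be retried.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account(id INTEGER PRIMARY KEY,"
           " balance INTEGER, row_version INTEGER)")
db.execute("INSERT INTO account VALUES (1, 100, 0)")

def read_account(conn, acc_id):
    return conn.execute("SELECT balance, row_version FROM account"
                        " WHERE id = ?", (acc_id,)).fetchone()

def rvv_update(conn, acc_id, new_balance, seen_version):
    """Optimistic write: succeeds only if the row version is unchanged."""
    cur = conn.execute(
        "UPDATE account SET balance = ?, row_version = row_version + 1"
        " WHERE id = ? AND row_version = ?",
        (new_balance, acc_id, seen_version))
    return cur.rowcount == 1  # 0 rows hit -> stale read, retry required

balance, version = read_account(db, 1)
print(rvv_update(db, 1, balance - 10, version))  # True: version matched
print(rvv_update(db, 1, balance - 20, version))  # False: version is stale
```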