Boost converters suffer from a bandwidth limitation caused by the right-half-plane zero (RHPZ) in the control-to-output transfer function. At the same time, many applications require superior dynamic behavior. Furthermore, the size and cost of boost converter systems can be reduced through smaller voltage deviations and fast transient responses to large-signal load transients. The key idea of the proposed ΔV/Δt-intervention control concept is to adapt the controller output to its new steady-state value immediately after a load transient, predicted from known parameters. The concept is implemented in a digital control circuit consisting of an ASIC in a 110 nm technology and a Xilinx Spartan-6 field-programmable gate array (FPGA). In a boost converter with 3.5 V input voltage, 6.3 V output voltage, 1.2 A load, and 500 kHz switching frequency, the output voltage deviations are 2.8x smaller, scaling down the output capacitor value by the same factor, and the recovery times after large-signal load transients are 2.4x shorter with the proposed concept. The control is widely applicable, as it supports constant switching frequencies and allows for duty cycle and inductor current limitations. It also shows various advantages compared to conventional control and to selected adaptive control concepts.
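For intuition, the prediction idea can be grounded in the textbook steady-state relation of an ideal boost converter in continuous conduction mode; the sketch below is a simplification, not the authors' full prediction law (which also accounts for losses and current limits), and computes the duty cycle the controller output should jump to:

    # Minimal sketch, assuming an ideal CCM boost converter:
    # V_out = V_in / (1 - D)  =>  D = 1 - V_in / V_out
    def predicted_duty_cycle(v_in: float, v_out: float) -> float:
        return 1.0 - v_in / v_out

    # Operating point from the abstract: 3.5 V input, 6.3 V output
    print(f"{predicted_duty_cycle(3.5, 6.3):.3f}")  # -> 0.444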
This article proposes several modified quasi-Z-source dc/dc boost converters. These achieve soft switching by using a clamp-switch network, comprising an active switch and a diode in parallel with a capacitor, connected across one of the inductors of the Z-source network. In this way, ringing at the transistor switching node is mitigated, and the voltage at the turn-on of the transistor is reduced. Zero-voltage switching (ZVS) of the main transistor is even possible if the capacitor in the clamp-switch network is chosen adequately. The proposed circuit structure and operating mode are described and validated through simulations and measurements on a low-power prototype.
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to the used development methods and practices. Our findings suggest that only a small number of participants (under 15%) operate their projects in a purely traditional or agile manner. That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
Among the multitude of software development processes available, hardly any is used by the book. Regardless of company size or industry sector, a majority of project teams and companies use customized processes that combine different development methods, so-called hybrid development methods. Even though such hybrid development methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this paper, we make a first step towards devising such guidelines. Grounded in 1,467 data points from a large-scale online survey among practitioners, we study the current state of practice in process use to answer the question: What are hybrid development methods made of? Our findings reveal that only eight methods and few practices build the core of modern software development. This small set allows for statistically constructing hybrid development methods. Using an 85% agreement level in the participants’ selections, we provide two examples illustrating how hybrid development methods are characterized by the practices they are made of. Our evidence-based analysis approach lays the foundation for devising hybrid development methods.
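To make the agreement criterion concrete, a minimal sketch follows; the responses and practice names are purely illustrative, not the survey's actual data:

    # Illustrative only: keep practices that at least 85% of the
    # participants using a given base method selected.
    from collections import Counter

    responses = [                      # selected practices per participant
        {"code review", "daily standup", "continuous integration"},
        {"code review", "daily standup"},
        {"code review", "continuous integration"},
    ]
    counts = Counter(p for r in responses for p in r)
    core = {p for p, c in counts.items() if c / len(responses) >= 0.85}
    print(core)                        # -> {'code review'}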
In this paper we describe an interactive web-based visual analysis tool for Formula One races. It first provides an overview of all races on a yearly basis in a calendar-like representation. From this starting point, races can be selected and visually inspected in detail. We support a dynamic race position diagram as well as a more detailed lap times line plot for comparing the drivers’ lap times. Many interaction techniques are supported, such as selection, filtering, highlighting, color coding, and details-on-demand. We illustrate the usefulness of our visualization tool by applying it to a Formula One dataset, describing the dynamic visual racing patterns for a number of selected races and drivers.
Verification of an active time constant tuning technique for continuous-time delta-sigma modulators
(2022)
In this work we present a technique to compensate the effects of R-C / gm-C time-constant (TC) errors due to process variation in continuous-time delta-sigma modulators. Local TC error compensation factors are shifted around in the modulator loop to positions where they can be implemented efficiently with finely tunable circuit structures, such as current-steering digital-to-analog converters (DACs). We apply our technique to a third-order, single-bit, low-pass continuous-time delta-sigma modulator in cascaded integrator feedback structure, implemented in a 0.35-μm CMOS process. A tuning scheme for the reference currents of the feedback DACs is derived as a function of the individual TC errors and verified by circuit simulations. We confirm the tuning technique experimentally on the fabricated circuit over a TC parameter variation range of ±20%. Stable modulator operation is achieved for all parameter sets. The measured performances satisfy the expectations from our theoretical calculations and circuit-level simulations.
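The shape of such a tuning rule can be illustrated schematically; the first-order form below is our assumption, not the paper's exact derivation. A feedback coefficient realized as I_ref/C recovers its nominal value when the DAC reference current is scaled with the local time-constant deviation:

    # Assumed illustration: a loop coefficient implemented as k = I_ref / C
    # keeps its nominal value if I_ref is scaled with the TC error
    # eps = (tau_actual - tau_nominal) / tau_nominal.
    def tuned_dac_reference(i_ref_nominal: float, eps: float) -> float:
        return i_ref_nominal * (1.0 + eps)

    print(tuned_dac_reference(100e-6, 0.20))  # 120 uA if tau is 20% high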
Redirected walking techniques allow people to walk in a larger virtual space than the physical extents of the laboratory. We describe two experiments conducted to investigate human sensitivity to walking on a curved path and to validate a new redirected walking technique. In a psychophysical experiment, we found that sensitivity to walking on a curved path was significantly lower for slower walking speeds (radius of 10 meters versus 22 meters). In an applied study, we investigated the influence of a velocity-dependent dynamic gain controller and an avatar controller on the average distance that participants were able to freely walk before needing to be reoriented. The mean walked distance was significantly greater in the dynamic gain controller condition, as compared to the static controller (22 meters versus 15 meters). Our results demonstrate that perceptually motivated dynamic redirected walking techniques, in combination with reorientation techniques, allow for unaided exploration of a large virtual city model.
Recognizing human actions, reliably inferring their meaning, and being able to potentially exchange mutual social information are core challenges for autonomous systems when they directly share the same space with humans. Today’s technical perception solutions have been developed and tested mostly on standard vision benchmark datasets, where manual labeling of sensory ground truth is a tedious but necessary task. Furthermore, rarely occurring human activities are underrepresented in such data, leading to algorithms not recognizing these activities. For this purpose, we introduce a modular simulation framework for training and validating algorithms under various environmental conditions. For this paper we created a dataset containing rare human activities in urban areas, on which a current state-of-the-art pose estimation algorithm fails, and demonstrate how to train for such rare poses with simulated data only.
Engineers in the research project “Digital Product Life-Cycle” use a graph-based design language to model all aspects of the product they are working on. This abstract model is the base for all further investigations, developments, and implementations. Particularly at early stages of development, collaborative decision making is very important. We propose a semantically augmented knowledge space, by means of mixed reality technology, to support engineering teams. To this end, we present an interaction prototype consisting of a pico projector and a camera. In our usage scenario, engineers augment different artefacts in a virtual working environment. The concept of our prototype covers both an interaction and a technical concept. To realise implicit and natural interactions, we conducted two prototype tests: (1) a test with a low-fidelity prototype and (2) a test using the Wizard of Oz method. As a result, we present a prototype with interaction selection via augmentation spotlighting and an interaction zoom acting as a semantic zoom.
This research addresses the question of why employees use enterprise social networks (ESN). Against the background of technology acceptance research, we propose an extended unified theory of acceptance and use of technology (UTAUT) model, adapt it to an ESN context, and test our model against data from ESN users of large and medium-sized enterprises. We use partial least squares structural equation modeling to gain insights into the determinants of ESN use. This paper contributes to ESN acceptance research by evaluating a model containing determinants of ESN use. It also examines the effects of determinants on five different usage dimensions of ESN. The results reveal that facilitating conditions are the main driver of ESN use while the impact of intention to use is comparably small. Implications for theory and practice are discussed.
The experimental characterization of the thermal impedance Zth of large power MOSFETs is commonly done by measuring the junction temperature Tj in the cooling phase after the device has been heated, preferably to a high junction temperature for increased accuracy. However, turning off a large heating current (as required by modern MOSFETs with low on-state resistances) takes some time because of parasitic inductances in the measurement system. Thus, most setups do not allow the characterization of the junction temperature in the time range below several tens of μs.
In this paper, an optimized measurement setup is presented which allows accurate Tj characterization as early as 3 μs after turn-off of the heating current. With this, it becomes possible to experimentally investigate the influence of thermal capacitances close to the active region of the device. Measurement results are presented for advanced power MOSFETs with very large heating currents of up to 220 A. Three bonding variants are investigated and the observed differences are explained.
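The evaluation behind such a setup typically follows the standard cooling-curve relation (a textbook definition, not the authors' specific tooling): with the device heated to thermal steady state by power P and switched off at t = 0, superposition gives Zth(t) = (Tj(0) - Tj,cool(t)) / P. A minimal sketch:

    # Standard cooling-curve evaluation of transient thermal impedance.
    import numpy as np

    def zth_from_cooling(tj_cool_c, tj_start_c, p_heat_w):
        """tj_cool_c: Tj samples during cooling [degC],
        tj_start_c: steady-state Tj at turn-off [degC],
        p_heat_w: heating power before turn-off [W]."""
        return (tj_start_c - np.asarray(tj_cool_c, dtype=float)) / p_heat_w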
Transaction processing is of growing importance for mobile computing. Booking tickets, flight reservation, banking, ePayment, and booking holiday arrangements are just a few examples of mobile transactions. Due to temporarily disconnected situations, synchronisation and consistent transaction processing are key issues. Serializability is too strong a criterion for correctness when the semantics of a transaction is known. We introduce a transaction model that allows higher concurrency for a certain class of transactions defined by their semantics. The transaction results are “escrow serializable” and the synchronisation mechanism is non-blocking. An experimental implementation showed higher concurrency, higher transaction throughput, and fewer resources used than common locking or optimistic protocols.
Ambitious goals set by the European Union strategy for the emission reduction of multimodal logistic chains, together with new requirements for intermodal terminals arising from evolving customer needs, contribute to a shift in the driver for infrastructure development: from economy of scale to economy of density. This paper presents an innovative method for designing a process-oriented technology chain for intermodal terminals in order to fulfill these new, demanding requirements. The results of the case study of the Zero Emission Logistic Terminal Reutlingen are presented, highlighting how this particular context enables the design and development of a modular concept, paving the way for generalizing the findings and transferring them to similar contexts in other European cities.
This paper first identifies the trade-off among cost, flexibility, and performance of autonomous robotic solutions for material handling processes, where adding value with automation is not as trivial as in production processes: hence the requirement for automated solutions to be simple, lean, and efficient becomes even stricter. A method for modelling and comparing the differential performance and cost of manual and autonomous solutions is then developed. As a result of the method, a smart man-machine collaborative interface is designed and its impact evaluated in a specific case study. The results are then generalized and support the conclusion that in unconstrained environments, where full standardization cannot be achieved, the risk of investing in autonomous solutions can only be mitigated by creating a fast and smart man-machine collaborative interface.
IT environments that consist of a very large number of rather small structures like microservices, Internet of Things (IoT) components, or mobility systems are emerging to support flexible and agile products and services in the age of digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing, resilient run-time environments and distributed information systems. We are extending Enterprise Architecture (EA) methodologies and models that cover a high degree of heterogeneity and distribution to support the digital transformation and related information systems with micro-granular architectures. Our aim is to support flexibility and agile transformation for both IT and business capabilities within adaptable digital enterprise architectures. The present research paper investigates mechanisms for integrating Microservice Architectures (MSA) by extending original enterprise architecture reference models with elements for more flexible architectural metamodels and EA-mini-descriptions.
AI technologies such as deep learning provide promising advances in many areas. Using these technologies, enterprises and organizations implement new business models and capabilities. Initially, AI technologies were deployed in experimental environments, and AI-based applications were created in an ad-hoc manner without methodological guidance or an engineering approach. Due to the increasing importance of AI technologies, however, a more structured approach is necessary that enables the methodological engineering of AI-based applications. Therefore, in this paper we develop first steps towards such methodological engineering. First, we identify important differences between the technological foundations of AI technologies, in particular deep learning, and traditional information technologies. Then we create a framework for engineering AI applications in four steps: identification of an AI application type, sub-type identification, lifecycle phase, and definition of details. The introduced framework takes into account that AI applications use an inductive approach to infer knowledge from huge collections and streams of data. It not only enables the rapid development of AI applications but also the efficient sharing of knowledge about them.
Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers, and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment.
In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base via the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
Current approaches for enterprise architecture lack analytical instruments for cyclic evaluations of business and system architectures in real business enterprise system environments. This impedes the broad use of enterprise architecture methodologies. Furthermore, the permanent evolution of systems quickly desynchronizes model representation and reality. Therefore, we introduce an approach that complements the existing top-down approach for the creation of enterprise architecture with a bottom-up approach. Enterprise Architecture Analytics exploits the architectural information contained in existing IT infrastructures: by applying Big Data technologies, Enterprise Architectures may be discovered, analyzed, and optimized. The increased availability of architectural data also improves the possibilities to verify the compliance of Enterprise Architectures. Architectural decisions are linked to clustered architecture artifacts and categories according to a holistic EAM Reference Architecture with specific architecture metamodels. A specially suited EAM Maturity Framework provides the base for systematic, analytics-supported assessments of architecture capabilities.
While the concepts of object-oriented antipatterns and code smells are prevalent in scientific literature and have been popularized by tools like SonarQube, the research field for service-based antipatterns and bad smells is not as cohesive and organized. The description of these antipatterns is distributed across several publications with no holistic schema or taxonomy. Furthermore, there is currently little synergy between documented antipatterns for the architectural styles SOA and Microservices, even though several antipatterns may hold value for both. We therefore conducted a Systematic Literature Review (SLR) that identified 14 primary studies. 36 service-based antipatterns were extracted from these studies and documented with a holistic data model. We also categorized the antipatterns with a taxonomy and implemented relationships between them. Lastly, we developed a web application for convenient browsing and implemented a GitHub-based repository and workflow for the collaborative evolution of the collection. Researchers and practitioners can use the repository as a reference, for training and education, or for quality assurance.
The benefits of urban data cannot be realized without a political and strategic view of data use. A core concept within this view is data governance, which aligns strategy in data-relevant structures and entities with data processes, actors, architectures, and overall data management. Data governance is not a new concept and has long been addressed by scientists and practitioners from an enterprise perspective. In the urban context, however, data governance has only recently attracted increased attention, despite the unprecedented relevance of data in the advent of smart cities. Urban data governance can create semantic compatibility between heterogeneous technologies and data silos and connect stakeholders by standardizing data models, processes, and policies. This research provides a foundation for developing a reference model for urban data governance, identifies challenges in dealing with data in cities, and defines factors for the successful implementation of urban data governance. To obtain the best possible insights, the study carries out qualitative research following the design science research paradigm, conducting semi-structured expert interviews with 27 municipalities from Austria, Germany, Denmark, Finland, Sweden, and the Netherlands. The subsequent data analysis based on cognitive maps provides valuable insights into urban data governance. The interview transcripts were transferred and synthesized into comprehensive urban data governance maps to analyze entities and complex relationships with respect to the current state, challenges, and success factors of urban data governance. The findings show that each municipal department defines data governance separately, with no uniform approach. Given cultural factors, siloed data architectures have emerged in cities, leading to interoperability and integrability issues. A city-wide data governance entity in a cross-cutting function can be instrumental in breaking down silos in cities and creating a unified view of the city’s data landscape. The further identified concepts and their mutual interaction offer a powerful tool for developing a reference model for urban data governance and for the strategic orientation of cities on their way to data-driven organizations.
In this work, a comparison between different brushless harmonic-excited wound-rotor synchronous machines is performed. The general idea of all topologies is the elimination of the slip rings and auxiliary windings by using the already existing stator and rotor windings for field excitation. This is achieved by injecting a harmonic airgap field with the help of power electronics. This harmonic field does not interact with the fundamental field; it merely transfers the excitation power across the airgap. Alternative methods with varying numbers of phases, different pole-pair combinations, and winding layouts are covered and compared with a detailed Finite-Element-parameterized model. Parasitic effects due to saturation and coupling between the harmonic and main windings are considered.
This paper investigates the electrothermal stability and the predominant defect mechanism of a Schottky gate AlGaN/GaN HEMT. Calibrated 3-D electrothermal simulations are performed using a simple semiempirical dc model, which is verified against high-temperature measurements up to 440°C. To determine the thermal limits of the safe operating area, measurements up to destruction are conducted at different operating points. The predominant failure mechanism is identified to be hot-spot formation and subsequent thermal runaway, induced by large drain–gate leakage currents that occur at high temperatures. The simulation results and the high temperature measurements confirm the observed failure patterns.
Compared to the automotive sector, where automation is the rule, in many other, less standardized sectors automation is still the exception. This could soon hurt the productivity of industrialized countries, where unemployment is low and the population is aging. Phenomena like the recent downfall in productivity, due to lockdowns and social distancing for the prevention of health hazards during the COVID-19 pandemic, only add to the problem. For these reasons, the relevance of, motivation for, and intention towards more automation in less standardized sectors have probably never been higher. However, available statistics show that providers and users of technologies struggle to bring more automation into action in automation-unfriendly sectors. In this paper, we present a decision support method for investment in automation that tackles this problem: the STIC analysis. The method takes a holistic and quantitative approach, tying together technological, context-related, and economic input parameters and synthesizing them into a final economic indicator. Thanks to the modelling of these parameters, it is possible to identify the technological and/or process adjustments that would have the highest impact on the efficiency of the automation, thereby delivering value for both technology users and technology providers.
The success of an autonomous robotic system is influenced by several interdependent factors that are not easily identifiable. This paper lays the foundation of a new integrated approach to deeply examine all the parameters and understand their contribution to success. After introducing the problem, two cutting-edge autonomous systems for the process of unloading containers are presented. Then the STIC analysis, a recently developed method for modelling and interpreting all the parameters, is introduced. The preliminary results of applying this methodology to a first case study, based on one of the two systems available to the authors, are briefly presented. Finally, future research is recommended to show that this methodology can efficiently and effectively mitigate the risk that stops potential users from investing in autonomous systems in the logistics sector.
In standardized sectors such as automotive, the cost-benefit ratio of automation solutions is high, as they contribute to increasing capacity, decreasing costs, and improving product quality. In less standardized application fields, the contribution of automation to improvements in capacity, cost, and quality blurs. The automation of complex and unstructured tasks requires sophisticated, expensive, and low-performing systems, whose impact on product quality is often not directly perceived by customers. As a result, the full automation of process chains in the general manufacturing or logistics sectors is often a suboptimal solution. Departing from the false idea that a process should be either fully automated or fully manual, this paper presents a novel heuristic method for the design of lean human-robot interaction, the Quality Interaction Function Deployment, with the objective of finding the “right level of automation”. Functions are divided among human and automated agents, and several automation scenarios are created and evaluated with respect to their compliance with the requirements of all process stakeholders. As a result, synergies between operators (manual tasks) and machines (automated tasks) are improved, thus reducing time losses and increasing productivity.
Early reduction of risks in a startup or an innovation project is highly important. Appropriate means for risk reduction exist, such as testing business models with different kinds of experiments. However, deciding what to test, and how to select the right test, is challenging for many startups and innovation projects. This article presents the so-called Business Experiments Navigator (BEN), a toolkit to assist startup and innovation processes. It complements other tools such as the Business Model Canvas or the Lean Startup process. The main contribution of BEN is to bridge the gap between the riskiest assumptions of a business model and the multitude of available testing techniques by providing assumption templates. The Business Experiments Navigator has been validated in several workshops. Results show that it creates awareness among the workshop participants that a business model is based on assumptions which impose risks and need to be validated. Further, users of BEN were able to identify relevant assumptions and map different kinds of assumptions to appropriate testing techniques. The process applied in the workshops, as well as the assumption templates, helped the participants understand the main concepts and transfer their learnings to their own business ideas.
Most question-answering (QA) systems rely on training data to reach their optimal performance. However, acquiring training data for supervised systems is both time-consuming and resource-intensive. To address this, we propose TFCSG, an unsupervised similar-question retrieval approach that leverages pre-trained language models and multi-task learning. First, topic keywords in question sentences are extracted sequentially based on a latent topic-filtering algorithm to construct an unsupervised training corpus. Then, the multi-task learning method is used to build the question retrieval model. Three tasks are designed: the first is a short-sentence contrastive learning task; the second is a similarity judgment task between a question sentence and its corresponding topic sequence; the third is the generation of the corresponding topic sequence from a question sentence. The three tasks are used to train the language model in parallel. Finally, similar questions are obtained by calculating the cosine similarity between sentence vectors. Comparison experiments on public question datasets show that TFCSG outperforms the comparative unsupervised baseline methods. Moreover, no manual labeling is needed, which greatly saves human resources.
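The final retrieval step is plain cosine ranking over sentence vectors; a minimal sketch follows (the encoder itself is whatever model TFCSG trains, so a pre-computed embedding matrix is assumed):

    # Rank corpus questions by cosine similarity to the query vector.
    import numpy as np

    def retrieve(query_vec, corpus_vecs, top_k=5):
        q = query_vec / np.linalg.norm(query_vec)
        c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
        scores = c @ q                       # cosine similarities
        return np.argsort(-scores)[:top_k]   # indices of best matches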
Virtual prototyping of integrated mixed-signal smart-sensor systems requires high-performance co-simulation of analog frontend circuitry with complex digital controller hardware and embedded real-time software. We use SystemC/TLM 2.0 in combination with a cycle-count accurate temporal decoupling approach to simulate digital components and firmware code execution at high speed while preserving clock cycle accuracy and, thus, real-time behavior at time quantum boundaries. Optimal time quanta ensuring real-time capability can be calculated and set automatically during simulation if the simulation engine has access to exact timing information about upcoming communication events. These methods fail in case of non-deterministic, asynchronous events resulting in a possibly invalid simulation result. In this paper, we propose an extension of this method to the case of asynchronous events generated by blackbox sources from which a-priori event timing information is not available, such as coupled analog simulators or hardware in the loop. Additional event processing latency and/or rollback effort caused by temporal decoupling is minimized by calculating optimal time quanta dynamically in a SystemC model using a linear prediction scheme. For an example smart-sensor system model, we show that quasi-periodic events that trigger activities in temporally decoupled processes are handled accurately after the predictor has settled.
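The quantum-sizing idea can be sketched under stated assumptions (a least-squares line through recent event timestamps as the linear predictor; names and structure are ours, not the paper's SystemC implementation):

    # Predict the next blackbox event time by a linear fit over the last
    # n observed event timestamps, then cap the next time quantum so the
    # temporally decoupled process does not run past the predicted event.
    import numpy as np

    def predict_next_event(event_times):
        k = np.arange(len(event_times))
        a, b = np.polyfit(k, np.asarray(event_times, dtype=float), 1)
        return a * len(event_times) + b      # extrapolate one step ahead

    def next_quantum(now, event_times, q_max):
        return max(0.0, min(q_max, predict_next_event(event_times) - now))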
When a bonding wire becomes too hot, it fuses and fails. The ohmic heat that is generated in the wire can be partially dissipated to a mold package. For this cooling effect the thermal contact between wire and package is an important parameter. Because this parameter can degrade over lifetime, the fusing of a bonding wire can also occur as a long-term effect. Another important factor is the thermal power generated in the vicinity of the bond pads. Nowadays, the reliability of bond wires relies on robust dimensioning based on estimations. Smaller package sizes increase the need for better predictive methods.
The Bond Calculator, a new thermo-electrical simulation tool, is able to predict the temperature profile along a bond wire of arbitrary dimensions as a function of the applied transient current profile, the mold surrounding the wire, and the thermal contact between wire and mold.
In this paper, we closely investigate the spatial temperature profiles along different bond wires in air, as a first step towards the experimental verification of the simulation model. We use infrared microscopy to measure the thermal radiation generated along the bond wire. This is easier to perform quantitatively in air than in the mold package because of the non-negligible absorbance of the mold material in the infrared wavelength region.
The 17 SDGs, as agreed upon by the international community, are designed to be implemented across all levels of human activity. Alongside the level of international politics, this also includes the local levels, national politics, wider society, and the economic sphere. Many channels are called on to further implementation, including the transfer of technology to developing and emerging countries. Since companies are the patent holders, this must include their active participation. While the literature examines the important role of technology transfer in North-South business-to-business (B2B) partnerships, studies on technology transfer between European and African companies are scarce. Therefore, in this study we use original data from 26 interviews conducted with managers engaged in sales partnerships between German manufacturers and their distributors in African markets to examine the existence and forms of technology transfer. We find that training and marketing excellence are the predominant forms of technology transfer and, based on that, suggest a refinement of established frameworks on B2B technology transfer.
This paper experimentally evaluates the susceptibility of IT networks to the threats of HPEM (High Power Electromagnetics) and IEMI (Intentional Electromagnetic Interference). As the HPEM source, a PBG 5 (Pulse Burst Generator) adapted to a TEM (Transversal Electromagnetic) horn-type antenna and a 90 cm IRA (Impulse Radiating Antenna) type antenna is used. Different network cable types and categories with different lengths are used. The immunity of the IT network is examined and the breakdown failure rate of the system is determined for a PRF (Pulse Repetition Frequency) of 500 s⁻¹ over a duration of 10 seconds. Series of measurements were carried out, and disturbances of keyboards, mice, and switches, distortions on monitors, failures of the IT network, and even crashes of PCs were observed. It is shown, among other things, that by increasing the pulse repetition rate or frequency, generic test IT networks become more susceptible to interference. The obtained results provide another view of the susceptibility analysis of modern generic IT networks against UWB threats.
Substrate coupling is a critical failure mechanism especially in fast-switching integrated power stages controlling high-side NMOS power FETs. The parasitic coupling across the substrate in integrated power stages at rise times of up to 500 ps and input voltages of up to 40V is investigated in this paper. The coupling has been studied for the power stage of an integrated buck converter. In particular, dedicated diverting and isolation structures against substrate coupling are analyzed by simulations and evaluated with measurements from test chips in 180nm high-voltage BiCMOS. The results are compared regarding effectiveness, area as well as implementation effort and cost. Back-side metalization shows superior characteristics with nearly 100% noise suppression. Readily available p-guard ring structures bring 75% disturbance reduction. The results are applicable to advanced and future power management solutions with fully integrated switched-mode power supplies at switching frequencies >10 MHz.
The deterioration of the shielding performance of electromagnetic interference finger stock gaskets in a corrosive environment is investigated. The visualization of the real contact area shows a drastic reduction of the engaged active contact region between fingers and their mating surfaces in the presence of corrosive residues. In fact, additional openings occur besides the “T-like” holes due to the porous nature of gaskets. This leads to a strong degradation of the shielding effectiveness. A modified Bethe theory is used to estimate the equivalent circuit parameters, while the shielding effectiveness, in terms of the ratio between two transfer functions, is obtained by applying filter theory. Quantitative measurements carried out for different gasket types show good agreement with the calculated results, thus demonstrating the validity of the approach.
Micro grids often consist of energy generators, storage units, and consumers with controllers which are not prepared for integration into communication networks for energy systems. This paper presents how standards from the field of energy automation can be applied in such controllers. The data for communication interfaces can be structured according to the IEC 61850 or the VHPREADY standard. We investigate which requirements must be supported to implement such data models within the controllers. For the transmission of the data we propose the OPC UA protocol, which supports extensive security measures and which is today available for nearly all modern types of controllers and computers.
Software process improvement (SPI) has been around for decades, but it remains a critically discussed topic. In several waves, different aspects of SPI have been discussed in the past, e.g., large-scale company-level SPI programs, maturity models, success factors, and in-project SPI. It is hard to find new streams or a consensus in the community, but there is a trend coming along with agile and lean software development. Apparently, practitioners reject extensive and prescriptive maturity models and move towards smaller, faster, and continuous project-integrated SPI. Based on data from two survey studies conducted in Germany (2012) and Europe (2016), we analyze the process customization for projects and the practices for implementing SPI in the participating companies. Our findings indicate that, even in regulated industry sectors, companies increasingly adopt in-project SPI activities, primarily with the goal of continuously optimizing specific processes. With this paper, we therefore want to stimulate a discussion on how to evolve traditional SPI towards a continuous learning environment.
To evaluate the quality of a person's sleep, it is essential to identify the sleep stages and their durations. Currently, the gold standard of sleep analysis is overnight polysomnography (PSG), during which several signals are recorded, such as the EEG (electroencephalogram), EOG (electrooculogram), EMG (electromyogram), ECG (electrocardiogram), SpO2 (blood oxygen saturation), and, for example, respiratory airflow and respiratory effort. These expensive and complex procedures, applied in sleep laboratories, are invasive and unfamiliar to the subjects, which might have an impact on the recorded data. These are the main reasons why low-cost home diagnostic systems are likely to be advantageous. Their aim is to reach a larger population by reducing the number of recorded parameters. Nowadays, many wearable devices promise to measure sleep quality using only ECG and body-movement signals. This work presents an Android application developed to verify the accuracy of an algorithm published in the sleep literature. The algorithm uses ECG and body movement recordings to estimate sleep stages. The pre-recorded signals fed into the algorithm were taken from the PhysioNet online database. The obtained results were compared with those of the standard method used in PSG. The mean agreement ratios for the sleep stages REM, Wake, NREM-1, NREM-2 and NREM-3 were 38.1%, 14%, 16%, 75% and 54.3%, respectively.
The respiratory rate is a vital sign that can indicate breathing illness. To obtain it, the mechanical oscillations of the patient's body arising from chest movements must be analyzed. An inappropriate holder on which the sensor is mounted, or an inappropriate sensor position, are among the external factors that should be minimized during signal registration. This paper considers a non-invasive device placed under the bed mattress for evaluating the respiratory rate. The aim of the work is the development of an accelerometer sensor holder for this system. Normal and deep breathing signals were analyzed, corresponding to the relaxed state and to taking deep breaths. The evaluation criterion for the holder's model is its influence on the patient's respiratory signal amplitude in each state. As a result, we offer a non-invasive system for respiratory rate detection, including the mechanical component providing the most accurate respiratory rate values.
We present a compact battery charger topology for weight- and cost-sensitive applications with an average output current of 9 A, targeted for the 36 V batteries commonly found in electric bicycles. Instead of using a conventional boost converter with large DC-link capacitors, we accomplish PFC functionality by shaping the charging current into a sin² shape. In addition, a novel control scheme without input-current sensing is introduced. A priori knowledge is used to implement a feed-forward control in combination with a closed-loop output current control to maintain the target current. The use of a full-bridge/half-bridge LLC converter enables operation over a wide input-voltage range.
A fully featured prototype has been built with a peak output power of 1050W. An average output power of 400W was measured, resulting in a power density of 1.8 kW/dm³. At 9A charging current, a power factor of 0.96 was measured and the efficiency exceeds 93% on average with passive rectification.
The impact of pulse charging has been evaluated on a 400Wh battery which was charged with the proposed converter as well as CC-CV-charging for reference. Both charging schemes show similar battery surface temperatures.
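The feed-forward shape itself is simple; a sketch under stated assumptions (50 Hz mains, ideal shaping, our parameter names) shows a sin²-shaped reference whose time average equals the 9 A target:

    # Illustrative sin^2-shaped charging-current reference. With
    # i_ref(t) = 2 * I_avg * sin^2(2*pi*f*t), the time average is I_avg,
    # since sin^2 averages to 1/2 over a mains period.
    import math

    F_MAINS = 50.0   # Hz, assumed European mains frequency
    I_AVG = 9.0      # A, target average charging current

    def i_ref(t: float) -> float:
        return 2.0 * I_AVG * math.sin(2.0 * math.pi * F_MAINS * t) ** 2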
This paper aims to model wind speed time series at multiple sites. The five-parameter Johnson distribution is deployed to relate the wind speed at each site to a Gaussian time series, and the resultant m-dimensional Gaussian stochastic vector process Z(t) is employed to model the temporal-spatial correlation of wind speeds at m different sites. In general, it is computationally tedious to obtain the autocorrelation functions (ACFs) and cross-correlation functions (CCFs) of Z(t), which differ from those of the wind speed time series. In order to circumvent this correlation distortion problem, the rank ACF and rank CCF are introduced to characterize the temporal-spatial correlation of wind speeds, whereby the ACFs and CCFs of Z(t) can be obtained analytically. Then, Fourier transformation is applied to establish the cross-spectral density matrix of Z(t), and an analytical approach is proposed to generate samples of wind speeds at m different sites. Finally, simulation experiments are performed to validate the proposed methods, and the results verify that the five-parameter Johnson distribution can accurately match the distribution functions of wind speeds, and that the spectral representation method can well reproduce the temporal-spatial correlation of wind speeds.
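In translation-process notation (ours, consistent with the abstract), the construction and the rank-correlation link can be summarized as

    \[
      V_j(t) \;=\; F_{V_j}^{-1}\!\bigl(\Phi(Z_j(t))\bigr),
      \qquad j = 1,\dots,m,
    \]

where $F_{V_j}$ is the five-parameter Johnson CDF fitted at site $j$ and $\Phi$ the standard normal CDF. Because the transform is monotone, the rank correlations of the wind speeds equal those of $Z(t)$, and for a jointly Gaussian pair the classical identity

    \[
      \rho^{\mathrm{rank}} \;=\; \frac{6}{\pi}\,
      \arcsin\!\left(\frac{\rho_Z}{2}\right)
    \]

maps them back to the Pearson correlations $\rho_Z$ of $Z(t)$ in closed form.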
Recognizing human actions is a core challenge for autonomous systems, as they directly share the same space with humans. Systems must be able to recognize and assess human actions in real time. To train the corresponding data-driven algorithms, a significant amount of annotated training data is required. We demonstrate a pipeline to detect humans, estimate their pose, track them over time, and recognize their actions in real time with standard monocular camera sensors. For action recognition, we transform noisy human pose estimates into an image-like format we call the Encoded Human Pose Image (EHPI). This encoded information can then be classified using standard methods from the computer vision community. With this simple procedure, we achieve competitive state-of-the-art performance in pose-based action detection while ensuring real-time performance. In addition, we show a use case in the context of autonomous driving to demonstrate how such a system can be trained to recognize human actions using simulation data.
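A hypothetical minimal variant of such an encoding (the actual EHPI's normalization and channel layout differ) maps a window of 2-D joint coordinates into a fixed-size, image-like array that a standard CNN can classify:

    # Encode a pose sequence as an image-like array: x -> R, y -> G.
    import numpy as np

    def encode_pose_sequence(poses: np.ndarray) -> np.ndarray:
        """poses: (T, J, 2) array of per-frame joint coordinates.
        Returns a (J, T, 3) uint8 'image'."""
        t, j, _ = poses.shape
        lo, hi = poses.min(), poses.max()
        norm = (poses - lo) / max(hi - lo, 1e-6)    # scale to [0, 1]
        img = np.zeros((j, t, 3), dtype=np.uint8)
        img[..., 0] = (norm[..., 0].T * 255).astype(np.uint8)  # x channel
        img[..., 1] = (norm[..., 1].T * 255).astype(np.uint8)  # y channel
        return img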
Silicon photonic micro-ring resonators (MRR) developed on the silicon-on-insulator (SOI) platform, owing to their high sensitivity and small footprint, show great potential for many chemical and biological sensing applications such as label-free detection in environmental monitoring, biomedical engineering, and food analysis. In this tutorial, we provide the theoretical background and give design guidelines for SOI-based MRR as well as examples of surface functionalization procedures for label-free detection of molecules. After introducing the advantages and perspectives of MRR, fundamentals of MRR are described in detail, followed by an introduction to the fabrication methods, which are based on a complementary metal-oxide semiconductor (CMOS) technology. Optimization of MRR for chemical and biological sensing is provided, with special emphasis on the optimization of waveguide geometry. At this point, the difference between chemical bulk sensing and label-free surface sensing is explained, and definitions like waveguide sensitivity, ring sensitivity, overall sensitivity as well as the limit of detection (LoD) of MRR are introduced. Further, we show and explain chemical bulk sensing of sodium chloride (NaCl) in water and provide a recipe for label-free surface sensing.
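The central textbook relations behind these definitions (standard forms, not reproduced verbatim from the tutorial) are

    \[
      \lambda_{\mathrm{res}} = \frac{n_{\mathrm{eff}}\, L}{m}, \qquad
      S = \frac{\partial n_{\mathrm{eff}}}{\partial n_{\mathrm{analyte}}}
          \cdot \frac{\lambda_{\mathrm{res}}}{n_g}, \qquad
      \mathrm{LoD} = \frac{3\sigma}{S},
    \]

with $L$ the ring circumference, $m$ the resonance order, $n_g$ the group index, the first factor of $S$ the waveguide sensitivity, the second the ring sensitivity, and $\sigma$ the wavelength noise of the readout.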
Serverless computing is an emerging cloud computing paradigm with the goal of freeing developers from resource management issues. As of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other. These workloads benefit from on-demand and elastic compute resources as well as per-function billing. However, it is still an open research question to which extent parallel applications, which most often comprise complex coordination and communication patterns, can benefit from serverless computing.
In this paper, we introduce serverless skeletons for parallel cloud programming to free developers from both parallelism and resource management issues. In particular, we investigate the well-known and widely used farm skeleton, which supports the implementation of a wide range of applications. To evaluate our concepts, we present a prototypical development and runtime framework and implement two applications based on our framework: numerical integration and hyperparameter optimization, a commonly applied technique in machine learning. We report on performance measurements for both applications and discuss the usefulness of our approach.
This article illustrates a method for sensorless control of a switched reluctance motor. The time instants for switching between the working phases are determined based on the evaluation of the switching frequency of the hysteresis current controllers for appropriately selected sensing phases. This enables a simple and cost-efficient implementation. The method is compared with a pulse injection method in terms of efficiency and resolution.
OpenAPI, WADL, RAML, and API Blueprint are popular formats for documenting Web APIs. Although these formats are in general both human- and machine-readable, only the part of the format describing the syntax of a Web API is machine-understandable. Descriptions, which explain the meaning and purpose of Web API elements, are embedded as natural language text snippets into documents and target human readers but not machines. To enable machines to read and process such state-of-practice Web API documentation, we propose a Transformer model that solves the generic task of identifying a Web API element within a syntax structure that matches a natural language query. For our first prototype, we focus on the Web API integration task of matching output with input parameters and fine-tuned a pre-trained CodeBERT model on the downstream task of question answering with samples from 2,321 OpenAPI documents. We formulate the original question answering problem as a multiple-choice task: given a semantic natural language description of an output parameter (question) and the syntax of the input schema (paragraph), the model chooses the input parameter (answer) in the schema that best matches the description. The paper describes the data preparation, tokenization, and fine-tuning process and discusses possible applications of our model as part of a recommender system. Furthermore, we evaluate the generalizability and robustness of our fine-tuned model, which achieves an accuracy of 81.46% correctly chosen parameters.
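The multiple-choice formulation can be sketched with the Hugging Face transformers API; the example strings are illustrative, and the classification head here is untrained (the paper's fine-tuned weights are not assumed to be available):

    # Score candidate input parameters against an output description.
    import torch
    from transformers import AutoTokenizer, RobertaForMultipleChoice

    tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = RobertaForMultipleChoice.from_pretrained("microsoft/codebert-base")

    question = "Identifier of the order to cancel"          # output description
    choices = ["orderId: string", "customerName: string",   # input schema
               "createdAt: string"]

    enc = tok([question] * len(choices), choices,
              return_tensors="pt", padding=True, truncation=True)
    # RobertaForMultipleChoice expects (batch, num_choices, seq_len)
    inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        logits = model(**inputs).logits                      # (1, num_choices)
    print(choices[logits.argmax(-1).item()])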
In the present paper we demonstrate a novel technique applying the recently proposed approach of In-Place Appends (IPA), i.e., overwrites on Flash without a prior erase operation. IPA can be applied selectively, only to DB objects that have frequent and relatively small updates. To do so, we couple IPA with the concept of NoFTL regions, allowing the DBA to place update-intensive DB objects into special IPA-enabled regions. The decision about region configuration can be (semi-)automated by an advisor analyzing DB log files in the background.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware. During the demonstration we allow the users to interact with the system and gain hands-on experience under different demonstration scenarios.
Methods based exclusively on heart rate hardly allow differentiating between physical activity, stress, relaxation, and rest, which is why an additional sensor, such as an activity/movement sensor, is added for detection and classification. The response of the heart to physical activity, stress, relaxation, and no activity can be very similar. In this study, we observe the influence of induced stress and analyze which metrics could be considered for its detection. Changes in the Root Mean Square of Successive Differences (RMSSD) provide information about physiological changes. A set of measurements collecting the RR intervals was taken, and the intervals are used as a parameter to distinguish four different stages. Parameters like skin conductivity or skin temperature were not used, because the main aim is to keep the number of sensors and devices at a minimum and thereby increase wearability in the future.
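For reference, RMSSD is the standard heart-rate-variability statistic computed from successive RR-interval differences:

    # RMSSD: root mean square of successive differences of RR intervals.
    import numpy as np

    def rmssd(rr_intervals_ms) -> float:
        diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
        return float(np.sqrt(np.mean(diffs ** 2)))

    print(rmssd([812, 845, 790, 802, 830]))  # example RR series in ms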
Energy-efficient control of electric drives is increasingly important for electric mobility and the manufacturing industries. Online dynamic optimization of induction machines is challenging due to the computational complexity involved and the variable power losses during dynamic operation. This paper proposes a simple technique for sub-optimal online loss optimization using rotor flux linkage templates for energy-efficient dynamic operation of induction machines. Such a rotor flux linkage template is given by a rotor flux linkage trajectory which is optimal for a specific scenario and is calculated in an offline optimization process. For a specific scenario during real-time operation, the rotor flux linkage is obtained by appropriately scaling the given template.
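A hypothetical illustration of the template idea follows (the paper's actual scaling law is not reproduced): an offline-optimized flux trajectory is rescaled in amplitude and, optionally, stretched in time for the scenario at hand:

    # Rescale an offline-optimized rotor flux trajectory psi_template[k].
    import numpy as np

    def scaled_flux_template(psi_template, amp_scale, time_scale=1.0):
        psi = np.asarray(psi_template, dtype=float)
        n = len(psi)
        k = np.clip(np.arange(n) / time_scale, 0, n - 1)  # stretched axis
        return amp_scale * np.interp(k, np.arange(n), psi)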
As production workspaces become more mobile and dynamic, it becomes increasingly important to reliably monitor the overall state of the environment. In such workspaces, manipulators and other robotic systems have to be able to act autonomously together with humans and other systems within a joint workspace. Such interactions require that all components in non-stationary environments are able to perceive their state relative to each other. As vision sensors provide a rich source of information to accomplish this, we present RoPose, a convolutional neural network (CNN) based approach to estimate the two-dimensional joint configuration of a simulated industrial manipulator from a camera image. This pose information can further be used by a novel targetless calibration setup to estimate the pose of the camera relative to the manipulator’s space. We present a pipeline to automatically generate synthetic training data and conclude with a discussion of the potential use of the same pipeline to acquire real image datasets of physically existing robots.
RoPose-Real: real world dataset acquisition for data-driven industrial robot arm pose estimation
(2019)
Smart sensory systems are needed in dynamic and mobile workspaces where industrial robots are mounted on mobile platforms. Such systems should be aware of flexible and non-stationary workspaces and able to react autonomously to changing situations. Building upon our previously presented RoPose system, which employs a convolutional neural network architecture trained on purely synthetic data to estimate the kinematic chain of an industrial robot arm, we now present RoPose-Real. RoPose-Real extends the prior system with a comfortable and targetless extrinsic calibration tool that allows for the production of automatically annotated datasets for real robot systems. Furthermore, we use the novel datasets to train the estimation network with real-world data. The extracted pose information is used to automatically estimate the pose of the observing sensor relative to the robot system. Finally, we evaluate the performance of the presented subsystems in a real-world robotic scenario.