This paper introduces a highly scalable heteromodular origami art technique for constructing 3D framework structures using elementary struts and connectors folded from uncut sheets of standard A4 office paper. The presented technique, named ZEBRA, allows the design of meter-scale architectural objects, such as truss bridges and towers, which are capable of bearing substantial mechanical loads. Moving parts, ranging from simple levers to complete multi-bar linkages, can be integrated into static frameworks using a set of kinematic extensions. An overview is given of how the ZEBRA system can be used to teach university students various theoretical and practical aspects of the engineering sciences in an entertaining and hands-on way.
In order to evaluate the performance of different stapes prosthesis types, a coupled finite element (FE) model of the human ear was developed. First, the middle-ear FE model was developed and validated using middle-ear transfer function measurements available in the literature, including pathological cases. Then, the inner-ear FE model was developed and validated using tonotopy, impedance, and cochlear amplification level curves from the literature. Both models are based on pre-existing research with some improvements and were combined into one coupled FE model. The stapes in the coupled FE ear model was replaced with a model of a stapes prosthesis to create a reconstructed ear model that can be used to estimate how different types of prostheses perform relative to each other as well as to the natural ear. This will help in designing new, innovative types of stapes prostheses or other middle-ear prostheses, as well as in improving those already available on the market.
The Virtual Power Plant Neckar-Alb is a demonstration platform for the operation, optimization and control of distributed energy resources that can produce, store or consume electric energy. A heterogeneous set of distributed energy devices has been installed on the campus of Reutlingen University by the Reutlingen Energy Centre (REZ) of the School of Engineering. The devices have been combined into local microgrids and connected to an operative central power plant with additional participants. The demonstration platform serves students, researchers and industry experts for education and for the investigation of new technologies, devices and software.
The purpose of this article is to provide insight into a new, simple forecasting method based on a state-estimation algorithm known as the Kalman filter. While the accuracy of such an algorithm is not comparable to state-of-the-art forecasting algorithms for PV power production, it does not require any internet connection, fisheye cameras or time-intensive training. The algorithm was tested with several months of real high-resolution data, with adequate results for the intended application: the minimization of the necessary spinning reserve in a PV-diesel hybrid system to increase the solar fraction and reduce diesel consumption.
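The abstract does not give implementation details, but the core idea can be sketched with a scalar random-walk Kalman filter whose one-step prediction serves as the forecast. All names and the noise variances q and r below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_forecast(measurements, q=0.01, r=0.5):
    """One-step-ahead PV-power forecast with a scalar Kalman filter.

    Random-walk state model: x_k = x_{k-1} + w,  z_k = x_k + v.
    q is the process-noise variance and r the measurement-noise
    variance (both illustrative tuning values).
    """
    x, p = measurements[0], 1.0   # initial state estimate and variance
    forecasts = []
    for z in measurements[1:]:
        # predict: the one-step forecast is simply the current estimate
        p += q
        forecasts.append(x)
        # update the estimate with the new measurement
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
    return np.array(forecasts)
```

The ratio q/r trades responsiveness against smoothing: a larger q makes the forecast follow sudden irradiance changes faster at the cost of noisier predictions.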
The paper illustrates the status quo of a research project for the development of a control system that enables CHP units to produce electricity on demand through intelligent management of the heat storage tank. The focus of the project is twofold: first, the compensation of the fluctuating power production of the renewable energy sources solar and wind; second, a reduction of the load on the power grid by better matching local electricity demand and production.
In detail, the general control strategy is outlined, the method used for forecasting heat and electricity demand is illustrated, and a correlation method for the temperature distribution in the heat storage tank based on a sigmoid function is proposed. Moreover, the simulation model for verification and optimization of the control system and the two field test sites for implementing and testing the system are introduced.
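As a rough illustration of the proposed sigmoid correlation, the thermocline in a stratified storage tank can be modelled as a logistic transition over tank height and fitted to a handful of sensor readings. The functional form, sensor layout, and grid-search fit below are assumptions for illustration, not the project's actual method.

```python
import numpy as np

def sigmoid_profile(h, t_bot, t_top, h0, s):
    """Tank temperature over normalized height h: a sigmoid thermocline
    from t_bot to t_top, centred at h0 with steepness parameter s."""
    return t_bot + (t_top - t_bot) / (1.0 + np.exp(-(h - h0) / s))

def fit_thermocline(heights, temps):
    """Grid-search fit of thermocline position h0 and steepness s.
    Bottom/top temperatures are taken from the outermost sensors."""
    t_bot, t_top = temps[0], temps[-1]
    best = (None, None, np.inf)
    for h0 in np.linspace(heights[0], heights[-1], 201):
        for s in np.linspace(0.01, 0.5, 100):
            err = np.sum((sigmoid_profile(heights, t_bot, t_top, h0, s) - temps) ** 2)
            if err < best[2]:
                best = (h0, s, err)
    return best[:2]
```

With only two fitted shape parameters, a few vertically distributed temperature sensors suffice to estimate the usable hot-water volume above the thermocline.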
Instead of waiting for and constantly adapting to the details of political interventions, utilities need to view their environment from a holistic perspective. The unique position of the company, be it a local utility, a bigger player, or an international utility specializing in specific segments, has to be the basis of its goals and strategies. But without a consistent translation of these goals and strategies into processes, structures, and company culture, a strategy remains pure theory. Companies need to engage in a continuous learning process. This means being willing to pass on strategies, to slow down or speed up, to work from a different angle, and so on.
Induced by a societal decision to phase out conventional energy production, the so-called Energiewende (energy transition), the rise of distributed generation acts as a game changer within the German energy market. The share of electricity produced from renewable resources increased to 31.6% in 2015 (UBA, 2016), with a targeted share of renewables in the electricity mix of 55-60% in 2035 (RAP, 2015), opening perspectives for new products and services. Moreover, the rapidly increasing degree of digitization enables innovative and disruptive business models in niches at the grid's edge that might be the winners of the future. It also stimulates the market entry of newcomers and competitors from other sectors, such as IT or telecommunications, challenging the incumbent utilities. For example, virtual and decentralized marketplaces for energy are emerging, a trend that is likely to be accelerated considerably by blockchain technology if the regulatory environment is adjusted accordingly. Consequently, the energy business is being turned upside down, with customers now at the wheel. For instance, more than one-third of renewable production capacities are owned by private persons (Trendsearch, 2013). The objective of this chapter is therefore to examine private energy consumer and prosumer segments and their needs in order to derive business models for the various decentralized energy technologies and services. Subsequently, success factors for dealing with the changing market environment and consequences of the potentially disruptive developments for the market structure are evaluated.
After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. This book presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), a novel interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic approach, similar to the roundup of a sheep herd, is to let autonomous layout modules interact with each other inside a successively tightened layout zone. Considering various principles of self-organization, remarkable overall solutions can result from the individual, local, selfish actions of the modules. Displaying this fascinating phenomenon of emergence, examples demonstrate SWARM’s suitability for floorplanning purposes and its application to practical place-and-route problems. From an academic point of view, SWARM combines the strengths of procedural generators with the assets of optimization algorithms, thus paving the way for a new automation paradigm called bottom-up meets top-down.
On-chip metallization, especially in modern integrated BCD technologies, is often subject to high current densities and pronounced temperature cycles due to heat dissipation from power switches like LDMOS transistors. This paper continues the work on a sensor concept where small sense lines are embedded in the metallization layers above the active area of a switching LDMOS transistor. The sensors show a significant resistance change that correlates with the number of power cycles. Furthermore, influences of sense line layer, geometry and the dissipated energy are shown. In this paper, the focus lies on a more detailed analysis of the observed change in sense line resistance.
Lithography hotspot (LH) detection using deep learning (DL) has received much attention in recent years, mainly because DL approaches achieve better accuracy than traditional, state-of-the-art programming approaches. The purpose of this study is to compare existing data augmentation (DA) techniques for integrated circuit (IC) mask data using DL methods. DA refers to the process of creating new samples similar to the training set, thereby helping to reduce the imbalance between classes and improving the performance of the DL system. Experimental results suggest that the DA methods increase the overall performance of DL models for hotspot detection tasks.
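The specific DA techniques compared in the study are not listed here, but the standard geometric augmentations for 2-D mask clips can be sketched as follows. Whether each transform is label-preserving for a given lithography process (orientation can matter optically) is an assumption that must be verified per technology.

```python
import numpy as np

def augment_mask(clip):
    """Generate the eight dihedral-group variants of a 2-D mask clip.

    Applies the four 90-degree rotations plus a mirrored copy of each,
    which is the usual geometric augmentation set for square image clips.
    """
    variants = []
    for k in range(4):                    # 0, 90, 180, 270 degree rotations
        rot = np.rot90(clip, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))   # mirrored copy of each rotation
    return variants
```

Applied to every training clip, this multiplies the (typically scarce) hotspot class by eight without touching the layout geometry itself.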
We present our robot framework and our efforts to make face analysis more robust to self-occlusion caused by head pose. By using a lightweight linear fitting algorithm, we are able to obtain 3D models of human faces in real time. The combination of adaptive tracking and 3D face modelling for the analysis of human faces serves as a basis for further research on human-machine interaction on our SCITOS robot platform.
Current clinical practice is often unable to identify the causes of conductive hearing loss in the middle ear with sufficient certainty without exploratory surgery. Besides the large uncertainties due to interindividual variances, only partially understood cause–effect principles are a major reason for the hesitant use of objective methods such as wideband tympanometry in diagnosis, despite their high sensitivity to pathological changes. For a better understanding of objective metrics of the middle ear, this study presents a model that can be used to reproduce characteristic changes in metrics of the middle ear by altering local physical model parameters linked to the anatomical causes of a pathology. A finite-element model is therefore fitted with an adaptive parameter identification algorithm to results of a temporal bone study with stepwise and systematically prepared pathologies. The fitted model is able to reproduce the measured quantities (reflectance, impedance, and umbo and stapes transfer functions) well for normal ears and ears with otosclerosis, malleus fixation, and disarticulation. In addition to a good representation of the characteristic influences of the pathologies in the measured quantities, a clear assignment of identified model parameters and pathologies consistent with previous studies is achieved. The identification results highlight the importance of the local stiffness and damping values in the middle ear for correct mapping of pathological characteristics and address the challenges of limited measurement data and wide parameter ranges from the literature. The high sensitivity of the model with respect to pathologies indicates a high potential for application in model-based diagnosis.
Tuned mass dampers (TMDs) are applied successfully to protect tall buildings in seismically endangered zones. In many applications the dampers are placed along the height of the edifice to reduce damage during an earthquake. The dimensioning of TMDs is a multidimensional optimisation problem with many local optima. To find the absolute best or a very good design, advanced optimisation strategies have to be applied. Bionic optimisation offers different methods to deal with such tasks but requires many repeated studies of the building and damper designs. To improve the speed of the analysis, the authors propose a reduced model of the building including the dampers. A series of consecutive generations shows a growing capacity to reduce the impact of an earthquake on the building. The proposals found help to dimension the dampers. A detailed analysis of the building under earthquake loading may then yield an efficient design.
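As a sketch of the generation-based bionic approach on a reduced model, the building can be condensed to a single modal mass with one attached TMD, and the damper's stiffness and damping evolved to minimize the peak frequency response. All numerical values and the simple evolution scheme below are illustrative assumptions, not the authors' data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# reduced single-mode building model with one TMD (illustrative values)
M1, K1, C1 = 1.0e5, 4.0e6, 2.0e3      # modal mass, stiffness, damping
M2 = 0.05 * M1                        # 5 % damper mass

def peak_response(k2, c2, omegas=np.linspace(1.0, 12.0, 200)):
    """Largest steady-state amplitude of the main mass over a frequency sweep
    for unit harmonic forcing, from the 2-DOF dynamic stiffness matrix."""
    peaks = []
    for w in omegas:
        z = np.array([
            [K1 + k2 - M1 * w**2 + 1j * w * (C1 + c2), -(k2 + 1j * w * c2)],
            [-(k2 + 1j * w * c2), k2 - M2 * w**2 + 1j * w * c2]])
        x = np.linalg.solve(z, np.array([1.0, 0.0]))
        peaks.append(abs(x[0]))
    return max(peaks)

def evolve(generations=20, pop_size=16):
    """Simple evolutionary search over TMD stiffness k2 and damping c2:
    keep the best half of each generation, mutate it multiplicatively."""
    pop = rng.uniform([1e5, 1e2], [5e5, 1e4], size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([peak_response(k, c) for k, c in pop])
        parents = pop[np.argsort(scores)[:pop_size // 2]]         # best half
        children = parents * rng.normal(1.0, 0.1, parents.shape)  # mutation
        pop = np.vstack([parents, children])
    scores = np.array([peak_response(k, c) for k, c in pop])
    return pop[np.argmin(scores)], scores.min()
```

The cheap 2-DOF surrogate makes each fitness evaluation a handful of matrix solves, which is what allows the many repeated evaluations that population-based methods demand.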
Service robots need to be aware of persons in their vicinity in order to interact with them. People tracking enables the robot to perceive persons by fusing the information of several sensors. Most robots rely on laser range scanners and RGB cameras for this task. The thesis focuses on the detection and tracking of heads. This allows the robot to establish eye contact, which makes interactions feel more natural.
Developing a fast and reliable pose-invariant head detector is challenging. The head detector that is proposed in this thesis works well on frontal heads, but is not fully pose-invariant. The thesis therefore further explores adaptive tracking to keep track of heads that do not face the robot. Finally, the head detector and the adaptive tracker are combined within a new people tracking framework, and experiments show its effectiveness compared to a state-of-the-art system.
One of the challenges in condition monitoring systems is residual lifetime prediction. This prediction is based on statistical methods, on physical knowledge about the considered process, or on a combination of these approaches. Physical knowledge of the system is a result of the long-term experience of process operators. However, it can also be gained by analyzing appropriately designed process models. The additional benefit of such models is that particular effects and their impact on the process behavior can be analyzed in detail, without plant operation and in a shorter time. The current contribution, developed in the framework of the research project Model Based Hierarchic Condition Monitoring, presents such models for the condition monitoring of roller chains. First, existing high-order dynamic models of such chains, given by nonlinear differential equations, are extended to incorporate effects that occur due to a deterioration of the chain condition. Then, a simple model is developed and compared to the high-order model. Based on the two models, the change in the process behavior due to a deterioration of the roller chain condition is analyzed to illustrate that these models can be used in future research in the above-mentioned project to better predict the residual lifetime of the considered roller chains.
Simulation models of the middle ear have rarely been used for diagnostic purposes due to their limited predictive ability with respect to pathologies. One big challenge is the large uncertainty and ambiguity in the choice of material parameters of the model.
Typically, the model parameters are determined by fitting simulation results to validation measurements. In a previous study, it was shown that fitting the model parameters of a finite-element model using the middle-ear transfer function and various other measurable output variables from normal ears alone is not sufficient to obtain a good predictive ability of the model for pathological middle-ear conditions. However, the inclusion of validation measurements on one pathological case resulted in a very good predictive ability also for other pathological cases. Although the parameter set found was plausible in all aspects, it was not yet possible to draw conclusions about its uniqueness, accuracy, or uncertainty.
To answer these questions, statistical solution approaches are used in this study. Using the Monte Carlo method, a large number of plausible model data sets are generated that correctly represent the normal and pathological middle-ear characteristics in terms of various output variables such as impedance, reflectance, and umbo and stapes transfer functions. Subsequent principal component analyses (PCA) allow conclusions to be drawn about correlations, quantitative limits, and the statistical density of parameter values.
Furthermore, applying inverse PCA yields numerous plausible parameterizations of the middle-ear model, which can be used for data augmentation and for training a neural network capable of distinguishing between a normal middle ear and pathologies such as otosclerosis, malleus fixation, and disarticulation based on objectively measurable quantities such as impedance, reflectance, and umbo velocity.
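The Monte Carlo, PCA, and inverse-PCA pipeline can be sketched generically as follows. The parameter count, the plausibility criterion (a stand-in coupling between two parameters), and the sampling ranges are illustrative placeholders, not the actual middle-ear model quantities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo step: sample model parameters on a log scale
# (five generic stiffness/damping values; names and ranges are illustrative)
n_params = 5
log_samples = rng.uniform(-1.0, 1.0, size=(20000, n_params))

# keep only "plausible" sets, i.e. those whose simulated output would match
# the measurements; here a stand-in criterion couples the first two parameters
plausible = log_samples[np.abs(log_samples[:, 0] + log_samples[:, 1]) < 0.1]

# PCA via SVD of the centred plausible sets
mean = plausible.mean(axis=0)
u, s, vt = np.linalg.svd(plausible - mean, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance fraction per component

# inverse PCA: draw new component coefficients with the observed spread and
# map them back to parameter space, yielding augmented parameterizations
coeffs = rng.normal(0.0, s / np.sqrt(len(plausible)), size=(1000, n_params))
augmented = coeffs @ vt + mean
```

Because the low-variance principal directions encode the plausibility constraint, samples drawn this way stay close to the plausible manifold, which is what makes them usable as training data for a classifier.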
Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms, yet so far very few studies have addressed it. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation, and the influence of image-point location uncertainty. It is shown that the latter is very critical, while the others also play important roles. Experiments with a well-known semi-dense visual SLAM approach, used in a monocular visual odometry mode, are also presented. The experiments show that not including sensor bias and scale-factor uncertainty is very detrimental to the accuracy of the simulation results.
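The influence of baseline length and image-point uncertainty on triangulation can be illustrated with a minimal two-view stereo model: depth follows from disparity, and noise on the image points propagates into depth error. The focal length, depth, and noise levels below are arbitrary illustrative values, not those of the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def depth_rmse(baseline, pixel_noise, depth=10.0, f=500.0, n=5000):
    """RMS depth error of two-view triangulation for a point at `depth`
    metres with focal length `f` pixels and Gaussian image-point noise.

    Ideal stereo disparity is d = f * b / z; noise on both image points
    adds sqrt(2) * pixel_noise to the disparity.
    """
    disparity = f * baseline / depth
    noisy = disparity + rng.normal(0.0, pixel_noise, n) * np.sqrt(2.0)
    est = f * baseline / noisy            # triangulated depth estimates
    return np.sqrt(np.mean((est - depth) ** 2))
```

Running this for a grid of baselines and noise levels reproduces the qualitative findings: depth error shrinks roughly in proportion to baseline length and grows with image-point uncertainty.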
After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. In contrast to the highly automated synthesis tools in the digital domain (coping with the quantitative difficulty of packing more and more components onto a single chip – a desire well known as More Moore), analog layout automation struggles with the many diverse and heavily correlated functional requirements that turn the analog design problem into a More than Moore challenge. Facing this qualitative complexity, seasoned layout engineers rely on their comprehensive expert knowledge to consider all design constraints that uncompromisingly need to be satisfied. This usually involves both formally specified and nonformally communicated pieces of expert knowledge, which entails an explicit and implicit consideration of design constraints, respectively.
Existing automation approaches can basically be divided into optimization algorithms (where constraint consideration occurs explicitly) and procedural generators (where constraints can only be taken into account implicitly). As investigated in this thesis, these two automation strategies follow two fundamentally different paradigms denoted as top-down automation and bottom-up automation. The major trait of top-down automation is that it requires a thorough formalization of the problem to enable a self-intelligent solution finding, whereas a bottom-up automatism (controlled by parameters) merely reproduces solutions that have been preconceived by a layout expert in advance. Since the strengths of one paradigm may compensate for the weaknesses of the other, it is assumed that a combination of both paradigms (called bottom-up meets top-down) has much more potential to tackle the analog design problem in its entirety than either optimization-based or generator-based approaches alone.
Against this background, the thesis at hand presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic principle, similar to the roundup of a sheep herd, is to let responsive mobile layout modules (implemented as context-aware procedural generators) interact with each other inside a user-defined layout zone. Each module is allowed to autonomously move, rotate and deform itself, while a supervising control organ successively tightens the layout zone to steer the interaction towards increasingly compact (and constraint compliant) layout arrangements. Considering various principles of self-organization and incorporating ideas from existing decentralized systems, SWARM is able to evoke the phenomenon of emergence: although each module only has a limited viewpoint and selfishly pursues its personal objectives, remarkable overall solutions can emerge on the global scale.
Several examples exhibit this emergent behavior in SWARM, and it is particularly interesting that even optimal solutions can arise from the module interaction. Further examples demonstrate SWARM’s suitability for floorplanning purposes and its application to practical place-and-route problems. The latter illustrates how the interacting modules take care of their respective design requirements implicitly (i.e., bottom-up) while simultaneously paying respect to high level constraints (such as the layout outline imposed top-down by the supervising control organ). Experimental results show that SWARM can outperform optimization algorithms and procedural generators both in terms of layout quality and design productivity. From an academic point of view, SWARM’s grand achievement is to tap fertile virgin soil for future works on novel bottom-up meets top-down automatisms. These may one day be the key to close the automation gap in analog layout design.
The hearing contact lens® (HCL) is a new type of hearing aid device. One of its main components is a piezoelectric actuator. In order to evaluate and maximize the HCL's performance, a model of the HCL coupled to the middle ear was developed using a finite-element approach. The model was validated step by step, starting with the HCL alone. To validate the HCL model, vibrational measurements on the HCL were performed using a laser Doppler vibrometer (LDV). Then, a silicone cap was placed onto the HCL to provide an interface between the HCL and the tympanic membrane of the middle-ear model, and additional LDV measurements on temporal bones were performed to validate the coupled model. The coupled model was used to evaluate the equivalent sound pressure of the HCL. Moreover, a deeper insight was gained into the contact between the HCL and the tympanic membrane and its effects on the HCL's performance. The model can be used to investigate the sensitivity of geometrical and material parameters with respect to performance measures of the HCL and to evaluate the feedback behavior.
Industrial hybrid systems with high PV penetration: performance, analysis and key success factors
(2016)
Since the first industrial-scale hybrid system was installed by SMA in 2012, the performance of several hybrid systems around the world has been monitored. This paper analyses the performance of SMA's largest industrial-scale PV-diesel hybrid system, installed in Bolivia in 2014, and summarizes the lessons learned from managing this system with large-scale energy storage. The paper concludes with an outlook for future hybrid systems.
Nowadays, software development plays an important role in the entire value chain of production machine and plant engineering. An important component for the rapid development of high-quality software is virtual commissioning: the real machine is described by simulation models, so the control software can be verified at an early stage against these models. Since production machines are produced highly individually or in very small series, the challenge of virtual commissioning is to reduce the effort of developing the simulation models. A systematic reuse of the simulation models and the control software for different variants of a machine is therefore essential for economic use. This necessarily requires a consideration of the variability that may occur between the production machines. This contribution analyzes the question of how to systematically deal with software-related variability in the context of virtual commissioning. For this purpose, the characteristics of virtual commissioning and variability handling are first considered. Subsequently, the requirements for a so-called variant infrastructure for virtual commissioning are analyzed and possible solutions are discussed.
Urgent action is needed to keep the chance of limiting global warming to 1.5°C or even 2.0°C. Current outlooks by the IPCC and many other organisations forecast that this will be impossible at the current pace of emission 'reductions'; Germany has already hit 1.5°C of warming this year. Across 2019, particularly during the UN New York Climate Summit, numerous organisations declared their ambition to become net carbon neutral. Amongst these were investors and companies, including quite a number of German ones.
We apply a mixed-methods approach, utilising data gathered from approx. 900 companies after Climate Week in the context of the Energy Efficiency Index of German Industry (EEI), along with media research focusing on announced decarbonisation plans and initiatives pledging climate action.
With this, we analyse how German companies in the manufacturing sectors react to rising societal pressure and emerging policies, particularly what measures they have taken or plan to implement to reduce the footprint of their company, their products and their supply chain. In this, we particularly analyse whether and in what way energy and resource consumption, as well as carbon emissions, are considered in the development and lifecycle of the goods manufactured. This is of huge relevance, as these goods determine the future footprint of buildings, vehicles and industry.
Regarding the supply chain, current articles indicate that small and medium-sized enterprises (SMEs) are particularly challenged by increasing demands from their large corporate clients and an alleged lack of preparedness to take and afford prompt decarbonisation action themselves (Buchenau et al. 2019). Notably, the automotive industry recently announced new models that will be 100% carbon neutral all the way through (ibid). We thus analyse if and how factors such as company size, energy intensity and sector affiliation influence a company's plan to fully decarbonise. Ownership structure and corporate culture, it appears, significantly impact the degree of decarbonisation action underway.
The limited interfaces of today's IC design environments for editing PCell parameters hinder a solid advancement towards more complex analog PCell modules. This paper presents Hierarchical Instance Parameter Editing (HIPE), a highly flexible concept for the customization of PCell sub-instances. Introducing a new type of parameter, HIPE facilitates the dynamic creation of multi-level editing forms reflecting the actual contents of a PCell instance. This approach greatly improves a PCell's ease-of-use, substantially simplifies PCell development, and allows for a hierarchical execution of parameter validation callbacks. Our HIPE implementation has been integrated into a professional PCell development tool and represents a key enabling technology for upcoming generations of high-level hierarchical PCells.
An improved gate drive circuit is provided for a power device, such as a transistor. The gate driver circuit may include: a current control circuit; a first secondary current source that is used to control the switching transient during turn-off of the power transistor; and a second secondary current source that is used to control the switching transient during turn-on of the power transistor. In operation, the current control circuit operates, during turn-on of the power transistor, to source a gate drive current to a control node of the power transistor and, during turn-off of the power transistor, to sink a gate drive current from the control node of the power transistor. The first and second secondary current sources adjust the gate drive current to control the voltage or current rate of change and thereby the overshoot during the switching transient.
Compared to diesel or gasoline, using compressed natural gas as a fuel allows for significantly decreased carbon dioxide emissions. With the benefits of this technology fully exploited, substantial increases of engine efficiency can be expected in the near future. However, this will lead to exhaust gas temperatures well below the range required for the catalytic removal of residual methane, which is a strong greenhouse gas. By combination with a countercurrent heat exchanger, the temperature level of the catalyst can be raised significantly in order to achieve sufficient levels of methane conversion with minimal additional fuel penalty. This thesis provides fundamental theoretical background of these so-called heat-integrated exhaust purification systems. On this basis, prototype heat exchangers and appropriate operating strategies for highly dynamic operation in passenger cars are developed and evaluated.
This book covers the fundamental knowledge of layout design from the ground up, addressing both physical design, as generally applied to digital circuits, and analog layout. Such knowledge provides the critical awareness and insights a layout designer must possess to convert a structural description produced during circuit design into the physical layout used for IC/PCB fabrication.
Due to the lack of sophisticated component libraries for microelectromechanical systems (MEMS), highly optimized MEMS sensors are currently designed using a polygon-driven design flow. The advantage of this design flow is its accurate mechanical simulation, but it lacks a method for analyzing the dynamic parasitic electrostatic effects arising from the electric coupling between (stationary) wiring and structures in motion. In order to close this gap, we present a method that enables the parasitics arising from in-plane sensor-structure motion to be extracted quasi-dynamically. With the method's structural-recognition feature, we can analyze and optimize dynamic parasitic electrostatic effects.
The field of additive manufacturing has experienced an incredible surge over the last few years. Affordable end-user printers, which open a wide variety of new production opportunities, have spread throughout the so-called "maker community", while overblown reports about the possibilities on offer have whipped up a real hype. Although new ways of making products have now become available, what can actually be achieved often lags behind people's expectations. On the one hand, this is down to component quality and the materials being used; on the other hand, most printers found in the end-user sphere use the FDM or DLP process, which significantly restricts the printing methods that can be utilized.
Annotations of subject IDs in images are very important as ground truth for face recognition applications and news retrieval systems. Face naming is becoming a significant research topic in news image indexing applications. By exploiting the uniqueness of names, face naming is transformed into a multiple instance learning (MIL) problem with an exclusive constraint, namely the eMIL problem. First, the positive bags and the negative bags are automatically annotated by a hybrid recurrent convolutional neural network and a distributed affinity propagation cluster. Next, positive instance selection and updating are used to reduce the influence of false-positive bags and to improve performance. Finally, the max exclusive density (Max-ED) and iterative Max-ED algorithms are proposed to solve the eMIL problem. The experimental results show that the proposed algorithms achieve a significant improvement over other algorithms.
To improve the energy conversion efficiency of organic solar cells, the clue may lie in the development of devices inspired by the efficient light harvesting mechanism of some aquatic photosynthetic microorganisms that are adapted to low light intensity. Consequently, we investigated the pathways of excitation energy transfer (EET) from successive light harvesting pigments to the low energy level inside the phycobiliprotein antenna system of Acaryochloris marina, a cyanobacterium, using time-resolved absorption difference spectroscopy with a time resolution of 200 fs. The objective was to understand the actual biochemical process and pathways that determine the EET mechanism. Anisotropy of the EET pathway was calculated from the absorption change trace in order to determine the contribution of excitonic coupling. The results reveal a new energy relaxation pathway of 14 ps inside the phycocyanin component, which runs from phycocyanin to the terminal emitter. The bleaching of the 660 nm band suggests a broader absorption of the terminal emitter between 660 nm and 675 nm. Further, there are trimer depolarization kinetics of 450 fs and 500 fs in high and low ionic strength, respectively, which arise from the relaxation of the β84 and α84 in adjacent monomers of phycocyanin. Under conditions of low ionic strength buffer solution, the evolution of the kinetic amplitude during the depolarization of the trimer is suggestive of trimer conservation within the phycocyanin hexamer. The anisotropy values were 0.38 and 0.40 in high and in low ionic strength, respectively, indicating that there is no excitonic delocalization in the high energy level of phycocyanin hexamers.
Despite strong political efforts across Europe, small and medium-sized enterprises (SMEs) seem to neglect adopting effective measures for energy efficiency. Adopting a cultural perspective and based on a study among industrial SMEs in Southern Germany, we investigate what drives decisions for energy efficiency in SMEs and how energy management contributes to closing the energy efficiency gap. The study follows a mixed-methods approach and combines eleven ethnographic case studies with a quantitative survey among 500 manufacturing SMEs in Southern Germany.
The main contribution of the paper is to offer a perspective on energy efficiency in SMEs that goes beyond the diffusion of energy-efficient technology. Our results strongly suggest that energy efficiency in industrial companies should not be reduced solely to decisions about technical measures. We shed light on how energy efficiency is established and on the importance of energy management in SMEs.
Our study shows that energy efficiency is well established in the investigated SMEs. At the same time, this establishment cannot be explained by company size or energy demand. Instead, the contextual environment of the company and the individual leadership of the company appear to have a more substantial influence. The embedding of energy efficiency in corporate strategy, a broad spectrum of different practices, the involvement of employees, actions for raising awareness in everyday work life, and the distribution of attention through organizational measures constitute the driving forces in establishing energy efficiency, and these drivers can be subsumed under the label of energy management.
In an effort to make the cultural and institutional aspects of energy efficiency in industrial organizations more visible, this article introduces a theoretical framework of decision-making processes. Taking a sociological perspective and viewing organizations as cultural systems embedded in wider social contexts, I have developed a multilevel framework addressing the institutional, organizational, and individual dimensions shaping decisions on energy efficiency. The framework's development is based on qualitative empirical fieldwork and integrates insights from organizational theory: neo-institutional theory, the attention-based view of the firm, and organizational culture theories. I conclude that decisions on energy efficiency are the results of problematization and theorization processes. These processes emerge between the institutional issue-field, the organization, and its members. The model explains decisions as shaped by the environment (external and material), organizational processes (energy-efficiency practices, climate, and culture), and individuals' characteristics. The framework serves several purposes: introducing a meta-theory of decision making, providing a concept for empirical analysis, and enabling connectivity to the research on barriers.
Energy Communities explores the core potential systemic benefits and costs of engaging consumers in communities, particularly in relation to the energy transition. The book evaluates the conditions under which energy communities might be regarded as customer-centered, market-driven, and welfare-enhancing. The book also reviews the prevalence and sustainability of energy communities and whether these features are likely to change as opportunities for distributed energy grow. Sections cover the identification of welfare considerations for citizens and for society on a local and national level, and from social, economic, and ecological perspectives, while also considering different community designs and evolving business models.
Disclosed is an electronic drive circuit and a drive method. The drive circuit includes an output; a first output transistor comprising a control node and a load path, wherein the load path is coupled between the output and a first supply node; a voltage regulator configured to control a voltage across the load path of the first output transistor; and a first driver configured to drive the first output transistor based on a first control signal.
A digital twin - a replica of energy devices - was established in the MATLAB and Simulink computing environment. It continuously simulates their operation and is time-synchronized and connected to the central energy management and control system of a virtual power plant. The model can be used as a platform for testing device performance under various conditions, working schedules, and new optimization options.
A device including a first and a second monitoring unit, the first monitoring unit detecting a first voltage potential and the second monitoring unit detecting a second voltage potential, the monitoring units comparing the first and second voltage potentials to the value of the supply voltage and activating a control unit as a function of the comparisons, the control unit determining a switching point in time of a second power transistor, and an arrangement being present which generates a current when the second power transistor is being switched on, the current changing the first voltage potential, the control unit activating a first power transistor when the first voltage potential has the same value as the supply voltage, so that the first power transistor is de-energized.
The generous feed-in tariffs (FiTs) introduced in Germany, which resulted in major growth of decentralized solar photovoltaic (PV) systems, will phase out in the coming years, leaving many of the existing distributed generation assets stranded. This challenge creates an opportunity for community-focused energy utilities, such as Elektrizitätswerke Schönau eG (EWS) based in Schönau, Germany, to take a new approach to assisting their customers in the transition to a more sustainable future. This chapter describes how EWS is developing products and offering community-based solutions, including peer-to-peer trading using automated platforms. Such innovative offerings may lead to successful differentiation in a competitive and highly decentralized future.
This book shows how the objectification-oriented controlling approach can ensure the successful management of a company in a challenging and competitive environment characterized by increasing complexity, dynamics, and uncertainty. The objectification-oriented controlling approach outlined in this book is based on the philosophy of a service provider who supports managers and decision-makers. This idea is well reflected in the term "business partner", which shows that only management and controlling together are able to ensure the success of a company. The author combines scientific and practical evidence to derive the objectification-oriented controlling approach. The challenges of globalization, a stringent alignment with company value, and the objectification approach itself are the main building blocks. Based on these criteria for success, the controlling approach can be individually shaped for each specific company. This book is aimed at students and practitioners who want to learn more about improving business using a state-of-the-art support function: controlling.
Coupling the electricity and heat sectors is one of the most necessary actions for a successful energy transition. Efficient electrification of space heating and domestic hot water generation is needed for buildings that are not connected to any district heating network, as distributed heating demand is currently met largely by fossil fuels. Hence, hybrid energy systems will play a pivotal role in the energy transition in buildings. A heat pump running on PV electricity is one of the most widely discussed combinations for this purpose. In this paper, a heuristic optimization method for the operation of a heat pump, driven by the objective of maximum onsite PV electricity utilization, is presented. In this context, the thermal flexibility of the building and a thermal energy storage (TES) for the generation of domestic hot water (DHW) are activated in order to shift the operation of the heat pump to times of PV generation. Yearly simulations for a system consisting of heat pump, PV modules, a building with floor heating installation, and a TES for DHW generation are carried out. Variation parameters for the simulation include the room temperature amplitude (0.5, 1, 1.5 and 2 K) around the mean room temperature (21 °C), the PV capacity (4, 6, 8 and 10 kW), and the type of heat pump (ground-source and air-source). The yearly energy balances show that buildings offer significant thermal storage capacity, avoiding an additional, large TES for space heating and improving the share of onsite PV electricity utilization. With the introduction of a battery, which has also been analyzed for different sizes (1.9, 4.8, 7.7 and 10.6 kWh), the share of onsite PV electricity utilization can be improved further. However, for a bigger battery, thermal flexibility supplemented by a varying room temperature amplitude does not improve the share of onsite PV electricity utilization.
Nevertheless, even with a battery, not more than 50% of the electrical load, including the operation of the heat pump, can be covered by PV electricity for the specific system under investigation. This is noteworthy on the one hand, since it indicates that a hybrid heating system consisting of heat pump and PV cannot solely cover the heat demand of residential buildings. On the other hand, it emphasizes the necessity of including further renewable sources such as wind power in order to draw the complete picture. This, however, is beyond the scope of this paper, which mainly focuses on the introduction and verification of the novel control method with regard to a practical building.
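The core idea of using the deadband around the mean room temperature as thermal storage can be sketched with a minimal rule-based controller (an assumed control logic, not the paper's heuristic method; the setpoint, amplitude, and compressor power values are illustrative):

```python
def control_step(room_temp, pv_surplus_kw, setpoint=21.0, amplitude=1.0,
                 hp_el_kw=2.5):
    """Return True if the heat pump should run in this time step.

    Inside the comfort deadband [setpoint - amplitude, setpoint + amplitude]
    the pump runs only when the PV surplus covers its electrical demand,
    shifting heating into hours of PV generation.
    """
    lower, upper = setpoint - amplitude, setpoint + amplitude
    if room_temp <= lower:
        return True          # comfort limit reached: must heat regardless of PV
    if room_temp >= upper:
        return False         # upper limit reached: must not heat
    # inside the deadband: heat only when PV can cover the compressor
    return pv_surplus_kw >= hp_el_kw
```

A yearly simulation would call such a rule once per time step, with the building's thermal mass integrating the resulting heat input between the deadband limits.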
The book provides suggestions on how to start using bionic optimization methods, including pseudo-code examples of each of the important approaches and outlines of how to improve them. The most efficient methods for accelerating the studies are discussed. These include the selection of size and generations of a study’s parameters, modification of these driving parameters, switching to gradient methods when approaching local maxima, and the use of parallel working hardware.
Bionic optimization means finding the best solution to a problem using methods found in nature. As evolutionary strategies and particle swarm optimization seem to be the most important methods for structural optimization, we primarily focus on them. Other methods such as neural nets or ant colonies are more suited to control or process studies, so their basic ideas are outlined in order to motivate readers to start using them.
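As an illustration of the particle swarm idea, a minimal textbook variant might look as follows (a sketch rather than the book's own pseudo-code; the inertia weight and acceleration coefficients are common default assumptions):

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box [lo, hi]^dim with a basic particle swarm."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:              # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:             # and possibly the global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The same skeleton carries over to the structural problems discussed in the book, with f replaced by a finite element evaluation of the design.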
A set of sample applications shows how bionic optimization works in practice. From academic studies on simple frames made of rods to earthquake-resistant buildings, readers follow the lessons learned, the difficulties encountered, and effective strategies for overcoming them. For the problem of tuned mass dampers, which play an important role in dynamic control, changing the goal and restrictions paves the way for multi-objective optimization. As most structural designers today use commercial software such as FE codes or CAE systems with integrated simulation modules, ways of integrating bionic optimization into these software packages are outlined, and examples of typical systems and typical optimization approaches are presented.
The closing section focuses on an overview and outlook on reliable and robust as well as on multi-objective optimization, including discussions of current and upcoming research topics in the field concerning a unified theory for handling stochastic design processes.
Automated stabilization of loading capacity of coal shearer screw with controlled cutting drive
(2015)
A solution to the topical scientific problem of increasing coal shearer output while providing minimum specific power consumption for coal cutting, transportation, and loading in thin seams is proposed. The solution is based on the use of the earlier proposed screw gumming criterion for the optimum ratio of cutting velocity to coal shearer feed rate, in the context of increased screw rotation owing to an increase in the phase voltage frequency. Simulation results of an automated control system for coal shearer operation with a frequency-controlled cutting drive in thin seams have confirmed the efficiency of the system using the proposed algorithm for smart analysis of the coal shearer power signal.
Based on a survey among customers of seven German municipal utilities, we estimate two regression models to identify the most promising customer segments and their preferences and motivations for participating in peer-to-peer (P2P) electricity trading, and develop implications for decision-makers in the energy sector and policy-makers for this currently relatively unknown product. Our results show a large general openness of private households towards P2P electricity trading, which is also the main predictor of respondents' intention to participate. It is mainly influenced by individuals' environmental attitude, technical interest, and independence aspiration. Respondents with the highest willingness to participate in P2P electricity trading are mainly motivated by the ability to share electricity, and to a lesser extent by economic reasons. They also have stronger preferences for innovative pricing schemes (service bundles, time-of-use tariffs). Differences between individuals can be observed depending on their current ownership (prosumers) or installation probability of a microgeneration unit (consumers, planners). Rather than current prosumers, especially planners willing to install microgeneration in the foreseeable future are considered to be the most promising target group for P2P electricity trading. Finally, our results indicate that P2P electricity trading could be a promising niche option in the German energy transition.
Based on a survey among customers of seven German municipal utilities, we estimate hierarchical multiple regression models to identify consumer motivations for participating in P2P electricity trading and develop implications for marketing strategies for this currently relatively unknown product. Our results show a low importance of socio-demographics in explaining differences between consumer groups, but a high influence of attitudes, knowledge, and the likelihood to purchase related products. The most valuable target groups that the P2P electricity trading marketing strategies of municipal utilities should aim at first and foremost are innovators, especially prosumers. They are well informed about and open-minded concerning electricity sharing, and highly environmentally aware. They ask for transparency and are willing to purchase related products. They are attracted by the ability to share generation and consumption and, to a lesser extent, by economic reasons. Our results indicate that marketing efforts should take peer effects into account to a special degree, as they are found to wield great influence on general openness towards, and purchase intention for, P2P electricity products. Finally, municipal utilities should build on consumers' high level of satisfaction and trust and use P2P electricity trading as a measure to keep existing customers and win customers willing to change their supplier.
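The stepwise logic of a hierarchical multiple regression, entering predictor blocks in sequence and reading off the R² increment each block contributes, can be sketched as follows (illustrative code with synthetic data; the block names and variable roles are assumptions, not the survey's actual model):

```python
import numpy as np

def r_squared(X, y):
    """R² of an OLS fit with intercept, via least squares."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

def hierarchical_r2(blocks, y):
    """Enter predictor blocks step by step; report the cumulative R²
    and the increment (delta R²) attributable to each block."""
    steps, X, prev = [], np.empty((len(y), 0)), 0.0
    for name, Xb in blocks:
        X = np.column_stack([X, Xb])        # add this block's predictors
        r2 = r_squared(X, y)
        steps.append((name, r2, r2 - prev))
        prev = r2
    return steps
```

With blocks ordered, say, socio-demographics first and attitudes second, a small delta R² for the first block and a large one for the second would mirror the pattern reported above.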
IGBT modules with anti-parallel FWDs are widely used in inductive load switching power applications, such as motor drives. Nowadays there is a continuous effort to increase the efficiency of such systems by decreasing their switching losses. This paper addresses the problems arising in the turn-on process of an IGBT working under hard-switching conditions. A method is proposed which achieves – contrary to most other approaches – both a high switching speed and a low peak reverse-recovery current. This is done by applying an improved gate current waveform that is briefly lowered during the turn-on process. The proposed method achieves low switching losses. Its effectiveness is demonstrated by experimental results with IGBT modules for 600 V and 1200 V.
The demonstration project Virtual Power Plant Neckar-Alb is constructing a Virtual Power Plant (VPP) demonstration site at the Reutlingen University campus. The VPP demonstrator integrates a heterogeneous set of distributed energy resources (DERs), which are connected to a control infrastructure and an energy management system. This paper describes the components and the architecture of the demonstrator and presents strategies for demonstrating multiple optimization and control systems with different control paradigms.