Woven piezoelectric sensors as part of the textile reinforcement of fiber reinforced plastics
(2019)
Sensor integration in fiber reinforced plastic (FRP) structures enables online process and structural health monitoring (SHM). This paper describes the development and application of woven fabric-based piezoelectric impact and bending sensors for integration into FRP. The work focuses on the design and characterization of woven piezoelectric sensors, especially as part of the reinforcement structure. The reinforcement of the component acts as a sensor in itself, so no additional external objects in the form of sensor components or sensor fibers, which could create unwanted weak points within the FRP, are added. The bending test results reveal a direct relationship between the applied load and the sensor signal. Furthermore, the appropriate sensor position in the component cross section was determined and the influence of thermal polarization on the sensor properties was investigated.
Willingness-to-pay for alternative fuel vehicle characteristics : a stated choice study for Germany
(2016)
In the light of European energy efficiency and clean air regulations, as well as an ambitious electric mobility goal of the German government, we examine consumer preferences for alternative fuel vehicles (AFVs) based on a Germany-wide discrete choice experiment among 711 potential car buyers. We estimate consumers’ willingness-to-pay and compensating variation (CV) for improvements in vehicle attributes, also taking taste differences in the population into account by applying a latent class model with six distinct consumer segments. Our results indicate that about one-third of the consumers are oriented towards at least one AFV option, with almost half of them being AFV-affine, showing a high probability of choosing AFVs despite their current shortcomings. Our results suggest that German car buyers’ willingness-to-pay for improvements of the various vehicle attributes varies considerably across consumer groups and that the vehicle features have to meet some minimum requirements for AFVs to be considered. The CV values show that decision-makers in administration and industry should focus on the most promising consumer group of ‘AFV aficionados’ and their needs. They also show that some vehicle attribute improvements could increase the demand for AFVs cost-effectively, and that consumers would accept surcharges for some vehicle attributes at a level which could enable their private provision and economic operation (e.g. fast-charging infrastructure). Improvement of other attributes will need governmental subsidies to compensate for insufficient consumer valuation (e.g. battery capacity).
The present study investigated the possibilities and limitations of using a low-cost NIR spectrometer for the verification of the presence of the declared active pharmaceutical ingredients (APIs) in tablet formulations, especially for medicine screening studies in low-resource settings. Spectra from 950 to 1650 nm were recorded for 170 pharmaceutical products representing 41 different APIs, API combinations or placebos. Most of the products, including 20 falsified medicines, had been collected in medicine quality studies in African countries. After exploratory principal component analysis, models were built using data-driven soft independent modelling of class analogy (DD-SIMCA), a one-class classifier algorithm, for tablet products of penicillin V, sulfamethoxazole/trimethoprim, ciprofloxacin, furosemide, metronidazole, metformin, hydrochlorothiazide, and doxycycline. Spectra of amoxicillin and amoxicillin/clavulanic acid tablets were combined into a single model. Models were tested using Procrustes cross-validation and by projection of spectra of tablets containing the same or different APIs. Tablets containing no or different APIs could be identified with 100 % specificity in all models. A separation of the spectra of amoxicillin and amoxicillin/clavulanic acid tablets was achieved by partial least squares discriminant analysis. 15 out of 19 external validation products (79 %) representing different brands of the same APIs were correctly identified as members of the target class; three of the four rejected samples showed an API mass percentage of the total tablet weight that was out of the range covered in the respective calibration set. Therefore, in future investigations larger and more representative spectral libraries are required for model building. Falsified medicines containing no API, incorrect APIs, or grossly incorrect amounts of the declared APIs could be readily identified. 
Variation between different NIR-S-G1 spectroscopic devices led to a loss of accuracy if spectra recorded with different devices were pooled. Therefore, piecewise direct standardization was applied for calibration transfer. The investigated method is a promising tool for medicine screening studies in low-resource settings.
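Piecewise direct standardization, mentioned above for calibration transfer between NIR devices, regresses each master-device wavelength channel on a small window of neighbouring slave-device channels measured on the same transfer samples. The following minimal Python sketch is illustrative only; the window size, function name, and the simple gain relationship assumed in the usage example are not taken from the study:

```python
import numpy as np

def pds_transfer_matrix(master, slave, window=2):
    """Piecewise direct standardization (PDS), minimal sketch.

    master, slave: (n_samples, n_wavelengths) spectra of the same
    transfer samples recorded on two devices. Returns a banded
    matrix F such that slave @ F approximates master.
    """
    n_samples, n_channels = master.shape
    F = np.zeros((n_channels, n_channels))
    for i in range(n_channels):
        lo = max(0, i - window)
        hi = min(n_channels, i + window + 1)
        X = slave[:, lo:hi]          # local window on the slave device
        y = master[:, i]             # corresponding master channel
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        F[lo:hi, i] = coef
    return F
```

Spectra from the slave device would then be standardized as `slave @ F` before projection into the master-device models; in practice an additive background term and mean-centring are usually included as well.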
Product engineering and subsequent phases of product lifecycles are predominantly managed in isolation. Companies therefore do not fully exploit potentials through using data from smart factories and product usage. The novel intelligent and integrated Product Lifecycle Management (i²PLM) describes an approach that uses these data for product engineering. This paper describes the i²PLM, shows the cause-and-effect relationships in this context and presents in detail the validation of the approach. The i²PLM is applied and validated on a smart product in an industrial research environment. Here, the subsequent generation of a smart lunchbox is developed based on production and sensor data. The results of the validation give indications for further improvements of the i²PLM. This paper describes how to integrate the i²PLM into a learning factory.
In addition to increased safety by detecting possible overload, continuous component monitoring by sensor integration makes the use of fiber reinforced plastics more cost-effective. Since the components are continuously monitored, one can switch from time-based to condition-based maintenance. However, the integration of conventional sensor components causes weak points, as foreign objects are inserted into the reinforcing structure. In this paper, we examine the use of the textile reinforcement as a sensor in itself. We describe how bending sensors can be formed by slightly modifying the composite’s reinforcement structure. We investigated two different sensor principles: (1) the integration of textile plate capacitors into the structure; (2) the construction of textile piezo elements as part of the reinforcing structure. The bending test results reveal that textile plate capacitors show a load-dependent signal output. The samples with textile piezo elements show a significant increase in signal strength.
Context: Organizations increasingly develop software in a distributed manner. The cloud provides an environment to create and maintain software-based products and services. Currently, it is unknown which software processes are suited for cloud-based development and what their effects in specific contexts are.
Objective: We aim at better understanding the software process applied to distributed software development using the cloud as development environment. We further aim at providing an instrument which helps project managers compare different solution approaches and adapt team processes to improve future project activities and outcomes.
Method: We provide a simulation model which helps analyzing different project parameters and their impact on projects performed in the cloud. To evaluate the simulation model, we conduct different analyses using a Scrumban process and data from a project executed in Finland and Spain. An extra adaptation of the simulation model for Scrum and Kanban was used to evaluate the suitability of the simulation model to cover further process models.
Results: A comparison of the real project data with the results obtained from the different simulation runs shows that the simulation produces results close to the real data, and we could successfully replicate a distributed software project. Furthermore, we could show that the simulation model is suitable to address further process models.
Conclusion: The simulator helps reproduce activities, developers, and events in the project, and it helps analyze potential tradeoffs, e.g., regarding throughput, total time, project size, team size, and work-in-progress limits. Furthermore, the simulation model supports project managers in selecting the most suitable planning alternative, thus supporting decision-making processes.
Process quality has reached a high level in mass production, utilizing well-known methods like design of experiments (DoE). The drawback of the underlying statistical methods is the need for tests under real production conditions, which cause high costs due to the lost output. Research over the last decade has led to methods for correcting a process by using in-situ data to adjust the process parameters, but a lot of pre-production is still necessary to get this working. This paper presents a new approach to improving product quality in process chains by using context data - which in part are gathered by using Industry 4.0 devices - to reduce the necessary pre-production.
Vehicles have so far been improved in terms of energy-efficiency and safety mainly by optimising the engine and the power train. However, there are opportunities to increase energy-efficiency and safety by adapting the individual driving behaviour to the given driving situation. In this paper, an improved rule match algorithm is introduced, which is used in the expert system of a human-centred driving system. The goal of the driving system is to optimise the driving behaviour in terms of energy-efficiency and safety by giving recommendations to the driver. The improved rule match algorithm checks the incoming information against the driving rules to recognise any violation of a driving rule. The needed information is obtained by monitoring the driver, the current driving situation, and the car, using in-vehicle sensors and serial-bus systems. On the basis of the detected rule violations, the expert system will create individual recommendations in terms of energy-efficiency and safety, which will allow bad driving habits to be eliminated while considering the driver's needs.
Managerial accountants spend a large part of their working time on more operational activities in cost accounting, reporting, and operational planning and budgeting. In all these areas, there has been increasing discussion in recent years, both in theory and practice, about using more digital technologies. For reporting, this means not only an intensified discussion of technologies such as RPA and AI but also more intensive changes to existing reporting systems. In particular, management information systems (MIS), which are maintained by managerial accountants and used by managers for corporate management, should be mentioned here. Based on an empirical survey in a large German company, this article discusses the requirements and assessments of users when switching from a regular MIS to a cloud-based system.
Unsaturated polyester resins (UPR) and vinyl ester resins (VER) are among the most commercially important thermosetting matrix materials for composites. Although comparatively low cost, their technological performance is suitable for a wide range of applications, such as fiber-reinforced plastics, artificial marble or onyx, polymer concrete, or gel coats. The main areas of UPR consumption include the wind energy, marine, pipe and tank, transportation, and construction industries. This chapter discusses basic UPR and VER chemistry and technology of manufacturing, and consequent applications. Some important properties and performance characteristics are discussed, such as shrinkage behavior, flame retardance, and property modification by nanoparticles. Also briefly introduced and described are the practical aspects of UPR and VER processing, with special emphasis on the most widely used technological approaches, such as hand and spray layup, resin infusion, resin transfer molding, sheet and bulk molding, pultrusion, winding, and centrifugal casting.
Twitter and citations
(2023)
Social media, especially Twitter, plays an increasingly important role among researchers in showcasing and promoting their research. Does Twitter affect academic citations? Making use of Twitter activity about columns published on VoxEU, a renowned online platform for economists, we develop an instrumental variable strategy to show that Twitter activity about a research paper has a causal effect on the number of citations that this paper will receive. We find that the existence of at least one tweet, as opposed to none, increases citations by 16-25%. Doubling overall Twitter engagement boosts citations by up to 16%.
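The instrumental variable design described in this abstract can be illustrated with a bare-bones two-stage least squares (2SLS) estimator. The code and the simulated confounded data in the example below are a generic sketch; they do not reproduce the paper's VoxEU/Twitter data or its actual instrument:

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Minimal just-identified 2SLS: z instruments the endogenous
    regressor x; returns the estimated causal coefficient on x."""
    n = len(y)
    ones = np.ones(n)
    # First stage: project x on the instrument (plus a constant)
    Z = np.column_stack([ones, z])
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma
    # Second stage: regress y on the fitted values of x
    X2 = np.column_stack([ones, x_hat])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta[1]
```

With an unobserved confounder driving both x (Twitter activity) and y (citations), plain OLS is biased, while 2SLS recovers the causal coefficient as long as the instrument shifts x but affects y only through x.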
Industrial practice is characterized by random events, also referred to as internal and external turbulences, which disturb the target-oriented planning and execution of production and logistics processes. Methods of probabilistic forecasting, in contrast to single value predictions, allow an estimation of the probability of various future outcomes of a random variable in the form of a probability density function instead of predicting the probability of a specific single outcome. Probabilistic forecasting methods, which are embedded into the analytics process to gain insights for the future based on historical data, therefore offer great potential for incorporating uncertainty into planning and control in industrial environments. In order to familiarize students with these potentials, a training module on the application of probabilistic forecasting methods in production and intralogistics was developed in the learning factory 'Werk150' of the ESB Business School (Reutlingen University). The theoretical introduction to the topic of analytics, probabilistic forecasting methods and the transition to the application domain of intralogistics is done based on examples from other disciplines such as weather forecasting and energy consumption forecasting. In addition, data sets of the learning factory are used to familiarize the students with the steps of the analytics process in a practice-oriented manner. After this, the students are given the task of identifying the influencing factors and required information to capture intralogistics turbulences based on defined turbulence scenarios (e.g. failure of a logistical resource) in the learning factory. Within practical production scenario runs, the students apply probabilistic forecasting using and comparing different probabilistic forecasting methods. 
The graduate training module allows the students to experience the potentials of using probabilistic forecasting methods to improve production and intralogistics processes in context with turbulences and to build up corresponding professional and methodological competencies.
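As a simple illustration of the contrast drawn above between single-value and probabilistic forecasts (a generic sketch, not material from the 'Werk150' module itself): an empirical quantile forecast returns several quantiles of the predictive distribution instead of one point estimate, and each quantile can be scored with the pinball loss.

```python
import numpy as np

def empirical_quantile_forecast(history, quantiles=(0.1, 0.5, 0.9)):
    """Probabilistic forecast for the next observation: use the
    empirical quantiles of the history as the predictive distribution
    (assumes the process is roughly stationary)."""
    return {q: float(np.quantile(history, q)) for q in quantiles}

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss, the standard score for a q-quantile
    forecast; lower is better."""
    d = np.asarray(y_true) - y_pred
    return float(np.mean(np.maximum(q * d, (q - 1) * d)))
```

Applied, for instance, to historical replenishment times, the 0.9-quantile yields a buffer-aware planning value ("with 90% probability the tote arrives within t minutes") rather than a single average.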
In countries such as Germany, where municipalities have planning sovereignty, problems of urban sprawl often arise. As the dynamics of land development have not substantially subsided over the last years, the national government decided to test the instrument of ‘Tradable Planning Permits’ (TPP) in a nationwide field experiment with 87 municipalities involved. The field experiment was able to implement the key features of a TPP system in a laboratory setting with approximated real socioeconomic and planning conditions. In a TPP system, allocated planning permits must be used by municipalities for developing land. The permits can be traded between local jurisdictions, so that they have flexibility in deciding how to comply with the regulation. In order to evaluate the performance of such a system, specific field data about future building areas and their impact on community budgets for the period 2014–2028 were collected. The field experiment contains several sessions with representatives of the municipalities and with students. The participants were confronted with two (municipalities) and four (students) schemes. The results show that a trading system can curb land development in an effective and also efficient manner. However, depending on the regulatory framework, the trading schemes show different price developments and distributional effects. The inexperienced representatives of the local authorities could easily handle the permits in the administration and in the established market. A trading scheme sets very high incentives to save open space and to direct development activities to areas within existing planning boundaries. It is therefore a promising instrument for Germany and also for other regions or countries with an established land-use planning system.
Towards a model for holistic mapping of supply chains by means of tracking and tracing technologies
(2022)
The usage of tracking and tracing technologies not only enables transparency and visibility of supply chains but also offers far-reaching advantages for companies, such as ensuring product quality or reducing supplier risks. Increasing the amount of shared information supports both internal and external planning processes as well as the stability and resilience of globally operating value chains. This paper aims to differentiate and define the functionalities of tracking and tracing technologies that are frequently used interchangeably in literature. Furthermore, this paper incorporates influencing factors impacting a sequencing of the connected world in Industry 4.0 supply chain networks. This includes legal influences, the embedment of supply chain-related standards, and new possibilities of emerging technologies. Finally, the results are summarized in a model for the holistic mapping of supply chains by means of tracking and tracing technologies. The resulting technological solutions that can be derived from the model enable companies to address missing elements in order to enable the holistic mapping of supply chain events as well as the transparent representation of a digital shadow throughout the entire supply chain.
Instead of waiting for and constantly adapting to details of political interventions, utilities need to focus on their environment from a holistic perspective. The unique position of the company - be it a local utility, a bigger player, or an international utility specializing in specific segments - has to be the basis of goals and strategies. But without consistent translation of these goals and strategies into processes, structures, and company culture, a strategy remains pure theory. Companies need to engage in a continuing learning process. This means being willing to pass on strategies, to slow down or speed up, to work from a different angle, etc.
The data presented in this article characterize the thermomechanical and microhardness properties of a novel melamine-formaldehyde resin (MF) intended for use as a self-healing surface coating. The investigated MF resin is able to undergo reversible crosslinking via Diels-Alder reactive groups. The microhardness data were obtained from nanoindentation measurements performed on solid resin film samples at different stages of the self-healing cycle. Thermomechanical analysis was performed under dynamic load conditions. The data provide supplemental material to the manuscript published by Urdl et al. 2020 (https://doi.org/10.1016/j.eurpolymj.2020.109601) on the self-healing performance of this resin, where a more thorough discussion on the preparation, the properties of this coating material and its application in impregnated paper-based decorative laminates can be found.
In the course of a more intensive energy generation from regenerative sources, an increased number of energy storages is required. In addition to the widespread means of storing electric energy, storing energy thermally can contribute significantly. However, limited research exists on the behaviour of thermal energy storages (TES) in practical operation. While the physical processes are well known, it is nevertheless often not possible to adequately evaluate its performance with respect to the quality of thermal stratification inside the tank, which is crucial for the thermodynamic effectiveness of the TES. The behaviour of a TES is experimentally investigated in cyclic charging and discharging operation in interaction with a cogeneration (CHP) unit at a test rig in the lab. From the measurements the quality of thermal stratification is evaluated under varying conditions using different metrics such as normalised stratification factor, modified MIX number, exergy number and exergy efficiency, which extends the state of the art for CHP applications. The results show that the positioning of the temperature sensors for turning the CHP unit on and off has a significant influence on both the effective capacity of a TES and the quality of thermal stratification inside the tank. It is also revealed that the positioning of at least one of these sensors outside the storage tank, i.e. in the return line to the CHP unit, prevents deterioration of thermal stratification, thereby enhancing thermodynamic effectiveness. Furthermore, the effects of thermal load and thermal load profile on effective capacity and thermal stratification are discussed, even though these are much smaller compared to the effect of positioning the temperature sensors.
Theory and practice of implementing a successful enterprise IoT strategy in the industry 4.0 era
(2021)
Since the arrival of the internet and affordable access to technologies, digital technologies have occupied a growing place in industries, propelling us towards a 4th industrial revolution: Industry 4.0. In today’s era of digital upheaval, enterprises are increasingly undergoing transformations that are leading to their digitalization. The traditional manufacturing industry is in the throes of a digital transformation that is accelerated by exponentially growing technologies (e.g., intelligent robots, Internet of Things, sensors, 3D printing). Around the world, enterprises are in a frantic race to implement IoT-based solutions to improve their productivity and innovation, reduce costs, and strengthen their position in international markets. Considering the immense transformative potential that IoT and big data bring to the industrial sector, adopting IoT across industrial systems is a challenge enterprises must meet to remain competitive and to transform the factory into a smart factory. This paper presents a description of the innovation and digitalization process, following the Industry 4.0 paradigm, to implement a successful enterprise IoT strategy.
We investigated the state of artificial intelligence (AI) in pharmaceutical research and development (R&D) and outline here a risk and reward perspective regarding digital R&D. Given the novelty of the research area, a combined qualitative and quantitative research method was chosen, including the analysis of annual company reports, investor relations information, patent applications, and scientific publications of 21 pharmaceutical companies for the years 2014 to 2019. As a result, we can confirm that the industry is in an ‘early mature’ phase of using AI in R&D. Furthermore, we can demonstrate that, despite the efforts that need to be managed, recent developments in the industry indicate that it is worthwhile to invest to become a ‘digital pharma player’.
Induced by a societal decision to phase out conventional energy production - the so-called Energiewende (energy transition) - the rise of distributed generation acts as a game changer within the German energy market. The share of electricity produced from renewable resources increased to 31.6% in 2015 (UBA, 2016), with a targeted share of renewable resources in the electricity mix of 55%-60% in 2035 (RAP, 2015), opening perspectives for new products and services. Moreover, the rapidly increasing degree of digitization enables innovative and disruptive business models in niches at the grid's edge that might be the winners of the future. It also stimulates the market entry of newcomers and competitors from other sectors, such as IT or telecommunication, challenging the incumbent utilities. For example, virtual and decentralized marketplaces for energy are emerging; a trend that is likely to speed up considerably through blockchain technology, if the regulatory environment is adjusted accordingly. Consequently, the energy business is turned upside down, with customers now being at the wheel. For instance, more than one-third of the renewable production capacities are owned by private persons (Trendsearch, 2013). Therefore, the objective of this chapter is to examine private energy consumer and prosumer segments and their needs to derive business models for the various decentralized energy technologies and services. Subsequently, success factors for dealing with the changing market environment and consequences of the potentially disruptive developments for the market structure are evaluated.
Context: Development of software intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want by direct customer feedback and usage behaviour observation. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and the integration of experiment results in both the product development cycle and the software development process.
Artificial intelligence (AI) technologies, such as machine learning or deep learning, have been predicted to highly impact future organizations and radically change the way projects are managed. The Project Management Institute (PMI), the network of around 1.1 million certified project managers, ranked AI as one of the top three disruptors of their profession. In our own study on the effect of AI, we found that 37% of project management processes can be executed by machine learning and other AI technologies. In addition, Gartner recently postulated that 80% of the work of today's project managers may be eliminated by AI by 2030.
This editorial aims to outline today's project and portfolio management in context of pharmaceutical research and development (R&D), followed by an AI-vision and a more tangible mission, and illustrate what the consequences of an AI-enabled project and portfolio management could be for pharmaceutical R&D.
The paper describes a new stimulus using learning factories and an academic research programme - an M.Sc. in Digital Industrial Management and Engineering (DIME) comprising a double degree - to enhance international collaboration between four partner universities. The programme will be structured in such a way as to maintain or improve the level of innovation at the learning factories of each partner. The partners agreed to use Learning Factory focus areas along with DIME learning modules to stimulate international collaboration. Furthermore, they identified several research areas within the framework of the DIME programme to encourage horizontal and vertical collaboration. Vertical collaboration connects faculty expertise across the Learning Factory network to advance knowledge in one of the focus areas, while horizontal collaboration connects knowledge and expertise across multiple focus areas. Together they offer a platform for students to develop disciplinary and cross-disciplinary applied research skills necessary for addressing the complex challenges faced by industry. Hence, the university partners have the opportunity to develop the learning factory capabilities in alignment with the smart manufacturing concept. The learning factory is thus an important pillar in this venture. While postgraduate students/researchers in the DIME programme are the enablers to ensure the success of entire projects, the learning factory provides a learning environment which is entirely conducive to fostering these successful collaborations. Ultimately, the partners are focussed on utilising smart technologies in line with the digitalization of the production process.
Perivascular stromal cells, including mesenchymal stem/stromal cells (MSCs), secrete paracrine factors in response to exercise training that can facilitate improvements in muscle remodeling. This study was designed to test the capacity for muscle-resident MSCs (mMSCs) isolated from young mice to release regenerative proteins in response to mechanical strain in vitro, and subsequently determine the extent to which strain-stimulated mMSCs can enhance skeletal muscle and cognitive performance in a mouse model of uncomplicated aging. Protein arrays confirmed a robust increase in protein release at 24 h following an acute bout of mechanical strain in vitro (10%, 1 Hz, 5 h) compared to non-strain controls. Aged (24 month old) C57BL/6 mice were provided bilateral intramuscular injection of saline, non-strain control mMSCs, or mMSCs subjected to a single bout of mechanical strain in vitro (4 × 10⁴). No significant changes were observed in muscle weight, myofiber size, maximal force, or satellite cell quantity at 1 or 4 wks between groups. Peripheral perfusion was significantly increased in muscle at 4 wks post-mMSC injection (p < 0.05), yet no difference was noted between control and preconditioned mMSCs. Intramuscular injection of preconditioned mMSCs increased the number of new neurons and astrocytes in the dentate gyrus of the hippocampus compared to both control groups (p < 0.05), with a trend toward an increase in water maze performance noted (p = 0.07). Results from this study demonstrate that acute injection of exogenously stimulated muscle-resident stromal cells does not robustly impact aged muscle structure and function, yet increases the survival of new neurons in the hippocampus.
Marketing channels are among the most important elements of any value chain. This is because the bulk of a nation's manufacturing output flows through them. The intermediaries (e.g., distributors, wholesalers, retailers) constituting marketing channels perform specific distribution functions, such as transportation, storage, sales, financing, and relationship building, better than most manufacturers. Over his distinguished career, Louis P. Bucklin investigated many questions about the structuring and functioning of marketing channels using conceptual, empirical, and microeconomics model-based methodologies. Today, the academic marketing literature contains hundreds of articles that have employed these three broad classes of methodologies to investigate issues of channel intermediaries' interorganizational relationships (for example, power-dependence, relational outcomes, conflict and negotiations) and manufacturing firms' channel strategy (for example, channel structure, selection, coordination and control). So far, however, there has been no review of how the three different methodologies have contributed to advancing knowledge across this set of channels research domains.
Context
In a world of high dynamics and uncertainties, it is almost impossible to predict in the long term which products, services, or features will satisfy the needs of the customer. To counter this situation, conducting Continuous Improvement or Design Thinking for product discovery is a common approach. A major constraint in conducting product discovery activities is the high effort required to discover and validate features and requirements. In addition, companies struggle to integrate product discovery activities into their agile processes and iterations.
Objective
This paper suggests a supportive tool, the “Discovery Effort Worthiness (DEW) Index”, for product owners and agile teams to determine a suitable amount of effort that should be spent on Design Thinking activities. To operationalize DEW, proposals for practitioners are presented that can be used to integrate product discovery into product development and delivery.
Method
A case study was conducted for the development of the DEW index. In addition, we conducted an expert workshop to develop proposals for the integration of product discovery activities into the product development and delivery process.
Results
First, we present the "Discovery Effort Worthiness Index" in the form of a formula. Second, we identified requirements that must be fulfilled for the systematic integration of product discovery activities into product development and delivery. Third, from these requirements we derived proposals for the integration of product discovery activities with a company's product development and delivery.
Conclusion
The developed "Discovery Effort Worthiness Index" provides a tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. Integrating product discovery with product development and delivery should ensure that the results of product discovery are incorporated into product development. This aims to systematically analyze product risks to increase the chance of product success.
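The abstract presents the DEW Index only at the level of its purpose, not its definition. As a deliberately hypothetical sketch of the kind of trade-off such an index captures (the formula, names, and inputs below are illustrative assumptions, not the index defined in the paper):

```python
# Hypothetical sketch only: the actual DEW formula is defined in the paper,
# not in this abstract. It illustrates the kind of trade-off such an index
# could express: expected value of validating a feature vs. discovery effort.

def dew_index(expected_value: float, risk: float, discovery_effort: float) -> float:
    """Return a worthiness score: high value and high risk justify more effort."""
    if discovery_effort <= 0:
        raise ValueError("discovery effort must be positive")
    return (expected_value * risk) / discovery_effort

# A risky, valuable feature that is cheap to validate scores high:
print(dew_index(expected_value=10.0, risk=0.8, discovery_effort=2.0))  # 4.0
```

A product owner could then rank backlog items by such a score and spend Design Thinking effort only on items above a chosen threshold.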
Computers are increasingly used in teams in various contexts, for example in negotiations. Especially when using computer-support for decision making processes, it is an important question whether active collaboration within the team - for example via audio-conference - has additional benefits beyond the supply of full task-relevant information via computer. In team negotiations, team representatives are only able to represent the whole team, if diverse preferences of the team members are aligned prior to the negotiation. In an experimental study with 150 participants, we provided team members with the complete information about each other's preferences during an either collaboratively (computer-mediated) or separately conducted computer-supported negotiation preparation and subsequently asked them for their priorities as representatives of the team. Our results showed that providing complete task-relevant information via computer is insufficient to compensate for the absence of active collaboration within the team.
Today, virtualizing pharma R&D is increasingly associated with data analytics and artificial intelligence (AI), technologies that have been developed by software companies outside the healthcare sector. The process of virtualizing pharma R&D is closely related to the technological advancements that result in the generation of large data sets ranging from genomics, proteomics, metabolomics, medical imaging and IoT wearables to large clinical trials, making it necessary for pharma companies to find new ways to store and ultimately analyze information. As a consequence, pharma companies are experimenting with AI in R&D, ranging from in-silico drug design to clinical trial participant identification or dosage error reduction.
This paper presents the first part of a research work conducted at the University of Applied Sciences HFT Stuttgart. The aim of the research was to investigate the potential of low-cost renewable energy systems to reduce the energy demand of the building sector in hot and dry areas. Radiative cooling to the night sky represents a low-cost renewable energy source. The dry desert climate conditions promote radiative cooling applications. The system technology adopted in this work is based on uncovered solar thermal collectors integrated into the building’s hydronic system. By implementing different control strategies, the same system could be used for cooling as well as for heating applications. This paper focuses on identifying the collector parameters which are required as the coefficients to configure such an unglazed collector for calibrating its mathematical model within the simulation environment. The parameter identification process requires testing the collector for its thermal performance. This paper attempts to provide an insight into the dynamic testing of uncovered solar thermal collectors (absorbers), taking into account their prospective operation at nighttime for radiative cooling applications. In this study, the main parameters characterizing the performance of the absorbers for radiative cooling applications are identified and obtained from a standardized testing protocol. For this aim, a number of plastic solar absorbers of different designs were tested on the outdoor test-stand facility at HFT Stuttgart for the characterization of their thermal performance. The testing process was based on the quasi-dynamic test method of the international standard for solar thermal collectors EN ISO 9806. The test database was then used within a mathematical optimization tool (GenOpt) to determine the optimal parameter settings of each absorber under testing. Those performance parameters were significant to compare the thermal performance of the tested absorbers.
The coefficients (identified parameters) were then used to plot the thermal efficiency curves of all absorbers, for both the heating and cooling modes of operation. Based on the intended main scope of the system utilization (heating or cooling), the tested absorbers could be benchmarked. Hence, one of those absorbers was selected to be used in the following simulation phase as planned in the research project.
This paper covers test and verification of a forecast-based Monte Carlo algorithm for an optimized, demand-oriented operation of combined heat and power (CHP) units using the hardware-in-the-loop approach. For this purpose, the optimization algorithm was implemented at a test bench at Reutlingen University for controlling a CHP unit in combination with a thermal energy storage, both in real hardware. In detail, the hardware-in-the-loop tests are intended to reveal the effects of demand forecasting accuracy, the impact of thermal energy storage capacity and the influence of load profiles on demand-oriented operation of CHP units. In addition, the paper focuses on the evaluation of the content of energy in the thermal energy storage under practical conditions. It is shown that a 5-layer model allows the stored energy to be determined quite accurately, which is verified by experimental results. The hardware-in-the-loop tests disclose that demand forecasting accuracies, especially electricity demand forecasting, as well as load profiles strongly impact the potential for CHP electricity utilization on-site in demand-oriented mode. Moreover, it is shown that a larger effective capacity of the thermal energy storage positively affects demand-oriented operation. In the hardware-in-the-loop tests, the fraction of electricity generated by the CHP unit utilized on-site could thus be increased by a maximum of 27% compared to heat-led operation, which is still the most common modus operandi of small-scale CHP plants. Hence, the hardware-in-the-loop tests were adequate to prove the significant impact of the proposed algorithm for optimization of demand-oriented operation of CHP units.
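A layer model of this kind evaluates the storage's energy content with the standard stratified-tank calculation, summing each layer's sensible heat relative to a reference temperature. A minimal sketch (the layer masses, sensor temperatures, and reference temperature below are assumed values for illustration, not data from the test bench):

```python
# Energy content of a stratified thermal storage: sum over layers of
# m_i * c_p * (T_i - T_ref). Values below are illustrative assumptions.

CP_WATER = 4186.0  # J/(kg*K), specific heat capacity of water

def stored_energy_kwh(layer_masses_kg, layer_temps_c, t_ref_c=20.0):
    """Stored energy relative to a reference temperature, in kWh."""
    joules = sum(m * CP_WATER * (t - t_ref_c)
                 for m, t in zip(layer_masses_kg, layer_temps_c))
    return joules / 3.6e6  # J -> kWh

# Five equal layers of 200 kg at temperatures from five stacked sensors:
e = stored_energy_kwh([200.0] * 5, [65.0, 60.0, 52.0, 45.0, 38.0])
print(round(e, 2))  # 37.21
```

With one temperature sensor per layer, this per-layer sum captures stratification that a single mean tank temperature would miss, which is why a 5-layer model tracks the measured energy content closely.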
The use of learning factories for education in maintenance concepts is limited, despite the important role maintenance plays in the effective operation of organizational assets. A training programme in a learning factory environment is presented where a combination of gamification, classroom training and learning factory applications is used to introduce students to the concepts of maintenance plan development, asset failure characteristics and the costs associated with maintenance decision-making. The programme included a practical task to develop a maintenance plan for different advanced manufacturing machines in a learning factory setting. The programme stretched over a four-day period and demonstrated how learning factories can be effectively utilized to teach management related concepts in an interdisciplinary team context, where participants had no, or very limited, previous exposure to these concepts.
Zero or plus energy office buildings must have very high building standards and require highly efficient energy supply systems due to space limitations for renewable installations. Conventional solar cooling systems use photovoltaic electricity or thermal energy to run either a compression cooling machine or an absorption cooling machine in order to produce cooling energy during daytime, while they use electricity from the grid for the nightly cooling energy demand. With a hybrid photovoltaic-thermal collector, electricity as well as thermal energy can be produced at the same time. These collectors can also produce cooling energy at nighttime by longwave radiation exchange with the night sky and convection losses to the ambient air. Such a renewable trigeneration system offers new fields of application. However, the technical, ecological and economical aspects of such systems are still largely unexplored.
In this work, the potential of a PVT system to heat and cool office buildings in three different climate zones is investigated. In the investigated system, PVT collectors act as a heat source and heat sink for a reversible heat pump. Due to the reduced electricity consumption (from the grid) for heat rejection, the overall efficiency and economics improve compared to a conventional solar cooling system using a reversible air-to-water heat pump as heat and cold source.
A parametric simulation study was carried out to evaluate the system design with different PVT surface areas and storage tank volumes to optimize the system for three different climate zones and for two different building standards. It is shown that such systems are technically feasible today. With a maximum utilization of PV electricity for heating, ventilation, air conditioning and other electricity demand such as lighting and plug loads, high solar fractions and primary energy savings can be achieved.
Annual costs for such a system are comparable to conventional solar thermal and solar electrical cooling systems. Nevertheless, the economic feasibility strongly depends on country-specific energy prices and energy policy. However, even in countries without compensation schemes for energy produced by renewables, this system can still be economically viable today. It could be shown that a specific system dimensioning can be found at each of the investigated locations worldwide for a valuable economic and ecological operation of an office building with PVT technologies in different system designs.
The technologies of digital transformation, such as the Internet of Things (IoT), artificial intelligence or predictive maintenance, enable significant efficiency gains in industry and are becoming increasingly important as a competitive factor. However, their successful implementation and creative future application require the broad acceptance and knowledge of non-IT-related groups, such as production management students, engineers or skilled workers, which is still lacking today. This paper presents a low-threshold training concept bringing IoT technologies and applications into manufacturing-related higher education and employee training. The concept addresses the relevant topics, from IoT basics to predictive maintenance, using mobile low-cost hardware and infrastructure.
The fifth generation of mobile communication (5G) is a wireless technology developed to provide reliable, fast data transmission for industrial applications, such as autonomous mobile robots and connect cyber-physical systems using Internet of Things (IoT) sensors. In this context, private 5G networks enable the full performance of industrial applications built on dedicated 5G infrastructures. However, emerging wireless communication technologies such as 5G are a complex and challenging topic for training in learning factories, often lacking physical or visual interaction. Therefore, this paper aims to develop a real-time performance monitoring system of private 5G networks and different industrial 5G devices to visualise the performance and impact factors influencing 5G for students and future connectivity experts. Additionally, this paper presents the first long-term measurements of private 5G networks and shows the performance gap between the actual and targeted performance of private 5G networks.
Artificial Intelligence-based Assistants (AIAs) are spreading quickly both in homes and offices. They have already left their original habitat of "intelligent speakers" providing easy access to music collections. They have initiated a multitude of new devices and are already populating devices such as TV sets. Characteristic of these intelligent digital assistants is the formation of platforms around their core functionality. Thus, the AI capabilities of the assistants are used to offer new services and create new interfaces for business processes. There are positive network effects between the assistants and the services as well as within the services. Therefore, many companies see the need to get involved in the field of digital assistants but lack a framework to align their initiatives with their corporate strategies. In order to lay the foundation for a comprehensive method, we therefore investigate intelligent digital assistants. Based on this analysis, we develop a framework of strategic opportunities and challenges.
We analyze economics PhDs’ collaborations in peer-reviewed journals from 1990 to 2014 and investigate such collaborations’ quality in relation to each co-author’s research quality, field and specialization. We find that a greater overlap between co-authors’ previous research fields is significantly related to a greater publication success of co-authors’ joint work, and this is robust to alternative specifications. Co-authors that engage in a distant collaboration are significantly more likely to have a large research overlap, but this significance is lost when co-authors’ social networks are accounted for. High-quality collaboration is more likely to emerge as a result of an interaction between specialists and generalists with overlapping fields of expertise. Regarding interactions across subfields of economics (interdisciplinarity), it is more likely to be conducted by co-authors who already have interdisciplinary portfolios than by co-authors who are specialized or started in different subfields.
Information Systems in Distributed Environment (ISDE) is becoming a prominent standard in this globalization era due to advancements in information and communication technologies. The advent of the internet has supported Distributed Software Development (DSD) by introducing new concepts and opportunities, resulting in benefits such as scalability, flexibility, interdependence, reduced cost, resource pools, and usage tracking. The distributed development of information systems, as well as their deployment and operation in distributed environments, imposes new challenges for software organizations and can lead to business advantages. In distributed environments, business units collaborate across time zones, organizational boundaries, work cultures and geographical distances, something that has ultimately led to an increasing diversification and growing complexity of cooperation among units. The real-world practice of developing, deploying and operating information systems in globally distributed projects has been viewed from various perspectives, though technical and engineering viewpoints, in conjunction with managerial and organizational ones, have dominated researchers' attention so far. Successful participation in distributed environments, however, is ultimately a matter of the participants understanding and exploiting the particularities of their respective local contexts at specific points in time and exploring practical solutions through the local resources available.
This special issue of the Computer Standards & Interfaces journal therefore includes papers received from the public call for papers as well as extended and improved versions of papers selected from the best of the International Workshop on Information Systems in Distributed Environment (ISDE 2014). It aims to serve as a forum bringing together academics, researchers, practitioners and students in the field of distributed information systems by presenting novel developments and lessons learned from real-world cases, and to promote the exchange of ideas, discussion and advancement in these areas.
The increasing complexity and need for availability of automated guided vehicles (AGVs) pose challenges to companies, leading to a focus on new maintenance strategies. In this paper, a smart maintenance architecture based on a digital twin is presented to optimize the technical and economic effectiveness of AGV maintenance activities. To realize this, a literature review was conducted to identify the necessary requirements for Smart Maintenance and Digital Twins. The identified requirements were combined into modules and then integrated into an architecture. The architecture was evaluated on a real AGV, using the battery as one of its critical components.
Globalisation, shorter product life cycles, and increasing product varieties have led to complex supply chains. At the same time, there is a growing interest of customers and governments in having a greater transparency of brands, manufacturers, and producers throughout the supply chain. Due to the complex structure of collaborative manufacturing networks, increasing supply chain transparency is a challenge for manufacturing companies. The blockchain technology offers an innovative solution to increase the transparency, security, authenticity, and auditability of products. However, there are still uncertainties when applying the blockchain technology to manufacturing scenarios and thus enabling all stakeholders to trace back each component of an assembled product. This paper proposes a framework design to increase the transparency and auditability of products in collaborative manufacturing networks by adopting the blockchain technology. In this context, each component of a product is marked with a unique identification number generated by blockchain-based smart contracts. In this way, a transparent auditability of assembled products and their components can be achieved for all stakeholders, including the customer.
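As a minimal illustration of the identification idea (plain Python hashing rather than the paper's blockchain-based smart contracts; all names and attributes are hypothetical), a component ID can be derived from the component's attributes and the IDs of its sub-components, so that the ID of an assembled product transitively commits to every part:

```python
# Illustrative sketch, not the paper's smart-contract implementation:
# a tamper-evident component ID derived from attributes and parent IDs.

import hashlib
import json

def component_id(attributes: dict, part_ids: list) -> str:
    """Hash a component's attributes together with its sub-components' IDs."""
    payload = json.dumps({"attrs": attributes, "parts": sorted(part_ids)},
                         sort_keys=True)  # canonical form -> deterministic ID
    return hashlib.sha256(payload.encode()).hexdigest()

bolt = component_id({"part": "bolt", "batch": "B-17"}, [])
frame = component_id({"part": "frame", "batch": "F-03"}, [])
assembly = component_id({"part": "assembly"}, [bolt, frame])
# Any change to a component's attributes changes its ID and, transitively,
# the ID of every assembly containing it, which is what makes the chain auditable.
```

An auditor holding the component records can recompute the assembly ID and detect any substituted or altered part without trusting the manufacturer's database.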
Silicones
(2014)
Silicones are found in a variety of applications with requirements that range from long life at elevated temperatures to fluidity at low temperatures. This chapter first considers silicone elastomers and their application in room temperature vulcanizing (RTV) and heat curing systems (HTV). Also, new technologies for UV curing are introduced. Coverage of RTVs includes both one-component and two-component systems and the different cure chemistries of each, and is followed by a separate discussion of silicone laminates. Due to the high importance of silicone fluids, they are also discussed. Fluids include polishes, release agents, surfactants, and dielectric fluids.
Silicones
(2022)
Silicones are found in a variety of applications with requirements that range from long life at elevated temperatures to fluidity at low temperatures. This chapter first considers silicone elastomers and their application in room temperature vulcanizing (RTV) and heat curing systems (HTV). Also, new technologies for UV curing are introduced. Coverage of RTVs includes both one-component and two-component systems and the different cure chemistries of each and is followed by a separate discussion of silicone laminates. Due to the high importance of silicone fluids, they are also discussed. Fluids include polishes, release agents, surfactants, and dielectric fluids.
Modern production systems are characterized by the increasing use of CPS and IoT networks. However, processing the available information for adaptation and reconfiguration often occurs in relatively large time cycles and thus does not take advantage of the optimization potential available in the short term. In this paper, a concept is presented that, considering the process information of the individual heterogeneous system elements, detects optimization potentials and performs or proposes adaptation or reconfiguration. The concept is evaluated using a case study in a learning factory. The resulting system thus enables better exploitation of the potentials of the CPPS.
The persistent trend towards decreasing batch sizes due to ongoing product individualization, as well as increasingly dynamic market and competitive conditions, leads to new changeability requirements in production environments. Since each of the individualized products might require different base materials or components and manufacturing resources, the paths of the products going through the factory as well as the required internal transport and material supply processes are going to differ for every product. Conventional planning and control systems, which rely on predefined processes and central decision-making, are not capable of dealing with the arising system complexity along the dimensions of changing goods, layouts and throughput requirements. The concepts of "self-organization" in combination with "autonomous control" provide promising solutions to these new requirements by using, among other things, the potential of autonomous, decentralized and target-optimized logistical objects (e.g. smart products, bins and conveyor systems) which are able to communicate and interact with each other as well as with human workers. To investigate the potential of automation and human-robot collaboration for intralogistics, a research project for the development of a collaborative tugger train has been started at the ESB Logistics Learning Factory in line with various student projects in neighboring research areas. This collaborative tugger train system in combination with other manual (e.g. handcarts) and (semi-)automated conveyor systems (e.g. automated guided forklifts) will be integrated into a dynamic, self-organized scenario with varying production batch sizes to develop a method for target-oriented self-organization and autonomous control of intralogistics systems. For a structured investigation of self-organized scenarios, a generic intralogistics model as well as a criteria catalogue has been developed.
The ESB Logistics Learning Factory will serve as a practice-oriented research, validation and demonstration environment for these purposes.
The transmembrane Ca2+-activated Cl− channel human bestrophin-1 (hBest1) is expressed in the retinal pigment epithelium, and mutations of the BEST1 gene cause ocular degenerative diseases collectively referred to as “bestrophinopathies”. A large number of genetic, biochemical, biophysical and molecular biological studies have been performed to understand the relationship between structure and function of the hBest1 protein and its pathophysiological significance. Here, we review the current understanding of hBest1 surface organization, its interactions with membrane lipids in model membranes, and its association with microdomains of cellular membranes. These highlights are significant for modulation of channel activity in cells.
Self-healing thermosets
(2022)
This chapter discusses the basic extrinsic, intrinsic, and combined extrinsic/intrinsic strategies for equipping thermosetting polymers with self-healing properties. The main focus will be on the presentation of a holistic optimization of thermosetting materials, that is, on a simultaneous optimization of both self-healing and other, specialized material properties. Due to their very rigid, highly cross-linked three-dimensional structure, thermosetting polymers require special chemical strategies to achieve self-healing properties. The main chemical strategies available for this will be briefly outlined. The examples given illustrate interesting and/or typical procedures and serve as an inspiration to find solutions for your own applications. They summarize important recent developments in research and technology aiming toward multifunctional, truly smart self-healing thermosetting materials. An important aspect in this topic area is also how precisely the self-healing effects are analytically checked, quantified, and evaluated. A range of measuring methods is available for this purpose. In this chapter, the most important analytical tools for testing self-healing properties are briefly introduced and highlighted with some illustrative examples.
Structural and functional thermosetting composite materials are exposed to different kinds of stress which can damage the polymer matrix, thus impairing the intended properties. Therefore, self-healing materials have attracted the attention of many research groups over the last decades in order to provide satisfactory material properties and outstanding product durability. The present article provides a critical overview of promising self-healing strategies for crosslinked thermoset polymers. It is organized in two parts: an overview about the different approaches to self-healing is given in the first part, whereas the second part focuses on the specific chemistries of the main strategies to achieve self-healing through crosslinking. It is attempted to provide a comprehensive discussion of different approaches which are described in the scientific literature. By comparison of the advantages and disadvantages, the authors wish to provide helpful insights on the assessment of the potential to transfer the extensive present knowledge about self-healing materials and methods to surface varnishing thermoset coatings.
Artificial intelligence is a field of research that is seen as a means of realizing digitalization and Industry 4.0. It is considered the critical technology needed to drive the future evolution of manufacturing systems. At the same time, autonomous guided vehicles (AGVs) have developed into an essential component of manufacturing systems due to the flexibility they contribute to the whole manufacturing process. However, there are still open challenges in the intelligent control of these vehicles on the factory floor, especially in dynamic environments where resources should be controlled in such a way that they can be adjusted to turbulences efficiently. Therefore, this paper aimed to develop a conceptual framework addressing a catalog of criteria that considers several machine learning algorithms to find the optimal algorithm for the intelligent control of AGVs. By applying the developed framework, the algorithm most suitable for the current operation of the AGV is automatically selected in order to enable efficient control within the factory environment. In future work, this decision-making framework can be transferred to further scenarios with multiple AGV systems, including internal communication along AGV fleets. With this study, the automatic selection of the optimal machine learning algorithm for the AGV improves performance in such a way that computational power is distributed efficiently within a hybrid system linking the AGV and cloud storage.
Since its first publication in 2015, the learning factory morphology has been frequently used to design new learning factories and to classify existing ones. The structuring supports the concretization of ideas and promotes exchange between stakeholders.
However, since the implementation of the first learning factories, the learning factory concept has constantly evolved.
Therefore, in the Working Group "Learning Factory Design" of the International Association of Learning Factories, the existing morphology has been revised and extended based on an analysis of the trends observed in the evolution of learning factory concepts. On the one hand, new design elements were added to the previous seven design dimensions, and on the other hand, new design dimensions were introduced. The revised version of the morphology thus provides even more targeted support for the design of new learning factories in the future.
Supply chains have become increasingly complex, making it difficult to ensure transparency throughout the whole supply chain. In this context, first approaches came up, adopting the immutable, decentralised, and secure characteristics of the blockchain technology to increase the transparency, security, authenticity, and auditability of assets in supply chains. This paper investigates recent publications combining the blockchain technology and supply chain management and classifies them regarding the complexity to be mapped on the blockchain. As a result, the increase of supply chain transparency is identified as the main objective of recent blockchain projects in supply chain management. Thereby, most of the recent publications deal with simple supply chains and products. The few approaches dealing with complex parts only map sub-areas of supply chains. Currently no example exists which has the aim of increasing the transparency of complex manufacturing supply chains, and which enables the mapping of complex assembly processes, an efficient auditability of all assets, and an implementation of dynamic adjustments.
The proper selection of a demand forecasting method is directly linked to the success of supply chain management (SCM). However, today’s manufacturing companies are confronted with uncertain and dynamic markets. Consequently, classical statistical methods are not always appropriate for accurate and reliable forecasting. Algorithms of artificial intelligence (AI) are currently used to improve statistical methods. Existing literature only gives a very general overview of the AI methods used in combination with demand forecasting. This paper provides an analysis of the AI methods published in the last five years (2017-2021). Furthermore, a classification is presented by clustering the AI methods in order to define the trend of the methods applied. Finally, a classification of the different AI methods according to the dimensionality of data, volume of data, and time horizon of the forecast is presented. The goal is to support the selection of the appropriate AI method to optimize demand forecasting.
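For context on what the AI methods are measured against, the classical statistical baseline mentioned in the abstract can be sketched in a few lines (a simple moving-average forecast; purely illustrative and not taken from the reviewed publications):

```python
# Illustrative classical baseline (not from the paper): a k-period moving
# average, the kind of simple statistical forecast that AI methods are
# benchmarked against in demand forecasting studies.

def moving_average_forecast(demand_history, k=3):
    """Forecast the next period as the mean of the last k observations."""
    if len(demand_history) < k:
        raise ValueError("need at least k observations")
    window = demand_history[-k:]
    return sum(window) / k

# Forecast next month's demand from the last three observed months:
print(moving_average_forecast([120, 135, 128, 140, 132], k=3))  # (128+140+132)/3
```

Such a baseline reacts slowly to demand shifts, which is exactly the weakness in uncertain and dynamic markets that motivates the AI methods surveyed here.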
On-chip metallization, especially in modern integrated BCD technologies, is often subject to high current densities and pronounced temperature cycles due to heat dissipation from power switches like LDMOS transistors. This paper continues the work on a sensor concept where small sense lines are embedded in the metallization layers above the active area of a switching LDMOS transistor. The sensors show a significant resistance change that correlates with the number of power cycles. Furthermore, influences of sense line layer, geometry and the dissipated energy are shown. In this paper, the focus lies on a more detailed analysis of the observed change in sense line resistance.
Relationship marketing is an important issue in every business. Knowing the customers and establishing, maintaining and enhancing long-term customer relationships is a key component of long-term business success. Considering that sport is such big business today, it is surprising that this crucial approach to marketing has yet to be fully recognised either in the literature or in the sports business itself. Relationship Marketing in Sports aims to fill this void by discussing and reformulating the principles of relationship marketing and by demonstrating how relationship marketing can be successfully applied in practice within a sports context. Written by a unique author team combining academic and practitioner experience, the book provides the reader with:
- the first application of the principles of relationship marketing specifically to a sports context
- case studies from around the world, providing a uniquely global approach applicable worldwide
- strong pedagogical features including learning outcomes, overviews, discussion questions, glossary, guided reading and web links
- practical advice for professional, semi-professional and non-professional sporting organisations
- a companion website providing web links, case studies and PowerPoint slides for lecturers.
Relationship Marketing in Sports is crucial reading for students and professionals alike and marks a turning point in the marketing of sports.
Reflectometry has long been known as an interferometric method which can be used to characterize surfaces and thin films regarding their structure and, to a certain degree, their composition as well. Properties like layer structures, layer thickness, density, and interface roughness can be determined by fitting the obtained reflectivity data with an appropriate model using a recursive fitting routine. However, one major drawback of the reflectometric method is its restriction to planar surfaces. In this article we demonstrate an approach to apply X-ray and neutron reflectometry to curved surfaces by means of the example of bent bare and coated glass slides. We prove the possibility to observe all features like Fresnel decay, Kiessig fringes, Bragg peaks and off-specular scattering, and are able to interpret the data using common fitting software and to derive quantitative results on roughness, layer thickness and internal structure. The proposed method has become practical due to the availability of high-quality 2D detectors. It opens up the option to explore many kinds and shapes of samples which, due to their geometry, have not been in the focus of reflectometry techniques until now.
Context: An experiment-driven approach to software product and service development is gaining increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software capabilities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development.
Objective: This paper explores the state of the practice of experimentation in the software industry. It also identifies the key challenges and success factors that practitioners associate with the approach.
Method: A qualitative survey based on semi-structured interviews and thematic coding analysis was conducted. Ten Finnish software development companies, represented by thirteen interviewees, participated in the study.
Results: The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice is not yet mature. In particular, experimentation is rarely systematic and continuous. Key challenges relate to changing the organizational culture, accelerating the development cycle speed, and finding the right measures for customer value and product success. Success factors include a supportive organizational culture, deep customer and domain knowledge, and the availability of the relevant skills and tools to conduct experiments.
Conclusions: It is concluded that the major issues in moving towards continuous experimentation are on an organizational level; most significant technical challenges have been solved. An evolutionary approach is proposed as a way to transition towards experiment-driven development.
Comparative analysis of the R&D efficiency of 14 leading pharmaceutical companies for the years 1999–2018 shows that there is a close positive correlation between R&D spending and the two investigated R&D output parameters, approved new molecular entities (NMEs) and the cumulative impact factor of their publications. In other words, higher R&D investments (input) were associated with higher R&D output. Second, our analyses indicate that there are "economies of scale" (size) in pharmaceutical R&D.
Employing diffuse reflection ultraviolet-visible (UV–Vis) spectroscopy, we developed an approach that is capable of quantitatively determining flux residues on a technical copper surface. The technical copper surface was soldered with a no-clean flux system of organic acids. By a post-solder cleaning step with different cleaning parameters, various levels of residues were produced. The surface was quantitatively and qualitatively characterized using X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), Fourier transform infrared spectroscopy (FTIR) and diffuse reflection UV–Vis spectroscopy. Using multivariate analysis (MVA), we examined the UV–Vis data to establish a correlation with the carbon content on the surface. The UV–Vis data could be discriminated for all groups by their level of organic residues. Combined with XPS, the data were evaluated by a partial least squares (PLS) regression to establish a model. Based on this predictive model, the carbon content was calculated with an absolute error of 2.7 at.%. Due to the high correlation of the predictive model, the easy-to-use measurement and the evaluation by multivariate analysis, the developed method seems suitable for an online monitoring system. With this system, flux residues can be detected in a manufacturing cleaning process of technical surfaces after soldering.
Pultrusion of braids
(2016)
Properties data of phenolic resins synthetized for the impregnation of saturating Kraft paper
(2018)
The quality of decorative laminate boards depends on the impregnation process of Kraft papers with a phenolic resin, which constitute the raw materials for the manufacture of the cores of such boards. In the laminates industry, the properties of resins are adapted via their syntheses, usually by mixing phenol and formaldehyde in a batch, where additives, temperature and stirring parameters can be controlled. Therefore, many possibilities of preparing phenolic resins exist, which leads to different combinations of physico-chemical properties. In this article, the properties data of eight phenolic resins synthetized with different parameters of pH and reaction times at 60 °C and 90 °C are presented: the losses of pH after synthesis and the dynamic viscosities measured after synthesis and once the solid content is adjusted to 45% w/w in methanol. Data acquired by differential scanning calorimetry (DSC) of the resins and inverse gas chromatography (IGC) of the cured solids are given as well.
The increase in product variance and shorter product lifecycles result in higher production ramp-up frequencies and promote the usage of mixed-model lines. The ramp-up is considered a critical step in the product life cycle, and in the automotive industry phases of the ramp-up are often executed on separate production lines (pilot lines) or in separate factories (pilot plants) to verify processes and to qualify employees without affecting the production of other products in the mixed-model line. The financial funds required for planning and maintaining dedicated pilot lines prevent small and medium-sized enterprises (SMEs) from applying them. Hence, SMEs require different tools for piloting and training during the production ramp-up. Learning islands, on which employees can be trained through induced and autonomous learning, offer a solution. In this work, a concept for their development and application, which contains the required organization, activities, and materials, is developed through expert interviews. The results of a case study application with a medium-sized automotive manufacturer show that learning islands are a viable tool for employee qualification and process verification during the ramp-up of mixed-model lines.
Process analysis and process control have attracted increasing interest in recent years. The development and application of process analytical methods are a prerequisite for the knowledge-based manufacturing of industrial goods and allow for the production of high-value products of defined, constantly good quality. Discussed in this chapter are the measurement principle and some relevant aspects and illustrative examples of online monitoring tools as the basis for process control in the manufacturing and processing of thermosetting resins. Optical spectroscopy is featured as one of the main process analytical methods applicable to, among other applications, online monitoring of resin synthesis. In combination with chemometric methods for multivariate data analysis, powerful process models can be generated within the framework of feedback and feed-forward control concepts. Other analytical methods covered in this chapter are those frequently used to control further processing of thermosets to the final parts, including dielectric analysis, ultrasonics, fiber optics, and Fiber Bragg Grating sensors.
Processing
(2014)
In this chapter, some relevant aspects and illustrative examples of online monitoring tools as the basis for process control in the manufacturing and processing of thermosetting resins are briefly discussed. In principle, any chemical or physical information made accessible by sensors can be used for online monitoring of resin formation, resin location in the mold, and resin cure. For instance, changes in the flow properties of the reaction mixture are often routinely recorded as a function of the reaction time during resin synthesis as a measure of the degree of conversion of raw materials into macromolecules or oligomers, by applying rheometry in an in-process environment. Typically, a small sample of the reaction mixture is by-passed, subjected to rheological measurement, and re-introduced into the bulk reactor. In a similar way, pH measurements, turbidimetric measurements, or other analyses are performed. Although rheometry may not always be suitable for following resin cure (especially in cases where there is a very rapid increase in viscosity after initiation of the cure) [1], the method can naturally, in principle, also be used in the subsequent processing of the thermosets, for instance in the curing of wood glue applied to wood specimens [2]. Similarly, pH changes during thermoset curing can be followed. Hence, an encyclopedic and comprehensive approach to presenting process control methods would proceed systematically according to the involved physical measurement principle. However, since only a very brief sketch of means for monitoring thermoset processing can be given here, only a small, personally biased selection of important methods and application examples is addressed in the following sections. These examples hopefully illustrate some of the general strategies and solutions to problems that are typically encountered when processing thermosets.
The properties of polyelectrolyte multilayers are ruled by the process parameters employed during self-assembly. This is the first study in which a design of experiment approach was used to validate and control the production of ultrathin polyelectrolyte multilayer coatings by identifying the ranges of critical process parameters (polyelectrolyte concentration, ionic strength and pH) within which coatings with reproducible properties (thickness, refractive index and hydrophilicity) are created. Mathematical models describing the combined impact of key process parameters on coatings properties were developed demonstrating that only ionic strength and pH affect the coatings thickness, but not polyelectrolyte concentration. While the electrolyte concentration had a linear effect, the pH contribution was described by a quadratic polynomial. A significant contribution of this study is the development of a new approach to estimate the thickness of polyelectrolyte multilayer nanofilms by quantitative rhodamine B staining, which might be useful in all cases when ellipsometry is not feasible due to the shape complexity or small size of the coated substrate. The novel approach proposed here overcomes the limitations of known methods as it offers a low spatial sampling size and the ability to analyse a wide area without restrictions on the chemical composition and shape of the substrate.
Preliminary results of homomorphic deconvolution application to surface EMG signals during walking
(2021)
Homomorphic deconvolution is applied to sEMG signals recorded during walking. Gastrocnemius lateralis and tibialis anterior signals were acquired according to the SENIAM recommendations. MUAP parameters like amplitude and scale were estimated, whilst the MUAP shape parameter was fixed. This provides a useful time-frequency representation of the sEMG signal. The estimation of the MUAP scale parameter was verified by extracting the mean frequency of the filtered EMG signal, derived from the scale parameter estimated with two different MUAP shape values.
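Homomorphic deconvolution rests on moving the signal into the cepstral domain, where the convolution of a MUAP shape with the innervation process becomes additive and the two can be separated by windowing ("liftering"). The following toy sketch, which is not the study's data or exact pipeline, computes the real cepstrum of a pulse train convolved with an assumed MUAP-like kernel:

```python
import numpy as np


def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.

    For x = h * e (convolution), log|X| = log|H| + log|E|, so the slowly
    varying shape contribution (low quefrency) separates additively from
    the impulse-train contribution (higher quefrency)."""
    spectrum = np.fft.fft(x)
    return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))


# Toy demonstration: a sparse firing train convolved with a short,
# damped-oscillation "MUAP-like" kernel (both invented for illustration).
rng = np.random.default_rng(1)
train = np.zeros(1024)
train[rng.choice(1024, 40, replace=False)] = 1.0
muap = np.exp(-np.arange(32) / 6.0) * np.sin(np.arange(32) / 2.0)
semg = np.convolve(train, muap, mode="same")

ceps = real_cepstrum(semg)
# The first few cepstral coefficients are dominated by the MUAP shape;
# liftering would keep (or discard) exactly this low-quefrency window.
low_quefrency = ceps[:16]
```

The small constant added inside the logarithm guards against spectral zeros; real implementations often use a minimum-phase reconstruction instead.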
Clay minerals play an increasingly important role as functional fillers and reinforcing materials for clay polymer nanocomposites (CPN) in advanced applications. Among the prerequisites necessary for polymer improvement by clay minerals are a homogeneous and stable distribution of the clay mineral throughout the CPN, good compatibility of the reinforcement with the matrix component and suitable processability. Typically, clay minerals are surface-modified with organic interface-active compounds like detergents or silanes to obtain favorable filler properties. They are incorporated into the polymer matrix using manufacturing equipment like extruders, batch reactors or other mixing machines. In order for the surface modification to survive the stresses and strains during incorporation, the modified clay minerals must display sufficient thermal and mechanical stability to retain the compatibilizing effect. In the present study, thermogravimetry was used in combination with isoconversional kinetic analysis to determine the thermal stability of a silane-modified clay mineral based on bentonite. These findings were compared with the stability of the same clay mineral modified only with a surfactant. It was found that silane modification leads to significantly improved thermal stability, which depends strongly on the type of silane employed.
The general conclusion of climate change studies is the necessity of eliminating net CO2 emissions in general and from the electric power systems in particular by 2050. The share of renewable energy is increasing worldwide, but due to the intermittent nature of wind and solar power, a lack of system flexibility is already hampering the further integration of renewable energy in some countries. In this study, we analyze if and how combinations of carbon pricing and power-to-gas (PtG) generation in the form of green power-to-hydrogen followed by methanation (which we refer to as PtG throughout) using captured CO2 emissions can provide transitions to deep decarbonization of energy systems. To this end, we focus on the economics of deep decarbonization of the European electricity system with the help of an energy system model. In different scenario analyses, we find that a CO2 price of 160 €/t (by 2050) is on its own not sufficient to decarbonize the electricity sector, but that a CO2 price path of 125 (by 2040) up to 160 €/t (by 2050), combined with PtG technologies, can lead to an economically feasible decarbonization of the European electricity system by 2050. These results are robust to higher than anticipated PtG costs.
The powder coating of veneered particle boards by the sequence electrostatic powder application - powder curing via hot pressing is studied in order to create high-gloss surfaces. To obtain an appealing aspect, veneer sheets were glued by heat and pressure on top of particle boards, and the resulting surfaces were used as carrier substrates for powder coat finishing. Prior to the powder coating, the veneered particle board surfaces were pre-treated by sanding to obtain good uniformity, and the boards were stored in a climate chamber at controlled temperature and humidity conditions to adjust an appropriate electrical surface resistance. Characterization of the surface texture was done by 3D microscopy. The electrical surface resistance was measured for the six veneers before and after their application on the particle board surface. A transparent powder top-coat was applied electrostatically onto the veneered particle board surface. Curing of the powder was done using a heated press at 130 °C for 8 min, and a smooth, glossy coating was obtained on the veneered surfaces. By applying different amounts of powder the coating thickness could be varied, and the optimum amount of powder was determined for each veneer type.
The wet chemical deposition of solution-processed transparent conducting oxides (TCO) provides an alternative, low-cost and economical deposition technique to realize large areas of conducting films. Since the price of the most common TCO, indium tin oxide, has risen enormously, aluminum zinc oxide (AZO) is attracting more and more interest as an alternative TCO. The optoelectronic properties of nanoparticle coatings strongly depend, besides the porosity of the coating, on the shape and size of the particles used. By using bigger or rod-shaped particles it is possible to minimize the number of grain boundaries, resulting in an improvement of the electrical properties, whereas particles bigger than 100 nm should not be used if highly transparent coatings are required, as these big particles scatter visible light and lower the transmittance of the coatings. In this work we present a simple method to synthesize AZO particles of different shapes and sizes but comparable electronic properties. We use a simple, well-reproducible polyol method for the synthesis and influence the shape and size of the particles by adding different amounts of water to the precursor solution. We show that the addition of aluminum as a dopant strongly hinders crystal growth, but the addition of water counteracts this, so that both spherical and rod-shaped particles can be obtained.
Due to the complexity of assembly processes, a high ratio of tasks is still performed by human workers. Short-cyclically changing work contents due to smaller lot sizes, especially in varied series assembly, increase both the need for information support and the risk of rising physical and psychological stress. The use of technical and digital assistance systems can counter these challenges. Through the integration of information and communication technology as well as collaborative assembly technologies, hybrid cyber-physical assembly systems will emerge. Widely established assembly planning approaches for digital and technical support systems in cyber-physical assembly systems are outlined and discussed with regard to synergies and delimitations of planning perspectives.
Monitoring heart rate and breathing is essential in understanding the physiological processes for sleep analysis. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can help make sleep monitoring more portable in someone's home. This study conducted a series of experiments to investigate the use of pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. Further experiments involved the addition of small rubber domes - transparent and black - that were glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 and 3.24 for respiration and heart rate, respectively. However, the addition of small rubber hemispheres did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 bpm for respiration rate and heart rate, respectively. The findings of this study suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep. These sensors provide a more comfortable and non-invasive method of sleep monitoring. However, the addition of small rubber domes did not significantly enhance the accuracy of the readings, indicating that it may not be a worthwhile addition to the pressure sensor system.
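A common way to extract both rates from such an under-bed pressure signal is band-pass filtering into a respiratory and a cardiac band, followed by peak counting. The sketch below uses a synthetic signal and an assumed 100 Hz sampling rate; it is a generic illustration, not the study's actual processing pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100  # Hz, assumed sampling rate of the under-bed pressure sensor


def rate_per_min(signal, low_hz, high_hz, fs=FS):
    """Estimate a periodic rate by band-pass filtering and counting peaks."""
    b, a = butter(2, [low_hz, high_hz], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)
    # Successive peaks must be at least one shortest-expected period apart.
    peaks, _ = find_peaks(filtered, distance=fs / high_hz)
    duration_min = len(signal) / fs / 60
    return len(peaks) / duration_min


# Synthetic 60 s recording: a 15 breaths/min respiratory component plus a
# weaker 72 bpm cardiac component and a little sensor noise.
t = np.arange(0, 60, 1 / FS)
pressure = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
pressure += 0.01 * np.random.default_rng(2).standard_normal(t.size)

rr = rate_per_min(pressure, 0.1, 0.5)   # respiratory band
hr = rate_per_min(pressure, 0.8, 2.0)   # cardiac band
```

Real ballistocardiographic signals are far less clean than this synthetic one; in practice the cardiac band also requires motion-artifact rejection before peak counting is reliable.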
Context: Companies increasingly strive to adapt to market and ecosystem changes in real time. Gauging and understanding team performance in such changing environments present a major challenge.
Objective: This paper aims to understand how software developers experience the continuous adaptation of performance in a modern, highly volatile environment using Lean and Agile software development methodology. This understanding can be used as a basis for guiding formation and maintenance of high-performing teams, to inform performance improvement initiatives, and to improve working conditions for software developers.
Method: A qualitative multiple-case study using thematic interviews was conducted with 16 experienced practitioners in five organisations.
Results: We generated a grounded theory, Performance Alignment Work, showing how software developers experience performance. We found 33 major categories of performance factors and relationships between the factors. A cross-case comparison revealed similarities and differences between different kinds and different sizes of organisations.
Conclusions: Based on our study, software teams are engaged in a constant cycle of interpreting their own performance and negotiating its alignment with other stakeholders. While differences across organisational sizes exist, a common set of performance experiences is present despite differences in context variables. Enhancing performance experiences requires integration of soft factors, such as communication, team spirit, team identity, and values, into the overall development process. Our findings suggest a view of software development and software team performance that centres around behavioural and social sciences.
Rare but extreme events, such as pandemics, terror attacks, and stock market collapses, pose a risk that could undermine cooperation in societies and groups. We extend the public goods game (PGG) to investigate the relationship between rare but extreme external risks and cooperation in a laboratory experiment. By incorporating risk as an external random variable in the PGG, independent of the participants’ contributions, we preserve the economic equilibrium of non-cooperation in the original game. Furthermore, we examine whether cooperation can be restored by the relatively simple intervention of informing about countermeasures while keeping the actual risk constant. Our experimental results reveal that on average extreme risks indeed decrease contributions by about 20%; however, countermeasure information increases contributions by about 10%. Specifically, in the first interactions, cooperation levels can even reach those observed in the riskless baseline. Our results suggest that countermeasure information could help reinforce social cohesion and resilience in the face of rare but extreme risks.
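The incentive structure of the linear PGG with an external shock can be sketched as follows; the endowment, multiplier, group size, and loss fraction are illustrative values, not necessarily the experiment's parameters:

```python
import numpy as np


def pgg_payoffs(contributions, endowment=20, multiplier=1.6,
                risk_prob=0.0, loss_fraction=0.0, rng=None):
    """Payoffs in a linear public goods game with an external extreme risk.

    The shock is an external random variable, independent of the players'
    contributions, so full defection remains the economic equilibrium:
    the marginal per-capita return multiplier/n stays below 1."""
    c = np.asarray(contributions, dtype=float)
    n = c.size
    payoffs = endowment - c + multiplier * c.sum() / n
    rng = rng or np.random.default_rng()
    if rng.random() < risk_prob:       # rare shock hits all players alike
        payoffs *= (1 - loss_fraction)
    return payoffs


# Four players, no shock realized: free riding always beats contributing,
# since each contributed unit returns only 1.6 / 4 = 0.4 to the contributor.
p = pgg_payoffs([20, 20, 20, 0], risk_prob=0.0)
```

Because the shock multiplies everyone's payoff identically and independently of behavior, it scales but never reorders payoffs, which is why the equilibrium of non-cooperation is preserved as the abstract states.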
A systematic study using a central composite design of experiments (DoE) was performed on the oxygen plasma surface modifications of two different polymers—Pellethane 2363-55DE, which is a polyurethane, and vinyltrimethoxysilane-grafted ethylene-propylene (EPR-g-VTMS), a cross-linked ethylene-propylene rubber. The impacts of four parameters—gas pressure, generator power, treatment duration, and process temperature—were assessed, with static contact angles and calculated surface free energies (SFEs) as the main responses in the DoE. The plasma effects on the surface roughness and chemistry were determined using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Through the sufficiently accurate DoE model evaluation, oxygen gas pressure was established as the most impactful factor, with the surface energy and polarity rising with falling oxygen pressure. Both polymers, though different in composition, exhibited similar modification trends in surface energy rise in the studied system. The SEM images showed a rougher surface topography after low pressure plasma treatments. XPS and subsequent multivariate data analysis of the spectra established that higher oxidized species were formed with plasma treatments at low oxygen pressures of 0.2 mbar.
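A central composite design and the quadratic response-surface model it supports can be sketched for the two-factor case; the factor levels, the synthetic "contact angle" response, and its coefficients below are invented illustrations, not the study's data:

```python
from itertools import product

import numpy as np


def ccd_two_factor(alpha=np.sqrt(2), n_center=3):
    """Coded points of a two-factor central composite design:
    2^2 factorial corners, 4 axial ("star") points at +/- alpha,
    plus replicated center points for pure-error estimation."""
    corners = list(product([-1, 1], repeat=2))
    axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
    center = [(0.0, 0.0)] * n_center
    return np.array(corners + axial + center)


X = ccd_two_factor()
x1, x2 = X[:, 0], X[:, 1]

# Full quadratic model y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2,
# fitted by least squares to a synthetic response (true b1=-12, b2=5, b11=3).
design = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
rng = np.random.default_rng(4)
y = 80 - 12 * x1 + 5 * x2 + 3 * x1**2 + rng.normal(0, 0.5, x1.size)
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
```

The axial points are what let a CCD estimate the pure quadratic terms that a plain two-level factorial cannot; the four-factor design used in the study extends the same construction to 2^4 corners and 8 axial points.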
The use of additive manufacturing technologies for industrial production is constantly growing. This technology differs from known production procedures. The areas of scheduling, detailed planning and sequence planning are particularly important for additive production due to the long print times and the flexible use of the production area. Therefore, production-relevant variables are considered and used for the production planning and control (PPC) of additive manufacturing machines. For this purpose, an optimization model is presented which achieves a time-oriented build space utilization. In the implementation, a nesting algorithm is used to check the combinability of different models for each individual print job.
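The combinability check performed by a nesting algorithm can be illustrated with a simple greedy shelf-nesting heuristic over rectangular part footprints. This is a generic sketch under assumed plate dimensions, not the optimization model presented in the paper:

```python
def shelf_nest(parts, plate_w, plate_d):
    """Greedy shelf nesting: can the given part footprints (w, d) share one
    build plate? Parts are sorted by decreasing depth and placed left to
    right in rows ("shelves"); a new row opens when a part no longer fits.
    Returns a list of (x, y, w, d) placements, or None if infeasible."""
    placements, x, y, row_d = [], 0.0, 0.0, 0.0
    for w, d in sorted(parts, key=lambda p: -p[1]):
        if x + w > plate_w:                # current shelf full: open a new one
            x, y, row_d = 0.0, y + row_d, 0.0
        if x + w > plate_w or y + d > plate_d:
            return None                    # parts cannot share this print job
        placements.append((x, y, w, d))
        x += w
        row_d = max(row_d, d)
    return placements


# Example: three part footprints (in mm) on an assumed 200 x 200 mm plate.
job = shelf_nest([(120, 80), (70, 80), (150, 60)], 200, 200)
```

A PPC system would call such a feasibility check for every candidate grouping of orders and combine it with the time-oriented objective (print duration, due dates) to decide which models to batch into one build job.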
This paper generalizes the theory of policy uncertainty with the new literature on rational inattention. First, the model demonstrates that inattention depends on the signal variance and the policy parameter. Second, I discover a novel trade-off showing that a policy instrument mitigates attention. Third, the policy instrument is non-linear and reciprocal to both the size and the variance of the signal. The unifying theory creates new implications for economic theory and public policy alike.
Conventional production systems are evolving, through cyber-physical systems and application-oriented approaches of AI, more and more into "smart" production systems, which are characterized among other things by a high level of communication and integration of the individual components. The exchange of information between the systems is usually oriented only towards the data content, with semantics considered only implicitly. The adaptability required by external and internal influences demands the integration of new components or the redesign of existing ones. Through an open, application-oriented ontology, the information and communication exchange is extended by explicit semantic information. This enables better integration of new components and easier reconfiguration of existing ones. The developed ontology and the derived application and use of the semantic information are evaluated by means of a practical use case.
Wave-like differential equations occur in many engineering applications. Here the engineering setup is embedded into the framework of functional analysis of modern mathematical physics. After an overview, the Hilbert space approach to free Euler–Bernoulli bending vibrations of a beam in one spatial dimension is investigated. We analyze in detail the corresponding positive, self-adjoint differential operators of 4th order associated with the boundary conditions in statics. A comparison with free string vibrations is outlined.
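The contrast between the 4th-order beam operator and the 2nd-order string operator can be made concrete for simply supported ends. The following are standard textbook formulas, not taken from this article:

```latex
% Euler--Bernoulli beam vs. string (standard forms, simply supported ends,
% boundary conditions v(0)=v(L)=v''(0)=v''(L)=0 for the beam):
\begin{align*}
  \text{beam:}   \quad & \rho A\, u_{tt} + E I\, u_{xxxx} = 0,
    & v_n(x) &= \sin\frac{n\pi x}{L},
    & \omega_n &= \Bigl(\frac{n\pi}{L}\Bigr)^{2}\sqrt{\frac{EI}{\rho A}},\\
  \text{string:} \quad & \rho A\, u_{tt} - T\, u_{xx} = 0,
    & v_n(x) &= \sin\frac{n\pi x}{L},
    & \omega_n &= \frac{n\pi}{L}\sqrt{\frac{T}{\rho A}}.
\end{align*}
```

The same eigenfunctions diagonalize both operators, but the beam's eigenfrequencies grow quadratically in the mode number while the string's grow linearly, which is the dispersive character that distinguishes bending waves from classical wave swinging.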
On the design of an urban data and modeling platform and its application to urban district analyses
(2020)
An integrated urban platform is the essential software infrastructure for smart, sustainable and resilient city planning, operation and maintenance. Today such platforms are mostly designed to handle and analyze large and heterogeneous urban data sets from very different domains. Modeling and optimization functionalities are usually not part of the software concepts. However, such functionalities are considered crucial by the authors for developing transformation scenarios and optimizing smart city operation. An urban platform needs to handle multiple scales in the time and spatial domains, ranging from long-term population and land-use change to hourly or sub-hourly matching of renewable energy supply and urban energy demand.
The fiber deformations of once-dried bleached and never-dried unbleached kraft pulps were studied with respect to their behavior in high- and low-consistency refining. The pulps were stained with Congo red to experimentally highlight areas where the arrangement of the fibrils was altered by refining, such as dislocated zones or slip planes. The stained fibers were analyzed with a conventional Metso Fiberlab but also with a novel prototype measurement device utilizing a color imaging setup. The local intensity of the stain in the fiber was expressed as a degree of overall damage (overall fiber damage index, OFDI). The rewetted zero-span tensile index (RWZSTI) was used to verify the OFDI with respect to the pulp strength. High-consistency refining resulted in a clear increase in the number of kinks, which negatively influenced the pulp strength. The OFDI, which was used to detect the intensity of local fiber defects, also responded accordingly: a higher OFDI resulted in a lower pulp strength. Low-consistency refining removed a significant number of kinks and resulted in an increase in fiber swelling. A slight increase in fibrillation and a significant increase in flake-like fines were also observed. The OFDI, however, was not reduced in low-consistency refining, as would be expected from the removal of less severe dislocations. One reason proposed here is that low-consistency refining created new fiber pores that allowed the dye to penetrate into the fiber wall, similarly to what happens in the zones of the dislocations.
Respiratory diseases are leading causes of death and disability in the world. The recent COVID-19 pandemic also affects the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and a clinical environment. Most of the techniques used to date have also been invasive or expensive.
Some research groups are developing hardware devices and techniques to make possible a non-invasive or even remote respiratory sound acquisition. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, like microphones, wearables, or Internet of Things devices, that can be used in this scope.
Some interesting applications have been found. Some devices make sound acquisition easier in a clinical environment, while others make daily monitoring possible outside that environment. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
This paper is concerned with the study, optimization and control of the moisture sorption kinetics of agricultural products at temperatures typically found in processing and storage. A nonlinear autoregressive with exogenous inputs (NARX) neural network was developed to predict the moisture sorption kinetics, and consequently the equilibrium moisture contents, of shiitake mushrooms (Lentinula edodes (Berk.) Pegler) over a wide range of relative humidity and different temperatures. Sorption kinetic data of mushroom caps were separately generated using a continuous, gravimetric dynamic vapour sorption analyser at temperatures of 25–40 °C over a stepwise variation of relative humidity ranging from 0 to 85%. The predictive power of the neural network was based on physical data, namely relative humidity and temperature. The model was fed with a total of 4500 data points divided into three subsets: 70% of the data was used for training, 15% for testing and 15% for validation, randomly selected from the whole dataset. The NARX neural network was capable of precisely simulating the equilibrium moisture contents of mushrooms derived from the dynamic vapour sorption kinetic data throughout the entire range of relative humidity.
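The 70/15/15 random partitioning described above can be sketched in a few lines (a minimal illustration using only the standard library; the function name and the fixed seed are our own assumptions, not the authors' code):

```python
import random

def split_dataset(n_samples, train_frac=0.70, test_frac=0.15, seed=0):
    """Randomly partition sample indices into training, testing and
    validation subsets (70/15/15 by default), mirroring the split
    applied to the 4500-point sorption dataset."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # reproducible shuffle
    n_train = round(train_frac * n_samples)
    n_test = round(test_frac * n_samples)
    return (idx[:n_train],                    # training indices
            idx[n_train:n_train + n_test],    # testing indices
            idx[n_train + n_test:])           # validation indices
```

For 4500 points this yields 3150/675/675 disjoint index sets, which can then address the relative-humidity and temperature records.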
Nanocoatings based on sol–gel processes are presented as a suitable tool to modify polymer-based materials. The main focus is set on textiles as the most common polymer materials. The chapter shows which types of functionalization can be achieved by modified sol–gel processes. A suitable categorization of the functions is also given and related to common applications. Special focus is placed on the antimicrobial, UV-protective, and flame-retardant functional properties. The concept of bifunctional coatings is discussed, and in particular the combination of water-repellent and antistatic functions is presented.
Thin radio-frequency magnetron sputter-deposited nano-hydroxyapatite (HA) films were prepared on the surface of a Fe-tricalcium phosphate (Fe-TCP) bioceramic composite, which was obtained using a conventional powder injection moulding technique. The obtained nano-hydroxyapatite-coated Fe-TCP biocomposites (nano-HA-Fe-TCP) were studied with respect to their chemical and phase composition, surface morphology, water contact angle, surface free energy and hysteresis. The deposition process resulted in a homogeneous, single-phase HA coating. The ability of the surface to support adhesion and the proliferation of human mesenchymal stem cells (hMSCs) was studied using biological short-term tests in vitro. The surface of the uncoated Fe-TCP bioceramic composite showed an initial cell attachment after 24 h of seeding, but adhesion, proliferation and growth did not persist during 14 days of culture. However, the nano-HA-Fe-TCP surfaces allowed cell adhesion and proliferation during the entire 14 days. The deposition of the nano-HA films on the Fe-TCP surface resulted in a higher surface energy, improved hydrophilicity and better biocompatibility compared with the surface of the uncoated Fe-TCP. Furthermore, it is suggested that an increase in the polar component of the surface energy was responsible for the enhanced cell adhesion and proliferation in the case of the nano-HA-Fe-TCP biocomposites.
The isothermal curing of melamine resin is investigated by in-line infrared spectroscopy at different temperatures. The infrared spectra are decomposed into time courses of characteristic spectral patterns using multivariate curve resolution (MCR). It was found that, depending on the applied curing temperature, melamine films with different spectral fingerprints and correspondingly different chemical network structures are formed. The network structures of fully cured resin films are specific to the applied curing temperature and cannot simply be compensated for by changes in the curing time. For industrial curing processes, this means that the curing temperature is the main system-determining factor at a constant M:F ratio. However, different MF resin networks can be obtained specifically from one and the same melamine resin by suitable selection of the curing time and temperature profiles to design resin functionality. The spectral fingerprints after short as well as long curing times reflect the fundamental differences between the thermoset networks that can be obtained with industrial short-cycle and multi-daylight presses.
Teaching at assembly workstations in production at SMEs (small and medium-sized enterprises) often does not take place at all or only insufficiently. In addition to the lack of technical content, there are also, from an ergonomic point of view, incorrect movement sequences that "untrained" people usually acquire automatically. An AI-based approach is used to analyze a defined workflow for a specific assembly scope with regard to the behavior of several employees. Based on these different behaviors, the AI gives feedback on the points in time, work steps and movements at which particularly dangerous incorrect postures occur. Motion capturing and digital human model simulation, in combination with the results of the AI, define the optimized workflow. Individual employees can be trained directly, since the AI identifies their most serious incorrect postures and provides them with a direct comparison of their "wrong" posture and an "easy-on-the-joints" posture. With the assistance of various test persons, the AI can conduct a study in which the most frequently occurring incorrect postures are identified. This can be done in general or tailored to specific groups of people (e.g. "People over 1.90 m tall must be particularly careful not to make the following mistake..."). The approach will be tested and validated at the Werk150, the factory of the ESB Business School on the campus of Reutlingen University. The newly gained knowledge will subsequently be used for training in SMEs.
Model-based hearing diagnosis based on wideband tympanometry measurements utilizing fuzzy arithmetic
(2019)
Today's audiometric methods for the diagnosis of middle-ear disease are often based on a comparison of measurements with standard curves that represent the statistical range of normal hearing responses. Because of large inter-individual variances in the middle ear, especially in wideband tympanometry (WBT), specificity and quantitative evaluation are greatly restricted. A new model-based approach could transform today's predominantly qualitative hearing diagnostics into a quantitative, tailored, patient-specific diagnosis by evaluating WBT measurements with the aid of a middle-ear model. For this particular investigation, a finite element model of a human ear was used. It consisted of an acoustic ear canal and tympanic cavity model, a middle ear with detailed nonlinear models of the tympanic membrane and annular ligament, and a simplified inner-ear model. This model has made it possible to identify pathologies from measurements by analyzing the parameters through sensitivity studies and parameter clustering. Uncertainties due to lack of knowledge, subjectivity in the numerical implementation and model simplification are taken into account by the application of fuzzy arithmetic. The most confident parameter set can be determined by applying an inverse fuzzy method to the measurement data. The principle and the benefits of this model-based approach are illustrated by the example of a two-mass oscillator, and also by the simulation of the energy absorbance of an ear with malleus fixation, where the introduced parameter changes can be determined quantitatively through system identification.
Human bestrophin-1 (hBest1) is a transmembrane Ca2+-dependent anion channel associated with the transport of Cl−, HCO3− ions, γ-aminobutyric acid (GABA) and glutamate (Glu), and with the regulation of retinal homeostasis. Its mutant forms cause retinal degenerative diseases, defined as bestrophinopathies. Using both physicochemical approaches (surface pressure/mean molecular area (π/A) isotherms, hysteresis and compressibility moduli of hBest1/sphingomyelin (SM) monolayers, and Brewster angle microscopy (BAM) studies) and biological approaches (detergent membrane fractionation, Laurdan (6-dodecanoyl-N,N-dimethyl-2-naphthylamine) and immunofluorescence staining of stably transfected MDCK-hBest1 and MDCK II cells), we report:
1) Ca2+, Glu and GABA interact with binary hBest1/SM monolayers at 35 °C, resulting in changes in hBest1 surface conformation, structure, self-organization and surface dynamics. The process of mixing in hBest1/SM monolayers is spontaneous and the effect of protein on binary films was defined as “fluidizing”, hindering the phase-transition of monolayer from liquid-expanded to intermediate (LE-M) state;
2) in stably transfected MDCK-hBest1 cells, bestrophin-1 was distributed between detergent-resistant (DRM) and detergent-soluble membranes (DSM) at up to 30 % and 70 %, respectively; in live cells, hBest1 was visualized in both liquid-ordered (Lo) and liquid-disordered (Ld) fractions, with protein association quantified at up to 35 % with Lo and 65 % with Ld. Our results indicate that the spontaneous miscibility of hBest1 and SM is a prerequisite for the protein's diverse interactions with membrane domains, different structural conformations and biological functions.
The present work proposes the use of modern ICT technologies, such as smartphones, NFC, internet, and web technologies, to help patients carry out their therapies. The implemented system provides a calendar with reminders for medication intake, ensures drug identification through NFC, and allows healthcare staff and family members to remotely check and manage the therapy in real time. The system also provides centralized information on the patient's therapeutic situation, helpful in choosing new compatible therapies.
The fifth mobile communications generation (5G) offers the deployment scenario of licensed 5G standalone non-public networks (NPNs). Standalone NPNs are locally restricted 5G networks based on 5G New Radio technology that are fully isolated from public networks. NPNs operate on their own dedicated core network and offer organizations high data security and customizability through intrinsic network control. Especially in networked and cloud manufacturing, 5G is seen as a promising enabler for delay-sensitive applications such as autonomous mobile robots and robot motion control based on the tactile internet, which requires wireless communication with deterministic traffic and strict cycle times. However, currently available industrial standalone NPNs do not meet the performance parameters defined in the 5G specification and standardization process. Current research lacks performance measurements of the download and upload rates and time delays of 5G standalone-capable end-devices in NPNs with currently available software and hardware in industrial settings. Therefore, this paper presents initial measurements of the data rate and the round-trip delay in standalone NPNs with various end-devices to generate a first performance benchmark for 5G-based applications. In addition, five end-devices are compared to gain insights into the performance of currently available standalone-capable 5G chipsets. To validate the data rate, three locally hosted measurement methods, namely iPerf3, LibreSpeed and OpenSpeedTest, are used. Locally hosted Ping and LibreSpeed were executed to validate the time delay. The 5G standalone NPN of Reutlingen University uses licensed frequencies between 3.7 and 3.8 GHz and serves as the testbed for this study.
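As a rough illustration of how raw Ping round-trip samples from such a testbed might be reduced to reportable benchmark figures, a short sketch follows (the function name, the jitter definition as the mean absolute difference of consecutive samples, and the sample values below are our assumptions, not the paper's method):

```python
def summarize_rtt(samples_ms):
    """Reduce a list of round-trip-time samples (in ms) to the figures
    typically reported in a delay benchmark: min/avg/max plus a simple
    jitter estimate (mean absolute difference of consecutive samples,
    analogous to what the ping utility reports as mdev-style variation)."""
    if len(samples_ms) < 2:
        raise ValueError("need at least two samples")
    # differences between consecutive probes capture delay variation
    diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
    return {
        "min": min(samples_ms),
        "avg": sum(samples_ms) / len(samples_ms),
        "max": max(samples_ms),
        "jitter": sum(diffs) / len(diffs),
    }
```

For example, `summarize_rtt([10.0, 12.0, 11.0, 13.0])` yields an average of 11.5 ms with a jitter of about 1.67 ms.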
The use of deep learning models with medical data is becoming more widespread. However, although numerous models have shown high accuracy in medical tasks such as medical image recognition (e.g. radiographs), there are still many obstacles to deploying these models in a real healthcare environment. This article presents a series of basic requirements that must be taken into account when developing deep learning models for biomedical time series classification tasks, with the aim of facilitating the subsequent deployment of the models in healthcare. These requirements range from the correct collection of data to the existing techniques for correctly explaining the results obtained by the models. This emphasis reflects one of the main reasons why deep learning models are not more widely used in healthcare settings: their lack of clarity in explaining their decision-making.
The evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the many available body signals, we chose two, namely the photoplethysmographic (optically detected subcutaneous blood volume) and tri-axial acceleration signals, which are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to that of two recent algorithms (particle Bernstein and a Monte Carlo-based regression) in terms of both accuracy and processing time. A data preprocessing phase was also included to reduce the problem size and improve the performance of the machine learning procedures, and a detailed analysis of the compression strategy and its results is presented.
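One of the baseline classifiers named above, k-nearest neighbor, can be sketched in a few lines of pure Python (a toy majority-vote implementation over Euclidean distance; this is not the authors' code and it omits their preprocessing and compression steps):

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples, using Euclidean distance - the classical k-NN
    scheme used as a baseline in activity recognition studies."""
    # pair each training sample with its distance to x, then sort
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    # count labels of the k closest samples and return the majority
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```

On windowed accelerometer/PPG features, each `x` would be one feature vector and the labels the activity classes.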
Machine failures’ consequences – a classification model considering ultra-efficiency criteria
(2023)
To strive for sustainable production, maintenance has to evaluate possible machine failure consequences not just economically but holistically. Approaches such as the ultra-efficiency factory consider energy, material, human/staff, emissions, and organization as optimization dimensions. These ultra-efficiency dimensions can be used to analyze holistically not only the respective machine failure but also its effects on the entire production system. This paper presents an easy-to-use, questionnaire-based method for assessing the failure consequences of a machine malfunction in a production system along the ultra-efficiency dimensions. The method was validated in battery production.
Companies need a strategy to adjust people's performance capabilities to new requirements and to guarantee employability in the world of work. The current changes in the logistics environment are a good example: new services and processes close to production are regularly taken into the portfolios of logistics enterprises, so the daily tasks of skilled workers are changing continuously.
LOPEC aims at developing and offering specially tailored training in lean logistics and the required basic skills for skilled workers at the shopfloor level. The know-how needed for today's challenges in logistics will be transferred. Another aspect of LOPEC is the development and use of a personal excellence self-assessment that allows a person to assess, and thus improve, his or her own level of maturity in employability skills. LOPEC thus aims at people enhancement as the entry ticket to lifelong continuous learning by increasing the maturity level of personal logistics excellence. A common European view of “logistics personal excellence” for skilled workers will ensure that the final product is an open product using international, pan-European validated standards. As results, LOPEC will provide training modules for post-secondary education in the area of lean logistics and the required basic skills, and will offer transparency of personal excellence through a personal self-assessment software solution showing the personal maturity level of hard and soft skills at any time. It can be used as an innovative tool for monitoring personal lifelong learning routes as well as, within companies, as a strategic tool for human resource development.
Learning factories on demand
(2021)
Learning factories are research and learning environments that demonstrate new concepts and technologies for industry in a practical setting. The interaction between physical and virtual components is a central aspect. The mediation and presentation usually take place directly in the learning factory and are thus limited in time and with respect to the user group. A learning factory on demand can be provided by dividing and virtualizing the individual components via containers and microservices. This enables local operation as well as operation in hybrid-cloud or cloud systems. Physical components can be mapped either through standardized interfaces or through suitable emulators. Using the example of the learning factory at Reutlingen University (Werk150), it is shown how different use cases can be made available utilizing software-based orchestration, thus promoting broader and more independent teaching.
In the last decade, numerous learning factories for education, training, and research have been built up in industry and academia. In recent years, learning factory initiatives were elevated from a local to a European and then to a worldwide level. Since 2014, the CIRP Collaborative Working Group (CWG) on Learning Factories has enabled a lively exchange on the topic "Learning factories for future-oriented research and education in manufacturing". In this paper, results of the discussions inside the CWG are presented. First, what is meant by the term learning factory is outlined. Second, based on this definition, a description model (morphology) for learning factories is presented. The morphology covers the most relevant characteristics and features of learning factories in seven dimensions. Third, following the morphology, the actual variance of learning factory manifestations is shown in six learning factory application scenarios ranging from industrial training over education to research. Finally, future prospects of the learning factory concept are presented.
Learning factories present a promising environment for education, training and research, especially in manufacturing-related areas, which are a main driver of wealth creation in any nation. While numerous learning factories have been built in industry and academia in the last decades, a comprehensive scientific overview of the topic is still missing. This paper intends to close this gap by establishing the state of the art of learning factories. The motivations, historical background, and didactic foundations of learning factories are outlined. Definitions of the term learning factory and the corresponding morphological model are provided. An overview of existing learning factory approaches in industry and academia is given, showing the broad range of different applications and varying contents. The state of the art of learning factory curricula design and their use to enhance learning and research, as well as their potentials and limitations, are presented. Conclusions and an outlook on further research priorities are offered.
Herein, biochar from biomass residues is demonstrated as an active material for the catalytic cracking of waste motor oil into diesel-like fuels. In particular, alkali-treated rice husk biochar showed high activity, with a 250% increase in the kinetic constant compared to thermal cracking. It also showed better activity than previously reported synthetic materials. Moreover, a much lower activation energy for the cracking process (185.77 to 293.48 kJ/mol) was obtained. According to the materials characterization, the catalytic activity was related more to the nature of the biochar's surface than to its specific surface area. Finally, the liquid products complied with all the physical properties defined by international standards for diesel-like fuels, containing hydrocarbon chains between C10 and C27 similar to those obtained in commercial diesel.
Cyber-Physical Production Systems increasingly use semantic information to meet the increased flexibility requirements. Ontologies are often used to represent and use this semantic information. Existing systems focus on mapping knowledge and less on the exchange with other relevant IT systems (e.g., ERP systems) in which crucial, often implicit, semantic information is contained. This article presents an approach that enables the exchange of semantic information via adapters. The approach is demonstrated in a use case utilizing an MES and an ERP system.
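As a purely illustrative sketch of the adapter idea, the following maps a flat ERP record onto subject-predicate-object triples so that implicit semantics become explicit for an ontology-based system (all field names, the `ex:` prefix, and the function itself are hypothetical; the paper's actual adapter design is not shown here):

```python
def erp_to_triples(order, base="ex:"):
    """Hypothetical adapter: translate a flat ERP production-order
    record into RDF-style triples. The vocabulary ('ProductionOrder',
    'hasProduct', 'hasQuantity') is an illustrative assumption."""
    subject = f"{base}order/{order['id']}"
    return [
        (subject, "rdf:type", f"{base}ProductionOrder"),
        (subject, f"{base}hasProduct", f"{base}product/{order['product']}"),
        (subject, f"{base}hasQuantity", str(order["quantity"])),
    ]
```

An analogous adapter on the MES side would emit triples against the same ontology, which is what makes the cross-system exchange possible.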
Introducing continuous experimentation in large software-intensive product and service organisations
(2017)
Software development in highly dynamic environments imposes high risks on development organizations. One such risk is that the developed software may be of only little or no value to customers, wasting the invested development effort. Continuous experimentation, as an experiment-driven development approach, may reduce such development risks by iteratively testing product and service assumptions that are critical to the success of the software. Although several experiment-driven development approaches are available, there is little guidance on how to introduce continuous experimentation into an organization. This article presents a multiple-case study that aims at better understanding the process of introducing continuous experimentation into an organization with an already established development process. The results from the study show that companies are open to adopting such an approach and to learning throughout the introduction process. Several benefits were obtained, such as reduced development effort, deeper customer insights, and better support for development decisions. Challenges included complex stakeholder structures, difficulties in defining success criteria, and building experimentation skills. Our findings indicate that organizational factors may limit the benefits of experimentation. Moreover, introducing continuous experimentation requires fundamental changes in how companies operate, and a systematic introduction process can increase the chances of a successful start.
The interaction between lipid bilayers in water has been intensively studied over the last decades. Osmotic stress has been applied to evaluate the forces between two approaching lipid bilayers in aqueous solution. The force–distance relation between lipid mono- or bilayers deposited on mica sheets has also been measured using a surface force apparatus (SFA). Lipid-stabilised foam films offer another possibility to study the interactions between lipid monolayers. These films can be prepared comparatively easily and with very good reproducibility. Foam films usually consist of two adsorbed surfactant monolayers separated by a layer of the aqueous solution from which the film is created. Their thickness can be conveniently measured using microinterferometric techniques. Studies with foam films deliver valuable information on the interactions between lipid membranes, and especially on their stability and permeability. Representing inverse black lipid membranes (BLM), foam films supply information about the properties of lipid self-organisation in bilayers. The present paper summarises results on microscopic lipid-stabilised foam films obtained by measuring their thickness and contact angle. Most of the presented results concern foam films prepared from dispersions of the zwitterionic lipid 1,2-dimyristoyl-sn-glycero-3-phosphorylcholine (DMPC) and some of its mixtures with the anionic lipid 1,2-dimyristoyl-sn-glycero-3-[phospho-rac-(1-glycerol)] (DMPG).
The strength of the long-range and short-range forces between the lipid layers is discussed. The van der Waals attractive force is calculated. The electrostatic repulsive force is estimated from experiments at different electrolyte concentrations (NaCl, CaCl2) or by modification of the electrostatic double-layer surface potential through incorporating charged lipids in the lipid monolayers. The short-range interactions are studied and modified using small carbohydrates (fructose and sucrose), ethanol (EtOH) or dimethylsulfoxide (DMSO). Some results are compared with the structure of lipid monolayers deposited at the liquid/air interface (monolayers spread in a Langmuir trough), which are among the most studied biomembrane model systems. The comparison between the film thickness and the free energy of film formation is used to estimate the contributions of the different components of the disjoining pressure to the total interaction in the film and their dependence on the composition of the film-forming solution.
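The van der Waals component mentioned above is commonly computed for a flat film of thickness h as Pi_vdW = -A_H/(6*pi*h^3). A minimal sketch follows (the default Hamaker constant is only an order-of-magnitude value for aqueous films, not the one used in this study):

```python
import math

def vdw_disjoining_pressure(h, hamaker=3.7e-20):
    """Van der Waals component of the disjoining pressure (Pa) of a
    flat film of thickness h (m): Pi = -A_H / (6*pi*h**3). Negative
    values indicate attraction between the film surfaces. The default
    Hamaker constant A_H is an illustrative value for aqueous films."""
    return -hamaker / (6.0 * math.pi * h ** 3)
```

The strong 1/h^3 dependence is why this attractive term dominates only at small film thicknesses; halving h increases its magnitude eightfold.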