Adoption of artificial intelligence (AI) has risen sharply in recent years, but many firms fail to realise the expected benefits or even terminate projects before completion. While a number of previous studies highlight challenges in AI projects, the critical factors that lead to project failure are mostly unknown. The aim of this study is therefore to identify distinct factors that are critical for the failure of AI projects. To address this, interviews with experts in the field of AI from different industries are conducted and the results are analyzed using qualitative analysis methods. The results show that both organizational and technological issues can cause project failure. Our study contributes to knowledge by reviewing previously identified challenges in terms of their criticality for project failure based on new empirical data, as well as by identifying previously unknown factors.
Consistent supply chain management across all levels of value creation is a common approach in the industrial sector. Implementing it in agricultural processes, however, requires a rethinking of the supply chain concept. The reasons are the heuristically characterized processes, the stochastic environmental conditions, the mobility of the production facilities and the low division of labour.
In this paper we discuss how concepts of innovative supply chain management from Industrie 4.0 could not only deliver a way to overcome these problems but also provide the foundation for the development of new forms of work and business models for Farming 4.0.
Haptic softness is a central product attribute for many fabric-related retailers. Can those retailers use music, an easy-to-implement in-store atmospheric cue, to influence consumers' perception of this central product attribute? Across four studies, this research shows that high (vs. low) music softness enhances consumers' haptic softness perceptions. We argue that this cross-modal effect occurs owing to a transfer of softness-related associations from the auditory to the haptic modality. To better inform retail practice, we examine three managerially relevant boundary conditions at the product and store levels.
Polyethylene glycol (PEG) is a widely used modification for drug delivery systems. It reduces undesired interactions with biological components, prevents aggregation of complexes, and serves as a hydrophilic linker of ligands for targeted drug delivery. However, PEGylation can also lead to undesired changes in the physicochemical characteristics of chitosan/siRNA nanoplexes and hamper gene silencing.
To address this conflicting issue, PEG-chitosan copolymers were synthesized with stepwise increasing degrees of PEG substitution (1.5% to 8.0%). The subsequently formed PEG-chitosan/siRNA nanoplexes were characterized physicochemically and biologically. The results showed that low degrees of chitosan PEGylation did not affect nanoplex stability and density. However, higher PEGylation degrees reduced nanoplex size and charge, as well as cell uptake and final siRNA knockdown efficiency.
Therefore, we recommend fine-tuning of PEGylation ratios to generate PEG-chitosan/siRNA delivery systems with maximum bioactivity. The degree of PEGylation for chitosan/siRNA nanoplexes should be kept low in order to maintain optimal nanoplex efficiency.
The sol-gel approach offers a new class of flame retardants with high potential for textile applications. Pure inorganic sol-gel systems do, however, typically not provide an effect sufficient for self-extinguishing behavior on their own. We therefore employed compounds with nitrogen- and phosphorus-containing groups. Especially the combination of compounds with both elements, exploiting their synergism, is promising for the aim of finding well-applicable, environmentally friendly, halogen-free flame retardants. In our approach, the sol-gel network ensured on the one hand the link to the textile as a nonflammable binder. On the other hand, the sol-gel-based networks modified with functional groups containing nitrogen provided flame retardancy. In this way, a flame retardant finishing for textiles could be obtained by simple finishing techniques such as padding. Besides a characterization with various flame tests (e.g., according to EN ISO 15025, protective clothing), we used a combination of cone calorimetry, thermogravimetry coupled with infrared spectroscopy analysis and scanning electron microscopy to analyze the mechanism of flame retardancy. Thus, we could show that the main mechanism is based on the formation of a protection layer. This work provides a model system for sol-gel-based flame retardants and demonstrates the feasibility in principle of the sol-gel approach in the flame retardancy of textiles. It therefore lays the groundwork for tailoring sol-gel layers from newly synthesized sol-gel precursors containing nitrogen and phosphorus groups.
The promise of immutable documents to make it easier and less expensive for consumers and producers to collaborate in a verifiable way would represent enormous progress, especially as companies strive to establish service contracts that are based on the flow of many small transactions using machine-to-machine communication. Blockchain technology logs these data, verifies their authenticity and makes them available for service offers. This work deals with an architecture that enables order processing between consumers and producers to be set up using a blockchain. In this way, the technical feasibility is shown and the special characteristics of blockchain production networks are discussed.
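The hash-linking behind such an immutable order log can be sketched in a few lines. This is a minimal illustration, not the paper's architecture; the block fields and the consumer/producer payload are invented for the example:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_order(chain: list, order: dict) -> dict:
    """Append an order as a new block linked to its predecessor's hash."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "order": order,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)
    return block

def verify(chain: list) -> bool:
    """A tampered block breaks the hash link to its successor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_order(chain, {"consumer": "A", "producer": "B", "qty": 10})
append_order(chain, {"consumer": "A", "producer": "C", "qty": 3})
assert verify(chain)
chain[0]["order"]["qty"] = 999   # tamper with the first order
assert not verify(chain)         # the chain no longer verifies
```

The point of the sketch is the verifiability property: any change to a logged transaction invalidates every later block's back-link.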
Due to the lack of sophisticated component libraries for microelectromechanical systems (MEMS), highly optimized MEMS sensors are currently designed using a polygon-driven design flow. The advantage of this design flow is its accurate mechanical simulation, but it lacks a method for analyzing the dynamic parasitic electrostatic effects arising from the electric coupling between (stationary) wiring and structures in motion. In order to close this gap, we present a method that enables the parasitics arising from in-plane sensor-structure motion to be extracted quasi-dynamically. With the method's structural-recognition feature we can analyze and optimize dynamic parasitic electrostatic effects.
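The quasi-dynamic idea, evaluating the wiring-to-structure capacitance at sampled positions along the motion path, can be illustrated with a simple parallel-plate estimate. All dimensions below are hypothetical, fringe fields are neglected, and this stands in for the paper's actual extraction method:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def overlap_capacitance(x: float, width: float = 50e-6,
                        length: float = 200e-6, gap: float = 2e-6) -> float:
    """Parallel-plate estimate of the parasitic capacitance between a
    stationary wire and a moving structure whose electrode overlap
    shrinks with the in-plane displacement x (fringe fields ignored)."""
    overlap = max(length - abs(x), 0.0)
    return EPS0 * width * overlap / gap

# Quasi-dynamic extraction: sample the capacitance along the motion path
positions = [i * 1e-6 for i in range(0, 11)]   # 0..10 µm in-plane travel
caps = [overlap_capacitance(x) for x in positions]
```

The resulting capacitance-versus-position table is the kind of quantity a dynamic parasitic analysis feeds back into the circuit-level model.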
Functionalised particles are in high demand in materials research, as they can be used as vital components in many advanced applications such as smart materials, functional coatings, drug carrier systems or adsorption materials. In this study, furan-functionalised melamine-formaldehyde (MF) particles were successfully prepared for the first time using an organic sol-gel process. Commercially available 2-aminomethylfuran (AMF) and 2-aminomethyl-5-methylfuran (AMMF) were used as modifying agents. In the isolated polymer particles, melamine (M) to modifying agent ratios of M:AMF 2.04:1 mol/mol and M:AMMF 1.25:1 mol/mol were used. The obtained particles were isolated in various centrifugation and re-dispersion cycles and analysed using ATR-FT-IR, Raman and solid-state 13C NMR spectroscopy, TGA, SEM and DSC measurements. Upon functionalisation, the size of the MF particles increased (MF 1.59 µm, 27% CV (coefficient of variation); MF-AMF 2.56 µm, 25% CV; MF-AMMF 2.20 µm, 35% CV). DSC measurements showed that, besides condensation-based curing, another type of exothermic residual reactivity takes place with the furan-modified particles that is not related to the liberation of volatile compounds. The newly obtained particles are able to undergo Diels-Alder reactions with maleimide groups. The characteristic IR and Raman absorbance bands of the reaction products after the particles were reacted with a 4,4′-diphenylmethanebismaleimide reagent confirm the formation of a Diels-Alder adduct.
In visual adaptive tracking, the tracker adapts to the target, background, and conditions of the image sequence. Each update introduces some error, so the tracker might drift away from the target over time. To increase the robustness against this drifting problem, we present three ideas on top of a particle filter framework: an optical-flow-based motion estimation, a learning strategy for preventing bad updates while staying adaptive, and a sliding window detector for failure detection and for finding the best training examples. We experimentally evaluate the ideas using the BoBoT dataset. The code of our tracker is available online.
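A minimal version of such a particle-filter loop (motion prediction, appearance weighting, resampling) can be sketched as follows. The synthetic appearance score and all parameters are illustrative stand-ins for the paper's learned appearance model and optical-flow estimate:

```python
import random

def predict(particles, flow_dx, flow_dy, noise=2.0):
    """Shift each particle by the motion estimate plus diffusion noise."""
    return [(x + flow_dx + random.gauss(0, noise),
             y + flow_dy + random.gauss(0, noise)) for x, y in particles]

def update_weights(particles, measure):
    """Weight particles by an appearance score and normalise."""
    weights = [measure(p) for p in particles]
    total = sum(weights) or 1.0
    return [w / total for w in weights]

def resample(particles, weights):
    """Draw a new particle set proportional to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [(random.uniform(0, 100), random.uniform(0, 100))
             for _ in range(200)]
target = (60.0, 40.0)  # ground-truth position the score peaks at
score = lambda p: 1.0 / (1.0 + (p[0] - target[0])**2 + (p[1] - target[1])**2)

for _ in range(20):
    particles = predict(particles, flow_dx=0.0, flow_dy=0.0)
    weights = update_weights(particles, score)
    particles = resample(particles, weights)

# Posterior-mean state estimate
est_x = sum(x for x, _ in particles) / len(particles)
est_y = sum(y for _, y in particles) / len(particles)
```

The drift problem the abstract addresses arises exactly here: if the appearance model (`score`) is updated with bad examples, the particle cloud converges on the wrong region.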
Plasmonics and nanophotonics both deal with the interaction of light with structures of typically sub-wavelength size in one or more dimensions. Over the past decade or two, interest in these topics has grown significantly. This includes basic research towards a detailed understanding of light-matter interaction and the manipulation of light on the nanometer scale, as well as the search for applications ranging from quantum information processing, data storage, solar cells, spectroscopy and microscopy to (bio-)sensors and biomedical devices. Key enablers for this development are advanced materials and the variety of techniques to structure them with nanometer precision on the one hand, and progress in the theoretical description and numerical implementations on the other. Besides the traditional metals Au, Ag, Al, and Cu, compounds such as refractory metal nitrides with much higher durability, as well as semiconductors, dielectrics and hybrid structures, have also become of interest. Structuring techniques aim not only at the fabrication of individual elements with highest precision for detailed interaction analysis, but also at methods for large-scale, low-cost nanofabrication, mostly for sensor applications. In the former case, mostly electron beam lithography and focused ion beam milling are employed, while for high throughput various forms of nanoimprint and self-assembly based techniques are favored. Thin film deposition and pattern transfer techniques are mostly derived from those developed for nano-electronics; more recently, however, methods such as electroless plating, atomic layer deposition or etching and 3-D additive techniques are appearing. Thus, highly specialized expertise has been acquired in the different disciplines, and successful research and technology transfer will draw from this pool of knowledge.
Homogeneous and monodispersed furan-functionalised melamine-formaldehyde particles were produced. As a precursor, 2-chloro-1,3,5-triazine-2,4-diamine (Mel) was selectively substituted with 2-aminomethylfuran (Fu) units in a convenient one-step reaction. The pure reaction product Fu-Mel, which was used without further purification, was reacted with formaldehyde by conventional sol-gel condensation in aqueous medium to yield chemically homogeneous, spherically shaped and monodispersed particles. The particles were analysed using ATR-FT-IR, Raman, 1H and 13C NMR spectroscopy, TGA, SEM and DSC measurements. The reactivity of the furan groups located at the particle surface was studied by performing a thermoreversible Diels-Alder cycloaddition reaction with bis-maleimide coupling agents. The formed networks showed thermoreversible behaviour, which was characterised by dynamic IR and DSC measurements.
This article examines the risks and societal costs associated with flexible average inflation targeting in the United States and symmetric inflation targeting in the Eurozone. Employing an empirical approach, we analyze monthly cumulative inflation gaps over a monetary policy horizon of 36 months. By investigating the trajectories of the cumulative inflation gaps, we find a heavy-tailed distribution and a 20 percent probability of over- and undershooting the inflation target. We show that the offsetting mechanism introduced in the revised monetary strategies lacks credibility in ensuring price stability during a period of persistent inflation. Consequently, the credibility of central banks may be compromised. The policy implications are the integration of an escape clause and prompt monetary corrections in cases where the inflation goal is not achieved. This study provides insights for policymakers and central banks, emphasizing the challenges of maintaining credibility and price stability within the new monetary strategies.
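The notion of a cumulative inflation gap over a rolling 36-month horizon can be made concrete with a short sketch. The inflation series, target and tolerance band below are hypothetical and are not the study's data or findings:

```python
def cumulative_gaps(inflation, target=2.0, horizon=36):
    """Cumulative deviation from target over each rolling policy horizon."""
    gaps = [pi - target for pi in inflation]
    return [sum(gaps[i:i + horizon]) for i in range(len(gaps) - horizon + 1)]

def miss_probability(cum_gaps, band=2.0):
    """Share of horizons whose cumulative gap leaves a tolerance band,
    i.e. an empirical over-/undershoot probability."""
    misses = sum(1 for g in cum_gaps if abs(g) > band)
    return misses / len(cum_gaps)

# Hypothetical monthly year-on-year inflation readings (percent)
series = [1.8, 1.9, 2.1, 2.4, 2.6, 3.1, 3.5, 3.0, 2.2, 1.7, 1.5, 1.9] * 5
cg = cumulative_gaps(series, horizon=36)
p = miss_probability(cg)
```

Heavy tails would show up in this framing as an unexpectedly large `p` for wide bands, which is the pattern the study reports.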
Concrete is a significant construction material. A problem in application is the appearance of cracks that damage its strength. Autogenous crack-healing mechanisms based on bacteria have received increasing attention in recent years. The bacteria are able to form calcium carbonate (CaCO3) precipitates under suitable conditions to protect and reinforce the concrete. However, a large number of spores are crushed in aged specimens, resulting in a loss of viability. In this study, a new kind of hydrogel crosslinked by alginate, chitosan and calcium ions was introduced. It was observed that the addition of chitosan improved the swelling properties of calcium alginate. An opposite pH response to that of calcium alginate was observed when the chitosan content in the solution reached 1.0%. With an addition of 1.0% chitosan in the hydrogel beads, a 10.28% increase in compressive strength and a 13.79% increase in flexural strength relative to the control were observed. The results reveal the self-healing properties of the concrete. The healing of a crack 4 cm in length and 1 mm in width was observed when using cement PO325, with the addition of bacterial spores (2.54–3.07 × 10⁵/cm³ concrete) encapsulated by hydrogel containing no chitosan.
The Covid-19 pandemic forced many employees to work from home, which caused numerous challenges. With the pandemic now in its third year, several studies on the subject of working from home are already available. To investigate the impact of remote work on employee satisfaction and trust, this quantitative study reviews existing results and formulates hypotheses based on a conceptual model created through a qualitative study and an extensive literature review. The research question is: does working from home during Covid-19 affect employee satisfaction and trust? To test the hypotheses, a structural equation model was constructed and analyzed. A culture of trust and flexibility are identified as the biggest influencing factors in this study.
Impact of phenolic resin preparation on its properties and its penetration behavior in Kraft paper
(2018)
The core of decorative laminates is generally made of stacked Kraft paper sheets impregnated with a phenolic resin. As the impregnation process in industry is relatively fast, new methods need to be developed to characterize it for different paper-resin systems. Several phenolic resins were synthesized with the same phenol:formaldehyde ratio of 1:1.8 and characterized by Fourier-transform infrared spectroscopy (FTIR) as well as size-exclusion chromatography (SEC). In addition, their viscosities and surface tensions when diluted in methanol to 45% solid content were measured. The capacity of each resin to penetrate a Kraft paper sheet was characterized using a new method, which measures the conductivities induced by the liquid resin crossing the paper substrate. With this method, crossing times could be measured with good accuracy. Surprisingly, the results showed that the penetration time of the resin samples is not correlated with the viscosity values, but rather with the surface tension characteristics and the chemical characteristics of the paper. Furthermore, some resins had a higher swelling effect on the fibers that delayed the crossing of the liquid through the paper.
Mesoporous silica microspheres (MPSMs) find broad application as separation materials in high-performance liquid chromatography (HPLC). A promising preparation strategy uses p(GMA-co-EDMA) beads as hard templates to control the pore properties and achieve a narrow size distribution of the MPSMs. Here, six hard templates were prepared which differ in their porosity and surface functionalization. This was achieved by altering the ratio of GMA to EDMA and by adjusting the proportion of monomer and porogen in the polymerization process. The various amounts of GMA incorporated into the polymer networks of P1-6 lead to different amounts of tetraethylenepentamine (TEPA) in the p(GMA-co-EDMA) template. This was established by a partial least squares regression (PLS-R) model based on FTIR spectra of the templates. Deposition of silica nanoparticles (SNPs) into the template under Stoeber conditions and subsequent removal of the polymer by calcination result in MPSM1-6. The size of the SNPs and their incorporation depend on the pore parameters of the template and the degree of TEPA functionalization. Moreover, the incorporated SNPs construct the silica network and control the pore parameters of the MPSMs. Functionalization of the MPSMs with trimethoxy(octadecyl)silane allows their use as a stationary phase for the separation of biomolecules. The pore characteristics and the functionalization of the template determine the pore structure of the silica particles and, consequently, their separation properties.
Customer relationship management (CRM) is one of the most frequently adopted management tools and has received much attention in the literature. From a company-wide perspective, CRM is viewed as a complex process requiring interventions in different company areas. Previous research has already highlighted the pitfalls and failures related to a partial and incomplete view of CRM. This study advances research on CRM by investigating how the relative timing with which interventions are implemented in different areas (customer management, CRM technology, organizational alignment, and CRM strategy) affects CRM performance. The results of the empirical study reveal that, compared to other critical CRM activities, a later implementation of organizational alignment activities has a negative impact on performance. Further, our results show that CRM implementations do not equally address the areas of customer acquisition, growth, and loyalty, since this clearly depends on company objectives and also on geographical differences.
Manufacturing has to adapt to changing situations in order to stay competitive. This demands a flexible and easy-to-use integration of production equipment and ICT systems. The contribution of this paper is the presentation of the implementation of the Manufacturing Integration Assistant (MIALinx). The integration steps range from integrating sensors, through collecting and rule-based processing of sensor information, to executing the required actions. Furthermore, we describe the implementation of MIALinx by commissioning it in a manufacturing environment to retrofit legacy machines for Industrie 4.0. Finally, we validate the suitability of our approach by applying our solution in a medium-sized company.
The flexible and easy-to-use integration of production equipment and IT systems on the shop floor is increasingly a success factor for manufacturers that must adapt rapidly to changing situations. The approach of the Manufacturing Integration Assistant (MIALinx) is to simplify this challenge. The integration steps range from integrating sensors, through collecting and rule-based processing of sensor information, to executing the required actions. This paper presents the implementation of MIALinx to retrofit legacy machines for Industry 4.0 in a manufacturing environment and focuses on the concept and implementation of the easy-to-use user interface as a key element.
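The rule-based processing of sensor information described above can be illustrated with a minimal IF-THEN engine. The sensor fields, thresholds and action strings are invented for illustration and do not reflect MIALinx's actual rule language:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """IF the condition holds on a sensor reading, THEN produce the action."""
    condition: Callable[[Dict], bool]
    action: Callable[[Dict], str]

def process(reading: Dict, rules: List[Rule]) -> List[str]:
    """Evaluate all rules against one reading; collect triggered actions."""
    return [rule.action(reading) for rule in rules if rule.condition(reading)]

# Hypothetical retrofit rules for a legacy machine
rules = [
    Rule(lambda r: r["temperature"] > 80.0,
         lambda r: f"notify-maintenance: {r['machine']} overheating"),
    Rule(lambda r: r["vibration"] > 4.5,
         lambda r: f"stop-spindle: {r['machine']}"),
]
actions = process({"machine": "M7", "temperature": 85.2, "vibration": 1.1},
                  rules)
```

Keeping conditions and actions as plain data like this is one way an easy-to-use interface can let shop-floor staff compose integrations without programming.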
Papermaking waste liquid (black liquor) is a serious source of water pollution worldwide. Its subsequent treatment is very difficult because it contains large amounts of lignin, inorganic salts, organic matter, and pigments, which lead to serious water pollution. Lignin is the main by-product of the paper industry and the only natural aromatic recyclable resource; its effective utilization rate is currently less than 3%. Therefore, how to effectively recycle lignin from papermaking waste liquid and further synthesize industrialized products is of great significance for sustainable development and environmental protection. Moreover, given the shortage of petroleum resources in recent years, replacing petroleum resources with biomass resources in industry is also an important issue. In this article, we explored the optimal conditions for the oxypropylation and esterification of lignin, prepared bio-bitumen based on the modified lignin, and then applied it to waterproof coating sheets. FTIR and mechanical properties (softening point, low-temperature flexibility, peel strength, etc.) were tested on the obtained waterproof coating sheets. The results show that the addition of modified lignin reduced the softening point and peel strength of the coating sheets. Interestingly, both oxypropylated lignin (OL) and esterified lignin (OEL) were very beneficial in resisting the decrease in peel strength during the aging process, showing a significant improvement in the performance of the coating sheets after aging compared to the control.
High-performance liquid chromatography is one of the most important analytical tools for the identification and separation of substances. The efficiency of this method is largely determined by the stationary phase of the columns. Although monodisperse mesoporous silica microspheres (MPSM) represent a commonly used material as stationary phase their tailored preparation remains challenging. Here we report on the synthesis of four MPSMs via the hard template method. Silica nanoparticles (SNPs) which form the silica network of the final MPSMs were generated in situ from tetraethyl orthosilicate (TEOS) in the presence of (3-aminopropyl) triethoxysilane (APTES) functionalized p(GMA-co-EDMA) as hard template. Methanol, ethanol, 2-propanol, and 1-butanol were applied as solvents to control the size of the SNPs in the hybrid beads (HB). After calcination, MPSMs with different sizes, morphology and pore properties were obtained and characterized by scanning electron microscopy, nitrogen adsorption and desorption measurements, thermogravimetric analysis, solid state NMR and DRIFT IR spectroscopy. Interestingly, the 29Si NMR spectra of the HBs show T and Q group species which suggests that there is no covalent linkage between the SNPs and the template. The MPSMs were functionalized with trimethoxy (octadecyl) silane and used as stationary phases in reversed-phase chromatography to separate a mixture of eleven different amino acids. The separation characteristics of the MPSMs strongly depend on their morphology and pore properties which are controlled by the solvent during the preparation of the MPSMs. Overall, the separation behavior of the best phases is comparable with those of commercially available columns. The phases even achieve faster separation of the amino acids without loss of quality.
In spite of many studies, knowledge about the fundamental factors influencing adhesion between addition-curing silicones and aluminum substrates is very limited. The aim of this publication is to evaluate the influence of the formulation and the surface state of the adherend on bond strength. For this purpose, the composition of an addition-curing silicone was systematically varied and the effects on both material and bond properties were examined. Additionally, the influence of surface aging at different humidities (0% r.h., 34% r.h., 82% r.h.) of acid-etch pretreated aluminum substrates was considered. It is shown that the mechanical properties of the silicone material can be easily adjusted over a wide range by changing the formulation. Although high tensile strengths of up to 9.2 MPa can be achieved for the silicone material, lap-shear strengths remain moderate at approximately 3.5 MPa. Predominant adhesive failures show the limited adhesive strength of the basic formulation without additives: the basic ingredients of addition-curing silicones reach a certain adhesive strength, but adhesion promoters are required to improve adhesion further. The humidity at which the pretreated substrates are stored has an overall minor influence on bond strength. Surprisingly, bond strength tends to increase with the storage time of the aluminum substrates despite lower surface energies in comparison to freshly pretreated substrates. All in all, the storage conditions of the aluminum had a rather small influence on adhesion, whereas the composition of the silicone adhesive strongly influences bond strength.
Soft thermoplastic polysiloxane-urea-elastomers (PSUs) were prepared for application as a biomaterial to replace the human natural lens after cataract surgery. PSUs were synthesized from amino-terminated polydimethylsiloxanes (PDMS), 4,4′-methylenebis(cyclohexyl isocyanate) (H12MDI) and 1,3-bis(3-aminopropyl)-1,1,3,3-tetramethyldisiloxane (APTMDS) by a two-step polyaddition route. Such a material has to be highly transparent and must exhibit a low Young's modulus and excellent dimensional stability. Polydimethylsiloxanes in the range of 3000–33,000 g·mol−1 were therefore prepared by ring-chain equilibration of octamethylcyclotetrasiloxane (D4) and APTMDS in order to study the influence of the soft segment molecular weight on the mechanical properties and the transparency of the PSU-elastomers. 2,4,6,8-Tetramethyl-2,4,6,8-tetraphenylcyclotetrasiloxane (D4Me,Ph) was co-polymerized with D4 in order to adjust the refractive index of the polydimethyl-methyl-phenyl-siloxane copolymers to a value equivalent to a young human natural lens. Very elastic PSUs with elongation at break values higher than 700% were prepared. PSU-elastomers synthesized from PDMS of molecular weights up to 18,000 g·mol−1 showed transmittance values of over 90% within the visible spectrum range. The soft segment refractive index was increased through the incorporation of 14 mol % of methyl-phenyl-siloxane from 1.4011 to 1.4346 (37 °C). Young's moduli of PSU-elastomers were around 1 MPa and lower at PDMS molecular weights up to 15,000 g·mol−1. 10-cycle hysteresis measurements were applied to evaluate the mechanical stability of the PSUs under repeated stress. Hysteresis values at 100% strain decreased from 32 to 2% (10th cycle) with increasing PDMS molecular weight. Furthermore, hysteresis at 5% strain was only detected in PSU-elastomers with low PDMS molecular weights. Finally, preliminary results of in vitro cytotoxicity tests on a PSU-elastomer showed no toxic effects on HaCaT cells.
Powder coating of engineered wood panels such as medium-density fibreboards (MDF) is gaining industrial interest due to the ecological and economic advantages of powder coating technology. For transferring powder coating technology to temperature-sensitive substrates like MDF, a thorough understanding of the melting, flowing and curing behaviour of the used low-bake resins is required. In the present study, thermo-analysis in combination with iso-conversional kinetic data analysis as well as rheometry is applied to characterise the properties of an epoxy-based powder coating. Neat resin and cured powder coating films are examined in order to define an ideal production window within which the resin is preferably applied and processed to yield satisfactory surface performance on the one hand, without exposing the carrier MDF to too high a temperature load on the other, so as to prevent the panel from deteriorating in mechanical strength. In order to produce powder-coated films of high surface gloss, a feature that has not yet been successfully realized on MDF with powder coatings, a new curing technology, in-mould surface finishing, has been applied.
Healthy sleep is required for sufficient restoration of the human body and brain. Therefore, in the case of sleep disorders, appropriate therapy should be applied in a timely manner, which requires a prompt diagnosis. Traditionally, a sleep diary is part of the diagnosis and therapy monitoring for some sleep disorders, for example in cognitive behaviour therapy for insomnia. To automatise sleep monitoring and make it more comfortable for users, substituting the sleep diary with a smartwatch measurement could be considered. With the aim of providing accurate results, a study with a total of 30 night recordings was conducted. Objective sleep measurement with a Samsung Galaxy Watch 4 was compared with a subjective approach (sleep diary), evaluating four relevant sleep characteristics: time of falling asleep, wake-up time, sleep efficiency (SE), and total sleep time (TST). The analysis demonstrated that the median difference between the two measurement approaches was 7 and 3 minutes for the time of falling asleep and the wake-up time, respectively, which allows the subjective measurement to be substituted with a smartwatch. The SE was determined with a median difference between the two measurement methods of 5.22%. This result also indicates a possibility of substitution. Some single recordings indicated a higher variance between the two approaches. Therefore, the conclusion can be made that a substitution provides reliable results primarily in the case of long-term monitoring. The results of the evaluation of the TST measurement do not allow us to recommend substituting the measurement method.
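The comparison between the two measurement methods reduces to per-night differences and their median. A minimal sketch with hypothetical minutes-after-midnight values (not the study's data):

```python
from statistics import median

def median_abs_diff(diary, watch):
    """Median absolute per-night difference (in minutes) between the
    sleep-diary values and the smartwatch values."""
    return median(abs(d - w) for d, w in zip(diary, watch))

# Hypothetical sleep-onset times for five nights, in minutes after midnight
# (values past midnight of the previous day are > 1380, i.e. 23:00+)
diary_sleep_onset = [23*60 + 30, 15, 23*60 + 50, 65, 23*60 + 40]
watch_sleep_onset = [23*60 + 38, 10, 23*60 + 55, 80, 23*60 + 44]

print(median_abs_diff(diary_sleep_onset, watch_sleep_onset))  # → 5
```

Using the median rather than the mean matches the study's reporting and keeps the summary robust against the occasional night with a large diary-watch discrepancy.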
Digitisation forms a part of Industrie 4.0 and both threatens and provides an opportunity to transform business as we know it; it can make entire business models redundant. Although companies might realise the need to digitise, many are unsure of how to start this digital transformation. This paper addresses the problems and challenges faced in digitisation and develops a model for initialising digital transformation in enterprises. The model is based on a continuous improvement cycle and also includes triggers for innovative and digital thinking within the enterprise. The model was successfully validated in the German service sector.
Cyber-Physical Production Systems increasingly use semantic information to meet the grown flexibility requirements. Ontologies are often used to represent and use this semantic information. Existing systems focus on mapping knowledge and less on the exchange with other relevant IT systems (e.g., ERP systems) in which crucial, often implicit, semantic information is contained. This article presents an approach that enables the exchange of semantic information via adapters. The approach is demonstrated by a use case utilizing an MES and an ERP system.
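The adapter idea, lifting implicit ERP semantics into an explicit, ontology-friendly form, can be sketched as a simple field-to-predicate mapping. The field names, predicates and record layout below are invented for illustration and are not the article's data model:

```python
from typing import Dict, List, Tuple

class ErpAdapter:
    """Maps an ERP order record onto subject-predicate-object triples so
    that a knowledge base can consume the record's implicit semantics."""

    # Hypothetical mapping from ERP fields to ontology predicates
    FIELD_TO_PREDICATE = {
        "material": "usesMaterial",
        "workstation": "assignedTo",
        "due": "hasDueDate",
    }

    def to_triples(self, order: Dict) -> List[Tuple[str, str, str]]:
        subject = f"Order:{order['id']}"
        return [(subject, pred, order[field])
                for field, pred in self.FIELD_TO_PREDICATE.items()
                if field in order]

triples = ErpAdapter().to_triples(
    {"id": "4711", "material": "Steel-S235", "workstation": "Mill-3"})
```

Each IT system gets its own adapter of this shape, so the semantic exchange stays decoupled from any one system's record format.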
New business opportunities have appeared through the potential of the Internet and related digital technologies, like the Internet of Things, services computing, artificial intelligence, cloud, edge, and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Companies are transforming their strategy and product base, as well as their culture, processes and information systems, to adopt digital transformation or to strive for digital leadership. Digitalization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. Digitalization has a substantial impact on architecting the open and complex world of highly distributed digital services and products as part of a new digital enterprise architecture, which structures and directs service-dominant digital products and services. The present research paper investigates mechanisms for supporting the evolution of digital enterprise architectures with user-friendly methods and instruments of interaction, visualization, and intelligent decision management during the exploration of multiple and interconnected perspectives by an architecture management cockpit.
The paper illustrates the status quo of a research project for the development of a control system enabling CHP units to produce electricity on demand through intelligent management of the heat storage tank. The focus of the project is twofold: first, the compensation of fluctuating power production from the renewable sources solar and wind; second, a reduction of the load on the power grid through a better match of local electricity demand and production. In detail, the general control strategy is outlined, the method utilized for forecasting heat and electricity demand is illustrated, and a correlation method for the temperature distribution in the heat storage tank based on a sigmoid function is proposed. Moreover, the simulation model for verification and optimization of the control system and the two field test sites for implementing and testing the system are introduced.
The interaction between lipid bilayers in water has been intensively studied over the last decades. Osmotic stress was applied to evaluate the forces between two approaching lipid bilayers in aqueous solution. The force–distance relation between lipid mono- or bilayers deposited on mica sheets was also measured using a surface force apparatus (SFA). Lipid-stabilised foam films offer another possibility to study the interactions between lipid monolayers. These films can be prepared comparatively easily with very good reproducibility. Foam films usually consist of two adsorbed surfactant monolayers separated by a layer of the aqueous solution from which the film is created. Their thickness can be conveniently measured using microinterferometric techniques. Studies with foam films deliver valuable information on the interactions between lipid membranes and especially on their stability and permeability. Foam films representing inverse black lipid membranes (BLM) supply information about the properties of lipid self-organisation in bilayers. The present paper summarises results on microscopic lipid-stabilised foam films obtained by measuring their thickness and contact angle. Most of the presented results concern foam films prepared from dispersions of the zwitterionic lipid 1,2-dimyristoyl-sn-glycero-3-phosphorylcholine (DMPC) and some of its mixtures with the anionic lipid 1,2-dimyristoyl-sn-glycero-3-[phospho-rac-(1-glycerol)] (DMPG).
The strength of the long-range and short-range forces between the lipid layers is discussed. The van der Waals attractive force is calculated. The electrostatic repulsive force is estimated from experiments at different electrolyte concentrations (NaCl, CaCl2) or by modification of the electrostatic double layer surface potential by incorporating charged lipids in the lipid monolayers. The short-range interactions are studied and modified by using small carbohydrates (fructose and sucrose), ethanol (EtOH) or dimethylsulfoxide (DMSO). Some results are compared with the structure of lipid monolayers deposited at the liquid/air interface (monolayers spread in a Langmuir trough), which are one of the most studied biomembrane model systems. The comparison between the film thickness and the free energy of film formation is used to estimate the contribution of the different components of the disjoining pressure to the total interaction in the film and their dependence on the composition of the film-forming solution.
Introducing continuous experimentation in large software-intensive product and service organisations
(2017)
Software development in highly dynamic environments poses high risks to development organizations. One such risk is that the developed software may be of little or no value to customers, wasting the invested development efforts. Continuous experimentation, as an experiment-driven development approach, may reduce such development risks by iteratively testing product and service assumptions that are critical to the success of the software. Although several experiment-driven development approaches are available, there is little guidance on how to introduce continuous experimentation into an organization. This article presents a multiple-case study that aims at better understanding the process of introducing continuous experimentation into an organization with an already established development process. The results from the study show that companies are open to adopting such an approach and learning throughout the introduction process. Several benefits were obtained, such as reduced development efforts, deeper customer insights, and better support for development decisions. Challenges included complex stakeholder structures, difficulties in defining success criteria, and building experimentation skills. Our findings indicate that organizational factors may limit the benefits of experimentation. Moreover, introducing continuous experimentation requires fundamental changes in how companies operate, and a systematic introduction process can increase the chances of a successful start.
Herein, biochar from biomass residues is demonstrated as an active material for the catalytic cracking of waste motor oil into diesel-like fuels. Notably, alkali-treated rice husk biochar showed great activity, with a 250% increase in the kinetic constant compared to thermal cracking. It also showed better activity than previously reported synthetic materials. Moreover, a much lower activation energy (185.77 to 293.48 kJ/mol) for the cracking process was obtained. According to materials characterization, the catalytic activity was related more to the nature of the biochar's surface than to its specific surface area. Finally, liquid products complied with all the physical properties defined by international standards for diesel-like fuels, with the presence of hydrocarbon chains between C10 and C27 similar to those obtained in commercial diesel.
Learning factories present a promising environment for education, training and research, especially in manufacturing-related areas, which are a main driver for wealth creation in any nation. While numerous learning factories have been built in industry and academia in the last decades, a comprehensive scientific overview of the topic is still missing. This paper intends to close this gap by establishing the state of the art of learning factories. The motivations, historic background, and the didactic foundations of learning factories are outlined. Definitions of the term learning factory and the corresponding morphological model are provided. An overview of existing learning factory approaches in industry and academia is provided, showing the broad range of different applications and varying contents. The state of the art of learning factory curricula design and their use to enhance learning and research, as well as potentials and limitations, are presented. Conclusions and an outlook on further research priorities are offered.
In the last decade, numerous learning factories for education, training, and research have been built up in industry and academia. In recent years, learning factory initiatives were elevated from a local to a European and then to a worldwide level. Since 2014, the CIRP Collaborative Working Group (CWG) on Learning Factories has enabled a lively exchange on the topic "Learning Factories for future oriented research and education in manufacturing". In this paper, results of discussions inside the CWG are presented. First, what is meant by the term learning factory is outlined. Second, based on this definition, a description model (morphology) for learning factories is presented. The morphology covers the most relevant characteristics and features of learning factories in seven dimensions. Third, following the morphology, the actual variance of learning factory manifestations is shown in six learning factory application scenarios, from industrial training over education to research. Finally, future prospects of the learning factory concept are presented.
Adjusting people's performance capabilities to new requirements and guaranteeing employability is a key strategy in the world of work. Good examples of this are the current changes in the logistics environment. New services and processes close to production are regularly taken into the portfolio of logistics enterprises, so the daily tasks of skilled workers are changing continuously.
LOPEC aims at developing and offering specially tailored training in lean logistics and the required basic skills for skilled workers at shopfloor level. The know-how needed for today's challenges in logistics will be transferred. Another aspect of LOPEC is the development and use of a personal excellence self-assessment that allows a person to assess and thus improve his or her own level of maturity in employability skills. LOPEC thus aims at people enhancement as an entry ticket to lifelong continuous learning by increasing the maturity level of personal logistics excellence. A common European view of "logistics personal excellence" for skilled workers will ensure that the final product is an open product, using internationally, pan-European validated standards. As a result, LOPEC will provide training modules for post-secondary education in the area of lean logistics and the required basic skills, and it offers transparency of personal excellence with a self-assessment software solution covering the personal maturity level of hard and soft skills at any time. It can be used as an innovative tool for monitoring personal lifelong learning routes as well as within companies as a strategic tool for human resource development.
Machine failures’ consequences – a classification model considering ultra-efficiency criteria
(2023)
To strive for sustainable production, maintenance has to evaluate possible machine failure consequences not just economically but also holistically. Approaches such as the ultra-efficiency factory consider energy, material, human/staff, emission, and organization as optimization dimensions. These ultra-efficiency dimensions can be used to analyze not only the respective machine failure but also its effects on the entire production system holistically. This paper presents an easy-to-use method, based on a questionnaire, for assessing the failure consequences of a machine malfunction in a production system considering the ultra-efficiency dimensions. The method was validated in a battery production.
The evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the wide number of body signals, we chose two signals, namely photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals, that are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to two recent algorithms (particle Bernstein and a Monte Carlo-based regression) in terms of both accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures and to reduce the problem size; a detailed analysis of the compression strategy and results is also presented.
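As a minimal illustration of the k-nearest-neighbor approach mentioned above, the sketch below classifies hypothetical two-feature samples; the feature names (mean acceleration magnitude, PPG pulse amplitude) and values are illustrative assumptions, not data from the cited database.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    # rank training samples by Euclidean distance to the query point
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))
    # majority vote among the k closest neighbors
    votes = Counter(train_y[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# toy features: (mean acceleration magnitude, PPG pulse amplitude)
train_X = [(0.1, 0.8), (0.2, 0.9), (1.5, 0.3), (1.7, 0.2)]
train_y = ["rest", "rest", "run", "run"]

print(knn_predict(train_X, train_y, (0.15, 0.85)))  # → rest
print(knn_predict(train_X, train_y, (1.6, 0.25)))   # → run
```

A production pipeline would of course use windowed signal features and a vectorized implementation, but the voting principle is the same.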
The use of deep learning models with medical data is becoming more widespread. However, although numerous models have shown high accuracy in medical-related tasks, such as medical image recognition (e.g. radiographs), there are still many obstacles to seeing these models operate in a real healthcare environment. This article presents a series of basic requirements that must be taken into account when developing deep learning models for biomedical time series classification tasks, with the aim of facilitating the subsequent deployment of the models in healthcare. These requirements range from the correct collection of data to the existing techniques for correctly explaining the results obtained by the models. One of the main reasons deep learning models are not more widely used in healthcare settings is their lack of clarity when it comes to explaining decision making.
The fifth mobile communications generation (5G) offers the deployment scenario of licensed 5G standalone non-public networks (NPNs). Standalone NPNs are locally restricted 5G networks based on 5G New Radio technology which are fully isolated from public networks. NPNs operate on their dedicated core network and offer organizations high data security and customizability for intrinsic network control. Especially in networked and cloud manufacturing, 5G is seen as a promising enabler for delay-sensitive applications such as autonomous mobile robots and robot motion control based on the tactile internet, which requires wireless communication with deterministic traffic and strict cycle times. However, currently available industrial standalone NPNs do not meet the performance parameters defined in the 5G specification and standardization process. Current research lacks performance measurements of download, upload, and time delays of 5G standalone-capable end-devices in NPNs with currently available software and hardware in industrial settings. Therefore, this paper presents initial measurements of the data rate and the round-trip delay in standalone NPNs with various end-devices to generate a first performance benchmark for 5G-based applications. In addition, five end-devices are compared to gain insights into the performance of currently available standalone-capable 5G chipsets. To validate the data rate, three locally hosted measurement methods, namely iPerf3, LibreSpeed and OpenSpeedTest, are used. Locally hosted Ping and LibreSpeed were executed to validate the time delay. The 5G standalone NPN of Reutlingen University uses licensed frequencies between 3.7 and 3.8 GHz and serves as the testbed for this study.
The present work proposes the use of modern ICT technologies such as smartphones, NFC, the internet, and web technologies to help patients carry out their therapies. The implemented system provides a calendar with intake reminders, ensures drug identification through NFC, and allows remote assistance from healthcare staff and family members to check and manage the therapy in real time. The system also provides centralized information on the patient's therapeutic situation, helpful in choosing new compatible therapies.
Human bestrophin-1 (hBest1) is a transmembrane Ca2+-dependent anion channel, associated with the transport of Cl−, HCO3− ions, γ-aminobutyric acid (GABA), glutamate (Glu), and regulation of retinal homeostasis. Its mutant forms cause retinal degenerative diseases, defined as bestrophinopathies. Using both physicochemical approaches - surface pressure/mean molecular area (π/A) isotherms, hysteresis, compressibility moduli of hBest1/sphingomyelin (SM) monolayers, Brewster angle microscopy (BAM) studies - and biological approaches - detergent membrane fractionation, Laurdan (6-dodecanoyl-N,N-dimethyl-2-naphthylamine) and immunofluorescence staining of stably transfected MDCK-hBest1 and MDCK II cells - we report:
1) Ca2+, Glu and GABA interact with binary hBest1/SM monolayers at 35 °C, resulting in changes in hBest1 surface conformation, structure, self-organization and surface dynamics. The process of mixing in hBest1/SM monolayers is spontaneous and the effect of protein on binary films was defined as “fluidizing”, hindering the phase-transition of monolayer from liquid-expanded to intermediate (LE-M) state;
2) in stably transfected MDCK-hBest1 cells, bestrophin-1 was distributed between detergent resistant (DRM) and detergent-soluble membranes (DSM) - up to 30 % and 70 %, respectively; in alive cells, hBest1 was visualized in both liquid-ordered (Lo) and liquid-disordered (Ld) fractions, quantifying protein association up to 35 % and 65 % with Lo and Ld. Our results indicate that the spontaneous miscibility of hBest1 and SM is a prerequisite to diverse protein interactions with membrane domains, different structural conformations and biological functions.
Model-based hearing diagnosis based on wideband tympanometry measurements utilizing fuzzy arithmetic
(2019)
Today's audiometric methods for the diagnosis of middle ear disease are often based on a comparison of measurements with standard curves that represent the statistical range of normal hearing responses. Because of large inter-individual variances in the middle ear, especially in wideband tympanometry (WBT), specificity and quantitative evaluation are greatly restricted. A new model-based approach could transform today's predominantly qualitative hearing diagnostics into a quantitative and tailored, patient-specific diagnosis by evaluating WBT measurements with the aid of a middle-ear model. For this particular investigation, a finite element model of a human ear was used. It consisted of an acoustic ear canal and a tympanic cavity model, a middle-ear model with detailed nonlinear models of the tympanic membrane and annular ligament, and a simplified inner-ear model. This model has made it possible to identify pathologies from measurements by analyzing the parameters through sensitivity studies and parameter clustering. Uncertainties due to lack of knowledge, subjectivity in numerical implementation and model simplification are taken into account by the application of fuzzy arithmetic. The most confident parameter set can be determined by applying an inverse fuzzy method to the measurement data. The principle and the benefits of this model-based approach are illustrated by the example of a two-mass oscillator, and also by the simulation of the energy absorbance of an ear with malleus fixation, where the introduced parameter changes can be determined quantitatively through the system identification.
The isothermal curing of melamine resin is investigated by in-line infrared spectroscopy at different temperatures. The infrared spectra are decomposed into time courses of characteristic spectral patterns using Multivariate Curve Resolution (MCR). It was found that, depending on the applied curing temperature, melamine films with different spectral fingerprints and correspondingly different chemical network structures are formed. The network structures of fully cured resin films are specific to the applied curing temperature and cannot simply be compensated by changes in the curing time. For industrial curing processes, this means that curing temperature is the main system-determining factor at constant M:F ratio. However, different MF resin networks can be specifically obtained from one and the same melamine resin by suitable selection of the curing time and temperature profiles to design resin functionality. The spectral fingerprints after short curing times as well as after long curing times reflect the fundamental differences in the thermoset networks that can be obtained with industrial short-cycle and multi-daylight presses.
Thin radio-frequency magnetron sputter deposited nano-hydroxyapatite (HA) films were prepared on the surface of a Fe-tricalcium phosphate (Fe-TCP) bioceramic composite, which was obtained using a conventional powder injection moulding technique. The obtained nano-hydroxyapatite coated Fe-TCP biocomposites (nano HA-Fe-TCP) were studied with respect to their chemical and phase composition, surface morphology, water contact angle, surface free energy and hysteresis. The deposition process resulted in a homogeneous, single-phase HA coating. The ability of the surface to support adhesion and proliferation of human mesenchymal stem cells (hMSCs) was studied using biological short-term tests in vitro. The surface of the uncoated Fe-TCP bioceramic composite showed an initial cell attachment after 24 h of seeding, but adhesion, proliferation and growth did not persist during 14 days of culture. However, the HA-Fe-TCP surfaces allowed cell adhesion and proliferation during 14 days. The deposition of the nano-HA films on the Fe-TCP surface resulted in higher surface energy, improved hydrophilicity and biocompatibility compared with the surface of the uncoated Fe-TCP. Furthermore, it is suggested that an increase in the polar component of the surface energy was responsible for the enhanced cell adhesion and proliferation in the case of the nano-HA Fe-TCP biocomposites.
Nanocoatings based on sol–gel coatings are presented as a suitable tool to modify polymer-based materials. The main focus is set on textiles as the most common polymer materials. It is shown which types of functionalization can be reached by modified sol–gel processes. A suitable categorization of functions is also given and set in relation to common applications. A special focus is placed on the functional properties antimicrobial, UV-protective, and flame-retardant. The concept of bifunctional coatings is discussed, and especially the combination of water repellency and antistatic behaviour is presented.
This paper is concerned with the study, optimization and control of the moisture sorption kinetics of agricultural products at temperatures typically found in processing and storage. A nonlinear autoregressive with exogenous inputs (NARX) neural network was developed to predict moisture sorption kinetics, and consequently equilibrium moisture contents, of shiitake mushrooms (Lentinula edodes (Berk.) Pegler) over a wide range of relative humidity and different temperatures. Sorption kinetic data of mushroom caps was separately generated using a continuous, gravimetric dynamic vapour sorption analyser at temperatures of 25-40 °C over a stepwise variation of relative humidity ranging from 0 to 85%. The predictive power of the neural network was based on physical data, namely relative humidity and temperature. The model was fed with a total of 4500 data points by dividing them into three subsets: 70% of the data was used for training, 15% for testing and 15% for validation, randomly selected from the whole dataset. The NARX neural network was capable of precisely simulating equilibrium moisture contents of mushrooms derived from the dynamic vapour sorption kinetic data throughout the entire range of relative humidity.
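The 70/15/15 random partition of the 4500 data points described above can be sketched as follows; the function name and seed are illustrative assumptions, not details from the study.

```python
import random

def split_dataset(data, train=0.70, test=0.15, seed=42):
    """Shuffle indices and partition them into train/test/validation
    subsets (the remainder after train and test goes to validation)."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)  # reproducible shuffle
    n_train = int(train * len(data))
    n_test = int(test * len(data))
    return (idx[:n_train],
            idx[n_train:n_train + n_test],
            idx[n_train + n_test:])

data = list(range(4500))  # stand-in for the 4500 sorption data points
tr, te, va = split_dataset(data)
print(len(tr), len(te), len(va))  # → 3150 675 675
```

Each index appears in exactly one subset, so no data point leaks between training and evaluation.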
Respiratory diseases are leading causes of death and disability in the world. The recent COVID-19 pandemic is also affecting the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and the clinical environment. Most of the techniques used up to date were also invasive or expensive.
Some research groups are developing hardware devices and techniques to make non-invasive or even remote respiratory sound acquisition possible. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, such as microphones, wearables, and Internet of Things devices, that can be used in this scope.
Some interesting applications have been found. Some devices make sound acquisition easier in a clinical environment, while others make daily monitoring possible outside that setting. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
The fiber deformations of once-dried, bleached and never-dried unbleached kraft pulps were studied with respect to their behavior in high- and low-consistency refining. The pulps were stained with congo red to experimentally highlight areas where the arrangement of the fibrils was altered by refining, such as dislocated zones or slip planes. The stained fibers were analyzed with a conventional Metso Fiberlab but also with a novel prototype measurement device utilizing a color imaging setup. The local intensity of the stain in the fiber was expressed as degree of overall damage (overall fiber damage index, OFDI). The rewetted zero span tensile index (RWZSTI) was used to verify the OFDI with respect to the pulp strength. High-consistency refining resulted in a clear increase in the number of kinks, which negatively influenced the pulp strength. The OFDI, which was used to detect the intensity of local fiber defects, also responded accordingly: a higher OFDI resulted in a lower pulp strength. Low-consistency refining removed a significant amount of kinks and resulted in an increase in fiber swelling. A slight increase in fibrillation and a significant increase in flake-like fines were also observed. The OFDI, however, was not reduced in low-consistency refining as would be expected from the removal of less severe dislocations. One reason proposed here is that low-consistency refining created new fiber pores that allowed the dye to penetrate into the fiber wall similarly to how it does in the zones of the dislocations.
On the design of an urban data and modeling platform and its application to urban district analyses
(2020)
An integrated urban platform is the essential software infrastructure for smart, sustainable and resilient city planning, operation and maintenance. Today, such platforms are mostly designed to handle and analyze large and heterogeneous urban data sets from very different domains. Modeling and optimization functionalities are usually not part of the software concepts. However, such functionalities are considered crucial by the authors to develop transformation scenarios and to optimize smart city operation. An urban platform needs to handle multiple scales in the time and spatial domain, ranging from long-term population and land use change to hourly or sub-hourly matching of renewable energy supply and urban energy demand.
Conventional production systems are evolving, through cyber-physical systems and application-oriented AI approaches, more and more into "smart" production systems, which are characterized among other things by a high level of communication and integration of the individual components. The exchange of information between the systems is usually only oriented towards the data content, with semantics usually considered only implicitly. The adaptability required by external and internal influences demands the integration of new components or the redesign of existing ones. Through an open, application-oriented ontology, the information and communication exchange is extended by explicit semantic information. This enables better integration of new components and easier reconfiguration of existing ones. The developed ontology, as well as the derived application and use of the semantic information, will be evaluated by means of a practical use case.
This paper generalizes the theory of policy uncertainty with the new literature on rational inattention. First, the model demonstrates that inattention depends on the signal variance and the policy parameter. Second, I discover a novel trade-off showing that a policy instrument mitigates attention. Third, the policy instrument is non-linear and reciprocal to both the size and variance of the signal. The unifying theory creates new implications for economic theory and public policy alike.
The use of additive manufacturing technologies for industrial production is constantly growing. This technology differs from established production procedures. The areas of scheduling, detailed planning and sequence planning are particularly important for additive production due to the long print times and the flexible use of the build space. Therefore, production-relevant variables are considered and used for the production planning and control (PPC) of additive manufacturing machines. For this purpose, an optimization model is presented which reflects time-oriented build space utilization. In the implementation, a nesting algorithm is used to check the combinability of different models for each individual print job.
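A minimal sketch of a nesting heuristic of the kind described above is shown below. This is a simple greedy first-fit-decreasing scheme over footprint areas; the paper's actual algorithm, part names and areas are not specified here, so all of these are illustrative assumptions.

```python
def greedy_nest(parts, plate_area):
    """Greedily assign parts (name, footprint_area) to print jobs,
    opening a new job when the build plate area would be exceeded.
    Largest parts are placed first (first-fit-decreasing)."""
    jobs, used = [[]], 0.0
    for name, area in sorted(parts, key=lambda p: p[1], reverse=True):
        if used + area > plate_area:
            jobs.append([])   # start a new print job
            used = 0.0
        jobs[-1].append(name)
        used += area
    return jobs

# hypothetical parts with footprint areas in cm^2
parts = [("bracket", 60), ("housing", 90), ("clip", 20), ("gear", 50)]
print(greedy_nest(parts, plate_area=100))
# → [['housing'], ['bracket'], ['gear', 'clip']]
```

Real build-space nesting also has to consider part geometry and height, not just area, which is why dedicated nesting algorithms are used in practice.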
A systematic study using a central composite design of experiments (DoE) was performed on the oxygen plasma surface modifications of two different polymers—Pellethane 2363-55DE, which is a polyurethane, and vinyltrimethoxysilane-grafted ethylene-propylene (EPR-g-VTMS), a cross-linked ethylene-propylene rubber. The impacts of four parameters—gas pressure, generator power, treatment duration, and process temperature—were assessed, with static contact angles and calculated surface free energies (SFEs) as the main responses in the DoE. The plasma effects on the surface roughness and chemistry were determined using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Through the sufficiently accurate DoE model evaluation, oxygen gas pressure was established as the most impactful factor, with the surface energy and polarity rising with falling oxygen pressure. Both polymers, though different in composition, exhibited similar modification trends in surface energy rise in the studied system. The SEM images showed a rougher surface topography after low pressure plasma treatments. XPS and subsequent multivariate data analysis of the spectra established that higher oxidized species were formed with plasma treatments at low oxygen pressures of 0.2 mbar.
Context: Companies increasingly strive to adapt to market and ecosystem changes in real time. Gauging and understanding team performance in such changing environments present a major challenge.
Objective: This paper aims to understand how software developers experience the continuous adaptation of performance in a modern, highly volatile environment using Lean and Agile software development methodology. This understanding can be used as a basis for guiding formation and maintenance of high-performing teams, to inform performance improvement initiatives, and to improve working conditions for software developers.
Method: A qualitative multiple-case study using thematic interviews was conducted with 16 experienced practitioners in five organisations.
Results: We generated a grounded theory, Performance Alignment Work, showing how software developers experience performance. We found 33 major categories of performance factors and relationships between the factors. A cross-case comparison revealed similarities and differences between different kinds and different sizes of organisations.
Conclusions: Based on our study, software teams are engaged in a constant cycle of interpreting their own performance and negotiating its alignment with other stakeholders. While differences across organisational sizes exist, a common set of performance experiences is present despite differences in context variables. Enhancing performance experiences requires integration of soft factors, such as communication, team spirit, team identity, and values, into the overall development process. Our findings suggest a view of software development and software team performance that centres around behavioural and social sciences.
Monitoring heart rate and breathing is essential in understanding the physiological processes for sleep analysis. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can help to make sleep monitoring more portable in someone's home. This study conducted a series of experiments to investigate the use of pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. Further experiments involved the addition of small rubber domes - transparent and black - that were glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 and 3.24 for respiration and heart rate, respectively. However, the addition of small rubber hemispheres did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 bpm for respiration rate and heart rate, respectively. The findings of this study suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep. These sensors provide a more comfortable and non-invasive method of sleep monitoring. However, the addition of small rubber domes did not significantly enhance the accuracy of the readings, indicating that it may not be a worthwhile addition to the pressure sensor system.
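The mean absolute error (MAE) used above to compare sensor readings against the PSG reference can be computed as follows; the sample values are hypothetical, not data from the study.

```python
def mean_absolute_error(reference, estimate):
    # average absolute deviation between sensor estimate and PSG reference
    return sum(abs(r - e) for r, e in zip(reference, estimate)) / len(reference)

psg_hr    = [62, 64, 63, 65, 61]   # PSG heart rate, beats per minute (hypothetical)
sensor_hr = [60, 66, 61, 68, 62]   # under-bed pressure sensor estimate

print(round(mean_absolute_error(psg_hr, sensor_hr), 2))  # → 2.0
```

The same function applies unchanged to respiration rate series in breaths per minute.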
Due to the complexity of assembly processes, a high ratio of tasks is still performed by human workers. Short-cyclically changing work contents due to smaller lot sizes, especially in varied series assembly, increase both the need for information support and the risk of rising physical and psychological stress. The use of technical and digital assistance systems can counter these challenges. Through the integration of information and communication technology as well as collaborative assembly technologies, hybrid cyber-physical assembly systems will emerge. Widely established assembly planning approaches for digital and technical support systems in cyber-physical assembly systems will be outlined and discussed with regard to synergies and delimitations of planning perspectives.
The wet chemical deposition of solution-processed transparent conducting oxides (TCO) provides an alternative low-cost and economical deposition technique to realize large areas of conducting films. Since the price of the most common TCO, indium tin oxide, has risen enormously, aluminum zinc oxide (AZO) as an alternative TCO is attracting more and more interest. The optoelectronic properties of nanoparticle coatings depend not only on the porosity of the coating but also strongly on the shape and size of the particles used. By using bigger or rod-shaped particles it is possible to minimize the number of grain boundaries, resulting in an improvement of the electrical properties, whereas particles bigger than 100 nm should not be used if highly transparent coatings are required, as these big particles scatter visible light and lower the transmittance of the coatings. In this work we present a simple method to synthesize AZO particles with different shapes and sizes but comparable electronic properties. We use a simple, well reproducible polyol method for synthesis and influence the shape and size of the particles by adding different amounts of water to the precursor solution. We show that the addition of aluminum as a dopant strongly hinders crystal growth, but the addition of water counteracts this, so that both spherical and rod-shaped particles can be obtained.
The powder coating of veneered particle boards by the sequence electrostatic powder application - powder curing via hot pressing is studied in order to create high-gloss surfaces. To obtain an appealing aspect, veneer sheets were glued by heat and pressure on top of particle boards and the resulting surfaces were used as carrier substrates for powder coat finishing. Prior to the powder coating, the veneered particle board surfaces were pre-treated by sanding to obtain good uniformity, and the boards were stored in a climate chamber at controlled temperature and humidity conditions to adjust an appropriate electrical surface resistance. Characterization of the surface texture was done by 3D microscopy. The surface electrical resistance was measured for the six veneers before and after their application on the particle board surface. A transparent powder top-coat was applied electrostatically onto the veneered particle board surface. Curing of the powder was done using a heated press at 130 °C for 8 min, and a smooth, glossy coating was obtained on the veneered surfaces. By applying different amounts of powder, the coating thickness could be varied, and the optimum amount of powder was determined for each veneer type.
The general conclusion of climate change studies is the necessity of eliminating net CO2 emissions in general and from the electric power systems in particular by 2050. The share of renewable energy is increasing worldwide, but due to the intermittent nature of wind and solar power, a lack of system flexibility is already hampering the further integration of renewable energy in some countries. In this study, we analyze if and how combinations of carbon pricing and power-to-gas (PtG) generation in the form of green power-to-hydrogen followed by methanation (which we refer to as PtG throughout) using captured CO2 emissions can provide transitions to deep decarbonization of energy systems. To this end, we focus on the economics of deep decarbonization of the European electricity system with the help of an energy system model. In different scenario analyses, we find that a CO2 price of 160 €/t (by 2050) is on its own not sufficient to decarbonize the electricity sector, but that a CO2 price path of 125 (by 2040) up to 160 €/t (by 2050), combined with PtG technologies, can lead to an economically feasible decarbonization of the European electricity system by 2050. These results are robust to higher than anticipated PtG costs.
Clay minerals play an increasingly important role as functional fillers and reinforcing materials for clay polymer nanocomposites (CPN) in advanced applications. Among the prerequisites necessary for polymer improvement by clay minerals are homogeneous and stable distribution of the clay mineral throughout the CPN, good compatibility of the reinforcement with the matrix component and suitable processability. Typically, clay minerals are surface-modified with organic interface active compounds like detergents or silanes to obtain favorable properties as filler. They are incorporated into the polymer matrix using manufacturing equipment like extruders, batch reactors or other mixing machines. In order for the surface modification to survive the stresses and strains during incorporation, the modified clay minerals must display sufficient thermal and mechanical stability to retain the compatibilizing effect. In the present study, thermogravimetry was used in combination with isoconversional kinetic analysis to determine the thermal stability of a silane-modified clay mineral based on bentonite. These findings were compared with the stability of the same clay mineral that was only surfactant-modified. It was found that silane modification leads to significantly improved thermal stability, which depends strongly on the type of silane employed.
Preliminary results of homomorphic deconvolution application to surface EMG signals during walking
(2021)
Homomorphic deconvolution is applied to sEMG signals recorded during walking. Gastrocnemius lateralis and tibialis anterior signals were acquired according to the SENIAM recommendations. MUAP parameters like amplitude and scale were estimated, whilst the MUAP shape parameter was fixed. This provides a useful time-frequency representation of the sEMG signal. The estimation of the MUAP scale parameter was verified by extracting the mean frequency of the filtered EMG signal, derived from the scale parameter estimated with two different MUAP shape values.
The properties of polyelectrolyte multilayers are ruled by the process parameters employed during self-assembly. This is the first study in which a design of experiment approach was used to validate and control the production of ultrathin polyelectrolyte multilayer coatings by identifying the ranges of critical process parameters (polyelectrolyte concentration, ionic strength and pH) within which coatings with reproducible properties (thickness, refractive index and hydrophilicity) are created. Mathematical models describing the combined impact of key process parameters on coatings properties were developed demonstrating that only ionic strength and pH affect the coatings thickness, but not polyelectrolyte concentration. While the electrolyte concentration had a linear effect, the pH contribution was described by a quadratic polynomial. A significant contribution of this study is the development of a new approach to estimate the thickness of polyelectrolyte multilayer nanofilms by quantitative rhodamine B staining, which might be useful in all cases when ellipsometry is not feasible due to the shape complexity or small size of the coated substrate. The novel approach proposed here overcomes the limitations of known methods as it offers a low spatial sampling size and the ability to analyse a wide area without restrictions on the chemical composition and shape of the substrate.
Processing
(2014)
In this chapter, some relevant aspects and illustrative examples of online monitoring tools as the basis for process control in the manufacturing and processing of thermosetting resins are briefly discussed. In principle, any chemical or physical information made accessible by sensors can be used for online monitoring of resin formation, resin location in the mold, and resin cure. For instance, changes in the flow properties of the reaction mixture are often routinely recorded as a function of reaction time during resin synthesis as a measure for the degree of conversion of raw materials into macromolecules or oligomers by applying rheometry in an in-process environment. Typically, a small sample of the reaction mixture is by-passed, subjected to rheological measurement, and re-introduced into the bulk reactor. In a similar way, pH measurements, turbidimetric measurements, or other analyses are performed. Although rheometry may not always be suitable for following resin cure (especially in cases where there is a very rapid increase in viscosity after initiation of the cure), [1] naturally, the method can in principle also be used in the subsequent processing of the thermosets, for instance in the curing of wood glue applied to wood specimens [2]. Similarly, pH changes during thermoset curing can be followed. Hence, an encyclopedic and comprehensive approach to present process control methods would systematically proceed according to the involved physical measurement principle. However, since only a very brief sketch of means for monitoring thermoset processing can be given here, only a small, personally biased selection of important methods and application examples is addressed in the following sections. These examples hopefully illustrate some of the general strategies and solutions to problems that are typically encountered when processing thermosets.
Process analysis and process control have attracted increasing interest in recent years. The development and application of process analytical methods are a prerequisite for the knowledge-based manufacturing of industrial goods and allow for the production of high-value products of defined, constantly good quality. Discussed in this chapter are the measurement principle and some relevant aspects and illustrative examples of online monitoring tools as the basis for process control in the manufacturing and processing of thermosetting resins. Optical spectroscopy is featured as one of the main process analytical methods applicable to, among other applications, online monitoring of resin synthesis. In combination with chemometric methods for multivariate data analysis, powerful process models can be generated within the framework of feedback and feed-forward control concepts. Other analytical methods covered in this chapter are those frequently used to control further processing of thermosets to the final parts, including dielectric analysis, ultrasonics, fiber optics, and Fiber Bragg Grating sensors.
Properties data of phenolic resins synthetized for the impregnation of saturating Kraft paper
(2018)
The quality of decorative laminate boards depends on the impregnation process of Kraft papers with a phenolic resin, which constitute the raw materials for the manufacture of the cores of such boards. In the laminates industry, the properties of resins are adapted via their syntheses, usually by mixing phenol and formaldehyde in a batch, where additives, temperature and stirring parameters can be controlled. Therefore, many possibilities for preparing phenolic resins exist, leading to different combinations of physico-chemical properties. In this article, the properties data of eight phenolic resins synthesized with different parameters of pH and reaction times at 60 °C and 90 °C are presented: the losses of pH after synthesis and the dynamic viscosities measured after synthesis and once the solid content is adjusted to 45% w/w in methanol. Data acquired by Differential Scanning Calorimetry (DSC) of the resins and Inverse Gas Chromatography (IGC) of cured solids are given as well.
Employing diffuse reflection ultraviolet-visible (UV–Vis) spectroscopy, we developed an approach that is capable of quantitatively determining flux residues on a technical copper surface. The technical copper surface was soldered with a no-clean flux system of organic acids. By a post-solder cleaning step with different cleaning parameters, various levels of residues were produced. The surface was quantitatively and qualitatively characterized using X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), Fourier transform infrared spectroscopy (FTIR) and diffuse reflection UV–Vis spectroscopy. With the use of a multivariate analysis (MVA) we examined the UV–Vis data to create a correlation to the carbon content on the surface. The UV–Vis data could be discriminated for all groups by their level of organic residues. Combined with XPS, the data were evaluated by a partial least squares (PLS) regression to establish a model. Based on this predictive model, the carbon content was calculated with an absolute error of 2.7 at.%. Due to the high correlation of the predictive model, the easy-to-use measurement and the evaluation by multivariate analysis, the developed method seems suitable for an online monitoring system. With this system, flux residues can be detected in a manufacturing cleaning process of technical surfaces after soldering.
Comparative analysis of the R&D efficiency of 14 leading pharmaceutical companies for the years 1999–2018 shows that there is a close positive correlation between R&D spending and the two investigated R&D output parameters, approved NMEs and the cumulative impact factor of their publications. In other words, higher R&D investments (input) were associated with higher R&D output. Second, our analyses indicate that there are "economies of scale" (size) in pharmaceutical R&D.
Context: An experiment-driven approach to software product and service development is gaining increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software capabilities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development.
Objective: This paper explores the state of the practice of experimentation in the software industry. It also identifies the key challenges and success factors that practitioners associate with the approach.
Method: A qualitative survey based on semi-structured interviews and thematic coding analysis was conducted. Ten Finnish software development companies, represented by thirteen interviewees, participated in the study.
Results: The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice is not yet mature. In particular, experimentation is rarely systematic and continuous. Key challenges relate to changing the organizational culture, accelerating the development cycle speed, and finding the right measures for customer value and product success. Success factors include a supportive organizational culture, deep customer and domain knowledge, and the availability of the relevant skills and tools to conduct experiments.
Conclusions: It is concluded that the major issues in moving towards continuous experimentation are on an organizational level; most significant technical challenges have been solved. An evolutionary approach is proposed as a way to transition towards experiment-driven development.
Reflectometry has long been known as an interferometric method which can be used to characterize surfaces and thin films regarding their structure and, to a certain degree, composition as well. Properties like layer structures, layer thickness, density, and interface roughness can be determined by fitting the obtained reflectivity data with an appropriate model using a recursive fitting routine. However, one major drawback of the reflectometric method is its restriction to planar surfaces. In this article we demonstrate an approach to apply X-ray and neutron reflectometry to curved surfaces by means of the example of bent bare and coated glass slides. We prove the possibility to observe all features like Fresnel decay, Kiessig fringes, Bragg peaks and off-specular scattering and are able to interpret the data using common fitting software and to derive quantitative results about roughness, layer thickness and internal structure. The proposed method has become practical due to the availability of high-quality 2D detectors. It opens up the option to explore many kinds and shapes of samples which, due to their geometry, have not been in the focus of reflectometry techniques until now.
Relationship marketing is an important issue in every business. Knowing the customers and establishing, maintaining and enhancing long-term customer relationships is a key component of long-term business success. Considering that sport is such big business today, it is surprising that this crucial approach to marketing has yet to be fully recognised either in literature or in the sports business itself. Relationship Marketing in Sports aims to fill this void by discussing and reformulating the principles of relationship marketing and by demonstrating how relationship marketing can be successfully applied in practice within a sports context. Written by a unique author team of academic and practitioner experience, the book provides the reader with: the first book to apply the principles of relationship marketing specifically to a sports context case studies from around the world to provide a uniquely global approach applicable worldwide strong pedagogical features including learning outcomes, overviews, discussion questions, glossary, guided reading and web links practical advice for professional, semi-professional and non-professional sporting organisations a companion website providing web links, case studies and PowerPoint slides for lecturers. Relationship Marketing in Sports is crucial reading for both students and professionals alike and marks a turning point in the marketing of sports.
On-chip metallization, especially in modern integrated BCD technologies, is often subject to high current densities and pronounced temperature cycles due to heat dissipation from power switches like LDMOS transistors. This paper continues the work on a sensor concept where small sense lines are embedded in the metallization layers above the active area of a switching LDMOS transistor. The sensors show a significant resistance change that correlates with the number of power cycles. Furthermore, influences of sense line layer, geometry and the dissipated energy are shown. In this paper, the focus lies on a more detailed analysis of the observed change in sense line resistance.
The proper selection of a demand forecasting method is directly linked to the success of supply chain management (SCM). However, today's manufacturing companies are confronted with uncertain and dynamic markets. Consequently, classical statistical methods are not always appropriate for accurate and reliable forecasting. Algorithms of artificial intelligence (AI) are currently used to improve statistical methods. Existing literature only gives a very general overview of the AI methods used in combination with demand forecasting. This paper provides an analysis of the AI methods published in the last five years (2017–2021). Furthermore, a classification is presented by clustering the AI methods in order to identify trends in the methods applied. Finally, a classification of the different AI methods according to the dimensionality of data, volume of data, and time horizon of the forecast is presented. The goal is to support the selection of the appropriate AI method to optimize demand forecasting.
Supply chains have become increasingly complex, making it difficult to ensure transparency throughout the whole supply chain. In this context, first approaches came up, adopting the immutable, decentralised, and secure characteristics of the blockchain technology to increase the transparency, security, authenticity, and auditability of assets in supply chains. This paper investigates recent publications combining the blockchain technology and supply chain management and classifies them regarding the complexity to be mapped on the blockchain. As a result, the increase of supply chain transparency is identified as the main objective of recent blockchain projects in supply chain management. Thereby, most of the recent publications deal with simple supply chains and products. The few approaches dealing with complex parts only map sub-areas of supply chains. Currently no example exists which has the aim of increasing the transparency of complex manufacturing supply chains, and which enables the mapping of complex assembly processes, an efficient auditability of all assets, and an implementation of dynamic adjustments.
Artificial intelligence is a field of research that is seen as a means of realizing digitalization and Industry 4.0. It is considered the critical technology needed to drive the future evolution of manufacturing systems. At the same time, autonomous guided vehicles (AGV) have developed into an essential part of manufacturing systems due to the flexibility they contribute to the whole manufacturing process. However, there are still open challenges in the intelligent control of these vehicles on the factory floor, especially in dynamic environments where resources should be controlled in such a way that they can be adjusted to turbulences efficiently. Therefore, this paper aims to develop a conceptual framework for addressing a catalog of criteria that considers several machine learning algorithms to find the optimal algorithm for the intelligent control of AGVs. By applying the developed framework, the algorithm most suitable for the current operation of the AGV is automatically selected in order to enable efficient control within the factory environment. In future work, this decision-making framework can be transferred to further scenarios with multiple AGV systems, including internal communication along AGV fleets. With this study, the automatic selection of the optimal machine learning algorithm for the AGV improves performance in such a way that computational power is distributed efficiently within a hybrid system linking the AGV and cloud storage.
Structural and functional thermosetting composite materials are exposed to different kinds of stress which can damage the polymer matrix, thus impairing the intended properties. Therefore, self-healing materials have attracted the attention of many research groups over the last decades in order to provide satisfactory material properties and outstanding product durability. The present article provides a critical overview of promising self-healing strategies for crosslinked thermoset polymers. It is organized in two parts: an overview about the different approaches to self-healing is given in the first part, whereas the second part focuses on the specific chemistries of the main strategies to achieve self-healing through crosslinking. It is attempted to provide a comprehensive discussion of different approaches which are described in the scientific literature. By comparison of the advantages and disadvantages, the authors wish to provide helpful insights on the assessment of the potential to transfer the extensive present knowledge about self-healing materials and methods to surface varnishing thermoset coatings.
Self-healing thermosets
(2022)
This chapter discusses the basic extrinsic, intrinsic, and combined extrinsic/intrinsic strategies for equipping thermosetting polymers with self-healing properties. The main focus will be on the presentation of a holistic optimization of thermosetting materials, that is, on a simultaneous optimization of both self-healing and other, specialized material properties. Due to their very rigid, highly cross-linked three-dimensional structure, thermosetting polymers require special chemical strategies to achieve self-healing properties. The main chemical strategies available for this will be briefly outlined. The examples given illustrate interesting and/or typical procedures and serve as an inspiration to find solutions for your own applications. They summarize important recent developments in research and technology aiming toward multifunctional, truly smart self-healing thermosetting materials. An important aspect in this topic area is also how precisely the self-healing effects are analytically checked, quantified, and evaluated. A range of measuring methods is available for this purpose. In this chapter, the most important analytical tools for testing self-healing properties are briefly introduced and highlighted with some illustrative examples.
The transmembrane Ca2+-activated Cl− channel human bestrophin-1 (hBest1) is expressed in the retinal pigment epithelium, and mutations of the BEST1 gene cause ocular degenerative diseases collectively referred to as “bestrophinopathies”. A large number of genetic, biochemical, biophysical and molecular biological studies have been performed to understand the relationship between structure and function of the hBest1 protein and its pathophysiological significance. Here, we review the current understanding of hBest1 surface organization, its interactions with membrane lipids in model membranes, and its association with microdomains of cellular membranes. These highlights are significant for the modulation of channel activity in cells.
The persistent development towards decreasing batch sizes due to ongoing product individualization, as well as increasingly dynamic market and competitive conditions, leads to new changeability requirements in production environments. Since each of the individualized products might require different base materials or components and manufacturing resources, the paths of the products going through the factory as well as the required internal transport and material supply processes are going to differ for every product. Conventional planning and control systems, which rely on predefined processes and central decision-making, are not capable of dealing with the arising system complexity along the dimensions of changing goods, layouts and throughput requirements. The concepts of "self-organization" in combination with "autonomous control" provide promising solutions to these new requirements by using, among other things, the potential of autonomous, decentralized and target-optimized logistical objects (e.g. smart products, bins and conveyor systems) which are able to communicate and interact with each other as well as with human workers. To investigate the potential of automation and human-robot collaboration for intralogistics, a research project for the development of a collaborative tugger train has been started at the ESB Logistics Learning Factory in line with various student projects in neighboring research areas. This collaborative tugger train system, in combination with other manual (e.g. handcarts) and (semi-)automated conveyor systems (e.g. automated guided forklifts), will be integrated into a dynamic, self-organized scenario with varying production batch sizes to develop a method for target-oriented self-organization and autonomous control of intralogistics systems. For a structured investigation of self-organized scenarios, a generic intralogistics model as well as a criteria catalogue has been developed.
The ESB Logistics Learning Factory will serve as a practice-oriented research, validation and demonstration environment for these purposes.
Modern production systems are characterized by the increasing use of CPS and IoT networks. However, processing the available information for adaptation and reconfiguration often occurs in relatively large time cycles and thus does not take advantage of the optimization potential available in the short term. In this paper, a concept is presented that, considering the process information of the individual heterogeneous system elements, detects optimization potentials and performs or proposes adaptation or reconfiguration. The concept is evaluated utilizing a case study in a learning factory. The resulting system thus enables better exploitation of the potentials of the CPPS.
Silicones
(2014)
Silicones are found in a variety of applications with requirements that range from long life at elevated temperatures to fluidity at low temperatures. This chapter first considers silicone elastomers and their application in room temperature vulcanizing (RTV) and heat curing systems (HTV). Also, new technologies for UV curing are introduced. Coverage of RTVs includes both one-component and two-component systems and the different cure chemistries of each, and is followed by a separate discussion of silicone laminates. Due to the high importance of silicone fluids, they are also discussed. Fluids include polishes, release agents, surfactants, and dielectric fluids.
Silicones
(2022)
Silicones are found in a variety of applications with requirements that range from long life at elevated temperatures to fluidity at low temperatures. This chapter first considers silicone elastomers and their application in room temperature vulcanizing (RTV) and heat curing systems (HTV). Also, new technologies for UV curing are introduced. Coverage of RTVs includes both one-component and two-component systems and the different cure chemistries of each and is followed by a separate discussion of silicone laminates. Due to the high importance of silicone fluids, they are also discussed. Fluids include polishes, release agents, surfactants, and dielectric fluids.
Globalisation, shorter product life cycles, and increasing product varieties have led to complex supply chains. At the same time, there is a growing interest of customers and governments in having a greater transparency of brands, manufacturers, and producers throughout the supply chain. Due to the complex structure of collaborative manufacturing networks, the increase of supply chain transparency is a challenge for manufacturing companies. The blockchain technology offers an innovative solution to increase the transparency, security, authenticity, and auditability of products. However, there are still uncertainties when applying the blockchain technology to manufacturing scenarios and thus enabling all stakeholders to trace back each component of an assembled product. This paper proposes a framework design to increase the transparency and auditability of products in collaborative manufacturing networks by adopting the blockchain technology. In this context, each component of a product is marked with a unique identification number generated by blockchain-based smart contracts. In this way, a transparent auditability of assembled products and their components can be achieved for all stakeholders, including the customer.
The increasing complexity and need for availability of automated guided vehicles (AGVs) pose challenges to companies, leading to a focus on new maintenance strategies. In this paper, a smart maintenance architecture based on a digital twin is presented to optimize the technical and economic effectiveness of AGV maintenance activities. To realize this, a literature review was conducted to identify the necessary requirements for smart maintenance and digital twins. The identified requirements were combined into modules and then integrated into an architecture. The architecture was evaluated on a real AGV, with the battery serving as one of the critical components.
Information Systems in Distributed Environment (ISDE) is becoming a prominent standard in this globalization era due to advancements in information and communication technologies. The advent of the internet has supported Distributed Software Development (DSD) by introducing new concepts and opportunities, resulting in benefits such as scalability, flexibility, interdependence, reduced cost, resource pools, and usage tracking. The distributed development of information systems as well as their deployment and operation in distributed environments impose new challenges for software organizations and can lead to business advantages. In distributed environments, business units collaborate across time zones, organizational boundaries, work cultures and geographical distances, something that ultimately has led to an increasing diversification and growing complexity of cooperation among units. The real-world practice of developing, deploying and operating information systems in globally distributed projects has been viewed from various perspectives, though technical and engineering viewpoints in conjunction with managerial and organizational viewpoints have dominated researchers' attention so far. Successful participation in distributed environments, however, is ultimately a matter of the participants understanding and exploiting the particularities of their respective local contexts at specific points in time and exploring practical solutions through the local resources available.
This special issue of the Computer Standards & Interfaces journal therefore includes papers received through the public call for papers, as well as extended and improved versions of the best papers from the International Workshop on Information Systems in Distributed Environment (ISDE 2014). It aims to serve as a forum that brings together academics, researchers, practitioners and students in the field of distributed information systems, by presenting novel developments and lessons learned from real-world cases, and to promote the exchange of ideas, discussion and advancement in these areas.
We analyze economics PhDs' collaborations in peer-reviewed journals from 1990 to 2014 and investigate such collaborations' quality in relation to each co-author's research quality, field and specialization. We find that a greater overlap between co-authors' previous research fields is significantly related to a greater publication success of co-authors' joint work, and this is robust to alternative specifications. Co-authors who engage in a distant collaboration are significantly more likely to have a large research overlap, but this significance is lost when co-authors' social networks are accounted for. High-quality collaboration is more likely to emerge as a result of an interaction between specialists and generalists with overlapping fields of expertise. Interdisciplinary work across subfields of economics is more likely to be conducted by co-authors who already have interdisciplinary portfolios than by co-authors who are specialized in different subfields.
Artificial Intelligence-based Assistants (AIAs) are spreading quickly both in homes and offices. They have already left their original habitats of "intelligent speakers" providing easy access to music collections. They initiated a multitude of new devices and are already populating devices such as TV sets. Characteristic of intelligent digital assistants is the formation of platforms around their core functionality. Thus, the AI capabilities of the assistants are used to offer new services and create new interfaces for business processes. There are positive network effects between the assistants and the services as well as within the services. Therefore, many companies see the need to get involved in the field of digital assistants but lack a framework to align their initiatives with their corporate strategies. In order to lay the foundation for a comprehensive method, we are therefore investigating intelligent digital assistants. Based on this analysis, we are developing a framework of strategic opportunities and challenges.
The technologies of digital transformation, such as the Internet-of-Things (IoT), artificial intelligence or predictive maintenance, enable significant efficiency gains in industry and are becoming increasingly important as a competitive factor. However, their successful implementation and creative future application require the broad acceptance and knowledge of non-IT-related groups, such as production management students, engineers or skilled workers, which is still lacking today. This paper presents a low-threshold training concept that brings IoT technologies and applications into manufacturing-related higher education and employee training. The concept addresses the relevant topics, from IoT basics to predictive maintenance, using mobile low-cost hardware and infrastructure.
Zero or plus energy office buildings must have very high building standards and require highly efficient energy supply systems due to space limitations for renewable installations. Conventional solar cooling systems use photovoltaic electricity or thermal energy to run either a compression cooling machine or an absorption cooling machine in order to produce cooling energy during daytime, while they use electricity from the grid for the nightly cooling energy demand. With a hybrid photovoltaic-thermal collector, electricity as well as thermal energy can be produced at the same time. These collectors can also produce cooling energy at nighttime by longwave radiation exchange with the night sky and convection losses to the ambient air. Such a renewable trigeneration system offers new fields of application. However, the technical, ecological and economic aspects of such systems are still largely unexplored.
In this work, the potential of a PVT system to heat and cool office buildings in three different climate zones is investigated. In the investigated system, PVT collectors act as a heat source and heat sink for a reversible heat pump. Due to the reduced electricity consumption (from the grid) for heat rejection, the overall efficiency and economics improve compared to a conventional solar cooling system using a reversible air-to-water heat pump as heat and cold source.
A parametric simulation study was carried out to evaluate the system design with different PVT surface areas and storage tank volumes to optimize the system for three different climate zones and for two different building standards. It is shown that such systems are technically feasible today. With a maximum utilization of PV electricity for heating, ventilation, air conditioning and other electricity demand such as lighting and plug loads, high solar fractions and primary energy savings can be achieved.
Annual costs for such a system are comparable to conventional solar thermal and solar electrical cooling systems. Nevertheless, the economic feasibility strongly depends on country-specific energy prices and energy policy. However, even in countries without compensation schemes for energy produced by renewables, this system can still be economically viable today. It could be shown that a specific system dimensioning can be found at each of the investigated locations worldwide for an economically and ecologically valuable operation of an office building with PVT technologies in different system designs.
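The nocturnal cooling mechanism described in this abstract (longwave radiation exchange with the night sky plus convection to ambient air) can be approximated with a standard grey-body sky model. The sketch below is illustrative only: the sky-emissivity correlation, emissivity and convective coefficient are common textbook assumptions, not parameters taken from the study.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def sky_temperature(t_ambient_k, dew_point_c):
    """Estimate the effective sky temperature from ambient temperature (K)
    and dew point (degC) via a clear-sky emissivity correlation
    (Berdahl/Martin-type; coefficients are illustrative)."""
    eps_sky = 0.711 + 0.0056 * dew_point_c + 7.3e-5 * dew_point_c ** 2
    return t_ambient_k * eps_sky ** 0.25

def night_cooling_power(t_collector_k, t_ambient_k, dew_point_c,
                        emissivity=0.9, h_conv=5.0):
    """Net specific cooling power (W/m^2) of an uncovered collector at night:
    longwave exchange with the sky plus convective exchange with ambient air.
    Positive values mean heat is rejected from the collector."""
    t_sky = sky_temperature(t_ambient_k, dew_point_c)
    q_rad = emissivity * SIGMA * (t_collector_k ** 4 - t_sky ** 4)
    q_conv = h_conv * (t_collector_k - t_ambient_k)
    return q_rad + q_conv
```

For a collector a few kelvin above a clear-sky ambient, this model yields net cooling powers on the order of 100 W/m^2, consistent with the magnitude typically cited for night-sky radiative cooling.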
The use of learning factories for education in maintenance concepts is limited, despite the important role maintenance plays in the effective operation of organizational assets. A training programme in a learning factory environment is presented where a combination of gamification, classroom training and learning factory applications is used to introduce students to the concepts of maintenance plan development, asset failure characteristics and the costs associated with maintenance decision-making. The programme included a practical task to develop a maintenance plan for different advanced manufacturing machines in a learning factory setting. The programme stretched over a four-day period and demonstrated how learning factories can be effectively utilized to teach management related concepts in an interdisciplinary team context, where participants had no, or very limited, previous exposure to these concepts.
This paper covers the test and verification of a forecast-based Monte Carlo algorithm for an optimized, demand-oriented operation of combined heat and power (CHP) units using the hardware-in-the-loop approach. For this purpose, the optimization algorithm was implemented at a test bench at Reutlingen University for controlling a CHP unit in combination with a thermal energy storage, both in real hardware. In detail, the hardware-in-the-loop tests are intended to reveal the effects of demand forecasting accuracy, the impact of thermal energy storage capacity and the influence of load profiles on demand-oriented operation of CHP units. In addition, the paper focuses on the evaluation of the energy content of the thermal energy storage under practical conditions. It is shown that a 5-layer model makes it possible to determine the stored energy quite accurately, which is verified by experimental results. The hardware-in-the-loop tests disclose that demand forecasting accuracy, especially electricity demand forecasting, as well as load profiles strongly impact the potential for on-site CHP electricity utilization in demand-oriented mode. Moreover, it is shown that a larger effective capacity of the thermal energy storage positively affects demand-oriented operation. In the hardware-in-the-loop tests, the fraction of electricity generated by the CHP unit and utilized on-site could thus be increased by a maximum of 27% compared to heat-led operation, which is still the most common modus operandi of small-scale CHP plants. Hence, the hardware-in-the-loop tests were adequate to prove the significant impact of the proposed algorithm for optimization of demand-oriented operation of CHP units.
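A layer model of the kind mentioned above determines the storage energy content by summing the sensible heat of each temperature layer. The sketch below assumes equal-volume layers and constant water properties; the layer count, reference temperature and property values are illustrative simplifications, not the paper's calibrated model.

```python
def storage_energy_kwh(layer_temps_c, volume_m3, t_ref_c=20.0,
                       rho=992.0, cp=4.18):
    """Energy content (kWh) of a stratified thermal energy storage,
    approximated by n equal-volume layers (a 5-layer model would pass
    five layer temperatures).
    rho: water density in kg/m^3, cp: specific heat in kJ/(kg K)."""
    n = len(layer_temps_c)
    layer_mass = rho * volume_m3 / n  # mass of water per layer, kg
    # Sensible heat of each layer relative to the reference temperature
    energy_kj = sum(layer_mass * cp * (t - t_ref_c) for t in layer_temps_c)
    return energy_kj / 3600.0  # kJ -> kWh
```

Because only sensible heat is counted, a stratified tank and a fully mixed tank with the same mean temperature hold the same energy; the value of the layer model lies in tracking which part of that energy is available at a usable temperature.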
This paper presents the first part of a research work conducted at the University of Applied Sciences (HFT Stuttgart). The aim of the research was to investigate the potential of low-cost renewable energy systems to reduce the energy demand of the building sector in hot and dry areas. Radiative cooling to the night sky represents a low-cost renewable energy source. The dry desert climate conditions promote radiative cooling applications. The system technology adopted in this work is based on uncovered solar thermal collectors integrated into the building's hydronic system. By implementing different control strategies, the same system can be used for cooling as well as for heating applications. This paper focuses on identifying the collector parameters required as coefficients to calibrate the mathematical model of such an unglazed collector within the simulation environment. The parameter identification process implies testing the collector for its thermal performance. This paper attempts to provide an insight into the dynamic testing of uncovered solar thermal collectors (absorbers), taking into account their prospective operation at nighttime for radiative cooling applications. In this study, the main parameters characterizing the performance of the absorbers for radiative cooling applications are identified and obtained from a standardized testing protocol. For this aim, a number of plastic solar absorbers of different designs were tested on the outdoor test-stand facility at HFT Stuttgart for the characterization of their thermal performance. The testing process was based on the quasi-dynamic test method of the international standard for solar thermal collectors, EN ISO 9806. The test database was then used within a mathematical optimization tool (GenOpt) to determine the optimal parameter settings of each absorber under testing. Those performance parameters were significant for comparing the thermal performance of the tested absorbers.
The coefficients (identified parameters) were then used to plot the thermal efficiency curves of all absorbers, for both the heating and cooling modes of operation. Based on the intended main scope of the system utilization (heating or cooling), the tested absorbers could be benchmarked. Hence, one of those absorbers was selected to be used in the following simulation phase, as planned in the research project.
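The efficiency curves mentioned above follow from the identified coefficients via the standard collector performance equation. The sketch below uses the simplified steady-state form underlying such curves; the full EN ISO 9806 quasi-dynamic model for unglazed absorbers adds wind-speed and longwave-irradiance terms, and the coefficient values here are placeholders, not results from the tested absorbers.

```python
def collector_power(g_net, t_mean_c, t_amb_c, eta0=0.85, a1=10.0, a2=0.0):
    """Specific thermal power (W/m^2) of an absorber from its identified
    performance coefficients:
        Q = eta0 * G - a1 * dT - a2 * dT^2
    where dT is the mean fluid temperature minus ambient temperature.
    eta0: zero-loss efficiency, a1/a2: heat loss coefficients
    in W/(m^2 K) and W/(m^2 K^2). Negative results indicate net heat
    loss, i.e. the cooling mode of operation (at night, g_net then
    reflects the longwave radiation deficit to the sky)."""
    dt = t_mean_c - t_amb_c
    return eta0 * g_net - a1 * dt - a2 * dt ** 2
```

Plotting efficiency `Q / G` against `dT / G` for each absorber's fitted `(eta0, a1, a2)` reproduces the benchmarking curves described in the abstract: the heating branch lies at positive `dT`, while the nighttime cooling branch corresponds to operation below ambient, where heat losses become the useful output.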
Today, virtualizing pharma R&D is increasingly associated with data analytics and artificial intelligence (AI), technologies that have been developed by software companies outside the healthcare sector. The process of virtualizing pharma R&D is closely related to the technological advancements that result in the generation of large data sets ranging from genomics, proteomics, metabolomics, medical imaging and IoT wearables to large clinical trials, making it necessary for pharma companies to find new ways to store and ultimately analyze information. As a consequence, pharma companies are experimenting with AI in R&D, in applications ranging from in-silico drug design to clinical trial participant identification and dosage error reduction.
Computers are increasingly used in teams in various contexts, for example in negotiations. Especially when using computer support for decision-making processes, it is an important question whether active collaboration within the team - for example via audio conference - has additional benefits beyond the supply of full task-relevant information via computer. In team negotiations, team representatives are only able to represent the whole team if the diverse preferences of the team members are aligned prior to the negotiation. In an experimental study with 150 participants, we provided team members with complete information about each other's preferences during a computer-supported negotiation preparation conducted either collaboratively (computer-mediated) or separately, and subsequently asked them for their priorities as representatives of the team. Our results showed that providing complete task-relevant information via computer is insufficient to compensate for the absence of active collaboration within the team.
Context
In a world of high dynamics and uncertainties, it is almost impossible to have a long-term prediction of which products, services, or features will satisfy the needs of the customer. To counter this situation, conducting Continuous Improvement or Design Thinking for product discovery is a common approach. A major constraint in conducting product discovery activities is the high effort to discover and validate features and requirements. In addition, companies struggle to integrate product discovery activities into their agile processes and iterations.
Objective
This paper suggests a supportive tool, the "Discovery Effort Worthiness (DEW) Index", for product owners and agile teams to determine a suitable amount of effort that should be spent on Design Thinking activities. To operationalize DEW, proposals for practitioners are presented that can be used to integrate product discovery into product development and delivery.
Method
A case study was conducted for the development of the DEW index. In addition, we conducted an expert workshop to develop proposals for the integration of product discovery activities into the product development and delivery process.
Results
First, we present the "Discovery Effort Worthiness Index" in the form of a formula. Second, we identified requirements that must be fulfilled for a systematic integration of product discovery activities into product development and delivery. Third, from these requirements we derived proposals for integrating product discovery activities into a company's product development and delivery.
Conclusion
The developed "Discovery Effort Worthiness Index" provides a tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. Integrating product discovery with product development and delivery should ensure that the results of product discovery are incorporated into product development. This aims to systematically analyze product risks to increase the chance of product success.
Marketing channels are among the most important elements of any value chain. This is because the bulk of a nation's manufacturing output flows through them. The intermediaries (e.g., distributors, wholesalers, retailers) constituting marketing channels perform specific distribution functions, such as transportation, storage, sales, financing, and relationship building, better than most manufacturers. Over his distinguished career, Louis P. Bucklin investigated many questions about the structuring and functioning of marketing channels using conceptual, empirical, and microeconomics model-based methodologies. Today, the academic marketing literature contains hundreds of articles that have employed these three broad classes of methodologies to investigate issues of channel intermediaries' interorganizational relationships (for example, power-dependence, relational outcomes, conflict and negotiations) and manufacturing firms' channel strategy (for example, channel structure, selection, coordination and control). So far, however, there has been no review of how the three different methodologies have contributed to advancing knowledge across this set of channels research domains.
Perivascular stromal cells, including mesenchymal stem/stromal cells (MSCs), secrete paracrine factors in response to exercise training that can facilitate improvements in muscle remodeling. This study was designed to test the capacity for muscle-resident MSCs (mMSCs) isolated from young mice to release regenerative proteins in response to mechanical strain in vitro, and subsequently determine the extent to which strain-stimulated mMSCs can enhance skeletal muscle and cognitive performance in a mouse model of uncomplicated aging. Protein arrays confirmed a robust increase in protein release at 24 h following an acute bout of mechanical strain in vitro (10%, 1 Hz, 5 h) compared to non-strain controls. Aged (24 month old) C57BL/6 mice were provided bilateral intramuscular injection of saline, non-strain control mMSCs, or mMSCs subjected to a single bout of mechanical strain in vitro (4 × 10^4). No significant changes were observed in muscle weight, myofiber size, maximal force, or satellite cell quantity at 1 or 4 wks between groups. Peripheral perfusion was significantly increased in muscle at 4 wks post-mMSC injection (p < 0.05), yet no difference was noted between control and preconditioned mMSCs. Intramuscular injection of preconditioned mMSCs increased the number of new neurons and astrocytes in the dentate gyrus of the hippocampus compared to both control groups (p < 0.05), with a trend toward an increase in water maze performance noted (p = 0.07). Results from this study demonstrate that acute injection of exogenously stimulated muscle-resident stromal cells does not robustly impact aged muscle structure and function, yet increases the survival of new neurons in the hippocampus.
The paper describes a new stimulus using learning factories and an academic research programme - an M.Sc. in Digital Industrial Management and Engineering (DIME) comprising a double degree - to enhance international collaboration between four partner universities. The programme will be structured in such a way as to maintain or improve the level of innovation at the learning factories of each partner. The partners agreed to use learning factory focus areas along with DIME learning modules to stimulate international collaboration. Furthermore, they identified several research areas within the framework of the DIME programme to encourage horizontal and vertical collaboration. Vertical collaboration connects faculty expertise across the learning factory network to advance knowledge in one of the focus areas, while horizontal collaboration connects knowledge and expertise across multiple focus areas. Together they offer a platform for students to develop the disciplinary and cross-disciplinary applied research skills necessary for addressing the complex challenges faced by industry. Hence, the university partners have the opportunity to develop the learning factory capabilities in alignment with the smart manufacturing concept. The learning factory is thus an important pillar in this venture. While postgraduate students and researchers in the DIME programme are the enablers who ensure the success of entire projects, the learning factory provides a learning environment that is entirely conducive to fostering these successful collaborations. Ultimately, the partners are focused on utilising smart technologies in line with the digitalization of the production process.
Artificial intelligence (AI) technologies, such as machine learning or deep learning, have been predicted to highly impact future organizations and radically change the way projects are managed. The Project Management Institute (PMI), the network of around 1.1 million certified project managers, ranked AI as one of the top three disruptors of their profession. In our own study on the effect of AI, we found that 37% of project management processes can be executed by machine learning and other AI technologies. In addition, Gartner recently postulated that 80% of the work of today's project managers may be eliminated by AI by 2030.
This editorial aims to outline today's project and portfolio management in context of pharmaceutical research and development (R&D), followed by an AI-vision and a more tangible mission, and illustrate what the consequences of an AI-enabled project and portfolio management could be for pharmaceutical R&D.
Context: Development of software intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want by direct customer feedback and usage behaviour observation. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and the integration of experiment results in both the product development cycle and the software development process.