This paper presents the first part of a research project conducted at the University of Applied Sciences Stuttgart (HFT Stuttgart). The aim of the research was to investigate the potential of low-cost renewable energy systems to reduce the energy demand of the building sector in hot and dry areas. Radiative cooling to the night sky represents a low-cost renewable energy source, and dry desert climate conditions favor radiative cooling applications. The system technology adopted in this work is based on uncovered solar thermal collectors integrated into the building’s hydronic system. By implementing different control strategies, the same system can be used for cooling as well as for heating applications. This paper focuses on identifying the collector parameters required as coefficients to configure such an unglazed collector, i.e., to calibrate its mathematical model within the simulation environment. The parameter identification process requires testing the collector for its thermal performance. This paper provides insight into the dynamic testing of uncovered solar thermal collectors (absorbers), taking into account their prospective nighttime operation for radiative cooling applications. In this study, the main parameters characterizing the performance of the absorbers for radiative cooling applications are identified and obtained from a standardized testing protocol. To this end, a number of plastic solar absorbers of different designs were tested on the outdoor test-stand facility at HFT Stuttgart to characterize their thermal performance. The testing process was based on the quasi-dynamic test method of the international standard for solar thermal collectors, EN ISO 9806. The test database was then used within a mathematical optimization tool (GenOpt) to determine the optimal parameter settings of each absorber under test. These performance parameters made it possible to compare the thermal performance of the tested absorbers. The coefficients (identified parameters) were then used to plot the thermal efficiency curves of all absorbers for both the heating and cooling modes of operation. Based on the intended main scope of system utilization (heating or cooling), the tested absorbers could be benchmarked. Hence, one of the absorbers was selected for the subsequent simulation phase, as planned in the research project.
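To illustrate the parameter-identification step, the following minimal Python sketch fits a simplified unglazed-collector model to hypothetical quasi-dynamic test data by least squares. The study itself used GenOpt; the file name, variable names, and the reduced model form here are assumptions for illustration only, not the EN ISO 9806 model in full.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical test data: net irradiance G (W/m^2), fluid-ambient
# temperature difference dT (K), wind speed u (m/s), and measured
# specific power q (W/m^2), as produced by quasi-dynamic test sequences.
G, dT, u, q = np.loadtxt("absorber_test.csv", delimiter=",", unpack=True)

def residuals(p):
    eta0, b_u, a1, a2 = p
    # Reduced unglazed-collector model: wind-dependent optical gain minus
    # linear and wind-dependent heat losses (the full EN ISO 9806 model
    # adds longwave-sky and dynamic terms omitted here for brevity).
    q_model = eta0 * (1.0 - b_u * u) * G - a1 * dT - a2 * u * dT
    return q_model - q

fit = least_squares(residuals, x0=[0.9, 0.05, 10.0, 5.0])
eta0, b_u, a1, a2 = fit.x
print(f"eta0={eta0:.3f}, b_u={b_u:.4f}, a1={a1:.2f}, a2={a2:.2f}")
```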
During the first years of the last decade, Egypt faced recurrent electricity cut-offs in summer, and in the past few years the electricity tariff has increased dramatically. Radiative cooling to the clear night sky is a renewable energy source that offers a partial solution, and the dry desert climate favors nocturnal radiative cooling applications. This study investigates the potential of nocturnal radiative cooling systems (RCSs) to reduce the energy consumption of the residential building sector in Egypt. The system technology proposed in this work is based on uncovered solar thermal collectors integrated into the building hydronic system. By implementing different control strategies, the same system can be used for both cooling and heating applications. The goal of this paper is to analyze the performance of RCSs in residential buildings in Egypt. The dynamic simulation program TRNSYS was used to simulate the thermal behavior of the system. The relevant issues of Egypt as a case study are first reviewed. The paper then introduces the work done to develop a building model that represents a typical residential apartment in Egypt. Typical occupancy profiles were developed to define the internal thermal gains, and the control strategy adopted to optimize system operation is presented as well. To fully understand and hence evaluate the operation of the proposed RCS, four simulation cases were considered: 1. a reference case (fully passive), 2. stand-alone operation of the RCS, 3. ideal heating and cooling operation (fully active), and 4. hybrid operation (the active cooling system supported by the proposed RCS). The analysis covered the three main distinct climates of Egypt, represented by the cities of Alexandria, Cairo and Asyut. Hotter and drier weather conditions resulted in a higher cooling potential and larger temperature differences. The simulated cooling power in Asyut was 28.4 W/m² for a 70 m² absorber field; for a smaller field area of 10 m², the cooling power reached 109 W/m², but with modest temperature differences. To meet rigorous thermal comfort conditions, the proposed sensible RCS cannot fully replace conventional air-conditioning units, especially in humid areas like Alexandria. Operating as a hybrid system, a 10% reduction in active cooling energy demand could be achieved in Asyut while keeping the cooling set-point at 24 °C; this reduction nearly doubled when the thermal comfort set-point was raised by two degrees (to 26 °C). In a sensitivity analysis, external shading devices as a passive measure as well as the implementation of the Egyptian code for buildings (ECP306/1–2005) were also investigated. The analysis raised further relevant aspects for discussion, e.g., system sizing, environmental effects, limitations and recommendations.
Learning factories present a promising environment for education, training and research, especially in manufacturing-related areas, which are a main driver of wealth creation in any nation. While numerous learning factories have been built in industry and academia in recent decades, a comprehensive scientific overview of the topic is still missing. This paper intends to close this gap by establishing the state of the art of learning factories. The motivations, historical background, and didactic foundations of learning factories are outlined. Definitions of the term learning factory and the corresponding morphological model are provided. An overview of existing learning factory approaches in industry and academia is provided, showing the broad range of different applications and varying contents. The state of the art of learning factory curricula design and their use to enhance learning and research, as well as potentials and limitations, are presented. Conclusions and an outlook on further research priorities are offered.
In the last decade, numerous learning factories for education, training, and research have been built up in industry and academia. In recent years, learning factory initiatives were elevated from a local to a European and then to a worldwide level. Since 2014, the CIRP Collaborative Working Group (CWG) on Learning Factories has enabled a lively exchange on the topic "Learning Factories for future oriented research and education in manufacturing". In this paper, results of discussions inside the CWG are presented. First, what is meant by the term learning factory is outlined. Second, based on this definition, a description model (morphology) for learning factories is presented. The morphology covers the most relevant characteristics and features of learning factories in seven dimensions. Third, following the morphology, the actual variance of learning factory manifestations is shown in six learning factory application scenarios, ranging from industrial training over education to research. Finally, future prospects of the learning factory concept are presented.
The physicochemical properties of synthetically produced bone substitute materials (BSM) have a major impact on biocompatibility. This affects bony tissue integration, osteoconduction, as well as the degradation pattern and the correlated inflammatory tissue responses including macrophages and multinucleated giant cells (MNGCs). Thus, influencing factors such as size, special surface morphologies, porosity, and interconnectivity have been the subject of extensive research. In the present publication, the influence of granule size was investigated for three identically manufactured bone substitute granules based on the technology of hydroxyapatite (HA)-forming calcium phosphate cements, including the inflammatory response in the surrounding tissue and especially the induction of MNGCs (as a parameter of material degradation). For the in vivo study, granules of three different size ranges (small = 0.355–0.5 mm; medium = 0.5–1 mm; big = 1–2 mm) were implanted in the subcutaneous connective tissue of 45 male BALB/c mice. At 10, 30, and 60 days post implantationem, the materials were explanted and histologically processed. The defect areas were initially examined histopathologically. Furthermore, pro- and anti-inflammatory macrophages were quantified histomorphometrically after their immunohistochemical detection. The number of MNGCs was quantified as well using a histomorphometrical approach. The results showed a granule size-dependent integration behavior. The surrounding granulation tissue had passivated in the groups of the two bigger granules by 60 days post implantationem, including a fibrotic encapsulation, while granulation tissue was still present in the group of the small granules, indicating an ongoing cell-based degradation process. The histomorphometrical analysis showed that the number of proinflammatory macrophages was significantly increased in the small granules at 60 days post implantationem. Similarly, a significant increase of MNGCs was detected in this group at 30 and 60 days post implantationem. Based on these data, it can be concluded that the integration and/or degradation behavior of synthetic bone substitutes can be influenced by granule size.
With the expansion of cyber-physical systems (CPSs) across critical and regulated industries, systems must be continuously updated to remain resilient. At the same time, they should be extremely secure and safe to operate and use. The DevOps approach caters to business demands for more speed and smartness in production, but it is extremely challenging to implement DevOps due to the complexity of critical CPSs and the requirements of regulatory authorities. In this study, expert opinions from 33 European companies expose the gap in the current state of practice on DevOps-oriented continuous development and maintenance. The study contributes to research and practice by identifying a set of needs. Subsequently, the authors propose a novel approach called Secure DevOps and provide several avenues for further research and development in this area. The study shows that, because security is a cross-cutting property in complex CPSs, its proficient management requires system-wide competencies and capabilities across CPS development and operation.
Implementation of product-service systems (PSS) requires structural changes in the way business in manufacturing industries is traditionally conducted. The literature frequently mentions the importance of human resource management (HRM), since people are involved in the entire process of PSS development and employees are the primary link to customers. However, to this day, no study has provided empirical evidence as to whether and in what way the HRM of firms that implement PSS differs from the HRM of firms that solely run a traditional manufacturing-based business model. The aim of this study is to contribute to closing this gap by investigating the particular HR components of manufacturing firms that implement PSS and comparing them with the HRM of firms that do not. The context of this study is the fashion industry, which is an ideal setting since it is a mature and highly competitive industry that is well documented for causing significant environmental impact. PSS present a promising opportunity for fashion firms to differentiate and to mitigate the industry’s ecological footprint. Analysis of variance (ANOVA) was conducted to analyze data from 102 international fashion firms. Findings reveal a significantly higher focus on nearly the entire spectrum of HRM components in firms that implement PSS compared with firms that do not. The empirical findings and their interpretation are used to propose a general framework of the role of HRM in PSS implementation. This serves as a departure point for both scholars and practitioners for further research, and fosters the understanding of the role of HRM in managing PSS implementation.
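As an illustration of the kind of group comparison reported above, the following minimal sketch runs a one-way ANOVA on invented HRM scores for PSS and non-PSS firms; the scores, group sizes, and variable names are purely hypothetical.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical scores for one HRM component (e.g., a Likert-scale index),
# split by whether a firm implements PSS; the study compared 102 firms.
hrm_pss = np.array([4.2, 3.9, 4.5, 4.1, 4.4])      # firms implementing PSS
hrm_no_pss = np.array([3.1, 3.4, 2.9, 3.3, 3.0])   # firms without PSS

f_stat, p_value = f_oneway(hrm_pss, hrm_no_pss)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```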
The fashion industry is well documented for causing significant environmental impact. Product-service systems (PSS) present a promising way to address this challenge. PSS shift the focus toward complementary service offers, which decouples customer satisfaction from material consumption and entails dematerialization. However, PSS are not eco-efficient by nature but need to be accompanied by corporate environmental management (CEM) practices. The objective of this article is to examine the potential of PSS to contribute to the environmental sustainability of today's fashion industry by investigating whether fashion firms with a positive attitude toward PSS implementation also pursue goals related to the ecological environment. For this purpose, analysis of variance (ANOVA) is conducted to analyze data from 102 fashion firms. Results reveal that the diffusion of PSS in today's fashion industry is low and few firms consider implementing PSS. Results furthermore demonstrate that PSS implementation is positively related to CEM. This indicates that existing structures of CEM favor PSS implementation and unlock the eco-efficient potential of implemented PSS in the fashion industry.
In recent years, the share economy has gained widespread success across different industries. Since small firms and new ventures command fewer resources, an increased focus on service allows them to differentiate themselves and cope with cost pressure in traditionally manufacturing-based industries. There is still a lack of understanding of how these firms manage to successfully shift towards service-oriented business models. This paper adopts a dynamic capabilities approach to examine the particular microfoundations that underlie the sensing, seizing and reconfiguring dynamic capabilities of early-stage service firms within a traditional retail market. The context of this study is the fashion industry. It is an ideal setting since it is characterized by severe competition, short life cycles, strong cost pressure and high volatility. There are few but increasing examples of entrepreneurial initiatives that try to compete by providing offers to resell, rent or swap clothes. Qualitative data from five early-stage fashion ventures are analyzed. Findings reveal that the ability to develop and maintain long-term relationships is essential. It has also been found crucial to acquire knowledge from external network partners, delegate tasks and share information. Furthermore, skills for interacting with customers and adopting consumer feedback are critical. This study provides empirical evidence of the dynamic capabilities of early-stage firms and contributes to knowledge of the factors that facilitate servitization in traditionally manufacturing-based industries. For practitioners, the presented microfoundations provide a framework of critical tasks that allow them to develop and maintain a service-oriented business model.
Venture capital and the innovative power of a state: econometric study including Google data
(2015)
This article focuses on venture capital investments and the innovative power of a state, defined by its public infrastructure. The economic implications are evaluated by estimating several panel regression models. The novelty is twofold: the research approach on the one hand and the new data set on the other. The data range from 1995 to 2014 and cover 10 European countries plus the US and Canada. For the first time, we include Google search data on venture capital. The results show that a significant increase in venture capital is mainly determined by economic conditions such as real GDP growth, while the impact of the innovative power of a state is not significant. We find that Google data is also positively and significantly related to venture capital investments. Consequently, we confirm that private business investment cannot be created by government policy alone, but rather through solid macroeconomic conditions.
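A minimal sketch of such a panel estimation follows; the data file, column names, and the use of country fixed effects via pooled OLS are assumptions standing in for the article's actual panel specifications.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel: VC investment, real GDP growth,
# an innovation/infrastructure proxy, and a Google search index.
df = pd.read_csv("vc_panel.csv")  # assumed columns: country, year, vc,
                                  # gdp_growth, innovation, google

# Pooled OLS with country dummies as a simple fixed-effects stand-in.
model = smf.ols("vc ~ gdp_growth + innovation + google + C(country)",
                data=df).fit()
print(model.summary())
```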
User innovators follow multiple diffusion and adoption pathways for their self-developed innovations. Users may choose to commercialize their self-developed products on the marketplace by becoming entrepreneurs. Few studies focus on understanding the personal and interpersonal factors that affect some user innovators’ entrepreneurial decision-making. Hence, this paper focuses on how user innovators make key decisions relating to opportunity recognition and evaluation, and on when opportunity evaluation leads to subsequent entrepreneurial action in the entrepreneurial process. We conducted an exploratory study using a multi-grounded theory methodology, as the user entrepreneurship phenomenon embodies complex social processes. We collected data through a netnography approach that targeted 18 entrepreneurs with potentially relevant differences via crowdfunding platforms. We integrated self-determination, human capital, and social capital theory to address the phenomena under study. The study’s significant findings posit that users’ motives are dissatisfaction with existing goods, interest in innovation, altruism, social recognition, desire for independence, and economic benefits. In addition, use-related experience, product-related knowledge, product diffusion, and iterative feedback positively impact innovative users’ entrepreneurial decision-making.
Different types of raw cotton were investigated using a commercial ultraviolet-visible/near infrared (UV-Vis/NIR) spectrometer (210–2200 nm) as well as a home-built setup for NIR hyperspectral imaging (NIR-HSI) in the range 1100–2200 nm. UV-Vis/NIR reflection spectroscopy reveals the dominant role that proteins, hydrocarbons and hydroxyl groups play in the structure of cotton; NIR-HSI shows a similar result. The experimentally obtained data in combination with principal component analysis (PCA) provide a general differentiation of the different cotton types. For UV-Vis/NIR spectroscopy, the first two principal components (PCs) represent 82 % and 78 % of the total data variance for the UV-Vis and NIR regions, respectively. For NIR-HSI, due to the large amount of data acquired, two methodologies for data processing were applied, at low and high lateral resolution: in the first method, the average of the spectra from one sample was calculated, and in the second method the spectra of each pixel were used. Both methods are able to explain ≥90 % of the total variance with the first two PCs. The results show that it is possible to distinguish between different cotton types based on a few selected wavelength ranges. The combination of HSI and multivariate data analysis has a strong potential in industrial applications due to its short acquisition time and low-cost development. This study opens a novel possibility for further development of this technique towards real large-scale processes.
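The PCA step might look as follows in a minimal sketch; the spectra file, the standardization preprocessing, and the two-component choice are assumptions, not the study's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix of reflection spectra: one row per cotton sample,
# one column per wavelength channel (the study covered 210-2200 nm).
spectra = np.load("cotton_spectra.npy")

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(spectra))
print("explained variance per PC:", pca.explained_variance_ratio_)
# Plotting PC1 vs PC2 scores then reveals clustering by cotton type.
```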
Hyperspectral imaging and reflectance spectroscopy in the range from 200–380 nm were used to rapidly detect and characterize copper oxidation states and their layer thicknesses on direct bonded copper in a non-destructive way. Single-point UV reflectance spectroscopy, as a well-established method, was utilized to benchmark the quality of the hyperspectral imaging results. For the laterally resolved measurements of the copper surfaces, a UV hyperspectral imaging setup based on a pushbroom imager was used. Six different types of direct bonded copper were studied. Each type had a different oxide layer thickness and was analyzed by depth profiling using X-ray photoelectron spectroscopy. In total, 28 samples were measured to develop multivariate models to characterize and predict the oxide layer thicknesses. The principal component analysis (PCA) models enabled a general differentiation between the sample types on the first two PCs, with 100.0% and 96% explained variance for UV spectroscopy and hyperspectral imaging, respectively. Partial least squares regression (PLS-R) models showed reliable performance with R2c = 0.94 and 0.94 and RMSEC = 1.64 nm and 1.76 nm, respectively. The developed in-line prototype system combined with multivariate data modeling shows high potential for further development of this technique towards real large-scale processes.
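A hedged sketch of the PLS-R calibration step; the data files and the number of latent variables are assumptions, and the metrics are computed on the calibration set as the reported R2c/RMSEC suggest.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical data: UV spectra (rows) and XPS-derived oxide layer
# thicknesses in nm, mirroring the 28 calibration samples of the study.
X = np.load("uv_spectra.npy")
y = np.load("oxide_thickness_nm.npy")

pls = PLSRegression(n_components=5)  # latent-variable count is an assumption
pls.fit(X, y)
y_hat = pls.predict(X).ravel()
print(f"R2c = {r2_score(y, y_hat):.2f}, "
      f"RMSEC = {mean_squared_error(y, y_hat) ** 0.5:.2f} nm")
```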
UV hyperspectral imaging (225 nm–410 nm) was used to identify and quantify the honeydew content of real cotton samples. Honeydew contamination causes losses of millions of dollars annually. This study presents the implementation and application of UV hyperspectral imaging as a non-destructive, high-resolution, and fast imaging modality. For this novel approach, a reference sample set was set up, consisting of sugar and protein solutions adapted to honeydew. In total, 21 samples with different amounts of added sugars/proteins were measured to calculate multivariate models at each pixel of a hyperspectral image to predict and classify the amount of sugar and honeydew. The principal component analysis (PCA) models enabled a general differentiation between different concentrations of sugar and honeydew. A partial least squares regression (PLS-R) model was built based on the cotton samples soaked in different sugar and protein concentrations. The result showed a reliable performance with R2cv = 0.80 and a low RMSECV = 0.01 g for the validation. The PLS-R reference model was able to predict the honeydew content in grams, laterally resolved for each pixel, on real cotton samples with light, strong, and very strong honeydew contaminations. Therefore, inline UV hyperspectral imaging combined with chemometric models can be an effective tool in the future for the quality control of industrial processing of cotton fibers.
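The pixel-wise prediction can be sketched as follows, assuming a PLS-R model calibrated on a reference set is applied to every pixel spectrum of a hypothetical hypercube; file names and the component count are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Reference set (hypothetical): spectra of cotton soaked in known
# sugar/protein concentrations, with the added mass in grams as target.
X_ref = np.load("reference_spectra.npy")
y_ref = np.load("added_sugar_g.npy")
pls = PLSRegression(n_components=4).fit(X_ref, y_ref)

# Apply the calibration pixel by pixel to a hyperspectral cube
# (height x width x bands) to map the honeydew content laterally.
cube = np.load("cotton_cube.npy")
h, w, n_bands = cube.shape
honeydew_map = pls.predict(cube.reshape(-1, n_bands)).reshape(h, w)
print("predicted honeydew map shape:", honeydew_map.shape)
```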
Due to its wide-ranging endocrine functions, adipose tissue influences the whole body’s metabolism. Engineering long-term stable and functional human adipose tissue is still challenging due to the limited availability of suitable biomaterials and adequate cell maturation. We used gellan gum (GG) to create manual and bioprinted adipose tissue models because of its similarities to the native extracellular matrix and its easily tunable properties. Gellan gum itself was neither toxic nor monocyte activating. The resulting hydrogels exhibited suitable viscoelastic properties for soft tissues and were stable for 98 days in vitro. Encapsulated human primary adipose-derived stem cells (ASCs) were adipogenically differentiated for 14 days and matured for an additional 84 days. Live-dead staining showed that encapsulated cells stayed viable until day 98, while intracellular lipid staining showed an increase over time and a differentiation rate of 76% between days 28 and 56. After 4 weeks of culture, adipocytes had a univacuolar morphology, expressed perilipin A, and secreted up to 73% more leptin. After bioprinting establishment, we demonstrated that the cells in printed hydrogels had high cell viability and exhibited an adipogenic phenotype and function. In summary, GG-based adipose tissue models show long-term stability and allow ASCs maturation into functional, univacuolar adipocytes.
Adipose tissue is related to the development and manifestation of multiple diseases, demonstrating the importance of suitable in vitro models for research purposes. In this study, adipose tissue lobules were explanted, cultured, and used as an adipose tissue control to evaluate in vitro generated adipose tissue models. During culture, the lobules exhibited stable weight, lactate dehydrogenase, and glycerol release over 15 days. For building up the in vitro adipose tissue models, we adapted the composition and handling of the biomaterial gelatin methacryloyl (GelMA) to homogeneously mix and bioprint human primary mature adipocytes (MA) and adipose-derived stem cells (ASCs), respectively. Accelerated cooling of the bioink turned out to be essential for a homogeneous distribution of lipid-filled MAs in the hydrogel. Lastly, we compared manual and bioprinted GelMA hydrogels with MA or ASCs and the explanted lobules to evaluate the impact of the printing process and to rate the models against the physiological reference. The viability analyses demonstrated no significant difference between the groups due to additive manufacturing. The staining of intracellular lipids and perilipin A suggests that GelMA is well suited for ASCs and MA. Therefore, we successfully constructed physiological in vitro models by bioprinting MA-containing GelMA bioinks.
Influence of the respirator on volatile organic compounds: an animal study in rats over 24 hours
(2015)
Long-term animal studies are needed to accomplish measurements of volatile organic compounds (VOCs) for medical diagnostics. In order to analyze the time course of VOCs, it is necessary to ventilate these animals. Therefore, a total of 10 male Sprague–Dawley rats were anaesthetized and ventilated with synthetic air via tracheotomy for 24 h. Ion mobility spectrometry coupled to multi-capillary columns (MCC–IMS) was used to analyze the expired air. To identify background contaminations produced by the respirator itself, six comparative measurements were conducted with ventilators only. Overall, 37 peaks were detected in the positive mode. According to the ratio of peak intensity (rat) to peak intensity (ventilator blank), 22 peaks with a ratio >1.5 were defined as expired VOCs, 12 peaks with a ratio between 0.5 and 1.5 as unaffected VOCs, and three peaks with a ratio <0.5 as resorbed VOCs. The peak intensity of 12 expired VOCs changed significantly during the 24 h measurement. These results represent the basis for future intervention studies. Notably, online VOC analysis with MCC–IMS is possible over 24 h in ventilated rats and allows different experimental approaches.
In the IGF project No. 19617 N, nitrogen- and phosphorus-substituted alkoxysilanes were prepared and their ability to inhibit fire growth and spread on fabrics was explored. To this end, a series of flame retardants were synthesized using different strategies, including click chemistry and nucleophilic substitution of commercial organophosphorus compounds with amino-based trialkoxysilanes and/or cyanuric chloride. The new halogen-free and aldehyde-free flame retardants were applied to different fabrics such as cotton (CO), polyethylene terephthalate (PET), polyamide (PA) and their blends using the well-known pad-dry-cure technique and the sol-gel method. The flame-retarding efficiencies were evaluated by the EN ISO 15025 test method (protective clothing: protection against heat and flame; method of test for limited flame spread). Good flame retardancy of the hybrid organic-inorganic materials was achieved with the addition of an amount as small as 3-5 wt.% for cotton fabrics. Moreover, the water solubility and the washing resistance could be controlled through the functional groups attached to the phosphorus atom or through the optimization of the curing temperature. Overall, the research project demonstrated that N-P-silanes are very good permanent flame retardants for textiles.
Flame-retardant finishing of cotton fabrics using DOPO functionalized alkoxy- and amido alkoxysilane
(2023)
In the present study, a DOPO-based alkoxysilane (DOPO-ETES) and an amido alkoxysilane (DOPO-AmdPTES) were synthesized in one step and without by-products as halogen-free flame retardants. The flame retardants were applied on cotton fabric utilizing the sol–gel method and the pad-dry-cure finishing process. The flame retardancy, thermal stability and combustion behaviour of the treated cotton were evaluated by the surface and bottom edge ignition flame test (according to EN ISO 15025), thermogravimetric analysis (TGA) and micro-scale combustion calorimetry (MCC). Unlike the CO/DOPO-ETES sample, cotton treated with DOPO-AmdPTES nanosols exhibits self-extinguishing behaviour with a high char residue, an improvement of the LOI value and a significant reduction of the PHRR, HRC and THR compared to pristine cotton. Cotton finished with DOPO-AmdPTES proves semi-durable after ten laundering cycles, keeping the flame-retardant properties unchanged. According to the results obtained from TGA-FTIR, Py-GC/MS and XPS, the major activity of the flame retardant occurs in the condensed phase via catalytically induced char formation acting as a physical barrier, along with activity in the gas phase derived mainly from the dilution effect. The early degradation of CO/DOPO-AmdPTES compared to CO/DOPO-ETES, triggered by the cleavage of the weak bond between P and C=O, as the DFT study indicated, provides the beneficial effect of this flame retardant on the fire resistance of cellulose.
The chemical recycling of used motor oil via catalytic cracking to convert it into secondary diesel-like fuels is a sustainable and technically attractive solution for managing the environmental concerns associated with traditional disposal. In this context, this study was conducted to screen basic and acidic aluminum silicate catalysts doped with different metals, including Mg, Zn, Cu, and Ni. The catalysts were thoroughly characterized using various techniques such as N2 adsorption–desorption isotherms, FT-IR spectroscopy, and TG analysis. The liquid and gaseous products were identified using GC, and their characteristics were compared with the acceptable ranges of the ASTM characterization methods for diesel fuel. The results showed that metal doping improved the performance of the catalysts, resulting in conversion rates of up to 65%, compared to thermal cracking (15%) and undoped aluminum silicates (≈20%). Among all catalysts, basic aluminum silicates doped with Ni showed the best catalytic performance, with conversions and yields three times higher than the aluminum silicate catalysts. These findings contribute significantly to developing efficient and eco-friendly processes for the chemical recycling of used motor oil. This study highlights the potential of basic aluminum silicates doped with Ni as a promising catalyst for catalytic cracking and encourages further research in this area.
Fast pyrolysis as a valorization mechanism for banana rachis and low-density polyethylene waste
(2021)
Banana rachis and low-density polyethylene (LDPE) were selected as secondary feedstocks for the study of fast pyrolysis in a free-fall reactor. The experiments were performed at 600 °C for banana rachis and 450 °C for LDPE, based on literature and thermogravimetric analysis. The gaseous products of both feedstocks show a similar composition of C1–C2 compounds, while C3 compounds are only found for LDPE. The liquid products from banana rachis and LDPE correspond to compounds with functional groups and to shorter hydrocarbons, respectively. Scanning electron microscopy (SEM) and Fourier transform infrared (FTIR) analyses of the char showed important morphological changes to spheres for LDPE and structural changes due to thermal decomposition in the biomass. The pyrolysis char has high potential as an adsorbent, for encapsulation, or as a catalyst.
Characterization of low density polyethylene greenhouse films during the composting of rose residues
(2022)
This study presents an evaluation of organic composting as a potential alternative route to plastic degradation. It stems from the urgent need to find solutions for plastic residues and focuses on the compost-based degradation of greenhouse film covers at a major rose exporter in Ecuador. The study analyzes the physical, chemical, and biological changes during rose waste composting and also evaluates the stability of new and aged agricultural plastic under these conditions. Interestingly, the results of the compost characterization show a slow degradation rate of organic matter and total organic carbon, along with a significant increase in pH and a rise in bacterial populations. However, the results demonstrate that despite these findings, the composting conditions had no significant influence on plastic degradation; while deterioration of aged plastic samples was reported in some tests, it may be the result of environmental conditions and prolonged exposure to solar radiation. Importantly, these factors could facilitate the adhesion of microorganisms and promote plastic biodegradation. Hence, future studies are encouraged to analyze the ecotoxicity of plastics in the compost, as well as to isolate, identify, and evaluate the possible biodegradative potential of these microorganisms as an alternative for plastic waste management.
The effect of Hofmeister anions on the surface properties of polyelectrolyte multilayers built from hyaluronan and chitosan by layer-by-layer deposition is studied by ellipsometry and atomic force microscopy. The thickness, roughness and morphology of the resulting coatings were found to depend on the type of anion. A relationship between the surface properties and the biological response of the polyelectrolyte multilayers is established by assessing the degree of protein (albumin) adsorption.
The properties of polyelectrolyte multilayers are governed by the process parameters employed during self-assembly. This is the first study in which a design-of-experiments approach was used to validate and control the production of ultrathin polyelectrolyte multilayer coatings by identifying the ranges of the critical process parameters (polyelectrolyte concentration, ionic strength and pH) within which coatings with reproducible properties (thickness, refractive index and hydrophilicity) are created. Mathematical models describing the combined impact of the key process parameters on the coating properties were developed, demonstrating that only ionic strength and pH affect the coating thickness, but not the polyelectrolyte concentration. While the electrolyte concentration had a linear effect, the pH contribution was described by a quadratic polynomial. A significant contribution of this study is the development of a new approach to estimate the thickness of polyelectrolyte multilayer nanofilms by quantitative rhodamine B staining, which might be useful whenever ellipsometry is not feasible due to the shape complexity or small size of the coated substrate. The novel approach proposed here overcomes the limitations of known methods as it offers a low spatial sampling size and the ability to analyse a wide area without restrictions on the chemical composition and shape of the substrate.
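A minimal sketch of such a response-surface fit, with invented design points, assuming the reported model form (linear in ionic strength, quadratic in pH, no concentration term):

```python
import numpy as np

# Hypothetical DoE results: ionic strength I (mol/L), pH, and measured
# coating thickness t (nm); all values invented for illustration.
I  = np.array([0.01, 0.15, 0.01, 0.15, 0.08, 0.08, 0.08])
pH = np.array([3.0,  3.0,  9.0,  9.0,  6.0,  4.0,  8.0])
t  = np.array([18.0, 35.0, 25.0, 44.0, 38.0, 24.0, 40.0])

# Design matrix encoding the assumed model: t = b0 + b_I*I + b_pH*pH + b_pH2*pH^2
X = np.column_stack([np.ones_like(I), I, pH, pH**2])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)
print("b0, b_I, b_pH, b_pH2 =", np.round(coef, 2))
```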
Controlling the surface properties and structure of thin nanosized coatings is of primary importance in diverse engineering and medical applications. Here we report on how the nanostructure, growth mechanism, thickness, roughness, and hydrophilicity of nanocomposites composed of weak natural or strong synthetic polyelectrolytes (PE) can be tailored by graphene oxide (GO) doping. GO reverses the build‐up mechanism, affecting the internal structure and the hydrophilicity in a way that depends on the type of the PE matrix. The extent of GO adsorption and its impact on the surface morphology were found to be independent of the type of the underlying PE matrix. The nanostructure of the hybrid films is not significantly altered when a single surface‐exposed GO layer is deposited, while increasing the number of embedded GO layers leads to pronounced surface heterogeneity. These results are expected to have a valuable impact on construction strategies for coatings with tunable surface properties.
Herein, the optimization of the physicochemical properties and surface biocompatibility of polyelectrolyte multilayers of the natural, biocompatible and biodegradable linear polysaccharides hyaluronan and chitosan by Hofmeister anions was systematically investigated. We demonstrated that there is an interconnection between the bulk and surface properties of HA/Chi multilayers, both varying in accordance with the arrangement of the anions in the Hofmeister series. Kosmotropic anions increased the hydration, thickness, micro- and macro-roughness, and hydrophilicity, and improved the biocompatibility of the films by reducing the film stiffness by two orders of magnitude and rendering them completely anti-thrombogenic.
The proper selection of a demand forecasting method is directly linked to the success of supply chain management (SCM). However, today’s manufacturing companies are confronted with uncertain and dynamic markets, so classical statistical methods are not always appropriate for accurate and reliable forecasting. Artificial intelligence (AI) algorithms are currently used to improve on statistical methods. The existing literature only gives a very general overview of the AI methods used in combination with demand forecasting. This paper provides an analysis of the AI methods published in the last five years (2017-2021). Furthermore, a classification is presented by clustering the AI methods in order to identify trends in the methods applied. Finally, a classification of the different AI methods according to the dimensionality of the data, the volume of data, and the time horizon of the forecast is presented. The goal is to support the selection of the appropriate AI method to optimize demand forecasting.
This paper is concerned with the study, optimization and control of the moisture sorption kinetics of agricultural products at temperatures typically found in processing and storage. A nonlinear autoregressive with exogenous inputs (NARX) neural network was developed to predict the moisture sorption kinetics, and consequently the equilibrium moisture contents, of shiitake mushrooms (Lentinula edodes (Berk.) Pegler) over a wide range of relative humidity and different temperatures. Sorption kinetic data of mushroom caps were separately generated using a continuous, gravimetric dynamic vapour sorption analyser at temperatures of 25-40 °C over a stepwise variation of relative humidity ranging from 0 to 85%. The predictive power of the neural network was based on physical data, namely relative humidity and temperature. The model was fed with a total of 4500 data points, divided into three subsets: 70% of the data was used for training, 15% for testing and 15% for validation, randomly selected from the whole dataset. The NARX neural network was capable of precisely simulating the equilibrium moisture contents of mushrooms derived from the dynamic vapour sorption kinetic data throughout the entire range of relative humidity.
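A NARX-style model can be approximated with lagged inputs and a small feed-forward network, as in this hedged sketch; the lag depth, network size, and file name are assumptions, not the study's exact architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical sorption record: moisture content m(t) with the exogenous
# inputs relative humidity rh(t) and temperature T(t), sampled regularly.
m, rh, T = np.loadtxt("sorption_kinetics.csv", delimiter=",", unpack=True)

lag = 3  # number of past steps fed to the network (an assumption)
rows = [np.concatenate([m[t - lag:t], rh[t - lag:t + 1], T[t - lag:t + 1]])
        for t in range(lag, len(m))]
X, y = np.array(rows), m[lag:]

# Random split mirroring the 70/15/15 scheme; the middle 15% would
# serve as the validation set and is left unused here for brevity.
n = len(X)
idx = np.random.permutation(n)
tr, te = idx[: int(0.7 * n)], idx[int(0.85 * n):]

narx = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
narx.fit(X[tr], y[tr])
print("test R^2:", narx.score(X[te], y[te]))
```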
Being exposed to compulsory religious education in school can have long-run consequences for students’ lives. At different points in time since the 1970s, German states terminated compulsory religious education in public schools and replaced it by a choice between ethics classes and religious education. This article shows that the reform not only led to reduced religiosity in students’ later life, but also eroded traditional attitudes towards gender roles and increased labor-market participation and earnings.
Nowadays, the importance of early active patient mobilization in the recovery and rehabilitation phase has increased significantly. One way to involve patients in the treatment is a gamification-like approach, which is an established method of motivation in various life processes. This article presents a system prototype for patients who require physical activity as part of active early mobilization after medical interventions or during illness. Bedridden patients and people with a sedentary lifestyle (predominantly lying in bed) are also potential users. The main idea of the concept was a non-contact implementation, so that patients experience no extra effort when using the system. The system consists of three related parts: hardware, software, and a game application. To test the relevance and coherence of the system, it was used by 35 people. The participants were asked to play a video game requiring them to make body movements while lying down, and then to take part in a small survey to evaluate the system's usability. As a result, we offer a prototype consisting of hardware and software parts that can increase and diversify physical activity during active early mobilization of patients and prevent possible health problems due to predominantly low activity. The proposed design could be implemented in hospitals, rehabilitation centers, and even at home.
Monitoring heart rate and breathing is essential for understanding the physiological processes in sleep analysis. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can help make sleep monitoring more portable in someone's home. This study conducted a series of experiments to investigate the use of pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. Subsequent sets of experiments involved the addition of small rubber domes (transparent and black) that were glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 and 3.24 for respiration and heart rate, respectively. However, the addition of the small rubber domes did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 bpm for respiration rate and heart rate, respectively. The findings of this study suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep. These sensors provide a more comfortable and non-invasive method of sleep monitoring. However, since the small rubber domes did not significantly enhance accuracy, they may not be a worthwhile addition to the pressure sensor system.
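The paper does not spell out its signal-processing chain; a common approach, sketched below under an assumed sampling rate and frequency bands, is band-pass filtering of the raw pressure signal followed by peak counting.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0  # sampling rate in Hz (an assumption)
signal = np.loadtxt("pressure_sensor.csv")  # raw under-bed pressure signal

def band(sig, lo, hi):
    # Second-order Butterworth band-pass with zero-phase filtering.
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

# Respiration dominates roughly 0.1-0.5 Hz; the ballistocardiac (heart)
# component sits roughly at 0.8-2.5 Hz. Both bands are assumptions.
resp = band(signal, 0.1, 0.5)
card = band(signal, 0.8, 2.5)

resp_peaks, _ = find_peaks(resp, distance=fs * 2)    # peaks >= 2 s apart
card_peaks, _ = find_peaks(card, distance=fs * 0.4)  # peaks >= 0.4 s apart
minutes = len(signal) / fs / 60
print(f"respiration: {len(resp_peaks) / minutes:.1f} /min, "
      f"heart rate: {len(card_peaks) / minutes:.1f} bpm")
```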
We report the temperature dependence of metal-enhanced fluorescence (MEF) of individual photosystem I (PSI) complexes from Thermosynechococcus elongatus (T. elongatus) coupled to gold nanoparticles (AuNPs). A strong temperature dependence of the shape and intensity of the emission spectra is observed when PSI is coupled to AuNPs. For each temperature, the enhancement factor (EF) is calculated by comparing the intensity of individual AuNP-coupled PSI to the mean intensity of ‘uncoupled’ PSI. At cryogenic temperature (1.6 K) the average EF was 4.3-fold. Upon increasing the temperature to 250 K, the EF increases to 84-fold, and single complexes show even higher EFs of up to 441.0-fold. With increasing temperature, the different spectral pools of PSI from T. elongatus become distinguishable. These pools are affected differently by the plasmonic interactions and show different enhancements. The remarkable increase of the EFs is explained by a rate model that includes the temperature dependence of the fluorescence yield of PSI and the spectral overlap between the absorption and emission spectra of AuNPs and PSI, respectively.
Cyber-Physical Production Systems increasingly use semantic information to meet the grown flexibility requirements. Ontologies are often used to represent and use this semantic information. Existing systems focus on mapping knowledge and less on the exchange with other relevant IT systems (e.g., ERP systems) in which crucial semantic information, often implicit, is contained. This article presents an approach that enables the exchange of semantic information via adapters. The approach is demonstrated by a use case utilizing an MES system and an ERP system.
Thermoplastic polycarbonate urethane elastomers (TPCU) are potential implant materials for treating degenerative joint diseases thanks to their adjustable rubber-like properties, their toughness, and their durability. We developed a water-containing high-molecular-weight sulfated hyaluronic acid-coating to improve the interaction of TPCU with the synovial fluid. It is suggested that trapped synovial fluid can act as a lubricant that reduces the friction forces and thus provides an enhanced abrasion resistance of TPCU implants. Aims of this work were (i) the development of a coating method for novel soft TPCU with high-molecular sulfated hyaluronic acid to increase the biocompatibility and (ii) the in vitro validation of the functionalized TPCUs in cell culture experiments.
Knee osteoarthritis is a common complication and can lead to total loss of joint function in patients. Treatment by either partial or total knee replacement with appropriate UHMWPE-based implants is highly invasive, may cause complications and may show unsatisfying results. Alternatively, treatment may be done by insertion of an elastic interpositional knee spacer with optimized material characteristics.
We report the development of high-performance polyurethane-based polymers modified with bioactive molecules for the fabrication of such knee spacers. In order to tailor the mechanical and tribological properties and to improve resistance to enzymatic degradation, we propose a core-shell model for the spacer with specifically adapted properties.
Polyurethane-based block copolymers (TPCUs) with systematically varied soft and hard segments have been suggested as materials for chondral implants in joint regeneration. Such applications may require the adhesion of chondrocytes to the implant surface, facilitating cell growth while maintaining their phenotype. Thus, the aims of this work were (1) to modify the surface of soft, biostable polyurethane-based model implants (TPCU and TSiPCU) with high-molecular-weight hyaluronic acid (HA) using an optimized multistep immobilization strategy, and (2) to evaluate the bioactivity of the modified TPCUs in vitro. Our results show no cytotoxic potential of the TPCUs. Bioactive HA molecules (Mw = 700 kDa) were immobilized onto the polyurethane surface via polyethylenimine (PEI) spacers, and the modifications were confirmed by several characterization methods. Tests with porcine chondrocytes indicated the potential of the TPCU-HA surfaces for inducing enhanced cell proliferation.
Background
Alzheimer’s disease (AD) is diagnosed based upon medical history, neuropsychiatric examination, cerebrospinal fluid analysis, extensive laboratory analyses and cerebral imaging. Diagnosis is time-consuming and labour-intensive. Parkinson’s disease (PD) is mainly diagnosed on clinical grounds.
Objective
The primary aim of this study was to differentiate patients suffering from AD, PD and healthy controls by investigating exhaled air with the electronic nose technique. After demonstrating a difference between the three groups, the secondary aim was the identification of specific substances responsible for the difference(s) using ion mobility spectroscopy. Thirdly, we analysed whether amyloid beta (Aβ) in exhaled breath was causative for the observed differences between patients suffering from AD and healthy controls.
Methods
We employed novel pulmonary diagnostic tools (electronic nose device/ion-mobility spectrometry) for the identification of patients with neurodegenerative diseases. Specifically, we analysed breath pattern differences in exhaled air of patients with AD, those with PD and healthy controls using the electronic nose device (eNose). Using ion mobility spectrometry (IMS), we identified the compounds responsible for the observed differences in breath patterns. We applied ELISA technique to measure Aβ in exhaled breath condensates.
Results
The eNose was able to differentiate correctly between AD, PD and healthy controls (HC). Using IMS, we identified markers that could be used to differentiate healthy controls from patients with AD and PD with an accuracy of 94%. In addition, patients suffering from PD were identified with a sensitivity and specificity of 100%. Altogether, 3 AD patients out of 53 participants were misclassified. Although we found Aβ in exhaled breath condensate from both AD patients and healthy controls, no significant differences between the groups were detected.
Conclusion
These data may open a new field in the diagnosis of neurodegenerative disease such as Alzheimer’s disease and Parkinson’s disease. Further research is required to evaluate the significance of these pulmonary findings with respect to the pathophysiology of neurodegenerative disorders.
With the progress of technology in modern hospitals, intelligent perioperative situation recognition will gain relevance due to its potential to substantially improve surgical workflows by providing situation knowledge in real time. Such knowledge can be extracted from image data by machine learning techniques, but this poses a privacy threat to the staff’s and patients’ personal data. De-identification is a possible solution for removing visual sensitive information. In this work, we developed a YOLO v3 based prototype to detect sensitive areas in the image in real time. These are then de-identified using common image obfuscation techniques. Our approach shows that it is in principle suitable for de-identifying sensitive data in OR images and contributes to a privacy-respectful way of processing in the context of situation recognition in the OR.
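The obfuscation step might be sketched as follows; the detector itself is omitted, and the bounding boxes, file names, and blur kernel size are illustrative assumptions rather than the prototype's actual configuration.

```python
import cv2

# Hypothetical de-identification step: a detector such as YOLO v3 (not
# shown) yields bounding boxes of sensitive regions; each box is then
# obfuscated with a Gaussian blur, one common obfuscation technique.
frame = cv2.imread("or_frame.png")
detections = [(120, 80, 90, 140), (400, 60, 85, 130)]  # (x, y, w, h), illustrative

for (x, y, w, h) in detections:
    roi = frame[y:y + h, x:x + w]
    # Kernel must be odd-sized; 51x51 is an assumed, fairly strong blur.
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

cv2.imwrite("or_frame_deidentified.png", frame)
```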
Digitalization is changing manufacturing dramatically. With regard to employees’ demands, global trends and the technological vision of future factories, automotive manufacturing faces a huge number of diverse challenges. Currently, research focuses on the technological aspects of future factories in terms of digitalization; new ways of working and new organizational models for future factories have not yet been described. There are assumptions on how to develop the organization of work in a future factory, but up to now the literature shows deficits in scientifically substantiated answers in this research area. Consequently, the objective of this paper is to present an approach to work organization design for automotive Industry 4.0 manufacturing. Future requirements were analyzed and distilled into criteria that determine future agile organization design. These criteria were then transformed into functional mechanisms, which define the approach for shopfloor organization design.
The powder coating of veneered particle boards by the sequence of electrostatic powder application followed by powder curing via hot pressing is studied in order to create high-gloss surfaces. To obtain an appealing aspect, veneer sheets were glued by heat and pressure on top of particle boards and the resulting surfaces were used as carrier substrates for powder coat finishing. Prior to the powder coating, the veneered particle board surfaces were pre-treated by sanding to obtain good uniformity, and the boards were stored in a climate chamber at controlled temperature and humidity conditions to adjust an appropriate electrical surface resistance. Characterization of the surface texture was done by 3D microscopy. The electrical surface resistance was measured for the six veneers before and after their application on the particle board surface. A transparent powder top-coat was applied electrostatically onto the veneered particle board surface. Curing of the powder was done using a heated press at 130 °C for 8 min, and a smooth, glossy coating was obtained on the veneered surfaces. By applying different amounts of powder, the coating thickness could be varied, and the optimum amount of powder was determined for each veneer type.
Decorative laminates based on melamine formaldehyde (MF) resin impregnated papers are used to a great extent for the surface finishing of engineered wood in furniture, kitchen and working surfaces, flooring and exterior cladding. In all these applications, an optically flawless appearance is a major issue. The work described here is focused on enhancing the cleanability and antifingerprint properties of smooth, matt surface-finished melamine-coated particleboards for furniture fronts, without at the same time changing or deteriorating other important surface parameters such as hardness, roughness or gloss. In order to adjust the surface polarity of a low-pressure melamine film, novel interface-active macromolecular compounds were prepared and tested for their suitability as antifingerprint additives. Two hydroxy-functional surfactants (polydimethylsiloxane, PDMS-OH, and perfluoroether, PF-OH) were oxidized under mild conditions to the corresponding aldehydes (PDMS-CHO and PF-CHO) using a pyridinium chlorochromate catalyst. With the most promising oxidized polymeric additive, PDMS-CHO, the contact angles against water, n-hexadecane, and squalene increased from 79.8°, 26.3° and 31.4° for the pure MF surface to 108.5°, 54.8°, and 59.3°, respectively, for the modified MF surfaces. While for the laminated MF surface based on the oxidized fluoroether the gloss values were much higher than required, for the surfaces based on oxidized polydimethylsiloxane the technological values as well as the lower gloss values were in agreement with the requirements and showed much improved surface cleanability, as was also confirmed by colorimetric measurements.
Unprecedented formation of sterically stabilized phospholipid liposomes of cuboidal morphology
(2021)
Sterically stabilized phospholipid liposomes of unprecedented cuboid morphology are formed upon the introduction into the bilayer membrane of novel polymers based on polyglycidol bearing a lipid-mimetic residue. Strong hydrogen bonding in the polyglycidol sublayers creates attractive forces which, facilitated by fluidization of the membrane, bring about the flattening of the bilayers and the formation of cuboid vesicles.
In our initial DaMoN paper, we set out to revisit the results of "Staring into the Abyss [...] of Concurrency Control with [1000] Cores" (Yu et al., Proc. VLDB Endow. 8: 209-220, 2014). Contrary to their assumption, today we do not see single-socket CPUs with 1000 cores; instead, multi-socket hardware is prevalent and in fact offers over 1000 cores. Hence, we evaluated concurrency control (CC) schemes on a real (Intel-based) multi-socket platform. To our surprise, we made interesting findings that oppose the results of the original analysis, which we discussed in our initial DaMoN paper. In this paper, we broaden our analysis further, detailing the effect of hardware and workload characteristics via additional real hardware platforms (IBM Power8 and 9) and the full TPC-C transaction mix. Among others, we identified clear connections between the performance of the CC schemes and hardware characteristics, especially concerning NUMA and CPU caches. Overall, we conclude that no CC scheme can efficiently make use of large multi-socket hardware in a robust manner, and we suggest several directions on how CC schemes and OLTP DBMSs overall should evolve in future.
The present publication reports the purification of two natural bone blocks, that is, an allogeneic bone block (maxgraft®, botiss biomaterials GmbH, Zossen, Germany) and a xenogeneic block (SMARTBONE®, IBI S.A., Mezzovico Vira, Switzerland), in addition to previously published results based on histology. Furthermore, specialized scanning electron microscopy (SEM) and in vitro analyses (XTT, BrdU, LDH) for testing cytocompatibility based on ISO 10993-5/-12 were conducted. The microscopic analyses showed that both bone blocks possess a trabecular structure with a lamellar subarrangement. In the case of the xenogeneic bone block, only minor remnants of collagenous structures were found, while high amounts of collagen were found associated with the allogeneic bone matrix. Furthermore, only island-like remnants of the polymer coating appeared to be detectable in the case of the xenogeneic bone substitute. Finally, no remaining cells or cellular remnants were found in either bone block. The in vitro analyses showed that both bone blocks are biocompatible. Altogether, the purification level of both bone blocks seems favorable for bone tissue regeneration without the risk of inflammatory responses or graft rejection. Moreover, the analysis of the maxgraft® bone block showed that the underlying purification process preserves not only the calcified bone matrix but also high amounts of the intertrabecular collagen matrix.
Introduction: Bioresorbable collagenous barrier membranes are used to prevent premature soft tissue ingrowth and to allow bone regeneration. For volume-stable indications, only non-absorbable synthetic materials are available. This study investigates a new bioresorbable hydrofluoric acid (HF)-treated magnesium (Mg) mesh in a native collagen membrane for volume-stable situations. Materials and Methods: HF-treated and untreated Mg were compared in direct and indirect cytocompatibility assays. In vivo, 18 New Zealand White rabbits each received four 8 mm calvarial defects and were divided into four groups: (a) HF-treated Mg mesh/collagen membrane, (b) untreated Mg mesh/collagen membrane, (c) collagen membrane and (d) sham operation. After 6, 12 and 18 weeks, Mg degradation and bone regeneration were measured using radiological and histological methods. Results: In vitro, HF-treated Mg showed higher cytocompatibility. Histopathologically, HF-Mg prevented gas cavities and was degraded by mononuclear cells via phagocytosis up to 12 weeks. Untreated Mg showed partially significantly more gas cavities and a fibrous tissue reaction. Bone regeneration was not significantly different between the groups. Discussion and Conclusions: HF-Mg meshes embedded in native collagen membranes represent a volume-stable and biocompatible alternative to the non-absorbable synthetic materials. HF-Mg shows less corrosion and is degraded by phagocytosis. However, the application of the membranes did not result in higher bone regeneration.
Salivary gland tumors (SGTs) are a relevant, highly diverse subgroup of head and neck tumors whose entity determination can be difficult. Confocal Raman imaging in combination with multivariate data analysis may support their correct classification. To analyze the translational potential of Raman imaging in SGT determination, a multi-stage evaluation process is necessary. By measuring a sample set of Warthin tumor, pleomorphic adenoma and non-tumor salivary gland tissue, Raman data were obtained and a thorough Raman band analysis was performed. This evaluation revealed highly overlapping Raman patterns with only minor spectral differences. Consequently, a principal component analysis (PCA) was calculated and further combined with a discriminant analysis (DA) to enable the best possible distinction. The PCA-DA model was characterized by accuracy, sensitivity, selectivity and precision values above 90% and validated by predicting model-unknown Raman spectra, of which 93% were classified correctly. Thus, we consider our PCA-DA model suitable for discriminating and predicting parotid tumor and non-tumor salivary gland tissue. For an evaluation of the translational potential, further validation steps are necessary.
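As a rough illustration of the PCA-DA approach described above, the following sketch chains dimensionality reduction and discriminant analysis with scikit-learn. The spectra are random placeholders, and the component count, class labels, and split ratio are assumptions for illustration, not the study's actual settings.

```python
# Minimal PCA-DA sketch for Raman spectra, assuming X is an
# (n_spectra, n_wavenumbers) array and y holds tissue labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1024))          # placeholder spectra
y = rng.choice(["warthin", "pleomorphic", "non_tumor"], size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA compresses the strongly overlapping spectral patterns; the DA step
# then separates the classes in the reduced space.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_test, y_test))
```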
Glioblastoma WHO grade IV belongs to a group of brain tumors that are still incurable. A promising treatment approach applies photodynamic therapy (PDT) with hypericin as a photosensitizer. To generate a comprehensive understanding of the photosensitizer-tumor interactions, the first part of our study focuses on investigating the distribution and penetration behavior of hypericin in glioma cell spheroids by fluorescence microscopy. In the second part, fluorescence lifetime imaging microscopy (FLIM) was used to correlate fluorescence lifetime (FLT) changes of hypericin with environmental effects inside the spheroids. In this context, 3D tumor spheroids are an excellent model system since they reproduce 3D cell–cell interactions and an extracellular matrix similar to tumors in vivo. Our analytical approach treats hypericin as a probe molecule for FLIM and as a photosensitizer for PDT at the same time, making it possible to draw direct conclusions about the state and location of the drug in a biological system. Knowing both the state and the location of hypericin enables a fundamental understanding of the impact of hypericin PDT in brain tumors. Following different incubation conditions, the hypericin distribution in peripheral and central cryosections of the spheroids was analyzed. Both fluorescence microscopy and FLIM revealed a hypericin gradient towards the spheroid core for short incubation periods or small concentrations, whereas a homogeneous hypericin distribution was observed for long incubation times and high concentrations. The observed FLT change is especially crucial for the PDT efficiency, since the triplet yield, and hence the O2 activation, is directly proportional to the FLT. Based on the FLT increase inside spheroids, an incubation time of at least 30 min is required to achieve the most suitable conditions for an effective PDT.
The early detection of head and neck cancer remains a prolonged, challenging task. It requires a precise and accurate identification of tissue alterations as well as a distinct discrimination of cancerous from healthy tissue areas. A novel approach for this purpose uses microspectroscopic techniques with a special focus on hyperspectral imaging (HSI) methods. Our proof-of-principle study presents the implementation and application of darkfield elastic light scattering spectroscopy (DF ELSS) as a non-destructive, high-resolution, and fast imaging modality to distinguish healthy from altered lingual tissue regions in a mouse model. The main aspect of the study is the comparison of two HSI detection principles, point-by-point and line-scanning imaging, and whether one might be more appropriate for differentiating several tissue types. Statistical models are formed by deploying a principal component analysis (PCA) with Bayesian discriminant analysis (DA) on the elastic light scattering (ELS) spectra. Overall accuracy, sensitivity, and precision values of 98% are achieved for both models, whereas the overall specificity reaches 99%. An additional classification of model-unknown ELS spectra is performed. The predictions are verified with histopathological evaluations of identical HE-stained tissue areas to prove the model's capability of tissue distinction. In the context of our proof-of-principle study, we assess the pushbroom PCA-DA model to be more suitable for tissue type differentiation and thus tissue classification. In addition to the HE examination in head and neck cancer diagnosis, the use of HSI-based statistical models might be conceivable in daily clinical routine.
The number of publications in the field of breath analysis using different types of ion mobility spectrometers (IMS) has increased over the last few years. In this paper, publications between 2010 and 2013 are reviewed with respect to different types of IMS, such as differential mobility spectrometers, high-field asymmetric waveform ion mobility spectrometers and multi-capillary columns coupled to conventional IMS. The analytes detected by IMS and reported as significant for a specific medical question were examined further with respect to medical and analytical questions. In total, 42 different analytes were found to be detected using IMS at a high significance level and were compared to findings obtained with other analytical methods for the individual analyte.
Background: Conventional methods for lung cancer detection, including computed tomography (CT) and bronchoscopy, are expensive and invasive. Thus, there is still a need for an optimal lung cancer detection technique. Methods: The exhaled breath of 50 patients with lung cancer histologically proven by bronchoscopic biopsy samples (32 adenocarcinomas, 10 squamous cell carcinomas, 8 small cell carcinomas) was analyzed using ion mobility spectrometry (IMS) and compared with that of 39 healthy volunteers. As a secondary assessment, we compared adenocarcinoma patients with and without epidermal growth factor receptor (EGFR) mutation. Results: A decision tree algorithm could separate patients with lung cancer, including adenocarcinoma, squamous cell carcinoma and small cell carcinoma. One hundred fifteen separated volatile organic compound (VOC) peaks were analyzed. Peak-2, identified as n-dodecane using the IMS database, separated the groups with a sensitivity of 70.0% and a specificity of 89.7%. Incorporating a decision tree algorithm starting with n-dodecane, a sensitivity of 76% and a specificity of 100% were achieved. Comparing VOC peaks between adenocarcinoma patients and healthy subjects, n-dodecane separated the groups with a sensitivity of 81.3% and a specificity of 89.7%. The 14 patients positive for EGFR mutation displayed significantly higher n-dodecane levels than the 14 patients negative for EGFR (p<0.01), with a sensitivity of 85.7% and a specificity of 78.6%. Conclusion: In this prospective study, VOC peak patterns analyzed with a decision tree algorithm were useful in the detection of lung cancer. Moreover, n-dodecane analysis of adenocarcinoma patients might be useful to discriminate EGFR mutation status.
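To make the decision-tree idea concrete, here is a minimal, hedged sketch of training an interpretable tree on a VOC peak table. The peak matrix is synthetic and its dimensions merely mirror the cohort sizes mentioned above; the study's actual features and thresholds are not reproduced.

```python
# Hedged sketch of a decision-tree separation on IMS peak intensities,
# assuming a table `peaks` of shape (n_subjects, n_voc_peaks).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
peaks = rng.normal(size=(89, 115))        # 50 patients + 39 controls, 115 peaks
labels = np.array([1] * 50 + [0] * 39)    # 1 = lung cancer, 0 = healthy

# A shallow tree keeps the decision rules interpretable; in the study, the
# tree's root split is the n-dodecane peak.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("cross-validated accuracy:", cross_val_score(tree, peaks, labels, cv=5).mean())
```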
Development work within an experimental environment, in which certain properties are investigated and optimized, requires many test runs and is therefore often associated with long execution times, costs and risks. This can affect product, material and technology development in industry and research. New digital driver technologies offer the possibility to automate complex manual work steps cost-effectively, to increase the relevance of the results and to accelerate the processes many times over. In this context, this article presents a low-cost, modular and open-source machine vision system for test execution and evaluates it on the basis of a real industrial application. For this purpose, a methodology is presented for the automated execution of the load intervals, for process documentation and for the evaluation of the generated data by means of machine learning to classify wear levels. The software and the mechanical structure are designed to be adaptable to different conditions and components and to a variety of tasks in industry and research. The mechanical structure is required for tracking the test object and represents a motion platform positioned independently by machine vision operators or machine learning. The state of the test object is evaluated by transfer learning after the initial documentation run. The manual procedure for classifying the visually recorded data on the state of the test object, used to create the training material, is described. This leads to increased resource efficiency on the material as well as on the personnel side, since on the one hand the significance of the tests performed is increased by the continuous documentation, and on the other hand the responsible experts can be assigned time-efficiently. The presence and know-how of the experts are therefore only required for defined and decisive events during the execution of the experiments. Furthermore, the generated data are suitable for later use as an additional data source for predictive maintenance of the developed object.
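The wear-level classification via transfer learning could look roughly like the following PyTorch sketch: a pretrained backbone is frozen and only a new classification head is trained. The dataset path "wear_data", the ResNet-18 backbone, and all hyperparameters are placeholder assumptions, not the system's actual configuration.

```python
# Transfer-learning sketch for wear-level classification, assuming images
# are organised as wear_data/<class>/<image>.png.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("wear_data", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, targets in loader:        # one illustrative training epoch
    opt.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    opt.step()
```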
The use of additive manufacturing technologies for industrial production is constantly growing. This technology differs from established production procedures. The areas of scheduling, detailed planning and sequence planning are particularly important for additive production due to the long print times and the flexible use of the build space. Therefore, production-relevant variables are considered and used for the production planning and control (PPC) of additive manufacturing machines. For this purpose, an optimization model is presented which provides a time-oriented build space utilization. In the implementation, a nesting algorithm is used to check the combinability of different models for each individual print job.
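In its simplest form, the combinability check could resemble the following first-fit heuristic over part footprints. This is an illustrative stand-in under strong simplifications (2D areas only, no geometry); the paper's nesting algorithm and time-oriented model are more sophisticated, and all numbers are made up.

```python
# Illustrative first-fit sketch: assign parts (by footprint area) to
# print jobs without exceeding the build-space area.
def plan_jobs(part_areas, build_area):
    """Greedily pack part areas into print jobs, largest parts first."""
    jobs = []                                # each job = list of part areas
    for area in sorted(part_areas, reverse=True):
        for job in jobs:
            if sum(job) + area <= build_area:
                job.append(area)
                break
        else:
            jobs.append([area])              # open a new print job
    return jobs

print(plan_jobs([120, 80, 60, 45, 30, 30], build_area=200))
# -> [[120, 80], [60, 45, 30, 30]]
```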
The promise of immutable documents to make it easier and less expensive for consumers and producers to collaborate in a verifiable way would represent enormous progress, especially as companies strive to establish service contracts based on the flow of many small transactions using machine-to-machine communication. Blockchain technology logs these data, verifies their authenticity and makes them available for service offers. This work presents an architecture for setting up order processing between consumers and producers using blockchain. In this way, the technical feasibility is shown and the special characteristics of blockchain production networks are discussed.
Pre-clinical evaluation of advanced nerve guide conduits using a novel 3D in vitro testing model
(2018)
Autografts are the current gold standard for large peripheral nerve defects in clinics, despite frequently occurring side effects like donor site morbidity. Hollow nerve guidance conduits (NGC) are proposed alternatives to autografts, but have failed to bridge gaps exceeding 3 cm in humans. Internal NGC guidance cues like microfibres are believed to enhance hollow NGCs by giving additional physical support for the directed regeneration of Schwann cells and axons. In this study, we report a new 3D in vitro model that allows the evaluation of different intraluminal fibre scaffolds inside a complete NGC. The performance of electrospun polycaprolactone (PCL) microfibres inside 5 mm long polyethylene glycol (PEG) conduits was investigated in neuronal cell and dorsal root ganglion (DRG) cultures in vitro. Z-stack confocal microscopy revealed the aligned orientation of neuronal cells along the fibres throughout the whole NGC length and depth. The number of living cells in the centre of the scaffold was not significantly different from the tissue culture plastic (TCP) control. For ex vivo analysis, DRGs were placed on top of fibre-filled NGCs to simulate the proximal nerve stump. Within 21 days of culture, Schwann cells and axons infiltrated the conduits along the microfibres by 2.2 ± 0.37 mm and 2.1 ± 0.33 mm, respectively. We conclude that this in vitro model can help define internal NGC scaffolds in the future by comparing different fibre materials, composites and dimensions in one setup prior to animal testing.
In thermopervaporation, the same economically favorable driving force as in membrane distillation, i.e., a temperature difference between feed and permeate, is used for the transport, but with non-porous thin-film composite membranes. Membrane pores cannot be wetted and long-term operational stability can be achieved with the appropriate coating layer, although normally at the expense of flux compared to membrane distillation with porous hydrophobic membranes.
Porous asymmetric PVDF membranes were made to achieve low permeation resistance and pores that could be overcoated with polyelectrolyte polymers. This coating prevents pore wetting and strongly reduces the adsorption of organic substances.
These membranes showed a high permeation rate for water due to a structure of phase-separated hydrophilic and hydrophobic three-dimensional domains. The permeation rates of these composite membranes for water are between 6 and 12 l/(h m²) for a 2% saline solution feed at a feed temperature of 60 °C and a permeate temperature of 40 °C, depending on the operational parameters. This is only a slight reduction of 10–15% in permeation rate compared to membrane distillation with porous hydrophobic membranes.
In a whey dewatering experiment, this membrane showed constant performance over 4 days in intermittent operation mode and stability during cleaning with a strong alkaline solution.
A vapor permeation process for the separation of aromatic compounds from aliphatic compounds
(2014)
A number of rubbery and glassy membranes have been prepared and evaluated in vapor permeation experiments for the separation of aromatic/aliphatic mixtures, using 5/95 (wt:wt) toluene/methylcyclohexane (MCH) as a model solution. Candidate membranes that met the required toluene/MCH selectivity of ≥ 10 were identified. The stability of the candidate membranes was tested by cycling the experiment between higher toluene concentrations and the original 5 wt% level. The best membrane produced has a toluene permeance of 280 gpu and a toluene/MCH selectivity of 13 when tested with a vapor feed of the model mixture at its boiling point and at atmospheric pressure. When a series of related membrane materials is compared, there is a sharp trade-off between membrane permeance and membrane selectivity. A process design study based on the experimental results was conducted. The best preliminary membrane design uses 45% of the energy of a conventional distillation process.
Maintenance is an increasingly complex and knowledge-intensive field. To address these challenges, assistance systems based on augmented, mixed, or virtual reality can be applied. The objective of this paper is therefore to present a framework that can be used to identify, select, and implement an assistance system based on reality technology in the maintenance environment. The development of the framework is based on a systematic literature review and subject-matter expert interviews. The framework arrives at the best technological and economic solution in several steps. It is validated through a case study.
Chronic obstructive pulmonary disease (COPD) is a chronic airway inflammatory disease characterized by incompletely reversible airway obstruction. This clinically heterogeneous group of patients is characterized by different phenotypes. Spirometry and clinical parameters, such as severity of dyspnea and exacerbation frequency, are used to diagnose and assess the severity of COPD. The purpose of this study was to investigate whether volatile organic compounds (VOCs) could be detected in the exhaled breath of patients with COPD and whether these VOCs could distinguish COPD patients from healthy subjects. Moreover, we aimed to investigate whether VOCs could be used as biomarkers for classifying patients into different subgroups of the disease. Ion mobility spectrometry was used to detect VOCs in the exhaled breath of COPD patients. One hundred and thirty-seven peaks were found to differ statistically significantly between the COPD group and the combined group of healthy smokers and nonsmokers. Six of these VOCs were found to correctly discriminate COPD patients from healthy controls with an accuracy of 70%. Only 15 peaks were found to differ statistically between healthy smokers and healthy nonsmokers. Furthermore, by determining cutoff levels for each VOC peak, it was possible to classify the COPD patients into breathprint subgroups. Forced expiratory volume in 1 second, body mass index, and C-reactive protein seem to play a role in the discrepancies observed between the different breathprint subgroups.
Purpose
Injury or inflammation of the middle ear often results in persistent tympanic membrane (TM) perforations, leading to conductive hearing loss (HL). However, in some cases the magnitude of HL exceeds that attributable to the TM perforation alone. The aim of this study is to better understand the effects of the location and size of TM perforations on the sound transmission properties of the middle ear.
Methods
The middle ear transfer functions (METF) of six human temporal bones (TB) were compared before and after perforating the TM at different locations (anterior or posterior lower quadrant) and to different degrees (1 mm, ¼ of the TM, ½ of the TM, and full ablation). The sound-induced velocity of the stapes footplate was measured using single-point laser Doppler vibrometry (LDV). The METF were correlated with a finite element (FE) model of the middle ear, in which similar alterations were simulated.
Results
The measured and calculated METF showed frequency- and perforation-size-dependent losses at all perforation locations. Starting at low frequencies, the loss expanded to higher frequencies with increasing perforation size. In direct comparison, posterior TM perforations affected the transmission properties to a larger degree than anterior perforations. The asymmetry of the TM causes the malleus-incus complex to rotate, resulting in larger deflections in the posterior TM quadrants than in the anterior ones. Simulations in the FE model with a sealed cavity show that small perforations lead to a decrease in TM rigidity and thus to an increase in the oscillation amplitude of the TM, mainly above 1 kHz.
Conclusion
Size and location of TM perforations have a characteristic influence on the METF. The correlation of the experimental LDV measurements with an FE model contributes to a better understanding of the pathologic mechanisms of middle-ear diseases. If small perforations with significant HL are observed in daily clinical practice, additional middle ear pathologies should be considered. Further investigations on the loss of TM pretension due to perforations may be informative.
This article studies the effects of reverse factoring in a supply chain when the buyer company passes on its lower short-term borrowing rates to the supplier in return for extended payment terms. We explore the role of interest rate changes, rating changes, and the position in the business cycle on the cost and benefit trade-off from a supplier perspective. We use a combined empirical approach consisting of an event study in Step 1 and a simulation model in Step 2. The event study identifies the quantitative magnitude of central bank decisions and rating changes on the interest rate differential. The simulation computes, with a rolling-window methodology, the daily costs and benefits of reverse factoring from 2010 to 2018 under the assumption of the efficient market hypothesis. Our major finding is that changes in crucial financial variables such as interest rates, ratings, or news alerts can turn former win-win into win-lose situations for the supplier, contingent on the business cycle. Overall, our results exhibit sophisticated trade-offs under reverse factoring that consequently require careful evaluation in managerial decisions.
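A stylized version of the supplier's trade-off can be written down in a few lines. This is not the paper's event-study or simulation model; the formula, rates, terms, and invoice amount below are illustrative assumptions that merely show how a narrowing rate differential flips the sign of the net benefit.

```python
# Back-of-the-envelope sketch: cheaper financing at the buyer's rate
# versus a longer wait for payment (all inputs hypothetical).
def supplier_net_benefit(invoice, r_supplier, r_buyer, old_term, new_term):
    """Net per-invoice benefit of reverse factoring for the supplier."""
    # Benefit: financing the receivable at the buyer's lower rate.
    rate_saving = invoice * (r_supplier - r_buyer) * new_term / 360
    # Cost: carrying the receivable for the extended payment term.
    extension_cost = invoice * r_supplier * (new_term - old_term) / 360
    return rate_saving - extension_cost

# Win-win while the rate differential is wide ...
print(supplier_net_benefit(100_000, r_supplier=0.08, r_buyer=0.02, old_term=30, new_term=90))
# ... and win-lose once a rating change narrows it.
print(supplier_net_benefit(100_000, r_supplier=0.08, r_buyer=0.06, old_term=30, new_term=90))
```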
Uncontrolled movements of laparoscopic instruments can lead to inadvertent injury of adjacent structures. The risk becomes evident when the dissecting instrument is located outside the field of view of the laparoscopic camera. Technical solutions to ensure patient safety are appreciated. The present work evaluated the feasibility of an automated binary classification of laparoscopic image data using convolutional neural networks (CNNs) to determine whether the dissecting instrument is located within the laparoscopic image section. A unique record of images was generated from six laparoscopic cholecystectomies in a surgical training environment to configure and train the CNN. By using a temporary version of the neural network, the annotation of the training image files could be automated and accelerated. A combination of oversampling and selective data augmentation was used to enlarge the fully labelled image data set and prevent loss of accuracy due to imbalanced class volumes. Subsequently, the same approach was applied to the comprehensive, fully annotated Cholec80 database. The described process led to the generation of extensive and balanced training image data sets. The performance of the CNN-based binary classifiers was evaluated on separate test records from both databases. On our recorded data, an accuracy of 0.88 with regard to the safety-relevant classification was achieved. The subsequent evaluation on the Cholec80 data set yielded an accuracy of 0.84. The presented results demonstrate the feasibility of a binary classification of laparoscopic image data for the detection of adverse events in a surgical training environment using a specifically configured CNN architecture.
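The class-balancing step described above, oversampling combined with selective augmentation, might be sketched as follows in PyTorch. The dataset folder "laparoscopy_frames" and the specific transforms are assumptions for illustration, not the study's pipeline.

```python
# Sketch of balancing an imbalanced binary image data set: augmentation
# plus oversampling via a weighted sampler.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),      # selective augmentation
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("laparoscopy_frames", transform=augment)

# Oversample the rarer class ("instrument outside the image section") so
# that each mini-batch is roughly balanced.
targets = torch.tensor(data.targets)
class_counts = torch.bincount(targets)
weights = (1.0 / class_counts.float())[targets]
sampler = WeightedRandomSampler(weights, num_samples=len(data), replacement=True)
loader = DataLoader(data, batch_size=32, sampler=sampler)
```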
The evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the wide range of body signals, we chose two signals, namely photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals, which are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to that of two recent algorithms (particle Bernstein and a Monte Carlo-based regression), both in terms of accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures by reducing the problem size, and a detailed analysis of the compression strategy and results is presented.
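A minimal sketch of such an accuracy/runtime comparison for the two classical algorithms is shown below, on synthetic per-window features. The two newer algorithms from the paper (particle Bernstein, Monte Carlo-based regression) are not reimplemented here.

```python
# Compare decision tree and k-NN on placeholder activity-recognition
# features (windowed PPG + acceleration statistics), timing each fit.
import time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 40))            # per-window features (synthetic)
y = rng.integers(0, 5, size=5000)          # five activity classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for clf in (DecisionTreeClassifier(), KNeighborsClassifier(n_neighbors=5)):
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(type(clf).__name__, f"accuracy={acc:.2f}", f"time={time.perf_counter() - t0:.3f}s")
```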
Cell-cell and cell-extracellular matrix (ECM) adhesion regulates fundamental cellular functions and is crucial for cell-material contact. Adhesion is influenced by many factors, like the affinity and specificity of the receptor-ligand interaction or the overall ligand concentration and density. To investigate molecular details of cell-ECM and cadherin (cell-cell) interactions in vascular cells, functionalized nanostructured surfaces were used: ligand-functionalized gold nanoparticles (AuNPs), 6-8 nm in diameter, are precisely immobilized on a surface and separated by non-adhesive regions so that individual integrins or cadherins can specifically interact with the ligands on the AuNPs. Using distances of 40 nm and 90 nm between the AuNPs, functionalized either with peptide motifs of the extracellular matrix (RGD or REDV) or with vascular endothelial cadherins (VEC), the influence of distance and ligand specificity on the spreading and adhesion of endothelial cells (ECs) and smooth muscle cells (SMCs) was investigated. We demonstrate that RGD-dependent adhesion of vascular cells is similar to that of other cell types and that the distance dependence of integrin binding to ECM peptides is also valid for the REDV motif. VEC ligands decreased adhesion significantly at the tested ligand distances. These results may be helpful for future improvements in vascular tissue engineering and for the development of implant surfaces.
A full understanding of the relationship between surface properties, protein adsorption, and immune responses is lacking but is of great interest for the design of biomaterials with desired biological profiles. In this study, polyelectrolyte multilayer (PEM) coatings with gradient changes in surface wettability were developed to shed light on how this impacts protein adsorption and immune response in the context of material biocompatibility. The analysis of immune responses by peripheral blood mononuclear cells to PEM coatings revealed an increased expression of proinflammatory cytokines tumor necrosis factor (TNF)-α, macrophage inflammatory protein (MIP)-1β, monocyte chemoattractant protein (MCP)-1, and interleukin (IL)-6 and the surface marker CD86 in response to the most hydrophobic coating, whereas the most hydrophilic coating resulted in a comparatively mild immune response. These findings were subsequently confirmed in a cohort of 24 donors. Cytokines were produced predominantly by monocytes with a peak after 24 h. Experiments conducted in the absence of serum indicated a contributing role of the adsorbed protein layer in the observed immune response. Mass spectrometry analysis revealed distinct protein adsorption patterns, with more inflammation-related proteins (e.g., apolipoprotein A-II) present on the most hydrophobic PEM surface, while the most abundant protein on the hydrophilic PEM (apolipoprotein A-I) was related to anti-inflammatory roles. The pathway analysis revealed alterations in the mitogen-activated protein kinase (MAPK)-signaling pathway between the most hydrophilic and the most hydrophobic coating. The results show that the acute proinflammatory response to the more hydrophobic PEM surface is associated with the adsorption of inflammation-related proteins. Thus, this study provides insights into the interplay between material wettability, protein adsorption, and inflammatory response and may act as a basis for the rational design of biomaterials.
This article reviews the literature on Christmas economics. First, we present an overall picture of the debate on the potential welfare loss of gift-giving and we show strategies that reduce the potential welfare loss and might increase the number of presents received. Second, we discuss the effect of Christmas on prices and the business cycle. We provide evidence that at Christmas stock prices and airfares increase, while food prices decrease.
The functionality of existing cyber-physical production systems generally focuses on mapping technological specifications derived from production requirements. Consequently, such systems base their conception on a structurally mechanistic paradigm. Insofar as these approaches have considered humans, their conception is likewise based on the structurally identical paradigm. With the fundamental reorientation towards explicitly human-centered approaches, it becomes more and more apparent that essential aspects of the dimension "human" remain unconsidered by the previous paradigm. To overcome such limitations, mapping the "social" dimension requires a structurally different approach. In this paper, an anthropocentric approach is developed based on possible conceptions of the human being, enabling a structural integration of the human being in an extended dimension. Through the model, extending concepts for better integration of the human being in the sense of human-centered approaches, as envisioned in the Industrie 5.0 conception, becomes possible.
Artificial intelligence is a field of research that is seen as a means of realizing digitalization and Industry 4.0. It is considered the critical technology needed to drive the future evolution of manufacturing systems. At the same time, automated guided vehicles (AGVs) have developed into an essential part of manufacturing systems due to the flexibility they contribute to the whole manufacturing process. However, there are still open challenges in the intelligent control of these vehicles on the factory floor, especially in dynamic environments where resources should be controlled in such a way that they can be adjusted to turbulences efficiently. Therefore, this paper develops a conceptual framework that applies a catalog of criteria to several machine learning algorithms in order to find the optimal algorithm for the intelligent control of AGVs. By applying the developed framework, the algorithm most suitable for the current operation of the AGV is selected automatically, enabling efficient control within the factory environment. In future work, this decision-making framework can be transferred to further scenarios with multiple AGV systems, including internal communication within AGV fleets. With this study, the automatic selection of the optimal machine learning algorithm for the AGV improves performance in such a way that computational power is distributed efficiently within a hybrid system linking the AGV and cloud storage.
Conventional production systems are evolving, through cyber-physical systems and application-oriented approaches to AI, more and more into "smart" production systems, which are characterized among other things by a high level of communication and integration of the individual components. The exchange of information between the systems is usually oriented only towards the data content, and semantics is usually considered only implicitly. The adaptability required by external and internal influences demands the integration of new components or the redesign of existing ones. Through an open, application-oriented ontology, the information and communication exchange is extended by explicit semantic information. This enables better integration of new components and easier reconfiguration of existing ones. The developed ontology and the derived application and use of the semantic information are evaluated by means of a practical use case.
Modern production systems are characterized by the increasing use of CPS and IoT networks. However, processing the available information for adaptation and reconfiguration often occurs in relatively large time cycles and thus does not exploit the optimization potential available in the short term. In this paper, a concept is presented that, based on the process information of the individual heterogeneous system elements, detects optimization potentials and performs or proposes adaptation or reconfiguration. The concept is evaluated by means of a case study in a learning factory. The resulting system thus enables better exploitation of the potentials of the CPPS.
The paradigmatic shift of production systems towards Cyber-Physical Production Systems (CPPSs) requires the development of flexible and decentralized approaches. In this way, such systems enable manufacturers to respond quickly and accurately to changing requirements. However, domain-specific applications require the use of suitable conceptualizations, and when various conceptualizations are used, the interoperability of the different ontologies becomes an issue. Achieving flexibility and adaptability in CPPSs therefore requires overcoming interoperability issues within them. This paper presents an approach to increase flexibility and adaptability in CPPSs while addressing the interoperability issue. In this work, OWL ontologies conceptualize domain knowledge, and the Intelligent Manufacturing Knowledge Ontology Repository (IMKOR) connects the domain knowledge of the different ontologies. Tests examined whether adaptations in one ontology within the IMKOR provide knowledge to the whole repository. The tests showed positive results: the repository makes the knowledge available to the whole CPPS, and an increase in flexibility and adaptability was observed.
Cyber-Physical Production Systems increasingly use semantic information to meet the grown flexibility requirements. Ontologies are often used to represent and use this semantic information. Existing systems focus on mapping knowledge and less on the exchange with other relevant IT systems (e.g., ERP systems) in which crucial semantic information, often implicit, is contained. This article presents an approach that enables the exchange of semantic information via adapters. The approach is demonstrated by a use case involving an MES and an ERP system.
The strong demand for a transformation of the textile and fashion industry towards sustainability requires a continuous implementation of the guiding principle of Education for Sustainable Development (ESD) in education and industry [1, 2]. In a first step of the European research project "Sustainable fashion curriculum at textile Universities in Europe - Development, Implementation and Evaluation of a Teaching Module for Educators" (Fashion DIET), a continuing education module is to be created to implement ESD as a guiding principle in university teaching. The research-based teaching and learning materials are delivered through an e-learning portal.
Mystery shopping (MS) is a widely used tool to monitor the quality of service and personal selling. In consultative retail settings, assessments of mystery shoppers are supposed to capture the most relevant aspects of sales people's service and sales behavior. Given the important conclusions drawn by managers from MS results, the standard assumption seems to be that assessments of mystery shoppers are strongly related to customer satisfaction and sales performance. However, surprisingly scant empirical evidence supports this assumption. We test the relationship between MS assessments and customer evaluations and sales performance with large-scale data from three service retail chains. Surprisingly, we do not find a substantial correlation. The results show that mystery shoppers are not good proxies for real customers. While MS assessments are not related to sales, our findings confirm the established correlation between customer satisfaction measurements and sales results.
Context
Microservices as a lightweight and decentralized architectural style with fine-grained services promise several beneficial characteristics for sustainable long-term software evolution. Success stories from early adopters like Netflix, Amazon, or Spotify have demonstrated that it is possible to achieve a high degree of flexibility and evolvability with these systems. However, the described advantageous characteristics offer no concrete guidance and little is known about evolvability assurance processes for microservices in industry as well as challenges in this area. Insights into the current state of practice are a very important prerequisite for relevant research in this field.
Objective
We therefore wanted to explore how practitioners structure the evolvability assurance processes for microservices, what tools, metrics, and patterns they use, and what challenges they perceive for the evolvability of their systems.
Method
We first conducted 17 semi-structured interviews and discussed 14 different microservice-based systems and their assurance processes with software professionals from 10 companies. Afterwards, we performed a systematic grey literature review (GLR) and used the created interview coding system to analyze 295 practitioner online resources.
Results
The combined analysis revealed the importance of finding a sensible balance between decentralization and standardization. Guidelines like architectural principles were seen as valuable to ensure a base consistency for evolvability and specialized test automation was a prevalent theme. Source code quality was the primary target for the usage of tools and metrics for our interview participants, while testing tools and productivity metrics were the focus of our GLR resources. In both studies, practitioners did not mention architectural or service-oriented tools and metrics, even though the most crucial challenges like Service Cutting or Microservices Integration were of an architectural nature.
Conclusions
Practitioners relied on guidelines, standardization, or patterns like Event-Driven Messaging to partially address some reported evolvability challenges. However, specialized techniques, tools, and metrics are needed to support industry with the continuous evaluation of service granularity and dependencies. Future microservices research in the areas of maintenance, evolution, and technical debt should take our findings and the reported industry sentiments into account.
Context
Web APIs are one of the most used ways to expose application functionality on the Web, and their understandability is important for efficiently using the provided resources. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking.
Objective
We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) if rule violations are also perceived as more difficult to understand, and 3) if demographic attributes like REST-related experience have an influence on this.
Method
We conducted a controlled Web-based experiment with 105 participants, from both industry and academia and with different levels of experience. Based on a hybrid between a crossover and a between-subjects design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a rule and one that was a violation of this rule. Participants answered comprehension questions and rated the perceived difficulty.
Results
For 11 of the 12 rules, we found that the violation version performed significantly worse than the rule version in the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance for violations.
Conclusions
Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
Software evolvability is an important quality attribute, yet one difficult to grasp. A certain base level of it is allegedly provided by service- and microservice-based systems, but many software professionals lack a systematic understanding of the reasons and preconditions for this. We address this issue via the proxy of architectural modifiability tactics. By qualitatively mapping principles and patterns of Service-Oriented Architecture (SOA) and microservices onto tactics and analyzing the results, we can not only generate insights into service-oriented evolution qualities, but can also provide a modifiability comparison of the two popular service-based architectural styles. The results suggest that both SOA and microservices possess several inherent qualities beneficial for software evolution. While both focus strongly on loose coupling and encapsulation, there are also differences in the way they strive for modifiability (e.g. governance vs. evolutionary design). To leverage the insights of this research, however, it is necessary to find practical ways to incorporate the results as guidance into the software development process.
Background: Design patterns are supposed to improve various quality attributes of software systems. However, there is controversial quantitative evidence of this impact. Especially for younger paradigms such as service- and microservice-based systems, there is a lack of empirical studies.
Objective: In this study, we focused on the effect of four service-based patterns - namely process abstraction, service façade, decomposed capability, and event-driven messaging - on the evolvability of a system from the viewpoint of inexperienced developers.
Method: We conducted a controlled experiment with Bachelor students (N = 69). Two functionally equivalent versions of a service-based web shop - one with patterns (treatment group), one without (control group) - had to be changed and extended in three tasks. We measured evolvability by the effectiveness and efficiency of the participants in these tasks. Additionally, we compared both system versions with nine structural maintainability metrics for size, granularity, complexity, cohesion, and coupling.
Results: Both experiment groups were able to complete a similar number of tasks within the allowed 90 min. Median effectiveness was 1/3. Mean efficiency was 12% higher in the treatment group, but this difference was not statistically significant. Only for the third task, we found statistical support for accepting the alternative hypothesis that the pattern version led to higher efficiency. In the metric analysis, the pattern version had worse measurements for size and granularity while simultaneously having slightly better values for coupling metrics. Complexity and cohesion were not impacted.
Interpretation: For the experiment, our analysis suggests that the difference in efficiency is stronger with more experienced participants and increased from task to task. With respect to the metrics, the patterns introduce additional volume in the system, but also seem to decrease coupling in some areas.
Conclusions: Overall, there was no clear evidence for a decisive positive effect of using service-based patterns, neither for the student experiment nor for the metric analysis. This effect might only be visible in an experiment setting with higher initial effort to understand the system or with more experienced developers.
Sleep is extremely important for physical and mental health. Although polysomnography is an established approach in sleep analysis, it is quite intrusive and expensive. Consequently, developing a non-invasive and non-intrusive home sleep monitoring system with minimal influence on patients, which can reliably and accurately measure cardiorespiratory parameters, is of great interest. The aim of this study is to validate a non-invasive and unobtrusive cardiorespiratory parameter monitoring system based on an accelerometer sensor. This system includes a special holder to install the sensor under the bed mattress. An additional aim is to determine the optimal relative position of the system (in relation to the subject) at which the most accurate and precise values of the measured parameters can be achieved. The data were collected from 23 subjects (13 males and 10 females). The obtained ballistocardiogram signal was sequentially processed using a sixth-order Butterworth bandpass filter and a moving average filter. As a result, an average error (compared to reference values) of 2.24 beats per minute for heart rate and 1.52 breaths per minute for respiratory rate was achieved, regardless of the subject's sleep position. For males and females, the errors were 2.28 bpm and 2.19 bpm for heart rate and 1.41 rpm and 1.30 rpm for respiratory rate, respectively. We determined that placing the sensor and system at chest level is the preferred configuration for cardiorespiratory measurement. Despite the promising results of the current tests in healthy subjects, further studies of the system's performance in larger groups of subjects are required.
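The described processing chain, a sixth-order Butterworth bandpass followed by a moving-average filter, can be sketched with SciPy as follows. The sampling rate, passband edges, and averaging window are assumed values for illustration; the abstract does not state them.

```python
# BCG processing sketch: bandpass filtering plus moving-average smoothing
# on a synthetic heart-beat-like signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)
bcg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(3).normal(size=t.size)

b, a = butter(N=6, Wn=[0.7, 10.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, bcg)               # zero-phase sixth-order bandpass

window = int(0.1 * fs)                       # 100 ms moving average
smoothed = np.convolve(filtered, np.ones(window) / window, mode="same")
```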
Sleep is an essential part of human existence, as we are in this state for approximately a third of our lives. Sleep disorders are common conditions that can affect many aspects of life. Sleep disorders are diagnosed in specialized laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation, including performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during breathing. The presented processing algorithm filters the obtained signals and determines the presence of apnea events. The performance of the developed system and apnea-detection algorithm (average accuracy, specificity and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
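A minimal sketch of apnea-event detection on such a filtered respiratory signal is given below: windows with abnormally low movement energy that persist for at least 10 s are flagged. The windowing, threshold, and minimum pause duration are assumptions, not the paper's algorithm.

```python
# Flag sustained low-effort segments in a filtered respiratory signal as
# candidate apnea events (all parameters are illustrative assumptions).
import numpy as np

def detect_apnea(resp, fs, min_pause_s=10.0, rel_threshold=0.2):
    """Return sample indices where breathing effort stays abnormally low."""
    win = int(fs)                                    # 1 s analysis windows
    energy = np.array([np.var(resp[i:i + win])
                       for i in range(0, len(resp) - win, win)])
    low = energy < rel_threshold * np.median(energy)
    events, run = [], 0
    for i, flag in enumerate(low):
        run = run + 1 if flag else 0                 # consecutive low-energy seconds
        if run >= min_pause_s:                       # sustained pause -> event
            events.append(i * win)
            run = 0
    return events
```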
Sleep is essential to physical and mental health. However, the traditional approach to sleep analysis, polysomnography (PSG), is intrusive and expensive. Therefore, there is great interest in the development of non-contact, non-invasive, and non-intrusive sleep monitoring systems and technologies that can reliably and accurately measure cardiorespiratory parameters with minimal impact on the patient. This has led to the development of other relevant approaches, which are characterised, for example, by the fact that they allow greater freedom of movement and do not require direct contact with the body, i.e., they are non-contact. This systematic review discusses the relevant methods and technologies for non-contact monitoring of cardiorespiratory activity during sleep. Taking into account the current state of the art in non-intrusive technologies, we identify the methods of non-intrusive monitoring of cardiac and respiratory activity, the technologies and types of sensors used, and the physiological parameters available for analysis. To this end, we conducted a literature review and summarised current research on the use of non-contact technologies for non-intrusive monitoring of cardiac and respiratory activity. The inclusion and exclusion criteria for the selection of publications were established prior to the start of the search. Publications were assessed using one main question and several specific questions. We obtained 3774 unique articles from four literature databases (Web of Science, IEEE Xplore, PubMed, and Scopus) and checked them for relevance, resulting in 54 articles that were analysed in a structured way using consistent terminology. The result was 15 different types of sensors and devices (e.g., radar, temperature sensors, motion sensors, cameras) that can be installed in hospital wards and departments or in the domestic environment. The ability to detect heart rate, respiratory rate, and sleep disorders such as apnoea was among the characteristics examined to assess the overall effectiveness of the systems and technologies considered for cardiorespiratory monitoring. In addition, the advantages and disadvantages of the considered systems and technologies were identified by answering the research questions. The results obtained allow us to identify current trends and the direction of development of medical technologies in sleep medicine for future researchers.
This article analyzes, experimentally and theoretically, the influence of microscope parameters on pinhole-assisted Raman depth profiles in uniform and composite refractive media. The main objective is the reliable mapping of deep sample regions. The results that are easiest to interpret are found with low magnification, low aperture, and small pinholes. Here, the intensities and shapes of the Raman signals are independent of the location of the emitter relative to the sample surface. Theoretically, the results can be well described with a simple analytical equation containing the axial depth resolution of the microscope and the position of the emitter. The smallest determinable object size is limited to 2–4 μm. If sub-micrometer resolution is desired, high magnification, mostly combined with high aperture, becomes necessary. In refractive media, the signal intensities and shapes then depend on the position relative to the sample surface. This aspect is investigated on a number of uniform and stacked polymer layers, 2–160 μm thick, with the best available transparency. The experimental depth profiles are numerically fitted with excellent accuracy by propagating a Gaussian excitation beam of variable waist and fill fraction through the focusing lens area, and by treating the Raman emission with geometric optics as a spontaneous isotropic process through the lens and the variable pinhole, respectively. The intersectional area of these two solid angles yields the leading factor in understanding confocal (pinhole-assisted) Raman depth profiles.
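For the low-aperture case, the "simple analytical equation" amounts to convolving the true layer profile with the axial instrument response. The following numerical sketch illustrates this with an assumed Gaussian axial response; the depth, thickness, and resolution values are illustrative only.

```python
# Simulated depth scan of a buried layer: boxcar layer profile convolved
# with a Gaussian axial instrument function (all values assumed).
import numpy as np

z = np.linspace(-40, 120, 1600)                     # focus position in um
layer = ((z >= 20) & (z <= 60)).astype(float)       # 40 um layer at 20 um depth

fwhm = 8.0                                          # assumed axial resolution in um
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
kernel = np.exp(-0.5 * ((z - z.mean()) / sigma) ** 2)
kernel /= kernel.sum()

profile = np.convolve(layer, kernel, mode="same")   # simulated Raman depth profile
```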
Gender pay gaps are commonly studied in populations with already completed educational careers. We focus on an earlier stage by investigating the gender pay gap among university students working alongside their studies. With data from five cohorts of a large-scale student survey from Germany, we use regression and wage decomposition techniques to describe gender pay gaps and potential explanations. We find that female students earn about 6% less on average than male students, which reduces to 4.1% when accounting for a rich set of explanatory variables. The largest explanatory factor is the type of jobs male and female students pursue.
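A stylized two-fold wage decomposition in the spirit of the techniques mentioned above can be sketched as follows, with synthetic data, log wages, and a single covariate for brevity; the study uses a much richer set of explanatory variables.

```python
# Oaxaca-Blinder-style decomposition of a gender wage gap into an
# "explained" (endowment) and an "unexplained" part, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
hours = rng.uniform(5, 20, size=2 * n)              # weekly working hours
male = np.repeat([1, 0], n)
log_wage = 2.3 + 0.01 * hours + 0.06 * male + rng.normal(0, 0.2, size=2 * n)

Xm = sm.add_constant(hours[male == 1])
Xf = sm.add_constant(hours[male == 0])
bm = sm.OLS(log_wage[male == 1], Xm).fit().params   # male wage equation

gap = log_wage[male == 1].mean() - log_wage[male == 0].mean()
explained = (Xm.mean(axis=0) - Xf.mean(axis=0)) @ bm  # endowment differences
unexplained = gap - explained                          # residual gap
print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")
```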
In this article, feedback linearization for control-affine nonlinear systems is extended to systems where linearization is not feasible in the complete state space, by combining state feedback linearization with homotopy numerical continuation in those subspaces of the phase space where feedback linearization fails. Starting from the conceptual simplicity of feedback linearization, this new method expands its applicability to irregular systems with a poorly expressed relative degree. The method is illustrated on a simple SISO system and by controlling the speed and the rotor flux linkage in a three-phase induction machine.
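The core of input-output feedback linearization can be shown symbolically: for a control-affine system x' = f(x) + g(x)u with output y = h(x) and relative degree r, the feedback u = (v - L_f^r h) / (L_g L_f^{r-1} h) renders the input-output map linear. The sketch below uses a generic pendulum-like example, not the article's induction-machine model; where the decoupling term L_g L_f h vanishes, this formula breaks down, which is exactly the regime the article addresses with homotopy continuation.

```python
# Symbolic feedback linearization for a generic SISO example.
import sympy as sp

x1, x2, v = sp.symbols("x1 x2 v")
f = sp.Matrix([x2, -sp.sin(x1)])       # drift vector field
g = sp.Matrix([0, 1])                  # input vector field
h = x1                                 # output

def lie(vec, scalar, xs=(x1, x2)):
    """Lie derivative of a scalar field along a vector field."""
    return sum(sp.diff(scalar, xi) * vi for xi, vi in zip(xs, vec))

Lfh = lie(f, h)                        # relative degree 2: Lg h = 0 ...
Lf2h = lie(f, Lfh)
LgLfh = lie(g, Lfh)                    # ... but Lg Lf h != 0 here

u = sp.simplify((v - Lf2h) / LgLfh)    # linearizing feedback u(x, v)
print(u)                               # -> v + sin(x1)
```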
Methods for increasing the energy efficiency of induction motors by an appropriate control strategy have been a subject of research in recent years. Several methods for loss minimization have been developed for induction motors operated in a steady state. Recently, solutions for the dynamic case have been given as well, using either an online or an offline optimization approach and thus implying a certain computational burden, which is undesirable in practice. This paper shows that the appropriate application of steady-state techniques during transients due to a changing motor torque is a suboptimal strategy with acceptable performance for efficiency optimization of an induction machine in which saturation effects of the main inductance must be considered. The optimization problem is simplified such that a simple suboptimal solution is possible, and the quality of the suboptimal solution is investigated by simulations and measurements. The proposed solution is simple, easy to implement, and does not require online optimization. In addition, the influence of magnetizing inductance saturation is considered.
The increasing complexity and availability requirements of automated guided vehicles (AGVs) pose challenges to companies, leading to a focus on new maintenance strategies. In this paper, a smart maintenance architecture based on a digital twin is presented to optimize the technical and economic effectiveness of AGV maintenance activities. To realize this, a literature review was conducted to identify the necessary requirements for smart maintenance and digital twins. The identified requirements were combined into modules and then integrated into an architecture. The architecture was evaluated on a real AGV, using the battery as one of its critical components.
Literature reviews are essential for any scientific work, whether as part of a dissertation or as a stand-alone work. Scientists benefit from the fact that more and more literature is available in electronic form, and finding and accessing relevant literature has become easier through scientific databases. However, the traditional literature review method is characterized by a highly manual process, while technologies and methods in big data, machine learning, and text mining have advanced. Especially in areas where research streams are rapidly evolving and topics are becoming more comprehensive, complex, and heterogeneous, it is challenging to provide a holistic overview and identify research gaps manually. Therefore, we have developed a framework that supports the traditional approach of conducting a literature review with machine learning and text mining methods. The framework is particularly suitable where a large amount of literature is available and a holistic understanding of the research area is needed. It consists of several steps in which the critical mind of the scientist is supported by machine learning. The unstructured text data is transformed into a structured form through data preparation realized with text mining, making it amenable to various machine learning techniques. A concrete example in the field of smart cities makes the framework tangible.
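The text-mining step of such a framework could, in its simplest form, vectorise abstracts with TF-IDF and cluster them into candidate topics for the researcher to inspect. The corpus below is a placeholder and the cluster count is an assumption.

```python
# Cluster paper abstracts into candidate topics with TF-IDF + k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "smart city governance and data platforms",
    "urban mobility and traffic optimization",
    "energy efficient buildings in smart cities",
    # ... hundreds more in a real review
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Print the most characteristic terms per cluster for manual inspection.
terms = vec.get_feature_names_out()
for c in range(km.n_clusters):
    top = km.cluster_centers_[c].argsort()[::-1][:3]
    print(f"cluster {c}:", [terms[i] for i in top])
```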
The shift of populations to cities is creating challenges in many respects, thus leading to increasing demand for smart solutions of urbanization problems. Smart city applications range from technical and social to economic and ecological. The main focus of this work is to provide a systematic literature review of smart city research to answer two main questions: (1) How is current research on smart cities structured? and (2) What directions are relevant for future research on smart cities? To answer these research questions, a text-mining approach is applied to a large number of publications. This provides an overview and gives insights into relevant dimensions of smart city research. Although the main dimensions of research are already described in the literature, an evaluation of the relevance of such dimensions is missing. Findings suggest that the dimensions of environment and governance are popular, while the dimension of economy has received only limited attention.
The benefits of urban data cannot be realized without a political and strategic view of data use. A core concept within this view is data governance, which aligns strategy in data-relevant structures and entities with data processes, actors, architectures, and overall data management. Data governance is not a new concept and has long been addressed by scientists and practitioners from an enterprise perspective. In the urban context, however, data governance has only recently attracted increased attention, despite the unprecedented relevance of data in the advent of smart cities. Urban data governance can create semantic compatibility between heterogeneous technologies and data silos and connect stakeholders by standardizing data models, processes, and policies. This research provides a foundation for developing a reference model for urban data governance, identifies challenges in dealing with data in cities, and defines factors for the successful implementation of urban data governance. To obtain the best possible insights, the study carries out qualitative research following the design science research paradigm, conducting semi-structured expert interviews with 27 municipalities from Austria, Germany, Denmark, Finland, Sweden, and the Netherlands. The subsequent data analysis based on cognitive maps provides valuable insights into urban data governance. The interview transcripts were transferred and synthesized into comprehensive urban data governance maps to analyze entities and complex relationships with respect to the current state, challenges, and success factors of urban data governance. The findings show that each municipal department defines data governance separately, with no uniform approach. Given cultural factors, siloed data architectures have emerged in cities, leading to interoperability and integrability issues. A city-wide data governance entity in a cross-cutting function can be instrumental in breaking down silos in cities and creating a unified view of the city’s data landscape. The further identified concepts and their mutual interaction offer a powerful tool for developing a reference model for urban data governance and for the strategic orientation of cities on their way to data-driven organizations.
For a holistic assessment of the interaction between the human body and tight-fitting clothing, it is necessary to consider the mechanical properties of the body. Default avatars in CAD software are usually solid and do not take this interaction into account. For this purpose, a solid avatar is converted to a deformable one using the soft-body physics implementation in the simulation program Blender. The fit of a 3D garment on both avatars is compared, which allows a first evaluation of the differences between these approaches.
The implementation of human resource (HR) policies often proves troublesome due to the appearance, and stubborn persistence, of gaps in the process. Human resource management (HRM) scholars problematise these gaps and advocate tight implementation to reduce gaps and to ensure the desired impact of policies on organisational performance. Drawing on organisational institutionalism, we contend that gaps in implementing HR policies can actually be productive, as they secure organisational legitimacy and thus enable organisations to operate viably within several institutional environments. We suggest that different approaches to implementation are needed, some of them premised on accepting sustained implementation gaps. We introduce minimum and moderate implementation approaches, rooted in the notion of decoupling, to complement approaches aimed at tight implementation. Our aim is to support the further development of research based on a richer interpretation of HRM implementation challenges and the choices they present for HR managers.
»Flexible Work Practices: An Analysis from a Pragmatist Perspective«. Traditional human resource management (HRM) research can hardly relate to today's developments in the world of work. Organizational boundaries are blurred by the complexity arising from globalization, digitalization, and demographic change. In practice, new ways of organizing work can be found that depend on the specifics of the work situation. In this paper, we build on the economics of convention (EC) to elaborate on the current challenges HRM scholarship is confronted with and to provide a theoretical lens that goes beyond the tension between market and bureaucracy principles in actual employment settings. We apply EC's situationalist methodology to examples of the challenging coordination of flexibility in the workplace. We explain two hybrid forms of coordination – compromises and local arrangements – and highlight the dynamics of employment practices in organizations related to these forms. Thereby, we show that different modes of coordination in employment are applied in a fluctuating manner that depends on the specific situation. In doing so, we further seek to remind HRM scholars of the fruitfulness of the pragmatist perspective in analyzing work practices and to extend its conceptual toolkit for future analysis.
Consistent supply chain management across all levels of value creation is a common approach in the industrial sector. Its implementation in agricultural processes requires a rethinking of the supply chain concept. The reasons are the heuristically characterized processes, the stochastic environmental conditions, the mobility of the production facilities, and the low division of labour.
In this paper, we discuss how innovative supply chain management concepts from Industrie 4.0 could not only provide a way to overcome these problems but also lay the foundation for developing new forms of work and business models for Farming 4.0.
In order to decouple economic growth from global material consumption, it is necessary to implement material efficiency strategies at the level of single enterprises and their supply chains, and to implement circular economy aspects. Manufacturing firms face multiple implementation challenges, such as cost limitations, competition, innovation and stakeholder pressure, and supplier and customer relationships. Taking the case of a medium-sized manufacturing company as an example, opportunities to realise material efficiency improvements within the company's borders, along the supply chain, and through circular economy measures are assessed. Deterministic calculations and simulations performed for the supply chain of this company show that measures to increase material efficiency in the supply chain are important. However, they need to be complemented by efforts to return waste and used products to the economic cycle, which requires rethinking the traditional linear economic system.
Mature economies, which are driven mainly by small and medium-sized enterprises (SMEs), are increasingly becoming dependent on material imports. Global material consumption is ever increasing, driven mainly by population growth. Decoupling material consumption from economic growth is one of the greatest challenges of the 21st century. Within this paper, available methods for assessing material efficiency on different economic scales are investigated, and those particularly suitable for use in SMEs are identified. Recommendations for further improvements of the selected tools are given, along with an outlook on planned research activities in the field of material efficiency in enterprises, supply chains, and circular economy aspects.
Adaptation of the business model canvas template to develop business models for the circular economy
(2021)
The Business Model Canvas, as a template for strategic management, serves the development of new, or the documentation of existing, linear business models. However, the change towards a Circular Economy requires new value creation structures and thus changed business models. To develop business models for circular economies, it is necessary to adapt the existing template, since the actors involved along the value chain take on changed roles. In this paper, a template based on the existing Business Model Canvas is presented, which allows business models for a Circular Economy to be developed and documented.
Zero- or plus-energy office buildings must meet very high building standards and require highly efficient energy supply systems due to the limited space for renewable installations. Conventional solar cooling systems use photovoltaic electricity or thermal energy to run either a compression cooling machine or an absorption cooling machine to produce cooling energy during the daytime, while they use electricity from the grid for the nightly cooling energy demand. With a hybrid photovoltaic-thermal (PVT) collector, electricity and thermal energy can be produced at the same time. These collectors can also produce cooling energy at nighttime through longwave radiation exchange with the night sky and convection losses to the ambient air. Such a renewable trigeneration system opens up new fields of application. However, the technical, ecological, and economic aspects of such systems are still largely unexplored.
In this work, the potential of a PVT system to heat and cool office buildings in three different climate zones is investigated. In the investigated system, PVT collectors act as the heat source and heat sink for a reversible heat pump. Due to the reduced electricity consumption from the grid for heat rejection, the overall efficiency and economics improve compared to a conventional solar cooling system that uses a reversible air-to-water heat pump as the heat and cold source.
A parametric simulation study was carried out to evaluate system designs with different PVT surface areas and storage tank volumes, optimizing the system for three different climate zones and two different building standards. It is shown that such systems are technically feasible today. With maximum utilization of PV electricity for heating, ventilation, and air conditioning as well as for other electricity demands such as lighting and plug loads, high solar fractions and primary energy savings can be achieved.
Annual costs for such a system are comparable to those of conventional solar thermal and solar electric cooling systems. Nevertheless, the economic feasibility strongly depends on country-specific energy prices and energy policy. However, even in countries without compensation schemes for renewably produced energy, this system can still be economically viable today. It could be shown that, at each of the investigated locations worldwide, a specific system dimensioning can be found that enables economically and ecologically valuable operation of an office building with PVT technologies in different system designs.
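The abstract does not describe how the parametric study was automated; the sketch below is a minimal, non-authoritative illustration of a grid sweep over PVT area and storage volume, assuming a hypothetical run_trnsys() wrapper and invented parameter ranges.

# Minimal parametric-sweep sketch over PVT area and storage tank volume.
# run_trnsys() is a hypothetical stand-in for the TRNSYS simulation call;
# the toy formula and all parameter ranges are illustrative assumptions.
from itertools import product

def run_trnsys(pvt_area_m2, tank_volume_m3):
    # Toy stand-in returning a pseudo "solar fraction"; a real study would
    # launch the TRNSYS deck with these parameters and parse its output.
    return min(1.0, 0.008 * pvt_area_m2 + 0.02 * tank_volume_m3)

pvt_areas = [20, 40, 60, 80]      # m2, assumed range
tank_volumes = [1.0, 2.0, 4.0]    # m3, assumed range

results = {
    (a, v): run_trnsys(a, v) for a, v in product(pvt_areas, tank_volumes)
}
best = max(results, key=results.get)
print("best configuration (area m2, volume m3):", best)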
The global demand for resources such as energy, land, or water is constantly increasing. It is therefore not surprising that research on the Food-Energy-Water (FEW) nexus has become a scientific as well as a general focus in recent years. A significant increase in publications since 2015 can be observed, and it can be expected that this trend will continue. A multilevel (macro, meso, and micro) perspective is essential, as the FEW nexus has cross-sectoral interdependencies. Several review studies on the FEW nexus can be found in the literature; in general, it can be concluded that the FEW nexus is a multi-disciplinary and complex topic. The studies examined identify essential fields of action for research, policy, and society. However, questions such as: What are the main research fields at each level? Can the research be divided into specific clusters? Do the clusters correlate with the levels, and which modeling methods are used in the clusters and levels? are still not fully discussed in the literature. An extensive literature review was conducted to gain insight into the existing research areas. Especially in fields such as the FEW nexus, the body of literature can become very large, and it is easy to get lost analyzing it manually. We therefore created word clouds and performed a cluster and network analysis to support the selection of the most relevant papers for detailed reading. The most publications appeared in 2021 (173 publications, corresponding to a share of 26.6 %), continuing the significant increase observed since 2015, and it can be expected that this trend will continue in the coming years. Most first authors come from the USA (25.4 %), followed by China (22.4 %). From the word cloud and the top 20 words appearing in titles and abstracts, it can be deduced that the topic of water is the most represented. However, the terms system, resource, model, study, change, development, and management also appear to be very important, which indicates the importance of a holistic approach to the topic. In total, 9 clusters could be identified at the different levels. Three clusters are well formed; for the others, a rather diffuse picture can be observed. To find out which topics are hidden behind the individual clusters, 6 publications from each cluster were subjected to a more detailed examination. With these steps, 54 publications were identified for detailed consideration. The modeling approaches currently applied in research can be classified into domain-specific tools (e.g. global water models, crop models, or global climate models) and more general tools, for example for life cycle analysis, spatial analysis using geographic information systems, or system dynamics for a general understanding of the links between the domains. With the domain-specific tools, detailed research questions can be addressed for a specific domain. However, these tools have the disadvantage that the links between the food, energy, and water sectors in particular are not fully considered. Many implementations made today are at the lowest level (micro), relate to bounded spatial areas, and are derived from macro- and meso-level goals.
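The abstract does not detail how the word statistics behind the word cloud and the top-20 list were obtained; as a minimal sketch assuming a placeholder corpus of titles and abstracts, this reduces to stop-word-filtered term counting.

# Minimal sketch: stop-word-filtered term counting over titles/abstracts,
# the kind of statistic behind a word cloud and a top-20 word list.
# The two-document corpus and the stop list are placeholder assumptions.
import re
from collections import Counter

docs = [
    "Modeling the food energy water nexus with system dynamics",
    "A spatial analysis of water resource management at the micro level",
]

stopwords = {"the", "a", "of", "with", "at", "and"}
counts = Counter(
    w
    for doc in docs
    for w in re.findall(r"[a-z]+", doc.lower())
    if w not in stopwords
)

for word, freq in counts.most_common(20):
    print(f"{word}: {freq}")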
The paper explains a workflow to simulate the food-energy-water (FEW) nexus for an urban district, combining various data sources such as 3D city models, in particular the City Geography Markup Language (CityGML) data model from the Open Geospatial Consortium, OpenStreetMap, and census data. A long-term vision is to extend the CityGML data model by developing a FEW Application Domain Extension (FEW ADE) to support future FEW simulation workflows such as the one explained in this paper. Together with the mentioned simulation workflow, this paper also identifies some necessary FEW-related parameters for the future development of a FEW ADE. Furthermore, relevant key performance indicators are investigated, and the datasets necessary to calculate these indicators are studied. Finally, different calculations are performed for the downtown borough Ville-Marie in the city of Montréal (Canada) for the domains of food waste (FW) and wastewater (WW) generation. For this study, a workflow is developed to calculate the energy generation from anaerobic digestion of FW and WW. In the first step, data collection and preparation were carried out: relevant data for georeferencing, data for model set-up, and data for creating the required usage libraries, such as food waste and wastewater generation per person, were collected. The next step was data integration and the calculation of the relevant parameters; lastly, the results were visualized for analysis purposes. As a use case to support such calculations, the CityGML level-of-detail-two model of Montréal is enriched with information such as building functions and building usages from OpenStreetMap. The calculation of the total residents based on the CityGML model as the main input for Ville-Marie results in a population of 72,606. The statistical value for 2016 was 89,170, which corresponds to a deviation of 15.3 %. The energy recovery potential of FW is about 24,024 GJ/year and that of wastewater about 1,629 GJ/year, adding up to 25,653 GJ/year. Relating these values to the calculated number of inhabitants in Ville-Marie yields 330.9 kWh/year per person for FW and 22.4 kWh/year per person for wastewater.
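The exact coefficients of the workflow are not given in the abstract; the following sketch, with invented per-person generation rates and energy yields, illustrates the estimation chain from CityGML floor areas to residents and on to energy recovery potentials.

# Minimal sketch of the estimation chain: residents from CityGML residential
# floor areas, then energy recovery from food waste (FW) and wastewater (WW)
# via anaerobic digestion. All coefficients are invented placeholders, not
# the study's calibrated values.
AREA_PER_PERSON_M2 = 35.0        # assumed living area per resident
FW_KG_PER_PERSON_YEAR = 100.0    # assumed food waste generation per person
FW_ENERGY_MJ_PER_KG = 3.0        # assumed digestion energy yield from FW
WW_ENERGY_MJ_PER_PERSON = 250.0  # assumed yearly WW energy yield per person

floor_areas_m2 = [1200.0, 860.0, 4300.0]   # placeholder LoD2 buildings

residents = sum(int(a / AREA_PER_PERSON_M2) for a in floor_areas_m2)
fw_energy_gj = residents * FW_KG_PER_PERSON_YEAR * FW_ENERGY_MJ_PER_KG / 1000.0
ww_energy_gj = residents * WW_ENERGY_MJ_PER_PERSON / 1000.0

print(f"residents: {residents}")
print(f"FW energy recovery: {fw_energy_gj:.1f} GJ/year")
print(f"WW energy recovery: {ww_energy_gj:.1f} GJ/year")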
A laboratory prototype for hyperspectral imaging in the ultraviolet (UV) region from 225 to 400 nm was developed and used to rapidly characterize active pharmaceutical ingredients (APIs) in tablets. The APIs are ibuprofen (IBU), acetylsalicylic acid (ASA), and paracetamol (PAR). Two sample sets were used for comparison purposes: sample set one comprises tablets of 100 % API, and sample set two consists of commercially available painkiller tablets. Reference measurements were performed on the pure APIs in liquid solution (transmission) and in the solid phase (reflection) using a commercial UV spectrometer. The spectroscopic part of the prototype is based on a pushbroom imager that contains a spectrograph and a charge-coupled device (CCD) camera. The tablets were scanned on a conveyor belt positioned inside a tunnel made of polytetrafluoroethylene (PTFE) in order to increase the homogeneity of the illumination at the sample position. Principal component analysis (PCA) was used to differentiate the hyperspectral data of the drug samples. The first two PCs are sufficient to completely separate all samples. The rugged design of the prototype opens new possibilities for the further development of this technique towards real large-scale applications.
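As a minimal illustration of the chemometric step, the sketch below applies PCA to a synthetic hypercube; the dimensions and data are placeholder assumptions, not the prototype's actual measurements.

# Minimal PCA sketch for hyperspectral data: one spectrum per pixel as a row,
# projected onto the first two principal components.
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(50, 50, 256)     # 50 x 50 pixels, 256 UV bands (assumed)

X = cube.reshape(-1, cube.shape[-1])   # flatten: (2500 pixels, 256 bands)
scores = PCA(n_components=2).fit_transform(X)   # PCA mean-centers internally

# In the study, a PC1-vs-PC2 scatter plot separated all API classes.
print("score matrix shape:", scores.shape)      # (2500, 2)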
Hypericin has great potential in modern medicine and exhibits fascinating structural dynamics, such as multiple conformations and tautomerization. However, it is difficult to study individual conformers/tautomers, as they cannot be isolated due to the similarity of their chemical and physical properties. An approach to overcoming this difficulty is to combine single-molecule experiments with theoretical studies. Time-dependent density functional theory (TD-DFT) calculations reveal that tautomerization of hypericin occurs via a two-step proton transfer with an energy barrier of 1.63 eV, whereas a direct single-step pathway has a larger activation energy barrier of 2.42 eV. Tautomerization in hypericin is accompanied by a reorientation of the transition dipole moment, which can be observed directly through fluorescence intensity fluctuations. Quantitative tautomerization residence times can be obtained from the autocorrelation of the temporal emission behavior, revealing that hypericin stays in the same tautomeric state for several seconds, which can be influenced by the embedding matrix. Replacing hydrogen with deuterium provides further evidence that the underlying process is based on the tunneling of a proton. In addition, the tautomerization rate can be influenced by a λ/2 Fabry–Pérot microcavity, where the occupation of Raman-active vibrations can alter the tunneling rate.
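As a minimal, non-authoritative illustration of extracting a residence time from intensity fluctuations, the sketch below autocorrelates a synthetic two-state fluorescence trace; the trace, time step, and dwell statistics are invented.

# Minimal sketch: correlation time from the autocorrelation of a fluorescence
# intensity trace. The synthetic two-state (tautomer A/B) trace is invented.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01          # s per sample, assumed
n = 100_000

# Telegraph signal switching between two emission levels, ~2 s mean dwell time
dwells = np.maximum(1, (rng.exponential(2.0, size=2000) / dt).astype(int))
states = np.repeat(np.arange(2000) % 2, dwells)[:n]
trace = np.where(states == 0, 1.0, 0.4) + 0.05 * rng.standard_normal(n)

# FFT-based autocorrelation of the mean-subtracted trace
x = trace - trace.mean()
f = np.fft.rfft(x, 2 * n)
acf = np.fft.irfft(f * np.conj(f))[:n]
acf /= acf[0]

# The 1/e decay lag of the ACF gives a correlation time that reflects how
# long the molecule stays in one tautomeric state.
tau = np.argmax(acf < np.exp(-1)) * dt
print(f"estimated correlation time: {tau:.2f} s")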