With the progress of technology in modern hospitals, intelligent perioperative situation recognition will gain relevance due to its potential to substantially improve surgical workflows by providing situation knowledge in real time. Such knowledge can be extracted from image data by machine learning techniques but poses a privacy threat to the staff's and patients' personal data. De-identification is a possible solution for removing visually sensitive information. In this work, we developed a YOLO v3-based prototype to detect sensitive areas in the image in real time. These are then de-identified using common image obfuscation techniques. Our approach shows that it is in principle suitable for de-identifying sensitive data in OR images and contributes to a privacy-respectful way of processing in the context of situation recognition in the OR.
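The obfuscation step described in this abstract — replacing a detector-flagged region with a privacy-preserving version — can be sketched with simple mosaic pixelation. The function below is an illustrative stand-in, not the authors' implementation; the box format, block size, and detector interface are assumptions:

```python
import numpy as np

def pixelate_region(img, box, block=8):
    """De-identify a sensitive image region via mosaic pixelation.

    img   : H x W x 3 uint8 array (assumed layout)
    box   : (x1, y1, x2, y2) region, e.g. as reported by a detector
    block : mosaic cell size in pixels
    """
    x1, y1, x2, y2 = box
    region = img[y1:y2, x1:x2].astype(float)
    h, w = region.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cell = region[by:by + block, bx:bx + block]
            # Replace each cell with its per-channel mean colour
            cell[...] = cell.mean(axis=(0, 1))
    img[y1:y2, x1:x2] = region.astype(np.uint8)
    return img
```

In a real pipeline the boxes would come from the YOLO detector; stronger obfuscation (blurring, blacking out) follows the same pattern of overwriting the detected region.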
Digitalization is changing manufacturing dramatically. With regard to employees' demands, global trends and the technological vision of future factories, automotive manufacturing faces a huge number of diverse challenges. Currently, research focuses on the technological aspects of future factories in terms of digitalization. New ways of working and new organizational models for future factories have not yet been described. There are assumptions on how to develop the organization of work in a future factory, but up to now the literature shows deficits in scientifically substantiated answers in this research area. Consequently, the objective of this paper is to present an approach to work organization design for automotive Industry 4.0 manufacturing. Future requirements were analyzed and distilled into criteria that determine future agile organization design. These criteria were then transformed into functional mechanisms, which define the approach for shopfloor organization design.
The powder coating of veneered particle boards by the sequence electrostatic powder application - powder curing via hot pressing is studied in order to create high-gloss surfaces. To obtain an appealing aspect, veneer sheets were glued by heat and pressure on top of particle boards and the resulting surfaces were used as carrier substrates for powder coat finishing. Prior to the powder coating, the veneered particle board surfaces were pre-treated by sanding to obtain good uniformity, and the boards were stored in a climate chamber at controlled temperature and humidity conditions to adjust an appropriate electrical surface resistance. Characterization of the surface texture was done by 3D microscopy. The electrical surface resistance was measured for the six veneers before and after their application on the particle board surface. A transparent powder top-coat was applied electrostatically onto the veneered particle board surface. Curing of the powder was done using a heated press at 130 °C for 8 min, and a smooth, glossy coating was obtained on the veneered surfaces. By applying different amounts of powder the coating thickness could be varied, and the optimum amount of powder was determined for each veneer type.
In the powder coating of veneered particle boards, the highly reactive hybrid epoxy/polyester transparent powder Drylac 530 Series from TIGER Coatings GmbH & Co. KG, Wels, Austria was used. Curing is accelerated by a mixture of catalysts, reaching curing times of 3 min at 150 °C or 5 min at 135 °C, which allows for energy and time savings and makes the Drylac 530 Series powder suitable for coating temperature-sensitive substrates such as MDF and wood.
Decorative laminates based on melamine formaldehyde (MF) resin impregnated papers are used to a great extent for surface finishing of engineered wood used for furniture, kitchen and working surfaces, flooring, and exterior cladding. In all these applications, optically flawless appearance is a major issue. The work described here is focused on enhancing the cleanability and antifingerprint properties of smooth, matt surface-finished melamine-coated particleboards for furniture fronts, without at the same time changing or deteriorating other important surface parameters such as hardness, roughness or gloss. In order to adjust the surface polarity of a low pressure melamine film, novel interface-active macromolecular compounds were prepared and tested for their suitability as an antifingerprint additive. Two hydroxy-functional surfactants (polydimethylsiloxane, PDMS-OH and perfluoroether, PF-OH) were oxidized under mild conditions to the corresponding aldehydes (PDMS-CHO and PF-CHO) using a pyridinium chlorochromate catalyst. With the most promising oxidized polymeric additive, PDMS-CHO, the contact angles against water, n-hexadecane, and squalene increased from 79.8°, 26.3° and 31.4° for the pure MF surface to 108.5°, 54.8°, and 59.3°, respectively, for the modified MF surfaces. While for the laminated MF surface based on the oxidized fluoroether the gloss values were much higher than required, for the surfaces based on oxidized polydimethylsiloxane the technological values as well as the lower gloss values were in agreement with the requirements and showed much improved surface cleanability, as was also confirmed by colorimetric measurements.
Unprecedented formation of sterically stabilized phospholipid liposomes of cuboidal morphology
(2021)
Sterically stabilized phospholipid liposomes of unprecedented cuboid morphology are formed upon introduction into the bilayer membrane of original polymers based on polyglycidol bearing a lipid-mimetic residue. Strong hydrogen bonding in the polyglycidol sublayers creates attractive forces, which, facilitated by fluidization of the membrane, bring about the flattening of the bilayers and the formation of cuboid vesicles.
Theoretical foundation, effectiveness, and design artefact for machine learning service repositories
(2022)
Machine learning (ML) has played an important role in research in recent years. For companies that want to use ML, finding the algorithms and models that fit their business is tedious. A review of the available literature on this problem indicates only a few research papers. Given this gap, the aim of this paper is to design an effective and easy-to-use ML service repository. The corresponding research is based on a multi-vocal literature analysis combined with design science research, addressing three research questions: (1) How is current white and gray literature on ML services structured with respect to repositories? (2) Which features are relevant for an effective ML service repository? (3) How is a prototype for an effective ML service repository conceptualized? Findings are relevant for the explanation of user acceptance of ML repositories. This is essential for corporate practice in order to create and use ML repositories effectively.
The tale of 1000 cores: an evaluation of concurrency control on real(ly) large multi-socket hardware
(2020)
In this paper, we set out the goal to revisit the results of “Staring into the Abyss [...] of Concurrency Control with [1000] Cores” and analyse in-memory DBMSs on today's large hardware. Despite the original assumption of the authors, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware made its way into production data centres. Hence, we follow up on this prior work with an evaluation of the characteristics of concurrency control schemes on real production multi-socket hardware with 1568 cores. To our surprise, we made several interesting findings which we report on in this paper.
In this paper, we propose a radical new approach for scale-out distributed DBMSs. Instead of hard-baking an architectural model, such as a shared-nothing architecture, into the distributed DBMS design, we aim for a new class of so-called architecture-less DBMSs. The main idea is that an architecture-less DBMS can mimic any architecture on a per-query basis on-the-fly without any additional overhead for reconfiguration. Our initial results show that our architecture-less DBMS AnyDB can provide significant speedup across varying workloads compared to a traditional DBMS implementing a static architecture.
In our initial DaMoN paper, we set out the goal to revisit the results of “Staring into the Abyss [...] of Concurrency Control with [1000] Cores” (Yu in Proc. VLDB Endow 8: 209-220, 2014). Contrary to their assumption, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware is prevalent today and in fact offers over 1000 cores. Hence, we evaluated concurrency control (CC) schemes on a real (Intel-based) multi-socket platform. To our surprise, we made interesting findings opposing results of the original analysis that we discussed in our initial DaMoN paper. In this paper, we further broaden our analysis, detailing the effect of hardware and workload characteristics via additional real hardware platforms (IBM Power8 and 9) and the full TPC-C transaction mix. Among other things, we identified clear connections between the performance of the CC schemes and hardware characteristics, especially concerning NUMA and CPU cache. Overall, we conclude that no CC scheme can efficiently make use of large multi-socket hardware in a robust manner and suggest several directions on how CC schemes and overall OLTP DBMS should evolve in future.
In this paper, we present a new approach for achieving robust performance of data structures making it easier to reuse the same design for different hardware generations but also for different workloads. To achieve robust performance, the main idea is to strictly separate the data structure design from the actual strategies to execute access operations and adjust the actual execution strategies by means of so-called configurations instead of hard-wiring the execution strategy into the data structure. In our evaluation we demonstrate the benefits of this configuration approach for individual data structures as well as complex OLTP workloads.
This booklet will give you an overview of the development of CSR from a (brief) historic point of view and will examine the underlying concepts and research. Furthermore, examples of contemporary CSR management will be explored to show how companies interpret the issue and how they face the challenges of managing the new demands placed upon them. Business, in the end, comes down to figures and numbers which give management, shareholders and stakeholders a chance to measure a company's success. Therefore, modern methods and approaches for measuring, rating and ranking a company's CSR management will be presented. Finally, an attempt will be made to evaluate CSR as a tool for increasing global welfare and as a business and management strategy for companies and entrepreneurs.
The present publication reports the purification effort of two natural bone blocks, that is, an allogeneic bone block (maxgraft®, botiss biomaterials GmbH, Zossen, Germany) and a xenogeneic block (SMARTBONE®, IBI S.A., Mezzovico Vira, Switzerland) in addition to previously published results based on histology. Furthermore, specialized scanning electron microscopy (SEM) and in vitro analyses (XTT, BrdU, LDH) for testing of the cytocompatibility based on ISO 10993-5/-12 have been conducted. The microscopic analyses showed that both bone blocks possess a trabecular structure with a lamellar subarrangement. In the case of the xenogeneic bone block, only minor remnants of collagenous structures were found, while in contrast high amounts of collagen were found associated with the allogeneic bone matrix. Furthermore, only island-like remnants of the polymer coating in case of the xenogeneic bone substitute seemed to be detectable. Finally, no remaining cells or cellular remnants were found in both bone blocks. The in vitro analyses showed that both bone blocks are biocompatible. Altogether, the purification level of both bone blocks seems to be favorable for bone tissue regeneration without the risk for inflammatory responses or graft rejection. Moreover, the analysis of the maxgraft® bone block showed that the underlying purification process allows for preserving not only the calcified bone matrix but also high amounts of the intertrabecular collagen matrix.
Introduction: Bioresorbable collagenous barrier membranes are used to prevent premature soft tissue ingrowth and to allow bone regeneration. For volume stable indications, only non-absorbable synthetic materials are available. This study investigates a new bioresorbable hydrofluoric acid (HF)-treated magnesium (Mg) mesh in a native collagen membrane for volume stable situations. Materials and Methods: HF-treated and untreated Mg were compared in direct and indirect cytocompatibility assays. In vivo, 18 New Zealand White Rabbits each received four 8 mm calvarial defects and were divided into four groups: (a) HF-treated Mg mesh/collagen membrane, (b) untreated Mg mesh/collagen membrane, (c) collagen membrane and (d) sham operation. After 6, 12 and 18 weeks, Mg degradation and bone regeneration were measured using radiological and histological methods. Results: In vitro, HF-treated Mg showed higher cytocompatibility. Histopathologically, HF-Mg prevented gas cavities and was degraded by mononuclear cells via phagocytosis up to 12 weeks. Untreated Mg showed partially significantly more gas cavities and a fibrous tissue reaction. Bone regeneration was not significantly different between all groups. Discussion and Conclusions: HF-Mg meshes embedded in native collagen membranes represent a volume stable and biocompatible alternative to the non-absorbable synthetic materials. HF-Mg shows less corrosion and is degraded by phagocytosis. However, the application of membranes did not result in higher bone regeneration.
Analog-/Mixed-Signal (AMS) design verification is one of the most challenging and time consuming tasks of today's complex system on chip (SoC) designs. In contrast to digital system design, AMS designers have to deal with a continuous state space of conservative quantities, highly nonlinear relationships, non-functional influences, etc., enlarging the number of possibly critical scenarios to infinity. In this special session we demonstrate the verification of functional properties using simulative and formal methods. We combine different approaches including automated abstraction and refinement of mixed-level models, state-space discretization as well as affine arithmetic. To reach sufficient verification coverage with reasonable time and effort, we use enhanced simulation schemes to avoid conventional simulation drawbacks.
An integrated synchronous buck converter with a high resolution dead time control for input voltages up to 48 V and 10 MHz switching frequency is presented. The benefit of an enhanced dead time control at light loads to enable zero voltage switching at both the high-side and low-side switch at low output load is studied. This way, compact multi-MHz DC-DC converters can be implemented at high efficiency over a wide load current range. The concept also eliminates body diode forward conduction losses and minimizes reverse recovery losses. A dead time resolution of 125 ps is realized by an 8-bit differential delay chain. A further efficiency enhancement by soft switching at the high-side switch at light load is achieved with a voltage boost of the switching node by dead time control in forced continuous conduction mode. The monolithic converter is implemented in a 180 nm high-voltage BiCMOS technology. At VIN = 48 V, VOUT = 5 V, 50 mA load, 10 MHz switching frequency and 500 nH output inductance, the efficiency is measured to be increased by 14.4% compared to a conventional predictive dead time control. A peak efficiency of 80.9% is achieved at 12 V input.
Different sensor types using chemical and biochemical principles are described. The former are mainly gas sensors, the latter are applied especially to liquids. Those label-free direct detection methods are compared with applications where assays take advantage of labeled receptors.
Furthermore, selected applications in the area of gas sensors are discussed, and sensors for process control, point-of-care diagnostics, environmental analytics, and food analytics are reviewed. In addition, multiplexing approaches used in microplates and microarrays are described.
On account of the huge number of sensor types and the wide range of possible applications, only the most important ones are selected here.
Salivary gland tumors (SGTs) are a relevant, highly diverse subgroup of head and neck tumors whose entity determination can be difficult. Confocal Raman imaging in combination with multivariate data analysis may possibly support their correct classification. For the analysis of the translational potential of Raman imaging in SGT determination, a multi-stage evaluation process is necessary. By measuring a sample set of Warthin tumor, pleomorphic adenoma and non-tumor salivary gland tissue, Raman data were obtained and a thorough Raman band analysis was performed. This evaluation revealed highly overlapping Raman patterns with only minor spectral differences. Consequently, a principal component analysis (PCA) was calculated and further combined with a discriminant analysis (DA) to enable the best possible distinction. The PCA-DA model was characterized by accuracy, sensitivity, selectivity and precision values above 90% and validated by predicting model-unknown Raman spectra, of which 93% were classified correctly. Thus, we consider our PCA-DA model suitable for discriminating and predicting parotid tumor and non-tumor salivary gland tissue. For evaluation of the translational potential, further validation steps are necessary.
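The PCA-plus-discriminant-analysis pipeline used here has a compact structure: project high-dimensional spectra onto a few principal components, then classify in that reduced space. The numpy-only sketch below illustrates the idea, substituting a nearest-centroid rule for the paper's discriminant analysis step; the spectra, labels, and component count are synthetic assumptions:

```python
import numpy as np

def fit_pca(X, n_components=2):
    """Find the leading principal axes of a spectra matrix X (samples x bands)."""
    mean = X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components].T  # projection matrix W (bands x components)

def classify_nearest_centroid(scores, labels, query):
    """Assign a query's PC scores to the closest class centroid
    (a simplified stand-in for the discriminant analysis step)."""
    classes = sorted(set(labels))
    centroids = {c: scores[np.array(labels) == c].mean(axis=0) for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(query - centroids[c]))
```

Model-unknown spectra, as in the abstract's validation, would be projected with the same mean and W before classification.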
Glioblastoma WHO IV belongs to a group of brain tumors that are still incurable. A promising treatment approach applies photodynamic therapy (PDT) with hypericin as a photosensitizer. To generate a comprehensive understanding of the photosensitizer-tumor interactions, the first part of our study is focused on investigating the distribution and penetration behavior of hypericin in glioma cell spheroids by fluorescence microscopy. In the second part, fluorescence lifetime imaging microscopy (FLIM) was used to correlate fluorescence lifetime (FLT) changes of hypericin to environmental effects inside the spheroids. In this context, 3D tumor spheroids are an excellent model system since they consider 3D cell–cell interactions and the extracellular matrix is similar to tumors in vivo. Our analytical approach considers hypericin as a probe molecule for FLIM and as a photosensitizer for PDT at the same time, making it possible to directly draw conclusions about the state and location of the drug in a biological system. The knowledge of both state and location of hypericin makes a fundamental understanding of the impact of hypericin PDT in brain tumors possible. Following different incubation conditions, the hypericin distribution in peripheral and central cryosections of the spheroids was analyzed. Both fluorescence microscopy and FLIM revealed a hypericin gradient towards the spheroid core for short incubation periods or small concentrations. On the other hand, a homogeneous hypericin distribution is observed for long incubation times and high concentrations. Especially, the observed FLT change is crucial for the PDT efficiency, since the triplet yield, and hence the O2 activation, is directly proportional to the FLT. Based on the FLT increase inside spheroids, an incubation time of 30 min is required to achieve the most suitable conditions for an effective PDT.
The early detection of head and neck cancer is a prolonged and challenging task. It requires a precise and accurate identification of tissue alterations as well as a distinct discrimination of cancerous from healthy tissue areas. A novel approach for this purpose uses microspectroscopic techniques with special focus on hyperspectral imaging (HSI) methods. Our proof-of-principle study presents the implementation and application of darkfield elastic light scattering spectroscopy (DF ELSS) as a non-destructive, high-resolution, and fast imaging modality to distinguish lingual healthy from altered tissue regions in a mouse model. The main aspect of our study deals with the comparison of two varying HSI detection principles, point-by-point and line scanning imaging, and whether one might be more appropriate in differentiating several tissue types. Statistical models are formed by deploying a principal component analysis (PCA) with Bayesian discriminant analysis (DA) on the elastic light scattering (ELS) spectra. Overall accuracy, sensitivity, and precision values of 98% are achieved for both models, whereas the overall specificity reaches 99%. An additional classification of model-unknown ELS spectra is performed. The predictions are verified with histopathological evaluations of identical HE-stained tissue areas to prove the model's capability of tissue distinction. In the context of our proof-of-principle study, we assess the Pushbroom PCA-DA model to be more suitable for tissue type differentiation and thus tissue classification. In addition to the HE-examination in head and neck cancer diagnosis, the usage of HSI-based statistical models might be conceivable in a daily clinical routine.
Changing requirements and qualification profiles of employees, increasingly complex digital systems up to artificial intelligence, missing standards for the seamless embedding of existing resources and unpredictable return on investment are just a few examples of the challenges an SME faces in the age of digitalisation. In most cases there is a lack of suitable tools and methods to support companies in the digital transformation of their value creation processes, but also a lack of training and learning materials. A European research project (BITTMAS - Business Transformation towards Digitalisation and Smart systems, ERASMUS+, 2016-1 DE02-KA202-003437) with international partners from science, associations and industry has addressed this issue and developed various methods and instruments to support SMEs. Within the scope of a literature search, 16 suitable digitalisation concepts for production and logistics were identified. Subsequently, a learning platform was created comprising a literature database with multivariable sorting options by branch and digitalisation keyword, a video gallery with basic and advanced knowledge, and a glossary, in order to provide the user with consolidated and structured specialist knowledge. The 16 identified concepts for transforming value-added processes in the context of digitalisation were transferred to the learning platform as online course modules, including test questions, using learning paths developed for coaching and training. A maturity model was developed and implemented in a self-assessment tool to identify the potential of digitalisation in production and logistics in relation to the company's current technological digitalisation level. As a result, the user receives one or more of the 16 potential digitalisation concepts as suggestions, or the delta of the necessary, not yet available enabler technologies is presented as a spider diagram.
For a successful implementation of the identified suitable digitalisation concepts in production and logistics, a further tool was developed to identify supplementary requirements for all company divisions and stakeholders in relation to the "digital transformation" in the form of a self-evaluation. This paper presents the methods and tools developed, the accompanying learning materials and the learning platform.
Forecasting demand is challenging. Various products exhibit different demand patterns. While demand may be constant and regular for one product, it may be sporadic for another, and when demand does occur, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today a large number of models are used to forecast on a test period. The model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper we show the possibility of using a machine learning classification algorithm, which predicts the best possible model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The machine learning classification algorithm achieves a mean ROC-AUC of 89%, which emphasizes the skill of the model.
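A classifier like the one described needs features that characterise a demand time series. One widely used characterisation — not necessarily the paper's feature set — is the Syntetos-Boylan scheme, which splits demand patterns by the average inter-demand interval (ADI) and the squared coefficient of variation (CV²) of the non-zero demand sizes. A minimal sketch of that feature-extraction step:

```python
import numpy as np

def demand_pattern(series):
    """Classify a demand series via ADI and CV^2 (Syntetos-Boylan scheme).

    ADI  : periods per non-zero demand occurrence
    CV^2 : squared coefficient of variation of the non-zero demand sizes
    Standard cut-offs: ADI = 1.32, CV^2 = 0.49.
    """
    x = np.asarray(series, dtype=float)
    nonzero = x[x > 0]
    adi = len(x) / len(nonzero)
    cv2 = (nonzero.std() / nonzero.mean()) ** 2 if len(nonzero) > 1 else 0.0
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"
```

Features of this kind could feed a classifier that predicts which forecasting model is likely to perform best, instead of back-testing every model on every series.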
Machine learning (ML) techniques are rapidly evolving, both in academia and practice. However, enterprises show different maturity levels in successfully implementing ML techniques. Thus, we review the state of adoption of ML in enterprises. We find that ML technologies are being increasingly adopted in enterprises, but that small and medium-sized enterprises (SME) are struggling with the introduction in comparison to larger enterprises. In order to identify enablers and success factors, we conduct a qualitative empirical study with 18 companies in different industries. The results show that especially SME fail to apply ML technologies due to insufficient ML know-how. However, partners and appropriate tools can compensate for this lack of resources. We discuss approaches to bridge the gap for SME.
The unprecedented acceleration in the dynamics of economic development and its dependence on global interactions makes predicting the future especially difficult. Nevertheless, an examination of long-term trends provides an opportunity to begin a discussion about what reality could await us tomorrow and how we want to deal with it. With this food-for-thought paper, the member institutes of the Fraunhofer Group for Innovation Research wish to present a selection of the trends that are destined to have a significant impact on innovation systems in the period leading up to 2030. Based on these trends, the paper derives theses for innovation in the year 2030 and describes the resulting tasks for business, politics, science and society.
Purpose: The purpose of this study was to investigate the value of the web representation of certain fashion hot spots and how these results can be shown on fashion maps in an illustrated way.
Design/methodology/approach: A new ranking was created and evaluated with a self-constructed index to gain solid results. Numbers were collected from Google, Instagram, Facebook, Twitter and web.alert.io. Additionally, fashion maps were created for an illustrative visualization of the results.
Findings: Compared with the ranking of a trend forecasting agency called Global Language Monitor, which conceived a ranking of non-virtual fashion cities, the web representation, and therefore the ranking of the research project, differs mainly in the position of the cities among the first 10, viz. the rank at which a city occurs, but less in the actual cities mentioned.
Research limitations: The research was limited to subjective analysis of data, leading to partly subjective results, as well as to the selected number of social media platforms that were used.
Originality/value: This is the first study to explore the web representation value of fashion metropolises in comparison to their non-virtual ranking. The results are partly based on previously existing findings concerning transformations of fashion cities or, in general, which cities hold the status of a fashion city.
The basic idea behind a wearable robotic grasp assistance system is to support people who suffer from severe motor impairments in daily activities. Such a system needs to act mostly autonomously and according to the user's intent. Vision-based hand pose estimation could be an integral part of a larger control and assistance framework. In this paper we evaluate the performance of egocentric monocular hand pose estimation for a robot-controlled hand exoskeleton in a simulation. For hand pose estimation we adopt a Convolutional Neural Network (CNN). We train and evaluate this network with computer graphics, created by our own data generator. In order to guide further design decisions, we focus in our experiments on two egocentric camera viewpoints tested on synthetic data with the help of a 3D-scanned hand model, with and without an exoskeleton attached to it. We observe that hand pose estimation with a wrist-mounted camera performs more accurately than with a head-mounted camera in the context of our simulation. Further, a grasp assistance system attached to the hand alters visual appearance and can improve hand pose estimation. Our experiment provides useful insights for the integration of sensors into a context sensitive analysis framework for intelligent assistance.
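Synthetic training data of the kind described here typically involves rendering a 3D hand model and projecting its joint positions into the chosen camera viewpoint. The minimal pinhole-camera sketch below illustrates that projection step; the focal length, principal point, and joint coordinates are assumptions, not the authors' generator:

```python
import numpy as np

def project_points(joints_3d, f, cx, cy):
    """Pinhole projection of 3D joints (camera frame, depth Z > 0) to pixels.

    joints_3d : N x 3 array of (X, Y, Z) points in the camera coordinate frame
    f         : focal length in pixels; (cx, cy) is the principal point
    """
    X, Y, Z = np.asarray(joints_3d, dtype=float).T
    u = f * X / Z + cx  # horizontal pixel coordinate
    v = f * Y / Z + cy  # vertical pixel coordinate
    return np.stack([u, v], axis=1)
```

Comparing a wrist-mounted against a head-mounted viewpoint then amounts to transforming the same joints into each camera's frame before projecting, and measuring the resulting pose estimation error.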
The number of publications in the field of breath analysis using different types of ion mobility spectrometers (IMS) has increased over the last few years. In this paper, the publications between 2010 and 2013 are reviewed with respect to different types of IMS such as differential mobility spectrometers, high-field asymmetric waveform ion mobility spectrometers and multi-capillary columns coupled to conventional IMS. The analytes detected by IMS and declared with significance to a specific medical question were considered further with respect to medical and analytical questions. In total, 42 different analytes were found to be detected using IMS on a high significance level and were compared to findings using other analytical methods with respect to the individual analyte.
Background: Conventional methods for lung cancer detection, including computed tomography (CT) and bronchoscopy, are expensive and invasive. Thus, there is still a need for an optimal lung cancer detection technique. Methods: The exhaled breath of 50 patients with lung cancer histologically proven by bronchoscopic biopsy samples (32 adenocarcinomas, 10 squamous cell carcinomas, 8 small cell carcinomas) was analyzed using ion mobility spectrometry (IMS) and compared with 39 healthy volunteers. As a secondary assessment, we compared adenocarcinoma patients with and without epidermal growth factor receptor (EGFR) mutation. Results: A decision tree algorithm could separate patients with lung cancer including adenocarcinoma, squamous cell carcinoma and small cell carcinoma. One hundred fifteen separated volatile organic compound (VOC) peaks were analyzed. Peak-2, noted as n-Dodecane using the IMS database, was able to separate values with a sensitivity of 70.0% and a specificity of 89.7%. Incorporating a decision tree algorithm starting with n-Dodecane, a sensitivity of 76% and a specificity of 100% were achieved. Comparing VOC peaks between adenocarcinoma and healthy subjects, n-Dodecane was able to separate values with a sensitivity of 81.3% and a specificity of 89.7%. Fourteen patients positive for EGFR mutation displayed a significantly higher n-Dodecane value than the 14 patients negative for EGFR (p<0.01), with a sensitivity of 85.7% and a specificity of 78.6%. Conclusion: In this prospective study, VOC peak patterns using a decision tree algorithm were useful in the detection of lung cancer. Moreover, n-Dodecane analysis from adenocarcinoma patients might be useful to discriminate the EGFR mutation.
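The root split of a decision tree like the one starting with n-Dodecane is simply a threshold on a single peak value. The sketch below fits such a one-level tree (a "stump") by exhaustive threshold search; it is illustrative only and assumes higher peak values indicate the positive class, which matches the direction reported for n-Dodecane here:

```python
def fit_stump(values, labels):
    """Fit a one-level decision tree: a threshold on a single VOC peak.

    values : peak intensities, one per subject
    labels : True for the positive class (e.g. cancer), False otherwise
    Assumes higher values indicate the positive class.
    Returns (training accuracy, threshold).
    """
    paired = sorted(zip(values, labels))
    best_acc, best_t = 0.0, None
    # Candidate thresholds: midpoints between consecutive sorted values
    for i in range(len(paired) - 1):
        t = (paired[i][0] + paired[i + 1][0]) / 2
        acc = sum((v > t) == y for v, y in paired) / len(paired)
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_acc, best_t
```

A full decision tree repeats this search recursively over the remaining peaks in each branch; sensitivity and specificity are then read off the resulting confusion matrix rather than raw accuracy.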
Additive Manufacturing (AM) is increasingly used in the industrial sector as a result of its continuous development. Within the Production Planning and Control (PPC) system, AM enables an agile response in detailed and process planning, especially for a large number of machines. For this purpose, a concept for a PPC system for AM is presented that takes into account the requirements for integration into the operational enterprise software system. Its technical applicability is demonstrated by selected implemented sections. The presented approach promises more efficient utilization of the machines and more flexible use.
Development work within an experimental environment, in which certain properties are investigated and optimized, requires many test runs and is therefore often associated with long execution times, costs and risks. This affects product, material and technology development in industry and research alike. New digital technologies offer the possibility to automate complex manual work steps cost-effectively, to increase the relevance of the results and to accelerate the processes many times over. In this context, this article presents a low-cost, modular and open-source machine vision system for test execution and evaluates it on the basis of a real industrial application. For this purpose, a methodology is presented for the automated execution of the load intervals, for process documentation, and for the evaluation of the generated data by means of machine learning to classify wear levels. The software and the mechanical structure are designed to be adaptable to different conditions and components and to a variety of tasks in industry and research. The mechanical structure is required for tracking the test object and represents a motion platform positioned independently via machine vision operators or machine learning. The state of the test object is evaluated by transfer learning after the initial documentation run. The manual procedure for classifying the visually recorded data on the state of the test object is described for the training material. This leads to increased resource efficiency on the material as well as on the personnel side: the significance of the tests performed is increased by the continuous documentation, and the responsible experts can be assigned time-efficiently. The presence and know-how of the experts are therefore only required for defined and decisive events during the execution of the experiments.
Furthermore, the generated data are suitable for later use as an additional source of data for predictive maintenance of the developed object.
Blockchain technology enables a common data basis among participants: entries are logged and the authenticity of the participants is guaranteed. In a relationship between customers and producers, this leads to verifiable cooperation, a major step as companies enter into service contracts based on the flow of many small transactions through machine-to-machine communication. This paper proposes an architecture that enables the creation and processing of orders between customers and producers via a blockchain-based production network. The handling of larger files that remain traceable via the blockchain is also shown, and the use of a public versus a permissioned blockchain for an application case is considered.
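The tamper-evident order log at the heart of such an architecture can be sketched as a simple hash chain; the block layout, field names and orders below are hypothetical illustrations, not the paper's actual design:

```python
# Hedged sketch: a minimal hash-chained order log showing how blockchain
# entries make a customer/producer transaction history tamper-evident.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_order(chain, order):
    """Link a new order block to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"order": order, "prev_hash": prev})

def verify(chain):
    """Recompute each link; any modified earlier entry breaks the chain."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_order(chain, {"customer": "C1", "part": "bracket", "qty": 10})
append_order(chain, {"customer": "C2", "part": "housing", "qty": 2})
```

Larger files would not be stored in such blocks directly; only their hashes would be chained, keeping the files traceable off-chain.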
The use of additive manufacturing technologies for industrial production is constantly growing. This technology differs from established production procedures. The areas of scheduling, detailed planning and sequence planning are particularly important for additive production due to the long print times and the flexible use of the production area. Therefore, production-relevant variables are considered and used for the production planning and control (PPC) of additive manufacturing machines. For this purpose, an optimization model is presented that represents a time-oriented build space utilization. In the implementation, a nesting algorithm is used to check the combinability of different models for each individual print job.
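The combinability check for a print job can be illustrated with a greedy shelf-nesting test on 2D part footprints; the plate size, parts and packing heuristic below are hypothetical simplifications of the nesting idea described above:

```python
# Hedged sketch: greedy shelf nesting to check whether a set of part
# footprints fits one build plate. All dimensions are hypothetical.

def fits_on_plate(parts, plate_w, plate_d):
    """parts: list of (width, depth) footprints. Sort by depth, fill rows
    left to right, and open a new row (shelf) when the current one is full."""
    parts = sorted(parts, key=lambda p: -p[1])
    x = y = row_depth = 0
    for w, d in parts:
        if w > plate_w or d > plate_d:
            return False               # part exceeds the plate outright
        if x + w > plate_w:            # current row full: start a new shelf
            y += row_depth
            x = row_depth = 0
        if y + d > plate_d:
            return False               # no room for another shelf
        x += w
        row_depth = max(row_depth, d)
    return True

# Hypothetical 200 x 200 mm plate: three parts combine into one print job
assert fits_on_plate([(100, 80), (100, 80), (150, 60)], 200, 200)
```

A time-oriented model would additionally weigh each candidate combination by its estimated print duration before committing it to the schedule.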
The promise of immutable documents to make it easier and less expensive for consumers and producers to collaborate in a verifiable way would represent enormous progress, especially as companies strive to establish service contracts based on the flow of many small transactions using machine-to-machine communication. Blockchain technology logs these data, verifies their authenticity and makes them available for service offers. This work presents an architecture for setting up order processing between consumers and producers using blockchain. In this way, technical feasibility is shown, and the special characteristics of blockchain production networks are discussed.
Additive manufacturing (AM) is a promising production method for many industrial sectors. For this application, industrial requirements such as high production volumes and coordinated implementation must be taken into account. These tasks of managing production facilities internally are carried out by the Production Planning and Control (PPC) information system. A key factor in planning and scheduling is the exact calculation of manufacturing times. For this purpose, we investigate the use of Machine Learning (ML) to predict the manufacturing times of AM facilities.
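As a toy illustration of predicting build times from job features, consider a one-feature least-squares fit of build hours against layer count; the feature, data and model below are made up, and the paper's actual ML models are not specified here:

```python
# Hedged sketch: least-squares prediction of AM build time from layer count.
# Feature choice and all training data are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for a single feature: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical training data: (layer count, observed build hours)
layers = [100, 200, 300, 400]
hours = [2.0, 4.0, 6.0, 8.0]

slope, intercept = fit_line(layers, hours)

def predict(n_layers):
    return slope * n_layers + intercept
```

Real manufacturing-time models would combine several features (part volume, support structures, nesting density) and richer ML methods, but the fit/predict split shown here is the common skeleton.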
Pre-clinical evaluation of advanced nerve guide conduits using a novel 3D in vitro testing model
(2018)
Autografts are the current gold standard for large peripheral nerve defects in the clinic, despite frequently occurring side effects such as donor site morbidity. Hollow nerve guidance conduits (NGC) are proposed alternatives to autografts but have failed to bridge gaps exceeding 3 cm in humans. Internal NGC guidance cues such as microfibres are believed to enhance hollow NGCs by giving additional physical support for the directed regeneration of Schwann cells and axons. In this study, we report a new 3D in vitro model that allows the evaluation of different intraluminal fibre scaffolds inside a complete NGC. The performance of electrospun polycaprolactone (PCL) microfibres inside 5 mm long polyethylene glycol (PEG) conduits was investigated in neuronal cell and dorsal root ganglion (DRG) cultures in vitro. Z-stack confocal microscopy revealed the aligned orientation of neuronal cells along the fibres throughout the whole NGC length and depth. The number of living cells in the centre of the scaffold was not significantly different from the tissue culture plastic (TCP) control. For ex vivo analysis, DRGs were placed on top of fibre-filled NGCs to simulate the proximal nerve stump. Within 21 days of culture, Schwann cells and axons infiltrated the conduits along the microfibres by 2.2 ± 0.37 mm and 2.1 ± 0.33 mm, respectively. We conclude that this in vitro model can help define internal NGC scaffolds in the future by comparing different fibre materials, composites and dimensions in one setup prior to animal testing.
Since its early beginnings in the form of correspondence schools, e-learning has generally sought to provide flexibility and high-quality education. While these are indeed noble intentions, the reality of today's connected world demands that such programs focus on a different purpose. As the main purpose of e-learning shifts, so must the design approaches.
Rethinking e-learning requires open-mindedness on the part of academia, designers, cyber educators, legislators, IT staff and administrators, but also the learners themselves. All who are involved in or impacted by e-learning programs must speak up and finally share their perspectives, but who will be listening? The key to rethinking e-learning lies in the ability of the stakeholders to listen to each other and make decisions which are in the best interest of the learner.
This chapter will propose a new purpose for e-learning and explore promising possibilities for learner-centered design. The future of e-learning can be shaped by the decisions made today, but before any decisions can be made, one must acknowledge e-learning's successes as well as its shortcomings. The purpose of this chapter is to encourage those who are impacted by e-learning to think about the future.
There is no denying that organizations, whether domestic or global, whether educational, governmental, or business, are undergoing rapid transformation. However, what is causing it? Prompted by the need to remain relevant and competitive, organizations constantly try to reinvent themselves. Those that do not, according to the laws of economics, will simply serve no purpose and will eventually cease to exist. Regardless of sector or industry, an organization's success pivots around its human talent. Hence, it is crucial to manage it and cultivate certain traits, knowledge, and skills. In today's global economy, organizations are more interconnected than ever before, and thus the challenges they face require that employees possess not only expert knowledge, problem-solving, cross-cultural, and cross-functional teaming skills, but also good communication skills and agile thinking.
Many researchers have explored the phenomenon of intercultural communication since Edward T. Hall first brought it to light in the late 1950s. Although the literature is quite extensive, the ongoing sociopolitical struggles are evidence that even in the twenty-first century, society has limited intercultural as well as intracultural communication competence. This limited understanding continues to bring about discord in every facet of life, including work.
The modern workforce is expected to possess certain knowledge, skills, and attitudes that are inherently different from those expected from previous generations. Due to globalization, intercultural competence and highly effective communication skills are at the top of the list; a working knowledge of English as the lingua franca of today's business world can be considered a first step.
In thermopervaporation, the same economically favorable driving force as in membrane distillation is used, i.e., a temperature difference between feed and permeate drives the transport, but with non-porous thin-film composite membranes. Because there are no membrane pores that can be wetted, long-term operational stability can be achieved with the appropriate coating layer, though normally at the cost of a lower flux compared to membrane distillation with porous hydrophobic membranes.
Porous asymmetric PVDF membranes were made to achieve low permeation resistance and pores that could be overcoated with polyelectrolyte polymers. This coating prevents pore wetting and strongly reduces adsorption of organic substances.
These membranes showed a high permeation rate for water due to a structure of phase-separated hydrophilic and hydrophobic three-dimensional domains. Depending on the operational parameters, the water permeation rates of these composite membranes are between 6 and 12 l/(h m²) for a 2% saline feed at a feed temperature of 60 °C and a permeate temperature of 40 °C. This is only a slight reduction of 10–15% in permeation rate compared to membrane distillation with porous hydrophobic membranes.
In a whey dewatering experiment, this membrane showed constant performance over 4 days of intermittent operation and remained stable during cleaning with a strong alkaline solution.
A vapor permeation process for the separation of aromatic compounds from aliphatic compounds
(2014)
A number of rubbery and glassy membranes have been prepared and evaluated in vapor permeation experiments for separation of aromatic/aliphatic mixtures, using 5/95 (wt:wt) toluene/methylcyclohexane (MCH) as a model solution. Candidate membranes that met the required toluene/MCH selectivity of ≥ 10 were identified. The stability of the candidate membranes was tested by cycling the experiment between higher toluene concentrations and the original 5 wt% level. The best membrane produced has a toluene permeance of 280 gpu and a toluene/MCH selectivity of 13 when tested with a vapor feed of the model mixture at its boiling point and at atmospheric pressure. When a series of related membrane materials are compared, there is a sharp trade-off between membrane permeance and membrane selectivity. A process design study based on the experimental results was conducted. The best preliminary membrane design uses 45% of the energy of a conventional distillation process.
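The relation between component permeances and permeate enrichment can be sketched with a simplified estimate that ignores permeate-side partial pressures (a common textbook simplification, not the study's process model); the 5 wt% toluene feed and the selectivity of 13 come from the abstract above:

```python
# Hedged sketch: permeate enrichment for a binary toluene/MCH mixture when
# each component's flux is taken as proportional to permeance x feed fraction
# (permeate-side partial pressures neglected).

def permeate_fraction(x_tol, selectivity):
    """Toluene fraction in the permeate for a binary feed; MCH permeance
    is normalized to 1, toluene permeance to the given selectivity."""
    x_mch = 1.0 - x_tol
    j_tol = selectivity * x_tol   # toluene flux (relative units)
    j_mch = 1.0 * x_mch           # MCH flux (relative units)
    return j_tol / (j_tol + j_mch)

y = permeate_fraction(0.05, 13)
# A 5% toluene feed is enriched to roughly 41% toluene in the permeate.
```

This illustrates the permeance/selectivity trade-off noted above: a higher selectivity raises permeate purity, but in the tested membrane series it came at the cost of permeance.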
Long-term stability of membranes in membrane distillation operation remains a problem that prevents the industrial breakthrough of this separation process. Fouling and slow pore wetting are the main reasons for this.
Membrane distillation membranes were made by a non-solvent induced phase separation (NIPS) process, rendering the membrane asymmetric to achieve low permeation resistance and pores that can be overcoated with polyelectrolyte polymers, thus yielding thermopervaporation membranes. These membranes prevent pore wetting and may strongly reduce adsorption of organic substances on the hydrophobic surfaces typically used for membrane distillation, leading to long-term operational stability in dewatering, including stable membrane cleaning.
Asymmetric PVDF membranes were coated with a cation-exchange polyelectrolyte, yielding a very thin, defect-free layer with a high permeation rate for water due to its structure of phase-separated hydrophilic and hydrophobic three-dimensional domains.
Military organizations have special features, such as following different organizational laws in times of peace and war and their specific embeddedness in society and politics. Especially the latter aspect has made the military an important object of study since the beginnings of modern sociology. In the wake of establishing specific sociological accounts, military sociology developed as a field dedicated to the different facets of the military. This research draws on different theoretical perspectives but has hardly embraced the frameworks of the economics and sociology of conventions (EC/SC) so far. The aim of this chapter is to explore and demonstrate the potential of this approach. In a first step, the state of the art of military sociology research is outlined, and potential avenues for analyzing military forces based on EC/SC are identified. It is argued that especially the connection to organizational theory (the military as organization) and to civil-military relations, including leadership and professionalism, offers starting points. After introducing existing studies addressing military-related topics with reference to EC/SC, relevant concepts and approaches of convention theory that prove particularly enriching for military research are discussed. An outlook on possible further fields and topics of research is given to concretize what an inclusion of the EC/SC perspective could look like.
The performance and scalability of modern data-intensive systems are limited by massive data movement of growing datasets across the whole memory hierarchy to the CPUs. Such traditional processor-centric DBMS architectures are bandwidth- and latency-bound. Processing-in-Memory (PIM) designs seek to overcome these limitations by integrating memory and processing functionality on the same chip. PIM targets near- or in-memory data processing, leveraging the greater in-situ parallelism and bandwidth.
In this paper, we introduce pimDB and provide an initial comparison of processor-centric and PIM-DBMS approaches under different aspects, such as scalability and parallelism, cache-awareness, or PIM-specific compute/bandwidth tradeoffs. The evaluation is performed end-to-end on a real PIM hardware system from UPMEM.
Even though near-data processing (NDP) can provably reduce data transfers and increase performance, current NDP is utilized solely in read-only settings. Synchronization and invalidation mechanisms between host and smart storage that are slow or tedious to implement make NDP support for data-intensive update operations difficult. In this paper, we introduce a low-latency cache-coherent shared lock table for update NDP settings in disaggregated memory environments. It utilizes the novel CCIX interconnect technology and is integrated into neoDBMS, a near-data processing DBMS for smart storage. Our evaluation indicates end-to-end lock latencies of ∼80–100 ns and robust performance under contention.
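Conceptually, such a shared lock table maps record identifiers to lock state via hashed buckets that both host and device can inspect. A minimal host-side sketch (the paper's CCIX-coherent, hardware-resident implementation is not reproduced; names and structure below are hypothetical):

```python
# Hedged sketch: a hash-bucketed shared lock table with per-bucket latches,
# illustrating the data structure idea only, not the CCIX implementation.
import threading

class LockTable:
    def __init__(self, n_buckets=1024):
        self.buckets = [set() for _ in range(n_buckets)]       # held record ids
        self.latches = [threading.Lock() for _ in range(n_buckets)]

    def _bucket(self, record_id):
        return hash(record_id) % len(self.buckets)

    def try_lock(self, record_id):
        """Non-blocking acquire: False if another party holds the lock."""
        b = self._bucket(record_id)
        with self.latches[b]:
            if record_id in self.buckets[b]:
                return False
            self.buckets[b].add(record_id)
            return True

    def unlock(self, record_id):
        b = self._bucket(record_id)
        with self.latches[b]:
            self.buckets[b].discard(record_id)

table = LockTable()
```

In a coherent-interconnect setting, the point is that both sides operate on the same table without message-based invalidation, which is what keeps the lock path in the nanosecond range.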
Multi-versioning and MVCC are the foundations of many modern DBMSs. Under mixed workloads and large datasets, the creation of the transactional snapshot can become very expensive, as long-running analytical transactions may request old versions, residing on cold storage, for reasons of transactional consistency. Furthermore, analytical queries operate on cold data, stored on slow persistent storage. Due to the poor data locality, snapshot creation may cause massive data transfers and thus lower performance. Given the current trend towards computational storage and near-data processing, it has become viable to perform such operations in-storage to reduce data transfers and improve scalability. neoDBMS is a DBMS designed for near-data processing and computational storage. In this paper, we demonstrate how neoDBMS performs snapshot computation in-situ. We showcase different interactive scenarios, where neoDBMS outperforms PostgreSQL 12 by up to 5×.
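The expensive step in snapshot creation is deciding, per record, which version a given snapshot may see. The core MVCC visibility rule can be sketched as follows (field names and timestamps are hypothetical; this simplifies commit-timestamp-based visibility and omits in-flight transactions):

```python
# Hedged sketch: MVCC snapshot visibility over a per-record version chain.
# A version is visible if it was committed at or before the snapshot
# timestamp and not yet superseded/deleted at that time.

def visible(version, snapshot_ts):
    created = version["xmin_commit_ts"]
    deleted = version.get("xmax_commit_ts")
    return (created is not None and created <= snapshot_ts
            and (deleted is None or deleted > snapshot_ts))

def read_snapshot(version_chain, snapshot_ts):
    """Walk the chain newest-to-oldest; return the first visible value."""
    for v in version_chain:
        if visible(v, snapshot_ts):
            return v["value"]
    return None   # record did not exist at snapshot time

# Hypothetical version chain for one record (newest first): v1 was
# superseded by v2 at commit timestamp 50.
chain = [
    {"value": "v2", "xmin_commit_ts": 50, "xmax_commit_ts": None},
    {"value": "v1", "xmin_commit_ts": 10, "xmax_commit_ts": 50},
]
```

The in-situ approach moves exactly this walk to the storage side: old versions on cold storage are resolved where they reside, instead of being transferred to the host only to be discarded.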
The Internet of Things (IoT) refers to the interconnectedness of physical objects and works by equipping the latter with sensors and actuators as a means to connect to the internet. The number of connected things has increased threefold over the past five years. Consequently, firms expect the IoT to become a source of new technology-driven business models. However, only a few early adopters have started to install and use IoT appliances on a frequent basis, so it is still unclear which factors drive the technological acceptance of IoT appliances. Confronting this gap in current research, the present paper explores how IoT appliances are conceptually defined, which factors drive the technological acceptance of IoT appliances, and how firms can use the results to improve value propositions in corresponding business models. It is found that IoT appliance vendors need to support a broad focus, as potential buyers exhibit a large variety. As a conclusion from this insight, the paper illustrates some flexible marketing strategies.
Maintenance is an increasingly complex and knowledge-intensive field. In order to address these challenges, assistance systems based on augmented, mixed, or virtual reality can be applied. Therefore, the objective of this paper is to present a framework that can be used to identify, select, and implement an assistance system based on reality technology in the maintenance environment. The development of the framework is based on a systematic literature review and subject matter expert interviews. The framework provides the best technological and economic solution in several steps. The validation of the framework is carried out through a case study.