Tech hubs (THs) and cognate structures are nowadays ubiquitous in the innovation ecosystems of Sub-Saharan African (SSA) countries. However, the concept of THs is fuzzy due to the lack of a clear and universally accepted definition. This ambiguity is further compounded by the diverse range of organizations that self-identify as hubs or are categorized as such by others. As a result, research on THs in SSA has remained limited. Against the backdrop of established research on the interconnectedness of technology, innovation and entrepreneurship in different organizational forms, this paper provides fresh insights into the study of THs in SSA. To advance future research, it first reveals what is special about THs in SSA and how they relate to existing concepts; I particularly argue that they contour a fourth-wave model of incubation. Second, four main categories are unfolded to delineate THs in SSA, laying the cornerstone for future research.
New business opportunities have emerged from the potential of the Internet and related digital technologies, like the Internet of Things, services computing, artificial intelligence, cloud, edge, and fog computing, social networks, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Companies are transforming their strategy and product base, as well as their culture, processes and information systems, to adopt digital transformation or to strive for digital leadership. Digitalization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, microservices, or other micro-granular elements. Digitalization has a substantial impact on architecting the open and complex world of highly distributed digital services and products as part of a new digital enterprise architecture, which structures and directs service-dominant digital products and services. The present research paper investigates mechanisms for supporting the evolution of digital enterprise architectures with user-friendly methods and instruments of interaction, visualization, and intelligent decision management during the exploration of multiple and interconnected perspectives by an architecture management cockpit.
Artificial Intelligence-based Assistants (AIAs) are spreading quickly both in homes and offices. They have already left their original habitat of "intelligent speakers" providing easy access to music collections. They initiated a multitude of new devices and are already populating devices such as TV sets. Characteristic of intelligent digital assistants is the formation of platforms around their core functionality. Thus, the AI capabilities of the assistants are used to offer new services and create new interfaces for business processes. There are positive network effects between the assistants and the services as well as within the services. Therefore, many companies see the need to get involved in the field of digital assistants but lack a framework to align their initiatives with their corporate strategies. In order to lay the foundation for a comprehensive method, we are therefore investigating intelligent digital assistants. Based on this analysis, we are developing a framework of strategic opportunities and challenges.
Handling complexity in modern software engineering : editorial introduction to issue 32 of CSIMQ
(2022)
The potential of the Internet and related digital technologies, such as the Internet of Things (IoT), cognition and artificial intelligence, data analytics, services computing, cloud computing, mobile systems, collaboration networks, and cyber-physical systems, is both a strategic driver and an enabler of modern digital platforms with fast-evolving ecosystems of intelligent services for digital products. This issue of CSIMQ presents three recent articles on modern software engineering. First, we focus on continuous software development and place it in the context of software architectures and digital transformation. The first contribution is followed by the description of the basis of specific security requirements and adequate digital monitoring mechanisms. Finally, we present a practical example of the digital management of livestock farming.
The digitization of our society changes the way we live, work, learn, communicate, and collaborate. The Internet of Things, enterprise social networks, adaptive case management, mobility systems, analytics for big data, and cloud services environments are emerging to support smart connected products and services and the digital transformation. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and service-oriented enterprise architectures. Our aim is to support flexibility and agile transformations for both business domains and related information technology. The present research paper investigates mechanisms for decision analytics in the context of multi-perspective explorations of enterprise services and their digital enterprise architectures by extending original architecture reference models with state-of-the-art elements for agile architectural engineering for the digitization and collaborative architectural decision support. The paper's context focuses on digital transformations of business and IT and integrates fundamental mappings between adaptable digital enterprise architectures and service-oriented information systems. We put a spotlight on the example domain of the Internet of Things.
An interactive clothing design and a personalized virtual display with the user's own face are presented in this paper to meet the requirements of personalized clothing customization. A customer-interactive clothing design approach based on genetic engineering ideas is analyzed, taking the suit as an example. Thus, customers can rearrange the clothing style elements, choose available colors and fabrics, and come up with their own personalized suit style. A web 3D customization prototype system for personalized clothing is developed based on Unity3D and VR technology. The layout of the structure and functions, combined with the flow of the system, is given. Practical issues such as 3D face scanning, suit style design, fabric selection, and accessory choices are also addressed. Tests of the prototype system indicate that it can show realistic clothing and fabric effects and offer an effective visual and customization experience to users.
Background
A pressing task of electrocardiographic examinations is to increase the reliability of diagnosing the condition of the heart. Within the framework of this task, an important direction is the solution of the inverse problem of electrocardiography, based on the processing of electrocardiographic signals of multichannel cardio leads at known electrode coordinates in these leads (Titomir et al. Noninvasive electrocardiotopography, 2003), (Macfarlane et al. Comprehensive Electrocardiology, 2nd ed. (Chapter 9), 2011).
Results
In order to obtain more detailed information about the electrical activity of the heart, we carry out a reconstruction of the distribution of equivalent electrical sources on the heart surface. In this area, we perform the reconstruction of the equivalent sources over the cardiac cycle at relatively low hardware cost. ECG maps of electrical potentials on the surface of the torso (TSPM) and electrical sources on the surface of the heart (HSSM) were studied at different times of the cardiac cycle. We carried out a visual and quantitative comparison of these maps in the presence of pathological regions of different localization. For this purpose we used a model of the heart's electrical activity based on cellular automata.
Conclusions
The model of cellular automata allows us to consider the processes of heart excitation in the presence of pathological regions of various sizes and localizations. It is shown that changes in the distribution of electrical sources on the surface of the epicardium in the presence of pathological areas with disturbances in the conduction of heart excitation are much more noticeable than changes in ECG maps on the torso surface.
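To make the modelling approach above concrete, the following is a minimal excitable-medium cellular automaton in Python, in the spirit of the Greenberg–Hastings models commonly used for excitation waves (an illustrative sketch only, not the authors' implementation; the pathological region is a hypothetical non-conducting block):

    import numpy as np

    # States: 0 = resting, 1 = excited, 2..4 = refractory,
    # -1 = pathological (non-conducting). Periodic boundaries for brevity.
    RESTING, EXCITED, REFRACTORY_STEPS = 0, 1, 4

    def step(grid):
        new = grid.copy()
        excited = grid == EXCITED
        # Does any 4-neighbour of a cell carry excitation?
        nbrs = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
                np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
        new[(grid == RESTING) & nbrs] = EXCITED           # excitation spreads
        aging = (grid >= EXCITED) & (grid < REFRACTORY_STEPS)
        new[aging] = grid[aging] + 1                      # move through refractory period
        new[grid == REFRACTORY_STEPS] = RESTING           # recover
        new[grid == -1] = -1                              # pathology blocks conduction
        return new

    grid = np.zeros((64, 64), dtype=int)
    grid[20:30, 20:30] = -1        # hypothetical pathological region
    grid[32, 32] = EXCITED         # stimulus
    for _ in range(50):
        grid = step(grid)          # the wave propagates around the blocked area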
Purpose
Computerized medical image processing assists neurosurgeons in localizing tumours precisely. It plays a key role in recent image-guided neurosurgery. Hence, we developed a new open-source toolkit, namely Slicer-DeepSeg, for efficient and automatic brain tumour segmentation based on deep learning methodologies for aiding clinical brain research.
Methods
Our developed toolkit consists of three main components. First, Slicer-DeepSeg extends the 3D Slicer application and thus provides support for multiple input/output data formats and 3D visualization libraries. Second, Slicer core modules offer powerful image processing and analysis utilities. Third, the Slicer-DeepSeg extension provides a customized GUI for brain tumour segmentation using deep learning-based methods.
Results
The developed Slicer-DeepSeg was validated using a public dataset of high-grade glioma patients. The results showed that our proposed platform considerably outperforms other 3D Slicer cloud-based approaches.
Conclusions
The developed Slicer-DeepSeg allows the development of novel AI-assisted medical applications in neurosurgery. Moreover, it can enhance the outcomes of computer-aided diagnosis of brain tumours. The open-source Slicer-DeepSeg is available at github.com/razeineldin/Slicer-DeepSeg.
Intraoperative imaging can assist neurosurgeons in defining brain tumours and other surrounding brain structures. Interventional ultrasound (iUS) is a convenient modality with fast scan times. However, iUS data may suffer from noise and artefacts which limit their interpretation during brain surgery. In this work, we use two deep learning networks, namely UNet and TransUNet, to perform automatic and accurate segmentation of the brain tumour in iUS data. Experiments were conducted on a dataset of 27 iUS volumes. The outcomes show that using a transformer with UNet is advantageous, providing efficient segmentation by modelling long-range dependencies within each iUS image. In particular, the enhanced TransUNet was able to predict cavity segmentation in iUS data with an inference rate of more than 125 FPS. These promising results suggest that deep learning networks can be successfully deployed to assist neurosurgeons in the operating room.
Intraoperative brain deformation, so-called brain shift, affects the applicability of preoperative magnetic resonance imaging (MRI) data to assist the procedures of intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on the architecture of 3D convolutional neural networks, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the retrospective evaluation of cerebral tumors (RESECT) dataset. This study showed that our proposed method outperforms other registration methods in previous studies, with an average mean squared error (MSE) of 85. Moreover, this method can register three 3D MRI-iUS pairs in less than a second, improving the expected outcomes of brain surgery.
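For illustration, the core operation of such deep learning-based deformable registration, resampling the moving MRI with a predicted dense displacement field, can be sketched in PyTorch as follows (a generic sketch under assumed tensor conventions, not the paper's code):

    import torch
    import torch.nn.functional as F

    def warp(moving, disp):
        """Warp a moving volume with a dense displacement field.

        moving: (N, 1, D, H, W) image; disp: (N, 3, D, H, W) offsets in
        normalized [-1, 1] coordinates, channel order (x, y, z) as
        expected by grid_sample (an assumed convention for this sketch).
        """
        n, _, d, h, w = moving.shape
        # Identity sampling grid in normalized coordinates.
        zz, yy, xx = torch.meshgrid(
            torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
            torch.linspace(-1, 1, w), indexing="ij")
        identity = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0)
        grid = identity.expand(n, -1, -1, -1, -1) + disp.permute(0, 2, 3, 4, 1)
        return F.grid_sample(moving, grid, align_corners=True)

    mri = torch.rand(1, 1, 32, 32, 32)             # toy "pre-operative MRI"
    field = 0.05 * torch.randn(1, 3, 32, 32, 32)   # toy displacement field
    warped = warp(mri, field)                      # MRI resampled toward iUS space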
Purpose: Gliomas are the most common and aggressive type of brain tumors due to their infiltrative nature and rapid progression. The process of distinguishing tumor boundaries from healthy cells is still a challenging task in the clinical routine. Fluid attenuated inversion recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, namely DeepSeg, for fully automated detection and segmentation of the brain lesion using FLAIR MRI data.
Methods: The developed DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding and decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is inserted into the decoder part to get the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as residual neural network (ResNet), dense convolutional network (DenseNet), and NASNet have been utilized in this study.
Results: The proposed deep learning architectures have been successfully tested and evaluated online on the MRI datasets of the brain tumor segmentation (BraTS 2019) challenge, including 336 cases as training data and 125 cases as validation data. The Dice and Hausdorff distance scores of the obtained segmentation results are about 0.81 to 0.84 and 9.8 to 19.7, respectively.
Conclusion: This study showed successful feasibility and comparative performance of applying different deep learning models in a new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is open source and freely available at https://github.com/razeineldin/DeepSeg/.
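As a rough illustration of the encoder-decoder relationship described in the Methods, here is a toy PyTorch sketch (heavily simplified and hypothetical; the released DeepSeg repository is the reference implementation):

    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class TinySeg(nn.Module):
        """Toy encoder-decoder in the spirit of DeepSeg (2D FLAIR slices)."""
        def __init__(self):
            super().__init__()
            self.enc1, self.enc2 = block(1, 16), block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec = block(32, 16)          # 16 upsampled + 16 skip channels
            self.head = nn.Conv2d(16, 1, 1)   # full-resolution probability map

        def forward(self, x):
            e1 = self.enc1(x)                 # spatial feature extraction (encoder)
            e2 = self.enc2(self.pool(e1))     # semantic map at half resolution
            d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # decoder + skip
            return torch.sigmoid(self.head(d))

    probs = TinySeg()(torch.rand(1, 1, 128, 128))  # toy FLAIR slice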
Accurate and safe neurosurgical intervention can be affected by intra-operative tissue deformation, known as brain-shift. In this study, we propose an automatic, fast, and accurate deformable method, called iRegNet, for registering pre-operative magnetic resonance images to intra-operative ultrasound volumes to compensate for brain-shift. iRegNet is a robust end-to-end deep learning approach for the non-linear registration of MRI-iUS images in the context of image-guided neurosurgery. Pre-operative MRI (as the moving image) and iUS (as the fixed image) are first fed into our convolutional neural network, after which a non-rigid transformation field is estimated. The MRI image is then transformed to the iUS coordinate system using the output displacement field. Extensive experiments have been conducted on two multi-location databases, BITE and RESECT. Quantitatively, iRegNet reduced the mean landmark errors from pre-registration values of 4.18 ± 1.84 mm and 5.35 ± 4.19 mm to 1.47 ± 0.61 mm and 0.84 ± 0.16 mm for the BITE and RESECT datasets, respectively. Additional qualitative validation was conducted by two expert neurosurgeons through overlaying MRI-iUS pairs before and after the deformable registration. Experimental findings show that our proposed iRegNet is fast and achieves accuracies outperforming state-of-the-art approaches. Furthermore, iRegNet delivers competitive results even on non-trained images, as proof of its generality, and can therefore be valuable in intra-operative neurosurgical guidance.
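For clarity, the mean landmark errors reported above are average Euclidean distances between corresponding landmarks; a minimal sketch of the computation (with toy coordinates, not BITE or RESECT data):

    import numpy as np

    def mean_landmark_error(pts_fixed, pts_moved):
        """Mean Euclidean distance (mm) between corresponding 3D landmarks,
        e.g. expert landmarks in iUS vs. the registered MRI landmarks.
        pts_fixed, pts_moved: (K, 3) arrays of coordinates in mm."""
        return float(np.linalg.norm(pts_fixed - pts_moved, axis=1).mean())

    # Toy example: 4 landmark pairs with small residual misalignment.
    fixed = np.array([[10.0, 5.0, 3.0], [8.0, 2.0, 1.0],
                      [4.0, 4.0, 4.0], [0.0, 1.0, 2.0]])
    moved = fixed + np.random.normal(scale=0.5, size=fixed.shape)
    print(mean_landmark_error(fixed, moved))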
Purpose
Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal obstacle to applying these methods in clinical practice.
Methods
In this study, we propose a NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent.
Results
NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using the magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on the internal layers of a segmentation CNN.
Conclusion
Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI.
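As an illustration of the kind of explanation method NeuroXAI bundles, a vanilla-gradient saliency map can be computed in PyTorch as follows (an illustrative sketch with a toy classifier, not NeuroXAI's actual API):

    import torch

    def gradient_saliency(model, image, target_class):
        """Vanilla-gradient saliency: how much each input pixel influences
        the score of the target class. image: (1, C, H, W) tensor."""
        image = image.clone().requires_grad_(True)
        score = model(image)[0, target_class]
        score.backward()                         # d(score) / d(pixel)
        return image.grad.abs().max(dim=1)[0]    # (1, H, W) attention map

    # Toy stand-in classifier; any differentiable model works the same way.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))
    sal = gradient_saliency(model.eval(), torch.rand(1, 1, 64, 64), target_class=1)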
Recent advances in artificial intelligence have enabled promising applications in neurosurgery that can enhance patient outcomes and minimize risks. This paper presents a novel system that utilizes AI to aid neurosurgeons in precisely identifying and localizing brain tumors. The system was trained on a dataset of brain MRI scans and utilized deep learning algorithms for segmentation and classification. Evaluation of the system on a separate set of brain MRI scans demonstrated an average Dice similarity coefficient of 0.87. The system was also evaluated through a user experience test involving the Department of Neurosurgery at the University Hospital Ulm, with results showing significant improvements in accuracy, efficiency, and reduced cognitive load and stress levels. Additionally, the system has demonstrated adaptability to various surgical scenarios and provides personalized guidance to users. These findings indicate the potential for AI to enhance the quality of neurosurgical interventions and improve patient outcomes. Future work will explore integrating this system with robotic surgical tools for minimally invasive surgeries.
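For reference, the Dice similarity coefficient reported above measures the overlap between predicted and ground-truth masks, DSC = 2|A∩B| / (|A| + |B|); a minimal sketch with toy masks:

    import numpy as np

    def dice(pred, truth):
        """Dice similarity coefficient between two binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        return 2.0 * inter / denom if denom else 1.0

    a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # predicted tumour mask
    b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1   # ground-truth mask
    print(dice(a, b))   # 0.5625 for these two shifted 4x4 squares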
DMOS transistors are often subject to high power dissipation and thus substantial self-heating. This limits their safe operating area because very high device temperatures can lead to thermal runaway and subsequent destruction. Because the peak temperature usually occurs only in a small region of the device, it is possible to redistribute part of the dissipated power from the hot region to the cooler device areas. In this way, the peak temperature is reduced, whereas the total power dissipation remains the same. Assuming that a certain temperature must not be exceeded for safe operation, the improved device is now capable of withstanding higher amounts of energy with an unchanged device area. This paper presents two simple methods to redistribute the power dissipation density and thus lower the peak device temperature. The presented methods only require layout changes. They can easily be applied to modern power technologies without the need for process modifications. Both methods are implemented in test structures and investigated by simulations and measurements.
Vehicles have so far been improved in terms of energy-efficiency and safety mainly by optimising the engine and the power train. However, there are opportunities to increase energy-efficiency and safety by adapting the individual driving behaviour to the given driving situation. In this paper, an improved rule match algorithm is introduced, which is used in the expert system of a human-centred driving system. The goal of the driving system is to optimise the driving behaviour in terms of energy-efficiency and safety by giving recommendations to the driver. The improved rule match algorithm checks the incoming information against the driving rules to recognise any violation of a driving rule. The needed information is obtained by monitoring the driver, the current driving situation as well as the car, using in-vehicle sensors and serial-bus systems. On the basis of the detected broken driving rules, the expert system creates individual recommendations in terms of energy-efficiency and safety, which allow eliminating bad driving habits while considering the driver's needs.
Detecting the adherence of driving rules in an energy-efficient, safe and adaptive driving system
(2016)
An adaptive and rule-based driving system is being developed that tries to improve the driving behavior in terms of energy-efficiency and safety by giving recommendations. To do so, the driving system has to monitor adherence to driving rules by matching the rules against the driving behavior. However, existing rule matching algorithms are not sufficient, as the data within a driving system changes frequently. In this paper a rule matching algorithm is introduced that is able to handle frequently changing data within the context of the driving system. 15 journeys were used to evaluate the performance of the rule matching algorithms. The results showed that the introduced algorithm outperforms existing algorithms in the context of the driving system. Thus, the introduced algorithm is suited for matching frequently changing data against rules with higher performance, which is why it will be used in the driving system for the detection of broken energy-efficiency- or safety-relevant driving rules.
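To make the idea of rule matching concrete, the following is a minimal Python sketch that checks frequently changing in-vehicle signals against driving rules (rule names and signal fields are hypothetical; the paper's algorithm is considerably more elaborate):

    # Each rule pairs a name with a condition over the current signal snapshot.
    RULES = [
        ("harsh_acceleration", lambda d: d["throttle"] > 0.8 and d["speed"] < 30),
        ("speeding",           lambda d: d["speed"] > d["speed_limit"]),
    ]

    def broken_rules(snapshot):
        """Return the names of driving rules the current snapshot violates."""
        return [name for name, cond in RULES if cond(snapshot)]

    # Signals change with every sample of the data stream.
    stream = [
        {"speed": 55, "speed_limit": 50, "throttle": 0.3},
        {"speed": 20, "speed_limit": 50, "throttle": 0.9},
    ]
    for snapshot in stream:
        print(broken_rules(snapshot))   # ['speeding'], then ['harsh_acceleration']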
In vitro, hydrogel-based ECMs for functionalizing surfaces of various materials have played an essential role in mimicking the native tissue matrix. Polydimethylsiloxane (PDMS) is widely used to build microfluidic or organ-on-chip devices compatible with cells due to its easy handling in cast replication. Despite such advantages, the limitation of PDMS is its hydrophobic surface property. To improve the wettability of PDMS-based devices, alginate, a naturally derived polysaccharide, was covalently bound to the PDMS surface. This alginate then crosslinked further hydrogel onto the PDMS surface in the desired layer thickness. Hydrogel-modified PDMS was used for coating a topography chip system and for in vitro investigation of cell growth on the surfaces. Moreover, such hydrophilic hydrogel-coated PDMS is utilized in a microfluidic device to prevent unspecific absorption of organic solutions. Hence, in both exemplary studies, PDMS surface properties were modified, leading to improved devices.
Introducing continuous experimentation in large software-intensive product and service organisations
(2017)
Software development in highly dynamic environments imposes high risks to development organizations. One such risk is that the developed software may be of only little or no value to customers, wasting the invested development efforts. Continuous experimentation, as an experiment-driven development approach, may reduce such development risks by iteratively testing product and service assumptions that are critical to the success of the software. Although several experiment-driven development approaches are available, there is little guidance available on how to introduce continuous experimentation into an organization. This article presents a multiple-case study that aims at better understanding the process of introducing continuous experimentation into an organization with an already established development process. The results from the study show that companies are open to adopting such an approach and learning throughout the introduction process. Several benefits were obtained, such as reduced development efforts, deeper customer insights, and better support for development decisions. Challenges included complex stakeholder structures, difficulties in defining success criteria, and building experimentation skills. Our findings indicate that organizational factors may limit the benefits of experimentation. Moreover, introducing continuous experimentation requires fundamental changes in how companies operate, and a systematic introduction process can increase the chances of a successful start.
Purpose: Human breath analysis is proposed with increasing frequency as a useful tool in clinical application. We performed this study to find the characteristic volatile organic compounds (VOCs) in the exhaled breath of patients with idiopathic pulmonary fibrosis (IPF) for discrimination from healthy subjects. Methods: VOCs in the exhaled breath of 40 IPF patients and 55 healthy controls were measured using a multi-capillary column and ion mobility spectrometer. The patients were examined by pulmonary function tests, blood gas analysis, and serum biomarkers of interstitial pneumonia. Results: We detected 85 VOC peaks in the exhaled breath of IPF patients and controls. IPF patients showed 5 significant VOC peaks: p-cymene, acetoin, isoprene, ethylbenzene, and an unknown compound. The VOC peak of p-cymene was significantly lower (p < 0.001), while the VOC peaks of acetoin, isoprene, ethylbenzene, and the unknown compound were significantly higher (p < 0.001 for all) compared with the peaks of controls. Comparing VOC peaks with clinical parameters, negative correlations with VC (r = −0.393, p = 0.013), %VC (r = −0.569, p < 0.001), FVC (r = −0.440, p = 0.004), %FVC (r = −0.539, p < 0.001), DLco (r = −0.394, p = 0.018), and %DLco (r = −0.413, p = 0.008) and a positive correlation with KL-6 (r = 0.432, p = 0.005) were found for p-cymene. Conclusion: We found 5 characteristic VOCs in the exhaled breath of IPF patients. Among them, the VOC peaks of p-cymene were related to the clinical parameters of IPF. These VOCs may be useful biomarkers of IPF.
This paper aims to model wind speed time series at multiple sites. The five-parameter Johnson distribution is deployed to relate the wind speed at each site to a Gaussian time series, and the resultant m-dimensional Gaussian stochastic vector process Z(t) is employed to model the temporal-spatial correlation of wind speeds at m different sites. In general, it is computationally tedious to obtain the autocorrelation functions (ACFs) and cross-correlation functions (CCFs) of Z(t), which are different from those of the wind speed time series. In order to circumvent this correlation distortion problem, the rank ACF and rank CCF are introduced to characterize the temporal-spatial correlation of wind speeds, whereby the ACFs and CCFs of Z(t) can be obtained analytically. Then, a Fourier transformation is implemented to establish the cross-spectral density matrix of Z(t), and an analytical approach is proposed to generate samples of wind speeds at m different sites. Finally, simulation experiments are performed to check the proposed methods, and the results verify that the five-parameter Johnson distribution can accurately match the distribution functions of wind speeds, and that the spectral representation method reproduces the temporal-spatial correlation of wind speeds well.
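A minimal sketch of the translation idea, mapping a correlated Gaussian series through a Johnson distribution to obtain wind speeds, is given below; it uses scipy's four-parameter Johnson SB family as a simplified stand-in for the paper's five-parameter distribution, with illustrative parameter values:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Stationary AR(1) Gaussian series as a stand-in for one component of Z(t).
    phi, n = 0.9, 2000
    z = np.empty(n); z[0] = rng.standard_normal()
    for t in range(1, n):
        z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

    # Translate Z(t) into wind speeds via a Johnson SB distribution bounded
    # on (0, 25) m/s. The monotone transform ppf(Phi(z)) preserves the rank
    # ACF, which is the key property exploited in the paper.
    johnson = stats.johnsonsb(a=0.5, b=1.0, loc=0.0, scale=25.0)
    wind = johnson.ppf(stats.norm.cdf(z))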
Here, we report the mechanical and water sorption properties of a green composite based on Typha latifolia fibres. The composite was prepared either completely binder-less or bonded with 10% (w/w) of a bio-based resin, a mixture of an epoxidized linseed oil and a tall-oil based polyamide. The flexural modulus of elasticity, the flexural strength and the water absorption of hot-pressed Typha panels were measured, and the influence of pressing time and panel density on these properties was investigated. The cure kinetics of the bio-based resin was analyzed by differential scanning calorimetry (DSC) in combination with the iso-conversional kinetic analysis method of Vyazovkin to derive the curing conditions required for achieving a completely cured resin. For the binderless Typha panels the best technological properties were achieved for panels with high density. By adding 10% of the binder resin, the flexural strength and especially the water absorption were improved significantly.
Powder coating of engineered wood panels such as medium density fibreboards (MDF) is gaining industrial interest due to the ecological and economic advantages of powder coating technology. For transferring powder coating technology to temperature-sensitive substrates like MDF, a thorough understanding of the melting, flowing and curing behaviour of the used low-bake resins is required. In the present study, thermo-analysis in combination with iso-conversional kinetic data analysis as well as rheometry is applied to characterise the properties of an epoxy-based powder coating. Neat resin and cured powder coating films are examined in order to define an ideal production window within which the resin is preferably applied and processed to yield satisfactory surface performance on the one hand, without exposing the carrier MDF to too high a temperature load on the other hand, so as to prevent the panel from deteriorating in mechanical strength. In order to produce powder-coated films of high surface gloss – a feature that has not yet been successfully realized on MDF with powder coatings – a new curing technology, in-mould surface finishing, has been applied.
To address the low accuracy of face image classification in complex environments, a network model F-Net suitable for the aesthetic classification of face images is proposed. Based on LeNet-5, the model uses convolutional layers to extract facial image features in complex backgrounds, optimizes the parameters in the network model, and changes the number of convolutional layers and fully connected layer feature elements in the model. The experimental results show that the F-Net network model proposed in this paper achieves a face image classification accuracy of 73% against complex environment backgrounds, which is better than other classical convolutional neural network classification models.
The world population is growing and alternative ways of satisfying the increasing demand for meat are being explored, such as using animal cells for the fabrication of cultured meat. Edible biomaterials are required as supporting structures. Hence, we chose agarose, gellan and a xanthan-locust bean gum blend (XLB) as support materials with pea and soy protein additives and analyzed them regarding material properties and biocompatibility. We successfully built stable hydrogels containing up to 1% pea or soy protein. Higher amounts of protein resulted in poor handling properties and unstable gels. The gelation temperature range for agarose and gellan blends is between 23–30 °C, but for XLB blends it is above 55 °C. A change in viscosity and a decrease in the swelling behavior were observed in the polysaccharide-protein gels compared to the pure polysaccharide gels. None of the leachates of the investigated materials had cytotoxic effects on the myoblast cell line C2C12. All polysaccharide-protein blends evaluated turned out to be potential candidates for cultured meat. For cell-laden gels, the gellan blends were the most suitable in terms of processing and uniform distribution of cells, followed by agarose blends, whereas no stable cell-laden gels could be formed with XLB blends.
Using predictive maintenance, more efficient processes can be implemented, leading to lower maintenance costs and increased availability. The development of a predictive maintenance solution currently requires high efforts in time and capacity as well as, often, interdisciplinary cooperation. This paper presents a standardized model to describe a predictive maintenance use case. The description model is used to collect, present, and document the required information for the implementation of predictive maintenance use cases by and for different stakeholders. Based on this model, predictive maintenance solutions can be introduced more efficiently. The method is validated across departments in the automotive sector.
Global trade is plagued by slow and inefficient manual processes associated with physical documents. Firms are constantly looking for new ways to improve transparency and increase the resilience of their supply chains. This can be solved by the digitalisation of supply chains and the automation of document- and information-sharing processes. Blockchain is touted as a solution to these issues due to its unique combination of features, such as immutability, decentralisation and transparency. A lack of business cases that quantify the costs and benefits causes uncertainty regarding the truth of these claims. This paper explores how the costs and benefits of a blockchain-based solution for digitalising and automating documentation flows in cross-border supply chains compare to a conventional centralised relational database solution. The research described in this paper uses primary data collected through semi-structured interviews with industry experts, as well as secondary data from literature. Two models based on existing services were developed and the costs and benefits compared and then analysed using the Architecture Trade-off Analysis Method (ATAM) and the Analytic Network Process (ANP). Findings from the analysis show that a consortium blockchain solution like TradeLens is the favourable solution for digitalising and automating information flows in cross-border supply chains.
The size and cost of a switched-mode power supply can be reduced by increasing the switching frequency. Especially at a high input voltage, however, this leads to decreasing efficiency caused by switching losses. Conventional calculations are not suitable to predict the efficiency, as parasitic capacitances have a significant loss contribution. This paper presents an analytical efficiency model which considers parasitic capacitances separately and calculates the power loss contribution of each capacitance to any resistive element. The proposed model is utilized for efficiency optimization of converters with switching frequencies >10 MHz and input voltages up to 40 V. For experimental evaluation a DC-DC converter was manufactured in a 180 nm HV BiCMOS technology. The model matches a transistor-level simulation and measurement results with an accuracy better than 3.5%. The accuracy of the parasitic capacitances of the high-voltage transistor determines the overall accuracy of the efficiency model. Experimental capacitor measurements can be fed into the model. Based on the model, different architectures have been studied.
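For orientation, the loss contribution of a parasitic capacitance that is charged to a voltage V and fully discharged once per switching cycle follows P = C·V²·f_sw, which is why such losses dominate at high input voltage and multi-MHz switching; a toy calculation (illustrative values only):

    def cap_loss(c_farad, v_volt, f_hz):
        """Power dissipated by charging/discharging a parasitic capacitance
        once per switching cycle: P = C * V^2 * f_sw."""
        return c_farad * v_volt**2 * f_hz

    # Illustrative numbers: 20 pF node capacitance, 40 V input, 30 MHz.
    print(cap_loss(20e-12, 40.0, 30e6))   # 0.96 W -> significant at multi-MHz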
This paper presents a wide-Vin step-down parallel-resonant converter (PRC), comprising an integrated 5-bit capacitor array and a 300-nH resonant coil, placed in parallel to a conventional buck converter. Soft-switching resonant converters are beneficial for high-Vin multi-MHz converters to reduce dominant switching losses, enabling higher switching frequencies. The output filter inductor is optimized based on an empirical study of available inductors. The study shows that faster switching significantly reduces not only the inductor value but also volume, price, and even the inductor losses. In addition, unlike conventional resonant concepts, soft-switching control as part of the proposed PRC eliminates input voltage-dependent losses over a wide operating range, resulting in 76.3% peak efficiency. At Vin = 48 V, a loss reduction of 35% is achieved compared with the conventional buck converter. Adjusting an integrated capacitor array, and selecting the number of oscillation periods, keeps the switching frequency within a narrow range. This ensures high efficiency across a wide range of Vin = 12–48 V, 100–500-mA load, and 5-V output at up to 25-MHz switching frequency. Thanks to the low output current ripple, the output capacitor can be as small as 50 nF.
A highly integrated synchronous buck converter with predictive dead time control for input voltages >18 V at 10 MHz switching frequency is presented. A high-resolution dead time of ~125 ps allows reducing dead-time-dependent losses without requiring body diode conduction to evaluate the dead time. High resolution is achieved by frequency-compensated sampling of the switching node and by an 8-bit differential delay chain. Dead time parameters are derived in a comprehensive study of dead-time-dependent losses. This way, the efficiency of fast-switching DC-DC converters can be optimized by eliminating the body diode forward conduction losses, minimizing reverse recovery losses, and achieving zero-voltage switching. High-speed circuit blocks for fast switching operation are presented, including the level shifter, gate driver, and PWM generator. The converter has been implemented in a 180 nm high-voltage BiCMOS technology.
In recent years robotic systems have matured enough to perform simple home or office tasks, guide visitors in environments such as museums or stores, and aid people in their daily life. To make the interaction with service and even industrial robots as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. As facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose an emotion model to parameterize a screen-based facial animation via inter-process communication. A software component will animate transitions and add additional animations to make a digital face appear "alive" and equip a robotic system with a virtual face. The result will be an inviting appearance that motivates potential users to seek interaction with the robot.
Railway operators are challenged by increasing complexity and by the need to safeguard the availability of passenger rolling stock, bringing maintenance and especially emerging technologies into focus. This paper presents a model for the selection and implementation of Industry 4.0 technologies in rolling stock maintenance. The model consists of different stages and considers the main components of rolling stock, the related appropriate maintenance strategies, and Industry 4.0 technologies, taking into account the maturity level of the railway operators. Relevant criteria and main prerequisites of the technologies were identified. The model proposes relevant activities and was validated by industry experts.
Successful transitions to a sustainable bioeconomy require novel technologies, processes, and practices as well as a general agreement about the overarching normative direction of innovation. Both requirements necessarily involve collective action by those individuals who purchase, use, and co-produce novelties: the consumers. Based on theoretical considerations borrowed from evolutionary innovation economics and consumer social responsibility, we explore to what extent consumers’ scope of action is addressed in the scientific bioeconomy literature. We do so by systematically reviewing bioeconomy-related publications according to (i) the extent to which consumers are regarded as passive vs. active, and (ii) different domains of consumer responsibility (depending on their power to influence economic processes). We find all aspects of active consumption considered to varying degrees but observe little interconnection between domains. In sum, our paper contributes to the bioeconomy literature by developing a novel coding scheme that allows us to pinpoint different aspects of consumer activity, which have been considered in a rather isolated and undifferentiated manner. Combined with our theoretical considerations, the results of our review reveal a central research gap which should be taken up in future empirical and conceptual bioeconomy research. The system-spanning nature of a sustainable bioeconomy demands an equally holistic exploration of the consumers’ prospective and shared responsibility for contributing to its coming of age, ranging from the procurement of information on bio-based products and services to their disposal.
Purpose: Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing the highly flexible and variable medical processes in sufficient detail.
Methods: We combined two modeling systems, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new standards Case Management Model and Notation (CMMN) and Decision Model and Notation (DMN).
Results: First, we explain how CMMN, DMN and BPMN could be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention.
Conclusion: An effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination makes it possible to depict complex processes with complex decisions and offers a significant advantage for modeling perioperative processes.
With the digital transformation, companies experience a change that focuses on shaping the organization into an agile organizational form. In today's competitive and fast-moving business environment, it is necessary to react quickly to changing market conditions. Agility represents a promising option for overcoming these challenges. The path to an agile organization is a development process that requires consideration of countless levels of the enterprise. This paper examines the impact of digital transformation on agile working practices and the benefits that can be achieved through technology. To enable a solution for today's so-called VUCA (Volatility, Uncertainty, Complexity and Ambiguity) world, agile ways of working can be applied, and project management requires adaptation. In the qualitative study, expert interviews were conducted and analyzed using the grounded theory method. As a result, a model can be presented that shows the influencing factors and potentials of agile management in the context of the digital transformation of medium-sized companies.
Hardboards (HBs) (wet-process high-density fibreboards) were made in an industrial trial using a binder system consisting of cationic mimosa tannin and laccase or just cationic tannin without any thermosetting adhesive. The boards displayed superior mechanical strength compared to reference boards made with phenol–formaldehyde, easily exceeding the European standards for general-purpose HBs. The thickness swell of most of the boards was slightly greater than the standards would allow, so some optimisation is required in this area. The improved board properties appear to be mainly associated with ionic interactions involving quaternary amino groups in cationic tannin and negatively charged wood fibres rather than to cross-linking of fibres via laccase-assisted formation and coupling of radicals in tannin and fibre lignin.
Nowadays, CHP units are discussed for the production of electricity on demand rather than for the generation of heat with electricity as a by-product. By this means, CHP units are capable of satisfying a higher share of the electricity demand on-site, and in this new role, CHP units are able to reduce the load on the power grid and to compensate for high fluctuations of solar and wind power.
Evidently, a novel control strategy for CHP units is required in order to shift from an operation oriented at the heat demand to an operation led by the electricity demand. Nevertheless, the heat generated by the CHP unit needs to be utilized completely in any case, for maintaining energy as well as economic efficiency. Such a strategy has been developed at Reutlingen University, and it is presented in this paper. Part of the strategy is an intelligent management of the thermal energy storage (TES), ensuring that the storage is at a low level in terms of its heat content just before an electricity demand calls the CHP unit into operation. Moreover, a proper forecast of both heat and electricity demand is incorporated, and the requirements of the CHP unit in terms of maintenance and lifetime are considered by limiting the number of starts and stops per unit time and by maintaining a certain minimum length of the operation intervals.
All aspects of this novel control strategy, which has been implemented on a controller for further testing at two sites in the field, are revealed in the paper. Results from these field tests are given, as well as results from a simulation model that is able to evaluate the performance of the control strategy over an entire year.
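A minimal sketch of such an electricity-led dispatch decision is given below, with illustrative thresholds rather than the implemented controller logic:

    def chp_should_run(elec_demand_kw, tes_level, running, runtime_h,
                       p_el_min_kw=3.0, tes_max=0.8, min_runtime_h=1.0):
        """Electricity-led CHP dispatch sketch (illustrative thresholds only).

        Start only if the electricity demand is worth a start and the thermal
        energy storage (TES, filling level 0..1) can still absorb the
        co-generated heat; once started, respect a minimum runtime to limit
        starts/stops for the sake of maintenance and lifetime.
        """
        if running:
            return runtime_h < min_runtime_h or (
                elec_demand_kw >= p_el_min_kw and tes_level < tes_max)
        return elec_demand_kw >= p_el_min_kw and tes_level < tes_max

    print(chp_should_run(5.0, tes_level=0.4, running=False, runtime_h=0.0))  # True
    print(chp_should_run(1.0, tes_level=0.4, running=True, runtime_h=0.5))   # True (min runtime)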
Exogenous factors of influence on exhaled breath analysis by ion-mobility spectrometry (MCC/IMS)
(2019)
The interpretation of exhaled breath analysis needs to address the influence of exogenous factors, especially a transfer of confounding analytes by the test persons. A test person who had been exposed to a disinfectant underwent exhaled breath analysis by MCC/IMS (Bioscout®) after different time intervals. Additionally, a new sampling method with inhalation of synthetic air before breath analysis was tested. After exposure to the disinfectant, 3-pentanone monomer, 3-pentanone dimer, hexanal, 3-pentanone trimer, 2-propanamine, 1-propanol, benzene and nonanal showed significantly higher intensities in exhaled breath and in the air of the examination room, compared to the corresponding baseline measurements. Only one ingredient of the disinfectant (1-propanol) was identical to one of the 8 analytes. Prolonging the time intervals between exposure and breath analysis showed a decrease of their intensities; however, the half-time of the decrease differed. The inhalation of synthetic air – more than consequent airing of the examination room with fresh air – reduced the exogenous and also relevant endogenous analytes, leading to a reduction and even a change of polarity of the alveolar gradient. The interpretation of exhaled breath needs further knowledge about the former whereabouts of the test person and the likelihood and relevance of the inhalation of local, site-specific and confounding exogenous analytes. Their inhalation facilitates a transfer to the examination room and the detection of high concentrations in room air and exhaled breath, but also the exhalation of new analytes. This may lead to a misinterpretation of these analytes as endogenous or disease-specific ones.
Standardisation of breath sampling is important for the application of breath analysis in clinical settings. By studying the effect of room airing on indoor and breath analytes and by generating time series of room air with different sampling intervals, we sought to gain further insights into room air metabolism, to detect the relevance of exogenous VOCs, and to draw conclusions about their consideration in the interpretation of exhaled breath. Room air and exhaled breath of a healthy subject were analysed before and after room airing. Furthermore, a time series of room air with doors and windows closed was taken over 84 h by automatic sampling every 180 min. A second time series studied room air analytes over 70 h with samples taken every 16.5 min. For breath and room air measurements an IMS coupled to a multi-capillary column (IMS/MCC) [Bio-Scout® - B&S Analytik GmbH, Dortmund, Germany] was used. The peaks were characterized using the software Visual Now (B&S Analytik, Dortmund, Germany) and identified using the software package MIMA (version 1.1, provided by the Max Planck Institute for Informatics, Saarbrücken, Germany) and the database 20160426_SubstanzDbNIST_122 (B&S Analytik GmbH, Dortmund, Germany). In the morning, 4 analytes (decamethylcyclopentasiloxane [541-02-6]; pentan-2-one [107-87-9] dimer; hexan-1-al [66-25-1]; pentan-2-one [107-87-9] monomer) showed high intensities in the room air and exhaled breath. They were significantly but not equally reduced by room airing. The time series over 84 h showed a time-dependent decrease of some analytes (limonene monomer and dimer; decamethylcyclopentasiloxane; butan-1-ol) as well as an increase of others (pentan-2-one [107-87-9] dimer). Shorter sampling intervals exhibited circadian variations of analyte concentrations for many analytes. Breath sampling in the morning needs room airing before starting; then the variation of the intensity of indoor analytes can be kept small. The time series of indoor analytes show that their intensities behave differently, with time-dependent declines, constant increases and circadian variations, depending on room airing. This has implications for the breath sampling procedure and the interpretation of exhaled breath.
Adoption of artificial intelligence (AI) has risen sharply in recent years, but many firms are not successful in realising the expected benefits or even terminate projects before completion. While there are a number of previous studies that highlight challenges in AI projects, the critical factors that lead to project failure are mostly unknown. The aim of this study is therefore to identify distinct factors that are critical for the failure of AI projects. To address this, interviews with experts in the field of AI from different industries were conducted and the results analyzed using qualitative analysis methods. The results show that both organizational and technological issues can cause project failure. Our study contributes to knowledge by reviewing previously identified challenges in terms of their criticality for project failure based on new empirical data, as well as by identifying previously unknown factors.
The coculture of osteogenic and angiogenic cells and the resulting paracrine signaling via soluble factors are supposed to be crucial for successfully engineering vascularized bone tissue equivalents. In this study, a coculture system combining primary human adipose-derived stem cells (hASCs) and primary human dermal microvascular endothelial cells (HDMECs) within two types of hydrogels based on methacryloyl-modified gelatin (GM) as three-dimensional scaffolds was examined for its support of tissue-specific cell functions. HDMECs, together with hASCs as supporting cells, were encapsulated in soft GM gels and were indirectly cocultured with hASCs encapsulated in stiffer GM hydrogels additionally containing methacrylate-modified hyaluronic acid and hydroxyapatite particles. After 14 days, the hASCs in the stiffer gels (constituting the "bone gels") expressed matrix proteins like collagen type I and fibronectin, as well as the bone-specific proteins osteopontin and alkaline phosphatase. After 14 days of coculture with HDMEC-laden hydrogels, the viscoelastic properties of the bone gels were significantly higher compared with the gels in monoculture. Within the soft vascularization gels, the formed capillary-like networks were significantly longer after 14 days of coculture than the structures in the control gels. In addition, the stability as well as the complexity of the vascular networks was significantly increased by coculture. We conclude that osteogenic and angiogenic signals from the culture media as well as from cocultured cell types, and the tissue-specific hydrogel composition, all contribute to stimulating the interplay between osteogenesis and angiogenesis in vitro and form a basis for engineering vascularized bone.
In bioprinting approaches, the choice of bioink plays an important role since it must be processable with the selected printing method, but also cytocompatible and biofunctional. Therefore, a crosslinkable gelatin-based ink was modified with hydroxyapatite (HAp) particles, representing the composite buildup of natural bone. The ink's viscosity was significantly increased by the addition of HAp, making the material processable with extrusion-based methods. The storage moduli of the formed hydrogels rose significantly, indicating improved mechanical properties. A cytocompatibility assay revealed suitable ranges for photoinitiator and HAp concentrations. As a proof of concept, the modified ink was printed together with cells, yielding stable three-dimensional constructs containing a homogeneously distributed mineralization and viable cells.
In the spring of 1817, Friedrich List, then a professor at the University of Tübingen, travelled to Frankfurt am Main, where the famous Easter fair was taking place at the time. There he met with the leaders of the merchants, who complained that the tentative economic development was suffering badly under the many customs barriers and the cheap imports from England. They therefore demanded the abolition of internal tariffs and the formation of an economic union. On behalf of the merchants, List wrote his now famous petition to the Bundesversammlung, the loose representative body of the German Confederation in Frankfurt. When the petition was received with great acclaim, List, elated by his success, spontaneously founded the "Allgemeiner Deutscher Handels- und Gewerbsverein" – the first interest group of German merchants. He thereby laid the foundation for the political process leading to the Zollverein of 1834, which in turn was the precursor to the founding of the German Reich of 1871. List's demands of that time are highly topical again today.
For five decades, research into the life, work and influence of Friedrich List (1789–1846) has been at the centre of Eugen Wendler's scholarly work. Over time, some 30 monographs and a larger number of scholarly essays and journalistic articles have been produced. In doing so, Eugen Wendler built on the invaluable groundwork of the editors of the complete edition of List's works published from 1925 to 1935.
This essay provides an overview of Eugen Wendler's book publications on List research. With his impressive oeuvre, he professes to be the last living fossil in the succession of the FLG, thereby paying the editors the due and long-overdue appreciation and respect.
The Friedrich-List-Gesellschaft (FLG) was founded in 1925 under the most adverse economic and political circumstances and conditions and was continued until 1934. Its main purpose was to collect the widely scattered, hard-to-access and largely unknown writings, speeches and letters of Friedrich List (1789–1846) and to publish them in the form of a complete edition.
Neither this 10- or 12-volume complete edition nor the names of its editors have received the appreciation and attention they deserve in economics. This contribution repays that long-overdue debt of gratitude after almost 100 years. Without the committed and courageous efforts of the editors, in particular Edgar Salin, List research would be unthinkable, and German economics would be poorer by a glorious chapter.
At literally the last minute, the English government and the European Union agreed on a comprehensive treaty to prevent a disorderly Brexit. After years of tough marathon negotiations, the jubilation is muted, yet there is relief on both sides of the English Channel, because a modus vivendi has been found on which future relations can be built and continued. Whether the English pipe dreams attached to Brexit will be fulfilled remains to be seen.
The strategy and tactics of the English governments on Brexit and in the withdrawal negotiations mirror the experiences Friedrich List had to undergo exactly 175 years ago in his efforts to forge a German-English alliance. Because of the insular and trade supremacy that England strictly pursued even then, he had to admit that England would defend this position stubbornly, and, frustrated and disillusioned, he abandoned his plans. He therefore placed his hopes in a "continental alliance" of the European nations, such as has now come into being after the withdrawal of Great Britain from the European Union. Perhaps we will now have to get used to the term "continental alliance" and be reminded of Friedrich List's foresight.
On the other hand, the motto of List's second Paris prize essay also applies to English politics: "Le monde marche – the world moves on", albeit under completely different circumstances than 175 years ago: the axis of world trade has shifted from the western to the eastern hemisphere; the British Empire is history; the pace of global change has accelerated dramatically; and despite the lingua franca, England, especially from an Asian perspective, appears as no more than a small dot on the world map. Should the Scottish government push through its intention and achieve independence from the United Kingdom, Brexit would prove to be a disastrous boomerang.
The annexation of Crimea, the warfare in Syria, the financial engagement in Cyprus, the tug-of-war over Ukraine and Belarus, and the naming of the vaccine against the corona epidemic as Sputnik V are clear evidence of Russia's current striving for power and its expansionist policy. It is therefore not without interest to ask what opinion Friedrich List (1789–1846) held of Russia, especially as it appears as topical today as it did 180 to 190 years ago and is set out in his writings. This opinion is examined and comprehensively presented for the first time in this essay.
Automatic content creation system for augmented reality maintenance applications for legacy machines
(2024)
Augmented reality (AR) applications have great potential to assist maintenance workers in their operations. However, creating AR solutions is time-consuming and laborious, which limits their widespread adoption in the industry. It therefore often happens that, even with latest-generation machines, the user only receives an electronic manual for equipment operation and maintenance instead of an AR solution. This is commonplace with legacy machines. For this reason, solutions are required that simplify the creation of such AR solutions. This paper presents an approach that uses an electronic manual as a basis to create fast and cost-effective AR solutions for maintenance. As part of the approach, an application was developed to automatically identify and subdivide the chapters of electronic manuals via the bookmarks in the table of contents. The contents are then automatically uploaded to a central server and indexed with a suitable marker to make the data retrievable. The prepared content can then be accessed via the marker for creating context-related AR instructions. The application is characterized by the fact that no developers or experts are required to prepare the information. In addition to complying with common design criteria, the clear presentation of the contents and the intuitive use of the system offer added value for the performance of maintenance tasks. Together, these two elements form a novel way to retrofit legacy machines with AR maintenance instructions. The practical validation of the system took place in a factory environment. For this purpose, the content was created for a filter change on a CNC milling machine. The results show that inexperienced users can extract appropriate content with the software application. Furthermore, it is shown that maintenance workers can access the content with an AR application developed for the Microsoft HoloLens 2 and complete simple tasks provided in the manufacturer's electronic manual.
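A minimal sketch of the chapter-identification step, splitting a manual along its top-level PDF bookmarks, could look as follows using the pypdf library (an illustrative reconstruction with a hypothetical file name, not the authors' application, which additionally uploads and indexes the chapters with markers):

    from pypdf import PdfReader, PdfWriter

    reader = PdfReader("manual.pdf")                 # hypothetical manual file
    # Keep only top-level bookmarks; nested children appear as sub-lists.
    tops = [d for d in reader.outline if not isinstance(d, list)]
    starts = [reader.get_destination_page_number(d) for d in tops]
    starts.append(len(reader.pages))                 # sentinel: end of document

    for dest, begin, end in zip(tops, starts, starts[1:]):
        writer = PdfWriter()
        for page in reader.pages[begin:end]:
            writer.add_page(page)
        # Naive file naming; a real system would sanitize titles properly.
        writer.write(f"{dest.title.strip().replace(' ', '_')}.pdf")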
In recent years, machine learning algorithms have seen huge advances in performance and applicability in industry and especially in maintenance. Their application enables predictive maintenance and thus offers efficiency increases. However, a successful implementation of such solutions still requires high effort in data preparation to obtain the right information, interdisciplinary teams, as well as good communication with employees. Here, small and medium-sized enterprises (SME) often lack experience, competence and capacity. This paper presents a systematic and practice-oriented method for the implementation of machine learning solutions for predictive maintenance in SME, which has already been validated.
Melamine–formaldehyde (MF) resins are widely used as adhesives and finishing materials in the wood industry. During resin cure, either methylene ether or methylene bridges are formed, leading to the formation of a three-dimensional resin network. Not only the curing degree, but also the chemical species present in the cured resin determine the quality of the final product. Analytical methods allowing a detailed investigation of network formation are of great benefit to manufacturers. In the present work, resin cure of an MF precondensate is studied at different temperatures (100–200 °C) without considering the initial pH as a factor. Isoconversional kinetic analysis based on exothermal curing enthalpies enables calculation of the crosslinking degree for a given time/temperature regime. A semiquantitative determination of the chemical groups present is performed based on solid-state nuclear magnetic resonance data. Fourier transform infrared spectroscopy has been shown to be a fast and reliable analytical tool with high sensitivity toward functional groups and with great potential for at-line process control.
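To illustrate the isoconversional idea, the Friedman method estimates the activation energy E(α) at a fixed conversion α from the conversion rates of runs at different temperatures via ln(dα/dt) = ln(A·f(α)) − E/(RT); a minimal sketch with synthetic data (not the paper's measurements):

    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def friedman_activation_energy(temps_K, rates_at_alpha):
        """Friedman isoconversional estimate of E(alpha).

        temps_K: temperatures of the isothermal runs; rates_at_alpha:
        conversion rates d(alpha)/dt of each run at the same fixed alpha.
        A linear fit of ln(rate) against 1/T yields slope = -E/R.
        """
        slope, _ = np.polyfit(1.0 / np.asarray(temps_K),
                              np.log(rates_at_alpha), 1)
        return -slope * R   # activation energy in J/mol

    # Synthetic rates consistent with E = 80 kJ/mol.
    T = np.array([373.0, 423.0, 473.0])          # 100, 150, 200 degC runs
    rates = 1e7 * np.exp(-80e3 / (R * T))
    print(friedman_activation_energy(T, rates))  # ~80000 J/mol recovered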