We analyze economics PhDs’ collaborations in peer-reviewed journals from 1990 to 2014 and investigate the quality of such collaborations in relation to each co-author’s research quality, field and specialization. We find that a greater overlap between co-authors’ previous research fields is significantly related to greater publication success of the co-authors’ joint work, and this is robust to alternative specifications. Co-authors who engage in a distant collaboration are significantly more likely to have a large research overlap, but this significance is lost when co-authors’ social networks are accounted for. High-quality collaboration is more likely to emerge from an interaction between specialists and generalists with overlapping fields of expertise. Interaction across subfields of economics (interdisciplinarity) is more likely to be conducted by co-authors who already have interdisciplinary portfolios than by co-authors who are specialized or starred in different subfields.
The current advancement of Artificial Intelligence (AI), combined with other digitalization efforts, significantly impacts service ecosystems. AI opens substantial new opportunities for the co-creation of value and the development of intelligent service ecosystems. Motivated by experiences and observations from digitalization projects, this paper presents new methodological perspectives and experiences from academia and practice on architecting intelligent service ecosystems and explores the impact of AI through real cases supporting an ongoing validation. Digital enterprise architecture models serve as an integral representation of the business, information, and technological perspectives of intelligent service-based enterprise systems to support their management and development. The paper focuses on architectural models for intelligent service ecosystems, showing the fundamental business mechanism of AI-based value co-creation together with the corresponding digital architecture and management models, and presents the key architectural model perspectives for the development of intelligent service ecosystems.
Platforms and their surrounding ecosystems are becoming increasingly important components of many companies' strategies. Artificial Intelligence, in particular, has created new opportunities to create and develop ecosystems around the platform. However, there is not yet a methodology to systematically develop these new opportunities for enterprise development strategy. Therefore, this paper aims to lay a foundation for the conceptualization of Artificial Intelligence-based service ecosystems exploiting a Service-Dominant Logic. The basis for conceptualization is the study of value creation and particularly effective network effects. This research investigates the fundamental idea of extending specific digital concepts considering the influence of Artificial Intelligence on the design of intelligent services, along with their architecture of digital platforms and ecosystems, to enable a smooth evolutionary path and adaptability for human-centric collaborative systems and services. The paper explores an extended digital enterprise conceptual model through a combined, iterative, and permanent task of co-creating value between humans and intelligent systems as part of a new idea of cognitively adapted intelligent services.
Enterprises are currently transforming their strategy, processes, and information systems to extend their degree of digitalization. The potential of the Internet and related digital technologies, such as the Internet of Things, services computing, cloud computing, artificial intelligence, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems, both drives and enables new business designs. Digitalization deeply disrupts existing businesses, technologies and economies and fosters the architecture of digital environments with many rather small and distributed structures. This has a strong impact on new value-producing opportunities and on architecting digital services and products, guiding their design through a Service-Dominant Logic. The main result of the book chapter extends methods for integral digital strategies with value-oriented models for digital products and services, which are defined in the framework of a multi-perspective digital enterprise architecture reference model.
This chapter presents an introduction to the emerging trends in architecting the digital transformation, with a strong focus on digital products, intelligent services, and related systems together with methods, models and architectures. The primary aim of this book is to highlight some of the most recent research results in the field. We provide a focused set of brief descriptions of the chapters included in the book.
This paper presents a permanent magnet tubular linear generator system for powering passive sensors by harvesting vertical vibration energy. The system consists of a permanent magnet tubular linear vibration generator and electric circuits. By using mechanically resonant movers, the generator is capable of converting low-frequency, small-amplitude vertical vibration energy into more regular sinusoidal electrical energy. The distribution of the magnetic field and the electromotive force are calculated by Finite Element Analysis, and the characteristics of the linear vibration generator system are observed. The experimental results show that the generator can produce approximately 0.4–1.6 W of electrical power when the vibration source's amplitude is fixed at 2 mm and the frequency lies between 13 Hz and 22 Hz.
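The frequency dependence reported above can be illustrated with a first-order model that is not taken from the paper: assuming a locally linear flux linkage λ(x) = kx and sinusoidal mover motion at fixed amplitude, the peak induced EMF grows linearly with vibration frequency. The flux-linkage gradient `k_flux_per_m` below is an invented illustrative value, not a measured device parameter.

```python
import math

def peak_emf(amplitude_m, freq_hz, k_flux_per_m):
    # Sinusoidal mover position x(t) = A*sin(2*pi*f*t); with an assumed
    # locally linear flux linkage lambda(x) = k*x, the induced EMF is
    # e(t) = -k*A*2*pi*f*cos(2*pi*f*t), which peaks at k*A*2*pi*f.
    return k_flux_per_m * amplitude_m * 2 * math.pi * freq_hz

# At the paper's fixed 2 mm amplitude, peak EMF grows linearly with frequency.
# k_flux_per_m = 50 Wb/m is purely illustrative.
e13 = peak_emf(2e-3, 13.0, k_flux_per_m=50.0)
e22 = peak_emf(2e-3, 22.0, k_flux_per_m=50.0)
```

This sketch only captures the linear frequency scaling of the open-circuit EMF; the reported power figures additionally depend on the resonant mover design and the load circuit.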
Purpose
Computerized medical imaging processing assists neurosurgeons to localize tumours precisely. It plays a key role in recent image-guided neurosurgery. Hence, we developed a new open-source toolkit, namely Slicer-DeepSeg, for efficient and automatic brain tumour segmentation based on deep learning methodologies for aiding clinical brain research.
Methods
Our developed toolkit consists of three main components. First, Slicer-DeepSeg extends the 3D Slicer application and thus provides support for multiple input/output data formats and 3D visualization libraries. Second, Slicer core modules offer powerful image processing and analysis utilities. Third, the Slicer-DeepSeg extension provides a customized GUI for brain tumour segmentation using deep learning-based methods.
Results
The developed Slicer-DeepSeg was validated using a public dataset of high-grade glioma patients. The results showed that our proposed platform considerably outperforms other 3D Slicer cloud-based approaches.
Conclusions
The developed Slicer-DeepSeg allows the development of novel AI-assisted medical applications in neurosurgery. Moreover, it can enhance the outcomes of computer-aided diagnosis of brain tumours. The open-source Slicer-DeepSeg is available at github.com/razeineldin/Slicer-DeepSeg.
A hybrid deep registration of MR scans to interventional ultrasound for neurosurgical guidance
(2021)
Despite recent advances in image-guided neurosurgery, reliable and accurate estimation of brain shift still remains one of the key challenges. In this paper, we propose an automated multimodal deformable registration method using hybrid learning-based and classical approaches to improve neurosurgical procedures. Initially, the moving and fixed images are aligned using a classical affine transformation (MINC toolkit), and the result is then provided to a convolutional neural network, which predicts the deformation field using backpropagation. Subsequently, the moving image is transformed using the resultant deformation field into a moved image. Our model was evaluated on two publicly available datasets: the retrospective evaluation of cerebral tumors (RESECT) and brain images of tumors for evaluation (BITE). The mean target registration errors were reduced from 5.35 ± 4.29 to 0.99 ± 0.22 mm on RESECT and from 4.18 ± 1.91 to 1.68 ± 0.65 mm on BITE. Experimental results showed that our method improved the state of the art in terms of both accuracy and runtime (170 ms on average). Hence, the proposed method provides a fast runtime for registering a 3D MRI to an intra-operative US pair in a GPU-based implementation, which shows promise for its applicability in assisting neurosurgical procedures by compensating for brain shift.
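The final resampling step of such a pipeline, applying the predicted deformation field to the moving image, can be sketched generically. This is a minimal 2D nearest-neighbour version for illustration only; the function name, the displacement-field convention, and the toy data are assumptions, and real pipelines use 3D trilinear interpolation on the GPU.

```python
import numpy as np

def warp_nearest(moving, disp):
    """Warp a 2D moving image with a per-pixel displacement field.

    moving: (H, W) array; disp: (H, W, 2) array of (dy, dx) offsets, as a
    registration network would predict. Nearest-neighbour sampling keeps
    the sketch short; out-of-bounds samples are filled with zero.
    """
    H, W = moving.shape
    moved = np.zeros_like(moving)
    for y in range(H):
        for x in range(W):
            sy = int(round(y + disp[y, x, 0]))
            sx = int(round(x + disp[y, x, 1]))
            if 0 <= sy < H and 0 <= sx < W:
                moved[y, x] = moving[sy, sx]
    return moved

# A uniform +1 pixel x-displacement shifts every column one step:
img = np.arange(9.0).reshape(3, 3)
disp = np.zeros((3, 3, 2))
disp[..., 1] = 1.0
moved = warp_nearest(img, disp)
```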
Accurate and safe neurosurgical intervention can be affected by intra-operative tissue deformation, known as brain shift. In this study, we propose an automatic, fast, and accurate deformable method, called iRegNet, for registering pre-operative magnetic resonance images to intra-operative ultrasound volumes to compensate for brain shift. iRegNet is a robust end-to-end deep learning approach for the non-linear registration of MRI-iUS images in the context of image-guided neurosurgery. The pre-operative MRI (as the moving image) and the iUS (as the fixed image) are first fed into our convolutional neural network, after which a non-rigid transformation field is estimated. The MRI image is then transformed using the output displacement field into the iUS coordinate system. Extensive experiments were conducted on two multi-location databases, BITE and RESECT. Quantitatively, iRegNet reduced the mean landmark errors from pre-registration values of 4.18 ± 1.84 and 5.35 ± 4.19 mm to 1.47 ± 0.61 and 0.84 ± 0.16 mm for the BITE and RESECT datasets, respectively. Additional qualitative validation was conducted by two expert neurosurgeons by overlaying the MRI-iUS pairs before and after the deformable registration. The experimental findings show that our proposed iRegNet is fast and outperforms state-of-the-art approaches in accuracy. Furthermore, iRegNet delivers competitive results even on non-trained images, as proof of its generality, and can therefore be valuable in intra-operative neurosurgical guidance.
Railway operators are being challenged by increasing complexity and safeguarding the availability of passenger rolling stock, bringing maintenance and especially emerging technologies into the focus. This paper presents a model for selection and implementation of Industry 4.0 technologies in rolling stock maintenance. The model consists of different stages and considers the main components of rolling stock, the related appropriate maintenance strategies and Industry 4.0 technologies considering the maturity level of the railway operators. Relevant criteria and main prerequisites of the technologies were identified. The model proposes relevant activities and was validated by industry experts.
Successful transitions to a sustainable bioeconomy require novel technologies, processes, and practices as well as a general agreement about the overarching normative direction of innovation. Both requirements necessarily involve collective action by those individuals who purchase, use, and co-produce novelties: the consumers. Based on theoretical considerations borrowed from evolutionary innovation economics and consumer social responsibility, we explore to what extent consumers’ scope of action is addressed in the scientific bioeconomy literature. We do so by systematically reviewing bioeconomy-related publications according to (i) the extent to which consumers are regarded as passive vs. active, and (ii) different domains of consumer responsibility (depending on their power to influence economic processes). We find all aspects of active consumption considered to varying degrees but observe little interconnection between domains. In sum, our paper contributes to the bioeconomy literature by developing a novel coding scheme that allows us to pinpoint different aspects of consumer activity, which have been considered in a rather isolated and undifferentiated manner. Combined with our theoretical considerations, the results of our review reveal a central research gap which should be taken up in future empirical and conceptual bioeconomy research. The system-spanning nature of a sustainable bioeconomy demands an equally holistic exploration of the consumers’ prospective and shared responsibility for contributing to its coming of age, ranging from the procurement of information on bio-based products and services to their disposal.
The isothermal curing of melamine resin is investigated by in-line infrared spectroscopy at different temperatures. The infrared spectra are decomposed into time courses of characteristic spectral patterns using Multivariate Curve Resolution (MCR). It was found that, depending on the applied curing temperature, melamine films with different spectral fingerprints and correspondingly different chemical network structures are formed. The network structures of fully cured resin films are specific to the curing temperatures applied and cannot simply be compensated by changes in the curing time. For industrial curing processes, this means that the curing temperature is the main system-determining factor at a constant M:F ratio. However, different MF resin networks can be specifically obtained from one and the same melamine resin by suitable selection of the curing time and temperature profiles to design resin functionality. The spectral fingerprints after short as well as long curing times reflect the fundamental differences between the thermoset networks that can be obtained with industrial short-cycle and multi-daylight presses.
During the curing of thermosetting resins, the technologically relevant properties of binders and coatings develop. However, curing is difficult to monitor due to the multitude of chemical and physical processes taking place, and precise prediction of specific technological properties based on molecular properties is very difficult. In this study, the potential of principal component analysis (PCA) and principal component regression (PCR) in the analysis of Fourier transform infrared (FTIR) spectra is demonstrated using the example of melamine-formaldehyde (MF) resin curing in the solid state. FTIR/PCA-based reaction trajectories are used to visualize the influence of temperature on isothermal cure. An FTIR/PCR model for predicting the hydrolysis resistance of cured MF resins from their spectral fingerprints is presented, which illustrates the advantages of FTIR/PCR over the combination of differential scanning calorimetry and isoconversional kinetic analysis. The presented methodology is transferable to the curing reactions of any thermosetting resin and can be applied to model other technologically relevant final properties as well.
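The idea of an FTIR/PCA reaction trajectory can be sketched generically: mean-centre the time-resolved spectra and project them onto the leading principal components, so that the score sequence traces the progress of the cure. The following is a minimal SVD-based sketch on synthetic data, not the paper's actual spectra or software.

```python
import numpy as np

def pca_scores(spectra, n_pc=2):
    # Rows: time points during cure; columns: wavenumber channels.
    # Mean-centre and project onto the leading principal components
    # (right singular vectors); the scores form the reaction trajectory.
    X = spectra - spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_pc].T

# Toy data: five "spectra" drifting along one spectral direction over time,
# mimicking a single dominant chemical change during isothermal cure.
t = np.linspace(0.0, 1.0, 5)[:, None]
direction = np.array([[1.0, 0.5, -0.2, 0.0]])
spectra = t @ direction + 0.01
scores = pca_scores(spectra, n_pc=2)
```

For this rank-one toy data the trajectory is monotonic along PC1 and flat along PC2; real cure trajectories curve when several overlapping reactions contribute.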
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system designs, hurt their performance and scalability. Near-Data Processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible. The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in traditional DBMSs, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under RocksDB and the COSMOS hardware platform.
Near-Data Processing is a promising approach to overcome the limitations of slow I/O interfaces in the quest to analyze the ever-growing amount of data stored in database systems. Next to CPUs, FPGAs will play an important role in the realization of functional units operating close to data stored in non-volatile memories such as Flash. It is essential that the NDP device understands the formats and layouts of the persistent data in order to perform operations in-situ. To this end, carefully optimized format parsers and layout accessors are needed. However, designing such FPGA-based Near-Data Processing accelerators requires significant effort and expertise. To make FPGA-based Near-Data Processing accessible to non-FPGA experts, we present a framework for the automatic generation of FPGA-based accelerators capable of data filtering and transformation for key-value stores, based on simple data-format specifications. The evaluation shows that our framework generates accelerators that are almost identical in performance to the manually optimized designs of prior work, while requiring little to no FPGA-specific knowledge and additionally providing improved flexibility and more powerful functionality.
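The core idea of the framework, turning a declarative data-format specification into a generated parser/accessor, has a simple software analogue. The sketch below is purely illustrative: the field specification, names, and `make_parser` helper are invented, and the actual framework emits FPGA accelerator logic rather than Python.

```python
import struct

# Hypothetical simplified record spec: (field_name, struct format code) pairs,
# in the spirit of the simple data-format specifications the framework consumes.
SPEC = [("key", "16s"), ("flags", "B"), ("value_len", "I")]

def make_parser(spec):
    # "Generate" an accessor from the declarative spec, analogous to how the
    # framework derives a hardware parser from a format description.
    fmt = "<" + "".join(code for _, code in spec)  # little-endian, packed
    names = [name for name, _ in spec]
    size = struct.calcsize(fmt)
    def parse(buf):
        return dict(zip(names, struct.unpack_from(fmt, buf)))
    return parse, size

parse, record_size = make_parser(SPEC)
record = struct.pack("<16sBI", b"sensor-42".ljust(16, b"\0"), 1, 128)
fields = parse(record)
```

In hardware, the same specification would drive the generation of filtering and transformation pipelines that operate on records as they stream off Flash.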
This paper presents a generic method to enhance performance and incorporate temporal information for cardiorespiratory-based sleep stage classification with a limited feature set and limited data. The classification algorithm relies on random forests and a feature set extracted from long-time home monitoring for sleep analysis. Employing temporal feature stacking, the system could be significantly improved in terms of Cohen’s κ and accuracy. By stacking features from before and after the epoch to be classified, the detection performance could be improved for three classes of sleep stages (Wake, REM, Non-REM sleep), four classes (Wake, Non-REM light sleep, Non-REM deep sleep, REM sleep), and five classes (Wake, N1, N2, N3/4, REM sleep) from a κ of 0.44 to 0.58, 0.33 to 0.51, and 0.28 to 0.44, respectively. Further analysis was performed to determine the optimal length and combination method for this stacking approach. Overall, three methods and variable durations between 30 s and 30 min were analyzed. Overnight recordings of 36 healthy subjects from the Interdisciplinary Center for Sleep Medicine at Charité-Universitätsmedizin Berlin and leave-one-out cross-validation on the patient level were used to validate the method.
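Temporal feature stacking itself is straightforward to express: each epoch's feature vector is concatenated with the vectors of its neighbouring epochs before classification. The sketch below is a generic pure-Python illustration; the boundary handling (repeating the edge epochs) and the parameter names are assumptions, and the paper evaluates stacking windows of up to 30 minutes.

```python
def stack_features(features, before=2, after=1):
    """Augment each epoch's feature vector with those of neighbouring epochs.

    features: list of per-epoch feature vectors (lists). Epochs near the
    recording edges are padded by repeating the boundary epoch -- one simple
    choice among several possible combination methods.
    """
    n = len(features)
    stacked = []
    for i in range(n):
        row = []
        for j in range(i - before, i + after + 1):
            row.extend(features[min(max(j, 0), n - 1)])  # clamp to valid range
        stacked.append(row)
    return stacked

# Four 30 s epochs with two cardiorespiratory features each:
epochs = [[0.1, 5.0], [0.2, 6.0], [0.3, 7.0], [0.4, 8.0]]
stacked = stack_features(epochs, before=1, after=1)
```

The stacked rows can then be fed to any standard classifier, such as the random forest used in the paper.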
The incudo-malleal joint (IMJ) in the human middle ear is a true diarthrodial joint, and it is known that the flexibility of this joint does not contribute to better middle-ear sound transmission. Previous studies have proposed that a gliding motion between the malleus and the incus at this joint prevents the transmission of large displacements of the malleus to the incus and stapes and thus helps protect the inner ear as an immediate response to large static pressure changes. However, the dynamic behavior of this joint under static pressure changes has not been fully revealed. In this study, the effects of the flexibility of the IMJ on middle-ear sound transmission under a static pressure difference between the middle-ear cavity and the environment were investigated. Experiments were performed on human cadaveric temporal bones with static pressures in the range of ±2 kPa applied to the ear canal (relative to the middle-ear cavity). Vibrational motions of the umbo and the stapes footplate center in response to acoustic stimulation (0.2-8 kHz) were measured using a 3D laser Doppler vibrometer for (1) the natural IMJ and (2) the IMJ with experimentally reduced flexibility. With the IMJ in its natural condition, vibrations of the umbo and the stapes footplate center under static pressure loads were attenuated at low frequencies below the middle-ear resonance frequency, as observed in previous studies. After the flexibility of the IMJ was reduced, additional attenuations of vibrational motion were observed for the umbo under positive static pressures in the ear canal (EC) and for the stapes footplate center under both positive and negative static EC pressures. The additional attenuation reached 4–7 dB for the umbo under positive static EC pressures and for the stapes footplate center under negative EC pressures, and 7–11 dB for the stapes footplate center under positive EC pressures.
The results of this study indicate an adaptive mechanism of the flexible IMJ in the human middle ear to changes of static EC pressure, reducing the attenuation of middle-ear sound transmission. These results are expected to be used for the diagnosis of IMJ stiffening and to be applied to the design of middle-ear prostheses.
Despite its success against cancer, photothermal therapy (PTT) (>50 °C) suffers from several limitations, such as triggering inflammation, facilitating immune escape and metastasis, and damaging surrounding normal cells. Mild-temperature PTT has been proposed to overcome these shortcomings. We developed a nanosystem using HepG2 cancer cell membrane-cloaked zinc glutamate-modified Prussian blue nanoparticles with triphenylphosphine-conjugated lonidamine (HmPGTL NPs). This approach achieved an efficient mild-temperature PTT effect by downregulating the production of intracellular ATP, which disrupts a class of heat shock proteins that cushion cancer cells against heat. The physicochemical properties, anti-tumor efficacy, and mechanisms of HmPGTL NPs were investigated both in vitro and in vivo. Moreover, the nanoparticles cloaked with the HepG2 cell membrane substantially prolonged the circulation time in vivo. Overall, the designed nanocomposites enhance the efficacy of mild-temperature PTT by disrupting the production of ATP in cancer cells. We therefore anticipate that the mild-temperature PTT nanosystem holds enormous potential for various biomedical applications.
Metalworking fluids (MWFs) are widely used to cool and lubricate metal workpieces during processing to reduce heat and friction. Extending an MWF’s service life is important from both economic and ecological points of view. Knowledge about the effects of processing conditions on the aging behavior, as well as reliable analytical procedures, is required to properly characterize the aging phenomena. So far, no quantitative estimations of aging effects on MWFs have been described in the literature other than univariate ones based on single-parameter measurements. In the present study, we present a simple spectroscopy-based set-up for the simultaneous monitoring of three quality parameters of an MWF and a mathematical model relating them to the most influential process factors relevant during use. For this purpose, the effects of MWF concentration, pH and nitrite concentration on the droplet size during aging were investigated by means of a response surface modelling approach. Systematically varied model MWF fluids were characterized using simultaneous measurements of absorption coefficients µa and effective scattering coefficients µ’s. Droplet size was determined via dynamic light scattering (DLS) measurements. Droplet size showed a non-linear dependence on MWF concentration and pH, but the nitrite concentration had no significant effect. pH and MWF concentration showed a strong synergistic effect, which indicates that MWF aging is a rather complex process. The observed effects were similar for the DLS and the µ’s values, which shows the comparability of the methodologies. The correlations of the methods were R²c = 0.928 and R²P = 0.927, as calculated by a partial least squares regression (PLS-R) model. Furthermore, using µa, it was possible to generate a predictive PLS-R model for the MWF concentration (R²c = 0.890, R²P = 0.924). Simultaneous determination of the pH based on µ’s is possible with good accuracy (R²c = 0.803, R²P = 0.732).
With prior knowledge of the MWF concentration from the µa-PLS-R model, the predictive capability of the µ’s-PLS-R model for pH was refined (10 wt%: R²c = 0.998, R²P = 0.997). This highlights the relevance of the combined measurement of µa and µ’s. Recognizing the synergistic nature of the effects of MWF concentration and pH on the droplet size is an important prerequisite for extending the service life of an MWF in the metalworking industry. The presented method can be applied as an in-process analytical tool that allows one to compensate for aging effects during use of the MWF by taking appropriate corrective measures, such as pH correction or adjustment of the concentration.
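The two-stage use of the models, first predicting the MWF concentration from µa and then feeding that prediction into the pH model alongside µ's, can be sketched as follows. Ordinary least squares stands in for the paper's PLS-R models to keep the example dependency-light, and all data are synthetic; only the two-stage structure mirrors the described approach.

```python
import numpy as np

# Synthetic stand-ins for the measured quantities (illustrative only).
rng = np.random.default_rng(0)
n = 40
conc = rng.uniform(2, 10, n)                  # MWF concentration, wt%
ph = rng.uniform(8.5, 9.5, n)                 # pH
mu_a = np.c_[conc * 0.8 + 0.1, np.ones(n)]    # mu_a driven mainly by conc
mu_s = np.c_[ph * 0.5 + conc * 0.2, np.ones(n)]  # mu_s' mixes pH and conc

# Stage 1: predict concentration from the absorption coefficients.
w1, *_ = np.linalg.lstsq(mu_a, conc, rcond=None)
conc_hat = mu_a @ w1

# Stage 2: predict pH from mu_s' together with the stage-1 concentration,
# mirroring how prior concentration knowledge refines the pH model.
X2 = np.c_[mu_s, conc_hat]
w2, *_ = np.linalg.lstsq(X2, ph, rcond=None)
ph_hat = X2 @ w2
```

Because the synthetic relationships are exactly linear, both stages recover their targets; with real spectra, PLS-R handles the collinear, high-dimensional inputs that plain least squares cannot.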
Context: The manufacturing industry is facing a transformation with regard to Industry 4.0 (I4). A transformation towards full automation of production including a multitude of innovations is necessary. Startups and entrepreneurial processes can support such a transformation as has been shown in other industries. However, I4 has some specifics, so it is unclear how entrepreneurship can be adapted in I4. Understanding these specifics is important to develop suitable training programs for I4 startups and to accelerate the transformation.
Objective: This study identifies and outlines the essential characteristics and constraints of entrepreneurial processes in I4.
Method: 14 semi-structured interviews were conducted with experts in the field of I4 entrepreneurship. The interviews were analyzed and categorized using qualitative analysis.
Results: The interviews revealed several characteristics of I4 that have a significant impact on the various phases of the entrepreneurial process. Examples of such specifics include the difficult access to customers, the necessary deep understanding of the customer and the domain, the difficulty of testing risky assumptions, and the complex development and productization of solutions. The complexity of hardware and software components, cost structures, and necessary customer-specific customizations affect the scalability of I4 startups. These essential characteristics also require specialised skills and resources from I4 startups.
Monodisperse polystyrene spheres are functional materials with interesting properties, such as high cohesion strength, strong adsorptivity, and surface reactivity. They have shown high application value in biomedicine, information engineering, chromatographic fillers, supercapacitor electrode materials, and other fields. To fully understand and tailor particle synthesis, methods for characterizing their complex 3D morphological features need to be further explored. Here we present a chemical imaging study based on three-dimensional confocal Raman microscopy (3D-CRM), scanning electron microscopy (SEM), focused ion beam (FIB) milling, diffuse reflectance infrared Fourier transform (DRIFT) spectroscopy, and nuclear magnetic resonance (NMR) spectroscopy of individual porous swollen polystyrene/poly(glycidyl methacrylate-co-ethylene dimethacrylate) particles. Polystyrene particles were synthesized with different co-existing chemical entities, which could be identified and assigned to distinct regions of the same particle. The porosity was studied by a combination of SEM and FIB. Images of milled particles indicated comparable porosity on the surface and in the bulk. The combination of standard analytical techniques such as DRIFT and NMR spectroscopy yielded new insights into the inner structure and chemical composition of these particles. This knowledge supports the further development of particle synthesis and the design of new strategies to prepare particles with complex hierarchical architectures.
Product roadmaps in the new mobility domain: state of the practice and industrial experiences
(2021)
Context: The New Mobility industry is a young market with high market dynamics and is therefore associated with a high degree of uncertainty. Traditional product roadmapping approaches, such as the detailed planning of features over a long time horizon, typically fail in such environments. For this reason, companies active in the field of New Mobility face the challenge of keeping their product roadmaps reliable for stakeholders while at the same time being able to react flexibly to changing market requirements.
Objective: The goal of this paper is to identify the state of practice regarding product roadmapping of New Mobility companies. In addition, the related challenges within the product roadmapping process as well as the success factors to overcome these challenges will be highlighted.
Method: We conducted semi-structured expert interviews with eight experts (seven from German companies and one from a Finnish company) in the field of New Mobility and performed a content analysis.
Results: Overall, the results of the study showed that the participating companies are aware of the requirements that the New Mobility sector entails and therefore exhibit a high level of maturity in terms of product roadmapping. Nevertheless, some aspects pose specific challenges for the participating companies. One major challenge, for example, is that New Mobility with public clients is often a tender business with non-negotiable product requirements; thus, the product roadmap can be significantly influenced from the outside. The success factors for product roadmapping that were mentioned were mainly soft factors, such as trust between all people involved in the product development process and transparency throughout the entire roadmapping process.
Software is an integral part of new features in the automotive sector. To determine software quality, car manufacturers in the Hersteller Initiative Software (HIS) consortium defined a set of metrics. Yet, problems with assigning metrics to quality attributes often occur in practice, and the specified boundary values lead to discussions between contractors and clients, as different standards and metric sets are used. This paper studies metrics used in the automotive sector and the quality attributes they address. The HIS, ISO/IEC 25010:2011, and ISO/IEC 26262:2018 are utilized to draw a big picture illustrating (i) which metrics and boundary values are reported in the literature, (ii) how the metrics match the standards, (iii) which quality attributes are addressed, and (iv) how the metrics are supported by tools. Our findings from analyzing 38 papers include a catalog of 112 metrics, of which 17 define boundary values and 48 are supported by tools. Most of the metrics are concerned with source code, are generic, and are not specifically designed for automotive software development. We conclude that many metrics exist, but a clear definition of the metrics' context, notably regarding the construction of flexible and efficient measurement suites, is missing.
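One metric commonly discussed alongside the HIS set is McCabe's cyclomatic complexity v(G). A deliberately simplified checker (written in Python using its `ast` module, though automotive code would be C) illustrates how such a metric and a boundary value can be computed and enforced; the boundary value used here is illustrative, and real tools also count boolean operators, `case` labels, and exception handlers.

```python
import ast

VG_LIMIT = 10  # illustrative boundary value; actual project limits vary

def cyclomatic_complexity(source: str) -> int:
    # v(G) = number of decision points + 1 (simplified: branches only).
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While))
        for node in ast.walk(tree)
    )
    return decisions + 1

snippet = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
vg = cyclomatic_complexity(snippet)
within_limit = vg <= VG_LIMIT
```

Disagreements of the kind the paper describes arise precisely here: different tools count different constructs as decision points, so the same code can pass one supplier's boundary check and fail another's.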
This paper presents the concept of a system architecture for a flexible cyber-physical factory control system. The system allows the automation of process structures using cyber-physical fractal nodes. These nodes are functional and independent and can be clustered into larger structures, making it possible to equip the factory with a flexible, freely scalable, modular system. The concept outlines this system architecture together with its associated rules and conditions.
Human bestrophin-1 (hBest1) is a transmembrane channel protein associated with the calcium-dependent transport of chloride ions in the retinal pigment epithelium as well as with the transport of glutamate and GABA in nerve cells. Interactions between hBest1, sphingomyelins, phosphatidylcholines and cholesterol are crucial for hBest1 association with cell membrane domains and for its biological functions. As cholesterol plays a key role in the formation of lipid rafts, the motional ordering of lipids and the modeling/remodeling of the lateral membrane structure, we examined the effect of different cholesterol concentrations on the surface tension of hBest1/POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) and hBest1/SM Langmuir monolayers in the presence/absence of Ca2+ ions using surface pressure measurements and Brewster angle microscopy. Here, we report that cholesterol (1) has a negligible condensing effect on pure hBest1 monolayers, detected mainly in the presence of Ca2+ ions, and (2) induces a condensing effect on composite hBest1/POPC and hBest1/SM monolayers. These results offer evidence for the significance of intermolecular protein-lipid interactions for the conformational dynamics of hBest1 and its biological functions as a multimeric ion channel.
The maintenance of railway infrastructure remains a challenge. Data acquisition technologies have evolved because of Industry 4.0, expanding the capabilities of predictive maintenance. Despite the advances, the potential of these emerging technologies has not been fully realised. This paper presents a technology selection framework in support of railway infrastructure predictive maintenance, which is based on qualitative methods. It consists of three stages, including the mapping of the infrastructure characteristics with the identified technologies, the evaluation of the most appropriate technologies, and the sourcing thereof. This presents the collective decision support output of the framework.
Since the beginning of the energy sector liberalization, the design of energy markets has become a prominent field of research. Markets nowadays facilitate efficient resource allocation in many fields of energy system operation, such as plant dispatch, control reserve provisioning, delimitation of related carbon emissions, grid congestion management, and, more recently, smart grid concepts and local energy trading. Therefore, good market designs play an important role in enabling the energy transition toward a more sustainable energy supply for all. In this chapter, we retrace how market engineering shaped the development of energy markets and how the research focus shifted from national wholesale markets to more decentralized and location-sensitive concepts.
Reacting to ever-changing business environments, complex systems of systems have made giant leaps forward in the last decade, leading to great technological flexibility. However, this flexibility is often limited by the rigidity of super-ordinated planning systems. Especially when hybrid teams of automated and human resources are in place, the dynamic assignment of tasks that takes ergonomics into account remains a challenge. After exposing a gap in the state of the art on the topic, this paper presents an approach to include ergonomics in dynamic resource allocation models. Combining and complementing existing approaches, the presented method monitors the actual ergonomic burden on the resources during a shift and provides a linear optimization model to steer the resource allocation process.
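The allocation idea can be sketched on a toy instance (all burden, duration, and capacity numbers are invented for illustration, and a brute-force search stands in for the paper's linear optimization model, which scales the same constraints up):

```python
from itertools import product

# Hypothetical data: 3 tasks, 2 resources (e.g. one human worker, one robot).
burden = [[1, 2], [3, 1], [5, 2]]   # ergonomic cost of task t on resource r
time = [2, 3, 4]                    # task durations
capacity = [5, 5]                   # remaining shift capacity per resource

best, best_cost = None, float("inf")
# Enumerate all task-to-resource assignments (feasible for small instances;
# an LP/ILP solver replaces this loop in a real allocation model).
for assign in product(range(2), repeat=3):
    load = [0, 0]
    for t, r in enumerate(assign):
        load[r] += time[t]
    if any(l > c for l, c in zip(load, capacity)):
        continue                    # violates a capacity constraint
    cost = sum(burden[t][r] for t, r in enumerate(assign))
    if cost < best_cost:
        best, best_cost = assign, cost

print(best, best_cost)  # → (0, 0, 1) 6
```

Re-solving this after each monitoring update is what lets the allocation react to the ergonomic burden accumulated during the shift.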
Ambitious goals set by the European Union strategy for reducing the emissions of multimodal logistic chains, together with new requirements for intermodal terminals arising from evolving customer needs, contribute to a shift in the driver of infrastructure development: from economy of scale to economy of density. This paper presents an innovative method for designing a process-oriented technology chain for intermodal terminals in order to fulfill these new, demanding requirements. The results of the case study of the Zero Emission Logistic Terminal Reutlingen are presented, highlighting how this particular context enables the design and development of a modular concept and paving the way for generalizing the findings and transferring them to similar contexts in other European cities.
The seamless fusion of the virtual world of information with the real physical world of things is considered the key to mastering the increasing complexity of production networks in the context of Industry 4.0. This fusion, widely referred to as the Internet of Things (IoT), is primarily enabled through the use of automatic identification (Auto-ID) technologies as an interface between the two worlds. Existing Auto-ID technologies almost exclusively rely on artificial features or identifiers that are attached to an object for the sole purpose of identification. In fact, using artificial features for the purpose of identification causes additional effort and is not always applicable. This paper, therefore, follows an approach of using multiple natural object features defined by the technical product information from computer-aided design (CAD) models for direct identification. By extending optical instance-level 3D-object recognition by means of additional non-optical sensors, a multi-sensor automatic identification system (AIS) is realised, capable of identifying unpackaged piece goods without the need for artificial identifiers. While the implementation of a prototype confirms the feasibility of the approach, first experiments show improved accuracy and distinctiveness in identification compared to optical instance-level 3D-object recognition. This paper aims to introduce the concept of multi-sensor identification and to present the prototype multi-sensor AIS.
So-called cloud-based management information systems are a fairly recent phenomenon in management accounting. Quite a few companies (and especially their business managers and management accountants) do not work via the cloud at all, but rather with hybrid or on-premise solutions of ERP software such as SAP or Oracle, and often still with "manual" solutions such as Microsoft Excel.
This contribution presents a three-phase power stage for motor control with continuous output voltages using wide bandgap semiconductors and an asynchronous delta-sigma based switching signal generation. The focus of the paper is on an active damping approach for the LC output filter based on inductor current feedback.
This paper illustrates the implementation of series-connected hardware modules as part of a scalable and modular power electronics device, which is ideally suited to the field of electric vehicles using wide-bandgap semiconductor devices. The main benefit of the modular concept is that different current or voltage requirements can be satisfied by the appropriate series or parallel connection of single modules. The particular design is based on the fact that the single modules generate a continuous and specified output voltage from a given dc voltage. The current work provides a brief classification of this work among different series-connected power converter concepts and focuses, in particular, on an active damping approach for the series-connected LC output filters based on inductor current feedback.
This paper presents a modular and scalable power electronics concept for motor control with continuous output voltage. In contrast to multilevel concepts, modules with continuous output voltage are connected in series. The continuous output voltage of each module is obtained by using gallium nitride (GaN) high electron mobility transistors (HEMTs) as switches inside the modules with a switching frequency in the range between 500 kHz and 1 MHz. Due to this high switching frequency, an LC filter is integrated into the module, resulting in a continuous output voltage. A main topic of the paper is the active damping of this LC output filter for each module and the analysis of the damping behaviour of the series connection. The results are illustrated with simulations and measurements.
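As an illustrative sketch (not taken from these papers), the effect of inductor current feedback on a second-order LC output filter can be written as a virtual series resistance. With filter inductance L, capacitance C, parasitic resistance R, and feedback gain k:

```latex
% Undamped LC filter (parasitic resistance R only):
\frac{V_{out}(s)}{V_{in}(s)} = \frac{1}{LCs^2 + RCs + 1}
% Feeding back the inductor current with gain k acts like an added
% resistance in series with L:
\frac{V_{out}(s)}{V_{in}(s)} = \frac{1}{LCs^2 + (R + k)Cs + 1},
\qquad
\zeta = \frac{R + k}{2}\sqrt{\frac{C}{L}}
```

Choosing k to place the damping ratio ζ near critical damping suppresses the filter resonance without the losses of a physical damping resistor, which is the motivation for active damping at these high switching frequencies.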
Escherichia coli (E. coli) is considered the most common life-threatening infectious bacterium in our daily life and poses a major challenge to human health. However, the frequent overuse and misuse of antibiotics has triggered increased multidrug resistance, which hinders therapeutic outcomes and causes higher mortality. Herein, we address near-infrared (NIR) laser-excited, human serum albumin (HSA) mediated graphene oxide loaded with palladium nano-dots (HSA-GO-Pd) that can effectively combat Gram-negative E. coli in vitro. Electron spin-resonance (ESR) analysis shows that the designed hybrid material generates high levels of singlet oxygen and hydroxyl radicals under NIR laser excitation. Transmission electron microscope (TEM) images show small spherical PdNPs on the surface of the GO nano-sheets. The zeta (ζ) potential study indicates that, in an aqueous medium, the average PdNP size is 5–8 nm and the surface charge, imparted by the capping protein (HSA), is +25 mV. The spectroscopic characterization reveals that, in the synthesized HSA-GO-Pd nanocomposite, the PdNPs are well dispersed on the surface of the graphene oxide. The as-synthesized HSA-GO-Pd shows excellent antibacterial activity against the Gram-negative pathogen, killing 95% of the bacteria within 5 h. HSA-GO-Pd is highly biocompatible and shows significant antibacterial activity. Owing to its intense photothermal conversion potential and low toxicity to normal cells, the hybrid (HSA-GO-Pd) combined with NIR irradiation provides valuable insight into the effective ablation of pathogenic bacteria.
We present the modification of ethylene-propylene rubber (EPM) with vinyltetramethyldisiloxane (VTMDS) via reactive extrusion to create a new silicone-based material with the potential for high-performance applications in the automotive, industrial and biomedical sectors. The radical-initiated modification is achieved with a peroxide catalyst starting the grafting reaction. The preparation process of the VTMDS-grafted EPM was systematically investigated using process analytical technology (in-line Raman spectroscopy) and the statistical design of experiments (DoE). By applying an orthogonal factorial array based on a face-centered central composite experimental design, the effects of the process factors on the grafting result were identified, quantified and mathematically modeled. Based on response surface models, process windows were defined that yield high grafting degrees and good grafting efficiency in terms of grafting agent utilization. To control the grafting process in terms of grafting degree and grafting efficiency, the chemical changes taking place during the modification procedure in the extruder were observed in real time using a spectroscopic in-line Raman probe inserted directly into the extruder. Successful grafting of the EPM was validated in the final product by 1H-NMR and FTIR spectroscopy.
This paper presents a machine learning powered, procedural sizing methodology based on pre-computed look-up tables containing operating point characteristics of primitive devices. Several Neural Networks are trained for 90nm and 45nm technologies, mapping different electrical parameters to the corresponding dimensions of a primitive device. This transforms the geometric sizing problem into the domain of circuit design experts, where the desired electrical characteristics are now inputs to the model. Analog building blocks or entire circuits are expressed as a sequence of model evaluations, capturing the sizing strategy and intention of the designer in a procedure, which is reusable across different technology nodes. The methodology is employed for the sizing of two operational amplifiers, and evaluated for two technology nodes, showing the versatility and efficiency of this approach.
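The table-inversion idea behind such a flow can be sketched as follows (a toy square-law device model with assumed process parameters stands in for the pre-computed simulation tables, and a nearest-neighbor search stands in for the trained neural networks):

```python
# Hypothetical square-law MOSFET model as a stand-in for simulated look-up data.
K, VTH, L_CH = 200e-6, 0.4, 1e-6   # assumed process parameters
VGS = 0.6                          # fixed operating point

def gm(width):
    """Transconductance of a square-law device at the fixed bias point."""
    return K * (width / L_CH) * (VGS - VTH)

# Pre-computed look-up table: device width -> electrical characteristic.
widths = [w * 1e-6 for w in range(1, 101)]   # 1 um .. 100 um
table = [(gm(w), w) for w in widths]

def size_for_gm(target_gm):
    """Invert the table: pick the width whose gm is closest to the target.
    (A trained model would interpolate and generalize instead of snapping
    to grid points, but the input/output direction is the same.)"""
    return min(table, key=lambda e: abs(e[0] - target_gm))[1]

w = size_for_gm(1e-3)   # ask for gm = 1 mS
print(w)                # → 2.5e-05, i.e. W = 25 um
```

Expressing a building block as a sequence of such evaluations is what makes the sizing procedure portable: only the tables (or models) change between technology nodes, not the designer's sizing strategy.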
Respiratory diseases are leading causes of death and disability in the world. The recent COVID-19 pandemic also affects the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and a clinical environment. Most of the techniques used to date are also invasive or expensive.
Some research groups are developing hardware devices and techniques to make non-invasive or even remote respiratory sound acquisition possible. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, such as microphones, wearables, and Internet of Things devices, that can be used in this scope.
Some interesting applications have been found. Some devices ease sound acquisition in a clinical environment, while others make daily monitoring outside that setting possible. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
Context: Many companies are facing an increasingly dynamic and uncertain market environment, making traditional product roadmapping practices no longer sufficiently applicable. As a result, many companies need to adapt their product roadmapping practices to continue operating successfully in today’s dynamic market environment. However, transforming product roadmapping practices is a difficult process for organizations. Existing literature offers little help on how to accomplish such a process.
Objective: The objective of this paper is to present a product roadmap transformation approach for organizations to help them identify appropriate improvement actions for their roadmapping practices using an analysis of their current practices.
Method: Based on an existing assessment procedure for evaluating product roadmapping practices, the first version of a product roadmap transformation approach was developed in workshops with company experts. The approach was then given to eleven practitioners and their perceptions of the approach were gathered through interviews.
Results: The result of the study is a transformation approach consisting of a process describing what steps are necessary to adapt the currently applied product roadmapping practice to a dynamic and uncertain market environment. It also includes recommendations on how to select areas for improvement and two empirically based mapping tables. The interviews with the practitioners revealed that the product roadmap transformation approach was perceived as comprehensible, useful, and applicable. Nevertheless, we identified potential for improvements, such as a clearer presentation of some processes and the need for more improvement options in the mapping tables. In addition, minor usability issues were identified.
Context: Currently, most companies apply approaches for product roadmapping that are based on the assumption that the future is highly predictable. However, nowadays companies are facing the challenge of increasing market dynamics, rapidly evolving technologies, and shifting user expectations. Together with the adoption of lean and agile practices, this makes it increasingly difficult to plan and predict upfront which products, services, or features will satisfy the needs of the customers. Therefore, companies are struggling with their ability to provide product roadmaps that fit dynamic and uncertain market environments and that can be used together with lean and agile software development practices.
Objective: To gain a better understanding of modern product roadmapping processes, this paper aims to identify suitable processes for the creation and evolution of product roadmaps in dynamic and uncertain market environments.
Method: We performed a Grey Literature Review (GLR) according to the guidelines from Garousi et al.
Results: 32 approaches to product roadmapping were identified. Typical characteristics of these processes are the strong connection between the product roadmap and the product vision, an emphasis on stakeholder alignment, the definition of business and customer goals as part of the roadmapping process, a high degree of flexibility with respect to reaching these goals, and the inclusion of validation activities in the roadmapping process. An overall goal of nearly all approaches is to avoid waste by reducing development and business risks early. From the list of the 32 approaches found, four representative roadmapping processes are described in detail.
Context: The software-intensive business is characterized by increasing market dynamics, rapid technological changes, and fast-changing customer behaviors. Organizations face the challenge of moving away from traditional roadmap formats to an outcome-oriented approach that focuses on delivering value to the customer and the business. An important starting point and a prerequisite for creating such outcome-oriented roadmaps is the development of a product vision to which internal and external stakeholders can be aligned. However, the process of creating a product vision is little researched and understood.
Objective: The goal of this paper is to identify lessons learned from product vision workshops that were conducted to develop outcome-oriented product roadmaps.
Method: We conducted a multiple-case study consisting of two different product vision workshops in two different corporate contexts.
Results: Our results show that conducting product vision workshops helps to create a common understanding among all stakeholders about the future direction of the products. In addition, we identified key organizational aspects that contribute to the success of product vision workshops, including the participation of employees from functionally different departments.
Context: Nowadays, companies are challenged by increasing market dynamics, rapid changes and disruptive participants entering the market. To survive in such an environment, companies must be able to quickly discover product ideas that meet the needs of both customers and the company and deliver these products to customers. Dual-track agile is a new type of agile development that combines product discovery and delivery activities in parallel, iterative, and cyclical ways. At present, many companies have difficulties in finding and establishing suitable approaches for implementing dual-track agile in their business context.
Objective: In order to gain a better understanding of how product discovery and product delivery can interact with each other and how this interaction can be implemented in practice, this paper aims to identify suitable approaches to dual-track agile.
Method: We conducted a grey literature review (GLR) according to the guidelines of Garousi et al.
Results: Several approaches that support the integration of product discovery with product delivery were identified. This paper presents a selection of these approaches, i.e., the Discovery-Delivery Cycle model, Now-Next-Later Product Roadmaps, Lean Sprints, Product Kata, and Dual-Track Scrum. The approaches differ in their granularity but are similar in their underlying rationales. All approaches aim to ensure that only validated ideas turn into products and thus promise to lead to products that are better received by their users.
How to prioritize your product roadmap when everything feels important: a grey literature review
(2021)
Context: A key factor in achieving product success is to identify what outputs must be launched, and in which order, to deliver the most value to the customer and the business. Therefore, a well-established process for discovering and prioritizing the content of the product roadmap in the right way is crucial for the success of a company. However, most companies prioritize their product roadmap items based on the opinions of experts or the management. Additionally, increasing market dynamics, rapidly evolving technologies, and fast-changing customer behavior complicate the prioritization process. Consequently, many companies are struggling to find and establish suitable techniques for prioritizing their product roadmap.
Objective: In order to gain a better understanding of the prioritization process in a dynamic and uncertain market environment, this paper aims to identify suitable techniques for the prioritization in such environments.
Method: We conducted a Grey Literature Review according to the guidelines of Garousi et al.
Results: 18 techniques for prioritizing the product roadmap could be identified. 15 techniques are primarily used to prioritize outputs by considering factors such as the expected impact or effort. Two techniques are most suitable for prioritizing risky assumptions that need to be validated, and one technique focuses on the prioritization of outcomes. All techniques have in common that they should be conducted as a cross-functional team activity in order to include different perspectives in the prioritization process.
Public enterprises find themselves in increasingly competitive markets, a situation that makes having an entrepreneurial orientation (EO) an urgent need, given that EO is an indispensable driver of performance. Research describes politicians delaying the strategic change of public enterprises when serving as board members, but empirical evidence of the impact of board behavior on EO in public enterprises is lacking. We draw on stakeholder-agency theory (SAT) and resource dependence theory (RDT) and use structural equation modeling (SEM) to investigate survey data collected from 110 German energy suppliers that are majority government owned. Results indicate that board strategy control and board networking do not seem to predict EO at first sight. Closer analysis reveals a board networking–EO relationship that depends on ownership structure. Remarkably, we find that it is not the usually suspected local municipal owner who hinders EO in our sample organizations but minority shareholders engaging in board networking activities. The results shed light on the intersection of governance and entrepreneurship with special reference to the fine-grained conceptualization of RDT.
Corporate entrepreneurship in the public sector: exploring the peculiarities of public enterprises
(2021)
Entrepreneurship is predominantly treated as a private-sector phenomenon, and consequently its increasing importance in the public sector goes largely unremarked. This impedes the ability of entrepreneurship research to span multiple sectors. Accordingly, recent research calls for the study of corporate entrepreneurship (CE) as it manifests in the public sector, where it can be labeled public entrepreneurship (PE). This dissertation considers government an essential entrepreneurial actor and is led by the central research question: What are the peculiarities of the public sector and how do they impact public enterprises’ entrepreneurial orientation (EO)?
Accordingly, this dissertation includes three studies focusing on public enterprises. Two of the studies set the scope of this thesis by investigating a specific type of organization in a specific context—German majority-government-owned energy suppliers. These enterprises operate in a liberalized market experiencing environmental uncertainties like competitiveness and business transformation.
The aims and results of the studies included in this dissertation can be summarized as follows: The systematic literature review illuminates the stimuli of and barriers to entrepreneurial activities in public enterprises and the potential outcomes of such activities discussed so far. The review reveals that research on EO has tended to focus on the private sector and consequently that barriers to and outcomes of entrepreneurial activities in the public sector remain under-researched. Building on these findings, the qualitative study focuses on the interrelated barriers affecting entrepreneurship in public enterprises and the outcomes of entrepreneurial activities being inhibited. The study adopts an explorative comparative causal mapping approach to address the above-mentioned research goal and the lack of clarity around how barriers identified in the public sphere are interrelated. Furthermore, the study bases its investigation on the different business segments of sales (competitive market) and the distribution grid (natural monopoly) to account for recent calls for fine-grained research on PE. Results were compared with prior findings in the public and private sector. That comparison indicates that the barriers revealed align with aspects discussed in prior research findings relating to both sectors. Examples include barriers associated with the external environment such as legal constraints and barriers originating from within the organization such as employee behavior linked to a value system that hampers entrepreneurial action. However, the most important finding is that a public enterprise’s supervisory board can hinder its progress, a finding running counter to those of previous private-sector research and one that underscores the widespread prejudice that the involvement of a public shareholder and its nominated board of directors has a negative effect on EO. 
The third study is quantitative (data collection via a questionnaire) and builds on both its predecessors to examine the little understood topic of board behavior and public enterprises’ social orientation as predictors of EO. The study’s results indicate that social orientation represses EO, whereas board strategy control (BSC) does not seem to predict EO. Regarding BSC, we find that the local government owners in our sample are less involved in BSC. The third study also examines board networking and finds its relationship with EO depends on the ownership structure of the public-sector organization. An important finding is that minority shareholders, such as majority privately-owned enterprises and hub firms, repress EO when engaging in board networking.
In summary, this doctoral thesis contributes to the under-researched topic of CE in the public sector. It investigates the peculiarities of this sector by focusing, in the quantitative study, on the supervisory board and socially oriented activities and their impact on the enterprise’s EO. The thesis addresses institutional questions regarding ownership, and the last study in particular contributes to expanding resource dependence theory and invites a nuanced perspective: the original perspective suggests that interorganizational arrangements like interfirm network ties and equity holdings reduce external resource dependency and consequently improve firm performance. The findings within this thesis expose resource delivery to potential contrary effects, extending the understanding of interorganizational action with important implications for practice.
Although spiral antennas have undergone continuous development and refinement since Edwin Turner conceived them in 1954, only a few compact planar arrays exist. The shortcoming is even more significant when it comes to spiral antenna arrays operating in mode M2. The present work addresses this issue, among other things. It presents two planar arrays of spiral antennas operating in the same frequency band, the first radiating an axial mode M1 and the second a conical mode M2. Both arrays are modeled, simulated, and fed with a corporate feeding network embedded in a dielectric substrate. It is shown that, keeping the same topology, the array for mode M1 can be obtained from the array for mode M2 by simply introducing a phase shift on one branch of the feed, and vice versa, thus providing the possibility of obtaining, in the same structure, a spiral antenna array operating in both modes in the same frequency band simultaneously. Comparison between simulated and measured data shows good agreement.
Logistics has undergone tremendous changes over the past few decades. Above all with the advent of the digital age, we have witnessed the significant impact of new technologies on supply chains in terms of business transformation, increased agility and performance. However, many businesses have yet to harness the full potential of these technologies to create further value (Bughin et al., 2017). High investment costs, fears about cyber security, a lack of expertise in the workforce and insufficient awareness of the concrete benefits of these technologies are just some of the factors hampering the decision to adopt digital technologies.
The following chapter draws on the findings of recent quantitative and qualitative research conducted by both practitioners and academics.
Annotations of character IDs in news images are critical as ground truth for news retrieval and recommendation systems. The universality and accuracy optimization of deep neural network models constitutes the key technology for improving the precision and computing efficiency of automatic news character identification, which is attracting increased attention globally. This paper explores an optimized deep neural network model for automatic focus personage identification in multi-lingual news. First, the face model of the focus personage is trained by using the corresponding face images from German news as positive samples. Next, the scheme of Recurrent Convolutional Neural Network (RCNN) + Bi-directional Long-Short Term Memory (Bi-LSTM) + Conditional Random Field (CRF) is utilized to label the focus name, and the RCNN-RCNN encoder–decoder is applied to translate names of people into multiple languages. Third, face features are described by combining the advantages of Local Gabor Binary Pattern Histogram Sequence (LGBPHS) and RCNN, and iterative quantization (ITQ) is used to binarize codes. Finally, a name semantic network is built for different domains. Experiments are performed on a dataset comprising approximately 100,000 news images. The experimental results demonstrate that the proposed method achieves a significant improvement over other algorithms.
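The ITQ binarization step can be sketched in a few lines of NumPy (random features stand in for the LGBPHS/RCNN descriptors; the alternating updates follow the standard ITQ formulation, not necessarily the exact variant used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((500, 16))    # stand-in for zero-centered face features

# Iterative quantization (ITQ): learn an orthogonal rotation R that minimizes
# the quantization loss ||B - V R||_F between features and binary codes B.
R = np.linalg.qr(rng.standard_normal((16, 16)))[0]   # random orthogonal init
for _ in range(50):
    B = np.sign(V @ R)                # fix R, update the binary codes
    U, _, Wt = np.linalg.svd(B.T @ V) # fix B, update R (orthogonal Procrustes)
    R = (U @ Wt).T

codes = (np.sign(V @ R) > 0).astype(np.uint8)        # final 16-bit codes
print(codes.shape)                    # → (500, 16)
```

The resulting compact binary codes are what make large-scale face matching over ~100,000 images fast, since Hamming distance replaces floating-point comparison.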
The integration of renewable energy sources in single-family homes is challenging. Advance knowledge of the demand for electrical energy, heat, and domestic hot water (DHW) is useful for scheduling projectable devices like heat pumps. In this work, we consider demand time series for heat and DHW from 2018 for a single-family home in Germany. We compare different forecasting methods for predicting such demands for the next day. The 1-day-back forecast method performed best for heat demand, while the N-day average performed best for DHW demand when the Unbiased Exponentially Moving Average (UEMA) is used with a memory of 2.5 days. This is surprising, as these forecasting methods are very simple and do not leverage additional information sources such as weather forecasts.
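A minimal sketch of the UEMA forecast (the demand values are invented, and the α here is an illustrative choice, not the paper's calibration of the 2.5-day memory):

```python
def uema(series, alpha):
    """Unbiased exponentially moving average: an EMA whose weights are
    renormalized so early samples are not biased toward the initial state."""
    num = den = 0.0
    for y in series:
        num = (1 - alpha) * num + alpha * y   # exponentially weighted sum
        den = (1 - alpha) * den + alpha       # sum of the weights used so far
    return num / den

# Hypothetical daily DHW demand history (kWh); forecast tomorrow's demand
# as the UEMA of the history, weighting recent days more heavily.
demand = [4.0, 5.5, 3.0, 6.0, 4.5]
forecast = uema(demand, alpha=0.4)   # alpha ~ 1/memory (illustrative)
print(round(forecast, 3))
```

The appeal of such predictors is exactly what the abstract notes: a handful of arithmetic operations per day, with no weather model or exogenous inputs.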
Continuous monitoring of individual vital parameters can provide information for the assessment of one’s health and indications of medical problems in the context of personalized medicine. Correlations between parameters and health issues are to be evaluated. As one project in this topic area, a telemedicine platform is implemented to gather data from outpatients via wearables and accumulate them for physicians and researchers to review. This work extracts requirements, draws use case scenarios, and shows the current system architecture, consisting of a patient application, a physician application with a web server, and a backend server application. In further work, the prototype will assist in developing a vendor-free and open monitoring solution. Conclusions on functionality and usability will be drawn in an imminent first study.