Flame-retardant finishing of cotton fabrics using DOPO functionalized alkoxy- and amido alkoxysilane
(2023)
In the present study, DOPO-based alkoxysilane (DOPO-ETES) and amido alkoxysilane (DOPO-AmdPTES) were synthesized in one step and without by-products as halogen-free flame retardants. The flame retardants were applied to cotton fabric using the sol–gel method and a pad-dry-cure finishing process. The flame retardancy, thermal stability, and combustion behaviour of the treated cotton were evaluated by surface and bottom edge ignition flame tests (according to EN ISO 15025), thermogravimetric analysis (TGA), and micro-scale combustion calorimetry (MCC). Unlike the CO/DOPO-ETES sample, cotton treated with DOPO-AmdPTES nanosols exhibits self-extinguishing behaviour with high char residue, an improved LOI value, and a significant reduction of the PHRR, HRC, and THR compared to pristine cotton. Cotton finished with DOPO-AmdPTES shows semi-durability, keeping its flame-retardant properties unchanged after ten laundering cycles. According to the results obtained from TGA-FTIR, Py-GC/MS, and XPS, the flame retardant acts mainly in the condensed phase via catalytically induced char formation as a physical barrier, along with gas-phase activity derived mainly from a dilution effect. The early degradation of CO/DOPO-AmdPTES compared to CO/DOPO-ETES, triggered by cleavage of the weak bond between P and C=O as indicated by the DFT study, provides the beneficial effect of this flame retardant on the fire resistance of cellulose.
Characterization of low density polyethylene greenhouse films during the composting of rose residues
(2022)
This study evaluates organic composting as a potential route for plastic degradation. It stems from the urgent need to find solutions for plastic residues and focuses on the compost-based degradation of greenhouse film covers at a major rose-exporting company in Ecuador. Thus, the study analyzes the physical, chemical, and biological changes during the composting of rose wastes and also evaluates the stability of new and aged agricultural plastic under these conditions. Interestingly, the compost characterization shows a slow degradation rate of organic matter and total organic carbon, along with a significant increase in pH and a rise of bacterial populations. However, the results demonstrate that, despite these findings, composting conditions had no significant influence on plastic degradation; while deterioration of aged plastic samples was observed in some tests, it may be the result of environmental conditions and prolonged exposure to solar radiation. Importantly, these factors could facilitate the adhesion of microorganisms and promote plastic biodegradation. Hence, future studies are encouraged to analyze the ecotoxicity of plastics in the compost, as well as to isolate, identify, and evaluate the possible biodegradative potential of these microorganisms as an alternative for plastic waste management.
In our initial DaMoN paper, we set out to revisit the results of "Staring into the Abyss [...] of Concurrency Control with [1000] Cores" (Yu et al. in Proc. VLDB Endow. 8:209-220, 2014). Contrary to their assumption, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware is prevalent and in fact offers over 1000 cores. Hence, we evaluated concurrency control (CC) schemes on a real (Intel-based) multi-socket platform. To our surprise, we made interesting findings opposing the results of the original analysis, which we discussed in our initial DaMoN paper. In this paper, we broaden our analysis further, detailing the effect of hardware and workload characteristics via additional real hardware platforms (IBM Power8 and 9) and the full TPC-C transaction mix. Among others, we identified clear connections between the performance of the CC schemes and hardware characteristics, especially concerning NUMA and CPU caches. Overall, we conclude that no CC scheme can efficiently make use of large multi-socket hardware in a robust manner, and we suggest several directions on how CC schemes and OLTP DBMS overall should evolve in the future.
Glioblastoma WHO IV belongs to a group of brain tumors that are still incurable. A promising treatment approach applies photodynamic therapy (PDT) with hypericin as a photosensitizer. To generate a comprehensive understanding of the photosensitizer-tumor interactions, the first part of our study focuses on investigating the distribution and penetration behavior of hypericin in glioma cell spheroids by fluorescence microscopy. In the second part, fluorescence lifetime imaging microscopy (FLIM) was used to correlate fluorescence lifetime (FLT) changes of hypericin with environmental effects inside the spheroids. In this context, 3D tumor spheroids are an excellent model system since they reproduce 3D cell–cell interactions and an extracellular matrix similar to tumors in vivo. Our analytical approach considers hypericin as a probe molecule for FLIM and as a photosensitizer for PDT at the same time, making it possible to directly draw conclusions about the state and location of the drug in a biological system. Knowing both state and location of hypericin enables a fundamental understanding of the impact of hypericin PDT in brain tumors. Following different incubation conditions, the hypericin distribution in peripheral and central cryosections of the spheroids was analyzed. Both fluorescence microscopy and FLIM revealed a hypericin gradient towards the spheroid core for short incubation periods or small concentrations, whereas a homogeneous hypericin distribution is observed for long incubation times and high concentrations. The observed FLT change is especially crucial for the PDT efficiency, since the triplet yield, and hence the O2 activation, is directly proportional to the FLT. Based on the FLT increase inside spheroids, an incubation time of 30 min is required to achieve the most suitable conditions for an effective PDT.
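The stated link between fluorescence lifetime and PDT efficiency follows from standard photophysics; as a hedged sketch (generic rate constants, not values from this study), with radiative, non-radiative, and intersystem-crossing rates k_r, k_nr, and k_ISC of the S1 state,

    \tau_{fl} = \frac{1}{k_r + k_{nr} + k_{ISC}}, \qquad \Phi_T = k_{ISC}\,\tau_{fl}

so for a given intersystem-crossing rate, a longer measured FLT directly implies a higher triplet yield and hence stronger O2 activation.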
The early detection of head and neck cancer remains a challenging task. It requires precise and accurate identification of tissue alterations as well as a distinct discrimination of cancerous from healthy tissue areas. A novel approach for this purpose uses microspectroscopic techniques with a special focus on hyperspectral imaging (HSI) methods. Our proof-of-principle study presents the implementation and application of darkfield elastic light scattering spectroscopy (DF ELSS) as a non-destructive, high-resolution, and fast imaging modality to distinguish healthy from altered lingual tissue regions in a mouse model. The main aspect of our study is the comparison of two different HSI detection principles, point-by-point and line scanning imaging, and whether one might be more appropriate for differentiating several tissue types. Statistical models are formed by applying a principal component analysis (PCA) with Bayesian discriminant analysis (DA) to the elastic light scattering (ELS) spectra. Overall accuracy, sensitivity, and precision values of 98% are achieved for both models, while the overall specificity reaches 99%. An additional classification of model-unknown ELS spectra is performed. The predictions are verified with histopathological evaluations of identical HE-stained tissue areas to prove the models' capability of tissue distinction. In the context of our proof-of-principle study, we assess the pushbroom PCA-DA model to be more suitable for tissue type differentiation and thus tissue classification. In addition to HE examination in head and neck cancer diagnosis, the use of HSI-based statistical models might be conceivable in daily clinical routine.
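A minimal sketch of such a PCA-DA classification pipeline (scikit-learn; the synthetic spectra and labels below are placeholders, not the study's data):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # X: one row per ELS spectrum, one column per wavelength; y: tissue class
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 512))    # placeholder spectra
    y = rng.integers(0, 2, size=200)   # placeholder labels: healthy vs. altered

    # PCA compresses each spectrum to a few scores; LDA is the Bayes-optimal
    # discriminant under Gaussian class-conditional assumptions
    model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    print(cross_val_score(model, X, y, cv=5).mean())

Accuracy, sensitivity, specificity, and precision as reported above would then be derived from the cross-validated predictions per tissue class.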
Purpose
Injury or inflammation of the middle ear often results in persistent tympanic membrane (TM) perforations, leading to conductive hearing loss (HL). However, in some cases the magnitude of HL exceeds that attributable to the TM perforation alone. The aim of this study is to better understand the effects of location and size of TM perforations on the sound transmission properties of the middle ear.
Methods
The middle ear transfer functions (METF) of six human temporal bones (TB) were compared before and after perforating the TM at different locations (anterior or posterior lower quadrant) and to different degrees (1 mm, ¼ of the TM, ½ of the TM, and full ablation). The sound-induced velocity of the stapes footplate was measured using single-point laser-Doppler-vibrometry (LDV). The METF were correlated with a Finite Element (FE) model of the middle ear, in which similar alterations were simulated.
Results
The measured and calculated METF showed frequency- and perforation-size-dependent losses at all perforation locations. Starting at low frequencies, the loss expanded to higher frequencies with increased perforation size. In direct comparison, posterior TM perforations affected the transmission properties to a larger degree than anterior perforations. The asymmetry of the TM causes the malleus-incus complex to rotate and results in larger deflections in the posterior TM quadrants than in the anterior TM quadrants. Simulations in the FE model with a sealed cavity show that small perforations lead to a decrease in TM rigidity and thus to an increase in the oscillation amplitude of the TM, mainly above 1 kHz.
Conclusion
Size and location of TM perforations have a characteristic influence on the METF. The correlation of the experimental LDV measurements with an FE model contributes to a better understanding of the pathologic mechanisms of middle-ear diseases. If small perforations with significant HL are observed in daily clinical practice, additional middle ear pathologies should be considered. Further investigations on the loss of TM pretension due to perforations may be informative.
Context
Microservices as a lightweight and decentralized architectural style with fine-grained services promise several beneficial characteristics for sustainable long-term software evolution. Success stories from early adopters like Netflix, Amazon, or Spotify have demonstrated that it is possible to achieve a high degree of flexibility and evolvability with these systems. However, the described advantageous characteristics offer no concrete guidance and little is known about evolvability assurance processes for microservices in industry as well as challenges in this area. Insights into the current state of practice are a very important prerequisite for relevant research in this field.
Objective
We therefore wanted to explore how practitioners structure the evolvability assurance processes for microservices, what tools, metrics, and patterns they use, and what challenges they perceive for the evolvability of their systems.
Method
We first conducted 17 semi-structured interviews and discussed 14 different microservice-based systems and their assurance processes with software professionals from 10 companies. Afterwards, we performed a systematic grey literature review (GLR) and used the created interview coding system to analyze 295 practitioner online resources.
Results
The combined analysis revealed the importance of finding a sensible balance between decentralization and standardization. Guidelines like architectural principles were seen as valuable to ensure a base consistency for evolvability and specialized test automation was a prevalent theme. Source code quality was the primary target for the usage of tools and metrics for our interview participants, while testing tools and productivity metrics were the focus of our GLR resources. In both studies, practitioners did not mention architectural or service-oriented tools and metrics, even though the most crucial challenges like Service Cutting or Microservices Integration were of an architectural nature.
Conclusions
Practitioners relied on guidelines, standardization, or patterns like Event-Driven Messaging to partially address some reported evolvability challenges. However, specialized techniques, tools, and metrics are needed to support industry with the continuous evaluation of service granularity and dependencies. Future microservices research in the areas of maintenance, evolution, and technical debt should take our findings and the reported industry sentiments into account.
Context
Web APIs are one of the most used ways to expose application functionality on the Web, and their understandability is important for efficiently using the provided resources. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking.
Objective
We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) whether rule violations are also perceived as more difficult to understand, and 3) whether demographic attributes like REST-related experience have an influence on this.
Method
We conducted a controlled Web-based experiment with 105 participants, from both industry and academia and with different levels of experience. Based on a hybrid between a crossover and a between-subjects design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a rule and one that was a violation of this rule. Participants answered comprehension questions and rated the perceived difficulty.
Results
For 11 of the 12 rules, we found that the violation version performed significantly worse than the rule-conforming version in the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance for violations.
Conclusions
Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
Software evolvability is an important quality attribute, yet one difficult to grasp. A certain base level of it is allegedly provided by service- and microservice-based systems, but many software professionals lack a systematic understanding of the reasons and preconditions for this. We address this issue via the proxy of architectural modifiability tactics. By qualitatively mapping principles and patterns of Service-Oriented Architecture (SOA) and microservices onto tactics and analyzing the results, we can not only generate insights into service-oriented evolution qualities but also provide a modifiability comparison of the two popular service-based architectural styles. The results suggest that both SOA and microservices possess several inherent qualities beneficial for software evolution. While both focus strongly on loose coupling and encapsulation, there are also differences in the way they strive for modifiability (e.g., governance vs. evolutionary design). To leverage the insights of this research, however, it is necessary to find practical ways to incorporate the results as guidance into the software development process.
The article analyzes, experimentally and theoretically, the influence of microscope parameters on pinhole-assisted Raman depth profiles in uniform and composite refractive media. The main objective is the reliable mapping of deep sample regions. The results easiest to interpret are obtained with low magnification, low aperture, and small pinholes. Here, the intensities and shapes of the Raman signals are independent of the location of the emitter relative to the sample surface. Theoretically, the results can be well described with a simple analytical equation containing the axial depth resolution of the microscope and the position of the emitter. The smallest determinable object size is limited to 2–4 μm. If sub-micrometer resolution is desired, high magnification, usually combined with high aperture, becomes necessary. In refractive media, the signal intensities and shapes then depend on the position relative to the sample surface. This aspect is investigated on a number of uniform and stacked polymer layers, 2–160 μm thick, with the best available transparency. The experimental depth profiles are numerically fitted with excellent accuracy by inserting a Gaussian excitation beam of variable waist and fill fraction through the focusing lens area, and by treating the Raman emission with geometric optics as a spontaneous isotropic process through the lens and the variable pinhole, respectively. The intersectional area of these two solid angles yields the leading factor in understanding confocal (pinhole-assisted) Raman depth profiles.
Gender pay gaps are commonly studied in populations with already completed educational careers. We focus on an earlier stage by investigating the gender pay gap among university students working alongside their studies. With data from five cohorts of a large-scale student survey from Germany, we use regression and wage decomposition techniques to describe gender pay gaps and potential explanations. We find that female students earn about 6% less on average than male students, which reduces to 4.1% when accounting for a rich set of explanatory variables. The largest explanatory factor is the type of jobs male and female students pursue.
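The wage decomposition techniques mentioned above are typically of the Oaxaca-Blinder type; in the textbook form (a sketch, not necessarily the authors' exact specification) the mean log-wage gap splits as

    \bar{w}_m - \bar{w}_f = (\bar{X}_m - \bar{X}_f)'\hat{\beta}_m + \bar{X}_f'(\hat{\beta}_m - \hat{\beta}_f)

where the first term is the part explained by observable characteristics (such as the type of job) and the second the unexplained part; the reported drop from 6% to 4.1% reflects how much of the raw gap the explanatory variables absorb.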
Public transport maps are typically designed to support route-finding tasks for passengers, while they also provide an overview of stations, metro lines, and city-specific attractions. Most of these maps are designed as static representations, for example placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views for dynamic passenger data on different levels of temporal granularity. Moreover, we also provide extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by passengers. We also integrated a graph-based view on user-selected routes, a way to interactively compare those routes, an attribute- and property-driven automatic computation of specific routes for one map as well as for all available maps in our repertoire, and, finally, the most important sights in each city as extra information for user-selected routes. We illustrate the usefulness of our interactive visualization and map navigation system by applying it to the railway system of Hamburg, Germany, while also taking into account the extra passenger data. As another indication of the usefulness of the interactively enhanced metro maps, we conducted a controlled user experiment with 20 participants.
IOS 2.0: new aspects on inter-organizational integration through enterprise 2.0 technologies
(2015)
This special theme of "Electronic Markets" focuses on research concerned with the use of social technologies and "2.0" principles in the interaction between organizations (i.e., with "inter-organizational systems (IOS) 2.0"). This theme falls within the larger space of Enterprise 2.0 research but focuses in particular on inter-organizational use (between enterprises), not intra-organizational use (within a single enterprise). While there is great interest in practice regarding the use of 2.0 technologies to support inter-organizational communication, collaboration, and interaction, information systems (IS) research has largely been oblivious to this important use of social technologies.
Introduction to the special issue on self-managing and hardware-optimized database systems 2022
(2023)
Data management systems have evolved in terms of functionality, performance characteristics, complexity, and variety during the last 40 years. In particular, relational database management systems and big data systems (e.g., key-value stores, document stores, graph stores and graph computation systems, Spark, MapReduce/Hadoop, or data stream processing systems) have evolved with novel additions and extensions. However, systems administration tasks have become highly complex and expensive, especially given the simultaneous and rapid hardware evolution in processors, memory, storage, and networking. These developments present new open problems and challenges to data management systems, as well as new opportunities.
The SMDB (International Workshop on Self-Managing Database Systems) and HardBD&Active (Joint International Workshop on Big Data Management on Emerging Hardware and Data Management on Virtualized Active Systems) workshops organized in conjunction with the IEEE ICDE (International Conference on Data Engineering) offered two distinct platforms for examining the above system-related challenges from different perspectives. The SMDB workshop looks into developing autonomic or self-* features in database and data management systems to tackle complex administrative tasks, while the HardBD&Active workshop focuses on harnessing hardware technologies to enhance efficiency and performance of data processing and management tasks. As a result of these workshops, we are delighted to present the third special issue of DAPD titled “Self-Managing and Hardware-Optimized Database Systems 2022,” which showcases the best contributions from the SMDB 2021/2022 and HardBD&Active 2021/2022 workshops.
The critical process parameters cell density and viability during mammalian cell cultivation are assessed by UV/VIS spectroscopy in combination with multivariate data analytical methods. This direct optical detection technique uses a commercial optical probe to acquire spectra in a label-free way without signal enhancement. For the cultivation, an inverse cultivation protocol is applied, which simulates the exponential growth phase by exponentially replacing cells and metabolites of a growing Chinese hamster ovary cell batch with fresh medium. For the simulation of the death phase, a batch of growing cells is progressively replaced by a batch of completely starved cells. Thus, the most important parts of an industrial batch cultivation are easily imitated. The cell viability was determined using the well-established method of partial least squares regression (PLS). To further improve process knowledge, the viability has also been determined from the spectra based on a multivariate curve resolution (MCR) model. With this approach, the progress of the cultivations can be continuously monitored solely based on a UV/VIS sensor. Thus, the monitoring of critical process parameters, especially the viable cell density, is possible inline within a mammalian cell cultivation process. In addition, the beginning of cell death can be detected by this method, which allows us to determine the cell viability with acceptable error. The combination of inline UV/VIS spectroscopy with multivariate curve resolution generates additional process knowledge complementary to PLS and is considered a suitable process analytical tool for monitoring industrial cultivation processes.
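A minimal sketch of the PLS step (scikit-learn; the spectra and reference values below are placeholders, and the MCR model would be set up analogously with a dedicated package):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # X: UV/VIS spectra (rows = samples, columns = wavelengths)
    # y: offline reference values, e.g. viability from trypan blue exclusion
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 300))   # placeholder spectra
    y = rng.uniform(40, 100, 60)     # placeholder viability in %

    pls = PLSRegression(n_components=5)  # latent variables tuned by CV in practice
    y_cv = cross_val_predict(pls, X, y, cv=10)
    rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
    print(f"RMSECV: {rmsecv:.2f}")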
Programmable nano-bio interfaces driven by tuneable, vertically configured nanostructures have recently emerged as a powerful tool for cellular manipulations and interrogations. Such interfaces have strong potential for ground-breaking advances, particularly in cellular nanobiotechnology and mechanobiology. However, the opaque nature of many nanostructured surfaces makes non-destructive, live-cell characterization of cellular behavior on vertically aligned nanostructures challenging. Here, a new nanofabrication route is proposed that enables harvesting of vertically aligned silicon (Si) nanowires and their subsequent transfer onto an optically transparent substrate, with high efficiency and without artefacts. We demonstrate the potential of this route for efficient live-cell phase contrast imaging and subsequent characterization of cells growing on vertically aligned Si nanowires. This approach provides the first opportunity to understand dynamic cellular responses to a cell-nanowire interface and thus has the potential to inform the design of future nanoscale cellular manipulation technologies.
Empirical software engineering experts on the use of students and professionals in experiments
(2018)
Using students as participants remains a valid simplification of reality needed in laboratory contexts. It is an effective way to advance software engineering theories and technologies but, like any other aspect of study settings, should be carefully considered during the design, execution, interpretation, and reporting of an experiment. The key is to understand which developer population portion is being represented by the participants in an experiment. Thus, a proposal for describing experimental participants is put forward.
The relative pros and cons of using students or practitioners in experiments in empirical software engineering have been discussed for a long time and continue to be an important topic. Following the recent publication of “Empirical software engineering experts on the use of students and professionals in experiments” by Falessi, Juristo, Wohlin, Turhan, Münch, Jedlitschka, and Oivo (EMSE, February 2018) we received a commentary by Sjøberg and Bergersen. Given that the topic is of great methodological interest to the community and requires nuanced treatment, we invited two editorial board members, Martin Shepperd and Per Runeson, respectively, to provide additional views.
The analysis of exhaled metabolites has become a promising field of research in recent decades. Several volatile organic compounds reflecting metabolic disturbance and nutrition status have been reported. These are particularly important for long-term measurements, as needed in medical research for the detection of disease progression and therapeutic efficacy. In this context, it has become urgent to investigate the effect of fasting and glucose treatment on breath analysis. In the present study, we used a model of ventilated rats that fasted for 12 h prior to the experiment. Ten rats per group were randomly assigned to continuous intravenous infusion without glucose or an infusion including 25 mg glucose per 100 g per hour during an observation period of 12 h. Exhaled gas was analysed using multi-capillary column ion-mobility spectrometry. Analytes were identified with the BS-MCC/IMS database (version 1209; B & S Analytik, Dortmund, Germany). Glucose infusion led to a significant increase in blood glucose levels (p<0.05 at 4 h and thereafter) and cardiac output (p<0.05 at 4 h and thereafter). During the observation period, 39 peaks were found in total. There were significant differences between groups in the concentration of ten volatile organic compounds: p<0.001 at 4 h and thereafter for isoprene, cyclohexanone, acetone, p-cymol, 2-hexanone, phenylacetylene, and one unknown compound, and p<0.001 at 8 h and thereafter for 1-pentanol, 1-propanol, and 2-heptanol. Our results indicate that, for long-term measurements, fasting and the withholding of glucose could contribute to changes of volatile metabolites in exhaled air.
Social and environmental risk management in supply chains: a survey in the clothing industry
(2015)
Almost daily, news indicates that there are environmental and social problems in globally fragmented supply chains. Even though conceptualisations of sustainable supply chain management suggest that supplier-related risk management for sustainable products and processes is substantial for companies, research on how risk management for environmental and social issues in supply chains is performed has so far been neglected. This study aims at analysing both why companies in the clothing industry perform management of social and environmental risks in their supply chains and what kind of action they are taking. Based on the literature on sustainable supply chain management and supply chain risk management, as well as 10 expert interviews, a conceptual model for risk management in sustainable supply chains was developed. This model was tested in an empirical study in the clothing industry. The data were analysed by structural equation modelling. Results of the research show high statistical significance for the respective conceptual model. The main driver to perform risk management in environmental and social affairs is pressure and incentives from stakeholders. While companies' corporate orientation mainly drives social actions, top management drives environmental affairs to differentiate the company from competitors.
Children undergoing systemic chemotherapy often suffer from severe immunosuppression, usually associated with severe neutropenia (neutrophils < 0.5 × 10^9/l). Clinical courses during those periods range from asymptomatic to septic general conditions. The development of septic symptoms can be very fast and life-threatening. Swift detection of risk factors in those patients is therefore needed. So far, no early, rapid, and reliable marker or tool exists. Ion mobility spectrometry coupled with a multi-capillary column (IMS-MCC) can analyze more than 600 volatile components from exhaled air within a few minutes and hence is a potential rapid detection tool. As a proof of concept, we measured the exhaled breath of 11 patients with neutropenia and 10 healthy controls, aged 3 to 18 years at the time of measurement. Breath samples of ten milliliters were taken at the outpatient clinic and analyzed with an onsite IMS-MCC (BreathDiscovery, B&S Analytik, Dortmund, Germany). The dead-space volume was adapted in two groups (small 250 ml, large 500 ml). In total, 59 differing peaks were measured. Eleven were significantly different (p ≤ 0.05), three of which were highly significant (p ≤ 0.01) in Mann-Whitney rank-sum testing. The corresponding analytes used in the decision tree are 2-propanol, D-limonene, and acetone. The analytes with the lowest rank sums identified are 2-hexanone, iso-propylamine, and 1-butanol. Finally, we were able to derive a three-step decision tree, which correctly discerns the 21 samples except for one from each group. Sensitivity was 90% and specificity was 91%. Naturally, these findings need further confirmation in a bigger population. Our pilot study shows that ion mobility spectrometry coupled with a multi-capillary column is a feasible rapid diagnostic tool in the setting of a pediatric oncology outpatient clinic for patients 3 years and older. Our first results furthermore encourage additional analysis as to whether patients at risk for septic events during immunosuppression can be identified in advance by rapidly assessing risk factors such as neutropenia in exhaled breath.
The scoring of sleep stages is one of the essential tasks in sleep analysis. Since a manual procedure requires considerable human and financial resources, and incorporates some subjectivity, an automated approach could result in several advantages. There have been many developments in this area, and in order to provide a comprehensive overview, it is essential to review relevant recent works and summarise the characteristics of the approaches, which is the main aim of this article. To achieve it, we examined articles published between 2018 and 2022 that dealt with the automated scoring of sleep stages. In the final selection for in-depth analysis, 125 articles were included after reviewing a total of 515 publications. The results revealed that automatic scoring demonstrates good quality (with Cohen's kappa up to over 0.80 and accuracy up to over 90%) in analysing EEG/EEG + EOG + EMG signals. At the same time, it should be noted that there has been no breakthrough in the quality of results using these signals in recent years. Systems involving other signals that could potentially be acquired more conveniently for the user (e.g. respiratory, cardiac or movement signals) remain more challenging in the implementation with a high level of reliability but have considerable innovation capability. In general, automatic sleep stage scoring has excellent potential to assist medical professionals while providing an objective assessment.
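For reference, the two agreement measures quoted above can be computed epoch by epoch as follows (a toy sketch with hypothetical stage labels):

    from sklearn.metrics import accuracy_score, cohen_kappa_score

    # hypothetical epoch-by-epoch stages: human scorer vs. automatic system
    expert = ["W", "N1", "N2", "N2", "N3", "REM", "N2", "W"]
    automatic = ["W", "N2", "N2", "N2", "N3", "REM", "N1", "W"]

    print(accuracy_score(expert, automatic))     # raw agreement
    print(cohen_kappa_score(expert, automatic))  # chance-corrected agreement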
In the era of precision medicine, digital technologies and artificial intelligence, drug discovery and development face unprecedented opportunities for product and business model innovation, fundamentally changing the traditional approach of how drugs are discovered, developed and marketed. Critical to this transformation is the adoption of new technologies in the drug development process, catalyzing the transition from serendipity-driven to data-driven medicine. This paradigm shift comes with a need for both translation and precision, leading to a modern Translational Precision Medicine approach to drug discovery and development. Key components of Translational Precision Medicine are multi-omics profiling, digital biomarkers, model-based data integration, artificial intelligence, biomarker-guided trial designs and patient-centric companion diagnostics. In this review, we summarize and critically discuss the potential and challenges of Translational Precision Medicine from a cross-industry perspective.
In this paper, we deal with optimizing the monetary costs of executing parallel applications in cloud-based environments. Specifically, we investigate how scalability characteristics of parallel applications impact the total costs of computations. We focus on a specific class of irregularly structured problems, where the scalability typically depends on the input data. Consequently, dynamic optimization methods are required for minimizing the costs of computation. For quantifying the total monetary costs of individual parallel computations, the paper presents a cost model that considers the costs for the parallel infrastructure employed as well as the costs caused by delayed results. We discuss a method for dynamically finding the number of processors for which the total costs based on our cost model are minimal. Our extensive experimental evaluation gives detailed insights into the performance characteristics of our approach.
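A toy version of such a cost model makes the trade-off concrete (an illustrative sketch only; the paper's actual model and parameter values are not reproduced here): infrastructure costs grow with the number of processors, while the costs of a delayed result shrink with the runtime.

    def runtime(p, t1=3600.0, serial_frac=0.05):
        # Amdahl-style runtime estimate for p processors (illustrative model)
        return t1 * (serial_frac + (1 - serial_frac) / p)

    def total_cost(p, price_per_cpu_s=2e-5, delay_cost_per_s=0.01):
        t = runtime(p)
        infra = price_per_cpu_s * p * t   # cost of the rented infrastructure
        delay = delay_cost_per_s * t      # cost caused by the delayed result
        return infra + delay

    # processor count with minimal total cost under this model
    best = min(range(1, 257), key=total_cost)
    print(best, total_cost(best))

For irregular problems, runtime(p) is not known in advance, which is why a dynamic optimization method is needed to track the cost-minimal processor count at runtime.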
This paper studies whether a monetary union can be managed solely by a rule-based approach. The Five Presidents' Report of the European Union rejects this idea and suggests a centralisation of powers. We analyse the philosophy of policy rules from the vantage point of the German economic school of thought. There is evidence that a monetary union consisting of sovereign states is well organised by rules, together with the principle of subsidiarity. The root cause of the euro crisis is rather the weak enforcement of rules, compounded by structural problems. Therefore, we suggest a genuine rule-based paradigm for a stable future of the Economic and Monetary Union.
Newly developed active pharmaceutical ingredients (APIs) are often poorly soluble in water. As a result, the bioavailability of the API in the human body is reduced. One approach to overcome this restriction is the formulation of amorphous solid dispersions (ASDs), e.g., by hot-melt extrusion (HME). Thus, the poorly soluble crystalline form of the API is transferred into a more soluble amorphous form. To reach this aim in HME, the APIs are embedded in a polymer matrix. The resulting amorphous solid dispersions may contain small amounts of residual crystallinity and have a tendency to recrystallize. For the controlled release of the API in the final drug product, the amount of crystallinity has to be known. This review assesses the available analytical methods that have recently been used for the characterization of ASDs and the quantification of crystalline API content. Well-established techniques like near- and mid-infrared spectroscopy (NIR and MIR, respectively) and Raman spectroscopy, as well as emerging ones like UV/VIS, terahertz, and ultrasonic spectroscopy, are considered in detail. Furthermore, their advantages and limitations are discussed with regard to general practical applicability as process analytical technology (PAT) tools in industrial manufacturing. The review focuses on spectroscopic methods which have proven most suitable for in-line and on-line process analytics. A further aspect is spectroscopic techniques that have been or could be integrated into an extruder.
Back to the future: origins and directions of the “Agile Manifesto” – views of the originators
(2018)
In 2001, seventeen professionals set up the Manifesto for Agile Software Development. They wanted to define values and basic principles for better software development. Since then, the manifesto has been widely adopted by developers, in software-developing organizations, and outside the world of IT. Agile principles and their implementation in practice have paved the way for radically new and innovative ways of software and product development. In parallel, the understanding of the manifesto's underlying principles has evolved over time. This, in turn, may affect current and future applications of agile principles. This article presents results from a survey and an interview study conducted in collaboration with the original contributors of the Manifesto for Agile Software Development. Furthermore, it comprises the results from a workshop with one of the original authors. This publication focuses on the origins of the manifesto, the contributors' views from today's perspective, and their outlook on future directions. We evaluated 11 responses from the survey and 14 interviews to understand the viewpoint of the contributors. They emphasize that agile methods need to be carefully selected and that agile should not be seen as a silver bullet. They underline the importance of considering the variety of different practices and methods that had an influence on the manifesto. Furthermore, they suggest that people should question their current understanding of "agile" and recommend reconsidering the core ideas of the manifesto.
Stronger than they look
(2019)
Engineering of large vascularized adipose tissue constructs is still a challenge for the treatment of extensive high-grade burns or the replacement of tissue after tumor removal. Communication between mature adipocytes and endothelial cells is important for homeostasis and the maintenance of adipose tissue mass but, to date, is mainly neglected in tissue engineering strategies. Thus, new coculture strategies are needed to integrate adipocytes and endothelial cells successfully into a functional construct. This review focuses on the cross-talk of mature adipocytes and endothelial cells and considers their influence on fatty acid metabolism and vascular tone. In addition, the properties and challenges of these two cell types with regard to vascularized tissue engineering are highlighted.
Despite the significant potential offered by the powder coating process for finishing wood-based materials, until now it has been used almost exclusively for coating Medium Density Fiber Board (MDF). A research project aims to develop processes and substrate materials that will allow lightweight boards to be powder coated.
This paper investigates the capability of immersion transmission ellipsometry (ITE) (Jung et al., Int. Patent WO 2004/109260) to determine not only three-dimensional refractive indices in anisotropic thin films (which was already possible in the past) but even their gradients along the z-direction (perpendicular to the film plane). It is shown that the determination of orientation gradients in deep-sub-μm films becomes possible by applying ITE in combination with reflection ellipsometry. The technique is supplemented by atomic force microscopy for measuring the film thickness. For a photo-oriented thin film, no gradient was found, as expected. For a photo-oriented film which was subsequently annealed in a nematic liquid crystalline phase, an order was found similar to the one applied in vertically aligned nematic displays, with a tilt angle varying along the z-direction. For fresh films, gradients were only detected for the refractive index perpendicular to the film plane, as expected.
Purpose
Context awareness in the operating room (OR) is important to realize targeted assistance that supports actors during surgery. A situation recognition system (SRS) is used to interpret intraoperative events and to derive the intraoperative situation from them. To achieve a modular system architecture, it is desirable to decouple the SRS from other system components. This leads to the need for an interface between such an SRS and context-aware systems (CAS). This work aims to provide an open, standardized interface to enable loose coupling of the SRS with varying CAS and to allow vendor-independent device orchestrations.
Methods
A requirements analysis investigated limiting factors that currently prevent the integration of CAS in today's ORs. The elicited requirements enabled the selection of a suitable base architecture. We examined how to specify this architecture within the constraints of an interoperability standard. The resulting middleware was integrated into a prototypic SRS and into our system for intraoperative support, the OR-Pad, as an exemplary CAS, to evaluate whether our solution can enable context-aware assistance during simulated orthopedic interventions.
Results
The emerging Service-oriented Device Connectivity (SDC) standard series was selected to specify and implement a middleware for providing the interpreted contextual information while the SRS and CAS are loosely coupled. The results were verified within a proof of concept study using the OR-Pad demonstration scenario. The fulfillment of the CAS’ requirements to act context-aware, conformity to the SDC standard series, and the effort for integrating the middleware in individual systems were evaluated. The semantically unambiguous encoding of contextual information depends on the further standardization process of the SDC nomenclature. The discussion of the validity of these results proved the applicability and transferability of the middleware.
Conclusion
The specified and implemented SDC-based middleware shows the feasibility of loosely coupling an SRS with initially unknown CAS to realize context-aware assistance in the OR.
The focus of the developed maturity model was set on processes. The concept of the widespread CMM and its practices has been transferred to the perioperative domain and the concept of the new maturity model. Additional optimization goals and technological as well as networking-specific aspects enable a process- and object-focused view of the maturity model in order to ensure broad coverage of different subareas. The evaluation showed that the model is applicable to the perioperative field. Adjustments and extensions of the maturity model are future steps to improve the rating and classification of the new maturity model.
One of the key challenges for automatic assistance is the support of actors in the operating room depending on the status of the procedure. Therefore, context information collected in the operating room is used to gain knowledge about the current situation. In literature, solutions already exist for specific use cases, but it is doubtful to what extent these approaches can be transferred to other conditions. We conducted a comprehensive literature research on existing situation recognition systems for the intraoperative area, covering 274 articles and 95 cross-references published between 2010 and 2019. We contrasted and compared 58 identified approaches based on defined aspects such as used sensor data or application area. In addition, we discussed applicability and transferability. Most of the papers focus on video data for recognizing situations within laparoscopic and cataract surgeries. Not all of the approaches can be used online for real-time recognition. Using different methods, good results with recognition accuracies above 90% could be achieved. Overall, transferability is less addressed. The applicability of approaches to other circumstances seems to be possible to a limited extent. Future research should place a stronger focus on adaptability. The literature review shows differences within existing approaches for situation recognition and outlines research trends. Applicability and transferability to other conditions are less addressed in current work.
Purpose
For the modeling, execution, and control of complex, non-standardized intraoperative processes, a modeling language is needed that reflects the variability of interventions. As the established Business Process Model and Notation (BPMN) reaches its limits in terms of flexibility, the Case Management Model and Notation (CMMN) was considered as it addresses weakly structured processes.
Methods
To analyze the suitability of the modeling languages, BPMN and CMMN models of a Robot-Assisted Minimally Invasive Esophagectomy and Cochlea Implantation were derived and integrated into a situation recognition workflow. Test cases were used to contrast the differences and compare the advantages and disadvantages of the models concerning modeling, execution, and control. Furthermore, the impact on transferability was investigated.
Results
Compared to BPMN, CMMN allows flexibility for modeling intraoperative processes while remaining understandable. Although more effort and process knowledge are needed for execution and control within a situation recognition system, CMMN enables better transferability of the models and therefore of the system. In conclusion, CMMN should be chosen as a supplement to BPMN for flexible process parts that can only be covered insufficiently by BPMN, or otherwise as a replacement for the entire process.
Conclusion
CMMN offers the flexibility for variable, weakly structured process parts, and is thus suitable for surgical interventions. A combination of both notations could allow optimal use of their advantages and support the transferability of the situation recognition system.
Intra-operative fluoroscopy-guided assistance system for transcatheter aortic valve implantation
(2014)
A new surgical assistance system has been developed to support the correct positioning of the AVP during transapical TAVI. The developed assistance system automatically defines the target area for implanting the AVP under live 2-D fluoroscopy guidance. Moreover, this surgical assistance system works with low levels of contrast agent for the final deployment of the AVP, therefore reducing long-term negative effects such as renal failure in elderly and high-risk patients.
Container virtualization has evolved into a key technology for deployment automation in line with the DevOps paradigm. Whereas container management systems facilitate the deployment of cloud applications by employing container-based artifacts, parts of the deployment logic have already been applied earlier to build these artifacts. Current approaches do not integrate these two deployment phases in a comprehensive manner. Limited knowledge about the application software and middleware encapsulated in container-based artifacts leads to maintainability and configuration issues. Besides, the deployment of cloud applications is based on custom orchestration solutions, leading to lock-in problems. In this paper, we propose a two-phase deployment method based on the TOSCA standard. We present integration concepts for TOSCA-based orchestration and deployment automation using container-based artifacts. Our two-phase deployment method enables capturing and aligning all the deployment logic related to a software release, leading to better maintainability. Furthermore, we build a container management system, composed of a TOSCA-based orchestrator on Apache Mesos, to deploy container-based cloud applications automatically.
Parallel applications are the computational backbone of major industry trends and grand challenges in science. Whereas these applications are typically constructed for dedicated High Performance Computing clusters and supercomputers, the cloud emerges as attractive execution environment, which provides on-demand resource provisioning and a pay-per-use model. However, cloud environments require specific application properties that may restrict parallel application design. As a result, design trade-offs are required to simultaneously maximize parallel performance and benefit from cloud-specific characteristics.
In this paper, we present a novel approach to assess the cloud readiness of parallel applications based on the design decisions made. By discovering and understanding the implications of these parallel design decisions on an application's cloud readiness, our approach supports the migration of parallel applications to the cloud. We introduce an assessment procedure, its underlying meta model, and a corresponding instantiation to structure this multi-dimensional design space. For evaluation purposes, we present an extensive case study comprising three parallel applications and discuss their cloud readiness based on our approach.
Elasticity is considered to be the most beneficial characteristic of cloud environments, which distinguishes the cloud from clusters and grids. Whereas elasticity has become mainstream for web-based, interactive applications, it is still a major research challenge how to leverage elasticity for applications from the high-performance computing (HPC) domain, which heavily rely on efficient parallel processing techniques. In this work, we specifically address the challenges of elasticity for parallel tree search applications. Well-known meta-algorithms based on this parallel processing technique include branch-and-bound and backtracking search. We show that their characteristics render static resource provisioning inappropriate and the capability of elastic scaling desirable. Moreover, we discuss how to construct an elasticity controller that reasons about the scaling behavior of a parallel system at runtime and dynamically adapts the number of processing units according to user-defined cost and efficiency thresholds. We evaluate a prototypical elasticity controller based on our findings by employing several benchmarks for parallel tree search and discuss the applicability of the proposed approach. Our experimental results show that, by means of elastic scaling, the performance can be controlled according to user-defined thresholds, which cannot be achieved with static resource provisioning.
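A minimal sketch of the core decision such an elasticity controller makes (thresholds and numbers are illustrative; the controller described above additionally reasons about user-defined costs):

    def decide_scaling(speedup, procs, eff_min=0.6, eff_max=0.85):
        # threshold-based scaling decision from observed parallel efficiency
        efficiency = speedup / procs
        if efficiency > eff_max:
            return "scale out"   # still efficient: more processing units pay off
        if efficiency < eff_min:
            return "scale in"    # efficiency below threshold: release units
        return "hold"

    print(decide_scaling(speedup=22.0, procs=24))   # efficiency 0.92 -> scale out

Because the speedup of parallel tree search depends on the input, the controller re-evaluates this decision at runtime instead of fixing the number of processing units up front.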
The cloud evolved into an attractive execution environment for parallel applications, which make use of compute resources to speed up the computation of large problems in science and industry. Whereas Infrastructure as a Service (IaaS) offerings have been commonly employed, more recently, serverless computing emerged as a novel cloud computing paradigm with the goal of freeing developers from resource management issues. However, as of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other and benefit from on-demand and elastic compute resources as well as per-function billing. In this work, we discuss how to employ serverless computing platforms to operate parallel applications. We specifically focus on the class of parallel task farming applications and introduce a novel approach to free developers from both parallelism and resource management issues. Our approach includes a proactive elasticity controller that adapts the physical parallelism per application run according to user-defined goals. Specifically, we show how to consider a user-defined execution time limit after which the result of the computation needs to be present while minimizing the associated monetary costs. To evaluate our concepts, we present a prototypical elastic parallel system architecture for self-tuning serverless task farming and implement two applications based on our framework. Moreover, we report on performance measurements for both applications as well as the prediction accuracy of the proposed proactive elasticity control mechanism and discuss our key findings.
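A sketch of the proactive choice such a controller faces (prices, runtimes, and function names are illustrative placeholders, not the framework's actual values): with per-function billing, total cost grows with the per-invocation overhead, so the cheapest parallelism that still meets a user-defined time limit is the smallest one.

    def predicted_runtime(n, total_work_s=3600.0, overhead_s=2.0):
        # predicted runtime of a task farm split across n function instances
        return total_work_s / n + overhead_s

    def cheapest_parallelism(deadline_s, price_per_gb_s=1.67e-5, mem_gb=1.0,
                             max_n=1000):
        # smallest parallelism whose predicted runtime meets the deadline
        for n in range(1, max_n + 1):
            t = predicted_runtime(n)
            if t <= deadline_s:
                return n, n * t * mem_gb * price_per_gb_s  # (parallelism, cost)
        raise ValueError("time limit not reachable within max_n functions")

    print(cheapest_parallelism(deadline_s=120.0))   # -> (31, ~0.06)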
Background: Internationally, teledermatology has proven to be a viable alternative to conventional physical referrals. Travel cost and referral times are reduced while patient safety is preserved. Especially patients from rural areas benefit from this healthcare innovation. Despite these established facts and positive experiences from EU neighboring countries like the Netherlands or the United Kingdom, Germany has not yet implemented store-and-forward teledermatology in routine care.
Methods: The TeleDerm study will implement and evaluate store-and-forward teledermatology in 50 general practitioner (GP) practices as an alternative to conventional referrals. TeleDerm aims to confirm that offering store-and-forward teledermatology in GP practices leads to a 15% (n = 260) reduction of referrals in the intervention arm. The study uses a cluster-randomized controlled trial design. Randomization is planned at the cluster level "county". The main observational unit is the GP practice. A Poisson distribution of referrals is assumed; a simulation sketch of this power reasoning follows the abstract. The evaluation of secondary outcomes like acceptance, enablers, and barriers uses a mixed-methods design with questionnaires and interviews.
Discussion: Due to the heterogeneity of GP practice organization, patient management software, information technology service providers, GP personal technical affinity and training, we expect several challenges in implementing teledermatology in German GP routine care. Therefore, we plan to recruit 30% more GPs than required by the power calculation. The implementation design and accompanying evaluation is expected to deliver vital insights into the specifics of implementing telemedicine in German routine care.
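The Poisson assumption mentioned in the Methods can be probed with a small Monte Carlo power sketch (all numbers are illustrative stand-ins, not the protocol's actual planning assumptions, and cluster-level over-dispersion is ignored):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    def power(n_per_arm=25, rate_control=35.0, reduction=0.15,
              alpha=0.05, n_sim=5000):
        # power of a Wald z-test on yearly Poisson referral rates per practice
        hits = 0
        for _ in range(n_sim):
            control = rng.poisson(rate_control, n_per_arm)
            interv = rng.poisson(rate_control * (1 - reduction), n_per_arm)
            r1, r2 = control.mean(), interv.mean()
            se = np.sqrt(r1 / n_per_arm + r2 / n_per_arm)
            if 2 * stats.norm.sf(abs(r1 - r2) / se) < alpha:
                hits += 1
        return hits / n_sim

    print(power())   # fraction of simulated trials detecting the reduction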
Several diseases occur due to asbestos exposure. Even today, asbestos-related mortality and morbidity are predicted to increase because of the long latency period. Currently, the methods to investigate asbestos-related disease are mostly invasive. Therefore, the aim of the present paper was to investigate whether signals in human breath can be correlated with asbestos-related lung diseases, using a multi-capillary column (MCC) connected to an ion mobility spectrometer (IMS) as a non-invasive method. Here, breath samples of 10 mL were taken from 25 patients suffering from asbestos-related diseases. This group includes patients with asbestos-related pleural thickening with and without pulmonary fibrosis. Twelve healthy persons constitute the control group, and their breath samples are compared with those of the BK4103 patients. In total, 83 peaks were found in the IMS chromatogram. Discrimination was possible with p-values <0.001 (99.9%) for two peaks, <0.01 (99%) for five peaks, and <0.05 (95%) for 17 peaks. The most discriminating peaks, alpha-pinene and 4-ethyltoluene, were identified among some others with lower p-values. The corresponding box-and-whisker plots comparing both groups are presented. In addition, a decision tree including all peaks was created that shows a differentiation with alpha-pinene between the BK4103 (pleural plaques) group and the control group. The sensitivity was calculated as 96%, the specificity as 50%, and the positive and negative predictive values as 80% and 86%, respectively. Ion mobility spectrometry was thus introduced as a non-invasive method to separate the asbestos-exposed and healthy groups. Naturally, the findings need further confirmation in larger population groups, but they encourage further investigations.
Online measurement of drug concentrations in a patient's breath is a promising approach for individualized dosage. A direct transfer from breath to blood concentrations is not possible: in non-steady-state situations, measured exhaled concentrations follow the blood concentration with a delay. Therefore, it is necessary to integrate the breath concentration into a pharmacological model. Two different approaches for pharmacokinetic modelling are presented. Usually, a 3-compartment model is used for pharmacokinetic calculations of blood concentrations. In the first approach, this 3-compartment model is extended with a 2-compartment model based on the first compartment of the 3-compartment model and a new lung compartment. The second approach is to calculate a time delay of changes in the concentration of the first compartment to describe the lung concentration. Both approaches are used exemplarily for modelling exhaled propofol. Based on a time series of exhaled propofol measurements taken every minute for 346 min with an ion mobility spectrometer, the correlation of calculated plasma and breath concentrations was used for modelling, yielding R² = 0.99. With the time-delay modelling approach, the new compartment coefficient ke0,lung was calculated as 0.27 min⁻¹ with R² = 0.96. The described models are not limited to propofol. They could be used for any kind of drug that is measurable in a patient's breath.
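Written out, the time-delay approach corresponds to the classical effect-site construction: the lung concentration lags the central-compartment concentration C_1 via a first-order equation (a sketch of the standard form, using the coefficient reported above),

    \frac{dC_{lung}(t)}{dt} = k_{e0,lung}\left( C_1(t) - C_{lung}(t) \right), \qquad k_{e0,lung} = 0.27\ \text{min}^{-1}

so in steady state the breath signal tracks the plasma concentration, while during changes it lags with a time constant of 1/k_{e0,lung}, about 3.7 min.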
Recently, practitioners have begun appraising an effective customer journey design (CJD) as an important source of customer value in increasingly complex and digitalized consumer markets. Research, however, has neither investigated what constitutes the effectiveness of CJD from a consumer perspective nor empirically tested how it affects important variables of consumer behavior. The authors define an effective CJD as the extent to which consumers perceive multiple brand-owned touchpoints as designed in a thematically cohesive, consistent, and context-sensitive way. Analyzing consumer data from studies in two countries (4814 consumers in total), they provide evidence of the positive influence of an effective CJD on customer loyalty through brand attitude — over and above the effects of brand experience. Importantly, an effective CJD more strongly influences utilitarian brand attitudes, while brand experience more strongly affects hedonic brand attitudes. These underlying mechanisms are also prevalent when testing for the contingency factors services versus goods, perceived switching costs, and brand involvement.
Ion mobility spectrometry coupled with multi-capillary columns (MCC/IMS) combines highly sensitive spectrometry with a rapid separation technique. MCC/IMS is widely used for biomedical breath analysis. The identification of molecules in such a complex sample necessitates a reference database. The existing IMS reference databases are still in their infancy and do not allow all analytes to be identified. With a gas chromatograph coupled to a mass-selective detector (GC/MSD) set up in parallel to the MCC/IMS instrumentation, we can increase the accuracy of automatic analyte identification. To overcome the time-consuming manual evaluation and comparison of the results of both devices, we developed the software tool MIMA (MS-IMS-Mapper), which can computationally generate analyte layers for MCC/IMS spectra by using the corresponding GC/MSD data. We demonstrate the power of our method by successfully identifying the analytes of a seven-component mixture. In conclusion, the main contribution of MIMA is a fast and easy computational method for assigning analyte names to yet unassigned signals in MCC/IMS data. We believe that this will greatly impact modern MCC/IMS-based biomarker research by 'giving a name' to previously detected disease-specific molecules.
Monday is unique for its reputation as a "bad" day, one characterized by pessimism and reluctance, as noted by Rystrom and Benson (Financ Anal J 45(5):75-78, 1989). But the extent to which this applies to stock markets is still in dispute. While early evidence points to a Monday effect leading to negative returns, recent studies tend to suggest its disappearance or reversal. As a replication study, this paper searches for new evidence of this effect in the German stock market. We use data on the German blue-chip index DAX between 2000 and 2017 to test for the presence of a Monday effect by applying regression analysis and controlling with GARCH models. The observation period provides detailed insight into different market phases in one of the most liquid and information-efficient international stock markets. Our results provide no evidence for the persistent existence of a Monday effect on the German stock market. Our analysis is robust against the background of different market sentiments before, during, and after the financial crisis.
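A common specification for such a test (the textbook setup; the authors' exact model may differ) regresses daily DAX returns on a Monday dummy with a GARCH(1,1) error variance:

    r_t = \alpha + \beta\, D_t^{Mon} + \varepsilon_t, \qquad \varepsilon_t \mid \mathcal{F}_{t-1} \sim N(0, h_t), \qquad h_t = \omega + a\,\varepsilon_{t-1}^2 + b\, h_{t-1}

A Monday effect corresponds to a significantly negative \beta; the result reported above is that \beta is statistically indistinguishable from zero across market phases.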
Digitalization and enterprise architecture management: a perspective on benefits and challenges
(2023)
Many companies digitally transform their business models, processes, and services. They have also been using Enterprise Architecture Management approaches for a long time to synchronize corporate strategy and information technology. Such digitalization projects bring various challenges for Enterprise Architecture Management. Without understanding and addressing them, Enterprise Architecture Management projects will fail or not deliver the expected value. Since existing research has not yet addressed these challenges, we investigated them in a qualitative expert study with leading industry experts from Europe. Furthermore, potential benefits of digitalization projects for Enterprise Architecture Management were researched. Our results provide a theoretical framework consisting of five identified challenges, their triggers, and a number of benefits. Furthermore, we discuss in what ways digitalization and EAM are a promising topic for future research.
In this paper, a method for the generation of gSPMs with ontology-based generalization was presented. The resulting gSPM was modeled efficiently with BPMN/BPMNsix and can be executed with BPMN workflow engines. In the next step, the implementation of resource concepts, anatomical structures, and transition probabilities for workflow execution will be realized.