The food system represents a key industry for Europe and for Germany in particular. However, it is also the single most significant contributor to climate and environmental change. A food system transformation is necessary to overcome the system’s major and constantly increasing challenges in the upcoming decades. One possible facilitator of this transformation is the radical and disruptive innovation that start-ups develop. Start-ups in general, and food start-ups in particular, face many challenges, and various support opportunities and resources are crucial to ensure the success of food start-ups. One aim of this study is to identify how actors in the innovation ecosystem in Germany can support and further strengthen the success of start-ups in the food system. A successful innovation ecosystem is characterised by a well-organised, collaborative, and supportive environment with a vivid exchange between its members. The interviewees confirmed this: although the different actors already cooperate, there is still room for improvement on the way toward a thriving innovation ecosystem. The most common recommendation for improving cooperation is to learn from other countries and bring the best practices to Germany.
Royal Philips' goal was to use innovation to improve the lives of three billion people a year by 2025. To reach that goal, the company was shifting from selling medical products in a transactional manner to providing integrated healthcare solutions based on digital health technology ("HealthTech").
This shift required a dual transformation. On one hand, the company needed to transform how healthcare was conducted. Healthcare professionals would have to change the way they worked and reimbursement schemes needed to change to incentivize payers, providers, and patients in vastly different ways. On the other hand, Philips needed to redesign how it worked internally. The company componentized its business, introduced digital platforms, and co-created solutions with the various stakeholders of the healthcare industry.
In other words: Royal Philips was transforming itself in order to reinvent healthcare in the digital age.
The COVID-19 pandemic necessitated significant changes in foreign language education, forcing teachers to reconstruct their identities and redefine their roles as language educators. To better understand these adaptations and perspectives, it is crucial to study how the pandemic has influenced teaching practices. This mixed-methods study focused on the less-explored aspects of foreign language teaching during the pandemic, specifically examining how language teachers adapted and perceived their practices, including rapport building and learner autonomy, during emergency remote teaching (ERT) in higher education institutions. It also explored teachers’ intentions for their teaching in the post-pandemic era. An online survey was conducted, involving 118 language educators primarily from Germany, with a smaller representation from New Zealand, the United States, and the United Kingdom. The analysis of participants’ responses revealed issues and opportunities regarding lesson formats, tool usage, rapport, and learner autonomy. Our findings offer insights into the desired changes participants envisioned for the post-pandemic era. The results highlight the opportunities ERT had created in terms of teacher development, and we offer suggestions to enhance professional development programmes based on these findings.
Blockchains have become increasingly important in recent years and have expanded their applicability to many domains beyond finance and cryptocurrencies. This adoption has particularly increased with the introduction of smart contracts, which are immutable, user-defined programs directly deployed on blockchain networks. However, many scenarios require business transactions to simultaneously access smart contracts on multiple, possibly heterogeneous blockchain networks while ensuring the atomicity and isolation of these transactions, which is not natively supported by current blockchain systems. Therefore, in this work, we introduce the Transactional Cross-Chain Smart Contract Invocation (TCCSCI) approach that supports such distributed business transactions while ensuring their global atomicity and serializability. The approach introduces the concept of Resource Manager Smart Contracts, and 2PC for Blockchains (2PC4BC), a client-driven Atomic Commit Protocol (ACP) specialized for blockchain-based distributed transactions. We validate our approach using a prototypical implementation, evaluate its introduced overhead, and prove its correctness.
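The abstract does not spell out the mechanics of 2PC4BC; as a rough illustration of the underlying idea only, the following is a minimal, client-driven two-phase commit sketch over abstract resource managers. All names (`ResourceManager`, `atomic_invoke`) are hypothetical, and the actual on-chain Resource Manager Smart Contracts are replaced by in-memory stand-ins.

```python
# Hedged sketch of a client-driven two-phase commit across several
# "chains" — a stand-in for the paper's 2PC4BC protocol, not its
# actual implementation.

class ResourceManager:
    """Stand-in for a Resource Manager Smart Contract on one blockchain."""

    def __init__(self, name):
        self.name = name
        self.prepared = False
        self.committed = False

    def prepare(self, tx_id):
        # Phase 1: lock resources and vote. A real RMSC would verify the
        # transaction's state on-chain before voting yes.
        self.prepared = True
        return True

    def commit(self, tx_id):
        # Phase 2: make the transaction's effects permanent.
        assert self.prepared, "commit without a successful prepare"
        self.committed = True

    def abort(self, tx_id):
        # Roll back and release any locks taken during prepare.
        self.prepared = False


def atomic_invoke(tx_id, managers):
    """Commit on every chain iff all managers voted yes; otherwise abort all."""
    if all(rm.prepare(tx_id) for rm in managers):
        for rm in managers:
            rm.commit(tx_id)
        return True
    for rm in managers:
        rm.abort(tx_id)
    return False
```

The client drives both phases, which matches the abstract's description of a client-driven Atomic Commit Protocol; global atomicity follows because no manager commits unless every manager has prepared.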
In countries such as Germany, where municipalities have planning sovereignty, problems of urban sprawl often arise. As the dynamics of land development have not substantially subsided in recent years, the national government decided to test the instrument of ‘Tradable Planning Permits’ (TPP) in a nationwide field experiment involving 87 municipalities. The field experiment implemented the key features of a TPP system in a laboratory setting with approximated real socioeconomic and planning conditions. In a TPP system, allocated planning permits must be used by municipalities for developing land. The permits can be traded between local jurisdictions, giving them flexibility in deciding how to comply with the regulation. To evaluate the performance of such a system, specific field data about future building areas and their impact on community budgets for the period 2014–2028 were collected. The field experiment comprised several sessions with representatives of the municipalities and with students. The participants were confronted with two (municipalities) or four (students) schemes. The results show that a trading system can curb land development in an effective and also efficient manner. However, depending on the regulatory framework, the trading schemes show different price developments and distributional effects. The inexperienced representatives of the local authorities could easily handle the permits in the administration and in the established market. A trading scheme sets very strong incentives to save open space and to direct development activities to areas within existing planning boundaries. It is therefore a promising instrument for Germany and also for other regions or countries with an established land-use planning system.
We investigate the toxicity of different types and sizes of microplastic particles (0.3–4 mm) under different conditions (new particles, aged particles with biofilm, and particles with adsorbed tributyltin) on the freshwater amphipod Gammarus fossarum in 3-week exposures. The plastic particles, which were randomly taken up to a small extent, were mostly polyphenylene oxide, polybutylene terephthalate and polypropylene, with particle sizes < 1 mm. Plastic particles did not affect the feeding and locomotory behaviour of gammarids, and there was no strong difference between pristine plastic particles and aged particles with biofilm. Mortality tended to be higher compared with the control. Tributyltin hydride (TBTH) adsorbed to microplastic particles had no effect on uptake, survival, feeding or locomotory behaviour during the 3 weeks of exposure. Dissolved TBTH, however, was already highly toxic after a few days of exposure (LC50-96h < 1 ng l–1).
Hardly any software development process is used as prescribed by authors or standards. Regardless of company size or industry sector, a majority of project teams and companies use hybrid development methods (short: hybrid methods) that combine different development methods and practices. Even though such hybrid methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this article, we make a first step towards a statistical construction procedure for hybrid methods. Grounded in 1467 data points from a large‐scale practitioner survey, we study the question: What are hybrid methods made of and how can they be systematically constructed? Our findings show that only eight methods and a few practices build the core of modern software development. Using an 85% agreement level in the participants' selections, we provide examples illustrating how hybrid methods can be characterized by the practices they are made of. Furthermore, using this characterization, we develop an initial construction procedure, which allows for defining a method frame and enriching it incrementally to devise a hybrid method using ranked sets of practices.
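The 85% agreement level mentioned above can be illustrated with a small sketch (hypothetical data, not the authors' actual procedure): a method's practice core is the set of practices selected by at least that share of participants.

```python
# Hedged sketch: deriving a practice core from survey agreement shares.
# The shares below are invented for illustration only.

def method_core(selections, threshold=0.85):
    """Return practices selected by at least `threshold` of participants.

    selections: dict mapping practice name -> fraction of participants
    who combine that practice with the method under study.
    """
    return sorted(p for p, share in selections.items() if share >= threshold)


# Hypothetical survey shares for a Scrum-based hybrid method:
shares = {"Code Review": 0.91, "Coding Standards": 0.88,
          "Daily Standup": 0.95, "Pair Programming": 0.42}
```

Under this reading, the practices clearing the threshold form the method frame, which the construction procedure would then enrich incrementally with lower-ranked practices.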
With the progress of technology in modern hospitals, intelligent perioperative situation recognition will gain more relevance due to its potential to substantially improve surgical workflows by providing situation knowledge in real time. Such knowledge can be extracted from image data by machine learning techniques but poses a privacy threat to the staff’s and patients’ personal data. De-identification is a possible solution for removing visual sensitive information. In this work, we developed a YOLO v3 based prototype to detect sensitive areas in the image in real time. These are then de-identified using common image obfuscation techniques. Our approach shows that it is in principle suitable for de-identifying sensitive data in OR images and contributes to a privacy-respectful way of processing in the context of situation recognition in the OR.
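The abstract does not detail the obfuscation step; as a dependency-free sketch of one common technique, the following pixelates a bounding box inside a grayscale image. The box coordinates are hypothetical stand-ins for YOLO v3 detections, and the function name is invented for illustration.

```python
# Hedged sketch of the obfuscation step only: pixelate a detected
# sensitive region by replacing each tile with its mean intensity.
# In the prototype, boxes would come from a YOLO v3 detector.

def pixelate_region(image, box, block=8):
    """Replace each block x block tile inside `box` with its mean value.

    image: 2-D list of grayscale pixel values (rows of ints), mutated in place.
    box:   (x, y, w, h) in pixel coordinates.
    """
    x, y, w, h = box
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            # Collect the tile, clipped to the box boundary.
            tile = [image[r][c]
                    for r in range(by, min(by + block, y + h))
                    for c in range(bx, min(bx + block, x + w))]
            mean = sum(tile) // len(tile)
            # Overwrite the tile with its mean — irreversible for large blocks.
            for r in range(by, min(by + block, y + h)):
                for c in range(bx, min(bx + block, x + w)):
                    image[r][c] = mean
    return image
```

Pixelation (like blurring or blacking out) trades off reversibility against how much scene context the situation-recognition pipeline retains; larger `block` values remove more identifying detail.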
Towards Automated Surgical Documentation using automatically generated checklists from BPMN models
(2021)
The documentation of surgeries is usually created from memory only after the operation, which is an additional effort for the surgeon and carries the risk of imprecise, shortened reports. The display of process steps in the form of checklists and the automatic creation of surgical documentation from the completed process steps could serve as a reminder, standardize the surgical procedure and save time for the surgeon. Based on two works from Reutlingen University, which implemented the creation of dynamic checklists from Business Process Model and Notation (BPMN) models and the storage of the times at which a process step was completed, a prototype was developed for an Android tablet to extend the dynamic checklists with functions such as uploading photos and files, manual user entries, the interception of foreseeable deviations from the normal course of operations, and the automatic creation of OR documentation.
Intraoperative brain deformation, so-called brain shift, affects the applicability of preoperative magnetic resonance imaging (MRI) data to assist the procedures of intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on the architecture of 3D convolutional neural networks, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the retrospective evaluation of cerebral tumors (RESECT) dataset. This study showed that our proposed method outperforms other registration methods in previous studies with an average mean squared error (MSE) of 85. Moreover, this method can register three 3D MRI-US pairs in less than a second, improving the expected outcomes of brain surgery.
The euphoria around microservices has decreased over the years, but the trend of modernizing legacy systems to this novel architectural style is unbroken to date. A variety of approaches have been proposed in academia and industry, aiming to structure and automate the often long-lasting and cost-intensive migration journey. However, our research shows that there is still a need for more systematic guidance. While grey literature is dominant for knowledge exchange among practitioners, academia has contributed a significant body of knowledge as well, catching up on its initial neglect. A vast number of studies on the topic yielded novel techniques, often backed by industry evaluations. However, practitioners hardly leverage these resources. In this paper, we report on our efforts to design an architecture-centric methodology for migrating to microservices. As its main contribution, a framework provides guidance for architects during the three phases of a migration. We refer to methods, techniques, and approaches based on a variety of scientific studies that have not been made available in a similarly comprehensible manner before. Through an accompanying tool to be developed, architects will be in a position to systematically plan their migration, make better informed decisions, and use the most appropriate techniques and tools to transition their systems to microservices.
With the expansion of cyber-physical systems (CPSs) across critical and regulated industries, systems must be continuously updated to remain resilient. At the same time, they should be extremely secure and safe to operate and use. The DevOps approach caters to business demands of more speed and smartness in production, but it is extremely challenging to implement DevOps due to the complexity of critical CPSs and requirements from regulatory authorities. In this study, expert opinions from 33 European companies expose the gap in the current state of practice on DevOps-oriented continuous development and maintenance. The study contributes to research and practice by identifying a set of needs. Subsequently, the authors propose a novel approach called Secure DevOps and provide several avenues for further research and development in this area. The study shows that, because security is a cross-cutting property in complex CPSs, its proficient management requires system-wide competencies and capabilities across the CPSs development and operation.
Towards a model for holistic mapping of supply chains by means of tracking and tracing technologies
(2022)
The usage of tracking and tracing technologies not only enables transparency and visibility of supply chains but also offers far-reaching advantages for companies, such as ensuring product quality or reducing supplier risks. Increasing the amount of shared information supports both internal and external planning processes as well as the stability and resilience of globally operating value chains. This paper aims to differentiate and define the functionalities of tracking and tracing technologies that are frequently used interchangeably in the literature. Furthermore, this paper incorporates influencing factors that impact a sequencing of the connected world in Industry 4.0 supply chain networks. This includes legal influences, the embedment of supply chain-related standards, and new possibilities of emerging technologies. Finally, the results are summarized in a model for the holistic mapping of supply chains by means of tracking and tracing technologies. The technological solutions that can be derived from the model enable companies to address missing elements in order to achieve the holistic mapping of supply chain events as well as the transparent representation of a digital shadow throughout the entire supply chain.
The benefits of urban data cannot be realized without a political and strategic view of data use. A core concept within this view is data governance, which aligns strategy in data-relevant structures and entities with data processes, actors, architectures, and overall data management. Data governance is not a new concept and has long been addressed by scientists and practitioners from an enterprise perspective. In the urban context, however, data governance has only recently attracted increased attention, despite the unprecedented relevance of data in the advent of smart cities. Urban data governance can create semantic compatibility between heterogeneous technologies and data silos and connect stakeholders by standardizing data models, processes, and policies. This research provides a foundation for developing a reference model for urban data governance, identifies challenges in dealing with data in cities, and defines factors for the successful implementation of urban data governance. To obtain the best possible insights, the study carries out qualitative research following the design science research paradigm, conducting semi-structured expert interviews with 27 municipalities from Austria, Germany, Denmark, Finland, Sweden, and the Netherlands. The subsequent data analysis based on cognitive maps provides valuable insights into urban data governance. The interview transcripts were transferred and synthesized into comprehensive urban data governance maps to analyze entities and complex relationships with respect to the current state, challenges, and success factors of urban data governance. The findings show that each municipal department defines data governance separately, with no uniform approach. Given cultural factors, siloed data architectures have emerged in cities, leading to interoperability and integrability issues. 
A city-wide data governance entity in a cross-cutting function can be instrumental in breaking down silos in cities and creating a unified view of the city’s data landscape. The further identified concepts and their mutual interaction offer a powerful tool for developing a reference model for urban data governance and for the strategic orientation of cities on their way to data-driven organizations.
Purpose
For the modeling, execution, and control of complex, non-standardized intraoperative processes, a modeling language is needed that reflects the variability of interventions. As the established Business Process Model and Notation (BPMN) reaches its limits in terms of flexibility, the Case Management Model and Notation (CMMN) was considered as it addresses weakly structured processes.
Methods
To analyze the suitability of the modeling languages, BPMN and CMMN models of a Robot-Assisted Minimally Invasive Esophagectomy and Cochlea Implantation were derived and integrated into a situation recognition workflow. Test cases were used to contrast the differences and compare the advantages and disadvantages of the models concerning modeling, execution, and control. Furthermore, the impact on transferability was investigated.
Results
Compared to BPMN, CMMN allows flexibility for modeling intraoperative processes while remaining understandable. Although more effort and process knowledge are needed for execution and control within a situation recognition system, CMMN enables better transferability of the models and therefore of the system. In conclusion, CMMN should be chosen as a supplement to BPMN for flexible process parts that can only be covered insufficiently by BPMN, or otherwise as a replacement for the entire process.
Conclusion
CMMN offers the flexibility for variable, weakly structured process parts, and is thus suitable for surgical interventions. A combination of both notations could allow optimal use of their advantages and support the transferability of the situation recognition system.
The conventional view of the value-creation chain suggests offering high-value propositions at the product level (in terms of benefits provided by elements of the product) to attain high-value perceptions at the customer level, which should ultimately result in high-value appropriation at the firm level (i.e. relationship, volume, pricing and financial success). This study challenges this view and provides a differentiated understanding of the value creation chain. With a multi-industry sample of 339 companies and a sample of 626 customers to validate managerial assessments, the authors apply a configurational approach to identify whether and to what extent offering high-value propositions at the product level is necessary or sufficient for achieving superior value perceptions at the customer level and high-value appropriation at the firm level. Taking into account the company-internal and company-external environment of the value-creation chain, the study identifies seven value creation chain constellations.
Purpose: This study aims to conceptualize and test the effect of consumers’ perceptions of complaint handling quality (PCHQ) in both traditional and social media channels.
Design/methodology/approach: Study 1 systematically reviews the relevant literature and then carries out a consumer and manager survey. This approach aims to conceptualize the dimensionality of PCHQ. Study 2 tests the effect of PCHQ on key marketing outcomes. Using survey data from a German telecommunications company, the study provides an explanation for the differences in outcomes across traditional (hotline) and social media channels.
Findings: Study 1 reveals that PCHQ is best conceptualized as a five-dimensional construct with 15 facets. There are significant differences between customers and managers in terms of the importance attached to the various dimensions. The construct shows strong psychometric properties with high reliability and validity, thereby opening up opportunities to treat these facets as measurement indicators for the construct. Study 2 indicates that the effect of PCHQ on consumer loyalty and word-of-mouth (WOM) communication is stronger in social media than in traditional channels. Procedural justice and the overall quality of service solutions emerge as general dimensions of PCHQ because they are equally important in both channels. In contrast, interactional justice, distributive justice and customer effort have varying effects across the two channels.
Research limitations/implications: This study contributes to the understanding of a firm’s channel selection for complaint handling in two ways. First, it evaluates and conceptualizes the PCHQ construct. Second, it compares the effects of different dimensions of PCHQ on key marketing outcomes across traditional and social media channels.
Practical implications: This study enables managers to understand the difference in efficacy attached to different dimensions of PCHQ. It further highlights such differences across traditional and social media service channels. For example, the effect of complaint handling on social media is of particular importance when generating WOM communication.
Originality/value: This study offers a comprehensive conceptualization of the PCHQ construct and reveals the general and channel contingent effects of its different dimensions on key marketing outcomes.
The development of new materials that mimic cartilage and its function is an unmet need that will allow replacing the damaged parts of the joints, instead of the whole joint. Polyvinyl alcohol (PVA) hydrogels have raised special interest for this application due to their biocompatibility, high swelling capacity and chemical stability. In this work, the effect of post-processing treatments (annealing, high hydrostatic pressure (HHP) and gamma-radiation) on the performance of PVA gels obtained by cast-drying was investigated and, their ability to be used as delivery vehicles of the anti-inflammatories diclofenac or ketorolac was evaluated. HHP damaged the hydrogels, breaking some bonds in the polymeric matrix, and therefore led to poor mechanical and tribological properties. The remaining treatments, in general, improved the performance of the materials, increasing their crystallinity. Annealing at 150 °C generated the best mechanical and tribological results: higher resistance to compressive and tensile loads, lower friction coefficients and ability to support higher loads in sliding movement. This material was loaded with the anti-inflammatories, both without and with vitamin E (Vit.E) or Vit.E + cetalkonium chloride (CKC). Vit.E + CKC helped to control the release of the drugs which occurred in 24 h. The material did not induce irritability or cytotoxicity and, therefore, shows high potential to be used in cartilage replacement with a therapeutic effect in the immediate postoperative period.
Container virtualization has evolved into a key technology for deployment automation in line with the DevOps paradigm. Whereas container management systems facilitate the deployment of cloud applications by employing container-based artifacts, parts of the deployment logic have been applied before to build these artifacts. Current approaches do not integrate these two deployment phases in a comprehensive manner. Limited knowledge of the application software and middleware encapsulated in container-based artifacts leads to maintainability and configuration issues. Besides, the deployment of cloud applications is based on custom orchestration solutions, leading to lock-in problems. In this paper, we propose a two-phase deployment method based on the TOSCA standard. We present integration concepts for TOSCA-based orchestration and deployment automation using container-based artifacts. Our two-phase deployment method enables capturing and aligning all the deployment logic related to a software release, leading to better maintainability. Furthermore, we build a container management system, composed of a TOSCA-based orchestrator on Apache Mesos, to deploy container-based cloud applications automatically.
Mature economies that are driven mainly by small and medium-sized enterprises (SMEs) are increasingly becoming dependent on material imports. Global material consumption is ever increasing, driven mainly by population growth. Decoupling material consumption from economic growth is one of the greatest challenges of the 21st century. In this paper, available methods for the assessment of material efficiency on different economic scales are investigated, and those that are particularly suitable for use in SMEs are identified. Recommendations for further improvements of the selected tools are given, along with an outlook on planned research activities in the field of material efficiency in enterprises, supply chains and circular economy aspects.
Enterprise Architectures (EA) consist of a multitude of architecture elements, which relate to each other in manifold ways. As the change of a single element hence impacts various other elements, mechanisms for architecture analysis are important to stakeholders. The high number of relationships aggravates architecture analysis and makes it a complex yet important task. In practice, EAs are often analyzed using visualizations. This article contributes to the field of visual analytics in enterprise architecture management (EAM) by reviewing how state-of-the-art software platforms in EAM support stakeholders in providing and visualizing the “right” information for decision-making tasks. In a research study, we investigate the collaborative decision-making process in an experiment with master’s students using professional EAM tools. We evaluate the students’ findings by comparing them with the experience of an enterprise architect.
Titanium(IV) surface complexes bearing chelating catecholato ligands for enhanced band-gap reduction
(2023)
Protonolysis reactions between dimethylamido titanium(IV) catecholate [Ti(CAT)(NMe2)2]2 and neopentanol or tris(tert-butoxy)silanol gave catecholato-bridged dimers [(Ti(CAT)(OCH2tBu)2)(HNMe2)]2 and [Ti(CAT){OSi(OtBu)3}2(HNMe2)2]2, respectively. Analogous reactions using the dimeric dimethylamido titanium(IV) (3,6-di-tert-butyl)catecholate [Ti(CATtBu2-3,6)(NMe2)2]2 yielded the monomeric Ti(CATtBu2-3,6)(OCH2tBu)2(HNMe2)2 and Ti(CATtBu2-3,6)[OSi(OtBu)3]2(HNMe2)2. The neopentoxide complex Ti(CATtBu2-3,6)(OCH2tBu)2(HNMe2)2 engaged in further protonolysis reactions with Si–OH groups and was consequently used for grafting onto mesoporous silica KIT-6. Upon immobilization, the surface complex [Ti(CATtBu2-3,6)(OCH2tBu)2(HNMe2)2]@[KIT-6] retained the bidentate chelating geometry of the catecholato ligand. This convergent grafting strategy was compared with a sequential and an aqueous approach, which gave either a mixture of bidentate chelating species with a bipodally anchored Ti(IV) center along with other physisorbed surface species or not clearly identifiable surface species. Extension of the convergent and aqueous approaches to anatase mesoporous titania (m-TiO2) enabled optical and electronic investigations of the corresponding surface species, revealing that the band-gap reduction is more pronounced for the bidentate chelating species (convergent approach) than for that obtained via the aqueous approach. The applied methods include X-ray photoelectron spectroscopy, ultraviolet photoelectron spectroscopy, and solid-state UV/vis spectroscopy. The energy-level alignment for the surface species from the aqueous approach, calculated from experimental data, accounts for the well-known type II excitation mechanism, whereas the findings indicate a distinct excitation mechanism for the bidentate chelating surface species of the material [Ti(CATtBu2-3,6)(OCH2tBu)2(HNMe2)2]@[m-TiO2].
Successful transitions to a sustainable bioeconomy require novel technologies, processes, and practices as well as a general agreement about the overarching normative direction of innovation. Both requirements necessarily involve collective action by those individuals who purchase, use, and co-produce novelties: the consumers. Based on theoretical considerations borrowed from evolutionary innovation economics and consumer social responsibility, we explore to what extent consumers’ scope of action is addressed in the scientific bioeconomy literature. We do so by systematically reviewing bioeconomy-related publications according to (i) the extent to which consumers are regarded as passive vs. active, and (ii) different domains of consumer responsibility (depending on their power to influence economic processes). We find all aspects of active consumption considered to varying degrees but observe little interconnection between domains. In sum, our paper contributes to the bioeconomy literature by developing a novel coding scheme that allows us to pinpoint different aspects of consumer activity, which have been considered in a rather isolated and undifferentiated manner. Combined with our theoretical considerations, the results of our review reveal a central research gap which should be taken up in future empirical and conceptual bioeconomy research. The system-spanning nature of a sustainable bioeconomy demands an equally holistic exploration of the consumers’ prospective and shared responsibility for contributing to its coming of age, ranging from the procurement of information on bio-based products and services to their disposal.
When forecasting sales figures, not only the sales history but also the future price of a product influences the sales quantity. At first sight, multivariate time series seem to be the appropriate model for this task. Nonetheless, in real life history is not always repeatable; in the case of sales history there is only one price for a product at a given time. This complicates the design of a multivariate time series model. However, for some seasonal or perishable products the price is rather a function of the expiration date than of the sales history. This additional information can help to design a more accurate and causal time series model. The proposed solution uses a univariate time series model but takes the price of a product as a parameter that systematically influences the prediction based on a calculated periodicity. The price influence is computed from historical sales data using correlation analysis and adjustable price ranges to identify products with a comparable history. The periodicity is calculated using a novel approach based on data folding and Pearson correlation. Compared with other techniques, this approach is easy to compute and allows the price parameter to be preset for predictions and simulations. Tests with data from the Data Mining Cup 2012 as well as with artificial data demonstrate better results than established, sophisticated time series methods.
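The exact data-folding scheme is not given in the abstract; one plausible reading, sketched here under that assumption, folds the series into consecutive candidate-period segments and scores each candidate period by the mean Pearson correlation of adjacent folds. The function names are hypothetical.

```python
# Hedged sketch of periodicity detection via data folding plus Pearson
# correlation. The paper's actual folding scheme may differ; this only
# illustrates the idea: a true period makes adjacent folds look alike.

def pearson(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0


def estimate_period(series, candidates):
    """Return the candidate period whose adjacent folds correlate best."""
    best_p, best_score = None, float("-inf")
    for p in candidates:
        # Fold the series into consecutive, non-overlapping segments of length p.
        folds = [series[i:i + p] for i in range(0, len(series) - p + 1, p)]
        folds = [f for f in folds if len(f) == p]
        if len(folds) < 2:
            continue
        # Score: average correlation between neighbouring folds.
        score = sum(pearson(folds[i], folds[i + 1])
                    for i in range(len(folds) - 1)) / (len(folds) - 1)
        if score > best_score:
            best_p, best_score = p, score
    return best_p
```

Unlike FFT-based periodograms, this needs only sums and products over short segments, which is consistent with the abstract's claim that the approach is easy to compute.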
Standardisation of breath sampling is important for the application of breath analysis in clinical settings. By studying the effect of room airing on indoor and breath analytes and by generating time series of room air with different sampling intervals, we sought to gain further insights into room air metabolism, to determine the relevance of exogenous VOCs, and to draw conclusions about their consideration in the interpretation of exhaled breath. Room air and exhaled breath of a healthy subject were analysed before and after room airing. Furthermore, a time series of room air with doors and windows closed was taken over 84 h by automatic sampling every 180 min. A second time series studied room air analytes over 70 h with samples taken every 16.5 min. For breath and room air measurements, an IMS coupled to a multi-capillary column (IMS/MCC) [Bio-Scout®, B&S Analytik GmbH, Dortmund, Germany] was used. The peaks were characterised using the software Visual Now (B&S Analytik, Dortmund, Germany) and identified using the software package MIMA (version 1.1, provided by the Max Planck Institute for Informatics, Saarbrücken, Germany) and the database 20160426_SubstanzDbNIST_122 (B&S Analytik GmbH, Dortmund, Germany). In the morning, four analytes (decamethylcyclopentasiloxane [541-02-6]; pentan-2-one [107-87-9], dimer; hexan-1-al [66-25-1]; pentan-2-one [107-87-9], monomer) showed high intensities in the room air and exhaled breath. They were significantly, but not equally, reduced by room airing. The time series over 84 h showed a time-dependent decrease of some analytes (limonene monomer and dimer; decamethylcyclopentasiloxane; butan-1-ol) as well as an increase of others (pentan-2-one [107-87-9], dimer). Shorter sampling intervals exhibited circadian variations of concentrations for many analytes. Breath sampling in the morning requires room airing before starting; the variation of the intensity of indoor analytes can then be kept small.
The time series of indoor analytes show that their intensities behave differently, with time-dependent declines, constant increases and circadian variations, depending on room airing. This has implications for the breath sampling procedure and the interpretation of exhaled breath.
This study introduces a straightforward approach to construct three-dimensional (3D) surface-enhanced Raman spectroscopy (SERS) substrates using chemically modified silica particles as microcarriers and by attaching metal nanoparticles (NPs) onto their surfaces. Tollens’ reagent and sputtering techniques are utilized to prepare the SERS substrates from mercapto-functionalized silica particles. Treatment with Tollens’ reagent generates a variety of silver NPs, ranging from approximately 10 to 400 nm, while sputtering with gold (Au) yields uniformly distributed NPs with an island-like morphology. Both substrates display wide plasmon resonances in the scattering spectra, making them effective for SERS in the visible spectral range, with enhancement factors (ratio of the analyte’s intensity at the hotspot compared to that on the substrate in the absence of metal nanoparticles) of up to 25. These 3D substrates have a significant advantage over traditional SERS substrates because their active surface area is not limited to a 2D surface but offers a much greater active surface due to the 3D arrangement of the NPs. This feature may enable achieving much higher SERS intensity from within streaming liquids or inside cells/tissues.
The data presented in this article characterize the thermomechanical and microhardness properties of a novel melamine-formaldehyde (MF) resin intended for use as a self-healing surface coating. The investigated MF resin is able to undergo reversible crosslinking via Diels-Alder reactive groups. The microhardness data were obtained from nanoindentation measurements performed on solid resin film samples at different stages of the self-healing cycle. Thermomechanical analysis was performed under dynamic load conditions. The data provide supplemental material to the manuscript published by Urdl et al. 2020 (https://doi.org/10.1016/j.eurpolymj.2020.109601) on the self-healing performance of this resin, where a more thorough discussion of the preparation and properties of this coating material and its application in impregnated paper-based decorative laminates can be found.
In the course of more intensive energy generation from regenerative sources, an increased amount of energy storage is required. In addition to the widespread means of storing electric energy, storing energy thermally can contribute significantly. However, limited research exists on the behaviour of thermal energy storage (TES) systems in practical operation. While the physical processes are well known, it is often not possible to adequately evaluate a TES's performance with respect to the quality of thermal stratification inside the tank, which is crucial for its thermodynamic effectiveness. The behaviour of a TES is experimentally investigated in cyclic charging and discharging operation in interaction with a cogeneration (CHP) unit at a laboratory test rig. From the measurements, the quality of thermal stratification is evaluated under varying conditions using different metrics such as the normalised stratification factor, modified MIX number, exergy number and exergy efficiency, which extends the state of the art for CHP applications. The results show that the positioning of the temperature sensors for turning the CHP unit on and off has a significant influence on both the effective capacity of a TES and the quality of thermal stratification inside the tank. It is also revealed that positioning at least one of these sensors outside the storage tank, i.e. in the return line to the CHP unit, prevents deterioration of thermal stratification, thereby enhancing thermodynamic effectiveness. Furthermore, the effects of thermal load and thermal load profile on effective capacity and thermal stratification are discussed, even though these are much smaller than the effect of positioning the temperature sensors.
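Exergy-based stratification metrics of the kind named above compare the exergy of the measured temperature profile with that of a fully mixed tank holding the same energy. The following is a minimal illustrative sketch under assumed conditions (equal-mass water layers, an assumed dead-state temperature), not the paper's exact metric definitions.

```python
import math

T0 = 293.15   # assumed dead-state (ambient) temperature in K
CP = 4186.0   # specific heat capacity of water, J/(kg K)

def tank_exergy(temps_k, layer_mass_kg):
    """Exergy of a tank discretised into equal-mass layers:
    Xi = sum over layers of m*cp*((T - T0) - T0*ln(T/T0))."""
    return sum(layer_mass_kg * CP * ((t - T0) - T0 * math.log(t / T0))
               for t in temps_k)

def stratification_efficiency(temps_k, layer_mass_kg):
    """Ratio of the measured profile's exergy to the exergy of the same
    energy content fully mixed to one uniform temperature (equal masses,
    so the mixed temperature is the simple mean)."""
    t_mix = sum(temps_k) / len(temps_k)
    x_mixed = tank_exergy([t_mix] * len(temps_k), layer_mass_kg)
    return tank_exergy(temps_k, layer_mass_kg) / x_mixed
```

Because the exergy function is convex in temperature, a stratified profile always yields a ratio above 1; mixing destroys exergy at constant energy content, which is why sensor placements that degrade stratification reduce thermodynamic effectiveness.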
This paper investigates the electrothermal stability and the predominant defect mechanism of a Schottky gate AlGaN/GaN HEMT. Calibrated 3-D electrothermal simulations are performed using a simple semiempirical dc model, which is verified against high-temperature measurements up to 440°C. To determine the thermal limits of the safe operating area, measurements up to destruction are conducted at different operating points. The predominant failure mechanism is identified to be hot-spot formation and subsequent thermal runaway, induced by large drain–gate leakage currents that occur at high temperatures. The simulation results and the high temperature measurements confirm the observed failure patterns.
Theory and practice of implementing a successful enterprise IoT strategy in the industry 4.0 era
(2021)
Since the arrival of the internet and affordable access to technology, digital technologies have occupied a growing place in industry, propelling us towards a fourth industrial revolution: Industry 4.0. In today's era of digital upheaval, enterprises are increasingly undergoing transformations that lead to their digitalization. The traditional manufacturing industry is in the throes of a digital transformation accelerated by exponentially growing technologies (e.g., intelligent robots, the Internet of Things, sensors, 3D printing). Around the world, enterprises are in a frantic race to implement IoT-based solutions to improve their productivity and innovation, reduce costs, and strengthen their position in international markets. Considering the immense transformative potential that the IoT and big data bring to the industrial sector, adopting the IoT across industrial systems is essential to remain competitive and thus transform the factory into a smart factory. This paper describes the innovation and digitalization process, following the Industry 4.0 paradigm, for implementing a successful enterprise IoT strategy.
Hypericin has large potential in modern medicine and exhibits fascinating structural dynamics, such as multiple conformations and tautomerization. However, it is difficult to study individual conformers/tautomers, as they cannot be isolated due to the similarity of their chemical and physical properties. An approach to overcome this difficulty is to combine single molecule experiments with theoretical studies. Time-dependent density functional theory (TD-DFT) calculations reveal that tautomerization of hypericin occurs via a two-step proton transfer with an energy barrier of 1.63 eV, whereas a direct single-step pathway has a large activation energy barrier of 2.42 eV. Tautomerization in hypericin is accompanied by reorientation of the transition dipole moment, which can be directly observed by fluorescence intensity fluctuations. Quantitative tautomerization residence times can be obtained from the autocorrelation of the temporal emission behavior revealing that hypericin stays in the same tautomeric state for several seconds, which can be influenced by the embedding matrix. Furthermore, replacing hydrogen with deuterium further proves that the underlying process is based on tunneling of a proton. In addition, the tautomerization rate can be influenced by a λ/2 Fabry–Pérot microcavity, where the occupation of Raman active vibrations can alter the tunneling rate.
Thematic issue on human-centred ambient intelligence: cognitive approaches, reasoning and learning
(2017)
This editorial presents advances in human-centred Ambient Intelligence applications which take into account cognitive issues when modelling users (e.g., stress, attention disorders), and which learn users' activities and preferences and adapt to them (e.g., at home or while driving a car). These papers also show AmI applications in health and education, which makes them even more valuable for society at large.
We investigated the state of artificial intelligence (AI) in pharmaceutical research and development (R&D) and outline here a risk and reward perspective regarding digital R&D. Given the novelty of the research area, a combined qualitative and quantitative research method was chosen, including the analysis of annual company reports, investor relations information, patent applications, and scientific publications of 21 pharmaceutical companies for the years 2014 to 2019. As a result, we can confirm that the industry is in an ‘early mature’ phase of using AI in R&D. Furthermore, we can demonstrate that, despite the efforts that need to be managed, recent developments in the industry indicate that it is worthwhile to invest to become a ‘digital pharma player’.
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from the use of hyper-nodes and hyper-edges, which make it possible to represent data structures at different abstraction levels. We prove that the model is at least equivalent in expressive power to the most popular data models. Therefore, it can be used as a supermodel for model management and data integration. We illustrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented and XML models, and RDF Schema.
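As an illustration of what "schema-bound" means here, the following hypothetical sketch validates node properties against declared types and lets a hyper-edge connect any number of nodes. All names are our own invention; the paper's formal model (with hyper-nodes and abstraction levels) is considerably richer.

```python
class TypedGraph:
    """Minimal schema-bound graph: nodes carry a declared type whose
    properties are validated, and hyper-edges may span many nodes."""

    def __init__(self, schema):
        # schema: {type_name: {property_name: expected_python_type}}
        self.schema = schema
        self.nodes = {}         # node_id -> (type_name, properties)
        self.hyper_edges = []   # (label, [node_id, ...])

    def add_node(self, node_id, type_name, **props):
        expected = self.schema[type_name]
        for key, value in props.items():
            if key not in expected or not isinstance(value, expected[key]):
                raise TypeError(f"property {key!r} violates schema of {type_name}")
        self.nodes[node_id] = (type_name, props)

    def add_hyper_edge(self, label, node_ids):
        # Unlike a binary edge, a hyper-edge references any number of nodes.
        if not all(n in self.nodes for n in node_ids):
            raise KeyError("hyper-edge references an unknown node")
        self.hyper_edges.append((label, list(node_ids)))
```

A schema violation is rejected at insertion time, which is precisely the data-quality guarantee that a schema-free property graph cannot give.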
Purpose
This field study aims to investigate the interactive relationships of millennial employee’s gender, supervisor’s gender and country culture on the conflict-management strategies (CMS) in ten countries (USA, China, Turkey, Germany, Bangladesh, Portugal, Pakistan, Italy, Thailand and Hong Kong).
Design/methodology/approach
This exploratory study extends past research by examining the interactive effects of gender × supervisor's gender × country on the CMS within a single generation of workers, millennials. The Rahim Organizational Conflict Inventory–II, Form A was used to assess the use of the five CMS (integrating, obliging, dominating, avoiding and compromising). Data analysis found that the CMS used in the workplace are associated with the interaction of worker and supervisor genders and the national context of their work.
Findings
Data analysis (N = 2,801) was performed using the multivariate analysis of covariance with work experience as a covariate. The analysis provided support for the three-way interaction. This interaction suggests how one uses the CMS depends on self-gender, supervisor’s gender and the country where the parties live. Also, the covariate – work experience – was significantly associated with CMS.
Research limitations/implications
One of the limitations of this study is that the authors collected data from a collegiate sample of employed management students in ten countries. There are significant implications for leading global teams and training programs for mid-level millennials.
Practical implications
There are various conflict situations where one conflict strategy may be more appropriate than others. Organizations may have to change their policies for recruiting employees who are more effective in conflict management.
Social implications
Conflict management is important not only for managers but for all human beings. Individuals handle conflict every day, and it would be beneficial if they could handle it effectively and improve their gains.
Originality/value
To the best of the authors’ knowledge, no study has tested a three-way interaction of variables on CMS. This study has a wealth of information on CMS for global managers.
The recently described rhizolutin and collinolactone, isolated from Streptomyces sp. Gö 40/10, share the same novel carbon scaffold. Analyses by NMR and X-ray crystallography verify the structure of collinolactone and propose a revision of rhizolutin's stereochemistry. Isotope-labeled precursor feeding shows that collinolactone is biosynthesized via a type I polyketide synthase with Baeyer-Villiger oxidation. CRISPR-based genetic strategies led to the identification of the biosynthetic gene cluster and a high-production strain. Chemical semisyntheses yielded collinolactone analogues with inhibitory effects on the L929 cell line. Fluorescence microscopy revealed that only particular analogues induce monopolar spindles, impairing cell division in mitosis. Inspired by the Alzheimer-protective activity of rhizolutin, we investigated the neuroprotective effects of collinolactone and its analogues on glutamate-sensitive cells (HT22); indeed, natural collinolactone displays distinct neuroprotection against intracellular oxidative stress.
Few unfocused factories outperform competitors, but focus is elusive because the environment is constantly evolving, which requires changes to a factory's key tasks. So how can focus be achieved and sustained? We present insights derived from a historical analysis of the German Hewlett-Packard server plant, which went through a series of focus changes over the years. Using this example, we provide clues for the right timing of focus changes and discuss critical structural and infrastructural changes required during focus transitions, as well as cross-functional coordination and leadership challenges. Our assertion is that production operations constitute a system that can adapt to disruptive change by using the levers of manufacturing policies to stay focused on a limited but absolutely essential task that creates a strategic advantage.
This article explores current debate on the use of soft power in international higher education, highlighting existing tensions between competing political and academic discourses. It draws on examples from practice and relevant insights in soft power scholarship to capture varying paradoxes and dilemmas that emerge as nations try to leverage the power of international tertiary education to enhance their brand and attract foreign audiences in the name of public diplomacy. Whilst exposing cases of hubris and hidden agendas, this study also addresses issues of inequality and responds to a growing call for knowledge diplomacy aimed at tackling common global problems.
Implementation of product-service systems (PSS) requires structural changes in the way that business in manufacturing industries is traditionally conducted. The literature frequently mentions the importance of human resource management (HRM), since people are involved in the entire process of PSS development and employees are the primary link to customers. However, to this day, no study has provided empirical evidence on whether and in what way the HRM of firms that implement PSS differs from the HRM of firms that solely run a traditional manufacturing-based business model. The aim of this study is to contribute to closing this gap by investigating the particular HR components of manufacturing firms that implement PSS and comparing them with the HRM of firms that do not. The context of this study is the fashion industry, which is an ideal setting since it is a mature and highly competitive industry that is well documented for causing significant environmental impact. PSS present a promising opportunity for fashion firms to differentiate themselves and mitigate the industry's ecological footprint. Analysis of variance (ANOVA) was conducted to analyze data from 102 international fashion firms. Findings reveal a significantly higher focus on nearly the entire spectrum of HRM components in firms that implement PSS compared with firms that do not. The empirical findings and their interpretation are utilized to propose a general framework of the role of HRM for PSS implementation. This serves as a departure point for further research by both scholars and practitioners, and fosters the understanding of the role of HRM in managing PSS implementation.
Context: Development of software-intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want through direct customer feedback and observation of usage behaviour. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, to design and manage experiment plans, to link experiment results with a product roadmap, and to manage a flexible business strategy. The main challenges are the proper, rapid design of experiments; advanced instrumentation of software to collect, analyse, and store relevant data; and the integration of experiment results into both the product development cycle and the software development process.
Purpose – The purpose of this paper is to examine the mediating effect of psychological contract breach on the relationship between job insecurity and counterproductive workplace behavior (CWB) and the moderating effect of employment status in this relationship.
Design/methodology/approach – Data were collected from 212 supervisor–subordinate dyads in a large Chinese state-owned air transportation group. AMOS 17.0 software was used to examine the hypothesized predictions and the theoretical model.
Findings – The results showed that psychological contract breach partially mediates the effect of job insecurity on CWB, including organizational counterproductive workplace behavior and interpersonal counterproductive workplace behavior. In addition, the relationships between job insecurity, psychological contract breach and CWB differ significantly between permanent workers and contract workers.
Originality/value – The present study provides a new insight into explaining the linkage between job insecurity and negative work behaviors as well as suggestions to managers on minimizing the harmful effects of job insecurity.
Public transport maps are typically designed to support route-finding tasks for passengers, while also providing an overview of stations, metro lines, and city-specific attractions. Most of these maps are static representations, perhaps placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views of dynamic passenger data at different levels of temporal granularity. Moreover, we provide extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by passengers. We also integrated a graph-based view of user-selected routes, a way to interactively compare those routes, and an attribute- and property-driven automatic computation of specific routes for one map as well as for all available maps in our repertoire; finally, the most important sights in each city are included as extra information that can be added to a user-selected route. We illustrate the usefulness of our interactive visualization and map navigation system by applying it to the railway system of Hamburg, Germany, while also taking into account the extra passenger data. As a further indication of the usefulness of the interactively enhanced metro maps, we conducted a controlled user experiment with 20 participants.
THE PROBLEM: Companies create problems for customers and employees when product innovation goes unmanaged. Eventually, excessive operational complexity hurts the bottom line.
THREE SOLUTIONS: Focus on product integration, not product proliferation. Make sure your product developers work closely with customer-facing and operational employees. And settle on a high-level purpose that can guide decision making.
Artificial intelligence (AI) technologies, such as machine learning and deep learning, have been predicted to strongly impact future organizations and radically change the way projects are managed. The Project Management Institute (PMI), the network of around 1.1 million certified project managers, ranked AI as one of the top three disruptors of their profession. In our own study on the effect of AI, 37% of project management processes could be executed by machine learning and other AI technologies. In addition, Gartner recently postulated that 80% of the work of today's project managers may be eliminated by AI by 2030.
This editorial aims to outline today's project and portfolio management in the context of pharmaceutical research and development (R&D), followed by an AI vision and a more tangible mission, and to illustrate what the consequences of AI-enabled project and portfolio management could be for pharmaceutical R&D.
The paper describes a new stimulus using learning factories and an academic research programme, an M.Sc. in Digital Industrial Management and Engineering (DIME) comprising a double degree, to enhance international collaboration between four partner universities. The programme will be structured in such a way as to maintain or improve the level of innovation at the learning factories of each partner. The partners agreed to use learning factory focus areas along with DIME learning modules to stimulate international collaboration. Furthermore, they identified several research areas within the framework of the DIME programme to encourage horizontal and vertical collaboration. Vertical collaboration connects faculty expertise across the learning factory network to advance knowledge in one of the focus areas, while horizontal collaboration connects knowledge and expertise across multiple focus areas. Together they offer a platform for students to develop the disciplinary and cross-disciplinary applied research skills necessary for addressing the complex challenges faced by industry. Hence, the university partners have the opportunity to develop learning factory capabilities in alignment with the smart manufacturing concept. The learning factory is thus an important pillar in this venture. While postgraduate students and researchers in the DIME programme are the enablers who ensure the success of entire projects, the learning factory provides a learning environment that is entirely conducive to fostering these successful collaborations. Ultimately, the partners are focussed on utilising smart technologies in line with the digitalisation of the production process.
The fashion industry is well documented for causing significant environmental impact. Product-service systems (PSS) present a promising way to address this challenge. PSS shift the focus toward complementary service offers, which decouples customer satisfaction from material consumption and entails dematerialization. However, PSS are not eco-efficient by nature but need to be accompanied by corporate environmental management (CEM) practices. The objective of this article is to examine the potential of PSS to contribute to the environmental sustainability of today's fashion industry by investigating whether fashion firms with a positive attitude toward PSS implementation also pursue goals related to the ecological environment. For this purpose, analysis of variance (ANOVA) is conducted to analyze data from 102 fashion firms. Results reveal that the diffusion of PSS in today's fashion industry is low and few firms consider implementing PSS. Results furthermore demonstrate that PSS implementation is positively related to CEM. This indicates that existing structures of CEM favor PSS implementation and unlock the eco-efficient potential of implemented PSS in the fashion industry.
Purpose
Injury or inflammation of the middle ear often results in persistent tympanic membrane (TM) perforations, leading to conductive hearing loss (HL). However, in some cases the magnitude of HL exceeds that attributable to the TM perforation alone. The aim of this study is to better understand the effects of the location and size of TM perforations on the sound transmission properties of the middle ear.
Methods
The middle ear transfer functions (METF) of six human temporal bones (TB) were compared before and after perforating the TM at different locations (anterior or posterior lower quadrant) and to different degrees (1 mm, ¼ of the TM, ½ of the TM, and full ablation). The sound-induced velocity of the stapes footplate was measured using single-point laser-Doppler-vibrometry (LDV). The METF were correlated with a Finite Element (FE) model of the middle ear, in which similar alterations were simulated.
Results
The measured and calculated METF showed frequency and perforation size dependent losses at all perforation locations. Starting at low frequencies, the loss expanded to higher frequencies with increased perforation size. In direct comparison, posterior TM perforations affected the transmission properties to a larger degree than anterior perforations. The asymmetry of the TM causes the malleus-incus complex to rotate and results in larger deflections in the posterior TM quadrants than in the anterior TM quadrants. Simulations in the FE model with a sealed cavity show that small perforations lead to a decrease in TM rigidity and thus to an increase in oscillation amplitude of the TM mainly above 1 kHz.
Conclusion
Size and location of TM perforations have a characteristic influence on the METF. The correlation of the experimental LDV measurements with an FE model contributes to a better understanding of the pathologic mechanisms of middle-ear diseases. If small perforations with significant HL are observed in daily clinical practice, additional middle ear pathologies should be considered. Further investigations on the loss of TM pretension due to perforations may be informative.
Due to the wide variety of benign and malignant salivary gland tumors, classification and determination of malignant behavior based on histomorphological criteria can be difficult and sometimes impossible. Spectroscopic procedures can acquire molecular biological information without destroying the tissue during the measurement process. Since several tissue preparation procedures exist, our study investigated the impact of these preparations on the chemical composition of healthy and tumorous salivary gland tissue by Fourier-transform infrared (FTIR) microspectroscopy. Sequential tissue cross-sections were prepared from native, formalin-fixed and formalin-fixed paraffin-embedded (FFPE) tissue and analyzed. The FFPE cross-sections were dewaxed and remeasured. Using principal component analysis (PCA) combined with discriminant analysis (DA), robust models for the distinction of sample preparations were built individually for each parotid tissue type. The PCA-DA model evaluation showed a high similarity between native and formalin-fixed tissues based on their chemical composition. Thus, formalin-fixed tissues are highly representative of the native samples and, due to their robust nature, facilitate a transfer from scientific laboratory analysis into the clinical routine. Furthermore, dewaxing the cross-sections entails a loss of molecular information. Our study successfully demonstrated how FTIR microspectroscopy can be used as a powerful tool within existing clinical workflows.
Throughout the past decade, the rapid proliferation and widespread adoption of social media for marketing purposes can be observed across all technological and digital touchpoints. This paper focuses on the implementation of social media marketing during mega sports events. We examine the impact by analyzing Adidas' and Nike's social media campaigns around the FIFA World Cup 2014 in Brazil. What impact did the social media activities of Nike and Adidas have on their Twitter and Facebook presence? What additional value did the social media activities contribute to the respective targets of the entire marketing campaign? To answer these questions, an empirical study was conducted; several hypotheses were formulated and tested.
Perivascular stromal cells, including mesenchymal stem/stromal cells (MSCs), secrete paracrine factors in response to exercise training that can facilitate improvements in muscle remodeling. This study was designed to test the capacity of muscle-resident MSCs (mMSCs) isolated from young mice to release regenerative proteins in response to mechanical strain in vitro, and subsequently to determine the extent to which strain-stimulated mMSCs can enhance skeletal muscle and cognitive performance in a mouse model of uncomplicated aging. Protein arrays confirmed a robust increase in protein release at 24 h following an acute bout of mechanical strain in vitro (10%, 1 Hz, 5 h) compared to non-strain controls. Aged (24-month-old) C57BL/6 mice received bilateral intramuscular injections of saline, non-strain control mMSCs, or mMSCs subjected to a single bout of mechanical strain in vitro (4 × 10⁴). No significant changes were observed in muscle weight, myofiber size, maximal force, or satellite cell quantity at 1 or 4 weeks between groups. Peripheral perfusion was significantly increased in muscle at 4 weeks post-mMSC injection (p < 0.05), yet no difference was noted between control and preconditioned mMSCs. Intramuscular injection of preconditioned mMSCs increased the number of new neurons and astrocytes in the dentate gyrus of the hippocampus compared to both control groups (p < 0.05), with a trend toward an increase in water maze performance noted (p = 0.07). Results from this study demonstrate that acute injection of exogenously stimulated muscle-resident stromal cells does not robustly impact aged muscle structure and function, yet increases the survival of new neurons in the hippocampus.