Effective risk management should take into account not only quantifiable, known risks but also events that have either already occurred in a similar form or are at least fundamentally conceivable. Identifying these "gray swans" requires both institutional and organizational preconditions and analytical and conceptual instruments.
This contribution develops a management model that supports companies in identifying relevant fields of action for sustainably steering consumers along their own customer journey. Building on the SHIFT model as a structural representation of sustainable buyer behavior, the customer journey is laid out along owned, paid, and earned touchpoints. Using a fact-based analytical approach, which supports the integration of new insights into the research strategy, fields of action are identified that, as an underlying logic, are intended to guide companies to adopt this structural grid when designing their own sustainable customer journey.
Problem: The Covid-19 pandemic is straining not only the economic but also the ecological and social conditions of many companies. Sustainable action is therefore more important than ever. Companies choose different ways to integrate sustainability into the management system of the top leadership level. This creates the opportunity to see sustainability not merely as a set of individual measures but as an element of strategy and organizational development. For a holistic view, the Common Good Balance Sheet (Gemeinwohlbilanz, GWB) and the Sustainability Balanced Scorecard (N-BSC) come into consideration, as shown by the examples of Vaude and Sparda Bank München, which use the GWB (see https://web.ecogood.org/de/die-bewegung/pionier-unternehmen/), as well as Alpha and Axel Springer, which integrate sustainability into their BSC (Hansen/Schaltegger, 2016, p. 207).
Objective: Discussion of the GWB and the N-BSC as options for integrating ecological and social aspects into the management system.
Method: Presentation of the essential features of the GWB and the N-BSC.
Textile construction has been a growing segment of the textile industry for many years. The use of textile materials opens up new design possibilities, not least for architecture, that cannot be realized with conventional building materials. Well-known examples of textile structures are large sports arenas, railway stations, and airports. Lightweight construction and the at least partial transparency of the buildings are outstanding properties on the one hand; on the other hand, these buildings place special demands on climate and energy management. The interior can heat up strongly under solar irradiation, since, in addition to visible light, a large part of the infrared component of solar radiation can be transmitted. In conventional construction, high requirements already exist for the energy design of buildings, which are met, among other things, by efficient thermal insulation. This is usually achieved with voluminous, open-pored insulation materials. The primary aim is to reduce heat loss from the interior; at the same time, materials with poor thermal conductivity, high mass, and high specific heat capacity can buffer temperature peaks in summer. Energy efficiency is an important aspect of textile construction as well. The use of heavy insulation materials, however, contradicts the idea of flexible, lightweight textile construction.
Despite the low-interest-rate environment, working capital management remains an important driver of value metrics in companies and a key management instrument. Our results for 115 companies from the major German indices over the years 2011 to 2017 show that effective working capital management can have a positive influence on profitability and company value. At the same time, our results also show that working capital management has recently received less attention and that digital innovations are presumably not yet being used to increase efficiency to the extent that appears possible. Even against the backdrop of persistently low capital market interest rates, this is to be viewed critically.
With this strategy paper, the university, state, and higher education libraries of Baden-Württemberg formulate what they see as the central fields of development and challenges of the coming years. As institutions of science and culture, the libraries and the Bibliotheksservice-Zentrum Baden-Württemberg (BSZ) jointly provide the academic information infrastructure. They embrace the challenges of digitization and actively shape this transformation in dialogue with researchers, teachers, and students.
At literally the last minute, the British government and the European Union agreed on a comprehensive treaty to prevent a disorderly Brexit. After years of tough marathon negotiations, the celebration is muted, yet there is relief on both sides of the Channel that a modus vivendi has been found on which future relations can be built and continued. Whether the lofty hopes England attached to Brexit will be fulfilled remains to be seen.
The strategy and tactics of the British governments regarding Brexit and the withdrawal negotiations are mirrored in the experiences Friedrich List had exactly 175 years ago in his efforts to forge a German-English alliance. Because of the insular and trade supremacy that England already pursued strictly at the time, he had to concede that England would defend this position tenaciously, and, frustrated and disillusioned, he abandoned his plans. He therefore placed his hopes in a "continental alliance" of the European nations, such as has now emerged after the withdrawal of the United Kingdom from the European Union. Perhaps we will have to get used to the term "continental alliance" and be reminded of Friedrich List's foresight.
On the other hand, the motto of List's second Paris prize essay also applies to English politics: "Le monde marche - the world moves on", albeit under completely different circumstances than 175 years ago: the axis of world trade has shifted from the western to the eastern hemisphere; the British Empire is history; the pace of global change has accelerated dramatically; and despite the lingua franca, England, especially from an Asian perspective, appears as little more than a small dot on the world map. Should the Scottish government prevail with its intention and achieve independence from the United Kingdom, Brexit would prove to be a fateful boomerang.
Several studies analyzed existing Web APIs against the constraints of REST to estimate the degree of REST compliance among state-of-the-art APIs. These studies revealed that only a small number of Web APIs are truly RESTful. Moreover, identified mismatches between theoretical REST concepts and practical implementations lead us to believe that practitioners perceive many rules and best practices aligned with these REST concepts differently in terms of their importance and impact on software quality. We therefore conducted a Delphi study in which we confronted eight Web API experts from industry with a catalog of 82 REST API design rules. For each rule, we let them rate its importance and software quality impact. As consensus, our experts rated 28 rules with high, 17 with medium, and 37 with low importance. Moreover, they perceived usability, maintainability, and compatibility as the most impacted quality attributes. The detailed analysis revealed that the experts saw rules for reaching Richardson maturity level 2 as critical, while reaching level 3 was less important. As the acquired consensus data may serve as valuable input for designing a tool-supported approach for the automatic quality evaluation of RESTful APIs, we briefly discuss requirements for such an approach and comment on the applicability of the most important rules.
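As an illustration of what such tool-supported rule checking could look like, the following sketch encodes two hypothetical URI rules; both the rules and the names are illustrative and are not taken from the 82-rule catalog discussed in the study.

```python
import re

# Two illustrative checks loosely inspired by common REST URI design
# conventions (hypothetical examples, not rules from the catalog).

def uses_lowercase_path(path: str) -> bool:
    """Rule sketch: URI paths should not contain uppercase letters."""
    return path == path.lower()

def avoids_crud_verbs(path: str) -> bool:
    """Rule sketch: URI path segments should not be CRUD verb names."""
    verbs = {"get", "create", "update", "delete"}
    segments = path.strip("/").split("/")
    return not any(seg in verbs for seg in segments)

def check(path: str) -> list[str]:
    """Return the names of the violated rule sketches for one path."""
    violations = []
    if not uses_lowercase_path(path):
        violations.append("lowercase-path")
    if not avoids_crud_verbs(path):
        violations.append("no-crud-verbs")
    return violations
```

An automatic quality evaluation would run many such checks over a machine-readable API description and weight the findings by the consensus importance ratings.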
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to the development methods and practices used. Our findings suggest that only a small number of participants operate their projects in a purely traditional or agile manner (under 15%). That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
The aim of the project is to improve the protective effect of welders' protective clothing. The focus was on the questions: Can a finishing treatment increase the resistance of the textiles to drops of liquid metal and at the same time provide better UV protection? These protective factors of welders' protective clothing depend strongly on the areal weight of the textile used. The higher the areal weight, the more resistant the clothing is to metal spatter and the less UV radiation is transmitted through it. However, the higher the areal weight, the poorer the wearing comfort, since a high areal weight promotes sweating, among other things. Welders' protective clothing is divided into two classes. For class 1 clothing, a temperature rise of 40 K on the back of the textile may occur only after the 15th drop of liquid iron has hit it. For class 2, the temperature rise may occur only after 25 drops. As a starting point for this project, fabrics were selected that meet class 1. An attempt was made to finish these fabrics either with thermally conductive composites or with a nanostructuring ("lotus effect") so that the requirements for class 2 are met. The thermally conductive composite finish was intended to guarantee rapid dissipation and distribution of the heat of the metal drops over the surface, thereby ensuring that the heating of the back of the fabric is significantly slowed. Class 2 could not be achieved with this finish, but it did not degrade the wearing comfort of the lighter fabric, and the transmission of harmful UV radiation was reduced. Nanostructuring, in turn, was intended to achieve a "lotus effect" for small metal drops.
Owing to the nanostructuring, the metal drop first hits the surface of the nanoparticles, trapping insulating air between the metal drop and the fabric surface and thus protecting the fabric from the drop itself. This approach suggests that the effect can be tuned well via the applied amount of nanoparticles and binder. For binder concentrations between 1.25 and 2.5%, flexibility is impaired only slightly, and welder protection class 2 can be achieved with various particles (SiO2, ZnO, AlOx, and TiO2). The wearing comfort of the fabrics is not affected. The process offers SMEs in the textile finishing sector innovative new products for the occupational safety market. The use of lighter clothing in the area of personal protective equipment (PPE) increases its acceptance, since class 1 welders' protective clothing nanostructured with the process developed in the project offers markedly better wearing comfort than class 2 welders' protective clothing. This can open up new, including international, markets for SMEs specializing in the PPE sector.
Hyperspectral imaging and reflectance spectroscopy in the range from 200–380 nm were used to rapidly detect and characterize copper oxidation states and their layer thicknesses on direct bonded copper in a non-destructive way. Single-point UV reflectance spectroscopy, as a well-established method, was used to benchmark the quality of the hyperspectral imaging results. For the laterally resolved measurements of the copper surfaces, a UV hyperspectral imaging setup based on a pushbroom imager was used. Six different types of direct bonded copper were studied. Each type had a different oxide layer thickness and was analyzed by depth profiling using X-ray photoelectron spectroscopy. In total, 28 samples were measured to develop multivariate models to characterize and predict the oxide layer thicknesses. The principal component analysis (PCA) models enabled a general differentiation between the sample types on the first two PCs, with 100.0% and 96% explained variance for UV spectroscopy and hyperspectral imaging, respectively. Partial least squares regression (PLS-R) models showed reliable performance, with R2c = 0.94 and 0.94 and RMSEC = 1.64 nm and 1.76 nm, respectively. The developed in-line prototype system combined with multivariate data modeling shows high potential for further development of this technique towards real large-scale processes.
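The multivariate modeling step can be sketched as follows. The spectra and thickness values below are synthetic stand-ins for the measured data, and the latent-variable calibration is a simplified numpy-only substitute for the PCA/PLS-R models used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for UV reflectance spectra (samples x wavelengths);
# shapes and values are illustrative, not the measured DBC dataset.
X = rng.normal(size=(28, 90))                # 28 samples, 90 spectral channels
y = rng.uniform(5.0, 50.0, size=28)          # oxide layer thickness in nm

# PCA via SVD of the mean-centered spectra: scores on the first two PCs
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # (28, 2) score matrix
explained = (S[:2] ** 2).sum() / (S ** 2).sum()

# Linear calibration on the leading PC scores (simplified stand-in for PLS-R)
k = 5
T = Xc @ Vt[:k].T
A = np.column_stack([T, np.ones(len(y))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmsec = float(np.sqrt(np.mean((A @ coef - y) ** 2)))   # calibration error in nm
```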
The paper explains a workflow to simulate the food-energy-water (FEW) nexus for an urban district by combining various data sources such as 3D city models, in particular the City Geography Markup Language (CityGML) data model from the Open Geospatial Consortium, OpenStreetMap, and census data. A long-term vision is to extend the CityGML data model by developing a FEW Application Domain Extension (FEW ADE) to support future FEW simulation workflows such as the one explained in this paper. Together with the mentioned simulation workflow, this paper also identifies some necessary FEW-related parameters for the future development of a FEW ADE. Furthermore, relevant key performance indicators are investigated, and the datasets necessary to calculate these indicators are studied. Finally, different calculations are performed for the downtown borough of Ville-Marie in the city of Montréal (Canada) for the domains of food waste (FW) and wastewater (WW) generation. For this study, a workflow is developed to calculate the energy generation from anaerobic digestion of FW and WW. In the first step, data collection and preparation were carried out: relevant data for georeferencing, data for model set-up, and data for creating the required usage libraries, such as food waste and wastewater generation per person, were collected. The next step was the data integration and calculation of the relevant parameters; lastly, the results were visualized for analysis purposes. As a use case to support such calculations, the CityGML level of detail 2 model of Montréal is enriched with information such as building functions and building usages from OpenStreetMap. The calculation of the total residents based on the CityGML model as the main input for Ville-Marie results in a population of 72,606. The statistical value for 2016 was 89,170, which corresponds to a deviation of 15.3%.
The energy recovery potential of FW is about 24,024 GJ/year, and that of wastewater is about 1,629 GJ/year, adding up to 25,653 GJ/year. Relating these values to the calculated number of inhabitants of Ville-Marie yields about 330.9 MJ/year per person for FW and 22.4 MJ/year per person for wastewater, respectively.
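The reported figures can be checked with a few lines of arithmetic; note that the quoted per-capita values of 330.9 and 22.4 correspond to megajoules per person and year when divided by the calculated population of 72,606.

```python
# Consistency check of the figures reported in the abstract.
fw_gj = 24_024                 # GJ/year recoverable from food waste
ww_gj = 1_629                  # GJ/year recoverable from wastewater
residents = 72_606             # population calculated from the CityGML model

total_gj = fw_gj + ww_gj                        # reported total: 25,653 GJ/year
fw_per_capita_mj = fw_gj * 1_000 / residents    # ~330.9 MJ per person and year
ww_per_capita_mj = ww_gj * 1_000 / residents    # ~22.4 MJ per person and year
```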
Avatars are used when interacting in virtual environments in different contexts: in collaborative work, in gaming, and in virtual meetings with friends. Therefore, it is important to understand how the relationship between user and avatar works. In this study, an online survey is used to determine how the perception of an avatar changes across contexts by relating it to existing avatar relationship typologies. Additionally, it is determined whether a realistic, abstract, or comic-like representation is preferred by the participants in each context. One result was a preference for low-poly representations in the work context, which are associated with the perception of the avatar as a tool. In the context of meeting friends, a realistic representation is perceived as more appropriate and as an accurate self-representation. In the gaming context, the results are less clear, which can be attributed to different gaming preferences. Here, unlike in the other contexts, a comic-like representation is also perceived as appropriate, which is associated with the perception of the avatar as a friend. A symbiotic user-avatar relationship is not directly related to any form of representation but always lies in the midfield, which is attributed to the fact that it represents a whole spectrum between the other categories.
To correctly assess the cleanliness of technical surfaces in a production process, corresponding online monitoring systems must provide sufficient data. A promising method for fast, large-area, and non-contact monitoring is hyperspectral imaging (HSI), which was used in this paper for the detection and quantification of organic surface contamination. Depending on the cleaning parameter constellation, different levels of organic residues remained on the surface. Afterwards, the cleanliness was determined by the carbon content in atomic percent on the sample surfaces, characterized by XPS and AES. The HSI data and the XPS measurements were correlated using machine learning methods to generate a predictive model for the carbon content of the surface. The regression algorithms elastic net, random forest regression, and support vector machine regression were used. Overall, the developed method was able to quantify organic contamination on technical surfaces. The best regression model found was a random forest model, which achieved an R2 of 0.7 and an RMSE of 7.65 At.-% C. Due to the easy-to-use measurement and the fast evaluation by machine learning, the method seems suitable for an online monitoring system. However, the results also show that further experiments are necessary to improve the quality of the prediction models.
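The reported quality metrics (R2 and RMSE in At.-% C) can be computed for any prediction as follows; the reference and predicted values below are made up for illustration and are not project data.

```python
import numpy as np

# Illustrative evaluation of a surface-carbon prediction against XPS
# reference values; the numbers are invented, not measured data.
y_ref = np.array([12.0, 30.5, 45.2, 8.7, 22.1])    # XPS carbon content, At.-%
y_hat = np.array([14.1, 28.0, 41.9, 11.2, 24.0])   # model prediction, At.-%

resid = y_hat - y_ref
rmse = float(np.sqrt(np.mean(resid ** 2)))          # root mean squared error
ss_res = float(np.sum(resid ** 2))                  # residual sum of squares
ss_tot = float(np.sum((y_ref - y_ref.mean()) ** 2)) # total sum of squares
r2 = 1.0 - ss_res / ss_tot                          # coefficient of determination
```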
Forecasting demand is challenging. Different products exhibit different demand patterns: demand may be constant and regular for one product and sporadic for another, and even when demand does occur, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today, a large number of models are run on a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper we show the possibility of using a machine learning classification algorithm that predicts the best possible model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The machine learning classification algorithm achieves a mean ROC-AUC of 89%, which underlines the skill of the model.
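One common way to derive such time-series characteristics is the ADI/CV² scheme of Syntetos and Boylan, sketched below as a simplified stand-in for the paper's ML-based model selector; the thresholds 1.32 and 0.49 are the standard cut-offs from that scheme, not values from the paper.

```python
import numpy as np

def classify_demand(series: np.ndarray) -> str:
    """Classify a demand series by ADI and CV^2 (Syntetos-Boylan scheme).

    A widely used pattern-classification step, shown here as a simplified
    stand-in for the paper's machine-learning-based model selector.
    """
    nonzero = series[series > 0]
    periods_with_demand = np.count_nonzero(series)
    adi = len(series) / periods_with_demand        # average inter-demand interval
    cv2 = (nonzero.std() / nonzero.mean()) ** 2    # squared coefficient of variation
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"
```

A model selector can then map each class (or richer feature vectors) to the forecasting algorithm expected to perform best.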
Coopetitive endeavors offer valuable strategic options for firms. Yet, many of them are failure-prone, as partners must balance collective and private interests. While interpartner trust is considered central to alliance success, paradoxically, the role and dynamics of trust are still not well understood. We synthesize a computational model capturing the relational dynamics of an alliance, encompassing the coevolution of trust, partner contributions, and (relative) alliance interactions. Analyzing alliance dynamics using simulation, we find and explore a tipping boundary separating a regime of alliance failure from one of success. We identify implications for collaborative strategies (aspirations) and private strategies (openness). Our analyses reveal that strategies informed by a static mental model of partner trust, contributions, and openness tend to yield subpar alliance results and hidden failure risk. We discuss implications for management theory.
In the era of precision medicine, digital technologies and artificial intelligence, drug discovery and development face unprecedented opportunities for product and business model innovation, fundamentally changing the traditional approach of how drugs are discovered, developed and marketed. Critical to this transformation is the adoption of new technologies in the drug development process, catalyzing the transition from serendipity-driven to data-driven medicine. This paradigm shift comes with a need for both translation and precision, leading to a modern Translational Precision Medicine approach to drug discovery and development. Key components of Translational Precision Medicine are multi-omics profiling, digital biomarkers, model-based data integration, artificial intelligence, biomarker-guided trial designs and patient-centric companion diagnostics. In this review, we summarize and critically discuss the potential and challenges of Translational Precision Medicine from a cross-industry perspective.
Ambitious goals set by the European Union strategy towards the emission reduction of multimodal logistic chains and new requirements for intermodal terminals set by the evolution of customer needs, contribute to a shift in the driver for the infrastructure development: from economy of scale to economy of density. This paper aims to present an innovative method for designing a process oriented technology chain for intermodal terminals in order to fulfill these new demanding requirements. The results of the case study of the Zero Emission Logistic Terminal Reutlingen are presented, highlighting how this particular context enables the design and development of a modular concept, paving the way for the generalization of the findings towards the transfer to similar contexts of other European cities.
Towards Automated Surgical Documentation using automatically generated checklists from BPMN models
(2021)
The documentation of surgeries is usually created from memory only after the operation, which is an additional effort for the surgeon and prone to imprecise, shortened reports. Displaying process steps in the form of checklists and automatically creating the surgical documentation from the completed process steps could serve as a reminder, standardize the surgical procedure, and save time for the surgeon. Based on two works from Reutlingen University, which implemented the creation of dynamic checklists from Business Process Model and Notation (BPMN) models and the storage of the times at which a process step was completed, a prototype was developed for an Android tablet to extend the dynamic checklists with functions such as uploading photos and files, manual user entries, the interception of foreseeable deviations from the normal course of operations, and the automatic creation of OR documentation.
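A minimal sketch of the checklist-generation step, assuming BPMN 2.0 XML as input; the tiny process model and its task names are invented for illustration and are not from the prototype.

```python
import xml.etree.ElementTree as ET

# BPMN 2.0 model namespace as defined by the OMG specification.
BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

# A deliberately tiny, hypothetical BPMN process for illustration only.
bpmn_xml = """<?xml version="1.0"?>
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <bpmn:process id="op_documentation">
    <bpmn:userTask id="t1" name="Verify patient identity"/>
    <bpmn:userTask id="t2" name="Confirm instrument count"/>
  </bpmn:process>
</bpmn:definitions>"""

def checklist_from_bpmn(xml_text: str) -> list[dict]:
    """Turn every BPMN user task into an unchecked checklist item."""
    root = ET.fromstring(xml_text)
    return [
        {"id": task.get("id"), "label": task.get("name"), "done": False}
        for task in root.iter(f"{{{BPMN_NS}}}userTask")
    ]
```

Completed items, together with their completion timestamps, could then be serialized into the OR documentation.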
While private homes have become increasingly digitized, little has been done to understand these specific home technologies and how they serve consumers, among other issues. “Smart home technology” (SHT) refers to a wide range of artifacts, from cleaning aids to energy advisors. Given this breadth, clarity surrounding the key characteristics and the multi-faceted impact of SHT is needed to conduct more directed research on SHT. We propose a taxonomy to help outline the salient intended outcomes of SHT. Through a process involving five iterations, we analyzed and classified 79 technologies (gathered from literature and industry reports). This uncovered seven dimensions encompassing 20 salient characteristics. We believe these dimensions and characteristics will help researchers and organizations better design and study the impacts of these technologies. Our long-term agenda is to use the proposed taxonomy for an exploratory inquiry into the tensions that occur when personal and sustainability-related outcomes compete.
In this work, a comparison between different brushless harmonic-excited wound-rotor synchronous machines is performed. The general idea of all topologies is to eliminate the slip rings and auxiliary windings by using the already existing stator and rotor windings for field excitation. This is achieved by injecting a harmonic airgap field with the help of power electronics. This harmonic field does not interact with the fundamental field; it merely transfers the excitation power across the airgap. Alternative methods with varying numbers of phases, different pole-pair combinations, and winding layouts are covered and compared with a detailed finite-element-parameterized model. Parasitic effects due to saturation and coupling between the harmonic and main windings are considered.
Distributed ledger technologies such as the blockchain technology offer an innovative solution to increase visibility and security to reduce supply chain risks. This paper proposes a solution to increase the transparency and auditability of manufactured products in collaborative networks by adopting smart contract-based virtual identities. Compared with existing approaches, this extended smart contract-based solution offers manufacturing networks the possibility of involving privacy, content updating, and portability approaches to smart contracts. As a result, the solution is suitable for the dynamic administration of complex supply chains.
Successful transitions to a sustainable bioeconomy require novel technologies, processes, and practices as well as a general agreement about the overarching normative direction of innovation. Both requirements necessarily involve collective action by those individuals who purchase, use, and co-produce novelties: the consumers. Based on theoretical considerations borrowed from evolutionary innovation economics and consumer social responsibility, we explore to what extent consumers’ scope of action is addressed in the scientific bioeconomy literature. We do so by systematically reviewing bioeconomy-related publications according to (i) the extent to which consumers are regarded as passive vs. active, and (ii) different domains of consumer responsibility (depending on their power to influence economic processes). We find all aspects of active consumption considered to varying degrees but observe little interconnection between domains. In sum, our paper contributes to the bioeconomy literature by developing a novel coding scheme that allows us to pinpoint different aspects of consumer activity, which have been considered in a rather isolated and undifferentiated manner. Combined with our theoretical considerations, the results of our review reveal a central research gap which should be taken up in future empirical and conceptual bioeconomy research. The system-spanning nature of a sustainable bioeconomy demands an equally holistic exploration of the consumers’ prospective and shared responsibility for contributing to its coming of age, ranging from the procurement of information on bio-based products and services to their disposal.
Theory and practice of implementing a successful enterprise IoT strategy in the industry 4.0 era
(2021)
Since the arrival of the internet and affordable access to technologies, digital technologies have occupied a growing place in industry, propelling us towards a fourth industrial revolution: Industry 4.0. In today's era of digital upheaval, enterprises are increasingly undergoing transformations that lead to their digitalization. The traditional manufacturing industry is in the throes of a digital transformation accelerated by exponentially growing technologies (e.g., intelligent robots, the Internet of Things, sensors, 3D printing). Around the world, enterprises are in a frantic race to implement IoT-based solutions to improve their productivity and innovation, reduce costs, and strengthen their position in international markets. Considering the immense transformative potential that IoT and big data bring to the industrial sector, adopting IoT across industrial systems is essential to remain competitive and thus transform the factory into a smart factory. This paper describes the innovation and digitalization process, following the Industry 4.0 paradigm, for implementing a successful enterprise IoT strategy.
Hypericin has large potential in modern medicine and exhibits fascinating structural dynamics, such as multiple conformations and tautomerization. However, it is difficult to study individual conformers/tautomers, as they cannot be isolated due to the similarity of their chemical and physical properties. An approach to overcome this difficulty is to combine single-molecule experiments with theoretical studies. Time-dependent density functional theory (TD-DFT) calculations reveal that tautomerization of hypericin occurs via a two-step proton transfer with an energy barrier of 1.63 eV, whereas a direct single-step pathway has a large activation energy barrier of 2.42 eV. Tautomerization in hypericin is accompanied by a reorientation of the transition dipole moment, which can be directly observed as fluorescence intensity fluctuations. Quantitative tautomerization residence times can be obtained from the autocorrelation of the temporal emission behavior, revealing that hypericin stays in the same tautomeric state for several seconds, which can be influenced by the embedding matrix. Furthermore, replacing hydrogen with deuterium provides further evidence that the underlying process is based on proton tunneling. In addition, the tautomerization rate can be influenced by a λ/2 Fabry–Pérot microcavity, where the occupation of Raman-active vibrations can alter the tunneling rate.
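The autocorrelation analysis of the emission trace can be sketched as follows; the two-level intensity trace is synthetic and merely mimics switching between two tautomeric states with different transition dipole orientations.

```python
import numpy as np

def autocorrelation(trace: np.ndarray) -> np.ndarray:
    """Normalized autocorrelation of a fluorescence intensity trace.

    Simplified illustration of how residence times can be read from
    the decay of the correlation; not the study's exact analysis.
    """
    x = trace - trace.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    return acf / acf[0]                                  # normalize to 1 at lag 0

# Synthetic two-level trace: dwells of ~50 samples per "tautomer" state.
trace = np.repeat([1.0, 0.2, 1.0, 0.2, 1.0, 0.2], 50)
acf = autocorrelation(trace)
```

The lag at which the correlation decays then estimates the average residence time in one tautomeric state.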
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from the use of hyper-nodes and hyper-edges, which allow data structures to be presented at different abstraction levels. We prove that the model is at least equivalent in expressive power to the most popular data models. Therefore, it can be used as a supermodel for model management and data integration. We illustrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML models and RDF Schema.
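A minimal sketch of how a schema-bound typed graph with hyper-nodes might look in code; the class names and validation logic are illustrative, not the paper's formal definition.

```python
from dataclasses import dataclass, field

# Sketch: each node carries a type whose schema constrains the allowed
# property keys; a node with members acts as a hyper-node grouping
# other nodes at a higher abstraction level (illustrative only).

@dataclass
class NodeType:
    name: str
    allowed_properties: set[str]

@dataclass
class Node:
    node_type: NodeType
    properties: dict = field(default_factory=dict)
    members: list = field(default_factory=list)   # non-empty => hyper-node

    def set_property(self, key: str, value) -> None:
        """Reject property keys the node's type schema does not allow."""
        if key not in self.node_type.allowed_properties:
            raise ValueError(f"{key!r} not allowed for type {self.node_type.name}")
        self.properties[key] = value

person = NodeType("Person", {"name", "age"})
team = NodeType("Team", {"name"})

alice = Node(person)
alice.set_property("name", "Alice")
group = Node(team, members=[alice])               # hyper-node containing alice
```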
The recently described rhizolutin and collinolactone, isolated from Streptomyces sp. Gö 40/10, share the same novel carbon scaffold. Analyses by NMR and X-ray crystallography verify the structure of collinolactone and propose a revision of rhizolutin's stereochemistry. Isotope-labeled precursor feeding shows that collinolactone is biosynthesized via a type I polyketide synthase with a Baeyer–Villiger oxidation. CRISPR-based genetic strategies led to the identification of the biosynthetic gene cluster and a high-production strain. Chemical semisyntheses yielded collinolactone analogues with inhibitory effects on the L929 cell line. Fluorescence microscopy revealed that only particular analogues induce monopolar spindles, impairing cell division in mitosis. Inspired by the Alzheimer-protective activity of rhizolutin, we investigated the neuroprotective effects of collinolactone and its analogues on glutamate-sensitive cells (HT22), and indeed, natural collinolactone displays distinct neuroprotection against intracellular oxidative stress.
Purpose
Injury or inflammation of the middle ear often results in persistent tympanic membrane (TM) perforations, leading to conductive hearing loss (HL). However, in some cases the magnitude of HL exceeds that attributable to the TM perforation alone. The aim of this study is to better understand the effects of the location and size of TM perforations on the sound transmission properties of the middle ear.
Methods
The middle ear transfer functions (METF) of six human temporal bones (TB) were compared before and after perforating the TM at different locations (anterior or posterior lower quadrant) and to different degrees (1 mm, ¼ of the TM, ½ of the TM, and full ablation). The sound-induced velocity of the stapes footplate was measured using single-point laser Doppler vibrometry (LDV). The METF were correlated with a finite element (FE) model of the middle ear, in which similar alterations were simulated.
Results
The measured and calculated METF showed frequency- and perforation-size-dependent losses at all perforation locations. Starting at low frequencies, the loss expanded to higher frequencies with increasing perforation size. In direct comparison, posterior TM perforations affected the transmission properties to a larger degree than anterior perforations. The asymmetry of the TM causes the malleus-incus complex to rotate, resulting in larger deflections in the posterior TM quadrants than in the anterior TM quadrants. Simulations in the FE model with a sealed cavity show that small perforations lead to a decrease in TM rigidity and thus to an increase in the oscillation amplitude of the TM, mainly above 1 kHz.
Conclusion
Size and location of TM perforations have a characteristic influence on the METF. The correlation of the experimental LDV measurements with an FE model contributes to a better understanding of the pathologic mechanisms of middle-ear diseases. If small perforations with significant HL are observed in daily clinical practice, additional middle ear pathologies should be considered. Further investigations on the loss of TM pretension due to perforations may be informative.
The physicochemical properties of synthetically produced bone substitute materials (BSM) have a major impact on biocompatibility. This affects bony tissue integration, osteoconduction, as well as the degradation pattern and the correlated inflammatory tissue responses, including macrophages and multinucleated giant cells (MNGCs). Thus, influencing factors such as size, special surface morphologies, porosity, and interconnectivity have been the subject of extensive research. In the present publication, the influence of the granule size of three identically manufactured bone substitute granules based on the technology of hydroxyapatite (HA)-forming calcium phosphate cements was investigated, including the inflammatory response in the surrounding tissue and especially the induction of MNGCs (as a parameter of material degradation). For the in vivo study, granules of three different size ranges (small = 0.355–0.5 mm; medium = 0.5–1 mm; big = 1–2 mm) were implanted in the subcutaneous connective tissue of 45 male BALB/c mice. At 10, 30, and 60 days post implantationem, the materials were explanted and histologically processed. The defect areas were initially examined histopathologically. Furthermore, pro- and anti-inflammatory macrophages were quantified histomorphometrically after their immunohistochemical detection. The number of MNGCs was quantified as well using a histomorphometrical approach. The results showed a granule size-dependent integration behavior. The surrounding granulation tissue had become passivated in the groups of the two bigger granules at 60 days post implantationem, including a fibrotic encapsulation, while granulation tissue was still present in the group of the small granules, indicating an ongoing cell-based degradation process. The histomorphometrical analysis showed that the number of proinflammatory macrophages was significantly increased in the small granules at 60 days post implantationem.
Similarly, a significant increase of MNGCs was detected in this group at 30 and 60 days post implantationem. Based on these data, it can be concluded that the integration and/or degradation behavior of synthetic bone substitutes can be influenced by granule size.
Electronic design automation approaches can roughly be divided into optimizers and procedures. While the former have enabled highly automated synthesis flows for digital integrated circuits, the latter play a vital (but mostly underestimated) role in the analog domain. This paper describes both automation strategies in comparison, identifying two fundamentally different automation paradigms that reflect the two basic design practices known as “top-down” and “bottom-up”. Then, with a focus on the latter, the history of procedural approaches is traced from their early beginnings to today’s developments and future prospects, to underline their practical importance and to accentuate their scientific value, both in itself and in the overall context of EDA.
Collagen-based barrier membranes are an essential component in Guided Bone Regeneration (GBR) procedures. They act as cell-occlusive devices that should maintain a micromilieu where bone tissue can grow, which in turn provides a stable bed for prosthetic implantation. However, the standing time of collagen membranes remains a challenge, as native membranes are often prematurely resorbed. Therefore, consolidation techniques such as chemical cross-linking have been used to enhance the structural integrity of the membranes and, by consequence, their standing time. These techniques, however, have cytotoxic tendencies and can cause exaggerated inflammation and, in turn, premature resorption and material failures. Moreover, tissues from different extraction sites and animal species are natively cross-linked to different degrees. For the present in vivo study, a new collagen membrane based on bovine dermis was extracted and compared to a commercially available porcine-sourced collagen membrane extracted from the pericardium. The membranes were implanted in Wistar rats for up to 60 days. The analyses included well-established histopathological and histomorphometrical methods, including histochemical and immunohistochemical staining procedures, to detect M1- and M2-macrophages as well as blood vessels. Initially, the results showed that both membranes remained intact up to day 30, while the bovine membrane was fragmented at day 60, with granulation tissue infiltrating the implantation beds. In contrast, the porcine membrane remained stable without signs of material-dependent inflammatory processes. The bovine membrane thereby showed a special integration pattern: the fragments were found to be overlapping, providing a secondary porosity in combination with a transmembraneous vascularization. Altogether, the bovine membrane showed results comparable to the porcine control group in terms of biocompatibility and standing time.
Moreover, blood vessels were found within the bovine membranes, which can potentially serve as an additional functionality of barrier membranes that conventional barrier membranes do not provide.
This paper covers the test and verification of a forecast-based Monte Carlo algorithm for an optimized, demand-oriented operation of combined heat and power (CHP) units using the hardware-in-the-loop approach. For this purpose, the optimization algorithm was implemented at a test bench at Reutlingen University for controlling a CHP unit in combination with a thermal energy storage, both in real hardware. In detail, the hardware-in-the-loop tests are intended to reveal the effects of demand forecasting accuracy, the impact of thermal energy storage capacity and the influence of load profiles on the demand-oriented operation of CHP units. In addition, the paper focuses on the evaluation of the energy content of the thermal energy storage under practical conditions. It is shown that a five-layer model allows the stored energy to be determined quite accurately, which is verified by experimental results. The hardware-in-the-loop tests disclose that demand forecasting accuracy, especially electricity demand forecasting, as well as load profiles strongly impact the potential for on-site CHP electricity utilization in demand-oriented mode. Moreover, it is shown that a larger effective capacity of the thermal energy storage positively affects demand-oriented operation. In the hardware-in-the-loop tests, the fraction of electricity generated by the CHP unit and utilized on-site could thus be increased by a maximum of 27% compared to heat-led operation, which is still the most common modus operandi of small-scale CHP plants. Hence, the hardware-in-the-loop tests were adequate to prove the significant impact of the proposed algorithm for the optimization of demand-oriented operation of CHP units.
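The layer-model idea can be illustrated with a small back-of-the-envelope computation: the stored thermal energy is approximated as the sum over layers of mass × heat capacity × temperature difference to a reference. All tank values below are illustrative assumptions, not measurements from the test bench.

```python
CP_WATER = 4186.0  # specific heat capacity of water, J/(kg K)

def stored_energy(layer_masses_kg, layer_temps_c, t_ref_c):
    """Thermal energy stored relative to a reference temperature, in joules,
    approximated by summing over the storage layers."""
    return sum(m * CP_WATER * (t - t_ref_c)
               for m, t in zip(layer_masses_kg, layer_temps_c))

# Hypothetical 1000 kg tank split into five equal layers,
# stratified from 60 °C at the top down to 40 °C at the bottom.
masses = [200.0] * 5
temps = [60.0, 55.0, 50.0, 45.0, 40.0]
energy_kwh = stored_energy(masses, temps, t_ref_c=40.0) / 3.6e6  # J -> kWh
```

The finer the layering, the better this sum approximates the true integral over the stratified temperature profile, which is why a five-layer model can already be quite accurate.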
Context: Agile practices as well as UX methods are nowadays well known and often adopted to develop complex software and products more efficiently and effectively. However, in the so-called VUCA environment that many companies are confronted with, the sole use of UX research is not sufficient to find the best solutions for customers. The implementation of Design Thinking can support this process. However, many companies and their product owners do not know how many resources they should devote to conducting Design Thinking.
Objective: This paper proposes a supportive tool, the “Discovery Effort Worthiness (DEW) Index”, for product owners and agile teams to determine a suitable amount of effort to spend on Design Thinking activities.
Method: A case study was conducted for the development of the DEW index. Design Thinking was introduced into the regular development cycle of an industry Scrum team. With the support of UX and Design Thinking experts, a formula was developed to determine the appropriate effort for Design Thinking.
Results: The developed “Discovery Effort Worthiness Index” provides an easy-to-use tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. A company can map the corresponding Design Thinking methods to the results of the DEW Index calculation, and product owners can select the appropriate measures from this mapping. Therefore, they can optimize the effort spent on discovery and validation.
We present the modification of ethylene-propylene rubber (EPM) with vinyltetramethyldisiloxane (VTMDS) via reactive extrusion to create a new silicone-based material with the potential for high-performance applications in the automotive, industrial and biomedical sectors. The radical-initiated modification is achieved with a peroxide catalyst starting the grafting reaction. The preparation process of the VTMDS-grafted EPM was systematically investigated using process analytical technology (in-line Raman spectroscopy) and statistical design of experiments (DoE). By applying an orthogonal factorial array based on a face-centered central composite experimental design, the identification, quantification and mathematical modeling of the effects of the process factors on the grafting result were undertaken. Based on response surface models, process windows were defined that yield high grafting degrees and good grafting efficiency in terms of grafting agent utilization. To control the grafting process in terms of grafting degree and grafting efficiency, the chemical changes taking place during the modification procedure in the extruder were observed in real time using a spectroscopic in-line Raman probe inserted directly into the extruder. Successful grafting of the EPM was validated in the final product by ¹H-NMR and FTIR spectroscopy.
This article studies the renewed interest surrounding sustainable public finance and the topic of tax evasion, as well as the recent theory of inattention. Extending a model of tax evasion with the notion of inattention reveals novel findings about policy instruments that can be used to mitigate tax evasion. We show that the attention parameters regarding tax rates, financial penalty schemes and income levels are as important as the level of the detection probability and the financial penalty incurred. Thus, our theory recommends the enhancement of sustainability in public policy, particularly in tax policy. Consequently, the paper contributes to both the academic and the public policy debate.
The technologies of digital transformation, such as the Internet of Things (IoT), artificial intelligence or predictive maintenance, enable significant efficiency gains in industry and are becoming increasingly important as a competitive factor. However, their successful implementation and creative future application require the broad acceptance and knowledge of non-IT-related groups, such as production management students, engineers or skilled workers, which is still lacking today. This paper presents a low-threshold training concept bringing IoT technologies and applications into manufacturing-related higher education and employee training. The concept addresses the relevant topics from IoT basics to predictive maintenance using mobile low-cost hardware and infrastructure.
This article studies the effects of reverse factoring in a supply chain when the buying company passes on its lower short-term borrowing rates to the supplier in return for extended payment terms. We explore the role of interest rate changes, rating changes, and the position in the business cycle on the cost–benefit trade-off from a supplier perspective. We utilize a combined empirical approach consisting of an event study in Step 1 and a simulation model in Step 2. The event study identifies the quantitative magnitude of central bank decisions and rating changes on the interest rate differential. The simulation computes, with a rolling-window methodology, the daily costs and benefits of reverse factoring from 2010 to 2018 under the assumption of the efficient market hypothesis. Our major finding is that changes in crucial financial variables such as interest rates, ratings, or news alerts can turn former win-win into win-lose situations for the supplier, contingent on the business cycle. Overall, our results exhibit sophisticated trade-offs under reverse factoring and consequently require a careful evaluation in managerial decisions.
In the current age of innovative business financing opportunities available from fintech apps, social media crowdfunding sites such as Kickstarter, Indiegogo, and RocketHub, and friends-and-family private equity investors, start-up firms can strategically source their venture capital funds from many globally dispersed organizations and individuals. As the firm in this case learned, the benefit of alternative investing sources comes with a critical hidden risk for corporate governance. After a financial restructuring, a typical Silicon Valley software start-up found itself with close to 300 external individual shareholders, some of whom had not been documented as accredited investors. The regulatory agency could decide that the prior actions of the founders and the decisions of the board had been prejudicial to the interests of the minority investors. The management of this small private company faced an atypical investor relations dilemma before its initial public offering (IPO).
This paper explores why and how dominant international social standards used in the fashion industry are prone to implementation failures. A qualitative multiple-case study was conducted, using purposive sampling to select 13 apparel supply chain actors. Data were collected through on-site semi-structured face-to-face interviews. The findings of the study are interpreted using core tenets of agency theory. The case study findings clearly highlight why and how multi-tier apparel supply chains fail to implement social standards effectively. As a consequence of substantial goal conflicts and information asymmetries, sourcing agents and suppliers are driven to perform opportunistic behaviors in the form of hidden characteristics, hidden intentions, and hidden actions, which significantly harm social standards. Fashion retailers need to empower their corporate social responsibility (CSR) departments by awarding them an integrative role in sourcing decisions. Moreover, accurate calculation of orders, risk sharing, cost sharing, price premiums, and especially guaranteed order continuity for social compliance are critical to reduce opportunistic behaviors upstream of the supply chain. Further development of social standards is strongly suggested, e.g., by including novel metrics such as the assessment of buying practices or the evaluation of capacity planning at factories, and the strict inclusion of subcontractors’ social performances. This paper presents evidence from multiple Vietnamese and Indonesian cases involving sourcing agents as well as Tier 1 and Tier 2 suppliers on a highly sensitive topic. With the development of the conceptual framework and the formulation of seven related novel propositions, this paper unveils the ineffectiveness of social standards, offers guidance for practitioners, and contributes to the neglected social dimension in sustainable supply chain management research and the accountability literature.
Imagine a world in which the search for tomorrow's trends is not subject to a long and laborious data search but is possible with a single mouse click. The use of artificial intelligence (AI) makes this reality possible, and it is to be further advanced through research. The study therefore aims to provide an initial overview of this young research field. Based on desk research, expert interviews, and company and student surveys, current applications of AI in the innovation process (defined as Smart Innovation) and existing challenges that slow down further development are discussed in more detail, and future applications are presented. Finally, a recommendation for action is made for business, politics and science to help overcome the current obstacles together and thus drive the future of Smart Innovation.
The present study shows that the topic of Smart Innovation (the use of AI systems in the innovation process) is highly relevant and that there is broad approval for using AI in the innovation process. Both companies and students cite efficiency gains, faster processing of large data volumes, increased competitiveness, and cost savings as reasons for using AI in the innovation process. In Germany, AI technologies are already being applied selectively in the innovation process, independent of industry. Enabling factors such as university cooperations, innovation departments, and open innovation can promote adoption; SMEs in the early phases of industrialization in particular should make use of them. The secret to a maximally efficient innovation process lies in the interplay of human expertise with the fast and precise data processing of AI. It becomes clear that several enabling factors are required to make the application of Smart Innovation practicable. First, the technical prerequisite of a functioning IT infrastructure must be met. Equally important are open questions regarding data availability, data ownership, and data security: without a legal framework, few actors are willing to share their data and make it accessible. The use of AI is further hampered by the national shortage of IT specialists. Both companies and students see the greatest obstacle in the lack of AI-relevant know-how, which hinders research on the one hand, while on the other hand companies lack the specialists required to introduce AI. It is nevertheless necessary to convey the potential and opportunities of Smart Innovation to companies by presenting application examples.
Application-oriented research must be promoted and a smooth transfer into industry ensured. This knowledge exchange also requires a greater entrepreneurial willingness to take risks. The need to design company-specific AI strategies is growing. The technologies are developing rapidly, so companies too must adapt to this progress in order not to fall behind and to secure their competitiveness. The greatest challenge thus lies in the fundamental transformation of business models, as the value creation of successful companies is increasingly based on digital assets. Data are generally regarded as the new resource, the raw material, for smart innovations as well. The importance of Smart Innovation will continue to grow in the future. In the short and medium term, weak AI will primarily support data collection and analysis, process automation, and the identification of needs and trends. Furthermore, incremental improvements in innovation management are expected from simulations and the random combination of technologies. In the long term, stronger AI will be able to partially replace humans in the innovation process. Whether autonomous innovation will be possible in the future depends first on the degree of novelty of an innovation, but above all on the possibility of a creative AI. It can be assumed that advances in AI will not only enable radical innovations but also lead to a structural change in our current understanding of innovation management.
Imagine a world in which the search for tomorrow's trends for (software) products is not subject to a long and laborious data search but is possible with a single mouse click. The use of artificial intelligence (AI) makes this reality possible, and it is to be further advanced through research. The study therefore aims to provide an initial overview of this young research field. Based on desk research, expert interviews, and company and student surveys, current applications of AI in the innovation process (defined as Smart Innovation) and existing challenges that slow down further development are discussed in more detail, and future applications are presented. Finally, a recommendation for action is made for business, politics and science to help overcome the current obstacles together and thus drive the future of Smart Innovation.
Purpose
Computerized medical imaging processing assists neurosurgeons to localize tumours precisely. It plays a key role in recent image-guided neurosurgery. Hence, we developed a new open-source toolkit, namely Slicer-DeepSeg, for efficient and automatic brain tumour segmentation based on deep learning methodologies for aiding clinical brain research.
Methods
Our developed toolkit consists of three main components. First, Slicer-DeepSeg extends the 3D Slicer application and thus provides support for multiple input/output data formats and 3D visualization libraries. Second, Slicer core modules offer powerful image processing and analysis utilities. Third, the Slicer-DeepSeg extension provides a customized GUI for brain tumour segmentation using deep learning-based methods.
Results
The developed Slicer-DeepSeg was validated using a public dataset of high-grade glioma patients. The results showed that our proposed platform considerably outperforms other 3D Slicer cloud-based approaches.
Conclusions
The developed Slicer-DeepSeg allows the development of novel AI-assisted medical applications in neurosurgery. Moreover, it can enhance the outcomes of computer-aided diagnosis of brain tumours. The open-source Slicer-DeepSeg is available at github.com/razeineldin/Slicer-DeepSeg.
Metalworking fluids (MWFs) are widely used to cool and lubricate metal workpieces during processing to reduce heat and friction. Extending a MWF's service life is important from both economical and ecological points of view. Knowledge about the effects of processing conditions on the aging behavior and reliable analytical procedures are required to properly characterize the aging phenomena. So far, only univariate estimations of aging effects on MWFs, based on single-parameter measurements, have been described in the literature. In the present study, we present a simple spectroscopy-based set-up for the simultaneous monitoring of three quality parameters of MWFs and a mathematical model relating them to the most influential process factors relevant during use. For this purpose, the effects of MWF concentration, pH and nitrite concentration on the droplet size during aging were investigated by means of a response surface modelling approach. Systematically varied model MWFs were characterized using simultaneous measurements of absorption coefficients µa and effective scattering coefficients µs′. Droplet size was determined via dynamic light scattering (DLS) measurements. Droplet size showed a non-linear dependence on MWF concentration and pH, but the nitrite concentration had no significant effect. pH and MWF concentration showed a strong synergistic effect, which indicates that MWF aging is a rather complex process. The observed effects were similar for the DLS and the µs′ values, which shows the comparability of the methodologies. The correlations of the methods were R²c = 0.928 and R²p = 0.927, as calculated by a partial least squares regression (PLS-R) model. Furthermore, using µa, it was possible to generate a predictive PLS-R model for MWF concentration (R²c = 0.890, R²p = 0.924). Simultaneous determination of the pH based on µs′ is possible with good accuracy (R²c = 0.803, R²p = 0.732).
With prior knowledge of the MWF concentration from the µa PLS-R model, the predictive capability of the µs′ PLS-R model for pH was refined (10 wt%: R²c = 0.998, R²p = 0.997). This highlights the relevance of the combined measurement of µa and µs′. Recognizing the synergistic nature of the effects of MWF concentration and pH on the droplet size is an important prerequisite for extending the service life of an MWF in the metalworking industry. The presented method can be applied as an in-process analytical tool that allows one to compensate for aging effects during use of the MWF by taking appropriate corrective measures, such as pH correction or adjustment of the concentration.
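For readers unfamiliar with the R²c/R²p figures quoted above, the following sketch shows how such coefficients of determination between reference and predicted values are computed, separately for a calibration and a prediction set. The data are synthetic and purely illustrative, not taken from the study.

```python
import numpy as np

def r_squared(y_ref, y_pred):
    """Coefficient of determination between reference values and
    model predictions: 1 - SS_res / SS_tot."""
    y_ref, y_pred = np.asarray(y_ref), np.asarray(y_pred)
    ss_res = np.sum((y_ref - y_pred) ** 2)
    ss_tot = np.sum((y_ref - y_ref.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Calibration set: reference values vs. model fit (synthetic numbers)
y_cal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_cal_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
r2c = r_squared(y_cal, y_cal_hat)

# Independent prediction (test) set vs. model predictions
y_val = np.array([1.5, 2.5, 3.5])
y_val_hat = np.array([1.4, 2.7, 3.4])
r2p = r_squared(y_val, y_val_hat)
```

In the study these metrics are reported for PLS-R models; the regression itself is omitted here, since the metric is the same regardless of how the predictions were produced.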
Due to the ban on ozone-depleting chlorofluorocarbons as refrigerants, and because the fluorocarbons predominantly used today contribute to the greenhouse effect, the more environmentally friendly CO2 (carbon dioxide) is gaining importance as a refrigerant. The starting points of this work are a prototype of a reversible CO2 heat pump and a simulation model of it. The aim of this work is to verify the simulation model against real measurement results from the prototype. The simulation is optimized by calculating comparison parameters, defining boundary conditions and selecting suitable measuring points on the prototype. Finally, the results are evaluated with regard to the functionality of the heat pump and its representation in the simulation.
Silicon photonic micro-ring resonators (MRR) developed on the silicon-on-insulator (SOI) platform, owing to their high sensitivity and small footprint, show great potential for many chemical and biological sensing applications such as label-free detection in environmental monitoring, biomedical engineering, and food analysis. In this tutorial, we provide the theoretical background and give design guidelines for SOI-based MRR as well as examples of surface functionalization procedures for label-free detection of molecules. After introducing the advantages and perspectives of MRR, fundamentals of MRR are described in detail, followed by an introduction to the fabrication methods, which are based on a complementary metal-oxide semiconductor (CMOS) technology. Optimization of MRR for chemical and biological sensing is provided, with special emphasis on the optimization of waveguide geometry. At this point, the difference between chemical bulk sensing and label-free surface sensing is explained, and definitions like waveguide sensitivity, ring sensitivity, overall sensitivity as well as the limit of detection (LoD) of MRR are introduced. Further, we show and explain chemical bulk sensing of sodium chloride (NaCl) in water and provide a recipe for label-free surface sensing.
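The sensitivity and LoD definitions referenced in the tutorial can be sketched numerically: the overall sensitivity chains the waveguide sensitivity (effective-index change per refractive-index unit, RIU) with the ring sensitivity (resonance wavelength shift per effective-index change), and the LoD divides the smallest resolvable wavelength shift by the overall sensitivity. All numbers below are illustrative assumptions, not values from the text.

```python
def overall_sensitivity(s_waveguide, s_ring):
    """Overall MRR sensitivity in nm per RIU, as the product of the
    (dimensionless) waveguide sensitivity and the ring sensitivity."""
    return s_waveguide * s_ring

def limit_of_detection(resolution_nm, s_overall_nm_per_riu):
    """LoD in RIU: smallest resolvable wavelength shift divided by
    the overall sensitivity."""
    return resolution_nm / s_overall_nm_per_riu

s_wg = 0.2        # assumed d(n_eff)/d(n_bulk), dimensionless
s_ring = 500.0    # assumed resonance shift per unit n_eff change, nm
s_total = overall_sensitivity(s_wg, s_ring)  # nm/RIU
lod = limit_of_detection(1e-3, s_total)      # assuming 1 pm resolution
```

The same chain-rule structure explains why optimizing the waveguide geometry (which sets `s_wg`) is emphasized: it multiplies directly into the overall sensitivity and hence divides the LoD.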
The cloud evolved into an attractive execution environment for parallel applications, which make use of compute resources to speed up the computation of large problems in science and industry. Whereas Infrastructure as a Service (IaaS) offerings have been commonly employed, more recently, serverless computing emerged as a novel cloud computing paradigm with the goal of freeing developers from resource management issues. However, as of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other and benefit from on-demand and elastic compute resources as well as per-function billing. In this work, we discuss how to employ serverless computing platforms to operate parallel applications. We specifically focus on the class of parallel task farming applications and introduce a novel approach to free developers from both parallelism and resource management issues. Our approach includes a proactive elasticity controller that adapts the physical parallelism per application run according to user-defined goals. Specifically, we show how to consider a user-defined execution time limit after which the result of the computation needs to be present while minimizing the associated monetary costs. To evaluate our concepts, we present a prototypical elastic parallel system architecture for self-tuning serverless task farming and implement two applications based on our framework. Moreover, we report on performance measurements for both applications as well as the prediction accuracy of the proposed proactive elasticity control mechanism and discuss our key findings.
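The proactive elasticity idea for task farming can be caricatured in a few lines: choose the smallest degree of parallelism that still meets the user-defined time limit, since per-function billing makes the aggregate compute cost roughly independent of how the tasks are spread over workers. This is our simplified sketch under those assumptions, not the paper's controller, which additionally predicts task runtimes at run time.

```python
import math

def choose_parallelism(num_tasks, est_task_runtime_s, time_limit_s):
    """Smallest number of serverless workers such that num_tasks
    independent tasks of the estimated runtime finish within the
    user-defined time limit."""
    if est_task_runtime_s > time_limit_s:
        raise ValueError("a single task already exceeds the time limit")
    # How many tasks one worker can run sequentially within the limit
    waves = time_limit_s // est_task_runtime_s
    return math.ceil(num_tasks / waves)

# 1000 independent tasks of ~2 s each, result needed within 60 s
workers = choose_parallelism(1000, 2.0, 60.0)
```

Because each worker can run 30 two-second tasks within the limit, 34 workers suffice here; any higher parallelism would finish earlier but, under per-function billing, not cheaper.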
Science-based analysis for climate action: how HSBC Bank uses the En-ROADS climate policy simulation
(2021)
In 2018, the Intergovernmental Panel on Climate Change (IPCC, 2018) found that rapid decarbonization and net negative greenhouse gas (GHG) emissions by mid-century are required to "hold the increase in global average temperature to well below 2°C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5°C," as stipulated by the Paris Agreement (UNFCCC, 2015, p. 2). Meeting these goals reduces physical climate-related risks from, for example, sea-level rise, ocean acidification, extreme weather, water shortages, declining crop yields, and other impacts. These impacts threaten our economy, security, health, and lives.
At the same time, policies to mitigate these harms by rapidly reducing GHG emissions can create transition risks for businesses - for example, stranded assets and loss of market value for fossil fuel producers and firms dependent on fossil energy (Carney, 2019). Rapid decarbonization requires an unprecedented energy transition (IEA, 2021a) driven by and affecting economic players including businesses, asset managers, and investors in all sectors and all countries (Kriegler et al., 2014).
However, GHG emissions are not falling rapidly enough to meet the goals of the Paris Agreement (Holz et al., 2018). The UNFCCC (2021) found that the emissions reductions pledged by all nations as of early 2021 "fall far short of what is required, demonstrating the need for Parties to further strengthen their mitigation commitments under the Paris Agreement" (p. 5). Businesses are faring no better. Despite high-profile calls to action from influential firms such as BlackRock (Fink, 2018, 2021), corporate action to meet climate goals has thus far fallen short (e.g. the analysis of the German DAX 30 companies' emissions targets by the NGO "right." (Right, 2019)). Instead of implementing climate strategies that might mitigate the risks, managers are often caught up in "firefighting" and capability traps that erode the resources needed for ambitious climate action (Sterman, 2015). Firms may also exaggerate environmental accomplishments, leading to greenwashing (Lyon and Maxwell, 2011); implement policies that are vague, rely on unproven offsets, or are not climate neutral (e.g. Sterman et al., 2018); or simply take no action at all (Delmas and Burbano, 2011; Sterman, 2015).
Adding to the confusion are difficulties evaluating the effectiveness of different climate policies. Misperceptions include wait-and-see approaches (Dutt and Gonzalez, 2012; Sterman, 2008), underestimating time delays and ignoring the unintended consequences of policies (Sterman, 2008), and beliefs in "silver bullet" solutions (Gilbert, 2009; Kriegler et al., 2013; Shackley and Dütschke, 2012). These beliefs arise in part because the climate–energy system is a high-dimensional dynamic system characterized by long time delays, multiple feedback loops, and nonlinearities (Sterman, 2011), while even simple systems are difficult for people to understand (Booth Sweeney and Sterman, 2000; Cronin et al., 2009; Kapmeier et al., 2017). Although senior executives might receive briefings on climate change, simply providing more information does not necessarily lead to more effective action (Pearce et al., 2015; Sterman, 2011).
Alternatively, interactive approaches to learning about climate change and policies to mitigate it can trigger climate action (Creutzig and Kapmeier, 2020). Decision-makers require tools and methods grounded in science that enable them to learn for themselves how a low-carbon economy can be achieved and how climate policies condition physical and transition risks. The system dynamics climate–energy simulation En-ROADS (Energy-Rapid Overview and Decision Support; Jones et al., 2019b), codeveloped by the climate think-tank Climate Interactive and the MIT Sloan Sustainability Initiative, provides such a tool.
Here we show how En-ROADS helps HSBC Bank U.S.A., the American subsidiary of U.K.-based multinational financial services company HSBC Holdings plc, focus its global sustainability strategy on activities with higher impact and relevance, communicate and implement the strategy, understand transition risks, and better align the strategy with global climate goals. We show how the versatility and interactivity of En-ROADS increases its reach throughout the organization. Finally, we discuss challenges and lessons learned that may be helpful to other organizations.
Forecasting demand is challenging, as different products exhibit different demand patterns. While demand may be constant and regular for one product, it may be sporadic for another; and when demand does occur, its magnitude may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today, a large number of models are evaluated on a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally intensive, time-consuming, and in most cases uneconomical. In our paper we show that a machine learning classification algorithm can predict the best-performing forecasting model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The machine learning classification algorithm achieves a mean ROC-AUC of 89%, which emphasizes the skill of the model.
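The meta-learning idea in this abstract (a classifier that predicts the best forecasting model from time-series characteristics) can be sketched as follows. The features (average inter-demand interval and squared coefficient of variation), the two model classes, and all data are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(series):
    """Simple demand-pattern features: average inter-demand interval (ADI)
    and squared coefficient of variation (CV^2) of non-zero demand."""
    nz = series[series > 0]
    adi = len(series) / max(len(nz), 1)
    cv2 = (nz.std() / nz.mean()) ** 2 if len(nz) > 1 and nz.mean() > 0 else 0.0
    return [adi, cv2]

# Synthetic training set: smooth series (label 0, e.g. best served by
# exponential smoothing) vs. intermittent series (label 1, e.g. best
# served by Croston's method).
X, y = [], []
for _ in range(200):
    smooth = rng.normal(100, 10, 52).clip(min=0)
    sparse = rng.poisson(2, 52) * rng.binomial(1, 0.2, 52)
    X += [features(smooth), features(sparse)]
    y += [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC-AUC: {auc:.2f}")
```

In the study's setting, the labels would come from backtesting a pool of forecasting models on historical series; here they are known by construction.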
Supply chains have become increasingly complex, making it difficult to ensure transparency throughout the whole supply chain. In this context, first approaches have emerged that adopt the immutable, decentralised, and secure characteristics of blockchain technology to increase the transparency, security, authenticity, and auditability of assets in supply chains. This paper investigates recent publications combining blockchain technology and supply chain management and classifies them regarding the complexity to be mapped onto the blockchain. As a result, increasing supply chain transparency is identified as the main objective of recent blockchain projects in supply chain management. Most of the recent publications deal with simple supply chains and products; the few approaches dealing with complex parts map only sub-areas of supply chains. Currently, no example exists that aims to increase the transparency of complex manufacturing supply chains and that enables the mapping of complex assembly processes, efficient auditability of all assets, and the implementation of dynamic adjustments.
Gold bipyramids (AuBPs) attract significant attention due to the large enhancement of the electric field around their sharp tips and the well-defined tunability of their plasmon resonances. Excitation patterns of single AuBPs are recorded using raster-scanning confocal microscopy combined with radially and azimuthally polarized laser beams. Photoluminescence (PL) spectra and excitation patterns of the same AuBPs are acquired with three different excitation wavelengths. The isotropic excitation patterns suggest that the AuBPs are mainly excited by interband transitions with 488/530 nm radiation, while excitation patterns created with a 633 nm laser exhibit a double-lobed shape that indicates a single-dipole excitation process associated with the longitudinal plasmon resonance mode. We are able to determine the three-dimensional orientation of single AuBPs nonperturbatively by comparing experimental patterns with theoretical simulations. The asymmetric patterns show that the AuBPs are lying on the substrate with an out-of-plane tilt angle of around 10–15°.
Kopainsky et al. (2020) examine intended and unintended transition effects of the Swiss food system on the system's structure and the environment. Their research builds on and is embedded in research streams in global health (Jamison et al., 2013) and sustainable food systems (Willett et al., 2019). It also addresses many of Steffen et al.'s (2015) planetary boundaries and the United Nations' (2015) Sustainable Development Goals (SDGs), and could potentially address how they are interrelated, following Randers et al. (2019). It is furthermore embedded in research on natural and human systems, particularly in the intertwined business, supply and demand, governance, ecological, and health feedback loops (Swinburn et al., 2019). This feedback view enhances understanding and assessment of drivers towards improving human and ecological health and mitigating climate change.
Why, of all things, did we choose a robot for our cover that bears a strong resemblance to Robbi from "Robbi, Tobbi und das Fliewatüüt"? A robot from the 80s as a symbol of the future of work? Not quite. Rather, it stands for the beginnings of automation, which was prophesied to bring the end of work. Today, its modern, agile successor peeks cheekily around the corner. Today's "Robbi" emerged from a continuous technological development that has changed our working world considerably and will continue to change it. As you will see, "Robbi" has been very adaptable. But what does that mean for us? What might it look like, the future of work? And what will it change for each of us? We put these questions to professors from all faculties. In their research, they work on digital work models and future-proof education concepts, on hospitals of the future and artificial intelligence. Much differs, yet much is also the same: it is about trust. What does increasing digitalization mean for our work culture? It is about responsibility, towards ourselves and others. It is about diversity. Who are they, the workers of tomorrow? It is about networking, which is omnipresent in a digital world. For us as a university, the "future of work" is a particularly important topic, because our students of today will move in and shape this new working world tomorrow. What competencies must we teach them? That is a question we keep asking ourselves anew.
Comparative analysis of the R&D efficiency of 14 leading pharmaceutical companies for the years 1999–2018 shows that there is a close positive correlation between R&D spending and the two investigated R&D output parameters, approved new molecular entities (NMEs) and the cumulative impact factor of their publications. In other words, higher R&D investments (input) were associated with higher R&D output. Second, our analyses indicate that there are "economies of scale" (size) in pharmaceutical R&D.
Context: The software-intensive business is characterized by increasing market dynamics, rapid technological changes, and fast-changing customer behaviors. Organizations face the challenge of moving away from traditional roadmap formats to an outcome-oriented approach that focuses on delivering value to the customer and the business. An important starting point and a prerequisite for creating such outcome-oriented roadmaps is the development of a product vision to which internal and external stakeholders can be aligned. However, the process of creating a product vision is little researched and understood.
Objective: The goal of this paper is to identify lessons learned from product vision workshops, which were conducted to develop outcome-oriented product roadmaps.
Method: We conducted a multiple-case study consisting of two different product vision workshops in two different corporate contexts.
Results: Our results show that conducting product vision workshops helps to create a common understanding among all stakeholders about the future direction of the products. In addition, we identified key organizational aspects that contribute to the success of product vision workshops, including the participation of employees from functionally different departments.
Product roadmaps in the new mobility domain: state of the practice and industrial experiences
(2021)
Context: The New Mobility industry is a young market with high market dynamics and is therefore associated with a high degree of uncertainty. Traditional product roadmapping approaches, such as detailed planning of features over a long time horizon, typically fail in such environments. For this reason, companies active in the field of New Mobility face the challenge of keeping their product roadmaps reliable for stakeholders while at the same time being able to react flexibly to changing market requirements.
Objective: The goal of this paper is to identify the state of practice regarding product roadmapping of New Mobility companies. In addition, the related challenges within the product roadmapping process as well as the success factors to overcome these challenges will be highlighted.
Method: We conducted semi-structured expert interviews with eight experts (seven from German companies and one from a Finnish company) from the field of New Mobility and performed a content analysis.
Results: Overall, the results of the study showed that the participating companies are aware of the requirements that the New Mobility sector entails and therefore exhibit a high level of maturity in terms of product roadmapping. Nevertheless, some aspects were revealed that pose specific challenges for the participating companies. One major challenge, for example, is that New Mobility involving public clients is often a tender business with non-negotiable product requirements; the product roadmap can thus be significantly influenced from the outside. Mainly soft factors, such as trust between all people involved in the product development process and transparency throughout the entire roadmapping process, were mentioned as factors for successful product roadmapping.
Polymeric micelle-like nanoparticles have demonstrated effectiveness for the delivery of some poorly soluble or hydrophobic anticancer drugs. In this study, a hydrophobic moiety, deoxycholic acid (DCA), was first bonded to a polysaccharide, chitosan (CS), for the preparation of amphiphilic chitosan (CS-DCA), which was further modified with cationic glycidyltrimethylammonium chloride (GTMAC) to form a novel soluble chitosan derivative (HT-CS-DCA). The cationic amphiphilic HT-CS-DCA readily self-assembled into micelle-like nanoparticles of about 200 nm with a narrow size distribution (PDI 0.08–0.18). The zeta potential of the nanoparticles was in the range of 14 to 24 mV, indicating a high positive surface charge. Then, doxorubicin (DOX), an anticancer drug with poor solubility, was entrapped in the HT-CS-DCA nanoparticles. The DOX release test was performed in PBS (pH 7.4) at 37 °C, and the results showed that there was no significant burst release in the first two hours, while the cumulative release increased steadily and slowly in the following hours. DOX-loaded HT-CS-DCA nanoparticles could easily enter MCF-7 cells, as observed by confocal microscopy. As a result, DOX-loaded HT-CS-DCA nanoparticles demonstrated significant inhibitory activity on MCF-7 growth without obvious cellular toxicity in comparison with blank nanoparticles. Therefore, these cationic HT-CS-DCA nanoparticles show great promise for the delivery of DOX in cancer therapy.
Preliminary results of homomorphic deconvolution application to surface EMG signals during walking
(2021)
Homomorphic deconvolution is applied to sEMG signals recorded during walking. Gastrocnemius lateralis and tibialis anterior signals were acquired according to the SENIAM recommendations. MUAP parameters such as amplitude and scale were estimated, whilst the MUAP shape parameter was fixed. This provides a useful time-frequency representation of the sEMG signal. The estimation of the MUAP scale parameter was verified by extracting the mean frequency of the filtered EMG signal, using the scale parameter estimated with two different MUAP shape values.
Melamine-formaldehyde (MF) resins are widely used as surface finishes for engineered wood-based panels in decorative laminates. Since no additional glue is applied in lamination, the overall residual curing capacity of MF resins is of great technological importance. Residual curing capacity is measured by differential scanning calorimetry (DSC) as the exothermic curing enthalpy integral of the liquid resin. After resin synthesis is completed, the resulting pre-polymer has a defined chemical structure with a corresponding residual curing capacity. Predicting the residual curing capacity of a resin batch at an early stage during synthesis would enable corrective measures to be taken while synthesis is still in progress, thereby avoiding the discarding of faulty batches. Here, using a batch modelling approach, it is demonstrated how quantitative predictions of MF residual curing capacity can be derived from inline Fourier transform infrared (FTIR) spectra recorded during resin synthesis using partial least squares regression. Not only is there a strong correlation (R2 = 0.89) between the infrared spectra measured at the end of MF resin synthesis and the residual curing capacity; the inline reaction spectra obtained already at the point of complete dissolution of melamine upon methylolation, during the initial stage of resin synthesis, are also well suited for predicting the final curing performance of the resin. Based on these IR spectra, a valid regression model (R2 = 0.85) can be established using information obtained at a very early stage of MF resin synthesis.
To ease the transition from school to university, students in technical subjects often need to refresh their knowledge of mathematics and physics. An online learning system for physics can support students in engaging with physics content. In addition, a physics knowledge test can reveal gaps in a student's individual knowledge and motivate them to study the missing topics. The working group "eLearning in der Physik" of the Hochschulföderation Süd-West (HfSW), consisting of the Baden-Württemberg universities of Aalen, Esslingen, Heilbronn, Mannheim, and Reutlingen, has developed a pool of more than 200 physics exercises for first-semester students. Together with solutions, these are available to students for self-study in learning management systems and now also in the "Zentrales Open Educational Resources Repositorium der Hochschulen in Baden-Württemberg" (ZOERR). This article reports on the use of the online exercises in 2020/2021, the results of the knowledge tests, and the eTutorials newly established during the Corona period.
The Corona pandemic has restricted everyday medical care. This is reflected, among other things, in sometimes considerable access restrictions to hospitals and practices with strongly reduced patient scheduling, increased hygiene measures with correspondingly longer waiting times, the ban on accompanying persons, and, not least, many patients' fear of infection during a stay in medical facilities. As a consequence, there was and is a clearly perceptible decline in patient numbers in hospital outpatient departments and practices. Ophthalmology, as a discipline with a high proportion of outpatient and elective surgical procedures, was particularly affected.
Organisationale Identität in digitalisierten Arbeitswelten: Grundlagen für gelingende Kooperation
(2021)
Organizations develop identities, answering the questions "Who are we? And who are we not?". Concepts of organizational identity initially assume traditional organizations. Through digitalization, previously integrated tasks can be modularized more strongly, so that the coordination of the overall organizational task becomes more task-oriented and less person-oriented. In addition, organizational tasks can increasingly be organized in a project-oriented and virtual manner, so that external task holders can be integrated more easily. Our notions of organizational boundaries and membership are changing as a result. This raises the question of the extent to which a shared organizational identity develops in such task- and project-oriented organizations with dissolved boundaries. This article argues that the paths of identity development change, but that the functions of organizational identity for successful cooperation are preserved.
In this paper, we examine the political gridlock in reforming the Economic and Monetary Union. We utilize a two-stage game with imperfect information in order to study the optimal sequencing. The main results are: first, optimal sequencing requires a default option in stage two for non-compliant Member States, which in principle corresponds to today's fiscal architecture (EMU-I). Second, we show that compliant countries prefer a reform equilibrium today if and only if they have a free choice about the preferred fiscal architecture at the end — either EMU-II with binding European coordination or EMU-I related to Maastricht. Notably, our sequencing approach works for any design of the EMU-II architecture.
Programmable nano-bio interfaces driven by tuneable vertically configured nanostructures have recently emerged as a powerful tool for cellular manipulations and interrogations. Such interfaces have strong potential for ground-breaking advances, particularly in cellular nanobiotechnology and mechanobiology. However, the opaque nature of many nanostructured surfaces makes non-destructive, live-cell characterization of cellular behavior on vertically aligned nanostructures challenging to observe. Here, a new nanofabrication route is proposed that enables harvesting of vertically aligned silicon (Si) nanowires and their subsequent transfer onto an optically transparent substrate, with high efficiency and without artefacts. We demonstrate the potential of this route for efficient live-cell phase contrast imaging and subsequent characterization of cells growing on vertically aligned Si nanowires. This approach provides the first opportunity to understand dynamic cellular responses to a cell-nanowire interface, and thus has the potential to inform the design of future nanoscale cellular manipulation technologies.
Conventional production systems are evolving, through cyber-physical systems and application-oriented AI approaches, more and more into "smart" production systems, which are characterized, among other things, by a high level of communication and integration of the individual components. The exchange of information between the systems is usually oriented only towards the data content; semantics is usually considered only implicitly. The adaptability required by external and internal influences demands the integration of new components or the redesign of existing ones. An open, application-oriented ontology extends the exchange of information and communication with explicit semantic information. This enables better integration of new components and easier reconfiguration of existing ones. The developed ontology and the derived application and use of the semantic information are evaluated by means of a practical use case.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system designs, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible. The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in traditional DBMSs, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under RocksDB and the COSMOS hardware platform.
Respiratory diseases are leading causes of death and disability in the world. The recent COVID-19 pandemic is also affecting the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and the clinical environment. Most of the techniques used up to date were also invasive or expensive.
Some research groups are developing hardware devices and techniques to enable non-invasive or even remote respiratory sound acquisition. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, such as microphones, wearables, and Internet of Things devices, that can be used in this scope.
Some interesting applications have been found. Some devices facilitate sound acquisition in a clinical environment, while others enable daily monitoring outside it. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
Escherichia coli (E. coli) is considered the most common life-threatening infectious bacterium in our daily life and poses a major challenge to human health. However, the frequent overuse and misuse of antibiotics has triggered increased multidrug resistance, which hinders therapeutic outcomes and causes higher mortality. Herein, we present near-infrared (NIR) laser-excited, human serum albumin (HSA) mediated graphene oxide loaded with palladium nano-dots (HSA-GO-Pd) that can effectively combat Gram-negative E. coli in vitro. Electron spin resonance (ESR) analysis shows that the NIR laser-excited hybrid material efficiently generates singlet oxygen and hydroxyl radicals. Transmission electron microscope (TEM) images show small, spherical PdNPs on the surface of the GO nano-sheets. In aqueous medium, the average PdNP size of HSA-GO-Pd is 5–8 nm and its zeta (ζ) potential, arising from the capping human serum protein (HSA), is +25 mV. The spectroscopic characterization reveals that in the synthesized HSA-GO-Pd nanocomposite the PdNPs are well dispersed on the surface of the graphene oxide. The as-synthesized HSA-GO-Pd shows excellent antibacterial activity against the Gram-negative pathogen, killing 95% of the bacteria within 5 h. HSA-GO-Pd is highly biocompatible and shows significant antibacterial activity. Owing to its intense photothermal conversion potential and low toxicity to normal cells, the hybrid material (HSA-GO-Pd) combined with NIR irradiation offers a promising route to the effective ablation of pathogenic bacteria.
During the curing of thermosetting resins, the technologically relevant properties of binders and coatings develop. However, curing is difficult to monitor due to the multitude of chemical and physical processes taking place, and precise prediction of specific technological properties based on molecular properties is very difficult. In this study, the potential of principal component analysis (PCA) and principal component regression (PCR) in the analysis of Fourier transform infrared (FTIR) spectra is demonstrated using the example of melamine-formaldehyde (MF) resin curing in the solid state. FTIR/PCA-based reaction trajectories are used to visualize the influence of temperature on isothermal cure. An FTIR/PCR model for predicting the hydrolysis resistance of cured MF resins from their spectral fingerprints is presented, which illustrates the advantages of FTIR/PCR compared to the combination of differential scanning calorimetry and isoconversional kinetic analysis. The presented methodology is transferable to the curing reactions of any thermosetting resin and can be applied to model other technologically relevant final properties as well.
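A minimal sketch of principal component regression (PCR), the generic technique the study applies to FTIR spectra. The "spectra" below are synthetic, driven by a single reaction coordinate, and only illustrate the mechanics, not the MF-resin data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Synthetic isothermal-cure spectra: one reaction coordinate (degree of
# cure) drives a gradual change from an "educt" to a "product" signature.
extent = np.linspace(0, 1, 40)
band_educt = rng.normal(0, 1, 300)
band_product = rng.normal(0, 1, 300)
spectra = np.outer(1 - extent, band_educt) + np.outer(extent, band_product)
spectra += rng.normal(0, 0.02, spectra.shape)

# The property to predict (standing in for hydrolysis resistance) depends
# on the same reaction coordinate.
hydrolysis_resistance = 5 + 3 * extent + rng.normal(0, 0.05, 40)

# PCR = project spectra onto principal components, then regress linearly.
pcr = make_pipeline(PCA(n_components=2), LinearRegression())
pcr.fit(spectra, hydrolysis_resistance)
r2 = pcr.score(spectra, hydrolysis_resistance)
print(f"R^2 = {r2:.2f}")
```

Plotting the PCA scores of the spectra over time would give the "reaction trajectory" view mentioned in the abstract.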
The isothermal curing of melamine resin is investigated by in-line infrared spectroscopy at different temperatures. The infrared spectra are decomposed into time courses of characteristic spectral patterns using multivariate curve resolution (MCR). It was found that, depending on the applied curing temperature, melamine films with different spectral fingerprints and correspondingly different chemical network structures are formed. The network structures of fully cured resin films are specific to the applied curing temperature and cannot simply be compensated by changes in the curing time. For industrial curing processes, this means that cure temperature is the main system-determining factor at constant M:F ratio. However, different MF resin networks can be specifically obtained from one and the same melamine resin by suitable selection of the curing time and temperature profiles to design resin functionality. The spectral fingerprints after short curing times as well as after long curing times reflect the fundamental differences in the thermoset networks that can be obtained with industrial short-cycle and multi-daylight presses.
The seamless fusion of the virtual world of information with the real physical world of things is considered the key to mastering the increasing complexity of production networks in the context of Industry 4.0. This fusion, widely referred to as the Internet of Things (IoT), is primarily enabled through the use of automatic identification (Auto-ID) technologies as an interface between the two worlds. Existing Auto-ID technologies almost exclusively rely on artificial features or identifiers that are attached to an object for the sole purpose of identification. Using artificial features for the purpose of identification, however, causes additional effort and is not always applicable. This paper therefore follows an approach of using multiple natural object features, defined by the technical product information from computer-aided design (CAD) models, for direct identification. By extending optical instance-level 3D object recognition with additional non-optical sensors, a multi-sensor automatic identification system (AIS) is realised that is capable of identifying unpackaged piece goods without the need for artificial identifiers. While the implementation of a prototype confirms the feasibility of the approach, first experiments show improved accuracy and distinctiveness in identification compared to optical instance-level 3D object recognition. This paper aims to introduce the concept of multi-sensor identification and to present the prototype multi-sensor AIS.
Training at assembly workstations in production at SMEs (small and medium-sized enterprises) often does not take place at all, or only insufficiently. In addition to the lack of technical content, "untrained" people usually automatically acquire movement sequences that are incorrect from an ergonomic point of view. An AI-based approach is used to analyze a defined workflow for a specific assembly scope with regard to the behavior of several employees. Based on these different behaviors, the AI gives feedback on the points in time, work steps, and movements at which particularly dangerous incorrect postures occur. Motion capturing and digital human model simulation, in combination with the results of the AI, define the optimized workflow. Individual employees can be trained directly, since the AI identifies their most serious incorrect postures and provides them with a direct comparison of their "wrong" posture and a "joint-friendly" posture. With the assistance of various test persons, the AI can conduct a study in which the most frequently occurring incorrect postures are identified, either in general or tailored to specific groups of people (e.g. "People over 1.90 m tall must be particularly careful not to make the following mistake..."). The approach will be tested and validated at Werk150, the factory of ESB Business School on the campus of Reutlingen University. The knowledge gained will subsequently be used for training in SMEs.
Software is an integral part of new features in the automotive sector. To determine software quality, car manufacturers in the Hersteller Initiative Software (HIS) consortium defined a set of metrics. Yet, problems with assigning metrics to quality attributes often occur in practice. The specified boundary values lead to discussions between contractors and clients, as different standards and metric sets are used. This paper studies metrics used in the automotive sector and the quality attributes they address. The HIS metric set, ISO/IEC 25010:2011, and ISO/IEC 26262:2018 are utilized to draw a big picture illustrating (i) which metrics and boundary values are reported in the literature, (ii) how the metrics match the standards, (iii) which quality attributes are addressed, and (iv) how the metrics are supported by tools. Our findings from analyzing 38 papers include a catalog of 112 metrics, of which 17 define boundary values and 48 are supported by tools. Most of the metrics are concerned with source code, are generic, and are not specifically designed for automotive software development. We conclude that many metrics exist, but a clear definition of the metrics' context, notably regarding the construction of flexible and efficient measurement suites, is missing.
The present work proposes the use of modern ICT technologies such as smartphones, NFC, the internet, and web technologies to help patients carry out their therapies. The implemented system provides a calendar with reminders for medication intake, ensures drug identification through NFC, and allows remote assistance from healthcare staff and family members to check and manage the therapy in real time. The system also provides centralized information on the patient's therapeutic situation, helpful in choosing new compatible therapies.
As consumer awareness of the impacts of the climate crisis continues to grow, businesses are searching for new models to improve their sustainability profile. As a result, the implementation of a company's sustainability vision following the SDGs has to be linked closely to the integration of customers into strategic action. One success factor is the management of customers over their entire life cycle. The Customer Journey serves as a model to systematise this approach, by designing touchpoints throughout the purchasing process in order to motivate consumers to act sustainably. Based on behaviour models, the authors develop recommendations for the food industry to design a sustainable Customer Journey that helps to reduce the share of consumers who report positive attitudes towards sustainable products but do not exhibit corresponding behaviour.
This paper presents a machine-learning-powered, procedural sizing methodology based on pre-computed look-up tables containing operating point characteristics of primitive devices. Several neural networks are trained for 90 nm and 45 nm technologies, mapping different electrical parameters to the corresponding dimensions of a primitive device. This transforms the geometric sizing problem into the domain of circuit design experts, where the desired electrical characteristics are now inputs to the model. Analog building blocks or entire circuits are expressed as a sequence of model evaluations, capturing the sizing strategy and intention of the designer in a procedure that is reusable across different technology nodes. The methodology is employed for the sizing of two operational amplifiers and evaluated for two technology nodes, showing the versatility and efficiency of this approach.
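The inversion at the heart of this idea, from a desired operating point back to device dimensions, can be sketched with a nearest-neighbour search over a toy look-up table. This stands in for the paper's trained neural networks; the table values and parameter names are illustrative only:

```python
# Sketch of look-up-table-based sizing: map a desired electrical operating
# point back to device dimensions. A nearest-neighbour search stands in for
# the trained neural networks; all table values are illustrative.

# Pre-computed table: (W_um, L_um) -> simulated (gm_over_id, f_ug_MHz)
TABLE = {
    (1.0, 0.09): (12.0, 900.0),
    (2.0, 0.09): (14.5, 850.0),
    (4.0, 0.18): (18.0, 300.0),
    (8.0, 0.18): (20.5, 250.0),
}

def size_device(target_gm_over_id, target_f_ug):
    """Return the (W, L) whose characteristics are closest to the target."""
    def dist(characteristics):
        g, f = characteristics
        # Normalise each axis so both parameters carry comparable weight.
        return ((g - target_gm_over_id) / 20.0) ** 2 + \
               ((f - target_f_ug) / 900.0) ** 2
    return min(TABLE, key=lambda wl: dist(TABLE[wl]))
```

A different table per technology node is what makes such a procedure reusable: the designer's sequence of `size_device` calls stays the same while the underlying data changes.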
Soft lithography, a tool widely applied in biology and the life sciences, uses the soft molding of photolithography-generated master structures by polymers. The central part of a photolithography set-up is a mask aligner, mostly based on a high-pressure mercury lamp as an ultraviolet (UV) light source. This type of light source requires a high level of maintenance and shows decreasing intensity over its lifetime, influencing the lithography outcome. In this paper, we present a low-cost, bench-top photolithography tool based on ninety-eight 375 nm light-emitting diodes (LEDs). At approx. 10 W, our lithography set-up requires only a fraction of the energy of a conventional lamp; the LEDs have a guaranteed lifetime of 1000 h, which translates into at least 2.5 to 15 times more exposure cycles compared to a standard light source; and with costs of less than €850 it is very affordable. Such a set-up is attractive not only to small academic and industrial fabrication facilities that want to work with photolithography but cannot afford a conventional set-up; microfluidic teaching laboratories and microfluidic research and development laboratories in general could also benefit from this cost-effective alternative. With our self-built photolithography system, we were able to produce structures from 6 μm to 50 μm in height and 10 μm to 200 μm in width. As an optional feature, we present a scaled-down laminar flow hood to provide a dust-free working environment for the photolithography process.
More than 75 countries have already pledged to become climate neutral by 2050, and the share of global emissions covered by an emission pricing scheme has increased steeply over the past two years. Even where there are no direct implications for industry (yet), a series of subtle pressure points is driving an increasing number of companies across the globe to work towards climate neutrality and to pledge ambitious carbon reduction goals.
This article sheds light on what these pressure points are, what the subtle triggers and underlying considerations are, and what benefits industrial companies hope for in pursuing decarbonisation. The observations and ideas presented in this paper are derived from quantitative and qualitative data. The quantitative data was collected within the framework of the Energy Efficiency Index of German Industry (EEI). The qualitative data was collected from interviews in industrial organisations, from media documents, and from professional practice.
Not only societal, workforce, supply chain, and investor expectations play a large role, but also many strategic considerations with the potential to make a business more resilient and profitable. Companies that do not move towards decarbonisation, on the other hand, may face a costly late-mover disadvantage.
This piece uncovers subtle interconnections, helping stakeholders from industry and beyond to grasp the opportunities and challenges ahead. Taking account of these calls for rethinking economic viability calculations and investment decision-making. Doing so may subsequently lead to on-site carbon reduction measures being prioritised to decarbonise effectively.
Scientists have always found it interesting to look at economic indicators in order to explain current economic developments and to forecast a boom or a recession. In addition to classic, hard economic indicators such as gross national product or the Ifo index, there are also a number of psychological, soft indicators that economists consult. The lipstick effect is one of these psychological indicators. The paper introduces the lipstick effect, explains the causes behind the phenomenon, shows the connection to brand management, and provides references to the current Corona pandemic.
Global economic growth over the past decades was shaped by the dynamics of digitalisation and globalisation in supply chains. The Corona pandemic has exposed the dependency and vulnerability of these supply chains. Despite a multitude of binding standards, companies have also used digitalisation and the division of labour for regulatory arbitrage. On the one hand, this increases the efficiency of the economy, which in turn conserves ecological resources; on the other hand, it undermines international standards. Globalisation and digitalisation are a blessing and a curse at the same time.
Learning factories on demand
(2021)
Learning factories are research and learning environments that demonstrate new concepts and technologies for industry in a practical setting. The interaction between physical and virtual components is a central aspect. Teaching and presentation usually take place directly in the learning factory and are thus limited in time and in audience. A learning factory on demand can be provided by dividing and virtualizing the individual components via containers and microservices. This enables both local operation and operation in hybrid-cloud or cloud systems. Physical components can be mapped either through standardized interfaces or through suitable emulators. Using the example of the learning factory at Reutlingen University (Werk150), it is shown how different use cases can be made available by means of software-based orchestration, thus promoting broader and more independent teaching.
Control in New Collaboration Work: on the fantasies of purpose, growth, and belonging
(2021)
In many companies, the current challenge is to make collaboration fit for the future: hierarchies are becoming flatter, teams more self-organised, and processes such as agile frameworks govern workflows.
But what happens in such collaborative work contexts when it comes to control? In hierarchical organisations, the topic is comparatively easy to grasp: managers control work processes through the division and assignment of work, through disciplinary authority, and through motivating leadership behaviour (appraisal interviews, criticism, praise).
The goal of the research project was to develop combination finishes for the textile industry based on photocatalytically active zinc oxide and/or titanium dioxide particles that guarantee high UV protection (UPF value: 50+), high antimicrobial efficacy, and self-cleaning properties, thereby creating new, more hygienic textiles. To this end, aqueous finishes were to be developed that can be applied via conventional finishing techniques ("pad-dry-cure"). The particles were to be active under irradiation with ambient room light and were therefore to be modified so that their absorption lies in the wavelength range of visible light.
To meet the project goals, various doped TiO2 and ZnO nanoparticles were synthesised whose absorption of electromagnetic radiation was shifted by the introduction of dopants. An activity screening of suitable candidates showed that some catalysed the degradation of organic reference materials and exhibited antibacterial activity. Iron-doped zinc oxide (Fe-ZnO) combined both desired properties to a sufficient degree and also showed high absorption of UV radiation, so that the third project goal, sufficient UV protection, could also be achieved.
The repeated synthesis of Fe-ZnO succeeded on the laboratory scale. The particles could be immobilised on various textiles via padding, using the sol-gel process with inorganic tetraethoxysilane as well as an organic polyurethane-based binder. Wash stability was achieved, and photodegradation of the binder and the textiles could be ruled out, at least for the TEOS system. The activity screening of the finished textiles showed that immobilised nanoparticles suffice to reach the targeted project goals, but the activity of the TiO2 used as a reference could not be surpassed.
Overall, the project provided insight into the usefulness of nanoparticles as catalytically active substances suitable for finishing textiles. However, to achieve sufficient activity in the visible wavelength range and thus generate benefits for indoor applications, the fundamentals of doping and its effect on ROS generation must be understood considerably better.
The goals of the research project were partially achieved.
Although spiral antennas have undergone continuous development and refinement since Edwin Turner conceived them in 1954, only a few compact planar arrays exist. The shortcoming is even more significant when it comes to spiral antenna arrays operating in mode M2. The present work addresses this issue, among other things. It presents two planar arrays of spiral antennas operating in the same frequency band, the first radiating an axial mode M1 and the second a conical mode M2. Both arrays are modeled, simulated, and fed with a corporate feeding network embedded in a dielectric substrate. It is shown that, keeping the same topology, the array for mode M1 can be obtained from the array for mode M2 by simply introducing a phase shift on one branch of the feed, and vice versa, thus providing the possibility of obtaining in the same structure a spiral antenna array operating in both modes in the same frequency band simultaneously. Comparison between simulated and measured data shows good agreement.
Accurate and safe neurosurgical intervention can be affected by intra-operative tissue deformation, known as brain shift. In this study, we propose an automatic, fast, and accurate deformable method, called iRegNet, for registering pre-operative magnetic resonance images to intra-operative ultrasound volumes to compensate for brain shift. iRegNet is a robust end-to-end deep learning approach for the non-linear registration of MRI-iUS images in the context of image-guided neurosurgery. The pre-operative MRI (as moving image) and iUS (as fixed image) are first fed to our convolutional neural network, after which a non-rigid transformation field is estimated. The MRI image is then transformed into the iUS coordinate system using the output displacement field. Extensive experiments have been conducted on two multi-location databases, BITE and RESECT. Quantitatively, iRegNet reduced the mean landmark error from pre-registration values of 4.18 ± 1.84 mm and 5.35 ± 4.19 mm to 1.47 ± 0.61 mm and 0.84 ± 0.16 mm for the BITE and RESECT datasets, respectively. Additional qualitative validation was conducted by two expert neurosurgeons, who overlaid MRI-iUS pairs before and after the deformable registration. Experimental findings show that our proposed iRegNet is fast and achieves accuracies outperforming state-of-the-art approaches. Furthermore, iRegNet delivers competitive results even on images it was not trained on, as proof of its generality, and can therefore be valuable in intra-operative neurosurgical guidance.
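The final step of such a pipeline, applying a predicted displacement field to resample the moving image into the fixed image's grid, can be sketched in miniature. This is a generic 2-D nearest-neighbour warp on a toy image, not the paper's network or its 3-D resampler:

```python
# Sketch of applying a dense displacement field: resample the moving image
# (here a tiny 2-D grid) into the fixed image's coordinate system.
# In iRegNet the field is predicted by a CNN; here it is given by hand.

def warp(image, field):
    """image: 2-D list of values; field[y][x] = (dy, dx) in pixels."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = field[y][x]
            sy = min(max(int(round(y + dy)), 0), h - 1)  # clamp to bounds
            sx = min(max(int(round(x + dx)), 0), w - 1)
            out[y][x] = image[sy][sx]  # nearest-neighbour sampling
    return out

img = [[0, 1],
       [2, 3]]
# A uniform field sampling one pixel to the right shifts content left.
shift = [[(0, 1), (0, 1)],
         [(0, 1), (0, 1)]]
warped = warp(img, shift)
```

Real registration pipelines use trilinear interpolation on 3-D volumes; the clamping and per-voxel sampling pattern is the same.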
This paper takes a holistic view of an IP-traceability process in interorganizational R&D projects, a particular open innovation mode, aiming to show the different technologies that can be used in the front end and back end of a traceability process and to discuss their suitability for data from creativity processes in these projects. To achieve this goal, a two-stage literature review on technologies in the context of traceability was conducted. Criteria were then derived from the characteristics of data from creativity processes and of interorganizational R&D projects, against which the resulting technologies were discussed. Finally, recommendations regarding suitable technologies for tracing individual creativity artifacts in interorganizational R&D projects were given.
In today’s marketplace, the consumption of luxury goods is at a peak due to increasing global wealth and low interest rates, resulting in a vast supply of goods and services for which customer experiences are more relevant than ever before. One of the most recent developments in this field shows that consumers no longer simply purchase a product or service based on the fact sheet alone; they are also interested in the experience around the product. Successful brands must develop and maintain individual images to sustain their competitive advantage and build brand equity that is beneficial for customers and firms. Ideally, these will lead to satisfaction and loyalty between a brand, its products, and its customers. Existing research about brand experience and brand equity has mainly focused on functional aspects, which seem to differ for high-value luxury goods. Most studies have focused on industries like retail and fashion brands, sampling university students or visitors to shopping malls, and some have even mixed different types of industries together. This underpins the need for research within a single luxury industry with actual luxury customers who have a solid background with brand experiences.
The purpose of this study was to explore the brand experience spectrum within the automotive industry in Germany, particularly in the affordable luxury sports car sector. A clear aim of the study was to identify the factors and components that constitute, influence, or drive a brand experience from the customers' perspective. To achieve this, the study collected data from in-depth interviews with German (n=60) respondents who had experience with affordable luxury sports cars. The conceptual framework was based on two empirically tested models guiding this exploratory consumer research. The first model to build on was the consumer-based brand equity model, empirically tested by Çifci et al. (2016) and Nam et al. (2011). The second conceptual framework was Lemon and Verhoef’s (2016) customer journey model, consisting of relevant touchpoints along the following three stages: pre-purchase, purchase, and post-purchase.
The findings of the research demonstrate that, although the six brand equity concepts (brand awareness, physical quality, staff behaviour, self-congruence, brand identification, and lifestyle) are broadly applicable in understanding customer experience in the affordable luxury car industry, the content of these dimensions differs from that suggested by the previous authors. The research established that cognitive and affective (or symbolic) components build the foundation of customer brand experience, supporting the results of Çifci et al. (2016) and Nam et al. (2011). The study also identified brand trust as an important and highly relevant concept for customer brand experience in the luxury automotive industry. Brand trust influences customer satisfaction and loyalty, thereby improving and complementing the existing model. Furthermore, the study confirmed Lemon and Verhoef’s (2016) process model of the customer journey and experience; however, it suggested two different customer journeys depending on the customers’ previous experience (first-time and experienced buyers). The differences between the two groups, and the relevance of the journey touchpoints within the three purchase stages, vary significantly. Key touchpoints identified for both groups are contact with a dealer as well as online information gathering. Differences were found in the length of the purchase stages and across the customer journey. The study highlights the importance of trust, identification, and product quality for customer brand experience. Moreover, the findings of this study complement the brand equity model of Çifci et al. (2016) by adding the new, highly relevant concept of trust. The current knowledge is complemented by a new understanding and mapping of the customer journey for luxury sports cars in Germany.
This study can assist practitioners and managers by providing a compass indicating which touchpoints are relevant to which customer group. Social value can be achieved by encouraging interactions between brand and consumer (e.g. central product launch events) and through brand-oriented interactions among consumers (e.g. dealer events, clubs, or communities). Customers are motivated to express their distinctiveness through product experience and brand identification (belonging/distinction) and to develop a loyal link to brands.
The powerful methods of machine learning are steadily finding their way into a wide variety of application areas in the financial sector. While they are continuously being refined by a large community of researchers and practitioners, banking supervisors are also actively taking up the topic and commenting on it in guidelines and discussion papers.
Intercultural management
(2021)
Culture can be described as "the collective programming of the mind which distinguishes the members of one group or category of people from another" (Hofstede/Hofstede/Minkov, p. 6). The patterns of believing, thinking, feeling, and acting that make up these mental programs or cultural mindsets are learned by group members in the course of their socialisation. They are the shared "unwritten rules" of how things are done in this social environment (cf. Hofstede/Hofstede/Minkov, p. 6). The way people in a given culture behave, their practices and norms, is therefore not arbitrary but strongly influenced by these learned principles and values. Knowledge of a culture makes it possible to explain and predict the reactions of its members (cf. Lewis, p. XII).
Teaching and learning are subject to constant change, with interaction regarded as a central element in increasing motivation in learning contexts. This contribution presents various approaches to designing interactive and collaborative teaching and learning in a virtual classroom and gives an example of the implementation and use of such a system. The added value and success factors that arise from using virtual classrooms and designing them as an interactive blended-learning environment are presented and discussed. With the Accelerator system, a CSILT (Computer Supported Interactive Learning and Teaching) environment is introduced in which these factors are applied.
Providing clinical information in the operating room is an important aspect of supporting the surgical team. Robot-assisted esophageal resection is a particularly complex intervention that offers potential for workflow-based support. We present first results of the development of a checklist tool together with the underlying modelling of the surgical workflow and the surgeons' information needs. The checklist tool displays the steps to be performed in chronological order and provides additional information adapted to the context. Automatic documentation of the start and end times of individual surgical phases and steps is intended to enable future process analyses of the operation.
Access to clinical information during interventions is an important aspect of supporting the surgeon and the team in the OR. The OR-Pad research project aims at displaying clinically relevant information close to the patient during surgery. With the OR-Pad system, the surgeon shall be able to access case-specific information, displayed on a sterile-packaged, portable display device. Therefore, information shall be prepared before surgery and also be available afterwards. The project follows a user-centered design process. In the third iteration, the interaction concept was finalized, resulting in an application that can be used in two modes, mobile and intraoperative, to support the surgeon before/after and during surgery, respectively. By supporting the surgeon perioperatively, the system is expected to improve the information situation in the OR and thereby the quality of surgical results. Based on this concept, the system architecture was designed in detail, using a client-server architecture. Components, communication interfaces, exchanged data, and intended standards for data exchange of the OR-Pad system, including connecting systems, were conceived. Expert interviews using a clickable prototype were conducted to evaluate the concepts.
This article describes the concept and the implementation of an interdisciplinary seminar that was held at the University of Education in Freiburg, Germany. Student teachers for elementary school subjects were first taught in Design Thinking. Then they used their acquired knowledge to create learning scenarios for the subjects Art/Crafts and General Science and Social Studies. The article highlights the results and offers the opportunity to discuss the potentials of Design Thinking with regard to its transfer to classroom and teacher education against the background of fostering children’s creativity, problem-solving skills, and collaborative work.
Guerrilla marketing is the selection of atypical and non-dogmatic marketing activities that aim to achieve the greatest possible impact, in the ideal case with a comparably minimal investment. Guerrilla marketing has developed into a basic strategy overarching the marketing mix: a basic marketing-policy attitude for market development that goes off the beaten track to consciously seek new, unconventional, previously disregarded, possibly even frowned-upon possibilities for deploying tools. Digital marketing tools such as social media provide new ways and promising opportunities for innovative guerrilla marketing. This article provides an overview of innovative digital guerrilla marketing. It describes and structures guerrilla marketing in a novel form and shows illustrative examples as well as developmental trends.
Managing R&D departments purely on a cost basis does not do justice to the immense importance of innovation success for a company's future. Agile innovation management requires a solid R&D performance management system as its foundation. Besides choosing the "right" methods and key figures, it is decisive that controllers embrace their role as business partners. In this contribution, innovation performance management is made concrete using WMF as an example.
Schema and data integration have been a challenge for more than 40 years. While data warehouse technologies are quite a success story, there is still a lack of information integration methods, especially if the data sources are based on different data models or do not have a schema. Enterprise Information Integration has to deal with heterogeneous data sources and requires up-to-date high-quality information to provide a reliable basis for analysis and decision-making. The paper proposes virtual integration using the Typed Graph Model to support schema mediation. The integration process first converts the structure of each source into a typed graph schema, which is then matched to the mediated schema. Mapping rules define transformations between the schemata to reconcile semantics. The mapping can be visually validated by experts. It provides indicators and rules to achieve a consistent schema mapping, which leads to high data integrity and quality.
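The mediation idea, converting each source's structure into a typed schema and reconciling it with the mediated schema via mapping rules, can be sketched in miniature. The schemas and rules below are illustrative stand-ins, not the Typed Graph Model itself:

```python
# Sketch of schema mediation: a source's node types and attributes are
# mapped onto the mediated schema by explicit mapping rules that
# reconcile names. Schemas and rules here are illustrative examples.

source_schema = {             # node type -> {attribute: data type}
    "Client": {"client_no": "int", "full_name": "str"},
}
mediated_schema = {
    "Customer": {"id": "int", "name": "str"},
}

# Mapping rules: (source_type, source_attr) -> (mediated_type, mediated_attr)
RULES = {
    ("Client", "client_no"): ("Customer", "id"),
    ("Client", "full_name"): ("Customer", "name"),
}

def translate(record_type, record):
    """Map one source record into the mediated schema using the rules."""
    out = {}
    for attr, value in record.items():
        target_type, target_attr = RULES[(record_type, attr)]
        out.setdefault(target_type, {})[target_attr] = value
    return out

row = translate("Client", {"client_no": 7, "full_name": "Ada"})
```

Keeping the rules explicit and inspectable is what allows experts to visually validate a mapping before it is used for virtual integration.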
For more than twelve years now, Informatics Inside has been held as a computer science conference at Reutlingen University, this year for the second time on a semi-annual cycle, i.e. also in autumn. This scientific conference of the master's programme Human-Centered Computing is organised and run by the students themselves. During their master's studies, they have the opportunity to specialise in a subject of their own choosing, which can be pursued at the university, in a company, at a research institute, or abroad. It is precisely this flexible design of the module "Scientific Specialisation" that leads to the very broad range of topics covered by the students. Beyond the subject matter itself, presenting and defending scientific results plays an important role, well beyond one's studies. Preparing and conveying a chosen subject area in a generally understandable way, so that it becomes accessible to non-specialists as well, is always a particular challenge. The students take on this challenge at the autumn conference on scientific specialisation on 24 November 2021. For the fourth time, the event will take place online, including a virtual accompanying programme.
The range of topics at this year's autumn conference is once again very diverse and broad. Contributions await you from, among others, the health sector, machine learning, AI and VR, as well as marketing and e-learning. What they all have in common is a strong connection to innovative computer science approaches, which is also reflected in the conference's pun and motto "RockIT Science". Computer science permeates almost all professional and private areas of application and has an ever-greater influence on our daily lives. This can trigger concern on the one hand and enthusiasm on the other. It is the latter that the students want to achieve with their contributions, letting it "rock" in the computer science sector for once.
The master's programme Human-Centered Computing at Reutlingen University is characterised by the collaboration of humans and computers. Sensor technology plays an important role at this interface and gives this year's Informatics Inside conference its motto "perceive(it)". The wordplay on "it", standing both for information technology and for the English personal pronoun, highlights the duality of humans perceiving information technology and computers perceiving reality. The field of tension between artificial sensory extensions and the intelligent processing of sensor data enables countless applications for digital media, virtual worlds, realistic simulations, computer-supported work processes, and intelligent assistance systems in production, in the household, in medicine, and in mobility. My attention here is devoted to practice-oriented research as a driver of this technical progress.