The strong demand to transform the textile and fashion industry towards sustainability requires continuous implementation of the Education for Sustainable Development (ESD) mission statement in education and industry. To achieve this goal, the European research project "Fashion DIET - Sustainable Fashion Curriculum at Textile Universities in Europe. Development, Implementation and Evaluation of a Teaching Module for Educators", co-funded by the Erasmus+ programme of the European Union (2020-1-DE01-KA203-005657), aims to create an ESD module for university lecturers and research-based teaching and learning materials delivered through an e-learning portal. First, an online questionnaire was rolled out to assess university faculty attitudes toward and needs for ESD content and methods. The feedback questionnaire enabled the selection of the most relevant data for the elaboration of an action and research-oriented professional development module for ESD in textile education, which will be accessible through an information & e-learning portal. The e-learning portal can be used as a web-based tool to apply and evaluate the project outcomes, e.g. the further education module and the teaching and learning materials for educators, such as manuals, broadcasts and the provision of interactive and physical materials. It thus ensures that the teaching materials can be used sustainably in the classroom. It also provides country-specific data for the fashion and textile industry and its market, taking into account the different perspectives of universities and schools. In any case, the portal represents (1) the web-based platform to support the dissemination of ESD as a guiding principle and (2) a central contact point for the target group to obtain relevant information on ESD. Fashion DIET explores the use of e-learning to improve teaching and learning on ESD, by training educators and empowering them as multipliers for a sustainable textile and fashion industry. 
At a higher level, the European project strengthens the quality and relevance of learning provision in education towards the latest developments in textile research and innovation in terms of a more sustainable fashion.
Purpose – This paper aims to determine the affecting factors of the brand authenticity of startups in social media.
Design/methodology/approach – Using a qualitative method based on a grounded theory approach, this research specifies and classifies the affecting factors of brand authenticity of startups in social media through in-depth semi-structured interviews.
Findings – Multiple factors affecting the brand authenticity of startups in social media are determined and categorized as indexical, iconic and existential cues. Connection to heritage and having credible support are determined as indexical cues. Founder intellectuality, brand intellectuality, commitment toward customers and proactive, clear and interesting communications are identified as iconic cues. Self-confidence and self-satisfaction, intimacy with the brand and a joyful feeling in interactions with the community around the brand are determined as existential cues. This research furthers previous arguments on the multiplicity of brand authenticity by shedding light on the relationship between the different aspects of authenticity and the way the different affecting factors can be organized together. Consumers ultimately form a strengthened perception of brand authenticity through existential cues that reflect the cues of the other aspects (iconic and indexical) which have passed through the goal-based assessment and self-authentication filter.
Research limitations/implications – The research sampling population could be more diversified in terms of sociodemographic attributes. Given the qualitative methodology of this research, assessment of the findings through quantitative methods can be considered in future research.
Practical implications – Using the findings of this research, startup managers can properly build a perception of authenticity in their consumers’ minds by using alternative factors when major indexical cues such as heritage are lacking. This research helps startup businesses to design their brand communications better to convey their authenticity to their audiences.
Originality/value – This research determines the factors affecting the authenticity of startup brands in social media. It also defines the process of authenticity perception through different aspects of brand authenticity.
Academic research is vital for innovation and industrial growth. However, a potential burden of processing ever more knowledge could be affecting research output and researchers’ careers. We look at a dataset of researchers who have published in journals in the field of economics during a period of 45 years. For a subset of these researchers, we amass data from journals listed in the EconLit database, supplemented with years of birth from public sources. Our results show an increase in the age of researchers at their first publication, in the number of articles referenced in debut articles, and in the number of co-authors. Simultaneously, we observe a decline in the probability of researchers changing research fields. Our findings extend earlier findings on patents and hint at a burden of knowledge pervading different areas of human progress. Moreover, our results indicate that researchers develop strategies of specialisation to deal with this challenge.
We analyze economics PhDs’ collaborations in peer-reviewed journals from 1990 to 2014 and investigate such collaborations’ quality in relation to each co-author’s research quality, field and specialization. We find that a greater overlap between co-authors’ previous research fields is significantly related to a greater publication success of co-authors’ joint work, and this is robust to alternative specifications. Co-authors that engage in a distant collaboration are significantly more likely to have a large research overlap, but this significance is lost when co-authors’ social networks are accounted for. High-quality collaboration is more likely to emerge as a result of an interaction between specialists and generalists with overlapping fields of expertise. Regarding interactions across subfields of economics (interdisciplinarity), it is more likely to be conducted by co-authors who already have interdisciplinary portfolios than by co-authors who are specialized or starred in different subfields.
COVID-19 and educational inequality: How school closures affect low- and high-achieving students
(2021)
In spring 2020, governments around the globe shut down schools to mitigate the spread of the novel coronavirus. We argue that low-achieving students may be particularly affected by the lack of educator support during school closures. We collect detailed time-use information on students before and during the school closures in a survey of 1099 parents in Germany. We find that while students on average reduced their daily learning time of 7.4 h by about half, the reduction was significantly larger for low-achievers (4.1 h) than for high-achievers (3.7 h). Low-achievers disproportionately replaced learning time with detrimental activities such as TV or computer games rather than with activities more conducive to child development. The learning gap was not compensated by parents or schools who provided less support for low-achieving students.
Worldwide and in Germany, the topic of inflation is reaching new highs in public attention (Google Trends 2022). Following a widely noted online Christmas lecture from 2020 that has been viewed millions of times, the economics professor Hans-Werner Sinn has published a book of the same title, "Die wundersame Geldvermehrung" ("The Miraculous Multiplication of Money"). Once again, the author may well succeed in shaking up the political public with it.
Identification of sleep and wake states through the analysis of respiratory and movement signals
(2021)
In the current age of innovative business financing opportunities available from fintech apps, social media crowdfunding sites such as Kickstarter, Indiegogo, and RocketHub, among others, and friends-and-family private equity investors, start-up firms can strategically source their venture capital funds from many globally dispersed organizations and individuals. As the firm in this case learned, the benefit of alternative investing sources comes with a critical hidden risk for corporate governance. After a financial restructuring, a typical Silicon Valley software start-up found itself with close to 300 external individual shareholders, some of whom had not been documented as accredited investors. The regulatory agency could decide that the prior actions of the founders and the decisions of the board had been prejudicial to the interests of the minority investors. The management of this small private company faced an atypical investor relations dilemma before its initial public offering (IPO).
In this work, a comparison between different brushless harmonic-excited wound-rotor synchronous machines is performed. The general idea of all topologies is the elimination of the slip rings and auxiliary windings by using the already existing stator and rotor winding for field excitation. This is achieved by injecting a harmonic airgap field with the help of power electronics. This harmonic field does not interact with the fundamental field, it just transfers the excitation power across the airgap. Alternative methods with varying number of phases, different pole-pair combinations, and winding layouts are covered and compared with a detailed Finite-Element-parameterized model. Parasitic effects due to saturation and coupling between the harmonic and main windings are considered.
Massive data transfers in modern data-intensive systems resulting from low data-locality and data-to-code system design hurt their performance and scalability. Near-Data processing (NDP) and a shift to code-to-data designs may represent a viable solution as packaging combinations of storage and compute elements on the same device has become feasible. The shift towards NDP system architectures calls for revision of established principles. Abstractions such as data formats and layouts typically spread multiple layers in traditional DBMS, the way they are processed is encapsulated within these layers of abstraction. The NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ executions optimally utilizing the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under RocksDB and the COSMOS hardware platform.
Continuous monitoring of individual vital parameters can provide information for the assessment of one’s health and indications of medical problems in the context of personalized medicine. Correlations between parameters and health issues are to be evaluated. As one project in this topic area, a telemedicine platform is implemented to gather data of outpatients via wearables and accumulate them for physicians and researchers to review. This work extracts requirements, draws use case scenarios, and shows the current system architecture consisting of a patient application, a physician application with a web server, and a backend server application. In further work, the prototype will assist to develop a vendor-free and open monitoring solution. A conclusion on functionality and usability will be evaluated in an imminent first study.
As consumer awareness surrounding the impacts of the climate crisis continues to grow, businesses are searching for new models to improve their sustainability profile. As a result, the implementation of a company’s sustainability vision following the SDGs has to be linked closely to the integration of customers into strategic action. One success factor is the management of customers over their entire life cycle. The Customer Journey serves as a model to systematise this approach, by designing touchpoints throughout the purchasing process in order to motivate consumers to act sustainably. Based on behaviour models, the authors develop recommendations for the food industry to design a sustainable Customer Journey that helps to narrow the gap between consumers’ reported positive attitudes toward sustainable products and their actual behaviour.
Imagine a world in which the search for tomorrow's trends of (software) products is not subject to a long and laborious data search but is possible with a single mouse click. Through the use of artificial intelligence (AI), this reality is made possible and is to be further advanced through research. The study therefore aims to provide an initial overview of the young research field. Based on research, expert interviews, company and student surveys, current application possibilities of AI in the innovation process (defined as Smart Innovation), existing challenges that slow down the further development are discussed in more detail and future application possibilities are presented. Finally, a recommendation for action is made for business, politics and science to help overcome the current obstacles together and thus drive the future of Smart Innovation.
Imagine a world in which the search for tomorrow's trends is not subject to a long and laborious data search but is possible with a single mouse click. Through the use of artificial intelligence (AI), this reality is made possible and is to be further advanced through research. The study therefore aims to provide an initial overview of the young research field. Based on research, expert interviews, company and student surveys, current application possibilities of AI in the innovation process (defined as Smart Innovation), existing challenges that slow down the further development are discussed in more detail and future application possibilities are presented. Finally, a recommendation for action is made for business, politics and science to help overcome the current obstacles together and thus drive the future of Smart Innovation.
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. It is difficult to ensure data quality for the properties and the data structure because the model does not require a schema. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from hyper-nodes and hyper-edges, which make it possible to present data structures at different abstraction levels. We prove that the model is at least equivalent in expressive power to the most popular data models. Therefore, it can be used as a supermodel for model management and data integration. We illustrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, XML model, and RDF Schema.
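The schema-bound typing the abstract describes can be illustrated with a minimal sketch. All names below are hypothetical, and hyper-nodes, hyper-edges and labels on edges are not covered; this is an illustration of schema-validated properties, not the authors' implementation:

```python
# Minimal sketch of a schema-bound (typed) property graph.
# Illustrative only: node types declare which properties are
# allowed and which Python type each property must have.

class GraphSchema:
    def __init__(self):
        self.node_types = {}            # label -> {property name: type}

    def define_node(self, label, properties):
        self.node_types[label] = properties


class TypedGraph:
    def __init__(self, schema):
        self.schema = schema
        self.nodes = {}                 # node id -> (label, properties)

    def add_node(self, node_id, label, props):
        expected = self.schema.node_types.get(label)
        if expected is None:
            raise ValueError(f"unknown node type: {label}")
        for key, value in props.items():
            if key not in expected:
                raise ValueError(f"property {key!r} not allowed on {label}")
            if not isinstance(value, expected[key]):
                raise TypeError(f"property {key!r} must be {expected[key].__name__}")
        self.nodes[node_id] = (label, props)


schema = GraphSchema()
schema.define_node("Person", {"name": str, "age": int})

g = TypedGraph(schema)
g.add_node("n1", "Person", {"name": "Ada", "age": 36})   # accepted
# g.add_node("n2", "Person", {"age": "old"})             # would raise TypeError
```

In a schema-free property graph both inserts would succeed silently; the typed variant rejects the second one, which is the data-quality benefit the abstract refers to.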
Deep learning-based EEG detection of mental alertness states from drivers under ethical aspects
(2021)
One of the most critical factors for a successful road trip is a high degree of alertness while driving. Even a split second of inattention or sleepiness at a crucial moment can make the difference between life and death. Several prestigious car manufacturers are currently pursuing the aim of automated drowsiness identification to resolve this problem. The path between neuroscientific research in connection with artificial intelligence and the preservation of the dignity of human individuals and their inviolability is very narrow. The key contribution of this work is a system for analyzing EEG data recorded during a driving session, which draws on previous studies analyzing heart rate (ECG), brain waves (EEG), and eye function (EOG). The gathered data is treated as sensitively as possible, taking ethical regulations into consideration. Obtaining evaluable signs of evolving exhaustion involves techniques that extract sleep-stage frequencies; the correlated interferences in the signal are problematic here. This research focuses on a processing chain for EEG band splitting that involves band-pass filtering, principal component analysis (PCA), independent component analysis (ICA) with automatic artefact removal, and the fast Fourier transform (FFT). The classification is based on a step-by-step adaptive deep learning analysis that detects theta rhythms as a drowsiness predictor in the pre-processed data. An offline detection rate of 89% and an online detection rate of 73% were achieved. The method is tied to the simulated driving scenario for which it was developed, which leaves room for further optimization of laboratory methods and data collection during wakefulness-dependent operations.
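As a minimal illustration of one step in such a processing chain, the following sketch estimates the fraction of signal power in the theta band (4-8 Hz) via FFT. All parameters and the threshold are assumptions for illustration; the band-pass, PCA, ICA and deep learning stages of the actual pipeline are omitted:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Fraction of total spectral power in the [lo, hi) frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo) & (freqs < hi)
    return spectrum[band].sum() / spectrum.sum()

fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1.0 / fs)              # 4-second analysis window
rng = np.random.default_rng(0)

# Synthetic EEG: a 6 Hz theta oscillation plus broadband noise.
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(len(t))

theta_ratio = band_power(eeg, fs, 4.0, 8.0)
drowsy = theta_ratio > 0.5                 # illustrative threshold, not the paper's
```

On this synthetic signal most of the power falls into the theta band, so the illustrative threshold flags drowsiness; real EEG would first pass through artefact removal before such a ratio is meaningful.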
For over 12 years now, Informatics Inside has taken place as a computer science conference at Reutlingen University, this year for the second time on a semi-annual rhythm, i.e. also in autumn. This scientific conference of the Master's programme Human-Centered Computing is organized and run by the students themselves. During their Master's studies they get the opportunity to go into depth in a subject of their own choosing. This can be done at the university, in a company, at a research institute, or abroad. It is precisely this flexible design of the module "Wissenschaftliche Vertiefung" (scientific specialization) that leads to the very broad range of topics covered by the students. Beyond the actual subject-matter specialization, the presentation and defence of scientific results also plays an important role, and does so far beyond one's studies. Preparing and communicating a chosen subject area in such a generally understandable way that it becomes accessible even to non-specialists is always a particular challenge. The students take on this challenge at the autumn conference on scientific specialization on 24 November 2021. For the fourth time, the event will be held online, including a virtual accompanying programme.
The range of topics at this year's autumn conference is once again very diverse and wide-ranging. Contributions await you from the health sector, machine learning, AI and VR, as well as marketing and e-learning, among others. What they all have in common is a very strong connection to innovative computer science approaches, which is also reflected in the conference's pun and motto "RockIT Science". Computer science permeates almost all professional and private areas of application and has an ever greater influence on our daily lives. This can trigger concern on the one hand and enthusiasm on the other. It is precisely the latter that the students want to achieve with their contributions, letting things "rock" in the computer science sector for once.
Context-aware systems to support actors in the operating room depending on the status of the intervention require knowledge about the current situation in the intra-operative area. In the literature, solutions to achieve situation awareness already exist for specific use cases, but applicability and transferability to other conditions are less addressed. It is assumed that a unified solution that can be adapted to different processes and sensors would allow for greater flexibility, applicability, and thus transferability to different applications. To enable a flexible and intervention-independent system, this work proposes a concept for an adaptable situation recognition system. The system consists of four layers with several modular components for different functionalities. The feasibility is demonstrated via a prototypical implementation and functional evaluation of a first basic framework prototype. A further development goal is the stepwise extension of the prototype.
Silicon photonic micro-ring resonators (MRR) developed on the silicon-on-insulator (SOI) platform, owing to their high sensitivity and small footprint, show great potential for many chemical and biological sensing applications such as label-free detection in environmental monitoring, biomedical engineering, and food analysis. In this tutorial, we provide the theoretical background and give design guidelines for SOI-based MRR as well as examples of surface functionalization procedures for label-free detection of molecules. After introducing the advantages and perspectives of MRR, fundamentals of MRR are described in detail, followed by an introduction to the fabrication methods, which are based on a complementary metal-oxide semiconductor (CMOS) technology. Optimization of MRR for chemical and biological sensing is provided, with special emphasis on the optimization of waveguide geometry. At this point, the difference between chemical bulk sensing and label-free surface sensing is explained, and definitions like waveguide sensitivity, ring sensitivity, overall sensitivity as well as the limit of detection (LoD) of MRR are introduced. Further, we show and explain chemical bulk sensing of sodium chloride (NaCl) in water and provide a recipe for label-free surface sensing.
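The quantities defined in the tutorial can be illustrated with standard back-of-the-envelope relations, such as the free spectral range FSR = λ²/(n_g·L) and the limit of detection LoD = spectral resolution / overall sensitivity. The numbers below are assumptions for illustration, not values from the tutorial:

```python
import math

# Assumed, illustrative parameters for an SOI micro-ring resonator (MRR).
wavelength = 1.55e-6        # operating wavelength in m (C band)
group_index = 4.2           # group index n_g of the SOI waveguide
radius = 10e-6              # ring radius in m

# Free spectral range: spacing between adjacent resonances.
circumference = 2 * math.pi * radius
fsr = wavelength ** 2 / (group_index * circumference)   # in m, here ~9.1 nm

# Limit of detection: smallest resolvable refractive index change (in RIU),
# given an overall sensitivity and the readout's spectral resolution.
sensitivity = 70.0          # overall sensitivity in nm per RIU (assumed)
resolution = 1e-3           # spectral resolution in nm (1 pm, assumed)
lod = resolution / sensitivity                          # in RIU
```

With these assumed values the LoD comes out around 1.4e-5 RIU, which shows how a finer spectral resolution or a higher overall sensitivity directly lowers the detectable index change.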
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to used development methods and practices. Our findings suggest that only a small number of participants operate their projects in a purely traditional or agile manner (under 15%). That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
Prior to the introduction of AI-based forecast models in the procurement department of an industrial retail company, we assessed the digital skills of the procurement employees and surveyed their attitudes toward a new digital technology. The aim of the survey was to ascertain important contextual factors which are likely to influence the acceptance and the successful use of the new forecast tool. What we find is that the digital skills of the employees show an intermediate level and that their attitudes toward key aspects of new digital technologies are largely positive. Thus, the conditions for high acceptance and the successful use of the models are good, as evidenced by the high intention of the procurement staff to use the models. In line with previous research, we find that the perceived usefulness of a new technology and the perceived ease of use are significant drivers of the willingness to use the new forecast tool.
Forecasting demand is challenging. Various products exhibit different demand patterns. While demand may be constant and regular for one product, it may be sporadic for another, and when demand does occur, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today a large number of models are run against a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper we show that a machine learning classification algorithm can instead predict the best possible model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The machine learning classification algorithm achieves a mean ROC-AUC of 89%, which emphasizes the skill of the model.
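The model-selection idea can be sketched as follows: derive characteristics of a demand time series and map them to a candidate forecasting model. The paper trains a machine learning classifier on such features; in this sketch a simple rule using the well-known Syntetos-Boylan demand-pattern cut-offs (ADI 1.32, CV² 0.49) stands in for it, and the pattern-to-model mapping is hypothetical:

```python
def classify_demand(series):
    """Classify a demand series by its average demand interval (ADI)
    and squared coefficient of variation (CV^2) of nonzero demand."""
    nonzero = [x for x in series if x > 0]
    adi = len(series) / len(nonzero)                 # average demand interval
    mean = sum(nonzero) / len(nonzero)
    var = sum((x - mean) ** 2 for x in nonzero) / len(nonzero)
    cv2 = var / mean ** 2                            # squared coeff. of variation
    if adi < 1.32 and cv2 < 0.49:
        return "smooth"
    if adi >= 1.32 and cv2 < 0.49:
        return "intermittent"
    if adi < 1.32:
        return "erratic"
    return "lumpy"

# Hypothetical mapping from demand pattern to forecasting model.
MODEL_BY_PATTERN = {
    "smooth": "exponential smoothing",
    "intermittent": "Croston's method",
    "erratic": "gradient boosting",
    "lumpy": "deep learning model",
}

demand = [10, 0, 0, 12, 0, 0, 9, 0, 0, 11]           # sporadic demand history
pattern = classify_demand(demand)                    # -> "intermittent"
best_model = MODEL_BY_PATTERN[pattern]
```

The benefit over trial forecasting on a test period is that only the selected model needs to be fitted, which is the economy argument the abstract makes.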
At literally the last minute, the British government and the European Union agreed on a comprehensive accord to prevent an unregulated Brexit. After years of tough marathon negotiations the jubilation is muted, yet there is relief on both sides of the English Channel that a modus vivendi has been found on which future relations can be built and continued. Whether the English pipe dreams attached to Brexit will be fulfilled remains to be seen.
The strategy and tactics of the British governments on Brexit and in the exit negotiations mirror the experiences Friedrich List had exactly 175 years ago in his efforts to forge a German-English alliance. Because of the insular and trade supremacy that England strictly pursued even then, he had to concede that England would defend this position tenaciously, and, frustrated and disillusioned, he gave up his plans. He therefore placed his hopes in a "continental alliance" of the European nations, such as has now come into being after the United Kingdom's exit from the European Union. Perhaps we will now have to get used to the term "continental alliance" and be reminded of Friedrich List's foresight.
On the other hand, the motto of List's second Paris prize essay also applies to British policy: "Le monde marche - the world moves on", albeit under completely different circumstances than 175 years ago: the axis of world trade has shifted from the western to the eastern hemisphere; the British Empire is history; the pace of global change has accelerated dramatically; and despite the lingua franca, England appears, especially from an Asian perspective, as no more than a small dot on the world map. Should the Scottish government push through its intention and achieve independence from the United Kingdom, Brexit would prove a fateful boomerang.
Digitalization and mediatization shape society, and with it adult and continuing education. This contribution explores the question of how digitalization succeeds in adult and continuing education offerings, with a focus on the use of digital media. To this end, the development of offerings for target groups and participants, media-related content, teaching and learning arrangements with digital media, the use of digital media, and the accessibility of teaching and learning materials are identified as relevant characteristics. Overall, the analyzed interview data show that the use of digital media in such offerings represents an expansion of didactic tasks, since offerings with digital media must be precisely tailored to the needs and possibilities of target groups and participants.
The hearing contact lens (HCL) is a new type of hearing aid device. One of its main components is a piezoelectric actuator (PEA). In order to evaluate and maximize the HCL's performance, a model of the HCL coupled to the middle ear was developed using a finite element (FE) approach. To validate the model, vibrational measurements on the HCL and temporal bones were performed using a Laser Doppler vibrometer (LDV). The model was validated step by step, starting with the HCL only. Then a silicone cap was fitted onto the HCL to provide an interface between the HCL and the tympanic membrane. The HCL was placed on the tympanic membrane and additional measurements were performed to validate the coupled model. The model was used to evaluate the sensitivity of geometrical and material parameters with respect to performance measures of the HCL. Moreover, deeper insight was gained into the feedback behavior, which causes whistling sounds, and into the contact between the HCL and the tympanic membrane.
This paper presents a permanent magnet tubular linear generator system for powering passive sensors by harvesting vertical vibration energy. The system consists of a permanent magnet tubular linear vibration generator and electric circuits. By using mechanically resonant movers, the generator is capable of converting low-frequency, small-amplitude vertical vibration energy into more regular sinusoidal electrical energy. The distribution of the magnetic field and the electromotive force are calculated by finite element analysis, and the characteristics of the linear vibration generator system are observed. The experimental results show the generator can produce about 0.4 W to 1.6 W of electrical power when the vibration source's amplitude is fixed at 2 mm and the frequency lies between 13 Hz and 22 Hz.
Cloud-based management information systems are a fairly recent phenomenon in management accounting. Quite a few companies (and especially their business managers and management accountants) do not yet work via the cloud; instead they rely on hybrid or on-premise solutions of ERP software such as SAP or Oracle, and often still on "manual" solutions such as Microsoft Excel.
This paper takes a holistic view of an IP-traceability process in interorganizational R&D projects, understood as a particular open innovation mode. It aims to show different technologies that can be used in the front end and back end of a traceability process and to discuss these technologies in terms of their suitability for data from creativity processes in such projects. To achieve this goal, a two-stage literature review of different technologies in the context of traceability was conducted. Criteria were then derived from the characteristics of data from creativity processes and of interorganizational R&D projects, against which the resulting technologies were discussed. Finally, recommendations regarding suitable technologies for tracing individual creativity artifacts in interorganizational R&D projects are given.
The present study shows that the topic of smart innovation (the use of AI systems in the innovation process) is highly relevant and that there is approval for the use of AI in the innovation process. Both companies and students cite efficiency gains, faster processing of large data volumes, increased competitiveness and cost savings as reasons for using AI in the innovation process. In Germany, AI technologies are already being applied selectively, across industries, in the innovation process. Enabling factors such as university cooperations, innovation departments and open innovation can promote their use. SMEs from the early phases of industrialization in particular should take advantage of this. The secret of an innovation process that is as efficient as possible lies in the interplay of human expertise and the fast, precise data processing of AI. It becomes clear that various enabling factors are required to make the application of smart innovation practicable. First, the technical prerequisites of a functioning IT infrastructure must be met. Equally important are open questions regarding data availability, data ownership and data security. Without a legal framework, few actors are willing to share their data and make it accessible. The use of AI is further hampered by the national shortage of IT specialists. Both companies and students see the greatest obstacle in the lack of AI-relevant know-how. This hampers research on the one hand; on the other hand, companies lack the specialists required to introduce AI. It is nevertheless necessary to convey the potentials and opportunities of smart innovation to companies by presenting application examples.
Application-oriented research must be promoted and a smooth transfer to industry ensured. This knowledge exchange also requires a greater entrepreneurial willingness to take risks. There is a growing need to design company-specific AI strategies. The technologies are developing rapidly, so companies must adapt to this progress in order not to fall behind and to secure their competitiveness. The greatest challenge lies in the fundamental transformation of business models, as the value creation of successful companies is increasingly based on digital assets. Data are generally regarded as the new resource, the raw material, including for smart innovations. The importance of smart innovation will continue to grow in the future. In the short and medium term, weak AI will mainly support data collection and analysis, process automation, and the identification of needs and trends. Furthermore, incremental improvements in innovation management are expected from simulations and the random combination of technologies. In the long term, stronger AI will be able to partially replace human involvement in the innovation process. Whether autonomous innovation will be possible in the future depends first on the degree of novelty of an innovation, but above all on the possibility of a creative AI. It can be assumed that advances in AI will not only enable radical innovations but will also lead to a structural change in our current understanding of innovation management.
The Friedrich-List-Gesellschaft (FLG) was founded in 1925 under the most adverse economic and political conditions and continued its work until 1934. Its primary purpose was to collect the widely scattered, hard-to-access and in many cases unknown writings, speeches and letters of Friedrich List (1789–1846) and to publish them in the form of a complete edition.
Neither this 10- or 12-volume complete edition nor the names of its editors have received due appreciation and attention in economics. The present contribution repays this long-overdue debt of gratitude after almost 100 years. Without the committed and courageous efforts of the editors, in particular Edgar Salin, List research would be inconceivable and German economics would be the poorer by one glorious chapter.
For five decades, research on the life, work and historical influence of Friedrich List (1789–1846) has been at the center of Eugen Wendler's scholarly work. Over this time, some 30 monographs and a large number of scholarly essays and journalistic articles have emerged. In doing so, Eugen Wendler built on the invaluable preparatory work of the editors of the complete edition of List's works from 1925 to 1935.
The present essay provides an overview of Eugen Wendler's book publications on List research. With his impressive oeuvre he professes himself the last living fossil in the succession of the FLG, and thereby pays the editors the due and long-overdue appreciation and respect.
In various German cities, free-floating e-scooter sharing is an emerging trend in e-mobility. Developments such as climate change, urbanization and demographic change are forcing society to develop new mobility solutions. In contrast to the more extensively studied car sharing, the usage patterns and behaviors of e-scooter sharing customers still need to be analyzed. Such analysis should enable better customer targeting as well as adaptations of the business model to increase scooter utilization and thus the profit of e-scooter providers. The customer journey is digitally traceable from registration to scooter reservation and the ride itself. These data make it possible to identify customer needs and motivations. We analyzed a dataset from 2017 to 2019 of an e-scooter sharing provider operating in a large German city. Based on this dataset, we propose a customer clustering that identifies three different customer segments, allowing multiple conclusions to be drawn for business development and for improving the problem-solution fit of the e-scooter sharing model.
Forecasting intermittent and lumpy demand is challenging. Demand occurs only sporadically and, when it does, it can vary considerably. Forecast errors are costly, resulting in obsolescent stock or unmet demand. Methods from statistics, machine learning and deep learning have been used to predict such demand patterns. Traditional accuracy metrics are often employed to evaluate the forecasts; however, these come with major drawbacks, such as not taking horizontal and vertical shifts over the forecasting horizon into account, and ignoring stock-keeping and opportunity costs. This results in a disadvantageous selection of methods in the context of intermittent and lumpy demand forecasts. In our study, we compare methods from statistics, machine learning and deep learning by applying a novel metric called Stock-keeping-oriented Prediction Error Costs (SPEC), which overcomes the drawbacks associated with traditional metrics. Under the SPEC metric, the Croston algorithm achieves the best result, just ahead of a long short-term memory neural network.
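To illustrate the drawback the abstract describes, here is a minimal sketch. It is not the published SPEC formula; the cost weights and demand series are invented for illustration. A traditional metric such as MAE scores an early and a late forecast identically, while a cumulative, stock-keeping-oriented cost view separates them:

```python
# Illustrative only: a simplified cumulative cost comparison in the
# spirit of SPEC. This is NOT the published SPEC formula.

def mae(actual, forecast):
    """Mean absolute error: blind to when a shift occurs."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def shifted_cost(actual, forecast, alpha_keep=0.5, alpha_opp=1.0):
    """Penalize the running mismatch between cumulative forecast and
    cumulative demand: surplus incurs stock-keeping cost, deficit
    incurs (here more expensive) opportunity cost."""
    cost = cum_a = cum_f = 0.0
    for a, f in zip(actual, forecast):
        cum_a += a
        cum_f += f
        diff = cum_f - cum_a
        cost += alpha_keep * diff if diff > 0 else alpha_opp * (-diff)
    return cost / len(actual)

actual     = [0, 0, 5, 0, 0, 0, 5, 0]
early_fcst = [0, 5, 0, 0, 0, 5, 0, 0]   # predicts demand one period early
late_fcst  = [0, 0, 0, 5, 0, 0, 0, 5]   # predicts demand one period late

# MAE cannot distinguish the two shifts (both score 2.5) ...
assert mae(actual, early_fcst) == mae(actual, late_fcst) == 2.5
# ... but the cumulative cost view can: early means cheap extra stock,
# late means lost sales.
print(shifted_cost(actual, early_fcst))  # → 0.625
print(shifted_cost(actual, late_fcst))   # → 1.25
```

The asymmetry between `alpha_keep` and `alpha_opp` mirrors the practical intuition that unmet demand is usually costlier than holding stock a little longer.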
Omnichannel retailing and sustainability are two important challenges for the fast fashion industry. However, the sustainable behavior of fast fashion consumers in an omnichannel environment has not received much attention from researchers. This paper aims to examine the factors that determine consumers’ willingness to participate in fast fashion brands’ used clothes recycling plans in an omnichannel retail environment. In particular, we examine the impact of individual consumer characteristics (environmental attitudes, consumer satisfaction), organizational arrangements constitutive for omnichannel retailing (channel integration), and their interplay (brand identification, impulsive consumption). A conceptual model was developed based on findings from previous research and tested on data that were collected online from Chinese fast fashion consumers. Findings suggest that consumers’ intentions for clothes recycling are mainly determined by individual factors, such as environmental attitudes and consumer satisfaction. Organizational arrangements (perceived channel integration) showed smaller effects. This study contributes to the literature on omnichannel (clothing) retail, as well as on sustainability in the clothing industry, by elucidating individual and organizational determinants of consumers’ recycling intentions for used clothes in an omnichannel environment. It helps retailers to organize used clothes recycling plans in an omnichannel environment and to motivate consumers to participate in them.
Digitalization increases the pressure for companies to innovate. While current research on digital transformation mostly focuses on technological and management aspects, less attention has been paid to organizational culture and its influence on digital innovations. The purpose of this paper is to identify the characteristics of organizational culture that foster digital innovations. Based on a systematic literature review on three scholarly databases, we initially found 778 articles that were then narrowed down to a total number of 23 relevant articles through a methodical approach. After analyzing these articles, we determine nine characteristics of organizational culture that foster digital innovations: corporate entrepreneurship, digital awareness and necessity of innovations, digital skills and resources, ecosystem orientation, employee participation, agility and organizational structures, error culture and risk-taking, internal knowledge sharing and collaboration, customer and market orientation as well as open-mindedness and willingness to learn.
The early detection of head and neck cancer remains a challenging task. It requires precise and accurate identification of tissue alterations as well as a clear discrimination of cancerous from healthy tissue areas. A novel approach for this purpose uses microspectroscopic techniques with a special focus on hyperspectral imaging (HSI) methods. Our proof-of-principle study presents the implementation and application of darkfield elastic light scattering spectroscopy (DF ELSS) as a non-destructive, high-resolution, and fast imaging modality to distinguish healthy from altered lingual tissue regions in a mouse model. The main aspect of our study is the comparison of two different HSI detection principles, point-by-point and line scanning imaging, and whether one is more appropriate for differentiating several tissue types. Statistical models are formed by applying a principal component analysis (PCA) with Bayesian discriminant analysis (DA) to the elastic light scattering (ELS) spectra. Overall accuracy, sensitivity, and precision values of 98% are achieved for both models, while the overall specificity reaches 99%. An additional classification of model-unknown ELS spectra is performed. The predictions are verified with histopathological evaluations of identical HE-stained tissue areas to prove the model's capability of tissue distinction. In the context of our proof-of-principle study, we assess the pushbroom PCA-DA model to be more suitable for tissue type differentiation and thus tissue classification. In addition to the HE examination in head and neck cancer diagnosis, the use of HSI-based statistical models might be conceivable in daily clinical routine.
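As a rough illustration of this modelling pipeline, the following pure-Python sketch projects synthetic two-class "spectra" onto the leading principal component (found by power iteration) and classifies by nearest class centroid. The centroid rule is a simplified stand-in for the Bayesian discriminant analysis used in the study, and all data here are invented:

```python
# Minimal PCA + centroid-classification sketch on synthetic "spectra".
# Nearest-centroid stands in for the study's Bayesian DA; data invented.

def mean(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def center(rows, mu):
    return [[x - m for x, m in zip(r, mu)] for r in rows]

def covariance(rows):
    n, d = len(rows), len(rows[0])
    return [[sum(r[i] * r[j] for r in rows) / (n - 1) for j in range(d)]
            for i in range(d)]

def leading_eigenvector(cov, iters=200):
    """Power iteration: repeatedly apply the matrix and normalize."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v)))
             for i in range(len(cov))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(r, mu, v):
    return sum((x - m) * c for x, m, c in zip(r, mu, v))

# synthetic two-class spectra (healthy vs altered), 3 "wavelength bins"
healthy = [[1.0, 0.2, 0.1], [1.1, 0.1, 0.2], [0.9, 0.2, 0.2]]
altered = [[0.2, 1.0, 0.1], [0.1, 1.1, 0.2], [0.2, 0.9, 0.1]]

data = healthy + altered
mu = mean(data)
v = leading_eigenvector(covariance(center(data, mu)))

c_h = sum(project(r, mu, v) for r in healthy) / len(healthy)
c_a = sum(project(r, mu, v) for r in altered) / len(altered)

def classify(spectrum):
    s = project(spectrum, mu, v)
    return "healthy" if abs(s - c_h) < abs(s - c_a) else "altered"

print(classify([1.0, 0.15, 0.15]), classify([0.15, 1.0, 0.15]))
```

Because both centroids are computed with the same eigenvector, the sign ambiguity of power iteration does not affect the classification.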
Gold bipyramids (AuBPs) attract significant attention due to the large enhancement of the electric field around their sharp tips and well-defined tunability of their plasmon resonances. Excitation patterns of single AuBPs are recorded using raster-scanning confocal microscopy combined with radially and azimuthally polarized laser beams. Photoluminescence spectra (PL) and excitation patterns of the same AuBPs are acquired with three different excitation wavelengths. The isotropic excitation patterns suggest that the AuBPs are mainly excited by interband transitions with 488/530 nm radiation, while excitation patterns created with a 633 nm laser exhibit a double-lobed shape that indicates a single-dipole excitation process associated with the longitudinal plasmon resonance mode. We are able to determine the three-dimensional orientation of single AuBPs nonperturbatively by comparing experimental patterns with theoretical simulations. The asymmetric patterns show that the AuBPs are lying on the substrate with an out-of-plane tilt angle of around 10–15°.
Today, many industrial tasks are not automated and still require human intervention. One of these tasks is the unloading of overseas containers: after arrival at the sorting center, the containers must be unloaded manually before the parcels can be forwarded to the recipients. Robot-based automatic unloading of containers has therefore been researched in previous projects. However, the promising results of the system developed in these projects could not be commercialized due to problems with its reliability. Mechanical, algorithmic or other limitations are possible causes of the observed errors. To analyze the errors, it is necessary to evaluate the results of the robot's work without complicating the existing system by adding new sensors. This paper presents a reference system based on machine learning to evaluate the robotic grasps of parcels. It analyzes two states of the container: before and after picking up one box. The states are represented as point clouds received from a laser scanner. The proposed system evaluates the success of transferring a box from an overseas container to the sorting line by supervised learning, using convolutional neural networks (CNN) and manual labeling of the data. The process of obtaining a working model using a hyperband model search, with a maximum classification error of 3.9%, is also described.
Focal adhesion clusters (FAC) are dynamic and complex structures that help cells to sense the physicochemical properties of their environment. Research in biomaterials, cell adhesion or cell migration often involves the visualization of FAC by fluorescence staining and microscopy, which necessitates quantitative analysis of FAC and other cell features in microscopy images using image processing. Fluorescence microscopy images of human umbilical vein endothelial cells (HUVEC) obtained at 63x magnification were quantitatively analysed using the ImageJ software. A generalised algorithm for selective segmentation and morphological analysis of FAC, nucleus and cell morphology is implemented. Furthermore, a method for discriminating FAC near the nucleus from those around the periphery is implemented using masks. Our algorithm is able to effectively quantify different morphological characteristics of cell components and shows high sensitivity and specificity while providing a modular software implementation.
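The masking idea can be illustrated with a toy sketch: threshold a small intensity grid, label connected bright spots as stand-ins for FAC, and split them into near-nucleus and peripheral groups by distance from an assumed nucleus centre. The study itself uses ImageJ on real images; the grid, threshold, nucleus position and radius below are all invented:

```python
# Toy FAC-masking sketch: threshold, 4-connected labelling, and a
# circular near-nucleus mask. Illustrative only; values are invented.
from collections import deque

grid = [
    [0, 9, 0, 0, 0, 0, 0, 0],
    [0, 9, 0, 0, 5, 5, 0, 0],
    [0, 0, 0, 0, 5, 5, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 8],
    [0, 7, 0, 0, 0, 0, 8, 8],
]
THRESHOLD = 4        # intensity cut-off for "stained" pixels
NUCLEUS = (2, 4)     # assumed nucleus centre (row, col)
RADIUS = 2.0         # mask radius separating "near" from "peripheral"

def label_spots(img):
    """4-connected component labelling by breadth-first search."""
    h, w = len(img), len(img[0])
    seen, spots = set(), []
    for r in range(h):
        for c in range(w):
            if img[r][c] > THRESHOLD and (r, c) not in seen:
                comp, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and
                                img[ny][nx] > THRESHOLD and
                                (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                spots.append(comp)
    return spots

def centroid(comp):
    return (sum(p[0] for p in comp) / len(comp),
            sum(p[1] for p in comp) / len(comp))

spots = label_spots(grid)
near = [s for s in spots
        if ((centroid(s)[0] - NUCLEUS[0]) ** 2 +
            (centroid(s)[1] - NUCLEUS[1]) ** 2) ** 0.5 <= RADIUS]
print(len(spots), len(near))  # → 4 1
```

Per-spot areas and centroids computed this way correspond to the morphological measurements ImageJ's particle analysis provides on real images.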
This paper presents a modular and scalable power electronics concept for motor control with continuous output voltage. In contrast to multilevel concepts, modules with continuous output voltage are connected in series. The continuous output voltage of each module is obtained by using gallium nitride (GaN) high electron mobility transistors (HEMTs) as switches inside the modules, with a switching frequency in the range between 500 kHz and 1 MHz. Due to this high switching frequency, an LC filter is integrated into the module, resulting in a continuous output voltage. A main topic of the paper is the active damping of this LC output filter for each module and the analysis of the damping behaviour of the series connection. The results are illustrated with simulations and measurements.
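Why an LC output filter needs active damping can be sketched in a few lines. The simulation below uses normalized units (L = C = 1) and emulates one common damping technique, a virtual series resistor in the current feedback; the paper's actual damping scheme and parameter values are not specified here:

```python
# Hedged sketch: explicit-Euler step response of an LC output filter,
# with active damping emulated as a virtual series resistor R_v.
# Normalized units (L = C = 1); all values illustrative.

def step_response(r_virtual, dt=0.01, t_end=30.0, v_in=1.0):
    i_l = v_c = 0.0          # inductor current, capacitor voltage
    peak = 0.0
    for _ in range(int(t_end / dt)):
        # L di/dt = v_in - v_c - R_v * i_l ;  C dv/dt = i_l (no load)
        di = (v_in - v_c - r_virtual * i_l) * dt
        dv = i_l * dt
        i_l += di
        v_c += dv
        peak = max(peak, v_c)
    return peak, v_c

peak_undamped, _ = step_response(0.0)       # pure LC: sustained ringing
peak_damped, final = step_response(1.0)     # virtual resistor tames it
print(round(peak_undamped, 2), round(peak_damped, 2))
```

Without damping the output rings to roughly twice the input step, while the virtual resistor keeps the overshoot small and lets the voltage settle at the commanded value, which is the behaviour an active damping loop has to provide in each series-connected module.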
Context: The manufacturing industry is facing a transformation with regard to Industry 4.0 (I4). A transformation towards full automation of production including a multitude of innovations is necessary. Startups and entrepreneurial processes can support such a transformation as has been shown in other industries. However, I4 has some specifics, so it is unclear how entrepreneurship can be adapted in I4. Understanding these specifics is important to develop suitable training programs for I4 startups and to accelerate the transformation.
Objective: This study identifies and outlines the essential characteristics and constraints of entrepreneurial processes in I4.
Method: 14 semi-structured interviews were conducted with experts in the field of I4 entrepreneurship. The interviews were analysed and categorized using qualitative analysis.
Results: The interviews revealed several characteristics of I4 that have a significant impact on the various phases of the entrepreneurial process. Examples of such specifics include the difficult access to customers, the necessary deep understanding of the customer and the domain, the difficulty of testing risky assumptions, and the complex development and productization of solutions. The complexity of hardware and software components, cost structures, and necessary customer-specific customizations affect the scalability of I4 startups. These essential characteristics also require specialised skills and resources from I4 startups.
Soft lithography, a tool with numerous applications in biology and the life sciences, uses the soft molding of photolithography-generated master structures by polymers. The central part of a photolithography set-up is a mask aligner, mostly based on a high-pressure mercury lamp as an ultraviolet (UV) light source. This type of light source requires a high level of maintenance and shows decreasing intensity over its lifetime, influencing the lithography outcome. In this paper, we present a low-cost, bench-top photolithography tool based on ninety-eight 375 nm light-emitting diodes (LEDs). At approx. 10 W, our lithography set-up requires only a fraction of the energy of a conventional lamp; the LEDs have a guaranteed lifetime of 1000 h, which translates into at least 2.5 to 15 times more exposure cycles compared to a standard light source; and with costs of less than €850 it is very affordable. Such a set-up is attractive not only to small academic and industrial fabrication facilities that want to work with photolithography but cannot afford a conventional set-up; microfluidic teaching laboratories and microfluidic research and development laboratories in general could also benefit from this cost-effective alternative. With our self-built photolithography system, we were able to produce structures from 6 μm to 50 μm in height and 10 μm to 200 μm in width. As an optional feature, we present a scaled-down laminar flow hood to enable a dust-free working environment for the photolithography process.
This book provides an introduction to the fundamentals of software engineering. Its focus is on systematic, model-based software and systems development as well as on the use of agile methods. The authors place particular emphasis on treating practical aspects and the underlying theories equally, which makes the book equally suitable as a reference and as a textbook. Software engineering is described comprehensively within a systematic framework. Selected and coordinated concepts and methods are presented consistently and in an integrated manner.
Software is an integral part of new features in the automotive sector. To determine software quality, car manufacturers in the Hersteller Initiative Software (HIS) consortium have defined a set of metrics. Yet, problems with assigning metrics to quality attributes often occur in practice. The specified boundary values lead to discussions between contractors and clients, as different standards and metric sets are used. This paper studies metrics used in the automotive sector and the quality attributes they address. The HIS, ISO/IEC 25010:2011, and ISO/IEC 26262:2018 are utilized to draw a big picture illustrating (i) which metrics and boundary values are reported in the literature, (ii) how the metrics match the standards, (iii) which quality attributes are addressed, and (iv) how the metrics are supported by tools. Our findings from analyzing 38 papers include a catalog of 112 metrics, of which 17 define boundary values and 48 are supported by tools. Most of the metrics are concerned with source code, are generic, and are not specifically designed for automotive software development. We conclude that many metrics exist, but a clear definition of the metrics' context, notably regarding the construction of flexible and efficient measurement suites, is missing.
Context:
Test-driven development (TDD) is an agile software development approach that has been widely claimed to improve software quality. However, the extent to which TDD improves quality appears to be largely dependent upon the characteristics of the study in which it is evaluated (e.g., the research method, participant type, programming environment, etc.). The particularities of each study make the aggregation of results untenable.
Objectives:
The goal of this paper is to: increase the accuracy and generalizability of the results achieved in isolated experiments on TDD, provide joint conclusions on the performance of TDD across different industrial and academic settings, and assess the extent to which the characteristics of the experiments affect the quality-related performance of TDD.
Method:
We conduct a family of 12 experiments on TDD in academia and industry. We aggregate their results by means of meta-analysis. We perform exploratory analyses to identify variables impacting the quality-related performance of TDD.
Results:
TDD novices achieve a slightly higher code quality with iterative test-last development (i.e., ITL, the reverse approach of TDD) than with TDD. The task being developed largely determines quality. The programming environment, the order in which TDD and ITL are applied, or the learning effects from one development approach to another do not appear to affect quality. The quality-related performance of professionals using TDD drops more than for students. We hypothesize that this may be due to their being more resistant to change and potentially less motivated than students.
Conclusion:
Previous studies seem to provide conflicting results on TDD performance (i.e., positive vs. negative). We hypothesize that these conflicting results may be due to different study durations, experiment participants being unfamiliar with the TDD process, or case studies comparing the performance achieved by TDD vs. the control approach (e.g., the waterfall model), each applied to develop a different system. Further experiments with TDD experts are needed to validate these hypotheses.
Fault diagnosis of rolling bearings is an essential process for improving the reliability and safety of rotating machinery. It is always a major challenge to ensure fault diagnosis accuracy, in particular under severe working conditions. In this article, a deep adversarial domain adaptation (DADA) model is proposed for rolling bearing fault diagnosis. This model constructs an adversarial adaptation network to solve a problem commonly encountered in numerous real applications: the source domain and the target domain are inconsistent in their distribution. First, a deep stack autoencoder (DSAE) is combined with representative feature learning for dimensionality reduction, and such a combination provides an unsupervised learning method to effectively acquire fault features. Meanwhile, domain adaptation and recognition classification are implemented using a Softmax classifier to augment classification accuracy. Second, the effects of the number of hidden layers in the stack autoencoder network, the number of neurons in each hidden layer, and the hyperparameters of the proposed fault diagnosis algorithm are analyzed. Third, comprehensive analysis is performed on real data to validate the performance of the proposed method; the experimental results demonstrate that the new method outperforms the existing machine learning and deep learning methods in terms of classification accuracy and generalization ability.
In the era of precision medicine, digital technologies and artificial intelligence, drug discovery and development face unprecedented opportunities for product and business model innovation, fundamentally changing the traditional approach of how drugs are discovered, developed and marketed. Critical to this transformation is the adoption of new technologies in the drug development process, catalyzing the transition from serendipity-driven to data-driven medicine. This paradigm shift comes with a need for both translation and precision, leading to a modern Translational Precision Medicine approach to drug discovery and development. Key components of Translational Precision Medicine are multi-omics profiling, digital biomarkers, model-based data integration, artificial intelligence, biomarker-guided trial designs and patient-centric companion diagnostics. In this review, we summarize and critically discuss the potential and challenges of Translational Precision Medicine from a cross-industry perspective.
By 2019, Germany-based Kärcher, “the world’s leading provider of cleaning technology,” had turned its professional cleaning devices into IoT products. The data generated by these IoT-connected cleaning devices formed a key ingredient in the company’s ongoing strategic shift in its B2B business: Kärcher was transforming from a seller of cleaning devices to a provider of consulting services in order to help professional cleaning companies improve their cleaning processes. Based on interviews with seven IT- and non-IT executives, the case illustrates how the company learned to generate value from IoT products. And it demonstrates how a family-owned company transformed its organization in order to be able to more effectively develop and provide IoT products, while adding roles, developing technology platforms, and changing organizational structures and ways of working.
Intermittent time series forecasting is a challenging task that still needs particular attention from researchers. The more irregularly events occur, the more difficult they are to predict. With Croston's approach in 1972, the intermittence and the demand of a time series were investigated separately for the first time. He proposes exponential smoothing in his attempt to generate a forecast that corresponds to the average demand per period. Although this algorithm produces good results in the field of stock control, it does not capture the typical characteristics of intermittent time series within the final prediction. In this paper, we investigate a time series' intermittence and demand individually, forecast the upcoming demand value and inter-demand interval length using recent machine learning algorithms, such as long short-term memories and light gradient-boosting machines, and reassemble both pieces of information to generate a prediction that preserves the characteristics of an intermittent time series. We compare the results against Croston's approach, as well as against recent forecast procedures where no split is performed.
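Croston's split of demand size and inter-demand interval can be sketched as follows. This is a minimal pure-Python version of the classic method; the smoothing parameter alpha and the demand series are illustrative:

```python
# Minimal Croston sketch: smooth nonzero demand sizes and inter-demand
# intervals separately, then forecast size/interval as the mean demand
# per period. Alpha and the series are illustrative.

def croston(series, alpha=0.1):
    size = interval = None   # smoothed demand size and interval
    periods_since = 0
    for d in series:
        periods_since += 1
        if d > 0:
            if size is None:              # initialise on first demand
                size, interval = float(d), float(periods_since)
            else:
                size += alpha * (d - size)
                interval += alpha * (periods_since - interval)
            periods_since = 0
    return size / interval                # demand-per-period estimate

demand = [0, 0, 6, 0, 0, 4, 0, 0, 5, 0, 0, 6]
print(round(croston(demand), 3))  # → 1.916
```

Note that the forecast is a flat rate per period: exactly the limitation the paper addresses, since the ratio smooths away the characteristic zero/spike structure of the original series.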
Context: Agile practices as well as UX methods are nowadays well known and often adopted to develop complex software and products more efficiently and effectively. However, in the so-called VUCA environment that many companies are confronted with, the sole use of UX research is not sufficient to find the best solutions for customers. The implementation of Design Thinking can support this process. But many companies and their product owners do not know how many resources they should spend on conducting Design Thinking.
Objective: This paper aims at suggesting a supportive tool, the “Discovery Effort Worthiness (DEW) Index”, for product owners and agile teams to determine a suitable amount of effort that should be spent for Design Thinking activities.
Method: A case study was conducted for the development of the DEW index. Design Thinking was introduced into the regular development cycle of an industry Scrum team. With the support of UX and Design Thinking experts, a formula was developed to determine the appropriate effort for Design Thinking.
Results: The developed “Discovery Effort Worthiness Index” provides an easy-to-use tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. A company can map the corresponding Design Thinking methods to the results of the DEW Index calculation, and product owners can select the appropriate measures from this mapping. Therefore, they can optimize the effort spent for discovery and validation.
Context: The software-intensive business is characterized by increasing market dynamics, rapid technological changes, and fast-changing customer behaviors. Organizations face the challenge of moving away from traditional roadmap formats to an outcome-oriented approach that focuses on delivering value to the customer and the business. An important starting point and a prerequisite for creating such outcome-oriented roadmaps is the development of a product vision to which internal and external stakeholders can be aligned. However, the process of creating a product vision is little researched and understood.
Objective: The goal of this paper is to identify lessons-learned from product vision workshops, which were conducted to develop outcome-oriented product roadmaps.
Method: We conducted a multiple-case study consisting of two different product vision workshops in two different corporate contexts.
Results: Our results show that conducting product vision workshops helps to create a common understanding among all stakeholders about the future direction of the products. In addition, we identified key organizational aspects that contribute to the success of product vision workshops, including the participation of employees from functionally different departments.
Context: Many companies are facing an increasingly dynamic and uncertain market environment, making traditional product roadmapping practices no longer sufficiently applicable. As a result, many companies need to adapt their product roadmapping practices for continuing to operate successfully in today’s dynamic market environment. However, transforming product roadmapping practices is a difficult process for organizations. Existing literature offers little help on how to accomplish such a process.
Objective: The objective of this paper is to present a product roadmap transformation approach for organizations to help them identify appropriate improvement actions for their roadmapping practices using an analysis of their current practices.
Method: Based on an existing assessment procedure for evaluating product roadmapping practices, the first version of a product roadmap transformation approach was developed in workshops with company experts. The approach was then given to eleven practitioners and their perceptions of the approach were gathered through interviews.
Results: The result of the study is a transformation approach consisting of a process describing what steps are necessary to adapt the currently applied product roadmapping practice to a dynamic and uncertain market environment. It also includes recommendations on how to select areas for improvement and two empirically based mapping tables. The interviews with the practitioners revealed that the product roadmap transformation approach was perceived as comprehensible, useful, and applicable. Nevertheless, we identified potential for improvements, such as a clearer presentation of some processes and the need for more improvement options in the mapping tables. In addition, minor usability issues were identified.
Entrepreneurial software engineering: towards a hybrid development method for early-stage startups
(2021)
A considerable share of innovative software-intensive products is developed by startups. However, product development in an early-stage startup is not a sequential process. A business idea is usually based on a number of assumptions. The riskiest assumptions need to be tested. Depending on the test results, a product strategy may change several times. This raises the question of how to create sufficiently stable software using engineering principles despite a dynamic product strategy that is subject to many uncertainties. Hybrid development methods that combine agile aspects with classical engineering methods seem to be a good choice in such a start-up context. This paper proposes a lightweight hybrid development method that provides early-stage startups with a framework to support the development of single-feature minimum viable products. The method was derived from a start-up company's founding case and evaluated in expert interviews. The proposed method is intended to provide a basis for discussion between practitioners and scientists with the aim of better understanding the application of software engineering principles in software start-ups.
Product roadmaps in the new mobility domain: state of the practice and industrial experiences
(2021)
Context: The New Mobility industry is a young market characterized by high market dynamics and therefore associated with a high degree of uncertainty. Traditional product roadmapping approaches, such as the detailed planning of features over a long time horizon, typically fail in such environments. For this reason, companies active in the field of New Mobility face the challenge of keeping their product roadmaps reliable for stakeholders while at the same time being able to react flexibly to changing market requirements.
Objective: The goal of this paper is to identify the state of practice regarding product roadmapping of New Mobility companies. In addition, the related challenges within the product roadmapping process as well as the success factors to overcome these challenges will be highlighted.
Method: We conducted semi-structured expert interviews with eight experts (seven from German companies and one from a Finnish company) from the field of New Mobility and performed a content analysis.
Results: Overall, the results of the study showed that the participating companies are aware of the requirements that the New Mobility sector entails and therefore exhibit a high level of maturity in terms of product roadmapping. Nevertheless, some aspects were revealed that pose specific challenges for the participating companies. One major challenge, for example, is that New Mobility with public clients is often a tender business with non-negotiable product requirements; the product roadmap can thus be significantly influenced from the outside. As factors for successful product roadmapping, mainly soft factors were mentioned, such as trust between all people involved in the product development process and transparency throughout the entire roadmapping process.
How to prioritize your product roadmap when everything feels important: a grey literature review
(2021)
Context: A key factor in achieving product success is to identify what outputs must be launched, and in which order, to deliver the most value to the customer and the business. A well-established process for discovering and prioritizing the content of the product roadmap is therefore crucial for the success of a company. However, most companies prioritize their product roadmap items based on the opinions of experts or management. In addition, increasing market dynamics, rapidly evolving technologies, and fast-changing customer behavior complicate the prioritization process. As a result, many companies are struggling to find and establish suitable techniques for prioritizing their product roadmap.
Objective: In order to gain a better understanding of the prioritization process in a dynamic and uncertain market environment, this paper aims to identify techniques suitable for prioritization in such environments.
Method: We conducted a Grey Literature Review according to the guidelines of Garousi et al.
Results: 18 techniques for prioritizing the product roadmap could be identified. 15 techniques are primarily used to prioritize outputs by considering factors such as the expected impact or effort. Two techniques are most suitable for prioritizing risky assumptions that need to be validated, and one technique focuses on the prioritization of outcomes. All techniques have in common that they should be conducted as a cross-functional team activity in order to include different perspectives in the prioritization process.
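Output-oriented prioritization techniques of the kind surveyed here typically combine factors such as expected impact and effort into a single score per roadmap item. The following minimal sketch illustrates the idea; the weighting scheme, factor names, and roadmap items are invented for illustration and not taken from any of the reviewed techniques:

```python
def priority_score(impact, confidence, effort):
    """Weighted-value score: higher impact and confidence raise priority,
    higher effort lowers it (effort must be > 0)."""
    return impact * confidence / effort

# Hypothetical roadmap items, rated by a cross-functional team
items = {
    "self-service onboarding": priority_score(impact=8, confidence=0.8, effort=4),
    "usage analytics dashboard": priority_score(impact=5, confidence=0.9, effort=3),
    "dark mode": priority_score(impact=2, confidence=1.0, effort=1),
}

# Rank items by descending score
ranked = sorted(items, key=items.get, reverse=True)
print(ranked)
```

Rating the factors jointly in a cross-functional team, as the reviewed techniques recommend, keeps the inputs from reflecting only a single department's perspective.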
Collagen-based barrier membranes are an essential component in Guided Bone Regeneration (GBR) procedures. They act as cell-occlusive devices that should maintain a micromilieu where bone tissue can grow, which in turn provides a stable bed for prosthetic implantation. However, the standing time of collagen membranes has been a challenging area, as native membranes are often prematurely resorbed. Therefore, consolidation techniques, such as chemical cross-linking, have been used to enhance the structural integrity of the membranes and, consequently, their standing time. However, these techniques have cytotoxic tendencies and can cause exaggerated inflammation and, in turn, premature resorption and material failures. Moreover, tissues from different extraction sites and animals are variably cross-linked. For the present in vivo study, a new collagen membrane based on bovine dermis was extracted and compared to a commercially available porcine-sourced collagen membrane extracted from the pericardium. The membranes were implanted in Wistar rats for up to 60 days. The analyses included well-established histopathological and histomorphometrical methods, including histochemical and immunohistochemical staining procedures, to detect M1- and M2-macrophages as well as blood vessels. The results showed that both membranes remained intact up to day 30, while the bovine membrane was fragmented at day 60, with granulation tissue infiltrating the implantation beds. In contrast, the porcine membrane remained stable without signs of material-dependent inflammatory processes. Notably, the bovine membrane showed a special integration pattern, as the fragments were found to be overlapping, providing secondary porosity in combination with transmembraneous vascularization. Altogether, the bovine membrane showed results comparable to the porcine control group in terms of biocompatibility and standing time.
Moreover, blood vessels were found within the bovine membranes, which may provide an additional functionality that conventional barrier membranes do not offer.
A full understanding of the relationship between surface properties, protein adsorption, and immune responses is lacking but is of great interest for the design of biomaterials with desired biological profiles. In this study, polyelectrolyte multilayer (PEM) coatings with gradient changes in surface wettability were developed to shed light on how this impacts protein adsorption and immune response in the context of material biocompatibility. The analysis of immune responses by peripheral blood mononuclear cells to PEM coatings revealed an increased expression of proinflammatory cytokines tumor necrosis factor (TNF)-α, macrophage inflammatory protein (MIP)-1β, monocyte chemoattractant protein (MCP)-1, and interleukin (IL)-6 and the surface marker CD86 in response to the most hydrophobic coating, whereas the most hydrophilic coating resulted in a comparatively mild immune response. These findings were subsequently confirmed in a cohort of 24 donors. Cytokines were produced predominantly by monocytes with a peak after 24 h. Experiments conducted in the absence of serum indicated a contributing role of the adsorbed protein layer in the observed immune response. Mass spectrometry analysis revealed distinct protein adsorption patterns, with more inflammation-related proteins (e.g., apolipoprotein A-II) present on the most hydrophobic PEM surface, while the most abundant protein on the hydrophilic PEM (apolipoprotein A-I) was related to anti-inflammatory roles. The pathway analysis revealed alterations in the mitogen-activated protein kinase (MAPK)-signaling pathway between the most hydrophilic and the most hydrophobic coating. The results show that the acute proinflammatory response to the more hydrophobic PEM surface is associated with the adsorption of inflammation-related proteins. 
Thus, this study provides insights into the interplay between material wettability, protein adsorption, and inflammatory response and may act as a basis for the rational design of biomaterials.
Digitalization and sustainability will permanently and comprehensively change the expectations of and requirements for controllers. Teaching is highly relevant for this role change. An education aligned with the changed requirements gives companies the opportunity to recruit controllers with these changed role profiles for their organization. For graduates who aspire to a career in controlling, the changed role profile secures their long-term employability. For the role change itself, education can be understood as a driver.
Despite the importance of teaching for the role change, there have so far been few research results on how the roles are concretely represented in teaching. This raises the question of how universities represent the roles in their degree programmes in principle, and with what intensity and in which combinations the roles are taught. This research question is evaluated and discussed through an analysis of controlling-specific master's programmes and their module handbooks.
As a result, the role change in controlling education turns out to be very heterogeneous. The teaching of the classical controller role dominates, followed by the business partner role. Teaching content related to the roles of the digital controller or risk controller is weakly developed, and there is hardly any teaching offer for taking on a controller role in sustainability management. These results are intended to contribute to the discourse on the role change and the design of controlling education.
Job advertisements are an important means of communicating controller roles on the labour market. They open a window onto what companies perceive as the roles of their controllers. Which roles job advertisements actually communicate has not been known so far. Using a large sample of 889 job advertisements and a text-mining approach, we show that there is apparently a mix of different role types, with a strong focus on a rather classical role type, the watchdog role. People with business partner characteristics, in contrast, are more often sought for leadership positions or in family businesses and small and medium-sized enterprises (SMEs). The results call into question the current discussion of controllers as business partners in practice and in some areas of academia.
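A keyword-based classification of the kind that underlies such text-mining approaches can be sketched as follows; the role keywords and the sample advertisement are invented for illustration and do not reproduce the study's coding scheme:

```python
# Hypothetical keyword dictionaries per controller role type
ROLE_KEYWORDS = {
    "watchdog": {"reporting", "variance analysis", "compliance", "monitoring"},
    "business partner": {"strategy", "advisory", "decision support", "management"},
}

def classify_roles(ad_text):
    """Count keyword hits per role type in a job advertisement."""
    text = ad_text.lower()
    return {role: sum(kw in text for kw in kws)
            for role, kws in ROLE_KEYWORDS.items()}

hits = classify_roles(
    "We seek a controller for monthly reporting, variance analysis "
    "and compliance monitoring."
)
print(hits)  # watchdog keywords dominate in this sample ad
```

Aggregating such per-advertisement hit counts over a large sample then yields the distribution of role types across the labour market.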
Changes in the role of controllers in large corporations - results of an empirical survey
(2021)
The ongoing discussion about the role of management accountants (MA) frequently leads to the business partner (BP) role being regarded as the role of choice. Nevertheless, many academics and practitioners seem to assume that this role is clear to managers and MAs, that it makes sense to them, and that all managers and MAs agree with it and implement it. Discrepancies between the actual role, the perceived role, and the expected role could lead to identity and role conflicts. This contribution is based on a quantitative empirical study conducted in 2019 at a large German high-tech company whose top management decided to introduce the BP role.
The increasing urban population growth leads to challenges in cities in many aspects: Urbanisation problems such as excessive environmental pollution or increasing urban traffic demand new and innovative solutions. In this context, the concept of smart cities is discussed. An enabling element of the smart city concept is applying information technology (IT) to improve administrative efficiency and quality of life while reducing costs and resource consumption and ensuring greater citizen participation in administrative and urban development issues. While these smart city services are technologically studied and implemented, government officials, citizens or businesses are often unaware of the large variety of smart city service solutions. Therefore, this work deals with developing a smart city services catalogue that documents best practice services to create a platform that brings citizens, city government, and businesses together. Although the concept of IT service catalogues is not new and guidelines and recommendations for the design and development of service catalogues already exist in the corporate context, there is little work on smart city service catalogues. Therefore, approaches from agile software development and pattern research were adapted to develop the smart city service catalogue platform in this work.
The physicochemical properties of synthetically produced bone substitute materials (BSM) have a major impact on biocompatibility. This affects bony tissue integration and osteoconduction, as well as the degradation pattern and the correlated inflammatory tissue responses, including macrophages and multinucleated giant cells (MNGCs). Thus, influencing factors such as size, special surface morphologies, porosity, and interconnectivity have been the subject of extensive research. In the present publication, the influence of the granule size of three identically manufactured bone substitute granules based on the technology of hydroxyapatite (HA)-forming calcium phosphate cements was investigated, including the inflammatory response in the surrounding tissue and especially the induction of MNGCs (as a parameter of material degradation). For the in vivo study, granules of three different size ranges (small = 0.355–0.5 mm; medium = 0.5–1 mm; big = 1–2 mm) were implanted in the subcutaneous connective tissue of 45 male BALB/c mice. At 10, 30, and 60 days post implantationem, the materials were explanted and histologically processed. The defect areas were initially examined histopathologically. Furthermore, pro- and anti-inflammatory macrophages were quantified histomorphometrically after their immunohistochemical detection. The number of MNGCs was quantified as well using a histomorphometrical approach. The results showed a granule size-dependent integration behavior. The surrounding granulation tissue had become passivated in the groups of the two bigger granules at 60 days post implantationem, including fibrotic encapsulation, while granulation tissue was still present in the group of the small granules, indicating an ongoing cell-based degradation process. The histomorphometrical analysis showed that the number of proinflammatory macrophages was significantly increased in the small granules at 60 days post implantationem.
Similarly, a significant increase of MNGCs was detected in this group at 30 and 60 days post implantationem. Based on these data, it can be concluded that the integration and/or degradation behavior of synthetic bone substitutes can be influenced by granule size.
Polymeric micelle-like nanoparticles have demonstrated effectiveness for the delivery of some poorly soluble or hydrophobic anticancer drugs. In this study, a hydrophobic moiety, deoxycholic acid (DCA) was first bonded on a polysaccharide, chitosan (CS), for the preparation of amphiphilic chitosan (CS-DCA), which was further modified with a cationic glycidyltrimethylammounium chloride (GTMAC) to form a novel soluble chitosan derivative (HT-CS-DCA). The cationic amphiphilic HT-CS-DCA was easily self-assembled to micelle-like nanoparticles about 200 nm with narrow size distribution (PDI 0.08–0.18). The zeta potential of nanoparticles was in the range of 14 to 24 mV, indicating higher positive charges. Then, doxorubicin (DOX), an anticancer drug with poor solubility, was entrapped into HT-CS-DCA nanoparticles. The DOX release test was performed in PBS (pH 7.4) at 37 °C, and the results showed that there was no significant burst release in the first two hours, and the cumulative release increased steadily and slowly in the following hours. HT-CS-DCA nanoparticles loaded with DOX could easily enter into MCF-7 cells, as observed by a confocal microscope. As a result, DOX-loaded HT-CS-DCA nanoparticles demonstrated a significant inhibition activity on MCF-7 growth without obvious cellular toxicity in comparison with blank nanoparticles. Therefore, the anticancer efficacy of these cationic HT-CS-DCA nanoparticles showed great promise for the delivery of DOX in cancer therapy.
Despite its success against cancer, photothermal therapy (PTT) (>50 °C) suffers from several limitations, such as triggering inflammation, facilitating immune escape and metastasis, and damaging surrounding normal cells. Mild-temperature PTT has been proposed to overcome these shortcomings. We developed a nanosystem using HepG2 cancer cell membrane-cloaked zinc glutamate-modified Prussian blue nanoparticles with triphenylphosphine-conjugated lonidamine (HmPGTL NPs). This approach achieved an efficient mild-temperature PTT effect by downregulating the production of intracellular ATP, which disrupts the heat shock proteins that cushion cancer cells against heat. The physicochemical properties, anti-tumor efficacy, and mechanisms of HmPGTL NPs were investigated both in vitro and in vivo. Moreover, the nanoparticles cloaked with the HepG2 cell membrane substantially prolonged the circulation time in vivo. Overall, the designed nanocomposites enhance the efficacy of mild-temperature PTT by disrupting the production of ATP in cancer cells. We therefore anticipate that the mild-temperature PTT nanosystem holds considerable potential for various biomedical applications.
Escherichia coli (E. coli) is among the most common life-threatening infectious bacteria in daily life and poses a major challenge to human health. However, the frequent overuse and misuse of antibiotics has triggered increased multidrug resistance, which hinders therapeutic outcomes and causes higher mortality. Herein, we report near-infrared (NIR) laser-excited, human serum albumin (HSA)-mediated graphene oxide loaded with palladium nano-dots (HSA-GO-Pd) that can effectively combat Gram-negative E. coli in vitro. Electron spin resonance (ESR) analysis shows that the NIR laser-excited hybrid material efficiently generates singlet oxygen and hydroxyl radicals. Transmission electron microscopy (TEM) images show small, spherical PdNPs on the surface of the GO nanosheets. The zeta (ζ) potential study indicates that, in aqueous medium, the average PdNP size is 5–8 nm and the surface charge, arising from the capping human serum protein (HSA), is +25 mV. Spectroscopic characterization reveals that in the synthesized HSA-GO-Pd nanocomposite the PdNPs are well dispersed on the graphene oxide surface. The as-synthesized HSA-GO-Pd shows excellent antibacterial activity against the Gram-negative pathogen, killing 95% of the bacteria within 5 h, and is highly biocompatible. Owing to its strong photothermal conversion potential and low toxicity to normal cells, the hybrid (HSA-GO-Pd) combined with NIR irradiation provides valuable insight into the effective ablation of pathogenic bacteria.
Background/Aim: The aim of this study was the conception, production, material analysis and cytocompatibility analysis of a new collagen foam for medical applications. Materials and Methods: After the innovative production of various collagen sponges from bovine sources, the foams were analyzed ex vivo in terms of their structure (including pore size) and in vitro in terms of cytocompatibility according to EN ISO 10993-5/-12. In vitro, the collagen foams were compared with the established soft and hard tissue materials cerabone and Jason membrane (both botiss biomaterials GmbH, Zossen, Germany). Results: Collagen foams with different compositions were successfully produced from bovine sources. Ex vivo, the foams showed a stable and long-lasting primary structure quality with a bubble area of 1,000 to 2,000 μm2. In vitro, all foams showed sufficient cytocompatibility. Conclusion: Collagen sponges represent a promising material for hard and soft tissue regeneration. Future studies could focus on integrating and investigating different additives in the foams.
Since the beginning of the energy sector liberalization, the design of energy markets has become a prominent field of research. Markets nowadays facilitate efficient resource allocation in many fields of energy system operation, such as plant dispatch, control reserve provisioning, delimitation of related carbon emissions, grid congestion management, and, more recently, smart grid concepts and local energy trading. Therefore, good market designs play an important role in enabling the energy transition toward a more sustainable energy supply for all. In this chapter, we retrace how market engineering shaped the development of energy markets and how the research focus shifted from national wholesale markets to more decentralized and location-sensitive concepts.
In a networked world, companies depend on fast and smart decisions, especially when it comes to reacting to external change. With the wealth of data available today, smart decisions can increasingly be based on data analysis and be supported by IT systems that leverage AI. A global pandemic brings external change to an unprecedented level of unpredictability and severity of impact. Resilience therefore becomes an essential factor in most decisions when aiming at making and keeping them smart. In this chapter, we study the characteristics of resilient systems and test them with four use cases in a wide-ranging set of application areas. In all use cases, we highlight how AI can be used for data analysis to make smart decisions and contribute to the resilience of systems.
Managing R&D units on a purely cost-oriented basis does not do justice to the immense importance of innovation success for the future of companies. Agile innovation management requires a solid R&D performance management system as its basis. Besides the choice of the "right" methods and key figures, the controllers' adoption of the business partner role is decisive. In this contribution, innovation performance management is concretized using WMF as an example.
The aim of the project is to improve the protective effect of welders' protective clothing. The focus was on the following questions: Can a finishing treatment increase the resistance of textiles to drops of liquid metal while at the same time providing better UV protection? These protective factors of welders' protective clothing depend strongly on the basis weight of the textile used. The higher the basis weight, the more resistant the clothing is to metal splashes and the less UV is transmitted through the clothing. However, the higher the basis weight, the poorer the wearing comfort, since a high basis weight promotes sweating, among other things. Welders' protective clothing is divided into two classes. For class 1 clothing, a temperature rise of 40 K on the back of the textile may only occur after the 15th drop of liquid iron has hit; for class 2, only after 25 drops. As a starting point for this project, fabrics meeting class 1 were selected. Attempts were made to finish these fabrics either with thermally conductive composites or with a nanostructuring ("lotus effect") so that the requirements for class 2 are met. The thermally conductive composite finish was intended to guarantee rapid dissipation and distribution of the heat of the metal drops across the surface, thereby significantly slowing the heating of the back of the fabric. Class 2 could not be achieved with this finish; however, it did not worsen the wearing comfort of the lighter fabric, and the transmission of harmful UV radiation was reduced. Nanostructuring was intended to achieve a "lotus effect" for small metal drops.
With the nanostructuring, the metal drop first hits the surface of the nanoparticles, trapping insulating air between the metal drop and the fabric surface and thus protecting the fabric from the drop itself. This approach suggests that the effect can be tuned well via the applied amount of nanoparticles and binder. For binder concentrations between 1.25 and 2.5%, flexibility is only slightly impaired, and welder protection class 2 can be achieved with different particles (SiO2, ZnO, AlOx and TiO2). The wearing comfort of the fabrics is not affected. The process offers SMEs in the textile finishing sector new, innovative products for the occupational safety sector. The use of lighter clothing in the field of PPE (personal protective equipment) increases its acceptance, since class 1 welders' protective clothing nanostructured with the process developed in the project offers significantly better wearing comfort than class 2 welders' protective clothing. This can open up new, including international, sales markets for SMEs specializing in the PPE sector.
Problem: The Covid-19 pandemic is exacerbating not only the economic but also the ecological and social framework conditions of many companies. Sustainable action is therefore more important than ever. Companies choose different ways to integrate sustainability into the management system of the top management level. This offers the opportunity to understand sustainability not merely as a set of individual measures but as an element of strategy and organizational development. For such a holistic view, options include the Common Good Balance Sheet (Gemeinwohlbilanz, GWB) and the Sustainability Balanced Scorecard (N-BSC), as shown by the examples of Vaude and Sparda Bank München, which use the GWB (see https://web.ecogood.org/de/die-bewegung/pionier-unternehmen/), and of Alpha and Axel Springer, which integrate sustainability into their BSC (Hansen/Schaltegger, 2016, p. 207).
Objective: Discussion of the GWB and the N-BSC as options for integrating ecological and social aspects into the management system.
Method: Outline of the essential features of the GWB and the N-BSC.
We examine the role of communication from users on dropout from digital learning systems to answer the following questions: (1) how does the sentiment within qualitative signals (user comments) affect dropout rates? (2) does the variance in the proportion of positive and negative sentiments affect dropout rates? (3) how do quantitative signals (e.g. likes) moderate the effect of the qualitative signals? and (4) how does the effect of qualitative signals on dropout rates change across early and late stages of learning? Our hypotheses draw on learning theory and self-regulation theory and were tested using data from 447 learning videos across 32 series of online tutorials, spanning 12 different fields of learning. The findings indicate a main effect of negative sentiment on dropout rates but no effect of positive sentiment on preventing dropout behaviour. This main effect is stronger in the early stages of learning and weakens at later stages. We also observe an effect of the variance of positive and negative sentiments on dropout behaviour. These effects are negatively moderated by quantitative signals. Overall, making commenting more broad-based rather than polarised can be a useful strategy for managing learning, transferring knowledge, and building consensus.
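The sentiment signals studied above reduce, per video, to the proportions of positive and negative comments, plus the variance of those proportions across a series. A toy computation of both quantities (the comment labels and series values are invented):

```python
from statistics import pvariance

# Hypothetical sentiment labels for the comments of one learning video
comments = ["pos", "neg", "pos", "neg", "neg", "pos", "neg"]

n = len(comments)
p_neg = comments.count("neg") / n  # proportion of negative comments
p_pos = comments.count("pos") / n  # proportion of positive comments

# Hypothetical negative-sentiment proportions across a series of videos;
# their (population) variance captures how polarised the series is
series_p_neg = [0.2, 0.6, 0.4, 0.5]
var_neg = pvariance(series_p_neg)
print(p_neg, var_neg)
```

Per the findings, a high negative proportion (especially early in a series) and high variance would both be associated with higher dropout.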
The wear of tools in machining with geometrically defined cutting edges is an essential criterion for the quality of the machined workpieces, the reliability of the machining processes, and economic efficiency. The economic efficiency of machining is influenced above all by the number of workpieces that can be reliably machined with one tool. The tool life and tool travel, as well as the applicable technology parameters, depend on various factors. Besides the tool and its engagement conditions (e.g. axial and radial infeed), these include influences from the machine (e.g. stiffness, natural frequencies, torque), the workpiece (e.g. material, accuracies), and the machining process with the forces, torques, rotational speeds, and feeds occurring in it. Despite various efforts over the past two decades toward machining without cooling lubricant or with minimum quantity lubrication, numerous machining processes are still carried out today using cooling lubricant. Due to the lower thermal load on tool and workpiece, this allows in some cases considerably higher cutting conditions and/or tool lives.
In this paper, we propose a radical new approach for scale-out distributed DBMSs. Instead of hard-baking an architectural model, such as a shared-nothing architecture, into the distributed DBMS design, we aim for a new class of so-called architecture-less DBMSs. The main idea is that an architecture-less DBMS can mimic any architecture on a per-query basis on-the-fly without any additional overhead for reconfiguration. Our initial results show that our architecture-less DBMS AnyDB can provide significant speedup across varying workloads compared to a traditional DBMS implementing a static architecture.
Near-Data Processing is a promising approach to overcome the limitations of slow I/O interfaces in the quest to analyze the ever-growing amount of data stored in database systems. Next to CPUs, FPGAs will play an important role in the realization of functional units operating close to data stored in non-volatile memories such as Flash. It is essential that the NDP device understands the formats and layouts of the persistent data in order to perform operations in-situ. To this end, carefully optimized format parsers and layout accessors are needed. However, designing such FPGA-based Near-Data Processing accelerators requires significant effort and expertise. To make FPGA-based Near-Data Processing accessible to non-FPGA experts, we present a framework for the automatic generation of FPGA-based accelerators capable of data filtering and transformation for key-value stores, based on simple data-format specifications. The evaluation shows that our framework is able to generate accelerators that are almost identical in performance to the manually optimized designs of prior work, while requiring little to no FPGA-specific knowledge and additionally providing improved flexibility and more powerful functionality.
Many modern DBMS architectures require transferring data from storage in order to process it afterwards. Given the continuously increasing amounts of data, data transfers quickly become a scalability-limiting factor. Near-Data Processing and smart/computational storage emerge as promising trends allowing for decoupled in-situ operation execution, data transfer reduction and better bandwidth utilization. However, not every operation is suitable for in-situ execution, and careful placement and optimization are needed.
In this paper we present an NDP-aware cost model. It has been implemented in MySQL and evaluated with nKV. We make several observations underscoring the need for optimization.
Several studies analyzed existing Web APIs against the constraints of REST to estimate the degree of REST compliance among state-of-the-art APIs. These studies revealed that only a small number of Web APIs are truly RESTful. Moreover, identified mismatches between theoretical REST concepts and practical implementations lead us to believe that practitioners perceive many rules and best practices aligned with these REST concepts differently in terms of their importance and impact on software quality. We therefore conducted a Delphi study in which we confronted eight Web API experts from industry with a catalog of 82 REST API design rules. For each rule, we let them rate its importance and software quality impact. As consensus, our experts rated 28 rules with high, 17 with medium, and 37 with low importance. Moreover, they perceived usability, maintainability, and compatibility as the most impacted quality attributes. The detailed analysis revealed that the experts saw rules for reaching Richardson maturity level 2 as critical, while reaching level 3 was less important. As the acquired consensus data may serve as valuable input for designing a tool-supported approach for the automatic quality evaluation of RESTful APIs, we briefly discuss requirements for such an approach and comment on the applicability of the most important rules.
We are all familiar with the challenges of the new world of work. People must learn to cope in the so-called VUCA world. The term is an acronym for volatility, uncertainty, complexity, and ambiguity. This demands rapid adaptation to changes in the work context. Agility is a central factor here: on the one hand very energizing, on the other hand also demanding, because habits and ways of working have to change.
The Internet of Things (IoT) is characterized by many different standards, protocols, and data formats that are often not compatible with one another. Thus, integrating heterogeneous IoT components into a uniform IoT setup can be a time-consuming manual task. This lack of interoperability between IoT components has been addressed with different approaches in the past; however, only very few of them rely on machine learning techniques. In this work, we present a new way towards IoT interoperability based on deep reinforcement learning (DRL). In detail, we demonstrate that DRL algorithms that use network architectures inspired by natural language processing (NLP) can be applied to learn to control an environment by merely taking raw JSON or XML structures, which reflect the environment's current state, as input. Applied to IoT setups, where the current state of a component is often embedded in JSON or XML structures and exchanged via messages, our NLP-DRL approach eliminates the need for feature engineering and manually written code for data pre-processing, feature extraction, and decision making.
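To illustrate the kind of input such an agent consumes, the following minimal sketch serializes an IoT component state to JSON and maps it character by character to token ids, the NLP-style sequence a policy network could process directly. All names, the vocabulary and the sequence length are illustrative assumptions, not taken from the paper.

```python
import json

# Hypothetical character vocabulary; 0 is reserved for padding/unknown.
VOCAB = {ch: i + 1 for i, ch in enumerate(
    '{}[]":,0123456789.-abcdefghijklmnopqrstuvwxyz_ ')}

def encode_state(state, max_len=64):
    """Serialize a component state to JSON and return a fixed-length
    sequence of token ids, padded with zeros."""
    raw = json.dumps(state, separators=(",", ":")).lower()
    ids = [VOCAB.get(ch, 0) for ch in raw[:max_len]]
    return ids + [0] * (max_len - len(ids))

tokens = encode_state({"temp": 21.5, "fan": "on"})
```

The point of the sketch is that no per-device feature extractor is written: whatever keys the JSON message happens to contain, the encoding step stays the same.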
Enterprises and information societies currently face crucial challenges, as Industry 4.0 gains importance in the global manufacturing industry and Society 5.0 is expected to contribute to a super-smart society, especially in healthcare. Physical activity monitoring digital platforms are architected to improve the health status of patients with diabetes and other lifestyle-related diseases. Furthermore, digital platforms are expected to generate profits for health technology companies and help control costs in the healthcare ecosystem. However, current digital enterprise architecture approaches are not well established, and their potential has not yet been realized. A design thinking approach and agile software development methodologies can overcome these limitations, beginning with proof-of-concept and pilot projects and then scaling to the production environment. In this paper, we describe how the adaptive integrated digital architecture framework (AIDAF) for the design thinking approach is proposed and verified in the case of a university hospital in the Americas. In addition, we discuss challenges and future activities for this area, covering directions for Society 5.0.
Rotating machinery occupies a predominant place in many industrial applications. However, rotating machines often suffer from severe vibration problems. Measuring these machines' vibration signals is of particular importance, since it plays a crucial role in predictive maintenance. Excessive vibrations often cause fatigue failure; they announce an unexpected stoppage or breakdown and, consequently, a significant loss of productivity or a threat to personnel safety. Identifying faults at an early stage therefore significantly improves the machine's health and reduces maintenance costs. Although considerable efforts have been made to master the field of machine diagnostics, the usual signal processing methods still present several drawbacks. This paper examines rotating machinery condition monitoring in the time and frequency domains. It also provides a framework for the diagnosis process based on machine learning by analyzing the vibration signals.
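Two elementary condition indicators of the kind used in such time- and frequency-domain monitoring can be sketched as follows: the RMS amplitude of the vibration signal (time domain) and the dominant spectral component via a naive DFT (frequency domain). This is a self-contained illustration, not the paper's actual diagnosis pipeline.

```python
import math
import cmath

def rms(signal):
    """Time-domain condition indicator: root-mean-square amplitude."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def dominant_frequency(signal, fs):
    """Frequency-domain indicator: frequency (Hz) of the largest
    non-DC bin of a naive O(n^2) DFT -- fine for a short sketch."""
    n = len(signal)
    mags = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]
    peak = max(range(1, n // 2), key=lambda k: mags[k])
    return peak * fs / n

# A pure machine tone at 50 Hz, sampled at 1 kHz:
fs = 1000
sig = [math.sin(2 * math.pi * 50 * t / fs) for t in range(200)]
```

Trending such indicators over time is what allows a fault to be flagged before vibrations grow large enough to cause fatigue failure.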
Enterprises and societies currently face crucial challenges, as Industry 4.0 becomes ever more important in the global manufacturing industry. Industry 4.0 offers companies a range of opportunities to increase the flexibility and efficiency of production processes. Digital platforms and architectures for Industry 4.0 can promote the development of new business models, allowing products from the healthcare sector to increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0 is expected to promote and implement digital platforms and robotics for the healthcare and medical communities efficiently. In this paper, we propose that various digital platforms and robotics be designed and evaluated for digital healthcare as well as for the manufacturing industry under Industry 4.0. We argue that the design of an open healthcare platform, "Open Healthcare Platform 2030 - OHP2030", for medical product design and robotics can be developed with AIDAF. The vision of AIDAF applications enabling Industry 4.0 in the OHP2030 research initiative is explained and extended in the context of Society 5.0.
Autonomous navigation is one of the main areas of research in mobile robots and intelligent connected vehicles. In this context, we present a general view of robotics, the progress of research, and advanced methods in this field for improving the localization of autonomous robots. We evaluate algorithms and techniques that give robots the ability to move safely and autonomously in a complex and dynamic environment. Under these constraints, we focus this paper on a specific problem: evaluating a simple, fast and lightweight SLAM algorithm that minimizes localization errors. We present and validate a FastSLAM 2.0 system combining scan matching and loop closure detection. To allow the robot to perceive the environment and detect objects, we study one of the leading deep learning techniques based on convolutional neural networks (CNNs) and validate our tests using the YOLOv3 algorithm.
This pocket reference compiles the established calculation formulas and insights from industrial practice and from scientific studies in the areas of weaving preparation and weaving. It is intended to facilitate the decision-making processes required in fabric production.
This formula collection not only allows optimal manufacturing specifications for fabrics to be drawn up in a practice-oriented way; it also presents, in due brevity, the most important technical and physical fundamentals of mill operation.
Covid-19 and the measures taken to contain the pandemic changed the lives of many people and forced companies to make sometimes substantial adjustments to their established practices. They also led to changes that took place largely outside public perception. One example is the marked shift in the balance of power in the banner advertising market, which brought significant changes for both advertisers and marketers. At the same time, the market is changing structurally. A professionalization is currently under way, in which advertisers must set the right course today in order to be among this market's winners in the future. This report summarizes the most important structural and Corona-related changes and explains their implications for advertisers.
Facial beauty prediction (FBP) aims to develop a machine that automatically assesses facial attractiveness. In the past, these results were highly correlated with human ratings, and therefore also with the annotators' bias. As artificial intelligence can exhibit racist and discriminatory tendencies, the causes of skews in the data must be identified. Developing training data and AI algorithms that are robust against biased information is a new challenge for scientists. As aesthetic judgement is usually biased, we take this one step further and propose an unbiased convolutional neural network for FBP. While it is possible to create network models that rate the attractiveness of faces at a high level, from an ethical point of view it is equally important to make sure the model is unbiased. In this work, we introduce AestheticNet, a state-of-the-art attractiveness prediction network, which significantly outperforms its competitors with a Pearson correlation of 0.9601. Additionally, we propose a new approach for generating a bias-free CNN to improve fairness in machine learning.
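For reference, the reported evaluation metric is the Pearson correlation between predicted scores and human ratings; a plain-Python version of that metric can be sketched as follows (an illustrative helper, not the authors' code).

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equally long
    sequences, e.g. predicted scores vs. human attractiveness ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value of 1.0 means the model's ranking of faces matches the human raters perfectly, which is also why the metric inherits any bias present in those ratings.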
Visual inspection of product surfaces is still mostly performed by human workers, although automation approaches using camera and image processing systems show great potential. Cobots, too, are increasingly being involved in quality assurance processes. In the following, the options for integrating cobots into visual inspection are discussed and a decision model is presented with which visual inspection processes can be assessed for their cobot suitability. The decision model is designed for direct integration into existing cobot suitability assessment procedures and serves as an initial strategic decision aid.
Intralogistics operations in automotive OEMs increasingly confront problems of overcomplexity caused by customer-centred production, which requires customisation and thus high product variability, short-notice order changes and the handling of an overwhelming number of parts. To alleviate the pressure on intralogistics without sacrificing performance objectives, the speed and flexibility of logistical operations have to be increased. One approach is to utilise three-dimensional space through drone technology. This doctoral thesis aims at establishing a framework for implementing aerial drones in automotive OEM logistics operations.
As of yet, there is no research on implementing drones in automotive OEM logistic operations. To contribute to filling this gap, this thesis develops a framework for Drone Implementation in Automotive Logistics Operations (DIALOOP) that allows for a close interaction between the strategic and the operative level and can lead automotive companies through a decision and selection process regarding drone technology.
A preliminary version of the framework was developed on a theoretical basis and was then revised using qualitative-empirical data from semi-structured interviews with two groups of experts, i.e. drone experts and automotive experts. The drone expert interviews contributed a current overview of drone capabilities. The automotive expert interviews were used to identify intralogistics operations in which drones can be implemented, along with the performance measures that can be improved by drone usage.
Furthermore, all interviews explored developments and changes with a foreseeable influence on drone implementation.
The revised framework was then validated using participant validation interviews with automotive experts.
The finalised framework defines a step-by-step process leading from strategic decisions and considerations, through the identification of logistics processes suitable for drone implementation and the relevant performance measures, to the choice of appropriate drone types based on a drone classification developed in this thesis specifically for the automotive context.
On 5 May 2020, the Federal Constitutional Court of Germany announced in a momentous ruling that the Public Sector Purchase Programme (PSPP) of the European Central Bank (ECB) exceeds European Union (EU) competences. This decision initiated a lively debate in law and economics all over Europe. This article provides a unique interdisciplinary reading of the ruling in order to clarify its line of argument. Such a cross-disciplinary view deepens the understanding of the historic judgment.
Energy consumption by air-conditioning is enormous and leads to the emission of millions of tons of CO2 every year. A promising approach to circumvent this problem is the reflection of solar radiation: rooms that do not heat up by irradiation will not need to be cooled down. Transparent conductive metal oxides in particular exhibit high infrared (IR) reflectivity and are commonly applied as low-emissivity coatings (low-e coatings). Indium tin oxide (ITO) coatings are the state-of-the-art application, though indium is a rare and expensive resource. This work demonstrates that aluminum-doped zinc oxide (AZO) can be a suitable alternative to ITO for IR-reflection applications. The AZO synthesized here exhibits a better emissivity for use as a roofing membrane coating for buildings than commercially available ITO coatings. The AZO particles forming the reflective coating are generated via solvothermal synthesis routes and attain high conductivity and IR reflectivity without any further thermal post-treatment. Different synthesis parameters were studied, and their effects on both the conductive and the optical properties of the AZO nanoparticles were evaluated. To this end, a series of characterization methods, especially 27Al nuclear magnetic resonance spectroscopy (27Al-NMR), was applied for a deeper insight into the particles' structure to understand the differences in conductivity and optical properties. The optimized AZO nanoparticles were coated on flexible transparent textile-based roofing membranes and tested as low-e coatings. The membranes demonstrated higher thermal reflectance than commercial ITO materials, with an emissivity value lowered by 16%.
Controlled adhesion of HUVEC on polyelectrolyte multilayers by regulation of coating conditions
(2021)
Adhesion of host cells on the surface of implants is necessary for a healthy ingrowth of the implanted material. One possibility of surface modification is the coating of the implant with a second material with advantageous physical–chemical surface properties for the biological system. The coverage with blood proteins takes place immediately after implantation. It is followed by host–cell interaction on the surface. In this work, the effect of polyelectrolyte multilayer coatings (PEMs) on adhesion and activity of human umbilical vein endothelial cells (HUVECs) was studied. The PEMs were formed from poly(styrenesulfonate) (PSS) and poly(allylamine hydrochloride) (PAH) from solutions with different concentrations of NaCl varying between 0 and 1.0 M. The adhesion of HUVEC and their viability on the PEM is related to the amount of adsorbed proteins from the applied cell growth medium. The amount of adsorbed proteins is controlled not only by the surface charge but also by the internal excess charge of the PEM. The internal excess charge of the PEM was controlled by changing the electrolyte concentration in the deposition solutions.
Science-based analysis for climate action: how HSBC Bank uses the En-ROADS climate policy simulation
(2021)
In 2018, the Intergovernmental Panel on Climate Change (IPCC, 2018) found that rapid decarbonization and net negative greenhouse gas (GHG) emissions by mid-century are required to "hold the increase in global average temperature to well below 2°C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5°C," as stipulated by the Paris Agreement (UNFCCC, 2015, p. 2). Meeting these goals reduces physical climate-related risks from, for example, sea-level rise, ocean acidification, extreme weather, water shortages, declining crop yields, and other impacts. These impacts threaten our economy, security, health, and lives.
At the same time, policies to mitigate these harms by rapidly reducing GHG emissions can create transition risks for businesses - for example, stranded assets and loss of market value for fossil fuel producers and firms dependent on fossil energy (Carney, 2019). Rapid decarbonization requires an unprecedented energy transition (IEA, 2021a) driven by and affecting economic players including businesses, asset managers, and investors in all sectors and all countries (Kriegler et al., 2014).
However, GHG emissions are not falling rapidly enough to meet the goals of the Paris Agreement (Holz et al., 2018). The UNFCCC (2021) found that the emissions reductions pledged by all nations as of early 2021 "fall far short of what is required, demonstrating the need for Parties to further strengthen their mitigation commitments under the Paris Agreement" (2021, p. 5). Businesses are faring no better. Despite high-profile calls to action from influential firms such as BlackRock (Fink, 2018, 2021), corporate action to meet climate goals has thus far fallen short (e.g. Right, 2019, an analysis of the German DAX 30 companies' emissions targets by the NGO "right."). Instead of implementing climate strategies that might mitigate the risks, managers are often caught up in "firefighting" and capability traps that erode the resources needed for ambitious climate action (Sterman, 2015). Firms may also exaggerate environmental accomplishments, leading to greenwashing (Lyon and Maxwell, 2011); implement policies that are vague, rely on unproven offsets, or are not climate neutral (e.g. Sterman et al., 2018); or simply take no action at all (Delmas and Burbano, 2011; Sterman, 2015).
Adding to the confusion are difficulties evaluating the effectiveness of different climate policies. Misperceptions include wait-and-see approaches (Dutt and Gonzalez, 2012; Sterman, 2008), underestimating time delays and ignoring the unintended consequences of policies (Sterman, 2008), and beliefs in "silver bullet" solutions (Gilbert, 2009; Kriegler et al., 2013; Shackley and Dütschke, 2012). These beliefs arise in part because the climate–energy system is a high-dimensional dynamic system characterized by long time delays, multiple feedback loops, and nonlinearities (Sterman, 2011), while even simple systems are difficult for people to understand (Booth Sweeney and Sterman, 2000; Cronin et al., 2009; Kapmeier et al., 2017). Although senior executives might receive briefings on climate change, simply providing more information does not necessarily lead to more effective action (Pearce et al., 2015; Sterman, 2011).
Alternatively, interactive approaches to learning about climate change and policies to mitigate it can trigger climate action (Creutzig and Kapmeier, 2020). Decision-makers require tools and methods grounded in science that enable them to learn for themselves how a low-carbon economy can be achieved and how climate policies condition physical and transition risks. The system dynamics climate–energy simulation En-ROADS (Energy-Rapid Overview and Decision Support; Jones et al., 2019b), codeveloped by the climate think-tank Climate Interactive and the MIT Sloan Sustainability Initiative, provides such a tool.
Here we show how En-ROADS helps HSBC Bank U.S.A., the American subsidiary of U.K.-based multinational financial services company HSBC Holdings plc, focus its global sustainability strategy on activities with higher impact and relevance, communicate and implement the strategy, understand transition risks, and better align the strategy with global climate goals. We show how the versatility and interactivity of En-ROADS increases its reach throughout the organization. Finally, we discuss challenges and lessons learned that may be helpful to other organizations.
Debiasing als Managementtool
(2021)
Companies exist by making one decision after another: Should we invest here or rather there? Should this employee or that one be promoted to the management position? And what price is appropriate for the new service? Whether a company prospers therefore depends on the quality of its countless decisions: companies that consistently decide more wisely than their competitors will hold their own in the market.
Metalworking fluids (MWFs) are widely used to cool and lubricate metal workpieces during processing to reduce heat and friction. Extending an MWF's service life is important from both an economic and an ecological point of view. Knowledge about the effects of processing conditions on the aging behavior, together with reliable analytical procedures, is required to properly characterize the aging phenomena. Whereas the literature so far offers no quantitative estimations of aging effects on MWFs other than univariate ones based on single-parameter measurements, in the present study we present a simple spectroscopy-based set-up for the simultaneous monitoring of three quality parameters of an MWF and a mathematical model relating them to the most influential process factors relevant during use. For this purpose, the effects of MWF concentration, pH and nitrite concentration on the droplet size during aging were investigated by means of a response surface modelling approach. Systematically varied model MWFs were characterized using simultaneous measurements of absorption coefficients µa and effective scattering coefficients µ's. Droplet size was determined via dynamic light scattering (DLS) measurements. Droplet size showed a non-linear dependence on MWF concentration and pH, but the nitrite concentration had no significant effect. pH and MWF concentration showed a strong synergistic effect, which indicates that MWF aging is a rather complex process. The observed effects were similar for the DLS and the µ's values, which shows the comparability of the methodologies. The correlation between the methods was R²c = 0.928 and R²P = 0.927, as calculated by a partial least squares regression (PLS-R) model. Furthermore, using µa, it was possible to generate a predictive PLS-R model for MWF concentration (R²c = 0.890, R²P = 0.924). Simultaneous determination of the pH based on µ's is possible with good accuracy (R²c = 0.803, R²P = 0.732).
With prior knowledge of the MWF concentration from the µa-PLS-R model, the predictive capability of the µ's-PLS-R model for pH was refined (10 wt%: R²c = 0.998, R²P = 0.997). This highlights the relevance of the combined measurement of µa and µ's. Recognizing the synergistic nature of the effects of MWF concentration and pH on the droplet size is an important prerequisite for extending the service life of an MWF in the metalworking industry. The presented method can be applied as an in-process analytical tool that allows one to compensate for aging effects during use of the MWF by taking appropriate corrective measures, such as pH correction or adjustment of the concentration.
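The R²c (calibration) and R²P (prediction) figures quoted above are coefficients of determination; for reference, the metric can be computed as follows (an illustrative sketch, not the authors' code).

```python
def r_squared(measured, predicted):
    """Coefficient of determination: the fraction of variance in the
    measured values explained by the model's predictions."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1 - ss_res / ss_tot
```

A value near 1 (such as the reported R²P = 0.997 for pH with known MWF concentration) means the model's predictions deviate only marginally from the reference measurements.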