Refine
Year of publication
- 2022 (287)
Document Type
- Journal article (140)
- Conference proceeding (102)
- Book chapter (25)
- Book (5)
- Working Paper (5)
- Doctoral Thesis (3)
- Anthology (3)
- Patent / Standard / Guidelines (2)
- Report (2)
Is part of the Bibliography
- yes (287)
Institute
- ESB Business School (108)
- Informatik (85)
- Technik (43)
- Life Sciences (31)
- Texoversum (14)
- Zentrale Einrichtungen (5)
Publisher
- Springer (40)
- Elsevier (30)
- IEEE (21)
- MDPI (21)
- Hochschule Reutlingen (13)
- Center for Promoting Education and Research (6)
- Wiley (5)
- dfv Mediengruppe (5)
- Association for Computing Machinery (4)
- Association for Information Systems (4)
Chinas Subsahara-Afrika-Engagement : Chancen und Herausforderungen für die bayerische Wirtschaft
(2022)
Africa is an attractive market, including for Bavarian companies
The study highlights the attractiveness of the future markets in Sub-Saharan Africa for the Bavarian economy. Despite some challenges and markets that are currently still small but profitable, many countries in the region are fundamentally attractive for business engagement because of their enormous growth dynamics.
The People's Republic of China has also recognized Africa's importance and has been increasingly active in Africa politically and economically since 2000. The initiatives under the Forum on China-Africa Cooperation (FOCAC) and the Belt and Road Initiative have prompted Chinese firms to step up their engagement in Africa since 2013, often accompanied by massive subsidies and political backing. Chinese companies now dominate the infrastructure sector, a fact that non-Chinese companies should accept. The last FOCAC conference in Dakar in 2021, however, showed a clear decline in credit and financing commitments, which was confirmed both by statistical data recording a decline in financing flows since 2016 and by the expert interviews.
Ways of dealing with Chinese competition, as well as starting points for business relationships, must be found. Based on expert interviews, the study analyzes the multilayered implications and options for action of the Bavarian economy in Sub-Saharan Africa against the backdrop of the Chinese economic presence. The analysis fundamentally distinguishes whether the Chinese firms are competitors or customers and potential partners.
– On the one hand, Chinese firms are often competitors. In infrastructure projects in particular, they have competitive advantages through low prices and favorable financing, frequently provided by Chinese banks such as the Exim Bank. At the same time, loans from Chinese banks are granted without complex conditions for the African governments and clients. Alongside these competitive advantages, competitive disadvantages were also identified: infrastructure projects and Chinese products often reveal low quality, which in large infrastructure projects is frequently a consequence of the low price. In addition, Chinese companies still offer few after-sales services.
– On the other hand, Chinese companies can also be customers and partners of Bavarian companies. In the tendering business, especially in infrastructure, the chances of non-Chinese firms winning depend largely on the source of financing. Where China provides the financing, supplies by non-Chinese firms occur in isolated cases at best. In international tenders by the African Development Bank or the World Bank, the chances for non-Chinese firms are good if quality is weighted more heavily than price in the decision criteria. The best opportunities arise through European development banks or private-sector financing. For business success it is therefore important to select the right tenders and financing sources.
For sales success in business with Chinese companies in Sub-Saharan Africa, an approach on four levels should ideally be pursued.
– First, a corporate presence in China at the company headquarters is important, since in most cases procurement for projects in Sub-Saharan Africa takes place there. Bavarian companies that are on the ground in China should therefore include the Africa business in their discussions with Chinese partners. A company's own subsidiary in China, or regular visits by top management to existing and potential partners, helps here.
– Second, an on-site presence in the most important markets of Sub-Saharan Africa is advantageous for several reasons. First, to cooperate with the local branches of Chinese construction firms.
– Second, a presence in the African markets offers the opportunity to convince African clients – mostly state institutions – of the advantages of a Bavarian or German project share.
– Third, a local presence makes it easier for Bavarian companies to address locally based Chinese traders.
In operational business in the markets of Sub-Saharan Africa, sales to Chinese firms and partners should ideally be handled by employees with China experience and Chinese language skills. The appropriate communication tools, such as WeChat, should also be used. Moreover, building a relationship of trust with the Chinese firms is of great importance, as it increases the chances of further business relationships.
In summary, the presence of Chinese firms creates challenges for the Africa business of Bavarian companies, but business potential also opens up at the same time. It is important – depending on the industry and the company's set-up – to take the success factors mentioned into account. Business opportunities then certainly exist, whether in competition with Chinese companies or as a partner and supplier of Chinese firms.
The book examines the implementation of the Belt and Road Initiative (BRI) in East Africa. The BRI is regarded as China's central geopolitical and geo-economic undertaking in the era of President Xi Jinping. The work aims to help close several research gaps, such as the lack of depth in studies of individual BRI projects and the insufficient attention paid to processing narratives in the participating countries. The guiding question is to what extent the BRI is a political or hegemonic project of the state-civil society complex steered by the CPC in East Africa. To answer it, databases of international organizations and policy documents are evaluated. In addition, the author conducts a qualitative content analysis of newspaper articles from local media houses in Ethiopia, Kenya, and Tanzania in order to examine three infrastructure projects. The work makes clear that the BRI contributes to increasing connectivity in East Africa. At the same time, the intensification of economic relations and the implementation of the infrastructure projects in East Africa entail numerous consequences and outline a hegemonic project.
Handling complexity in modern software engineering : editorial introduction to issue 32 of CSIMQ
(2022)
The Internet and related digital technologies, such as the Internet of Things (IoT), cognition and artificial intelligence, data analytics, services computing, cloud computing, mobile systems, collaboration networks, and cyber-physical systems, are both strategic drivers and enablers of modern digital platforms with fast-evolving ecosystems of intelligent services for digital products. This issue of CSIMQ presents three recent articles on modern software engineering. First, we focus on continuous software development and place it in the context of software architectures and digital transformation. The first contribution is followed by a description of the basis of specific security requirements and adequate digital monitoring mechanisms. Finally, we present a practical example of the digital management of livestock farming.
Intraoperative imaging can assist neurosurgeons in delineating brain tumours and other surrounding brain structures. Interventional ultrasound (iUS) is a convenient modality with fast scan times. However, iUS data may suffer from noise and artefacts which limit their interpretation during brain surgery. In this work, we use two deep learning networks, namely UNet and TransUNet, to perform automatic and accurate segmentation of the brain tumour in iUS data. Experiments were conducted on a dataset of 27 iUS volumes. The outcomes show that combining a transformer with UNet is advantageous, providing efficient segmentation by modelling long-range dependencies within each iUS image. In particular, the enhanced TransUNet was able to predict cavity segmentation in iUS data with an inference rate of more than 125 FPS. These promising results suggest that deep learning networks can be successfully deployed to assist neurosurgeons in the operating room.
Glioblastomas are the most aggressive, fast-growing primary brain cancers, originating in the glial cells of the brain. Accurate identification of the malignant brain tumor and its sub-regions is still one of the most challenging problems in medical image segmentation. The Brain Tumor Segmentation Challenge (BraTS) has been a popular benchmark for automatic brain glioblastoma segmentation algorithms since its initiation. This year, the BraTS 2021 challenge provides the largest multi-parametric MRI (mpMRI) dataset, comprising 2,000 pre-operative patients. In this paper, we propose a new aggregation of two deep learning frameworks, namely DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively, on the BraTS 2021 validation set, ranking us among the top ten teams. These experimental findings provide evidence that the method can be readily applied clinically, thereby aiding in brain cancer prognosis, therapy planning, and therapy response monitoring. A docker image for reproducing our segmentation results is available online at https://hub.docker.com/r/razeineldin/deepseg21.
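For reference, the Dice similarity score reported above measures the overlap of a predicted and a ground-truth mask. This is a generic sketch of the metric, not code from the described pipeline:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping 4x4 masks
a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # 4 voxels
b = np.zeros((4, 4)); b[1:3, 1:4] = 1   # 6 voxels, 4 of them shared
print(dice_score(a, b))  # 2*4 / (4+6) = 0.8
```

The challenge additionally reports the Hausdorff distance, which instead captures the worst-case boundary deviation between the two masks.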
Purpose
Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal obstacle to applying these methods in clinical practice.
Methods
In this study, we propose a NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent.
Results
NeuroXAI has been applied to two applications of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN.
Conclusion
Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI.
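As a rough illustration of what a gradient-style visualization map computes, the sketch below approximates per-input sensitivity of a black-box classifier by finite differences. This is a toy, model-agnostic stand-in, not one of NeuroXAI's seven methods, which live in the linked repository:

```python
import numpy as np

def sensitivity_map(predict, x, class_idx, eps=1e-3):
    """Finite-difference approximation of |d score / d x| per input element.
    High values mark inputs that most influence the class score."""
    base = predict(x)[class_idx]
    sal = np.zeros_like(x, dtype=float)
    flat = sal.ravel()
    for i in range(x.size):
        x_pert = x.copy().ravel()
        x_pert[i] += eps                     # perturb one input element
        score = predict(x_pert.reshape(x.shape))[class_idx]
        flat[i] = abs(score - base) / eps    # sensitivity of the class score
    return sal

# Toy linear "classifier": scores = W @ x, so the saliency equals |W[class]|
W = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])
predict = lambda x: W @ x
x = np.array([0.5, 0.5, 0.5])
print(sensitivity_map(predict, x, class_idx=0))  # approx. [1., 0., 2.]
```

For a real CNN one would use backpropagated gradients rather than finite differences, but the interpretation of the resulting attention map is the same.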
Purpose
Artificial intelligence (AI), in particular deep learning (DL), has achieved remarkable results for medical image analysis in several applications. Yet the lack of human-like explanations of such systems is considered the principal obstacle to utilizing these methods in clinical practice (Yang, Ye, & Xia, 2022).
Methods
Explainable Artificial Intelligence (XAI) provides a human-explainable and interpretable description of the “black-box” nature of DL (Gulum, Trombley, & Kantardzic, 2021). An effective XAI diagnosis generator, namely NeuroXAI (refer to Fig. 1), has been developed to extract 3D explanations from convolutional neural networks (CNN) models of brain gliomas (Zeineldin et al., 2022). By providing visual justification maps, NeuroXAI can help make DL models transparent and thus increase the trust of medical experts.
Results
NeuroXAI has been applied to two applications of the most widely investigated problems in brain imaging analysis, i.e. image classification and segmentation using magnetic resonance imaging (MRI). Visual attention maps of multiple XAI methods have been generated and compared for both applications, which could help to provide transparency about the performance of DL systems.
Conclusion
NeuroXAI helps to understand the prediction process of 3D CNN networks for brain glioma using human-understandable explanations. Results revealed that the investigated DL models behave in a logical human-like manner and can improve the analytical process of the MRI images systematically. Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist medical professionals in the detection and diagnosis of brain tumors. NeuroXAI code is publicly accessible at https://github.com/razeineldin/NeuroXAI
The general conclusion of climate change studies is the necessity of eliminating net CO2 emissions in general and from the electric power systems in particular by 2050. The share of renewable energy is increasing worldwide, but due to the intermittent nature of wind and solar power, a lack of system flexibility is already hampering the further integration of renewable energy in some countries. In this study, we analyze if and how combinations of carbon pricing and power-to-gas (PtG) generation in the form of green power-to-hydrogen followed by methanation (which we refer to as PtG throughout) using captured CO2 emissions can provide transitions to deep decarbonization of energy systems. To this end, we focus on the economics of deep decarbonization of the European electricity system with the help of an energy system model. In different scenario analyses, we find that a CO2 price of 160 €/t (by 2050) is on its own not sufficient to decarbonize the electricity sector, but that a CO2 price path of 125 (by 2040) up to 160 €/t (by 2050), combined with PtG technologies, can lead to an economically feasible decarbonization of the European electricity system by 2050. These results are robust to higher than anticipated PtG costs.
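The leverage of a CO2 price on generation cost can be sketched with simple arithmetic: the price adds a cost per MWh proportional to a technology's emission factor. The emission factors below are illustrative textbook values, not figures from the study:

```python
def carbon_cost_adder(co2_price_eur_per_t, emission_factor_t_per_mwh):
    """Extra generation cost (EUR/MWh) induced by a CO2 price."""
    return co2_price_eur_per_t * emission_factor_t_per_mwh

# Illustrative emission factors (t CO2 per MWh electric), assumed for this sketch:
lignite, ccgt = 1.1, 0.35
for price in (125, 160):  # the price path discussed above (EUR/t)
    print(f"{price} EUR/t: lignite +{carbon_cost_adder(price, lignite):.1f}, "
          f"gas CCGT +{carbon_cost_adder(price, ccgt):.2f} EUR/MWh")
```

The asymmetry between carbon-intensive and low-carbon technologies is what shifts the merit order; the study's point is that this shift alone does not suffice without PtG providing flexibility.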
Textbook on the CAD software Creo Parametric and on product data management with Windchill.
3D solid modeling, 3D surface modeling, sheet-metal modeling, assembly and drawing creation, definition of standard parts, creation of animations and dynamic analyses.
Methods for handling large assemblies and for flexible modeling, "top-down" and "bottom-up" design variants, organization of design projects using skeleton techniques.
New: design of and with multi-body objects, frame design in the profile environment (AFX), intelligent fasteners (IFX), Live Simulation, and generative design.
The world population is growing and alternative ways of satisfying the increasing demand for meat are being explored, such as using animal cells for the fabrication of cultured meat. Edible biomaterials are required as supporting structures. Hence, we chose agarose, gellan and a xanthan-locust bean gum blend (XLB) as support materials with pea and soy protein additives and analyzed them regarding material properties and biocompatibility. We successfully built stable hydrogels containing up to 1% pea or soy protein. Higher amounts of protein resulted in poor handling properties and unstable gels. The gelation temperature range for agarose and gellan blends is between 23–30 °C, but for XLB blends it is above 55 °C. A change in viscosity and a decrease in the swelling behavior was observed in the polysaccharide-protein gels compared to the pure polysaccharide gels. None of the leachates of the investigated materials had cytotoxic effects on the myoblast cell line C2C12. All polysaccharide-protein blends evaluated turned out as potential candidates for cultured meat. For cell-laden gels, the gellan blends were the most suitable in terms of processing and uniform distribution of cells, followed by agarose blends, whereas no stable cell-laden gels could be formed with XLB blends.
We propose a novel technique to compensate for the effects of R-C / gm-C time-constant (TC) errors due to process variation in continuous-time delta-sigma modulators. Local TC error compensation factors are shifted around in the modulator loop to positions where they can be implemented efficiently with tunable circuit structures, such as current-steering digital-to-analog converters (DAC). This approach constitutes an alternative or supplement to existing compensation techniques, including capacitor or gm tuning. We apply the proposed technique to a third-order, single-bit, low-pass continuous-time delta-sigma modulator in cascaded integrator feedback structure. A feedback path tuning scheme is derived analytically and confirmed numerically using behavioral simulations. The modulator circuit was implemented in a 0.35-μm CMOS process using an active feedback coefficient tuning structure based on current-steering DACs. Post-layout simulations show that with this tuning structure, constant performance and stable operation can be obtained over a wide range of TC variation.
Verification of an active time constant tuning technique for continuous-time delta-sigma modulators
(2022)
In this work we present a technique to compensate for the effects of R-C / gm-C time-constant (TC) errors due to process variation in continuous-time delta-sigma modulators. Local TC error compensation factors are shifted around in the modulator loop to positions where they can be implemented efficiently with finely tunable circuit structures, such as current-steering digital-to-analog converters (DAC). We apply our technique to a third-order, single-bit, low-pass continuous-time delta-sigma modulator in cascaded integrator feedback structure, implemented in a 0.35-μm CMOS process. A tuning scheme for the reference currents of the feedback DACs is derived as a function of the individual TC errors and verified by circuit simulations. We confirm the tuning technique experimentally on the fabricated circuit over a TC parameter variation range of ±20%. Stable modulator operation is achieved for all parameter sets. The measured performances satisfy the expectations from our theoretical calculations and circuit-level simulations.
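The compensation idea can be illustrated with a one-stage toy model. Assuming, as a simplification of the paper's loop analysis, that an integrator's gain scales inversely with its time constant while the feedback DAC reference current scales the loop coefficient linearly, scaling the DAC current by the same factor as the TC error restores the nominal coefficient:

```python
def effective_coeff(c_nom, tc_error, dac_scale):
    """Effective loop coefficient of one integrator stage:
    the RC (or gm/C) error (1 + tc_error) divides the integrator gain,
    while the feedback DAC reference current multiplies it."""
    return c_nom * dac_scale / (1.0 + tc_error)

eps = 0.2  # +20% time-constant error, the edge of the range verified above
print(effective_coeff(1.0, eps, dac_scale=1.0))        # ~0.833, uncompensated
print(effective_coeff(1.0, eps, dac_scale=1.0 + eps))  # 1.0, restored
```

In the actual modulator the tuning factors of the individual feedback paths are coupled through the loop filter, which is why the paper derives them analytically rather than per stage.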
With the digital transformation, companies will experience a change that focuses on shaping the organization into an agile organizational form. In today's competitive and fast-moving business environment, it is necessary to react quickly to changing market conditions, and agility represents a promising option for overcoming these challenges. The path to an agile organization is a development process that requires consideration of countless levels of the enterprise. This paper examines the impact of digital transformation on agile working practices and the benefits that can be achieved through technology. To cope with today's so-called VUCA (volatility, uncertainty, complexity, and ambiguity) world, agile ways of working can be applied, but project management requires adaptation. In the qualitative study, expert interviews were conducted and analyzed using the grounded theory method. As a result, a model can be presented that shows the influencing factors and potentials of agile management in the context of the digital transformation of medium-sized companies.
Mobile apps for sustainability in grocery shopping: increasing acceptance through gamification
(2022)
Sustainability has become an important topic in social sciences research as well as in the societal debate. Research generally indicates a high sensitivity to sustainability issues in broad parts of society; however, a change of consumption habits can hardly be observed. It can be argued that technology, such as mobile apps, can play an important role in fostering more sustainable behaviors and consumption habits, as apps facilitate such behaviors, bring transparency to an unclear field, and reduce complexity. Our research hence addresses an important research gap, especially as currently existing apps show a lack of functionalities and UX. Using a Design Science Research (DSR) approach applying Chou's Octalysis framework, we systematically analyzed eight apps in the field of sustainability and two general gamification apps as reference points, complementing our findings with issues discussed in the literature, and could identify a broad range of functionalities. This comprehensive analysis allowed us to develop an initial mockup of a potential app, which was then tested with a group of ten users using a semi-structured interview approach. Our findings contribute to knowledge by highlighting the importance of user experience for the acceptance of mobile apps, as well as by showcasing how gamification can contribute to a sustained use of mobile apps in this specific context.
Adoption of artificial intelligence (AI) has risen sharply in recent years, but many firms are not successful in realising the expected benefits or even terminate projects before completion. While a number of previous studies highlight challenges in AI projects, the critical factors that lead to project failure are mostly unknown. The aim of this study is therefore to identify distinct factors that are critical for the failure of AI projects. To address this, interviews with experts in the field of AI from different industries were conducted and the results analyzed using qualitative analysis methods. The results show that both organizational and technological issues can cause project failure. Our study contributes to knowledge by reviewing previously identified challenges in terms of their criticality for project failure based on new empirical data, as well as by identifying previously unknown factors.
For collision and obstacle avoidance as well as trajectory planning, robots usually generate and use a simple 2D costmap without any semantic information about the detected obstacles. Thus a robot's path planning will simply adhere to an arbitrarily large safety margin around obstacles. A more optimal approach is to adjust this safety margin according to the class of an obstacle. For class prediction, an image-processing convolutional neural network can be trained. One of the problems in the development and training of any neural network is the creation of a training dataset. The first part of this work describes methods and free open-source software allowing fast generation of annotated datasets. Our pipeline can be applied to various objects and environment settings and is extremely easy for anyone to use for synthesising training data from 3D source data. We create a fully synthetic industrial environment dataset with 10k physically-based rendered images and annotations. Our dataset and sources are publicly available at https://github.com/LJMP/synthetic-industrial-dataset. Subsequently, we train a convolutional neural network with our dataset for costmap safety class prediction. We analyse different class combinations and show that learning the safety classes end-to-end directly with a small dataset, instead of using a class lookup table, improves the quantity and precision of the predictions.
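The class-dependent safety margin idea can be sketched as a costmap inflation step: each detected obstacle cell is inflated by a margin chosen from its predicted class. Class ids and margin values below are hypothetical, and a real planner would use graded rather than binary costs:

```python
import numpy as np

def inflate_costmap(obstacle_grid, class_grid, margins, resolution=0.1):
    """Inflate obstacles with a safety margin chosen per obstacle class.
    margins maps class id -> margin in metres (a hypothetical lookup)."""
    h, w = obstacle_grid.shape
    cost = np.zeros((h, w), dtype=np.uint8)
    ys, xs = np.nonzero(obstacle_grid)
    for y, x in zip(ys, xs):
        r = int(round(margins[class_grid[y, x]] / resolution))  # margin in cells
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        cost[y0:y1, x0:x1] = 255  # lethal obstacle plus inflated region
    return cost

grid = np.zeros((10, 10), dtype=int); grid[5, 5] = 1
cls = np.zeros((10, 10), dtype=int); cls[5, 5] = 2    # e.g. class 2 = "human"
margins = {0: 0.0, 1: 0.1, 2: 0.3}                    # metres, illustrative
print(inflate_costmap(grid, cls, margins).sum() // 255)  # 49 cells: a 7x7 block
```

The end-to-end variant studied in the paper replaces the `margins` lookup table by letting the network predict the safety class directly.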
There have been substantial research efforts in recent years on algorithms to improve continuous and automated assessment of various health-related questions. This paper addresses the deployment gap between those improving algorithms and their usability in care and mobile health applications. In practice, most algorithms require significant and founded technical knowledge to be deployed at home or to support healthcare professionals; the digital participation of persons in need of care therefore lacks a usable interface to the current technological advances. In this paper, we propose deploying algorithms taken from research as web-based microservices following the common approach of a RESTful service, to bridge the gap and make algorithms accessible to caregivers and patients without technical knowledge and extended hardware capabilities. We address implementation details, the interpretation and realization of guidelines, and privacy concerns using our self-implemented example, and we discuss further usability guidelines and our approach to them.
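A minimal sketch of such a RESTful wrapper, using only the Python standard library, might look as follows. The `analyse` function and its heart-rate payload are hypothetical placeholders, not an algorithm from the paper:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def analyse(payload):
    """Placeholder research algorithm: summarize a heart-rate series.
    (Hypothetical; the paper does not prescribe a concrete algorithm.)"""
    samples = payload["heart_rate"]
    return {"mean": sum(samples) / len(samples), "max": max(samples)}

class AlgorithmService(BaseHTTPRequestHandler):
    """Minimal RESTful wrapper: POST a JSON body, receive a JSON result."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(analyse(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8080), AlgorithmService).serve_forever()  # start the service
print(analyse({"heart_rate": [60, 70, 80]}))  # {'mean': 70.0, 'max': 80}
```

Separating the algorithm (`analyse`) from the transport layer keeps the research code testable on its own while exposing it to non-technical users through a plain HTTP interface.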
The exercises and case studies in this workbook allow targeted review and deepening of the central chapters and topics of the textbook. For each tool it offers exercises and questions from practice. In addition, complex use cases from well-known German and international companies such as Ernst & Young, HUGO BOSS, Alfred Kärcher, and Bayer are provided.
External charging infrastructure can be supplied from the power grid of a public property in compliance with the law. Until now, the requirement was to realize the supply via a separate (new) grid connection point. The solution presented here is ecologically, economically, and technically far more favorable and serves as a model for the further development of state-owned parking space throughout Baden-Württemberg. A virtual power plant enables community-serving operation.
Monitoring tautomerization of single hypericin molecules in a tunable optical λ/2 microcavity
(2022)
Hypericin tautomerization, which involves the migration of labile protons, is believed to be the primary photophysical process relevant to its light-activated antiviral activity. Although individual tautomers are difficult to isolate, tautomerization can be observed directly in single-molecule experiments. We show that the tautomerization of single hypericin molecules in free space appears as an abrupt flipping of the image pattern accompanied by fluorescence intensity fluctuations, which are not correlated with lifetime changes. Moreover, the study can be extended to a λ/2 Fabry–Pérot microcavity. The modification of the local photonic environment by a microcavity is well simulated with a theoretical model that shows good agreement with the experimental data. Inside a microcavity, the excited-state lifetime and fluorescence intensity of single hypericin molecules are correlated, and a distinct jump of the lifetime and fluorescence intensity reveals the temporal behavior of the tautomerization with high sensitivity and high temporal resolution. The observed changes are also consistent with time-dependent density functional theory calculations. Our approach paves the way to monitoring and even controlling reactions for a wider range of molecules at the single-molecule level.
Switched reluctance motors are particularly attractive due to their simple structure. Controlling this machine type requires the instants at which to switch the currents in the motor phases in an appropriate sequence. These switching instants are determined either with a position sensor or from signals generated by a sensorless method. A very simple sensorless method uses the switching frequency of the hysteresis controllers used for phase current control. This paper presents, first, an automatic commissioning method for this sensorless approach and, second, a startup procedure, thus advancing the approach towards industrial application.
This article illustrates a method for sensorless control of a switched reluctance motor. The time instants for switching between the working phases are determined by evaluating the switching frequency of the hysteresis current controllers for appropriately selected sensing phases. This enables a simple and cost-efficient implementation. The method is compared with a pulse-injection method in terms of efficiency and resolution.
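The underlying relationship can be sketched for an idealized phase winding, neglecting resistance and back-EMF: a hysteresis controller with band ΔI driving an inductance L from a DC link V switches at f = V / (2·L·ΔI). Because the phase inductance varies with rotor position, the measured switching frequency encodes that position. Values below are illustrative:

```python
def switching_frequency(v_dc, inductance, hysteresis_band):
    """Hysteresis controller switching frequency for an idealized phase
    (resistance and back-EMF neglected): f = V / (2 * L * dI).
    Since L depends on rotor position, f carries position information."""
    return v_dc / (2.0 * inductance * hysteresis_band)

# Illustrative values: the frequency drops as the rotor approaches alignment
for L in (5e-3, 10e-3, 20e-3):  # unaligned -> aligned inductance (H)
    print(f"L = {L * 1e3:.0f} mH  ->  f = {switching_frequency(48, L, 0.5):.0f} Hz")
```

The commissioning and startup procedures described above calibrate exactly this frequency-to-position mapping for a given machine.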
Turning students into Industry 4.0 entrepreneurs: design and evaluation of a tailored study program
(2022)
Startups in the field of Industry 4.0 could be a huge driver of innovation for many industry sectors such as manufacturing. However, there is a lack of education programs to ensure a sufficient number of well-trained founders and thus a supply of such startups. This study therefore presents the design, implementation, and evaluation of a university course tailored to the characteristics of Industry 4.0 entrepreneurship. Educational design-based research was applied with a focus on content and teaching concept. The study program was first implemented in 2021 at a German university of applied sciences with 25 students, of whom 22 participated in the evaluation. The evaluation was conducted with a pretest–posttest design targeting three areas: (1) knowledge about the application domain, (2) entrepreneurial intention, and (3) psychological characteristics. Entrepreneurial intention was measured based on the theory of planned behavior; psychological characteristics were measured via personality traits associated with entrepreneurship. Considering the study context and its limited external validity, the results show that a university course can improve participants' knowledge of this particular area. In addition, perceived behavioral control of starting an Industry 4.0 startup was enhanced. However, the results showed no significant effects on psychological characteristics.
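A paired pretest-posttest comparison of the kind described above can be sketched as follows; the scores are hypothetical illustrations, not the study's data:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Paired t statistic for a pretest-posttest design: mean of the
    per-participant differences divided by its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical knowledge scores for six participants (pre vs. post course):
pre  = [3, 4, 2, 5, 3, 4]
post = [5, 6, 4, 6, 5, 5]
print(round(paired_t(pre, post), 2))
```

The resulting t value would then be compared against the t distribution with n − 1 degrees of freedom to decide significance, which is how "no significant effects" is established for the psychological characteristics.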
Background
Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integration of intraoperative surgical data and their analysis with machine learning methods to leverage the potential of this data in analogy to Radiomics and Genomics.
Methods
We defined Surgomics as the entirety of surgomic features that are process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources like endoscopic videos, vital sign monitoring, medical devices and instruments and respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers for rating the features’ clinical relevance and technical feasibility.
Results
In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was “surgical skill and quality of performance” for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was “Instrument” (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were “intraoperative adverse events”, “action performed with instruments”, “vital sign monitoring”, and “difficulty of surgery”.
Conclusion
Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
Near-data processing in database systems on native computational storage under HTAP workloads
(2022)
Today’s Hybrid Transactional and Analytical Processing (HTAP) systems tackle ever-growing data volumes in combination with a mixture of transactional and analytical workloads. While optimizing for aspects such as data freshness and performance isolation, they build on the traditional data-to-code principle and may trigger massive cold data transfers that impair the overall performance and scalability. Firstly, in this paper we show that Near-Data Processing (NDP) naturally fits in the HTAP design space. Secondly, we propose an NDP database architecture, allowing transactionally consistent in-situ execution of analytical operations in HTAP settings. We evaluate the proposed architecture in state-of-the-art key/value-stores and multi-versioned DBMS. In contrast to traditional setups, our approach yields robust, resource- and cost-efficient performance.
Current data-intensive systems suffer from poor scalability, as they transfer massive amounts of data to the host DBMS to process it there. Novel near-data processing (NDP) DBMS architectures and smart storage can provably reduce the impact of raw data movement. However, transferring the result set of an NDP operation may itself increase data movement and, thus, the performance overhead. In this paper, we introduce a set of in-situ NDP result-set management techniques, such as spilling, materialization, and reuse. Our evaluation indicates a performance improvement of 1.13× to 400×.
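The result-set reuse idea can be illustrated with a toy sketch (ours, not the paper's implementation): an NDP operation's result is materialized on first execution, and identical subsequent requests consume the materialized copy instead of re-running the scan and re-transferring the data.

```python
# Illustrative sketch of result-set materialization and reuse (not the
# paper's implementation): a cache that runs an NDP operation once and
# serves identical subsequent requests from the materialized result.
class ResultSetCache:
    def __init__(self):
        self._store = {}          # (operation, params) -> materialized result
        self.recomputations = 0   # how often the operation actually ran

    def execute(self, operation, params, compute):
        key = (operation, params)
        if key not in self._store:         # first call: run in situ, spill
            self._store[key] = compute(params)
            self.recomputations += 1
        return self._store[key]            # reuse: no re-execution

cache = ResultSetCache()
data = list(range(1_000_000))
range_sum = lambda p: sum(data[p[0]:p[1]])

for _ in range(5):                          # five identical analytical requests
    result = cache.execute("range_sum", (0, 500_000), range_sum)
# the expensive scan ran only once; four requests reused the result
```

The 400× improvement reported above corresponds to exactly this effect: once a result is materialized, repeated consumption amortizes the original execution cost.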
Over the last decades, a tremendous change toward using information technology in almost every daily routine of our lives can be perceived in our society, entailing an incredible growth of the data collected day by day in Web, IoT, and AI applications.
At the same time, magneto-mechanical HDDs are being replaced by semiconductor storage such as SSDs, equipped with modern Non-Volatile Memories, like Flash, which yield significantly faster access latencies and higher levels of parallelism. Likewise, the execution speed of processing units increased considerably as nowadays server architectures comprise up to multiple hundreds of independently working CPU cores along with a variety of specialized computing co-processors such as GPUs or FPGAs.
However, the burden of moving the continuously growing data to the best fitting processing unit is inherently linked to today’s computer architecture that is based on the data-to-code paradigm. In the light of Amdahl's Law, this leads to the conclusion that even with today's powerful processing units, the speedup of systems is limited since the fraction of parallel work is largely I/O-bound.
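The Amdahl's Law argument can be made concrete with a short calculation (illustrative numbers, not from the dissertation): if the serial, I/O-bound fraction of the work is fixed, adding processing units quickly stops helping.

```python
# Amdahl's Law: overall speedup S = 1 / ((1 - p) + p / n), where p is the
# parallelizable (compute) fraction and n the number of processing units.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# If 40% of the runtime is serial I/O (p = 0.6), even arbitrarily many
# cores cannot push the speedup beyond 1 / 0.4 = 2.5x.
print(round(amdahl_speedup(0.6, 64), 2))   # 2.44, close to the 2.5x ceiling
```

This is precisely why NDP attacks the I/O-bound fraction itself rather than adding more host compute.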
Therefore, throughout this cumulative dissertation, we investigate the paradigm shift toward code-to-data, formally known as Near-Data Processing (NDP), which relieves the contention on the I/O bus by offloading processing to intelligent computational storage devices, where the data is originally located.
Firstly, we identified Native Storage Management as the essential foundation for NDP due to its direct control of physical storage management within the database. Upon this, the interface is extended to propagate address mapping information and to invoke NDP functionality on the storage device. As the former can become very large, we introduce Physical Page Pointers as one novel NDP abstraction for self-contained immutable database objects.
Secondly, the on-device navigation and interpretation of data are elaborated. Therefore, we introduce cross-layer Parsers and Accessors as another NDP abstraction that can be executed on the heterogeneous processing capabilities of modern computational storage devices. Thereby, the compute placement and resource configuration per NDP request are identified as a major performance criterion. Our experimental evaluation shows an improvement in execution durations of 1.4× to 2.7× compared to traditional systems. Moreover, we propose a framework for the automatic generation of Parsers and Accessors on FPGAs to ease their application in NDP.
Thirdly, we investigate the interplay of NDP and modern workload characteristics like HTAP. Therefore, we present different offloading models and focus on an intervention-free execution. By propagating the Shared State with the latest modifications of the database to the computational storage device, it is able to process data with transactional guarantees. Thus, we extend the design space of HTAP with NDP by providing a solution that optimizes for performance isolation, data freshness, and the reduction of data transfers. In contrast to traditional systems, we experience no significant drop in performance when an OLAP query is invoked, but a steady throughput that is 30% faster.
Lastly, in-situ result-set management and consumption as well as NDP pipelines are proposed to achieve flexibility in processing data on heterogeneous hardware. As those produce final and intermediary results, we continue investigating their management and identified that on-device materialization comes at a low cost but enables novel consumption modes and reuse semantics. Thereby, we achieve significant performance improvements of up to 400× by reusing once-materialized results multiple times.
Human retinal pigment epithelial (RPE) cells express the transmembrane Ca2+-dependent Cl− channel bestrophin-1 (hBest1) of the plasma membrane. Mutations in the hBest1 protein are associated with the development of distinct pathological conditions known as bestrophinopathies. The interactions between hBest1 and plasma membrane lipids (cholesterol (Chol), 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and sphingomyelin (SM)) determine its lateral organization and surface dynamics, i.e., their miscibility or phase separation. Using the surface pressure/mean molecular area (π/A) isotherms, hysteresis and compressibility moduli (Cs^−1) of hBest1/POPC/Chol and hBest1/SM/Chol composite Langmuir monolayers, we established that the films are in an LE (liquid-expanded) or LE-LC (liquid-condensed) state, the components are well-mixed and the Ca2+ ions have a condensing effect on the surface molecular organization. Cholesterol causes a decrease in the elasticity of both films and a decrease in the ΔGmix^π values (reduction of phase separation) of hBest1/POPC/Chol films. For the hBest1/SM/Chol monolayers, the negative values of ΔGmix^π are retained and equalized with the values of ΔGmix^π in the hBest1/POPC/Chol films. Shifts in phase separation/miscibility by cholesterol can lead to changes in the structure and localization of hBest1 in the lipid rafts and its channel functions.
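For readers unfamiliar with the monolayer quantities cited above, the textbook definitions (standard Langmuir-film thermodynamics, not reproduced from the paper itself) are:

```latex
% Compressibility modulus from the surface pressure/area isotherm
C_s^{-1} = -A \left( \frac{\partial \pi}{\partial A} \right)_T

% Excess Gibbs energy of mixing (Goodrich method): A_{12} is the mean
% molecular area of the mixed film, A_1, A_2 those of the pure
% components at the same surface pressure \pi
\Delta G_{\mathrm{exc}}^{\pi} = N_A \int_0^{\pi}
  \left[ A_{12} - (x_1 A_1 + x_2 A_2) \right] \mathrm{d}\pi

% Total free energy of mixing adds the ideal-mixing term
\Delta G_{\mathrm{mix}}^{\pi} = \Delta G_{\mathrm{exc}}^{\pi}
  + RT \left( x_1 \ln x_1 + x_2 \ln x_2 \right)
```

Negative ΔGmix^π values, as reported above, indicate thermodynamically favorable mixing of the film components.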
Higher education institutions (HEIs) rely heavily on information technology (IT) to create innovations. Therefore, IT governance (ITG) is essential for education activities, particularly during the ongoing COVID-19 pandemic. However, the traditional concept of ITG is not fully equipped to deal with the current changes occurring in the digital age. Today's ITG requires an agile approach that can respond to disruptions in the HEI environment. Consequently, universities increasingly need to adopt agile strategies to ensure superior performance. This research proposes a conceptualization comprising three agile dimensions within the ITG construct: structures, processes, and relational mechanisms. An extensive qualitative evaluation of industry practice uncovered 46 agile governance mechanisms. Moreover, 16 professors rated these elements with respect to agile ITG at their HEIs in order to determine those most effective for HEIs. This led to the identification of four structure elements, seven processes, and seven relational mechanisms.
With significant advancements in digital technologies, firms find themselves competing in an increasingly dynamic business environment. Therefore, the logic of business decisions is based on the agility to respond to emerging trends in a proactive way. By contrast, traditional IT governance (ITG) frameworks rely on hierarchy and standardized mechanisms to ensure better business/IT alignment. This conflict leads to a call for an ambidextrous governance, in which firms alternate between stability and agility in their ITG mechanisms. Accordingly, this research aims to explore how agility might be integrated in ITG. A quantitative research strategy is implemented to explore the impact of agility on the causal relationship among ITG, business/IT alignment, and firm performance. The results show that the integration of agile ITG mechanisms contributes significantly to the explanation of business/IT alignment. As such, firms need to develop a dual governance model powered by traditional and agile ITG mechanisms.
A single-phase fixed-frequency operated power factor correction circuit with reduced switching losses is proposed. The circuit uses the combination of a boost converter with an added clamp-switch, a pulse wave shaping circuit, and a standard control IC to discharge the transistor's output capacitance prior to its turn-on. In this way, a very low-complexity control circuit implementation to reduce switching losses or even achieve complete zero-voltage switching without additional sensors is possible. Moreover, this operation method is achieved at a constant switching frequency, possibly simplifying the design of the EMI filter and the converter's inductor. Experimental test results for a 100 W prototype converter are presented to validate the feasibility of the proposed operating method and corresponding circuit structure.
The current paper proposes a design method for an active damping approach for LC output filters in a power stage for motor control with continuous output voltage. The power stage uses GaN-HEMTs and operates at switching frequencies in a range between 500 kHz and 1 MHz. The active damping of the output filter is achieved here by a feedback of the filter inductor current using a high-pass structure. The paper discusses the impact of this feedback on the system behavior and derives a corresponding design procedure.
The paper describes how eye-tracking can be used to explore electronic patient records (EPR) in a sterile environment. As an information display, we used a system that we developed for the presentation of patient data and for supporting surgical hand disinfection. The eye-tracking was performed using the Tobii Eye Tracker 4C, and the connection between the eye-tracker and the HTML website was realized using the Tobii EyeX Chrome Extension. Interactions with the EPR are triggered by fixations on icons. The interaction worked as intended, but test participants reported a high mental load while using the system.
There is still a great reliance on human expert knowledge during the analog integrated circuit sizing design phase due to its complexity and scale, with the result that the associated level of automation is very low. Current research shows that reinforcement learning is a promising approach for addressing this issue. Similarly, it has been shown that the convergence of conventional optimization approaches can be improved by transforming the design space from the geometrical domain into the electrical domain. Here, this design space transformation is employed as an alternative action space for deep reinforcement learning agents. The presented approach is based entirely on reinforcement learning, whereby agents are trained in the craft of analog circuit sizing without explicit expert guidance. After training and evaluating agents on circuits of varying complexity, their behavior when confronted with a different technology is examined, showing the applicability, feasibility, and transferability of this approach.
Early exposure makes the entrepreneur: how economics education in school influences entrepreneurship
(2022)
Many countries that seek to boost their economy share the goal of promoting entrepreneurship. Whereas there is ample research on the predictors of entrepreneurship during adulthood, we know little about how pre-adulthood experience influences entrepreneurship later in life. Using a natural experiment, this paper examines whether introducing economics classes in school enhances entrepreneurial behavior in adulthood. Our difference-in-differences approach exploits curricula reforms across German states that introduced compulsory economics education classes in secondary schools. Using information on school and labor market careers for more than 10,000 individuals from 1984 to 2019, we find that the reform increases students’ entrepreneurial activities by three percentage points. Examining gender differences, we find that economics classes equally benefit female and male students. Our results advance our understanding of how pre-adulthood experiences shape individuals’ entrepreneurial behavior.
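The difference-in-differences logic behind the estimate can be sketched with invented numbers (the study itself uses individual-level panel data from 1984 to 2019): the reform effect is the change in the treated states' outcome minus the change in the control states' outcome.

```python
# Minimal 2x2 difference-in-differences sketch with made-up numbers (not
# the paper's data): mean entrepreneurship rates by group and period.
treated_pre, treated_post = 0.05, 0.09   # states that introduced the classes
control_pre, control_post = 0.06, 0.07   # states that did not

# Subtracting the control-group trend removes common time shocks, leaving
# the reform effect under the parallel-trends assumption.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"DiD estimate: {did:.2f}")   # DiD estimate: 0.03 -> three percentage points
```

In practice the estimator is implemented as a regression with state and year fixed effects, but the intuition is the one above.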
Providing a digital infrastructure, platform technologies foster interfirm collaboration between loosely coupled companies, enabling the formation of ecosystems and building the organizational structure for value co-creation. Despite the known potential, the development of platform ecosystems creates new sources of complexity and uncertainty due to the involvement of various independent actors. For a platform ecosystem to succeed, it is essential that the platform ecosystem participants are aligned, coordinated, and given a common direction. Traditionally, product roadmaps have served these purposes during product development. A systematic mapping study was conducted to better understand how product roadmapping could be used in the dynamic environment of platform ecosystems. One result of the study is that there are hardly any concrete approaches for product roadmapping in platform ecosystems so far. However, many challenges on the topic are described in the literature from different perspectives. Based on the results of the systematic mapping study, a research agenda for product roadmapping in platform ecosystems is derived and presented.
Today, companies face increasing market dynamics, rapidly evolving technologies, and rapid changes in customer behavior. Traditional approaches to product development typically fail in such environments and require companies to transform their often feature-driven mindset into a product-led mindset. A promising first step on the way to a product-led company is a better understanding of how product planning can be adapted to the requirements of an increasingly dynamic and uncertain market environment in the sense of product roadmapping. The authors developed the DEEP product roadmap assessment tool to help companies evaluate their current product roadmap practices and identify appropriate actions to transition to a more product-led company. Objective: The goal of this paper is to gain insight into the applicability and usefulness of version 1.1 of the DEEP model. In addition, the benefits and implications of using the DEEP model in corporate contexts are explored. Method: We conducted a multiple case study in which participants were observed using the DEEP model. We then interviewed each participant to understand their perceptions of the DEEP model. In addition, we conducted interviews with each company's product management department to learn how the application of the DEEP model influenced their attitudes toward product roadmapping. Results: The study showed that by applying the DEEP model, participants better understood which artifacts and methods were critical to product roadmapping success in a dynamic and uncertain market environment. In addition, the application of the DEEP model helped convince management and other stakeholders of the need to change current product roadmapping practices. The application also proved to be a suitable starting point for the transformation in the participating companies.
Context: Companies that operate in the software-intensive business are confronted with high market dynamics, rapidly evolving technologies as well as fast-changing customer behavior. Traditional product roadmapping practices, such as fixed-time-based charts including detailed planned features, products, or services typically fail in such environments. Until now, the underlying reasons for the failure of product roadmaps in a dynamic and uncertain market environment are not widely analyzed and understood.
Objective: This paper aims to identify current challenges and pitfalls practitioners face when developing and handling product roadmaps in a dynamic and uncertain market environment.
Method: To reach our objective we conducted a grey literature review (GLR).
Results: Overall, we identified 40 relevant papers, from which we could extract 11 challenges of the application of product roadmapping in a dynamic and uncertain market environment. The analysis of the articles showed that the major challenges for practitioners originate from overcoming a feature-driven mindset, not including a lot of details in the product roadmap, and ensuring that the content of the roadmap is not driven by management or expert opinion.
Context: Nowadays the market environment is characterized by high uncertainties due to high market dynamics, confronting companies with new challenges in creating and updating product roadmaps. Most companies are still using traditional approaches which typically fail in such environments. Therefore, companies are seeking opportunities for new product roadmapping approaches.
Objective: This paper presents good practices to support companies better understand what factors are required to conduct a successful product roadmapping in a dynamic and uncertain market environment.
Method: Based on a grey literature review, essential aspects for conducting product roadmapping in a dynamic and uncertain market environment were identified. Expert workshops were then held with two researchers and three practitioners to develop best practices and the proposed approach for an outcome-driven roadmap. These results were then given to another set of practitioners and their perceptions were gathered through interviews.
Results: The study results in the development of 9 good practices that provide practitioners with insights into what aspects are crucial for product roadmapping in a dynamic and uncertain market environment. Moreover, we propose an approach to product roadmapping that includes providing a flexible structure and focusing on delivering value to the customer and the business. To ensure the latter, this approach consists of the main items outcome hypothesis, validated outcomes, and discovered outputs.
Allyls
(2022)
This chapter addresses the importance and usage of the commercially low-volume thermoset plastics group known as allyls. The three significant subgroups are poly(diallylphthalates), poly(diallylisophthalates), and poly(allyldiglycol carbonate). Chemistry, processing, and properties are also described. Allyl polymers are synthesized by radical polymerization of allyl monomers, which usually does not produce high-molecular-mass macromolecules. Therefore, only a few specific monomers can produce thermosetting materials. Diallyldiglycolcarbonate (CR-39) and diallylphthalates are the most significant examples, and they have considerably improved our everyday life.
On the influence of ground and substrate on the radiation characteristics of planar spiral antennas
(2022)
The unidirectional radiation of spiral antennas mounted on a substrate requires the presence of a ground plane. In this work, we successively illustrate the impact of dielectric material and ground plane on the key metrics of a planar equiangular spiral antenna (PESA). For this purpose, a PESA mounted on several substrates with different dielectric properties and thicknesses is modeled and simulated. We introduce the tertiary current flowing on spiral arms when backed by a ground plane.
A new planar compact antenna composed of two crossed Cornu spirals is presented. Each Cornu spiral is fed from the center of the linear part of the curvature between the two spirals, which forms the clothoid. Sequential rotation is applied using a sequential phase network to obtain circular polarization and increase the effective bandwidth. Signal integrity issues have been addressed in the design to ensure high-quality signal propagation. As a result, the antenna shows good radiation characteristics in the bandwidth of interest. Compared to antennas of the same size in the literature, it is broadband and of high gain. Although the proposed antenna has been designed for K- and Ka-band operation, it can also be adapted for lower and higher frequencies because of the linearity of the Maxwell equations.
Purpose
Returnable transport packaging (RTP) solutions have found increasing attention in the recent past. It is not clear, however, under what conditions an RTP system improves a company's financial performance. This paper investigates the operational factors that influence the financial attractiveness of an RTP solution in a manufacturing environment and discusses how these factors are related to each other.
Design/methodology/approach
The paper presents the results of five empirical RTP use cases and compares the case study findings with the results found in literature in order to develop a taxonomy of RTP cost effects. Drawing on the concept of value-based management (VBM), the operational drivers of these RTP cost effects are systematized and categorized in a value driver model that relates RTP cost effects to overall economic value added (EVA).
Findings
Based on the use case findings, additional cost factors are identified that have not been previously discussed in literature. The amended taxonomy of influence factors is further operationalized in a value driver model.
Originality/value
The present paper is the first one providing a taxonomy of RTP cost effects and putting these effects in a conceptual framework that can be used for decision-making and performance benchmarking.
For a long time, most discrete accelerators have been attached to host systems using various generations of the PCI Express interface. However, with its lack of support for coherency between accelerator and host caches, fine-grained interactions require frequent cache flushes, or even the use of inefficient uncached memory regions. The Cache Coherent Interconnect for Accelerators (CCIX) was the first multi-vendor standard for enabling cache-coherent host-accelerator attachments, and is already indicative of the capabilities of upcoming standards such as Compute Express Link (CXL). In our work, we compare and contrast the use of CCIX with PCIe when interfacing an ARM-based host with two generations of CCIX-enabled FPGAs. We provide both low-level throughput and latency measurements for accesses and address translation, and examine an application-level use case of using CCIX for fine-grained synchronization in an FPGA-accelerated database system. We can show that especially smaller reads from the FPGA to the host can benefit from CCIX by having roughly 33% shorter latency than PCIe. Small writes to the host have a latency roughly 32% higher than with PCIe, though, since they carry a higher coherency overhead. For the database use case, the use of CCIX allowed us to maintain a constant synchronization latency even under heavy host-FPGA parallelism.
Morphometry and stiffness of red blood cells - signatures of neurodegenerative diseases and aging
(2022)
Human red blood cells (RBCs) are unique cells with the remarkable ability to deform, which is crucial for their oxygen transport function, and which can be significantly altered under pathophysiological conditions. Here we performed ultrastructural analysis of RBCs as a peripheral cell model, looking for specific signatures of the neurodegenerative pathologies (NDDs) - Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS) and Alzheimer’s disease (AD), utilizing atomic force (AFM) and conventional optical (OM) microscopy. We found significant differences in the morphology and stiffness of RBCs isolated from patients with the selected NDDs and those from healthy individuals. Neurodegenerative pathologies’ RBCs are characterized by a reduced abundance of biconcave discoid shape, lower surface roughness and a higher Young’s modulus, compared to healthy cells. Although reduced in abundance, the biconcave disc is still the predominant shape in ALS and AD cells, while the morphology of PD samples is dominated by crenate cells. The features of RBCs underwent a marked aging-induced transformation, which followed different aging pathways for NDDs and normal healthy states. It was found that the diameter, height and volume of the different cell shape types have different values for NDDs and healthy cells. Common and specific morphological signatures of the NDDs were identified.
The imaging and force-distance curve modes of atomic force microscopy (AFM) are explored to compare the morphological and mechanical signatures of platelets from patients diagnosed with classical neurodegenerative diseases (NDDs) and healthy individuals. Our data demonstrate the potential of AFM to distinguish between the three NDDs - Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS) and Alzheimer’s disease (AD) - and normal healthy platelets. The common features of platelets in the three pathologies are reduced membrane surface roughness, area and height, and enhanced nanomechanics in comparison with healthy cells. These changes might be related to general phenomena associated with reorganization in the platelet membrane morphology and cytoskeleton, a key factor for all platelets’ functions. Importantly, the platelets’ signatures are modified to a different extent in the three pathologies, most significantly in ALS, less pronounced in PD and the least in AD platelets, which shows the specificity associated with each pathology. Moreover, a different degree of activation and distinct pseudopodia and nanocluster formation characterize ALS, PD and AD platelets. The strongest alterations in the biophysical properties correlate with the highest activation of ALS platelets, which reflects the most significant changes in their nanoarchitecture. The specific platelet signatures that mark each of the studied pathologies can be added as novel biomarkers to the currently used diagnostic tools.
This contribution focuses on the analysis of sustainable packaging design with regard to the cognitive and emotional guidance of customers. Building on guidelines for sustainable packaging design and taking relevant consumption motives from the SHIFT model into account, the possibilities for influencing customers across the entire customer journey are examined. In the food industry, packaging as a central communication channel is the focus of this analysis. In line with a sustainable packaging concept, all interactions with the customer, both online and offline, must now be designed so that consumers are motivated toward sustainable consumption. The individual steps are illustrated using the example of Nomoo.
Monodisperse porous poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) particles are widely applied in different fields, as their pore properties can be influenced and functionalization of the epoxy group is versatile. However, guidance on adjusting the parameters which control morphology and pore properties such as pore volume, pore size and specific surface area is scarcely available. In this work, the effects of the process factors monomer:porogen ratio, GMA:EDMA ratio and composition of the porogen mixture on the response variables pore volume, pore size and specific surface area are investigated using a face-centered central composite design. Non-linear effects of the process factors and second-order interaction effects between them were identified. Despite the complex interplay of the process factors, targeted control of the pore properties was possible. For each response, a response surface model with high predictive power was derived (all predicted R2 > 0.85). All models were tested by four external validation experiments, and their validity and predictive power were demonstrated.
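The experimental layout behind such a study can be sketched as follows (illustrative code; the factor names come from the abstract, while the coded levels and the number of center runs are our assumptions): a face-centered central composite design places points at the factorial corners, the face centers of the cube, and the center.

```python
from itertools import product

# Sketch of a face-centered central composite design (alpha = 1) for the
# three coded process factors named in the abstract.
factors = ["monomer:porogen", "GMA:EDMA", "porogen composition"]
k = len(factors)

corner = list(product([-1, 1], repeat=k))             # 2^k factorial points
axial = [tuple(a if i == j else 0 for j in range(k))  # 2k face-centered
         for i in range(k) for a in (-1, 1)]          # axial points
center = [(0, 0, 0)] * 3                              # replicated center runs

design = corner + axial + center
print(len(design))   # 8 + 6 + 3 = 17 runs
```

Each run is then executed at the corresponding real factor settings, and a quadratic response surface model is fitted per response variable.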
Hybrid organic/inorganic nanocomposites combine the distinct properties of the organic polymer and the inorganic filler, resulting in overall improved system properties. Monodisperse porous hybrid beads consisting of tetraethylene pentamine functionalized poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) particles and silica nanoparticles (SNPs) were synthesized under Stoeber sol-gel process conditions. A wide range of hybrid organic/silica nanocomposite materials with different material properties was generated. The effects of n(H2O)/n(TEOS) and c(NH3) on the hybrid bead properties particle size, SiO2 content, median pore size, specific surface area, pore volume and size of the SNPs were studied. Quantitative models with a high robustness and predictive power were established using a statistical and systematic approach based on response surface methodology. It was shown that the material properties depend in a complex way on the process factor settings and exhibit non-linear behaviors as well as partly synergistic interactions between the process factors. Thus, the silica content, median pore size, specific surface area, pore volume and size of the SNPs are non-linearly dependent on the water-to-precursor ratio. This is attributed to the effect of the water-to-precursor ratio on the hydrolysis and condensation rates of TEOS. A possible mechanism of SNP incorporation into the porous polymer network is discussed.
Organizations that operate under uncertainty need to cultivate their ability to manage their primary resource, knowledge, accordingly. Under such conditions, organizations are required to harvest knowledge from two sources: to explore knowledge that is to be found outside the organization as well as to exploit knowledge that is contained within. In a knowledge management context, these exploitation and exploration activities have been conceptualized as knowledge ambidexterity. While ambidexterity has been studied extensively in contexts such as manufacturing or IT, the notion of knowledge ambidexterity remains scarce in current knowledge management research. This study illustrates knowledge ambidexterity and elaborates its positive impact on organizational performance. Our study furthermore answers the question of how the use of enterprise social media (ESM) can facilitate the performance effects of knowledge ambidexterity. Drawing on the theory of communication visibility, we argue that ESM (e.g., Microsoft Teams, Slack, etc.) allow employees to communicate unhindered while making these communications visible. This allows for capturing tacit knowledge within these communications; this form of knowledge is generally hard to codify and can be a source of competitive edge. With respect to knowledge ambidexterity, ESM use can capture tacit knowledge aspects originating from inside and outside the organization, which fosters the development of a competitive advantage and, thus, supports its positive effect on organizational performance. This paper contributes to IT-enabled ambidexterity research in two aspects: (1) it sheds light on knowledge ambidexterity and, thereby, addresses a major practical challenge for knowledge-intensive organizations, and (2) it elaborates on the effects that ESM use can have on the relationship between knowledge ambidexterity and organizational performance.
This work-in-progress paper offers a better understanding of the phenomenon of ambidexterity in a knowledge context, while providing insights on the facilitating role of ESM. Our research serves as a foundation for future empirical examinations of the concept of knowledge ambidexterity.
Commercially available homogenized cow- and plant-based milks were investigated by optical spectroscopy in the range of 400–1360 nm. Absorbance spectra, the effective scattering coefficient μs′, and the spectral absorption coefficient μa were recorded for 23 milk varieties and analyzed by multivariate data analysis. Cow- and plant-based milks were compared and discriminated using principal component analysis combined with a quadratic discriminant analysis. Furthermore, it was possible to discriminate the origin of plant-based milk by μa and the fat content in cow-based milk by μs′. Partial least squares regression models were developed to determine the fat content in cow-based milk. The model for μs′ proved to be the most efficient for this task, with R2 = 0.98 and RMSEP = 0.19 g/100 mL for the external validation. Thus, optical spectroscopy together with multivariate data analysis is suitable for routine laboratory analysis or quality monitoring in dairy production.
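As a rough illustration of the calibration step (a much-simplified univariate stand-in for the paper's full-spectrum PLS models, on made-up numbers): fit a line mapping a scattering reading to fat content, then report the root mean square error of prediction (RMSEP) on a held-out set.

```python
# Toy calibration sketch (invented data, not the paper's): ordinary
# least-squares fit of fat content against a single scattering reading,
# validated on held-out samples via RMSEP.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope           # intercept, slope

# synthetic calibration data: scattering coefficient -> fat in g/100 mL
train_x, train_y = [1.0, 2.0, 3.0, 4.0], [0.5, 1.6, 2.4, 3.5]
a, b = fit_line(train_x, train_y)

# external validation: RMSEP over a held-out set
test_x, test_y = [1.5, 3.5], [1.1, 3.0]
rmsep = (sum((a + b * x - y) ** 2 for x, y in zip(test_x, test_y))
         / len(test_x)) ** 0.5
```

A real PLS model does the same thing with the whole spectrum as the predictor, projecting it onto a few latent components before the regression.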
Production systems are becoming increasingly complex, which means that the main task of industrial maintenance, ensuring the technical availability of a production system, is also becoming increasingly difficult. The previous focus of maintenance efforts on individual machines must give way to a holistic view encompassing the whole production system. Against this background, the technical availability of a production system must be redefined. The aim of this publication is to present different approaches to defining production systems’ availability and to demonstrate, using a discrete event simulation, the effects of random machine failures on the key figures in view of the complexity of the production system.
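The core effect — that the availability of a whole production system falls below the availability of its individual machines — can be illustrated with a minimal discrete event simulation. This is a hypothetical sketch, not the publication's model: machine count, MTBF, and MTTR are invented, and failure/repair times are assumed exponential.

```python
import heapq
import random

def simulate_line(n_machines=3, mtbf=100.0, mttr=10.0, horizon=10_000.0, seed=1):
    """Toy discrete event simulation of a serial line: the line is 'up' only
    while every machine is up. Returns (single-machine, line) availability."""
    rng = random.Random(seed)
    events = []  # priority queue of (time, machine, new_state)
    for m in range(n_machines):
        heapq.heappush(events, (rng.expovariate(1 / mtbf), m, "down"))
    up = [True] * n_machines
    t_prev, line_up_time = 0.0, 0.0
    while events[0][0] < horizon:
        t, m, state = heapq.heappop(events)
        if all(up):                      # accumulate line uptime since last event
            line_up_time += t - t_prev
        t_prev = t
        up[m] = state == "up"
        # schedule the next state change: failure if up, repair if down
        delay = rng.expovariate(1 / mtbf) if up[m] else rng.expovariate(1 / mttr)
        heapq.heappush(events, (t + delay, m, "down" if up[m] else "up"))
    if all(up):
        line_up_time += horizon - t_prev
    machine_avail = mtbf / (mtbf + mttr)  # steady-state single-machine value
    return machine_avail, line_up_time / horizon
```

For independent machines, the line availability is roughly the product of the individual availabilities, which the simulation reproduces.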
The textile-finishing industry is one of the main sources of persistent organic pollutants in water; in this regard, it is necessary to develop and employ new sustainable approaches for fabric finishing and treatment. This research study shows the development of an efficient and eco-friendly procedure to form highly hydrophobic surfaces on cotton fabrics using different modified silica sols. In particular, the formation of highly hydrophobic surfaces on cotton fabrics was studied by using a two-step treatment procedure, i.e., first applying a hybrid silica sol obtained by hydrolysis and subsequent condensation of (3-glycidyloxypropyl)trimethoxysilane with different alkyl(trialkoxy)silanes under acid conditions, and then applying hydrolyzed hexadecyltrimethoxysilane on the treated fabrics to further improve the fabrics’ hydrophobicity. The treated cotton fabrics showed excellent water repellency with a water contact angle above 150° under optimum treatment conditions. The cooperative action of the rough surface structure due to the silica sol nanoparticles and the low surface energy caused by the long-chain alkyl(trialkoxy)silane in the nanocomposite coating, combined with the expected roughness on the microscale due to the fabric and fiber structure, provided the treated cotton fabrics with excellent, near-superhydrophobic water repellency and water-based stain resistance in an eco-sustainable way.
The digital twin concept has long been widely known for asset monitoring in industry; a clear example is the automotive industry. Recently, there has also been significant interest in the application of digital twins in healthcare, especially in genomics in what is known as precision medicine. This work focuses on another medical speciality where digital twins can be applied: sleep medicine. However, there is still great controversy about the fundamentals that constitute digital twins, such as what this concept is based on and how it can be included in healthcare effectively and sustainably. This article reviews digital twins and their role so far in what is known as personalized medicine. In addition, a series of steps is presented for a possible implementation of a digital twin for a patient suffering from sleep disorders. For this, artificial intelligence techniques, clinical data management, and possible solutions for explaining the results derived from artificial intelligence models will be addressed.
Today many scientific works use deep learning algorithms and time series to detect physiological events of interest. In sleep medicine, this is particularly relevant in detecting sleep apnea, specifically obstructive sleep apnea events. Deep learning algorithms with different architectures are used to achieve decent results in accuracy, sensitivity, etc. Although there are models that can reliably determine apnea and hypopnea events, another essential aspect to consider is the explainability of these models, i.e., why a model makes a particular decision. Another critical factor is how these deep learning models determine how severe obstructive sleep apnea is in patients based on the apnea-hypopnea index (AHI). Deep learning models trained by two approaches for AHI determination are presented in this work. The approaches vary depending on the data format the models are fed: full time series and window-based time series.
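The window-based approach mentioned above ultimately has to be aggregated into an AHI. A minimal sketch of that aggregation step, assuming hypothetical per-window binary predictions (1 = apnea/hypopnea present) and the standard clinical AHI severity cutoffs:

```python
def ahi_from_windows(window_preds, window_sec=30):
    """Aggregate per-window event predictions into an apnea-hypopnea index.
    A run of consecutive positive windows is counted as one event;
    AHI = events per hour of (assumed) sleep time."""
    events, prev = 0, 0
    for p in window_preds:
        if p and not prev:   # rising edge starts a new event
            events += 1
        prev = p
    hours = len(window_preds) * window_sec / 3600
    return events / hours

def severity(ahi):
    """Standard clinical AHI grading: <5 normal, 5-15 mild,
    15-30 moderate, >=30 severe."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"
```

A model fed full time series would instead regress the AHI directly; the windowed route makes the intermediate event decisions inspectable, which also helps explainability.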
The use of deep learning models with medical data is becoming more widespread. However, although numerous models have shown high accuracy in medical-related tasks, such as medical image recognition (e.g. radiographs), there are still many obstacles to deploying these models in a real healthcare environment. This article presents a series of basic requirements that must be taken into account when developing deep learning models for biomedical time series classification tasks, with the aim of facilitating the subsequent production of the models in healthcare. These requirements range from the correct collection of data to the existing techniques for a correct explanation of the results obtained by the models. Indeed, one of the main reasons why the use of deep learning models is not more widespread in healthcare settings is their lack of clarity when it comes to explaining decision making.
Ausbildung in der Akustik
(2022)
The science of acoustics, with its diversity and interdisciplinarity, offers excellent opportunities across professional fields of activity and has captivated many of us. Education in acoustics means more than merely imparting knowledge and skills to students. In fact, the learning process is not complete after graduation; as many acousticians would say, that is when it really begins. Designing excellent education requires not only personal role models but also teaching formats, methods, and tools. The following six short contributions are examples of successful measures in acoustics education and are intended to encourage continuous improvement of teaching quality.
The citizen-centered health platform project is intended to provide a platform that can be used in EU cross-border regions, where social and economic exchange occurs across national borders. The overriding challenges are: (a) social: improving citizen-centered health and care provision; (b) technical: providing a digital platform for networking citizens, service providers, and municipal actors; (c) economic: developing long-term successful (sustainable) business models/value chains. The platform should strengthen and expand existing networks and establish new regional networks. Each network addresses particular challenges and applies solutions in a region-specific manner. Here, the national boundary conditions and the interregional needs play an essential role. These objectives require sufficient participation of civil society representatives. Furthermore, the platform will establish an overarching, sustainable, and knowledge-based network of health experts. The platform is to be jointly developed and implemented in the regions and follow an open-access approach. This allows synergies to be shared more quickly, strengthening competencies and competitiveness. In addition to practice partners, scientific and municipal institutions and SMEs are involved. The actors thus contribute to scientific performance, innovative strength, and resilience.
This paper presents a toolbox in Matlab/Octave for the procedural design of analog integrated circuits. The toolbox contains all native functions required by analog designers (namely, schematic generation, simulation setup and execution, integrated look-up tables, and functions for design space exploration) to capture an entire design strategy in an executable script. This script - which we call an Expert Design Plan (EDP) - is capable of executing an analog circuit design fully automatically. The toolbox is integrated in an existing design flow. A bandgap reference voltage circuit is designed with this tool in less than 15 min.
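To make the look-up-table idea concrete: procedural sizing scripts of this kind typically interpolate in pre-characterized device tables. The toolbox itself is Matlab/Octave; the following Python sketch is a language-neutral illustration with an invented gm/ID-style table, not the toolbox's actual API.

```python
from bisect import bisect_left

# Hypothetical pre-characterized table for one technology and operating point:
# inversion level gm/ID [1/V] -> current density ID/W [A/m]. Values are invented.
GM_ID = [5, 10, 15, 20, 25]
ID_W = [40.0, 12.0, 4.0, 1.2, 0.3]

def interp(x, xs, ys):
    """Piecewise-linear interpolation in a small monotone lookup table."""
    i = min(max(bisect_left(xs, x), 1), len(xs) - 1)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def size_transistor(gm_target, gm_id):
    """Return (ID, W) realizing gm_target at the chosen inversion level:
    ID = gm / (gm/ID), W = ID / (ID/W from the table)."""
    i_d = gm_target / gm_id
    w = i_d / interp(gm_id, GM_ID, ID_W)
    return i_d, w
```

An Expert Design Plan chains many such table-driven sizing steps plus simulation calls into one executable script.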
Industrial practice is characterized by random events, also referred to as internal and external turbulences, which disturb the target-oriented planning and execution of production and logistics processes. Methods of probabilistic forecasting, in contrast to single value predictions, allow an estimation of the probability of various future outcomes of a random variable in the form of a probability density function instead of predicting the probability of a specific single outcome. Probabilistic forecasting methods, which are embedded into the analytics process to gain insights for the future based on historical data, therefore offer great potential for incorporating uncertainty into planning and control in industrial environments. In order to familiarize students with these potentials, a training module on the application of probabilistic forecasting methods in production and intralogistics was developed in the learning factory 'Werk150' of the ESB Business School (Reutlingen University). The theoretical introduction to the topic of analytics, probabilistic forecasting methods and the transition to the application domain of intralogistics is based on examples from other disciplines such as weather forecasting and energy consumption forecasting. In addition, data sets of the learning factory are used to familiarize the students with the steps of the analytics process in a practice-oriented manner. After this, the students are given the task of identifying the influencing factors and required information to capture intralogistics turbulences based on defined turbulence scenarios (e.g. failure of a logistical resource) in the learning factory. Within practical production scenario runs, the students then apply and compare different probabilistic forecasting methods.
The graduate training module allows the students to experience the potentials of using probabilistic forecasting methods to improve production and intralogistics processes in context with turbulences and to build up corresponding professional and methodological competencies.
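The contrast between point and probabilistic forecasts drawn above can be sketched in a few lines: instead of one value, the forecast reports quantiles of the outcome distribution, and each quantile is scored with the pinball loss. The data here are hypothetical (e.g., historical transport times in minutes), not from Werk150.

```python
import numpy as np

def empirical_quantile_forecast(history, quantiles=(0.1, 0.5, 0.9)):
    """Simplest probabilistic forecast: report empirical quantiles of the
    historically observed outcomes instead of a single point prediction."""
    return {q: float(np.quantile(history, q)) for q in quantiles}

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss used to score a quantile forecast:
    under-prediction is penalized with weight q, over-prediction with 1-q."""
    diff = y_true - y_pred
    return max(q * diff, (q - 1) * diff)
```

The 0.1–0.9 quantile band gives planners an explicit uncertainty interval for, say, a transport time, rather than a single expected value that hides the turbulence.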
Affordable Luxury Sports Cars in Germany : Investigating the Determinants of Customer Experience
(2022)
The article discusses the factors affecting the customer experience when buying affordable luxury sports cars in Germany by identifying differences between first-time and experienced buyers. It argues for creating two distinct customer journeys based on different customer experience clusters; a touchpoint analysis from the customer perspective identified differences between purchase stages and highlighted staff behaviour and brand trust as drivers of customer satisfaction and brand identification.
Moral change and the purchase-sales-relationship: critical analysis of German and Swiss companies
(2022)
This study examines the awareness and causes of moral change from the economic perspective in Germany and Switzerland. Based on an analysis of value research to date and interviews with experts in B2B sales, the manifestations of moral change are critically examined and recommendations for action are derived on an employee-specific and company-wide level.
Customer Success Management is the next evolution in complex sales that drives growth. Moreover, Customer Success Management is a modern, holistic sales philosophy and part of a professional customer experience management strategy. The following conceptual paper discusses fundamental thoughts based on value-based selling, a focus on customer success, and a clear view of a perspective beyond selling that will gain importance in the future.
A company's profit, profitability, and growth are inseparably linked to employee effort and performance (Birri, 2014; Bligh et al., 2006). Effort and performance, in turn, arise largely from the fundamental intrinsic and extrinsic drivers of motivation. To secure their success, companies must therefore act on these drivers in a targeted manner with suitable tools. One central tool for this purpose is the company's compensation system.
Gamification has received growing attention in research and practice for years. A meta-analysis of gamification research published in 2019 shows that the focus so far has been on education and health. As adoption spreads, however, the need grows to extend research to further fields. This contribution explores the potential of gamification in B2B sales. This setting is well suited for the following reasons: gamification is considered attractive above all in fields where activities are complex and varied and progress is hard to track. Moreover, gamification supports the achievement of sustainable results through continuity and long-term engagement.
Will chatbots play a significant role for B2B marketing in the future? Chatbots in B2B businesses
(2022)
Digitalization has gained a foothold in our everyday lives. However, it remains to be seen what digital tools B2B companies can benefit from. During the last few years, chatbots have been on the rise and have played a more significant role in B2B marketing. Thus, this research follows a literature review to examine the current state of B2B chatbots. With this, the study will discover the buyer’s preferences for chatbots compared to sales agents and the role of chatbots in different stages of the B2B sales funnel.
Social customer relationship management is characterized above all by the possibility of a central, supra-regional customer dialogue with the option of content-based segmentation. Although scholars have long emphasized the advantages of social CRM as a holistic marketing strategy, only few companies attempt to establish it seriously. Yet the approach is particularly suitable for larger firms that sell a range of different products supra-regionally under one brand name. Here, social CRM could be a useful addition to the CRM toolkit and, depending on the type of products sold, could also contribute to quality management.
The United Nations (UN) Global Compact is a call to companies to align their strategies and operations with ten universal principles in the areas of human rights, labor, environment, and anti-corruption, and to take actions that advance societal goals (UN Global Compact 2017, p. 3). The UN Global Compact’s vision is “to mobilize a global movement of sustainable companies and stakeholders to create the world we want” (UN Global Compact 2021a). It is a global network with local presence all around the world.
The Principles for Responsible Investment (PRI) is “the world’s leading proponent of responsible investment” (PRI 2021a). With the development of six Principles for Responsible Investment, the PRI supports its international network of investor signatories in incorporating environmental, social, and governance (ESG) factors into their investment and ownership decisions. The goal of PRI is to develop a more sustainable global financial system by encouraging “investors to use responsible investment to enhance returns and better manage risks” (PRI 2021a). This independent financial initiative is supported by the United Nations and linked to the United Nations Environmental Program Finance Initiative (UNEP FI 2021) and the United Nations Global Compact (UN Global Compact 2021).
Values Management System
(2022)
The Values Management System ZfW (VMSZfW) is a management standard to “provide a sustainable safeguard of a firm and its development, in all dimensions (legal, economic, ecological, social)” (VMSZfW, p. 4). It includes a framework for values-driven governance through self-commitment and self-binding mechanisms. Values promote a sense of identity and give organizations guidance in decision-making. This is especially important in decision-making processes where topics are not clearly ruled by laws and regulations.
The VMSZfW must be embedded in the specific business strategy, structure, and culture of an organization. The following four steps describe the implementation of the Values Management System ZfW: (i) codify the core values of an organization, for instance, with a “mission, vision and values statement” or a Code of Ethics; (ii) implement guidelines such as a Code of Conduct and specific policies and procedures; (iii) systematize these by establishing management systems such as compliance and CSR management systems; and (iv) finally, organize and establish structures to ensure the strategic direction, operational implementation, and review of these processes. The top management shows that values management is taken seriously by their self-commitment to the core values of the company.
Digital assistants like Alexa, Google Assistant or Siri have seen large adoption over the past years. Using artificial intelligence (AI) technologies, they provide a vocal interface to physical devices as well as to digital services and have spurred an entirely new ecosystem. This comprises the big tech companies themselves, but also a strongly growing community of developers that make these functionalities available via digital platforms. At present, only little research is available to understand the structure and the value creation logic of these AI-based assistant platforms and their ecosystem. This research adopts ecosystem intelligence to shed light on their structure and dynamics. It combines existing data collection methods with an automated approach that proves useful in deriving a network-based conceptual model of Amazon’s Alexa assistant platform and ecosystem. It shows that skills are a key unit of modularity in this ecosystem, which is linked to other elements such as service, data, and money flows. It also suggests that the topology of the Alexa ecosystem may be described using the criteria reflexivity, symmetry, variance, strength, and centrality of the skill coactivations. Finally, it identifies three ways to create and capture value on AI-based assistant platforms. Surprisingly, only a few skills use a transactional business model by selling services and goods; many skills are complementary and provide information, configuration, and control services for other skill providers’ products and services. These findings provide new insights into the highly relevant ecosystems of AI-based assistant platforms, which might serve enterprises in developing their strategies in these ecosystems. They might also pave the way to a faster, data-driven approach to ecosystem intelligence.
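The network-based view of skill coactivations described above can be sketched as a small weighted graph analysis. The session data below is invented purely for illustration; the study's criteria (strength, centrality, etc.) are here reduced to a simple coactivation-strength measure.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical observations: sets of skills activated together in a session.
sessions = [
    {"weather", "news"}, {"weather", "music"}, {"news", "music"},
    {"weather", "news", "smart_home"}, {"smart_home", "music"},
]

# Build an undirected, weighted coactivation graph: edge weight counts
# how often two skills were activated in the same session.
weight = defaultdict(int)
for s in sessions:
    for a, b in combinations(sorted(s), 2):
        weight[(a, b)] += 1

# Degree-style centrality: total coactivation strength per skill.
strength = defaultdict(int)
for (a, b), w in weight.items():
    strength[a] += w
    strength[b] += w
most_central = max(strength, key=strength.get)
```

On real data, such centrality scores indicate which skills act as hubs that complementary skills attach to — one of the value-capture patterns the study identifies.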
For half a decade, there has been increasing interest in Robotic Process Automation (RPA) among business firms. Academic literature, however, initially paid little attention to RPA before adopting the topic on a larger scale. The aim of this study is to review and structure the latest state of scholarly research on RPA. This chapter is based on a systematic literature review that is used as a basis to develop a conceptual framework to structure the field. Our study shows that some areas of RPA have been extensively examined by many authors, e.g. the potential benefits of RPA. Other categories, such as empirical studies on the adoption of RPA or organisational readiness models, have remained research gaps.
The purpose of this paper is to examine the effects of perceived stress on traffic and road safety. One of the leading causes of stress among drivers is the feeling of having a lack of control during the driving process. Stress can result in more traffic accidents, an increase in driver errors, and an increase in traffic violations. To study this phenomenon, the Perceived Stress Questionnaire (PSQ) was used to evaluate the perceived stress while driving in a simulation. The study was conducted with participants from Germany, who were grouped into different categories based on their emotional stability. Each participant was monitored using wearable devices that measured their instantaneous heart rate (HR). The preference for wearable devices was due to their non-intrusive and portable nature. The results of this study provide an overview of how stress can affect traffic and road safety, which can be used for future research or to implement strategies to reduce road accidents and promote traffic safety.
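For readers unfamiliar with the PSQ, its scoring can be sketched briefly. This assumes the commonly used 30-item form with items rated 1–4 and the index scaled to [0, 1] (scoring in the style of Levenstein et al.); the paper may use a different PSQ variant, and reverse-scored items are assumed to be handled beforehand.

```python
def psq_index(item_scores):
    """PSQ index for the assumed 30-item Perceived Stress Questionnaire:
    items rated 1-4, raw sum rescaled from [30, 120] to [0, 1]."""
    assert len(item_scores) == 30 and all(1 <= s <= 4 for s in item_scores)
    raw = sum(item_scores)
    return (raw - 30) / 90
```

An index near 0 indicates low perceived stress, near 1 high perceived stress, which is what the study relates to driving behaviour and HR measurements.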
The vast majority of state-of-the-art integrated circuits are mixed-signal chips. While the design of the digital parts of the ICs is highly automated, the design of the analog circuitry is largely done manually; it is very time-consuming and prone to error. Among the reasons generally listed for this is often the attitude of the analog designer. The fact is that many analog designers are convinced that human experience and intuition are needed for good analog design. This is why they distrust automated synthesis tools. This observation is quite correct, but it is only a symptom of the real problem. This paper shows that this phenomenon is caused by very concrete technical (and thus very rational) issues. These issues lie in the mode of operation of the typical optimization processes employed for the synthesis tasks. I will show that the dilemma that arises in analog design with these optimizers is the root cause of the low level of automation in analog design. The paper concludes with a review of proposals for automating analog design.
Simulation models of the middle ear have rarely been used for diagnostic purposes due to their limited predictive ability with respect to pathologies. One big challenge is the large uncertainty and ambiguity in the choice of material parameters of the model.
Typically, the model parameters are determined by fitting simulation results to validation measurements. In a previous study, it was shown that fitting the model parameters of a finite-element model using the middle-ear transfer function and various other measurable output variables from normal ears alone is not sufficient to obtain a good predictive ability of the model on pathological middle-ear conditions. However, the inclusion of validation measurements on one pathological case resulted in a very good predictive ability also for other pathological cases. Although the found parameter set was plausible in all aspects, it was not yet possible to draw conclusions about the uniqueness and the accuracy or the uncertainty of the parameter set.
To answer these questions, statistical solution approaches are used in this study. Using the Monte Carlo method, a large number of plausible model data sets are generated that correctly represent the normal and pathological middle-ear characteristics in terms of various output variables such as impedance, reflectance, and the umbo and stapes transfer functions. Subsequent principal component analyses (PCA) allow conclusions to be drawn about correlations, quantitative limits, and the statistical density of parameter values.
Furthermore, applying inverse PCA yields numerous plausible parameterizations of the middle-ear model, which can be used for data augmentation and training of a neural network which is capable of distinguishing between a normal middle ear and pathologies like otosclerosis, malleus fixation, and disarticulation based on objectively measured quantities like impedance, reflectance, and umbo velocity.
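The Monte Carlo/PCA/inverse-PCA chain described above can be sketched in a few lines of linear algebra. This is a toy stand-in: two invented, correlated "model parameters" replace the accepted middle-ear parameter sets, and new plausible sets are generated by sampling PCA scores and mapping them back.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for accepted Monte Carlo parameter sets:
# a correlated stiffness and damping value per plausible model.
n = 500
stiffness = rng.normal(1.0, 0.2, n)
damping = 0.5 * stiffness + rng.normal(0.0, 0.05, n)  # correlated with stiffness
params = np.column_stack([stiffness, damping])

# PCA via SVD of the mean-centered parameter matrix.
mean = params.mean(axis=0)
U, S, Vt = np.linalg.svd(params - mean, full_matrices=False)
explained = S**2 / (S**2).sum()  # variance share of each principal axis

# Inverse PCA for data augmentation: sample new scores along the principal
# axes with matching variances and map them back to parameter space.
new_scores = rng.normal(0.0, S / np.sqrt(n - 1), size=(100, 2))
augmented = new_scores @ Vt + mean
```

The augmented parameter sets inherit the correlation structure of the accepted sets, which is exactly what makes them usable as training data for a pathology classifier.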
Current clinical practice is often unable to identify the causes of conductive hearing loss in the middle ear with sufficient certainty without exploratory surgery. Besides the large uncertainties due to interindividual variances, only partially understood cause–effect principles are a major reason for the hesitant use of objective methods such as wideband tympanometry in diagnosis, despite their high sensitivity to pathological changes. For a better understanding of objective metrics of the middle ear, this study presents a model that can be used to reproduce characteristic changes in metrics of the middle ear by altering local physical model parameters linked to the anatomical causes of a pathology. A finite-element model is, therefore, fitted with an adaptive parameter identification algorithm to results of a temporal bone study with stepwise and systematically prepared pathologies. The fitted model is able to reproduce well the measured quantities reflectance, impedance, umbo and stapes transfer function for normal ears and ears with otosclerosis, malleus fixation, and disarticulation. In addition to a good representation of the characteristic influences of the pathologies in the measured quantities, a clear assignment of identified model parameters and pathologies consistent with previous studies is achieved. The identification results highlight the importance of the local stiffness and damping values in the middle ear for correct mapping of pathological characteristics and address the challenges of limited measurement data and wide parameter ranges from the literature. The great sensitivity of the model with respect to pathologies indicates a high potential for application in model-based diagnosis.
Purpose
Supporting the surgeon during surgery is one of the main goals of intelligent ORs. The OR-Pad project aims to optimize the information flow within the perioperative area. A shared information space should enable appropriate preparation and provision of relevant information at any time before, during, and after surgery.
Methods
Based on previous work on an interaction concept and system architecture for the sterile OR-Pad system, we designed a user interface for mobile and intraoperative (stationary) use, focusing on the most important functionalities like clear information provision to reduce information overload. The concepts were transferred into a high-fidelity prototype for demonstration purposes. The prototype was evaluated from different perspectives, including a usability study.
Results
The prototype’s central element is a timeline displaying all available case information chronologically, such as radiological images, lab findings, or notes. This information space can be adapted for individual purposes (e.g., highlighting a tumor, filtering for own material). With the mobile and intraoperative mode of the system, relevant information can be added, preselected, viewed, and extended during the perioperative process. Overall, the evaluation showed good results and confirmed the vision of the information system.
Conclusion
The high-fidelity prototype of the information system OR-Pad focuses on supporting the surgeon via a timeline making all available case information accessible before, during, and after surgery. The information space can be personalized to enable targeted support. Further development is reasonable to optimize the approach and address missing or insufficient aspects, like the holding arm and sterility concept or new desired features.
Up to now, biorefinery concepts can hardly compete with the conventional production of fossil-based chemicals. On the one hand, conventional chemical production has been optimised over many decades in terms of energy, yield and costs. Biorefineries, on the other hand, do not have the benefit of long-term experience and therefore have a huge potential for optimisation. This study deals with the economic evaluation of a newly developed biorefinery concept based on superheated steam (SHS) torrefaction of biomass residues with recovery of valuable platform chemicals. Two variants of the biorefinery were economically investigated. One variant supplies various platform chemicals and torrefied biomass. The second variant supplies thermal energy for external consumers in addition to platform chemicals. The results show that both variants can be operated profitably if the focus of the platform chemicals produced is on high quality and thus on the higher-priced segment. The economic analysis gives clear indications of the most important financial influencing parameters. The economic impact of integration into existing industrial structures is positive. With the analysis, a viable business model can be developed. Based on the results of the present study, an open-innovation platform is recommended for the further development and commercialisation of the novel biorefinery.
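Profitability statements of this kind typically rest on a discounted cash flow calculation. The following sketch shows the basic mechanism with purely hypothetical numbers — investment, yearly surplus, and plant lifetime are invented and are not the study's figures.

```python
def npv(cashflows, rate):
    """Net present value of yearly cash flows; index 0 is the investment year."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical biorefinery case: an initial investment followed by 15 years
# of surplus from selling platform chemicals and torrefied biomass.
flows = [-10_000_000] + [1_800_000] * 15
```

The sensitivity of the NPV to the discount rate (here it changes sign between 10% and 20%) is one way the "most important financial influencing parameters" mentioned above can be identified.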
Within the last decade, research on torrefaction has gained increasing attention due to its ability to improve the physical properties and chemical composition of biomass residues for further energetic utilisation. While most of the research works focused on improving the energy density of the solid fraction to offer an ecological alternative to coal for energy applications, little attention was paid to the valorisation of the condensable gases as platform chemicals and its ecological relevance when compared to conventional production processes. Therefore, the present study focuses on the ecological evaluation of an innovative biorefinery concept that includes superheated steam drying and the torrefaction of biomass residues at ambient pressure, the recovery of volatiles and the valorisation/separation of several valuable platform chemicals. For a reference case and an alternative system design scenario, the ecological footprint was assessed, considering the use of different biomass residues. The results show that the newly developed process can compete with established bio-based and conventional production processes for furfural, 5-HMF and acetic acid in terms of the assessed environmental performance indicators. The requirements for further research on the synthesis of other promising platform chemicals and the necessary economic evaluation of the process were elaborated.
Digital twins: a meta-review on their conceptualization, application, and reference architecture
(2022)
The concept of digital twins (DTs) is receiving increasing attention in research and management practice. However, various facets around the concept are blurry, including conceptualization, application areas, and reference architectures for DTs. A review of preliminary results regarding the emerging research output on DTs is required to promote further research and implementation in organizations. To do so, this paper asks four research questions: (1) How is the concept of DTs defined? (2) Which application areas are relevant for the implementation of DTs? (3) How is a reference architecture for DTs conceptualized? and (4) Which directions are relevant for further research on DTs? With regard to research methods, we conduct a meta-review of 14 systematic literature reviews on DTs. The results yield important insights for the current state of conceptualization, application areas, reference architecture, and future research directions on DTs.
(57) Abstract: The invention relates to a method for construction-free pattern design for at least one garment (250) of a clothing collection, in which a model body is used on which at least one marking point (203) and/or at least one cutting line (202) is present or is applied, with pattern pieces (204) being obtained from the model body by surface unrolling. An optimal body (200) representing a target inner contour (212) of the garment (250) to be produced is used as the model body. The invention further relates to the optimal body (200), a 3D data set of the optimal body (200), and a method for generating the optimal body (200).
Uncontrolled movement of instruments in laparoscopic surgery can lead to inadvertent tissue damage, particularly when the dissecting or electrosurgical instrument is located outside the field of view of the laparoscopic camera. The incidence and relevance of such events are currently unknown. The present work aims to identify and quantify potentially dangerous situations using the example of laparoscopic cholecystectomy (LC). Twenty-four final year medical students were prompted to each perform four consecutive LC attempts on a well-established box trainer in a surgical training environment following a standardized protocol in a porcine model. The following situation was defined as a critical event (CE): the dissecting instrument was inadvertently located outside the laparoscopic camera’s field of view. Simultaneous activation of the electrosurgical unit was defined as a highly critical event (hCE). The primary endpoint was the incidence of CEs. While performing 96 LCs, 2895 CEs were observed. Of these, 1059 (36.6%) were hCEs. The median number of CEs per LC was 20.5 (range: 1–125; IQR: 33) and the median number of hCEs per LC was 8.0 (range: 0–54, IQR: 10). Mean total operation time was 34.7 min (range: 15.6–62.5 min, IQR: 14.3 min). Our study demonstrates the significance of CEs as a potential risk factor for collateral damage during LC. Further studies are needed to investigate the occurrence of CE in clinical practice, not just for laparoscopic cholecystectomy but also for other procedures. Systematic training of future surgeons as well as technical solutions could address this safety issue.
Database management systems and K/V-stores operate on updatable datasets that massively exceed the size of available main memory. Tree-based K/V storage management structures have become particularly popular in storage engines. B+-Trees [1, 4] allow constant search performance, but write-heavy workloads result in inefficient write patterns to secondary storage devices and poor performance characteristics. LSM-Trees [16, 23] overcome this issue by horizontally partitioning data into fractions small enough to fully reside in main memory, but require frequent maintenance to sustain search performance.
Firstly, we propose Multi-Version Partitioned BTrees (MV-PBT) as the sole storage and index management structure in key-sorted storage engines like K/V-stores. Secondly, we compare MV-PBT against LSM-Trees. The logical horizontal partitioning in MV-PBT allows leveraging recent advances in modern B+-Tree techniques in a small, transparent, memory-resident portion of the structure. Structural properties sustain steady read performance while yielding efficient write patterns and reducing write amplification.
We integrated MV-PBT into the WiredTiger [15] K/V storage engine. MV-PBT offers up to 2× higher steady throughput than LSM-Trees, and several orders of magnitude higher throughput than B+-Trees, under a YCSB [5] workload.
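The trade-off described in this abstract, in which buffered writes are flushed as immutable sorted partitions while reads must probe an increasing number of partitions, can be illustrated with a minimal sketch. This is a generic LSM-style toy store, not the MV-PBT or WiredTiger implementation; all names (`TinyLSM`, `memtable_limit`) are illustrative assumptions.

```python
# Minimal sketch of LSM-style horizontal partitioning: writes land in a small
# memory-resident buffer (memtable); once full, it is flushed as an immutable
# sorted run ("partition"). Reads probe the memtable, then runs newest-to-oldest,
# which is why search degrades without maintenance (compaction).
# Illustrative only -- not the MV-PBT or WiredTiger implementation.

class TinyLSM:
    def __init__(self, memtable_limit=4):
        self.memtable_limit = memtable_limit
        self.memtable = {}   # mutable, memory-resident portion
        self.runs = []       # immutable sorted runs, newest last

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        # Sequential, append-only write pattern: one sorted run per flush.
        self.runs.append(sorted(self.memtable.items()))
        self.memtable = {}

    def get(self, key):
        # Newest version wins: check memtable first, then runs newest-to-oldest.
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):
            for k, v in run:  # linear scan for brevity; real engines binary-search
                if k == key:
                    return v
        return None
```

Each flush is a purely sequential write, which is what makes the write pattern efficient; the cost shows up on the read path, where every additional run is one more partition to probe.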
An empirical study on management accountants’ roles and role perceptions: a German perspective
(2022)
The ongoing discussion on the roles of management accountants (MAs) often leads to the business partner (BP) role being perceived as the role of choice. Yet many scholars and practitioners seem to assume that this role is clear to managers and MAs, that it makes sense for them, and that all managers and MAs agree on it and implement it. Inconsistencies between actual, perceived, and expected roles might cause identity and role conflicts. However, we lack evidence on whether managers and MAs perceive, expect, and act in the BP role, and on whether tensions and conflicts might exist. This paper is based on a quantitative empirical study, conducted in 2019, of a large German high-tech firm whose top management had decided to implement the BP role. We found several areas of tension in this role discussion and contribute to the literature on MAs’ roles with a more nuanced view of the interaction between managers and MAs regarding MAs’ roles. The study shows that differences exist mainly in business managers’ expectations of MAs regarding the BP role, which MAs do not know exactly how to fulfill.
Job advertisements are important means of communicating role expectations for management accountants to the labor market. They provide information about which roles of management accountants are sought by companies, and which roles are expected. However, which roles are communicated in job advertisements has so far been unknown. Using a large sample of 889 job ads and a text-mining approach, we show an apparent mix of different role types with a strong focus on a rather classic role: the watchdog role. However, individuals with business partner characteristics are more often sought for leadership positions or in family businesses and small and medium-sized enterprises (SMEs). The results challenge the current role discussion of management accountants as business partners in practice and in some academic fields.
Evaluation of human-robot order picking systems considering the evolution of object detection
(2022)
The automation of intralogistics processes is a major trend, but order picking, one of the core and most cost-intensive tasks in this field, remains mostly manual due to the flexibility required during picking. Given the hard physical and ergonomic strain involved, however, automating this process is highly relevant. Robotic picking systems would make automation technically feasible, but the need for such systems to evolve over time, owing to the dynamics of logistics environments, confronts operators with new challenges that are hardly treated in the literature. This uncertainty deters potential investors, hindering the adoption of technically feasible solutions. In this paper, a model is presented for evaluating the additional cost of training automated systems during operation that also accounts for the savings the system enables after its evolution. The proposed approach, which considers parameters such as capacity, ergonomics, and cost, is validated with a case study and discussed.
Hybrid project management is an approach that combines traditional and agile project management techniques. The goal is to benefit from the strengths of each approach while avoiding its weaknesses. However, due to the variety of hybrid methodologies that have been presented in the meantime, it is not easy to understand the differences and similarities of these methodologies, or the advantages and disadvantages of the hybrid approach in general. Additionally, there is only fragmented knowledge about prerequisites and success factors for successfully implementing hybrid project management in organizations. Hence, the aim of this study is to provide a structured overview of the current state of research on the topic. To address this aim, we conducted a systematic literature review focusing on a set of specific research questions. As a result, four different hybrid methodologies are discussed, along with the definition, benefits, challenges, suitability, and prerequisites of hybrid project management. Our study contributes to knowledge by synthesizing and structuring prior work in this growing area of research, which serves as a basis for purposeful and targeted research in the future.
Repräsentativ zur Erkenntnis
(2022)
In an increasingly complex world, it is becoming ever harder to keep sight of the big picture. Training professionals can support executives and teams here, for example with a special constellation technique: the prototypical structural constellation. Kerstin Reich uses a practical example to explain how it works.
All around the world, there are numerous academic competitions (e.g., “Academic Olympiads”) and corresponding training courses to foster students’ competences and motivation. But do students’ competences and motivation really benefit from such courses? We developed and evaluated a course that was designed to prepare third and fourth graders to participate in the German Mathematical Olympiad. Its effectiveness was evaluated in a quasi-experimental pre- and posttest design (N = 201 students). Significant positive effects of the training were found for performance in the academic competition (for both third and fourth graders) as well as mathematical competences as measured with a curriculum-oriented test (for fourth graders only). Differential effects across grade levels (with more pronounced positive effects in fourth-grade students) were observed for students’ math self-concept and task-specific interest in mathematics, pointing to possible social comparison effects.
Designing learning environments that both challenge and foster different learning potentials is a core task of educational processes in schools. The question of which design elements of a learning environment prove effective for which learners under which conditions, and how these elements can be successfully implemented in practice, is of great importance here. Building on enrichment concepts and materials that have so far proven effective within a talent development program for third and fourth graders, LemaS subproject 7 “ENRICHMINT” develops teaching materials that can also be used in regular lessons. The subproject’s approach, which follows so-called Design-Based Implementation Research, is presented and a first conclusion is drawn. In addition, the underlying concepts of the teaching materials for mathematics, general studies (Sachunterricht), and German lessons are presented, and the subproject’s next steps are outlined.
Purpose
This field study aims to investigate the interactive relationships of millennial employees’ gender, supervisor’s gender, and country culture on conflict-management strategies (CMS) in ten countries (USA, China, Turkey, Germany, Bangladesh, Portugal, Pakistan, Italy, Thailand and Hong Kong).
Design/methodology/approach
This exploratory study extends past research by examining the interactive effects of gender × supervisor’s gender × country on the CMS within a single generation of workers, millennials. The Rahim Organizational Conflict Inventory–II, Form A, was used to assess the use of the five CMS (integrating, obliging, dominating, avoiding and compromising). Data analysis found that the CMS used in the workplace are associated with the interaction of worker and supervisor genders and the national context of their work.
Findings
Data analysis (N = 2,801) was performed using the multivariate analysis of covariance with work experience as a covariate. The analysis provided support for the three-way interaction. This interaction suggests how one uses the CMS depends on self-gender, supervisor’s gender and the country where the parties live. Also, the covariate – work experience – was significantly associated with CMS.
Research limitations/implications
One of the limitations of this study is that the authors collected data from a collegiate sample of employed management students in ten countries. There are significant implications for leading global teams and training programs for mid-level millennials.
Practical implications
There are various conflict situations where one conflict strategy may be more appropriate than others. Organizations may have to change their policies for recruiting employees who are more effective in conflict management.
Social implications
Conflict management is not only important for managers; it is important for all human beings. Individuals handle conflict every day, and they stand to gain if they can handle it effectively.
Originality/value
To the best of the authors’ knowledge, no study has tested a three-way interaction of variables on CMS. This study has a wealth of information on CMS for global managers.
Background: Endoscopic surgical procedures have become established as the gold standard in paranasal sinus (PNS) surgery. The resulting challenges for surgical training can be met by using virtual reality (VR) training simulators. A number of simulators for sinus surgery have been developed to date. However, previous studies of the training effect were conducted only with medically trained participants, or the course of the effect over time was not reported.
Methods: A CT data set of the paranasal sinuses was segmented, converted into a three-dimensional polygonal surface model, and textured using original photographic material. Interaction with the virtual environment took place via a haptic input device. During the simulation, the parameters procedure duration and number of errors were recorded. Ten participants each completed a training unit consisting of five exercise runs on ten consecutive days.
Results: Four participants reduced the required time by more than 60% over the course of the training period. Four of the participants reduced their number of errors by more than 60%. Eight out of ten participants showed an improvement in both parameters. Over the entire measurement period, the median procedure duration was reduced by 46 seconds and the median number of errors by 191. Testing for a relationship between the two parameters revealed a positive correlation.
Conclusion: In summary, training on the sinus surgery simulator considerably improves performance even in inexperienced individuals, both in terms of the duration and the accuracy of the procedure.