We address the problem of 3D face recognition based on either 3D sensor data or a 3D face reconstructed from a 2D face image. We focus on 3D shape representation in terms of a mesh of surface normal vectors. The first contribution of this work is an evaluation of eight different 3D face representations and their multiple combinations. An important contribution of the study is the proposed implementation, which allows these representations to be computed directly from 3D meshes instead of point clouds, enhancing their computational efficiency. Motivated by the results of the comparative evaluation, we propose a 3D face shape descriptor, named Evolutional Normal Maps, that assimilates and optimises a subset of six of these approaches. The proposed shape descriptor can be modified and tuned to suit different tasks. It is used as input for a deep convolutional network for 3D face recognition. An extensive experimental evaluation using the Bosphorus 3D Face, CASIA 3D Face and JNU-3D Face datasets shows that, compared to state-of-the-art methods, the proposed approach is better in terms of both computational cost and recognition accuracy.
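As a minimal sketch of the mesh-based idea described above (illustrative only, not the paper's implementation; the array layout is an assumption), per-triangle unit normals can be computed directly from a mesh's vertex and face arrays, avoiding the neighborhood searches that point-cloud normal estimation would require:

```python
import numpy as np

def face_normals(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Per-triangle unit normals of a mesh.

    vertices: (V, 3) float array of 3D positions (assumed layout)
    faces:    (F, 3) int array of vertex indices per triangle
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # the normal of each triangle is the cross product of two edge vectors
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```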
The Fourteenth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2022), held May 22–26, 2022, continued a series of international events covering a broad spectrum of topics related to advances in database fundamentals, the evolving relationship between databases and other domains, database technologies and content processing, as well as database specifics in application domains.
Advances in different technologies and domains related to databases have triggered substantial improvements in content processing, information indexing, and data, process, and knowledge mining. The push came from Web services, artificial intelligence, and agent technologies, as well as from the widespread adoption of XML.
High-speed communications and computations, large storage capacities, and load balancing for distributed database access allow new approaches to content processing with incomplete patterns, advanced ranking algorithms, and advanced indexing methods.
Evolution in e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems puts pressure on the database community to push 'de facto' methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
The design of learning environments that challenge and foster different learning potentials is a core task of school-based educational processes. The question of which design elements of a learning environment have proven effective for which learners under which conditions, and how these elements can be successfully implemented in practice, is of great importance. Building on enrichment concepts and materials that have so far proven effective within a gifted-education program for third and fourth graders, LemaS subproject 7 "ENRICHMINT" is developing teaching materials that can also be used in regular classes. The subproject's approach, following so-called Design-Based Implementation Research, is presented and a first conclusion is drawn. In addition, the underlying concepts of the teaching materials for mathematics, science, and German lessons are presented, and the next steps of the subproject's work are outlined.
Purpose
Artificial intelligence (AI), in particular deep learning (DL), has achieved remarkable results in medical image analysis across several applications. Yet the lack of human-like explanations of such systems is considered the principal obstacle to using these methods in clinical practice (Yang, Ye, & Xia, 2022).
Methods
Explainable Artificial Intelligence (XAI) provides a human-explainable and interpretable description of the “black-box” nature of DL (Gulum, Trombley, & Kantardzic, 2021). An effective XAI diagnosis generator, namely NeuroXAI (refer to Fig. 1), has been developed to extract 3D explanations from convolutional neural networks (CNN) models of brain gliomas (Zeineldin et al., 2022). By providing visual justification maps, NeuroXAI can help make DL models transparent and thus increase the trust of medical experts.
Results
NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, i.e. image classification and segmentation using magnetic resonance imaging (MRI). Visual attention maps of multiple XAI methods have been generated and compared for both applications, which could help provide transparency about the performance of DL systems.
Conclusion
NeuroXAI helps to understand the prediction process of 3D CNN networks for brain glioma using human-understandable explanations. The results revealed that the investigated DL models behave in a logical, human-like manner and can systematically improve the analysis of MRI images. Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist medical professionals in the detection and diagnosis of brain tumors. The NeuroXAI code is publicly accessible at https://github.com/razeineldin/NeuroXAI
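For readers unfamiliar with how such visual justification maps are produced, the following is a minimal Grad-CAM-style sketch for a generic 2D CNN in PyTorch. It illustrates the general technique only; the model, target layer, and 2D setting are assumptions, and it is not the NeuroXAI implementation (see the repository above for that).

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    """Grad-CAM attention map for one input (minimal 2D illustration)."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(v=go[0]))
    score = model(x)[0, class_idx]          # logit of the class to explain
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads["v"].mean(dim=(2, 3), keepdim=True)        # channel weights (GAP)
    cam = F.relu((w * acts["v"].detach()).sum(dim=1))    # weighted activations
    return cam / (cam.max() + 1e-8)                      # normalized to [0, 1]
```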
The pre-, intra- and postoperative determination of the entity and dignity of salivary gland tumors (ST) based on histomorphological criteria alone is often associated with considerable uncertainty.
The spectra of Raman spectroscopy (RS) and infrared spectroscopy (IS) contain information on the molecular composition of the examined tissue. The aim of this work was to establish a tissue-processing workflow and to analyze the influence of fixation on the spectral bioinformation. In addition, an overview of the use of RS and IS in the head and neck region is given.
Consecutive 10 µm thick cryo-, formalin- and paraffin-fixed ST tissue sections of cystadenolymphomas (n=5) and pleomorphic adenomas (n=4) were examined with RS and IS, and the data were analyzed multivariately. The measurements were correlated with histomorphology via a corresponding HE-stained section, both in tumor tissue and in healthy salivary gland tissue.
The mean-spectra analysis showed a distinct paraffin signature, whereas formalin fixation had no substantial influence. This was confirmed by principal component analysis (PCA). Discrimination between tumor and non-tumor tissue by PCA with coupled discriminant analysis was likewise possible with both spectroscopic methods at high sensitivity.
For a translation of spectral methods into practice, knowledge of how tissue processing and fixation influence the spectral bioinformation is indispensable. Integrating spectral methods additively into existing workflows is possible. The influence of formalin fixation on the spectral bioinformation is minor. The bioinformatic analysis of the extensive data sets is challenging.
IZKF Würzburg
This paper reviews suggestions for changes to database technology arising from the work of many researchers, particularly those working with evolving big data. We discuss new approaches to remote data access and standards that better provide for durability and auditability in settings including business and scientific computing. We propose ways in which the language standards could evolve, with proof-of-concept implementations on GitHub.
We study whether compulsory religious education in schools affects students' religiosity as adults. We exploit the staggered termination of compulsory religious education across German states in models with state and cohort fixed effects. Using three different datasets, we find that abolishing compulsory religious education significantly reduced religiosity of affected students in adulthood. It also reduced the religious actions of personal prayer, church-going, and church membership. Beyond religious attitudes, the reform led to more equalized gender roles, fewer marriages and children, and higher labor-market participation and earnings. The reform did not affect ethical and political values or non-religious school outcomes.
The pay gap between women and men (the so-called gender pay gap) is usually examined in population groups that have already completed their educational careers. In this article, we look at an earlier phase of working life by analyzing the gender pay gap among students who work alongside their studies. Using data from five cohorts of a student survey in Germany, we describe the gender pay gap and discuss possible explanations. The results show that female students earn on average about 6% less than male students. After accounting for various pay-relevant factors, the gap narrows to 4.1%. One of the main reasons for the difference in pay is the different jobs that male and female students pursue.
Being exposed to compulsory religious education in school can have long-run consequences for students’ lives. At different points in time since the 1970s, German states terminated compulsory religious education in public schools and replaced it by a choice between ethics classes and religious education. This article shows that the reform not only led to reduced religiosity in students’ later life, but also eroded traditional attitudes towards gender roles and increased labor-market participation and earnings.
Gender pay gaps are commonly studied in populations with already completed educational careers. We focus on an earlier stage by investigating the gender pay gap among university students working alongside their studies. With data from five cohorts of a large-scale student survey from Germany, we use regression and wage decomposition techniques to describe gender pay gaps and potential explanations. We find that female students earn about 6% less on average than male students, which reduces to 4.1% when accounting for a rich set of explanatory variables. The largest explanatory factor is the type of jobs male and female students pursue.
With the digital transformation, companies are experiencing a change that focuses on shaping the organization into an agile form. In today's competitive and fast-moving business environment, it is necessary to react quickly to changing market conditions, and agility represents a promising option for overcoming these challenges. The path to an agile organization is a development process that requires consideration of numerous levels of the enterprise. This paper examines the impact of digital transformation on agile working practices and the benefits that can be achieved through technology. To cope with today's so-called VUCA (volatility, uncertainty, complexity, and ambiguity) world, agile ways of working can be applied, but project management requires adaptation. In the qualitative study, expert interviews were conducted and analyzed using the grounded theory method. As a result, a model is presented that shows the influencing factors and potentials of agile management in the context of the digital transformation of medium-sized companies.
Uncontrolled movement of instruments in laparoscopic surgery can lead to inadvertent tissue damage, particularly when the dissecting or electrosurgical instrument is located outside the field of view of the laparoscopic camera. The incidence and relevance of such events are currently unknown. The present work aims to identify and quantify potentially dangerous situations using the example of laparoscopic cholecystectomy (LC). Twenty-four final-year medical students were each prompted to perform four consecutive LC attempts on a well-established box trainer in a surgical training environment, following a standardized protocol in a porcine model. The following situation was defined as a critical event (CE): the dissecting instrument was inadvertently located outside the laparoscopic camera's field of view. Simultaneous activation of the electrosurgical unit was defined as a highly critical event (hCE). The primary endpoint was the incidence of CEs. While performing 96 LCs, 2895 CEs were observed. Of these, 1059 (36.6%) were hCEs. The median number of CEs per LC was 20.5 (range: 1–125; IQR: 33) and the median number of hCEs per LC was 8.0 (range: 0–54, IQR: 10). Mean total operation time was 34.7 min (range: 15.6–62.5 min, IQR: 14.3 min). Our study demonstrates the significance of CEs as a potential risk factor for collateral damage during LC. Further studies are needed to investigate the occurrence of CEs in clinical practice, not just for laparoscopic cholecystectomy but also for other procedures. Systematic training of future surgeons as well as technical solutions could address this safety issue.
Industrial practice is characterized by random events, also referred to as internal and external turbulences, which disturb the target-oriented planning and execution of production and logistics processes. Methods of probabilistic forecasting, in contrast to single value predictions, allow an estimation of the probability of various future outcomes of a random variable in the form of a probability density function instead of predicting the probability of a specific single outcome. Probabilistic forecasting methods, which are embedded into the analytics process to gain insights for the future based on historical data, therefore offer great potential for incorporating uncertainty into planning and control in industrial environments. In order to familiarize students with these potentials, a training module on the application of probabilistic forecasting methods in production and intralogistics was developed in the learning factory 'Werk150' of the ESB Business School (Reutlingen University). The theoretical introduction to the topic of analytics, probabilistic forecasting methods and the transition to the application domain of intralogistics is done based on examples from other disciplines such as weather forecasting and energy consumption forecasting. In addition, data sets of the learning factory are used to familiarize the students with the steps of the analytics process in a practice-oriented manner. After this, the students are given the task of identifying the influencing factors and required information to capture intralogistics turbulences based on defined turbulence scenarios (e.g. failure of a logistical resource) in the learning factory. Within practical production scenario runs, the students apply probabilistic forecasting using and comparing different probabilistic forecasting methods. The graduate training module allows the students to experience the potentials of using probabilistic forecasting methods to improve production and intralogistics processes in context with turbulences and to build up corresponding professional and methodological competencies.
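As a hedged illustration of what "probabilistic" means here: instead of one point forecast, several quantiles of the demand distribution can be estimated, e.g. with gradient boosting and a quantile (pinball) loss. The data below is synthetic; the module's 'Werk150' data sets are not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                     # synthetic lag/feature matrix
y = 20 + 5 * X[:, 0] + rng.gamma(2.0, 2.0, 500)   # synthetic skewed demand

# one model per quantile approximates the predictive distribution
forecast = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y).predict(X[:3])
    for q in (0.1, 0.5, 0.9)
}
print(forecast)  # lower bound, median, and upper bound per forecast point
```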
Bioactive cations, including calcium, copper and magnesium, have shown the potential to become an alternative to protein growth factor-based therapeutics for bone healing. Ion substitutions are less costly, more stable, and effective at low concentrations. Although they have been shown to be effective in providing bone grafts with more biological functions, the precise control of ion release kinetics is still a challenge. Moreover, the synergistic effect of three or more metal ions on bone regeneration has rarely been studied. In this study, vaterite-calcite CaCO3 particles were loaded with copper (Cu2+) and magnesium (Mg2+). A polyelectrolyte multilayer (PEM) was deposited on CaCuMg-CO3 particles via the layer-by-layer technique to further improve the stability and biocompatibility of the particles and to enable controlled release of multiple metal ions. The PEM-coated microcapsules were successfully combined with collagen at the outermost layer, providing a further stimulating microenvironment for bone regeneration. The in vitro release studies showed remarkably stable release of Cu2+ over 2 months without initial burst release. Mg2+ was released in relatively low concentration over the first 7 days. Cell culture studies showed that CaCuMg-PEM-Col microcapsules stimulated cell proliferation, extracellular maturation and mineralization more effectively than the blank control and other microcapsules without collagen adsorption (Ca-PEM, CaCu-PEM, CaMg-PEM, CaCuMg-PEM). In addition, the CaCuMg-PEM-Col microcapsules showed positive effects on osteogenesis and angiogenesis in gene expression studies. The results indicate that such a functional and controllable delivery system for multiple bioactive ions might be a safer, simpler and more efficient alternative to protein growth factor-based therapeutics for bone regeneration. It also provides an effective method for functionalizing bone grafts for bone tissue engineering.
Cell migration plays an essential role in wound healing and inflammatory processes inside the human body. Peripheral blood neutrophils, a type of polymorphonuclear leukocyte (PMN), are the first cells to be activated during inflammation and subsequently migrate toward an injured tissue or infection site. This response is dependent on both biochemical signaling and the extracellular environment, one aspect of which includes increased temperature in the tissues surrounding the inflammation site. In our study, we analyzed temperature-dependent neutrophil migration using differentiated HL-60 cells. The migration speed of differentiated HL-60 cells was found to correlate positively with temperature from 30 to 42 °C, with higher temperatures inducing a concomitant increase in cell detachment. The migration persistence time of differentiated HL-60 cells was higher at lower temperatures (30–33 °C), while the migration persistence length stayed constant throughout the temperature range. Coupled with the increased speed observed at high temperatures, this suggests that neutrophils are primed to migrate more effectively at the elevated temperatures characteristic of inflammation. Temperature gradients exist on both cell and tissue scales. Taking this into consideration, we also investigated the ability of differentiated HL-60 cells to sense and react to the presence of temperature gradients, a process known as thermotaxis. Using a two-dimensional temperature gradient chamber with a range of 27–43 °C, we observed a migration bias parallel to the gradient, resulting in both positive and negative thermotaxis. To better mimic the extracellular matrix (ECM) environment in vivo, a three-dimensional collagen temperature gradient chamber was constructed, allowing observation of biased neutrophil-like differentiated HL-60 migration toward the heat source.
There is a growing consensus in research and practice that value-creating networks and ecosystems are supplementing the traditional distinction between the internal firm and market perspectives. To achieve joint value in ecosystems, it is crucial to align the various interests of independently acting ecosystem actors and create a common vision. In this paper, we argue that the ecosystem-wide use of product roadmaps may help with this. To get a better understanding of how roadmapping is conducted in the dynamic ecosystem environment, we systematize the main characteristics of product roadmaps and perform a conceptual comparison with the known challenges of ecosystem management. Comparing the two concepts, we highlight the fit between the characteristics and objectives of product roadmaps and the challenges of ecosystem management. Hence, we propose experimenting with the ecosystem-wide use of product roadmaps, empirically studying the challenges that emerge in the process, and redesigning the roadmaps accordingly.
The energy turnaround, digitalization, and decreasing revenues force enterprises in the energy domain to develop new business models. Following a Design Science Research approach, we showed in two action research projects that business models in the energy domain result in complex ecosystems with multiple actors. Additionally, we identified that municipal utilities have problems with the systematic development of business models. To solve this problem, in a second phase we captured the requirements together with the enterprise partners. Furthermore, we developed a method consisting of the following components: a method for the creative development of a new business model in the form of a Business Model Canvas (BMC); a mapping between the e3Value ontology and the BMC for modelling a business ecosystem; and the Business Model Configurator (BMConfig), a prototype for modelling and simulating the e3Value ontology, with which the business model can be quantified and analyzed for its viability. We demonstrate the feasibility of our approach with the business model of a power community.
Turning students into Industry 4.0 entrepreneurs: design and evaluation of a tailored study program
(2022)
Startups in the field of Industry 4.0 could be a huge driver of innovation for many industry sectors such as manufacturing. However, there is a lack of education programs to ensure a sufficient number of well-trained founders and thus a supply of such startups. Therefore, this study presents the design, implementation, and evaluation of a university course tailored to the characteristics of Industry 4.0 entrepreneurship. Educational design-based research was applied with a focus on content and teaching concept. The study program was first implemented in 2021 at a German university of applied sciences with 25 students, of whom 22 participated in the evaluation. The evaluation was conducted with a pretest-posttest design targeting three areas: (1) knowledge about the application domain, (2) entrepreneurial intention, and (3) psychological characteristics. Entrepreneurial intention was measured based on the theory of planned behavior. For measuring psychological characteristics, personality traits associated with entrepreneurship were used. Considering the study context and its limited external validity, the results show that a university course can improve participants' knowledge of this particular domain. In addition, perceived behavioral control regarding starting an Industry 4.0 startup was enhanced. However, the results showed no significant effects on psychological characteristics.
Data governance has been relevant for companies for a long time. Yet, in the broad discussion on smart cities, research on data governance in particular is scant, even though data governance plays an essential role in an environment with multiple stakeholders, complex IT structures, and heterogeneous processes. Indeed, not only can a city benefit from the existing body of knowledge on data governance, but it can also make the appropriate adjustments for its digital transformation. Therefore, this literature review aims to spark research on urban data governance by providing an initial perspective for future studies. It provides a comprehensive overview of data governance and the relevant facets embedded in this strand of research. Furthermore, it provides a fundamental basis for future research on the development of an urban data governance framework.
Hybrid organic/inorganic nanocomposites combine the distinct properties of the organic polymer and the inorganic filler, resulting in overall improved system properties. Monodisperse porous hybrid beads consisting of tetraethylene pentamine-functionalized poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) particles and silica nanoparticles (SNPs) were synthesized under Stoeber sol-gel process conditions. A wide range of hybrid organic/silica nanocomposite materials with different material properties was generated. The effects of n(H2O)/n(TEOS) and c(NH3) on the hybrid bead properties particle size, SiO2 content, median pore size, specific surface area, pore volume, and size of the SNPs were studied. Quantitative models with high robustness and predictive power were established using a statistical and systematic approach based on response surface methodology. It was shown that the material properties depend in a complex way on the process factor settings and exhibit non-linear behavior as well as partly synergistic interactions between the process factors. Thus, the silica content, median pore size, specific surface area, pore volume, and size of the SNPs depend non-linearly on the water-to-precursor ratio. This is attributed to the effect of the water-to-precursor ratio on the hydrolysis and condensation rates of TEOS. A possible mechanism of SNP incorporation into the porous polymer network is discussed.
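The response surface methodology referred to above essentially fits a quadratic model in the process factors. A minimal sketch, with made-up factor settings and response values standing in for the real n(H2O)/n(TEOS) and c(NH3) design:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# hypothetical coded factor settings: x1 ~ n(H2O)/n(TEOS), x2 ~ c(NH3)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [-1.4, 0], [1.4, 0], [0, -1.4], [0, 1.4]])
y = np.array([12.0, 18.5, 14.2, 26.1, 19.0, 11.5, 24.0, 15.8, 20.3])  # e.g. SiO2 wt%

# full quadratic response surface: linear, interaction, and squared terms
design = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
model = LinearRegression().fit(design, y)
print(model.intercept_, model.coef_)  # fitted response-surface coefficients
```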
Digital twins: a meta-review on their conceptualization, application, and reference architecture
(2022)
The concept of digital twins (DTs) is receiving increasing attention in research and management practice. However, various facets around the concept are blurry, including conceptualization, application areas, and reference architectures for DTs. A review of preliminary results regarding the emerging research output on DTs is required to promote further research and implementation in organizations. To do so, this paper asks four research questions: (1) How is the concept of DTs defined? (2) Which application areas are relevant for the implementation of DTs? (3) How is a reference architecture for DTs conceptualized? and (4) Which directions are relevant for further research on DTs? With regard to research methods, we conduct a meta-review of 14 systematic literature reviews on DTs. The results yield important insights for the current state of conceptualization, application areas, reference architecture, and future research directions on DTs.
The euphoria around microservices has decreased over the years, but the trend of modernizing legacy systems to this novel architectural style is unbroken to date. A variety of approaches have been proposed in academia and industry, aiming to structure and automate the often long-lasting and cost-intensive migration journey. However, our research shows that there is still a need for more systematic guidance. While grey literature is dominant for knowledge exchange among practitioners, academia has contributed a significant body of knowledge as well, catching up on its initial neglect. A vast number of studies on the topic yielded novel techniques, often backed by industry evaluations. However, practitioners hardly leverage these resources. In this paper, we report on our efforts to design an architecture-centric methodology for migrating to microservices. As its main contribution, a framework provides guidance for architects during the three phases of a migration. We refer to methods, techniques, and approaches based on a variety of scientific studies that have not been made available in a similarly comprehensible manner before. Through an accompanying tool to be developed, architects will be in a position to systematically plan their migration, make better informed decisions, and use the most appropriate techniques and tools to transition their systems to microservices.
Especially where the potential of technical and organizational measures for ergonomic workplace design is limited, exoskeletons can be considered innovative ergonomic aids to reduce the physical workload of workers. Recent scientific findings from ergonomic analyses with and without exoskeletons indicate that strain reduction can be achieved, particularly at workplaces with lifting, holding, and carrying processes. Currently, a work system design method is under development that incorporates criteria and characteristics for the design of work systems in which a human worker is supported by an exoskeleton. Based on the properties of common passive and active exoskeletons, factors influencing the human on which an exoskeleton can have a positive or negative effect (e.g. additional weight) were derived. The method will be validated by the conceptualization and setup of several work system demonstrators at Werk150, the factory of ESB Business School on the campus of Reutlingen University, to prove the positive ergonomic effect on humans and to support the process of choosing a suitable exoskeleton. The developed method and demonstrators enable the user to experience the positive ergonomic effects of exoskeletal support in lifting, holding, and carrying processes in logistics and production. The new work system design method will help employees pursue their professional activity longer without substantial injuries and be deployed more flexibly at different work stations. New work concepts, strategies, and scenarios are also opened up to reduce the risk of occupational accidents and to promote the compatibility of work for employees. A training module is being developed and evaluated with participants from industry and master's students to build up competence.
The early incorporation of insights gained through intelligence and data analysis is becoming increasingly important for developing new products, leading to a fundamentally different conception of product creation, development, and engineering processes that exploits the advantages a digital twin entails. A novel stage-gate process is introduced that is holistically anchored in learning factories, covering idea generation and idea screening at an early stage, beta testing of first prototypes, technical implementation in real production scenarios, business analysis, market evaluation, pricing, service models, and innovative social media portals. Corresponding product modelling in the sense of sustainability, circular economy, and data analytics forecasts the product's market performance both before and after launch, with data interpretation interlinked in near real time. The digital twin represents the link between the digital model and the digital shadow. Additionally, connecting the digital twin with the product provides constantly updated operating status and process data as well as a mapping of technical properties and real-world behaviour. A future networked product, equipped with embedded information technology and the ability to initiate and carry out its own further development, is able to interact with people and environments and is thus relevant to the way of life of future generations. In today's development work on this new product creation approach, "Werk150" is, on the one hand, the object of development itself and, on the other, the validation environment. In a next step, new learning modules and scenarios for training at master's level will be derived from these findings.
Over the last decades, our society has seen a tremendous shift toward using information technology in almost every daily routine, entailing an incredible growth of data collected day by day in Web, IoT, and AI applications.
At the same time, magneto-mechanical HDDs are being replaced by semiconductor storage such as SSDs, equipped with modern non-volatile memories, like Flash, which yield significantly lower access latencies and higher levels of parallelism. Likewise, the execution speed of processing units has increased considerably, as server architectures nowadays comprise up to several hundred independently working CPU cores along with a variety of specialized computing co-processors such as GPUs or FPGAs.
However, the burden of moving the continuously growing data to the best fitting processing unit is inherently linked to today’s computer architecture that is based on the data-to-code paradigm. In the light of Amdahl's Law, this leads to the conclusion that even with today's powerful processing units, the speedup of systems is limited since the fraction of parallel work is largely I/O-bound.
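For reference, Amdahl's Law bounds the achievable speedup $S$ when only a fraction $p$ of the work benefits from a parallel (or otherwise accelerated) speedup factor $s$:

$$S(s) = \frac{1}{(1 - p) + \dfrac{p}{s}} \;\xrightarrow{\; s \to \infty \;}\; \frac{1}{1 - p}.$$

If the I/O-bound fraction $1-p$ dominates, faster compute alone cannot lift this bound, which motivates reducing data movement in the first place.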
Therefore, throughout this cumulative dissertation, we investigate the paradigm shift toward code-to-data, formally known as Near-Data Processing (NDP), which relieves the contention on the I/O bus by offloading processing to intelligent computational storage devices, where the data is originally located.
Firstly, we identified Native Storage Management as the essential foundation for NDP due to its direct control of physical storage management within the database. Upon this, the interface is extended to propagate address mapping information and to invoke NDP functionality on the storage device. As the former can become very large, we introduce Physical Page Pointers as one novel NDP abstraction for self-contained immutable database objects.
Secondly, the on-device navigation and interpretation of data are elaborated. For this, we introduce cross-layer Parsers and Accessors as another NDP abstraction that can be executed on the heterogeneous processing capabilities of modern computational storage devices. In this context, the compute placement and resource configuration per NDP request are identified as major performance criteria. Our experimental evaluation shows an improvement in execution durations of 1.4x to 2.7x compared to traditional systems. Moreover, we propose a framework for the automatic generation of Parsers and Accessors on FPGAs to ease their application in NDP.
Thirdly, we investigate the interplay of NDP and modern workload characteristics like HTAP. For this, we present different offloading models and focus on intervention-free execution. By propagating the Shared State with the latest modifications of the database to the computational storage device, it is able to process data with transactional guarantees. Thus, we extend the design space of HTAP with NDP by providing a solution that optimizes for performance isolation, data freshness, and the reduction of data transfers. In contrast to traditional systems, we experience no significant drop in performance when an OLAP query is invoked, but a steady and 30% higher throughput.
Lastly, in-situ result-set management and consumption as well as NDP pipelines are proposed to achieve flexibility in processing data on heterogeneous hardware. As these produce final and intermediary results, we investigate their management and find that on-device materialization comes at a low cost while enabling novel consumption modes and reuse semantics. Thereby, we achieve significant performance improvements of up to 400x by reusing once-materialized results multiple times.
Digital assistants like Alexa, Google Assistant, or Siri have seen large adoption over the past years. Using artificial intelligence (AI) technologies, they provide a vocal interface to physical devices as well as to digital services and have spurred an entirely new ecosystem. This comprises the big tech companies themselves, but also a strongly growing community of developers that make these functionalities available via digital platforms. At present, little research is available to understand the structure and the value creation logic of these AI-based assistant platforms and their ecosystem. This research adopts ecosystem intelligence to shed light on their structure and dynamics. It combines existing data collection methods with an automated approach that proves useful in deriving a network-based conceptual model of Amazon's Alexa assistant platform and ecosystem. It shows that skills are a key unit of modularity in this ecosystem, linked to other elements such as service, data, and money flows. It also suggests that the topology of the Alexa ecosystem may be described using the criteria reflexivity, symmetry, variance, strength, and centrality of the skill coactivations. Finally, it identifies three ways to create and capture value on AI-based assistant platforms. Surprisingly, only a few skills use a transactional business model selling services and goods; many skills are complementary and provide information, configuration, and control services for other skill providers' products and services. These findings provide new insights into the highly relevant ecosystems of AI-based assistant platforms, which might serve enterprises in developing their strategies in these ecosystems. They might also pave the way to a faster, data-driven approach to ecosystem intelligence.
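To make the network-based criteria above concrete, the following toy sketch builds a skill-coactivation graph with networkx and computes simple centrality and strength measures. The skills and coactivation counts are hypothetical, not Amazon data.

```python
import networkx as nx

# hypothetical skill-coactivation network: nodes are Alexa skills, edge
# weights count how often two skills were activated in the same session
G = nx.Graph()
G.add_weighted_edges_from([
    ("smart_home_hub", "light_control", 120),
    ("smart_home_hub", "heating_control", 85),
    ("news_briefing", "weather", 240),
    ("weather", "light_control", 15),
])

centrality = nx.degree_centrality(G)        # how central each skill is
strength = dict(G.degree(weight="weight"))  # total coactivation strength per skill
print(centrality, strength)
# symmetry/reflexivity of coactivations would use a directed graph instead
```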
Glioblastoma WHO IV belongs to a group of brain tumors that are still incurable. A promising treatment approach applies photodynamic therapy (PDT) with hypericin as a photosensitizer. To generate a comprehensive understanding of the photosensitizer-tumor interactions, the first part of our study focused on investigating the distribution and penetration behavior of hypericin in glioma cell spheroids by fluorescence microscopy. In the second part, fluorescence lifetime imaging microscopy (FLIM) was used to correlate fluorescence lifetime (FLT) changes of hypericin with environmental effects inside the spheroids. In this context, 3D tumor spheroids are an excellent model system since they capture 3D cell-cell interactions and their extracellular matrix is similar to that of tumors in vivo. Our analytical approach considers hypericin as a probe molecule for FLIM and as a photosensitizer for PDT at the same time, making it possible to directly draw conclusions about the state and location of the drug in a biological system. Knowing both the state and the location of hypericin enables a fundamental understanding of the impact of hypericin PDT in brain tumors. Following different incubation conditions, the hypericin distribution in peripheral and central cryosections of the spheroids was analyzed. Both fluorescence microscopy and FLIM revealed a hypericin gradient towards the spheroid core for short incubation periods or small concentrations, whereas a homogeneous hypericin distribution was observed for long incubation times and high concentrations. The observed FLT change is especially crucial for PDT efficiency, since the triplet yield, and hence the O2 activation, is directly proportional to the FLT. Based on the FLT increase inside spheroids, an incubation time of at least 30 min is required to achieve the most suitable conditions for an effective PDT.
Geometry of music perception
(2022)
Prevalent neuroscientific theories are combined with acoustic observations from various studies to create a consistent geometric model for music perception in order to rationalize, explain, and predict psycho-acoustic phenomena. The space of all chords is shown to be a Whitney stratified space. Each stratum is a Riemannian manifold, which naturally yields a geodesic distance across strata. The resulting metric is compatible with voice leading and satisfies the triangle inequality. The geometric model allows for rigorous studies of psychoacoustic quantities such as roughness and harmonicity as height functions. To show how to use the geometric framework in psychoacoustic studies, concepts for the perception of chord resolutions are introduced and analyzed.
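One standard way to make such a chord space concrete (in the spirit of the model, though not necessarily the paper's exact construction) is to regard an $n$-note chord as an unordered tuple of pitch classes, i.e. a point of the quotient

$$\mathcal{C}_n \;=\; \big(\mathbb{R}/12\mathbb{Z}\big)^{n} \,/\, S_n,$$

with a voice-leading distance obtained by minimizing the displacement of the individual voices over all matchings of the notes,

$$d(x, y) \;=\; \min_{\sigma \in S_n} \Big( \sum_{i=1}^{n} \lvert x_i - y_{\sigma(i)} \rvert^{p} \Big)^{1/p}.$$

Chords with repeated notes are fixed points of the $S_n$-action, and it is exactly such points that form the singular strata of the quotient.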
Moral change and the purchase-sales-relationship: critical analysis of German and Swiss companies
(2022)
This study examines the awareness and causes of moral change from the economic perspective in Germany and Switzerland. Based on an analysis of value research to date and interviews with experts in B2B sales, the manifestations of moral change are critically examined and recommendations for action are derived on an employee-specific and company-wide level.
In this paper, we present the results of a workshop held at CitSci2022 on the topic "Co-creation in citizen science (CS) for the development of climate adaptation measures: Which success factors promote, and which barriers hinder, a fruitful collaboration and co-creation process between scientists and volunteers, considering social, motivational, technical/technological, and legal factors?" We substantiated the factors mentioned in the workshop with scientific literature. Our findings suggest that a clear communication strategy regarding the project goals, and regarding how citizen scientists can contribute, is important. In addition, citizen scientists have to feel included and to see that their contribution makes a difference; to achieve this, it is critical to present the results to them. The relationship between scientists and citizen scientists is also essential to keeping citizen scientists engaged. Notification of meetings and events needs to be made well in advance, and events should be scheduled in the attendees' leisure time. Citizen scientists should be especially supported in technical questions; as a result, they feel appreciated and remain part of the project. Among the legal factors, the current General Data Protection Regulation was considered important by the workshop participants. In further research, we will try to address the individual points and, first of all, improve our communication with citizen scientists about the project goals and how they can contribute. In addition, we should better share the achieved results.
An important information basis for strategic decisions in sports marketing is the brand image, as it reflects the stakeholders' perspective on the brand. However, analyzing brand image is methodologically complex, which is why the use of artificial neural networks for this purpose is examined in more detail: this artificial intelligence technique makes it possible to model multilayered and nonlinear effect relationships. The conceptual approach is illustrated with the empirical, practical example of the sporting goods manufacturer adidas, by modelling a multilayer artificial neural network between the ratings of specific brand attributes and the overall brand. By analyzing the network's connection weights, the influence of the different brand attribute variables is measured, from which concrete implications for sports marketing practice are derived.
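A minimal sketch of such a connection-weight analysis, using synthetic survey-style data (the adidas data is not reproduced) and a Garson-style importance measure for a one-hidden-layer network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(1, 7, size=(300, 5))   # synthetic attribute ratings on a 1-7 scale
y = X @ np.array([0.4, 0.3, 0.1, 0.1, 0.1]) + rng.normal(0, 0.3, 300)  # overall rating

mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1).fit(X, y)

# Garson-style decomposition of the connection weights (single hidden layer)
W = np.abs(mlp.coefs_[0])            # input -> hidden weights, shape (5, 8)
V = np.abs(mlp.coefs_[1]).ravel()    # hidden -> output weights, shape (8,)
share = W / W.sum(axis=0)            # each input's share per hidden unit
importance = (share * V).sum(axis=1)
print(importance / importance.sum()) # relative influence of each attribute
```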
Will chatbots play a significant role in B2B marketing in the future? Chatbots in B2B businesses
(2022)
Digitalization has gained a foothold in our everyday lives. However, it remains to be seen which digital tools B2B companies can benefit from. During the last few years, chatbots have been on the rise and have played a more significant role in B2B marketing. Thus, this research uses a literature review to examine the current state of B2B chatbots. With this, the study explores buyers' preferences for chatbots compared to sales agents and the role of chatbots in different stages of the B2B sales funnel.
This article aims to give an overview of what German business needs in current times. By examining the Made in Germany label as a perceived image in sales, specific attributes are evaluated to better explain the challenges German businesses are currently facing: digitization, education, environment, quality, and China.
Production systems are becoming increasingly complex, which means that the main task of industrial maintenance, ensuring the technical availability of a production system, is also becoming increasingly difficult. The previous focus of maintenance efforts on individual machines must give way to a holistic view encompassing the whole production system. Against this background, the technical availability of a production system must be redefined. The aim of this publication is to present different approaches to defining production systems' availability and to demonstrate, using a discrete-event simulation, the effects of random machine failures on the key figures, taking the complexity of the production system into account.
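As a hedged illustration of such a discrete-event simulation, the sketch below models random failures and repairs of a few machines with the simpy library and derives each machine's technical availability. The exponential failure and repair distributions and their parameters are assumptions for the example, not the publication's model.

```python
import random
import simpy

MTBF, MTTR, HORIZON = 100.0, 8.0, 10_000.0   # assumed mean times (arbitrary units)

def machine(env, name, downtime):
    """Alternate between running until a random failure and being repaired."""
    while True:
        yield env.timeout(random.expovariate(1 / MTBF))   # time to failure
        repair = random.expovariate(1 / MTTR)
        downtime[name] += repair
        yield env.timeout(repair)                          # repair time

random.seed(42)
env = simpy.Environment()
downtime = {f"M{i}": 0.0 for i in range(3)}                # three machines
for name in downtime:
    env.process(machine(env, name, downtime))
env.run(until=HORIZON)

for name, dt in downtime.items():
    print(name, "technical availability:", round(1 - dt / HORIZON, 3))
```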
The aim of this work is to establish and generalize a relationship between fractional partial differential equations (fPDEs) and stochastic differential equations (SDEs) for a wider class of stochastic processes, including fractional Brownian motions and sub-fractional Brownian motions with Hurst parameter H ∈ (1/2,1). We start by establishing the connection between an fPDE and an SDE via the Feynman-Kac theorem, which provides a stochastic representation of a general Cauchy problem. Building on this, we extend the connection by considering SDEs driven by fractional and sub-fractional Brownian motions and prove generalized Feynman-Kac formulas under a (sub-)fractional Brownian motion. As a by-product, an application of the theorem yields the solution of a fractional integral, which is relevant in probability theory.
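For orientation, the classical Feynman-Kac connection that this work generalizes can be stated as follows: if $X$ solves the SDE $dX_s = \mu(X_s, s)\,ds + \sigma(X_s, s)\,dW_s$ driven by a standard Brownian motion $W$, then

$$u(x, t) = \mathbb{E}\big[f(X_T) \,\big|\, X_t = x\big]$$

solves the backward Cauchy problem

$$\partial_t u + \mu(x,t)\,\partial_x u + \tfrac{1}{2}\sigma^2(x,t)\,\partial_{xx} u = 0, \qquad u(x, T) = f(x).$$

The generalization replaces $W$ by a fractional or sub-fractional Brownian motion with Hurst parameter $H \in (1/2, 1)$, which turns the PDE into a fractional one.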
Surface-enhanced Raman spectroscopy (SERS) provides a strong enhancement to an inherently weak Raman signal, which strongly depends on the material, design, and fabrication of the substrate. Here, we present a facile method of fabricating a non-uniform SERS substrate based on an annealed thin gold (Au) film that offers multiple resonances and gap sizes within the same sample. It is not only chemically stable, but also shows reproducible trends in terms of geometry and plasmonic response. Scanning electron microscopy (SEM) reveals particle-like and island-like morphology with different gap sizes at different lateral positions of the substrate. Extinction spectra show that the plasmonic resonance of the nanoparticles/metal islands can be continuously tuned across the substrate. We observed that for the analytes 1,2-bis(4-pyridyl) ethylene (BPE) and methylene blue (MB), the maximum SERS enhancement is achieved at different lateral positions, and the shape of the extinction spectra allows for the correlation of SERS enhancement with surface morphology. Such non-uniform SERS substrates with multiple nanoparticle sizes, shapes, and interparticle distances can be used for fast screening of analytes due to the lateral variation of the resonances within the same sample.
Perforations of the tympanic membrane (TM) can occur as a result of injury or inflammation of the middle ear. These perforations can lead to conductive hearing loss (HL), where in some cases the magnitude of HL exceeds that attributable to the observed TM perforation alone. We aim with this study to better understand the effects of location and size of TM perforations on the sound transmitting properties of the middle ear.
The middle ear transfer function (METF) of six human temporal bones (TB; freshly frozen specimens from body donors) was compared before and after perforation of the TM at different locations (anterior or posterior lower quadrant) and with different sizes (1 mm, ¼ of the TM, ½ of the TM, and full ablation). The METF were correlated with a finite element (FE) model of the middle ear, in which similar alterations were simulated.
The measured and simulated FE model METFs exhibited frequency- and perforation-size-dependent amplitude losses at all locations and severities. In direct comparison, posterior TM perforations affected the transmission properties to a larger degree than perforations of the anterior quadrant. This could be caused by an asymmetry of the TM, where the malleus-incus complex rotates and produces larger deflections in the posterior half of the TM than in the anterior half. The FE model of the TM with a sealed cavity suggests that small perforations result in a decrease of TM rigidity and thus in an increase in the oscillation amplitude of the TM, mostly above 1 kHz.
The location and size of TM perforations influence the METF in a reproducible way. Correlating our data with the FE model could help to better understand the pathologic mechanisms of middle-ear diseases. If small TM perforations with uncharacteristically significant HL are observed in daily clinical practice, additional middle ear pathologies should be considered. Further investigations on the loss of TM pretension due to perforations may be informative.
The debate about the future of the European Economic and Monetary Union has been omnipresent for some time (Herzog and Hengstermann 2013). With the temporary suspension of the European (national) debt rules until December 31, 2022, a passionately conducted post-Covid-19 reform discussion flared up once again. Geopolitical challenges now add to the existing needs for change. Is the stability of the monetary union in danger?
The time has come: application of artificial intelligence in small- and medium-sized enterprises
(2022)
Artificial intelligence (AI) is not yet widely used in small- and medium-sized industrial enterprises (SMEs). The reasons for this are manifold and range from poorly understood use cases and a lack of trained employees to insufficient data. This article presents a successful design-oriented case study at a medium-sized company where these obstacles are present. In this study, future demand forecasts are generated from historical demand data for products at material-number level using a gradient boosting machine (GBM). An improvement of 15% over the status quo (measured by the root mean squared error) could be achieved with rather simple techniques. The motivation, the method, and the first results are presented. Finally, challenges are addressed from which practitioners can derive learning experiences and impulses for their own projects.
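A minimal sketch of the modelling idea (a synthetic demand series and generic hyperparameters; not the case study's data or tuning):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
demand = np.maximum(0, 50 + 10 * np.sin(np.arange(120) / 6) + rng.normal(0, 5, 120))

# turn the monthly series into supervised pairs: 12 lagged values -> next month
LAGS = 12
X = np.array([demand[i:i + LAGS] for i in range(len(demand) - LAGS)])
y = demand[LAGS:]

split = int(0.8 * len(X))                      # chronological train/test split
gbm = GradientBoostingRegressor().fit(X[:split], y[:split])
rmse = np.sqrt(mean_squared_error(y[split:], gbm.predict(X[split:])))
print("RMSE:", round(rmse, 2))
```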
Forecasting intermittent demand time series is a challenging business problem, and companies have difficulties forecasting this particular form of demand pattern. On the one hand, it is characterized by many non-demand periods, so classical statistical forecasting algorithms, such as ARIMA, only work to a limited extent. On the other hand, companies often cannot meet the requirements for good forecasting models, such as providing sufficient training data. The recent major advances of artificial intelligence in applications are largely based on transfer learning. In this paper, we investigate whether this method, originating from computer vision, can improve the forecasting quality of intermittent demand time series using deep learning models. Our empirical results show that, in total, transfer learning can reduce the mean squared error by 65 percent. We also show that especially short (65 percent reduction) and medium-length (91 percent reduction) time series benefit from this approach.
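A minimal PyTorch sketch of the transfer-learning idea under stated assumptions (random tensors stand in for the pooled source series and the short intermittent target series; the paper's architectures and data are not reproduced): pretrain on data-rich source series, freeze the feature layer, and fine-tune only the output head on the target series.

```python
import torch
import torch.nn as nn

def fit(model, X, y, params, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 1))

X_src, y_src = torch.randn(2048, 12), torch.randn(2048, 1)  # pooled source windows
X_tgt, y_tgt = torch.randn(32, 12), torch.randn(32, 1)      # short target series

fit(model, X_src, y_src, model.parameters())        # pretraining on source data
for p in model[0].parameters():                     # freeze the feature layer
    p.requires_grad = False
fit(model, X_tgt, y_tgt, model[2].parameters(), lr=1e-4)    # fine-tune the head
```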
The use of deep learning models with medical data is becoming more widespread. However, although numerous models have shown high accuracy in medical tasks such as medical image recognition (e.g. radiographs), there are still many obstacles to seeing these models operate in a real healthcare environment. This article presents a series of basic requirements that must be taken into account when developing deep learning models for biomedical time series classification tasks, with the aim of facilitating the subsequent deployment of the models in healthcare. These requirements range from the correct collection of data to existing techniques for correctly explaining the results obtained by the models. Indeed, one of the main reasons the use of deep learning models is not more widespread in healthcare settings is their lack of clarity in explaining decision making.
Nowadays, the importance of early active patient mobilization in the recovery and rehabilitation phase has increased significantly. One way to involve patients in the treatment is a gamification-like approach, one of the methods of motivation in various life processes. This article presents a system prototype for patients who require physical activity as part of active early mobilization after medical interventions or during illness. Bedridden patients and people with a sedentary lifestyle (predominantly lying in bed) are also potential users. The main idea of the concept was a non-contact system implementation that feels effortless for patients during use. The system consists of three related parts: hardware, software, and a game application. To test the relevance and coherence of the system, it was used by 35 people. The participants were asked to play a video game requiring them to make body movements while lying down, and then to take part in a small survey to evaluate the system's usability. As a result, we offer a prototype consisting of hardware and software parts that can increase and diversify physical activity during active early mobilization of patients and prevent the occurrence of possible health problems due to predominantly low activity. The proposed design could be implemented in hospitals, rehabilitation centers, and even at home.
Healthy sleep is required for sufficient restoration of the human body and brain. Therefore, in the case of sleep disorders, appropriate therapy should be applied in a timely manner, which requires a prompt diagnosis. Traditionally, a sleep diary is part of the diagnosis and therapy monitoring of some sleep disorders, such as cognitive behaviour therapy for insomnia. To automate sleep monitoring and make it more comfortable for users, substituting a sleep diary with a smartwatch measurement could be considered. To assess whether this provides accurate results, a study with a total of 30 night recordings was conducted. Objective sleep measurement with a Samsung Galaxy Watch 4 was compared with the subjective approach (sleep diary), evaluating four relevant sleep characteristics: time of falling asleep, wake-up time, sleep efficiency (SE), and total sleep time (TST). The analysis demonstrated that the median difference between the two measurement approaches was 7 minutes for the time of falling asleep and 3 minutes for the wake-up time, which supports substituting the subjective measurement with a smartwatch. The SE was determined with a median difference between the two measurement methods of 5.22%, which also suggests that substitution is possible. Some individual recordings showed a higher variance between the two approaches; therefore, we conclude that substitution provides reliable results primarily in the case of long-term monitoring. The results of the evaluation of the TST measurement do not allow us to recommend substituting the measurement method.
In order to evaluate the performance of different stapes prosthesis types, a coupled finite element (FE) model of the human ear was developed. First, the middle-ear FE model was developed and validated using middle-ear transfer function measurements available in the literature, including pathological cases. Then, the inner-ear FE model was developed and validated using tonotopy, impedance, and cochlear amplification level curves from the literature. Both models are based on pre-existing research with some improvements and were combined into one coupled FE model. The stapes in the coupled FE ear model was replaced with a model of a stapes prosthesis to create a reconstructed ear model that can be used to estimate how different types of prostheses perform relative to each other as well as to the natural ear. This will help in designing new, innovative types of stapes prostheses or other middle-ear prostheses, as well as in improving those already available on the market.
Simulation models of the middle ear have rarely been used for diagnostic purposes due to their limited predictive ability with respect to pathologies. One big challenge is the large uncertainty and ambiguity in the choice of material parameters of the model.
Typically, the model parameters are determined by fitting simulation results to validation measurements. In a previous study, it was shown that fitting the model parameters of a finite-element model using the middle-ear transfer function and various other measurable output variables from normal ears alone is not sufficient to obtain a good predictive ability of the model on pathological middle-ear conditions. However, the inclusion of validation measurements on one pathological case resulted in a very good predictive ability also for other pathological cases. Although the found parameter set was plausible in all aspects, it was not yet possible to draw conclusions about the uniqueness and the accuracy or the uncertainty of the parameter set.
To answer these questions, statistical solution approaches are used in this study. Using the Monte Carlo method, a large number of plausible model data sets are generated that correctly represent the normal and pathological middle-ear characteristics in terms of various output variables such as impedance, reflectance, and the umbo and stapes transfer functions. Subsequent principal component analyses (PCA) allow conclusions to be drawn about correlations, quantitative limits, and the statistical density of parameter values.
Furthermore, applying inverse PCA yields numerous plausible parameterizations of the middle-ear model, which can be used for data augmentation and training of a neural network which is capable of distinguishing between a normal middle ear and pathologies like otosclerosis, malleus fixation, and disarticulation based on objectively measured quantities like impedance, reflectance, and umbo velocity.
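A minimal sketch of the inverse-PCA data augmentation step under stated assumptions (a random matrix stands in for the Monte Carlo-accepted parameter sets):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
accepted = rng.normal(size=(1000, 20))  # stand-in for plausible middle-ear parameter sets

pca = PCA(n_components=5).fit(accepted)
scores = pca.transform(accepted)        # density of parameters in component space

# sample new points in component space and map them back ("inverse PCA")
new_scores = rng.normal(scale=scores.std(axis=0), size=(200, 5))
augmented = pca.inverse_transform(new_scores)  # new plausible parameterizations
print(augmented.shape)                         # training data for the classifier
```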
Handling complexity in modern software engineering: editorial introduction to issue 32 of CSIMQ
(2022)
The Internet and related digital technologies, such as the Internet of Things (IoT), cognition and artificial intelligence, data analytics, services computing, cloud computing, mobile systems, collaboration networks, and cyber-physical systems, are both strategic drivers and enablers of modern digital platforms with fast-evolving ecosystems of intelligent services for digital products. This issue of CSIMQ presents three recent articles on modern software engineering. First, we focus on continuous software development and place it in the context of software architectures and digital transformation. The first contribution is followed by a description of the basis of specific security requirements and adequate digital monitoring mechanisms. Finally, we present a practical example of the digital management of livestock farming.
Rational behavior is a standard assumption in science. Indeed, rationality is required for environmental action towards net-zero emissions or public health interventions during the SARS-CoV-2 pandemic. Yet little is known about the elements of rationality. This paper explores a dualism of rationality comprising optimality and consistency. By designing a new guessing game, we experimentally uncover and disentangle these two building blocks of human rationality. We find evidence that rationality is largely associated with optimality and only weakly with consistency. Remarkably, under uncertainty, rationality gradually shifts to a heuristic notion. Our findings provide insights to better understand human decision making.
An empirical study on management accountants’ roles and role perceptions: a German perspective
(2022)
The ongoing discussion on the roles of management accountants (MAs) often leads to the business partner (BP) role being perceived as the role of choice. Yet many scholars and practitioners seem to assume that this role is clear to managers and MAs, that it makes sense to them, and that all managers and MAs agree on it and implement it. Inconsistencies between actual, perceived, and expected roles might cause identity and role conflicts. However, we lack evidence on whether managers and MAs perceive, expect, and act in the BP role and on whether tensions and conflicts might exist. This paper is based on a quantitative empirical study, conducted in 2019, of a large German high-tech firm whose top management decided to implement the BP role. We found several areas of tension in this role discussion and contribute to the literature on MAs' roles with a more nuanced view of the interaction between managers and MAs regarding MAs' roles. The study shows that differences exist mainly in business managers' expectations of MAs in the BP role, which the MAs do not know exactly how to fulfill.