This paper explores the application of People Analytics in
recruiting professors for universities of applied sciences. Using data-driven personas, the research project aims to identify and communicate the different paths and connections leading candidates to a professorship. The authors introduce the concept of personas, describe the underlying data source and derive an example for the current project.
Framework for integrating intelligent product structures into a flexible manufacturing system
(2023)
Increasing individualisation of products with a high variety and shorter product lifecycles result in smaller lot sizes, increasing order numbers, and rising data and information processing for manufacturing companies. To cope with these trends, integrated management of the products and manufacturing information is necessary through a “product-driven” manufacturing system. Intelligent products that are integrated as an active element within the controlling and planning of the manufacturing process can represent flexibility advantages for the system. However, there are still challenges regarding system integration and evaluation of product intelligence structures. In light of these trends, this paper proposes a conceptual framework for defining, analysing, and evaluating intelligent products using the example of an assembly system. This paper begins with a classification of the existing problems in the assembly and a definition of the intelligence level. In contrast to previous approaches, the analysis of products is expanded to five dimensions. Based on this, a structured evaluation method for a use case is presented. The structure of solving the assembly problem is provided by the use case-specific ontology model. Results are presented in terms of an assignment of different application areas, linking the problem with the target intelligence class and, depending on the intelligence class of the product, suggesting requirements for implementation. The conceptual framework is evaluated by utilising a case study in a learning factory. Here, the model-mix assembly is controlled actively by the workpiece carrier in terms of transferring the variant-specific work instructions to the operator and the collaborative robot (cobot) at the workstations. The resulting system thus enables better exploitation of the potentials through less frequent errors and shorter search times.
Such an implementation has demonstrated that the intelligent workpiece carrier represents an additional part for realising a cyber-physical production system (CPPS).
The topic of direct sales (direct-to-customer, or D-to-C for short) in the automotive industry is en vogue: according to Valtech (2023, p. 2), the transformation of sales models in this industry is inevitable. The Covid-19 pandemic additionally acted as a catalyst for D-to-C, accelerating the digital transformation and the acceptance of virtual sales processes.
The massive use of patient data for training artificial intelligence algorithms is now common in medicine. In this scientific work, a statistical analysis is performed on one of the datasets most widely used for training artificial intelligence models for the detection of sleep disorders: the Sleep Heart Health Study 2. The study focuses on determining whether patients' gender and age have an influence relevant enough to justify training artificial intelligence models on datasets differentiated by these variables.
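The kind of subgroup comparison described, checking whether a sleep metric differs by gender and age enough to argue for differentiated training sets, can be sketched with standard-library Python. The record fields, values, and the age cut-off below are illustrative assumptions, not data from the study:

```python
from collections import defaultdict
from statistics import mean

# Illustrative records: (sex, age, apnea-hypopnea index).
# Hypothetical values, not taken from the Sleep Heart Health Study 2.
records = [
    ("F", 45, 4.1), ("F", 52, 5.0), ("F", 68, 9.8), ("F", 72, 11.2),
    ("M", 44, 6.3), ("M", 55, 8.9), ("M", 66, 14.5), ("M", 75, 16.0),
]

def subgroup_means(rows, age_cut=60):
    """Group rows by sex and an age band, then average the metric."""
    groups = defaultdict(list)
    for sex, age, ahi in rows:
        band = "older" if age >= age_cut else "younger"
        groups[(sex, band)].append(ahi)
    return {key: round(mean(vals), 2) for key, vals in groups.items()}

means = subgroup_means(records)
# A large spread across subgroups would argue for differentiated datasets.
spread = max(means.values()) - min(means.values())
```

In practice the decision would rest on a proper statistical test per subgroup rather than raw means, but the grouping step is the same.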
Accurate monitoring of a patient's heart rate (HR) is a key element in medical observation and health monitoring. In particular, its importance extends to the identification of sleep-related disorders. Various methods have been established that involve sensor-based recording of physiological signals followed by automated examination and analysis. This study evaluates the efficacy of a non-invasive HR monitoring framework based on an accelerometer sensor, specifically during sleep. To achieve this goal, the motion induced by thoracic movements during cardiac contractions is captured by a device installed under the mattress. Signal filtering techniques and heart rate estimation using the symlets6 wavelet are part of the implemented computational framework described in this article. Subsequent analysis indicates the potential applicability of this system in the prognostic domain, with an average error margin of approximately 3 beats per minute. The results obtained represent a promising advancement in non-invasive heart rate monitoring during sleep, with potential implications for improved diagnosis and management of cardiovascular and sleep-related disorders.
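The beat-counting stage of such a pipeline can be illustrated with a minimal NumPy sketch. The synthetic signal, sampling rate, and threshold below are assumptions for illustration; the paper's actual framework uses symlets6 wavelet filtering before this step, which is omitted here for brevity:

```python
import numpy as np

FS = 100  # sampling rate in Hz (assumed)

def estimate_hr(signal, fs=FS, threshold=0.5):
    """Estimate heart rate in bpm by counting local maxima above a
    threshold; a simplified stand-in for the wavelet-filtered stage."""
    x = np.asarray(signal)
    # Interior samples that exceed both neighbours and the threshold.
    is_peak = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > threshold)
    n_beats = int(np.count_nonzero(is_peak))
    duration_s = len(x) / fs
    return 60.0 * n_beats / duration_s

# Synthetic 30 s recording with one clean pulse per second, i.e. 60 bpm.
t = np.arange(0, 30, 1 / FS)
signal = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.001)
hr = estimate_hr(signal)
```

On real under-mattress accelerometer data the signal is far noisier, which is why the reported framework filters it first and still quotes an error margin of about 3 bpm.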
Software scripts for sensor data extraction in Raspberry Pi: user-space and kernel-space comparison
(2024)
This paper compares two popular scripting implementations for hardware prototyping: Python scripts executed from user space and C-based Linux driver processes executed from kernel space, providing information to researchers weighing one against the other for their implementations. The conclusions show that deploying software scripts in kernel space makes it possible to guarantee a certain quality of sensor information using a Raspberry Pi without the need for advanced real-time operating systems.
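The user-space side of such a comparison often comes down to scheduling jitter: a plain Python polling loop cannot guarantee a fixed sampling period. A small sketch for measuring that jitter (the 10 ms period and sample count are arbitrary assumptions, not the paper's setup):

```python
import time

def measure_jitter(period_s=0.01, samples=50):
    """Run a fixed-period polling loop in user space and record how far
    each iteration drifts past its nominal deadline, in milliseconds."""
    jitter_ms = []
    next_deadline = time.perf_counter() + period_s
    for _ in range(samples):
        # Sleep until the deadline; the OS scheduler decides when we wake.
        time.sleep(max(0.0, next_deadline - time.perf_counter()))
        now = time.perf_counter()
        jitter_ms.append((now - next_deadline) * 1000.0)
        next_deadline += period_s  # absolute deadlines avoid drift build-up
    return jitter_ms

jitter = measure_jitter()
worst_ms = max(jitter)
```

A kernel-space driver sampling from a timer interrupt avoids most of this wake-up latency, which is the trade-off the paper quantifies.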
The strong demand to transform the textile and fashion industry towards sustainability requires continuous implementation of the Education for Sustainable Development (ESD) mission statement in education and industry. To achieve this goal, the European research project "Fashion DIET - Sustainable Fashion Curriculum at Textile Universities in Europe. Development, Implementation and Evaluation of a Teaching Module for Educators", co-funded by the Erasmus+ programme of the European Union (2020-1-DE01-KA203-005657), aims to create an ESD module for university lecturers and research-based teaching and learning materials delivered through an e-learning portal. First, an online questionnaire was rolled out to assess university faculty attitudes toward and needs for ESD content and methods. The feedback questionnaire enabled the selection of the most relevant data for the elaboration of an action and research-oriented professional development module for ESD in textile education, which will be accessible through an information & e-learning portal. The e-learning portal can be used as a web-based tool to apply and evaluate the project outcomes, e.g. the further education module and the teaching and learning materials for educators, such as manuals, broadcasts and the provision of interactive and physical materials. It thus ensures that the teaching materials can be used sustainably in the classroom. It also provides country-specific data for the fashion and textile industry and its market, taking into account the different perspectives of universities and schools. In any case, the portal represents (1) the web-based platform to support the dissemination of ESD as a guiding principle and (2) a central contact point for the target group to obtain relevant information on ESD. Fashion DIET explores the use of e-learning to improve teaching and learning on ESD, by training educators and empowering them as multipliers for a sustainable textile and fashion industry. 
At a higher level, the European project strengthens the quality and relevance of learning provision in education towards the latest developments in textile research and innovation in terms of a more sustainable fashion.
Purpose
As a response to the increased frequency of disruptive events and intense competition, organizational agility has become a key concept in organizational research. Fostering organizational agility requires leveraging knowledge that exists both outside (exploration) and inside (exploitation) the organization. This research tests the so-called ambidexterity hypothesis, which claims that a balance between exploration and exploitation leads to increased organizational outcomes, including the development of organizational agility. Complementing previously established measurement models on ambidexterity, this research proposes an alternative measurement model to analyze how ambidexterity can enhance organizational agility and, indirectly, performance, taking into consideration the moderating effect of environmental competitiveness.
Design/methodology/approach
A review of existing measurement models for ambidexterity shows that tension, a crucial aspect of ambidexterity, is often neglected. The authors, therefore, develop a new measurement model of ambidexterity to incorporate ambidexterity-induced tension. Using this measurement model, they examine the effect of ambidexterity on the development of entrepreneurial and adaptive agility as well as performance.
Findings
Ambidexterity positively influences both entrepreneurial and adaptive agility, indicating that a balance between exploration and exploitation has superior organizational effects. This finding confirms the ambidexterity hypothesis with respect to organizational agility. Furthermore, both entrepreneurial and adaptive agility drive organizational performance. These two indirect effects via agility fully mediate the impact of ambidexterity on organizational performance. Finally, environmental competitiveness positively moderates the relationship between ambidexterity and adaptive agility.
Originality/value
The findings extend research on ambidexterity by showing its positive effects on organizational agility. Furthermore, the study proposes an alternative operationalization to capture the ambidexterity construct that may lay the groundwork for further applications of the ambidexterity concept.
Comparative analysis of the chemical and rheological curing kinetics of formaldehyde-based wood adhesives is crucial for assessing their respective performance. Differential scanning calorimetry (DSC) and rheometry are the conventional techniques used for monitoring the curing processes leading to crosslinking polymerization of the adhesives. However, the direct comparison of these techniques is inappropriate due to the intrinsic differences in their underlying procedures. To address this challenge, two adhesive samples were sequentially cured, first by rheometry and then by DSC. The higher curing degree observed in the subsequent DSC procedure underpins the incomplete curing of the samples during the initial rheometry. Furthermore, the comparative assessment of the activation energies, molar ratios, and active groups of the two adhesives highlights the importance of the pre-exponential factor in addition to the activation energies, as it accounts for the probability of active groups coinciding in the appropriate spatial arrangement.
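The point about the pre-exponential factor can be made concrete with the Arrhenius equation k = A·exp(−Ea/RT): two adhesives with similar activation energies can still cure at very different rates if their pre-exponential factors differ. All numbers below are illustrative assumptions, not the measured values from the study:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(A, Ea_J_mol, T_K):
    """Rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea_J_mol / (R * T_K))

T = 393.15  # 120 degC curing temperature (assumed)

# Same activation energy, pre-exponential factors one order of magnitude
# apart -- hypothetical values for two formaldehyde-based adhesives.
k_a = arrhenius_rate(A=1.0e7, Ea_J_mol=70e3, T_K=T)
k_b = arrhenius_rate(A=1.0e8, Ea_J_mol=70e3, T_K=T)

# The exponential terms cancel, so the rate ratio is set entirely
# by the pre-exponential factors.
ratio = k_b / k_a
```

This is why comparing activation energies alone, as is often done, can misrank adhesives whose collision/orientation probabilities differ.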
Im Team zum fliegenden Traum
(2024)
Purpose – This paper aims to determine the affecting factors of the brand authenticity of startups in social media.
Design/methodology/approach – Using a qualitative method based on a grounded theory approach, this research specifies and classifies the affecting factors of brand authenticity of startups in social media through in-depth semi-structured interviews.
Findings – Multiple factors affecting the brand authenticity of startups in social media are determined and categorized as indexical, iconic and existential cues through this research. Connection to heritage and having credible support are determined as indexical cues. Founder intellectuality, brand intellectuality, commitment toward customers and proactive clear and interesting communications are identified as iconic cues. Having self-confidence and self-satisfaction, having intimacy with the brand and a joyful feeling for interactions with the community around the brand are determined as existential cues in this research. This research furthers previous arguments on a multiplicity of brand authenticity by shedding light on the relationship between the different aspects of authenticity and the form that different affecting factors can be organized together. Consumers eventually evaluate a strengthened perception of brand authenticity through existential cues that reflect the cues of other aspects (iconic and indexical) which passed through the goal-based assessment and self-authentication filter.
Research limitations/implications – The research sampling population can be more diversified in terms of sociodemographic attributes. Due to the qualitative methodology of this research, assessment of the findings through quantitative methods can be considered in future research.
Practical implications – Using the findings of this research, startup managers can properly build a perception of authenticity in their consumers’ minds by using alternate factors while lacking major indexical cues such as heritage. This research helps startup businesses to design their brand communications better to convey their authenticity to their audiences.
Originality/value – This research determines the factors affecting the authenticity of startup brands in social media. It also defines the process of authenticity perception through different aspects of brand authenticity.
The aim of this paper is to understand the extent to which music and fashion depend on and interact with each other, focusing on the development of music and fashion trends from 1950 to the present. In addition, the reader is given an insight into whether available technology will influence the development of and access to music and fashion in the future. The research for this paper relied on secondary sources, including library and online research, with the aim of gathering information on past and current developments in music and fashion. These secondary-source methods were the most suitable option, as they delivered reliable results and thus increased the accuracy of the collected data. They were, however, also limited, since data on the fashion and music developments of the 2000s in particular were scarce. This is explained by the main finding that the development of this period is not as distinct as that of earlier periods, in which a fashion trend went hand in hand with a new music genre or hit. Fashion and music thus correlate to a certain extent, but the 2000s are characterized by a revival of the music and fashion trends of previous years without new inventions.
Human Digital Twin
(2022)
Imagine being able to simulate match sequences of Bundesliga games, or even entire World Cup matches, with the support of artificial intelligence. Or imagine a coach selecting the line-up for a final based on data about the opponent, fielding psychologically and physiologically different player types accordingly (cf. Jahn). Pure fiction? Not really. Even today, athletes' performances are increasingly analysed and evaluated digitally. SAP, for example, has developed a platform that creates a digital data image of football players (cf. SAP). At the last World Cup, every player received precise statistics on his performance shortly after the match via the new FIFA Player App (cf. FIFA). In the future, virtual representations of football players, digital twins, are expected to deliver even better information. The necessary data are obtained from sensors in the jersey, the boots or the ball. The captured movement and position data, together with ball contacts, yield a precise data image of the player. Such simulations, based on a model of the human being in the digital world, are currently attracting great attention in research and practice (cf. van der Valk et al.). Not only in the world of football, but also in medicine and in the context of Industry 4.0 and product design, digital human twins have the potential to become a key technology.
Climate change is one of the key challenges of this century due to its impact on society and the economy. Students are asking their business schools to scale up climate change education (CCE) across all disciplines, and employers are looking for graduates ready to work on solutions. This desire for solutions is shared by faculty; however, in a recent survey, many highlighted that they lack knowledge about climate change mitigation and how to integrate CCE into their disciplines.
This chapter supports lecturers, professors and senior management in their journey to get an overview of CCE and, more importantly, to find high-impact climate solutions to be integrated and assessed in their teaching units.
Introduction to the special issue on self‑managing and hardware‑optimized database systems 2022
(2023)
Data management systems have evolved in terms of functionality, performance characteristics, complexity, and variety during the last 40 years. In particular, relational database management systems and big data systems (e.g., key-value stores, document stores, graph stores and graph computation systems, Spark, MapReduce/Hadoop, or data stream processing systems) have evolved with novel additions and extensions. However, systems administration tasks have become highly complex and expensive, especially given the simultaneous and rapid hardware evolution in processors, memory, storage, and networking. These developments present new open problems and challenges to data management systems as well as new opportunities.
The SMDB (International Workshop on Self-Managing Database Systems) and HardBD&Active (Joint International Workshop on Big Data Management on Emerging Hardware and Data Management on Virtualized Active Systems) workshops organized in conjunction with the IEEE ICDE (International Conference on Data Engineering) offered two distinct platforms for examining the above system-related challenges from different perspectives. The SMDB workshop looks into developing autonomic or self-* features in database and data management systems to tackle complex administrative tasks, while the HardBD&Active workshop focuses on harnessing hardware technologies to enhance efficiency and performance of data processing and management tasks. As a result of these workshops, we are delighted to present the third special issue of DAPD titled “Self-Managing and Hardware-Optimized Database Systems 2022,” which showcases the best contributions from the SMDB 2021/2022 and HardBD&Active 2021/2022 workshops.
§ 251 Haftungsverhältnisse
(2023)
Below the balance sheet, unless they are to be reported on the liabilities side, liabilities arising from the issuance and transfer of bills of exchange, from guarantees, bill and cheque guarantees, and from warranty contracts, as well as from the provision of collateral for third-party liabilities, must be disclosed; they may be stated in a single amount. Contingent liabilities must also be disclosed if they are matched by equivalent recourse claims.
In recent years, both artificial intelligence (AI) and variable renewable energy (VRE) have received increasing attention in scientific research. The purpose of this article is therefore to investigate the potential of deep-learning-based applications for VRE and, in doing so, to provide an introduction to and structured overview of the field. First, we conduct a systematic literature review of the application of AI, especially deep learning (DL), to the integration of VRE. Subsequently, we provide a comprehensive overview of specific DL-based solution approaches and evaluate their applicability, including a survey of the most widely applied and best-suited DL architectures. We identify ten DL-based approaches to support the integration of VRE in modern power systems. We find (I) solar PV and wind power generation forecasting, (II) system scheduling and grid management, and (III) intelligent condition monitoring to be three high-potential application areas.
Because of high product and technology complexity, companies involve external partners in their research and development (R&D) processes. The result is interorganizational projects, which represent temporary organizations in which heterogeneous organizations work closely together. Since project work is always teamwork, these projects, owing to their characteristics, face major challenges on an organizational, relational, and content-related collaboration level. This paper therefore raises the following research question: “How can a project team be supported on an organizational, relational, and content-related level in an interorganizational new product development setting?” To answer it, an explorative expert study was set up with two digital workshops using the interactive presentation tool Mentimeter. The results show that a cooperative innovation culture could support project teams on an organizational and relational level in minimizing predominant problems, for example by enabling functional communication. The study further yields 18 values of a cooperative innovation culture, among them openness and transparency, risk and failure tolerance, and respect. On a content-related level, the results show that an adaptable tool promoting creativity and collaboration methods, together with content-related input, could be beneficial for problem-solving in an interorganizational new product development setting, because such a tool can guide product developers through the process with suitable creativity and collaboration methods, give content-related input, and enable interactive interchange on a table-top. Future research could focus mainly on the connection between the cooperative innovation culture and the tool, since these potentially influence each other.
In a recently developed, practice-oriented study programme at Reutlingen University, an innovative product with solid company references is to be defined and realised by student teams. On the basis of this product, all subjects of the business engineering study programme “Sustainable Production and Business” are taught. By focusing on three main paths of future skills developed by NextSkills to analyse upcoming social changes, global challenges, and innovation-driven, agile fields of work, the new study programme aims to create responsible leaders who will shape global businesses respectfully. Different TRIZ tools help students develop their own products with a focus on sustainability and contribute to the enhancement of future skills. Further, students get to know TRIZ tools in an unbiased way, unburdened by too much theory, and are thus continuously supported in the progressing product development process that accompanies their studies. Hence, students perceive TRIZ on the one hand as a method for developing sustainable products and, on the other, for finding sustainable solutions to everyday problems. The knowledge and positive experiences gained in this way should then arouse curiosity for the TRIZ class at the end of the study programme, from which students can graduate with a TRIZ Level 1 certificate. Thereby, as many students as possible are introduced to the TRIZ methods, and the TRIZ toolset is spread widely.
More than a decade ago, the authors of this contribution posed the following thought experiment:
“Imagine the business of sports without fans. No spectators at sports matches, no buyers of merchandising, no potential customers for sponsoring companies, no recipients for the sports media. Such a scenario would be unthinkable.“ (Bühler & Nufer, 2010, S. 63)
During the Corona pandemic of 2020/21, the unthinkable nevertheless became reality when spectators around the world were no longer allowed to attend sporting events. The world's biggest sporting event, the Tokyo 2020 Olympic Games, had to be postponed and took place a year later, under conditions that were not really any better, in front of almost empty stands. The same applied to UEFA EURO 2020, which also had to be postponed by a year but could then at least take place with reduced spectator admission (apart from a few exceptions such as the final at Wembley). Behind the deliberations of both the International Olympic Committee and the European football union lay the fear that their respective premium products would suffer without fans in the stadiums. Of course, there were still millions of people who followed live streams of sporting events or bought all kinds of merchandise of their favourite teams in those difficult Corona times. Yet the pandemic confirmed once more the basic rule of the sports business: the sports market in general, and professional sports organizations in particular, need fans who are willing to invest their time, their emotions, and their money in their favourite sport and their favourite teams. Spectators are the primary, and arguably most important, customers of a sports enterprise. It is therefore essential for every professional sports organization to build and maintain a sustainable relationship with its fans and to involve them in every possible way. Against this background, the importance of fan engagement becomes clear.
Sponsorship is one of the non-classical forms of marketing communication policy and addresses people in non-commercial situations. Sponsorship in particular can reach target groups that are, for example, negatively disposed towards advertising or unreachable through classical communication instruments. A sponsorship engagement is also generally more readily accepted than classical advertising, since sponsorship per se rests on a certain intention to support. This chapter presents the essential foundations of sponsorship and examines the communication instrument of sports sponsorship closely, from the perspective of sponsors as well as from the view of sponsored parties. In addition, the particularities of sports event sponsorship are outlined, and ambush marketing is presented as an alternative to sports sponsorship. Finally, current developments in sports sponsorship in the context of the FIFA World Cup 2022 and the upcoming EURO 2024 are discussed.
There are indications that we are entering a new era for multiple team membership (MTM) research, moving beyond the structural approach that has characterized MTM research to date to focus on important and under-researched issues, such as the nature of employees’ experiences in an MTM context. Although team research suggests that the experiences of members impact team functioning, these lines of reasoning have not, until recently, made their way into MTM research. To overcome this limitation, this symposium showcases five papers that use a variety of theoretical perspectives, research designs (i.e., qualitative, quantitative), contexts (e.g., healthcare, automotive manufacturing, online panels), methodologies, and analytical methods (i.e., meta-analysis, content/thematic analysis). The symposium focuses on surfacing and advancing unanswered questions that extend theory and can offer fruitful directions for MTM research by examining critical individual- and team-level outcomes (e.g., individual/team performance, individual counterproductive and organizational citizenship behavior, individual learning, individual turnover intentions, organizational commitment) in the experiences of MTM employees across their teams (e.g., goals, functions, roles). We hope to provide a forum to advance unanswered questions that offer fruitful directions for MTM research.
Application systems often need to be deployed in different variants if requirements that influence their implementation, hosting, and configuration differ between customers. Therefore, deployment technologies such as Ansible or Terraform support a certain degree of variability modeling. However, modern application systems typically consist of various software components deployed using multiple deployment technologies that only support their own proprietary, non-interoperable variability modeling concepts. The Variable Deployment Metamodel (VDMM) manages deployment variability across heterogeneous deployment technologies based on a single variable deployment model. However, VDMM currently only supports modeling conditional components and their relations, which is sometimes too coarse-grained, since it requires modeling entire components, including their implementation and deployment configuration, for each different component variant. Therefore, we extend VDMM with a more fine-grained approach for managing the variability of component implementations and their deployment configurations, e.g., if a cheap version of a SaaS deployment provides only a community edition of the software rather than the enterprise edition with its additional built-in analytical reporting functionalities. We show that our extended VDMM can be used to realize variable deployments across different individual deployment technologies using a case study and our prototype OpenTOSCA Vintner.
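The idea of fine-grained configuration variability can be sketched as conditional resolution over a single variable model. The component names, property names, and presets below are hypothetical illustrations, not the actual VDMM notation or the OpenTOSCA Vintner API:

```python
# Hypothetical variable deployment model: each property value is either a
# plain value (invariant) or a dict mapping variant presets to values.
variable_model = {
    "shop-backend": {
        "edition": {"cheap": "community", "premium": "enterprise"},
        "analytics_reporting": {"cheap": False, "premium": True},
        "runtime": "nodejs",  # invariant across all variants
    },
}

def resolve(model, preset):
    """Produce a variant-specific deployment model for one preset by
    picking the matching value for every conditional property."""
    resolved = {}
    for component, props in model.items():
        resolved[component] = {
            name: (value[preset] if isinstance(value, dict) else value)
            for name, value in props.items()
        }
    return resolved

cheap = resolve(variable_model, "cheap")
```

The contrast with component-level conditionality is that only the two varying properties are modeled twice here, not the entire component with its implementation and deployment configuration.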
With increasing dynamics in the research environment, the digitalization of product development, the technical requirements on future decision-making processes grow alongside complexity. The introduction of new IT systems for the automation of decisions entails adaptations of companies' current business processes. For a successful implementation of new IT information tools, possible effects on the existing user systems must be examined in advance. New technologies, AI information systems, and new knowledge often emerge in science through the interpretation and synthesis of existing knowledge. For this reason, the quality of literature analyses is becoming ever more relevant in engineering and computer science. Along with the number of publications, the effort for a structured literature analysis (SLA) is also growing. In this paper, the authors present the research process and results of an SLA. This work aims to determine the current state of research on decision support in product development at small and medium-sized enterprises as well as large companies in the automotive industry and, after analysis and evaluation, to identify possible research gaps concerning automated decision support systems (aEUS).
In the era of digital transformation, the notion of software quality transcends its traditional boundaries, necessitating an expansion to encompass the realms of value creation for customers and the business. Merely optimizing technical aspects of software quality can result in diminishing returns. Product discovery techniques can be seen as a powerful mechanism for crafting products that align with an expanded concept of quality, one that incorporates value creation. Previous research has shown that companies struggle to determine appropriate product discovery techniques for generating, validating, and prioritizing ideas for new products or features to ensure they meet the needs and desires of the customers and the business. For this reason, we conducted a grey literature review to identify various techniques for product discovery. First, the article provides an overview of different techniques and assesses how frequently they are mentioned in the reviewed literature. Second, we mapped these techniques to an existing product discovery process from previous research to provide concrete guidelines for establishing product discovery in organizations. The analysis shows, among other things, the increasing importance of techniques for structuring the problem exploration process and the product strategy process. The results are interpreted with regard to the techniques' importance for practical applications and recognizable trends.
Gamification has been increasingly applied to software engineering education in the past. The approaches vary from applying game elements in a conceptual phase of the course to using specific tools to engage the students more and support their learning goals. However, existing tools usually have game elements, such as quizzes or challenges, but do not provide a more computer game-like experience. Therefore, we aim to raise the gamified learning experience to another level by proposing Gamify-IT. Gamify-IT is a Unity- and web-based game platform intended to help students learn software engineering. It follows an immersive role-play game characteristic where the students explore a world, find and solve minigames and clear dungeons with SE tasks. Lecturers can configure the worlds, e.g., to add content hints. Furthermore, they can add and configure minigames and dungeons to include exercises in a fully gamified way. Thereby, they customize their course in Gamify-IT to adapt the world very precisely to other materials such as lectures or exercises. Results of an evaluation of our initial prototype show that (i) students like to engage with the platform, (ii) students are motivated to learn when using Gamify-IT, and (iii) the minigames support students in understanding the learning objectives.
Impact of a large distribution network on radiation characteristics of planar spiral antenna arrays
(2023)
Designing antenna arrays with a central feed point has gained ground in antenna engineering. This approach, usually chosen for reasons of manufacturing cost, is difficult to realize and leads to a large feeding network, whose impact is numerically investigated in the present work. A comparison of three different antennas shows that the enlargement of the feed strongly affects both the antenna's overall dimensions and its radiation characteristics. The antenna with the plug-in solution is not only small in size but also performs better compared to antennas with a central feed point. Considering the high effort of designing a feed network with a central point and the influence of the resulting enlarged network on the dimensions and radiation characteristics of the antenna, the cost saving in production can be put into perspective.
The Belt and Road Initiative (BRI) has reinforced China's business engagement in Sub-Saharan Africa (SSA). While previous international business research focused on the internationalization and investments of Chinese companies, this viewpoint uncovers how both local African and international non-Chinese small and medium-sized enterprises (SMEs) may benefit from and participate in the BRI. A focus is placed on the infrastructure sector, which has accounted for the highest investments since the inception of the BRI in 2013. In a conceptual way, the motives of SMEs to participate in infrastructure project business in the context of the BRI are explored. By investigating the challenges of two large transport infrastructure projects, the business potentials for SMEs become visible. It is argued that SMEs find business potentials in the BRI in Sub-Saharan Africa particularly as investors, sub-contractors and project management experts.
The introduction of smart contracts has expanded the applicability of blockchains to many domains beyond finance and cryptocurrencies. Moreover, different blockchain technologies have evolved that target special requirements. As a result, in practice, often a combination of different blockchain systems is required to achieve an overall goal. However, due to the heterogeneity of blockchain protocols, the execution of distributed business transactions that span several blockchains leads to multiple interoperability and integration challenges. Therefore, in this article, we examine the domain of Cross-Chain Smart Contract Invocations (CCSCIs), which are distributed transactions that involve the invocation of smart contracts hosted on two or more blockchain systems. We conduct a systematic multi-vocal literature review to get an overview of the available CCSCI approaches. We select 20 formal literature studies and 13 high-quality gray literature studies, extract data from them, and analyze it to derive the CCSCI Classification Framework. With the help of the framework, we group the approaches into two categories and eight subcategories. The approaches differ in multiple characteristics, e.g., the mechanisms they follow, and the capabilities and transaction processing semantics they offer. Our analysis indicates that all approaches suffer from obstacles that complicate real-world adoption, such as the low support for handling heterogeneity and the need for trusted third parties.
Blockchains have become increasingly important in recent years and have expanded their applicability to many domains beyond finance and cryptocurrencies. This adoption has particularly increased with the introduction of smart contracts, which are immutable, user-defined programs directly deployed on blockchain networks. However, many scenarios require business transactions to simultaneously access smart contracts on multiple, possibly heterogeneous blockchain networks while ensuring the atomicity and isolation of these transactions, which is not natively supported by current blockchain systems. Therefore, in this work, we introduce the Transactional Cross-Chain Smart Contract Invocation (TCCSCI) approach that supports such distributed business transactions while ensuring their global atomicity and serializability. The approach introduces the concept of Resource Manager Smart Contracts, and 2PC for Blockchains (2PC4BC), a client-driven Atomic Commit Protocol (ACP) specialized for blockchain-based distributed transactions. We validate our approach using a prototypical implementation, evaluate its introduced overhead, and prove its correctness.
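The commit logic behind a client-driven Atomic Commit Protocol such as the one described above can be illustrated with a plain two-phase commit sketch. Note that this is a hypothetical, in-memory simplification for illustration only: the `ResourceManager` class and its method names are invented here and do not reproduce the actual 2PC4BC protocol or Resource Manager Smart Contracts of the paper.

```python
class ResourceManager:
    """Simulates a resource-manager contract on one blockchain (hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.state = "idle"
        self.committed = False

    def prepare(self, ok=True):
        # Phase 1: tentatively lock the touched contract state and vote.
        self.state = "prepared" if ok else "aborted"
        return ok

    def commit(self):
        # Phase 2a: make tentative writes permanent and release locks.
        if self.state == "prepared":
            self.state = "committed"
            self.committed = True

    def abort(self):
        # Phase 2b: discard tentative writes and release locks.
        self.state = "aborted"
        self.committed = False


def run_transaction(managers, votes):
    """Client-driven 2PC: commit only if every chain votes 'prepared'."""
    all_prepared = all(rm.prepare(ok) for rm, ok in zip(managers, votes))
    for rm in managers:
        if all_prepared:
            rm.commit()
        else:
            rm.abort()
    return all_prepared
```

A transaction spanning two chains either commits on both or aborts on both, which is exactly the global atomicity property the approach targets.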
Most innovation projects in companies fail not for lack of ideas, creativity, or the will to implement, but because of many small hurdles that massively slow the projects down. Initiatives thus lose the momentum that ensures rapid success. One area in which results are achieved unconventionally, agilely, and quickly is guerrilla marketing. What can innovation, research, and project leaders learn from its toolbox of methods? How can concrete marketing tactics also give innovation projects more virality and momentum, making the initiatives' own dynamic "unstoppable"? This is what you will learn in this essential.
Advancing mental health diagnostics: AI-based method for depression detection in patient interviews
(2023)
In this paper, we present a novel artificial intelligence (AI) application for depression detection, using advanced transformer networks to analyse clinical interviews. By incorporating simulated data to enhance traditional datasets, we overcome limitations in data protection and privacy, consequently improving the model’s performance. Our methodology employs BERT-based models, GPT-3.5, and ChatGPT-4, demonstrating state-of-the-art results in detecting depression from linguistic patterns and contextual information that significantly outperform previous approaches. Utilising the DAIC-WOZ and Extended-DAIC datasets, our study showcases the potential of the proposed application in revolutionising mental health care through early depression detection and intervention. Empirical results from various experiments highlight the efficacy of our approach and its suitability for real-world implementation. Furthermore, we acknowledge the ethical, legal, and social implications of AI in mental health diagnostics. Ultimately, our study underscores the transformative potential of AI in mental health diagnostics, paving the way for innovative solutions that can facilitate early intervention and improve patient outcomes.
This research evaluates current measurement scales for ambidexterity and proposes a new approach for the measurement of this important construct. We argue that current measurement approaches may be unsuitable to capture the concept of ambidexterity. Through a systematic scale development process, we derive a measurement scale with dual items that simultaneously refer to both dimensions, exploitation and exploration, thus reflecting the true nature of ambidexterity. An extensive pre-test with 39 executives suggests that our scale is suitable for capturing ambidexterity. Our measurement model enhances conceptual clarity of ambidexterity and can serve as a base for future investigations of the concept.
Wear on cutting tools with a geometrically defined cutting edge leads to poor surface quality, increased forces, dimensional deviations, and breakage. So far, this wear has been measured outside the machine or indirectly (e.g., via the diameter). Tools are exchanged after a certain number of workpieces, a certain time, or a certain tool travel distance. This article presents a novel system for directly determining flank wear within the working space of a machining center. A protected, integrated industrial camera with a lens is installed in the working space, and the machine axes or the machining spindle position the tool in front of it. After a measurement lasting only a few seconds, the wear is evaluated in parallel with the machining time.
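The evaluation idea of such an in-machine camera system can be sketched as a simple image-analysis step: measure the flank wear width VB from a binarized tool image. The function name, mask layout, and pixel scale below are hypothetical placeholders, not the system's actual algorithm.

```python
import numpy as np

def flank_wear_vb(mask, um_per_pixel):
    """Maximum flank wear width VB_max in µm from a boolean wear mask.

    mask: 2D boolean array where True marks pixels classified as the worn
    flank land; the wear width is the wear extent per image column.
    """
    wear_per_column = mask.sum(axis=0)          # wear extent per column, in pixels
    return wear_per_column.max() * um_per_pixel # widest column, scaled to µm

# Simulated wear land: 15 pixels high across columns 60..149,
# at an assumed optical resolution of 2 µm per pixel.
mask = np.zeros((100, 200), dtype=bool)
mask[40:55, 60:150] = True
vb = flank_wear_vb(mask, um_per_pixel=2.0)
```

In practice the binarization (segmenting worn from unworn tool surface) is the hard part; the width measurement itself reduces to the column count shown here.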
The EAT–Lancet planetary health diet (PHD) provides guidelines on a global scale and calls for red meat consumption to be halved. Operational PHD guidelines at country level have yet to be determined. Here we argue that the biological link between milk and bovine-meat production must be considered when operationalizing the globally calculated PHD to national contexts. Using a stylized computer simulation model rooted in a food system approach, we explore the impact of dietary scenarios on milk and bovine-meat production and show that ignoring this biological link can lead to substantial imbalances between national dietary guidelines and production outcomes and potentially lead to food waste. Furthermore, we assess current national dietary guidelines in Europe and find that most disregard this biological link and are incompatible with the PHD, with implications for policymakers and consumers to consider when adapting the PHD in national contexts.
Research question: Polysomnography (PSG) is the clinical standard procedure and the reference for sleep measurement and the classification of individual sleep stages. Alternative approaches to this elaborate procedure could offer several advantages if the measurements were carried out in a more comfortable way. The main objective of this research study is to develop an algorithm for the automatic classification of sleep stages that uses only movement and respiration signals [1].
Patients and methods: After analyzing the current research, we chose multinomial logistic regression as the basis for the approach [2]. To increase the accuracy of the evaluation, four features derived from movement and respiration signals were developed. The nocturnal recordings of 35 persons, provided by Charité-Universitätsmedizin Berlin, were used for the evaluation. The average age of the participants was 38.6 +/- 14.5 years and the average BMI was 24.4 +/- 4.9 kg/m2. Since the algorithm works with three stages, the stages N1, N2, and N3 were merged into the NREM stage. The available data set was strictly split into a training data set of about 100 h and a test data set of about 160 h of nocturnal recordings. Both data sets had a similar ratio of men to women, and the average BMI showed no significant deviation.
Results: The algorithm was implemented and delivered successful results: the accuracy of detecting wake/NREM/REM phases is 73%, with a Cohen's kappa of 0.44 for the 19,324 analyzed sleep epochs of 30 s each. A certain overestimation of the NREM phase was observed, which can partly be explained by its prevalence in a typical sleep pattern. Even the use of a balanced training data set could not fully resolve this issue.
Conclusions: The results achieved confirmed the suitability of the approach in principle. Its advantage is that only movement and respiration signals are used, which can be recorded with less effort and more comfortably for users than, for example, cardiac or EEG signals. The new system therefore represents a clear improvement over existing approaches. Merging the described algorithmic software with the hardware system for measuring respiration and body-movement signals described in [1] into an autonomous, contactless system for continuous sleep monitoring is a possible direction for future work.
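The classification core named in the methods section, a multinomial (softmax) logistic regression over hand-crafted features with three classes (wake/NREM/REM), can be sketched as follows. The synthetic epoch data and the feature semantics are hypothetical; the study's four actual features are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def add_bias(X):
    return np.hstack([X, np.ones((len(X), 1))])

def fit_multinomial(X, y, n_classes=3, lr=0.1, epochs=500):
    """Batch gradient descent on the softmax cross-entropy loss."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]               # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)   # averaged gradient step
    return W

# Synthetic 30-s epochs: four features with a class-dependent shift
# (0 = wake, 1 = NREM, 2 = REM); purely illustrative data.
y_train = rng.integers(0, 3, 1200)
X_train = add_bias(rng.normal(size=(1200, 4)) + y_train[:, None] * 1.5)
W = fit_multinomial(X_train, y_train)
acc = (softmax(X_train @ W).argmax(axis=1) == y_train).mean()
```

Per-epoch prediction then reduces to an argmax over the three class probabilities, which also makes the model cheap enough for continuous monitoring hardware.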
Context
In a world of high dynamics and uncertainties, it is almost impossible to have a long-term prediction of which products, services, or features will satisfy the needs of the customer. To counter this situation, the conduction of Continuous Improvement or Design Thinking for product discovery are common approaches. A major constraint in conducting product discovery activities is the high effort to discover and validate features and requirements. In addition, companies struggle to integrate product discovery activities into their agile processes and iterations.
Objective
This paper suggests a supportive tool, the "Discovery Effort Worthiness (DEW) Index", for product owners and agile teams to determine a suitable amount of effort that should be spent on Design Thinking activities. To operationalize DEW, proposals for practitioners are presented that can be used to integrate product discovery into product development and delivery.
Method
A case study was conducted for the development of the DEW index. In addition, we conducted an expert workshop to develop proposals for the integration of product discovery activities into the product development and delivery process.
Results
First, we present the "Discovery Effort Worthiness Index" in the form of a formula. Second, we identified requirements that must be fulfilled for a systematic integration of product discovery activities into product development and delivery. Third, from these requirements we derived proposals for integrating product discovery activities with a company's product development and delivery.
Conclusion
The developed "Discovery Effort Worthiness Index" provides a tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. Integrating product discovery with product development and delivery should ensure that the results of product discovery are incorporated into product development. This aims to systematically analyze product risks to increase the chance of product success.
Human pose estimation (HPE) is integral to scene understanding in numerous safety-critical domains involving human-machine interaction, such as autonomous driving or semi-automated work environments. Avoiding costly mistakes is synonymous with anticipating failure in model predictions, which necessitates meta-judgments on the accuracy of the applied models. Here, we propose a straightforward human pose regression framework to examine the behavior of two established methods for simultaneous aleatoric and epistemic uncertainty estimation: maximum a-posteriori (MAP) estimation with Monte-Carlo variational inference and deep evidential regression (DER). First, we evaluate both approaches on the quality of their predicted variances and whether these truly capture the expected model error. The initial assessment indicates that both methods exhibit the overconfidence issue common in deep probabilistic models. This observation motivates our implementation of an additional recalibration step to extract reliable confidence intervals. We then take a closer look at deep evidential regression, which, to our knowledge, is applied comprehensively for the first time to the HPE problem. Experimental results indicate that DER behaves as expected in challenging and adverse conditions commonly occurring in HPE and that the predicted uncertainties match their purported aleatoric and epistemic sources. Notably, DER achieves smooth uncertainty estimates without the need for a costly sampling step, making it an attractive candidate for uncertainty estimation on resource-limited platforms.
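The DER building blocks the abstract refers to can be made concrete: an evidential head outputs Normal-Inverse-Gamma parameters (γ, ν, α, β) per regression target, trained with the NIG negative log-likelihood (following Amini et al.'s deep evidential regression), and the aleatoric/epistemic variances fall out in closed form. The network and keypoint parameterization are omitted; this is a sketch of the loss and uncertainty split only.

```python
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of observation y under a
    Normal-Inverse-Gamma evidential head with parameters (gamma, nu, alpha, beta)."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

def der_uncertainties(nu, alpha, beta):
    """Closed-form uncertainty split: (aleatoric, epistemic) variance.

    aleatoric = E[sigma^2] = beta / (alpha - 1)
    epistemic = Var[mu]    = beta / (nu * (alpha - 1))
    """
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic
```

Because both variances come from a single forward pass, no Monte-Carlo sampling is needed, which is the efficiency argument made above for resource-limited platforms.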
With the rapid development of globalization, the demand for translation between different languages is also increasing. Although pre-training has achieved excellent results in neural machine translation, existing neural machine translation offers almost no high-quality alignment information suitable for specific fields. This paper therefore proposes pre-training neural machine translation with alignment information via optimal transport. First, it narrows the representation gap between different languages by using OTAP to generate domain-specific data for information alignment, learning richer semantic information. Second, it proposes a lightweight model, DR-Reformer, which uses Reformer as the backbone network, adds Dropout layers and Reduction layers, reduces model parameters without losing accuracy, and improves computational efficiency. Experiments on the Chinese-English datasets of AI Challenger 2018 and WMT-17 show that the proposed algorithm performs better than existing algorithms.
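The optimal-transport ingredient of such alignment approaches can be illustrated with a generic entropy-regularized OT solver (Sinkhorn iterations), which computes a soft alignment plan between two sets of token embeddings from a cost matrix. This is a textbook sketch under uniform marginals, not the paper's OTAP pipeline.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport plan between uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / reg)                 # Gibbs kernel of the cost matrix
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform source/target marginals
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)                     # scale rows toward marginal a
        v = b / (K.T @ u)                   # scale columns toward marginal b
    return u[:, None] * K * v[None, :]      # transport (soft alignment) plan

rng = np.random.default_rng(0)
cost = rng.random((4, 6))                   # toy embedding-distance matrix
plan = sinkhorn(cost)
```

Each entry of `plan` can be read as how much probability mass of a source token is aligned to a target token; low-cost pairs receive more mass.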
Analog integrated circuit sizing still relies heavily on human expert knowledge, as previous automation approaches have not found widespread acceptance in industry. One strand, optimization-based automation, is often discarded due to inflated constraining setups, infeasible results, or excessive run times. To address these deficits, this work proposes an alternative optimization flow that captures a designer's intuition for feasible design spaces by integrating expert knowledge based on the gm/ID method. Moreover, the extensive run times of simulation-based optimization flows are overcome by incorporating computationally efficient machine learning methods. Neural network surrogate models predicting eleven performance parameters increase the evaluation speed by 3 400× on average compared to a simulator. Additionally, they enable the use of optimization algorithms that depend on automatic differentiation, which would otherwise be unavailable in this field. First, an up to 4× more efficient way of sampling training data based on the aforementioned space is detailed. After presenting the architecture and training effort of the surrogate models, they are employed as part of the objective function for sizing three operational amplifiers with three different optimization algorithms. Finally, the benefits of using the gm/ID method become evident when considering technology migration, as previously found solutions may be reused for other technologies.
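The gm/ID sizing step the flow builds on reduces to simple arithmetic once a characterization lookup table is available: choose an inversion level via gm/ID, derive the bias current from the target transconductance, then the width from the current density ID/W. The numeric current density below is a made-up placeholder, not real technology data.

```python
def size_transistor(gm_target, gm_over_id, id_over_w):
    """Return (bias current I_D in A, width W in m) for a target gm.

    gm_over_id selects the inversion level (1/V); id_over_w is the
    current density (A/m) read from a technology characterization sweep
    at that gm/ID point (placeholder value in the example below).
    """
    i_d = gm_target / gm_over_id   # I_D = gm / (gm/ID)
    w = i_d / id_over_w            # W = I_D / (ID/W)
    return i_d, w

# Example: target gm = 1 mS at gm/ID = 15 1/V (moderate inversion),
# with an assumed current density of ID/W = 5 A/m.
i_d, w = size_transistor(1e-3, 15.0, 5.0)
```

Because the mapping stays valid across technologies once the lookup tables are swapped, a sizing found this way can be migrated, which is the reuse argument made in the abstract.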
Natural wood colors occur within a wide range from almost white (e.g., white poplar), through various yellowish, reddish, and brownish hues, to almost black (e.g., ebony). The intrinsic color of wood is basically defined by its chemical composition. However, other factors such as specific anatomical formations or physical properties further affect the optical impression. Starting with the chemical composition of wood and anatomical basics, wood color and its modifications are discussed in this chapter. The classic method of coloring or re-coloring wood-based material surfaces is the application of a coating containing appropriate dyes or pigments. Different concepts for wood coating and coloration are presented. Another method uses dyes to color the wood structure itself. As alternative techniques, physical methods, for example, drying, steaming, ammoniation, bleaching, enzyme treatment, as well as treatment with electromagnetic irradiation (e.g., UV), are explained in this chapter.
The new world of work is rapidly changing the labor market and fostering the development of new working models. Hybrid work offers countless future potentials that can lift a company to a whole new level, both internally and externally. In this book, experts from business, politics, and academia present the challenges, opportunities, and solutions of the new working models and look into the future. They discuss how hybrid work affects employees' productivity and well-being, and what demands are placed on leaders in order to lead successfully.
Facing ever-looming climate change, studying the drivers for individuals' Information Systems (IS) Use to reduce environmental harm gains momentum. While extant research on the antecedents of sustainable IS Use has focused on specific theories, interventions, contexts, and technologies, a holistic understanding has become increasingly elusive, with a synthesis remaining absent. We employ a systematic literature review methodology to shed light on the driving antecedents for sustainable IS Use among individual consumers. Our results build on findings of 29 empirical studies drawn from 598 articles retrieved from our premier outlets and a forward/backward search. The analysis reveals six salient complementary antecedents: Relief, Empowerment, Default, User-centricity, Salience, and Encouragement. We recommend considering these concepts when developing, deploying, promoting, or regulating digital technologies to mitigate individual consumers' emissions. Along with memorable and implementable concepts, our theoretical framework offers a novel conceptualization and four promising avenues for researchers on sustainable IS Use.
Conclusion and outlook
(2023)
This concluding chapter begins with a summarizing discussion of the various parts of this volume and of the individual chapters, addressing the main statements of each chapter and drawing a conclusion for each part. Building on this, the editors of the volume examine the future of sustainability management in general and the future of sustainable sports and culture management in particular. Finally, a tailored continuing-education program is presented as a building block for meeting these challenges.
Delphi Markets
(2023)
Delphi markets refer to approaches and implementations that integrate prediction markets and Delphi studies (real-time Delphi). Combining the two forecasting methods can potentially compensate for each other's weaknesses. For example, prediction markets can be used to select participants with expertise and to motivate long-term participation through their gamified approach and incentive mechanisms. In this paper, two potentials for prediction markets and four potentials for Delphi studies that become possible through integration are derived theoretically. Subsequently, three different integration approaches are presented, which exemplify integration at the user, market, and Delphi-question level and show that, depending on the approach, not all potentials can be achieved. Finally, recommendations for the use of Delphi markets are derived, and existing limitations of Delphi markets as well as future developments are pointed out.
Film formation of the self-synthesized polymer EPM–g–VTMDS (ethylene–propylene rubber, EPM, grafted with vinyltetramethyldisiloxane, VTMDS) was studied regarding bonding to the adhesion promoter vinyltrimethoxysilane (VTMS) on oxidized 18/10 chromium/nickel–steel (V2A) stainless steel surfaces. Polymer films of different mixed solutions, including a commercial siloxane and silicone, dimethyl, vinyl-group-terminated crosslinker (HANSA SFA 42100, CAS# 68083-19-2, 0.35 mmol vinyl/g) and a platinum, 1,3-diethenyl-1,1,3,3-tetramethyldisiloxane complex Karstedt's catalyst (ALPA–KAT 1, CAS# 68478-92-2), were spin coated on V2A stainless steel surfaces with adsorbed VTMS thin layers in order to analyze film formation of EPM–g–VTMDS at early stages. Surface topography and chemical bonding of the high-performance polymers on differently oxidized V2A surfaces were investigated with X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM), scanning electron microscopy (SEM) and surface-enhanced Raman spectroscopy (SERS). AFM and SEM as well as XPS results indicated that the formation of the polymer film proceeds via growth of polymer islands. Chemical signatures of the essential polymer contributions, linker and polymer backbones, could be identified using XPS core-level peak shape analysis and also SERS. The appearance of signals related to Si–O–Si can be seen as a clear indication of lateral crosslinking and silica network formation in the films on the V2A surface.
Smart cities are considered data factories that generate an enormous amount of data from various sources. In fact, data are the backbone of any smart service. Therefore, the strategically beneficial handling of this digital capital is crucial for cities. Some smart city pioneers have already written down their approach to data in the form of data strategies, but what should a city's data strategy include, and how can the goals and measures defined in the strategies be operationalized? This paper addresses these questions by looking closely at the data strategies of cities in Germany and the top three countries in the EU Digital Economy and Society Index. The in-depth analysis of eight city data strategies yielded eleven dimensions that cities should consider in their data strategy: relevance of data, principles, methods, data sharing, technology, data culture, data ethics, organizational structure, data security and privacy, collaborations, and data literacy. In addition, data governance is a concept for putting these eleven strategic dimensions into practice through standardization measures, training programs, and the definition of roles and responsibilities by developing a data catalog.