Over the last 50 years, neoclassical financial theory has dominated our perception of what happens in financial markets. It has spurred numerous valuable theories and concepts, all based on the concept of Homo Economicus, the strictly rational economic man. However, humans do not always act in a strictly rational manner. For students and practitioners alike, our book aims to open the door to another perspective on financial markets: a behavioral perspective based on a Homo Oeconomicus Humanus. This agent acts with limited rationality when making decisions: he or she uses heuristics and shortcuts and is prone to the influence of emotions. This sounds familiar from real life and can be transferred to what happens in financial markets, too.
This book covers behavioural finance and thus the irrational behaviour observed in financial and capital markets. The current crises and the increasing price fluctuations in these markets call for an extension of the neoclassical approach. Written accessibly and with extensive application material, the textbook introduces students to the reality of the financial sector. It aims to open the door for students and practitioners to an emerging behavioural view of financial markets, in which a more realistic Homo Oeconomicus Humanus acts in the markets. This agent employs boundedly rational heuristics when making decisions and is guided by emotional influences. The point is not to judge market participants' behaviour as right or wrong. Rather, readers should come to appreciate that different perspectives on financial market events are possible. Which perspective is better suited to capturing reality in a specific situation may well constitute a fruitful new field of research in its own right. The aim is therefore not to replace neoclassical capital market theory with behavioural finance, but rather to open it up in this direction and to present the paradigm extension initiated by behavioural finance.
Speculative bubbles run like a common thread through the history of financial markets. The first significant, documented bubble arose in seventeenth-century Holland as the widely known tulip mania; more were to follow in the centuries thereafter. But how do these mostly euphoric and ultimately panic-like market developments arise? And how could it happen, time and again, that market professionals failed to recognise them in time? This book focuses on speculative bubbles as signs of recurring and persistent market anomalies. In the first part, Rolf J. Daxhammer and Máté Facsar explain the emergence and causes of speculative bubbles as well as their different phases and types. They are particularly keen to show that bubbles have both positive and negative effects on economies. The authors then present the most important speculative bubbles, from the tulip mania and the South Sea bubble through to recent developments in financial and real estate markets. Readers will thus be able to understand typical characteristics of capital markets and to explain the development of historical speculative bubbles on the basis of the five-phase model and to apply it themselves.
For more than 50 years, neoclassical capital market theory has dominated our understanding of how financial markets work. It has produced a multitude of theories and concepts (e.g. portfolio theory, the Capital Asset Pricing Model, and Value-at-Risk) and rests on the assumption of a strictly rational Homo Oeconomicus.
This book aims to open the door for practitioners to an emerging behavioural view of financial markets, in which a more realistic Homo Oeconomicus Humanus acts in the markets. This agent employs boundedly rational heuristics when making decisions and is guided by emotional influences.
The authors first bridge the gap from the neoclassical view of financial markets to behavioural finance. Speculative bubbles, from the tulip mania to the subprime mortgage bubble, are then presented in detail as signs of bounded rationality in financial markets. Next, the focus turns to heuristics in investment decisions on securities markets. The biases these heuristics trigger are classified by their harm to risk and return within the RRS-Index®. Finally, examples of applying behavioural finance insights in wealth management and corporate governance are discussed, and a look is taken at current developments in neuro-finance and emotional finance.
New to this edition is financial nudging, a particularly promising application of behavioural finance insights.
This book aims to open the door for students and practitioners to an emerging behavioural view of financial markets, in which a more realistic Homo Oeconomicus Humanus acts in the markets. This agent employs boundedly rational heuristics when making decisions and is guided by emotional influences.
The authors first bridge the gap from the neoclassical view of financial markets to behavioural finance. Speculative bubbles are then presented as signs of bounded rationality. Next, the focus turns to heuristics in investment decisions. Finally, examples of applying behavioural finance insights in wealth management and corporate governance are discussed, with a look at neuro-finance and emotional finance.
In this article we link certain developments in cryptocurrency price movements to the five characteristic phases that speculative bubbles undergo according to US economists Charles Kindleberger and Hyman Minsky, who developed the corresponding framework in "Manias, Panics, and Crashes" (1978). Although every speculative bubble is somewhat different, they tend to follow five phases. In addition, we address the question of how speculative bubbles develop and why they suddenly collapse.
Appropriate mechanical properties and fast endothelialization of synthetic grafts are key to ensure long-term functionality of implants. We used a newly developed biostable polyurethane elastomer (TPCU) to engineer electrospun vascular scaffolds with promising mechanical properties (E-modulus: 4.8 ± 0.6 MPa, burst pressure: 3326 ± 78 mmHg), which were biofunctionalized with fibronectin (FN) and decorin (DCN). Neither uncoated nor biofunctionalized TPCU scaffolds induced major adverse immune responses except for minor signs of polymorph nuclear cell activation. The in vivo endothelial progenitor cell homing potential of the biofunctionalized scaffolds was simulated in vitro by attracting endothelial colony-forming cells (ECFCs). Although DCN coating did attract ECFCs in combination with FN (FN + DCN), DCN-coated TPCU scaffolds showed a cell-repellent effect in the absence of FN. In a tissue-engineering approach, the electrospun and biofunctionalized tubular grafts were cultured with primary-isolated vascular endothelial cells in a custom-made bioreactor under dynamic conditions with the aim to engineer an advanced therapy medicinal product. Both FN and FN + DCN functionalization supported the formation of a confluent and functional endothelial layer.
Stress is becoming an important topic in modern life. It results in a higher rate of health disorders such as burnout, heart problems, obesity, asthma, diabetes, depression, and many others. Furthermore, an individual's behavior and capabilities can be directly affected, leading to altered cognition and impaired decision-making and problem-solving skills. In a dynamic and unpredictable environment, such as the automotive domain, this can result in a higher risk of accidents. Several papers have addressed the estimation and prediction of drivers' stress levels while driving. Another important question concerns not only the stress level of the driver himself, but also the mutual influence within a group of other drivers in the vicinity. This paper proposes a system that clusters drivers in a given area and derives each individual's stress level. This information is analyzed to generate a stress map, a graphical view of road sections with higher stress influence. The aggregated data can be used to generate navigation routes with lower stress influence, thereby reducing stress-influenced driving and improving road safety.
Sleep is an important aspect in the life of every human being. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate the physical and psychological state of a human. Poor sleep quality has a major impact on health status and can lead to various diseases. This paper presents an approach that uses long-term monitoring of vital data, gathered by a body sensor during the day and the night and supported by a mobile application connected to an analysis system, to estimate the sleep quality of its user and to give recommendations for improving it in real time. Actimetry and historical data are used to refine the individual recommendations, based on common techniques from machine learning and big data analysis.
The impact of stress on human beings has become a serious problem. Reported effects include a higher rate of health disorders such as heart problems, obesity, asthma, diabetes, depression, and many others. An individual in a stressful situation has to cope with altered cognition as well as impaired decision-making and problem-solving skills. This can lead to a higher risk of accidents in dynamic environments such as the automotive domain. Several papers have addressed the estimation and prediction of drivers' stress levels while driving. Another important question concerns not only the stress level of the driver himself, but also the mutual influence within a group of other drivers in the vicinity. This paper proposes a system that clusters drivers in a given area and computes each individual's stress level. This information is analyzed to generate a stress map, a graphical view of road sections with higher stress influence. The aggregated data can be used to generate navigation routes with lower stress influence and to recommend driving behavior, thereby reducing stress-influenced driving and improving road safety.
We examine the role of communication from users on dropout from digital learning systems to answer the following questions: (1) how does the sentiment within qualitative signals (user comments) affect dropout rates? (2) does the variance in the proportion of positive and negative sentiments affect dropout rates? (3) how do quantitative signals (e.g. likes) moderate the effect of the qualitative signals? and (4) how does the effect of qualitative signals on dropout rates change across early and late stages of learning? Our hypotheses draw on learning theory and self-regulation theory and were tested using data from 447 learning videos across 32 series of online tutorials, spanning 12 different fields of learning. The findings indicate a main effect of negative sentiment on dropout rates but no effect of positive sentiment on preventing dropout behaviour. This main effect is stronger in the early stages of learning and weakens at later stages. We also observe an effect of the extent of variance of positive and negative sentiments on dropout behaviour. The effects are negatively moderated by quantitative signals. Overall, making commenting more broad-based rather than polarised can be a useful strategy in managing learning, transferring knowledge, and building consensus.
The paper describes a new stimulus using learning factories and an academic research programme, an M.Sc. in Digital Industrial Management and Engineering (DIME) comprising a double degree, to enhance international collaboration between four partner universities. The programme will be structured in such a way as to maintain or improve the level of innovation at the learning factories of each partner. The partners agreed to use learning factory focus areas along with DIME learning modules to stimulate international collaboration. Furthermore, they identified several research areas within the framework of the DIME programme to encourage horizontal and vertical collaboration. Vertical collaboration connects faculty expertise across the learning factory network to advance knowledge in one of the focus areas, while horizontal collaboration connects knowledge and expertise across multiple focus areas. Together they offer a platform for students to develop disciplinary and cross-disciplinary applied research skills necessary for addressing the complex challenges faced by industry. Hence, the university partners have the opportunity to develop the learning factory capabilities in alignment with the smart manufacturing concept. The learning factory is thus an important pillar in this venture. While postgraduate students and researchers in the DIME programme are the enablers who ensure the success of entire projects, the learning factory provides a learning environment that is entirely conducive to fostering these successful collaborations. Ultimately, the partners are focussed on utilising smart technologies in line with the digitalisation of the production process.
Facial beauty prediction (FBP) aims to develop a machine that automatically assesses facial attractiveness. In the past, these results were highly correlated with human ratings, and therefore also with the annotators' biases. Since artificial intelligence can exhibit racist and discriminatory tendencies, the causes of skews in the data must be identified. Developing training data and AI algorithms that are robust to biased information is a new challenge for scientists. As aesthetic judgement is usually biased, we take this one step further and propose an unbiased convolutional neural network for FBP. While it is possible to create network models that rate the attractiveness of faces at a high level, from an ethical point of view it is equally important to ensure the model is unbiased. In this work, we introduce AestheticNet, a state-of-the-art attractiveness prediction network, which significantly outperforms competitors with a Pearson correlation of 0.9601. Additionally, we propose a new approach for generating a bias-free CNN to improve fairness in machine learning.
We address the problem of 3D face recognition based on either 3D sensor data, or on a 3D face reconstructed from a 2D face image. We focus on 3D shape representation in terms of a mesh of surface normal vectors. The first contribution of this work is an evaluation of eight different 3D face representations and their multiple combinations. An important contribution of the study is the proposed implementation, which allows these representations to be computed directly from 3D meshes, instead of point clouds. This enhances their computational efficiency. Motivated by the results of the comparative evaluation, we propose a 3D face shape descriptor, named Evolutional Normal Maps, that assimilates and optimises a subset of six of these approaches. The proposed shape descriptor can be modified and tuned to suit different tasks. It is used as input for a deep convolutional network for 3D face recognition. An extensive experimental evaluation using the Bosphorus 3D Face, CASIA 3D Face and JNU-3D Face datasets shows that, compared to state-of-the-art methods, the proposed approach is better in terms of both computational cost and recognition accuracy.
In recent years, 3D facial reconstructions from single images have garnered significant interest. Most of the approaches are based on 3D Morphable Model (3DMM) fitting to reconstruct the 3D face shape. Concurrently, the adoption of Generative Adversarial Networks (GAN) has been gaining momentum to improve the texture of reconstructed faces. In this paper, we propose a fundamentally different approach to reconstructing the 3D head shape from a single image by harnessing the power of GAN. Our method predicts three maps of normal vectors of the head’s frontal, left, and right poses. We are thus presenting a model-free method that does not require any prior knowledge of the object’s geometry to be reconstructed.
The key advantage of our proposed approach is the substantial improvement in reconstruction quality compared to existing methods, particularly in the case of facial regions that are self-occluded in the input image. Our method is not limited to 3D face reconstruction. It is generic and applicable to multiple kinds of 3D objects. To illustrate the versatility of our method, we demonstrate its efficacy in reconstructing the entire human body.
By delivering a model-free method capable of generating high-quality 3D reconstructions, this paper not only advances the field of 3D facial reconstruction but also provides a foundation for future research and applications spanning multiple object types. The implications of this work have the potential to extend far beyond facial reconstruction, paving the way for innovative solutions and discoveries in various domains.
3D-assisted 2D face recognition involves the process of reconstructing 3D faces from 2D images and solving the problem of face recognition in 3D. To facilitate the use of deep neural networks, a 3D face, normally represented as a 3D mesh of vertices and its corresponding surface texture, is remapped to image-like square isomaps by a conformal mapping. Based on previous work, we assume that face recognition benefits more from texture. In this work, we focus on the surface texture and its discriminatory information content for recognition purposes. Our approach is to prepare a 3D mesh, the corresponding surface texture and the original 2D image as triple input for the recognition network, to show that 3D data is useful for face recognition. Texture enhancement methods to control the texture fusion process are introduced and we adapt data augmentation methods. Our results show that texture-map-based face recognition can not only compete with state-of-the-art systems under the same preconditions but also outperforms standard 2D methods from recent years.
The aim of this work is the development of an artificial intelligence (AI) application to support the recruiting process, elevating the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing (i.e. active recruitment), pre-sorting, evaluating structured video interviews and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground-truth data. Together, these approaches represent important steps forward in creating ethical and unbiased machine learning systems.
AI-based prediction and recommender systems are widely used in various industry sectors. However, general acceptance of AI-enabled systems remains largely uninvestigated. Therefore, we first conducted a survey with 559 respondents. Findings suggested that AI-enabled systems should be fair, transparent, consider personality traits and perform tasks efficiently. Secondly, we developed a system for the Facial Beauty Prediction (FBP) benchmark that automatically evaluates facial attractiveness. As our previous experiments have shown, these results are usually highly correlated with human ratings. Consequently, they also reflect human bias in annotations. An upcoming challenge for scientists is to provide training data and AI algorithms that can withstand distorted information. In this work, we introduce AntiDiscriminationNet (ADN), a superior attractiveness prediction network. We propose a new method to generate an unbiased convolutional neural network (CNN) to improve the fairness of machine learning on facial datasets. To train unbiased networks we generate synthetic images and weight training data for anti-discrimination assessments towards different ethnicities. Additionally, we introduce an approach with entropy penalty terms to reduce the bias of our CNN. Our research provides insights into how to train and build fair machine learning models for facial image analysis by minimising implicit biases. Our AntiDiscriminationNet finally outperforms all competitors in the FBP benchmark by achieving a Pearson correlation coefficient of PCC = 0.9601.
Advancing mental health diagnostics: AI-based method for depression detection in patient interviews
(2023)
In this paper, we present a novel artificial intelligence (AI) application for depression detection, using advanced transformer networks to analyse clinical interviews. By incorporating simulated data to enhance traditional datasets, we overcome limitations in data protection and privacy, consequently improving the model’s performance. Our methodology employs BERT-based models, GPT-3.5, and ChatGPT-4, demonstrating state-of-the-art results in detecting depression from linguistic patterns and contextual information that significantly outperform previous approaches. Utilising the DAIC-WOZ and Extended-DAIC datasets, our study showcases the potential of the proposed application in revolutionising mental health care through early depression detection and intervention. Empirical results from various experiments highlight the efficacy of our approach and its suitability for real-world implementation. Furthermore, we acknowledge the ethical, legal, and social implications of AI in mental health diagnostics. Ultimately, our study underscores the transformative potential of AI in mental health diagnostics, paving the way for innovative solutions that can facilitate early intervention and improve patient outcomes.
In recent years, the demand for accurate and efficient 3D body scanning technologies has increased, driven by the growing interest in personalised textile development and health care. This position paper presents the implementation of a novel 3D body scanner that integrates multiple RGB cameras and image stitching techniques to generate detailed point clouds and 3D mesh models. Our system significantly enhances the scanning process, achieving higher resolution and fidelity while reducing the cost, time and effort required for data acquisition and processing. Furthermore, we evaluate the potential use cases and applications of our 3D body scanner, focusing on the textile technology and health sectors. In textile development, the 3D scanner contributes to bespoke clothing production, allowing designers to construct made-to-measure garments, thus minimising waste and enhancing customer satisfaction through fitting clothing. In mental health care, the 3D body scanner can be employed as a tool for body image analysis, providing valuable insights into the psychological and emotional aspects of self-perception. By exploring the synergy between the 3D body scanner and these fields, we aim to foster interdisciplinary collaborations that drive advancements in personalisation, sustainability, and well-being.
Integrated circuits (ICs) are an integral part of many devices such as smartphones, computers and televisions. Ever more functions are being integrated on these circuits. To keep this work manageable within the given time in the future, developers therefore need a way to collaborate simultaneously. Under the working title eCEDA (eCollaboration for Electronic Design Automation), a concept is being developed for a web application intended to enable real-time collaboration between developers in chip design. This concept, along with various aspects of collaboration, is covered in this work.
Highly viscous bioinks offer great advantages for the three-dimensional fabrication of cell-laden constructs by microextrusion printing. However, no standardised method of mixing a high viscosity biomaterial ink and a cell suspension has been established so far, leading to non-reproducible printing results. A novel method for the homogeneous and reproducible mixing of the two components using a mixing unit connecting two syringes is developed and investigated. Several static mixing units, based on established mixing designs, were adapted and their functionality was determined by analysing specific features of the resulting bioink. As a model system, we selected a highly viscous ink consisting of fresh frozen human blood plasma, alginate, and methylcellulose, and a cell suspension containing immortalized human mesenchymal stem cells. This bioink is crosslinked after fabrication. A pre-crosslinked gellan gum-based bioink providing a different extrusion behaviour was introduced to validate the conclusions drawn from the model system. For characterisation, bioink from different zones within the mixing device was analysed by measurement of its viscosity, shape fidelity after printing and visual homogeneity. When taking all three parameters into account, a comprehensive and reliable comparison of the mixing quality was possible. In comparison to the established method of manual mixing inside a beaker using a spatula, a significantly higher proportion of viable cells was detected directly after mixing and plotting for both bioinks when the mixing unit was used. A screw-like mixing unit, termed “HighVisc”, was found to result in a homogenous bioink after a low number of mixing cycles while achieving high cell viability rates.
Quality assurance from the perspective of an industry-oriented university of applied sciences
(2015)
With the transition to bachelor's and master's programmes to harmonise the European higher education system, universities were confronted with the accreditation of their degree programmes. Beyond formally meeting the criteria, a wide variety of quality systems have since emerged that are intended to support the process of quality improvement. Unfortunately, this has been accompanied by a confusing use of various terms established in quality management practice. When it comes to quality, quality assurance or quality management, all responsible actors are called upon. Once they realise that a quality management system does not automatically mean nothing but excessive documentation, but is instead a management tool that, properly designed, has proven itself in practice for steering organisations and achieving their goals, the reluctance to introduce one will fade.
The occasion for the exhibition was the 150th anniversary of Reutlingen University. Originally founded as a weaving school, later a technical college for the textile industry and a state school of engineering, it has since developed into the present university of applied sciences with its faculty of textiles and design. The historical part of the exhibition centres on the fabric collection, which was created at the weaving school as a model for students as well as for the Württemberg textile industry. This is contrasted with contemporary functional textiles made of high-tech fibres, which are used in many ways beyond their traditional role as clothing material: in automotive and aircraft construction, in medical technology, in occupational safety, and on buildings.
Customer relationship management (CRM) is one of the most frequently adopted management tools and has received much attention in the literature. From a company-wide perspective, CRM is viewed as a complex process requiring interventions in different company areas. Previous research has already highlighted the pitfalls and failures related to a partial and incomplete view of CRM. This study advances research on CRM by investigating the impact of the relative implementation time according to which interventions are implemented in different areas (customer management, CRM technology, organizational alignment, and CRM strategy) on CRM performance. The results of the empirical study reveal that compared to other critical CRM activities, a later implementation of organizational alignment activities has a negative impact on performance. Further, our results show that CRM implementations do not equally address the areas of customer acquisition, growth, and loyalty, since this clearly depends on company objectives and also on geographical differences.
To ease the transition from school to university, students in technical subjects often need to refresh their knowledge of mathematics and physics. An online learning system for physics can support students in engaging with physics content. In addition, a physics knowledge test can reveal gaps in individual knowledge and motivate students to study the missing topics. The working group "eLearning in der Physik" of the Hochschulföderation Süd-West (HfSW), comprising the Baden-Württemberg universities of applied sciences in Aalen, Esslingen, Heilbronn, Mannheim and Reutlingen, has compiled a pool of more than 200 physics exercises for first-semester students. They are available to students, with solutions, in learning management systems for self-study, and now also in the "Zentrales Open Educational Resources Repositorium der Hochschulen in Baden-Württemberg" (ZOERR). This contribution reports on the use of the online exercises in 2020/2021, on the results of the knowledge tests, and on the eTutorials newly set up during the COVID-19 period.
This paper addresses the question of which current consumer lifestyle segmentation methods exist for particular European countries and, accordingly, for Europe as a whole. This matters for corporations seeking to place their products accurately through consumer-oriented marketing, given the constant change of values and attitudes. Based on a review of current literature, internet sources and documents, the state of the science is presented through a detailed description of the most popular lifestyle segmentation methods used in European countries. In addition, these instruments are discussed individually and then compared to each other. All instruments, the Sinus-Milieus, Euro-Socio-Styles, Roper-Consumer-Styles, RISC and Mosaic, serve the same purpose even though they differ considerably from each other. Each market research company has its own method of generating its model, as well as its own segments and definitions for them. Furthermore, every segmentation method is illustrated in a different way. This paper presents all of these instruments in detail and shows their advantages and disadvantages. Summing up the literature research on the main research question, there are several models segmenting consumers into different lifestyle groups in, e.g., Germany, France or Great Britain, but still fewer models referring to the entire European market.
Modern wide bandgap power devices promise higher power conversion performance if the device can be operated reliably. As switching speed increases, the effects of parasitic ringing become more prominent, causing potentially damaging overvoltages during device turn-off. Estimating the expected additional voltage caused by such ringing enables more reliable designs. In this paper, we present an analytical expression to calculate the expected overvoltage caused by parasitic ringing based on parasitic element values and operating point parameters. Simulations and measurements confirm that the expression can be used to find the smallest rise time of the switches' drain-source voltage for minimum overvoltage. The given expression also allows predicting the overvoltage amplitude that must be traded off when faster rise times are required.
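The paper's analytical expression is not reproduced in this abstract. As a rough illustration of the underlying physics only, the worst-case overshoot of an undamped parasitic LC loop can be estimated from the interrupted load current and the loop's characteristic impedance; the function and the numeric values below are illustrative assumptions, not taken from the paper:

```python
import math

def ringing_overvoltage(i_load, l_par, c_oss):
    """Worst-case overshoot of an ideal (undamped) LC loop: the energy
    stored in the parasitic inductance is transferred to the device
    output capacitance, giving dV = I * sqrt(L/C)."""
    return i_load * math.sqrt(l_par / c_oss)

# Illustrative values (assumed): 20 A load current,
# 50 nH loop inductance, 500 pF output capacitance.
dv = ringing_overvoltage(20.0, 50e-9, 500e-12)
print(f"estimated overshoot: {dv:.0f} V")  # → estimated overshoot: 200 V
```

In practice damping and the finite rise time reduce this bound, which is why an operating-point-aware expression such as the one the paper derives is needed.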
Semi-automated image data labelling using AprilTags as a pre-processing step for machine learning
(2019)
Data labelling is a pre-processing step to prepare data for machine learning. There are many ways to collect and prepare such data, but they usually involve considerable effort. This paper presents an approach to semi-automated image data labelling using AprilTags. The AprilTags attached to the object, each containing a unique ID, make it possible to link the object surfaces to a particular class. This approach is implemented and used to label data of a stackable box.
The data is evaluated by training a You Only Look Once (YOLO) net, with a subsequent evaluation of the detection results. These results show that the semi-automatically collected and labelled data can certainly be used for machine learning. However, if distinctive features of an object surface are covered by the AprilTag, there is a risk that the affected class will not be recognized. It can be assumed that the labelled data can be used not only for YOLO, but also for other machine learning approaches.
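The labelling pipeline itself is not detailed in the abstract; as a hypothetical sketch of its final step, detected AprilTag IDs can be mapped to class indices and written out as YOLO-format annotation lines (class index followed by the normalized box centre and size). The detection tuples and the TAG_TO_CLASS mapping below are invented for illustration:

```python
# Hypothetical sketch: convert AprilTag detections into YOLO label lines.
# A detection here is (tag_id, x_min, y_min, x_max, y_max) in pixels;
# TAG_TO_CLASS maps the tag's unique ID to an object class index.

TAG_TO_CLASS = {17: 0, 42: 1}  # assumed mapping, not from the paper

def to_yolo_lines(detections, img_w, img_h):
    lines = []
    for tag_id, x0, y0, x1, y1 in detections:
        cls = TAG_TO_CLASS[tag_id]
        xc = (x0 + x1) / 2 / img_w   # normalized box centre
        yc = (y0 + y1) / 2 / img_h
        w = (x1 - x0) / img_w        # normalized box size
        h = (y1 - y0) / img_h
        lines.append(f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    return lines

print(to_yolo_lines([(17, 100, 100, 300, 200)], 640, 480))
# → ['0 0.312500 0.312500 0.312500 0.208333']
```

One line per object, written to a `.txt` file next to the image, is the format a YOLO trainer consumes.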
Ion Mobility Spectrometry (IMS) is a widely used and 'well-known' technique of ion separation in the gaseous phase based on the differences in ion mobilities under an electric field. All IMS instruments operate with an electric field that provides spatial separation, but some IMS instruments additionally employ a drift gas flow that provides temporal separation. In this review we summarize the current IMS instrumentation. IMS techniques have received increased interest as new instrumentation has become available to be coupled with mass spectrometry (MS). For each of the eight types of IMS instruments reviewed, it is noted whether they can be hyphenated with MS and whether they are commercially available. Finally, the six most consolidated of the described devices are compared. The current review article is followed by a companion review article which details the IMS hyphenated techniques (mainly gas chromatography and mass spectrometry) and the factors that make the data from an IMS device change as a function of device parameters and sampling conditions. These reviews will provide the reader with an insightful view of the main characteristics and aspects of the IMS technique.
Ion Mobility Spectrometry (IMS) is a widely used and 'well-known' technique of ion separation in the gaseous phase based on the differences in ion mobilities under an electric field. This technique has received increased interest over the last several decades, as evidenced by the pace and advances of new IMS devices becoming available. In this review we explore the hyphenated techniques that are used with IMS, specifically mass spectrometry as an identification approach and a multi-capillary column as a pre-separation approach. We also pay special attention to the key figures of merit of the ion mobility spectrum and how data sets are treated, and to the influence of the experimental parameters on both conventional drift time IMS (DTIMS) and miniaturized IMS, also known as high-Field Asymmetric waveform IMS (FAIMS), in the planar configuration. The present review article is preceded by a companion review article which details the current instrumentation and contains the sections that configure both conventional DTIMS and FAIMS devices. These reviews will give the reader an insightful view of the main characteristics and aspects of the IMS technique.
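Both reviews build on the basic drift-time relations of IMS, which are textbook physics rather than material from the reviews themselves: the drift velocity is proportional to the field, v = K·E, and the reduced mobility K0 normalizes K to standard pressure and temperature. A short sketch:

```python
def drift_time(drift_length_m, mobility_m2_vs, field_v_m):
    """Drift time t = L / (K * E) for an ion of mobility K
    in a uniform field E over a drift length L."""
    return drift_length_m / (mobility_m2_vs * field_v_m)

def reduced_mobility(k, pressure_pa, temp_k):
    """Reduced mobility K0 = K * (P/P0) * (T0/T),
    with P0 = 101325 Pa and T0 = 273.15 K."""
    return k * (pressure_pa / 101325.0) * (273.15 / temp_k)

# Example: K = 2.0 cm^2/(V*s) = 2.0e-4 m^2/(V*s),
# 10 cm drift tube, 300 V/cm field.
t = drift_time(0.10, 2.0e-4, 30000.0)
print(f"drift time: {t*1000:.2f} ms")  # → drift time: 16.67 ms
```

The reported drift time of a DTIMS peak, corrected to K0, is what allows comparison across instruments and operating conditions.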
At DBKDA 2019, we demonstrated that StrongDBMS, with simple but rigorous optimistic algorithms, provides better performance in situations of high concurrency than major commercial database management systems (DBMS). The demonstration was convincing, but the reasons for its success were not fully analysed; a brief account of the results is given below. In this short contribution, we discuss the reasons for these results. The analysis leads to a strong criticism of all DBMS algorithms based on locking, and on this basis it is not fanciful to suggest that it is time to re-engineer existing DBMS.
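StrongDBMS's actual algorithms are not given in this summary; a generic sketch of optimistic concurrency control, the family of techniques the contribution argues for, illustrates the idea of validating read versions at commit instead of holding locks (all names here are illustrative, not StrongDBMS's API):

```python
# Generic optimistic concurrency control sketch: transactions record
# the versions they read and validate them at commit; no locks are
# held during execution.

class Store:
    def __init__(self):
        self.values, self.versions = {}, {}

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, tx, key):
        tx["reads"][key] = self.versions.get(key, 0)
        return tx["writes"].get(key, self.values.get(key))

    def write(self, tx, key, value):
        tx["writes"][key] = value

    def commit(self, tx):
        # Validate: abort if anything read has changed since it was read.
        for key, ver in tx["reads"].items():
            if self.versions.get(key, 0) != ver:
                return False
        for key, value in tx["writes"].items():
            self.values[key] = value
            self.versions[key] = self.versions.get(key, 0) + 1
        return True

db = Store()
t1, t2 = db.begin(), db.begin()
db.write(t1, "x", (db.read(t1, "x") or 0) + 1)
db.read(t2, "x")
db.commit(t1)           # succeeds and bumps the version of x
print(db.commit(t2))    # → False: t2's read of x is now stale
```

Under low contention no transaction ever waits, which is the property behind the high-concurrency results the contribution reports.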
This paper reviews suggestions for changes to database technology coming from the work of many researchers, particularly those working with evolving big data. We discuss new approaches to remote data access and standards that better provide for durability and auditability in settings including business and scientific computing. We propose ways in which the language standards could evolve, with proof-of-concept implementations on Github.
Recent work on database application development platforms has sought to include a declarative formulation of a conceptual data model in the application code, using annotations or attributes. Some recent work has used metadata to include the details of such formulations in the physical database, and this approach brings significant advantages in that the model can be enforced across a range of applications for a single database. In previous work, we have discussed the advantages for enterprise integration of typed graph data models (TGM), which can play a similar role in graphical databases, leveraging the existing support for the unified modelling language UML. Ideally, the integration of systems designed with different models, for example, graphical and relational database, should also be supported. In this work, we implement this approach, using metadata in a relational database management system (DBMS).
Data Integration of heterogeneous data sources relies either on periodically transferring large amounts of data to a physical Data Warehouse or retrieving data from the sources on request only. The latter results in the creation of what is referred to as a virtual Data Warehouse, which is preferable when the use of the latest data is paramount. However, the downside is that it adds network traffic and suffers from performance degradation when the amount of data is high. In this paper, we propose the use of a readCheck validator to ensure the timeliness of the queried data and reduced data traffic. It is further shown that the readCheck allows transactions to update data in the data sources obeying full Atomicity, Consistency, Isolation, and Durability (ACID) properties.
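The readCheck protocol is only named, not specified, in this abstract. Conceptually it can be compared to validation tokens such as HTTP ETags: the virtual warehouse caches a query result together with a version token and, on the next query, asks the source only whether the token is still current, transferring the full data only when something has changed. A hypothetical sketch under that analogy (none of these class or method names come from the paper):

```python
# Hypothetical token-based read validation (not the paper's actual
# readCheck protocol): the source keeps a version counter; a cached
# result is reused if its token still matches.

class Source:
    def __init__(self):
        self.data, self.version = {}, 0

    def write(self, key, value):
        self.data[key] = value
        self.version += 1              # any update invalidates old tokens

    def read_check(self, token):
        return token == self.version   # cheap: no data transferred

    def read_all(self):
        return dict(self.data), self.version

class VirtualWarehouse:
    def __init__(self, source):
        self.source, self.cache, self.token = source, None, None

    def query(self):
        if self.cache is not None and self.source.read_check(self.token):
            return self.cache          # validated as timely, no transfer
        self.cache, self.token = self.source.read_all()
        return self.cache

src = Source()
src.write("a", 1)
vw = VirtualWarehouse(src)
print(vw.query())   # full transfer on first query
print(vw.query())   # served from cache after a cheap validation round-trip
src.write("a", 2)
print(vw.query())   # token stale, so a fresh transfer occurs
```

The validation round-trip replaces the bulk transfer whenever the data is unchanged, which is how such a scheme reduces traffic while keeping results timely.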
Traditional communication of research on climate change fails to encourage individual, corporate, and political leaders to take appropriate action. We argue that this problem is based on an overly simplistic unidirectional model of science communication. Conversely, theory shows that active learning processes are better suited to initiate and mobilize engagement among all stakeholders. Here, we integrate theoretical insights on active learning with empirical evidence from serious gaming: communication should be understood as an integral design feature that relates active learning on climate change to tangible action.
A TLP system with a very low characteristic impedance of 1.5 Ω and a selectable pulse length from 0.5 to 6 μs is presented. It covers the entire operation region of many power semiconductors up to 700 V and 400 A. Its applicability is demonstrated by determining the output characteristics for two CoolMOS devices up to destruction.
Modern power DMOS transistors greatly benefit from the continuous advances of the technology, which yield devices with very low area-specific RDS,on figures of merit and therefore allow for significantly reduced active areas. However, in many applications, where the devices must dissipate high amounts of energy and thus are subjected to significant self-heating, the active area is not dictated by RDS,on requirements, but by the energy constraints. In this paper, a simple method of improving the energy capability and reliability of power DMOS transistors operating in pulsed conditions is proposed and experimentally verified. The method consists in redistributing the power density from the hotter to the cooler device regions, hence achieving a more homogeneous temperature distribution and a reduced peak temperature. To demonstrate the principle, a simple gate offset circuit is used to redistribute the current density to the cooler DMOS parts. No technology changes are needed for the implementation, only minor changes to the driver circuit are necessary, with a minimal impact on the additional required active area. Improvements in the energy capability from 9.2% up to 39% have been measured. Furthermore, measurements have shown that the method remains effective also if the operating conditions change significantly. The simplicity and the effectiveness of the implementation makes the proposed method suitable to be used in a wide range of applications.
The present invention relates to a circuit arrangement with a bootstrap circuit comprising at least one main capacitance, the first side of which is connected to a first branch of the circuit arrangement and the second side of which is connected to a second branch of the circuit arrangement at a variable potential. The proposed circuit arrangement is characterized in that, in parallel with the main capacitance, the bootstrap circuit has at least one further capacitance that can be charged via a second supply voltage to a higher voltage than the main capacitance and can be switched in via at least one switching element to support the main capacitance. Depending on the dimensioning of the bootstrap capacitances, the proposed circuit arrangement achieves either a much smaller area with a higher or unchanged voltage drop, or a less pronounced area reduction with a smaller voltage drop, compared to a conventional bootstrap circuit.
The invention relates to an energy transmitter (100) for the inductive transfer of energy from a primary circuit (10) of the energy transmitter (100) to a first (5) and a second (15) voltage domain of a secondary circuit (20) of the energy transmitter (100), and for the transfer of information from the secondary circuit (20) to the primary circuit (10). The energy transmitter (100) comprises: a transformer (30), via which the primary circuit (10) and the secondary circuit (20) are inductively coupled and via which both the energy transfer and the information transfer take place; and an amplitude modulation module (50) for modulating the current and/or voltage amplitude in the secondary circuit (20) by means of an amplitude modulation switch (55), wherein the amplitude modulation switch (55) is arranged between the first (5) and second (15) voltage domains of the secondary circuit (20) and is designed to change the current and/or voltage amplitude in the primary circuit (10) by opening and closing the amplitude modulation switch (55), thereby transferring information from the secondary circuit (20) to the primary circuit (10). The present invention further relates to a gate driver for switching a power switch (500) and to a method for the inductive transfer of energy with combined information transfer.
Introduction to the special issue on self‑managing and hardware‑optimized database systems 2022
(2023)
Data management systems have evolved in terms of functionality, performance characteristics, complexity, and variety during the last 40 years. Particularly, the relational database management systems and the big data systems (e.g., Key-Value stores, Document stores, Graph stores and Graph Computation Systems, Spark, MapReduce/Hadoop, or Data Stream Processing Systems) have evolved with novel additions and extensions. However, the systems administration and tasks have become highly complex and expensive, especially given the simultaneous and rapid hardware evolution in processors, memory, storage, or networking. These developments present new open problems and challenges to data management systems as well as new opportunities.
The SMDB (International Workshop on Self-Managing Database Systems) and HardBD&Active (Joint International Workshop on Big Data Management on Emerging Hardware and Data Management on Virtualized Active Systems) workshops organized in conjunction with the IEEE ICDE (International Conference on Data Engineering) offered two distinct platforms for examining the above system-related challenges from different perspectives. The SMDB workshop looks into developing autonomic or self-* features in database and data management systems to tackle complex administrative tasks, while the HardBD&Active workshop focuses on harnessing hardware technologies to enhance efficiency and performance of data processing and management tasks. As a result of these workshops, we are delighted to present the third special issue of DAPD titled “Self-Managing and Hardware-Optimized Database Systems 2022,” which showcases the best contributions from the SMDB 2021/2022 and HardBD&Active 2021/2022 workshops.
Globalization has increased the number of road trips and vehicles. The result has been an intensification of traffic accidents, which are becoming one of the most important causes of death worldwide. Traffic accidents are often due to human error, the probability of which increases when the cognitive ability of the driver decreases. Cognitive capacity is closely related to the driver’s mental state, as well as other external factors such as the CO2 concentration inside the vehicle. The objective of this work is to analyze how these elements affect driving. We have conducted an experiment with 50 drivers who have driven for 25 min using a driving simulator. These drivers completed a survey at the start and end of the experiment to obtain information about their mental state. In addition, during the test, their stress level was monitored using biometric sensors and the state of the environment (temperature, humidity and CO2 level) was recorded. The results of the experiment show that the initial level of stress and tiredness of the driver can have a strong impact on stress, driving behavior and fatigue produced by the driving test. Other elements such as sadness and the conditions of the interior of the vehicle also cause impaired driving and affect compliance with traffic regulations.
Development of a non-yellowing, fibre-based bra using innovative FIM technology
(2017)
Personalized remote healthcare monitoring is in continuous development due to technology improvements in sensors and wearable electronic systems. A review of the state of the art of research on wearable sensors for healthcare applications is presented in this work, together with a survey of wearable devices, chest and wrist bands, and smartwatches available on the market for health and sport monitoring. Many activity trackers are commercially available; their prices are continuously falling and their performance is improving, but commercial devices do not provide raw data and are therefore of limited use for research purposes.
Ballistocardiography is a technique that measures the heart rate from the mechanical vibrations of the body caused by the movement of the heart. In this work, a novel non-invasive device placed under the mattress of a bed estimates the heart rate using ballistocardiography. Different algorithms for heart rate estimation have been developed.
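The developed algorithms are not described here; a generic sketch of one common approach to heart rate estimation, peak detection with a refractory period, is shown below on a synthetic signal (pure illustration, not the paper's method):

```python
import math

# Generic sketch (not the paper's algorithm): estimate heart rate from
# a BCG signal by detecting local maxima above a threshold, enforcing
# a refractory period so one heartbeat is not counted twice.

def detect_beats(signal, fs, threshold, refractory_s=0.4):
    min_gap = int(refractory_s * fs)
    beats, last = [], -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and i - last >= min_gap):
            beats.append(i)
            last = i
    return beats

def heart_rate_bpm(beats, fs):
    if len(beats) < 2:
        return None
    intervals = [(b - a) / fs for a, b in zip(beats, beats[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic test signal: one sharp peak per second at fs = 100 Hz.
fs = 100
sig = [math.sin(2 * math.pi * t / fs) ** 31 for t in range(5 * fs)]
beats = detect_beats(sig, fs, threshold=0.5)
print(heart_rate_bpm(beats, fs))  # → 60.0 (beats per minute)
```

Real BCG signals additionally need band-pass filtering and artifact rejection before such a detector becomes usable.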
The release of ChatGPT-3 in November 2022 and ChatGPT-4 in March 2023 promises to automate reasoning tasks previously reserved for humans in numerous fields, from medicine to law. The present study puts this promise to the test by feeding 200 cases from the field of business law into the currently most capable chatbots for solution. A nuanced picture emerges: while it becomes apparent that the human expert is still superior, chatbots can nevertheless achieve surprisingly good results when solving simple cases of low complexity.
The textbook "Internationales Wirtschaftsprivatrecht" continues the series of textbooks on national commercial private law existing in Germany. Its approach is to present legal solutions to problems identified by business-related keywords, specifically from the perspective of business administration students and decision-makers in medium-sized companies. In its basic orientation, this work therefore differs considerably from the typical textbooks on commercial private law available on the market. The textbook "Internationales Wirtschaftsprivatrecht" is particularly suitable for Bachelor's and Master's programmes with compulsory law lectures, especially in international law. This applies in particular to business administration programmes and related degree programmes in Germany.
Sales and contract law are often the focus of law lectures in business administration programmes, where not only national but increasingly also cross-border transactions play a role. This book compares national and international rules of sales and contract law. In addition to the UN sales law (CISG), general legal questions are treated comparatively, such as the relationship between contractual and non-contractual claims and remedies, the incorporation and validity of general terms and conditions, contractual limitations of liability and contractual penalties, and differences in the general law of damages.
The author makes clear that rules which seem self-evident in one's own legal system can differ considerably from those in other legal systems, or may even be unknown there. Students should understand why these differences exist. This will later enable them, in contract negotiations, to better understand proposals made by foreign partners and to respond to them appropriately.
For an entrepreneur it is important to know within which periods he, as a buyer, must assert his rights in the event of material defects. If the entrepreneur is the seller, only after the limitation period has expired can he finally sit back and be certain that no further warranty claims can be asserted against him. In domestic transactions, sellers and buyers have by now become accustomed to the standard two-year period of the German Civil Code (BGB). Which periods apply in cross-border business, however, is often unclear, because national limitation periods frequently differ: in Europe alone, limitation periods for warranty claims under sales law range from six months to six years.
A practical problem in cross-border business is often to determine within which periods buyers must assert their rights in the event of material defects. Limitation periods and notice periods can exclude the buyer's rights in whole or in part, or render them unenforceable. National rules differ widely here: on the one hand, limitation periods for warranty claims range from six months to six years; on the other hand, some legal systems require the buyer to give notice of defective goods or reject them immediately, while others do not. The UN sales law (CISG) solves these practical problems only in part, as the following article shows.
Practical difficulties in drafting purchasing conditions often arise because buyers have different interests. Under "just-in-time" contracts, buyers do not want to lose time examining the goods after delivery; rather, they want to feed the goods directly into the production process. The situation is different for technically complex goods: here buyers fear having to prove possible material defects at the passing of risk to the seller before the examination is complete. If the BGB/HGB applies, purchasing conditions can serve these interests only to a limited extent, since waiving the duty to examine the goods and give notice of defects, and shifting the burden of proof in the buyer's favour, are then not possible. The UN sales law (CISG) can help here: under it, buyers have greater freedom when drafting purchasing conditions.
Parties often argue about whether buyers may withdraw from the contract in the event of defects in the purchased goods. The seller usually wants to prevent this: if prices fall, the buyer could, after withdrawal, obtain the goods more cheaply on the market, and the goods can then no longer be sold at the original price. If prices rise, a claim for damages additionally looms, since the buyer must obtain the goods from another seller at a higher price. Additional costs, such as transport or storage costs, may also arise for the seller. Conditions of sale therefore make it particularly difficult for the buyer to withdraw from the contract, whereas purchasing conditions impose only minor requirements on withdrawal.
Anyone drafting purchasing and sales conditions for cross-border business must know where the legislator has set limits to freedom of contract. If the BGB/HGB applies, German judicial review of standard terms leaves limited room to deviate from the statutory rules on the buyer's right of withdrawal. It is still largely unresolved which contract avoidance clauses in standard terms are effective when the UN sales law applies: by virtue of Art. 4 sentence 2 lit. a) CISG, the review of such clauses may also be governed by § 307 (1) BGB, but the review must then take account of the values underlying the CISG, and not the BGB, as reflected in its provisions. This can create room for manoeuvre when drafting standard terms.
Contractual penalties and liquidated damages for late delivery in CISG purchasing conditions
(2023)
The buyer wants to regulate the consequences of late delivery in his purchasing conditions. He wants to ensure that the seller must fully compensate the disadvantages caused by the delay. German law on standard terms sets legal limits for the buyer here. These limits generally also apply when the CISG is applicable and the choice of law or the conflict-of-laws rules of the forum state refer to German law. The effectiveness of the clauses then depends on the values underlying the CISG. This has advantages when drafting purchasing conditions: the CISG creates room for manoeuvre that is not available when only the BGB applies.
Purchasing conditions sometimes deviate from the statutory rules on the consequences of late delivery: clauses providing for fixed sums are common here. With them the buyer wants to ensure that he receives at least this sum from the seller in the event of late delivery. But that is not the buyer's only goal: he also wants the seller to compensate him for losses exceeding this fixed sum, and he does not want to burden himself and his company with additional formalities, such as an obligation to reserve further rights when accepting late deliveries. Among the fixed sums the buyer can choose, a distinction must be drawn between liquidated damages and contractual penalties: the contractual penalty is intended, as a means of pressure, to ensure timely delivery to the buyer, while liquidated damages are intended to help the buyer enforce his loss more easily in court. Where the CISG applies, agreeing on a contractual penalty or liquidated damages is permissible, and this can also be done through purchasing conditions.
In a world with rapidly changing customer requirements and an increased role of technology, companies need more flexible systems to adapt their processes and react dynamically to change. Adaptive Case Management (ACM) addresses this need by providing a concept for adapting to changing business conditions. Within our research project we performed a first foundational evaluation of the potential of ACM to support unpredictable sales processes. Based on a set of criteria, we tested the concept of ACM with the open-source tool Cognoscenti. The evaluation gave us the possibility to experience the concept of ACM in practice. Hence we were able to make a statement about the potential of ACM in the context of an unpredictable sales process, setting the path for further research and discussion of ACM in the area of sales processes.
This paper presents the preliminary results of a set of research projects being developed at the distributed resources laboratory at the University of Reutlingen. The main aim of these projects is to couple distributed ledger technologies (DLTs) with the distributed control of microgrids. Firstly, a DLT-based solution for a local market platform has been developed. This enables end customers to participate in new local micro-energy-markets by providing them with a distributed, decentralized, transparent and secure peer-to-peer (P2P) payment system. Secondly, this solution has been integrated with an autonomous (agent-based) grid management. The integrated solution combining the market platform and the agent-based control has been implemented and tested in a real microgrid with different distributed components such as a PV system, a CHP and different kinds of controllable loads. This microgrid is located in the distributed energy resources laboratory at the University of Reutlingen. Thirdly, the resulting solution is being implemented as an easy-to-customize market solution by AC2SG Software Oy, a Finland-based software company developing solutions for the Indian market. In a next phase, the solution is going to be tested in a real environment in off-grid systems in India.
This paper presents a solution that enables end customers of the energy system to participate in new local micro-energy-markets by providing them with a distributed, decentralized, transparent and secure peer-to-peer (P2P) payment system, which operates automatically by applying new concepts of Machine-to-Machine (M2M) communication technologies. This work was performed within the German project VK_2G, funded by the DBU. The key results were: providing means to perform microtransactions in a P2P fashion between end consumers and prosumers in local communities at low cost in a transparent and secure manner; developing a platform with pre-defined smart contracts that can easily be tailored to different end customers' needs; and integrating the market platform with the local control of generation and loads. This solution has been developed, integrated and tested in a laboratory prototype. This paper discusses the solution and presents the results of the first tests.
Distributed Ledger Technologies for the energy sector: facilitating interoperability analysis
(2023)
The use of distributed data storage and management structures, such as Distributed Ledger Technologies (DLT), in the energy sector has gained great interest in recent times. This opens up new possibilities in, e.g., microgrid management, aggregation of distributed resources, peer-to-peer trading, integration of electromobility or proof-of-origin strategies. However, in order to benefit from these new possibilities, new challenges have to be overcome. This work focuses on one of these challenges: the need to ensure interoperability when integrating DLT-enabled devices in energy use cases. Firstly, the use of DLTs in the energy sector is analyzed and the main use cases are presented, and a classification of DLT-energy use cases is proposed. Secondly, the need for a common reference architecture framework to analyze those use cases with a focus on interoperability is discussed, and the current research and standardization activities in this field are presented. Finally, a new common reference architecture framework based on current standardization activities is presented.
The presented research estimates the correlation between the share of renewable energy sources and the costs of congestion management in the electric networks of selected European countries. Data from six countries in the North-West European area (Italy, Spain, Germany, France, Poland and Austria) were investigated. The factors considered included grid congestion costs (re-dispatching as well as countertrading costs), gross electricity generation, installed capacity of electric generating facilities, installed capacity of non-dispatchable renewable energy sources, and total electricity consumption. Special attention is paid to the share of renewable energy sources. It is found that the grid congestion costs are not clearly affected by the penetration of non-dispatchable renewables in all the analysed countries, and therefore a clear mathematical correlation cannot be extrapolated from the available data. Overall, the results of this research show a loose dependency of the grid congestion costs on the penetration of renewables and a strong dependency on the total electricity consumption of the country.
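The statistical method is not spelled out in the abstract; correlation analyses of this kind typically start from the Pearson coefficient between the two yearly series, as in the following sketch (the numeric series are invented for illustration, not data from the study):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (invented) yearly series for one country:
renewables_share = [0.18, 0.22, 0.25, 0.29, 0.33]  # share of generation
congestion_cost = [310, 280, 450, 390, 520]        # million EUR
print(f"r = {pearson(renewables_share, congestion_cost):.2f}")  # → r = 0.83
```

A loose dependency, as the study reports, would correspond to a coefficient of small magnitude across countries rather than a consistently high one.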
In this work, a web-based software architecture and framework for the management and diagnosis of large amounts of medical data in an ophthalmologic reading center is proposed. Data management for multi-center studies requires merging of standing data and repeatedly gathered clinical evidence such as vital signs and raw data. If ophthalmologic questions are involved, the data acquisition is often performed by non-medical staff at the point of care or at a study center, whereas the medical finding is mostly provided by an ophthalmologist in a specialized reading center. The study data, such as participants, cohorts and measured values, are administrated at a single data center for the entire study. Since a specialized reading center maintains several studies, the medical staff must learn the different data administration for each data center. With respect to the increasing number and size of clinical studies, two aspects must be considered. First, an efficient software framework is required to support the data management, processing and diagnosis by medical experts at the reading center. Second, this software needs a standardized user interface that does not have to be trained, tailored or adapted for each new study. Furthermore, different aspects of quality and security controls have to be included. Therefore, the objective of this work is to establish a multi-purpose ophthalmologic reading center, which can be connected to different data centers via configurable data interfaces in order to treat various topics simultaneously.
Clinical reading centers provide expertise for consistent, centralized analysis of medical data gathered in a distributed context. Accordingly, appropriate software solutions are required for the involved communication and data management processes. In this work, an analysis of general requirements and essential architectural and software design considerations for reading center information systems is provided. The identified patterns have been applied to the implementation of the reading center platform which is currently operated at the Center of Ophthalmology of the University Hospital of Tübingen.
The role and standing of information technology have undergone continuous change over the last 60 years. What began as a purely supporting function has increasingly become an integral part of the structural and process organization of companies. A well-defined IT service management function now sees itself on an equal footing with the other departments, acts as a service provider and treats the departments as its "customers". New technologies and innovations, and the resulting redefinition of existing requirements, are expected to yield positive effects for companies in the course of digitalization. The IT Infrastructure Library (ITIL) is used as a framework for IT service management in industry and the public sector. The ITIL approach has supported this cultural change and sensitized management and employees to thinking in a service-oriented way. However, since this approach follows a predefined, cyclical process, rapidly arriving customer requirements may not be implemented on time, which is why agile methods such as the DevOps approach are coming to the fore. The challenge is to foster the cultural change required when introducing DevOps into existing ITIL structures in companies.
Development of an IoT-based inventory management solution and training module using smart bins
(2023)
Flexibility, transparency and changeability of warehouse environments play an increasingly important role in achieving cost-efficient production of small batch sizes. This results in increasing requirements for warehouses in terms of flexibility, scalability, reconfigurability and transparency of material and information flows, in order to deal with a large number of different components and with variable material and information flows due to small batch sizes. Therefore, an IoT-based inventory management solution and training module has been developed, implemented and validated at Werk150 – the factory on campus of the ESB Business School. Key elements of the developed solution are smart bins that use weight mats to track each bin's content, plus additional sensors and buttons connected to an IoT hub to collect data on material consumption and manual handling operations. The use of weight mats for the smart bins makes it possible to measure the container content independently of the specific component geometry, and thus for a variety of components, based on the specific component weights. The developed solution enables a focus on the key success factors of the system, synchronizing the flows of materials and information and resulting in increased flexibility and significantly higher transparency of the material flow. AI-based algorithms are applied to analyse the gathered data and to initiate process optimizations by providing logistics decision makers with a profound and transparent basis for decision making. In order to equip students and industry visitors of the learning factory with the necessary competences and to support the transfer into practice, a training module on IoT-based inventory management was developed and implemented.
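The core inference behind such weight-mat smart bins is simple enough to sketch. The following is a minimal illustration, not the Werk150 implementation; the function names, gram units and the reorder-point logic are assumptions made for the example:

```python
def estimate_stock(measured_weight_g, tare_weight_g, unit_weight_g):
    """Infer the number of parts in a bin from a weight-mat reading.

    Works for any component geometry, since only the per-part weight
    matters (the key idea behind weight-based smart bins).
    """
    if unit_weight_g <= 0:
        raise ValueError("unit weight must be positive")
    net = measured_weight_g - tare_weight_g
    return max(0, round(net / unit_weight_g))

def needs_replenishment(count, reorder_point):
    """Trigger a replenishment order once stock falls to the reorder point."""
    return count <= reorder_point

# Example: a bin weighing 2340 g, tare 500 g, parts of 46 g each
count = estimate_stock(2340, 500, 46)
print(count, needs_replenishment(count, 10))
```

The rounding step absorbs small sensor noise; in practice a tolerance band around each integer count would be added before raising alerts.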
Perceptual integration of kinematic components in the recognition of emotional facial expressions
(2018)
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, massively reducing the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality, with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment demonstrating that expressions simulated with only two primitives are indistinguishable from natural ones.
In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low dimensional parametrization of the associated facial expression.
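The dimensionality-reduction step described above can be illustrated with a small sketch. The code below is not the authors' method (a plain PCA stand-in is used here rather than their model and Bayesian comparison); it merely shows, on synthetic data generated from two underlying primitives, how few components can suffice to explain such motion data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for facial motion data: 11 "expressions", each a
# time course over 50 frames of 30 marker coordinates, generated from
# two underlying temporal primitives.
T, D, K = 50, 30, 2
t = np.linspace(0, 1, T)
primitives_true = np.stack([np.sin(2 * np.pi * t), 1 - np.exp(-5 * t)])  # (2, T)
X = []
for _ in range(11):
    weights = rng.normal(size=(D, K))      # per-expression mixing weights
    X.append(weights @ primitives_true)    # (D, T)
X = np.concatenate(X, axis=0)              # (11*D, T)

# Learn primitives as leading right-singular vectors (PCA over time)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()
print(f"variance explained by 2 components: {explained[:2].sum():.3f}")
```

Because the synthetic data really are rank-two, the first two components capture essentially all variance; the paper's point is that real dynamic expressions behave similarly.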
IOS 2.0 : new aspects on inter-organizational integration through enterprise 2.0 technologies
(2015)
This special theme of "Electronic Markets" focuses on research concerned with the use of social technologies and "2.0" principles in the interaction between organizations (i.e., with "inter-organizational systems (IOS) 2.0"). This theme falls within the larger space of Enterprise 2.0 research, but focuses in particular on inter-organizational use (between enterprises), not intra-organizational use (within a single enterprise). While there is great interest in practice regarding the use of 2.0 technologies to support inter-organizational communication, collaboration and interaction, information systems (IS) research has largely been oblivious to this important use of social technologies.
In today’s education, healthcare, and manufacturing sectors, organizations and information societies are discussing new enhancements to corporate structures and process efficiency through digital platforms. These enhancements can be achieved using digital tools. Industry 5.0 and Society 5.0 offer several opportunities for businesses to enhance the adaptability and efficacy of their industrial processes, paving the way for new business models facilitated by digital platforms. Society 5.0 can contribute to a super-intelligent society that includes the healthcare industry. In the past decade, the Internet of Things, Big Data Analytics, Neural Networks, Deep Learning, and Artificial Intelligence (AI) have revolutionized our approach to various job sectors, from manufacturing and finance to consumer products. AI is developing quickly and efficiently, as illustrated by ChatGPT, the artificial intelligence chatbot created by OpenAI that has taken the internet by storm. We tested the effectiveness of this large language model on four critical questions concerning “Society 5.0”, “Healthcare 5.0”, “Industry”, and “Future Education” from the perspective of Age 5.0.
Silicon neurons (SiNs) represent biological detail with different levels of accuracy as a trade-off between complexity and power consumption. With respect to this trade-off, and given their high similarity to neuron behaviour models, relaxation-type oscillator circuits often offer a good compromise for emulating neurons. In this chapter, two exemplary relaxation-type silicon neurons are presented that emulate neural behaviour with an energy consumption below the scale of nJ/spike. The first, a fully CMOS relaxation SiN, is based on the mathematical Izhikevich model and can mimic a broad range of physiologically observable spike patterns. Results for various biologically plausible output patterns and for the coupling of two SiNs are presented in a 0.35 μm CMOS technology. The second is a novel ultra-low-frequency hybrid CMOS-memristive SiN based on relaxation oscillators and analog memristive devices. The hybrid SiN directly emulates neuron behaviour in the range of physiological spiking frequencies (less than 100 Hz). The relaxation oscillator is implemented and fabricated in a 0.13 μm CMOS technology. An autonomous neuronal synchronization process is demonstrated in measurements with two relaxation oscillators coupled by an analog memristive device, emulating the synchronous behaviour of spiking neurons.
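As a point of reference for the first circuit, the underlying Izhikevich model is compact enough to simulate directly. The sketch below uses the model's standard published equations with regular-spiking parameters; the drive current and time step are illustrative choices, not values from the chapter:

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T_ms=1000, dt=0.25):
    """Simulate one Izhikevich neuron (regular-spiking parameters by default).

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    du/dt = a (b v - u); spike when v >= 30 mV, then v <- c, u <- u + d
    Returns the spike times in milliseconds (forward-Euler integration).
    """
    v, u = c, b * c
    spikes = []
    for k in range(int(T_ms / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike detected: reset membrane state
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich()
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

With a constant suprathreshold current the regular-spiking parameter set produces tonic firing, which is exactly the kind of pattern the CMOS SiN reproduces in hardware.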
Automatic anode rod inspection in aluminum smelters using deep-learning techniques: a case study
(2020)
Automatic fault detection using machine learning has become an exciting and promising area of research, because it provides an accurate and timely way to manage and classify faults with minimal human effort. In the computer vision community, deep-learning methods have become the most suitable approaches for this task. Anodes are large carbon blocks that are used to conduct electricity during the aluminum reduction process. The most basic function of anode rod inspection is to prevent a situation where the anode rod will not fit into the stub-holes of a new anode, as would be the case for a rod with severe toe-in, missing stubs, or a thimble retained on one or more stubs. In this work, to improve the accuracy of shape defect inspection for anode rods, we use the Fast Region-based Convolutional Network (Fast R-CNN) model. To train the detection model, we collected an image dataset comprising multiple classes of anode rod defects with annotated labels. Our model is trained using a small number of samples, an essential requirement in industry, where the number of available defective samples is limited. It can simultaneously detect multiple classes of anode rod defects in near real time.
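Region-based detectors like Fast R-CNN rely on standard box post-processing, intersection-over-union (IoU) and non-maximum suppression (NMS), to turn overlapping region proposals into distinct defect detections. The following self-contained sketch shows these two building blocks; it is a generic illustration, not code from the case study:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # the two overlapping boxes collapse to one
```

In a multi-class setting like the anode rod inspection, NMS is typically applied per defect class so detections of different defect types do not suppress each other.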
Rotating machinery occupies a predominant place in many industrial applications. However, rotating machines often suffer from severe vibration problems. The measurement of these machines’ vibration signals is of particular importance, since it plays a crucial role in predictive maintenance. When vibrations are too high, they often cause fatigue failure; they announce an unexpected stop or breakage and, consequently, a significant loss of productivity or a threat to personnel safety. Therefore, identifying faults at an early stage will significantly enhance machine health and considerably reduce maintenance costs. Although considerable efforts have been made to master the field of machine diagnostics, the usual signal processing methods still present several drawbacks. This paper examines rotating machinery condition monitoring in the time and frequency domains. It also provides a framework for the diagnosis process based on machine learning, analyzing the vibratory signals.
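Frequency-domain condition monitoring typically boils down to locating characteristic fault frequencies in the vibration spectrum. A minimal illustration on a synthetic signal, with assumed rotation (30 Hz) and fault (157 Hz) frequencies chosen only for the example:

```python
import numpy as np

fs = 2000                          # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)

# Simulated vibration signal: shaft rotation at 30 Hz plus a weaker
# fault component at 157 Hz (e.g. a bearing defect frequency) and noise.
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * 30 * t)
          + 0.4 * np.sin(2 * np.pi * 157 * t)
          + 0.1 * rng.normal(size=t.size))

# Frequency-domain analysis: locate the two dominant spectral peaks
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))      # recovers the 30 Hz and 157 Hz components
```

Feature vectors built from such spectral peaks (plus time-domain statistics like RMS and kurtosis) are a common input to the machine-learning diagnosis stage the paper describes.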
Theory and practice of implementing a successful enterprise IoT strategy in the industry 4.0 era
(2021)
Since the arrival of the internet and affordable access to technologies, digital technologies have occupied a growing place in industries, propelling us towards a fourth industrial revolution: Industry 4.0. In today’s era of digital upheaval, enterprises are increasingly undergoing transformations that lead to their digitalization. The traditional manufacturing industry is in the throes of a digital transformation that is accelerated by exponentially growing technologies (e.g., intelligent robots, the Internet of Things, sensors, 3D printing). Around the world, enterprises are in a frantic race to implement IoT-based solutions to improve their productivity and innovation, reduce costs, and strengthen their position on international markets. Considering the immense transformative potential that the IoT and big data bring to the industrial sector, adopting the IoT across industrial systems is essential to remain competitive and thus transform the factory into a smart factory. This paper presents a description of the innovation and digitalization process, following the Industry 4.0 paradigm, for implementing a successful enterprise IoT strategy.
An autonomous vehicle is a robotic vehicle with decision and action capabilities, able to perform assigned tasks with minimal or no human intervention. Autonomous cars have been in development for many years. In 2014, the Society of Automotive Engineers (SAE International) published a classification of driving automation into levels 0 through 5, with level 0 corresponding to completely manual driving and level 5 to the ideal of a vehicle able to navigate entirely autonomously for all missions and in all environments. This work addresses the navigation of an autonomous vehicle in general. We focus on one of the most complex scenarios of the road network: the crossing of road intersections. In this paper, the critical features of autonomous intelligent vehicles are reviewed. Furthermore, the associated problems are presented, and the most advanced solutions are derived. This article aims to allow a novice in this field to understand the different facets of the localization and perception problems for autonomous vehicles.
Power line communications (PLC) reuse the existing power-grid infrastructure for the transmission of data signals. As power line communication technology does not require a dedicated network setup, it can be used to connect a multitude of sensors and Internet of Things (IoT) devices. These IoT devices can be deployed in homes, streets, or industrial environments for sensing and control applications. The key challenge faced by future IoT-oriented narrowband PLC networks is to provide a high quality of service (QoS). The power line channel has traditionally been considered too hostile; combined with the fact that spectrum is a scarce resource and that interference from other users is present, this requirement calls for means to radically increase spectral efficiency and to improve link reliability. However, the research activities carried out in the last decade have shown that PLC is a suitable technology for a large number of applications. Motivated by the relevant impact of PLC on the IoT, this paper proposes a cooperative spectrum allocation scheme for IoT-oriented narrowband PLC networks using an iterative water-filling algorithm.
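The per-user building block of iterative water-filling is the classic water-filling power allocation: each user pours its power budget over the subchannels, treating the other users' interference as part of the noise, and the users iterate until the allocations converge. A sketch of that single-user step, with illustrative noise levels and budget, not values from the paper:

```python
import numpy as np

def water_filling(noise, power_budget, tol=1e-9):
    """Allocate power across subchannels: p_k = max(0, mu - n_k),
    where the water level mu is found by bisection so that
    sum(p_k) equals the power budget."""
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + power_budget
    while hi - lo > tol:
        mu = (lo + hi) / 2
        if np.maximum(mu - noise, 0).sum() > power_budget:
            hi = mu          # water level too high: allocated too much
        else:
            lo = mu          # water level too low: budget not exhausted
    return np.maximum((lo + hi) / 2 - noise, 0)

# Three subchannels with different effective noise levels, unit power budget
p = water_filling([0.1, 0.5, 1.2], power_budget=1.0)
print(p.round(3), round(float(p.sum()), 6))
```

Note how the noisiest subchannel receives no power at all; in the iterative (multi-user) version this step is repeated per user until a stable allocation is reached.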
The digitization of factories will be a significant issue of the 2020s. New scenarios are emerging to increase the efficiency of production lines inside the factory, based on a new generation of collaborative robot functions. Manufacturers are moving towards data-driven ecosystems by leveraging product lifecycle data from connected goods. Energy-efficient communication schemes, as well as scalable data analytics, will support these various data collection scenarios. With augmented reality, new remote services are emerging that facilitate the efficient sharing of knowledge in the factory. Future communication solutions should ensure transparent, real-time, and secure connectivity between the various production sites spread worldwide and the new players in the value chain (e.g., suppliers, logistics). Industry 4.0 brings more intelligence and flexibility to production, resulting in more lightweight equipment and thus better ergonomics. 5G will guarantee real-time transmissions with latencies of less than 1 ms. This will provide manufacturers with new possibilities to collect data and trigger actions automatically.
Autonomous navigation is one of the main areas of research on mobile robots and intelligent connected vehicles. In this context, we are interested in presenting a general view of robotics, the progress of research, and advanced methods related to this field for improving the localization of autonomous robots. We seek to evaluate algorithms and techniques that give robots the ability to move safely and autonomously in a complex and dynamic environment. Under these constraints, we focused the paper on a specific problem: evaluating a simple, fast and lightweight SLAM algorithm that can minimize localization errors. We present and validate a FastSLAM 2.0 system combining scan matching and loop closure detection. To allow the robot to perceive the environment and detect objects, we studied one of the best-performing deep learning techniques based on convolutional neural networks (CNNs). We validate our object detection using the YOLOv3 algorithm.
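The pose-estimation core that FastSLAM builds on is a particle filter. The sketch below runs one-dimensional predict/weight/resample cycles against a single known landmark; it omits FastSLAM's per-landmark EKFs and scan matching, and all numbers are illustrative:

```python
import math
import random

random.seed(0)

def pf_step(particles, motion, measurement, landmark, sigma=0.5):
    """One predict/weight/resample cycle of a 1D particle filter, the
    pose-estimation core that FastSLAM extends (per-landmark EKFs and
    scan matching are omitted in this sketch)."""
    # Predict: apply the commanded motion with noise
    moved = [p + motion + random.gauss(0, 0.1) for p in particles]
    # Weight: Gaussian likelihood of the observed range to the landmark
    weights = [math.exp(-((landmark - p) - measurement) ** 2 / (2 * sigma ** 2))
               for p in moved]
    # Resample proportionally to weight
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0, 10) for _ in range(500)]
true_pose, landmark = 2.0, 9.0
for _ in range(10):
    true_pose += 0.5                                  # robot moves 0.5 per step
    particles = pf_step(particles, 0.5, landmark - true_pose, landmark)
estimate = sum(particles) / len(particles)
print(round(estimate, 2), "vs true pose", true_pose)
```

After a few update cycles the particle cloud collapses around the true pose; FastSLAM 2.0 additionally conditions the proposal distribution on the current measurement, which is what makes it efficient with few particles.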
This publication is a Technical report by the Joint Research Centre (JRC), the European Commission’s science and knowledge service. It aims to provide evidence-based scientific support to the European policymaking process.
This report provides a bird’s-eye view of the concept and implications of digital identities. After an introduction situating the concept of identity, the report clarifies its contemporary meaning and proposes a reference definition. Subsequently, the authors examine the consequences of translating the concept of identity into the digital, internet-connected world. They then analyse the particularities and consequences of this translation, which allows them to situate and define the concept of digital identities.
Finally, they conclude with the challenges that digital identity poses to digital citizens in their attempt to manage and protect their attributes with the advent of the Internet of Things and blockchain technology.
This article deals with the economic impact on companies in the German automotive industry resulting from the governmental restrictions imposed during the coronavirus pandemic. The period under investigation is the year 2020, analysed at the quarterly level. Our evaluation shows that suppliers were hit considerably harder by the pandemic than the industry’s manufacturers. A temporal wave of negative development along the value chain could also be observed. The article presents instruments of supply chain finance that offer both short-term relief in times of crisis and long-term opportunities for working capital optimization.
In the automotive sector, suppliers were hit considerably harder by the Covid-19 restrictions than vehicle manufacturers. The development of working capital in the first year of the pandemic proved particularly critical. The article provides an overview of possible solutions for a mutually more advantageous, stable supply chain finance setup in future crises.
Werttreiber Lean Production
(2013)
Do companies that use lean production methods increase their enterprise value, and if so, by how much? The team of authors from Reutlingen University examined the interplay of the management concepts working capital management and value orientation, and presents the encouraging results using one scenario each for a large company and an SME.
Despite the low-interest-rate environment, working capital management remains an important driver of value metrics in companies and an important management instrument. Our results for 115 companies from the most important German indices in the years 2011 to 2017 show that effective working capital management can have a positive influence on profitability and enterprise value. At the same time, however, our results also show that working capital management has recently received less attention and that digital innovations are presumably not yet being used for efficiency gains to the extent that appears possible. Even against the backdrop of persistently low capital market interest rates, this must be viewed critically.
Twitter and citations
(2023)
Social media, especially Twitter, plays an increasingly important role among researchers in showcasing and promoting their research. Does Twitter affect academic citations? Making use of Twitter activity about columns published on VoxEU, a renowned online platform for economists, we develop an instrumental variable strategy to show that Twitter activity about a research paper has a causal effect on the number of citations that this paper will receive. We find that the existence of at least one tweet, as opposed to none, increases citations by 16-25%. Doubling overall Twitter engagement boosts citations by up to 16%.
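Read literally, the reported upper estimate implies a log-linear relation between engagement and citations. The following back-of-the-envelope sketch is an interpretation for illustration, not the authors' estimation code; it simply shows what "doubling engagement boosts citations by up to 16%" means numerically:

```python
import math

def citations_with_engagement(base_citations, engagement_ratio,
                              boost_per_doubling=0.16):
    """Scale a paper's expected citations by relative Twitter engagement
    under a log-linear effect: each doubling of engagement multiplies
    citations by (1 + boost), using the paper's upper estimate of 16%."""
    doublings = math.log2(engagement_ratio)
    return base_citations * (1 + boost_per_doubling) ** doublings

print(round(citations_with_engagement(100, 2), 2))   # one doubling
print(round(citations_with_engagement(100, 4), 2))   # two doublings
```

So a paper expected to gather 100 citations would, under the upper-bound estimate, reach roughly 116 with doubled engagement and about 135 with quadrupled engagement, compounding multiplicatively rather than adding fixed increments.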
IGBT modules with anti-parallel FWDs are widely used in inductive load switching power applications, such as motor drive applications. Nowadays there is a continuous effort to increase the efficiency of such systems by decreasing their switching losses. This paper addresses the problems arising in the turn-on process of an IGBT working in hard-switching conditions. A method is proposed which achieves – contrary to most other approaches – a high switching speed and, at the same time, a low peak reverse-recovery current. This is done by applying an improved gate current waveform that is briefly lowered during the turn-on process. The proposed method achieves low switching losses. Its effectiveness is demonstrated by experimental results with IGBT modules for 600V and 1200V.
The present invention relates to a transmission line pulse system for generating an electrical pulse, as well as a corresponding method. The transmission line pulse system comprises: a transmission line, a power supply source for charging the transmission line, and a discharge switch for triggering a discharge of the charged transmission line, characterized in that the transmission line comprises a plurality of individual segments, wherein each individual segment is electrically connected to a common ground potential via an associated adjusting element, and wherein at least one of the adjusting elements comprises an adjusting capacitor and an adjusting switch.
This paper addresses the turn-on switching process of insulated-gate bipolar transistor (IGBT) modules with anti-parallel free-wheeling diodes (FWD) used in inductive load switching power applications. An increase in efficiency, i.e. a decrease in switching losses, calls for a fast switching process of the IGBT, but this commonly implies high values of the reverse-recovery current overshoot. To overcome this undesired behaviour, a solution was proposed which achieves independent control of the collector current slope and the peak reverse-recovery current by applying a gate current that is briefly turned negative during the turn-on process. The feasibility of this approach has already been shown; however, a sophisticated control method is required to apply it in applications with varying currents, temperatures and device parameters. In this paper, a solution based on an adaptive, iterative closed-loop control is proposed. Its effectiveness is demonstrated by experimental results from a 1200 V / 200 A IGBT power module for different load currents and reverse-recovery current overshoots.
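The adaptive, iterative closed-loop idea can be caricatured in a few lines: after each switching event, the measured peak reverse-recovery current is compared with a target, and the gate-drive parameter is corrected proportionally. The plant model, gains and current values below are invented for illustration and bear no relation to the actual hardware:

```python
def tune_gate_parameter(target_peak, initial_gain, plant,
                        gain_step=0.05, iterations=20):
    """Iterative closed-loop tuning sketch: nudge the gate-drive
    parameter after each (simulated) switching event until the measured
    peak current approaches the target."""
    g = initial_gain
    for _ in range(iterations):
        peak = plant(g)                       # "measure" the peak current
        g += gain_step * (target_peak - peak) # proportional correction
    return g

# Assumed monotone plant: a higher gate parameter yields a higher peak
# reverse-recovery current (purely illustrative model, peak in amperes)
plant = lambda g: 40 + 30 * g
g = tune_gate_parameter(target_peak=70.0, initial_gain=0.2, plant=plant)
print(round(plant(g), 2))   # converges toward the 70 A target
```

The real controller must additionally adapt to load current and temperature, which is why a single fixed correction gain, as used here, would not suffice in practice.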
Artificial intelligence (AI) is one of the most promising technologies of the post-pandemic era. Cloud computing technology can simplify the process of developing AI applications by offering a variety of services, including ready-to-use tools to train machine learning (ML) algorithms. However, comparing the vast number of services offered by different providers and selecting a suitable cloud service can be a major challenge for many firms. In academia, too, suitable criteria to evaluate this type of service remain largely unclear. Therefore, the overall aim of this work has been to develop a framework to evaluate cloud-based ML services. We use Design Science Research as our methodology and conduct a hermeneutic literature review, a vendor analysis, as well as expert interviews. Based on our research, we present a novel framework for the evaluation of cloud-based ML services consisting of six categories and 22 criteria that are operationalized with the help of various metrics. We believe that our results will help organizations by providing specific guidance on how to compare and select service providers from the vast number of potential suppliers.
The diversity of energy prosumer types makes it difficult to create appropriate incentive mechanisms that satisfy both prosumers and energy system operators alike. Meanwhile, European energy suppliers buy guarantees of origin (GoO), which allow them to sell green energy at premium prices while in reality delivering grey energy to their customers. Blockchain technology has proven itself to be a robust payment system in which users transact money without the involvement of a third party. Blockchain tokens can be used to represent a unit of energy and, just like GoOs, be submitted to the market. This paper focuses on simulating a marketplace using the Ethereum blockchain and smart contracts, where prosumers can sell tokenized GoOs to consumers willing to subsidize renewable energy producers. Such markets bypass energy providers by allowing consumers to obtain tokenized GoOs directly from the producers, which in turn benefit directly from the earnings. Two market strategies in which tokens are sold as GoOs have been simulated. In the Fix Price Strategy, prosumers sell their tokens at the average GoO price of 2014. The Variable Price Strategy focuses on selling tokens at a price range defined by the difference between grey and green energy. The study finds that the Ethereum blockchain is robust enough to function as a platform for tokenized GoO trading. The simulation results indicate that prosumers earn significantly more money by following the Variable Price Strategy.
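The two price strategies can be sketched as simple revenue rules. All prices below are placeholder assumptions (the actual 2014 average GoO price and grey/green spread are not given here), so the snippet only illustrates the mechanics, not the paper's simulation on Ethereum:

```python
AVG_GOO_PRICE_2014 = 0.45         # assumed EUR/MWh, illustration only
GREY_GREEN_SPREAD = (0.30, 0.90)  # assumed price corridor, EUR/MWh

def fixed_price_revenue(tokens_mwh):
    """Fix Price Strategy: every GoO token sells at the 2014 average price."""
    return tokens_mwh * AVG_GOO_PRICE_2014

def variable_price_revenue(tokens_mwh, demand_factor):
    """Variable Price Strategy: the price moves within the grey/green
    spread according to demand (0 = no demand, 1 = maximum demand)."""
    lo, hi = GREY_GREEN_SPREAD
    return tokens_mwh * (lo + demand_factor * (hi - lo))

tokens = 1000
print(fixed_price_revenue(tokens))
print(variable_price_revenue(tokens, demand_factor=0.7))
```

Under these assumed numbers, sufficiently strong demand lets the variable strategy outearn the fixed one, mirroring the paper's finding that prosumers benefit from demand-driven pricing.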
Executive education in IS is under the scrutiny of many institutions for its potential to bring in financial revenues. However, teaching executives can be very challenging because of their prior experience, the variation in their previous education, and the multiplicity of motivations for pursuing continuing education. The panel aims at sharing successful experiences and highlighting the challenges of dealing with executive audiences. The panel will present the results of a large survey among executive students and identify the three most significant elements that emerged from the survey: the importance of theory that is actionable, the importance of varied pedagogical tools and practices, and the importance of relevance beyond practical tools. Based on a survey distributed to the audience at the beginning of the panel, the audience will be actively engaged in sharing their experiences on the three topics, aiming to capitalize on and sum up the collective knowledge of the room.
Purpose
In recognising the key role of business intelligence and big data analytics in influencing companies’ decision-making processes, this paper aims to codify the main phases through which companies can approach, develop and manage big data analytics.
Design/methodology/approach
By adopting a research strategy based on case studies, this paper depicts the main phases and challenges that companies “live” through in approaching big data analytics as a way to support their decision-making processes. The analysis of case studies has been chosen as the main research method because it offers the possibility for different data sources to describe a phenomenon and subsequently to develop and test theories.
Findings
This paper provides a possible depiction of the main phases and challenges through which the approach(es) to big data analytics can emerge and evolve over time with reference to companies’ decision-making processes.
Research limitations/implications
This paper recalls the attention of researchers in defining clear patterns through which technology-based approaches should be developed. In its depiction of the main phases of the development of big data analytics in companies’ decision-making processes, this paper highlights the possible domains in which to define and renovate approaches to value. The proposed conceptual model derives from the adoption of an inductive approach. Despite its validity, it is discussed and questioned through multiple case studies. In addition, its generalisability requires further discussion and analysis in the light of alternative interpretative perspectives.
Practical implications
The reflections herein offer practitioners interested in company management the possibility to develop performance measurement tools that can evaluate how each phase can contribute to companies’ value creation processes.
Originality/value
This paper contributes to the ongoing debate about the role of digital technologies in influencing managerial and social models. This paper provides a conceptual model that is able to support both researchers and practitioners in understanding through which phases big data analytics can be approached and managed to enhance value processes.
Urgent action is needed to keep alive the chance of limiting global warming to 1.5°C or even 2.0°C. Current outlooks by the IPCC and many other organisations forecast that this will be impossible at the current pace of emission 'reductions' – Germany has already hit 1.5°C of warming this year. Across 2019, particularly during the UN Climate Summit in New York, numerous organisations declared their ambition to become net carbon neutral. Amongst these were investors and companies, including quite a number of German ones.
We apply a mixed methods approach, utilising data gathered from approx. 900 companies after Climate Week in the context of the Energy Efficiency Index of German Industry (EEI), along with media research focusing on announced decarbonisation plans and initiatives pledging climate action.
With this, we analyse how German companies in the manufacturing sectors react to rising societal pressure and emerging policies, in particular what measures they have taken or plan to implement to reduce the footprint of their company, their products and their supply chain. We specifically analyse whether and in what way energy and resource consumption, as well as carbon emissions, are considered in the development and lifecycle of the goods manufactured. This is of huge relevance, as these goods determine the future footprint of buildings, vehicles and industry.
Regarding the supply chain, current articles indicate that small and medium-sized enterprises (SMEs) are particularly challenged by increasing demands from their large corporate clients and an alleged lack of preparedness to take and afford prompt decarbonisation action themselves (Buchenau et al. 2019). Notably, the automotive industry recently announced new models that will be 100% carbon neutral all the way through (ibid.). We thus analyse if and how factors such as company size, energy intensity and sector affiliation influence a company's plan to fully decarbonize. Ownership structure and corporate culture, it appears, significantly impact the degree of decarbonisation action underway.
From the perspective of manufacturing companies, the political, media and economic discourse on decarbonisation in recent years manifests itself as a growing societal expectation of action. In Germany in particular, this discourse is also being driven forward by powerful companies and sectors, most notably the automotive industry. Against this background, the present paper examines how German manufacturing companies react to rising societal pressure and emerging policies. It examines which measures the companies have taken or plan to take to reduce their carbon footprint, which aspirations are associated with this, and the structural characteristics (company size, energy intensity, and sector) by which these are influenced. A mixed methods approach is applied, utilising data gathered from approx. 900 companies in the context of the Energy Efficiency Index of German Industry (EEI), along with media research focusing on the announced decarbonisation plans and initiatives. We demonstrate that one-size-fits-all approaches are not suitable to decarbonise industry, as situations and ambitions differ considerably depending on size, energy intensity and sector. Even though the levels of ambition and urgency are high, micro and energy-intensive companies in particular are challenged. The present research uncovers a series of questions that call for attention in order to materialise the ambitions and address the challenges outlined.
The digital transformation is today’s dominant business transformation, strongly influencing how digital services and products are designed in a service-dominant way. A popular underlying theory of value creation and economic exchange, known as the service-dominant (S-D) logic, can be connected to many successful digital business models. However, S-D logic by itself is abstract: companies cannot easily use it directly as an instrument for business model innovation and design. To address this, a comprehensive ideation method based on S-D logic is proposed, called service-dominant design (SDD). SDD aims to support firms in the transition to a service- and value-oriented perspective. The method provides a simplified way to structure the ideation process based on four model components. Each component consists of practical implications, auxiliary questions and visualization techniques that were derived from a literature review, a use case evaluation in digital mobility and a focus group discussion. SDD represents a first step towards a toolset that can support established companies in the process of service and value orientation as part of their digital transformation efforts.
The digital transformation is today's dominant business transformation, strongly influencing how digital services and products are designed in a service-dominant way. A popular underlying theory of value creation and economic exchange, known as service-dominant (S-D) logic, can be connected to many successful digital business models. However, S-D logic by itself is abstract; companies cannot readily use it as an instrument for business model innovation and design. To change this, a comprehensive ideation method based on S-D logic is proposed, called service-dominant design (SDD). SDD aims to support companies in the transition to a service- and value-oriented perspective. The method provides a simplified way to structure the ideation process based on four model components. Each component consists of practical implications, auxiliary questions and visualisation techniques derived from a literature review, a use case evaluation in digital mobility and a focus group discussion. SDD is a first step towards a toolset that can support established companies in becoming service- and value-oriented as part of their digital transformation.
In the management of sports clubs, there is a growing ambition to operate in the market as a professional business enterprise, with brand considerations gaining increasing strategic importance. The study "Marken im deutschen Profisport 2012/2013" (Brands in German Professional Sports 2012/2013), presented here in excerpts, examines the self-perception and external perception of the clubs of the five team sport leagues with the highest revenues in Germany (the 1st and 2nd Fußball-Bundesliga, the Handball-Bundesliga, the Basketball-Bundesliga, and the Deutsche Eishockey Liga). The study's findings are based on the opinions of a total of 4,678 surveyed sports fans as well as 58 clubs. Above all, the study finds that there is, in some cases, a considerable discrepancy between many clubs' own aspirations and their actual brand image. Only eleven clubs are perceived by fans as a "real brand". The core of the study is a classification system for sports brands developed by the authors, which could serve as a basis for brand-related decisions in the future.
This introductory chapter starts with a brief discussion of the differences between the long-standing perspective on sports marketing and the more modern sports marketing approach. The discussion leads to the ultimate question of whether sports marketing can be seen as a new and independent marketing discipline rather than an ordinary form of marketing. In addition, a coherent definition of sports marketing is presented, which serves as the underlying definition of this edited volume. Then the most important characteristics of sports from a marketing perspective are explained using real-life examples. The structure as well as the individual chapters of this book are then introduced. This first chapter concludes with an introduction to the German Institute for Sports Marketing, which was founded by the editors of this book.
Marketing in sports
(2014)
In this chapter the principles of marketing are explained and transferred to the context of sports. Following a brief introduction, the principles of marketing are outlined and explained in further detail. Then the subject of sports marketing is introduced from different perspectives using various definitions and approaches. Afterwards the focus is on the unique characteristics of sports marketing, before a model of sports marketing is presented. It is then shown how professional sporting organisations might market their products and themselves. The chapter concludes with a detailed case study using the example of FC St. Pauli, one of only a few real brands in German sports.
This chapter presents the diverse facets of sports marketing in Western Europe. It showcases the most important types of sports, the most significant leagues, the best-known clubs, the most popular athletes and the biggest sporting events in Western Europe, while elaborating on the relevant aspects of sports marketing. We examine European sports consumers, characterise the sports marketing market in Western Europe and explain the current scientific/academic status of sports marketing. Moreover, we illustrate the motives for the internationalisation taking place in sports marketing. In conclusion, this chapter includes an international case study on the entry of the NFL into the European market.
Although sport is generally defined as motor activity, it has always been much more than that. Since management and sports pursue the same objective of achieving the highest performance, the correlation between these two fields is nowadays becoming increasingly interesting in terms of corporate strategy. This chapter aims to show how organisations as well as individuals can benefit from the general and psychological values and strategies of sports, by first looking at the general framework of professional sports and then applying approaches from various types of sports directly to certain business functions such as general management, human resource management and marketing management. The chapter concludes with an international case study and a brief outlook.
This concluding chapter summarises and discusses the different parts and findings of the anthology at hand. The main statements and conclusions of each chapter are presented. Following this, the editors look into the future of the sports business and sports management in general, and the future of sports marketing in particular, and draw a final conclusion.