Purpose: The purpose of this paper is to describe and discuss the current state of fashion business academic education worldwide. This is motivated by the wish to develop recommendations for the fashion business bachelor program of Reutlingen University.
Design/methodology/approach: This paper is based on a systematic review of relevant fashion business academic programs. A qualitative comparison is conducted through a categorization of the programs’ content and a score system evaluating the programs’ concepts.
Findings: Key findings were that several factors ensure successful fashion business education: Industry connections, international networks, project-based work, personalized career services and innovative approaches in teaching that include all steps along the fashion value chain.
Research limitations/implications: The research was primarily limited by the small number of schools assessed. As a result of the restricted time frame, the schools presented could only be analyzed with regard to a few aspects. Future research should focus on a more in-depth analysis and further-reaching comparisons, e.g. comparisons with teaching concepts outside the fashion business area or with requirements of fashion companies.
Background. The application of lean management is standard in many companies all over the world. It is used to continuously optimise existing production processes and to reduce the complexity of administrative processes. Unfortunately, in higher education, the awareness of lean management as a highly effective methodology is quite low.
Research aims. The research aim is to show how the lean strategy can be applied in university environments. Finally, this paper addresses the question of why it is so difficult to implement lean in a university environment and how an institution of higher education can move forward towards becoming a lean university.
Methodology. Based on a literature review, five key lean principles are presented and examples of their implementation are discussed using short case studies from our own institution. We also compare our findings with those in the literature.
Key findings. Lean offers the chance to improve the management of higher education institutions. This requires a commitment from the university's top management to convincing all stakeholders that a lean culture helps the institution adapt to the rapidly changing environment of higher education.
Reconstructing 3D face shape from a single 2D photograph as well as from video is an inherently ill-posed problem with many ambiguities. One way to resolve some of the ambiguities is to use a 3D face model to aid the task. 3D morphable face models (3DMMs) are amongst the state-of-the-art methods for 3D face reconstruction, or so-called 3D model fitting. However, currently existing methods have severe limitations, and most of them have not been trialled on in-the-wild data. Current analysis-by-synthesis methods form complex nonlinear optimisation processes, and optimisers often get stuck in local optima. Further, most existing methods are slow, requiring on the order of minutes to process one photograph.
This thesis presents an algorithm to reconstruct 3D face shape from a single image as well as from sets of images or video frames in real time. We introduce a solution for linear fitting of a PCA shape identity model and expression blendshapes to 2D facial landmarks. To improve the accuracy of the shape, a fast face contour fitting algorithm is introduced. These different components of the algorithm are run in iteration, resulting in a fast, linear shape-to-landmarks fitting algorithm. The algorithm is specifically designed to fit to landmarks obtained from in-the-wild images, tackling imaging conditions that occur in such images, like facial expressions and the mismatch of 2D–3D contour correspondences. It achieves the shape reconstruction accuracy of much more complex, nonlinear state-of-the-art methods, while being multiple orders of magnitude faster.
Second, we address the problem of fitting to sets of multiple images of the same person, as well as monocular video sequences. We extend the proposed shape-to-landmarks fitting to multiple frames by using the knowledge that all images are from the same identity. To recover facial texture, the approach uses texture from the original images, instead of employing the often-used PCA albedo model of a 3DMM. We employ an algorithm that merges texture from multiple frames in real-time based on a weighting of each triangle of the reconstructed shape mesh.
Last, we make the proposed real-time 3D morphable face model fitting algorithm available as open-source software. In contrast to the ubiquitously available 2D-based face models and code, there is a general lack of software for 3D morphable face model fitting, hindering widespread adoption. The library thus constitutes a significant contribution to the community.
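The linear shape-to-landmarks fitting at the core of the algorithm can be illustrated as a regularised linear least-squares problem. The sketch below is a toy version under strong assumptions: random stand-in model and landmarks, a fixed orthographic projection, and no pose, expression, or contour terms; all names, dimensions, and the ridge weight are invented for illustration.

```python
import numpy as np

# Hypothetical toy dimensions: a PCA shape model with K components
# evaluated at L landmark vertices (sizes are illustrative only).
L, K = 68, 10
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(3 * L,))   # mean 3D landmark positions
basis = rng.normal(size=(3 * L, K))      # PCA shape basis at the landmarks
y = rng.normal(size=(2 * L,))            # observed 2D landmarks

# Orthographic projection dropping the z coordinate: maps the stacked
# (x, y, z) vector of each landmark to its (x, y) image position.
P = np.zeros((2 * L, 3 * L))
for i in range(L):
    P[2 * i, 3 * i] = 1.0
    P[2 * i + 1, 3 * i + 1] = 1.0

# Ridge-regularised linear least squares for the shape coefficients a:
#   min_a || P (mean + B a) - y ||^2 + lam ||a||^2
A = P @ basis
b = y - P @ mean_shape
lam = 1.0
coeffs = np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ b)
print(coeffs.shape)  # (10,)
```

Because the problem is linear, the coefficients come from a single K-by-K solve; iterating such solves with pose and contour estimation is what makes real-time fitting plausible.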
Thematic issue on human-centred ambient intelligence: cognitive approaches, reasoning and learning
(2017)
This editorial presents advances in human-centred Ambient Intelligence applications which take cognitive issues into account when modelling users (e.g. stress, attention disorders) and learn users' activities and preferences and adapt to them (e.g. at home, while driving a car). The papers also show AmI applications in health and education, which makes them even more valuable for society at large.
The main challenge when driving heat pumps with PV electricity is balancing differing electrical and thermal demands. In this article, a heuristic method for optimal operation of a heat pump driven by a maximum share of PV electricity is presented. For this purpose, the thermal storages, including the one for domestic hot water (DHW), are activated in order to shift the operation of the heat pump to times of PV generation. The system under consideration covers the thermal and electrical demands of a single-family house. It consists of a heat pump, a thermal energy storage for DHW and a grid-connected PV system. For heating and the generation of domestic hot water, the heat pump runs with two different supply temperatures, thereby achieving a maximum overall COP. Within the optimization algorithm, a set of heuristic rules is developed such that the operational characteristics of the heat pump in terms of minimum running and stopping times are met, as well as the limiting constraints of upper and lower limits of room temperature and of the energy content of the storage. From the electricity generated, a varying number of heat pump schedules fulfilling the boundary conditions are created. Finally, the schedule offering the maximum on-site utilization of PV electricity with a minimum number of heat pump starts, which serves as a secondary condition, is selected. Yearly simulations of this combination have been carried out. Initial results of this method indicate a significant rise in on-site consumption of PV electricity and in the fulfilment of heating demand by renewable electricity, with no need for a massive TES for the heating system in terms of a big water tank.
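The schedule-selection step described above can be sketched in miniature. The example below is hypothetical (toy PV forecast, invented constraint values, and exhaustive enumeration in place of the paper's heuristic rule set): candidate on/off schedules that respect a minimum running time are ranked by their overlap with forecast PV generation, with the number of heat pump starts as the secondary criterion.

```python
import itertools

HOURS = 8
pv_forecast = [0, 0, 1, 3, 4, 3, 1, 0]   # toy PV power per hour (kW)
MIN_RUN = 2                               # minimum running time (hours)
REQUIRED_ON = 4                           # required hours of operation

def runs_ok(schedule):
    """Check that every contiguous ON block lasts at least MIN_RUN hours."""
    block = 0
    for on in schedule + (0,):            # sentinel closes the last block
        if on:
            block += 1
        elif block:
            if block < MIN_RUN:
                return False
            block = 0
    return True

def starts(schedule):
    """Count how often the heat pump is switched on."""
    return sum(1 for i, on in enumerate(schedule)
               if on and (i == 0 or not schedule[i - 1]))

# Enumerate all feasible schedules, then pick the one maximizing PV
# overlap; fewer starts breaks ties (the secondary condition).
candidates = [s for s in itertools.product((0, 1), repeat=HOURS)
              if sum(s) == REQUIRED_ON and runs_ok(s)]
best = max(candidates,
           key=lambda s: (sum(p for p, on in zip(pv_forecast, s) if on),
                          -starts(s)))
print(best)
```

A real planner would replace the brute-force enumeration with the rule-based schedule generation from the paper, since the number of schedules grows exponentially with the horizon.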
Painting galleries typically provide a wealth of data composed of several data types. These multivariate data are too complex for laymen such as museum visitors to get an overview of all paintings and to look for specific categories. Ultimately, the goal is to guide visitors to a specific painting they wish to have a closer look at. In this paper we describe an interactive visualization tool that first provides such an overview and lets people experiment with the more than 41,000 paintings collected in the Web Gallery of Art. To generate such an interactive tool, our technique is composed of different steps, such as data handling, algorithmic transformations, visualizations, and interactions, with the human user working with the tool with the goal of detecting insights in the provided data. We illustrate the usefulness of the visualization tool by applying it to such characteristic data and show how one can get from an overview of all paintings to specific paintings.
How can the skin be protected from sunburn? The sun can damage your skin, e.g. by causing skin cancer, but it also has positive effects on humans. The time spent in the sun and its intensity are the key values that separate an enjoyable sunbath from damage to the skin. A smart device like a UV flower could help you to enjoy the sunbath: it measures the UV index around you and sends this information to a smartphone app. The development steps of such a device are described in this paper. The UV flower is made of textile fabrics.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. An essential component is the development of user interfaces for mobile medical applications. The conceptual phase is crucial for the subsequent development process: inconsistencies or errors in this phase have a serious impact on all areas and can prevent certification for market approval.
This paper presents a guide to support developers in this process. It was developed based on an analysis of the legal requirements for publishing a medical device.
A sleep study is a test used to diagnose sleep disorders and is usually done in sleep laboratories. The gold standard for the evaluation of sleep is overnight polysomnography (PSG). Unfortunately, in-lab sleep studies are expensive and complex procedures. Furthermore, with a minimum of 22 wire attachments to the patient for sleep recording, this medical procedure is invasive and unfamiliar for the subjects. To solve this problem, low-cost home diagnostic systems based on noninvasive recording methods require further research.
To this end, it is important to find suitable vital parameters for classifying the sleep phases WAKE, REM, light sleep and deep sleep without causing any physical impairment. We decided to analyse body movement (BM), respiration rate (RR) and heart rate variability (HRV) from existing sleep recordings to develop an algorithm which is able to classify the sleep phases automatically. The preliminary results of this project show that BM, RR and HRV are suitable to identify the WAKE, REM and NREM stages.
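A classifier of the kind described could, in its simplest form, be rule-based. The following sketch is purely hypothetical: the features, thresholds, and rules are invented for illustration and are not taken from the study, which derives its algorithm from real sleep recordings.

```python
# Hypothetical rule-based sketch: how body movement (BM), respiration-rate
# variability and heart rate variability (HRV) could separate WAKE, REM
# and NREM epochs. All thresholds are invented for illustration.
def classify_epoch(bm, rr_var, hrv):
    """Classify one 30 s epoch from normalized features in [0, 1]."""
    if bm > 0.5:                      # frequent movement -> awake
        return "WAKE"
    if rr_var > 0.6 and hrv > 0.6:    # irregular breathing, high HRV -> REM
        return "REM"
    return "NREM"                     # calm, regular epochs -> non-REM sleep

epochs = [(0.8, 0.2, 0.3), (0.1, 0.7, 0.8), (0.05, 0.2, 0.2)]
print([classify_epoch(*e) for e in epochs])  # ['WAKE', 'REM', 'NREM']
```

In practice such thresholds would be learned from annotated PSG data rather than fixed by hand, and epochs would be smoothed over time to respect sleep-cycle structure.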
To analyze human sleep it is necessary to identify the sleep stages occurring during sleep, their durations and the sleep cycles. The gold standard procedure for this is polysomnography (PSG), which classifies the sleep stages based on the Rechtschaffen and Kales (R-K) method. Besides advantages such as high accuracy, this method has some disadvantages, among them a procedure that is time-consuming and uncomfortable for the patient. Therefore, the development of further methods for sleep classification in addition to PSG is a promising topic for investigation, and this work aims to present possible ways and goals for this development.
Asymmetric read/write storage technologies such as Flash are becoming a dominant trend in modern database systems. They introduce hardware characteristics and properties which are fundamentally different from those of traditional storage technologies such as HDDs.
Multi-Versioning Database Management Systems (MV-DBMSs) and Log-based Storage Managers (LbSMs) are concepts that can effectively address the properties of these storage technologies but are designed for the characteristics of legacy hardware. A critical component of MV-DBMSs is the invalidation model: commonly, transactional timestamps are assigned to the old and the new version, resulting in two independent (physical) update operations. Those entail multiple random writes as well as in-place updates, sub-optimal for new storage technologies both in terms of performance and endurance. Traditional page-append LbSM approaches alleviate random writes and immediate in-place updates, hence reducing the negative impact of Flash read/write asymmetry. Nevertheless, they entail significant mapping overhead, leading to write amplification.
In this work we present an approach called Snapshot Isolation Append Storage Chains (SIAS-Chains) that employs a combination of multi-versioning, append storage management in tuple granularity and a novel singly-linked (chain-like) version organization. SIAS-Chains features simplified buffer management and multi-version indexing, and introduces read/write optimizations to data placement on modern storage media. SIAS-Chains algorithmically avoids small in-place updates caused by in-place invalidation and converts them into appends. Every modification operation is executed as an append, and recently inserted tuple versions are co-located.
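The chain-like version organization can be illustrated with a minimal in-memory sketch. This is not the SIAS-Chains on-disk format; the class and field names are invented, and transactional timestamps and garbage collection are omitted. It shows the core idea: every modification is an append that links to the previous version instead of invalidating it in place.

```python
# Simplified sketch of singly-linked version chains over append-only
# storage: a write never touches the old version, it only appends a new
# one that points back to its predecessor.
class AppendStore:
    def __init__(self):
        self.log = []     # append-only storage area: (tid, value, prev_pos)
        self.head = {}    # tuple id -> log position of the newest version

    def write(self, tid, value):
        prev = self.head.get(tid)            # link to the old version ...
        self.log.append((tid, value, prev))  # ... via a single append
        self.head[tid] = len(self.log) - 1   # no in-place invalidation

    def read(self, tid, snapshot=None):
        """Follow the chain back to the newest version visible at
        `snapshot` (a log position), or the current head."""
        pos = self.head.get(tid)
        while pos is not None and snapshot is not None and pos > snapshot:
            pos = self.log[pos][2]
        return None if pos is None else self.log[pos][1]

store = AppendStore()
store.write("a", 1)   # log position 0
store.write("a", 2)   # log position 1, chained to position 0
print(store.read("a"), store.read("a", snapshot=0))  # 2 1
```

Because versions are only ever appended, writes are sequential (friendly to Flash), and a snapshot reader simply walks the chain until it reaches a version old enough to be visible.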
IT Governance (ITG) is crucial due to its significant impact on enabling innovation and enhancing firm performance. Hence, in the last decade ITG has become important in both academic and practical research. Although several studies have investigated individual aspects of ITG success and its impact on single determinants, the causal relationship of how ITG promotes firm performance remains unclear. Thus, a more comprehensive understanding of the link between ITG and firm performance is needed. To address this gap, this research aims at understanding how ITG and firm performance are related. Therefore, we conducted a systematic literature review (1) to create an overview of how current research structures the link between ITG mechanisms and firm performance, (2) to uncover key constructs as potential mediators or moderators on the general link between ITG and performance, and (3) to set the basis for future studies on the ITG-firm performance relationship.
We were able to identify a set of specific capabilities corporations need to develop in order to enhance brand love. Furthermore, the effects of most dynamic capabilities on brand love have a strong correlation to the degree of customer orientation. Other results are relevant concerning the proposed moderation and mediation hypotheses. Firstly, the impact of customer orientation on brand love is varied under specific market conditions, supporting our central moderation hypothesis (β = .259, p = .001). To be precise, the impact of customer orientation is strongest in markets that have low competitive differentiation in products and services. Other control variables like age, gender, or market form (B2B versus B2C) lead to no significant heterogeneity in the data set. Finally, mediation analyses show no significant “direct effect” of the existing DC constructs on brand love, supporting the mediating role of customer orientation.
Royal Philips' goal was to use innovation to improve the lives of three billion people a year by 2025. To reach that goal, the company was shifting from selling medical products in a transactional manner to providing integrated healthcare solutions based on digital health technology ("HealthTech").
This shift required a dual transformation. On one hand, the company needed to transform how healthcare was conducted. Healthcare professionals would have to change the way they worked and reimbursement schemes needed to change to incentivize payers, providers, and patients in vastly different ways. On the other hand, Philips needed to redesign how it worked internally. The company componentized its business, introduced digital platforms, and co-created solutions with the various stakeholders of the healthcare industry.
In other words: Royal Philips was transforming itself in order to reinvent healthcare in the digital age.
In 2016, German car manufacturer the Audi Group (AUDI AG) was working on an expanding array of digital innovations. The goals of these innovations varied, and included strengthening customer- and employee-facing processes, digitally enhancing existing products, and developing new, potentially disruptive business models. Audi’s IT unit was critical to each of these efforts. Based on personal interviews with 11 IT- and non-IT executives at Audi, this case examines the different ways in which digitization can help to enhance and transform an organization’s processes, products, and business models. The case also highlights the challenges that arise as large companies “digitize.”
Recent digital technologies like the Internet of Things and Augmented Reality have brought IT into companies’ core products. What were previously purely physical products are becoming hybrid or digitized. Despite receiving a lot of recent attention, digitized products have only seen a slow uptake in businesses so far. In this paper, we study the challenges that keep companies from realizing the desired impacts of digitized products and the practices they employ to address these challenges. To do so, we looked at companies from a set of industries that are highly affected by digital transformation, but at the same time hesitant to move to a more digitized world: the creative industries. Based on a literature review and twelve interviews in creative industries, we developed a conceptual model that can serve as a basis for formulating testable hypotheses for further research in this area.
Electronic word-of-mouth (eWoM) communication has received a lot of attention from the academic community. As multiple research papers focus on specific facets of eWoM, there is a need to integrate current research results systematically. Thus, this paper presents a scientific literature analysis in order to determine the current state-of-the-art in the field of eWoM.
This paper examines the efficacy of social media systems in customer complaint handling. The emergence of social media, as a useful complement and (possibly) a viable alternative to the traditional channels of service delivery, motivates this research. The theoretical framework, developed from literature on social media and complaint handling, is tested against data collected from two different channels (hotline and social media) of a German telecommunication services provider, in order to gain insights into channel efficacy in complaint handling. We contribute to the understanding of firm’s technology usage for complaint handling in two ways:
(a) by conceptualizing and evaluating complaint handling quality across traditional and social media channels and (b) by comparing the impact of complaint handling quality on key performance outcomes such as customer loyalty, positive word-of-mouth, and cross-purchase intentions across traditional and social media channels.
Pokémon Go was the first mobile augmented reality (AR) game to reach the top of the download charts of mobile applications. However, little is known about this new generation of mobile online AR games. Existing theories provide limited applicability for user understanding. Against this background, this research provides a comprehensive framework based on uses and gratification theory, technology risk research, and flow theory. The proposed framework aims to explain the drivers of attitudinal and intentional reactions, such as continuance in gaming or willingness to invest money in in-app purchases. A survey among 642 Pokémon Go players provides insights into the psychological drivers of mobile AR games. The results show that hedonic, emotional, and social benefits and social norms drive consumer reactions while physical risks (but not data privacy risks) hinder consumer reactions. However, the importance of these drivers differs depending on the form of user behavior.
How to separate the wheat from the chaff: improved variable selection for new customer acquisition
(2017)
Steady customer losses create pressure for firms to acquire new accounts, a task that is both costly and risky. Lacking knowledge about their prospects, firms often use a large array of predictors obtained from list vendors, which in turn rapidly creates massive high-dimensional data problems. Selecting the appropriate variables and their functional relationships with acquisition probabilities is therefore a substantial challenge. This study proposes a Bayesian variable selection approach to optimally select targets for new customer acquisition. Data from an insurance company reveal that this approach outperforms nonselection methods and selection methods based on expert judgment as well as benchmarks based on principal component analysis and bootstrap aggregation of classification trees. Notably, the optimal results show that the Bayesian approach selects panel-based metrics as predictors, detects several nonlinear relationships, selects very large numbers of addresses, and generates profits. In a series of post hoc analyses, the authors consider prospects’ response behaviors and cross-selling potential and systematically vary the number of predictors and the estimated profit per response. The results reveal that more predictors and higher response rates do not necessarily lead to higher profits.
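The idea behind Bayesian variable selection can be shown on a toy scale. The sketch below is illustrative only: it uses synthetic data, a tiny predictor set small enough to enumerate every subset, and a BIC-based approximation to the marginal likelihood, whereas the study works with a far larger predictor space and a proper Bayesian estimation procedure.

```python
import numpy as np
from itertools import combinations

# Synthetic data: only predictors 0 and 2 actually drive the outcome.
rng = np.random.default_rng(1)
n, p = 200, 4
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def bic(subset):
    """BIC of an OLS fit on the given predictor subset (plus intercept)."""
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + Xs.shape[1] * np.log(n)

# Score every subset and turn BIC scores into approximate posterior
# model probabilities (exp(-BIC/2), normalized).
models = [s for k in range(p + 1) for s in combinations(range(p), k)]
scores = np.array([bic(s) for s in models])
post = np.exp(-0.5 * (scores - scores.min()))
post /= post.sum()
best = models[int(np.argmax(post))]
print(best)  # should recover the true predictors 0 and 2
```

The BIC penalty term plays the role of the prior's preference for sparse models: an irrelevant predictor barely reduces the residual sum of squares, so its penalty pushes the model's posterior probability down.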
Characterisation of porous knitted titanium for replacement of intervertebral disc nucleus pulposus
(2017)
Effective restoration of human intervertebral disc degeneration is challenged by numerous limitations of the currently available spinal fusion and arthroplasty treatment strategies. Consequently, the use of artificial biomaterial implants is gaining attention as a potential therapeutic strategy. Our study is aimed at investigating and characterizing a novel knitted titanium (Ti6Al4V) implant for the replacement of the nucleus pulposus to treat early stages of chronic intervertebral disc degeneration. A specific knitted geometry of the scaffold with a porosity of 67.67 ± 0.824% was used to overcome tissue integration failures. Furthermore, to improve the wear resistance without impairing the original mechanical strength, an electro-polishing step was employed. The electro-polishing treatment reduced the surface roughness from 15.22 ± 3.28 to 4.35 ± 0.87 μm without affecting the wettability, which remained at 81.03 ± 8.5°. Subsequently, cellular responses of human mesenchymal stem cells (SCP1 cell line) and human primary chondrocytes were investigated, which showed positive responses in terms of adherence and viability. Surface wettability was further enhanced to a superhydrophilic nature by oxygen plasma treatment, which eventually caused a substantial increase in the proliferation of SCP1 cells and primary chondrocytes. Our study implies that, owing to the scaffold's physicochemical and biocompatible properties, it could improve the clinical performance of nucleus pulposus replacement.
A wide variety of cell types exhibit substrate topography-based behavior, also known as contact guidance. However, the precise cellular mechanisms underlying this process are still unknown. In this study, we investigated contact guidance by studying the reaction of human endothelial cells (ECs) to well-defined microgroove topographies, both during and after initial cell spreading. As the cytoskeleton plays a major role in cellular adaptation to topographical features, two methods were used to perturb cytoskeletal structures. Inhibition of actomyosin contractility with the chemical inhibitor blebbistatin demonstrated that initial contact guidance events are independent of traction force generation. However, cell alignment to the grooved substrate was altered at later time points, suggesting an initial ‘passive’ phase of contact guidance, followed by a contractility-dependent ‘active’ phase that relies on mechanosensitive feedback. The actin cytoskeleton was also perturbed in an indirect manner by culturing cells upside down, resulting in decreased levels of contact guidance and suggesting that a possible loss of contact between the actin cytoskeleton and the substrate could lead to cytoskeleton impairment. The process of contact guidance at the microscale was found to be primarily lamellipodia driven, as no bias in filopodia extension was observed on micron-scale grooves.
Intermediate filament reorganization dynamically influences cancer cell alignment and migration
(2017)
The interactions between a cancer cell and its extracellular matrix (ECM) have been the focus of an increasing amount of investigation. The role of the intermediate filament keratin in cancer has also been coming into focus of late, but more research is needed to understand how this piece fits in the puzzle of cytoskeleton-mediated invasion and metastasis. In Panc-1 invasive pancreatic cancer cells, keratin phosphorylation in conjunction with actin inhibition was found to be sufficient to reduce cell area below either treatment alone. We then analyzed intersecting keratin and actin fibers in the cytoskeleton of cyclically stretched cells and found no directional correlation. The role of keratin organization in Panc-1 cellular morphological adaptation and directed migration was then analyzed by culturing cells on cyclically stretched polydimethylsiloxane (PDMS) substrates, nanoscale grates, and rigid pillars. In general, the reorganization of the keratin cytoskeleton allows the cell to become more ‘mobile’- exhibiting faster and more directed migration and orientation in response to external stimuli. By combining keratin network perturbation with a variety of physical ECM signals, we demonstrate the interconnected nature of the architecture inside the cell and the scaffolding outside of it, and highlight the key elements facilitating cancer cell-ECM interactions.
AUDI AG has historically focused on producing and selling premium vehicles but has begun to experiment with providing mobility services, built around car sharing. Its response to the so-called sharing economy addressed strategic and transformational challenges. Strategically, the company pursued additional sources of revenue from targeted, premium mobility services, rather than the less segmented services provided by competitors such as BMW and Zipcar. AUDI AG also transformed its organizational structure, processes and architecture to balance autonomy for innovation and integration for competitiveness.
Cell-cell and cell-extracellular matrix (ECM) adhesion regulates fundamental cellular functions and is crucial for cell-material contact. Adhesion is influenced by many factors, like the affinity and specificity of the receptor-ligand interaction or the overall ligand concentration and density. To investigate molecular details of cell-ECM and cadherin (cell-cell) interactions in vascular cells, functional nanostructured surfaces were used. Ligand-functionalized gold nanoparticles (AuNPs) with 6-8 nm diameter are precisely immobilized on a surface and separated by non-adhesive regions so that individual integrins or cadherins can specifically interact with the ligands on the AuNPs. Using 40 nm and 90 nm distances between the AuNPs, functionalized either with peptide motifs of the extracellular matrix (RGD or REDV) or with vascular endothelial cadherin (VEC), the influence of distance and ligand specificity on spreading and adhesion of endothelial cells (ECs) and smooth muscle cells (SMCs) was investigated. We demonstrate that RGD-dependent adhesion of vascular cells is similar to that of other cell types and that the distance dependence for integrin binding to ECM peptides is also valid for the REDV motif. VEC ligands decrease adhesion significantly at the tested ligand distances. These results may be helpful for future improvements in vascular tissue engineering and for the development of implant surfaces.
This work presents a fully integrated GaN gate driver in a 180nm HV BCD technology that utilizes high-voltage energy storing (HVES) in an on-chip resonant LC tank, without the need for any external capacitor. It delivers up to 11nC of gate charge at a 5V GaN gate, which exceeds prior art by a factor of 45-83, supporting a broad range of GaN transistor types. The stacked LC tank covers an area of only 1.44mm², which corresponds to a superior value of 7.6nC/mm².
In recent years, significant progress was made on switched-capacitor (SC) DC-DC converters as they enable fully integrated on-chip power management. New converter topologies overcame the fixed input-to-output voltage limitation and achieved high efficiency at high power densities. SC converters are attractive not only for mobile handheld devices with small input and output voltages, but also for power conversion in IoT, industrial and automotive applications, etc. Such applications need to be capable of handling high input voltages of more than 10V. This talk highlights the challenges of the required supporting circuits and high-voltage techniques that arise for high-Vin SC converters, including level shifters, charge pumps and back-to-back switches. High-Vin conversion is demonstrated in a 4:1 SC DC-DC converter with an input voltage as high as 17V and a peak efficiency of 45%, and a buck-boost SC converter with an input voltage range from 2 up to 13V, which utilizes a total of 17 ratios and achieves a peak efficiency of 81.5%. Furthermore, a highly integrated micro power supply approach is introduced, which is connected directly to the 120/230 Vrms mains, with an output power of 3mW, resulting in a power density >390μW/mm², which exceeds prior art by a factor of 11.
Managing decentralized corporate energy systems is a challenging task for enterprises. However, the integration of energy objectives into business strategy creates difficulties resulting in inefficient decisions. To improve this, practice-proven methods such as the balanced scorecard and enterprise architecture management are transferred to the energy domain. The methods are evaluated based on a case study. Managing multi-dimensionality and high complexity are the main drivers for an effective and efficient energy management system. Both methods show a positive impact on managing decentralized corporate energy systems and are adaptable to the energy domain.
This paper presents a control strategy for optimal utilization of photovoltaic (PV) generated power in conjunction with an Energy Storage System (ESS). The ESS is specifically designed to be retrofitted into existing PV systems in an end-user application. It can be attached in parallel to the PV system and connects to existing DC/AC inverters. In particular, the study covers the impact such a modification has on the output power of existing PV panels. A distinct degradation of PV output power was found due to the different power characteristics of PV panel and ESS. To overcome such degradation a novel feedback system is proposed. The feedback system continuously modifies the power characteristic of the ESS to match the PV panel and thus achieves optimal power utilization. Impact on PV and power point tracking performance is analyzed. Simulation of the proposed system is performed in MATLAB/Simulink. The results are found to be satisfactory.
A novel configuration of the dual active bridge (DAB) DC/DC converter is presented, enabling more efficient wide voltage range conversion at light loads. A third phase leg as well as a center tapped transformer are introduced to one side of the converter. This concept provides two different turn ratios, thus extending the zero voltage switching operation resulting in higher efficiency. A laboratory prototype was built converting an input voltage of 40V to an output voltage in the range of 350V to 650V. Measurements show a significant increase up to 20% in the efficiency for light-load operation.
Multilevel-cell (MLC) flash is commonly deployed in today’s high density NAND memories, but low-latency and high-reliability requirements mean it is rarely used in automotive embedded flash applications. This paper presents a time domain voltage sensing scheme that applies a dynamic voltage ramp at the cells’ control gate (CG) in order to achieve fast and reliable sensing suitable for automotive applications.
This publication gives a short introduction and overview of the European project SCOUT and introduces a methodology for a holistic approach to recording the state of the art in technical enablers (vehicle and connectivity, human factors at the physiological and ergonomic level) and non-technical enablers (societal, economic, legal, regulatory and policy level) of connected and automated driving in Europe. Besides the technical topics of environmental perception, E/E architecture, actuators and security, the paper addresses the state of the art of the legal framework in the context of connected and automated driving.
In any autonomous driving system, the map for localization plays a vital part that is often underestimated. The map describes the world around the vehicle outside of the sensor view and is a main input into the decision-making process in highly complicated scenarios. Thus there are strict requirements on the accuracy and timeliness of the map. We present a robust and reliable approach to crowd-based mapping using a GraphSLAM framework based on radar sensors. We show on a parking lot that even in dynamically changing environments, the localization results are very accurate and reliable, even in unexplored terrain without any map data. This can be achieved by collaborative map updates from multiple vehicles. To show these claims experimentally, the Joint Graph Optimization is compared to the ground truth on an industrial parking space. Mapping performance is evaluated using a dense map from a total station as reference, and localization results are compared with a deeply coupled DGPS/INS system.
The European Economic and Monetary Union (EMU) has been in turmoil for more than six years. The present governance rules seem to solve the problems neither permanently nor effectively. There is no vision of the future of Europe in the 21st century. This article describes a realignment of the economic governance which does not necessarily lead to a transfer or political union, yet solves the current and future challenges. In fact, the redesign of the present rules is today the most likely as well as the most legally and economically feasible option. The key idea is the detachment from the compulsive idea of an ever closer union. However, this vision requires boldness towards greater flexibility, together with an exit clause or a state insolvency procedure for non-compliant member states.
This paper models the political budget cycle with stochastic differential equations. The paper highlights the development of the future volatility of the budget cycle. In fact, I confirm the proposition of a less volatile budget cycle in the future. Moreover, I show that this trend is even amplified due to higher transparency. These findings are new evidence in the literature on electoral cycles. I calibrate a rigorous stochastic model on public deficit-to-GDP data for several countries from 1970 to 2012.
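The abstract does not disclose the concrete model; a minimal sketch of the kind of process involved might be a mean-reverting diffusion whose volatility decays over time. All parameter values below are illustrative placeholders, not the paper's calibration:

```python
import numpy as np

def simulate_deficit(years=40, dt=0.01, theta=0.5, mu=-0.02,
                     sigma0=0.03, decay=0.02, seed=0):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma(t) dW,
    where sigma(t) = sigma0 * exp(-decay * t) shrinks over time,
    illustrating the claim of a less volatile budget cycle in the future.
    Hypothetical parameters, not calibrated to deficit-to-GDP data."""
    rng = np.random.default_rng(seed)
    n = int(years / dt)
    x = np.empty(n + 1)
    x[0] = mu
    for i in range(n):
        sigma = sigma0 * np.exp(-decay * i * dt)
        x[i + 1] = (x[i] + theta * (mu - x[i]) * dt
                    + sigma * np.sqrt(dt) * rng.normal())
    return x

path = simulate_deficit()
# volatility of increments in the first vs. the last decade
early = np.std(np.diff(path[:1000]))
late = np.std(np.diff(path[-1000:]))
print(early > late)  # True: the simulated cycle grows calmer over time
```

With the decaying volatility term, the sample standard deviation of the increments in the final decade is roughly half that of the first decade, mirroring the paper's qualitative finding.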
Database management systems (DBMS) are critical performance components in large-scale applications under modern update-intensive workloads. Additional access paths accelerate look-up performance in a DBMS for frequently queried attributes, but the required maintenance slows down update performance. The ubiquitous B+-tree is a commonly used key-indexed access path that supports many required functionalities with logarithmic access time to requested records. Modern processing and storage technologies and their characteristics require a reconsideration of matured indexing approaches for today's workloads. Partitioned B-trees (PBTs) leverage the characteristics of modern hardware technologies and complex memory hierarchies, as well as high update rates and changes in workloads, by maintaining partitions within one single B+-tree. This paper includes an experimental evaluation of the PBT's optimized write pattern and its performance improvements. With PBTs, transactional throughput under TPC-C increases by 30%; PBTs yield beneficial sequential write patterns even in the presence of updates and maintenance operations.
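The core mechanism of partitioned B-trees can be illustrated with a toy sketch: an artificial partition number is prepended to every key, so all new entries land in the newest ("hot") partition, turning random in-place updates into append-like writes. A sorted list stands in for the actual B+-tree here, and all names are invented for illustration:

```python
import bisect

class PartitionedBTree:
    """Toy model of the partitioned B-tree idea: composite keys
    (partition, key) live in one sorted structure; lookups scan
    partitions from newest to oldest, so the latest write wins."""
    def __init__(self):
        self.keys = []    # sorted list of (partition, key)
        self.vals = {}
        self.current = 0  # hot partition receiving all new writes

    def insert(self, key, val):
        composite = (self.current, key)
        bisect.insort(self.keys, composite)
        self.vals[composite] = val

    def freeze(self):
        """Start a new hot partition (e.g. after a merge step)."""
        self.current += 1

    def lookup(self, key):
        # newest partition wins, mirroring out-of-place updates
        for p in range(self.current, -1, -1):
            if (p, key) in self.vals:
                return self.vals[(p, key)]
        return None

t = PartitionedBTree()
t.insert("a", 1)
t.freeze()
t.insert("a", 2)      # update lands in the new partition, not in place
print(t.lookup("a"))  # → 2
```

Because inserts only ever touch the highest partition number, the write pattern into the underlying tree stays largely sequential, which is the property the paper evaluates on modern storage.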
Characteristics of modern computing and storage technologies fundamentally differ from those of traditional hardware. There is a need to optimally leverage their performance, endurance and energy consumption characteristics. Therefore, existing architectures and algorithms in modern high-performance database management systems have to be redesigned and advanced. Multi-Version Concurrency Control (MVCC) approaches in database management systems maintain multiple physically independent tuple versions. Snapshot isolation approaches enable high parallelism and concurrency in workloads with an almost serializable consistency level. Modern hardware technologies benefit from multi-version approaches. Indexing multi-version data on modern hardware is still an open research area. In this paper, we provide a survey of popular multi-version indexing approaches, extended in scope with high-performance single-version approaches. An optimal multi-version index structure balances look-up efficiency for tuple versions that are visible to transactions against the effort of index maintenance, for different workloads on modern hardware technologies.
Using measurement and simulation for understanding distributed development processes in the Cloud
(2017)
Organizations increasingly develop software in a distributed manner. The Cloud provides an environment to create and maintain software-based products and services. Currently, it is widely unknown which software processes are suited for Cloud-based development and what their effects in specific contexts are. This paper presents a process simulation to study distributed development in the Cloud. We contribute a simulation model, which helps analyze different project parameters and their impact on projects carried out in the Cloud. The simulator reproduces activities, developers, issues and events in the project, and it generates statistics, e.g., on throughput, total time, and lead and cycle time. The aim of this simulation model is thus to analyze the tradeoffs regarding throughput, total time, project size, and team size. Furthermore, the simulation model aims to help project managers select the most suitable planning alternative. Based on observed projects in Finland and Spain, we simulated a distributed project using artificial and real data. In particular, we studied the variables project size, team size, throughput, and total project duration. A comparison of the real project data with the results obtained from the simulation shows that the simulation produces results close to the real data, and we could successfully replicate a distributed software project. By improving the understanding of distributed development processes, our simulation model thus supports project managers in their decision-making.
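As a rough illustration of what such a process simulation computes, the following is a toy queueing sketch with invented parameters, not the paper's model:

```python
import random

def simulate_project(n_issues=200, team_size=4, seed=1):
    """Toy discrete-event sketch: issues arrive at a steady rate, each
    developer always takes the next issue once free, and effort per
    issue is random. Returns throughput (issues/day) and mean lead
    time (days from creation to completion). Illustrative only."""
    rng = random.Random(seed)
    free_at = [0.0] * team_size                    # when each developer is free
    created = [i * 0.2 for i in range(n_issues)]   # one new issue every 0.2 days
    lead_times = []
    for c in created:
        dev = min(range(team_size), key=lambda d: free_at[d])
        start = max(free_at[dev], c)               # wait for issue and developer
        effort = rng.uniform(0.5, 2.0)             # days of work per issue
        free_at[dev] = start + effort
        lead_times.append(free_at[dev] - c)
    total_time = max(free_at)
    return n_issues / total_time, sum(lead_times) / n_issues

throughput, lead = simulate_project()
print(round(throughput, 2), round(lead, 2))
```

Varying `team_size` or the issue inflow rate in such a sketch exposes exactly the throughput/total-time/team-size tradeoffs the simulation model is built to analyze.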
The business landscape is changing radically because of software. Companies in all industry sectors are continuously finding new flexibilities in this programmable world. They are able to deliver new functionality even after the product is already in the customer's hands. But success is far from guaranteed if they cannot validate their assumptions about what their customers actually need. A competitor with better knowledge of customer needs can disrupt the market in an instant.
This book introduces continuous experimentation, an approach to continuously and systematically test assumptions about the company's product or service strategy and verify customers' needs through experiments. By observing how customers actually use the product or early versions of it, companies can make better development decisions and avoid potentially expensive and wasteful activities. The book explains the cycle of continuous experimentation, demonstrates its use through industry cases, provides advice on how to conduct experiments with recipes, tools, and models, and lists some common pitfalls to avoid. Use it to get started with continuous experimentation and make better product and service development decisions that are in line with your customers' needs.
Due to rapidly changing technologies and business contexts, many products and services are developed under high uncertainties. It is often impossible to predict customer behaviors and outcomes upfront. Therefore, product and service developers must continuously find out what customers want, requiring a more experimental mode of management and appropriate support for continuously conducting experiments. We have analytically derived an initial model for continuous experimentation from prior work and matched it against empirical case study findings from two startup companies. We examined the preconditions for setting up an experimentation system for continuous customer experiments. The resulting RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing) illustrates the building blocks required for such a system and the necessary infrastructure. The major findings are that a suitable experimentation system requires the ability to design, manage, and conduct experiments, create so-called minimum viable products or features, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and integration of experiment results in the product development cycle, software development process, and business strategy. This summary refers to the article The RIGHT Model for Continuous Experimentation, published in the Journal of Systems and Software [Fa17].
First International Workshop on Hybrid dEveLopmENt Approaches in Software Systems Development
(2017)
A software process is the game plan to organize project teams and run projects. Yet, it is still a challenge to select the appropriate development approach for the respective context. A multitude of development approaches compete for the users’ favor, but there is no silver bullet serving all possible setups. Moreover, recent research as well as experience from practice show companies utilizing different development approaches to assemble the best-fitting approach for the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The first HELENA workshop aims to bring together the community to discuss recent findings and to steer future work.
The ability to develop and deploy high-quality software at high speed is increasingly relevant to the competitiveness of car manufacturers. Agile practices have shown benefits such as faster time to market in several application domains. Therefore, it seems promising to carefully adopt agile practices in the automotive domain as well. This article presents findings from an interview-based qualitative survey. It aims at understanding the perceived forces that support agile adoption. In particular, it focuses on embedded software development for electronic control units in the automotive domain.
Software and system development faces numerous challenges of rapidly changing markets. To address such challenges, companies and projects design and adopt specific development approaches by combining well-structured comprehensive methods and flexible agile practices. Yet, the number of methods and practices is large, and available studies argue that the actual process composition is carried out in a fairly ad-hoc manner. The present paper reports on a survey on hybrid software development approaches. We study which approaches are used in practice, how different approaches are combined, and what contextual factors influence the use and combination of hybrid software development approaches. Our results from 69 study participants show a variety of development approaches used and combined in practice. We show that most combinations follow a pattern in which a traditional process model serves as framework in which several fine-grained (agile) practices are plugged in. We further show that hybrid software development approaches are independent from the company size and external triggers. We conclude that such approaches are the results of a natural process evolution, which is mainly driven by experience, learning, and pragmatism.
The digital transformation of the automotive industry has a significant impact on how development processes need to be organized in the future. Dynamic market and technological environments require capabilities to react to changes and to learn fast. Agile methods are a promising approach to address these needs, but they are not tailored to the specific characteristics of the automotive domain, such as product line development. Although there have been efforts to apply agile methods in the automotive domain for many years, significant and widespread adoption has not yet taken place. The goal of this literature review is to gain an overview and a better understanding of agile methods for embedded software development in the automotive domain, especially with respect to product line development. A mapping study was conducted to analyze the relation between agile software development, embedded software development in the automotive domain and software product line development. Three research questions were defined and 68 papers were evaluated. The study shows that agile and product line development approaches tailored for the automotive domain are not yet fully explored in the literature. In particular, literature on the combination of agile and product line development is rare. Most of the examined combinations are customizations of generic approaches or approaches stemming from other domains. Although only a few approaches for combining agile and software product line development in the automotive domain were found, these findings were valuable for identifying research gaps and provide insights into how existing approaches can be combined, extended and tailored to suit the characteristics of the automotive domain.
Incubators in multinational corporations : development of a corporate incubator operator model
(2017)
This paper analyzes the components of a corporate incubator operator model in multinational companies. Three relevant phases were identified: pre-incubation, incubation, and exit. Each phase contains different criteria that represent critical success factors for a corporate incubator, based on theoretical findings and lessons learned from practice. During the pre-incubation phase, companies should define their need for a corporate incubator, the origin of ideas and the selection criteria for incubator tenants. The actual incubation phase refers to the incubator program, which should be flexible with respect to each tenant. Furthermore, resource allocation plays an important role during the incubator program. Exit options after a successful incubation differ according to internal ideas and external start-ups, as well as the objective of the incubator. The research is based on a comprehensive screening of the existing incubator literature and a qualitative content analysis of statements from eight experts of international corporate incubators.
Gallium nitride high electron mobility transistors (GaN-HEMTs) have low capacitances and can achieve low switching losses in applications where hard turn-on is required. Low switching losses imply a fast switching; consequently, fast voltage and current transients occur. However, these transients can be limited by package and layout parasitics even for highly optimized systems. Furthermore, a fast switching requires a fast charging of the input capacitance, hence a high gate current.
In this paper, the switching speed limitations of GaN-HEMTs due to the common source inductance and the gate driver supply voltage are discussed. The turn-on behavior of a GaN-HEMT is simulated and the impact of the parasitics and the gate driver supply voltage on the switching losses is described in detail. Furthermore, measurements are performed with an optimized layout for a drain-source voltage of 500 V and a drain-source current up to 60 A.
Modern power semiconductor devices have low capacitances and can therefore achieve very fast switching transients under hard-switching conditions. However, these transients are often limited by parasitic elements, especially by the source inductance and the parasitic capacitances of the power semiconductor. These limitations cannot be compensated by conventional gate drivers. To overcome this, a novel gate driver approach for power semiconductors was developed. It uses a transformer which accelerates the switching by transferring energy from the source path to the gate path.
Experimental results of the novel gate driver approach show a turn-on energy reduction of 78% (from 80 μJ down to 17 μJ) at a drain-source voltage of 500 V and a drain current of 60 A. Furthermore, the efficiency improvement is demonstrated for a hard-switching boost converter. For a switching frequency of 750 kHz with an input voltage of 230 V and an output voltage of 400 V, it was possible to extend the output power range by 35% (from 2.3 kW to 3.1 kW) due to the reduction of the turn-on losses, thereby lowering the junction temperature of the GaN-HEMT.
The presented wide-Vin step-down converter introduces a parallel-resonant converter (PRC), comprising an integrated 5-bit capacitor array and a 300 nH resonant coil, placed in parallel to a conventional buck converter. Unlike conventional resonant concepts, the implemented soft-switching control eliminates input-voltage-dependent losses over a wide operating range. This ensures high efficiency across a wide range of Vin = 12-48 V, 100-500 mA load and 5 V output at up to 15 MHz switching frequency. The peak efficiency of the converter is 76.3%. Thanks to the low output current ripple, the output capacitor can be as small as 50 nF, while the inductor tolerates a larger ESR, resulting in small component size. The proposed PRC architecture is also suitable for future power electronics applications using fast-switching GaN devices.
More and more power electronics applications utilize GaN transistors as they enable higher switching frequencies in comparison to conventional Si devices. Faster switching shrinks down the size of passives and enables compact solutions in applications like renewable energy, electrical cars and home appliances. GaN transistors benefit from ~10× smaller gate charge QG and gate drive voltages in the range of typically 5V vs. ~15V for Si.
Modern power transistors are able to switch at very high transition speed, which can cause EMC violations and overshoot. This is addressed by a gate driver with variable gate current, which is able to control the transition speed. The key idea is that the gate driver can influence the di/dt and dv/dt transitions separately and optimize whichever transition promises the highest improvement while keeping switching losses low. To account for changes in the load current, supply voltage, etc., a control loop is required in the driver to ensure optimized switching. In this paper, an efficient control scheme for an automotive gate driver with variable output current capability is presented. The effectiveness of the control loop is demonstrated for a MOSFET bridge consisting of OptiMOS-T2™ devices with a total gate charge of 39 nC. This bridge setup shows dv/dt transitions between 50 and 1000 ns, depending on the driving current. The driver is able to switch between gate current levels of 1 and 500 mA in 10/15 ns (rising/falling transition). With the implemented control loop, the driver is measured to significantly reduce the ringing and thereby reduce device stress and electromagnetic emissions while keeping switching losses 52% lower than with a constant-current driver.
A concept for a slope shaping gate driver IC is proposed, used to establish control over the slew rates of current and voltage during the turn-on and turn-off switching transients.
It combines the high speed and linearity of a fully integrated closed-loop analog gate driver, which is able to perform real-time regulation, with the advantages of digital control, like flexibility and parameter independence, operating in a predictive cycle-by-cycle regulation. In this work, the analog gate driver integrated circuit is partitioned into functional blocks and modeled in the small-signal domain, which also includes the non-linearity of parameters. An analytical stability analysis has been performed in order to ensure full functionality of the system controlling a modern-generation IGBT and a superjunction MOSFET. Major parameters of influence, such as the gate resistor and the summing node capacitance, are investigated to achieve stable control. The large-signal behavior, investigated by simulations of a transistor-level design, verifies the correct operation of the circuit. Hence, the gate driver can be designed for robust operation.
In a digitally controlled slope shaping system, reliable detection of both the voltage and the current slope is required to enable closed-loop control for various power switches independent of system parameters. In most state-of-the-art works, this is realized by monitoring the absolute voltage and current values. Better accuracy at lower DC power loss is achieved by sensing techniques for reliable passive detection, which avoid DC paths from the high-voltage network into the sensing network. Using a high-speed analog-to-digital converter, the whole waveform of the transient derivative can be stored digitally and prepared for a predictive cycle-by-cycle regulation, without requiring high-precision digital differentiation algorithms. To gain an accurate representation of the voltage and current derivative waveforms, system parasitics are investigated and classified in three sections: (1) component parasitics, which are identified by s-parameter measurements and extraction of equivalent circuit models, (2) PCB design issues related to the sensing circuit, and (3) interconnections between adjacent boards.
The contribution of this paper is an optimized sensing network based on the experimental study, supporting fast transition slopes up to 100 V/ns and 1 A/ns and beyond, making the sensing technique attractive for slope shaping of fast-switching devices like modern-generation IGBTs, CoolMOS™ and SiC MOSFETs. Measurements of the optimized dv/dt and di/dt setups are demonstrated for a hard-switched IGBT power stage.
Introducing continuous experimentation in large software-intensive product and service organisations
(2017)
Software development in highly dynamic environments poses high risks to development organizations. One such risk is that the developed software may be of only little or no value to customers, wasting the invested development efforts. Continuous experimentation, as an experiment-driven development approach, may reduce such development risks by iteratively testing product and service assumptions that are critical to the success of the software. Although several experiment-driven development approaches are available, there is little guidance on how to introduce continuous experimentation into an organization. This article presents a multiple-case study that aims at better understanding the process of introducing continuous experimentation into an organization with an already established development process. The results from the study show that companies are open to adopting such an approach and learn throughout the introduction process. Several benefits were obtained, such as reduced development efforts, deeper customer insights, and better support for development decisions. Challenges included complex stakeholder structures, difficulties in defining success criteria, and building experimentation skills. Our findings indicate that organizational factors may limit the benefits of experimentation. Moreover, introducing continuous experimentation requires fundamental changes in how companies operate, and a systematic introduction process can increase the chances of a successful start.
Medical applications are becoming increasingly important in the current development of health care and are therefore a crucial part of the medical industry. The work focuses on the analysis of requirements and the challenges arising from designing mobile medical applications with respect to the user interface. The paper describes the current status of the development of mobile medical apps and illustrates the development of the e-health market. The author explains the requirements, illustrates the hurdles and problems, and refers to the German market, which is similar to the European one, comparing it with the market in the USA.
To assess the quality of a person’s sleep, it is essential to examine the sleep behaviour by identifying the several sleep stages, their durations and the sleep cycles. The established gold-standard procedure for sleep stage scoring is overnight polysomnography (PSG) with the Rechtschaffen and Kales (R-K) method. Unfortunately, conducting PSG is time-consuming and unfamiliar to the subjects and might have an impact on the recorded data. To avoid the disadvantages of PSG, it is important to further investigate low-cost home diagnostic systems. For this purpose it is necessary to find bio-vital parameters suitable for classifying sleep stages without causing physical impairments. Due to the promising results in several publications, we analyse existing methods for sleep stage classification based on the parameters body movement, heartbeat and respiration. Our aim was to find different behaviour patterns in the several sleep stages. Therefore, the average values of 15 whole-night PSG recordings, obtained from the ‘DREAMS Subjects Database’, were analysed with regard to heartbeat, body movement and respiration using 10 different methods.
Sleep quality, and in general behavior in bed, can be detected using a sleep state analysis. These results can help a subject regulate sleep and recognize different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison to the leading standard measuring system, polysomnography (PSG), the system proposed in this project is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides this, they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor, and in a non-invasive way. The sensor used is a low-cost pressure sensor that can be used for commercial purposes. The system was tested in an experiment that recorded the sleep process of a subject. The recordings showed the potential for classification of breathing rate and body movements. Although previous research shows the use of pressure sensors in recognizing posture and breathing, the sensors have mostly been positioned between the mattress and the bedsheet. This project, however, shows an innovative way to position the sensors under the mattress.
To evaluate the quality of sleep, it is important to determine how much time was spent in each sleep stage during the night. The gold standard in this domain is an overnight polysomnography (PSG). But the recording of the necessary electrophysiological signals is extensive and complex, and the environment of the sleep laboratory, which is unfamiliar to the patient, might lead to distorted results. In this paper, a sleep stage detection algorithm is proposed that uses only the heart rate signal, derived from the electrocardiogram (ECG), as a discriminator. This would make it possible for sleep analysis to be performed at home, saving a lot of effort and money. From the heart rate, using the fast Fourier transformation (FFT), three parameters were calculated in order to distinguish between the different sleep stages. ECG data along with a hypnogram scored by professionals was used from the PhysioNet database, making it easy to compare the results. With an agreement rate of 41.3%, this approach is a good foundation for future research.
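The FFT step described here can be sketched as follows. The three frequency bands below are the common HRV bands (VLF/LF/HF) and are an assumption, since the abstract does not specify which three parameters the paper actually computes:

```python
import numpy as np

def band_powers(hr, fs=4.0):
    """Given an evenly resampled heart-rate signal hr (samples at fs Hz),
    compute spectral power in three bands often used in HRV analysis.
    Band limits are the conventional VLF/LF/HF bounds, an assumption
    here rather than the paper's exact parameters."""
    hr = np.asarray(hr, dtype=float)
    hr = hr - hr.mean()                      # remove the DC component
    spec = np.abs(np.fft.rfft(hr)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(hr), d=1.0 / fs)
    def power(lo, hi):
        return spec[(freqs >= lo) & (freqs < hi)].sum()
    return power(0.003, 0.04), power(0.04, 0.15), power(0.15, 0.4)

# synthetic heart rate with a strong 0.25 Hz (respiratory) modulation
t = np.arange(0, 300, 0.25)                  # 5 minutes sampled at 4 Hz
hr = 60 + 3 * np.sin(2 * np.pi * 0.25 * t)
vlf, lf, hf = band_powers(hr)
print(hf > lf)  # True: the respiratory peak falls into the HF band
```

Such band powers vary systematically across sleep stages (e.g. respiratory sinus arrhythmia strengthens in deep sleep), which is what makes a heart-rate-only discriminator plausible.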
LDMOS transistors in integrated power technologies are often subject to thermo-mechanical stress, which degrades the on-chip metallization and eventually leads to a short. This paper investigates small sense lines embedded in the LDMOS metallization. It will be shown that their resistance depends strongly on the stress cycle number. Thus, they can be used as aging sensors and predict impending failures. Different test structures have been investigated to identify promising layout configurations. Such sensors are key components for resilient systems that adaptively reduce stress to allow aggressive LDMOS scaling without increasing the risk of failure.
A gate driver approach is presented for the reduction of turn-on losses in hard-switching applications. A significant turn-on loss reduction of up to 55% has been observed for SiC MOSFETs. The gate driver approach uses a transformer which couples energy from the power path back into the gate path during switching events, providing increased gate driver current and thereby faster switching speed.
The gate driver approach was tested on a boost converter running at switching frequencies up to 300 kHz. With an input voltage of 300 V and an output voltage of 600 V, it was possible to reduce the converter losses by 8% at full load. Moreover, the output power range could be extended by 23% (from 2.75 kW to 3.4 kW) due to the reduction of the turn-on losses.
IT platforms as the foundation of digitized processes and products are vital in a digital economy. However, many companies’ platforms are liabilities, not strategic assets because of their complexity. Consequently, companies initiate IT complexity reduction programs. But these technology-centric programs at best provide temporary relief. Soon after, companies’ platforms become just as complex as before. Based on four case studies, we identify three non-technical drivers of platform complexity: (1) Lacking awareness of consequences business decisions have on platform complexity, (2) Lacking motivation to avoid platform complexity, (3) Lacking authority to protect platforms from complexity. We propose measures to address these drivers that can help achieve more sustainable impact on platform complexity: (1) Removing information asymmetries between those creating complexity and those dealing with complexity, (2) Redefining incentives to include long-term effects on platform complexity, (3) Redressing power imbalances between those who create complexity and those who have to manage it.
Electric freight vehicles have the potential to mitigate local urban road freight transport emissions, but their numbers are still insignificant. Logistics companies often consider electric vehicles as too costly compared to vehicles powered by combustion engines. Research within the body of the current literature suggests that increasing the driven mileage can enhance the competitiveness of electric freight vehicles. In this paper we develop a numeric simulation approach to analyze the cost-optimal balance between a high utilization of medium-duty electric vehicles – which often have low operational costs – and the common requirement that their batteries will need expensive replacements. Our work relies on empirical findings of the real-world energy consumption from a large German field test with medium-duty electric vehicles. Our results suggest that increasing the range to the technical maximum by intermediate (quick) charging and multi-shift usage is not the most cost-efficient strategy in every case. A low daily mileage is more cost-efficient at high energy prices or consumptions, relative to diesel prices or consumptions, or if the battery is not safeguarded by a long warranty. In practical applications our model may help companies to choose the most suitable electric vehicle for the application purpose or the optimal trip length from a given set of options. For policymakers, our analysis provides insights on the relevant parameters that may either reduce the cost gap at lower daily mileages, or increase the utilization of medium-duty electric vehicles, in order to abate the negative impact of urban road freight transport on the environment.
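The battery-replacement trade-off can be made concrete with a toy total-cost comparison. All prices, consumptions, and the 120,000 km pack life below are invented placeholders, not the paper's empirical figures:

```python
def cost_per_km(daily_km, ev=True, warranty=False, years=8, days=250):
    """Cost per km over the vehicle life. Invented figures: the EV costs
    60,000 with 0.08/km energy cost; each battery pack lasts 120,000 km
    and a replacement costs 20,000 unless covered by warranty. The
    diesel reference vehicle costs 35,000 with 0.20/km fuel cost."""
    km = daily_km * days * years
    if ev:
        packs = 0 if warranty else max(0, (km - 1) // 120_000)
        return (60_000 + packs * 20_000 + km * 0.08) / km
    return (35_000 + km * 0.20) / km

# multi-shift usage (250 km/day): the EV undercuts diesel only if the
# battery is safeguarded by a warranty covering the replacement packs
gap_high_w = cost_per_km(250, warranty=True) - cost_per_km(250, ev=False)
gap_high_n = cost_per_km(250, warranty=False) - cost_per_km(250, ev=False)
print(round(gap_high_w, 3), round(gap_high_n, 3))  # negative, then positive
```

With these placeholder numbers, high utilization flips from advantageous to disadvantageous once replacement packs must be bought, mirroring the finding that maximizing mileage is not the most cost-efficient strategy in every case.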
In this work we investigate the behavior of MIS- and Schottky-gate AlGaN/GaN HEMTs under high-power pulse stress. A special setup capable of applying pulses of constant power is used to evaluate the electro-thermal response at different operating points. For both types of devices, the time to failure was found to decrease with increasing drain-source voltage. Overall, the Schottky-gate device displays a higher pulse robustness. The pulse withstand time of the MIS-gate device is limited by the occurrence of a thermal instability at approximately 240°C, while the Schottky-gate device displays a rapid increase of the gate leakage current prior to failure. The mechanism responsible for this gate current was further investigated by static and transient temperature measurements, which yielded activation energies of 0.6 eV and 0.84 eV.
This paper studies whether a monetary union can be managed solely by a rule-based approach. The Five Presidents’ Report of the European Union rejects this idea and suggests a centralisation of powers. We analyse the philosophy of policy rules from the vantage point of the German economic school of thought. There is evidence that a monetary union consisting of sovereign states is well organised by rules, together with the principle of subsidiarity. The root cause of the euro crisis is rather the weak enforcement of rules, compounded by structural problems. Therefore, we suggest a genuine rule-based paradigm for a stable future of the Economic and Monetary Union.
Under update-intensive workloads (TPC, LinkBench), small updates dominate the write behavior; e.g., 70% of all updates change fewer than 10 bytes across all TPC OLTP workloads. These are typically performed as in-place updates and result in random writes at page granularity, causing major write overhead on Flash storage, a write amplification of several hundred times, and reduced device longevity.
In this paper we propose an approach that transforms those small in-place updates into small update deltas that are appended to the original page. We utilize the commonly ignored fact that modern Flash memories (SLC, MLC, 3D NAND) can handle appends to already programmed physical pages by using various low-level techniques such as ISPP to avoid expensive erases and page migrations. Furthermore, we extend the traditional NSM page layout with a delta-record area that can absorb those small updates. We propose a scheme to control the write behavior as well as the space allocation and sizing of database pages.
The proposed approach has been implemented under Shore-MT and evaluated on real Flash hardware (OpenSSD) and a Flash emulator. Compared to In-Page Logging it performs up to 62% fewer reads and writes and up to 74% fewer erases on a range of workloads. The experimental evaluation indicates: (i) a significant reduction of erase operations, resulting in twice the longevity of Flash devices under update-intensive workloads; (ii) 15%-60% lower read/write I/O latencies; (iii) up to 45% higher transactional throughput; (iv) a 2x to 3x reduction in overall write amplification.
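The core idea of absorbing small updates as appended deltas can be sketched roughly as follows; this is a simplified illustration, not the authors' Shore-MT implementation, and the page structure and delta capacity are assumed.

```python
# Sketch: a page reserves a delta area; small updates are appended as
# (slot, offset, bytes) deltas instead of rewriting the whole page, and
# a read replays the deltas onto the base records.

class DeltaPage:
    def __init__(self, records, delta_capacity=4):
        self.records = [bytearray(r) for r in records]  # base NSM records
        self.deltas = []                                # appended update deltas
        self.delta_capacity = delta_capacity
        self.full_writes = 0                            # page-granular rewrites

    def update(self, slot, offset, data):
        """Absorb a small update as a delta; rewrite the page only when
        the delta area overflows."""
        if len(self.deltas) >= self.delta_capacity:
            self._apply_all()
            self.full_writes += 1          # models an out-of-place page write
        self.deltas.append((slot, offset, bytes(data)))

    def read(self, slot):
        rec = bytearray(self.records[slot])
        for s, off, data in self.deltas:   # replay deltas for this slot
            if s == slot:
                rec[off:off + len(data)] = data
        return bytes(rec)

    def _apply_all(self):
        for s, off, data in self.deltas:
            self.records[s][off:off + len(data)] = data
        self.deltas.clear()

page = DeltaPage([b"balance=0100", b"balance=0200"])
page.update(0, 8, b"0150")     # a small in-place update becomes a delta
print(page.read(0))
print(page.full_writes)        # no full page rewrite was needed
```

On real Flash the appended delta would be programmed into the same physical page without an erase, which is what avoids the invalidation and garbage-collection overhead quantified above.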
In the present paper we demonstrate a novel technique applying the recently proposed approach of In-Place Appends (IPA) – overwrites on Flash without a prior erase operation. IPA can be applied selectively: only to DB objects that have frequent and relatively small updates. To do so we couple IPA with the concept of NoFTL regions, allowing the DBA to place update-intensive DB objects into special IPA-enabled regions. The decision about the region configuration can be (semi-)automated by an advisor analyzing DB log files in the background.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware. During the demonstration we allow the users to interact with the system and gain hands-on experience under different demonstration scenarios.
In the present paper we demonstrate a novel approach to handling small updates on Flash called In-Place Appends (IPA). It allows the DBMS to revisit the traditional write behavior on Flash. Instead of writing whole database pages upon an update in an out-of-place manner on Flash, we transform those small updates into update deltas and append them to a reserved area on the very same physical Flash page. In doing so we utilize the commonly ignored fact that under certain conditions Flash memories can support in-place updates to Flash pages without a preceding erase operation.
The approach was implemented under Shore-MT and evaluated on real hardware. Under standard update-intensive workloads we observed 67% fewer page invalidations, resulting in 80% lower garbage collection overhead, which yields a 45% increase in transactional throughput while doubling Flash longevity at the same time. IPA outperforms In-Page Logging (IPL) by more than 50%.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware – the OpenSSD Flash research platform. During the demonstration we allow the users to interact with the system and gain hands-on experience of its performance under different demonstration scenarios. These involve various workloads such as TPC-B, TPC-C, and TATP.
The purpose of this paper is to study the impact of transparency on the political budget cycle (PBC) over time and across countries. So far, the literature on electoral cycles finds evidence that cycles depend on the stage of an economy. However, the author shows – for the first time – a reliance of the budget cycle on transparency. The author uses a new data set consisting of 99 developing and 34 Organization for Economic Cooperation and Development countries. First, the author develops a model and demonstrates that transparency mitigates the political cycles. Second, the author confirms the proposition through the econometric assessment. The author uses time series data from 1970 to 2014 and discovers smaller cycles in countries with higher transparency, especially G8 countries.
This paper describes a new method for condition monitoring of a roller chain. In contrast to conventional methods, no additional accelerometers are used to measure and interpret frequency spectra; instead, the chain condition is evaluated using an easy-to-interpret similarity measure based on correlation functions of the driving motor torque. An additional clustering of current data and reference measurements yields an easy-to-understand representation of the chain condition.
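The correlation-based similarity idea can be illustrated with a minimal sketch (the signals and amplitudes are assumed, not the paper's data): a torque measurement is compared against a healthy reference via a normalized correlation, which drops when the signal contains wear-induced components.

```python
import math

def normalized_correlation(x, y):
    """Pearson-style similarity in [-1, 1] between equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical torque signals: a healthy chain tracks the reference cycle
# closely; a worn chain adds impact-like components at another frequency.
reference = [math.sin(0.1 * k) for k in range(200)]
healthy = [s + 0.01 * ((2 * k) % 13 - 6) for k, s in enumerate(reference)]
worn = [s + math.sin(0.9 * k) for k, s in enumerate(reference)]

print(round(normalized_correlation(reference, healthy), 3))
print(round(normalized_correlation(reference, worn), 3))
```

In the paper's setting such similarity values, together with a clustering of current and reference measurements, give the easy-to-read picture of the chain condition.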
In this paper we describe the design and development process of an electromagnetic picker for rivets. These rivets are used in a production process of leather or textile design objects like riveted waist belts or purses. The picker is designed such that it replaces conventional mechanical pickers thus avoiding mechanical wear problems and increasing the process quality. The paper illustrates the challenges in the design process of this mechatronic system. The design process was based on both simulation and experiments leading to a prototype that satisfies the requirements.
In retail environments, consumers commonly evaluate products while standing on some type of flooring and concurrently being exposed to music; however, no study has examined the interaction of these two atmospheric cues. To bridge this gap, this research examines whether retailers can benefit from creating retail environments in which the atmospheric cues of flooring and music are multisensorially congruent rather than incongruent. The results of an experiment in a real retail store reveal positive effects of multisensory congruent retail environments (e.g., soft music combined with soft flooring) on product evaluations. This study provides a new process explanation with consumers’ purchase-related self-confidence mediating these effects. Specifically, consumers in congruent rather than incongruent retail environments experience more purchase-related self-confidence, which in turn leads to more favorable product evaluations. Furthermore, this study shows that consumers with a low rather than a high preference for haptic information are influenced more by multisensory atmospheric congruence when evaluating a product haptically.
As the market penetration of alternative fuel vehicles is still uncertain, defining green design cues for their design is of specific relevance to target environmentally conscious customers. This paper reviews the existing literature, aiming to summarize the market penetration scenarios of alternative fuel vehicles over the next years, consumer demand for sustainable materials, and present methodologies for representing characteristics of eco-friendly mobility in the interior of alternative fuel vehicles. In particular, current attempts to correlate materials with green design cues are explored. Finally, projections for the future of the field are suggested, posing intriguing research questions to further unify the field of environmentally conscious design with the domain of product personality.
Digitization fosters the development of IT environments with many rather small structures, like the Internet of Things (IoT), microservices, or mobility systems. They are needed to support flexible and agile digitized products and services. The goal is to create service-oriented enterprise architectures (EA) that are self-optimizing and resilient. The present research paper investigates methods for decision-making concerning digitization architectures for the Internet of Things and microservices. They are based on evolving enterprise architecture reference models and state-of-the-art elements of architectural engineering for microgranular systems. Decision analytics in this field becomes increasingly complex, and decision support, particularly for the development and evolution of sustainable enterprise architectures, is sorely needed. The challenging decision processes can be supported in a more flexible and intuitive way by an architecture management cockpit.
Digitization transforms business process models and processes in many enterprises. However, many of them need guidance on how digitization impacts the design of their information systems. Therefore, this paper investigates the influence of digitization on information system design. We apply a two-phase research method combining a literature review and an exploratory case study. The case study took place at the IT service provider of a large insurance enterprise. The study’s results suggest that a number of areas of information system design are affected, such as architecture, processes, data, and services.
In a time of digital transformation, the ability to quickly and efficiently adapt software systems to changed business requirements becomes more important than ever. Measuring the maintainability of software is therefore crucial for the long-term management of such products. With service-based systems (SBSs) being a very important form of enterprise software, we present a holistic overview of maintainability metrics specifically designed for this type of system, since traditional metrics – e.g. object-oriented ones – are not fully applicable in this case. The metric candidates selected from the literature review were mapped to four dominant design properties: size, complexity, coupling, and cohesion. Microservice-based systems (μSBSs) emerge as an agile and fine-grained variant of SBSs. While the majority of the identified metrics are also applicable to this specialization (with some limitations), the large number of services in combination with technological heterogeneity and decentralization of control significantly impacts automatic metric collection in such systems. Our research therefore suggests that specialized tool support is required to guarantee the practical applicability of the presented metrics to μSBSs.
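As a rough illustration of the kind of coupling metrics such an overview covers (the call graph and metric definitions below are assumed examples, not the paper's catalogue), incoming and outgoing coupling can be counted from a service call graph:

```python
# Hypothetical call graph of a small service-based system:
# caller -> set of directly called services.
calls = {
    "orders": {"billing", "inventory", "customers"},
    "billing": {"customers"},
    "inventory": set(),
    "customers": set(),
}

def dependence(service):
    """Number of distinct services this service calls (outgoing coupling)."""
    return len(calls.get(service, set()))

def importance(service):
    """Number of distinct services calling this service (incoming coupling)."""
    return sum(1 for callees in calls.values() if service in callees)

for s in sorted(calls):
    print(s, importance(s), dependence(s))
```

With many services, heterogeneous technology stacks, and decentralized control, even collecting such a call graph automatically becomes the hard part, which is the tooling gap the abstract points to.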
Towards a practical maintainability quality model for service- and microservice-based systems
(2017)
Although current literature mentions a lot of different metrics related to the maintainability of service-based systems (SBSs), there is no comprehensive quality model (QM) with automatic evaluation and a practical focus. To fill this gap, we propose a Maintainability Model for Services (MM4S), a layered maintainability QM consisting of service properties (SPs) related to automatically collectable service metrics (SMs). This research artifact, created within an ongoing Design Science Research (DSR) project, is the first version ready for detailed evaluation and critical feedback. The goal of MM4S is to serve as a simple and practical tool for basic maintainability estimation and control in the context of SBSs and their specialization, microservice-based systems (μSBSs).
Structural and functional thermosetting composite materials are exposed to different kinds of stress which can damage the polymer matrix, thus impairing the intended properties. Therefore, self-healing materials have attracted the attention of many research groups over the last decades in order to provide satisfactory material properties and outstanding product durability. The present article provides a critical overview of promising self-healing strategies for crosslinked thermoset polymers. It is organized in two parts: an overview about the different approaches to self-healing is given in the first part, whereas the second part focuses on the specific chemistries of the main strategies to achieve self-healing through crosslinking. It is attempted to provide a comprehensive discussion of different approaches which are described in the scientific literature. By comparison of the advantages and disadvantages, the authors wish to provide helpful insights on the assessment of the potential to transfer the extensive present knowledge about self-healing materials and methods to surface varnishing thermoset coatings.
This article provides a general overview of the most promising candidates among bio-based materials and deals with the most important issues concerning their incorporation into PF resins. Due to the abundance of lignin on Earth, much knowledge about lignin-based materials has already been gained, and uses of lignin in PF resins have been studied for many decades. Other natural polyphenols that are less frequently considered for impregnation are covered as well, as they also possess some potential for PF substitution.
High-quality decorative laminate panels typically consist of two major types of components: the surface layers, comprising décor and overlay papers that are impregnated with melamine-based resins, and the core, which is made of stacks of kraft papers impregnated with phenolic (PF) resin. The PF-impregnated layers impart superior hydrolytic stability, mechanical strength, and fire resistance to the composite. The manufacturing involves a complex interplay between resin, paper, and impregnation/drying processes. Changes in the input variables cause significant alterations in the process characteristics, and adaptations of the materials used and of specific process conditions may, in turn, be required. This review summarizes the main variables influencing both the processability and the technological properties of phenolic resin impregnated papers and laminates produced therefrom. It aims at presenting the main influences of the involved components (resin and paper), how these may be controlled during the respective process steps (resin preparation and paper production), how they influence the impregnation and lamination conditions, how they affect specific aspects of paper and laminate performance, and how they interact with each other (synergies).
Context: Development of software-intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want through direct customer feedback and observation of usage behaviour. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments; advanced instrumentation of software to collect, analyse, and store relevant data; and the integration of experiment results in both the product development cycle and the software development process.
Software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. Educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. Industry training has similar requirements of relevance as companies seek to keep their workforce up to date with technological advances. Real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. A way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. In this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. We give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers to include empirical studies in software engineering courses. Furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to the awareness of the advantages of applying software engineering principles. Having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life.
In vitro composed vascularized adipose tissue is and will continue to be in great demand, e.g. for the treatment of extensive high-grade burns or the replacement of tissue after tumor removal. To date, the lack of adequate culture conditions, mainly a suitable culture medium, has slowed further progress. In our study, we evaluated the influence of epidermal growth factor (EGF) and hydrocortisone (HC), often supplemented in endothelial cell (EC) specific media, on the co-culture of adipogenically differentiated adipose-derived stem cells (ASCs) and microvascular endothelial cells (mvECs). In ASCs, EGF and HC are thought to inhibit adipogenic differentiation and to have lipolytic activity. Our results showed that in indirect co-culture for 14 days, adipogenically differentiated ASCs further incorporated lipids and partly gained a univacuolar morphology when kept in media with low levels of EGF and HC. In media with high EGF and HC levels, cells did not incorporate further lipids; on the contrary, cells without lipid droplets appeared. Glycerol release, measured to assess lipolysis, also increased with elevated amounts of EGF and HC in the culture medium. Adipogenically differentiated ASCs were able to release leptin in all setups. MvECs were functional and expressed the cell-specific markers CD31 and von Willebrand factor (vWF) independent of the EGF and HC content, as long as further EC-specific factors were present. Taken together, our study demonstrates that adipogenically differentiated ASCs can be successfully co-cultured with mvECs in a culture medium containing low or no amounts of EGF and HC, as long as further endothelial cell and adipocyte specific factors are available.
Curriculum design for the German language class in the double-degree programme business engineering
(2017)
This paper aims to give an overview of how German is taught as a foreign language to students enrolled in the Bachelor of Business Engineering, a double-degree programme offered at Universiti Malaysia Pahang. The double-degree students have the opportunity to complete their first two years of study in Malaysia and their last two years in Germany. Taking the TestDaF examination is compulsory for double-degree students. Hence, the German language curriculum has been meticulously planned to ensure the students become competent in the language. As such, the settings of the language class are discussed thoroughly in this paper. Additionally, it discusses the challenges faced in teaching German as a foreign language. This paper ends with some suggestions for improvement.
Decreasing batch sizes in production in line with Industrie 4.0 will lead to tremendous changes in the control of logistic processes in future production systems. Intelligent bins are crucial enablers for establishing decentrally controlled material flow systems in value chain networks as well as at the intralogistics level. These intelligent bins have to be integrated into an overall decentralized monitoring and control approach and have to interact with humans and other entities just like other cyber-physical systems (CPS) within the cyber-physical production system (CPPS). To realize a decentralized material supply following the overall aim of a decentralized control of all production and logistics processes, an intelligent bin system is currently being developed at the ESB Logistics Learning Factory. This intelligent bin system will be integrated into the self-developed, cloud-based and event-oriented SES system (the so-called “Self Execution System”), which goes beyond the common functionalities and capabilities of traditional manufacturing execution systems (MES).
To ensure a holistic integration of the intelligent bin for different material types into the SES framework, the required hardware and software components of the decentrally controlled bin system will be split into a common and an adaptable component. The common component represents the localization and network layer, which is identical for every bin, whereas the flexible component will be customizable to different requirements, such as the specific characteristics of the parts.
Close and safe interaction of humans and robots in joint production environments is technically feasible; however, it should not be implemented as an end in itself but to deliver improvement in one of a production system’s target dimensions. Firstly, this paper shows that an essential challenge for system integrators during the design of HRC applications is to identify a suitable distribution of available tasks between a robotic and a human resource. Secondly, it proposes an approach to determine task allocation by considering the actual capabilities of both human and robot in order to improve work quality. It matches those capabilities with the given requirements of a certain task in order to identify the maximum congruence as the basis for the allocation decision. The approach is based on a study and subsequent generic description of human and robotic capabilities, as well as on a heuristic procedure that facilitates the decision-making process.
Technologies for mapping the “digital twin” have been under development for approximately 20 years. Nowadays, increasingly intelligent, individualized products encourage companies to respond innovatively to customer requirements and to handle the rising number of product variants quickly.
An integrated engineering network spanning the entire value chain is operated to intelligently connect various company divisions and to generate a business ecosystem for products, services, and communities. This establishes the conditions for the digital twin, in which the digital world can be fed into the real world and the real world back into the digital one, so that such intelligent products with their rising number of variants can be handled.
The term digital twin can be described as a digital copy of a real factory, machine, worker, etc., that is created and can be independently expanded, automatically updated, and globally available in real time. Every real product and production site is permanently accompanied by a digital twin. First prototypes of such digital twins already exist in the ESB Logistics Learning Factory on cloud- and app-based software that builds on a dynamic, multidimensional data and information model. A standardized language for the robot control systems, via software agents and positioning systems, has to be integrated. Continuity between the real factory and the digital factory, as an economical means of ensuring the continuous actuality of digital models, is regarded as the basis of changeability.
For indoor localization, sensor combinations should be used that, in addition to the hardware, already contain the software required for sensor data fusion. Processing systems, live scenario simulations, and digital shop floor management result in a mandatory procedural combination. Essential to the digital twin is the ability to consistently provide all subsystems with the latest state of all required information, methods, and algorithms.
This paper presents a novel multi-modal CNN architecture that exploits complementary input cues in addition to sole color information. The joint model implements a mid-level fusion that allows the network to exploit cross-modal interdependencies already on a medium feature level. The benefit of the presented architecture is shown for the RGB-D image understanding task. So far, state-of-the-art RGB-D CNNs have used network weights trained on color data. In contrast, a superior initialization scheme is proposed to pre-train the depth branch of the multi-modal CNN independently. In an end-to-end training, the network parameters are optimized jointly using the challenging Cityscapes dataset. Thorough experiments show the effectiveness of the proposed model. Both the RGB GoogLeNet and further RGB-D baselines are outperformed by a significant margin on two different tasks: semantic segmentation and object detection. For the latter, this paper shows how to extract object-level ground truth from the instance-level annotations in Cityscapes in order to train a powerful object detector.
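The placement of a mid-level fusion can be sketched structurally (a framework-free stand-in with made-up layers, not the paper's CNN): each modality passes through its own lower layers, the features are concatenated channel-wise at a medium level, and shared upper layers operate on the fused representation.

```python
def lower_layers(x, scale):
    """Stand-in for a modality-specific convolutional branch."""
    return [scale * v for v in x]

def upper_layers(features):
    """Stand-in for the shared layers after the fusion point."""
    return sum(features)

def midlevel_fusion(rgb, depth):
    rgb_feats = lower_layers(rgb, 2.0)      # color branch
    depth_feats = lower_layers(depth, 3.0)  # depth branch, pre-trained separately
    fused = rgb_feats + depth_feats         # channel-wise concatenation mid-network
    return upper_layers(fused)              # joint reasoning on fused features

print(midlevel_fusion([1.0, 2.0], [0.5, 0.5]))
```

The point of fusing mid-network rather than at the input (early fusion) or at the outputs (late fusion) is that the shared upper layers can still learn cross-modal interactions while each branch keeps its modality-specific lower-level filters.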
Layout generators, commonly denoted as PCells (parameterized cells), play an important role in the layout design of analog ICs (integrated circuits). PCells can automatically create parts of a layout, whose properties are controlled by the PCell parameters. Any layout, whether hand-crafted or automatically generated, has to be verified against design rules using a DRC (design rule check) in order to assure proper functionality and producibility. Due to the growing complexity of today’s PCells it would be beneficial if a PCell itself could be ensured to produce DRC clean layouts for any allowed parameter values, i.e. a formal verification of the PCell’s code rather than checking all possible instances of the PCell. In this paper we demonstrate the feasibility of such a formal PCell verification for a simple NMOS transistor PCell. The set from which the parameter values can be chosen was found during the verification process.
We present a new methodology for the automatic selection and sizing of analog circuits, demonstrated on the OTA circuit class. The methodology consists of two steps: a generic topology selection method supported by a “part-sizing” process, and a subsequent final sizing. The circuit topologies provided by a reuse library are classified in a topology tree. The appropriate topology is selected by traversing the topology tree starting at the root node. The decision at each node is gained from the result of the part-sizing, which is in fact a node-specific set of simulations. The final sizing is a simulation-based optimization. We significantly reduce the overall simulation effort compared to a classical simulation-based optimization by combining the topology selection with the part-sizing process in the selection loop. The result is an interactive, user-friendly system which eases the analog designer’s work significantly when compared to typical industrial practice in analog circuit design. The topology selection method and sizing process are implemented as a tool in a typical analog design environment. The design productivity improvement achievable by our method is shown by a comparison to other design automation approaches.
A new method for the analysis of movement dependent parasitics in full custom designed MEMS sensors
(2017)
Due to the lack of sophisticated microelectromechanical systems (MEMS) component libraries, highly optimized MEMS sensors are currently designed using a polygon-driven design flow. The strength of this design flow is the accurate mechanical simulation of the polygons by finite element (FE) modal analysis. The result of the FE modal analysis is included in the system model together with the data of the (mechanical) static electrostatic analysis. However, the system model lacks the dynamic parasitic electrostatic effects arising from the electric coupling between the wiring and the moving structures. In order to include these effects in the system model, we present a method which enables quasi-dynamic parasitic extraction with respect to in-plane movements of the sensor structures. The method is embedded in the polygon-driven MEMS design flow using standard EDA tools. In order to take the influences of the fabrication process into account, such as etching process variations, the method combines the FE modal analysis and the fabrication process simulation data. This enables the analysis of dynamically changing electrostatic parasitic effects with respect to movements of the mechanical structures. Additionally, the result can be included in the system model, allowing simulation of the positive feedback of the electrostatic parasitic effects onto the mechanical structures.
This paper introduces a novel placement methodology for a common-centroid (CC) pattern generator. It can be applied to various integrated circuit (IC) elements, such as transistors, capacitors, diodes, and resistors. The proposed method consists of a constructive algorithm which generates an initial, close-to-optimum solution, and an iterative algorithm which is used subsequently if the output of the constructive algorithm does not satisfy the desired criteria. The outcome of this work is an automatic CC placement algorithm for IC element arrays. Additionally, the paper presents a method for evaluating CC arrangements, which allows for assessing the quality of an array and comparing different placement methods.
The diversity of energy prosumer types makes it difficult to create appropriate incentive mechanisms that satisfy both prosumers and energy system operators alike. Meanwhile, European energy suppliers buy guarantees of origin (GoO), which allow them to sell green energy at premium prices while in reality delivering grey energy to their customers. Blockchain technology has proven itself to be a robust payment system in which users transact money without the involvement of a third party. Blockchain tokens can be used to represent a unit of energy and, just like GoOs, be submitted to the market. This paper focuses on simulating a marketplace, based on the Ethereum blockchain and smart contracts, where prosumers can sell tokenized GoOs to consumers willing to subsidize renewable energy producers. Such markets bypass energy providers by allowing consumers to obtain tokenized GoOs directly from the producers, who in turn benefit directly from the earnings. Two market strategies in which tokens are sold as GoOs have been simulated. In the Fix Price Strategy, prosumers sell their tokens at the average GoO price of 2014. The Variable Price Strategy focuses on selling tokens at a price range defined by the difference between grey and green energy prices. The study finds that the Ethereum blockchain is robust enough to function as a platform for tokenized GoO trading. Comparison of the simulation results indicates that prosumers earn significantly more money by following the Variable Price Strategy.
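The two pricing rules can be sketched as simple functions. The price figures and function names below are placeholder assumptions for illustration, not values from the study.

```python
# Sketch of the two simulated pricing strategies for tokenized GoOs.
# AVG_GOO_PRICE_2014 is a placeholder figure, not the study's actual value.

AVG_GOO_PRICE_2014 = 0.35  # EUR per token (illustrative)

def fix_price(n_tokens):
    """Fix Price Strategy: every token sells at the 2014 average GoO price."""
    return n_tokens * AVG_GOO_PRICE_2014

def variable_price(n_tokens, grey_price, green_price, fraction=0.5):
    """Variable Price Strategy: tokens sell within the spread between the
    grey and green energy prices; `fraction` picks a point in that range."""
    spread = green_price - grey_price
    return n_tokens * (grey_price + fraction * spread)

# With a positive grey-green spread, the variable strategy tracks the
# premium consumers pay for green energy rather than a fixed historic price.
```

The simulated finding that the Variable Price Strategy earns more corresponds to the spread-based price exceeding the fixed historic average in the modeled market conditions.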
The success of an autonomous robotic system is influenced by several interdependent factors that are not easily identifiable. This paper lays the foundation of a new integrated approach for examining all of these parameters in depth and understanding their contribution to success. After introducing the problem, two cutting-edge autonomous systems for the unloading of containers are presented. Then the STIC analysis, a recently developed method for modelling and interpreting all the parameters, is introduced. The preliminary results of applying this methodology to a first case study, based on one of the two systems available to the authors, are presented briefly. Finally, future research is recommended in order to prove that this methodology is the only way to efficiently and effectively mitigate the risk that stops potential users from investing in autonomous systems in the logistics sector.
In this paper we build on our research in data management on native Flash storage. In particular, we demonstrate the advantages of intelligent data placement strategies. To effectively manage physical Flash space and organize the data on it, we utilize novel storage structures such as regions and groups. These are coupled to common DBMS logical structures and thus require no extra overhead for the DBA. The experimental results indicate an improvement of up to 2x, which doubles the longevity of Flash SSDs. During the demonstration the audience can experience the advantages of the proposed approach on real Flash hardware.
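The coupling of regions to logical DBMS structures can be sketched as a simple routing layer: each table or index gets its own region, so pages with similar update frequency share erase units. The class and method names below are illustrative assumptions, not the authors' actual interfaces.

```python
# Sketch: routing page writes to a Flash region per logical DBMS object,
# so hot (frequently updated) and cold data never share a region.

class FlashRegion:
    def __init__(self, name):
        self.name = name
        self.pages = []

    def place(self, page_id):
        self.pages.append(page_id)

class Placement:
    """Route each page write to the region of its owning logical object."""
    def __init__(self):
        self.regions = {}

    def region_for(self, logical_object):
        # one region per table/index; created lazily, no DBA action needed
        return self.regions.setdefault(logical_object, FlashRegion(logical_object))

    def write(self, logical_object, page_id):
        self.region_for(logical_object).place(page_id)

p = Placement()
p.write("orders", 1)       # hot, frequently updated table
p.write("orders_idx", 7)   # its index lands in a separate region
```

Because the mapping is derived from structures the DBMS already maintains, the placement decision is transparent to the administrator, which is the "no extra overhead" point of the abstract.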
Real estate markets are known to fluctuate. The real estate market in Stuttgart, Germany, has been booming for more than a decade: square-meter prices have hit record levels, and real estate agents claim that market prices will continue to increase. In this paper, we test this market understanding by developing and analyzing a system dynamics model that depicts the Stuttgart real estate market. Simulating the model explains oscillating behavior arising from significant time delays and endogenous feedback structures – and not necessarily oscillating interest rates, as market experts assume. Scenarios provide insights into the system's behavior reacting to changes exogenous to the model. The first scenario tests the market development under increasing interest rates. The other scenario deals with possible effects on the real estate market if the regional automotive economy suffers from intense competition with new market players entering with alternative fuel vehicles and new technologies. With a policy run we test market structure changes to eliminate cyclical effects. The paper confirms that the business cycle in the Stuttgart real estate market arises from within the system's underlying structure, thus emphasizing the importance of understanding feedback structures.
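The mechanism the abstract describes – cycles from time delays and feedback, not from exogenous interest rates – can be illustrated with a minimal stock-and-flow sketch. All parameters below are illustrative assumptions, not values from the Stuttgart model.

```python
# Sketch: a delayed supply response to price in a housing market.
# High prices trigger construction starts, but completions lag behind by a
# construction delay, so supply can overshoot demand and prices swing.

def simulate(steps=200, dt=0.25):
    stock = 80.0        # housing stock; demand normalized to 100
    pipeline = 0.0      # construction under way (the time delay)
    build_delay = 8.0   # periods from construction start to completion
    prices = []
    for _ in range(steps):
        price = 100.0 / stock * 100.0           # scarcity pushes prices up
        starts = max(0.0, 0.5 * (price - 100))  # developers react to price
        completions = pipeline / build_delay    # delayed supply arrives
        pipeline += (starts - completions) * dt
        stock += (completions - 0.2) * dt       # 0.2 = demolition/demand growth
        prices.append(price)
    return prices
```

Whether the resulting cycle is damped or sustained depends on the delay and reaction parameters; the point of the sketch is only that oscillation can emerge endogenously, with no fluctuating external input.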
Strategic alliances have become important strategic options for firms to achieve competitive advantage. Yet, there are many examples of alliance failures. Scholars have studied this phenomenon and identified many reasons for alliance failure, including lack of trust between the partnering firms. Paradoxically, the concept of trust is still not fully understood, specifically how and under what conditions trust comes to break down within the broader process of alliance building. We synthesize a process model that describes the "alliance capability", including trust, openness, partner contributions, and relational rents. We then translate this framework into a formal simulation model and analyze it thoroughly. In analyzing trust dynamics we identify and explore a tipping boundary separating a regime of alliance failures from one of successes. We apply our core findings to openness strategies – decisions about how much knowledge to share with partners. Our analyses reveal that strategies informed by a static mental model of trust, contributions, and openness undervalue openness. Further, too little openness risks early failure due to being trapped in a vicious cycle of trust depletion.
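The tipping boundary can be illustrated with a toy reinforcing loop: trust grows when openness-scaled contributions outweigh decay, and depletes otherwise. The equations and coefficients below are illustrative assumptions, not the paper's actual model.

```python
# Toy sketch of a trust tipping boundary in an alliance: openness scales
# partner contributions, contributions build trust, and trust decays
# without payoff. Above a critical openness the loop is self-reinforcing.

def run_alliance(trust0, openness, steps=100, dt=0.1):
    trust = trust0
    for _ in range(steps):
        contribution = openness * trust   # trusting partners share more
        growth = 0.8 * contribution       # shared knowledge builds trust
        erosion = 0.5 * trust             # trust decays without reciprocity
        trust += (growth - erosion) * dt
        trust = min(max(trust, 0.0), 1.0)
    return trust

# With these coefficients, trust grows when 0.8 * openness > 0.5, i.e.
# openness above 0.625: a tipping boundary in the openness decision.
```

In this sketch, too little openness traps the alliance in the trust-depletion spiral the abstract describes, while openness above the boundary locks trust in at a high level.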
Clinical reading centers provide expertise for consistent, centralized analysis of medical data gathered in a distributed context. Accordingly, appropriate software solutions are required for the involved communication and data management processes. In this work, an analysis of general requirements and essential architectural and software design considerations for reading center information systems is provided. The identified patterns have been applied to the implementation of the reading center platform which is currently operated at the Center of Ophthalmology of the University Hospital of Tübingen.
The wet chemical deposition of solution-processed transparent conducting oxides (TCOs) provides an alternative, low-cost and economical deposition technique for realizing large areas of conducting films. Since the price of the most common TCO, indium tin oxide, has risen enormously, aluminum zinc oxide (AZO) is attracting more and more interest as an alternative TCO. The optoelectronic properties of nanoparticle coatings depend not only on the porosity of the coating but also on the shape and size of the particles used. By using larger or rod-shaped particles, it is possible to minimize the number of grain boundaries, resulting in improved electrical properties. However, particles larger than 100 nm should not be used if highly transparent coatings are required, as such large particles scatter visible light and lower the transmittance of the coatings. In this work we present a simple method to synthesize AZO particles of different shapes and sizes but with comparable electronic properties. We use a simple, well-reproducible polyol method for the synthesis and influence the shape and size of the particles by adding different amounts of water to the precursor solution. We show that the addition of aluminum as a dopant strongly hinders crystal growth, but that the addition of water counteracts this, so that both spherical and rod-shaped particles can be obtained.
Digitization will require companies to fundamentally reengineer their sales processes. Adapting the concept of value selling to the digital age will enable them to deliver superior value to their customers. Specifically, social selling will provide them with an answer to the ever-increasing complexity of customer journeys. This article, based on a survey among 235 German companies, assesses the status quo and outlines opportunities. Moreover, it introduces a novel approach for developing well-grounded social selling metrics.
In an exploratory study of the online communication of large and medium-sized B2B companies from the German state of Baden-Württemberg, the message content communicated via their websites and the websites' appeal to international prospects were analyzed. The study revealed that many basic content items were absent, making the sites less attractive for further exploration and making it difficult for international prospects to enter into a dialog, become leads, and possibly customers. A subsequent survey elicited organizational backgrounds, available resources, and objectives for online communication. It traced the deficiencies back to a lack of understanding of the importance of digital communication for lead generation and the customer journey in general, the absence of a communication strategy, a lack of urgency, and a lack of resources to implement the desired changes and additions to communication content.