Purpose: The purpose of this study was to investigate the value of the web representation of certain fashion hot spots and how these results can be visualized on fashion maps.
Design/methodology/approach: A new ranking was created and evaluated with a self-constructed index in order to obtain solid results. Data were collected from Google, Instagram, Facebook, Twitter and web.alert.io. Additionally, fashion maps were created for an illustrative visualization of the results.
Findings: Compared with the ranking of the trend forecasting agency Global Language Monitor, which created a ranking of non-virtual fashion cities, the web representation and therefore the ranking of the research project differ mainly in the position of the cities within the top 10, i.e. the rank at which a city appears, but less in the actual cities mentioned.
Research limitations: The research was limited by the subjective analysis of the data, leading to partly subjective results, as well as by the selected number of social media platforms used.
Originality/value: This is the first study to explore the web representation value of fashion metropolises in comparison to their non-virtual ranking. The results partly build on existing findings concerning the transformation of fashion cities and, more generally, which cities hold the status of a fashion city.
Cell-cell and cell-extracellular matrix (ECM) adhesion regulates fundamental cellular functions and is crucial for cell-material contact. Adhesion is influenced by many factors such as the affinity and specificity of the receptor-ligand interaction or the overall ligand concentration and density. To investigate molecular details of cell-ECM and cadherin (cell-cell) interactions in vascular cells, functional nanostructured surfaces were used. Ligand-functionalized gold nanoparticles (AuNPs) with 6-8 nm diameter are precisely immobilized on a surface and separated by non-adhesive regions so that individual integrins or cadherins can specifically interact with the ligands on the AuNPs. Using distances of 40 nm and 90 nm between the AuNPs, functionalized either with peptide motifs of the extracellular matrix (RGD or REDV) or with vascular endothelial cadherin (VEC), the influence of distance and ligand specificity on spreading and adhesion of endothelial cells (ECs) and smooth muscle cells (SMCs) was investigated. We demonstrate that RGD-dependent adhesion of vascular cells is similar to that of other cell types and that the distance dependence of integrin binding to ECM peptides is also valid for the REDV motif. VEC ligands decrease adhesion significantly at the tested ligand distances. These results may be helpful for future improvements in vascular tissue engineering and for the development of implant surfaces.
Smart meter based business models for the electricity sector : a systematical literature research
(2017)
The Act on the Digitization of the Energy Transition forces German industries and households to introduce smart meters in order to save energy, to gain individually based electricity tariffs and to digitize the energy data flow. Smart meters can be regarded as the advancement of the traditional meter. Utilizing this new technology enables a wide range of innovative business models that provide additional value for electricity suppliers as well as for their customers. In this study, we followed a two-step approach. First, we provide a state-of-the-art comparison of the business models found in the literature and identify structural differences in the way they add value to the offered products and services. Second, the business models are grouped into categories with respect to customer segments and the added value to the smart grid. Findings indicate that most business models focus on the end-customer as their main customer.
This research is about Omnichannel Retailing and addresses the question of how the omnichannel capability of retailers in the fashion market can be measured. Our sources include books, interviews, newspapers and scientific databases.
Omnichanneling is a current topic in the fashion market; retailers all over the world face the question of how to adapt to the challenges Omnichannel Retailing poses. We first define what omnichanneling is by explaining the differences between Multiple-, Multi-, Cross- and Omnichannel Retailing. After defining omnichanneling itself, we take a set of 26 retailers and evaluate them regarding their omnichannel capabilities. We then create an index with criteria that can measure the omnichannel capability of each retailer.
The Omnichannel Score is based on 31 criteria, which analyze the retailers in offline, online, mobile and social aspects and make differences between retailers visible. Our findings were that retailers in the US fashion market are more advanced in Omnichannel Retailing than retailers in the German fashion market. Our top three omnichannel retailers were Sears with an Omnichannel Score of 91, followed by KOHL'S and Marks & Spencer, both with an Omnichannel Score of 88. The best omnichannel retailer from Germany was Adidas in fourth place with an Omnichannel Score of 81.
Towards a practical maintainability quality model for service- and microservice-based systems
(2017)
Although current literature mentions a lot of different metrics related to the maintainability of service-based systems (SBSs), there is no comprehensive quality model (QM) with automatic evaluation and a practical focus. To fill this gap, we propose a Maintainability Model for Services (MM4S), a layered maintainability QM consisting of service properties (SPs) related to automatically collectable Service Metrics (SMs). This research artifact, created within an ongoing Design Science Research (DSR) project, is the first version ready for detailed evaluation and critical feedback. The goal of MM4S is to serve as a simple and practical tool for basic maintainability estimation and control in the context of SBSs and their specialization, microservice-based systems (μSBSs).
In a time of digital transformation, the ability to quickly and efficiently adapt software systems to changed business requirements becomes more important than ever. Measuring the maintainability of software is therefore crucial for the long-term management of such products. With service-based systems (SBSs) being a very important form of enterprise software, we present a holistic overview of maintainability metrics specifically designed for this type of system, since traditional metrics – e.g. object-oriented ones – are not fully applicable in this case. The selected metric candidates from the literature review were mapped to four dominant design properties: size, complexity, coupling, and cohesion. Microservice-based systems (μSBSs) emerge as an agile and fine-grained variant of SBSs. While the majority of identified metrics are also applicable to this specialization (with some limitations), the large number of services in combination with technological heterogeneity and decentralization of control significantly impacts automatic metric collection in such a system. Our research therefore suggests that specialized tool support is required to guarantee the practical applicability of the presented metrics to μSBSs.
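As a very simplified illustration of automatic metric collection for such design properties, the sketch below derives size and coupling indicators from a service/operation map and a call-dependency list. The system description and the metric definitions are assumptions for illustration only, not the metrics or tooling proposed in the paper.

```python
# Illustrative sketch (not the paper's tooling): computing simple size and
# coupling indicators for a service-based system from a service/operation map
# and a call-dependency list. Names and structures are assumptions.

from collections import defaultdict

# Hypothetical system description: operations per service and inter-service calls.
operations = {
    "order-service": ["createOrder", "cancelOrder", "getOrder"],
    "billing-service": ["charge", "refund"],
    "shipping-service": ["ship", "track"],
}
calls = [  # (caller service, callee service)
    ("order-service", "billing-service"),
    ("order-service", "shipping-service"),
    ("billing-service", "order-service"),
]

def size_metrics(ops):
    """Size: number of services and number of operations per service."""
    return {"services": len(ops),
            "operations_per_service": {s: len(o) for s, o in ops.items()}}

def coupling_metrics(ops, calls):
    """Coupling: distinct outgoing dependencies per service (efferent coupling)."""
    out = defaultdict(set)
    for caller, callee in calls:
        if caller != callee:
            out[caller].add(callee)
    return {s: len(out[s]) for s in ops}

print(size_metrics(operations))
print(coupling_metrics(operations, calls))
```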
This paper introduces a novel placement methodology for a common-centroid (CC) pattern generator. It can be applied to various integrated circuit (IC) elements, such as transistors, capacitors, diodes, and resistors. The proposed method consists of a constructive algorithm, which generates an initial, close-to-optimum solution, and an iterative algorithm, which is used subsequently if the output of the constructive algorithm does not satisfy the desired criteria. The outcome of this work is an automatic CC placement algorithm for IC element arrays. Additionally, the paper presents a method for the evaluation of CC arrangements. It allows the quality of an array to be evaluated and different placement methods to be compared.
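A much-simplified illustration of evaluating a common-centroid arrangement is sketched below: for each device, the centroid of its unit elements is compared with the centre of the array. The placement matrix and the quality measure are assumptions for illustration, not the paper's exact algorithm.

```python
# Simplified illustration of evaluating a common-centroid arrangement:
# each device's centroid (over its unit elements) is compared with the array
# centre. Placement and quality measure are assumptions, not the paper's method.

def centroid_offsets(placement):
    rows, cols = len(placement), len(placement[0])
    centre = ((rows - 1) / 2.0, (cols - 1) / 2.0)
    positions = {}
    for r, row in enumerate(placement):
        for c, label in enumerate(row):
            positions.setdefault(label, []).append((r, c))
    offsets = {}
    for label, cells in positions.items():
        cr = sum(r for r, _ in cells) / len(cells)
        cc = sum(c for _, c in cells) / len(cells)
        offsets[label] = ((cr - centre[0]) ** 2 + (cc - centre[1]) ** 2) ** 0.5
    return offsets

# A classic ABBA-style 2x2 pattern: both devices share the array centroid.
print(centroid_offsets([["A", "B"],
                        ["B", "A"]]))   # {'A': 0.0, 'B': 0.0}
```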
Technologies for mapping the "digital twin" have been under development for approximately 20 years. Nowadays, increasingly intelligent, individualized products encourage companies to respond innovatively to customer requirements and to handle the rising number of product variations quickly.
An integrated engineering network, spanning the entire value chain, is operated to intelligently connect various company divisions and to generate a business ecosystem for products, services and communities. This determines the conditions for the digital twin, in which the digital world can be fed into the real world, and the real world back into the digital one, in order to handle such intelligent products with rising variation.
The term digital twin can be described as a digital copy of a real factory, machine, worker etc. that is created and can be independently expanded, automatically updated and globally available in real time. Every real product and production site is permanently accompanied by a digital twin. First prototypes of such digital twins already exist in the ESB Logistics Learning Factory on cloud- and app-based software that builds on a dynamic, multidimensional data and information model. A standardized language of the robot control systems via software agents and positioning systems has to be integrated. The continuity between the real factory and the digital factory, as an economical means of keeping the digital models continuously up to date, is regarded as the basis of changeability.
For indoor localization, sensor combinations that, in addition to the hardware, already contain the software required for sensor data fusion should be used. Processing systems, live scenario simulations and digital shop floor management result in a mandatory procedural combination. Essential to the digital twin is the ability to consistently provide all subsystems with the latest state of all required information, methods and algorithms.
The purpose of this paper is to determine the relevance of social media for luxury brand management. It employs a multi-methodological approach: after analyzing the online performance of the three luxury brands Burberry, Louis Vuitton and Gucci, the empirical research includes a survey as well as an eye-tracking test executed with Tobii Studio. The findings reveal that online and social media have given luxury fashion businesses the opportunity to establish a sustainable interaction with their customers and distinguish themselves from the competition. Still, the online business holds many challenges for luxury companies to overcome. This paper gives instructions as to how social media can be effectively incorporated into a luxury company.
Painting galleries typically provide a wealth of data composed of several data types. Such multivariate data are too complex for laymen such as museum visitors to, first, get an overview of all paintings and, second, look for specific categories. Ultimately, the goal is to guide the visitor to a specific painting that he or she wishes to examine more closely. In this paper we describe an interactive visualization tool that first provides such an overview and lets people experiment with the more than 41,000 paintings collected in the Web Gallery of Art. To generate such an interactive tool, our technique is composed of different steps such as data handling, algorithmic transformations, visualizations, interactions, and the human user working with the tool with the goal of detecting insights in the provided data. We illustrate the usefulness of the visualization tool by applying it to such characteristic data and show how one can get from an overview of all paintings to specific paintings.
The diversity of energy prosumer types makes it difficult to create appropriate incentive mechanisms that satisfy both prosumers and energy system operators alike. Meanwhile, European energy suppliers buy guarantees of origin (GoO), which allow them to sell green energy at premium prices while in reality delivering grey energy to their customers. Blockchain technology has proven itself to be a robust payment system in which users transact money without the involvement of a third party. Blockchain tokens can be used to represent a unit of energy and, just like GoOs, be submitted to the market. This paper focuses on simulating a marketplace using the Ethereum blockchain and smart contracts, where prosumers can sell tokenized GoOs to consumers willing to subsidize renewable energy producers. Such markets bypass energy providers by allowing consumers to obtain tokenized GoOs directly from the producers, which in turn benefit directly from the earnings. Two market strategies in which tokens are sold as GoOs have been simulated. In the Fix Price Strategy prosumers sell their tokens at the average GoO price of 2014. The Variable Price Strategy focuses on selling tokens in a price range defined by the difference between grey and green energy prices. The study finds that the Ethereum blockchain is robust enough to function as a platform for tokenized GoO trading. The simulation results have been compared and indicate that prosumers earn significantly more money by following the Variable Price Strategy.
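The two pricing strategies can be contrasted with a small off-chain sketch. This is not the Ethereum/smart-contract implementation from the paper; the price levels (average 2014 GoO price, grey and green energy prices) are placeholder assumptions chosen only to make the comparison concrete.

```python
# Off-chain illustration (not the smart-contract implementation) of the two
# simulated pricing strategies. All prices are placeholder values in EUR/MWh.

def fix_price_revenue(tokens_sold, avg_goo_price_2014=0.5):
    """Fix Price Strategy: every token is sold at a fixed average GoO price."""
    return tokens_sold * avg_goo_price_2014

def variable_price_revenue(tokens_sold, grey_price, green_price, fraction=0.5):
    """Variable Price Strategy: the token price lies within the grey/green spread."""
    return tokens_sold * fraction * (green_price - grey_price)

tokens = 1000  # tokenized GoOs (one per MWh) offered by a prosumer
print("Fix Price Strategy:     ", fix_price_revenue(tokens))
print("Variable Price Strategy:", variable_price_revenue(tokens, grey_price=28.0, green_price=33.0))
```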
Clinical reading centers provide expertise for consistent, centralized analysis of medical data gathered in a distributed context. Accordingly, appropriate software solutions are required for the involved communication and data management processes. In this work, an analysis of general requirements and essential architectural and software design considerations for reading center information systems is provided. The identified patterns have been applied to the implementation of the reading center platform which is currently operated at the Center of Ophthalmology of the University Hospital of Tübingen.
Data integration of heterogeneous data sources relies either on periodically transferring large amounts of data to a physical Data Warehouse or on retrieving data from the sources on request only. The latter results in the creation of what is referred to as a virtual Data Warehouse, which is preferable when the use of the latest data is paramount. However, the downside is that it adds network traffic and suffers from performance degradation when the amount of data is high. In this paper, we propose the use of a readCheck validator to ensure the timeliness of the queried data and to reduce data traffic. It is further shown that the readCheck allows transactions to update data in the data sources while obeying full Atomicity, Consistency, Isolation, and Durability (ACID) properties.
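A minimal sketch of the general idea, assuming a version-token style freshness check: cached results are reused only while the source confirms they are unchanged. The paper's actual readCheck protocol and its ACID guarantees are not reproduced here.

```python
# Minimal sketch of a freshness check in a virtual data warehouse setting.
# This illustrates the general idea only; the paper's readCheck protocol
# is not reproduced here.

class Source:
    def __init__(self):
        self._version = 0
        self._rows = []

    def update(self, row):          # update on the source side
        self._rows.append(row)
        self._version += 1

    def version(self):              # cheap check instead of re-shipping data
        return self._version

    def query(self):                # expensive full retrieval
        return list(self._rows), self._version

class VirtualWarehouse:
    def __init__(self, source):
        self.source = source
        self.cache, self.cached_version = None, None

    def query(self):
        # readCheck-style validation: only re-fetch when the source changed.
        if self.cache is None or self.source.version() != self.cached_version:
            self.cache, self.cached_version = self.source.query()
        return self.cache

src = Source()
vdw = VirtualWarehouse(src)
src.update({"id": 1})
print(vdw.query())   # fetches from the source
print(vdw.query())   # served from cache, only the version token is checked
```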
The increasing number of connected mobile devices such as fitness trackers and smartphones provides new data for health insurances, enabling them to gain deeper insights into the health of their customers. These additional data sources, plus the trend towards an interconnected health community including doctors, hospitals and insurers, lead to challenges regarding data filtering, organization and dissemination. First, we analyze what kind of information is relevant for a digital health insurance. Second, functional and non-functional requirements for storing and managing health data in an interconnected environment are defined. Third, we propose a data architecture for a digitized health insurance, consisting of a data model and an application architecture.
We present a topology of MIMO arrays of inductive antennas exhibiting inherently high crosstalk cancellation capabilities. A single-layer PCB is etched into a 3-channel array of emitting/receiving antennas. Once coupled with another similar 3-channel emitter/receiver, we measured an Adjacent Channel Rejection Ratio (ACRR) as high as 70 dB from 150 Hz to 150 kHz. Another primitive device made out of copper wires wound around PVC tubes to form a 2-channel "non-contact slip-ring" exhibited 22 dB to 47 dB of ACRR up to 15 MHz. In this paper we introduce the underlying theoretical model behind the crosstalk suppression capabilities of these so-called "Pie-Chart antennas": an extension of the mutual inductance compensation method to a higher number of channels using symmetries. We detail the simple iterative building process of these antennas, illustrate it with numerical analysis and evaluate their effectiveness via real experiments on the 3-channel PCB array and the 2-channel rotary array up to the limit of our test setup. The Pie-Chart design is primarily, but not exclusively, intended as an alternative to costly electronic filters or cumbersome EM shields in both wireless and wired applications.
Gallium nitride high electron mobility transistors (GaN-HEMTs) have low capacitances and can achieve low switching losses in applications where hard turn-on is required. Low switching losses imply a fast switching; consequently, fast voltage and current transients occur. However, these transients can be limited by package and layout parasitics even for highly optimized systems. Furthermore, a fast switching requires a fast charging of the input capacitance, hence a high gate current.
In this paper, the switching speed limitations of GaN-HEMTs due to the common source inductance and the gate driver supply voltage are discussed. The turn-on behavior of a GaN-HEMT is simulated and the impact of the parasitics and the gate driver supply voltage on the switching losses is described in detail. Furthermore, measurements are performed with an optimized layout for a drain-source voltage of 500 V and a drain-source current up to 60 A.
Modern power semiconductor devices have low capacitances and can therefore achieve very fast switching transients under hard-switching conditions. However, these transients are often limited by parasitic elements, especially by the source inductance and the parasitic capacitances of the power semiconductor. These limitations cannot be compensated by conventional gate drivers. To overcome this, a novel gate driver approach for power semiconductors was developed. It uses a transformer which accelerates the switching by transferring energy from the source path to the gate path.
Experimental results of the novel gate driver approach show a turn-on energy reduction of 78% (from 80 μJ down to 17 μJ) with a drain-source voltage of 500 V and a drain current of 60 A. Furthermore, the efficiency improvement is demonstrated for a hard-switching boost converter. For a switching frequency of 750 kHz with an input voltage of 230 V and an output voltage of 400 V, it was possible to extend the output power range by 35% (from 2.3 kW to 3.1 kW), due to the reduction of the turn-on losses, therefore lowering the junction temperature of the GaN-HEMT.
A gate driver approach is presented for the reduction of turn-on losses in hard-switching applications. A significant turn-on loss reduction of up to 55% has been observed for SiC MOSFETs. The gate driver approach uses a transformer which couples energy from the power path back into the gate path during switching events, providing increased gate driver current and thereby faster switching speed.
The gate driver approach was tested on a boost converter running at a switching frequency of up to 300 kHz. With an input voltage of 300 V and an output voltage of 600 V, it was possible to reduce the converter losses by 8% at full load. Moreover, the output power range could be extended by 23% (from 2.75 kW to 3.4 kW) due to the reduction of the turn-on losses.
Steadily growing research material in a variety of databases, repositories and clouds makes academic content harder than ever to discover. Finding adequate material for one's own research, however, is essential for every researcher. Based on recent developments in the field of artificial intelligence and the identified digital capabilities of future universities, a change in the basic work of academic research is predicted. This study outlines the idea of how artificial intelligence could simplify academic research at a digital university. Today's studies in the field of AI showcase its true potential and its commanding impact on academic research.
Software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. Educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. Industry training has similar requirements of relevance as companies seek to keep their workforce up to date with technological advances. Real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. A way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. In this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. We give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers to include empirical studies in software engineering courses. Furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to the awareness of the advantages of applying software engineering principles. Having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life.
Context: Development of software intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want by direct customer feedback and usage behaviour observation. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, design and manage experiment plans, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and the integration of experiment results in both the product development cycle and the software development process.
Thematic issue on human-centred ambient intelligence: cognitive approaches, reasoning and learning
(2017)
This editorial presents advances on human-centred Ambient Intelligence applications which take into account cognitive issues when modelling users (e.g. stress, attention disorders), and learn users' activities/preferences and adapt to them (e.g. at home, while driving a car). These papers also show AmI applications in health and education, which makes them even more valuable for society at large.
Sleep quality and, in general, behavior in bed can be detected using a sleep state analysis. These results can help a subject to regulate sleep and recognize different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison to the leading standard measuring system, which is polysomnography (PSG), the system proposed in this project is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides this fact, they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor and also in a non-invasive way. The sensor used is a pressure sensor. This sensor is low cost and can be used for commercial purposes. The system was tested by carrying out an experiment that recorded the sleep process of a subject. These recordings showed the potential for classification of breathing rate and body movements. Although previous research shows the use of pressure sensors in recognizing posture and breathing, they have mostly been positioned between the mattress and the bedsheet. This project, however, shows an innovative way to position the sensors under the mattress.
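As an illustration of the kind of processing such a pressure-sensor grid enables, the following sketch estimates a breathing rate from a single synthetic pressure signal via peak detection. It is not the project's actual classification method; the sampling rate, thresholds and signal are assumptions.

```python
# Illustration only: estimating a breathing rate from a single pressure-sensor
# signal via peak detection. Signal and thresholds are synthetic placeholders.

import numpy as np
from scipy.signal import find_peaks

fs = 10.0                                  # sampling rate in Hz (assumption)
t = np.arange(0, 60, 1 / fs)               # one minute of data
breathing = np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths per minute
signal = breathing + 0.1 * np.random.randn(t.size)

# Peaks at least 2 s apart and above a minimal prominence count as breaths.
peaks, _ = find_peaks(signal, distance=int(2 * fs), prominence=0.5)
breaths_per_minute = len(peaks) * 60.0 / (t[-1] - t[0])
print(f"estimated breathing rate: {breaths_per_minute:.1f} breaths/min")
```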
To analyze human sleep it is necessary to identify the sleep stages occurring during sleep, their durations and the sleep cycles. The gold standard procedure for this is polysomnography (PSG), which classifies the sleep stages based on the Rechtschaffen and Kales (R-K) method. Besides advantages such as high accuracy, this method has some disadvantages, among others a time-consuming procedure that is uncomfortable for the patient. Therefore, the development of further methods for sleep classification in addition to PSG is a promising topic for investigation, and this work aims to present possible ways and goals for this development.
We present a new methodology for the automatic selection and sizing of analog circuits, demonstrated on the OTA circuit class. The methodology consists of two steps: a generic topology selection method supported by a "part-sizing" process, and a subsequent final sizing. The circuit topologies provided by a reuse library are classified in a topology tree. The appropriate topology is selected by traversing the topology tree starting at the root node. The decision at each node is gained from the result of the part-sizing, which is in fact a node-specific set of simulations. The final sizing is a simulation-based optimization. We significantly reduce the overall simulation effort compared to a classical simulation-based optimization by combining the topology selection with the part-sizing process in the selection loop. The result is an interactive, user-friendly system which eases the analog designer's work significantly when compared to typical industrial practice in analog circuit design. The topology selection method and sizing process are implemented as a tool in a typical analog design environment. The design productivity improvement achievable by our method is shown by a comparison to other design automation approaches.
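As an illustration of the described selection loop, the following sketch traverses a small topology tree and picks a child at each node based on a placeholder "part-sizing" evaluation. The tree contents, the dummy score table and the specification are assumptions; the actual method relies on node-specific circuit simulations.

```python
# Sketch of the selection loop: the topology tree is traversed from the root,
# and at each node a node-specific evaluation decides which child to descend
# into. Tree, score table and specification are placeholders, not the actual
# OTA reuse library.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def part_sizing_score(node, spec):
    """Placeholder for the node-specific set of 'part-sizing' simulations."""
    # In the real flow this would run circuit simulations and compare against
    # the specification; here: a dummy table of achievable gain values.
    achievable_gain_db = {"single-stage": 60, "two-stage": 90,
                          "telescopic": 70, "folded-cascode": 65,
                          "miller": 85, "cascode-compensated": 95}
    return -abs(achievable_gain_db.get(node.name, 0) - spec["gain_db"])

def select_topology(root, spec):
    node = root
    while node.children:
        node = max(node.children, key=lambda child: part_sizing_score(child, spec))
    return node.name  # leaf topology handed over to the final sizing step

tree = Node("OTA", [
    Node("single-stage", [Node("telescopic"), Node("folded-cascode")]),
    Node("two-stage", [Node("miller"), Node("cascode-compensated")]),
])
print(select_topology(tree, {"gain_db": 80}))
```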
Asymmetric read/write storage technologies such as Flash are becoming a dominant trend in modern database systems. They introduce hardware characteristics and properties which are fundamentally different from those of traditional storage technologies such as HDDs.
Multi-Versioning Database Management Systems (MV-DBMSs) and Log-based Storage Managers (LbSMs) are concepts that can effectively address the properties of these storage technologies but are designed for the characteristics of legacy hardware. A critical component of MV-DBMSs is the invalidation model: commonly, transactional timestamps are assigned to the old and the new version, resulting in two independent (physical) update operations. Those entail multiple random writes as well as in-place updates, which are sub-optimal for new storage technologies both in terms of performance and endurance. Traditional page-append LbSM approaches alleviate random writes and immediate in-place updates, hence reducing the negative impact of Flash read/write asymmetry. Nevertheless, they entail significant mapping overhead, leading to write amplification.
In this work we present an approach called Snapshot Isolation Append Storage Chains (SIAS-Chains) that employs a combination of multi-versioning, append storage management in tuple granularity and a novel singly-linked (chain-like) version organization. SIAS-Chains features simplified buffer management and multi-version indexing, and introduces read/write optimizations to data placement on modern storage media. SIAS-Chains algorithmically avoids small in-place updates caused by in-place invalidation and converts them into appends. Every modification operation is executed as an append, and recently inserted tuple versions are co-located.
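A strongly simplified sketch of the chain idea (not the SIAS-Chains implementation itself): every modification is an append to the storage area, and each tuple version links back to its predecessor, so the old version never has to be invalidated in place.

```python
# Simplified sketch of a singly-linked version chain over an append-only store.
# Every modification is an append; readers follow the chain from the newest
# version to the first one visible to their snapshot. Illustration only.

class AppendStore:
    def __init__(self):
        self.log = []            # append-only storage area
        self.heads = {}          # tuple id -> position of newest version

    def write(self, tuple_id, value, ts):
        prev = self.heads.get(tuple_id)           # link to the older version
        self.log.append({"id": tuple_id, "value": value, "ts": ts, "prev": prev})
        self.heads[tuple_id] = len(self.log) - 1  # single append, no in-place update

    def read(self, tuple_id, snapshot_ts):
        # Follow the chain until a version visible to the snapshot is found.
        pos = self.heads.get(tuple_id)
        while pos is not None:
            version = self.log[pos]
            if version["ts"] <= snapshot_ts:
                return version["value"]
            pos = version["prev"]
        return None

store = AppendStore()
store.write("t1", "v1", ts=1)
store.write("t1", "v2", ts=5)
print(store.read("t1", snapshot_ts=3))   # -> 'v1' (older snapshot)
print(store.read("t1", snapshot_ts=9))   # -> 'v2'
```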
A 3D face modelling approach for pose-invariant face recognition in a human-robot environment
(2017)
Face analysis techniques have become a crucial component of human-machine interaction in the fields of assistive and humanoid robotics. However, the variations in head pose that arise naturally in these environments are still a great challenge. In this paper, we present a real-time capable 3D face modelling framework for 2D in-the-wild images that is applicable to robotics. The fitting of the 3D Morphable Model is based exclusively on automatically detected landmarks. After fitting, the face can be corrected in pose and transformed back to a frontal 2D representation that is more suitable for face recognition. We conduct face recognition experiments with non-frontal images from the MUCT database and uncontrolled, in-the-wild images from the PaSC database, the most challenging face recognition database to date, showing an improved performance. Finally, we present our SCITOS G5 robot system, which incorporates our framework as a means of image pre-processing for face analysis.
In a digitally controlled slope shaping system, reliable detection of both the voltage and the current slope is required to enable closed-loop control for various power switches independent of system parameters. In most state-of-the-art works, this is realized by monitoring the absolute voltage and current values. Better accuracy at lower DC power loss is achieved by sensing techniques for reliable passive detection, which avoid DC paths from the high-voltage network into the sensing network. Using a high-speed analog-to-digital converter, the whole waveform of the transient derivative can be stored digitally and prepared for a predictive cycle-by-cycle regulation, without requiring high-precision digital differentiation algorithms. To gain an accurate representation of the voltage and current derivative waveforms, system parasitics are investigated and classified in three sections: (1) component parasitics, which are identified by S-parameter measurements and extraction of equivalent circuit models, (2) PCB design issues related to the sensing circuit, and (3) interconnections between adjacent boards.
The contribution of this paper is an optimized sensing network based on this experimental study, supporting fast transition slopes up to 100 V/ns and 1 A/ns and beyond, making the sensing technique attractive for slope shaping of fast-switching devices like modern-generation IGBTs, CoolMOS™ and SiC MOSFETs. Measurements of the optimized dv/dt and di/dt setups are demonstrated for a hard-switched IGBT power stage.
A concept for a slope shaping gate driver IC is proposed, used to establish control over the slew rates of current and voltage during the turn-on and turn-off switching transients.
It combines the high speed and linearity of a fully integrated closed-loop analog gate driver, which is able to perform real-time regulation, with the advantages of digital control, like flexibility and parameter independence, operating in a predictive cycle-by-cycle regulation. In this work, the analog gate driver integrated circuit is partitioned into functional blocks and modeled in the small-signal domain, which also includes the non-linearity of parameters. An analytical stability analysis has been performed in order to ensure full functionality of the system when controlling a modern-generation IGBT and a superjunction MOSFET. Major parameters of influence, such as the gate resistor and the summing node capacitance, are investigated to achieve stable control. The large-signal behavior, investigated by simulations of a transistor-level design, verifies the correct operation of the circuit. Hence, the gate driver can be designed for robust operation.
Software startups often make assumptions about the problems and customers they are addressing as well as the market and the solutions they are developing. Testing the right assumptions early is a means to mitigate risks. Approaches such as Lean Startup foster this kind of testing by applying experimentation as part of a constant build-measure-learn feedback loop. The existing research on how software startups approach experimentation is very limited. In this study, we focus on understanding how software startups approach experimentation and identify challenges and advantages with respect to conducting experiments. To achieve this, we conducted a qualitative interview study. The initial results show that startups often spend a disproportionate amount of time focusing on creating solutions without testing critical assumptions. The main reasons are a lack of awareness that these assumptions can be tested early, and a lack of knowledge and support on how to identify, prioritize and test them. However, startups understand the need for testing risky assumptions and are open to conducting experiments.
A new method for the analysis of movement dependent parasitics in full custom designed MEMS sensors
(2017)
Due to the lack of sophisticated microelectromechanical systems (MEMS) component libraries, highly optimized MEMS sensors are currently designed using a polygon-driven design flow. The strength of this design flow is the accurate mechanical simulation of the polygons by finite element (FE) modal analysis. The result of the FE modal analysis is included in the system model together with the data of the (mechanical) static electrostatic analysis. However, the system model lacks the dynamic parasitic electrostatic effects arising from the electric coupling between the wiring and the moving structures. In order to include these effects in the system model, we present a method which enables quasi-dynamic parasitic extraction with respect to in-plane movements of the sensor structures. The method is embedded in the polygon-driven MEMS design flow using standard EDA tools. In order to take the influences of the fabrication process into account, such as etching process variations, the method combines the FE modal analysis and the fabrication process simulation data. This enables the analysis of dynamically changing electrostatic parasitic effects with respect to movements of the mechanical structures. Additionally, the result can be included in the system model, allowing the simulation of positive feedback of the electrostatic parasitic effects to the mechanical structures.
In the present paper we demonstrate a novel approach to handling small updates on Flash called In-Place Appends (IPA). It allows the DBMS to revisit the traditional write behavior on Flash. Instead of writing whole database pages upon an update in an out-of-place manner on Flash, we transform those small updates into update deltas and append them to a reserved area on the very same physical Flash page. In doing so we utilize the commonly ignored fact that under certain conditions Flash memories can support in-place updates to Flash pages without a preceding erase operation.
The approach was implemented under Shore-MT and evaluated on real hardware. Under standard update-intensive workloads we observed 67% less page invalidations resulting in 80% lower garbage collection overhead, which yields a 45% increase in transactional throughput, while doubling Flash longevity at the same time. The IPA outperforms In-Page Logging (IPL) by more than 50%.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware – the OpenSSD Flash research platform. During the demonstration we allow the users to interact with the system and gain hands-on experience of its performance under different demonstration scenarios. These involve various workloads such as TPC-B, TPC-C or TATP.
In the present paper we demonstrate the novel technique to apply the recently proposed approach of In-Place Appends – overwrites on Flash without a prior erase operation. IPA can be applied selectively: only to DB-objects that have frequent and relatively small updates. To do so we couple IPA to the concept of NoFTL regions, allowing the DBA to place update-intensive DB-objects into special IPA-enabled regions. The decision about region configuration can be (semi-)automated by an advisor analyzing DB-log files in the background.
We showcase a Shore-MT based prototype of the above approach, operating on real Flash hardware. During the demonstration we allow the users to interact with the system and gain hands-on experience under different demonstration scenarios.
Under update intensive workloads (TPC, LinkBench) small updates dominate the write behavior, e.g. 70% of all updates change less than 10 bytes across all TPC OLTP workloads. These are typically performed as in-place updates and result in random writes in page-granularity, causing major write-overhead on Flash storage, a write amplification of several hundred times and lower device longevity.
In this paper we propose an approach that transforms those small in-place updates into small update deltas that are appended to the original page. We utilize the commonly ignored fact that modern Flash memories (SLC, MLC, 3D NAND) can handle appends to already programmed physical pages by using various low-level techniques such as ISPP to avoid expensive erases and page migrations. Furthermore, we extend the traditional NSM page-layout with a delta-record area that can absorb those small updates. We propose a scheme to control the write behavior as well as the space allocation and sizing of database pages.
The proposed approach has been implemented under Shore-MT and evaluated on real Flash hardware (OpenSSD) and a Flash emulator. Compared to In-Page Logging it performs up to 62% less reads and writes and up to 74% less erases on a range of workloads. The experimental evaluation indicates: (i) significant reduction of erase operations resulting in twice the longevity of Flash devices under update-intensive workloads; (ii) 15%-60% lower read/write I/O latencies; (iii) up to 45% higher transactional throughput; (iv) 2x to 3x reduction in overall write amplification.
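The delta-record idea can be illustrated with a small sketch: a page reserves an area that absorbs small updates as appended deltas, and only a full delta area forces a conventional page rewrite. The page layout, the slot count and the fold-in policy below are assumptions for illustration, not the actual Shore-MT implementation.

```python
# Sketch of the delta-record idea: a page reserves a small area that absorbs
# small update deltas as appends; only when that area is full does the page
# have to be rewritten. Sizes and layout are assumptions.

class Page:
    def __init__(self, records, delta_slots=4):
        self.records = dict(records)   # regular record area
        self.deltas = []               # reserved delta-record area
        self.delta_slots = delta_slots
        self.full_rewrites = 0

    def update(self, key, field, value):
        if len(self.deltas) < self.delta_slots:
            # Small update becomes an append into the reserved area (IPA-style).
            self.deltas.append((key, field, value))
        else:
            # Delta area exhausted: fold deltas in and rewrite the page.
            self._apply_deltas()
            self.records[key][field] = value
            self.full_rewrites += 1

    def _apply_deltas(self):
        for key, field, value in self.deltas:
            self.records[key][field] = value
        self.deltas.clear()

    def read(self, key):
        record = dict(self.records[key])
        for k, field, value in self.deltas:   # deltas override the base record
            if k == key:
                record[field] = value
        return record

page = Page({"r1": {"balance": 100}})
page.update("r1", "balance", 90)
print(page.read("r1"), "rewrites:", page.full_rewrites)   # {'balance': 90} rewrites: 0
```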
In this paper we build on our research in data management on native Flash storage. In particular, we demonstrate the advantages of intelligent data placement strategies. To effectively manage physical Flash space and organize the data on it, we utilize novel storage structures such as regions and groups. These are coupled to common DBMS logical structures and thus require no extra overhead for the DBA. The experimental results indicate an improvement of up to 2x, which doubles the longevity of the Flash SSD. During the demonstration the audience can experience the advantages of the proposed approach on real Flash hardware.
In an exploratory study about the online communication of large and medium-sized B2B companies from the German state of Baden-Württemberg, the message content communicated via their websites and the websites' appeal for international prospects were analyzed. The study revealed that many basic content items were absent, making the sites less attractive for further exploration and making it difficult for international prospects to enter into a dialog, become leads, and possibly customers. The subsequent survey elicited organizational backgrounds, available resources, and objectives for online communication. It could trace the deficiencies back to a lack of understanding of the importance of digital communication for lead generation and the customer journey in general, the absence of a communication strategy, a lack of urgency, and a lack of resources to implement desired changes and additions to the communication content.
The purpose of this paper is to study the impact of transparency on the political budget cycle (PBC) over time and across countries. So far, the literature on electoral cycles finds evidence that cycles depend on the stage of an economy. However, the author shows – for the first time – a reliance of the budget cycle on transparency. The author uses a new data set consisting of 99 developing and 34 Organization for Economic Cooperation and Development countries. First, the author develops a model and demonstrates that transparency mitigates the political cycles. Second, the author confirms the proposition through the econometric assessment. The author uses time series data from 1970 to 2014 and discovers smaller cycles in countries with higher transparency, especially G8 countries.
The European Economic and Monetary Union (EMU) has been in turmoil for more than six years. The present governance rules do not seem to solve the problems either permanently or effectively. There is no vision about the future of Europe in the 21st century. This article describes a realignment of the economic governance which does not necessarily lead to a transfer union or a political union. However, it solves the current and future challenges. In fact, the redesign of the present rules is the most likely as well as the legally and economically most viable option today. The key idea is the detachment from the compulsive idea of an ever closer union. However, this vision requires boldness towards greater flexibility, together with an exit clause or a state insolvency procedure for non-compliant member states.
This paper models the political budget cycle with stochastic differential equations. The paper highlights the development of future volatility of the budget cycle. In fact, I confirm the proposition of a less volatile budget cycle in future. Moreover, I show that this trend is even amplified due to higher transparency. These findings are new evidence in the literature on electoral cycles. I calibrate a rigorous stochastic model on public deficit-to-GDP data for several countries from 1970 to 2012.
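As a purely illustrative sketch, a mean-reverting diffusion simulated with the Euler-Maruyama scheme shows the kind of object such a model works with. The drift and volatility specification and all parameter values below are placeholders; the paper's actual model and its calibration to the 1970-2012 deficit-to-GDP data are not reproduced here.

```python
# Purely illustrative: simulating a mean-reverting diffusion for a
# deficit-to-GDP path with the Euler-Maruyama scheme. The paper's concrete
# model specification and calibration are NOT reproduced here.

import numpy as np

def simulate_deficit(x0=-3.0, mean=-2.0, speed=0.5, sigma=1.0, years=40, steps_per_year=12):
    dt = 1.0 / steps_per_year
    x = np.empty(years * steps_per_year + 1)
    x[0] = x0
    for i in range(1, x.size):
        drift = speed * (mean - x[i - 1]) * dt           # pull towards the mean
        shock = sigma * np.sqrt(dt) * np.random.randn()  # stochastic term
        x[i] = x[i - 1] + drift + shock
    return x

path = simulate_deficit()
print("simulated deficit-to-GDP, last value: %.2f%%" % path[-1])
```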
This paper studies whether a monetary union can be managed solely by a rule based approach. The Five Presidents’ Report of the European Union rejects this idea. It suggests a centralisation of powers. We analyse the philosophy of policy rules from the vantage point of the German economic school of thought. There is evidence that a monetary union consisting of sovereign states is well organised by rules, together with the principle of subsidiarity. The root cause of the euro crisis is rather the weak enforcement of rules, compounded by structural problems. Therefore, we suggest a genuine rule-based paradigm for a stable future of the Economic and Monetary Union.
Incubators in multinational corporations : development of a corporate incubator operator model
(2017)
This paper analyzes the components of a corporate incubator operator model in multinational companies. Thereby, three relevant phases were identified: pre-incubation, incubation, and exit. Each phase contains different criteria that represent critical success factors for a corporate incubator, which are based on theoretical findings and lessons learned from practice. During the pre-incubation phase companies should define their need for a corporate incubator, the origin of ideas and the selection criteria for incubator tenants. The actual phase of incubation refers to the incubator program, which should be flexible with respect to each tenant. Furthermore, resource allocation plays an important role during the incubator program. Exit options after a successful incubation differ according to internal ideas and external start-ups, as well as the objective of the incubator. The research is based on a comprehensive screening of existing incubator literature and a qualitative content analysis of statements from eight experts of international corporate incubators.
Newly developed active pharmaceutical ingredients (APIs) are often poorly soluble in water. As a result the bioavailability of the API in the human body is reduced. One approach to overcome this restriction is the formulation of amorphous solid dispersions (ASDs), e.g., by hot-melt extrusion (HME). Thus, the poorly soluble crystalline form of the API is transferred into a more soluble amorphous form. To reach this aim in HME, the APIs are embedded in a polymer matrix. The resulting amorphous solid dispersions may contain small amounts of residual crystallinity and have the tendency to recrystallize. For the controlled release of the API in the final drug product the amount of crystallinity has to be known. This review assesses the available analytical methods that have been recently used for the characterization of ASDs and the quantification of crystalline API content. Well established techniques like near- and mid-infrared spectroscopy (NIR and MIR, respectively), Raman spectroscopy, and emerging ones like UV/VIS, terahertz, and ultrasonic spectroscopy are considered in detail. Furthermore, their advantages and limitations are discussed with regard to general practical applicability as process analytical technology (PAT) tools in industrial manufacturing. The review focuses on spectroscopic methods which have been proven as most suitable for in-line and on-line process analytics. Further aspects are spectroscopic techniques that have been or could be integrated into an extruder.
The digital transformation of the automotive industry has a significant impact on how development processes need to be organized in the future. Dynamic market and technological environments require capabilities to react to changes and to learn fast. Agile methods are a promising approach to address these needs, but they are not tailored to the specific characteristics of the automotive domain like product line development. Although there have been efforts to apply agile methods in the automotive domain for many years, significant and widespread adoption has not yet taken place. The goal of this literature review is to gain an overview and a better understanding of agile methods for embedded software development in the automotive domain, especially with respect to product line development. A mapping study was conducted to analyze the relation between agile software development, embedded software development in the automotive domain and software product line development. Three research questions were defined and 68 papers were evaluated. The study shows that agile and product line development approaches tailored for the automotive domain are not yet fully explored in the literature. In particular, literature on the combination of agile and product line development is rare. Most of the examined combinations are customizations of generic approaches or approaches stemming from other domains. Although only few approaches for combining agile and software product line development in the automotive domain were found, these findings were valuable for identifying research gaps and provide insights into how existing approaches can be combined, extended and tailored to suit the characteristics of the automotive domain.
Context: The current situation and future scenarios of the automotive domain require a new strategy to develop high-quality software at a fast pace. In the automotive domain, it is assumed that a combination of agile development practices and software product lines is beneficial in order to be able to handle a high frequency of improvements. This assumption is based on the understanding that agile methods introduce more flexibility in short development intervals, while software product lines help to manage the high number of variants and to improve quality through the reuse of software in long-term development.
Goal: This study derives a better understanding of the expected benefits of such a combination. Furthermore, it identifies the automotive-specific challenges that prevent the adoption of agile methods within the software product line.
Method: A survey based on 16 semi-structured interviews from the automotive domain, an internal workshop with 40 participants and a discussion round at the ESE Congress 2016. The results are analyzed by means of thematic coding.
The ability to develop and deploy high-quality software at high speed is becoming increasingly relevant for the competitiveness of car manufacturers. Agile practices have shown benefits such as faster time to market in several application domains. Therefore, it seems promising to carefully adopt agile practices in the automotive domain as well. This article presents findings from an interview-based qualitative survey. It aims at understanding the perceived forces that support agile adoption. In particular, it focuses on embedded software development for electronic control units in the automotive domain.
Intermediate filament reorganization dynamically influences cancer cell alignment and migration
(2017)
The interactions between a cancer cell and its extracellular matrix (ECM) have been the focus of an increasing amount of investigation. The role of the intermediate filament keratin in cancer has also been coming into focus of late, but more research is needed to understand how this piece fits into the puzzle of cytoskeleton-mediated invasion and metastasis. In Panc-1 invasive pancreatic cancer cells, keratin phosphorylation in conjunction with actin inhibition was found to be sufficient to reduce cell area below that of either treatment alone. We then analyzed intersecting keratin and actin fibers in the cytoskeleton of cyclically stretched cells and found no directional correlation. The role of keratin organization in Panc-1 cellular morphological adaptation and directed migration was then analyzed by culturing cells on cyclically stretched polydimethylsiloxane (PDMS) substrates, nanoscale grates, and rigid pillars. In general, the reorganization of the keratin cytoskeleton allows the cell to become more 'mobile', exhibiting faster and more directed migration and orientation in response to external stimuli. By combining keratin network perturbation with a variety of physical ECM signals, we demonstrate the interconnected nature of the architecture inside the cell and the scaffolding outside of it, and highlight the key elements facilitating cancer cell-ECM interactions.
There is no doubt that the amplification of channel integration towards an omnichannel structure is a powerful idea whose time has finally come. The digitally cross-linked world postulates all-encompassing, ubiquitous, and unobtrusive future services. In the concomitant, increasingly competitive market, retailers are starting to lay the foundation for omnichannel, meeting the expectations of a digitally cunning audience wanting their shopping experience to be as seamless and uncomplicated as possible. Nevertheless, recent research shows that there are still enough avenues for further research on omnichannel. Until now, the performance of companies has only been considered by experts from a suppliers' point of view. It would be rather interesting to find out whether the desire to meet the increased customer expectations is also recognized by the customers themselves. This paper seeks to answer how purchasing behavior has changed and what customers demand. In addition, it elaborates the opportunities that are promoted by omnichannel. Having worked out all these effects, the paper arrives at a final step in which it can be assessed how the omnichannel performance of fashion and lifestyle retailers can be measured from a consumers' perspective by developing an exclusive index. The study is confined to four fashion and lifestyle retailers: Hugo Boss AG, Levi Strauss & Co, Pull and Bear as well as COS. Using the scientific method of mystery shopping and a multi-item checklist including 54 key performance indicators, the paper examines to what extent the four selected retailers provide a seamless customer journey according to the five decision-making phases.
The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased high throughput use of the technology.
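The following sketch chains the three analysis steps on synthetic data to illustrate how such a fully automated pipeline fits together. The data, the parameters, and the simplification of the peak identification step (SGLTR/LM) to Savitzky-Golay smoothing with local-maxima detection are assumptions for illustration only.

```python
# Illustrative pipeline on synthetic data: (i) Savitzky-Golay smoothing plus
# local-maxima peak identification, (ii) DBSCAN clustering of peak positions,
# (iii) Random Forest classification. Data and parameters are placeholders.

import numpy as np
from scipy.signal import savgol_filter, find_peaks
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# (i) Peak identification on a noisy synthetic spectrum.
x = np.linspace(0, 10, 1000)
spectrum = (np.exp(-(x - 3) ** 2 / 0.05) + np.exp(-(x - 7) ** 2 / 0.05)
            + 0.05 * rng.standard_normal(x.size))
smoothed = savgol_filter(spectrum, window_length=21, polyorder=3)
peaks, _ = find_peaks(smoothed, height=0.5)

# (ii) Cluster peak positions (e.g. collected across many measurements).
peak_positions = x[peaks].reshape(-1, 1)
labels = DBSCAN(eps=0.2, min_samples=1).fit_predict(peak_positions)

# (iii) Classify samples from placeholder peak-derived features with a Random Forest.
features = rng.standard_normal((60, 4))          # e.g. peak heights/areas per sample
classes = (features[:, 0] + features[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, classes)
print("peaks at x =", np.round(x[peaks], 2), "| clusters:", set(labels),
      "| training accuracy:", clf.score(features, classes))
```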
Reconstructing 3D face shape from a single 2D photograph as well as from video is an inherently ill-posed problem with many ambiguities. One way to resolve some of the ambiguities is to use a 3D face model to aid the task. 3D morphable face models (3DMMs) are among the state-of-the-art methods for 3D face reconstruction, or so-called 3D model fitting. However, currently existing methods have severe limitations, and most of them have not been trialled on in-the-wild data. Current analysis-by-synthesis methods form complex non-linear optimisation processes, and optimisers often get stuck in local optima. Further, most existing methods are slow, requiring in the order of minutes to process one photograph.
This thesis presents an algorithm to reconstruct 3D face shape from a single image as well as from sets of images or video frames in real time. We introduce a solution for linear fitting of a PCA shape identity model and expression blendshapes to 2D facial landmarks. To improve the accuracy of the shape, a fast face contour fitting algorithm is introduced. These different components of the algorithm are run in iteration, resulting in a fast, linear shape-to-landmarks fitting algorithm. The algorithm, specifically designed to fit to landmarks obtained from in-the-wild images by tackling imaging conditions that occur in such images, like facial expressions and the mismatch of 2D–3D contour correspondences, achieves the shape reconstruction accuracy of much more complex, non-linear state-of-the-art methods, while being multiple orders of magnitude faster.
Second, we address the problem of fitting to sets of multiple images of the same person, as well as monocular video sequences. We extend the proposed shape-to-landmarks fitting to multiple frames by using the knowledge that all images are from the same identity. To recover facial texture, the approach uses texture from the original images, instead of employing the often-used PCA albedo model of a 3DMM. We employ an algorithm that merges texture from multiple frames in real-time based on a weighting of each triangle of the reconstructed shape mesh.
Last, we make the proposed real-time 3D morphable face model fitting algorithm available as open-source software. In contrast to ubiquitous available 2D-based face models and code, there is a general lack of software for 3D morphable face model fitting, hindering a widespread adoption. The library thus constitutes a significant contribution to the community.
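The linear core of such shape-to-landmarks fitting can be sketched as a regularized least-squares problem: with a known (here scaled orthographic) camera, the PCA shape coefficients that best explain the detected 2D landmarks follow from a single linear solve. The sketch below uses synthetic inputs and omits the expression blendshapes, contour fitting and camera estimation of the actual algorithm.

```python
# Minimal sketch of the linear core of shape-to-landmarks fitting: with a known
# scaled-orthographic camera, the PCA shape coefficients follow from regularized
# linear least squares. All inputs are synthetic; the full algorithm is richer.

import numpy as np

rng = np.random.default_rng(0)
num_landmarks, num_coeffs = 68, 10

mean_shape = rng.standard_normal((num_landmarks, 3))            # mean 3D landmark positions
basis = rng.standard_normal((num_landmarks * 3, num_coeffs))    # PCA shape basis
true_alpha = rng.standard_normal(num_coeffs)

P = np.array([[1.0, 0.0, 0.0],                                  # scaled orthographic camera
              [0.0, 1.0, 0.0]])
shape_3d = mean_shape + (basis @ true_alpha).reshape(num_landmarks, 3)
landmarks_2d = shape_3d @ P.T + 0.01 * rng.standard_normal((num_landmarks, 2))

# Build the linear system: project the basis, subtract the projected mean shape.
A = np.kron(np.eye(num_landmarks), P) @ basis                   # (2N x K)
b = (landmarks_2d - mean_shape @ P.T).reshape(-1)               # (2N,)

lam = 1e-3                                                       # shape-prior regularization
alpha = np.linalg.solve(A.T @ A + lam * np.eye(num_coeffs), A.T @ b)
print("recovered coefficients close to ground truth:", np.allclose(alpha, true_alpha, atol=0.05))
```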