This thesis examines possible input devices for VR applications viewed through HMDs. It evaluates whether fundamental interactions such as navigating through space, text entry and object selection can be implemented with the devices under review. The devices examined are the Leap Motion Controller, the Kinect 2, the Myo armband, the Xbox controller and the Razer Hydra.
This paper addresses the current state of digitization in the textile industry. It serves as the foundation for a master's thesis and is intended to answer the question of whether an information system accompanying the textile process chain is needed. To this end, the individual process steps are briefly explained. The paper also examines the connection between the textile industry and the new possibilities opened up by the Internet of Things.
This thesis collects and evaluates various approaches to constructing display walls in digital showrooms. Of particular interest is a digital approach in which the output devices are decoupled from the playback devices. This solution promises greater flexibility and easier extensibility than conventional approaches. To verify these claims, a functional prototype is to be developed on the basis of the results.
As part of this in-depth scientific project, risk management is to be planned and carried out on the basis of an existing usability analysis of a mobile application. The application is part of an in vitro diagnostic product intended to support transplant patients in everyday life in assessing their blood values and state of health, as well as in taking the required medication in the correct dose.
This paper addresses the question of the extent to which interactive systems can be employed within a historical exhibition context to promote and support the methodical communication of information. Schloss Aulendorf serves as the use case.
The broad range of cloud platforms makes it difficult to select the right platform for a given use case. The question frequently arises of how the individual cloud platforms differ and which properties and advantages each of them offers. This article therefore first introduces the principles of cloud computing. It then takes a closer look at the platforms Amazon Web Services, Microsoft Azure, Pivotal Cloud Foundry and OpenStack and examines them with regard to scaling and load balancing.
The conference topics range from the presentation of historical content with interactive systems (in museums), through the creation of a risk management file in the medical field, to interaction devices for VR applications. In this issue of Informatics Inside, the inherently interdisciplinary character of computer science is more palpable than ever; after all, computer science is also "inside" art, medicine, chemistry and textiles.
In recent years robotic systems have matured enough to perform simple home or office tasks, guide visitors in environments such as museums or stores, and aid people in their daily life. To make interaction with service and even industrial robots as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. As facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose an emotion model to parameterize a screen-based facial animation via inter-process communication. A software component animates transitions and adds further animations to make the digital face appear "alive" and to equip a robotic system with a virtual face. The result will be an inviting appearance that motivates potential users to seek interaction with the robot.
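The abstract does not specify the emotion model itself; as a hedged illustration of the general idea, a common way to parameterize facial animation is a valence-arousal state mapped onto normalized animation targets. All names and parameter ranges below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: an emotion state driving facial-animation parameters.
# Parameter names and mappings are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  #  0.0 (calm)     .. 1.0 (excited)

def to_face_params(e: EmotionState) -> dict:
    """Map an emotion state to normalized animation targets in [0, 1]."""
    v = max(-1.0, min(1.0, e.valence))
    a = max(0.0, min(1.0, e.arousal))
    return {
        "mouth_smile": (v + 1.0) / 2.0,   # smile grows with valence
        "brow_raise": a,                  # brows rise with arousal
        "eye_openness": 0.5 + 0.5 * a,    # wider eyes when excited
    }

print(to_face_params(EmotionState(valence=1.0, arousal=0.0)))
```

In a real system the resulting parameter dictionary would be serialized and sent over the inter-process communication channel to the animation process, which interpolates between successive targets to produce smooth transitions.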
To stand out in the competition for attention and to minimize wastage, companies increasingly make use of so-called "non-classical" communication instruments. Sponsorship represents a promising approach, since it takes place in an attractive, emotionally charged and non-commercial environment. Given consumers' increasing sensory overload, however, reaching defined sponsorship objectives through mere visibility no longer appears feasible. This paper deals with current trends in sports sponsorship. The analysis of recent developments shows that the conditions under which sponsorship takes effect have changed over time. New and innovative activation measures are required to cut through consumers' sensory overload and to exploit the potential of sponsorship. Practical examples from sports marketing show that the actors involved have recognized the new challenges of sponsorship. Current developments in digitization, internationalization, professionalization and unconventional activation are presented.
This article is a review of the book "Brain Computation as Hierarchical Abstraction" by Dana H. Ballard, published by MIT Press in 2015. The book series Computational Neuroscience familiarizes the reader with the computational aspects of brain functions based on neuroscientific evidence. The book provides an excellent introduction to the functioning of the brain in our daily life, i.e. its structure, its networks and its routines. The final chapters even discuss behavioral elements such as decision-making, emotions and consciousness. These topics are of high relevance in other sciences such as economics and philosophy. Overall, Ballard's book stimulates a scientifically well-founded debate and, more importantly, reveals the need for an interdisciplinary dialogue with the social sciences.
Optimization-based analog layout automation has not yet found broad acceptance in industry owing to the complexity of the design problem. This paper presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an approach able to consider crucial design constraints both implicitly and explicitly. The flexibility of algorithmic methods and the expert knowledge captured in PCells combine into a flow of supervised module interaction. This novel approach targets the creation of constraint-compliant layout blocks that fit into a specified zone. By provoking a synergetic self-organization, even optimal layout solutions can emerge from the interaction. Various examples demonstrate the power of the new concept and its potential for future developments.
Owing to its great societal importance, its emotional appeal and its above-average media reach, sport has developed into one of the most significant communication platforms. Companies use sports sponsorship to increase their brand awareness in the highly emotional environment of sport and to profile products and brands by means of an image transfer. Sports sponsorship offers an attractive way to counter today's communication challenges of rising advertising pressure, increased sensory overload and the declining efficiency of classical communication instruments.
This paper analyzes best-practice examples of the most important organizational forms of sponsorship using a specially developed research design. The case studies show that the achievement of targeted sponsorship objectives depends on internal and external influencing factors. Mere visibility of a sponsorship alone is not effective; rather, the sponsorship partnership must be publicized through systematic activation concepts and holistic integration. The examined cases provide creative approaches and permit conclusions about the success factors of sports sponsorship.
At major sporting events such as this year's UEFA European Championship or the Olympic Summer Games, millions are at stake for federations and official sponsors, who defend their advertising rights accordingly fiercely. Burger King shows how this "monopoly" can be circumvented creatively. This paper presents two examples of Burger King's ambush marketing activities during UEFA Euro 2016. Non-sponsor Burger King deliberately and creatively used ambush marketing during the tournament to score points against the official UEFA sponsor and market leader McDonald's.
Development of a Portfolio of Energy Efficiency Services for Municipal Utilities (EVU). - Short version
(2016)
The digitization of our society changes the way we live, work, learn, communicate, and collaborate. The Internet of Things, enterprise social networks, adaptive case management, mobility systems, analytics for big data, and cloud services environments are emerging to support smart connected products and services and the digital transformation. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and service-oriented enterprise architectures. Our aim is to support flexibility and agile transformations for both business domains and related information technology. The present research paper investigates mechanisms for decision analytics in the context of multi-perspective explorations of enterprise services and their digital enterprise architectures, by extending original architecture reference models with state-of-the-art elements for agile architectural engineering and collaborative architectural decision support. The paper focuses on digital transformations of business and IT and integrates fundamental mappings between adaptable digital enterprise architectures and service-oriented information systems. We put a spotlight on the Internet of Things as an example domain.
Large, deep full-thickness skin wounds from high-grade burns or trauma cannot reepithelialize sufficiently, resulting in scar formation, mobility limitations, and cosmetic deformities. Hence, in vitro-constructed tissue replacements are needed. Furthermore, such full-skin equivalents would be helpful as in vivo-like test systems for toxicity, cosmetic, and pharmaceutical testing. To date, no skin equivalent is available that contains the underlying subcutaneous fatty tissue. In this study, we composed a full-skin equivalent and evaluated three different media for the coculture of mature adipocytes, fibroblasts, and keratinocytes. To this end, adipocyte medium was supplemented with ascorbyl-2-phosphate and calcium chloride, which are important for successful epidermal stratification (Air medium). This medium was further supplemented with two commercially available factor combinations often used for the in vitro culture of keratinocytes (Air-HKGS and Air-KGM medium). We showed that in all media, keratinocytes differentiated successfully to build a stratified epidermal layer and expressed cytokeratin 10 and 14. Perilipin A-positive adipocytes could be found in all tissue models for up to 14 days, although adipocytes in the Air-HKGS and Air-KGM media appeared smaller. Adipocytes in all tissue models were able to release adipocyte-specific factors, whereas the supplementation of keratinocyte-specific factors had a slightly negative effect on adipocyte functionality. The permeability of the epidermis was comparable across all models, since they withstood deep penetration of cytotoxic Triton X in the same manner. Taken together, we were able to compose functional three-layered full-skin equivalents using the Air medium.
Blood vessel reconstruction is still an elusive goal for the development of in vitro models as well as artificial vascular grafts. In this study, we used a novel photo-curable, cytocompatible polyacrylate material (PA) for the freeform generation of synthetic vessels. We applied stereolithography to fabricate arbitrary 3D tubular structures with total dimensions in the centimeter range, 300 µm wall thickness, inner diameters of 1 to 2 mm, and defined pores with a constant diameter of approximately 100 µm or 200 µm. We established a rinsing protocol to remove remaining cytotoxic substances from the photo-cured PA and applied thio-modified heparin and RGDC peptides to functionalize the PA surface for enhanced endothelial cell adhesion. A rotating seeding procedure was introduced to ensure homogeneous endothelial monolayer formation on the inner luminal tube wall. We showed that endothelial cells stayed viable and adherent and aligned along the medium flow under fluid-flow conditions comparable to native capillaries. The combined technology approach, comprising freeform additive manufacturing (AM), biomimetic design, cytocompatible materials applicable to AM, and biofunctionalization of AM constructs, has been introduced as BioRap® technology by the authors.
Adapting the characteristics of biomaterials specifically for in vitro and in vivo applications is becoming increasingly important in order to control the interactions between materials and biological systems. These complex interactions are influenced by surface properties such as chemical composition, charge, and mechanical and topographic attributes. In many cases it is not useful, or not even possible, to alter the base material; instead, thin coatings can modify the surface to improve biocompatibility or render it bioactive. An established method is coating with polyelectrolyte multilayers (PEM). To adjust the adhesion and proliferation and to improve the vitality of certain cell types, we modified the roughness of PEM coatings by including different types of nanoparticles (NPs) at different concentrations. The surface properties were characterized, and the reaction of three different cell types to these coatings was tested.
Knee osteoarthritis is a common complication and can lead to total loss of joint function in patients. Treatment by either partial or total knee replacement with appropriate UHMWPE-based implants is highly invasive, may cause complications and may show unsatisfying results. Alternatively, treatment may be performed by inserting an elastic interpositional knee spacer with optimized material characteristics.
We report the development of high-performance polyurethane-based polymers modified with bioactive molecules for the fabrication of such knee spacers. In order to tailor the mechanical and tribological properties and to improve resistance to enzymatic degradation, we propose a core-shell model for the spacer with specifically adapted properties.
Efficiency in supply chain risk management (SCRM) is a major topic in industries with serial production and a complex supply chain, owing to limited managerial and financial resources. A high number of possible risk situations and intertwined processes create a challenging environment for resource allocation. Managers cannot perform SCRM in all possible supply chain areas and hence have to decide where the available resources should be deployed for the highest possible risk reduction. This makes it important to evaluate, quickly and systematically, the input-output relationships among risk mitigation actions in order to determine which actions should be deployed first for efficient risk-level reduction. This paper introduces a new SCRM method based on the failure mode and effects analysis (FMEA) to perform an efficiency-oriented prioritisation of risk actions. By weighing the cost-benefit evaluation of the identified mitigation actions for each assessed risk against the implementation effort, considered as the cost of realising a specific risk action, the method identifies those risks and mitigation actions that are most efficient for risk reduction and should be implemented first in the process of risk steering.
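The paper's exact scoring scheme is not reproduced here; as a hedged sketch of the prioritisation idea, the snippet below combines the classic FMEA risk priority number (severity x occurrence x detection) with an effort estimate and ranks actions by risk reduction per unit of effort. The action names and ratings are made-up illustrative data.

```python
# Illustrative sketch of efficiency-oriented risk-action prioritisation.
# The scoring scheme (RPN reduction divided by effort) is an assumption
# based on classic FMEA, not the paper's exact method.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Classic FMEA risk priority number (each factor rated 1-10)."""
    return severity * occurrence * detection

def prioritise(actions):
    """Rank mitigation actions by risk reduction per unit of effort."""
    return sorted(actions,
                  key=lambda a: a["risk_reduction"] / a["effort"],
                  reverse=True)

base = rpn(8, 6, 4)  # current risk level of a hypothetical supply risk
actions = [
    {"name": "dual sourcing",  "risk_reduction": base - rpn(8, 2, 4), "effort": 50},
    {"name": "safety stock",   "risk_reduction": base - rpn(8, 4, 4), "effort": 10},
    {"name": "supplier audit", "risk_reduction": base - rpn(8, 5, 4), "effort": 4},
]
for a in prioritise(actions):
    print(a["name"], round(a["risk_reduction"] / a["effort"], 2))
```

Under this toy scoring, the cheap audit ranks first even though dual sourcing removes the most absolute risk, which is precisely the efficiency argument the abstract makes.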
Gamification, the use of game elements for non-gaming purposes, may well make a huge impact on education, a contribution that the world in general, and South Africa in particular, desperately needs. In today's fast-paced work environment, there is not only a severe skills shortage, but also a great need for graduates with practical knowledge, students who are not purely "book smart". Didactic teaching habits have created an education realm in which reciting facts is more often than not what gets students to pass. Learning factories are physical, operational factories that serve as exemplary and realistic hands-on learning environments and provide an important step towards more industry-prepared graduates. Top universities around the world are establishing such environments and are showing superb results. This paper explores the potential benefit of applying gamification in such a setting to enhance the learning environment even further and to provide opportunities for training topics that are otherwise difficult to teach, such as shop floor management.
During the first years of their employment, graduates are a liability to industry. The employer goes the extra mile to bridge the gap between leaving university and the profitable employment of engineering graduates; unfortunately, some cannot take this risk. Given this scenario, this paper presents a learning factory approach as a platform for the application of knowledge, so as to develop the required engineering competences in South African engineering graduates before they enter the labour market. It spells out the components of a Stellenbosch University Learning Factory geared towards producing engineering graduates with the required industrial skills. It elaborates on the didactics embedded in the learning factory environment, tailor-made to produce engineers who can productively contribute to the growth of industry upon leaving the university.
Increasingly volatile market conditions and manufacturing environments, combined with a rising demand for highly personalized products, the emergence of new technologies like cyber-physical systems and additive manufacturing, and the increasing cross-linking of different entities (Industrie 4.0), will result in fundamental changes to future work and logistics systems. The place of production, the logistical network and the respective production system will be subject to constant change, and therefore the sources and sinks of logistical networks have to match the versatility of (cyber-physical) production systems. To cope with the arising complexity of controlling and monitoring changeable production and logistics systems, decentralized control systems are the means of choice, since centralized systems are pushed to their limits in this regard. This paradigm shift will affect the overall concept under which production and logistics are planned, managed and controlled, and how companies interact and collaborate within the emerging value chains, using dynamic methods to generate and execute the created network and to allocate available resources to fulfill the demand for customized products. In this field of research, learning factories like the ESB Logistics Learning Factory at ESB Business School (Reutlingen University) provide great potential as a risk-free test bed to develop new methods and technical solutions, to investigate new technologies regarding their practical use, and to transfer the latest state of knowledge and specific competences into the training of students and professionals.
In keeping with these guiding principles, ESB Business School is transforming its existing production system into a cyber-physical production system in order to investigate innovative solutions for the design of human-machine collaboration and technical assistance systems, as well as to develop decentralized control methods for intralogistics systems that follow the requirements of changeable work systems, including the respective design of dynamic inbound and outbound logistics networks.
The fast-moving process of digitization demands flexibility in order to adapt to rapidly changing business requirements and newly emerging business opportunities. New features have to be developed and deployed to the production environment much faster. To cope with this increased velocity and pressure, many software-developing companies have switched to a Microservice Architecture (MSA) approach. Applications built this way consist of several fine-grained and heterogeneous services that are independently scalable and deployable. However, the technological and business-architectural impacts of microservice-based applications directly affect their integration into the digital enterprise architecture. As a consequence, traditional Enterprise Architecture Management (EAM) approaches are not able to handle the extreme distribution, diversity, and volatility of micro-granular systems and services. We are therefore researching mechanisms for dynamically integrating large numbers of microservices into an adaptable digital enterprise architecture.
This paper is concerned with the study, optimization and control of the moisture sorption kinetics of agricultural products at temperatures typically found in processing and storage. A nonlinear autoregressive with exogenous inputs (NARX) neural network was developed to predict the moisture sorption kinetics, and consequently the equilibrium moisture contents, of shiitake mushrooms (Lentinula edodes (Berk.) Pegler) over a wide range of relative humidity and different temperatures. Sorption kinetic data of mushroom caps were generated separately using a continuous, gravimetric dynamic vapour sorption analyser at temperatures of 25-40 °C over a stepwise variation of relative humidity ranging from 0 to 85%. The predictive power of the neural network was based on physical data, namely relative humidity and temperature. The model was fed with a total of 4500 data points divided into three subsets: 70% of the data was used for training, 15% for testing and 15% for validation, randomly selected from the whole dataset. The NARX neural network was capable of precisely simulating the equilibrium moisture contents of mushrooms derived from the dynamic vapour sorption kinetic data throughout the entire range of relative humidity.
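The NARX network itself is not reproduced here; as a minimal sketch of the autoregressive-with-exogenous-inputs idea behind it, the snippet below fits a *linear* NARX model by least squares on synthetic data (the study used a neural network trained on real sorption measurements, so the coefficients and dynamics here are made-up assumptions).

```python
import numpy as np

def narx_design(y, x, ny=1, nx=1):
    """Regressor matrix of past outputs y(t-1..t-ny) and past
    exogenous inputs x(t-1..t-nx), plus a bias column."""
    lag = max(ny, nx)
    rows = []
    for t in range(lag, len(y)):
        rows.append(np.concatenate((
            y[t - ny:t][::-1],   # y(t-1), ..., y(t-ny)
            x[t - nx:t][::-1],   # x(t-1), ..., x(t-nx)
            [1.0])))
    return np.array(rows), y[lag:]

# Synthetic "sorption" series: moisture content y driven by humidity x.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.85, 500)          # relative humidity (fraction)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + 0.3 * x[t - 1]

X, target = narx_design(y, x)
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(coef, 3))
```

On this noise-free toy series, the least-squares fit recovers the generating coefficients (0.6 for the output lag, 0.3 for the humidity lag); a neural NARX replaces the linear map with a learned nonlinear one, as in the paper.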
This case study describes the emerging customized omnichannel loyalty solution of Marc O’Polo from a customer's perspective. After an introduction to Marc O’Polo and its general omnichannel strategy, the loyalty program is described in detail, including the Marc O’Polo members program, the mobile app, social media, direct mail and in-store capabilities. A discussion chapter closes the case study with research implications and open questions for Marc O’Polo.
Loyalty programs are becoming more important in the omnichannel environment of the fashion retail business. After defining customer loyalty and loyalty programs, the main characteristics of omnichannel loyalty programs are described. Mobile, social media, direct mail and in-store capabilities are detailed as touchpoints of omnichannel loyalty programs. A discussion chapter closes with recommendations for fashion retailers.
This case study aims to analyze how Breuninger adapts to the emerging omnichannel environment in the fashion business. From a consumer's perspective, the company and its general omnichannel strategy are explained before the loyalty program is analyzed in detail. Key factors such as the mobile app and the mobile Breuninger card, social media, direct mail and in-store capabilities are described. A discussion chapter finalizes the case.
In times of e-commerce and digitalization, new markets are opening, young companies have the opportunity to grow, and new perspectives arise in customer relationship management. Customers demand more options for personalization; at the same time, companies have access to new and, above all, more information about the customer. This combination could evolve into a powerful synergy were it not for privacy issues. Vast amounts of data about consumers are collected in big data warehouses. These are to be analyzed via predictive analytics, and customers classified by algorithms such as clustering models, propensity models or collaborative filtering. All these subjects are growing in importance, as they are shaping the global marketing landscape. Marketers, together with IT scientists, develop new ways of analyzing customer databases and benefit from more accurate segmentation methods than those used up to now. The following paper provides a literature review on new methods of consumer segmentation in view of the high inflow of new information via e-commerce. It introduces readers to the subject of predictive analytics and discusses several predictive models. The paper is not based on the authors' own empirical research but is intended as a reference text for further research. A conclusion completes the paper.
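Of the model families the paper surveys, clustering is the simplest to sketch. The toy below runs a plain k-means on two made-up customer features (normalized spend and order frequency); the data, feature choice and deterministic initialization are illustrative assumptions, not anything from the paper.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain k-means: assign points to the nearest centroid, recompute.
    Deterministic first-k initialization for reproducibility; real use
    would prefer random restarts or k-means++ seeding."""
    centroids = points[:k].astype(float)
    for _ in range(iters):
        # Distance of every point to every centroid.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Toy customers: [annual spend (normalized), orders per year (normalized)]
customers = np.array([[0.10, 0.20], [0.15, 0.10], [0.20, 0.25],
                      [0.90, 0.80], [0.85, 0.90], [0.95, 0.85]])
labels, _ = kmeans(customers, k=2)
print(labels)
```

On this clearly separated data the algorithm recovers the two obvious segments (low-value and high-value customers); propensity models and collaborative filtering, the other techniques the paper discusses, are supervised or recommendation-oriented counterparts to this unsupervised step.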
Like many others, fashion companies have to deal with a global and very competitive environment. Companies therefore rely on accurate sales forecasts as a key success factor of efficient supply chain management. However, forecasters have to take into account some specificities of the fashion industry. To respond to these constraints, a variety of forecasting methods exists, including new, computer-based predictive analytics. After an evaluation of the different methods, their application to the fashion industry is investigated through semi-structured expert interviews. Despite its benefits, predictive analytics is not yet frequently used in practice. This research not only reflects an industry profile but also gives important insights into the future potential and obstacles of predictive analytics.
Today the optimization of metal forming processes is carried out with advanced simulation tools in a virtual process, e.g. FEM studies. Modifying the free parameters yields the different variants to be analysed, so experienced engineers can derive useful proposals within acceptable time if good initial proposals are available. As soon as the number of free parameters grows, or the total process takes a long time and involves several successive forming steps, it can be quite difficult to find promising initial ideas. In metal forming a further problem has to be considered: an optimization based on a series of local improvements, often called a gradient approach, may find a local optimum, but this may be far from a satisfactory solution. Therefore non-deterministic approaches, e.g. Bionic Optimization, have to be used. Approaches like Evolutionary Optimization or Particle Swarm Optimization are capable of covering large, high-dimensional optimization spaces and of discovering many local optima, so the chance of including the global optimum increases. Unfortunately, these bionic methods require large numbers of studies of different variants of the process to be optimized, and the number of studies tends to grow exponentially with the number of free parameters of the forming process. As the time for a single study may itself be considerable, the total time demand becomes unacceptable, taking weeks to months even with high-performance computing. The optimization process therefore needs to be accelerated. Among the many ideas for reducing the time and computing power required, Meta- and Hybrid Optimization seem to produce the most efficient results. Hybrid Optimization often consists of global searches for promising regions within the parameter space; as soon as the studies indicate that there could be a local optimum, a deterministic study tries to pin down this local region.
If that region shows better performance than the other optima found so far, it is preserved for a more detailed analysis; if it performs worse, the region is excluded from further search. Meta-Optimization is often understood as the derivation of response surfaces as functions of the free parameters. Once enough studies have been performed, the optimization uses the response surfaces as surrogates, e.g. for the goal and the constraints of the optimization problem. Having found regions where interesting solutions are to be expected, the studies available so far are used to define the response surfaces; in many cases low-degree polynomials are used, with their coefficients determined by least-squares methods. Both proposals, Hybrid Optimization and Meta-Optimization, sometimes used in combination, often shorten the total optimization process by large numbers of variants that would otherwise have to be studied. Consequently they are highly recommended for time-consuming optimization studies.
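As a hedged illustration of the Particle Swarm Optimization mentioned above, the sketch below implements a minimal global-best PSO and applies it to a multimodal Rastrigin-type test function standing in for an expensive forming simulation; the swarm size, inertia and acceleration weights are common textbook values, not the paper's settings.

```python
import math
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Multimodal test function (Rastrigin); the global optimum is at the origin.
rastrigin = lambda p: 10 * len(p) + sum(x * x - 10 * math.cos(2 * math.pi * x) for x in p)
best, val = pso(rastrigin, dim=2)
print(best, val)
```

In the hybrid scheme the text describes, such a swarm would supply the promising regions, after which a deterministic local search (or a least-squares polynomial response surface fitted to the evaluated variants) refines each candidate optimum.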
Due to the complexity of assembly processes, a high proportion of tasks is still performed by human workers. Short-cyclically changing work contents due to smaller lot sizes, especially in varied series assembly, increase both the need for information support and the risk of rising physical and psychological stress. The use of technical and digital assistance systems can counter these challenges. Through the integration of information and communication technology as well as collaborative assembly technologies, hybrid cyber-physical assembly systems will emerge. Widely established assembly planning approaches for digital and technical support systems in cyber-physical assembly systems are outlined and discussed with regard to synergies and delimitations of planning perspectives.
The Eighth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2016), held June 26-30, 2016 in Lisbon, Portugal, continued a series of international events covering a large spectrum of topics related to advances in database fundamentals, the evolving relation between databases and other domains, database technologies and content processing, as well as the specifics of databases in application domains. Advances in different technologies and domains related to databases triggered substantial improvements in content processing, information indexing, and data, process and knowledge mining. The push came from Web services, artificial intelligence, and agent technologies, as well as from the generalization of XML adoption. High-speed communications and computations, large storage capacities, and load balancing for distributed database access allow new approaches to content processing with incomplete patterns, advanced ranking algorithms and advanced indexing methods. Developments in e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems put pressure on database communities to push the 'de facto' methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
The most important objective of event marketing is to improve the image of a brand or a company. The paper presents an image transfer model for event marketing. Based on current research, an image transfer model for event marketing is developed, and the conditions required for an image transfer to take place from an event to a brand or a company are explained. Depending on which conditions are met, different consequences arise with regard to the image transfer from the event to the brand or company; these are structured and characterized in detail. The image transfer model is developed against the backdrop of selected event types often used in actual practice. The focus of its application lies mainly on brand-oriented leisure and infotainment events directed towards external target groups. The model provides a discussion and analysis of the impact category of the image transfer in event marketing. The paper explains that the possibility of an attitude change is given within the context of event marketing. The presented model serves to structure the image transfer in event marketing. It is intended to illustrate the steps involved in the emergence of an image transfer as well as the resulting alternative consequences.
Sport marketing is the specific application of marketing principles and processes to sports products and services. In 2014 the biggest sports event in the world, the FIFA World Cup, took place in Brazil. Billions of spectators around the world saw Germany win the trophy in Rio de Janeiro for the fourth time in history. Yet unlike in previous World Cups, conversation was not only taking place at the numerous public viewings held in venues such as bars, restaurants, and open spaces. For the entire tournament, social media like Facebook and Twitter played a dominant role in all aspects. With 672 million tweets on Twitter and three billion conversations on Facebook, this was the most social World Cup as well as the most social mega sports event so far. It did not matter whether they were users, athletes, or companies: everyone was trying to join the conversation to stay informed or to share their opinions and the latest news. This paper analyzes the implementation of social media marketing during mega sports events, with a focus on Adidas' and Nike's social media campaigns in the frame of the FIFA World Cup 2014 in Brazil. The analysis shows that social media marketing in the frame of mega sports events is gaining importance. Companies that find topics that affect people personally and relate to their products achieve success through social media marketing.
Analysis of multicellular patterns is required to understand tissue organizational processes. Using a multi-scale, object-oriented image processing method, the spatial information of cells can be extracted automatically. Instead of manual segmentation or indirect measurements, such as the general distribution of contrast or flow, the orientation and distribution of individual cells are extracted for quantitative analysis. Relevant objects are identified by feature queries, and no low-level image processing knowledge is required.
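The per-cell orientation mentioned above is typically derived from the second-order moments of each segmented region. As a minimal sketch (the function name and the toy region are our own, not from the paper), the principal-axis angle of one labeled cell can be computed like this:

```python
import math

def region_orientation(pixels):
    """Orientation (radians) of a segmented cell, taken from the
    principal axis of its pixel-coordinate covariance (standard
    image-moment analysis). `pixels` is a list of (row, col) tuples."""
    n = len(pixels)
    cy = sum(p[0] for p in pixels) / n
    cx = sum(p[1] for p in pixels) / n
    # central second moments
    myy = sum((p[0] - cy) ** 2 for p in pixels) / n
    mxx = sum((p[1] - cx) ** 2 for p in pixels) / n
    mxy = sum((p[0] - cy) * (p[1] - cx) for p in pixels) / n
    # principal-axis angle; 0 means the cell is elongated horizontally
    return 0.5 * math.atan2(2 * mxy, mxx - myy)

# toy example: a thin horizontal cell, orientation ~ 0
cell = [(0, x) for x in range(10)] + [(1, x) for x in range(10)]
angle = region_orientation(cell)
```

Collecting this angle per detected object gives the orientation distribution used for the quantitative analysis.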
Wasted paradise – imagining the Maldives without the garbage island of Thilafushi : Version 1.2
(2016)
To address the high level of waste production in the Maldives, the local government decided to transform the coral island of Thilafushi into an immense waste dump in 1992. Meanwhile, 330 tons of waste are ferried to Thilafushi each day. The policy had the positive consequence of relieving the garbage burden in Malé, the main island, and the surrounding tourist atolls. However, it can also lead to serious environmental and economic damage in the long run. First, the garbage is in visual range of one of the most prominent tourist destinations. Second, if the wind blows a certain way, unfiltered fumes from burning waste travel to tourist atolls. Third, water quality can erode as hazardous waste from batteries and other toxic waste floats in the ocean. Over time, these effects can accumulate and significantly reduce the number of tourists that travel to the Maldives, one of the state's main sources of financial income. In our paper, we lay out the situation in more detail and translate it into a simulation model. We test different policies to propose to the Maldivian government how to better solve the waste problem.
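The kind of policy comparison described above can be sketched as a tiny stock-and-flow model. All numbers except the 330 t/day inflow are illustrative assumptions, not taken from the paper:

```python
def simulate_waste(years, daily_inflow_tons=330.0,
                   incinerated_frac=0.6, policy_reduction=0.0):
    """Minimal stock-and-flow sketch: waste accumulates on the island,
    a fraction is incinerated (leaving as fumes), and a policy can cut
    the inflow by a given fraction. Fractions here are assumptions."""
    stock = 0.0
    inflow = daily_inflow_tons * 365 * (1.0 - policy_reduction)
    for _ in range(years):
        burned = inflow * incinerated_frac  # removed from the stock as fumes
        stock += inflow - burned            # the remainder is landfilled
    return stock

baseline = simulate_waste(10)
with_policy = simulate_waste(10, policy_reduction=0.5)  # e.g. waste-reduction policy
```

Running several such parameterizations side by side is the basic mechanism behind testing policies in a system dynamics model.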
The growing share of renewable electricity generation calls for efforts to counteract the associated fluctuations in supply and the additional load on the grid. Decentralized, demand-oriented electricity generation by means of combined heat and power (CHP) can make a substantial contribution here to ensuring a secure and constant power supply and to relieving the grids. For this purpose, however, a control system is required that enables the CHP units both to keep covering the heat demand of the building and to generate electrical energy exactly at the times when it is needed. The decoupling of electricity generation from heat demand coverage can be achieved via the thermal storage tank that is part of the standard installation. The storage thus constitutes the central element of the overall plant, for which the control system for self-consumption optimization was developed and tested within the research project.
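The decoupling role of the thermal storage can be illustrated with a simple decision rule. This is only a sketch of the idea; the thresholds, names, and units are our own assumptions, not the control system developed in the project:

```python
def chp_should_run(storage_level, storage_min, storage_max,
                   electricity_demand_kw, chp_electric_kw):
    """Sketch of a storage-mediated CHP decision: the thermal storage
    decouples heat supply from generation, so the unit can run when
    electricity is actually needed, as long as the storage can still
    absorb the co-produced heat. All thresholds are illustrative."""
    if storage_level <= storage_min:
        return True   # heat coverage must be maintained, run regardless
    if storage_level >= storage_max:
        return False  # no room left for co-produced heat
    # in between: run only when the self-generated power is useful
    return electricity_demand_kw >= chp_electric_kw
```

In a real system this rule would be replaced by an optimization over forecasts of heat demand, electricity demand, and storage state.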
3D morphable face models are a powerful tool in computer vision. They consist of a PCA model of face shape and colour information and allow a 3D face to be reconstructed from a single 2D image. 3D morphable face models are used for 3D head pose estimation, face analysis, face recognition, and, more recently, facial landmark detection and tracking. However, they are not as widely used as 2D methods, since the process of building and using a 3D model is much more involved.
In this paper, we present the Surrey Face Model, a multi-resolution 3D morphable model that we make available to the public for non-commercial purposes. The model contains different mesh resolution levels and landmark point annotations as well as metadata for texture remapping. Accompanying the model is a lightweight open-source C++ library designed with simplicity and ease of integration as its foremost goals. In addition to basic functionality, it contains pose estimation and face frontalisation algorithms. With the tools presented in this paper, we aim to close two gaps. First, by offering different model resolution levels and fast fitting functionality, we enable the use of a 3D morphable model in time-critical applications like tracking. Second, the software library makes it easy for the community to adopt the 3D morphable face model in their research, and it offers a public place for collaboration.
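The core operation of such a PCA-based model is to express a face as the model mean plus a linear combination of principal components. A minimal sketch with toy dimensions (real models have thousands of vertex coordinates, and fitting is done by the accompanying library, not by hand):

```python
def instance_from_pca(mean, basis, coefficients):
    """Build a model instance: shape = mean + sum_i alpha_i * basis_i.
    `mean` and each basis component are flat coordinate vectors;
    the toy 3-dimensional vectors below are illustrative only."""
    shape = list(mean)
    for alpha, component in zip(coefficients, basis):
        for i, c in enumerate(component):
            shape[i] += alpha * c
    return shape

mean = [0.0, 0.0, 0.0]
basis = [[1.0, 0.0, 0.0],   # first principal component
         [0.0, 2.0, 0.0]]   # second principal component
face = instance_from_pca(mean, basis, [0.5, -1.0])
```

Fitting a model to an image amounts to searching for the coefficients (plus pose and camera parameters) that best explain the observed 2D landmarks or pixels.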
Recycling of poly(ethylene terephthalate) (PET) is of crucial importance, since worldwide amounts of PET waste are increasing rapidly due to its widespread applications. Hence, several methods have been developed, such as energetic, material, thermo-mechanical, and chemical recycling of PET. Most frequently, PET waste is incinerated for energy recovery, used as an additive in concrete composites, or glycolysed to yield mixtures of monomers and undefined oligomers. While energetic and thermo-mechanical recycling entail downcycling of the material, chemical recycling requires considerable amounts of chemicals and demanding processing steps, entailing toxic and ecological issues. This review provides a thorough survey of PET recycling, including energetic, material, thermo-mechanical, and chemical methods. It focuses on chemical methods, describing important reaction parameters and the yields of the reaction products obtained. While most methods yield monomers, only a few yield undefined low-molecular-weight oligomers for applications such as dispersants or plasticizers. Further, the present work presents an alternative chemical recycling method for PET in comparison to existing chemical methods.
This thesis identifies and presents approaches for moving people towards better decision-making with respect to financial products and services. To this end, so-called nudges for loans, credit cards, mortgages, retirement provision, and stocks/bonds are discussed. The thesis begins with a brief introduction to decision theory. It then briefly outlines the neoclassical capital market theory that has dominated for decades and bridges to the young discipline of behavioral finance. Subsequently, biases and heuristics along the decision-making process are identified and explained. The next chapter, "Libertarian Paternalism", provides the theoretical framework for nudging. The final chapter presents nudging approaches for loans, credit cards, mortgages, retirement provision, and stocks/bonds.
In a recent publication, Novy-Marx (2013) finds evidence that the variable gross profitability has a strong statistical influence on the common variation of stock returns. He also points out that there is common variation in stock returns related to firm profitability that is not captured by the three-factor model of Fama and French (1993). Thus, this thesis augments the three-factor model by the factor gross profitability and examines whether a profitability-based four-factor model is able to better explain monthly portfolio excess returns on the German stock market compared to the three-factor model of Fama and French (1993) and the Capital Asset Pricing Model (CAPM). Based on monthly stock returns of the CDAX over the period July 2008 to June 2014, this thesis documents four main findings. First, a significant positive market risk premium and a significant positive value premium can be identified; no evidence is found for a size or a profitability effect. Second, all included factors have a strongly significant effect on monthly portfolio excess returns. Third, the four-factor model clearly outperforms both the three-factor model of Fama and French (1993) and the CAPM in capturing the common variation in monthly portfolio excess returns; the CAPM performs worst. Finally, the results indicate that the three-factor model of Fama and French (1993) is somewhat better at explaining the cross-section of portfolio excess returns than the four-factor model; again, the CAPM performs worst. Nevertheless, the four-factor model is considered an improvement over the three-factor model of Fama and French (1993) and the CAPM in determining stock returns on the German stock market.
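All of the compared models are estimated as time-series regressions of portfolio excess returns on factor returns. As a minimal sketch, the single-factor (CAPM-style) special case with closed-form OLS looks like this; the multi-factor variants in the thesis add the SMB, HML, and profitability factors as further regressors (toy data below is illustrative):

```python
def ols_beta_alpha(excess_returns, factor):
    """Closed-form OLS for r_p - r_f = alpha + beta * (r_m - r_f) + eps,
    i.e. the one-factor special case of the factor models compared."""
    n = len(factor)
    mx = sum(factor) / n
    my = sum(excess_returns) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(factor, excess_returns))
    var = sum((x - mx) ** 2 for x in factor)
    beta = cov / var
    alpha = my - beta * mx
    return alpha, beta

# toy data: a portfolio that moves twice as much as the market
market = [0.01, -0.02, 0.03, 0.00]
portfolio = [2 * m for m in market]
alpha, beta = ols_beta_alpha(portfolio, market)
```

A model "capturing common variation" well then corresponds to high regression R² and alphas statistically indistinguishable from zero across test portfolios.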
Software process improvement (SPI) has been around for decades: frameworks are proposed, success factors are studied, and experiences have been reported. However, the sheer mass of concepts, approaches, and standards published over the years overwhelms practitioners as well as researchers. What is out there? Are there new emerging approaches? What are open issues? Still, we struggle to answer the question: what is the current state of SPI and related research? We present initial results from a systematic mapping study to shed light on the field of SPI and to draw conclusions for future research directions. An analysis of 635 publications draws a big picture of SPI-related research of the past 25 years. Our study shows a high number of solution proposals, experience reports, and secondary studies, but only few theories. In particular, standard SPI models are analyzed and evaluated for applicability, especially from the perspective of SPI in small- to medium-sized companies, which leads to new specialized frameworks. Furthermore, we find a growing interest in success factors to aid companies in conducting SPI.
This summary refers to the paper Software process improvement : where is the evidence? [Ku15].
This paper was published as full research paper in the ICSSP’2015 proceedings.
A software process is the game plan to organize project teams and run projects. Yet it is still a challenge to select the appropriate development approach for the respective context. A multitude of development approaches compete for the users' favor, but there is no silver bullet serving all possible setups. Moreover, recent research as well as experience from practice shows that companies utilize different development approaches to assemble the best-fitting approach for the respective company: a more traditional process provides the basic framework to serve the organization, while project teams embody this framework with more agile (and/or lean) practices to keep their flexibility. The paper at hand provides insights into the HELENA study, with which we aim to investigate the use of "Hybrid dEveLopmENt Approaches in software systems development". We present the survey design and initial findings from the survey's test runs. Furthermore, we outline the next steps towards the full survey.
Software process improvement (SPI) has been around for decades: frameworks are proposed, success factors are studied, and experiences have been reported. However, the sheer mass of concepts, approaches, and standards published over the years overwhelms practitioners as well as researchers. What is out there? Are there new trends and emerging approaches? What are open issues? Still, we struggle to answer these questions about the current state of SPI and related research. In this article, we present results from an updated systematic mapping study to shed light on the field of SPI, to develop a big picture of the state of the art, and to draw conclusions for future research directions. An analysis of 769 publications draws a big picture of SPI-related research of the past quarter-century. Our study shows a high number of solution proposals, experience reports, and secondary studies, but only few theories and models on SPI in general. In particular, standard SPI models like CMMI and ISO/IEC 15504 are analyzed, enhanced, and evaluated for applicability in practice, but these standards are also critically discussed, e.g., from the perspective of SPI in small- to medium-sized companies, which leads to new specialized frameworks. New and specialized frameworks account for the majority of the contributions found (approx. 38%). Furthermore, we find a growing interest in success factors (approx. 16%) to aid companies in conducting SPI and in adapting agile principles and practices for SPI (approx. 10%). Beyond these specific topics, the study results also show an increasing interest in secondary studies with the purpose of aggregating and structuring SPI-related knowledge. Finally, the present study helps direct future research by identifying under-researched topics awaiting further investigation.
A seamless convergence of the digital and physical factory, aiming at a personalized Product Emergence Process (PPEP) for smart products within the ESB Logistics Learning Factory at Reutlingen University.
A completely new business model with reference to Industrie 4.0, facilitated by 3D experience software in today's networked society, in which customers expect immediate responses, delightful experiences, and simple solutions, is one of the mission scenarios in the ESB Logistics Learning Factory at ESB Business School (Reutlingen University).
The business experience platform provides software solutions for every organizational unit in the company and in the factory. An interface with dashboards, project management apps, 3D design and construction apps with high-end visualization, manufacturing and simulation apps, as well as intelligence and social network apps in a collaborative interactive environment helps users learn to create an end-to-end value process for a personalized product that is first virtual and later physically produced.
Instead of traditional ways of working in a conventionally operated factory, real workers and robots work together semi-intuitively. The centerpiece of the self-planned interim factory is the smart personalized product, uniquely identifiable and locatable at all times during the production process: a scooter with an individually colored mobile phone holder for any smartphone, produced with a 3D printer in lot size one. In future solutions, smart products will incorporate internet-based services, designed and manufactured at the cost of mass products. Additionally, the scooter is equipped with a retrievable declarative product memory. Monitoring and control are handled by sensor tags and a Raspberry Pi mounted on the product. The engineering design and implementation of a changeable production system are guided by a self-execution system that, among other things, independently finds available workplaces.
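A retrievable declarative product memory of this kind can be thought of as an append-only event log carried by the product itself. The following is only an illustrative sketch (class and station names are our own invention, not the system described above):

```python
import json
import time

class ProductMemory:
    """Sketch of a declarative product memory: the smart product
    carries an append-only log of production events that can later
    be retrieved, e.g. via a small computer mounted on the product."""

    def __init__(self, product_id):
        self.product_id = product_id
        self.events = []

    def record(self, station, data):
        """Append one production event with a timestamp."""
        self.events.append({"station": station,
                            "data": data,
                            "ts": time.time()})

    def retrieve(self):
        """Return the full memory as a declarative JSON document."""
        return json.dumps({"id": self.product_id, "events": self.events})

# illustrative usage with invented identifiers
memory = ProductMemory("scooter-001")
memory.record("3d-printer", {"part": "phone holder", "colour": "red"})
```

Access control over such a log, as mentioned below for the imparted competences, would add an authorization check in front of `retrieve`.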
The competences imparted to students and professionals include the SCRUM project management method, the customization of workflows according to Industrie 4.0 principles, the enhancement of products with new personalized intelligent parts and self-programmed electrical and electronic components, the control of access to the product memory information, planning in a digital engineering environment, and setting up the physical factory to produce customer orders. The action-oriented experience gained relates to the chances and requirements of holistic digital and physical systems.