Geschäftsmodelle in der Energiewirtschaft : ein Kompendium von der Methodik bis zur Anwendung
(2017)
Whether student or employee, researcher or entrepreneur, politician or lecturer, whether in a start-up or in that corporate veteran, the energy utility: today, it seems, nobody gets by without a good business model. Why is that? What makes business models the workhorses not only of business administration, but also of engineers, economists and computer scientists? A business model describes the principle by which an organization creates, delivers and captures value. Through this simplification and structuring, it makes the overall construct, or its components, easier to communicate and analyse. It serves as a planning instrument that helps identify innovations more efficiently and in a more targeted manner. Business models can be developed at the level of a company or of individual business units. This compendium offers students as well as practitioners in the energy industry a methodological basis for developing business models on their own. Chapter 1 therefore derives from science and research what a business model is and how it is applied. Chapter 2 describes the challenges of the energy industry. The sector has been in transition for decades. New technologies for (decentralized) generation, digitalization, changing political goals and instruments (liberalization, nuclear phase-out, Energiewende, ...) and new customer needs require companies, large and small, established and new, publicly and privately owned, to seek promising paths into the future in an environment of eroding margins and increasing competition. The very term "business model" is nowadays loaded with the hope of finding a savior in this thicket, a hope that a structuring instrument, which is ultimately all a business model is, cannot fulfil.
Chapter 3 describes business models that are, in principle, already known in the energy industry and differentiates their patterns, drawing on analogies to other industries. This should ease entry into the sector for relative newcomers and give those searching for new business models a basis for their own innovation. Chapter 4 presents business models for virtual power plants; this example also shows how the business models of partners along the value chain must interlock. The final Chapter 5 addresses success factors for developing and implementing business models.
Colors surround us every day and influence how we feel and behave, sometimes consciously, sometimes unconsciously. This fact prompts marketing to engage with the effects of colors in order to apply them deliberately. Used correctly, colors in marketing can support the (advertising) message and the intended effect of an activity or brand, and additionally attract consumers' attention. Knowledge about the effects of colors in marketing is thus decisive for how consumers perceive a company and for the success of its marketing. This thesis first describes colors and their effects with regard to color symbolism as well as color combinations and shades. It then turns to the use of colors in marketing. Using examples from marketing practice, colors are presented as an element of corporate identity and of the brand, and the use of colors in advertising is analysed. To illustrate the effect of colors in marketing, a mood board is designed for a promotional item of the discounter ALDI Süd. Finally, the findings are summarized and a recommendation for the use of colors in marketing is given.
Marketing mit Instagram
(2017)
In a consumer society flooded with stimuli, reaching potential customers via the right channel is not always easy. For brand positioning, growing budgets must be invested in producing high-quality content in order to preserve authenticity and relevance for consumers amid growing digital competition. The focus has long since shifted from the pure advertising message to storytelling. The experiential quality of a brand moves to the foreground, because the target group wants not only the objective added value but also an exciting story that they can experience themselves with the product or service. With the rise of mobile content consumption, social networks such as Instagram are receiving increasing attention when companies plan their marketing activities. The textile retail sector has been actively exploiting this insight for some time; the German food retail sector has not. Against this background, the question arises whether an aesthetic staging of food on Instagram even resonates with recipients. The answer is clear: American companies such as Wholefoods lead the way. With over 1.9 million followers and almost 2,500 posts (as of February 2017), the profile of the Texas food retailer enjoys great popularity. Marketing with Instagram and food can thus be combined successfully, but how? That is what this thesis sets out to investigate.
THE PROBLEM: Companies create problems for customers and employees when product innovation goes unmanaged. Eventually, excessive operational complexity hurts the bottom line.
THREE SOLUTIONS: Focus on product integration, not product proliferation. Make sure your product developers work closely with customer-facing and operational employees. And settle on a high-level purpose that can guide decision making.
A method is described for determining descriptors D_i that correlate with properties of a particle collective. The descriptors D_i are determined by evaluating measurement signals obtained with an optical reflection or transmission method in which light is radiated into the particle collective and back-reflected light is detected by a photodetector. The method comprises the following steps: a) recording an intensity signal I(t) from the photodetector, where I(t) indicates the time-dependent intensity of the light detected by the photodetector; b) creating a digitized intensity signal I_t by digitizing the recorded intensity signal I(t) with a sampling period Δt within a sampling window T of predetermined duration; c) creating a set of coefficient values a_i by transforming the digitized intensity signal I_t using a mathematical, preferably surjective, transformation; d) deriving the descriptors D_i from the coefficient values. The method, and an apparatus designed to carry it out, can be implemented considerably more simply than conventional methods in which particle collectives are examined via a chord length distribution (CLD). The descriptors obtained with the method can be used in process analysis, for example to recognize quickly and easily when a particle collective behaves anomalously.
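The pipeline of steps a)-d) can be sketched in a few lines. The patent leaves the concrete transformation open; the FFT used below, as well as the band-energy descriptors, are illustrative assumptions, not the claimed method:

```python
import numpy as np

def derive_descriptors(intensity, n_descriptors=4):
    """Sketch of steps a)-d): digitized intensity signal I_t
    -> coefficient values a_i -> descriptors D_i.
    The FFT is only one possible choice of transformation."""
    # b) `intensity` is assumed already sampled with period Δt within window T
    # c) transform the digitized signal into a set of coefficient values a_i
    coeffs = np.abs(np.fft.rfft(intensity))
    # d) derive descriptors D_i, here: energies of coarse frequency bands
    bands = np.array_split(coeffs, n_descriptors)
    return np.array([band.sum() for band in bands])
```

Monitoring such descriptors over time would then allow a process analysis to flag anomalous behaviour of the particle collective.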
Broad acceptance of finite-element-based analysis of structural problems and the increased availability of CAD-systems for structural tasks, which help to generate meshes of non-trivial geometries, have been setting a standard for the evaluation of designs in mechanical engineering in the last few decades. The development of automated or semi-automated optimizers, integrated into the Computer-Aided Engineering (CAE)-packages or working as outer loop machines, requiring the solver to do the analysis of the specific designs, has been accepted by most advanced users of the simulation community as well. The availability and inexpensive processing power of computers is increasing without any limitations foreseen in the coming years. There is little doubt that virtual product development will continue using the tools that have proved to be so successful and so easy to handle.
Current fields of interest
(2016)
If we review the research done in the field of optimization, the following topics appear to be the focus of current development:
– Optimization under uncertainties, taking into account the inevitable scatter of parts, external effects and internal properties. Reliability and robustness both have to be taken into account when running optimizations, so the name Robust Design Optimization (RDO) came into use.
– Multi-Objective Optimization (MOO) handles situations in which different participants in the development process are developing in different directions. Typically we think of commercial and engineering aspects, but other constellations have to be looked at as well, such as comfort and performance or price and consumption.
– Process development of the entire design process, including optimization from early stages, might help avoid inefficient efforts. Here the management of virtual development has to be re-designed to fit into a coherent scheme.
...
There are many other fields where interesting progress is being made. We limit our discussion to the first three questions.
To illustrate the power and the pitfalls of Bionic Optimization, we will show some examples spanning classes of applications and employing various strategies. These applications cover a broad range of engineering tasks. Nevertheless, there is no guarantee that our experiences and our examples will be sufficient to deal with all questions and issues in a comprehensive way. As a general rule, for each class of problems novices should begin with a learning phase. In this introductory phase, we use simple and quick examples, e.g. small FE-models, linear load cases, short time intervals and simple material models. Here beginners within the Bionic Optimization community can learn which parameter combinations to use. In Sect. 3.3 we discuss strategies for accelerating optimization studies. Using these parameters as starting points is one way to set the specific ranges, e.g. the number of parents and kids, crossing, mutation radii and the number of generations. On the other hand, these trial runs will doubtless indicate that Bionic Optimization needs large numbers of individual designs, and considerable time and computing power. We recommend investing enough time in preparing each task in order to avoid the frustration should large jobs fail after long calculation times.
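A minimal evolutionary loop makes the role of these parameters concrete. The parameter names (parents, kids, mutation radius, generations) mirror the quantities discussed above; the concrete scheme and default values are illustrative assumptions only:

```python
import numpy as np

def evolutionary_minimize(goal, x0, n_parents=3, n_kids=12,
                          mutation_radius=0.5, n_generations=50, seed=0):
    """Toy evolutionary optimization sketch over free parameters x."""
    rng = np.random.default_rng(seed)
    # initial population: mutated copies of the starting design
    pop = x0 + mutation_radius * rng.standard_normal((n_parents, len(x0)))
    for _ in range(n_generations):
        # kids are mutated copies of randomly chosen parents
        parents = pop[rng.integers(0, n_parents, n_kids)]
        kids = parents + mutation_radius * rng.standard_normal(parents.shape)
        # selection: keep the best designs as parents of the next generation
        all_designs = np.vstack([pop, kids])
        pop = all_designs[np.argsort([goal(x) for x in all_designs])[:n_parents]]
    return pop[0]
```

Even this toy version illustrates the cost driver mentioned in the text: every generation requires `n_kids` additional evaluations of the goal function, each of which is a full FE study in a real application.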
Application to CAE systems
(2016)
Due to the broad acceptance of CAD-systems based on 3D solids, the geometric data of all common CAE (Computer-Aided Engineering) software, at least in mechanical engineering, are based on these solids. We use solid models, where the space filled by material is defined in a simple and easily usable way. Solid models allow for the development of automated meshers that transform solid volumes into finite elements. Even after some unacceptable initial trials, users are able to generate meshes of non-trivial geometries within minutes to hours, instead of days or weeks. Once meshing was no longer the cost-limiting factor of finite-element studies, numerical simulation became a tool for smaller industries as well.
In the early days of automated meshing development, there were discussions over the use of tetrahedral (Fig. 4.1) or hexahedral meshes. But after a short period of time it became evident that there were, and will always be, many problems using automated meshers to generate hexahedral elements. So today nearly all automated 3D-meshing systems use tetrahedral elements.
We have seen that bionic optimization can be a powerful tool when applied to problems with non-trivial landscapes of goals and restrictions. This, in turn, led us to a discussion of useful methodologies for applying this optimization to real problems. On the other hand, it must be stated that each optimization is a time consuming process as soon as the problem expands beyond a small number of free parameters related to simple parabolic responses. Bionic optimization is not a quick approach to solving complex questions within short times. In some cases it has the potential to fail entirely, either by sticking to local maxima or by random exploration of the parameter space without finding any promising solutions. The following sections present some remarks on the efficiency and limitations users must be aware of. They aim to increase the knowledge base of using and encountering bionic optimization. But they should not discourage potential users from this promising field of powerful strategies to find good or even the best possible designs.
In this chapter we introduce methods to improve mechanical designs by bionic methods. In most cases we assume that a general idea of the part or system is given by a set of data or parameters. Our task is to modify these free parameters so that a given goal or objective is optimized without violation of any of the existing restrictions.
Motivation
(2016)
Since human beings started to work consciously with their environment, they have tried to improve the world they were living in. Early use of tools, increasing quality of these tools, use of new materials, fabrication of clay pots, and heat treatment of metals: all these were early steps of optimization. But even on lower levels of life than human beings or human society, we find optimization processes. The organization of a herd of buffalos to face their enemies, the coordinated strategies of these enemies to isolate some of the herd’s members, and the organization of bird swarms on their long flights to their winter quarters: all these social interactions are optimized strategies of long learning processes, most of them the result of a kind of collective intelligence acquired during long selection periods.
Wissen schaffen für klimafreundliche Energietechnik : das Erfolgsgeheimnis der Kraft-Wärme-Kopplung
(2017)
As a highly efficient energy generation technology, combined heat and power (CHP) plays an important role in the Energiewende and in climate protection. Especially for buildings and for companies, CHP can be an economical and environmentally friendly way to generate electricity and heat. Planning, implementing and operating CHP plants requires skilled tradespeople and engineers. For this reason, in 2015 and 2016 the Ministry of the Environment, Climate and the Energy Sector, together with the Handwerkstag Baden-Württemberg ... ran the qualification course "Kraft-Wärme-Kopplung - Kompetenz für den Wärme- und Energiemarkt von heute und morgen" ... Owing to the success of the seminar series, it will be continued in 2017 and beyond.
With the rise of volatile electricity generation from wind and solar, the German energy industry increasingly needs flexibly available capacity and energy. Battery storage systems, for example, can provide this flexibility. The extent to which they can take on this role, and thus how much the market will grow, depends on the professionalism of the market players and on the surrounding conditions. Where these are right, players begin developing business models; at the same time, they seek to influence the further development of these framework conditions in their own interest, which creates additional market potential. In the following, the business models that can already be discerned are presented and placed in the context of current potential analyses. The two sub-markets for battery storage, home storage and large-scale storage, are developing under different framework conditions.
The primary goal and task of this work is ... the development of a new recycling method for PET that avoids the disadvantages of previous recovery methods and, while largely preserving the synthesis effort already invested, yields defined oligomers from which high-quality products can subsequently be made.
An electronic driver circuit and a driving method are disclosed. The driver circuit comprises an output; a first output transistor with a control node and a load path, the load path being connected between the output and a first supply node; a voltage regulator configured to control a voltage across the load path of the first output transistor; and a first driver configured to drive the first output transistor as a function of a first control signal.
The present invention relates to a method for controlling a dead time in a synchronous converter (100) in which a control switch (2) and a synchronous switch (3) are switched cyclically, the control switch (2) being switched by a first switching signal (S1) and the synchronous switch (3) by a second switching signal (S2). The method comprises capturing and holding a voltage value describing a voltage (VSW) across the synchronous switch (3) at a specific point in time, and adapting the first and/or second switching signal (S1, S2) for a following cycle based on the held voltage value.
The invention relates to an energy transmitter (100) for inductive energy transfer from a primary circuit (10) of the energy transmitter (100) to a first (5) and a second (15) voltage domain of a secondary circuit (20) of the energy transmitter (100), and for information transfer from the secondary circuit (20) to the primary circuit (10). The energy transmitter (100) comprises: a transformer (30) via which the primary circuit (10) and the secondary circuit (20) are inductively coupled and via which both the energy transfer and the information transfer take place; and an amplitude modulation module (50) for modulating the current and/or voltage amplitude in the secondary circuit (20) by means of an amplitude modulation switch (55), the amplitude modulation switch (55) being arranged between the first (5) and second (15) voltage domains of the secondary circuit (20) and being designed to change the current and/or voltage amplitude in the primary circuit (10) by opening and closing the amplitude modulation switch (55), thereby transferring information from the secondary circuit (20) to the primary circuit (10). The present invention further relates to a gate driver for switching a power switch (500) and to a method for inductive energy transfer combined with information transfer.
Today the optimization of metal forming processes is done using advanced simulation tools in a virtual process, e.g. FEM studies. The modification of the free parameters represents the different variants to be analysed. Experienced engineers may thus derive useful proposals in an acceptable time if good initial proposals are available. As soon as the number of free parameters grows, or the total process takes a long time and uses several successive forming steps, it can be quite difficult to find promising initial ideas. In metal forming another problem has to be considered: an optimization using a series of local improvements, often called a gradient approach, may find a local optimum, but this can be far away from a satisfactory solution. Therefore non-deterministic approaches, e.g. Bionic Optimization, have to be used. Approaches like Evolutionary Optimization or Particle Swarm Optimization are capable of covering large, high-dimensional optimization spaces and discovering many local optima, so the chance of including the global optimum increases. Unfortunately, these bionic methods require large numbers of studies of different variants of the process to be optimized. The number of studies tends to increase exponentially with the number of free parameters of the forming process. As the time for a single study may not be small either, the total time demand becomes unacceptable, taking weeks to months even with high-performance computing. Therefore the optimization process needs to be accelerated. Among the many ideas for reducing the time and computing-power requirements, Meta- and Hybrid Optimization seem to produce the most efficient results. Hybrid Optimization often consists of global searches for promising regions within the parameter space. As soon as the studies indicate that there could be a local optimum, a deterministic study tries to identify this local region.
If it shows better performance than other optima found so far, it is preserved for a more detailed analysis; if it performs worse, the region is excluded from further search. Meta-Optimization is often understood as the derivation of response surfaces of the functions of the free parameters. Once enough studies have been performed, the optimization is done using the response surfaces as representatives, e.g. for the goal and the restrictions of the optimization problem. Having found regions where interesting solutions are to be expected, the studies available so far are used to define the response surfaces; in many cases low-degree polynomials are used, with their coefficients defined by least-squares methods. Both proposals, Hybrid Optimization and Meta-Optimization, sometimes used in combination, often reduce the total optimization process by large numbers of variants that would otherwise have to be studied. They are consequently highly recommended when dealing with time-consuming optimization studies.
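The response-surface step of Meta-Optimization can be sketched for the two-parameter case: fit a low-degree polynomial to the studies performed so far by least squares, then evaluate the cheap surrogate instead of the expensive simulation. The degree-2 polynomial and the solver choice below are illustrative assumptions:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of a full degree-2 polynomial response surface
    in two free parameters, a minimal Meta-Optimization illustration."""
    x1, x2 = X[:, 0], X[:, 1]
    # design matrix of the quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def surface(coeffs, x):
    """Evaluate the fitted response surface at a new parameter point."""
    x1, x2 = x
    return coeffs @ np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])
```

In the hybrid scheme described above, such a surrogate would only be trusted within the region covered by the underlying studies; candidate optima found on the surface still need verification by a real simulation run.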
In this article an energy harvesting system for measuring wind speeds starting from 2 m/s (about 2 Bft) is presented, which uses the same source for measuring and for supplying power (energy-autarkic operation). Using the same source for measurement and power supply increases the number of potential applications, since the required power is available together with the measurement signal. For wind velocity measurement, one might consider applications in tunnels, tubes, pipelines, air conditioning, or monitoring the clogging of filters. Bluetooth Low Energy (BLE) is chosen as the radio technology, since it makes unidirectional communication possible, requiring only a single telegram (advertising telegram) to send the measured value. The more complex connection establishment required by WLAN or 6LoWPAN can therefore be avoided, significantly reducing the overall energy consumption. Since the advertising telegram makes no provision for security, or against hacking in general, a security concept is presented which includes encryption and resilience against replay attacks in a unidirectional communication system.
To make the presented concepts usable beyond wind sensors, the system is divided into three major modules: the generator-sensor module, the radio module and the energy management module. Whereas the first two may be exchanged for different applications, the energy management module could be reused in many of them. It supplies and stores the needed energy and switches power on and off in a deterministic way to ensure that the radio module can operate correctly.
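Replay resilience in a unidirectional system is commonly achieved by authenticating a monotonically increasing counter together with the measured value, so the receiver can reject telegrams it has already seen. The paper's concrete scheme is not reproduced here; the field layout, HMAC-SHA256, and the truncated tag below are illustrative assumptions:

```python
import hashlib
import hmac

def make_telegram(key: bytes, counter: int, value: float) -> bytes:
    """Sender side: counter + measured value, authenticated by a truncated MAC."""
    payload = counter.to_bytes(4, "big") + int(value * 100).to_bytes(2, "big")
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:4]  # truncated MAC
    return payload + tag

def accept_telegram(key: bytes, last_counter: int, telegram: bytes):
    """Receiver side: verify the MAC and reject counters that are not
    strictly newer than the last accepted one (replay protection)."""
    payload, tag = telegram[:-4], telegram[-4:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:4]
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted telegram
    counter = int.from_bytes(payload[:4], "big")
    if counter <= last_counter:
        return None  # replayed telegram
    return counter, int.from_bytes(payload[4:6], "big") / 100.0
```

Because no return channel exists, the receiver only ever compares against its locally stored counter, which is exactly the constraint a BLE advertising telegram imposes.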
Sesam öffne Dich
(2017)
For motives similar to those Ferdinand von Steinbeis pursued on a grand scale, it is thanks to the efforts of the textile teacher Reichelt and the then Prof. Stängle that, towards the end of the 19th century, a rich collection of models and fabric samples was established and built up at the Reutlingen textile school, which had meanwhile advanced to a Technikum für Textilindustrie. The core of the collection of more than 500,000 samples and fabric fragments consists of European jacquard fabrics and English suiting fabrics from the 19th century. The showpiece of the Reutlingen fabric collection is formed by ... some 900 old Japanese fabric fragments from the period 1530 to 1880 ...
We report the temperature dependence of metal-enhanced fluorescence (MEF) of individual photosystem I (PSI) complexes from Thermosynechococcus elongatus (T. elongatus) coupled to gold nanoparticles (AuNPs). A strong temperature dependence of shape and intensity of the emission spectra is observed when PSI is coupled to AuNPs. For each temperature, the enhancement factor (EF) is calculated by comparing the intensity of individual AuNP-coupled PSI to the mean intensity of ‘uncoupled’ PSI. At cryogenic temperature (1.6 K) the average EF was 4.3-fold. Upon increasing the temperature to 250 K the EF increases to 84-fold. Single complexes show even higher EFs up to 441.0-fold. At increasing temperatures the different spectral pools of PSI from T. elongatus become distinguishable. These pools are affected differently by the plasmonic interactions and show different enhancements. The remarkable increase of the EFs is explained by a rate model including the temperature dependence of the fluorescence yield of PSI and the spectral overlap between absorption and emission spectra of AuNPs and PSI, respectively.
We present a fully automatic approach to real-time 3D face reconstruction from monocular in-the-wild videos. We use a 3D morphable face model to obtain a semi-dense shape and combine it with a fast median-based super-resolution technique to obtain a high-fidelity textured 3D face model. Our system does not need prior training and is designed to work in uncontrolled scenarios.
Operating CHP plants can save a considerable amount of primary energy. For this reason, CHP plants are promoted under various laws and directives. For a CHP plant to operate economically, the largest possible share of the electricity generated must either be consumed on site or sold to third parties (tenants, apartment owners, ...). With the KWKG 2016, larger CHP plants become attractive, and plants with fewer annual operating hours can even prove more economical than pure base-load plants.
It is assumed that more education leads to a better understanding of complex systems. Some researchers, however, find indications that simple mechanisms like stocks and flows are not well understood even by people who have completed higher education. In this paper, we test people's understanding of complex systems with the widely studied stock-and-flow (SF) tasks. SF tasks assess people's understanding of the interplay between stocks and flows. We investigate SF failure of domain experts and novices in different knowledge domains. In particular, we compare performance on the original study's bathtub task with the square-wave pattern against two alternative cover stories from the engineering and business domains, across groups of business and engineering students from different semesters. Further, we show that, while engineering students perform better than business students, students may lose the capability of dealing with simple SF tasks as they progress through higher education. We thus find hints of déformation professionnelle in higher education.
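The logic that SF tasks probe is elementary accumulation: the stock changes in each period only by the difference between inflow and outflow. A minimal sketch (the bathtub numbers are illustrative, not the study's stimuli):

```python
def stock_trajectory(stock0, inflows, outflows):
    """Accumulate a stock over time: stock[t+1] = stock[t] + in[t] - out[t]."""
    stocks = [stock0]
    for inflow, outflow in zip(inflows, outflows):
        stocks.append(stocks[-1] + inflow - outflow)
    return stocks
```

With a square-wave inflow of [8, 8, 2, 2] against a constant outflow of 5, a stock starting at 100 rises while inflow exceeds outflow and falls afterwards, peaking exactly when the two flows cross: [100, 103, 106, 103, 100]. Misjudging where that peak lies is the classic SF failure the paper tests for.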
Propofol is an intravenous anesthetic. Currently, it is not possible to routinely measure the blood concentration of the drug in real time. However, multi-capillary column ion-mobility spectrometry of exhaled gas can estimate the blood propofol concentration. Unfortunately, adhesion of volatile propofol to plastic materials complicates measurements. It is therefore necessary to consider the extent to which volatile propofol adheres to the various plastics used in sampling tubing. Perfluoroalkoxy (PFA), polytetrafluoroethylene (PTFE), polyurethane (PUR), silicone, and Tygon tubing were investigated in an experimental setting using a calibration gas generator (HovaCAL). Propofol gas was measured for one hour at tubing temperatures of 26 °C, 50 °C, and 90 °C. Test tubing segments were then flushed with N2 to quantify desorption. PUR and Tygon sample tubing absorbed all volatile propofol. The silicone tubing reached the maximum propofol concentration after 119 min, which was 29 min after the propofol gas exposure stopped. The use of PFA or PTFE tubing produced comparable and reasonably accurate propofol measurements. The desaturation time for PFA was 10 min shorter at 26 °C than for PTFE. PFA tubing thus seems most suitable for the measurement of volatile propofol, with PTFE as an alternative.
Due to the complexity of assembly processes, a high proportion of tasks is still performed by human workers. Short-cyclically changing work contents due to smaller lot sizes, especially in varied series assembly, increase both the need for information support and the risk of rising physical and psychological stress. The use of technical and digital assistance systems can counter these challenges. Through the integration of information and communication technology as well as collaborative assembly technologies, hybrid cyber-physical assembly systems will emerge. Widely established assembly planning approaches for digital and technical support systems in cyber-physical assembly systems are outlined and discussed with regard to synergies and delimitations of planning perspectives.
This paper investigates the electrothermal stability and the predominant defect mechanism of a Schottky gate AlGaN/GaN HEMT. Calibrated 3-D electrothermal simulations are performed using a simple semiempirical dc model, which is verified against high-temperature measurements up to 440°C. To determine the thermal limits of the safe operating area, measurements up to destruction are conducted at different operating points. The predominant failure mechanism is identified to be hot-spot formation and subsequent thermal runaway, induced by large drain–gate leakage currents that occur at high temperatures. The simulation results and the high temperature measurements confirm the observed failure patterns.
Purpose: Human breath analysis is proposed with increasing frequency as a useful tool in clinical application. We performed this study to find characteristic volatile organic compounds (VOCs) in the exhaled breath of patients with idiopathic pulmonary fibrosis (IPF) for discrimination from healthy subjects. Methods: VOCs in the exhaled breath of 40 IPF patients and 55 healthy controls were measured using a multi-capillary column and ion mobility spectrometer. The patients were examined by pulmonary function tests, blood gas analysis, and serum biomarkers of interstitial pneumonia. Results: We detected 85 VOC peaks in the exhaled breath of IPF patients and controls. IPF patients showed 5 significant VOC peaks: p-cymene, acetoin, isoprene, ethylbenzene, and an unknown compound. The VOC peak of p-cymene was significantly lower (p < 0.001), while the VOC peaks of acetoin, isoprene, ethylbenzene, and the unknown compound were significantly higher (p < 0.001 for all) compared with the peaks of controls. Comparing VOC peaks with clinical parameters, negative correlations with VC (r = −0.393, p = 0.013), %VC (r = −0.569, p < 0.001), FVC (r = −0.440, p = 0.004), %FVC (r = −0.539, p < 0.001), DLco (r = −0.394, p = 0.018), and %DLco (r = −0.413, p = 0.008) and a positive correlation with KL-6 (r = 0.432, p = 0.005) were found for p-cymene. Conclusion: We found 5 characteristic VOCs in the exhaled breath of IPF patients. Among them, the VOC peak of p-cymene was related to the clinical parameters of IPF. These VOCs may be useful biomarkers of IPF.
Data collected from internet applications are mainly stored in the form of transactions. All transactions of one user form a sequence, which reflects the user's behaviour on the site. Nowadays, it is important to be able to classify this behaviour in real time for various reasons: e.g. to increase the conversion rate of customers while they are in the store, or to prevent fraudulent transactions before they are placed. However, this is difficult due to the complex structure of the data sequences (i.e. a mix of categorical and continuous data types, constant data updates) and the large amounts of data that are stored. Therefore, this thesis studies the classification of complex data sequences. It surveys the fields of time series analysis (temporal data mining), sequence data mining, and standard classification algorithms. It turns out that these algorithms are either difficult to apply to data sequences or do not deliver a classification: time series methods need a predefined model and cannot handle complex data types, while sequence classification algorithms such as the apriori algorithm family cannot utilize the time aspect of the data. The strengths and weaknesses of the candidate algorithms are identified and used to build a new approach for classifying complex data sequences. The problem is solved by a two-step process. First, feature construction is used to create and discover suitable features in a training phase. Then, the blueprints of the discovered features are used in a formula during the classification phase to perform the real-time classification. The features are constructed by combining and aggregating the original data over the span of the sequence, including the elapsed time via a calculated time axis. Additionally, a combination of features and feature selection is used to simplify complex data types. This makes it possible to capture behavioural patterns that occur over the course of time.
This newly proposed approach combines techniques from several research fields. Part of the algorithm originates from the field of feature construction and is used to reveal behaviour over time and express it in the form of features. A combination of the features is used to highlight relations between them. The blueprints of these features can then be used to classify an incoming data stream in real time. An automated framework is presented that allows the features to adapt iteratively to changes in the underlying patterns of the data stream. This core feature of the presented work is achieved by separating the feature application step from the computationally costly feature construction step and by iteratively restarting feature construction on the newly incoming data. The algorithm and the corresponding models are described in detail and applied to three case studies (customer churn prediction, bot detection in computer games, credit card fraud detection). The case studies show that the proposed algorithm is able to find distinctive information in data sequences and use it effectively for classification tasks. The promising results indicate that the suggested approach can be applied to a wide range of other application areas that involve data sequences.
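The two-step process of the thesis can be illustrated as follows. This is a minimal sketch under assumed transaction fields (timestamp, amount, category) and a hypothetical burst-rate feature; it is not the thesis's actual feature set or formula:

```python
def build_features(sequence):
    # Step 1 (training phase): aggregate a raw transaction sequence into a
    # fixed-length feature vector. Each transaction is a tuple
    # (timestamp, amount, category); categorical data is simplified by
    # counting distinct values, and elapsed time spans the whole sequence.
    timestamps = [t for t, _, _ in sequence]
    amounts = [a for _, a, _ in sequence]
    categories = {c for _, _, c in sequence}
    return {
        "n_events": len(sequence),
        "total_amount": sum(amounts),
        "elapsed": max(timestamps) - min(timestamps) if sequence else 0.0,
        "n_categories": len(categories),
    }

def classify(features, threshold):
    # Step 2 (classification phase): apply the feature blueprint cheaply to
    # an incoming sequence. Here the illustrative rule flags sequences with
    # many events in a short time span (burst behaviour).
    rate = features["n_events"] / (features["elapsed"] + 1.0)
    return "suspicious" if rate > threshold else "normal"
```

Separating the expensive `build_features` discovery from the cheap `classify` application mirrors the framework's split between iterative feature construction and real-time scoring.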
Social sustainable supply chain management in the textile and apparel industry : a literature review
(2017)
A vast number of studies on sustainability in supply chain management have been conducted by academics over the last decade. Nevertheless, socially related aspects are still neglected in the related discussion. The primary motivation of the present literature review arises from this shortcoming; the key purpose of this study is thus to enrich the discussion by providing a state-of-the-art review focusing exclusively on social issues in sustainable supply chain management (SSCM), with the textile/apparel sector as the field of application. The authors conduct a literature review, including a content analysis covering 45 articles published in English peer-reviewed journals, and propose a comprehensive map which integrates the latest findings on socially related practices in the textile/apparel industry with the dominant conceptualization in order to reveal potential research areas in the field. The results show an ongoing lack of investigation of the social dimension of the triple bottom line in SSCM. Findings indicate that a company's internal orientation is the main assisting factor in sustainable supply chain management practices. Further, supplier collaboration and assessment can be interpreted as an offer to suppliers deriving from stakeholders and a focal company's management of social risk. Nevertheless, suppliers also face, or even create, considerable barriers to improving their social performance. This calls for more empirical research using qualitative or quantitative survey methods, especially at the supplier level in developing countries.
Foam has been employed as an improved or enhanced oil recovery method to overcome gravity override and the channeling and fingering of the injected gas, which arises because of the low density and viscosity of the injected fluid combined with the rock heterogeneity. A major challenge, however, is the stability of the generated foam when it contacts the oil. In this study we investigate the feasibility of using inexpensive nanoparticles made of coal fly ash, an abundantly available waste product of coal power plants, as a foam booster. We investigate the viability of reducing the size of fly ash particles to 100-200 nm using high-frequency ultrasonic grinding. We also study the foaminess (foamability), strength, and stability of the foams made with minor concentrations of fly ash nanoparticles and surfactant, both in bulk and porous media. The effect of monovalent and divalent ion concentration on the foaminess of the nanoash suspension combined with very low concentrations of a commercial alpha olefin sulfonate (AOS) surfactant, in the presence and absence of oil, is studied. We observe that bulk foam that contains very small amounts of nanoash particles shows a higher stability in the presence of model oils. Furthermore, experiments in porous media exhibit remarkably stronger foam with mixtures of nanoash and surfactant, such that the amount of produced liquids from the cores significantly increases. For the first time we show that nanoash can be used to stabilize nitrogen foam in the presence of crude oil at high temperature and pressure. In the presence of oil, the nanoash-AOS foam shows a higher stability, although crude oil tends to form stable emulsions in water in the presence of nanoash.
Social networks, smart portable devices, and the Internet of Things (IoT), built on technologies such as big data analytics and cloud services, are emerging to support flexible connected products and agile services as the new wave of digital transformation. Biological metaphors of living, adaptable ecosystems combined with service-oriented enterprise architectures provide the foundation for self-optimizing and resilient run-time environments for intelligent business services and related distributed information systems. We extend Enterprise Architecture (EA) with mechanisms for the flexible adaptation and evolution of information systems with distributed IoT and other micro-granular digital architectures to support the next generation of digital products, services, and processes. Our aim is to support flexibility and agile transformation of both IT and business capabilities through adaptive digital enterprise architectures. The present research paper additionally investigates decision mechanisms in the context of multi-perspective explorations of enterprise services and Internet of Things architectures by extending original enterprise architecture reference models with state-of-the-art elements for architectural engineering and digitization.
In times of dynamic markets, enterprises have to be agile in order to react quickly to market influences. Due to the increasing digitization of products, the enterprise IT is often affected when business models change. Enterprise Architecture Management (EAM) targets a holistic view of the enterprise's IT and its relations to the business. However, Enterprise Architectures (EA) are complex structures consisting of many layers, artifacts, and relationships between them. Thus, analyzing an EA is a very complex task for stakeholders. Visualizations are common vehicles to support analysis; in practice, however, visualization capabilities lack flexibility and interactivity. A solution to better support stakeholders in analyzing EAs might be the application of visual analytics. Starting from a systematic literature review, this article investigates the features of visual analytics relevant in the context of EAM.
The Eighth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2016), held between June 26 - 30, 2016 in Lisbon, Portugal, continued a series of international events covering a large spectrum of topics related to advances in database fundamentals, the evolving relationship between databases and other domains, database technologies and content processing, as well as domain-specific database applications. Advances in different technologies and domains related to databases have triggered substantial improvements in content processing, information indexing, and data, process, and knowledge mining. The push came from Web services, artificial intelligence, and agent technologies, as well as from the generalization of XML adoption. High-speed communications and computations, large storage capacities, and load balancing for distributed database access allow new approaches to content processing with incomplete patterns, advanced ranking algorithms, and advanced indexing methods. Developments in e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems put pressure on database communities to push the 'de facto' methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
Combining software product lines with agile software development in the automotive industry is promising. The goal is to obtain both the benefits of agile methods, such as short development cycles, and the benefits of systematic reuse, such as the effective management of variants. However, the combination also involves challenges and requires a suitable introduction or transformation strategy. Based on findings from an interview study and on existing product line developments, challenges and possible solutions are presented.
This article studies the development of e-governance over time and across countries. We use a large data sample consisting of 99 developing and 34 OECD countries. First, we study the development of e-governance. Second, we estimate models to identify the determinants of e-governance over time and across countries. The study reveals that the level of e-governance is determined by the degree of e-participation and online access as well as by GDP per capita.
This article contributes to marketing research by laying the groundwork for the young but increasingly relevant research stream on customer experience management (CEM). On the one hand, the identified framework shows that CEM goes beyond individual organizational capabilities, such as the design of service experiences, which have dominated CEM research to date. On the other hand, the concept contributes to the synthesis of fragmented but interrelated streams of literature in marketing research ...
Nenne sie niemals Senioren! [Never call them seniors!]
(2017)
The book provides suggestions on how to start using bionic optimization methods, including pseudo-code examples of each of the important approaches and outlines of how to improve them. The most efficient methods for accelerating the studies are discussed. These include the selection of size and generations of a study’s parameters, modification of these driving parameters, switching to gradient methods when approaching local maxima, and the use of parallel working hardware.
Bionic optimization means finding the best solution to a problem using methods found in nature. As evolutionary strategies and particle swarm optimization seem to be the most important methods for structural optimization, we primarily focus on them. Other methods such as neural nets or ant colonies are more suited to control or process studies, so their basic ideas are outlined in order to motivate readers to start using them.
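To give a flavour of the particle swarm method emphasized above, here is a minimal PSO sketch minimizing a simple test function. The inertia and acceleration coefficients are common textbook defaults, not values taken from the book, and the sphere function stands in for a real structural objective:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    # Particle swarm optimization: each particle moves under the pull of
    # its own best position (pbest) and the swarm's best position (gbest).
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a structural setting, `objective` would be replaced by a call into an FE code, which is where the book's discussion of parallel hardware and switching to gradient methods near local maxima becomes relevant.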
A set of sample applications shows how bionic optimization works in practice. From academic studies on simple frames made of rods to earthquake-resistant buildings, readers follow the lessons learned, the difficulties encountered, and effective strategies for overcoming them. For the problem of tuned mass dampers, which play an important role in dynamic control, changing the goal and the restrictions paves the way for multi-objective optimization. As most structural designers today use commercial software such as FE codes or CAE systems with integrated simulation modules, ways of integrating bionic optimization into these software packages are outlined, and examples of typical systems and typical optimization approaches are presented.
The closing section provides an overview of and outlook on reliable and robust as well as multi-objective optimization, including discussions of current and upcoming research topics in the field concerning a unified theory for handling stochastic design processes.
The most important objective of event marketing is to improve the image of a brand or a company. Based on current research, this paper develops an image transfer model for event marketing and explains the conditions required for an image transfer to take place from an event to a brand or a company. Depending on which conditions are met, there are different consequences with regard to the image transfer from the event to the brand or company, and these are structured and characterized in detail. The image transfer model is developed against the backdrop of selected event types often used in practice. The focus of its application lies mainly on brand-oriented leisure and infotainment events directed towards external target groups. The model provides a basis for discussing and analyzing image transfer as an impact category of event marketing. The paper explains that an attitude change is possible within the context of event marketing. The presented model serves to structure the image transfer in event marketing and illustrates the steps involved in the emergence of an image transfer as well as the resulting alternative consequences.
Sport marketing is the specific application of marketing principles and processes to sports products and services. In 2014 the biggest sports event in the world, the FIFA World Cup, took place in Brazil. Billions of spectators around the world saw Germany win the trophy in Rio de Janeiro for the fourth time in its history. Yet unlike in previous World Cups, conversation was not only taking place at the numerous public viewings held in open spaces such as bars and restaurants. Throughout the entire tournament, social media such as Facebook and Twitter played a dominant role in all aspects. With 672 million tweets on Twitter and three billion conversations on Facebook, this was the most social World Cup and the most social mega sports event so far. Whether users, athletes, or companies, everyone was trying to join the conversation to stay informed or to share their opinions and latest news. This paper analyzes the implementation of social media marketing during mega sports events, with a focus on Adidas' and Nike's social media campaigns around the FIFA World Cup 2014 in Brazil. The analysis shows that social media marketing in the context of mega sports events is gaining importance. Companies that find topics which affect people personally and relate to their products achieve success through social media marketing.
Analysis of multicellular patterns is required to understand tissue organizational processes. By using a multi-scale object oriented image processing method, the spatial information of cells can be extracted automatically. Instead of manual segmentation or indirect measurements, such as general distribution of contrast or flow, the orientation and distribution of individual cells is extracted for quantitative analysis. Relevant objects are identified by feature queries and no low-level knowledge of image processing is required.
While the topic of Customer Relationship Management (CRM) has generated increasing research attention in recent years, a comprehensive overview that helps to explain how companies can implement CRM successfully is still lacking. To address this, the article identifies and discusses factors associated with a greater degree of CRM success. More specifically, we identify and discuss determinants relating to strategy, human resources, information management, structure, and processes, as well as specific factors within the implementation phase, that help to improve CRM success. First, our results indicate that the implementation of CRM processes is associated with better company performance, especially at the relationship initiation and maintenance stages. Second, the findings emphasize a predominant influence of firm-based factors vis-à-vis structural industry factors and customer-based factors. Furthermore, cross-functional CRM teams and top managers who feel responsible for CRM projects help to improve CRM success. In addition, internal processes related to customer contact points have to be redesigned to enhance the interaction between employees and customers. The current article thus sheds more light on what really drives CRM success.
Customer prioritization is a common marketing activity in business practice. It aims at increasing average customer profitability and return on sales by treating important customers more intensively. After a short introduction highlighting the importance of customer prioritization, the present article provides an overview of its key aspects. First, companies need to select a prioritization criterion, determine the method for identifying important customers, and decide how to treat these customers in a particular way. Second, companies face challenges and need to address key requirements when implementing customer prioritization. Finally, the article emphasizes positive and negative consequences of customer prioritization.
It is often assumed that more education leads to a better understanding of complex systems. Some researchers, however, find indications that simple mechanisms like stocks and flows are not well understood even by people who have completed higher education. In this paper, we test people's understanding of complex systems with the widely studied stock-and-flow (SF) tasks (Booth Sweeney and Sterman 2000). SF tasks assess people's understanding of the interplay between stocks and flows. We investigate SF failure of domain experts and novices in different knowledge domains. In particular, we compare performance on the original study's Bathtub task with the square wave pattern (Booth Sweeney and Sterman 2000) against two alternative cover stories from the engineering and business domains, using groups of business and engineering students from different semesters. Further, we show that, while engineering students perform better than business students, students seem to lose the ability to deal with simple SF tasks from domains other than their own as they progress through higher education. We thus find hints of déformation professionnelle in higher education.
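The stock-and-flow logic behind such tasks can be made concrete with a short numerical sketch of a Bathtub-style task with a square-wave inflow and constant outflow. The parameter values are illustrative, not those of the original task:

```python
def simulate_stock(inflow, outflow, initial=100.0):
    # The stock accumulates the difference between inflow and outflow at
    # each time step; this accumulation is exactly what SF tasks test.
    stock = initial
    history = [stock]
    for t in range(len(inflow)):
        stock += inflow[t] - outflow[t]
        history.append(stock)
    return history

# Square-wave inflow alternating above and below a constant outflow of 50.
inflow = [75.0] * 4 + [25.0] * 4
outflow = [50.0] * 8
history = simulate_stock(inflow, outflow, initial=100.0)
```

The stock rises while inflow exceeds outflow and falls afterwards, peaking exactly when inflow crosses below outflow; recognizing this relationship, rather than assuming the stock mimics the inflow's square-wave shape, is the insight the tasks probe.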
The financial crisis of 2007-2010 was probably one of the greatest, most illustrious black-swan events that people of our generation(s) will experience, and at its heart it was a dynamic phenomenon. The vision of the System Dynamics Society states that we aspire to transform society by influencing decision-making. Yet it seems as if system dynamics did not play any significant role in this crisis: we did not examine the markets, we did not provide insights to banks, and we did not warn governments or the people. In our presentation we describe the dynamics involved in a housing bubble and explain what made the last one different. With the insights gained from this exercise we conclude that, from a system dynamics perspective, the dimension of the financial crisis was eminently foreseeable, which leads us to pose the following question: where were we as a field while this crisis was unfolding, and why were we not active players? We present a range of potential answers to this question, hoping to provoke some reflection and maybe some (re)action.