The motto of the Informatics Inside 2020 autumn conference is KInside. Once again, students look inside and take a closer look at methods, applications, and interrelationships. The contributions are diverse and, in keeping with the degree programme, human-centered. The aspiration is that the topics revolve around people's needs and that the methods employed are not an end in themselves but are measured by their benefit to people.
Steady-state efficiency optimization techniques for induction motors are state of the art, and various methods have already been developed. This paper provides new insights into efficiency-optimized operation in the dynamic regime. It proposes an anticipative flux modification in order to decrease losses during torque and speed transients. The resulting trajectories are analyzed in a numerical study for different motors, and measurement results for one motor are given as well.
Sol-gel-based flame retardants are a promising approach for textiles, particularly as replacements for the currently established halogenated flame retardants. The latter have come under criticism because of their toxicological concerns and their sometimes bioaccumulative properties. This research project therefore investigated how flame protection based on nitrogen and/or phosphorus could be realized via a sol-gel approach as a halogen-free alternative. The sol-gel layer served on the one hand as a non-flammable binder; on the other hand, flame-retardant active groups could be incorporated directly by introducing appropriate functional side chains. Several approaches were pursued. In particular, additive-containing systems, i.e. sol-gel layers with additions of nitrogen- and/or phosphorus-containing compounds, achieved flame protection according to DIN EN ISO 15025 (protective clothing - protection against heat and flame). Using a model system in which a functionalized sol-gel layer was applied first and a phosphorus compound was applied in a second step, the advantages of sol-gel-based flame protection were demonstrated. Among other things, it was shown that a mechanism based on the formation of a protective layer is mainly responsible for the flame retardancy. This result should not be underestimated for the future optimization of such finishes. Finishing trials on a semi-industrial scale further showed that, in principle, nothing stands in the way of large-scale implementation of the applied finishes. So far, compromises only have to be made with regard to wash fastness.
The sol-gel layers generally survived typical washing processes, but permanent flame resistance of the additive-containing systems was achieved only in individual cases. Based on these results, a new approach was presented that goes beyond the one underlying this project: producing sol-gel layers from newly synthesized silanes bearing nitrogen and phosphorus groups, which show promising behaviour. Here, retention of the improved flame resistance could be demonstrated even after initial washing tests. Overall, the research project showed that sol-gel-based flame protection can be achieved for textiles. Furthermore, it could be explained which mechanism underlies this flame protection and how the currently still insufficient wash permanence can be improved.
Driven by digital transformation, manufacturing systems are heading towards autonomy. The implementation of autonomous elements in manufacturing systems is still a big challenge. Especially small and medium-sized enterprises (SMEs) often lack the experience to assess their degree of Autonomous Production. Therefore, a description model for the assessment of stages of Autonomous Production has been identified as a core element to support such a transformation process. In contrast to existing models, the developed SME-tailored model comprises different levels within a manufacturing system, from single manufacturing cells to the factory level. Furthermore, the model has been validated in several case studies.
Process quality has reached a high level in mass production, using well-known methods such as design of experiments (DoE). The drawback of the underlying statistical methods is the need for tests under real production conditions, which cause high costs due to lost output. Research over the last decade has led to methods for correcting a process by using in-situ data to adjust the process parameters, but a lot of pre-production is still necessary to get this working. This paper presents a new approach to improving product quality in process chains by using context data - gathered in part with Industry 4.0 devices - to reduce the necessary pre-production.
In recent years, machine learning algorithms have developed enormously in performance and applicability in industry, and especially in maintenance. Their application enables predictive maintenance and thus offers efficiency gains. However, successfully implementing such solutions still requires considerable effort in data preparation to obtain the right information, interdisciplinary teams, and good communication with employees. Here, small and medium-sized enterprises (SMEs) often lack experience, competence, and capacity. This paper presents a systematic, practice-oriented method for implementing machine learning solutions for predictive maintenance in SMEs, which has already been validated.
The chemical synthesis of polysiloxanes from monomeric starting materials involves a series of hydrolysis, condensation and modification reactions with complex monomeric and oligomeric reaction mixtures. Real-time monitoring and precise process control of the synthesis process is of great importance to ensure reproducible intermediates and products and can readily be performed by optical spectroscopy. In chemical reactions involving rapid and simultaneous functional group transformations and complex reaction mixtures, however, the spectroscopic signals are often ambiguous due to overlapping bands, shifting peaks and changing baselines. The univariate analysis of individual absorbance signals is hence often only of limited use. In contrast, batch modelling based on the multivariate analysis of the time course of principal components (PCs) derived from the reaction spectra provides a more efficient tool for real time monitoring. In batch modelling, not only single absorbance bands are used but information over a broad range of wavelengths is extracted from the evolving spectral fingerprints and used for analysis. Thereby, process control can be based on numerous chemical and morphological changes taking place during synthesis. “Bad” (or abnormal) batches can quickly be distinguished from “normal” ones by comparing the respective reaction trajectories in real time. In this work, FTIR spectroscopy was combined with multivariate data analysis for the in-line process characterization and batch modelling of polysiloxane formation. The synthesis was conducted under different starting conditions using various reactant concentrations. The complex spectral information was evaluated using chemometrics (principal component analysis, PCA). Specific spectral features at different stages of the reaction were assigned to the corresponding reaction steps. Reaction trajectories were derived based on batch modelling using a wide range of wavelengths. 
Subsequently, complexity was reduced again to the most relevant absorbance signals in order to derive a concept for a low-cost process spectroscopic set-up which could be used for real-time process monitoring and reaction control.
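The batch-modelling idea described above - projecting evolving reaction spectra onto principal components and comparing trajectories against a reference batch - can be sketched as follows. This is a minimal, hedged illustration on synthetic two-band spectra; the band positions, rate constants, and threshold are assumptions for the sketch, not values from the study.

```python
# Sketch of PCA-based batch-trajectory monitoring on synthetic reaction
# spectra (two Gaussian bands: reactant decays, product grows).
# All numbers are illustrative assumptions, not from the original work.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavenumbers = np.linspace(800, 1800, 200)            # spectral axis (cm^-1)
reactant = np.exp(-((wavenumbers - 1100) / 40) ** 2)  # reactant band
product = np.exp(-((wavenumbers - 1500) / 40) ** 2)   # product band

def batch_spectra(rate, n_times=50, noise=0.01):
    """Simulate time-resolved spectra of one batch: reactant -> product."""
    t = np.linspace(0, 1, n_times)
    conc = np.exp(-rate * t)                          # reactant concentration
    X = np.outer(conc, reactant) + np.outer(1 - conc, product)
    return X + noise * rng.standard_normal(X.shape)

normal = batch_spectra(rate=3.0)      # reference ("good") batch
abnormal = batch_spectra(rate=0.5)    # stalled reaction -> "bad" batch

pca = PCA(n_components=2).fit(normal)  # batch model built on the good batch
ref = pca.transform(normal)            # reference trajectory in PC space
traj_good = pca.transform(batch_spectra(rate=3.0))
traj_bad = pca.transform(abnormal)

# Mean distance of each trajectory from the reference, time point by point:
dev_good = np.linalg.norm(traj_good - ref, axis=1).mean()
dev_bad = np.linalg.norm(traj_bad - ref, axis=1).mean()
# The stalled batch drifts off the reference path; the good one does not.
```

Comparing `dev_good` and `dev_bad` in real time is the essence of flagging "bad" batches early from their reaction trajectories.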
Expectations placed on energy utilities will grow: in future, tasks such as the development of digitalized products and services as well as ecological activities will gain particular relevance. This is shown by Reutlingen University in its current survey of supervisory board members, managing directors, and executives. Despite the expected changes, the supervisory boards are aware of the pressure to professionalize, yet currently appear only moderately equipped for the company's future challenges. Particularly relevant: professionalizing board work in municipal energy utilities (EVU) enables higher perceived company success. So concludes the study by the Reutlinger Energiezentrum at Reutlingen University, commissioned by five companies in the sector.
Despite strong political efforts in Europe, industrial small and medium-sized enterprises (SMEs) seem to neglect adopting practices for energy efficiency. Taking a cultural perspective, this study investigated what drives the establishment of energy efficiency and corresponding practices in SMEs. Based on 10 ethnographic case studies and a quantitative survey among 500 manufacturing SMEs, the results indicate the importance of everyday employee behavior in achieving energy savings. The studied enterprises value behavior-related measures as similarly important as technical measures. Raising awareness of energy issues within the organization therefore constitutes an essential leadership task that is oftentimes perceived as challenging and frustrating. It was concluded that the embedding of energy efficiency in corporate strategy, the use of a broad spectrum of different practices, and the empowerment and involvement of employees serve as major drivers in establishing energy efficiency within SMEs. Moreover, the findings reveal institutional influences that shape the meaning of energy efficiency for the SMEs by raising attention to energy efficiency in the enterprises and making energy-efficiency decisions more likely. The main contribution of the paper is to offer an alternative perspective on energy efficiency in SMEs beyond the mere adoption of energy-efficient technology.
nKV in action: accelerating KV-stores on native computational storage with near-data processing
(2020)
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper we demonstrate various NDP alternatives in nKV, which is a key/value store utilizing native computational storage and near-data processing. We showcase the execution of classical operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4×-2.7× better performance due to NDP. nKV runs on real hardware - the COSMOS+ platform.
Massive data transfers in modern key/value stores, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) designs represent a feasible solution which, although not new, has yet to see widespread use.
In this paper we introduce nKV, which is a key/value store utilizing native computational storage and near-data processing. On the one hand, nKV can directly control the data and computation placement on the underlying storage hardware. On the other hand, nKV propagates the data formats and layouts to the storage device, where software and hardware parsers and accessors are implemented. Both allow NDP operations to execute in a host-intervention-free manner, directly on physical addresses, and thus to better utilize the underlying hardware. Our performance evaluation is based on executing traditional KV operations (GET, SCAN) and complex graph-processing algorithms (Betweenness Centrality) in-situ, with 1.4×-2.7× better performance on real hardware - the COSMOS+ platform.
Massive data transfers in modern data-intensive systems, resulting from low data locality and data-to-code system design, hurt their performance and scalability. Near-data processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible.
The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in a traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ execution that optimally utilizes the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under NoFTL-KV and the COSMOS hardware platform.
The tale of 1000 cores: an evaluation of concurrency control on real(ly) large multi-socket hardware
(2020)
In this paper, we set out the goal to revisit the results of "Staring into the Abyss [...] of Concurrency Control with [1000] Cores" and analyse in-memory DBMSs on today's large hardware. Despite the original assumption of the authors, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware made its way into production data centres. Hence, we follow up on this prior work with an evaluation of the characteristics of concurrency control schemes on real production multi-socket hardware with 1568 cores. To our surprise, we made several interesting findings, which we report on in this paper.
In this paper, we present a new approach for achieving robust performance of data structures, making it easier to reuse the same design across hardware generations as well as across workloads. The main idea is to strictly separate the data structure design from the strategies used to execute access operations, and to adjust the execution strategies by means of so-called configurations instead of hard-wiring an execution strategy into the data structure. In our evaluation, we demonstrate the benefits of this configuration approach for individual data structures as well as for complex OLTP workloads.
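The separation described above can be sketched as follows. This is a minimal, hedged illustration of the general idea - a fixed structure whose access operations consult a swappable configuration - and all names here are illustrative assumptions, not the paper's actual API.

```python
# Sketch: the data structure (a hash map) is fixed, while the strategy
# for executing access operations is selected by a configuration object
# instead of being hard-wired. Names are illustrative assumptions.
import threading

class Config:
    def __init__(self, synchronized: bool):
        # e.g. True for contended write-heavy workloads,
        # False for read-mostly single-threaded access
        self.synchronized = synchronized

class ConfigurableMap:
    def __init__(self, config: Config):
        self._data = {}
        self._lock = threading.Lock()
        self.config = config   # execution strategy, adjustable at run time

    def put(self, key, value):
        # The strategy is consulted per operation rather than baked in.
        if self.config.synchronized:
            with self._lock:
                self._data[key] = value
        else:
            self._data[key] = value

    def get(self, key):
        if self.config.synchronized:
            with self._lock:
                return self._data.get(key)
        return self._data.get(key)

# The same design serves different workloads by swapping configurations:
m = ConfigurableMap(Config(synchronized=True))
m.put("a", 1)
m.config = Config(synchronized=False)  # re-tune without redesigning the structure
```

The point of the sketch is that changing the workload (or hardware) means replacing the configuration, not rewriting the data structure itself.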
Modern mixed (HTAP) workloads execute fast update transactions and long-running analytical queries on the same dataset and system. In multi-version (MVCC) systems, such workloads result in many short-lived versions and long version chains, as well as in increased and frequent maintenance overhead.
Consequently, index pressure increases significantly. Firstly, the frequent modifications cause frequent creation of new versions, yielding a surge in index maintenance overhead. Secondly, and more importantly, index scans incur extra I/O overhead to determine which of the resulting tuple versions are visible to the executing transaction (visibility check), since current designs store version/timestamp information only in the base table, not in the index. Such an index-only visibility check is critical for HTAP workloads on large datasets.
In this paper we propose the Multi-Version Partitioned B-Tree (MV-PBT) as a version-aware index structure supporting index-only visibility checks and flash-friendly I/O patterns. The experimental evaluation indicates a 2× improvement for analytical queries and 15% higher transactional throughput under HTAP workloads. MV-PBT offers 40% higher transactional throughput compared to WiredTiger's LSM-Tree implementation under YCSB.
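The index-only visibility check at the heart of the abstract above can be sketched in a few lines. This is a generic MVCC illustration under the assumption that each index entry carries its version's creation and invalidation timestamps; the entry layout and names are hypothetical, not MV-PBT's actual format.

```python
# Sketch of an index-only MVCC visibility check: timestamps stored in the
# index entry itself make the check possible without touching the base
# table. The entry layout is an illustrative assumption, not MV-PBT's.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexEntry:
    key: int
    created: int                # commit timestamp of the creating transaction
    invalidated: Optional[int]  # commit ts of the deleting/updating tx, if any
    row_id: int                 # reference into the base table

def visible(entry: IndexEntry, snapshot_ts: int) -> bool:
    """A version is visible if it was created at or before the snapshot
    and has not been invalidated at or before the snapshot."""
    return entry.created <= snapshot_ts and (
        entry.invalidated is None or entry.invalidated > snapshot_ts)

index = [
    IndexEntry(key=1, created=10, invalidated=30, row_id=0),   # old version
    IndexEntry(key=1, created=30, invalidated=None, row_id=1), # current version
    IndexEntry(key=2, created=40, invalidated=None, row_id=2),
]

def index_scan(index, snapshot_ts):
    # No base-table I/O: the timestamps in the index suffice for the check.
    return [e.row_id for e in index if visible(e, snapshot_ts)]
```

A transaction with snapshot timestamp 20 sees only the old version of key 1, while a later snapshot sees the newer versions instead - exactly the per-entry filtering that index-only visibility checks avoid doing against the base table.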
Customer orientation should be the core engine of every organisation while IT can be considered as the enabler to generate competitive advantages along customer processes in marketing, sales and service. Research shows that customer relationship management (CRM) enables organisations to perform better and experience indicates that organisations that focus on customer orientation are more successful. With marketplace organisations such as Amazon, Alibaba or Conrad shaping the future of customer centricity and information technology, German B2B organisations need to shift their value contribution from product-centric to customer-centric. While these organisations are currently attempting to implement CRM software and putting their customers more into focus, the question remains how organisations are approaching the implementation of CRM and whether these attempts are paying off in terms of business performance.
Here, we study resin cure and network formation of solid melamine formaldehyde pre-polymer over a large temperature range via dynamic temperature curing profiles. Real-time infrared spectroscopy is used to analyze the chemical changes during network formation and network hardening. By applying chemometrics (multivariate curve resolution, MCR), the essential chemical functionalities that constitute the network at a given stage of curing are mathematically extracted and tracked over time. The three spectral components identified by MCR were methylol-rich, ether-linkage-rich, and methylene-linkage-rich resin entities. Based on the dynamic changes of their characteristic spectral patterns as a function of temperature, curing is divided into five phases: (I) a stationary phase with free methylols as the main chemical feature, (II) formation of a flexible network cross-linked by ether linkages, (III) formation of a rigid, ether-cross-linked network, (IV) further hardening via transformation of methylols and ethers into methylene cross-linkages, and (V) network consolidation via transformation of ether into methylene bridges. The presented spectroscopic/chemometric approach can be used as a methodological basis for the functionality design of MF-based surface films at the stage of laminate pressing, i.e., for tailoring the technological property profile of cured MF films using a causal understanding of the underlying chemistry based on molecular markers and spectroscopic fingerprints.
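The curve-resolution step described above - decomposing time-resolved spectra into pure-component spectra and their concentration profiles - can be approximated with non-negative matrix factorization (NMF), a common stand-in for MCR under non-negativity constraints. The following is a hedged sketch on synthetic two-component data; the band positions, kinetics, and use of NMF instead of MCR-ALS are assumptions of this illustration, not the study's actual procedure.

```python
# Hedged sketch: curve resolution of time-resolved spectra via NMF.
# Factors X (times x wavenumbers) into C (concentration profiles) and
# S (component spectra). All numbers below are synthetic assumptions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
wn = np.linspace(800, 1800, 150)
# Two "pure" component spectra (e.g. a consumed vs. a formed species):
s1 = np.exp(-((wn - 1000) / 30) ** 2)
s2 = np.exp(-((wn - 1600) / 30) ** 2)
t = np.linspace(0, 1, 40)
c1 = np.exp(-4 * t)        # component 1 is consumed over time
c2 = 1 - c1                # component 2 is formed
X = np.outer(c1, s1) + np.outer(c2, s2) + 0.005 * rng.random((40, 150))

nmf = NMF(n_components=2, init="nndsvd", max_iter=500)
C = nmf.fit_transform(X)   # estimated concentration profiles (40 x 2)
S = nmf.components_        # estimated component spectra   (2 x 150)

# For a genuine two-component system, the reconstruction error is small:
err = np.linalg.norm(X - C @ S) / np.linalg.norm(X)
```

The rows of `S` play the role of the MCR components, and the columns of `C` are the trajectories that, in the study, delineate the five curing phases.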
This article gives an overview of the various options for accounting for an initial coin offering (ICO) on the issuer's liability side under IFRS. The aim is to discuss the accounting classification of different types of tokens and to support issuers both in designing the tokens and in the subsequent accounting. The results show that the existing standards are sufficient for the accounting classification of ICO tokens, but that a wide range of accounting treatments must be considered, so that detailed regulation through a dedicated IFRS appears difficult.