The fourth industrial revolution places new demands on companies, and on SMEs in particular. The know-how available for implementing Industry 4.0 approaches poses a challenge for many SMEs. The literature currently describes various ways to create an Industry 4.0 roadmap tailored to a company; an orientation towards the needs of SMEs, however, is entirely missing. This work summarizes different approaches to creating an Industry 4.0 roadmap and then examines where SMEs, with their specific characteristics, should place particular focus.
SLAM systems are mainly applied to robot navigation, while research on the feasibility of SLAM-based motion planning for tasks like bin-picking is scarce. Accurate 3D reconstruction of objects and environments is important for planning motion and computing the optimal gripper pose to grasp objects. In this work, we propose methods to analyze the accuracy of a 3D environment reconstructed using an LSD-SLAM system with a monocular camera mounted onto the gripper of a collaborative robot. We discuss and propose a solution to the pose space conversion problem. Finally, we present several criteria to analyze the 3D reconstruction accuracy. These could be used as guidelines to improve the accuracy of 3D reconstructions with monocular LSD-SLAM and other SLAM-based solutions.
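The abstract does not detail the pose space conversion itself. Purely as an illustration of the general idea (not the authors' implementation), the following sketch maps a scale-ambiguous monocular SLAM camera pose into the robot base frame using a hypothetical hand-eye calibration and an estimated metric scale factor; all names and the scale-recovery step are assumptions.

```python
import numpy as np

def slam_pose_to_robot_base(T_slam_cam, T_base_gripper, T_gripper_cam, scale):
    """Map a monocular SLAM camera pose into the robot base frame (sketch).

    T_slam_cam     : 4x4 camera pose in the (scale-ambiguous) SLAM world frame
    T_base_gripper : 4x4 gripper pose reported by the robot controller (metric)
    T_gripper_cam  : 4x4 hand-eye calibration, camera w.r.t. gripper (metric)
    scale          : estimated metric scale of the SLAM map, e.g. recovered
                     from known robot motion (monocular SLAM has no scale)
    """
    # Scale the translational part of the SLAM pose to metric units.
    T_metric = T_slam_cam.copy()
    T_metric[:3, 3] *= scale

    # Camera pose in the base frame, obtained from robot kinematics.
    T_base_cam = T_base_gripper @ T_gripper_cam

    # Transform that maps SLAM-world coordinates into the robot base frame.
    T_base_slam = T_base_cam @ np.linalg.inv(T_metric)
    return T_base_slam
```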
Future assembly workstations must meet changing challenges, such as the increasing number of human-robot collaborations. Virtual reality (VR) technology offers new possibilities within workplace design to meet these changed planning challenges. This paper presents a method for assessing whether the use of VR technology is worthwhile for a specific workstation. It also shows how VR technology can be integrated into the workplace design process.
Efficient and robust 3D object reconstruction based on monocular SLAM and CNN semantic segmentation
(2019)
Various applications implement SLAM technology, especially in the field of robot navigation. We show the advantage of SLAM technology for independent 3D object reconstruction. To obtain a point cloud of every object of interest, free of its environment, we leverage deep learning. We utilize recent CNN deep learning research for accurate semantic segmentation of objects. In this work, we propose two fusion methods for CNN-based semantic segmentation and SLAM for the 3D reconstruction of objects of interest in order to obtain more robustness and efficiency. As a major novelty, we introduce CNN-based masking to focus SLAM only on feature points belonging to every single object. Noisy, complex or even non-rigid features in the background are filtered out, improving the estimation of the camera pose and the 3D point cloud of each object. Our experiments are constrained to the reconstruction of industrial objects. We present an analysis of the accuracy and performance of each method and compare the two methods, describing their pros and cons.
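The CNN-based masking is only named in the abstract; as a minimal sketch of the underlying idea (function and variable names are illustrative, not taken from the paper), feature points that fall outside the segmentation mask of the object of interest can be discarded before they enter the SLAM pipeline:

```python
import numpy as np

def mask_feature_points(keypoints, mask, object_id):
    """Keep only feature points that lie on the segmented object (sketch).

    keypoints : (N, 2) array of pixel coordinates (u, v) of tracked features
    mask      : (H, W) integer array from a semantic segmentation CNN,
                where each pixel holds a class/instance id
    object_id : id of the object of interest
    """
    u = np.round(keypoints[:, 0]).astype(int)
    v = np.round(keypoints[:, 1]).astype(int)

    # Clip to the image bounds to guard against features on the border.
    h, w = mask.shape
    u = np.clip(u, 0, w - 1)
    v = np.clip(v, 0, h - 1)

    keep = mask[v, u] == object_id   # mask is indexed row (v), column (u)
    return keypoints[keep]
```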
The consolidation of companies into supplier networks based on digital platforms offers one way to meet the demand for flexibility in Industry 4.0. Based on the characterization of a real supplier network, use cases for supplier integration are derived. These serve as a basis for discussing the potentials and challenges of such integration, which raises the question of the optimal integration depth. To this end, a user-oriented decision model was derived.
Current data-intensive systems suffer from poor scalability as they transfer massive amounts of data to the host DBMS to process it there. Novel near-data processing (NDP) DBMS architectures and smart storage can provably reduce the impact of raw data movement. However, transferring the result-set of an NDP operation may increase the data movement, and thus the performance overhead. In this paper, we introduce a set of in-situ NDP result-set management techniques, such as spilling, materialization, and reuse. Our evaluation indicates a performance improvement of 1.13× to 400×.
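How spilling, materialization, and reuse interact is not spelled out in the abstract. Purely as a conceptual illustration (names, threshold, and policy are assumptions, not the paper's design), an on-device result-set manager might decide along these lines:

```python
# Illustrative sketch only: the policy and constants are assumptions.
SPILL_THRESHOLD_BYTES = 4 * 1024 * 1024   # transfer small results, keep large ones in situ

class NDPResultSetManager:
    def __init__(self):
        self.materialized = {}   # query fingerprint -> on-device location

    def handle_result(self, fingerprint, result_size, write_on_device, transfer_to_host):
        # Reuse: a result materialized by an earlier NDP invocation is served
        # again without re-executing the operation or moving the raw data.
        if fingerprint in self.materialized:
            return self.materialized[fingerprint]

        if result_size > SPILL_THRESHOLD_BYTES:
            # Spill/materialize: keep the large result-set on the device and
            # hand only its location back to the host DBMS.
            location = write_on_device()
            self.materialized[fingerprint] = location
            return location

        # Small result-sets are cheap to move; transfer them directly.
        return transfer_to_host()
```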
Massive data transfers in modern data-intensive systems, resulting from low data-locality and data-to-code system designs, hurt their performance and scalability. Near-Data Processing (NDP) and a shift to code-to-data designs may represent a viable solution, as packaging combinations of storage and compute elements on the same device has become feasible. The shift towards NDP system architectures calls for a revision of established principles. Abstractions such as data formats and layouts typically span multiple layers in a traditional DBMS, and the way they are processed is encapsulated within these layers of abstraction. NDP-style processing requires an explicit definition of cross-layer data formats and accessors to ensure in-situ executions that optimally utilize the properties of the underlying NDP storage and compute elements. In this paper, we make the case for such data format definitions and investigate the performance benefits under RocksDB and the COSMOS hardware platform.
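As a hedged illustration of what an explicit cross-layer format definition with an accessor could look like (the concrete specification language used with RocksDB and COSMOS is not given in this abstract, so the record layout and names below are invented for exposition):

```python
import struct

# Hypothetical, simplified fixed-size record layout shared between the DBMS
# layers and the NDP device: field name -> (byte offset, struct format).
ORDER_RECORD = {
    "order_id": (0, "<I"),     # uint32 at byte 0
    "customer": (4, "<I"),     # uint32 at byte 4
    "total":    (8, "<d"),     # float64 at byte 8
}
RECORD_SIZE = 16

def accessor(field):
    """Build an accessor that extracts one field from a raw record in-situ."""
    offset, fmt = ORDER_RECORD[field]
    return lambda record: struct.unpack_from(fmt, record, offset)[0]

def ndp_filter(pages, field, predicate):
    """Scan raw pages on the device, yielding only matching records."""
    get = accessor(field)
    for page in pages:
        for pos in range(0, len(page) - RECORD_SIZE + 1, RECORD_SIZE):
            record = page[pos:pos + RECORD_SIZE]
            if predicate(get(record)):
                yield record
```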
Near-data processing in database systems on native computational storage under HTAP workloads
(2022)
Today’s Hybrid Transactional and Analytical Processing (HTAP) systems tackle ever-growing data volumes in combination with a mixture of transactional and analytical workloads. While optimizing for aspects such as data freshness and performance isolation, they build on the traditional data-to-code principle and may trigger massive cold data transfers that impair the overall performance and scalability. Firstly, in this paper we show that Near-Data Processing (NDP) naturally fits in the HTAP design space. Secondly, we propose an NDP database architecture, allowing transactionally consistent in-situ executions of analytical operations in HTAP settings. We evaluate the proposed architecture in state-of-the-art key/value-stores and multi-versioned DBMS. In contrast to traditional setups, our approach yields robust, resource- and cost-efficient performance.
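The abstract does not spell out how transactional consistency is preserved in situ. One plausible, simplified illustration for a multi-versioned DBMS (our sketch, not the paper's mechanism; field names and the snapshot representation are assumptions) is to ship snapshot information with the NDP invocation so that the device itself applies version-visibility rules while scanning:

```python
# Sketch only: a simplified snapshot-visibility check evaluated on the NDP device.

def visible(version, snapshot_ts, active_txns):
    """Snapshot-isolation style visibility for one record version."""
    created_ok = (version["begin_ts"] <= snapshot_ts
                  and version["creator"] not in active_txns)
    not_deleted = (version["end_ts"] is None
                   or version["end_ts"] > snapshot_ts)
    return created_ok and not_deleted

def ndp_analytical_scan(versions, snapshot_ts, active_txns, predicate):
    """In-situ scan: filter by visibility and predicate on the device, so only
    the transactionally consistent, relevant subset is returned to the host."""
    return [v["payload"] for v in versions
            if visible(v, snapshot_ts, active_txns) and predicate(v["payload"])]
```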
Many modern DBMS architectures require transferring data from storage in order to process it afterwards. Given the continuously increasing amounts of data, data transfers quickly become a scalability-limiting factor. Near-Data Processing and smart/computational storage emerge as promising trends, allowing for decoupled in-situ operation execution, data transfer reduction and better bandwidth utilization. However, not every operation is suitable for in-situ execution, and careful placement and optimization are needed.
In this paper we present an NDP-aware cost model. It has been implemented in MySQL and evaluated with nKV. We make several observations underscoring the need for optimization.
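The structure of the cost model is not given in the abstract. As a rough, hedged sketch of what an NDP-aware placement decision could look like (constants and the cost formula are assumptions for exposition, not the model implemented in MySQL/nKV):

```python
# Illustrative NDP-aware cost model sketch; all parameters are assumed values.

def host_cost(pages, selectivity, page_bytes=4096,
              bandwidth=2e9, host_cpu_per_page=2e-6):
    # Host execution: all raw pages cross the interface, then are processed.
    transfer = pages * page_bytes / bandwidth
    compute = pages * host_cpu_per_page
    return transfer + compute

def ndp_cost(pages, selectivity, page_bytes=4096,
             bandwidth=2e9, device_cpu_per_page=6e-6):
    # In-situ execution: slower device cores, but only the result-set moves.
    compute = pages * device_cpu_per_page
    transfer = pages * selectivity * page_bytes / bandwidth
    return compute + transfer

def place_operation(pages, selectivity):
    """Pick the cheaper execution site for a scan-like operation."""
    return "NDP" if ndp_cost(pages, selectivity) < host_cost(pages, selectivity) else "HOST"
```

For highly selective scans over many pages the device-side compute is outweighed by the avoided transfer, whereas unselective operations remain cheaper on the host; this is the kind of trade-off such a cost model has to capture.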
Near-Data Processing is a promising approach to overcome the limitations of slow I/O interfaces in the quest to analyze the ever-growing amount of data stored in database systems. Next to CPUs, FPGAs will play an important role for the realization of functional units operating close to data stored in non-volatile memories such as Flash. It is essential that the NDP-device understands formats and layouts of the persistent data, to perform operations in-situ. To this end, carefully optimized format parsers and layout accessors are needed. However, designing such FPGA-based Near-Data Processing accelerators requires significant effort and expertise. To make FPGA-based Near-Data Processing accessible to non-FPGA experts, we will present a framework for the automatic generation of FPGA-based accelerators capable of data filtering and transformation for key-value stores based on simple data-format specifications. The evaluation shows that our framework is able to generate accelerators that are almost identical in performance compared to the manually optimized designs of prior work, while requiring little to no FPGA-specific knowledge and additionally providing improved flexibility and more powerful functionality.