Informatik
Ever since the 1980s, researchers in computer science and robotics have been working on autonomous cars. Due to recent breakthroughs in research and development, such as the Bertha Benz Project [ZBS+14], the goal of fully autonomous vehicles seems closer than ever before. Yet many questions remain unanswered. Especially now that the automotive industry is moving towards autonomous systems in series production vehicles, the task of precise localization has to be solved with automotive-grade sensors while keeping memory and processing consumption at a minimum.

This thesis investigates the Simultaneous Localization and Mapping (SLAM) problem for autonomous driving scenarios on a parking lot using low-cost automotive sensors. The main focus is on the RAdio Detection And Ranging (RADAR) sensor, which has not been widely analyzed in autonomous driving scenarios so far, even though such sensors are abundant in the automotive industry for applications such as Adaptive Cruise Control (ACC). Due to its high noise floor, the radar sensor has widely been disregarded in the Intelligent Transportation Systems and Robotics communities with regard to SLAM applications. This thesis, however, shows that the RADAR sensor proves to be an affordable, robust and precise sensor when its physical properties are modeled correctly. To this end, a GraphSLAM-based framework is introduced, which extracts features from the RADAR sensor and generates an optimized map of the surroundings using the RADAR sensor alone. This framework is used to enable crowd-based localization, which is not limited to the RADAR sensor alone. By integrating an automotive Light Detection and Ranging (LiDAR) and a stereo camera sensor, a robust and precise localization system can be built that is suitable for autonomous driving even in complex parking lot scenarios. It is thereby shown that the RADAR sensor contributes strongly to obtaining good results in a sensor fusion setup.

These results were obtained on an extensive dataset recorded on a parking lot over the course of several months. It contains different weather conditions, different configurations of parked cars and a multitude of different trajectories to validate the approaches described in this thesis and to support the conclusion that the RADAR sensor is a reliable sensor for series autonomous driving systems, both in a multi-sensor framework and as a single component for localization.
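As a rough illustration of the graph-based optimization underlying such a framework, the following sketch solves a tiny translation-only 2D pose graph by linear least squares; the poses, odometry edges and the loop closure are invented toy values, and the thesis' RADAR feature extraction and full GraphSLAM pipeline are not modelled here.

```python
import numpy as np

# Minimal GraphSLAM-style sketch: nodes are 2D positions, edges are
# relative-displacement measurements, and we solve H x = b with the
# first pose anchored at the origin (translation-only, so the problem
# is linear and one solve suffices).
n = 4
edges = [                               # (i, j, measured displacement x_j - x_i)
    (0, 1, np.array([1.0, 0.0])),       # odometry
    (1, 2, np.array([1.1, 0.1])),       # odometry (noisy)
    (2, 3, np.array([0.9, -0.1])),      # odometry (noisy)
    (0, 3, np.array([3.0, 0.0])),       # loop closure, e.g. from map matching
]

H = np.zeros((2 * n, 2 * n))            # information matrix
b = np.zeros(2 * n)                     # information vector
I2 = np.eye(2)
for i, j, z in edges:
    si, sj = slice(2 * i, 2 * i + 2), slice(2 * j, 2 * j + 2)
    # residual r = (x_j - x_i) - z; accumulate the normal equations
    H[si, si] += I2
    H[sj, sj] += I2
    H[si, sj] -= I2
    H[sj, si] -= I2
    b[si] -= z
    b[sj] += z

H[0:2, 0:2] += 1e6 * I2                 # strong prior anchoring pose 0 at (0, 0)
poses = np.linalg.solve(H, b).reshape(n, 2)
print(poses)                            # loop closure distributes the odometry error
```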
Additive manufacturing (AM) is a promising manufacturing method for many industrial sectors. For industrial application, requirements such as high production volumes and coordinated implementation must be taken into account. These tasks of the internal handling of production facilities are carried out by the Production Planning and Control (PPC) information system. A key factor in planning and scheduling is the exact calculation of manufacturing times. For this purpose, we investigate the use of Machine Learning (ML) for the prediction of manufacturing times of AM facilities.
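As a minimal, hypothetical sketch of such an ML-based prediction, the following trains a regressor on synthetic AM job features; the feature set, the synthetic time model and the library choice are assumptions for illustration, not the method of the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
# Hypothetical job features: part volume [cm^3], build height [mm], layer thickness [mm]
X = np.column_stack([
    rng.uniform(1, 500, n),
    rng.uniform(5, 300, n),
    rng.uniform(0.02, 0.1, n),
])
# Synthetic ground truth [h]: time grows with volume and with the layer count
# (build height / layer thickness), plus noise - a stand-in for real machine logs.
y = 0.02 * X[:, 0] + (X[:, 1] / X[:, 2]) / 600 + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE [h]:", mean_absolute_error(y_te, model.predict(X_te)))
```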
As a result of the ongoing digitalization of the manufacturing industry, applications and services are being developed that can have a positive impact on factors such as effectiveness and work quality. Gamification can be a suitable approach to strengthening motivational aspects in the work context. This paper presents the initial design and evaluation of a gamification approach for users of an AI service for machine optimization, and extracts possible requirements for a concept to increase motivation.
This paper presents a temporal prediction of earthquakes. For this purpose, Convolutional Neural Networks (CNNs) are trained on a dataset of laboratory earthquakes. The trained networks make predictions by classifying an input of seismic data; through this classification, the CNN can predict the time remaining until the next earthquake. Two approaches are compared. In the first approach, the raw data is fed into a CNN. In the second approach, the data is pre-processed with Mel Frequency Cepstral Coefficients (MFCC) before the CNN. Both approaches allow a good classification, with the combination of MFCC and CNN delivering the better quantitative results: an accuracy of 65% was achieved.
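A minimal sketch of the second approach (MFCC pre-processing followed by a CNN) could look as follows; random signals stand in for the laboratory earthquake data, and the sampling rate, window length and number of time-to-failure classes are assumptions.

```python
import numpy as np
import librosa
import tensorflow as tf

# Synthetic stand-in for the laboratory seismic data: 200 windows of noise,
# each labelled with a binned "time to next event" class (10 bins assumed).
sr, n_samples, n_classes = 4000, 8192, 10
rng = np.random.default_rng(0)
signals = rng.standard_normal((200, n_samples)).astype(np.float32)
labels = rng.integers(0, n_classes, 200)

# MFCC features per window: shape (n_mfcc, frames); add a channel axis for Conv2D.
feats = np.stack([librosa.feature.mfcc(y=s, sr=sr, n_mfcc=13) for s in signals])
feats = feats[..., np.newaxis]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=feats.shape[1:]),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(feats, labels, epochs=2, validation_split=0.2)
```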
Semi-automated image data labelling using AprilTags as a pre-processing step for machine learning
(2019)
Data labelling is a pre-processing step to prepare data for machine learning. There are many ways to collect and prepare this data, but they usually involve considerable effort. This paper presents an approach to semi-automated image data labelling using AprilTags. The AprilTags attached to the object, each containing a unique ID, make it possible to link the object surfaces to a particular class. This approach is implemented and used to label data of a stackable box.
The data is evaluated by training a You Only Look Once (YOLO) net, with a subsequent evaluation of the detection results. These results show that the semi-automatically collected and labelled data can certainly be used for machine learning. However, if distinctive features of an object surface are covered by the AprilTag, there is a risk that the affected class will not be recognized. It can be assumed that the labelled data can be used not only for YOLO, but also for other machine learning approaches.
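A minimal sketch of the tag-detection step using OpenCV's aruco module (OpenCV >= 4.7 API) is shown below; the tag-to-class mapping and the input image are placeholders, and where the paper labels the linked object surface, this sketch simply emits the tag's own bounding box in YOLO format.

```python
import cv2

# Hypothetical mapping: AprilTag ID -> YOLO class ID of the tagged surface.
tag_to_class = {0: 0, 1: 1}

img = cv2.imread("box.jpg")                         # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = gray.shape

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary)
corners, ids, _ = detector.detectMarkers(gray)

lines = []
for quad, tag_id in zip(corners, ids.flatten() if ids is not None else []):
    pts = quad.reshape(-1, 2)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    # YOLO format: class x_center y_center width height (all normalized);
    # here the tag's own box stands in for the object-surface box.
    lines.append(f"{tag_to_class.get(int(tag_id), 0)} "
                 f"{(x0 + x1) / 2 / w:.6f} {(y0 + y1) / 2 / h:.6f} "
                 f"{(x1 - x0) / w:.6f} {(y1 - y0) / h:.6f}")
print("\n".join(lines))
```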
The student conference Informatics Inside is now taking place for the eleventh time. As part of the master's programme Human-Centered Computing, master's students independently organize a full-fledged scientific conference. Computer science is still subject to constant change, and our students contribute to this change by solving current problems with innovative concepts in their scientific specialization. By now, however, computer science is not always immediately visible; we notice it whenever something does not work as intended. This year's motto of Informatics Inside is experience (IT);, disguised as a function call :).
Serverless computing is an emerging cloud computing paradigm with the goal of freeing developers from resource management issues. As of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other. These workloads benefit from on-demand and elastic compute resources as well as per-function billing. However, it is still an open research question to what extent parallel applications, which most often comprise complex coordination and communication patterns, can benefit from serverless computing.
In this paper, we introduce serverless skeletons for parallel cloud programming to free developers from both parallelism and resource management issues. In particular, we investigate the well-known and widely used farm skeleton, which supports the implementation of a wide range of applications. To evaluate our concepts, we present a prototypical development and runtime framework and implement two applications based on it: numerical integration and hyperparameter optimization, a commonly applied technique in machine learning. We report on performance measurements for both applications and discuss the usefulness of our approach.
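A minimal sketch of the farm skeleton applied to numerical integration is given below; a local thread pool stands in for the serverless platform, so the per-invocation execution and billing model of the paper is not represented.

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Farm skeleton: a farmer splits the problem into independent tasks,
# workers process them in parallel, and the farmer reduces the results.
# In the paper each worker would be a serverless function invocation.

def worker(task):
    """Numerically integrate f over one sub-interval (midpoint rule)."""
    f, a, b, n = task
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def farm(tasks, max_workers=8):
    """Farmer: dispatch tasks, gather partial results, reduce by summing."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return sum(pool.map(worker, tasks))

# Integrate sin(x) over [0, pi] with 8 sub-interval tasks; exact result is 2.
chunks = [(math.sin, i * math.pi / 8, (i + 1) * math.pi / 8, 10_000)
          for i in range(8)]
print(farm(chunks))
```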
Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers, and most refactorings are still performed manually. For example, developers reported that refactoring tools should provide functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment.
In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base at the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
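As a hedged illustration of the bot's review step, the following sketch opens a pull request through the GitHub REST API (POST /repos/{owner}/{repo}/pulls); the repository, branch names and token are placeholders, and the actual Refactoring-Bot may target a different platform or client library.

```python
import requests

# After pushing a refactoring to its own branch, the bot proposes it for
# asynchronous review by opening a pull request. All identifiers below are
# placeholders, not values from the paper.
resp = requests.post(
    "https://api.github.com/repos/example-org/example-repo/pulls",
    headers={"Authorization": "token <personal-access-token>",
             "Accept": "application/vnd.github+json"},
    json={
        "title": "Refactoring: resolve 'long method' smell in OrderService",
        "head": "refactoring-bot/extract-method",   # bot's branch with the change
        "base": "main",
        "body": "Automated refactoring proposed for asynchronous review.",
    },
)
resp.raise_for_status()
print("Pull request opened:", resp.json()["html_url"])
```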
To remain competitive in a fast-changing environment, many companies have started to migrate their legacy applications towards a Microservices architecture. Such extensive migration processes require careful planning and consideration of implications and challenges alike. In this regard, hands-on experiences from industry practice are still rare. To fill this gap in the scientific literature, we contribute a qualitative study on intentions, strategies, and challenges in the context of migrations to Microservices. We investigated the migration process of 14 systems across different domains and sizes by conducting 16 in-depth interviews with software professionals from 10 companies. Along with a summary of the most important findings, we present a separate discussion of each case. Maintainability and scalability were identified as the primary migration drivers. Due to the high complexity of their legacy systems, most companies preferred a rewrite using current technologies over splitting up existing code bases, often because a suitable decomposition approach was missing. As such, finding the right service cut was a major technical challenge, next to building the necessary expertise with new technologies. Organizational challenges were especially related to large, traditional companies that simultaneously established agile processes; initiating a mindset change and ensuring smooth collaboration between teams were crucial for them. Future research on the evolution of software systems can particularly profit from the individual cases presented.
While Microservices promise several beneficial characteristics for sustainable long-term software evolution, little empirical research covers what concrete activities industry applies for the evolvability assurance of Microservices and how technical debt is handled in such systems. Since insights into the current state of practice are very important for researchers, we performed a qualitative interview study to explore applied evolvability assurance processes, the usage of tools, metrics, and patterns, as well as participants’ reflections on the topic. In 17 semi-structured interviews, we discussed 14 different Microservice-based systems with software professionals from 10 companies and how the sustainable evolution of these systems was ensured. Interview transcripts were analyzed with a detailed coding system and the constant comparison method.
We found that especially systems for external customers relied on central governance for the assurance. Participants saw guidelines like architectural principles as important to ensure a base consistency for evolvability. Interviewees also valued manual activities like code review, even though automation and tool support were described as very important. Source code quality was the primary target for the usage of tools and metrics. Although most reported issues were related to Architectural Technical Debt (ATD), our participants did not apply any architectural or service-oriented tools and metrics. While participants generally saw their Microservices as evolvable, service cutting and finding an appropriate service granularity with low coupling and high cohesion were reported as challenging. Future Microservices research in the areas of evolution and technical debt should take these findings and industry sentiments into account.
Small and Medium Enterprises (SMEs), which play a substantial role in the development of any economy, have been on the rise in recent years. At the same time, these enterprises face a myriad of challenges that could potentially be solved through the adoption of technology. Nonetheless, it has been observed that new technological uptake among SMEs remains limited, with the majority of them opting to maintain the status quo with regard to technology awareness and innovation strategies.
In a literature review, this paper explores three major dynamics curtailing the adoption of new technologies by SMEs in manufacturing: knowledge absorptive capacity and management factors, organisational structures, and technological awareness. Firstly, with regard to knowledge absorptive capacity and management factors, this study shows how these factors drive innovation potential in SMEs.
Secondly, with regard to technological awareness factors, this study documents how perceived usefulness, costs, network and infrastructure, education and skills, training and attitude, as well as knowledge, influence the adoption of new technologies among SMEs worldwide. Lastly, the study concludes by analysing how organisational structures drive the innovation potential of SMEs in the wake of swift and profound technological changes in the market.
The relevance of technology knowledge for digital transformation, especially in small and medium-sized enterprises (SMEs) that are still largely dependent on physical human capital, has become increasingly obvious. This is due to the rapid revolution in the business environment coupled with a growing number of real-world examples of firms disrupted by advances in technological knowledge. Consequently, we find it progressively vital for SMEs to spot and mitigate threats as well as take advantage of opportunities arising from the dynamics of digital transformation.
Our study aims to explore the relevance of technology knowledge in SMEs for digital transformation, in order to uncover the opportunities, roadmaps, and models that SMEs can take advantage of to gain a competitive edge.
We conclude that despite the relevance of technology knowledge for digital transformation, coupled with its low costs and accessibility, SMEs are yet to realize its full potential. This is mainly because technologies appear, change and vanish so rapidly in the digital age that gaining a proper understanding without dedicated resources is utterly difficult for SMEs, making them less competitive than incumbent large firms in the market.
In a time of upheaval and digitalization, new business models for companies play an important role. Decentralized power generation and energy efficiency targets to achieve climate goals and to reduce global warming are currently forcing energy companies to develop new business models. In recent years, many methods of business model development have been introduced to create new business ideas. But what are the obstacles to implementing these business models in the energy sector to develop new business opportunities? And what challenges do companies face in this respect? To answer these questions, a systematic literature review was conducted in this paper. As a result, eight categories were identified which summarise the main barriers to the implementation of new business models in the energy domain.
We introduce IPA-IDX, an approach that handles index modifications on modern storage technologies (NVM, Flash) as physical in-place appends, using simplified physiological log records. IPA-IDX provides similar performance and longevity advantages for indexes as basic IPA [5] does for tables. The selective application of IPA-IDX and basic IPA to certain regions and objects lowers the GC overhead by over 60%, while keeping the total space overhead at 2%. The combined effect of IPA and IPA-IDX increases performance by 28%.
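The following conceptual sketch illustrates the general in-place append idea on a fixed-size page, not the actual IPA-IDX record format: small delta records are appended into a reserved tail region, and the page is only rewritten once that region is full. All sizes and the record layout are illustrative assumptions.

```python
PAGE_SIZE, DELTA_AREA = 4096, 512       # assumed page and delta-region sizes

class Page:
    def __init__(self):
        self.base = {}                  # materialized key -> value entries
        self.deltas = []                # appended (op, key, value) records
        self.delta_bytes = 0

    def modify(self, op, key, value=None, rec_size=16):
        if self.delta_bytes + rec_size > DELTA_AREA:
            self._consolidate()         # full page rewrite (rare path)
        self.deltas.append((op, key, value))  # in-place append (fast path)
        self.delta_bytes += rec_size

    def _consolidate(self):
        # merge deltas into the base image, freeing the delta region
        for op, key, value in self.deltas:
            if op == "put":
                self.base[key] = value
            elif op == "del":
                self.base.pop(key, None)
        self.deltas, self.delta_bytes = [], 0

    def lookup(self, key):
        # replay deltas newest-first before falling back to the base image
        for op, k, v in reversed(self.deltas):
            if k == key:
                return v if op == "put" else None
        return self.base.get(key)

p = Page()
p.modify("put", 1, "a"); p.modify("put", 1, "b"); p.modify("del", 1)
print(p.lookup(1))                      # None: the delete delta shadows earlier puts
```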
Workflow-driven support systems in the peri-operative area have the potential to optimize clinical processes and to enable new situation-adaptive support systems. We have started to develop a workflow management system that supports all actors involved in the operating theatre, with the goal of synchronizing the tasks of the different stakeholders by giving relevant information to the right team members. Using the OMG standards BPMN, CMMN and DMN gives us the opportunity to bring established methods from other industries into the medical field. The system shows each addressed actor their information in the right place at the right time, so that every member can execute their task in time and a smooth workflow is ensured; the system maintains the overall view of all tasks. Accordingly, we use a workflow management system comprising the Camunda BPM workflow engine to run the models, a middleware to connect different systems to the workflow engine, and graphical user interfaces to show necessary information and to interact with the system. The complete pipeline is implemented as a RESTful web service. The system is designed to integrate other systems, such as the hospital information system (HIS), via the RESTful web service easily and without loss of data. A first prototype has been implemented and will be expanded.
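As a small illustration of the engine integration, the sketch below starts a hypothetical peri-operative process instance through Camunda BPM's (7.x) documented REST interface (POST /process-definition/key/{key}/start); the host, process key and variables are assumptions, not details from the paper.

```python
import requests

# Start a process instance by process-definition key, passing typed variables
# as defined by the Camunda 7 REST API. All identifiers are placeholders.
resp = requests.post(
    "http://localhost:8080/engine-rest/process-definition/key/or-workflow/start",
    json={
        "variables": {
            "patientId": {"value": "P-4711", "type": "String"},
            "operatingRoom": {"value": "OR-2", "type": "String"},
        }
    },
)
resp.raise_for_status()
print("Started process instance:", resp.json()["id"])
```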
In this paper, we introduce an approach that uses reinforcement learning to achieve interoperability between heterogeneous Internet of Things (IoT) components. More specifically, we model an HTTP REST service as a Markov Decision Process and adapt Q-Learning to the properties of REST, so that an agent in the role of an HTTP REST client can learn the semantics of the service and, in particular, an optimal sequence of service calls to achieve an application-specific goal. With our approach, we want to open up and facilitate a discussion in the community, as we see the utilization of artificial intelligence techniques as the key to achieving interoperability in the IoT.
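A minimal tabular Q-Learning sketch in this spirit is shown below; the toy service, its state transitions and the reward values are invented stand-ins for a real REST API and are not taken from the paper.

```python
import random
from collections import defaultdict

# States are abstract service states, actions are HTTP calls (method, path);
# the agent learns a call sequence that reaches an application goal.
ACTIONS = [("POST", "/orders"), ("PUT", "/orders/1/items"),
           ("POST", "/orders/1/checkout"), ("GET", "/orders/1")]

def step(state, action):
    """Simulated REST service: returns (next_state, reward, done)."""
    transitions = {
        ("start", ("POST", "/orders")): ("created", 0.0),
        ("created", ("PUT", "/orders/1/items")): ("filled", 0.0),
        ("filled", ("POST", "/orders/1/checkout")): ("done", 1.0),  # goal reached
    }
    nxt, r = transitions.get((state, action), (state, -0.1))  # bad call: penalty
    return nxt, r, nxt == "done"

Q = defaultdict(float)
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                        # epsilon-greedy training episodes
    s = "start"
    for _ in range(20):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        s2, r, done = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2
        if done:
            break

s = "start"                                 # greedy rollout of the learned policy
for _ in range(5):
    if s == "done":
        break
    a = max(ACTIONS, key=lambda x: Q[(s, x)])
    print(s, "->", *a)
    s, _, _ = step(s, a)
```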
Interoperability is an important topic in the Internet of Things (IoT), because this domain incorporates diverse and heterogeneous objects, communication protocols and data formats. Many models and classification schemes have been proposed to make the degree of interoperability measurable, however only on the basis of a hierarchical scale. In this paper, we introduce a novel approach to measuring the degree of interoperability as a metrically scaled quantity. We consider the IoT as a distributed system in which interoperable objects exchange messages with each other. Under this premise, we interpret messages as operation calls and formalize this view as a causal model. The analysis of this model enables us to quantify the interoperable behavior of communicating objects.
Potentials of smart contracts-based disintermediation in additive manufacturing supply chains
(2019)
We investigate which potentials are created by using smart contracts for disintermediation in supply chains for additive manufacturing. Using a qualitative, critical realist research approach, we analyzed three case studies with companies active in additive manufacturing. Based on interviews with experts from these companies, we identified eight key requirements for disintermediation and four associated potentials of smart contract-based disintermediation.
Artifact correction and refined metrics for an EEG-based drowsiness detection system
(2019)
Objective: Drowsiness is an often underestimated yet major problem in road traffic. Of around 2.5 million traffic accidents in Germany in 2015, 2,898 accidents, with a total of 59 fatalities (~1.7% of deaths), were attributed to drowsiness; estimates assume an unreported rate of up to 20%. A first study of our own examined whether a mobile EEG in a driving simulator can reliably detect states of drowsiness; the recognition rate was only 61%. The goal of this work is to improve the measurement system used. To this end, accuracy is increased by means of artifact correction and refined quality metrics. Detected drowsiness is then indicated to the driver in an appropriate way, so that they can react accordingly.
Patients and methods: Independent Component Analysis (ICA) is a multivariate method for analyzing multiple random variables. To decide whether a driver is currently drowsy or awake, the feature vector created for each sequence is classified using ICA. A trained machine-learning algorithm is used for this purpose, which is also able to assign unknown data sets to classes. To obtain the required frequency values, a Fourier transform was performed for each EEG channel. The resulting feature vector is then classified by an artificial neural network. For training, previously created feature vectors are labelled with the classes "awake" and "drowsy". These data are randomly shuffled and split in a 2:1 ratio into a training and a test set. The experiment was conducted with eight participants, each completing two 45-minute test drives.
Results: The complete data set consists of 150,000 signal values, which are aggregated into roughly 7,000 sequences. After applying the quality metric, 4,370 sequences remain for training. There are clear differences in the sequences invalidated by EEG artifacts: in the "awake" state, three times as many sequences are discarded as in the "drowsy" state. Overall, about 50% of sequences are discarded on average for awake subjects, but only 25% for drowsy ones. On average, the system achieves a recognition rate of 73% across both states. Comparing only "awake" and "drowsy" and leaving out "slight drowsiness", the results exceed 90%.
Conclusions: The results show that attention decreases and drowsiness increases over the course of the experiment. This is evident from subjective and objective observations of signs of drowsiness on the one hand, and from measurable and classifiable differences in the EEG signal on the other. The theta waves used as features showed a lower amplitude towards the end of the experiment. Extending the binary classification further stabilizes the results, and artifact correction and quality metrics further increase data quality. The developed drowsiness-detection application identifies measurable signs of drowsiness and can make a sound decision about fitness to drive.
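A minimal sketch of the frequency-feature pipeline described in the methods (per-channel FFT band power, a neural-network classifier, and a 2:1 train/test split) could look as follows; random signals stand in for the real EEG recordings, and the channel count, sampling rate and frequency bands are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the EEG data: 1000 two-second sequences, 8 channels.
fs, n_ch, n_seq = 256, 8, 1000
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_seq, n_ch, 2 * fs))
labels = rng.integers(0, 2, n_seq)                  # 0 = awake, 1 = drowsy

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands [Hz]
freqs = np.fft.rfftfreq(eeg.shape[-1], d=1 / fs)
spec = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2       # power spectrum per channel

# Mean band power per channel and band -> feature vector per sequence.
features = np.concatenate(
    [spec[:, :, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
     for lo, hi in bands.values()], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                          test_size=1 / 3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))      # ~chance level on random data
```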