004 Informatik
OpenAPI, WADL, RAML, and API Blueprint are popular formats for documenting Web APIs. Although these formats are in general both human- and machine-readable, only the part of a format that describes the syntax of a Web API is machine-understandable. Descriptions, which explain the meaning and purpose of Web API elements, are embedded as natural language text snippets in the documents and target human readers, not machines. To enable machines to read and process such state-of-practice Web API documentation, we propose a Transformer model that solves the generic task of identifying a Web API element within a syntax structure that matches a natural language query. For our first prototype, we focus on the Web API integration task of matching output with input parameters and fine-tuned a pre-trained CodeBERT model on the downstream task of question answering with samples from 2,321 OpenAPI documents. We formulate the original question answering problem as a multiple choice task: given a semantic natural language description of an output parameter (question) and the syntax of the input schema (paragraph), the model chooses the input parameter (answer) in the schema that best matches the description. The paper describes the data preparation, tokenization, and fine-tuning process and discusses possible applications of our model as part of a recommender system. Furthermore, we evaluate the generalizability and robustness of our fine-tuned model, which chooses the correct parameter with an accuracy of 81.46%.
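A minimal sketch of this multiple-choice setup using the Hugging Face transformers API (the description, the candidate parameter names, and the direct use of the base checkpoint are illustrative assumptions; the paper's actual data preparation and fine-tuning are omitted):

import torch
from transformers import AutoTokenizer, RobertaForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = RobertaForMultipleChoice.from_pretrained("microsoft/codebert-base")

# Question: semantic natural language description of the output parameter.
question = "The unique identifier of the newly created order."
# Answer candidates: input parameter names taken from the input schema.
choices = ["orderId", "customerId", "status"]

# Encode the question once per candidate; the model expects (batch, num_choices, seq_len).
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits
print(choices[int(logits.argmax(-1))])  # parameter the model considers the best match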
Context
Web APIs are among the most widely used ways to expose application functionality on the Web, and their understandability is important for using the provided resources efficiently. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking.
Objective
We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) whether rule violations are also perceived as more difficult to understand, and 3) whether demographic attributes like REST-related experience influence this.
Method
We conducted a controlled Web-based experiment with 105 participants from both industry and academia and with different levels of experience. Based on a hybrid between a crossover and a between-subjects design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a rule and one that violated it (an illustrative pair follows below). Participants answered comprehension questions and rated the perceived difficulty.
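For illustration, a hypothetical pair of this kind, based on the common REST convention of using plural nouns for collection resources (not necessarily one of the 12 rules studied):

Rule version:       GET /orders/42/items
Violation version:  GET /order/42/item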
Results
For 11 of the 12 rules, we found that the violation version performed significantly worse than the rule-adhering version in the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were also subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance on violations.
Conclusions
Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
In recent years, the trend toward digitalization and connectivity has changed customer expectations of B2B customer service. This article pursues two clear study objectives: first, it examines the role of IoT (Internet of Things) and cybersecurity as success factors for business-to-business (B2B) customer service; second, it investigates how secure integration can contribute to a competitive advantage in the German market. Using a qualitative approach based on 20 interviews, it was examined whether IoT and cybersecurity can be regarded as success factors for German B2B customer service. As a result, this study delivers five core statements (hypotheses) derived from the qualitative interviews. In addition to discussing general success factors and their influence, the role of IoT in optimizing B2B customer service is examined. Furthermore, potential security risks associated with the service models, necessary cybersecurity requirements, and data collection are discussed. Finally, a model was developed that identifies internal and external aspects which help ensure that IoT and cybersecurity are experienced as success factors along the customer's activity chain in the pre-sales, sales, and after-sales phases.
This practice-oriented, cross-industry article thus provides insights based on qualitative findings for further theoretical research and enables organizations to view the topic holistically.
This paper aims to model wind speed time series at multiple sites. The five-parameter Johnson distribution is deployed to relate the wind speed at each site to a Gaussian time series, and the resulting m-dimensional Gaussian stochastic vector process Z(t) is employed to model the temporal-spatial correlation of wind speeds at m different sites. In general, it is computationally tedious to obtain the autocorrelation functions (ACFs) and cross-correlation functions (CCFs) of Z(t), which differ from those of the wind speed time series. To circumvent this correlation distortion problem, the rank ACF and rank CCF are introduced to characterize the temporal-spatial correlation of wind speeds, whereby the ACFs and CCFs of Z(t) can be obtained analytically. Then, Fourier transformation is applied to establish the cross-spectral density matrix of Z(t), and an analytical approach is proposed to generate samples of wind speeds at the m different sites. Finally, simulation experiments are performed to validate the proposed methods, and the results verify that the five-parameter Johnson distribution accurately matches the distribution functions of wind speeds and that the spectral representation method reproduces their temporal-spatial correlation well.
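The analytical link presumably exploited here is the classical relation between the Spearman rank correlation \rho_S and the Pearson correlation \rho of a bivariate Gaussian pair (stated as background, not taken from the paper):

\rho_S = \frac{6}{\pi}\,\arcsin\!\left(\frac{\rho}{2}\right)
\qquad\Longleftrightarrow\qquad
\rho = 2\,\sin\!\left(\frac{\pi\,\rho_S}{6}\right)

Because rank correlations are invariant under the monotone Johnson transformation, the rank ACFs/CCFs measured on the wind speeds equal those of Z(t), and the inverse formula converts them directly into the Gaussian ACFs/CCFs.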
Most digital innovations fail when they transition from the exploring to the scaling stage. We describe how freeyou, the digital innovation spinoff of a major German insurer, successfully scaled online-only car insurance, focusing particularly on how it managed the IT-related challenges. The stark differences between the stages required very different approaches to application development, IT organization and data analytics. Based on freeyou’s experience, we provide recommendations for successful transitioning from exploring to scaling.
What might the attendee be able to do after being in your session?
Our work shows how to connect intra-operative devices via IEEE 11073 Service-oriented Device Connectivity (SDC).
Description of the Problem or Gap
Standardized device communication is essential for interoperability and the availability of device data, and therefore for the intelligent operating room (OR) and emerging solutions. The SDC standard was developed to make information from medical devices available in a uniform manner and to enable interoperability. Existing devices are rarely SDC-capable and need interfaces to become interoperable via SDC.
Methods: What did you do to address the problem or gap?
We conceived an SDC-based architecture consisting of a service provider and a service consumer. In our concept, the service provider is connected to the medical device and is capable of translating the proprietary protocol of the device into SDC and vice versa. The service consumer is used to request or send information via the SDC protocol to the service provider and can function as a uniform bidirectional interface (e.g. for displaying or controlling). This concept was demonstrated exemplarily with the Philips MX800 patient monitor, retrieving the device data (e.g. vital parameters) via SDC, and partly with the marLED X operating light from KLS Martin Group.
Results: What was the outcome(s) of what you did to address the problem or gap?
The patient monitor MX800 was connected via LAN to a Raspberry Pi (RPi), on which the service provider runs. A Python script on the RPi establishes a connection to the monitor and translates incoming and outgoing messages between the proprietary protocol and SDC, to and from the service consumer. The service consumer runs on a laptop and simulates the different kinds of systems that want to obtain vital parameters or other information from the patient monitor. The operating light marLED X was connected to an RPi via USB-to-RS232. A Python script on the RPi establishes a connection to the light and uses proprietary commands to retrieve information about the light (e.g. its status) and to control it (e.g. toggle the light, increment the intensity). A translation to SDC is not integrated yet.
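A minimal sketch of such a provider-side translation loop (the device address, the frame parsing, and the SDC publishing helper are hypothetical placeholders; neither the MX800 export protocol nor a concrete SDC stack, such as the open-source sdc11073 library, is detailed here):

import socket

MONITOR_ADDR = ("192.168.0.10", 24105)  # hypothetical LAN address of the monitor

def parse_vitals(raw: bytes) -> dict:
    # Hypothetical: decode one proprietary frame into {metric_handle: value}.
    return {}

def publish_metric(handle: str, value: float) -> None:
    # Hypothetical: update the corresponding SDC metric state on the provider.
    pass

def translation_loop() -> None:
    # Bridge the proprietary LAN protocol to SDC, one frame at a time.
    with socket.create_connection(MONITOR_ADDR) as conn:
        while True:
            raw = conn.recv(4096)
            if not raw:
                break
            for handle, value in parse_vitals(raw).items():
                publish_metric(handle, value)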
Discussion of Results
Our practical implementation shows that medical devices can be accessed via external connections to retrieve device data and to control the device via commands. The example SDC implementation for the patient monitor MX800 makes it possible to request its data via the standardized communication protocol SDC. This is also possible for the operating light marLED X once its proprietary protocol has been analyzed so that it can be translated to and from SDC. This would make it possible to control the device from an external system, or automatically depending on the status of the ongoing procedure. The advantage is that existing intra-operative devices can be extended by a service provider capable of translating the proprietary protocol of the device into SDC and vice versa. This enables interoperability and an intelligent OR that, for example, is aware of all devices, their status, and their data and can use this information to optimally support the surgeons and their team (e.g. provision of information, automated documentation). With this interoperability, future innovations merely need to understand the SDC protocol instead of all vendor-dependent communication protocols.
Conclusion
Standardized device communication is essential to achieve interoperability, and therefore intelligent ORs. Our contribution addresses the possibility of retrofitting existing medical devices with SDC capability. This may eliminate the need to understand all the different proprietary protocols when developing innovative new solutions for the OR.
In recent years, 3D facial reconstruction from single images has garnered significant interest. Most approaches are based on 3D Morphable Model (3DMM) fitting to reconstruct the 3D face shape. Concurrently, the adoption of Generative Adversarial Networks (GANs) to improve the texture of reconstructed faces has been gaining momentum. In this paper, we propose a fundamentally different approach to reconstructing the 3D head shape from a single image by harnessing the power of GANs. Our method predicts three maps of normal vectors of the head’s frontal, left, and right poses. We thus present a model-free method that does not require any prior knowledge of the geometry of the object to be reconstructed.
The key advantage of our proposed approach is the substantial improvement in reconstruction quality compared to existing methods, particularly for facial regions that are self-occluded in the input image. Our method is not limited to 3D face reconstruction; it is generic and applicable to many kinds of 3D objects. To illustrate the versatility of our method, we demonstrate its efficacy in reconstructing the entire human body.
By delivering a model-free method capable of generating high-quality 3D reconstructions, this paper not only advances the field of 3D facial reconstruction but also provides a foundation for future research and applications spanning multiple object types. The implications of this work have the potential to extend far beyond facial reconstruction, paving the way for innovative solutions and discoveries in various domains.
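As background, one standard way to integrate a predicted normal map into a surface is Fourier-domain integration of the gradient field (Frankot-Chellappa); this is a generic technique, not necessarily the reconstruction step used in the paper. A minimal numpy sketch, assuming a unit normal map of shape (H, W, 3):

import numpy as np

def integrate_normals(n: np.ndarray) -> np.ndarray:
    # Surface gradients from unit normals: p = dz/dx, q = dz/dy.
    nz = np.clip(n[..., 2], 1e-6, None)
    p, q = -n[..., 0] / nz, -n[..., 1] / nz
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    denom = u**2 + v**2
    denom[0, 0] = 1.0  # avoid division by zero at the DC term
    # Least-squares integrable surface in the Fourier domain.
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0  # depth is only recovered up to a constant offset
    return np.real(np.fft.ifft2(Z))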
The aim of this work is the development of artificial intelligence (AI) applications that support the recruiting process and thereby advance the capabilities and effectiveness of human resource management. This concerns recruiting processes and includes solutions for active sourcing (i.e. active recruitment), pre-sorting, evaluating structured video interviews, and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground truth data. Together, these approaches represent important steps toward creating ethical and unbiased machine learning systems.
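As a simple illustration of the kind of bias measurement mentioned above (a common fairness metric, not the authors' concrete method), one can compute the demographic parity difference between two groups on binary predictions:

import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    # y_pred: binary predictions (0/1); group: binary protected attribute (0/1).
    # Absolute difference in positive-prediction rates between the two groups;
    # a value near 0 suggests similar treatment of both groups' base rates.
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(float(rate_a) - float(rate_b))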
Human recognition is an important part of perception systems, such as those used in autonomous vehicles or robots. These systems often use deep neural networks for this purpose, which rely on large amounts of data that ideally cover various situations, movements, visual appearances, and interactions. However, obtaining such data is typically complex and expensive. In addition to raw data, labels are required to create training data for supervised learning. Thus, manual annotation of bounding boxes, keypoints, orientations, or performed actions is frequently necessary. This work addresses whether the laborious acquisition and creation of data can be simplified through targeted simulation. If data are generated in a simulation, information such as positions, dimensions, orientations, surfaces, and occlusions is already known, and appropriate labels can be generated automatically. A key question is whether deep neural networks trained with simulated data can be applied to real data. This work explores the use of simulated training data using examples from the field of pedestrian detection for autonomous vehicles. On the one hand, it is shown how existing systems can be improved by targeted retraining with simulation data, for example to better recognize corner cases. On the other hand, the work focuses on generating data that rarely or never occur in real standard datasets. It is demonstrated how training data containing finely graded action labels can be generated by the targeted acquisition and combination of motion data and 3D models, so that even complex pedestrian situations can be recognized. Through the diverse annotation data that simulations provide, it becomes possible to train deep neural networks for a wide variety of tasks with a single dataset. In this work, such simulated data are used to train a novel deep multitask network that brings together diverse, previously mostly independently considered but related tasks such as 2D and 3D human pose recognition and body and orientation estimation.
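A minimal sketch of how such a multitask objective might be combined, assuming one loss term per task head (the head names, loss choices, and weights are illustrative assumptions, not the network described here):

import torch
import torch.nn.functional as F

def multitask_loss(pred: dict, target: dict, w=(1.0, 1.0, 0.5, 0.5)) -> torch.Tensor:
    # Weighted sum of per-task losses for a shared-backbone multitask network.
    l_2d = F.mse_loss(pred["kp2d"], target["kp2d"])           # 2D keypoints
    l_3d = F.mse_loss(pred["kp3d"], target["kp3d"])           # 3D joint positions
    l_or = F.mse_loss(pred["orient"], target["orient"])       # body orientation
    l_ac = F.cross_entropy(pred["action"], target["action"])  # action labels
    return w[0] * l_2d + w[1] * l_3d + w[2] * l_or + w[3] * l_ac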
In modern collaborative production environments, where industrial robots and humans are supposed to work hand in hand, it is mandatory to observe the robot’s workspace at all times. Such observation is even more crucial when the robot’s main position is itself dynamic, e.g. because the system is mounted on a movable platform. As current solutions, like physically secured areas in which a robot can perform actions potentially dangerous for humans, become unfeasible in such scenarios, novel, more dynamic, and situation-aware safety solutions need to be developed and deployed.
This thesis mainly contributes to the bigger picture of such a collaborative scenario by presenting a data-driven, convolutional neural network-based approach to estimating the two-dimensional kinematic-chain configuration of industrial robot arms within raw camera images. The thesis also provides the information needed to generate and organize the necessary data basis and presents the frameworks that were used to realize all involved subsystems. The robot arm’s extracted kinematic chain can also be used to estimate the extrinsic camera parameters relative to the robot’s three-dimensional origin. Furthermore, a tracking system based on a two-dimensional kinematic-chain descriptor is presented, which accumulates a proper movement history and thereby enables the prediction of future target positions within the given image plane. The combination of the extracted robot pose with a simultaneous human pose estimation system delivers a consistent data flow that can be used in higher-level applications.
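A hedged sketch of how 2D joint detections, together with the known 3D joint positions from the robot's forward kinematics, could yield the extrinsic camera parameters via OpenCV's standard PnP solver (the point values and intrinsics below are placeholders, not data from the thesis):

import numpy as np
import cv2

# 3D joint positions in the robot's base frame (from forward kinematics) and
# the corresponding 2D detections from the kinematic-chain estimator.
points_3d = np.random.rand(6, 3)               # placeholder joint coordinates
points_2d = np.random.rand(6, 2) * [640, 480]  # placeholder image detections

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed camera intrinsics

ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)
# (R, tvec) are the extrinsics mapping robot-base coordinates into the camera frame.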
This thesis also provides a detailed evaluation of all involved subsystems, giving a broad overview of their individual performance based on newly generated, semi-automatically annotated real datasets.