Informatik
High Performance Computing (HPC) enables significant progress in both science and industry. Whereas parallel applications were traditionally developed to address the grand challenges in science, today they are also heavily used to speed up the time-to-result in product design, production planning, financial risk management, medical diagnosis, as well as research and development efforts. However, purchasing and operating HPC clusters to run these applications requires huge capital expenditures as well as operational knowledge and is thus reserved for large organizations that benefit from economies of scale. More recently, the cloud has evolved into an alternative execution environment for parallel applications, which comes with novel characteristics such as on-demand access to compute resources, pay-per-use billing, and elasticity. Whereas the cloud has mainly been used to operate interactive multi-tier applications, HPC users are also interested in the benefits it offers. These include full control of the resource configuration based on virtualization, fast setup times through on-demand accessible compute resources, and the elimination of upfront capital expenditures due to the pay-per-use billing model. Additionally, elasticity allows compute resources to be provisioned and decommissioned at runtime, which enables fine-grained control of an application's performance in terms of its execution time and efficiency as well as the related monetary costs of the computation. Whereas HPC-optimized cloud environments have been introduced by cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, existing parallel architectures are not designed to make use of elasticity. This thesis addresses several challenges in the emergent field of High Performance Cloud Computing. In particular, the presented contributions focus on the novel opportunities and challenges related to elasticity.
First, the principles of elastic parallel systems as well as related design considerations are discussed in detail. On this basis, two exemplary elastic parallel system architectures are presented, each of which includes (1) an elasticity controller that controls the number of processing units based on user-defined goals, (2) a cloud-aware parallel execution model that handles coordination and synchronization requirements in an automated manner, and (3) a programming abstraction to ease the implementation of elastic parallel applications. To automate application delivery and deployment, novel approaches are presented that generate the required deployment artifacts from developer-provided source code in an automated manner while considering application-specific non-functional requirements. Throughout this thesis, a broad spectrum of design decisions related to the construction of elastic parallel system architectures is discussed, including proactive and reactive elasticity control mechanisms as well as cloud-based parallel processing with virtual machines (Infrastructure as a Service) and functions (Function as a Service). To evaluate these contributions, extensive experimental evaluations are presented.
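The elasticity controller described above can be illustrated with a minimal reactive control loop. This is a sketch only: the function names (`get_efficiency`, `scale_out`, `scale_in`) and the efficiency-based policy are illustrative assumptions, not the thesis's actual interfaces.

```python
import time

def reactive_elasticity_controller(get_efficiency, scale_out, scale_in,
                                   target_eff=0.75, tolerance=0.1,
                                   interval_s=60, rounds=3):
    """Reactive elasticity control loop (illustrative sketch).

    Periodically compares the measured parallel efficiency against a
    user-defined target and adds or removes processing units accordingly.
    """
    for _ in range(rounds):
        eff = get_efficiency()           # e.g., speedup / number of processing units
        if eff > target_eff + tolerance:
            scale_out(1)                 # resources are well utilized: add a unit
        elif eff < target_eff - tolerance:
            scale_in(1)                  # poor efficiency: release a unit
        time.sleep(interval_s)
```

In practice such a controller would also consider user-defined goals for execution time and monetary cost, and a proactive variant would forecast the efficiency instead of merely reacting to it.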
The focus of this work is the support of stent graft selection in the endovascular repair of an infrarenal aortic aneurysm. Within this work, a method for evaluating the results of a finite element analysis of stent graft behavior was designed, implemented, and discussed in a Germany-wide user study with 16 surgeons. The developed human-machine interface enables the vascular physician to interactively analyze computed fixation forces and contact states of several stent grafts in the context of the aortic segment to be treated. The developed method allows physicians to engage more deeply with numerical simulations and stent graft assessment measures. On this basis, the user study determined the application potential of numerical simulations for supporting stent graft selection and defined a requirements specification for a system for simulation-based stent graft planning. As a result, the key application potentials identified were the definition of a minimum degree of oversizing, the optimization of the limb length of bifurcated stent grafts, and the comparison of different stent graft designs. The essential functions of a system for simulation-based stent graft selection include an overview map with color-coded migration risk per stent graft and landing zone, the visualization of the sealing state of the stent components, and the display of stent graft and vessel deformations in a 3D model.
What does a successful introduction of Industrie 4.0 look like? This book systematically presents the concept, paradigms, and relevant technologies of Industrie 4.0 as well as their overall interrelations. In contrast to the prevailing, purely technological and application-oriented perspective, the book additionally merges the strategic, tactical, and operational levels of consideration into one integrative thread. Its centerpiece is a procedure model that describes the required actions at the strategic and operational levels. A practical case, various Industrie 4.0 use cases, and renowned experts from research and practice make this book interesting for newcomers as well as for middle and upper management interested in implementation who want to gain a new perspective on the complexity of the topic. The glossary turns the book into a valuable reference work on Industrie 4.0.
This book introduces the fundamentals of software engineering. Its focus lies on systematic, model-based software and systems development, but also on the use of agile methods. The authors place particular emphasis on treating practical aspects and the underlying theories equally, which makes the book suitable both as a professional reference and as a textbook. Software engineering is described comprehensively within a systematic framework. Selected, mutually aligned concepts and methods are presented consistently and in an integrated manner.
The triumph of social media in the private sphere has demonstrated the advantages of these communication tools. Companies try to exploit these successes for themselves and use social media in their communication activities. In external communication, for instance, these tools enable a fast and uncomplicated exchange of messages with customers or help integrate customer expertise into organizational processes such as product development or customer complaint management. In internal communication, too, the use of social media creates new channels. A special group of social media tools for internal communication and collaboration is referred to as Enterprise Social Networks (ESN).
Digitalization, constant technological progress, and ever shorter product life cycles currently pose major challenges for companies. To succeed in the market, business models must be adapted to changing market conditions more frequently and more quickly than before. The ability to adapt quickly, also known as agility, is a decisive competitive factor today. Due to the ever-growing IT share of products and the fact that these products are manufactured with the help of IT, changing the business model has a major impact on the enterprise architecture (EA). However, developing EAs is a highly complex task, since many stakeholders with conflicting interests are involved in the decision-making process. A high degree of collaboration is therefore required. To support companies in developing their EA, this article presents a novel integrative method that systematically incorporates stakeholder interests into decision-making. Applying the method improves the collaboration among the stakeholders involved by identifying touchpoints between them. Moreover, the standardized activities make decision-making more transparent and comparable without restricting creativity.
Companies currently face major challenges due to digitalization, constant technological progress, and ever shorter product life cycles. To survive in the market, business models must be adapted to changing market conditions more often and more quickly than was previously the case. The ability to adapt quickly, also known as agility, is a decisive competitive factor today. Due to the steadily growing IT share in products and the fact that these products are manufactured with IT support, changes to the business model have a major impact on a company's enterprise architecture.
An enterprise architecture spans the enterprise by encompassing and integrating its business and technical structures, in particular its entire IT. Enterprise architecture management is the discipline for mastering and aligning these structures. Many stakeholders from the most diverse areas of the company, with individual and partly conflicting interests, contribute to shaping the enterprise architecture. This makes decision-making a complex task.
The integrative decision-making method designed in this work aims to support those affected and involved, referred to in the following as stakeholders, in their decisions. The basic idea is the systematic inclusion of stakeholder interests and of visualizations derived from them. This gives the method its integrative character and helps to identify dependencies between stakeholders, thereby fostering collaboration among the stakeholders involved in decisions. In addition to the systematic inclusion of visualizations, this work introduces the concept of the technique. Techniques are likewise derived from stakeholder interests and support the execution of decision-making activities by prescribing procedures for certain tasks or even carrying out subprocesses of decision-making automatically. The concept of the technique, its systematic derivation from stakeholder interests, and its interplay with visualizations are defined in this work in the form of an extended conceptualization of the architecture description.
Since tool support is often a challenge in practice, this work is rounded off by a specially designed and prototypically validated architecture cockpit. The cockpit is a tool support for the introduced integrative method based on an electronic meeting room.
Blockchains give rise to new workloads in database management systems and K/V-stores. Distributed Ledger Technology (DLT) is a technique for managing transactions in 'trustless' distributed systems. Yet, clients of nodes in blockchain networks are backed by 'trustworthy' K/V-stores, such as LevelDB or RocksDB in Ethereum, which are based on Log-Structured Merge Trees (LSM-Trees). However, LSM-Trees do not fully match the properties of blockchain and enterprise workloads.
In this paper, we claim that Partitioned B-Trees (PBTs) fit the properties of DLT: uniformly distributed hash keys, immutability, consensus, invalid blocks, unspent and off-chain transactions, reorganization, and data state / version ordering in a distributed log structure. A PBT can locate records of newly inserted key-value pairs, as well as data of unspent transactions, in separate partitions in main memory. Once several blocks reach consensus, the PBT evicts a whole partition, which becomes immutable, to secondary storage. This behavior minimizes write amplification and enables a beneficial sequential write pattern on modern hardware. Furthermore, DLT implies some type of log-based versioning. PBTs can serve as an MV-store for the data storage of logical blocks and for indexing in multi-version concurrency control (MVCC) transaction processing.
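The eviction behavior described above can be sketched as follows. The class and method names are hypothetical and the structure is a strong simplification of a real Partitioned B-Tree; it only illustrates the idea of a mutable in-memory partition that is sealed as a whole once consensus is reached.

```python
class PartitionedStore:
    """Illustrative sketch of a Partitioned B-Tree style store (assumed API).

    New key-value pairs go into a mutable in-memory partition; once the
    associated blocks reach consensus, the whole partition is evicted to
    secondary storage as an immutable unit, yielding a sequential write.
    """

    def __init__(self):
        self.active = {}      # mutable in-memory partition
        self.immutable = []   # evicted partitions, stand-in for secondary storage

    def put(self, key, value):
        self.active[key] = value

    def evict(self):
        """Seal the active partition after consensus and start a new one."""
        if self.active:
            # One sorted run written in a single pass: sequential write
            # pattern, minimal write amplification.
            self.immutable.append(tuple(sorted(self.active.items())))
            self.active = {}

    def get(self, key):
        if key in self.active:
            return self.active[key]
        for partition in reversed(self.immutable):  # newest version first
            for k, v in partition:
                if k == key:
                    return v
        return None
```

A real PBT keeps all partitions inside one B-tree by prefixing keys with a partition number; the dictionary-and-list structure here is only a didactic stand-in.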
In this paper we build on our research in data management on native Flash storage. In particular, we demonstrate the advantages of intelligent data placement strategies. To effectively manage physical Flash space and organize the data on it, we utilize novel storage structures such as regions and groups. These are coupled to common DBMS logical structures and thus require no extra overhead for the DBA. The experimental results indicate an improvement of up to 2x, which doubles the longevity of a Flash SSD. During the demonstration, the audience can experience the advantages of the proposed approach on real Flash hardware.
Higher education institutions (HEIs) rely heavily on information technology (IT) to create innovations. Therefore, IT governance (ITG) is essential for education activities, particularly during the ongoing COVID-19 pandemic. However, the traditional concept of ITG is not fully equipped to deal with the changes occurring in the digital age. Today's ITG requires an agile approach that can respond to disruptions in the HEI environment. Consequently, universities increasingly need to adopt agile strategies to ensure superior performance. This research proposes a conceptualization comprising three agile dimensions within the ITG construct: structures, processes, and relational mechanisms. An extensive qualitative evaluation of industry practice uncovered 46 agile governance mechanisms. Moreover, 16 professors rated these elements to assess agile ITG in their HEIs and to determine the most effective ones. This led to the identification of four structure elements, seven processes, and seven relational mechanisms.
Digital assistants like Alexa, Google Assistant, or Siri have seen large adoption over the past years. Using artificial intelligence (AI) technologies, they provide a vocal interface to physical devices as well as to digital services and have spurred an entirely new ecosystem. This comprises the big tech companies themselves, but also a strongly growing community of developers that make these functionalities available via digital platforms. At present, only little research is available to understand the structure and the value creation logic of these AI-based assistant platforms and their ecosystem. This research adopts ecosystem intelligence to shed light on their structure and dynamics. It combines existing data collection methods with an automated approach that proves useful in deriving a network-based conceptual model of Amazon's Alexa assistant platform and ecosystem. It shows that skills are a key unit of modularity in this ecosystem, which is linked to other elements such as service, data, and money flows. It also suggests that the topology of the Alexa ecosystem may be described using the criteria reflexivity, symmetry, variance, strength, and centrality of the skill coactivations. Finally, it identifies three ways to create and capture value on AI-based assistant platforms. Surprisingly, only a few skills use a transactional business model by selling services and goods; many skills are instead complementary and provide information, configuration, and control services for other skill providers' products and services. These findings provide new insights into the highly relevant ecosystems of AI-based assistant platforms, which might serve enterprises in developing their strategies in these ecosystems. They might also pave the way to a faster, data-driven approach to ecosystem intelligence.
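The network-based view of skill coactivations might be sketched as below. This is purely illustrative: the session data, function names, and the use of co-occurrence counts as edge "strength" and weighted degree as "centrality" are assumptions, not the paper's actual method.

```python
from collections import defaultdict
from itertools import combinations

def coactivation_network(sessions):
    """Build an undirected, weighted skill-coactivation graph (sketch).

    Each session is a collection of skills activated together; the edge
    weight counts how often two skills co-occur across sessions.
    """
    weights = defaultdict(int)
    for skills in sessions:
        for a, b in combinations(sorted(set(skills)), 2):
            weights[(a, b)] += 1
    return dict(weights)

def weighted_degree(weights):
    """Weighted degree per skill, a simple stand-in for centrality."""
    degree = defaultdict(int)
    for (a, b), w in weights.items():
        degree[a] += w
        degree[b] += w
    return dict(degree)
```

On such a graph, the criteria named in the abstract could then be evaluated, e.g., strength as edge weight and centrality as (weighted) degree of a skill.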
Fatigue and drowsiness are responsible for a significant percentage of road traffic accidents. There are several approaches to monitoring driver drowsiness, ranging from the driver's steering behavior to the direct analysis of the driver, e.g., eye tracking, blinking, yawning, or the electrocardiogram (ECG). This paper describes the development of a low-cost ECG sensor to derive heart rate variability (HRV) data for drowsiness detection. The work includes hardware and software design. The hardware was implemented on a printed circuit board (PCB) designed so that the board can be used as an extension shield for an Arduino. The PCB contains a double, inverted ECG channel including low-pass filtering and provides two analog outputs to the Arduino, which combines them and performs the analog-to-digital conversion. The digital ECG signal is transferred to an NVidia embedded PC where the processing takes place, including QRS-complex, heart rate, and HRV detection as well as visualization features. The resulting compact sensor provides good results in the extraction of the main ECG parameters. The sensor is being used in a larger framework, where facial-recognition-based drowsiness detection is combined with ECG-based detection to improve the recognition rate under unfavorable light or occlusion conditions.
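The HRV-related processing steps can be illustrated with a short sketch. The function names are hypothetical, and RMSSD is used here as one common time-domain HRV measure; the paper's exact metrics and QRS detector are not specified in the abstract.

```python
import math

def rr_intervals_ms(r_peak_times_s):
    """RR intervals in milliseconds from R-peak timestamps (seconds),
    e.g., as produced by a QRS-complex detector."""
    return [(b - a) * 1000.0 for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]

def heart_rate_bpm(rr_ms):
    """Mean heart rate in beats per minute from RR intervals."""
    return 60000.0 / (sum(rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR differences, a common
    time-domain HRV measure used in drowsiness studies."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

A drowsiness detector would compute such measures over a sliding window of RR intervals, since HRV typically changes as a driver becomes drowsy.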
Integrated circuits (ICs) are an integral part of many devices such as smartphones, computers, and televisions. More and more functions are being integrated on these circuits. To be able to cope with this design work within the given time in the future, a means for developers to collaborate simultaneously is therefore required. Under the working title eCEDA (eCollaboration for Electronic Design Automation), a concept for a web application is being developed that is intended to enable real-time collaboration of developers in chip design. This concept, as well as various aspects of collaboration, is covered in this work.
"Learning by doing" in higher education in technical disciplines is mostly realized through hands-on labs, which challenge a person's exploratory aptitude and curiosity. However, exploratory learning is hindered by technical setups that are not easy to establish and verify. Technical skills are, nevertheless, mandatory for employees in this area. On the other hand, theoretical concepts are often compromised by commercial products. One challenge is to contrast and reconcile theory with practice. Another is to implement a self-assessment and grading scheme that keeps up with the scalability of e-learning courses. In addition, it should allow the use of different commercial products in the labs while still grading the assignment results automatically in a uniform way. In two European Union funded projects we designed, implemented, and evaluated a unique e-learning reference model, which realizes a modularized teaching concept that provides easily reproducible virtual hands-on labs. The novelty of the approach is to use software products of industrial relevance to compare with theory and to contrast different implementations. In a sample case study, we demonstrate the automated assessment of a creative database modeling and design task. Pilot applications in several European countries demonstrated that the participants gained highly sustainable competences that improved their attractiveness for employment.
A new class of information system architecture, decision-oriented service systems, is becoming increasingly widespread. Decision-oriented service systems provide services that support decisions in business processes and products based on the capabilities of cloud-computing environments. To pave the way for design methods for business processes and products based on decision-oriented service systems, this article introduces a capability-oriented approach. Starting from technological capabilities, more abstract operational and dynamic capabilities are created. The resulting framework is based on an integrated conceptualization of decision-oriented service systems that allows capturing synergetic effects. The framework aims to narrow the gap between the technological capabilities of individual technologies and the strategic goals of enterprises.
We examine the role of communication from users on dropout from digital learning systems to answer the following questions: (1) How does the sentiment within qualitative signals (user comments) affect dropout rates? (2) Does the variance in the proportion of positive and negative sentiments affect dropout rates? (3) How do quantitative signals (e.g., likes) moderate the effect of the qualitative signals? (4) How does the effect of qualitative signals on dropout rates change across early and late stages of learning? Our hypotheses draw from learning theory and self-regulation theory and were tested using data on 447 learning videos across 32 series of online tutorials, spanning 12 different fields of learning. The findings indicate a main effect of negative sentiment on dropout rates but no effect of positive sentiment on preventing dropout behaviour. This main effect is stronger in the early stages of learning and weakens at later stages. We also observe an effect of the extent of variance of positive and negative sentiments on dropout behaviour. These effects are negatively moderated by quantitative signals. Overall, making commenting more broad-based rather than polarised can be a useful strategy in managing learning, transferring knowledge, and building consensus.
Purpose – This paper aims to complement the current understanding about user engagement in electronic word-of-mouth (eWoM) communications across online services and product communities. It examines the effect of the senders’ prior experience with products and services, and their extent of acquaintance with other community members, on user engagement with the eWoM.
Design/methodology/approach – The study used a sample of 576 unique user postings from the corporate fan page of two German firms: a service community of a telecom provider and a product community of a car manufacturer. Multiple regression analysis is used to test the conceptual model.
Findings – Senders’ prior experience and acquaintance positively affect user engagement with eWoM, and these effects differ across communities for products and services and across their influence on “likes” and “comments”. The results also suggest that communities for products are orientated toward information sharing, while those discussing services engage in information building.
Research limitations/implications – This research explains mechanisms of user engagement with eWoM and opens directions for future research around motives, content and social media tools within the structures of online communities. The insights on information-handling dimensions of online tools and antecedents to their use contribute to the research on two prioritized topics by the Marketing Science Institute – "Measuring and Communicating the Value of Online Marketing Activities and Investments" and "Leveraging Digital/Social/Mobile Technology".
Practical implications – This research offers insights for firms to leverage user engagement and facilitate eWoM generation through members who have a higher number of acquaintances or who have more experience with the product or service. Executives should concentrate their community engagement strategies on the identification and utilization of power users. The conceptualization and empirical test about the role of likes and comments will help social media managers to create and better capture value from their social media metrics.
Originality/value – The insights about the underlying factors that influence engagement with eWoM advance our understanding about the usage of online content.
Context
Web APIs are one of the most used ways to expose application functionality on the Web, and their understandability is important for efficiently using the provided resources. While many API design rules exist, empirical evidence for the effectiveness of most rules is lacking.
Objective
We therefore wanted to study 1) the impact of RESTful API design rules on understandability, 2) if rule violations are also perceived as more difficult to understand, and 3) if demographic attributes like REST-related experience have an influence on this.
Method
We conducted a controlled Web-based experiment with 105 participants, from both industry and academia and with different levels of experience. Based on a hybrid between a crossover and a between-subjects design, we studied 12 design rules using API snippets in two complementary versions: one that adhered to a rule and one that was a violation of this rule. Participants answered comprehension questions and rated the perceived difficulty.
Results
For 11 of the 12 rules, we found that the violation variant performed significantly worse than the rule-adherent variant in the comprehension tasks. Regarding the subjective ratings, we found significant differences for 9 of the 12 rules, meaning that most violations were also subjectively rated as more difficult to understand. Demographics played no role in the comprehension performance for violations.
Conclusions
Our results provide first empirical evidence for the importance of following design rules to improve the understandability of Web APIs, which is important for researchers, practitioners, and educators.
Software development consists to a large extent of human-based processes with continuously increasing demands regarding interdisciplinary teamwork. Understanding the dynamics of software teams is highly important to successful project execution. Hence, for future project managers, knowledge about non-technical processes in teams is significant. In this paper, we present a course unit that provides an environment in which students can learn and experience the role of different communication patterns in distributed agile software development. In particular, students gain awareness of the importance of communication by experiencing the impact of limited communication channels and the effects on collaboration and team performance. The course unit uses the controlled-experiment instrument to provide the basic organization of a small software project carried out in virtual teams. We provide a detailed design of the course unit to allow for implementation in further courses. Furthermore, we report experiences obtained from implementing this course unit with 16 graduate students. We observed students struggling with technical aspects and team coordination in general, while not realizing the importance of communication channels (or their absence). Furthermore, we could show the students that a lack of communication protocols impacts team coordination and performance regardless of the communication channels used.
With the connected car, new competitors are pushing into the automotive industry. Using disruptive innovation methods, Google, Apple, Facebook, and others have already fundamentally changed entire industries and displaced market leaders such as Nokia or Otto within a few years. The following work deals with these methods and with the question of how they can be integrated into the automotive product development process in order to place sustainably successful business models on the market.