Application systems often need to be deployed in different variants if requirements that influence their implementation, hosting, and configuration differ between customers. Therefore, deployment technologies, such as Ansible or Terraform, support a certain degree of variability modeling. Moreover, modern application systems typically consist of various software components deployed using multiple deployment technologies that only support their proprietary, non-interoperable variability modeling concepts. The Variable Deployment Metamodel (VDMM) manages the deployment variability across heterogeneous deployment technologies based on a single variable deployment model. However, VDMM currently only supports modeling conditional components and their relations, which is sometimes too coarse-grained since it requires modeling entire components, including their implementation and deployment configuration, for each component variant. Therefore, we extend VDMM with a more fine-grained approach for managing the variability of component implementations and their deployment configurations, e.g., if a cheap version of a SaaS deployment provides only the community edition of the software and not the enterprise edition, which has additional analytical reporting functionalities built in. Using a case study and our prototype OpenTOSCA Vintner, we show that our extended VDMM can be used to realize variable deployments across different individual deployment technologies.
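To make the difference between component-level and fine-grained variability concrete, the following minimal Python sketch shows a single component whose artifacts and properties are selected per variant instead of duplicating the whole component. The names, the `Conditional` wrapper, and the toy condition evaluator are our own assumptions for illustration, not VDMM's or Vintner's actual modeling language.

```python
# Minimal sketch (hypothetical, not the actual VDMM/Vintner syntax): a component
# carries conditional artifacts and configuration values that are resolved for a
# chosen variant instead of modeling one complete component per variant.
from dataclasses import dataclass, field


@dataclass
class Conditional:
    """A value that is only kept when its condition holds for the chosen variant."""
    value: object
    condition: str  # toy condition, e.g. "edition == 'enterprise'"


@dataclass
class Component:
    name: str
    artifacts: list = field(default_factory=list)
    properties: dict = field(default_factory=dict)


def resolve(component, variant):
    """Keep only the artifacts and properties whose condition holds for the variant."""
    holds = lambda c: eval(c.condition, {}, dict(variant))  # toy evaluator for the sketch
    return {
        "name": component.name,
        "artifacts": [a.value for a in component.artifacts if holds(a)],
        "properties": {k: p.value for k, p in component.properties.items() if holds(p)},
    }


shop = Component(
    name="shop-backend",
    artifacts=[
        Conditional("shop-community.war", "edition == 'community'"),
        Conditional("shop-enterprise.war", "edition == 'enterprise'"),
    ],
    properties={
        "analytics_enabled": Conditional(True, "edition == 'enterprise'"),
    },
)

print(resolve(shop, {"edition": "community"}))   # cheap SaaS variant without analytics
print(resolve(shop, {"edition": "enterprise"}))  # variant with analytical reporting
```

Resolving the same component for the community and enterprise variants yields two deployment configurations without having to model two separate components.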
In the era of digital transformation, the notion of software quality transcends its traditional boundaries, necessitating an expansion to encompass the realms of value creation for customers and the business. Merely optimizing technical aspects of software quality can result in diminishing returns. Product discovery techniques can be seen as a powerful mechanism for crafting products that align with an expanded concept of quality - one that incorporates value creation. Previous research has shown that companies struggle to determine appropriate product discovery techniques for generating, validating, and prioritizing ideas for new products or features to ensure they meet the needs and desires of the customers and the business. For this reason, we conducted a grey literature review to identify various techniques for product discovery. First, the article provides an overview of different techniques and assesses how frequently they are mentioned in the literature. Second, we map these techniques to an existing product discovery process from previous research to provide practitioners with concrete guidelines for establishing product discovery in their organizations. The analysis shows, among other things, the increasing importance of techniques for structuring the problem exploration process and the product strategy process. The results are interpreted with regard to the importance of the techniques for practical application and recognizable trends.
Gamification has been increasingly applied to software engineering education in the past. The approaches vary from applying game elements in a conceptual phase of the course to using specific tools to engage the students more and support their learning goals. However, existing tools usually offer game elements, such as quizzes or challenges, but do not provide a more computer game-like experience. Therefore, we aim to take the gamified learning experience to another level by proposing Gamify-IT, a Unity- and web-based game platform intended to help students learn software engineering. It follows the characteristics of an immersive role-playing game in which the students explore a world, find and solve minigames, and clear dungeons with SE tasks. Lecturers can configure the worlds, e.g., to add content hints. Furthermore, they can add and configure minigames and dungeons to include exercises in a fully gamified way. Thereby, they customize their course in Gamify-IT and adapt the world precisely to other course materials such as lectures or exercises. Results of an evaluation of our initial prototype show that (i) students like to engage with the platform, (ii) students are motivated to learn when using Gamify-IT, and (iii) the minigames support students in understanding the learning objectives.
The introduction of smart contracts has expanded the applicability of blockchains to many domains beyond finance and cryptocurrencies. Moreover, different blockchain technologies have evolved that target special requirements. As a result, in practice, often a combination of different blockchain systems is required to achieve an overall goal. However, due to the heterogeneity of blockchain protocols, the execution of distributed business transactions that span several blockchains leads to multiple interoperability and integration challenges. Therefore, in this article, we examine the domain of Cross-Chain Smart Contract Invocations (CCSCIs), which are distributed transactions that involve the invocation of smart contracts hosted on two or more blockchain systems. We conduct a systematic multi-vocal literature review to get an overview of the available CCSCI approaches. We select 20 formal literature studies and 13 high-quality gray literature studies, extract data from them, and analyze it to derive the CCSCI Classification Framework. With the help of the framework, we group the approaches into two categories and eight subcategories. The approaches differ in multiple characteristics, e.g., the mechanisms they follow, and the capabilities and transaction processing semantics they offer. Our analysis indicates that all approaches suffer from obstacles that complicate real-world adoption, such as the low support for handling heterogeneity and the need for trusted third parties.
Blockchains have become increasingly important in recent years and have expanded their applicability to many domains beyond finance and cryptocurrencies. This adoption has particularly increased with the introduction of smart contracts, which are immutable, user-defined programs directly deployed on blockchain networks. However, many scenarios require business transactions to simultaneously access smart contracts on multiple, possibly heterogeneous blockchain networks while ensuring the atomicity and isolation of these transactions, which is not natively supported by current blockchain systems. Therefore, in this work, we introduce the Transactional Cross-Chain Smart Contract Invocation (TCCSCI) approach that supports such distributed business transactions while ensuring their global atomicity and serializability. The approach introduces the concept of Resource Manager Smart Contracts, and 2PC for Blockchains (2PC4BC), a client-driven Atomic Commit Protocol (ACP) specialized for blockchain-based distributed transactions. We validate our approach using a prototypical implementation, evaluate its introduced overhead, and prove its correctness.
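For intuition, the following minimal Python sketch illustrates the general shape of a client-driven two-phase commit over resource manager contracts on two chains. The class and method names are our assumptions for illustration only, not the paper's actual contracts, protocol messages, or API.

```python
# Minimal sketch of a client-driven two-phase commit across two chains, assuming
# each chain hosts a resource manager contract with prepare/commit/abort entry
# points. Names and behavior are illustrative placeholders.
class ResourceManagerContract:
    """Stand-in for a resource manager smart contract on one blockchain."""

    def __init__(self, chain: str):
        self.chain = chain
        self.prepared = {}

    def prepare(self, tx_id: str) -> bool:
        # On-chain, this step would lock the touched contract state until the
        # global transaction is either committed or aborted.
        self.prepared[tx_id] = True
        return True

    def commit(self, tx_id: str) -> None:
        assert self.prepared.pop(tx_id, False), "commit without successful prepare"

    def abort(self, tx_id: str) -> None:
        self.prepared.pop(tx_id, None)


def client_driven_2pc(tx_id: str, managers) -> bool:
    """The client acts as coordinator: if all prepare, commit everywhere; else abort."""
    if all(rm.prepare(tx_id) for rm in managers):
        for rm in managers:
            rm.commit(tx_id)
        return True
    for rm in managers:
        rm.abort(tx_id)
    return False


ok = client_driven_2pc("tx-42", [ResourceManagerContract("ethereum"),
                                 ResourceManagerContract("fabric")])
print("atomically committed:", ok)
```

In an on-chain setting, the prepare step would typically lock the affected contract state until the global decision is known, which is what yields atomicity and isolation across the participating blockchains.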
This research evaluates current measurement scales for ambidexterity and proposes a new approach for the measurement of this important construct. We argue that current measurement approaches may be unsuitable to capture the concept of ambidexterity. Through a systematic scale development process, we derive a measurement scale with dual items that simultaneously refer to both dimensions, exploitation and exploration, thus reflecting the true nature of ambidexterity. An extensive pre-test with 39 executives suggests that our scale is suitable for capturing ambidexterity. Our measurement model enhances the conceptual clarity of ambidexterity and can serve as a basis for future investigations of the concept.
Objective: The clinical standard procedure and reference for sleep measurement and the classification of individual sleep stages is polysomnography (PSG). Alternative approaches to this elaborate procedure could offer several advantages if the measurements were carried out in a more comfortable way. The main objective of this research study is to develop an algorithm for the automatic classification of sleep stages that uses only movement and respiration signals [1].
Patients and methods: After analyzing current research, we chose multinomial logistic regression as the basis of the approach [2]. To increase the accuracy of the evaluation, four features derived from movement and respiration signals were developed. For the evaluation, the overnight recordings of 35 persons, provided by the Charité-Universitätsmedizin Berlin, were used. The mean age of the participants was 38.6 ± 14.5 years, and the mean BMI was 24.4 ± 4.9 kg/m2. Since the algorithm works with three stages, the stages N1, N2, and N3 were merged into the NREM stage. The available dataset was strictly split into a training dataset of about 100 h and a test dataset of about 160 h of overnight recordings. Both datasets had a similar ratio of men to women, and the mean BMI showed no significant deviation.
Results: The algorithm was implemented and delivered successful results: the accuracy of detecting wake/NREM/REM phases is 73%, with a Cohen's kappa of 0.44 for the 19,324 analyzed sleep epochs of 30 s each. The observed overestimation of the NREM phase can partly be explained by its prevalence in a typical sleep pattern. Even the use of a balanced training dataset could not completely solve this problem.
Conclusions: The achieved results have confirmed the suitability of the approach in principle. It has the advantage that only movement and respiration signals are used, which can be recorded with less effort and more comfortably for users than, e.g., cardiac or EEG signals. The new system therefore represents a significant improvement over existing approaches. Merging the described algorithmic software with the hardware system for measuring respiration and body movement signals described in [1] into an autonomous, contactless system for continuous sleep monitoring is a possible direction for future work.
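As a rough illustration of the modeling step, the following Python sketch fits a multinomial logistic regression on four features per 30-second epoch and reports accuracy and Cohen's kappa. The features and labels here are random placeholders, not the study's actual data, feature definitions, or results.

```python
# Minimal sketch of three-class sleep staging with multinomial logistic regression.
# Feature values and labels are synthetic stand-ins for movement/respiration features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n_train, n_test, n_features = 12_000, 19_324, 4  # ~100 h train, ~160 h test of 30-s epochs

X_train = rng.normal(size=(n_train, n_features))
y_train = rng.choice(["wake", "NREM", "REM"], size=n_train, p=[0.15, 0.65, 0.20])
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.choice(["wake", "NREM", "REM"], size=n_test, p=[0.15, 0.65, 0.20])

# With the default lbfgs solver, scikit-learn fits a multinomial (softmax) model
# whenever there are more than two classes.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_test, y_pred))
```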
Context
In a world of high dynamics and uncertainty, it is almost impossible to predict in the long term which products, services, or features will satisfy the needs of the customer. To counter this situation, conducting Continuous Improvement or Design Thinking for product discovery is a common approach. A major constraint in conducting product discovery activities is the high effort required to discover and validate features and requirements. In addition, companies struggle to integrate product discovery activities into their agile processes and iterations.
Objective
This paper suggests a supportive tool, the “Discovery Effort Worthiness (DEW) Index”, for product owners and agile teams to determine a suitable amount of effort that should be spent on Design Thinking activities. To operationalize DEW, proposals for practitioners are presented that can be used to integrate product discovery into product development and delivery.
Method
A case study was conducted for the development of the DEW index. In addition, we conducted an expert workshop to develop proposals for the integration of product discovery activities into the product development and delivery process.
Results
First, we present the "Discovery Effort Worthiness Index" in the form of a formula. Second, we identified requirements that must be fulfilled for a systematic integration of product discovery activities into product development and delivery. Third, from these requirements, we derived proposals for integrating product discovery activities with a company's product development and delivery.
Conclusion
The developed "Discovery Effort Worthiness Index" provides a tool for companies and their product owners to determine how much effort they should spend on Design Thinking methods to discover and validate requirements. Integrating product discovery with product development and delivery should ensure that the results of product discovery are incorporated into product development. This aims at systematically analyzing product risks in order to increase the chance of product success.
Human pose estimation (HPE) is integral to scene understanding in numerous safety-critical domains involving human-machine interaction, such as autonomous driving or semi-automated work environments. Avoiding costly mistakes is synonymous with anticipating failure in model predictions, which necessitates meta-judgments on the accuracy of the applied models. Here, we propose a straightforward human pose regression framework to examine the behavior of two established methods for simultaneous aleatoric and epistemic uncertainty estimation: maximum a-posteriori (MAP) estimation with Monte-Carlo variational inference and deep evidential regression (DER). First, we evaluate both approaches on the quality of their predicted variances and whether these truly capture the expected model error. The initial assessment indicates that both methods exhibit the overconfidence issue common in deep probabilistic models. This observation motivates our implementation of an additional recalibration step to extract reliable confidence intervals. We then take a closer look at deep evidential regression, which, to our knowledge, is applied comprehensively for the first time to the HPE problem. Experimental results indicate that DER behaves as expected in challenging and adverse conditions commonly occurring in HPE and that the predicted uncertainties match their purported aleatoric and epistemic sources. Notably, DER achieves smooth uncertainty estimates without the need for a costly sampling step, making it an attractive candidate for uncertainty estimation on resource-limited platforms.
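As background for the uncertainty decomposition the abstract refers to, the following minimal Python sketch shows the common way aleatoric and epistemic variance are separated from Monte-Carlo sampling (e.g., with dropout kept active at test time). The arrays stand in for real network outputs and are not the paper's implementation.

```python
# Minimal sketch of splitting predictive uncertainty into aleatoric and epistemic
# parts from T stochastic forward passes; outputs below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
T, n_keypoints = 30, 17  # T stochastic passes, e.g. COCO-style keypoints of one image

# Each pass predicts a keypoint coordinate mean and an aleatoric variance.
means = rng.normal(loc=100.0, scale=2.0, size=(T, n_keypoints))
aleatoric_vars = np.exp(rng.normal(loc=0.0, scale=0.3, size=(T, n_keypoints)))

prediction = means.mean(axis=0)           # final point estimate per keypoint
aleatoric = aleatoric_vars.mean(axis=0)   # data noise, averaged over passes
epistemic = means.var(axis=0)             # model uncertainty: spread of the means
total_var = aleatoric + epistemic         # law of total variance

print("keypoint 0:", prediction[0], aleatoric[0], epistemic[0], total_var[0])
```

Deep evidential regression, in contrast, produces both uncertainty components from a single forward pass, which is what makes the sampling-free property highlighted in the abstract attractive for resource-limited platforms.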
Smart cities are considered data factories that generate an enormous amount of data from various sources. In fact, data is the backbone of any smart service. Therefore, the strategic and beneficial handling of this digital capital is crucial for cities. Some smart city pioneers have already written down their approach to data in the form of data strategies, but what should a city's data strategy include, and how can the goals and measures defined in the strategies be operationalized? This paper addresses these questions by looking closely at the data strategies of cities in Germany and the top three countries in the EU Digital Economy and Society Index. The in-depth analysis of 8 city data strategies yielded 11 dimensions that cities should consider in their data strategy: relevance of data, principles, methods, data sharing, technology, data culture, data ethics, organizational structure, data security and privacy, collaborations, and data literacy. In addition, data governance is a concept for putting these 11 strategic dimensions into practice through standardization measures, training programs, and the definition of roles and responsibilities, as well as by developing a data catalog.