Delivering value to customers in real time requires companies to deploy software in real time in order to expose features to users faster and to shorten the feedback loop. This allows for faster reaction and helps ensure that development is focused on features providing real value. Continuous delivery is a development practice in which software functionality is deployed continuously to the customer environment. Although this practice is established in some domains such as B2C mobile software, the B2B domain imposes specific challenges. This article presents a case study conducted in a medium-sized software company operating in the B2B domain. The objective of this study is to analyze the challenges and benefits of continuous delivery in this domain. The results suggest that technical challenges are only one part of the challenges a company encounters in this transition; the company must also address challenges related to customers and procedures. The core challenges are caused by having multiple customers with diverse environments and unique properties, whose business depends on the software product. Some customers require manual acceptance testing to be performed, while others are reluctant to adopt new versions. By utilizing continuous delivery, the case company can shorten feedback cycles, increase the reliability of new versions, and reduce the resources required for deploying and testing new releases.
The experimental characterization of the thermal impedance Zth of large power MOSFETs is commonly done by measuring the junction temperature Tj in the cooling phase after the device has been heated, preferably to a high junction temperature for increased accuracy. However, turning off a large heating current (as required by modern MOSFETs with low on-state resistances) takes some time because of parasitic inductances in the measurement system. Thus, most setups do not allow the characterization of the junction temperature in the time range below several tens of μs.
In this paper, an optimized measurement setup is presented which allows accurate Tj characterization as early as 3 μs after the heating current is turned off. With this, it becomes possible to experimentally investigate the influence of thermal capacitances close to the active region of the device. Measurement results are presented for advanced power MOSFETs with very large heating currents of up to 220 A. Three bonding variants are investigated and the observed differences are explained.
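As a minimal sketch of the underlying definition (not the authors' actual measurement setup), the transient thermal impedance can be computed from a measured cooling curve as Zth(t) = (Tj(t) - Tamb) / Pheat; all function names and sample values below are illustrative assumptions:

```python
# Illustrative sketch: computing Zth(t) from junction-temperature samples.
# The sample data and 100 W heating power are invented for demonstration.

def thermal_impedance(t_junction, t_ambient, p_heat):
    """Zth(t) = (Tj(t) - Tamb) / Pheat for each sample of the cooling curve."""
    return [(tj - t_ambient) / p_heat for tj in t_junction]

# Hypothetical cooling-curve samples (junction temperature in degrees C)
tj_samples = [95.0, 80.0, 60.0, 45.0, 30.0]
zth = thermal_impedance(tj_samples, t_ambient=25.0, p_heat=100.0)
print(zth)  # thermal impedance in K/W at each sample instant
```

In a real setup the early samples matter most, which is why the paper's 3 μs turn-off delay is significant: the first valid sample determines how close to the chip the thermal capacitances can be resolved.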
This paper presents a solution that enables end customers of the energy system to participate in new local micro-energy markets by providing them with a distributed, decentralized, transparent and secure peer-to-peer (P2P) payment system, which operates automatically by applying new machine-to-machine (M2M) communication technologies. This work was performed within the German project VK_2G, funded by the DBU. The key results were: providing means to perform microtransactions in a P2P fashion between end consumers and prosumers in local communities at low cost, in a transparent and secure manner; developing a platform with pre-defined smart contracts that can easily be tailored to the needs of different end customers; and integrating both the market platform and the local control of generation and loads. This solution has been developed, integrated and tested in a laboratory prototype. This paper discusses the solution and presents the results of the first test.
Today, companies face increasing market dynamics, rapidly evolving technologies, and rapid changes in customer behavior. Traditional approaches to product development typically fail in such environments and require companies to transform their often feature-driven mindset into a product-led mindset. A promising first step on the way to a product-led company is a better understanding of how product planning can be adapted to the requirements of an increasingly dynamic and uncertain market environment in the sense of product roadmapping. The authors developed the DEEP product roadmap assessment tool to help companies evaluate their current product roadmap practices and identify appropriate actions to transition to a more product-led company. Objective: The goal of this paper is to gain insight into the applicability and usefulness of version 1.1 of the DEEP model. In addition, the benefits and implications of using the DEEP model in corporate contexts will be explored. Method: We conducted a multiple case study in which participants were observed using the DEEP model. We then interviewed each participant to understand their perceptions of the DEEP model. In addition, we conducted interviews with each company's product management department to learn how the application of the DEEP model influenced their attitudes toward product roadmapping. Results: The study showed that by applying the DEEP model, participants better understood which artifacts and methods were critical to product roadmapping success in a dynamic and uncertain market environment. In addition, the application of the DEEP model helped convince management and other stakeholders of the need to change current product roadmapping practices. The application also proved to be a suitable starting point for the transformation in the participating companies.
Transaction processing is of growing importance for mobile computing. Booking tickets, flight reservation, banking, ePayment, and booking holiday arrangements are just a few examples of mobile transactions. Due to temporarily disconnected situations, synchronisation and consistent transaction processing are key issues. Serializability is too strong a criterion for correctness when the semantics of a transaction are known. We introduce a transaction model that allows higher concurrency for a certain class of transactions defined by their semantics. The transaction results are "escrow serializable" and the synchronisation mechanism is non-blocking. An experimental implementation showed higher concurrency, higher transaction throughput, and fewer resources used than common locking or optimistic protocols.
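The escrow idea can be illustrated with a minimal sketch (not the paper's actual protocol): commutative updates on a bounded quantity are admitted without blocking each other, as long as the worst-case outcome of all in-flight transactions stays within the bound. All names below are assumptions for illustration:

```python
import threading

class EscrowCounter:
    """Illustrative escrow sketch: decrementing transactions reserve an
    amount up front; the reservation succeeds if the value would remain
    valid even if every in-flight decrement committed."""

    def __init__(self, value, minimum=0):
        self._value = value            # committed value
        self._reserved = 0             # sum of in-flight, uncommitted decrements
        self._minimum = minimum
        self._lock = threading.Lock()  # protects bookkeeping only, held briefly

    def try_reserve(self, amount):
        """Reserve `amount` for a decrementing transaction; True on success."""
        with self._lock:
            if self._value - self._reserved - amount >= self._minimum:
                self._reserved += amount
                return True
            return False  # worst case would violate the lower bound

    def commit(self, amount):
        with self._lock:
            self._reserved -= amount
            self._value -= amount

    def abort(self, amount):
        with self._lock:
            self._reserved -= amount  # reservation released, value untouched

c = EscrowCounter(10)
print(c.try_reserve(6))  # True: 10 - 0 - 6 >= 0
print(c.try_reserve(6))  # False: 10 - 6 - 6 < 0
c.commit(6)
```

Because reservations commute, transactions of this class never block one another, which is the property the abstract refers to as a non-blocking synchronisation mechanism.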
Industrial practice is characterized by random events, also referred to as internal and external turbulences, which disturb the target-oriented planning and execution of production and logistics processes. Methods of probabilistic forecasting, in contrast to single value predictions, allow an estimation of the probability of various future outcomes of a random variable in the form of a probability density function instead of predicting the probability of a specific single outcome. Probabilistic forecasting methods, which are embedded into the analytics process to gain insights for the future based on historical data, therefore offer great potential for incorporating uncertainty into planning and control in industrial environments. In order to familiarize students with these potentials, a training module on the application of probabilistic forecasting methods in production and intralogistics was developed in the learning factory 'Werk150' of the ESB Business School (Reutlingen University). The theoretical introduction to the topic of analytics, probabilistic forecasting methods and the transition to the application domain of intralogistics is done based on examples from other disciplines such as weather forecasting and energy consumption forecasting. In addition, data sets of the learning factory are used to familiarize the students with the steps of the analytics process in a practice-oriented manner. After this, the students are given the task of identifying the influencing factors and required information to capture intralogistics turbulences based on defined turbulence scenarios (e.g. failure of a logistical resource) in the learning factory. Within practical production scenario runs, the students apply probabilistic forecasting using and comparing different probabilistic forecasting methods. 
The graduate training module allows the students to experience the potential of using probabilistic forecasting methods to improve production and intralogistics processes in the context of turbulences and to build up corresponding professional and methodological competencies.
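The difference between a single-value prediction and a probabilistic forecast can be sketched as follows (an illustrative example, not the module's actual exercise material): a simple empirical approach reports quantiles of historical observations instead of one point estimate. The variable and data below are invented:

```python
import statistics

# Hypothetical historical observations of an intralogistics quantity,
# e.g. transport time in minutes for a given route.
history = [12.1, 13.4, 11.8, 15.2, 12.9, 14.1, 13.7, 16.0, 12.5, 13.0]

point_forecast = statistics.mean(history)      # single-value prediction
q = statistics.quantiles(history, n=10)        # nine decile cut points
interval_80 = (q[0], q[8])                     # ~10th to ~90th percentile

print(f"point forecast:   {point_forecast:.1f} min")
print(f"80% interval: {interval_80[0]:.1f} to {interval_80[1]:.1f} min")
```

A planner reacting to a turbulence scenario (e.g. a failed logistics resource) can then plan against the upper quantile rather than the mean, which is exactly the kind of insight the module aims to convey.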
Enterprise Architectures (EA) consist of many architecture elements, which stand in manifold relationships to each other. Architecture analysis is therefore important, but also very difficult for stakeholders: because changing one architecture element has impacts on other elements, different stakeholders are involved. In practice, EAs are often analyzed using visualizations. This article aims at contributing to the field of visual analytics in EAM by analyzing how state-of-the-art software platforms in EAM support stakeholders with respect to providing and visualizing the "right" information for decision-making tasks. We investigate the collaborative decision-making process in an experiment with master's students using professional EAM tools, developing a research study and conducting it in a master's-level class.
Ambitious goals set by the European Union strategy towards the emission reduction of multimodal logistic chains, and new requirements for intermodal terminals arising from evolving customer needs, contribute to a shift in the driver of infrastructure development: from economies of scale to economies of density. This paper presents an innovative method for designing a process-oriented technology chain for intermodal terminals in order to fulfill these new, demanding requirements. The results of the case study of the Zero Emission Logistic Terminal Reutlingen are presented, highlighting how this particular context enables the design and development of a modular concept, paving the way for the generalization of the findings and their transfer to similar contexts in other European cities.
This paper first identifies the trade-off among cost, flexibility and performance of autonomous robotic solutions for material handling processes, where adding value with automation is not as trivial as in production processes: hence the requirement for automated solutions to be simple, lean and efficient becomes even stricter. A method for modelling and comparing the differential performance and costs of manual and autonomous solutions is then developed. As a result of the method, a smart man-machine collaborative interface is designed and its impact evaluated on a specific case study. The results are then generalized and support the conclusion that in unconstrained environments, where full standardization cannot be achieved, the risk of investing in autonomous solutions can only be mitigated by creating a fast and smart man-machine collaborative interface.
Facial expressions play a dominant role in facilitating social interactions. We endeavor to develop tactile displays to reinstate facial expression modulated communication. The high spatial and temporal dimensionality of facial movements poses a unique challenge when designing tactile encodings of them. A further challenge is developing encodings that are attuned to the perceptual characteristics of our skin. A caveat of using vibrotactile displays is that tactile stimuli have been shown to induce perceptual tactile aftereffects when used on the fingers, arm and face. However, at present, despite the prevalence of waist-worn tactile displays, no such investigations of tactile aftereffects at the waist region exist in the literature, though they are warranted by the unique sensory and perceptual signalling characteristics of this area. Using an adaptation paradigm, we investigated the presence of perceptual tactile aftereffects induced by continuous and burst vibrotactile stimuli delivered at the navel, side and spinal regions of the waist. We report evidence that the tactile perception topology of the waist is non-uniform, and specifically that the navel and spine regions are resistant to adaptive aftereffects while side regions are more prone to perceptual adaptations to continuous but not burst stimulations. Results of our current investigations highlight the unique set of challenges posed by designing waist-worn tactile displays. These and future perceptual studies can directly inform more realistic and effective implementations of complex high-dimensional spatiotemporal social cues.
Due to the consequential impact of technological breakdowns, companies have to be prepared to deal with breakdowns or, even better, to prevent them. In today's information technology, several methods and tools exist to mitigate this concern. This paper therefore deals with the initial determination of a resilient enterprise architecture supporting predictive maintenance in the information technology domain and, furthermore, discusses several mechanisms for reactively and proactively securing the state of resiliency on several abstraction levels. The objective of this paper is to give an overview of existing mechanisms for resiliency and to describe the foundation of an optimized approach combining infrastructure and process mining techniques.
IT environments that consist of a very large number of rather small structures like microservices, Internet of Things (IoT) components, or mobility systems are emerging to support flexible and agile products and services in the age of digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing, resilient run-time environments and distributed information systems. We are extending Enterprise Architecture (EA) methodologies and models that cover a high degree of heterogeneity and distribution to support the digital transformation and related information systems with micro-granular architectures. Our aim is to support flexibility and agile transformation for both IT and business capabilities within adaptable digital enterprise architectures. The present research paper investigates mechanisms for integrating Microservice Architectures (MSA) by extending original enterprise architecture reference models with elements for more flexible architectural metamodels and EA-mini-descriptions.
The aim of this work is the development of an artificial intelligence (AI) application to support the recruiting process that elevates the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing (i.e. active recruitment), pre-sorting, evaluating structured video interviews, and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground truth data. Together, these approaches represent important steps forward in creating ethical and unbiased machine learning systems.
AI technologies such as deep learning provide promising advances in many areas. Using these technologies, enterprises and organizations implement new business models and capabilities. Initially, AI technologies were deployed in experimental environments, and AI-based applications were created in an ad-hoc manner, without methodological guidance or an engineering approach. Due to the increasing importance of AI technologies, however, a more structured approach is necessary that enables the methodological engineering of AI-based applications. In this paper, we therefore develop first steps towards the methodological engineering of AI-based applications. First, we identify some important differences between the technological foundations of AI technologies, in particular deep learning, and traditional information technologies. Then we create a framework for engineering AI applications in four steps: identification of an AI application type, sub-type identification, lifecycle phase, and definition of details. The introduced framework considers that AI applications use an inductive approach to infer knowledge from huge collections and streams of data. It not only enables the rapid development of AI applications but also the efficient sharing of knowledge about them.
Distraction of the driver is one of the most frequent causes for car accidents. We aim for a computational cognitive model predicting the driver’s degree of distraction during driving while performing a secondary task, such as talking with co-passengers. The secondary task might cognitively involve the driver to differing degrees depending on the topic of the conversation or the number of co-passengers. In order to detect these subtle differences in everyday driving situations, we aim to analyse in-car audio signals and combine this information with head pose and face tracking information. In the first step, we will assess driving, video and audio parameters reliably predicting cognitive distraction of the driver. These parameters will be used to train the cognitive model in estimating the degree of the driver’s distraction. In the second step, we will train and test the cognitive model during conversations of the driver with co-passengers during active driving. This paper describes the work in progress of our first experiment with preliminary results concerning driving parameters corresponding to the driver’s degree of distraction. In addition, the technical implementation of our experiment combining driving, video and audio data and first methodological results concerning the auditory analysis will be presented. The overall aim for the application of the cognitive distraction model is the development of a mobile user profile computing the individual distraction degree and being applicable also to other systems.
A large body of literature is concerned with models of presence (the sensory illusion of being part of a virtual scene), but there is still no general agreement on how to measure it objectively and reliably. For the presented study, we applied contemporary theory to measure presence in virtual reality. Thirty-seven participants explored an existing commercial game in order to complete a collection task. Two startle events were naturally embedded in the game progression to evoke physical reactions, and head tracking data was collected in response to these events. Subjective presence was recorded using a post-study questionnaire and real-time assessments. Our novel implementation of behavioral measures led to insights which could inform future presence research: we propose a measure in which startle reflexes are evoked through specific events in the virtual environment, and head tracking data is compared to the range and speed of baseline interactions.
Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers, and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment.
In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base via the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
Digital transformation has changed corporate reality and, with that, firms' IT environments and IT governance (ITG). As such, the perspective of ITG has shifted from the design of a relatively stable, closed and controllable system of a self-sufficient enterprise to a relatively fluid, open, agile and transformational system of networked co-adaptive entities. Related to this paradigm shift in ITG, this paper aims to clarify how the concept of an effective ITG framework has changed in terms of the demand for agility in organizations. Thus, this study conducted 33 qualitative interviews with executives and senior managers from the banking industry in Germany, Switzerland and Austria. Analysis of the interviews focused on the formation of categories and the assignment of individual text parts (codings) to these categories to allow for a quantitative evaluation of the codings per category. Regarding traditional and agile ITG dimensions, 22 traditional and 25 agile dimensions were identified. Moreover, agile strategies within the agile ITG construct and ten ITG patterns were identified from the interview data. The data show relevant perspectives on the implementation of traditional and new ITG dimensions and highlight ambidextrous aspects in ITG frameworks.
While private homes are increasingly digitized, little has been done to understand these home technologies and how they serve consumers, among other issues. "Smart home technology" (SHT) refers to a wide range of artifacts from cleaning aids to energy advisors. Given this breadth, clarity surrounding the key characteristics and the multi-faceted impact of SHT is needed to conduct more directed research on SHT. We propose a taxonomy to help outline the salient intended outcomes of SHT. Through a process involving five iterations, we analyzed and classified 79 technologies (gathered from literature and industry reports). This uncovered seven dimensions encompassing 20 salient characteristics. We believe these dimensions and characteristics will help researchers and organizations better design and study the impacts of these technologies. Our long-term agenda is to use the proposed taxonomy for an exploratory inquiry into the tensions that occur when personal and sustainability-related outcomes compete.
Towards a practical maintainability quality model for service- and microservice-based systems
(2017)
Although current literature mentions many different metrics related to the maintainability of service-based systems (SBSs), there is no comprehensive quality model (QM) with automatic evaluation and a practical focus. To fill this gap, we propose a Maintainability Model for Services (MM4S), a layered maintainability QM consisting of service properties (SPs) associated with automatically collectable service metrics (SMs). This research artifact, created within an ongoing Design Science Research (DSR) project, is the first version ready for detailed evaluation and critical feedback. The goal of MM4S is to serve as a simple and practical tool for basic maintainability estimation and control in the context of SBSs and their specialization, microservice-based systems (μSBSs).
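Since the abstract does not define MM4S's concrete metrics, the following is a purely hypothetical sketch of how a layered model might normalize and weight automatically collected service metrics into a maintainability score; every metric name, threshold, and weight below is an invented assumption:

```python
# Hypothetical metric table: metric -> (measured value, worst acceptable
# value, weight). Lower measured values are assumed to be better.
METRICS = {
    "avg_operations_per_service":   (12.0, 20.0, 0.4),
    "avg_synchronous_dependencies": (3.0, 10.0, 0.4),
    "shared_data_stores":           (1.0, 5.0, 0.2),
}

def normalize(value, worst):
    """Map a 'lower is better' metric onto [0, 1], where 1 is best."""
    return max(0.0, 1.0 - value / worst)

def maintainability_score(metrics):
    """Weighted sum of normalized metric values, yielding a score in [0, 1]."""
    return sum(w * normalize(v, worst) for v, worst, w in metrics.values())

print(f"score: {maintainability_score(METRICS):.2f}")
```

The value of such a layered aggregation is that a single top-level score can be traced back through service properties to the individual metrics that caused a degradation, which matches the abstract's goal of a simple, practical estimation tool.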
While there are several theoretical comparisons of Object Orientation (OO) and Service Orientation (SO), little empirical research on the maintainability of the two paradigms exists. To provide support for a generalizable comparison, we conducted a study with four related parts. Two functionally equivalent systems (one OO and one SO version) were analyzed with coupling and cohesion metrics as well as via a controlled experiment, where participants had to extend the systems. We also conducted a survey with 32 software professionals and interviewed 8 industry experts on the topic. Results indicate that the SO version of our system possesses a higher degree of cohesion, a lower degree of coupling, and could be extended faster. Survey and interview results suggest that industry sees systems built with SO as more loosely coupled, modifiable, and reusable. OO systems, however, were described as less complex and easier to test.
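Coupling metrics of the kind used in such a comparison can be sketched as follows (an illustrative computation, not the study's actual tooling): afferent coupling Ca counts incoming dependencies, efferent coupling Ce counts outgoing ones, and instability is I = Ce / (Ca + Ce). All module names below are hypothetical:

```python
def coupling_metrics(dependencies):
    """Compute Ca, Ce, and instability I per module.

    dependencies: dict mapping each module to the modules it uses.
    """
    modules = set(dependencies)
    for targets in dependencies.values():
        modules |= set(targets)

    ca = {m: 0 for m in modules}          # afferent: who depends on me
    for src, targets in dependencies.items():
        for t in targets:
            if t != src:
                ca[t] += 1

    result = {}
    for m in modules:
        ce = len(dependencies.get(m, ()))  # efferent: whom I depend on
        total = ca[m] + ce
        result[m] = {"Ca": ca[m], "Ce": ce,
                     "I": ce / total if total else 0.0}
    return result

# Hypothetical service-oriented variant: services only talk to a facade,
# so the facade accumulates afferent coupling while staying stable (I = 0).
so_variant = {"OrderService": ["Facade"], "StockService": ["Facade"]}
print(coupling_metrics(so_variant)["Facade"])
```

Running the same computation over the OO and SO variants of a system makes the abstract's claim (lower coupling in the SO version) directly checkable on the dependency graphs.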
Current approaches for enterprise architecture lack analytical instruments for cyclic evaluations of business and system architectures in real business enterprise system environments. This impedes the broad use of enterprise architecture methodologies. Furthermore, the permanent evolution of systems quickly desynchronizes the model representation and reality. We therefore introduce an approach that complements the existing top-down approach for the creation of enterprise architecture with a bottom-up approach. Enterprise Architecture Analytics exploits the architectural information contained in existing infrastructures: by applying Big Data technologies, Enterprise Architectures may be discovered, analyzed and optimized. The increased availability of architectural data also improves the possibilities to verify the compliance of Enterprise Architectures. Architectural decisions are linked to clustered architecture artifacts and categories according to a holistic EAM Reference Architecture with specific architecture metamodels. A specially suited EAM Maturity Framework provides the basis for systematic, analytics-supported assessments of architecture capabilities.
Smart cities are considered data factories that generate an enormous amount of data from various sources. In fact, data is the backbone of any smart service. Therefore, the strategically beneficial handling of this digital capital is crucial for cities. Some smart city pioneers have already written down their approach to data in the form of data strategies, but what should a city's data strategy include, and how can the goals and measures defined in the strategies be operationalized? This paper addresses these questions by looking closely at the data strategies of cities in Germany and the top three countries in the EU Digital Economy and Society Index. The in-depth analysis of eight city data strategies yielded 11 dimensions that cities should consider in their data strategy: relevance of data, principles, methods, data sharing, technology, data culture, data ethics, organizational structure, data security and privacy, collaborations, and data literacy. In addition, data governance is a concept for putting these 11 strategic dimensions into practice through standardization measures, training programs, the definition of roles and responsibilities, and the development of a data catalog.
While the concepts of object-oriented antipatterns and code smells are prevalent in the scientific literature and have been popularized by tools like SonarQube, the research field of service-based antipatterns and bad smells is not as cohesive and organized. The descriptions of these antipatterns are distributed across several publications with no holistic schema or taxonomy. Furthermore, there is currently little synergy between documented antipatterns for the architectural styles SOA and Microservices, even though several antipatterns may hold value for both. We therefore conducted a Systematic Literature Review (SLR) that identified 14 primary studies. 36 service-based antipatterns were extracted from these studies and documented with a holistic data model. We also categorized the antipatterns with a taxonomy and documented the relationships between them. Lastly, we developed a web application for convenient browsing and implemented a GitHub-based repository and workflow for the collaborative evolution of the collection. Researchers and practitioners can use the repository as a reference, for training and education, or for quality assurance.
Analysis is an important part of the enterprise architecture management process. Prior to decisions regarding the transformation of the enterprise architecture, the current situation and the outcomes of alternative action plans have to be analysed. Many analysis approaches have been proposed by researchers, and current enterprise architecture management tools implement analysis functionalities. However, little work has been done on structuring and classifying enterprise architecture analysis approaches. This paper collects and extends existing classification schemes and presents a framework for the classification of enterprise architecture analysis. For evaluation, a collection of enterprise architecture analysis approaches has been classified with this framework. As a result, the descriptions of these approaches have been assessed, a common set of important categories for enterprise architecture analysis classification has been derived, and suggestions for further development are made.
Autonomous driving is becoming the next big digital disruption in the automotive industry. However, the possibility of integrating autonomous driving vehicles into current transportation systems not only involves technological issues but also requires the acceptance and adoption of users. Therefore, this paper develops a conceptual model for user acceptance of autonomous driving vehicles. The corresponding model is tested through a standardized survey of 470 respondents in Germany. Finally, the findings are discussed in relation to the current developments in the automotive industry, and recommendations for further research are given.
Many start-ups are in search of cooperation partners to develop their innovative business models. In response, incumbent firms are introducing ever more cooperation systems to engage with start-ups. However, many of these cooperations end in failure. Although qualitative studies on cooperation models have tried to improve the effectiveness of incumbent-start-up strategies, only a few have empirically examined start-up cooperation behavior. Considering the lack of adequate measurement models in current research, this paper focuses on developing a multi-item scale for the cooperation behavior of start-ups, drawing from a series of qualitative and quantitative studies. The resulting scale contributes to recent research on start-up cooperation and provides a framework to add an empirical perspective to current research.
In this work, a comparison between different brushless harmonic-excited wound-rotor synchronous machines is performed. The general idea of all topologies is to eliminate the slip rings and auxiliary windings by using the already existing stator and rotor windings for field excitation. This is achieved by injecting a harmonic airgap field with the help of power electronics. This harmonic field does not interact with the fundamental field; it merely transfers the excitation power across the airgap. Alternative methods with varying numbers of phases, different pole-pair combinations, and winding layouts are covered and compared with a detailed finite-element-parameterized model. Parasitic effects due to saturation and coupling between the harmonic and main windings are considered.
Distributed ledger technologies such as blockchain offer an innovative solution for increasing visibility and security in order to reduce supply chain risks. This paper proposes a solution to increase the transparency and auditability of manufactured products in collaborative networks by adopting smart contract-based virtual identities. Compared with existing approaches, this extended smart contract-based solution offers manufacturing networks the possibility of adding privacy, content updating, and portability to smart contracts. As a result, the solution is suitable for the dynamic administration of complex supply chains.
The increase in distributed energy generation, such as photovoltaic systems (PV) or combined heat and power plants (CHP), poses new challenges to almost every distribution network operator (DNO). In low-voltage (LV) grids, where installed PV capacity approaches the magnitude of household load, reverse power flow occurs at the secondary substations. High PV penetration leads to voltage rise, flicker, and loading problems. These problems have been addressed by various techniques, amongst which is the deployment of step voltage regulators (SVR). SVRs can solve the voltage problem but do not prevent or reduce reverse power flows. Therefore, the application of SVRs in low-voltage grids can result in significant power losses upstream. In this paper we present part of a research project investigating the application of remote-controlled cable cabinets (CC) with metering units in a low-voltage network as a possible alternative to SVRs. A new generation of custom-made remote-controlled cable cabinets has been deployed, and dynamic network reconfigurations (NR) have been realized with the following objectives: (i) reduction of reverse power flow through the secondary substation to the upstream network and therefore a reduction of upstream losses, (ii) reduction of the voltage rise caused by distributed energy resources, and (iii) load balancing in the low-voltage grid. Secondary objectives are to improve the DNO's insight into the state of the network and to provide further information for future smart grid integration.
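The voltage-rise problem the project targets can be illustrated with the common single-feeder approximation ΔV ≈ (R·P + X·Q)/V. The sketch below uses assumed feeder parameters and infeed values, not figures from the project:

```python
# Illustrative sketch (simplified single-line model, assumed values): the
# voltage rise at the end of an LV feeder caused by reverse power flow
# from PV infeed, using the approximation dV = (R*P + X*Q) / V.
V_NOM = 400.0        # nominal voltage in V (assumed)
R, X = 0.2, 0.08     # cumulative feeder resistance/reactance in ohms (assumed)

def voltage_rise(p_watts, q_vars, v=V_NOM):
    """Approximate voltage rise in volts for power fed back into the feeder."""
    return (R * p_watts + X * q_vars) / v

# 30 kW of PV infeed at unity power factor:
dv = voltage_rise(30e3, 0.0)
print(f"voltage rise: {dv:.1f} V ({100 * dv / V_NOM:.1f} % of nominal)")
```

Reconfiguring the network (as the cable cabinets allow) effectively changes R and X seen by the infeed, which is why it can curb the rise without an SVR.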
For large-scale processes as implemented in organizations that develop software in regulated domains, comprehensive software process models are implemented, e.g., to meet compliance requirements. Creating and evolving such processes is demanding and requires software engineers to have substantial modeling skills in order to create consistent and certifiable processes. While teaching process engineering to students, we observed issues in providing and explaining models. In this paper, we present an exploratory study in which we aim to shed light on the challenges students face when it comes to modeling. Our findings show that students are capable of basic modeling tasks, yet fail to utilize models correctly. We conclude that the required skills, notably abstraction and solution development, are underdeveloped due to missing practice and routine. Since modeling is key to many software engineering disciplines, we advocate intensifying modeling activities in teaching.
Theoretical foundation, effectiveness, and design artefact for machine learning service repositories
(2022)
Machine learning (ML) has played an important role in research in recent years. For companies that want to use ML, finding the algorithms and models that fit their business is tedious. A review of the available literature on this problem reveals only a few research papers. Given this gap, the aim of this paper is to design an effective and easy-to-use ML service repository. The corresponding research is based on a multi-vocal literature analysis combined with design science research, addressing three research questions: (1) How is the current white and gray literature on ML services structured with respect to repositories? (2) Which features are relevant for an effective ML service repository? (3) How can a prototype for an effective ML service repository be conceptualized? The findings are relevant for explaining user acceptance of ML repositories, which is essential for corporate practice in order to create and use ML repositories effectively.
Due to the rising need for palliative care in Russia, it is crucial to provide timely, high-quality solutions for patients, relatives, and caregivers. A methodology for the remote monitoring of patients in need of palliative care will be developed, along with the requirements for a hardware-software complex for remotely monitoring patients' health at home.
The typed graph model
(2020)
In recent years, the graph model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from hyper-nodes and hyper-edges, which allow a data structure to be presented at different abstraction levels. We demonstrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML models.
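To make the idea of a schema-bound typed graph concrete, here is a toy sketch (our own illustration, not the authors' formal model): node and edge types form a schema, and instances are validated against it on insertion, which is the data-quality guarantee a schema-less property graph lacks.

```python
# Minimal schema-bound typed graph: types declare required properties and
# allowed endpoint types; insertions that violate the schema are rejected.
from dataclasses import dataclass

@dataclass
class NodeType:
    name: str
    required_props: set

@dataclass
class EdgeType:
    name: str
    source: str          # name of the allowed source node type
    target: str          # name of the allowed target node type

class TypedGraph:
    def __init__(self, node_types, edge_types):
        self.node_types = {t.name: t for t in node_types}
        self.edge_types = {t.name: t for t in edge_types}
        self.nodes = {}   # node id -> (type name, properties)
        self.edges = []   # (edge type name, source id, target id)

    def add_node(self, node_id, type_name, props):
        missing = self.node_types[type_name].required_props - props.keys()
        if missing:
            raise ValueError(f"{type_name} node missing properties: {missing}")
        self.nodes[node_id] = (type_name, props)

    def add_edge(self, type_name, src, dst):
        etype = self.edge_types[type_name]
        if self.nodes[src][0] != etype.source or self.nodes[dst][0] != etype.target:
            raise ValueError(f"edge {type_name} violates the schema")
        self.edges.append((type_name, src, dst))

# A toy social-network schema: Persons posting Messages.
g = TypedGraph(
    [NodeType("Person", {"name"}), NodeType("Message", {"text"})],
    [EdgeType("POSTED", "Person", "Message")],
)
g.add_node("p1", "Person", {"name": "Alice"})
g.add_node("m1", "Message", {"text": "hello"})
g.add_edge("POSTED", "p1", "m1")
```

Hyper-nodes and hyper-edges (graphs nested inside nodes/edges) would extend this by letting a node's payload itself be a `TypedGraph`, giving the multiple abstraction levels the paper describes.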
The time has come: application of artificial intelligence in small- and medium-sized enterprises
(2022)
Artificial intelligence (AI) is not yet widely used in small- and medium-sized industrial enterprises (SMEs). The reasons for this are manifold and range from poorly understood use cases and too few trained employees to too little data. This article presents a successful design-oriented case study at a medium-sized company where the described reasons are present. In this study, future demand forecasts are generated from historical demand data for products at the material-number level using a gradient boosting machine (GBM). An improvement of 15% over the status quo (measured by the root mean squared error) was achieved with rather simple techniques. The motivation, the method, and the first results are presented. In conclusion, challenges are addressed from which practitioners can derive lessons and impulses for their own projects.
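The setup described above, forecasting per-material demand with a GBM and judging the gain by RMSE against a simple baseline, can be sketched as follows. Everything here is illustrative: the data is synthetic, and scikit-learn's `GradientBoostingRegressor` stands in for the study's actual model and pipeline.

```python
# Illustrative sketch (not the paper's pipeline): GBM demand forecasting
# on lag features, compared to a naive last-value baseline via RMSE.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic monthly demand for one material number: trend + seasonality + noise.
months = np.arange(120)
demand = 50 + 0.3 * months + 10 * np.sin(2 * np.pi * months / 12) \
         + rng.normal(0, 2, 120)

# Lag features: the previous 12 months predict the next month.
LAGS = 12
X = np.array([demand[i - LAGS:i] for i in range(LAGS, 120)])
y = demand[LAGS:]

# Chronological split: train on the past, test on the most recent year.
X_tr, X_te, y_tr, y_te = X[:-12], X[-12:], y[:-12], y[-12:]

gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rmse_gbm = mean_squared_error(y_te, gbm.predict(X_te)) ** 0.5
rmse_naive = mean_squared_error(y_te, X_te[:, -1]) ** 0.5  # "last month" baseline
print(f"GBM RMSE: {rmse_gbm:.2f}, naive RMSE: {rmse_naive:.2f}")
```

On seasonal data the lag-12 feature lets the GBM beat the last-value baseline, which mirrors the kind of "rather simple techniques" the case study reports.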
The tale of 1000 cores: an evaluation of concurrency control on real(ly) large multi-socket hardware
(2020)
In this paper, we set out to revisit the results of “Staring into the Abyss [...] of Concurrency Control with [1000] Cores” and analyse in-memory DBMSs on today’s large hardware. Despite the original assumption of the authors, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware has made its way into production data centres. Hence, we follow up on this prior work with an evaluation of the characteristics of concurrency control schemes on real production multi-socket hardware with 1568 cores. To our surprise, we made several interesting findings, which we report in this paper.
Compared to the automotive sector, where automation is the rule, in many other, less standardized sectors automation is still the exception. This could soon hurt the productivity of industrialized countries, where unemployment is low and the population is aging. Phenomena like the recent downturn in productivity, due to lockdowns and social distancing during the COVID-19 pandemic, only add to the problem. For these reasons, the relevance of, motivation for, and intention toward more automation in less standardized sectors has probably never been higher. However, available statistics show that providers and users of technologies struggle to bring more automation into action in automation-unfriendly sectors. In this paper, we present a decision support method for investment in automation that tackles this problem: the STIC analysis. The method takes a holistic, quantitative approach, tying together technological, context-related, and economic input parameters and synthesizing them into a final economic indicator. Thanks to the modelling of these parameters, it is possible to identify the technological and/or process adjustments that would have the highest impact on the efficiency of the automation, thereby delivering value for both technology users and technology providers.
The success of an autonomous robotic system is influenced by several interdependent factors that are not easily identifiable. This paper sets out to lay the foundation of a new integrated approach for examining all these parameters in depth and understanding their contribution to success. After introducing the problem, two cutting-edge autonomous systems for unloading containers are presented. Then the STIC analysis, a recently developed method for modelling and interpreting all the parameters, is introduced. The preliminary results of applying this methodology to a first case study, based on one of the two systems available to the authors, are briefly presented. Finally, future research is recommended in order to prove that this methodology is the only way to efficiently and effectively mitigate the risk that stops potential users from investing in autonomous systems in the logistics sector.
The digital twin concept has long been widely known for asset monitoring in industry, the automotive industry being a clear example. Recently, there has also been significant interest in the application of digital twins in healthcare, especially in genomics in what is known as precision medicine. This work focuses on another medical speciality where digital twins can be applied: sleep medicine. However, there is still great controversy about the fundamentals of digital twins, such as what the concept is based on and how it can be included in healthcare effectively and sustainably. This article reviews digital twins and their role so far in what is known as personalized medicine. In addition, a series of steps is presented for a possible implementation of a digital twin for a patient suffering from sleep disorders. For this, artificial intelligence techniques, clinical data management, and possible solutions for explaining the results derived from artificial intelligence models are addressed.
Due to rapidly changing technologies and business contexts, many products and services are developed under high uncertainties. It is often impossible to predict customer behaviors and outcomes upfront. Therefore, product and service developers must continuously find out what customers want, requiring a more experimental mode of management and appropriate support for continuously conducting experiments. We have analytically derived an initial model for continuous experimentation from prior work and matched it against empirical case study findings from two startup companies. We examined the preconditions for setting up an experimentation system for continuous customer experiments. The resulting RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing) illustrates the building blocks required for such a system and the necessary infrastructure. The major findings are that a suitable experimentation system requires the ability to design, manage, and conduct experiments, create so-called minimum viable products or features, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and integration of experiment results in the product development cycle, software development process, and business strategy. This summary refers to the article The RIGHT Model for Continuous Experimentation, published in the Journal of Systems and Software [Fa17].
The relevance of technology knowledge for digital transformation, especially in small and medium-sized enterprises (SMEs) that are still largely dependent on physical human capital, has become increasingly obvious. This is due to the rapid revolution in the business environment, coupled with a growing number of real-world examples of firms disrupted by advances in technological knowledge. Consequently, we find it increasingly vital for SMEs to spot and mitigate threats and to take advantage of opportunities arising from the dynamism of digital transformation.
Our study aims to explore the relevance of technology knowledge in SMEs for digital transformation and to uncover the opportunities, roadmaps, and models that SMEs can leverage in the digital transformation to gain a competitive edge.
We conclude that despite the relevance of technology knowledge for digital transformation, coupled with its low costs and accessibility, SMEs have yet to realize the full potential of technological knowledge. This is mainly because technologies appear, change, and also vanish so rapidly in the digital age that gaining a proper understanding without dedicated resources is utterly difficult for SMEs, making them less competitive than large incumbent firms in the market.
In standardized sectors such as the automotive industry, the cost-benefit balance of automation solutions is favorable, as they contribute to increasing capacity, decreasing costs, and improving product quality. In less standardized application fields, the contribution of automation to improvements in capacity, cost, and quality blurs. The automation of complex and unstructured tasks requires sophisticated, expensive, and low-performing systems whose impact on product quality is often not directly perceived by customers. As a result, the full automation of process chains in the general manufacturing or logistics sectors is often a suboptimal solution. Departing from the false idea that a process should be either fully automated or fully manual, this paper presents a novel heuristic method for the design of lean human-robot interaction, the Quality Interaction Function Deployment, with the objective of finding the “right level of automation”. Functions are divided among human and automated agents, and several automation scenarios are created and evaluated with respect to their compliance with the requirements of all process stakeholders. As a result, synergies between operators (manual tasks) and machines (automated tasks) are improved, thus reducing time losses and increasing productivity.
Context: Organizations are increasingly challenged by high market dynamics, rapidly evolving technologies, and shifting user expectations. As a consequence, many organizations struggle to provide reliable product roadmaps with traditional roadmapping approaches. Currently, many companies are seeking opportunities to improve their product roadmapping practices and strive for new roadmapping approaches. A typical first step towards advancing an organization's roadmapping capabilities is to assess the current situation. To this end, the authors have developed and published the DEEP maturity model for assessing the product roadmapping capabilities of companies operating in dynamic and uncertain environments.
Objective: The aim of this article is to conduct an initial validation of the DEEP model in order to understand its applicability better and to see if important concepts are missing. In addition, the aim of this article is to evolve the model based on the findings from the initial validation.
Method: The model has been given to practitioners such as product managers with the request to perform a self-assessment of the current product roadmapping practices in their company. Afterwards, interviews with each participant have been conducted in order to gain insights.
Results: The initial validation revealed that some stages of the model needed to be rearranged, and minor usability issues were found. The overall structure of the model was well received. The study resulted in version 1.1 of the DEEP product roadmap maturity model, which is also presented in this article.
Steadily growing research material in a variety of databases, repositories, and clouds makes academic content harder than ever to discover. Finding adequate material for one's own research, however, is essential for every researcher. Based on recent developments in the field of artificial intelligence and the identified digital capabilities of future universities, a change in the basic work of academic research is predicted. This study outlines how artificial intelligence could simplify academic research at a digital university. Today's studies in the field of AI showcase its true potential and its commanding impact on academic research.
Electromigration (EM) is becoming a progressively severe reliability challenge due to increased interconnect current densities. A shift from traditional (post-layout) EM verification to robust (pro-active) EM-aware design - where the circuit layout is designed with individual EM-robust solutions - is urgently needed. This tutorial gives an overview of EM and its effects on the reliability of present and future integrated circuits (ICs). We introduce the physical EM process and present its specific characteristics that can be affected during physical design. Examples of EM countermeasures applied in today’s commercial design flows are presented. We show how to improve the EM-robustness of metallization patterns, and we also consider mission profiles to obtain application-oriented current density limits. The increasing interaction of EM with thermal migration is investigated as well. We conclude with a discussion of application examples to shift from the current post-layout EM verification towards an EM-aware physical design process. Its methodologies, such as EM-aware routing, increase the EM-robustness of the layout with the overall goal of reducing the negative impact of EM on the circuit’s reliability.
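At its core, the post-layout EM verification the tutorial starts from is a per-wire current-density check against a limit derived from the mission profile. A toy sketch with assumed technology numbers (not from any real PDK or sign-off flow):

```python
# Illustrative sketch (assumed numbers): post-layout EM verification checks
# that each wire's current density J = I / (w * t) stays below a limit
# derived from the technology and the mission profile.
T_METAL = 0.1e-6          # metal thickness in m (assumed)
J_LIMIT = 2e9             # allowed current density in A/m^2 (assumed)

def em_ok(current_a, width_m, thickness_m=T_METAL, j_limit=J_LIMIT):
    """True if the wire's average current density is within the EM limit."""
    return current_a / (width_m * thickness_m) <= j_limit

# A 1 mA net on a 10 um-wide wire passes; the same net on a 1 um wire fails.
print(em_ok(1e-3, 10e-6), em_ok(1e-3, 1e-6))
```

EM-aware design inverts this check: instead of flagging violations after layout, the router chooses wire widths so that the inequality holds by construction.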
This paper describes the design and outcomes of an experimental study that addresses stock-and-flow failure from a cognitive perspective. It is based on the assumption that holistic (global) and analytic (local) processing are important cognitive mechanisms underlying the ability to infer the behavior of dynamic systems. In a stock-and-flow task that is structurally equivalent to the department store task, we varied the format in which participants are primed to think about an environmental system, in particular whether they are primed to concentrate on lower-level (local) or higher-level (global) system elements. 148 psychology, geography, and business students participated in our study. The students’ answers support our hypothesis that global processing increases participants’ ability to infer the overall system behavior. The beneficial influence of global presentation is even stronger when data are presented numerically rather than in the form of a graph. Our results suggest presenting complex dynamic systems in a way that facilitates global processing. This is particularly important as policy-designers and decision makers deal with complex issues in their everyday and professional lives.
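The stock-and-flow reasoning such tasks probe reduces to a few lines: the stock is the accumulated net flow, not a mirror of either flow curve, which is exactly what participants commonly get wrong. A toy sketch with assumed numbers (not the study's material):

```python
# Department-store-style toy example: the stock (people inside) integrates
# the net flow, so it peaks when inflow crosses below outflow, not when
# inflow itself is at its maximum.
inflow  = [40, 50, 60, 50, 40, 30]   # people entering per minute (assumed)
outflow = [20, 30, 40, 50, 60, 50]   # people leaving per minute (assumed)

stock = [100]                         # initial number of people inside
for i, o in zip(inflow, outflow):
    stock.append(stock[-1] + i - o)   # stock accumulates the net flow

print(stock)
```

Here the stock keeps rising as long as inflow exceeds outflow and only turns around afterwards, even though inflow peaked earlier.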
During the first years of their employment, graduates are a liability to industry. Employers go the extra mile to bridge the gap between exiting university and the profitable employment of engineering graduates. Unfortunately, some cannot take this risk. Given this scenario, this paper presents a learning factory approach as a platform for the application of knowledge in order to develop the required engineering competences in South African engineering graduates before they enter the labour market. It spells out the components of a Stellenbosch University Learning Factory geared towards producing engineering graduates with the required industrial skills. It elaborates on the didactics embedded in the learning factory environment, tailor-made to produce engineers who can contribute productively to the growth of industry upon exiting university.
Internet of Things innovations and the industrial internet are increasingly becoming decisive factors for companies' future success. Manufacturing-oriented SMEs in particular will face the challenge of developing innovative technology-driven business models alongside technology innovations in this field, which will be essential for future competitiveness. Failing to develop these technology-driven business models in an internationally highly competitive environment will have a serious impact on both companies and society. Hence, securing the economic stability and success of these technology-driven business models is an indispensable task. To identify challenges for innovative industrial internet business models, it is first necessary to understand what the industrial internet means to the leading parties as well as to the applying companies and start-ups in the field. Second, challenges from general business model development are outlined. In a third step, risks and challenges in business model development are discussed with regard to the special characteristics of technology-driven business models in the context of the industrial internet and the important role of the business model's technological key component. In particular, the capability to deal with an integrated consideration of the inseparably linked economic and technological dimensions of these business models is questioned. Fourth, the specific challenges for industrial internet business models are derived. On the basis of these results, it is also discussed what might be done to handle these challenges successfully, with the goal of turning them into opportunities. The need for future research on integrating the risk management perspective into the development of these technology-driven business models is derived. This will help established companies and start-ups to realize great technological innovations for the industrial internet in sound and successful innovative business models.
Relationship Marketing (RM) presumes trust to be an important antecedent of the performance of interfirm relationships. Current research is dominated by an interpersonal perspective; in this research stream, trust chiefly emerges as a result of interpersonal relationships. But multiple risks arise if customer trust rests solely on elements inextricably linked to individual representatives. Hence, this paper evaluates the impact of organizational capabilities and the moderating role of customer preferences on the trust-creation process. The framework presented here is tested cross-industry on 220 customers of IT solutions. The results offer significant insight into the effectiveness of individual and organizational RM strategies.
IT Governance (ITG) is crucial due to its significant impact on enabling innovation and enhancing firm performance. Hence, in the last decade, ITG has become important in both academic and practical research. Although several studies have investigated individual aspects of ITG success and its impact on single determinants, the causal relationship through which ITG promotes firm performance remains unclear. Thus, a more comprehensive understanding of the link between ITG and firm performance is needed. To address this gap, this research aims at understanding how ITG and firm performance are related. Therefore, we conducted a systematic literature review (1) to create an overview of how current research structures the link between ITG mechanisms and firm performance, (2) to uncover key constructs as potential mediators or moderators of the general link between ITG and performance, and (3) to set the basis for future studies on the ITG-firm performance relationship.