004 Informatik
What might the attendee be able to do after being in your session?
Our work shows how to connect intra-operative devices via IEEE 11073 Service-oriented Device Connectivity (SDC).
Description of the Problem or Gap
Standardized device communication is essential for interoperability, for the availability of device data, and therefore for the intelligent operating room (OR) and the solutions emerging around it. The SDC standard was developed to make information from medical devices available in a uniform manner and to enable interoperability. However, existing devices are rarely SDC-capable and need additional interfaces to become interoperable via SDC.
Methods: What did you do to address the problem or gap?
We conceived an SDC-based architecture consisting of a service provider and a service consumer. In our concept, the service provider is connected to the medical device and is capable of translating the proprietary protocol of the device into SDC and vice versa. The service consumer is used to request or send information via the SDC protocol to the service provider and can function as a uniform bidirectional interface (e.g. for displaying or controlling). This concept was demonstrated with the Philips MX800 patient monitor, retrieving device data (e.g. vital parameters) via SDC, and in part with the marLED X operating light from KLS Martin Group.
Results: What was the outcome(s) of what you did to address the problem or gap?
The patient monitor MX800 was connected via LAN to a Raspberry Pi (RPi), on which the service provider runs. A Python script on the RPi establishes a connection to the monitor and translates incoming and outgoing messages between the proprietary protocol and SDC, to and from the service consumer. The service consumer runs on a laptop and simulates the different kinds of systems that want to obtain vital parameters or other information from the patient monitor. The operating light marLED X was connected to an RPi via a USB-to-RS232 adapter. A Python script on the RPi establishes a connection to the light and uses proprietary commands to retrieve information about the light (e.g. its status) and to control it (e.g. toggle the light, increment the intensity). A translation to SDC is not yet integrated.
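The translation step inside such a service provider can be sketched as follows. The message format, field names, and unit mapping here are purely hypothetical illustrations — the real MX800 protocol and an actual IEEE 11073 SDC stack look different.

```python
def parse_proprietary(msg):
    """Parse a made-up 'KEY=VALUE;...' device message into a dict."""
    return dict(part.split("=", 1) for part in msg.strip(";").split(";"))

def to_sdc_metrics(values, unit_map):
    """Map parsed values to simplified SDC-style metric descriptors."""
    return [
        {"handle": name.lower(), "value": float(v), "unit": unit_map.get(name, "")}
        for name, v in values.items()
    ]

# Hypothetical raw message from the device, translated for a consumer.
raw = "HR=72;SPO2=98;"
metrics = to_sdc_metrics(parse_proprietary(raw), {"HR": "bpm", "SPO2": "%"})
```

The real provider would additionally handle the reverse direction, turning SDC control requests back into proprietary commands.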
Discussion of Results
Our practical implementation shows that medical devices can be accessed via external connections to obtain device data and to control the devices via commands. The example SDC implementation for the patient monitor MX800 makes it possible to request its data via the standardized communication protocol SDC. The same is possible for the operating light marLED X once its proprietary protocol has been analyzed so that it can be translated to and from SDC. This would make it possible to control the device from an external system, or automatically depending on the status of the ongoing procedure. The advantage is that existing intra-operative devices can be extended by a service provider capable of translating the proprietary protocol of the device into SDC and vice versa. This enables interoperability and an intelligent OR that, for example, is aware of all devices, their status, and their data, and can use this information to optimally support the surgeons and their team (e.g. provision of information, automated documentation). With this interoperability, future innovations merely need to understand the SDC protocol instead of all vendor-dependent communication protocols.
Conclusion
Standardized device communication is essential to achieve interoperability and, therefore, intelligent ORs. Our contribution addresses the possibility of retrofitting existing medical devices with SDC capability. This may eliminate the need to understand all the different proprietary protocols when developing new, innovative solutions for the OR.
Enterprises and societies currently face essential challenges, and digital transformation can contribute to their resolution. Enterprise architecture (EA) is useful for promoting digital transformation in global companies and information societies covering ecosystem partners. The advancement of new business models can be promoted with digital platforms and architectures for Industry 4.0 and Society 5.0; as a result, products from sectors such as healthcare, manufacturing, and energy can increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0, together with the design thinking approach, is expected to promote and implement digital platforms and digital products for healthcare, manufacturing, and energy communities more efficiently. In this paper, we present various cases of digital transformation in which digital platforms and products are designed and evaluated for digital IT, digital manufacturing, and digital healthcare with Industry 4.0 and Society 5.0. The vision of AIDAF applications for performing digital transformation in global companies is explained and referenced, extended toward digitalized ecosystems such as Society 5.0 and Industry 4.0.
Current advances in Artificial Intelligence (AI) combined with other digitalization efforts are changing the role of technology in service ecosystems. Human-centered intelligent systems and services are the target of many current digitalization efforts and part of a massive digital transformation based on digital technologies. Artificial intelligence, in particular, is having a powerful impact on new opportunities for shared value creation and the development of smart service ecosystems. Motivated by experiences and observations from digitalization projects, this paper presents new methodological experiences from academia and practice on a joint view of digital strategy and architecture of intelligent service ecosystems and explores the impact of digitalization based on real case study results. Digital enterprise architecture models serve as an integral representation of business, information, and technology perspectives of intelligent service-based enterprise systems to support management and development. This paper focuses on the novel aspect of closely aligned digital strategy and architecture models for intelligent service ecosystems and highlights the fundamental business mechanism of AI-based value creation, the corresponding digital architecture, and management models. We present key strategy-oriented architecture model perspectives for intelligent systems.
In today’s education, healthcare, and manufacturing sectors, organizations and information societies are discussing new enhancements to corporate structures and process efficiency using digital platforms. These enhancements can be achieved using digital tools. Industry 5.0 and Society 5.0 offer several opportunities for businesses to enhance the adaptability and efficacy of their industrial processes, paving the way for new business models facilitated by digital platforms. Society 5.0 can contribute to a super-intelligent society that includes the healthcare industry. In the past decade, the Internet of Things, Big Data analytics, neural networks, deep learning, and Artificial Intelligence (AI) have revolutionized our approach to various job sectors, from manufacturing and finance to consumer products. AI is developing quickly and efficiently: ChatGPT, the latest AI chatbot created by OpenAI, has taken the internet by storm. We tested the effectiveness of this large language model on four critical questions concerning “Society 5.0”, “Healthcare 5.0”, “Industry”, and “Future Education” from the perspective of Age 5.0.
The volume includes papers presented at the International KES Conference on Human Centred Intelligent Systems 2023 (KES HCIS 2023), held in Rome, Italy on June 14–16, 2023. This book highlights new trends and challenges in intelligent systems, which play an important part in the digital transformation of many areas of science and practice. It includes papers offering a deeper understanding of the human-centred perspective on artificial intelligence, of intelligent value co-creation, ethics, value-oriented digital models, transparency, and intelligent digital architectures and engineering to support digital services and intelligent systems, the transformation of structures in digital businesses and intelligent systems based on human practices, as well as the study of interaction and the co-adaptation of humans and systems.
This research-oriented book contains important contributions on shaping the digital transformation. Its 20 chapters are organized into the following main sections:
- Digital Transformation
- Digital Business
- Digital Architecture
- Decision Support
- Digital Applications
It focuses on digital architectures for intelligent digital products and services and is a valuable resource for researchers, doctoral candidates, postgraduates, graduates, students, academics, and practitioners interested in digital transformation.
We analyze economics PhDs’ collaborations in peer-reviewed journals from 1990 to 2014 and investigate the quality of such collaborations in relation to each co-author’s research quality, field, and specialization. We find that a greater overlap between co-authors’ previous research fields is significantly related to greater publication success of their joint work, and this is robust to alternative specifications. Co-authors who engage in a distant collaboration are significantly more likely to have a large research overlap, but this significance is lost when co-authors’ social networks are accounted for. High-quality collaboration is more likely to emerge from an interaction between specialists and generalists with overlapping fields of expertise. Interaction across subfields of economics (interdisciplinarity) is more likely to be conducted by co-authors who already have interdisciplinary portfolios than by co-authors who are specialized or are stars in different subfields.
This article provides a stochastic agent-based model that exhibits the role of aggregation metrics in mitigating polarization in a complex society. Our sociophysics model is based on interacting, nonlinear Brownian agents, which allows us to study the emergence of collective opinions. The opinion of an agent, x_i(t), is a continuous value in the interval [0, 1]. We find that (i) most agent metrics display similar outcomes; (ii) the middle metric and the noisy metric yield new opinion dynamics toward either assimilation or fragmentation; and (iii) a developed two-stage metric provides new insights about convergence and equilibria. In summary, our simulation demonstrates the power of institutions, which affect the emergence of collective behavior. Consequently, opinion formation in a decentralized complex society is reliant on individual information processing and the rules of collective behavior.
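As a rough illustration of this class of models — a bounded-confidence compromise rule with small Gaussian noise, not the paper’s exact dynamics or metrics — opinion updates on [0, 1] can be sketched as:

```python
import random

def step(opinions, mu=0.3, eps=0.2, noise=0.01, rng=random):
    """One update: a random pair compromises if their opinions differ
    by less than eps; Gaussian noise models Brownian agitation."""
    i = rng.randrange(len(opinions))
    j = rng.randrange(len(opinions))
    if i != j and abs(opinions[i] - opinions[j]) < eps:
        diff = opinions[j] - opinions[i]
        opinions[i] += mu * diff
        opinions[j] -= mu * diff
    for k in range(len(opinions)):  # add noise, clip to [0, 1]
        opinions[k] = min(1.0, max(0.0, opinions[k] + rng.gauss(0.0, noise)))

rng = random.Random(0)
opinions = [rng.random() for _ in range(50)]
for _ in range(5000):
    step(opinions, rng=rng)
```

An aggregation metric in the paper’s sense would replace or filter the pairwise interaction rule, which is exactly the lever the simulation study varies.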
Theoretical foundation, effectiveness, and design artefact for machine learning service repositories
(2022)
Machine learning (ML) has played an important role in research in recent years. For companies that want to use ML, finding the algorithms and models that fit their business is tedious. A review of the available literature on this problem indicates only a few research papers. Given this gap, the aim of this paper is to design an effective and easy-to-use ML service repository. The corresponding research is based on a multi-vocal literature analysis combined with design science research, addressing three research questions: (1) How is current white and gray literature on ML services structured with respect to repositories? (2) Which features are relevant for an effective ML service repository? (3) How is a prototype for an effective ML service repository conceptualized? The findings are relevant for explaining user acceptance of ML repositories, which is essential for corporate practice in order to create and use ML repositories effectively.
The rapid development and growth of knowledge has resulted in a rich stream of literature on various topics. Information systems (IS) research is becoming increasingly extensive, complex, and heterogeneous. Therefore, a proper understanding and timely analysis of the existing body of knowledge are important to identify emerging topics and research gaps. Despite the advances of information technology in the context of big data, machine learning, and text mining, the implementation of systematic literature reviews (SLRs) is in most cases still a purely manual task. This might lead to serious shortcomings of SLRs in terms of quality and time. The outlined approach in this paper supports the process of SLRs with machine learning techniques. For this purpose, we develop a framework with embedded steps of text mining, cluster analysis, and network analysis to analyze and structure a large amount of research literature. Although the framework is presented using IS research as an example, it is not limited to the IS field but can also be applied to other research areas.
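The text-mining step of such a framework can be sketched at toy scale — plain TF-IDF vectors and cosine similarity over whitespace tokens; a real SLR pipeline would add preprocessing, clustering, and network analysis on top:

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain TF-IDF vectors over whitespace tokens (toy scale)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document freq.
    n = len(docs)
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["machine learning for text",
        "text mining and machine learning",
        "enterprise architecture governance"]
vecs = tfidf(docs)
```

Cluster analysis then groups papers whose pairwise similarities are high, which is the basis for identifying emerging topics.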
With significant advancements in digital technologies, firms find themselves competing in an increasingly dynamic business environment. Therefore, the logic of business decisions is based on the agility to respond to emerging trends in a proactive way. By contrast, traditional IT governance (ITG) frameworks rely on hierarchy and standardized mechanisms to ensure better business/IT alignment. This conflict leads to a call for an ambidextrous governance, in which firms alternate between stability and agility in their ITG mechanisms. Accordingly, this research aims to explore how agility might be integrated in ITG. A quantitative research strategy is implemented to explore the impact of agility on the causal relationship among ITG, business/IT alignment, and firm performance. The results show that the integration of agile ITG mechanisms contributes significantly to the explanation of business/IT alignment. As such, firms need to develop a dual governance model powered by traditional and agile ITG mechanisms.
Enterprises and societies currently face crucial challenges, while Society 5.0 can contribute to a supersmart society, especially for manufacturing and healthcare, and Industry 4.0 becomes important in the global manufacturing industry. Smart energy digital platforms are architected to manage energy supply efficiently. Furthermore, the above digital platforms are expected to collect various kinds of data and analyze Big Data for the trends in the sharing economy in ecosystems. The adaptive integrated digital architecture framework (AIDAF) for Design Thinking Approach with Risk Management is expected to make an alignment with digital IT strategy. In this paper, we propose that various energy management systems and related digital platforms are designed and implemented in an alignment to digital IT strategy for sharing economy toward Society 5.0, with the AIDAF framework for Design Thinking Approach with Risk Management. The vision of AIDAF applications to enable sharing economy and digital platforms is explained and extended in the context of Society 5.0. In addition, challenges and future activities for this area are discussed that cover the directions of smart energy for Society 5.0.
An autonomous vehicle is a robotic vehicle with decision and action capability capable of performing assigned tasks without or with minimal human intervention. Autonomous cars have been in development for many years. The Society of Automotive Engineers (SAE International) published in 2014 a classification in five levels of driving automation, with level 0 corresponding to completely manual driving, and level 5 to an ideal dream where the vehicle would be able to navigate entirely autonomously for all missions and in all environments. This work addressed the navigation of an autonomous vehicle in general. We focus on one of the most complex scenarios of the road network and crossing of road intersections. In this paper, the critical features of autonomous intelligent vehicles are reviewed. Furthermore, the associated problems are presented, and the most advanced solutions are derived. This article aims to allow a novice in this field to understand the different facets of localization and perception problems for autonomous vehicles.
The volume includes papers presented at the International KES Conference on Human Centred Intelligent Systems 2022 (KES HCIS 2022), held in Rhodes, Greece on June 20–22, 2022. This book highlights new trends and challenges in intelligent systems, which play an important part in the digital transformation of many areas of science and practice. It includes papers offering a deeper understanding of the human-centred perspective on artificial intelligence, of intelligent value co-creation, ethics, value-oriented digital models, transparency, and intelligent digital architectures and engineering to support digital services and intelligent systems, the transformation of structures in digital businesses and intelligent systems based on human practices, as well as the study of interaction and the co-adaptation of humans and systems.
The book introduces the fundamentals of software engineering. Its focus is on systematic, model-based software and systems development, but also on the use of agile methods. The authors place particular emphasis on treating practical aspects and underlying theories equally, which makes the book suitable both as a professional reference and as a textbook. Software engineering is described comprehensively within a systematic framework. Selected and mutually aligned concepts and methods are presented in a continuous and integrated manner.
By 2019, Germany-based Kärcher, “the world’s leading provider of cleaning technology,” had turned its professional cleaning devices into IoT products. The data generated by these IoT-connected cleaning devices formed a key ingredient in the company’s ongoing strategic shift in its B2B business: Kärcher was transforming from a seller of cleaning devices to a provider of consulting services in order to help professional cleaning companies improve their cleaning processes. Based on interviews with seven IT- and non-IT executives, the case illustrates how the company learned to generate value from IoT products. And it demonstrates how a family-owned company transformed its organization in order to be able to more effectively develop and provide IoT products, while adding roles, developing technology platforms, and changing organizational structures and ways of working.
Intermittent time series forecasting is a challenging task that still needs particular attention from researchers. The more irregularly events occur, the more difficult it is to predict them. With Croston’s approach in 1972, intermittence and demand of a time series were for the first time investigated separately. He proposes exponential smoothing in his attempt to generate a forecast that corresponds to the average demand per period. Although this algorithm produces good results in the field of stock control, it does not capture the typical characteristics of intermittent time series within the final prediction. In this paper, we investigate a time series’ intermittence and demand individually, forecast the upcoming demand value and inter-demand interval length using recent machine learning algorithms, such as long short-term memory networks and light gradient-boosting machines, and reassemble both pieces of information to generate a prediction that preserves the characteristics of an intermittent time series. We compare the results against Croston’s approach, as well as against recent forecast procedures where no split is performed.
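Croston’s separation of demand size and inter-demand interval can be sketched in a few lines (a simplified textbook version; the paper replaces the exponential smoothing with ML models):

```python
def croston(ts, alpha=0.1):
    """Croston's method: smooth non-zero demand sizes and inter-demand
    intervals separately; forecast = smoothed size / smoothed interval."""
    demand = None     # smoothed demand size
    interval = None   # smoothed inter-demand interval
    periods_since = 1
    for y in ts:
        if y > 0:
            if demand is None:  # initialize on the first observed demand
                demand, interval = y, periods_since
            else:
                demand = alpha * y + (1 - alpha) * demand
                interval = alpha * periods_since + (1 - alpha) * interval
            periods_since = 1
        else:
            periods_since += 1
    if demand is None:          # no demand observed at all
        return 0.0
    return demand / interval    # average demand per period
```

For a series with a demand of 3 every third period, the forecast converges to one unit per period on average — exactly the per-period demand Croston targets.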
Rotating machinery occupies a predominant place in many industrial applications. However, rotating machines often face severe vibration problems. Measuring these machines’ vibration signals is of particular importance, since it plays a crucial role in predictive maintenance. When the vibrations are too high, they often cause fatigue failure; they announce an unexpected stop or breakdown and, consequently, a significant loss of productivity or a risk to personnel safety. Therefore, fault identification at early stages will significantly enhance the machine’s health and reduce maintenance costs. Although considerable efforts have been made to master the field of machine diagnostics, the usual signal processing methods still present several drawbacks. This paper examines rotating machinery condition monitoring in the time and frequency domains. It also provides a framework for the diagnosis process based on machine learning, by analyzing the vibratory signals.
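The frequency-domain side of condition monitoring can be illustrated with a naive DFT over a synthetic two-component signal (a sketch only; real monitoring would use an FFT, windowing, and order analysis):

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum, O(n^2); fine for a short window."""
    n = len(signal)
    return [
        abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))) / n
        for k in range(n // 2)
    ]

# Synthetic vibration: a strong 5-cycle component (e.g. an unbalance at
# shaft speed) plus a weaker 12-cycle component.
n = 64
sig = [math.sin(2 * math.pi * 5 * t / n) + 0.3 * math.sin(2 * math.pi * 12 * t / n)
       for t in range(n)]
mags = dft_magnitudes(sig)
peak = max(range(1, len(mags)), key=lambda k: mags[k])  # dominant bin
```

The dominant spectral peak falls at bin 5, which is how characteristic fault frequencies stand out against the rest of the spectrum.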
Enterprises and societies currently face crucial challenges, while Industry 4.0 becomes ever more important in the global manufacturing industry. Industry 4.0 offers a range of opportunities for companies to increase the flexibility and efficiency of production processes. The development of new business models can be promoted with digital platforms and architectures for Industry 4.0; therefore, products from the healthcare sector can increase in value. The adaptive integrated digital architecture framework (AIDAF) for Industry 4.0 is expected to promote and implement digital platforms and robotics for healthcare and medical communities efficiently. In this paper, we propose that various digital platforms and robotics be designed and evaluated for digital healthcare as well as for the manufacturing industry with Industry 4.0. We argue that the design of an open healthcare platform, “Open Healthcare Platform 2030 – OHP2030”, for medical product design and robotics can be developed with AIDAF. The vision of AIDAF applications to enable Industry 4.0 in the OHP2030 research initiative is explained and referenced, extended in the context of Society 5.0.
Autonomous navigation is one of the main areas of research in mobile robots and intelligent connected vehicles. In this context, we are interested in presenting a general view of robotics, the progress of research, and advanced methods related to this field to improve autonomous robots’ localization. We seek to evaluate algorithms and techniques that give robots the ability to move safely and autonomously in a complex and dynamic environment. Under these constraints, we focused our work in this paper on a specific problem: evaluating a simple, fast, and lightweight SLAM algorithm that can minimize localization errors. We presented and validated a FastSLAM 2.0 system combining scan matching and loop closure detection. To allow the robot to perceive the environment and detect objects, we studied one of the best-performing deep learning techniques, convolutional neural networks (CNNs). We validate our testing using the YOLOv3 algorithm.
This book highlights new trends and challenges in intelligent systems, which play an essential part in the digital transformation of many areas of science and practice. It includes papers offering a deeper understanding of the human-centred perspective on artificial intelligence, of intelligent value co-creation, ethics, value-oriented digital models, transparency, and intelligent digital architectures and engineering to support digital services and intelligent systems, the transformation of structures in digital business and intelligent systems based on human practices, as well as the study of interaction and co-adaptation of humans and systems. All papers were originally presented at the International KES Conference on Human Centred Intelligent Systems 2021 (KES HCIS 2021) held on June 14–16, 2021 in the KES Virtual Conference Centre.
Study programs in higher education have to reflect important societal and industrial challenges to prepare the next generations of professionals for future tasks. The focus of this paper is the challenge of digitalization and digital transformation. The paper proposes the IS education profile of a Digital Business Architect (DBA). The study program emphasizes design thinking, model centricity, and capability thinking as a response to domain requirements from digital transformation and educational system and structure requirements. Experiences in implementing the DBA include the need for integrating deductive and inductive teaching, a strong basis in real-world cases, and collaborative learning approaches to develop adequate competences in business model management, enterprise modeling, enterprise architecture management, and capability management.
In recent years, the cloud has become an attractive execution environment for parallel applications, which introduces novel opportunities for versatile optimizations. Particularly promising in this context is the elasticity characteristic of cloud environments. While elasticity is well established for client-server applications, it is a fundamentally new concept for parallel applications. However, existing elasticity mechanisms for client-server applications can be applied to parallel applications only to a limited extent. Efficient exploitation of elasticity for parallel applications requires novel mechanisms that take into account the particular runtime characteristics and resource requirements of this application type. To tackle this issue, we propose an elasticity description language. This language facilitates users to define elasticity policies, which specify the elasticity behavior at both cloud infrastructure level and application level. Elasticity at the application level is supported by an adequate programming and execution model, as well as abstractions that comply with the dynamic availability of resources. We present the underlying concepts and mechanisms, as well as the architecture and a prototypical implementation. Furthermore, we illustrate the capabilities of our approach through real-world scenarios.
Context: Currently, most companies apply approaches for product roadmapping that are based on the assumption that the future is highly predictable. However, companies nowadays face the challenge of increasing market dynamics, rapidly evolving technologies, and shifting user expectations. Together with the adoption of lean and agile practices, this makes it increasingly difficult to plan and predict upfront which products, services, or features will satisfy customers’ needs. Therefore, companies are struggling to provide product roadmaps that fit dynamic and uncertain market environments and that can be used together with lean and agile software development practices.
Objective: To gain a better understanding of modern product roadmapping processes, this paper aims to identify suitable processes for the creation and evolution of product roadmaps in dynamic and uncertain market environments.
Method: We performed a Grey Literature Review (GLR) according to the guidelines from Garousi et al.
Results: 32 approaches to product roadmapping were identified. Typical characteristics of these processes are a strong connection between the product roadmap and the product vision, an emphasis on stakeholder alignment, the definition of business and customer goals as part of the roadmapping process, a high degree of flexibility with respect to reaching these goals, and the inclusion of validation activities in the roadmapping process. An overall goal of nearly all approaches is to avoid waste by reducing development and business risks early. Of the 32 approaches found, four representative roadmapping processes are described in detail.
This book discusses important topics for engineering and managing software startups, such as how technical and business aspects are related, which complications may arise and how they can be dealt with. It also addresses the use of scientific, engineering, and managerial approaches to successfully develop software products in startup companies.
The book covers a wide range of software startup phenomena, and includes the knowledge, skills, and capabilities required for startup product development; team capacity and team roles; technical debt; minimal viable products; startup metrics; common pitfalls and patterns observed; as well as lessons learned from startups in Finland, Norway, Brazil, Russia and USA. All results are based on empirical findings, and the claims are backed by evidence and concrete observations, measurements and experiments from qualitative and quantitative research, as is common in empirical software engineering.
The book helps entrepreneurs and practitioners to become aware of various phenomena, challenges, and practices that occur in real-world startups, and provides insights based on sound research methodologies presented in a simple and easy-to-read manner. It also allows students in business and engineering programs to learn about the important engineering concepts and technical building blocks of a software startup. It is also suitable for researchers at different levels in areas such as software and systems engineering, or information systems who are studying advanced topics related to software business.
The promise of EVs is twofold: first, rejuvenating a transport sector that still heavily depends on fossil fuels, and second, integrating intermittent renewable energies into the power mix. However, it is still not clear how electricity networks will cope with the predicted increase in EVs and their charging demand, especially in combination with conventional energy demand. This paper proposes a methodology for predicting the impact of EV charging behavior on the electricity grid. The model simulates the driving and charging behavior of heterogeneous EV drivers who differ in their mobility patterns, decision-making heuristics, and charging strategies. The simulations show that uncoordinated charging results in clustering of the charging load. In contrast, decentralized coordination makes it possible to fill the valleys of the conventional load curve and to integrate EVs without a costly expansion of the electricity grid.
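The valley-filling effect of coordinated charging can be illustrated with a greedy allocation heuristic (a toy sketch, not the paper’s decentralized coordination mechanism):

```python
def valley_fill(base_load, ev_energy, cap):
    """Greedily allocate EV charging energy to the hours with the lowest
    conventional load, without exceeding a per-hour grid capacity."""
    charge = [0.0] * len(base_load)
    remaining = ev_energy
    for h in sorted(range(len(base_load)), key=lambda h: base_load[h]):
        take = min(max(0.0, cap - base_load[h]), remaining)
        charge[h] = take
        remaining -= take
        if remaining <= 0.0:
            break
    return charge

base = [5.0, 2.0, 3.0, 4.0]   # conventional load per hour (toy values)
charge = valley_fill(base, ev_energy=3.0, cap=5.0)
```

All charging lands in the lowest-load hour and the combined load never exceeds the capacity — the behavior uncoordinated charging fails to achieve.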
Machine learning (ML) techniques are rapidly evolving, both in academia and in practice. However, enterprises show different maturity levels in successfully implementing ML techniques. Thus, we review the state of ML adoption in enterprises. We find that ML technologies are being increasingly adopted, but that small and medium-sized enterprises (SMEs) struggle with their introduction in comparison to larger enterprises. To identify enablers and success factors, we conduct a qualitative empirical study with 18 companies in different industries. The results show that SMEs in particular fail to apply ML technologies due to insufficient ML know-how. However, partners and appropriate tools can compensate for this lack of resources. We discuss approaches to bridge the gap for SMEs.
Public transport maps are typically designed to support route finding tasks for passengers while also providing an overview of stations, metro lines, and city-specific attractions. Most of these maps are designed as a static representation, perhaps placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views of dynamic passenger data at different levels of temporal granularity. Moreover, we also allow extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We illustrate the usefulness of our interactive visualization by applying it to the railway system of Hamburg in Germany while also taking into account the extra passenger data. As another indication of the usefulness of the interactively enhanced metro maps, we conducted a user experiment with 20 participants.
Urban platforms are essential for smart and sustainable city planning and operation. Today they are mostly designed to handle and connect large urban data sets from very different domains. Modelling and optimisation functionalities are usually not part of a city's software infrastructure. However, they are considered crucial for developing transformation scenarios and for optimised smart city operation. This work discusses software architecture concepts for such urban platforms and presents case study results on building sector modelling, including urban data analysis and visualisation. Results from a case study in New York are presented to demonstrate the implementation status.
This chapter presents an introduction to the emerging trends in architecting the digital transformation, with a strong focus on digital products, intelligent services, and related systems together with methods, models, and architectures. The primary aim of this book is to highlight some of the most recent research results in the field. We provide a focused set of brief descriptions of the chapters included in the book.
The invention concerns a method for the extrinsic calibration of at least one imaging sensor, according to which a pose of the at least one imaging sensor relative to the origin (U) of a three-dimensional coordinate system of a handling device is determined by means of a computing device, wherein known three-dimensional coordinates concerning the position of at least one joint of the handling device are taken into account by the computing device, and wherein two-dimensional coordinates concerning the position of the at least one joint are determined from raw data of the at least one imaging sensor, and wherein the computing device determines the pose of the at least one imaging sensor from the correspondence between the two-dimensional coordinates and the three-dimensional coordinates.
The rise of digital technologies has become an important driver of change in multiple industries. Firms therefore need to develop digital capabilities to manage the transformation process successfully. Prior research assumes that developing a specific set of digital capabilities leads to higher digital maturity. However, a measurement framework for digital maturity does not exist in scholarly work. This paper therefore develops a conceptualization and measurement model for digital maturity.
The Third International Conference on Data Analytics (DATA ANALYTICS 2014), held on August 24 - 28, 2014 - Rome, Italy, continued the inaugural event on fundamentals in supporting data analytics, special mechanisms and features of applying principles of data analytics, application oriented analytics, and target-area analytics.
Processing terabytes to petabytes of data, or incorporating unstructured and multi-structured data sources and types, requires advanced analytics and data science mechanisms for both raw and partially processed information. Despite considerable advances in high performance, large storage, and high computation power, there are challenges in identifying, clustering, classifying, and interpreting a large spectrum of information.
The energy turnaround, digitalization, and decreasing revenues force enterprises in the energy domain to develop new business models. Business models for renewable energy are built on a different logic than business models for large-scale power plants. Following a design science research approach, we first examined the business models of three enterprises in the energy domain. We identified that these business models result in complex ecosystems with multiple actors and difficult relationships between them. One cause is the rapidly changing and complicated state regulation in Germany. To address this problem, we captured the requirements together with the partners of the enterprises in a second phase. We then developed the prototype Business Model Configurator (BMConfig) based on the e3Value ontology on the metamodelling platform ADOxx. We demonstrate the feasibility of our approach with the business model of an energy efficiency service based on smart meter data.
The metric and qualitative analysis of models of the upper and lower dental arches is an important aspect of orthodontic treatment planning. Currently available eLearning systems for dental education only provide access to digital learning materials and do not interactively support the learning progress. Moreover, to date no study has compared the efficiency of learning methods based on physical versus digital study models. For this pilot study, 18 dental students were separated into two groups to investigate whether learning success in study model analysis with an interactive eLearning system is higher with digital models or with conventional plaster models. The results show that the digital method requires less time per model analysis. Moreover, the digital approach leads to higher total scores than the plaster-based one. We conclude that interactive eLearning using digital dental arch models is a promising tool for dental education.
OR-Pad - development of a prototype for a sterile information display at the surgical site : meeting abstract
(2019)
Background: Information from patient records or imaging is often displayed only on monitors located quite far from the operating field, outside the surgeon's ergonomic line of sight. As a result, relevant information is overlooked or its potential cannot be fully exploited. Notes brought along on paper remain outside the sterile area during surgery and are therefore not readily accessible to the surgeon. For intraoperative entries in the surgical documentation, too, the surgeon depends on the help of assistants. The additional communication paths create extra personnel and time overhead and increase the potential for errors. The application-oriented research project OR-Pad - use of portable information displays in the operating room - is intended to improve the surgeon's information flow. The idea arose from the clinical routine of the anatomy and urology departments of the University Hospital Tübingen and is now being developed into a high-fidelity prototype at Reutlingen University, funded by the Ministry of Science, Research and the Arts Baden-Württemberg and the European Regional Development Fund.
Objective: The aim of the OR-Pad project is to display clinically relevant information in the immediate vicinity of the surgeon at the relevant moment during surgery. The system is intended to optimize the information flow between the intervention and its preparation and follow-up. The surgeon should be able to select relevant information in advance, such as current X-ray images or personal notes, for intraoperative display on a sterile information display at the surgical site. Its positioning should enable an ergonomic line of sight as well as direct interaction with the system. Context-relevant information should be provided automatically based on the current course of surgery through the development of situation recognition. Optimizing the information flow also includes supporting the surgical documentation: during the intervention, entries such as timestamps or intraoperative images should be created manually by the surgeon as well as automatically by the system. After the intervention, the surgical documentation should be generated from these entries, making the process higher in quality and more time-efficient.
Methods: To reach this goal, the clinical requirements are first specified and transferred into a requirements document. For this purpose, interviews and observations during several interventions are conducted. Following the user-centered design process, personas and usage scenarios are drafted and evaluated with the clinical project partners in several iterations. An information architecture is to be built that allows embedding clinical information systems as well as image and device data from the OR network. Situation recognition based on process models is to be developed to estimate the progress of surgery. Suitable holding mechanisms are to be used to mount the information display. The OR-Pad system is to be tested continuously in the teaching and research OR of Reutlingen University and coordinated with the clinical project partners in the spirit of agile product development. The final functional prototype will then be tested and evaluated in the experimental ORs of the anatomy department in Tübingen.
Results: An initial data collection by means of contextual inquiry captured first requirements for the OR-Pad system, resulting in a low-fidelity prototype. Evaluation through expert interviews led to the second iteration, in which the concept was adapted according to the results. Further data were collected through observations at the University Hospital Tübingen to create scenarios for the intraoperative use cases. Based on the requirements, a concept for the user interface was designed, which will be evaluated with the clinical project partners in the further course of the project.
Whether in private or professional everyday life, digital media accompany us almost everywhere today. They serve not only for entertainment but also help us carry out workflows more efficiently and productively. Yet human work has by no means become superfluous. Due to rising requirements, the demand for qualified specialists is higher than ever. At the same time, employees must be able to keep pace with the rapid development of new products and technologies, which makes high-quality education and training indispensable. From building media literacy in schools to vocational training and continuing professional education, the handling of digital technologies must be taught. Moreover, these technologies offer new potential for improving educational concepts and can also help increase learning success.
This thesis evaluates a VR-based learning environment and investigates possible effects of an embodied representation of a virtual instructor on learning success. To this end, a collaborative learning environment was implemented, with which a series of experiments with 16 participants was then conducted. With regard to a possible increase in efficiency in the independent completion of assembly tasks after different types of instruction, no significant performance improvements were found.
With on-demand access to compute resources, pay-per-use, and elasticity, the cloud has evolved into an attractive execution environment for High Performance Computing (HPC). Whereas elasticity, often referred to as the most beneficial cloud-specific property, has been heavily used in the context of interactive (multi-tier) applications, elasticity-related research in the HPC domain is still in its infancy. Existing parallel computing theory as well as traditional metrics for analytically evaluating parallel systems do not comprehensively consider elasticity, i.e., the ability to control the number of processing units at runtime. To address these issues, we introduce a conceptual framework for understanding elasticity in the context of parallel systems, define the term elastic parallel system, and discuss novel metrics for both elasticity control at runtime and the ex post performance evaluation of elastic parallel systems. Based on the conceptual framework, we provide an in-depth analysis of existing research in the field to describe the state of the art and compile our findings into a research agenda for future research on elastic parallel systems.
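For contrast, the classical non-elastic view can be sketched as follows. These are the traditional fixed-p metrics (Amdahl-style speedup and efficiency), which assume a constant number of processing units, not the novel elasticity metrics the paper introduces:

```python
# Classical metrics for a FIXED number of processing units p (Amdahl's law).
# They cannot describe a system that changes p at runtime, which is exactly
# the gap the paper's elasticity metrics address.
def speedup(serial_fraction, p):
    """Amdahl speedup for a workload with the given non-parallelizable share."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def efficiency(serial_fraction, p):
    """Per-processor utilization implied by the speedup."""
    return speedup(serial_fraction, p) / p

for p in (1, 4, 16, 64):
    print(f"p={p:3d}  speedup={speedup(0.05, p):5.2f}  "
          f"efficiency={efficiency(0.05, p):.2f}")
```

Even with only 5% serial work, efficiency degrades as p grows, and nothing in these formulas captures the cost or benefit of adjusting p while the job runs.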
Service robots need to be aware of persons in their vicinity in order to interact with them. People tracking enables the robot to perceive persons by fusing the information of several sensors. Most robots rely on laser range scanners and RGB cameras for this task. The thesis focuses on the detection and tracking of heads. This allows the robot to establish eye contact, which makes interactions feel more natural.
Developing a fast and reliable pose-invariant head detector is challenging. The head detector proposed in this thesis works well on frontal heads but is not fully pose-invariant. The thesis further explores adaptive tracking to keep track of heads that do not face the robot. Finally, the head detector and the adaptive tracker are combined within a new people tracking framework, and experiments show its effectiveness compared to a state-of-the-art system.
This work is set within the broad context of smart cities and focuses on the area of intelligent vehicle driving, in both urban and interurban zones, through the collection of real-time data measured with sensors by the drivers themselves, as well as data captured through simulation.
The objective of this work is twofold. On the one hand, the study and application of different techniques and methods for detecting outliers in multivariate databases, together with a comparison between them through tests carried out with real traffic data. On the other hand, establishing a relationship between anomalous traffic situations, such as congestion or accidents, and the multivariate outliers found.
Outlier detection is one of the most important tasks in any data analysis, whatever the domain or area of study, since one of its primary functions is to uncover useful and valuable information that is usually hidden by the high dimensionality of the data.
By using outlier detection mechanisms together with supervised classification methods, it becomes possible to recognize elements of the urban road infrastructure such as roundabouts, zebra crossings, intersections, or traffic lights.
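The outlier-scoring half of this pipeline can be sketched as follows; the thesis compares several multivariate detection techniques, and the feature set and records below are purely illustrative:

```python
import math
from statistics import mean, stdev

# A minimal sketch of multivariate outlier scoring: standardize each feature
# and combine the z-scores into a single distance (a diagonal-covariance
# simplification of the Mahalanobis distance).
def outlier_scores(rows):
    """rows: list of equal-length numeric feature vectors."""
    cols = list(zip(*rows))
    mus = [mean(c) for c in cols]
    sigmas = [stdev(c) or 1.0 for c in cols]   # guard against zero variance
    return [math.sqrt(sum(((x - m) / s) ** 2
                          for x, m, s in zip(r, mus, sigmas)))
            for r in rows]

# Hypothetical traffic records: (speed km/h, acceleration m/s^2, heading deg/s)
records = [(50, 0.2, 1.0), (52, 0.1, 0.8), (49, 0.3, 1.2), (51, 0.2, 0.9),
           (12, -3.5, 0.1)]   # abrupt braking: a likely congestion/incident case
scores = outlier_scores(records)
print(max(range(len(records)), key=scores.__getitem__))  # most atypical record
```

Records flagged this way could then be fed, with labels, into a supervised classifier to recognize the infrastructure element that caused the anomaly.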
A confident command of SPSS syntax offers an invaluable advantage in the daily work of users who deal with data analysis. The book is an integrated introduction to the command language of IBM SPSS Statistics for students, researchers, and practitioners. In addition to the necessary fundamentals, it covers data preparation as well as data transformation and modification. Further topics include the macro and matrix languages, which have been considerably expanded in the 2nd edition.
Three levels provide safety
(2018)
GaN transistors offer enormous potential for compact power electronics by reducing the size of passive components. However, fast switching poses challenges for the gate driver. A fully integrated driver with three voltage levels helps to solve them.
The digitization of our society changes the way we live, work, learn, communicate, and collaborate. This defines the strategic context for composing resilient enterprise architectures for micro-granular digital services and products. The change from a closed-world modeling perspective to the more flexible open-world composition and evolution of system architectures defines the moving context for adaptable systems, which are essential to enable the digital transformation. Enterprises are currently transforming their strategy and culture, together with their processes and information systems, to become more digital. The digital transformation deeply disrupts existing enterprises and economies. For years, new business opportunities have been emerging that exploit the potential of the Internet and related digital technologies, such as the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT systems with many rather small and distributed structures, such as the Internet of Things or mobile systems. In this paper, we focus on the continuous bottom-up integration of micro-granular architectures for a huge number of dynamically growing systems and services, such as the Internet of Things and microservices, as part of a new digital enterprise architecture. To integrate micro-granular architecture models into living architectural model versions, we extend traditional enterprise architecture reference models with state-of-the-art elements for agile architectural engineering to support the digitalization of services with related products and their processes.
A new class of information system architecture, decision-oriented service systems, is spreading more and more. Decision-oriented service systems provide services that support decisions in business processes and products based on the capabilities of cloud-computing environments. To pave the way for the creation of design methods of business processes and products based on decision-oriented service systems, this article introduces a capability-oriented approach. Starting from technological capabilities, more abstract operational and dynamic capabilities are created. The framework created is based on an integrated conceptualization of decision-oriented service systems that allows capturing synergetic effects. By creating the framework, the gap between the technological capabilities of technologies and the strategic goals of enterprises shall be narrowed.
The book is an integrated introduction to the command language of IBM SPSS Statistics. In addition to the necessary syntax fundamentals, it covers data preparation, data transformation and modification, as well as the macro and matrix languages, which have been fundamentally revised in the 3rd edition. The new edition has been adapted to the developments of SPSS, linguistically improved, and supplemented with further application examples, illustrated with real data including the J. D. Power and Associates Customer Satisfaction Index. The book places particular emphasis on the easy reproducibility of the examples through accompanying exercises. The data sets used are available as free supplementary material. The book offers a concise and comprehensive guide to working more efficiently with IBM SPSS Statistics and is suitable both as introductory literature for programming beginners and as a reference for advanced users.
The book was written on the basis of version 25.0 of IBM SPSS Statistics but can also be used with other versions.
Context: Organizations increasingly develop software in a distributed manner. The cloud provides an environment for creating and maintaining software-based products and services. Currently, it is unknown which software processes are suited to cloud-based development and what their effects are in specific contexts.
Objective: We aim to better understand the software process applied to distributed software development using the cloud as the development environment. We further aim to provide an instrument that helps project managers compare different solution approaches and adapt team processes to improve future project activities and outcomes.
Method: We provide a simulation model that helps analyze different project parameters and their impact on projects performed in the cloud. To evaluate the simulation model, we conduct different analyses using a Scrumban process and data from a project executed in Finland and Spain. An additional adaptation of the simulation model for Scrum and Kanban was used to evaluate the suitability of the simulation model to cover further process models.
Results: A comparison of the real project data with the results obtained from the different simulation runs shows that the simulation produces results close to the real data, and we could successfully replicate a distributed software project. Furthermore, we could show that the simulation model is suitable for addressing further process models.
Conclusion: The simulator helps reproduce the activities, developers, and events in a project, and it helps analyze potential trade-offs, e.g., regarding throughput, total time, project size, team size, and work-in-progress limits. Furthermore, the simulation model supports project managers in selecting the most suitable planning alternative, thus supporting decision-making processes.
Saving energy and road safety have become increasingly important in recent decades; hence, several driving assistance systems have been developed that help improve driving behaviour. However, these systems cover either energy efficiency or safety. Furthermore, they do not consider the driver's reaction to a shown recommendation or the driver's stress level. In this paper, the decision process for showing a recommendation to the driver in an energy-efficient and safety-relevant driving system is presented. The decision process considers the driver's reaction to a shown recommendation and the driver's stress level in order to increase user acceptance and road safety. The results of the evaluation showed that the driving system was able to show recommendations when needed, while suppressing recommendations when the driver ignored a recommendation repeatedly or was under stress.
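The gating behaviour described above might be sketched as follows; the thresholds, state fields, and rules are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass

# Hypothetical sketch of recommendation gating: suppress advice when the
# driver is stressed or has repeatedly ignored previous recommendations.
@dataclass
class DriverState:
    stress_level: float        # 0.0 (calm) .. 1.0 (highly stressed)
    consecutive_ignored: int   # recommendations ignored in a row

def should_show_recommendation(state: DriverState,
                               stress_threshold: float = 0.7,
                               ignore_limit: int = 3) -> bool:
    """Decide whether a new driving recommendation should be displayed."""
    if state.stress_level >= stress_threshold:
        return False           # avoid adding cognitive load under stress
    if state.consecutive_ignored >= ignore_limit:
        return False           # driver is currently not receptive
    return True

print(should_show_recommendation(DriverState(0.3, 0)))   # calm, receptive
print(should_show_recommendation(DriverState(0.9, 0)))   # stressed
```

In a real system the stress level would come from physiological or driving-behaviour sensing, and the ignore counter would reset once the driver follows a recommendation again.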
In this work, an urban mixed-reality driving simulator was implemented. The real environment is made visible within the virtual environment in a green-screen chamber using camera images from the user's point of view and a chroma-key shader. This is intended to increase immersion and interactivity within the virtual environment by displaying and using real elements.
As the virtual environment, a randomly generated city was created in which AI-controlled vehicles drive. The results of the development of this driving simulator are explained in this work.
The driving simulator is intended to support the development of human-centered human-machine interfaces and motion-capture components.
Automated analysis of review data deals with the possibilities of analyzing free text and extracting relevant information from it. The work engages with methods of unsupervised learning, focusing on topic modelling. Methods well known in text-based information retrieval are considered: Latent Semantic Indexing (LSI), probabilistic LSI (pLSI), and Latent Dirichlet Allocation (LDA) are explained and compared. The work shows how LDA was used to obtain an overview of the content of a corpus of one million reviews and to examine it at a finer level of detail. The topic-based analysis is used to generate insights for an opinion mining system that will perform a deeper analysis. The entire process is designed to be fully automated and unsupervised.
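To make the LDA step concrete, a minimal collapsed Gibbs sampler can be sketched as follows; this is a didactic toy, not the tooling used in the work, and the example documents are invented:

```python
import random

# Minimal collapsed Gibbs sampler for LDA: each token is assigned a topic,
# and topics are resampled from the full conditional distribution.
def lda_gibbs(docs, K, iters=100, alpha=0.1, beta=0.01, seed=0):
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    z = [[rng.randrange(K) for _ in d] for d in docs]   # topic of each token
    ndk = [[0] * K for _ in docs]                       # doc-topic counts
    nkw = [[0] * V for _ in range(K)]                   # topic-word counts
    nk = [0] * K                                        # tokens per topic
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            t = z[di][wi]
            ndk[di][t] += 1; nkw[t][widx[w]] += 1; nk[t] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t, v = z[di][wi], widx[w]
                ndk[di][t] -= 1; nkw[t][v] -= 1; nk[t] -= 1
                # full conditional p(z = k | all other assignments)
                weights = [(ndk[di][k] + alpha) * (nkw[k][v] + beta)
                           / (nk[k] + V * beta) for k in range(K)]
                t = rng.choices(range(K), weights=weights)[0]
                z[di][wi] = t
                ndk[di][t] += 1; nkw[t][v] += 1; nk[t] += 1
    # three most probable words per topic
    return [[vocab[i] for i in sorted(range(V), key=lambda i: -nkw[k][i])[:3]]
            for k in range(K)]

docs = [["ball", "goal", "match", "ball"], ["vote", "law", "vote"],
        ["goal", "ball", "match"], ["law", "parliament", "vote"]]
topics = lda_gibbs(docs, K=2)
print(topics)
```

On a million-review corpus one would use an optimized library implementation, but the sampler above shows the mechanism by which topics emerge without supervision.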
As part of this academic specialization, different empirical research methods are discussed.
In a first step, the fundamentals of empirical research methods are identified and classified. After this classification, two research methods are applied: the quantitative and the qualitative method.
These research methods are used during the analysis phase to determine the current worldwide state of the subsidiaries. This concerns the analysis of the import and export processes at the subsidiaries of HUGO BOSS AG. The aim is to use the results for the master's thesis. Based on the results, commonalities as well as deviations in the customs clearance process can be identified, which are later taken into account in the concept.
For this purpose, the qualitative method is used, which provides the basis for designing the survey. Finally, the qualitative method was used for the interviews conducted to verify the results.
Rapid prototyping platforms reduce development time: an idea can be verified quickly in the form of a prototype, leaving more time for the actual application development with user interfaces. This approach has long been pursued with technical platforms such as the Arduino. To transfer this form of prototyping to wearables, this paper presents WearIT. As a wearable prototyping platform, WearIT consists of four components: a vest, sensor and actuator shields, a dedicated library, and a mainboard consisting of an Arduino, a Raspberry Pi, a breadboard, and a GPS module. As a result, a wearable prototype can be developed quickly by attaching sensor and actuator shields to the WearIT vest. These sensor and actuator shields can then be programmed via the WearIT library. For this purpose, the screen contents of the Raspberry Pi can be accessed from a remote computer via Virtual Network Computing (VNC) and the Arduino can be programmed.
Most of the touch surfaces found in everyday life today were realized using complex and cost-intensive technologies. Especially for the application scenario of a touch floor, where an unusually large touch surface is desired, more cost-effective implementation options are sought. This paper serves as a starting point for implementing a low-cost touch floor intended to support the collaborative work of a project team. Based on an analysis of the state of the art in touch technologies and a subsequent evaluation, the touch technology best suited for realizing this low-cost touch floor is derived. The evaluation shows that optical touch technologies in particular, especially vision-based ones, are suitable for implementing cost-effective large touch surfaces.
Integrated circuits (ICs) are an integral part of many devices such as smartphones, computers, and televisions. Ever more functions are being integrated on these circuits. To be able to cope with this work within the given time in the future, a means for the simultaneous collaboration of developers is therefore needed. Under the working title eCEDA (eCollaboration for Electronic Design Automation), a concept for a web application is being developed that is intended to enable real-time collaboration of developers in chip design. This concept as well as various aspects of collaboration are addressed in this work.
Requirements for the human-machine interface in the automobile on the way to autonomous driving
(2017)
In recent decades, more and more driver assistance systems have found their way into the automobile, paving the way to the fully autonomous vehicles of the future. Many manufacturers already offer equipment variants of their vehicles that are prepared for the transition to a fully autonomous future. To take people along on this path, a number of requirements are placed on the automobile's human-machine interface (HMI). For the semi-autonomous vehicles of the next generation, the handover between manual and autonomous driving must be designed as well as possible for humans. This work looks at selected approaches for future HMI systems and evaluates them on the basis of the handover times between human and machine. A change of the automotive HMI is recommended in order to familiarize people with the new technologies.
The third Digital Enterprise Computing Conference DEC 17 at the Herman Hollerith Center in Böblingen brings together students, researchers, and practitioners to discuss solutions, experiences, and future developments for the digital transformation. Digitization of business and IT defines the conference agenda: digital models & architecture, digital marketing, agility & innovation.
Data collected from internet applications are mainly stored in the form of transactions. All transactions of one user form a sequence, which reflects the user's behaviour on the site. Nowadays, it is important to be able to classify this behaviour in real time for various reasons: e.g., to increase the conversion rate of customers while they are in the store, or to prevent fraudulent transactions before they are placed. However, this is difficult due to the complex structure of the data sequences (i.e., a mix of categorical and continuous data types and constant data updates) and the large amounts of data that are stored. Therefore, this thesis studies the classification of complex data sequences. It surveys the fields of time series analysis (temporal data mining), sequence data mining, and standard classification algorithms. It turns out that these algorithms are either difficult to apply to data sequences or do not deliver a classification: time series methods need a predefined model and cannot handle complex data types; sequence classification algorithms such as the apriori algorithm family cannot utilize the time aspect of the data. The strengths and weaknesses of the candidate algorithms are identified and used to build a new approach for classifying complex data sequences. The problem is solved by a two-step process. First, feature construction is used to create and discover suitable features in a training phase. Then, the blueprints of the discovered features are used in a formula during the classification phase to perform the real-time classification. The features are constructed by combining and aggregating the original data over the span of the sequence, including the elapsed time, using a calculated time axis. Additionally, a combination of features and feature selection is used to simplify complex data types. This allows capturing behavioural patterns that occur over the course of time.
This newly proposed approach combines techniques from several research fields. Part of the algorithm originates from the field of feature construction and is used to reveal behaviour over time and to express this behaviour in the form of features. A combination of the features is used to highlight relations between them. The blueprints of these features can then be used to achieve real-time classification on an incoming data stream. An automated framework is presented that allows the features to adapt iteratively to changes in the underlying patterns in the data stream. This core feature of the presented work is achieved by separating the feature application step from the computationally costly feature construction step and by iteratively restarting feature construction on the new incoming data. The algorithm and the corresponding models are described in detail and applied to three case studies (customer churn prediction, bot detection in computer games, credit card fraud detection). The case studies show that the proposed algorithm is able to find distinctive information in data sequences and use it effectively for classification tasks. The promising results indicate that the suggested approach can be applied to a wide range of other application areas that incorporate data sequences.
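The two-step idea of constructed features plus a cheap real-time classification formula might be sketched like this; the feature choices and the threshold rule below are invented for illustration, whereas the thesis discovers suitable features automatically during training:

```python
# Step 1 (offline, expensive): construct aggregate features over a user's
# transaction sequence, including the elapsed time on a calculated time axis.
# Step 2 (online, cheap): apply a fixed formula over those features in real time.
def sequence_features(events):
    """events: list of (timestamp_seconds, amount) for one user, oldest first."""
    amounts = [a for _, a in events]
    span = events[-1][0] - events[0][0] if len(events) > 1 else 1
    return {
        "n_events": len(events),
        "total": sum(amounts),
        "max_amount": max(amounts),
        "events_per_hour": len(events) / (span / 3600),
    }

def classify(features, rate_limit=30.0, amount_limit=1000.0):
    """Toy real-time rule: flag burst-like or unusually large activity."""
    return (features["events_per_hour"] > rate_limit
            or features["max_amount"] > amount_limit)

normal = [(0, 20.0), (3600, 35.0), (7200, 15.0)]
burst = [(0, 50.0), (10, 60.0), (20, 55.0), (30, 2000.0)]   # e.g. fraud-like
print(classify(sequence_features(normal)), classify(sequence_features(burst)))
```

Because the expensive part (deciding which features to build) runs offline, only the cheap aggregation and formula have to keep up with the incoming stream, mirroring the separation of construction and application described above.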
The troubles began when Tom, the business analyst, asked the customer what he wants. The customer came up with good ideas for software features. Tom created a brilliant roadmap and defined the requirements for a new software product. Mary, the development team leader, was already eager to start developing and happy when she got the requirements. She and her team went ahead and created the software right away. Afterwards, Paul tested the software against the requirements. As soon as the software fulfilled the requirements, Linda, the product manager, deployed it to the customer. The customer did not like the software and ignored it. Ringo, the head of software development, was fired. How come? Nowadays, we have tremendous capabilities for creating nearly all kinds of software to fulfill the needs of customers. We can apply agile practices for reacting flexibly to changing requirements, we can use distributed development, open source, or other means for creating software at low cost, we can use cloud technologies for deploying software rapidly, and we can get enormous amounts of data showing us how customers actually use software products. However, the sad reality is that around 90% of products fail, and more than 60% of the features of a typical software product are rarely or never used. But there is a silver lining – an insight regarding successful features: Around 60% of the successes stem from a significant change of an initial idea. This gives us a hint on how to build the right software for users and customers.
This book presents emerging trends in the evolution of service-oriented and enterprise architectures. New architectures and methods of both business and IT are integrating services to support mobility systems, the internet of things, ubiquitous computing, collaborative and adaptive business processes, big data, and cloud ecosystems. They inspire current and future digital strategies and create new opportunities for the digital transformation toward new digital products and services. Service-Oriented Architectures (SOA) and Enterprise Architectures (EA) have emerged as useful frameworks for developing interoperable, large-scale systems, typically implementing various standards, such as web services, REST, and microservices. Managing the adaptation and evolution of such systems presents a great challenge. Service-Oriented Architecture enables flexibility through loose coupling, both between the services themselves and between the IT organizations that manage them. Enterprises evolve continuously by transforming and extending their services, processes, and information systems. Enterprise Architectures provide a holistic blueprint to help define the structure and operation of an organization, with the goal of determining how an organization can most effectively achieve its objectives. The book proposes several approaches to address the challenges of the service-oriented evolution of digital enterprise and software architectures.
Information Systems in Distributed Environment (ISDE) are becoming prominent in this era of globalization due to advances in information and communication technologies. The advent of the internet has supported Distributed Software Development (DSD) by introducing new concepts and opportunities, resulting in benefits such as scalability, flexibility, interdependence, reduced cost, resource pools, and usage tracking. The distributed development of information systems, as well as their deployment and operation in distributed environments, imposes new challenges on software organizations and can lead to business advantages. In distributed environments, business units collaborate across time zones, organizational boundaries, work cultures, and geographical distances, which has ultimately led to an increasing diversification and growing complexity of cooperation among units. The real-world practice of developing, deploying, and operating information systems in globally distributed projects has been viewed from various perspectives, though technical and engineering viewpoints, in conjunction with managerial and organizational ones, have dominated researchers' attention so far. Successful participation in distributed environments, however, is ultimately a matter of the participants' understanding and exploiting the particularities of their respective local contexts at specific points in time and exploring practical solutions through the local resources available.
This special issue of the Computer Standards & Interfaces journal therefore includes papers received from the public call for papers, as well as extended and improved versions of papers selected from the best of the International Workshop on Information Systems in Distributed Environment (ISDE 2014). It aims to serve as a forum that brings together academics, researchers, practitioners, and students in the field of distributed information systems by presenting novel developments and lessons learned from real-world cases, and to promote the exchange of ideas, discussion, and advancement in these areas.
Managing software process evolution: traditional, agile and beyond - how to handle process change
(2016)
This book focuses on the design, development, management, governance and application of evolving software processes that are aligned with changing business objectives, such as expansion to new domains or shifting to global production. In the context of an evolving business world, it examines the complete software process lifecycle, from the initial definition of a product to its systematic improvement. In doing so, it addresses difficult problems, such as how to implement processes in highly regulated domains or where to find a suitable notation system for documenting processes, and provides essential insights and tips to help readers manage process evolutions. And last but not least, it provides a wealth of examples and cases on how to deal with software evolution in practice.
Reflecting these topics, the book is divided into three parts. Part 1 focuses on software business transformation and addresses the questions of which process(es) to use and adapt, and how to organize process improvement programs. Subsequently, Part 2 mainly addresses process modeling. Lastly, Part 3 collects concrete approaches, experiences, and recommendations that can help to improve software processes, with a particular focus on specific lifecycle phases.
This book is aimed at anyone interested in understanding and optimizing software development tasks in their organization. While the experiences and ideas presented will be useful both for readers who are unfamiliar with software process improvement and want an overview of the different aspects of the topic, and for experts with many years of experience, it particularly targets the needs of researchers and Ph.D. students in the area of software and systems engineering or information systems who study advanced topics concerning the organization and management of (software development) projects and process improvement projects.
The second Digital Enterprise Computing Conference DEC 16 at the Herman Hollerith Center in Böblingen brings together students, researchers, and practitioners to discuss solutions, experiences, and future developments for the digital transformation. Digitization of business and IT defines the conference agenda: technology acceptance, digital transformation, digital business & administration, digital process challenges, analytics, and big data & data processing.
The digital enterprise requires new concepts of Digital Enterprise Computing. This comprises an interdisciplinary combination of approaches from computer science, economics, and other relevant scientific disciplines. New architectures with integrated mobility systems, collaborative business processes, big data, and cloud ecosystems inspire current and future business strategies and make the digital transformation toward new business fields possible in the first place. This requires close cooperation between various partners from science, industry, and society. The annual Digital Enterprise Computing conference positions the Gesellschaft für Informatik as a scientific co-organizer and deepens experiences from the Enterprise Architecture Management working group of the Architectures special interest group in the Software Engineering division of the Gesellschaft für Informatik.
The impact of stress on human beings has become a serious problem. Reported effects include higher rates of health disorders such as heart problems, obesity, asthma, diabetes, depression, and many others. An individual in a stressful situation has to deal with altered cognition as well as impaired decision-making and problem-solving skills. This can lead to a higher risk of accidents in dynamic environments such as driving. Several papers have addressed the estimation and prediction of drivers' stress levels while driving. Another important question concerns not only the stress level of the driver himself, but also the influence on and of a group of other drivers in the nearby area. This paper proposes a system that determines groups of drivers in a nearby area as clusters and derives the individual stress levels. This information is analyzed to generate a stress map, a graphical view of road sections with a higher stress influence. The aggregated data can be used to generate navigation routes with a lower stress influence, as well as to recommend driving behavior that decreases stress-influenced driving and improves road safety.
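The aggregation step described above, turning per-driver stress reports into a per-road-section stress map and using it for routing, can be sketched as follows. This is a hedged illustration only; the segment names, stress values, and function names are invented and do not come from the paper.

```python
from collections import defaultdict

def build_stress_map(reports):
    """Aggregate (road_segment, stress_level) pairs from nearby drivers
    into a mean stress value per road segment."""
    acc = defaultdict(list)
    for segment, stress in reports:
        acc[segment].append(stress)
    return {seg: sum(vals) / len(vals) for seg, vals in acc.items()}

def least_stress_route(candidate_routes, stress_map):
    """Pick the candidate route whose segments sum to the lowest stress;
    unknown segments are treated as stress-free."""
    return min(candidate_routes,
               key=lambda route: sum(stress_map.get(s, 0) for s in route))

# Invented example data: two drivers report on segment "A", one each on "B", "C".
reports = [("A", 0.8), ("A", 0.6), ("B", 0.2), ("C", 0.4)]
smap = build_stress_map(reports)
route = least_stress_route([["A", "C"], ["B", "C"]], smap)
```

A real system would of course weight reports by recency and cluster drivers spatially first; the sketch only shows how aggregated stress can act as a routing cost.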
In this work, a web-based software architecture and framework for the management and diagnosis of large amounts of medical data in an ophthalmologic reading center is proposed. Data management for multi-center studies requires merging standing data with repeatedly gathered clinical evidence such as vital signs and raw data. Where ophthalmologic questions are involved, data acquisition is often performed by non-medical staff at the point of care or a study center, whereas the medical findings are mostly provided by an ophthalmologist in a specialized reading center. The study data, such as participants, cohorts, and measured values, are administered at a single data center for the entire study. Since a specialized reading center maintains several studies, the medical staff must learn the different data administration of each data center. With respect to the increasing number and size of clinical studies, two aspects must be considered. First, an efficient software framework is required to support data management, processing, and diagnosis by medical experts at the reading center. Second, this software needs a standardized user interface that does not have to be trained, tailored, or adapted for each new study. Furthermore, different aspects of quality and security controls have to be included. The objective of this work is therefore to establish a multi-purpose ophthalmologic reading center that can be connected to different data centers via configurable data interfaces in order to handle various studies simultaneously.
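The "configurable data interface" idea, one uniform reading-center interface with study-specific adapters mapping each data center's schema onto it, is essentially an adapter pattern. The following sketch is purely illustrative; every class name, field name, and record is an assumption, not taken from the described framework.

```python
class DataCenterAdapter:
    """Uniform interface the reading center programs against."""
    def fetch_participants(self):
        raise NotImplementedError

class StudyAAdapter(DataCenterAdapter):
    """Adapter for one (hypothetical) study data center: maps its own
    schema ('patient_id', 'group') onto the common reading-center schema."""
    def __init__(self, raw_records):
        self.raw = raw_records

    def fetch_participants(self):
        return [{"id": r["patient_id"], "cohort": r["group"]}
                for r in self.raw]

# The reading center only ever sees the common schema, regardless of study.
adapter = StudyAAdapter([{"patient_id": 1, "group": "control"}])
participants = adapter.fetch_participants()
```

Adding a new study then means writing one new adapter (or supplying a configuration for a generic one) rather than retraining staff on another data administration.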