The experimental characterization of the thermal impedance Zth of large power MOSFETs is commonly done by measuring the junction temperature Tj in the cooling phase after the device has been heated, preferably to a high junction temperature for increased accuracy. However, turning off a large heating current (as required by modern MOSFETs with low on-state resistances) takes some time because of parasitic inductances in the measurement system. Thus, most setups do not allow the characterization of the junction temperature in the time range below several tens of μs.
In this paper, an optimized measurement setup is presented which allows accurate Tj characterization as early as 3 μs after turn-off of the heating current. This makes it possible to experimentally investigate the influence of thermal capacitances close to the active region of the device. Measurement results are presented for advanced power MOSFETs with very large heating currents of up to 220 A. Three bonding variants are investigated, and the observed differences are explained.
The food system represents a key industry for Europe and Germany in particular. However, it is also the single most significant contributor to climate and environmental change. A food system transformation is necessary to overcome the system's major and constantly increasing challenges in the upcoming decades. One possible facilitator of this transformation is the radical and disruptive innovation developed by start-ups. There are many challenges for start-ups in general and food start-ups in particular, and various support opportunities and resources are crucial to ensure the success of food start-ups. One aim of this study is to identify how the success of start-ups in the food system can be supported and further strengthened by actors in the innovation ecosystem in Germany. A successful innovation ecosystem is characterised by a well-organised, collaborative, and supportive environment with a vivid exchange between the members of the ecosystem. The interviewees confirmed this: although the different actors are already cooperating, there is still room for improvement toward a thriving innovation ecosystem. The most common recommendation for improving cooperation is to learn from other countries and bring the best practices to Germany.
This paper presents a solution that enables end customers of the energy system to participate in new local micro-energy markets by providing them with a distributed, decentralized, transparent, and secure Peer-to-Peer (P2P) payment system that operates automatically using new Machine-to-Machine (M2M) communication technologies. This work was performed within the German project VK_2G, funded by the DBU. The key results were: providing means to perform microtransactions in a P2P fashion between end consumers and prosumers in local communities at low cost in a transparent and secure manner; developing a platform with pre-defined smart contracts that can easily be tailored to different end customers' needs; and integrating both the market platform and the local control of generation and loads. This solution has been developed, integrated, and tested in a laboratory prototype. This paper discusses the solution and presents the results of the first test.
Small and medium-sized enterprises (SMEs) play a fundamental role in the economic system of the European Union: SMEs represent over 99 percent of all companies and provide two-thirds of the jobs in the private sector. Their innovativeness and economic success have significant influence on growth, jobs and prosperity in Europe.
Information technologies are regarded as key drivers of innovation in small and medium-sized enterprises (SMEs). Modern information technologies (IT) offer SMEs many opportunities to improve their competitiveness and market position: business processes can be designed efficiently, new market segments can be opened up, and innovation capacity can be strengthened significantly. However, many SMEs still have difficulties in utilizing these new technologies efficiently in order to foster process and product innovation. This is partly due to the fact that many SMEs do not use IT service management and waste resources on running basic IT functions such as the maintenance of printers, software, or servers.
Information Technology Service Management (ITSM) is a discipline for managing IT systems centred on the customer's perspective of IT's contribution to the business. Thus, by strengthening the performance of SMEs' IT departments, ITSM enables process innovations (e.g. eProcurement) and product innovations (e.g. client services) to be promoted. The EU-funded project "IT Service Management for small and medium-sized Enterprises of the Danube Region" (ITSM4SME) aims to make SMEs in the Danube Region aware of the potential of ITSM, to inspire SMEs about the use of information technology, and to allow IT-enabled innovations. The aims of the project have been achieved inter alia through a simplified method for IT service management for small IT organisations, practical case studies, a "do-it-yourself" service management modelling tool, an eLearning portal, and by training more than 300 participants from SMEs in pilot training courses in Bulgaria, Romania, and Slovenia.
Royal Philips' goal was to use innovation to improve the lives of three billion people a year by 2025. To reach that goal, the company was shifting from selling medical products in a transactional manner to providing integrated healthcare solutions based on digital health technology ("HealthTech").
This shift required a dual transformation. On one hand, the company needed to transform how healthcare was conducted. Healthcare professionals would have to change the way they worked and reimbursement schemes needed to change to incentivize payers, providers, and patients in vastly different ways. On the other hand, Philips needed to redesign how it worked internally. The company componentized its business, introduced digital platforms, and co-created solutions with the various stakeholders of the healthcare industry.
In other words: Royal Philips was transforming itself in order to reinvent healthcare in the digital age.
The COVID-19 pandemic necessitated significant changes in foreign language education, forcing teachers to reconstruct their identities and redefine their roles as language educators. To better understand these adaptations and perspectives, it is crucial to study how the pandemic has influenced teaching practices. This mixed-methods study focused on the less-explored aspects of foreign language teaching during the pandemic, specifically examining how language teachers adapted and perceived their practices, including rapport building and learner autonomy, during emergency remote teaching (ERT) in higher education institutions. It also explored teachers’ intentions for their teaching in the post-pandemic era. An online survey was conducted, involving 118 language educators primarily from Germany, with a smaller representation from New Zealand, the United States, and the United Kingdom. The analysis of participants’ responses revealed issues and opportunities regarding lesson formats, tool usage, rapport, and learner autonomy. Our findings offer insights into the desired changes participants envisioned for the post-pandemic era. The results highlight the opportunities ERT had created in terms of teacher development, and we offer suggestions to enhance professional development programmes based on these findings.
Today, companies face increasing market dynamics, rapidly evolving technologies, and rapid changes in customer behavior. Traditional approaches to product development typically fail in such environments and require companies to transform their often feature-driven mindset into a product-led mindset. A promising first step on the way to a product-led company is a better understanding of how product planning, in the sense of product roadmapping, can be adapted to the requirements of an increasingly dynamic and uncertain market environment. The authors developed the DEEP product roadmap assessment tool to help companies evaluate their current product roadmap practices and identify appropriate actions for transitioning to a more product-led company. Objective: The goal of this paper is to gain insight into the applicability and usefulness of version 1.1 of the DEEP model. In addition, the benefits and implications of using the DEEP model in corporate contexts are explored. Method: We conducted a multiple case study in which participants were observed using the DEEP model. We then interviewed each participant to understand their perceptions of the DEEP model. In addition, we conducted interviews with each company's product management department to learn how the application of the DEEP model influenced their attitudes toward product roadmapping. Results: The study showed that by applying the DEEP model, participants better understood which artifacts and methods were critical to product roadmapping success in a dynamic and uncertain market environment. In addition, the application of the DEEP model helped convince management and other stakeholders of the need to change current product roadmapping practices. The application also proved to be a suitable starting point for the transformation in the participating companies.
Blockchains have become increasingly important in recent years and have expanded their applicability to many domains beyond finance and cryptocurrencies. This adoption has particularly increased with the introduction of smart contracts, which are immutable, user-defined programs directly deployed on blockchain networks. However, many scenarios require business transactions to simultaneously access smart contracts on multiple, possibly heterogeneous blockchain networks while ensuring the atomicity and isolation of these transactions, which is not natively supported by current blockchain systems. Therefore, in this work, we introduce the Transactional Cross-Chain Smart Contract Invocation (TCCSCI) approach that supports such distributed business transactions while ensuring their global atomicity and serializability. The approach introduces the concept of Resource Manager Smart Contracts, and 2PC for Blockchains (2PC4BC), a client-driven Atomic Commit Protocol (ACP) specialized for blockchain-based distributed transactions. We validate our approach using a prototypical implementation, evaluate its introduced overhead, and prove its correctness.
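To make the protocol idea concrete, the following minimal Python sketch simulates a client-driven two-phase commit across two resource-manager contracts. It illustrates the general 2PC pattern the abstract names, not the paper's actual 2PC4BC implementation; the class `ResourceManagerContract` and all method names are hypothetical stand-ins.

```python
# Hypothetical sketch of a client-driven two-phase commit (2PC) across two
# resource-manager smart contracts; the interface below is illustrative only.

class ResourceManagerContract:
    """Simulates a Resource Manager Smart Contract guarding one chain's state."""
    def __init__(self, name):
        self.name = name
        self.locked = {}          # tx_id -> tentative writes, isolated until commit
        self.state = {}

    def prepare(self, tx_id, writes):
        # Phase 1: tentatively lock the writes; vote commit only if no conflict.
        if any(k in w for w in self.locked.values() for k in writes):
            return False          # conflicting in-flight transaction -> vote abort
        self.locked[tx_id] = writes
        return True

    def commit(self, tx_id):
        # Phase 2: make the tentative writes durable and release the locks.
        self.state.update(self.locked.pop(tx_id))

    def abort(self, tx_id):
        self.locked.pop(tx_id, None)

def run_cross_chain_tx(tx_id, managers_with_writes):
    """Client acts as coordinator: commit everywhere or abort everywhere."""
    votes = [rm.prepare(tx_id, w) for rm, w in managers_with_writes]
    decision = all(votes)
    for rm, _ in managers_with_writes:
        if decision:
            rm.commit(tx_id)
        else:
            rm.abort(tx_id)
    return decision

chain_a, chain_b = ResourceManagerContract("A"), ResourceManagerContract("B")
ok = run_cross_chain_tx("tx1", [(chain_a, {"x": 1}), (chain_b, {"y": 2})])
print("committed" if ok else "aborted")
```

The atomicity guarantee comes from the coordinator applying the same outcome, commit or abort, to every chain once all votes are in; isolation comes from keeping tentative writes locked until that decision.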
Transaction processing is of growing importance for mobile computing. Booking tickets, flight reservations, banking, ePayment, and booking holiday arrangements are just a few examples of mobile transactions. Due to temporarily disconnected situations, synchronisation and consistent transaction processing are key issues. Serializability is too strong a criterion for correctness when the semantics of a transaction are known. We introduce a transaction model that allows higher concurrency for a certain class of transactions defined by their semantics. The transaction results are "escrow serializable" and the synchronisation mechanism is non-blocking. An experimental implementation showed higher concurrency, higher transaction throughput, and lower resource usage than common locking or optimistic protocols.
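The escrow idea behind such semantics-aware synchronization can be illustrated in a few lines of Python. This is a generic sketch of the classic escrow technique for commutative numeric updates, not the paper's actual model; all names and values are illustrative.

```python
# Illustrative escrow counter: commutative increments/decrements may run
# concurrently without locks, as long as the update is safe under every
# possible outcome (commit or abort) of the in-flight transactions.

class EscrowCounter:
    def __init__(self, value, minimum=0):
        self.low = value          # value if all pending decrements commit
        self.high = value         # value if all pending increments commit
        self.minimum = minimum
        self.pending = {}         # tx_id -> delta held in escrow

    def reserve(self, tx_id, delta):
        # Non-blocking check: admit the update only if the constraint holds
        # in the worst case; otherwise refuse immediately instead of waiting.
        if self.low + min(delta, 0) < self.minimum:
            return False
        self.low += min(delta, 0)
        self.high += max(delta, 0)
        self.pending[tx_id] = delta
        return True

    def commit(self, tx_id):
        delta = self.pending.pop(tx_id)
        # The committed delta becomes part of both bounds.
        if delta < 0: self.high += delta
        else:         self.low += delta

    def abort(self, tx_id):
        delta = self.pending.pop(tx_id)
        if delta < 0: self.low -= delta
        else:         self.high -= delta

seats = EscrowCounter(10)
assert seats.reserve("t1", -4) and seats.reserve("t2", -4)  # run concurrently
assert not seats.reserve("t3", -4)                          # would risk < 0
seats.commit("t1"); seats.abort("t2")
```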
Industrial practice is characterized by random events, also referred to as internal and external turbulences, which disturb the target-oriented planning and execution of production and logistics processes. Methods of probabilistic forecasting, in contrast to single-value predictions, allow an estimation of the probability of various future outcomes of a random variable in the form of a probability density function instead of predicting a specific single outcome. Probabilistic forecasting methods, which are embedded into the analytics process to gain insights for the future based on historical data, therefore offer great potential for incorporating uncertainty into planning and control in industrial environments. In order to familiarize students with these potentials, a training module on the application of probabilistic forecasting methods in production and intralogistics was developed in the learning factory 'Werk150' of the ESB Business School (Reutlingen University). The theoretical introduction to analytics, probabilistic forecasting methods, and the transition to the application domain of intralogistics is given using examples from other disciplines such as weather forecasting and energy consumption forecasting. In addition, data sets of the learning factory are used to familiarize the students with the steps of the analytics process in a practice-oriented manner. The students are then given the task of identifying the influencing factors and the information required to capture intralogistics turbulences based on defined turbulence scenarios (e.g. failure of a logistical resource) in the learning factory. In practical production scenario runs, the students apply and compare different probabilistic forecasting methods. The graduate training module allows the students to experience the potential of probabilistic forecasting methods for improving production and intralogistics processes in the context of turbulences and to build up the corresponding professional and methodological competencies.
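The contrast between a single-value prediction and a probabilistic forecast, which the module builds on, can be sketched in a few lines of NumPy; the data, distribution, and threshold below are purely illustrative.

```python
# Minimal sketch of point forecast vs. probabilistic forecast; the demand
# series here is synthetic stand-in data, not learning-factory data.
import numpy as np

rng = np.random.default_rng(0)
history = 50 + rng.normal(0, 5, size=200)   # e.g., picks per hour in intralogistics

point_forecast = history.mean()             # single-value prediction

# Probabilistic forecast: an empirical distribution of outcomes, summarized
# by quantiles, lets planners reason about risk under turbulence scenarios.
quantiles = np.quantile(history, [0.05, 0.5, 0.95])
print(f"point forecast: {point_forecast:.1f}")
print(f"5%/50%/95% quantiles: {quantiles.round(1)}")
print(f"P(demand > 55) = {(history > 55).mean():.2f}")
```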
Enterprise Architectures (EA) consist of many architecture elements that stand in manifold relationships to each other. Architecture analysis is therefore important, but also very difficult for stakeholders: because changing one architecture element has impacts on other elements, different stakeholders are involved. In practice, EAs are often analyzed using visualizations. This article aims at contributing to the field of visual analytics in EAM by analyzing how state-of-the-art software platforms in EAM support stakeholders with respect to providing and visualizing the "right" information for decision-making tasks. We investigate the collaborative decision-making process by developing a research study and conducting it as an experiment in a master's-level class in which students use professional EAM tools.
Hardly any software development process is used as prescribed by authors or standards. Regardless of company size or industry sector, a majority of project teams and companies use hybrid development methods (short: hybrid methods) that combine different development methods and practices. Even though such hybrid methods are highly individualized, a common understanding of how to systematically construct synergetic practices is missing. In this article, we make a first step towards a statistical construction procedure for hybrid methods. Grounded in 1467 data points from a large-scale practitioner survey, we study the question: What are hybrid methods made of, and how can they be systematically constructed? Our findings show that only eight methods and a few practices build the core of modern software development. Using an 85% agreement level in the participants' selections, we provide examples illustrating how hybrid methods can be characterized by the practices they are made of. Furthermore, using this characterization, we develop an initial construction procedure, which allows for defining a method frame and enriching it incrementally to devise a hybrid method using ranked sets of practices.
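One plausible reading of the 85% agreement level can be illustrated with a small, hypothetical Python sketch: given the practice selections of all survey participants who use a base method, keep only the practices selected by at least 85% of them. The function and example data below are illustrative, not the paper's actual procedure.

```python
# Hypothetical sketch of characterizing a method by high-agreement practices.
from collections import Counter

def core_practices(selections, agreement=0.85):
    """selections: one set of practices per survey participant using the method."""
    counts = Counter(p for s in selections for p in s)
    threshold = agreement * len(selections)
    return {p for p, c in counts.items() if c >= threshold}

# Illustrative survey rows for teams using the same base method.
teams = [
    {"code review", "daily standup", "ci", "retrospective"},
    {"code review", "daily standup", "ci"},
    {"code review", "daily standup", "ci", "pair programming"},
]
print(core_practices(teams))   # practices (nearly) all teams agree on
```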
Ambitious goals set by the European Union strategy for reducing the emissions of multimodal logistics chains, together with new requirements for intermodal terminals arising from evolving customer needs, contribute to a shift in the driver of infrastructure development: from economy of scale to economy of density. This paper presents an innovative method for designing a process-oriented technology chain for intermodal terminals in order to fulfill these new, demanding requirements. The results of the case study of the Zero Emission Logistic Terminal Reutlingen are presented, highlighting how this particular context enables the design and development of a modular concept, paving the way for the generalization of the findings and their transfer to similar contexts in other European cities.
This paper first identifies the trade-off among cost, flexibility, and performance of autonomous robotic solutions for material handling processes, where adding value through automation is not as straightforward as in production processes; hence the requirement for automated solutions to be simple, lean, and efficient becomes even stricter. A method for modelling and comparing the differential performance and costs of manual and autonomous solutions is then developed. As a result of the method, a smart man-machine collaborative interface is designed and its impact evaluated in a specific case study. The results are then generalized and support the conclusion that in unconstrained environments, where full standardization cannot be achieved, the risk of investing in autonomous solutions can only be mitigated by creating a fast and smart man-machine collaborative interface.
Facial expressions play a dominant role in facilitating social interactions. We endeavor to develop tactile displays to reinstate facial-expression-modulated communication. The high spatial and temporal dimensionality of facial movements poses a unique challenge when designing tactile encodings of them. A further challenge is developing encodings that are attuned to the perceptual characteristics of our skin. A caveat of using vibrotactile displays is that tactile stimuli have been shown to induce perceptual tactile aftereffects when used on the fingers, arm, and face. However, despite the prevalence of waist-worn tactile displays, no such investigations of tactile aftereffects at the waist region exist in the literature, although they are warranted by the unique sensory and perceptual signalling characteristics of this area. Using an adaptation paradigm, we investigated the presence of perceptual tactile aftereffects induced by continuous and burst vibrotactile stimuli delivered at the navel, side, and spinal regions of the waist. We report evidence that the tactile perception topology of the waist is non-uniform: specifically, the navel and spine regions are resistant to adaptive aftereffects, while the side regions are more prone to perceptual adaptation to continuous, but not burst, stimulation. The results of our investigations highlight the unique set of challenges posed by designing waist-worn tactile displays. These and future perceptual studies can directly inform more realistic and effective implementations of complex, high-dimensional spatiotemporal social cues.
IT environments that consist of a very large number of rather small structures like microservices, Internet of Things (IoT) components, or mobility systems are emerging to support flexible and agile products and services in the age of digital transformation. Biological metaphors of living and adaptable ecosystems with service-oriented enterprise architectures provide the foundation for self-optimizing, resilient run-time environments and distributed information systems. We are extending Enterprise Architecture (EA) methodologies and models that cover a high degree of heterogeneity and distribution to support the digital transformation and related information systems with micro-granular architectures. Our aim is to support flexibility and agile transformation for both IT and business capabilities within adaptable digital enterprise architectures. The present research paper investigates mechanisms for integrating Microservice Architectures (MSA) by extending original enterprise architecture reference models with elements for more flexible architectural metamodels and EA-mini-descriptions.
With the progress of technology in modern hospitals, intelligent perioperative situation recognition will gain relevance due to its potential to substantially improve surgical workflows by providing situation knowledge in real time. Such knowledge can be extracted from image data by machine learning techniques, but this poses a privacy threat to the staff's and patients' personal data. De-identification is a possible solution for removing visually sensitive information. In this work, we developed a YOLO v3-based prototype to detect sensitive areas in the image in real time. These areas are then de-identified using common image obfuscation techniques. Our approach shows that it is in principle suitable for de-identifying sensitive data in OR images and contributes to a privacy-respectful way of processing image data in the context of situation recognition in the OR.
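The obfuscation step of such a pipeline can be sketched as follows, assuming bounding boxes have already been produced by a detector such as YOLO v3. The `detect_sensitive_regions` function below is a hypothetical placeholder for that model; the OpenCV blurring call stands for one common obfuscation technique.

```python
# Illustrative de-identification step on a detected region of an OR image.
import cv2
import numpy as np

def detect_sensitive_regions(frame):
    # Hypothetical placeholder for the YOLO v3 inference step; would return
    # (x, y, w, h) boxes around faces, monitors, documents, etc.
    return [(40, 30, 80, 80)]

def deidentify(frame, boxes):
    for (x, y, w, h) in boxes:
        roi = frame[y:y + h, x:x + w]
        # Common obfuscation: heavy Gaussian blur (pixelation would also work).
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

frame = np.full((480, 640, 3), 200, dtype=np.uint8)   # dummy image stand-in
out = deidentify(frame, detect_sensitive_regions(frame))
```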
The aim of this work is the development of an artificial intelligence (AI) application to support the recruiting process, elevating the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing (i.e. active recruitment), pre-sorting, evaluating structured video interviews, and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground truth data. Together, these approaches represent important steps towards creating ethical and unbiased machine learning systems.
AI technologies such as deep learning provide promising advances in many areas. Using these technologies, enterprises and organizations implement new business models and capabilities. Initially, AI technologies were deployed in experimental environments, and AI-based applications were created in an ad-hoc manner, without methodological guidance or an engineering approach. Due to the increasing importance of AI technologies, however, a more structured approach is necessary that enables the methodological engineering of AI-based applications. In this paper, we therefore develop first steps towards the methodological engineering of AI-based applications. First, we identify some important differences between the technological foundations of AI technologies, in particular deep learning, and traditional information technologies. Then we create a framework for engineering AI applications in four steps: identification of an AI application type, sub-type identification, lifecycle phase, and definition of details. The introduced framework considers that AI applications use an inductive approach to infer knowledge from huge collections and streams of data. It not only enables the rapid development of AI applications but also the efficient sharing of knowledge about them.
Towards Automated Surgical Documentation using automatically generated checklists from BPMN models
(2021)
The documentation of surgeries is usually created from memory only after the operation, which means additional effort for the surgeon and carries the risk of imprecise, shortened reports. Displaying process steps in the form of checklists and automatically creating the surgical documentation from the completed process steps could serve as a reminder, standardize the surgical procedure, and save time for the surgeon. Based on two works from Reutlingen University, which implemented the creation of dynamic checklists from Business Process Model and Notation (BPMN) models and the storage of the times at which a process step was completed, a prototype was developed for an Android tablet to extend the dynamic checklists with functions such as uploading photos and files, manual user entries, the interception of foreseeable deviations from the normal course of operations, and the automatic creation of OR documentation.
Intraoperative brain deformation, so-called brain shift, affects the applicability of preoperative magnetic resonance imaging (MRI) data to assist intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on the architecture of 3D convolutional neural networks, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the retrospective evaluation of cerebral tumors (RESECT) dataset. This study showed that our proposed method outperforms the registration methods of previous studies, with an average mean squared error (MSE) of 85. Moreover, the method can register three 3D MRI-iUS pairs in less than a second, improving the expected outcomes of brain surgery.
Distraction of the driver is one of the most frequent causes of car accidents. We aim for a computational cognitive model predicting the driver's degree of distraction while performing a secondary task during driving, such as talking with co-passengers. The secondary task might cognitively involve the driver to differing degrees, depending on the topic of the conversation or the number of co-passengers. In order to detect these subtle differences in everyday driving situations, we aim to analyse in-car audio signals and combine this information with head pose and face tracking information. In the first step, we will assess driving, video, and audio parameters that reliably predict cognitive distraction of the driver. These parameters will be used to train the cognitive model in estimating the degree of the driver's distraction. In the second step, we will train and test the cognitive model during conversations of the driver with co-passengers during active driving. This paper describes the work in progress of our first experiment, with preliminary results concerning driving parameters corresponding to the driver's degree of distraction. In addition, the technical implementation of our experiment combining driving, video, and audio data and first methodological results concerning the auditory analysis are presented. The overall aim of the cognitive distraction model is the development of a mobile user profile that computes the individual degree of distraction and is also applicable to other systems.
A large body of literature is concerned with models of presence, the sensory illusion of being part of a virtual scene, but there is still no general agreement on how to measure it objectively and reliably. For the presented study, we applied contemporary theory to measure presence in virtual reality. Thirty-seven participants explored an existing commercial game in order to complete a collection task. Two startle events were naturally embedded in the game progression to evoke physical reactions, and head tracking data was collected in response to these events. Subjective presence was recorded using a post-study questionnaire and real-time assessments. Our novel implementation of behavioral measures led to insights which could inform future presence research: we propose a measure in which startle reflexes are evoked through specific events in the virtual environment, and head tracking data is compared to the range and speed of baseline interactions.
Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers, and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment.
In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base via the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
The euphoria around microservices has decreased over the years, but the trend of modernizing legacy systems to this novel architectural style is unbroken to date. A variety of approaches have been proposed in academia and industry, aiming to structure and automate the often long-lasting and cost-intensive migration journey. However, our research shows that there is still a need for more systematic guidance. While grey literature is dominant for knowledge exchange among practitioners, academia has contributed a significant body of knowledge as well, catching up on its initial neglect. A vast number of studies on the topic yielded novel techniques, often backed by industry evaluations. However, practitioners hardly leverage these resources. In this paper, we report on our efforts to design an architecture-centric methodology for migrating to microservices. As its main contribution, a framework provides guidance for architects during the three phases of a migration. We refer to methods, techniques, and approaches based on a variety of scientific studies that have not been made available in a similarly comprehensible manner before. Through an accompanying tool to be developed, architects will be in a position to systematically plan their migration, make better informed decisions, and use the most appropriate techniques and tools to transition their systems to microservices.
Digital transformation has changed corporate reality and, with that, firms' IT environments and IT governance (ITG). As such, the perspective of ITG has shifted from the design of a relatively stable, closed, and controllable system of a self-sufficient enterprise to a relatively fluid, open, agile, and transformational system of networked, co-adaptive entities. Related to this paradigm shift in ITG, this paper aims to clarify how the concept of an effective ITG framework has changed in terms of the demand for agility in organizations. The study conducted 33 qualitative interviews with executives and senior managers from the banking industry in Germany, Switzerland, and Austria. Analysis of the interviews focused on the formation of categories and the assignment of individual text parts (codings) to these categories to allow for a quantitative evaluation of the codings per category. Regarding traditional and agile ITG dimensions, 22 traditional and 25 agile dimensions were identified. Moreover, agile strategies within the agile ITG construct and ten ITG patterns were identified from the interview data. The data show relevant perspectives on the implementation of traditional and new ITG dimensions and highlight ambidextrous aspects in ITG frameworks.
While there has been increased digitization of private homes, little has been done to understand these specific home technologies and how they serve consumers, among other issues. "Smart home technology" (SHT) refers to a wide range of artifacts, from cleaning aids to energy advisors. Given this breadth, clarity surrounding the key characteristics and the multi-faceted impact of SHT is needed to conduct more directed research on SHT. We propose a taxonomy to help outline the salient intended outcomes of SHT. Through a process involving five iterations, we analyzed and classified 79 technologies (gathered from literature and industry reports). This uncovered seven dimensions encompassing 20 salient characteristics. We believe these dimensions and characteristics will help researchers and organizations better design and study the impacts of these technologies. Our long-term agenda is to use the proposed taxonomy for an exploratory inquiry to understand tensions occurring when personal and sustainability-related outcomes compete.
With the expansion of cyber-physical systems (CPSs) across critical and regulated industries, systems must be continuously updated to remain resilient. At the same time, they should be extremely secure and safe to operate and use. The DevOps approach caters to business demands of more speed and smartness in production, but it is extremely challenging to implement DevOps due to the complexity of critical CPSs and requirements from regulatory authorities. In this study, expert opinions from 33 European companies expose the gap in the current state of practice on DevOps-oriented continuous development and maintenance. The study contributes to research and practice by identifying a set of needs. Subsequently, the authors propose a novel approach called Secure DevOps and provide several avenues for further research and development in this area. The study shows that, because security is a cross-cutting property in complex CPSs, its proficient management requires system-wide competencies and capabilities across the CPSs development and operation.
Towards a practical maintainability quality model for service- and microservice-based systems
(2017)
Although current literature mentions many different metrics related to the maintainability of service-based systems (SBSs), there is no comprehensive quality model (QM) with automatic evaluation and a practical focus. To fill this gap, we propose a Maintainability Model for Services (MM4S), a layered maintainability QM consisting of service properties (SPs) related to automatically collectable service metrics (SMs). This research artifact, created within an ongoing Design Science Research (DSR) project, is the first version ready for detailed evaluation and critical feedback. The goal of MM4S is to serve as a simple and practical tool for basic maintainability estimation and control in the context of SBSs and their specialization, microservice-based systems (μSBSs).
Towards a model for holistic mapping of supply chains by means of tracking and tracing technologies
(2022)
The usage of tracking and tracing technologies not only enables transparency and visibility of supply chains but also offers far-reaching advantages for companies, such as ensuring product quality or reducing supplier risks. Increasing the amount of shared information supports both internal and external planning processes as well as the stability and resilience of globally operating value chains. This paper aims to differentiate and define the functionalities of tracking and tracing technologies, which are frequently used interchangeably in the literature. Furthermore, the paper incorporates influencing factors impacting a sequencing of the connected world in Industry 4.0 supply chain networks. This includes legal influences, the embedment of supply chain-related standards, and new possibilities of emerging technologies. Finally, the results are summarized in a model for the holistic mapping of supply chains by means of tracking and tracing technologies. The resulting technological solutions that can be derived from the model enable companies to address missing elements in order to enable the holistic mapping of supply chain events as well as the transparent representation of a digital shadow throughout the entire supply chain.
While there are several theoretical comparisons of Object Orientation (OO) and Service Orientation (SO), little empirical research on the maintainability of the two paradigms exists. To provide support for a generalizable comparison, we conducted a study with four related parts. Two functionally equivalent systems (one OO and one SO version) were analyzed with coupling and cohesion metrics as well as via a controlled experiment, where participants had to extend the systems. We also conducted a survey with 32 software professionals and interviewed 8 industry experts on the topic. Results indicate that the SO version of our system possesses a higher degree of cohesion, a lower degree of coupling, and could be extended faster. Survey and interview results suggest that industry sees systems built with SO as more loosely coupled, modifiable, and reusable. OO systems, however, were described as less complex and easier to test.
Current approaches for enterprise architecture lack analytical instruments for cyclic evaluations of business and system architectures in real business enterprise system environments. This impedes the broad use of enterprise architecture methodologies. Furthermore, the permanent evolution of systems quickly desynchronizes the model representation from reality. Therefore, we introduce an approach for complementing the existing top-down approach to the creation of enterprise architecture with a bottom-up approach. Enterprise Architecture Analytics uses the architectural information contained in many infrastructures: by applying Big Data technologies, it is possible to exploit this information and derive architectural insights. That means Enterprise Architectures may be discovered, analyzed, and optimized using analytics. The increased availability of architectural data also improves the possibilities to verify the compliance of Enterprise Architectures. Architectural decisions are linked to clustered architecture artifacts and categories according to a holistic EAM Reference Architecture with specific architecture metamodels. A specially suited EAM Maturity Framework provides the basis for systematic, analytics-supported assessments of architecture capabilities.
Smart cities are considered data factories that generate an enormous amount of data from various sources. In fact, data is the backbone of any smart service. Therefore, the strategic, beneficial handling of this digital capital is crucial for cities. Some smart city pioneers have already written down their approach to data in the form of data strategies, but what should a city's data strategy include, and how can the goals and measures defined in the strategies be operationalized? This paper addresses these questions by looking closely at the data strategies of cities in Germany and the top three countries in the EU Digital Economy and Society Index. The in-depth analysis of eight city data strategies yielded 11 dimensions that cities should consider in their data strategy: relevance of data, principles, methods, data sharing, technology, data culture, data ethics, organizational structure, data security and privacy, collaborations, and data literacy. In addition, data governance is a concept to put these 11 strategic dimensions into practice through standardization measures, training programs, and defining roles and responsibilities by developing a data catalog.
While the concepts of object-oriented antipatterns and code smells are prevalent in scientific literature and have been popularized by tools like SonarQube, the research field for service-based antipatterns and bad smells is not as cohesive and organized. The description of these antipatterns is distributed across several publications with no holistic schema or taxonomy. Furthermore, there is currently little synergy between documented antipatterns for the architectural styles SOA and Microservices, even though several antipatterns may hold value for both. We therefore conducted a Systematic Literature Review (SLR) that identified 14 primary studies. 36 service-based antipatterns were extracted from these studies and documented with a holistic data model. We also categorized the antipatterns with a taxonomy and implemented relationships between them. Lastly, we developed a web application for convenient browsing and implemented a GitHub-based repository and workflow for the collaborative evolution of the collection. Researchers and practitioners can use the repository as a reference, for training and education, or for quality assurance.
Analysis is an important part of the enterprise architecture management process. Prior to decisions regarding the transformation of the enterprise architecture, the current situation and the outcomes of alternative action plans have to be analysed. Many analysis approaches have been proposed by researchers, and current enterprise architecture management tools implement analysis functionalities. However, little work has been done on structuring and classifying enterprise architecture analysis approaches. This paper collects and extends existing classification schemes, presenting a framework for the classification of enterprise architecture analysis. For evaluation, a collection of enterprise architecture analysis approaches has been classified based on this framework. As a result, the description of these approaches has been assessed, a common set of important categories for enterprise architecture analysis classification has been derived, and suggestions for further development are made.
The benefits of urban data cannot be realized without a political and strategic view of data use. A core concept within this view is data governance, which aligns strategy in data-relevant structures and entities with data processes, actors, architectures, and overall data management. Data governance is not a new concept and has long been addressed by scientists and practitioners from an enterprise perspective. In the urban context, however, data governance has only recently attracted increased attention, despite the unprecedented relevance of data in the advent of smart cities. Urban data governance can create semantic compatibility between heterogeneous technologies and data silos and connect stakeholders by standardizing data models, processes, and policies. This research provides a foundation for developing a reference model for urban data governance, identifies challenges in dealing with data in cities, and defines factors for the successful implementation of urban data governance. To obtain the best possible insights, the study carries out qualitative research following the design science research paradigm, conducting semi-structured expert interviews with 27 municipalities from Austria, Germany, Denmark, Finland, Sweden, and the Netherlands. The subsequent data analysis based on cognitive maps provides valuable insights into urban data governance. The interview transcripts were transferred and synthesized into comprehensive urban data governance maps to analyze entities and complex relationships with respect to the current state, challenges, and success factors of urban data governance. The findings show that each municipal department defines data governance separately, with no uniform approach. Given cultural factors, siloed data architectures have emerged in cities, leading to interoperability and integrability issues. A city-wide data governance entity in a cross-cutting function can be instrumental in breaking down silos in cities and creating a unified view of the city’s data landscape. The further identified concepts and their mutual interaction offer a powerful tool for developing a reference model for urban data governance and for the strategic orientation of cities on their way to data-driven organizations.
Purpose
For the modeling, execution, and control of complex, non-standardized intraoperative processes, a modeling language is needed that reflects the variability of interventions. As the established Business Process Model and Notation (BPMN) reaches its limits in terms of flexibility, the Case Management Model and Notation (CMMN) was considered as it addresses weakly structured processes.
Methods
To analyze the suitability of the modeling languages, BPMN and CMMN models of a Robot-Assisted Minimally Invasive Esophagectomy and Cochlea Implantation were derived and integrated into a situation recognition workflow. Test cases were used to contrast the differences and compare the advantages and disadvantages of the models concerning modeling, execution, and control. Furthermore, the impact on transferability was investigated.
Results
Compared to BPMN, CMMN allows flexibility for modeling intraoperative processes while remaining understandable. Although more effort and process knowledge are needed for execution and control within a situation recognition system, CMMN enables better transferability of the models and therefore of the system. In conclusion, CMMN should be chosen as a supplement to BPMN for flexible process parts that BPMN can only cover insufficiently, or otherwise as a replacement for the entire process.
Conclusion
CMMN offers the flexibility for variable, weakly structured process parts, and is thus suitable for surgical interventions. A combination of both notations could allow optimal use of their advantages and support the transferability of the situation recognition system.
Autonomous driving is becoming the next big digital disruption in the automotive industry. However, the possibility of integrating autonomous driving vehicles into current transportation systems not only involves technological issues but also requires the acceptance and adoption of users. Therefore, this paper develops a conceptual model for user acceptance of autonomous driving vehicles. The corresponding model is tested through a standardized survey of 470 respondents in Germany. Finally, the findings are discussed in relation to the current developments in the automotive industry, and recommendations for further research are given.
Many start-ups are in search of cooperation partners to develop their innovative business models. In response, incumbent firms are introducing increasingly more cooperation systems to engage with start-ups. However, many of these cooperations end in failure. Although qualitative studies on cooperation models have tried to improve the effectiveness of incumbent start-up strategies, only a few have empirically examined start-up cooperation behavior. Considering the lack of adequate measurement models in current research, this paper focuses on developing a multi-item scale on cooperation behavior of start-ups, drawing from a series of qualitative and quantitative studies. The resultant scale contributes to recent research on start-up cooperation and provides a framework to add an empirical perspective to current research.
The conventional view of the value-creation chain suggests offering high-value propositions at the product level (in terms of benefits provided by elements of the product) to attain high-value perceptions at the customer level, which should ultimately result in high-value appropriation at the firm level (i.e. relationship, volume, pricing and financial success). This study challenges this view and provides a differentiated understanding of the value creation chain. With a multi-industry sample of 339 companies and a sample of 626 customers to validate managerial assessments, the authors apply a configurational approach to identify whether and to what extent offering high-value propositions at the product level is necessary or sufficient for achieving superior value perceptions at the customer level and high-value appropriation at the firm level. Taking into account the company-internal and company-external environment of the value-creation chain, the study identifies seven value creation chain constellations.
Purpose: This study aims to conceptualize and test the effect of consumers' perceptions of complaint handling quality (PCHQ) in both traditional and social media channels.
Design/methodology/approach: Study 1 systematically reviews the relevant literature and then carries out a consumer and manager survey. This approach aims to conceptualize the dimensionality of PCHQ. Study 2 tests the effect of PCHQ on key marketing outcomes. Using survey data from a German telecommunications company, the study provides an explanation for the differences in outcomes across traditional (hotline) and social media channels.
Findings: Study 1 reveals that PCHQ is best conceptualized as a five-dimensional construct with 15 facets. There are significant differences between customers and managers in terms of the importance attached to the various dimensions. The construct shows strong psychometric properties with high reliability and validity, thereby opening up opportunities to treat these facets as measurement indicators for the construct. Study 2 indicates that the effect of PCHQ on consumer loyalty and word-of-mouth (WOM) communication is stronger in social media than in traditional channels. Procedural justice and the overall quality of service solutions emerge as general dimensions of PCHQ because they are equally important in both channels. In contrast, interactional justice, distributive justice, and customer effort have varying effects across the two channels.
Research limitations/implications: This study contributes to the understanding of a firm's channel selection for complaint handling in two ways. First, it evaluates and conceptualizes the PCHQ construct. Second, it compares the effects of different dimensions of PCHQ on key marketing outcomes across traditional and social media channels.
Practical implications: This study enables managers to understand the difference in efficacy attached to different dimensions of PCHQ. It further highlights such differences across traditional and social media service channels. For example, the effect of complaint handling on social media is of particular importance when generating WOM communication.
Originality/value: This study offers a comprehensive conceptualization of the PCHQ construct and reveals the general and channel contingent effects of its different dimensions on key marketing outcomes.
The development of new materials that mimic cartilage and its function is an unmet need that would allow replacing only the damaged parts of joints instead of the whole joint. Polyvinyl alcohol (PVA) hydrogels have raised special interest for this application due to their biocompatibility, high swelling capacity, and chemical stability. In this work, the effect of post-processing treatments (annealing, high hydrostatic pressure (HHP), and gamma radiation) on the performance of PVA gels obtained by cast-drying was investigated, and their ability to serve as delivery vehicles for the anti-inflammatories diclofenac and ketorolac was evaluated. HHP damaged the hydrogels, breaking some bonds in the polymeric matrix, and therefore led to poor mechanical and tribological properties. The remaining treatments, in general, improved the performance of the materials by increasing their crystallinity. Annealing at 150 °C produced the best mechanical and tribological results: higher resistance to compressive and tensile loads, lower friction coefficients, and the ability to support higher loads in sliding movement. This material was loaded with the anti-inflammatories, both without and with vitamin E (Vit.E) or Vit.E + cetalkonium chloride (CKC). Vit.E + CKC helped to control the release of the drugs, which occurred within 24 h. The material did not induce irritability or cytotoxicity and therefore shows high potential for use in cartilage replacement with a therapeutic effect in the immediate postoperative period.
There are several intra-operative use cases which require the surgeon to interact with medical devices. We used the Leap Motion Controller as an input device and implemented two use cases: 2D interaction (e.g. advancing EPR data) and selection of a value (e.g. room illumination brightness). The gesture detection was successful, and we mapped its output to several devices and systems.
Container virtualization has evolved into a key technology for deployment automation in line with the DevOps paradigm. Whereas container management systems facilitate the deployment of cloud applications by employing container-based artifacts, parts of the deployment logic have already been applied earlier to build these artifacts. Current approaches do not integrate these two deployment phases in a comprehensive manner. Limited knowledge of the application software and middleware encapsulated in container-based artifacts leads to maintainability and configuration issues. Moreover, the deployment of cloud applications is based on custom orchestration solutions, leading to lock-in problems. In this paper, we propose a two-phase deployment method based on the TOSCA standard. We present integration concepts for TOSCA-based orchestration and deployment automation using container-based artifacts. Our two-phase deployment method enables capturing and aligning all the deployment logic related to a software release, leading to better maintainability. Furthermore, we build a container management system, composed of a TOSCA-based orchestrator on Apache Mesos, to deploy container-based cloud applications automatically.
In this work, a comparison between different brushless harmonic-excited wound-rotor synchronous machines is performed. The general idea of all topologies is the elimination of the slip rings and auxiliary windings by using the already existing stator and rotor winding for field excitation. This is achieved by injecting a harmonic airgap field with the help of power electronics. This harmonic field does not interact with the fundamental field, it just transfers the excitation power across the airgap. Alternative methods with varying number of phases, different pole-pair combinations, and winding layouts are covered and compared with a detailed Finite-Element-parameterized model. Parasitic effects due to saturation and coupling between the harmonic and main windings are considered.
Enterprise Architectures (EA) consist of a multitude of architecture elements, which relate in manifold ways to each other. As the change of a single element hence impacts various other elements, mechanisms for architecture analysis are important to stakeholders. The high number of relationships aggravates architecture analysis and makes it a complex yet important task. In practice EAs are often analyzed using visualizations. This article contributes to the field of visual analytics in enterprise architecture management (EAM) by reviewing how state-of-the-art software platforms in EAM support stakeholders with respect to providing and visualizing the “right” information for decision-making tasks. We investigate the collaborative decision-making process in an experiment with master students using professional EAM tools by developing a research study. We evaluate the students’ findings by comparing them with the experience of an enterprise architect.
Distributed ledger technologies such as the blockchain technology offer an innovative solution to increase visibility and security to reduce supply chain risks. This paper proposes a solution to increase the transparency and auditability of manufactured products in collaborative networks by adopting smart contract-based virtual identities. Compared with existing approaches, this extended smart contract-based solution offers manufacturing networks the possibility of involving privacy, content updating, and portability approaches to smart contracts. As a result, the solution is suitable for the dynamic administration of complex supply chains.
Titanium(IV) surface complexes bearing chelating catecholato ligands for enhanced band-gap reduction
(2023)
Protonolysis reactions between dimethylamido titanium(IV) catecholate [Ti(CAT)(NMe2)2]2 and neopentanol or tris(tert-butoxy)silanol gave catecholato-bridged dimers [(Ti(CAT)(OCH2tBu)2)(HNMe2)]2 and [Ti(CAT){OSi(OtBu)3}2(HNMe2)2]2, respectively. Analogous reactions using the dimeric dimethylamido titanium(IV) (3,6-di-tert-butyl)catecholate [Ti(CATtBu2-3,6)(NMe2)2]2 yielded the monomeric Ti(CATtBu2-3,6)(OCH2tBu)2(HNMe2)2 and Ti(CATtBu2-3,6)[OSi(OtBu)3]2(HNMe2)2. The neopentoxide complex Ti(CATtBu2-3,6)(OCH2tBu)2(HNMe2)2 engaged in further protonolysis reactions with Si–OH groups and was consequentially used for grafting onto mesoporous silica KIT-6. Upon immobilization, the surface complex [Ti(CATtBu2-3,6)(OCH2tBu)2(HNMe2)2]@[KIT-6] retained the bidentate chelating geometry of the catecholato ligand. This convergent grafting strategy was compared with a sequential and an aqueous approach, which gave either a mixture of bidentate chelating species with a bipodally anchored Ti(IV) center along with other physisorbed surface species or not clearly identifiable surface species. Extension of the convergent and aqueous approaches to anatase mesoporous titania (m-TiO2) enabled optical and electronic investigations of the corresponding surface species, revealing that the band-gap reduction is more pronounced for the bidentate chelating species (convergent approach) than for that obtained via the aqueous approach. The applied methods include X-ray photoelectron spectroscopy, ultraviolet photoelectron spectroscopy, and solid-state UV/vis spectroscopy. The energy-level alignment for the surface species from the aqueous approach, calculated from experimental data, accounts for the well-known type II excitation mechanism, whereas the findings indicate a distinct excitation mechanism for the bidentate chelating surface species of the material [Ti(CATtBu2-3,6)(OCH2tBu)2(HNMe2)2]@[m-TiO2].
This paper is a review of the book "Stress Test: Reflections on Financial Crises" by Timothy Geithner. The book mainly discusses the policy decisions and their implications during Geithner's tenure as New York Fed president and US Treasury secretary under President Obama. The book reveals some hidden information about the decision-making process in both institutions, but it lacks a scientific foundation to explain the financial crisis in more detail. Hence, I think the book is less convincing than publicly recognized. No doubt, Geithner's crisis response deserves appreciation, especially the "Stress Test". However, the book overall does not demonstrate that the response is sustainable in the long run and scientifically sound. Consequently, it is more a book on public policy and governance than on economics.
Successful transitions to a sustainable bioeconomy require novel technologies, processes, and practices as well as a general agreement about the overarching normative direction of innovation. Both requirements necessarily involve collective action by those individuals who purchase, use, and co-produce novelties: the consumers. Based on theoretical considerations borrowed from evolutionary innovation economics and consumer social responsibility, we explore to what extent consumers’ scope of action is addressed in the scientific bioeconomy literature. We do so by systematically reviewing bioeconomy-related publications according to (i) the extent to which consumers are regarded as passive vs. active, and (ii) different domains of consumer responsibility (depending on their power to influence economic processes). We find all aspects of active consumption considered to varying degrees but observe little interconnection between domains. In sum, our paper contributes to the bioeconomy literature by developing a novel coding scheme that allows us to pinpoint different aspects of consumer activity, which have been considered in a rather isolated and undifferentiated manner. Combined with our theoretical considerations, the results of our review reveal a central research gap which should be taken up in future empirical and conceptual bioeconomy research. The system-spanning nature of a sustainable bioeconomy demands an equally holistic exploration of the consumers’ prospective and shared responsibility for contributing to its coming of age, ranging from the procurement of information on bio-based products and services to their disposal.
When forecasting sales figures, not only the sales history but also the future price of a product influences the sales quantity. At first sight, multivariate time series seem to be the appropriate model for this task. Nonetheless, in real life history is not always repeatable, i.e., in the case of sales history there is only one price for a product at a given time. This complicates the design of a multivariate time series model. However, for some seasonal or perishable products the price is rather a function of the expiration date than of the sales history. This additional information can help to design a more accurate and causal time series model. The proposed solution uses a univariate time series model but takes the price of a product as a parameter that systematically influences the prediction based on a calculated periodicity. The price influence is computed from historical sales data using correlation analysis and adjustable price ranges to identify products with comparable history. The periodicity is calculated with a novel approach based on data folding and Pearson correlation. Compared to other techniques, this approach is easy to compute and allows presetting the price parameter for predictions and simulations. Tests with data from the Data Mining Cup 2012 as well as artificial data demonstrate better results than established, sophisticated time series methods.
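To illustrate the folding-based periodicity detection described above, here is a minimal sketch (our own reconstruction from the abstract; the function and variable names are assumptions, not the authors' code):

import numpy as np
from scipy.stats import pearsonr

def detect_period(sales, candidate_periods):
    # Fold the series into consecutive segments of length p and score each
    # candidate period by the mean Pearson correlation of adjacent segments.
    best_period, best_score = None, -np.inf
    for p in candidate_periods:
        n_folds = len(sales) // p
        if n_folds < 2:
            continue
        folds = np.reshape(sales[:n_folds * p], (n_folds, p))
        score = np.mean([pearsonr(folds[i], folds[i + 1])[0]
                         for i in range(n_folds - 1)])
        # note: multiples of the true period also score highly, so in
        # practice the smallest strongly scoring candidate is preferred
        if score > best_score:
            best_period, best_score = p, score
    return best_period, best_score

# toy example: daily sales with a weekly pattern
rng = np.random.default_rng(0)
days = np.arange(364)
sales = 50 + 20 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, days.size)
print(detect_period(sales, range(2, 31)))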
Standardisation of breath sampling is important for the application of breath analysis in clinical settings. By studying the effect of room airing on indoor and breath analytes and by generating time series of room air with different sampling intervals, we sought to gain further insights into room air metabolism, to detect the relevance of exogenous VOCs and to draw conclusions about their consideration in the interpretation of exhaled breath. Room air and exhaled breath of a healthy subject were analysed before and after room airing. Furthermore, a time series of room air with doors and windows closed was taken over 84 h by automatic sampling every 180 min. A second time series studied room air analytes over 70 h with samples taken every 16.5 min. For breath and room air measurements an IMS coupled to a multi-capillary column (IMS/MCC) [Bio-Scout® - B&S Analytik GmbH, Dortmund, Germany] was used. The peaks were characterized using the software Visual Now (B&S Analytik, Dortmund, Germany) and identified using the software package MIMA (version 1.1, provided by the Max Planck Institute for Informatics, Saarbrücken, Germany) and the database 20160426_SubstanzDbNIST_122 (B&S Analytik GmbH, Dortmund, Germany). In the morning, 4 analytes (Decamethylcyclopentasiloxane [541-02-6]; Pentan-2-one [107-87-9] – Dimer; Hexan-1-al [66-25-1]; Pentan-2-one [107-87-9] – Monomer) showed high intensities in the room air and exhaled breath. They were significantly but not equally reduced by room airing. The 84 h time series showed a time-dependent decrease of some analytes (limonene monomer and dimer; Decamethylcyclopentasiloxane; Butan-1-ol) as well as an increase of others (Pentan-2-one [107-87-9] – Dimer). Shorter sampling intervals exhibited circadian variations of the concentrations of many analytes. Breath sampling in the morning requires room airing beforehand; then the variation of the intensity of indoor analytes can be kept small. The time series of indoor analytes show that their intensities behave differently, with time-dependent declines, constant increases and circadian variations, depending on room airing. This has implications for the breath sampling procedure and the interpretation of exhaled breath.
The increase in distributed energy generation, such as photovoltaic systems (PV) or combined heat and power plants (CHP), poses new challenges to almost every distribution network operator (DNO). In the low-voltage (LV) grids, where installed PV capacity approaches the magnitude of household load, reverse power flow occurs at the secondary substations. High PV penetration leads to voltage rise, flicker and loading problems. These problems have been addressed by the application of various techniques, amongst which is the deployment of step voltage regulators (SVR). SVR can solve the voltage problem but do not prevent or reduce reverse power flows. Therefore, the application of SVR in low-voltage grids can result in significant power losses upstream. In this paper we present part of a research project investigating the application of remote-controlled cable cabinets (CC) with metering units in a low-voltage network as a possible alternative to SVR. A new generation of custom-made remote-controlled cable cabinets has been deployed and dynamic network reconfigurations (NR) have been realized with the following objectives: (i) reduction of reverse power flow through the secondary substation to the upstream network and therefore a reduction of upstream losses, (ii) reduction of the voltage rise caused by distributed energy resources and (iii) load balancing in the low-voltage grid. Secondary objectives are to improve the DNO's insight into the state of the network and to provide further information on future smart grid integration.
Advancements in Internet of Things (IoT), cloud and mobile computing have fostered the digital enrichment—or “digitization”—of physical products, which are gaining increasing relevance in practice. According to recent studies, global IoT spending will exceed USD 1 trillion by 2021 and there will be over 25 billion IoT connections (KPMG, 2018). Porter and Heppelmann (2014) state that IT is “revolutionizing products [as …] IT is becoming an integral part of the product itself.” Senior business executives like GE’s former CEO Jeff Immelt (2015) even propose that “every industrial company in the coming age is also going to become a software and analytics company.” This reflects the increasing relevance of integrating IT components (i.e., software, data analytics, cloud computing) into previously purely physical products. We call IT-enriched physical products “digitized” products to differentiate them from purely intangible “digital” products, such as digital music, e-books, and software. Examples of digitized products include the Philips Hue smartphone-controllable lightbulb, Audi Connect internet-connected cars, and Rolls-Royce’s sensor-enabled pay-per-use jet engines.
Digitized products provide their producers with a wide range of opportunities to offer new functionality and product capabilities (e.g., autonomy) that traditional, physical products do not exhibit (Porter and Heppelmann, 2014). In addition, the digitization of products allows producers to continuously repurpose their offerings, by extending and/or changing the product functionality and, thus, enabling new value creation opportunities. Based on their re-programmability and connectivity, digitized products “remain essentially incomplete […] throughout their lifetime as users continue to add and delete […] and change […] functional capabilities” (Yoo, 2013). For instance, the Philips Hue connected lightbulb enables remote control of basic functions (e.g., switching on and off the light) as well as setting more advanced light scenes for day-to-day tasks (e.g., relax, read) via Amazon’s Alexa artificial intelligence assistant (Signify, 2019), offerings that were not intended use cases when Signify (previously known as Philips Lighting) created Hue in 2012. Thus, digitized products present limitless potentials for new functionality and unforeseen use cases, which provides them with a huge innovation capacity.
Despite these limitless potentials, the uptake of digitized products by businesses has been slow so far (Jernigan et al., 2016; Mocker et al., 2019). According to a 2016 MIT Sloan Management Review report (Jernigan et al., 2016), only 24% of the investigated firms were actively using IoT technologies – a key technology for digitized products. In a more recent study, Mocker et al. (2019) found that the median revenue share from digital offerings (i.e., solutions based on IT-enriched products) in large companies accounted for only 5% of the total revenue of the investigated companies.
The slow uptake of digitized products might be explained by the challenges that firms face regarding the changing nature of digitized products. Pervasive digital technologies (such as IoT) change the nature of products by adding new functionality that was previously not part of the value proposition of the products/services (e.g., a pair of shoes embedded with sensors and connectivity gives joggers access to data regarding their run distance, speed, etc.) (Yoo et al., 2012). The addition of new functionality and use cases makes it harder for producers to design and develop relevant digitized products (Hui, 2014). As described in the paper ‘Do Your Customers Actually Want a “Smart” Version of Your Product?’, “just because [firms] can make something with IoT technology doesn’t mean people will want it” (Smith, 2017).
The shift in digitized products’ nature poses new challenges for producers along the entire product development process (Porter and Heppelmann, 2015; Yoo et al., 2012) and creates a paradox in product digitization, described by Yoo et al. (2012) as the paradox of pace: while technology accelerates the rate of innovation, companies need to spend more time digitizing their products, extending time to market. The production of these digitized products also becomes more challenging, e.g., as companies need to deal with the different clock-speeds of software and hardware development (Porter and Heppelmann, 2015). These challenges suggest that producers need to better understand how they can generate value from their digitized products’ generative potentials.
The body of literature on digitized products has been growing in recent years. For instance, Herterich et al. (2016) investigate how digitized product affordances (i.e., potentials) enable industrial service innovation; Nicolescu et al. (2018) explore the emerging meanings of value associated with IoT; and Benbunan-Fich (2019) studies the impact of basic wearable sensors on the quality of the user experience. However, it remains unclear what it takes for firms to generate value with their digitized product potentials. This dissertation investigates this research gap.
This study introduces a straightforward approach to construct three-dimensional (3D) surface-enhanced Raman spectroscopy (SERS) substrates using chemically modified silica particles as microcarriers and by attaching metal nanoparticles (NPs) onto their surfaces. Tollens’ reagent and sputtering techniques are utilized to prepare the SERS substrates from mercapto-functionalized silica particles. Treatment with Tollens’ reagent generates a variety of silver NPs, ranging from approximately 10 to 400 nm, while sputtering with gold (Au) yields uniformly distributed NPs with an island-like morphology. Both substrates display wide plasmon resonances in the scattering spectra, making them effective for SERS in the visible spectral range, with enhancement factors (ratio of the analyte’s intensity at the hotspot compared to that on the substrate in the absence of metal nanoparticles) of up to 25. These 3D substrates have a significant advantage over traditional SERS substrates because their active surface area is not limited to a 2D surface but offers a much greater active surface due to the 3D arrangement of the NPs. This feature may enable achieving much higher SERS intensity from within streaming liquids or inside cells/tissues.
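In symbols, the enhancement factor quoted above is simply the intensity ratio (our notation, restating the definition given in the abstract):

\[ \mathrm{EF} = \frac{I_{\text{hotspot}}}{I_{\text{substrate without NPs}}} \lesssim 25 \]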
To generate greater value faster from digital innovation, many companies are increasing how much they learn from their own innovation efforts. However, in many companies, these changes are limited to one stakeholder group: innovation teams. Two other stakeholder groups, senior executives and experts from corporate functions, also need to learn from digital innovation initiatives. We have defined three learning imperatives that address a company’s needs to learn continually about building (1) a successful innovation, (2) a portfolio of initiatives that realizes strategic objectives faster, and (3) shared resources that propel multiple initiatives. All three imperatives involve collecting data regularly from digital innovation initiatives. In this research briefing we outline the three learning imperatives and provide examples of how companies are pursuing them to achieve strategic objectives more effectively and efficiently.
For large-scale processes as implemented in organizations that develop software in regulated domains, comprehensive software process models are implemented, e.g., to satisfy compliance requirements. Creating and evolving such processes is demanding and requires software engineers with substantial modeling skills to create consistent and certifiable processes. While teaching process engineering to students, we observed issues in providing and explaining models. In this paper, we present an exploratory study in which we aim to shed light on the challenges students face when it comes to modeling. Our findings show that students are capable of basic modeling tasks, yet fail to utilize models correctly. We conclude that the required skills, notably abstraction and solution development, are underdeveloped due to missing practice and routine. Since modeling is key to many software engineering disciplines, we advocate intensifying modeling activities in teaching.
The data presented in this article characterize the thermomechanical and microhardness properties of a novel melamine-formaldehyde (MF) resin intended for use as a self-healing surface coating. The investigated MF resin is able to undergo reversible crosslinking via Diels–Alder reactive groups. The microhardness data were obtained from nanoindentation measurements performed on solid resin film samples at different stages of the self-healing cycle. Thermomechanical analysis was performed under dynamic load conditions. The data provide supplemental material to the manuscript published by Urdl et al. 2020 (https://doi.org/10.1016/j.eurpolymj.2020.109601) on the self-healing performance of this resin, where a more thorough discussion of the preparation, the properties of this coating material and its application in impregnated paper-based decorative laminates can be found.
In the course of more intensive energy generation from regenerative sources, an increased number of energy storage systems is required. In addition to the widespread means of storing energy electrically, storing energy thermally can contribute significantly. However, limited research exists on the behaviour of thermal energy storage (TES) systems in practical operation. While the physical processes are well known, it is nevertheless often not possible to adequately evaluate performance with respect to the quality of thermal stratification inside the tank, which is crucial for the thermodynamic effectiveness of the TES. The behaviour of a TES is experimentally investigated in cyclic charging and discharging operation in interaction with a cogeneration (CHP) unit at a test rig in the lab. From the measurements, the quality of thermal stratification is evaluated under varying conditions using different metrics such as the normalised stratification factor, the modified MIX number, the exergy number and the exergy efficiency, which extends the state of the art for CHP applications. The results show that the positioning of the temperature sensors for turning the CHP unit on and off has a significant influence on both the effective capacity of a TES and the quality of thermal stratification inside the tank. It is also revealed that positioning at least one of these sensors outside the storage tank, i.e. in the return line to the CHP unit, prevents deterioration of thermal stratification, thereby enhancing thermodynamic effectiveness. Furthermore, the effects of thermal load and thermal load profile on effective capacity and thermal stratification are discussed, even though these are much smaller compared to the effect of positioning the temperature sensors.
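For orientation, the classical MIX number that the modified variant builds on compares the height-weighted first moment of the stored energy of the measured tank with the perfectly stratified and fully mixed limit cases (standard definition from the stratification literature, not taken from this paper):

\[ M_E = \sum_{i=1}^{N} z_i E_i, \qquad \mathrm{MIX} = \frac{M_{E,\mathrm{strat}} - M_{E,\mathrm{exp}}}{M_{E,\mathrm{strat}} - M_{E,\mathrm{mix}}} \]

where z_i and E_i are the height and energy content of tank layer i; MIX = 0 indicates perfect stratification and MIX = 1 a fully mixed tank.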
This paper investigates the electrothermal stability and the predominant defect mechanism of a Schottky gate AlGaN/GaN HEMT. Calibrated 3-D electrothermal simulations are performed using a simple semiempirical dc model, which is verified against high-temperature measurements up to 440°C. To determine the thermal limits of the safe operating area, measurements up to destruction are conducted at different operating points. The predominant failure mechanism is identified to be hot-spot formation and subsequent thermal runaway, induced by large drain–gate leakage currents that occur at high temperatures. The simulation results and the high temperature measurements confirm the observed failure patterns.
Theory and practice of implementing a successful enterprise IoT strategy in the Industry 4.0 era
(2021)
Since the arrival of the internet and affordable access to technologies, digital technologies have occupied a growing place in industries, propelling us towards a fourth industrial revolution: Industry 4.0. In today’s era of digital upheaval, enterprises are increasingly undergoing transformations that lead to their digitalization. The traditional manufacturing industry is in the throes of a digital transformation accelerated by exponentially growing technologies (e.g., intelligent robots, Internet of Things, sensors, 3D printing). Around the world, enterprises are in a frantic race to implement IoT-based solutions to improve their productivity and innovation, reduce costs and strengthen their position in international markets. Considering the immense transformative potential that IoT and big data bring to the industrial sector, adopting IoT across industrial systems is a challenge that must be met to remain competitive and thus transform the factory into a smart factory. This paper describes the innovation and digitalization process, following the Industry 4.0 paradigm, for implementing a successful enterprise IoT strategy.
Hypericin has large potential in modern medicine and exhibits fascinating structural dynamics, such as multiple conformations and tautomerization. However, it is difficult to study individual conformers/tautomers, as they cannot be isolated due to the similarity of their chemical and physical properties. An approach to overcome this difficulty is to combine single-molecule experiments with theoretical studies. Time-dependent density functional theory (TD-DFT) calculations reveal that tautomerization of hypericin occurs via a two-step proton transfer with an energy barrier of 1.63 eV, whereas a direct single-step pathway has a large activation energy barrier of 2.42 eV. Tautomerization in hypericin is accompanied by a reorientation of the transition dipole moment, which can be directly observed as fluorescence intensity fluctuations. Quantitative tautomerization residence times can be obtained from the autocorrelation of the temporal emission behavior, revealing that hypericin stays in the same tautomeric state for several seconds, which can be influenced by the embedding matrix. Replacing hydrogen with deuterium further proves that the underlying process is based on tunneling of a proton. In addition, the tautomerization rate can be influenced by a λ/2 Fabry–Pérot microcavity, in which the occupation of Raman-active vibrations can alter the tunneling rate.
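The extraction of residence times from emission autocorrelation can be sketched as follows (our own toy illustration of the general principle, not the authors' analysis code; the two-state trace, the brightness levels, and the 2 s mean dwell time are assumptions):

import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.02, 200.0                    # time step and trace length in seconds
trace, state, t = [], 0, 0.0
while t < T:
    dwell = rng.exponential(2.0)       # assumed mean residence time of 2 s
    trace += [1.0 if state == 0 else 0.4] * max(1, int(dwell / dt))
    state, t = 1 - state, t + dwell
I = np.array(trace[: int(T / dt)])

# autocorrelation of the intensity fluctuations; for a two-state process it
# decays exponentially with the sum of the forward and backward switching rates
dI = I - I.mean()
ac = np.correlate(dI, dI, mode="full")[I.size - 1:]
ac /= ac[0]
tau = np.argmax(ac < np.exp(-1)) * dt  # crude 1/e estimate of the decay time
print(f"autocorrelation decay time ~ {tau:.2f} s")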
Thematic issue on human-centred ambient intelligence: cognitive approaches, reasoning and learning
(2017)
This editorial presents advances in human-centred Ambient Intelligence applications which take cognitive issues into account when modelling users (e.g., stress, attention disorders), and which learn users’ activities and preferences and adapt to them (e.g., at home, while driving a car). These papers also show AmI applications in health and education, which makes them even more valuable for society at large.
The purpose of this paper is to study the recycling form of reusing second hand clothing from a conventional fashion brand’s perspective. It clarifies which measures and activities a fashion company needs to integrate into its value chain in order to offer branded second hand merchandise in a self-operated store. The paper relies on desk-based research and aims to illustrate the topic by means of a descriptive approach, processing the existing literature. Key findings demonstrate that fashion brands need to integrate complete lifecycle strategies, sustainability communication, and reverse logistics structures, such as take-back schemes, to offer second hand clothing. The main limitations arise from the research design. Furthermore, empirical studies need to be conducted for a more fundamental understanding of the new business model.
Due to the rising need for palliative care in Russia, it is crucial to provide timely and high-quality solutions for patients, relatives, and caregivers. A methodology for the remote monitoring of patients in need of palliative care will be developed, along with the requirements for a hardware-software complex for remotely monitoring patients' health at home.
This paper investigates whether food-retailing mobile applications from Germany, Austria, the USA and the United Kingdom are meant to stay a marginal topic in grocery shopping, or whether they have the potential to significantly shape the future of grocery retailing by serving as competitive advantages that can fulfil customer requirements and satisfaction. It filters out success factors in the form of functions of grocery apps and extracts key competencies that can be used to create customer value. The Kano model can help select the right app functions. However, there are other prerequisites, such as customers’ general attitude towards technology and their acceptance of any kind of app, that play an important role in the big picture of apps in grocery retailing. This paper contributes one vital part of giving more importance to apps in grocery retailing: app functions that clearly deliver customer value. In short, apps that fit customers’ needs and provide usability and convenience clearly have the potential to shape the future of grocery retailing - if key barriers to app use are eliminated and if incentives are given that overcome scepticism.
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from using hyper-nodes and hyper-edges, which allow data structures to be presented on different abstraction levels. We prove that the model is at least equivalent in expressive power to the most popular data models. Therefore, it can be used as a supermodel for model management and data integration. We illustrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, XML model, and RDF Schema.
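As a rough illustration of what a schema-bound typed graph with hyper-edges can look like, consider the following sketch (our own example; the class names and the simple property check are assumptions, not the paper's formal definition):

from dataclasses import dataclass, field

@dataclass(frozen=True)
class NodeType:
    name: str
    properties: frozenset          # property names permitted by the schema

@dataclass
class Node:
    type: NodeType
    properties: dict = field(default_factory=dict)
    def __post_init__(self):
        unknown = set(self.properties) - self.type.properties
        if unknown:                # schema-bound: reject undeclared properties
            raise ValueError(f"{unknown} not allowed for {self.type.name}")

@dataclass
class HyperEdge:
    label: str
    members: tuple                 # may connect any number of nodes, not just two

person = NodeType("Person", frozenset({"name", "age"}))
alice, bob, carol = (Node(person, {"name": n}) for n in ("Alice", "Bob", "Carol"))
team = HyperEdge("works_together", (alice, bob, carol))   # one edge, three nodes

A hyper-node could be modelled analogously as a node whose members are themselves nodes, giving the different abstraction levels the abstract mentions.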
The typed graph model
(2020)
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from using hyper-nodes and hyper-edges, which allow data structures to be presented on different abstraction levels. We demonstrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML model.
The time has come: application of artificial intelligence in small- and medium-sized enterprises
(2022)
Artificial intelligence (AI) is not yet widely used in small- and medium-sized industrial enterprises (SMEs). The reasons for this are manifold and range from poorly understood use cases and a lack of trained employees to insufficient data. This article presents a successful design-oriented case study at a medium-sized company where the described reasons are present. In this study, future demand forecasts are generated based on historical demand data for products at the material number level using a gradient boosting machine (GBM). An improvement of 15% over the status quo (measured by the root mean squared error) could be achieved with rather simple techniques. The motivation, the method, and the first results are presented. Finally, challenges are addressed from which practitioners can derive lessons and impulses for their own projects.
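A minimal sketch of this kind of GBM demand forecast (our own illustration with scikit-learn; the company's actual features, library, and data are not disclosed in the abstract):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

def lag_features(demand, n_lags=12):
    # each row holds the previous n_lags months; the target is the next month
    X = np.array([demand[i:i + n_lags] for i in range(len(demand) - n_lags)])
    return X, np.array(demand[n_lags:])

# toy demand history for a single material number (10 years, monthly)
rng = np.random.default_rng(1)
demand = 100 + 10 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 5, 120)

X, y = lag_features(demand)
split = int(0.8 * len(X))            # time-ordered split: train on the past
model = GradientBoostingRegressor().fit(X[:split], y[:split])
rmse = np.sqrt(mean_squared_error(y[split:], model.predict(X[split:])))
print(f"hold-out RMSE: {rmse:.2f}")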
Purpose
This field study aims to investigate the interactive relationships of millennial employees' gender, supervisors' gender and country culture on conflict-management strategies (CMS) in ten countries (USA, China, Turkey, Germany, Bangladesh, Portugal, Pakistan, Italy, Thailand and Hong Kong).
Design/methodology/approach
This exploratory study extends past research by examining the interactive effects of gender × supervisor’s gender × country on the CMS within a single generation of workers, millennials. The Rahim Organizational Conflict Inventory–II, Form A was used to assess the use of the five CMS (integrating, obliging, dominating, avoiding and compromising). Data analysis found that the CMS used in the workplace are associated with the interaction of worker and supervisor genders and the national context of their work.
Findings
Data analysis (N = 2,801) was performed using multivariate analysis of covariance with work experience as a covariate. The analysis provided support for the three-way interaction. This interaction suggests that how one uses the CMS depends on one's own gender, the supervisor’s gender and the country where the parties live. Also, the covariate – work experience – was significantly associated with CMS.
Research limitations/implications
One of the limitations of this study is that the authors collected data from a collegiate sample of employed management students in ten countries. There are significant implications for leading global teams and training programs for mid-level millennials.
Practical implications
There are various conflict situations where one conflict strategy may be more appropriate than others. Organizations may have to change their policies for recruiting employees who are more effective in conflict management.
Social implications
Conflict management is important not only for managers but for all human beings. Individuals handle conflict every day, and it would be highly beneficial if they could handle it effectively and improve their gains.
Originality/value
To the best of the authors’ knowledge, no study has tested a three-way interaction of variables on CMS. This study has a wealth of information on CMS for global managers.
The tale of 1000 cores: an evaluation of concurrency control on real(ly) large multi-socket hardware
(2020)
In this paper, we set out to revisit the results of “Staring into the Abyss [...] of Concurrency Control with [1000] Cores” and analyse in-memory DBMSs on today’s large hardware. Contrary to the original assumption of the authors, today we do not see single-socket CPUs with 1000 cores. Instead, multi-socket hardware has made its way into production data centres. Hence, we follow up on this prior work with an evaluation of the characteristics of concurrency control schemes on real production multi-socket hardware with 1568 cores. To our surprise, we made several interesting findings, which we report on in this paper.
Recently described rhizolutin and collinolactone, isolated from Streptomyces Gö 40/10, share the same novel carbon scaffold. Analyses by NMR and X-ray crystallography verify the structure of collinolactone and propose a revision of rhizolutin's stereochemistry. Isotope-labeled precursor feeding shows that collinolactone is biosynthesized via a type I polyketide synthase with a Baeyer–Villiger oxidation. CRISPR-based genetic strategies led to the identification of the biosynthetic gene cluster and a high-production strain. Chemical semisyntheses yielded collinolactone analogues with inhibitory effects on the L929 cell line. Fluorescence microscopy revealed that only particular analogues induce monopolar spindles, impairing cell division in mitosis. Inspired by the Alzheimer-protective activity of rhizolutin, we investigated the neuroprotective effects of collinolactone and its analogues on glutamate-sensitive cells (HT22); indeed, natural collinolactone displays distinct neuroprotection from intracellular oxidative stress.
Few unfocused factories outperform competitors, but focus is elusive because the environment is constantly evolving, which requires changes to a factory’s key tasks. So how can focus be achieved and sustained? We present insights derived from a historical analysis of the German Hewlett-Packard server plant, which went through a series of focus changes over the years. Using this example, we provide clues for the right timing of focus changes and discuss critical structural and infrastructural changes required during focus transitions, as well as cross-functional coordination and leadership challenges. Our assertion is that production operations constitute a system that can adapt to disruptive change by using the levers of manufacturing policies to stay focused on a limited but absolutely essential task which creates a strategic advantage.
Compared to the automotive sector, where automation is the rule, in many other less standardized sectors automation is still the exception. This could soon hurt the productivity of industrialized countries, where unemployment is low and the population is aging. Phenomena like the recent downfall in productivity, due to lockdowns and social distancing for the prevention of health hazards during the COVID-19 pandemic, only add to the problem. For these reasons, the relevance, motivation and intention for more automation in less standardized sectors have probably never been higher. However, available statistics show that providers and users of technologies struggle to bring more automation into action in automation-unfriendly sectors. In this paper, we present a decision support method for investment in automation that tackles this problem: the STIC analysis. The method takes a holistic and quantitative approach, tying together technological, context-related and economic input parameters and synthesizing them in a final economic indicator. Thanks to the modelling of such parameters, it is possible to assess which technological and/or process adjustments would have the highest impact on the efficiency of the automation, thereby delivering value for both technology users and technology providers.
The success of an autonomous robotic system is influenced by several interdependent factors that are not easily identifiable. This paper sets out to lay the foundation of a new integrated approach in order to deeply examine all the parameters and understand their contribution to success. After introducing the problem, two cutting-edge autonomous systems for the unloading of containers will be presented. Then the STIC analysis, a recently developed method for modelling and interpreting all the parameters, will be introduced. The preliminary results of applying this methodology to a first case study, based on one of the two systems available to the authors, will be presented briefly. Finally, future research is recommended in order to show that this methodology can efficiently and effectively mitigate the risk that stops potential users from investing in autonomous systems in the logistics sector.
The second hand concept reflects a recently growing trend in clothing, leading to growing numbers of second hand shops and the development of new second hand retail forms. This paper concentrates on the current second hand market for fashion products and presents the different motives for second hand consumption as well as alternative consumption channels for second hand products. The findings of the paper are founded on a literature review of academic articles and case studies. Results show that there is a high potential for the second hand market due to the increasing interest of consumers in buying second hand products. The paper concentrates on the second hand market for fashion products in Western society, meaning that second hand products for disadvantaged people in poor countries were not researched. Furthermore, the paper focuses on formal second hand retail channels to see what is already on the market.
This study focuses on the different roles of social media in the promotion of a sustainable lifestyle, behaviour and consumption, especially with regard to the typically non-ethical fashion industry. The research findings include eight roles of social media influencing sustainable consumption, in contrast to prior research naming one to five impacts. Results show that social media educates and engages the young and ethically interested target group while also increasing supply chain transparency and brand or theme awareness. Furthermore, social media provides a platform for organisations’ relationship management and social interaction, since users are empowered to share experiences, which leads to a higher level of trust.
Implementation of product-service systems (PSS) requires structural changes in the way that business in manufacturing industries is traditionally conducted. Literature frequently mentions the importance of human resource management (HRM), since people are involved in the entire process of PSS development and employees are the primary link to customers. However, to this day, no study has provided empirical evidence whether and in what way the HRM of firms that implement PSS differs from the HRM of firms that solely run a traditional manufacturing-based business model. The aim of this study is to contribute to closing this gap by investigating the particular HR components of manufacturing firms that implement PSS and comparing them with the HRM of firms that do not. The context of this study is the fashion industry, which is an ideal setting since it is a mature and highly competitive industry that is well documented for causing significant environmental impact. PSS present a promising opportunity for fashion firms to differentiate and mitigate the industry’s ecological footprint. Analysis of variance (ANOVA) was conducted to analyze data of 102 international fashion firms. Findings reveal a significantly higher focus on nearly the entire spectrum of HRM components in firms that implement PSS compared with firms that do not. Empirical findings and their interpretation are utilized to propose a general framework of the role of HRM in PSS implementation. This serves as a departure point for both scholars and practitioners for further research, and fosters the understanding of the role of HRM in managing PSS implementation.
Context: Development of software intensive products and services increasingly occurs by continuously deploying product or service increments, such as new features and enhancements, to customers. Product and service developers must continuously find out what customers want through direct customer feedback and usage behaviour observation. Objective: This paper examines the preconditions for setting up an experimentation system for continuous customer experiments. It describes the RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing), illustrating the building blocks required for such a system. Method: An initial model for continuous experimentation is analytically derived from prior work. The model is matched against empirical case study findings from two startup companies and further developed. Results: Building blocks for a continuous experimentation system and infrastructure are presented. Conclusions: A suitable experimentation system requires at least the ability to release minimum viable products or features with suitable instrumentation, to design and manage experiment plans, to link experiment results with a product roadmap, and to manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and the integration of experiment results in both the product development cycle and the software development process.
Due to rapidly changing technologies and business contexts, many products and services are developed under high uncertainties. It is often impossible to predict customer behaviors and outcomes upfront. Therefore, product and service developers must continuously find out what customers want, requiring a more experimental mode of management and appropriate support for continuously conducting experiments. We have analytically derived an initial model for continuous experimentation from prior work and matched it against empirical case study findings from two startup companies. We examined the preconditions for setting up an experimentation system for continuous customer experiments. The resulting RIGHT model for Continuous Experimentation (Rapid Iterative value creation Gained through High-frequency Testing) illustrates the building blocks required for such a system and the necessary infrastructure. The major findings are that a suitable experimentation system requires the ability to design, manage, and conduct experiments, create so-called minimum viable products or features, link experiment results with a product roadmap, and manage a flexible business strategy. The main challenges are proper, rapid design of experiments, advanced instrumentation of software to collect, analyse, and store relevant data, and integration of experiment results in the product development cycle, software development process, and business strategy. This summary refers to the article The RIGHT Model for Continuous Experimentation, published in the Journal of Systems and Software [Fa17].
The relevance of technology knowledge in digital transformation, especially in small and medium-sized enterprises (SMEs) that are still largely dependent on physical human capital, has become increasingly obvious. This is due to the rapid revolution in the business environment coupled with a growing number of real-world examples of firms disrupted by advancements in technological knowledge. Consequently, we find it increasingly vital for SMEs to spot and mitigate threats and to take advantage of opportunities arising from the dynamism of digital transformation.
Our study explores the relevance of technology knowledge in SMEs for digital transformation, uncovering the opportunities, roadmaps, and models that SMEs can leverage to gain a competitive edge.
We conclude that, irrespective of the relevance of technology knowledge for digital transformation and its low costs and accessibility, SMEs are yet to realize the full potential of technological knowledge. This is mainly because technologies appear, change and vanish so rapidly in the digital age that gaining a proper understanding without dedicated resources is extremely difficult for SMEs - making them less competitive than incumbent large firms in the market.
Purpose – The purpose of this paper is to examine the mediating effect of psychological contract breach on the relationship between job insecurity and counterproductive workplace behavior (CWB) and the moderating effect of employment status in this relationship.
Design/methodology/approach – Data were collected from 212 supervisor–subordinate dyads in a large Chinese state-owned air transportation group. AMOS 17.0 software was used to examine the hypothesized predictions and the theoretical model.
Findings – The results showed that psychological contract breach partially mediates the effect of job insecurity on CWB, including organizational counterproductive workplace behavior and interpersonal counterproductive workplace behavior. In addition, the relationships between job insecurity, psychological contract breach and CWB differ significantly between permanent workers and contract workers.
Originality/value – The present study provides a new insight into explaining the linkage between job insecurity and negative work behaviors as well as suggestions to managers on minimizing the harmful effects of job insecurity.
In standardized sectors such as the automotive industry, the cost-benefit ratio of automation solutions is high, as they contribute to increasing capacity, decreasing costs and improving product quality. In less standardized application fields, the contribution of automation to improvements in capacity, cost and quality blurs. The automation of complex and unstructured tasks requires sophisticated, expensive and low-performing systems, whose impact on product quality is oftentimes not directly perceived by customers. As a result, the full automation of process chains in the general manufacturing or logistics sectors is often a suboptimal solution. Departing from the false idea that a process should be either fully automated or fully manual, this paper presents a novel heuristic method for the design of lean human-robot interaction, the Quality Interaction Function Deployment, with the objective of finding the "right level of automation". Functions are divided among human and automated agents, and several automation scenarios are created and evaluated with respect to their compliance with the requirements of all process stakeholders. As a result, synergies among operators (manual tasks) and machines (automated tasks) are improved, thus reducing time losses and increasing productivity.
Public transport maps are typically designed to support route-finding tasks for passengers, while also providing an overview of stations, metro lines, and city-specific attractions. Most of those maps are designed as static representations, perhaps placed in a metro station or printed in a travel guide. In this paper, we describe a dynamic, interactive public transport map visualization enhanced by additional views of the dynamic passenger data on different levels of temporal granularity. Moreover, we also allow extra statistical information in the form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We also integrated a graph-based view of user-selected routes, a way to interactively compare those routes, an attribute- and property-driven automatic computation of specific routes for one map as well as for all available maps in our repertoire, and, finally, the most important sights in each city as extra information to include in a user-selected route. We illustrate the usefulness of our interactive visualization and map navigation system by applying it to the railway system of Hamburg in Germany, taking into account the extra passenger data. As another indication of the usefulness of the interactively enhanced metro maps, we conducted a controlled user experiment with 20 participants.
The purpose of this paper is to highlight the potentials and limitations of the prosumer concept in fashion retail. The paper illustrates the evolution of prosumption and the directions in which the concept is being developed. The primary research is based on a literature review containing different sources of academic and non-academic references. Findings suggest that the prosumer concept is no new phenomenon. Recently, it has moved into the focus of companies that have noted its efficiency in engaging with customers to strengthen brand loyalty. An increasing number of companies offer innovative business models that build on the concept. However, lately smart prosuming machines have been changing the objectives of the concept. Even though the prosumer concept has existed for many years and scholars continue to investigate its potentials, the fashion industry has been researched comparatively little up to now.
Context: Organizations are increasingly challenged by high market dynamics, rapidly evolving technologies and shifting user expectations. In consequence, many organizations are struggling with their ability to provide reliable product roadmaps by applying traditional roadmapping approaches. Currently, many companies are seeking opportunities to improve their product roadmapping practices and strive for new roadmapping approaches. A typical first step towards advancing the roadmapping capabilities of an organization is to assess the current situation. Therefore, the so-called maturity model DEEP for assessing the product roadmapping capabilities of companies operating in dynamic and uncertain environments has been developed and published by the authors.
Objective: The aim of this article is to conduct an initial validation of the DEEP model in order to understand its applicability better and to see if important concepts are missing. In addition, the aim of this article is to evolve the model based on the findings from the initial validation.
Method: The model was given to practitioners such as product managers with the request to perform a self-assessment of the current product roadmapping practices in their company. Afterwards, interviews were conducted with each participant in order to gain insights.
Results: The initial validation revealed that some of the stages of the model needed to be rearranged, and minor usability issues were found. The overall structure of the model was well received. The study resulted in version 1.1 of the DEEP product roadmap maturity model, which is also presented in this article.
THE PROBLEM: Companies create problems for customers and employees when product innovation goes unmanaged. Eventually, excessive operational complexity hurts the bottom line.
THREE SOLUTIONS: Focus on product integration, not product proliferation. Make sure your product developers work closely with customer-facing and operational employees. And settle on a high-level purpose that can guide decision making.
Steadily growing research material in a variety of databases, repositories and clouds makes academic content harder than ever to discover. Finding adequate material for one's own research, however, is essential for every researcher. Based on recent developments in the field of artificial intelligence and the identified digital capabilities of future universities, a change in the basic work of academic research is predicted. This study defines the idea of how artificial intelligence could simplify academic research at a digital university. Today's studies in the field of AI demonstrate its true potential and its commanding impact on academic research.
The reduced research and development (R&D) efficiency, strong competition from generics, increased cost pressure from payers, and an increased biological complexity of new target indications have resulted in a rethinking and a change from a traditional and more closed R&D model in the pharmaceutical industry toward the new paradigm of open innovation. In the past years, pharmaceutical companies have broadened their external networks toward research collaborations with academic institutes, technology providers, or codevelopment partners. To fulfill the demand to reduce timelines and costs, research-based pharmaceutical companies started to outsource R&D activities. In addition, internal R&D processes were adjusted to the more open R&D model and new processes such as alliance management were established. The corporate frontier of pharmaceutical companies became permeable and more open. As a result, the focus of pharmaceutical R&D expanded from a purely internal toward a mixed internal and external model. Today, the U.S. pharmaceutical company Eli Lilly may have established the most open model toward external innovation, as it has integrated its innovation processes with its business model. Other companies are following this more open R&D model with newer concepts such as new frontier sciences, drug discovery alliances, public-private partnerships, innovation incubators, virtual R&D, crowdsourcing, open source innovation, and innovation camps.
Electromigration (EM) is becoming a progressively severe reliability challenge due to increased interconnect current densities. A shift from traditional (post-layout) EM verification to robust (pro-active) EM-aware design - where the circuit layout is designed with individual EM-robust solutions - is urgently needed. This tutorial gives an overview of EM and its effects on the reliability of present and future integrated circuits (ICs). We introduce the physical EM process and present its specific characteristics that can be affected during physical design. Examples of EM countermeasures applied in today’s commercial design flows are presented. We show how to improve the EM robustness of metallization patterns, and we also consider mission profiles to obtain application-oriented current density limits. The increasing interaction of EM with thermal migration is investigated as well. We conclude with a discussion of application examples to shift from the current post-layout EM verification towards an EM-aware physical design process. Its methodologies, such as EM-aware routing, increase the EM robustness of the layout with the overall goal of reducing the negative impact of EM on the circuit’s reliability.
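For context, application-oriented current density limits of the kind mentioned here are commonly derived from Black's classical electromigration equation (standard background, not specific to this tutorial):

\[ \mathrm{MTTF} = A\, j^{-n} \exp\!\left(\frac{E_a}{k_B T}\right) \]

where MTTF is the median time to failure, j the current density, n a material-dependent exponent, E_a the activation energy, k_B the Boltzmann constant, and T the temperature; a mission profile enters through the temperatures and current waveforms the application imposes.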
This paper describes the design and outcomes of an experimental study that addresses stock-and-flow failure from a cognitive perspective. It is based on the assumption that holistic (global) and analytic (local) processing are important cognitive mechanisms underlying the ability to infer the behavior of dynamic systems. In a stock-and-flow task that is structurally equivalent to the department store task, we varied the format in which participants are primed to think about an environmental system, in particular whether they are primed to concentrate on lower-level (local) or higher-level (global) system elements. 148 psychology, geography and business students participated in our study. The students’ answers support our hypothesis that global processing increases participants’ ability to infer the overall system behavior. The beneficial influence of global presentation is even stronger when data are presented numerically rather than in the form of a graph. Our results suggest presenting complex dynamic systems in a way that facilitates global processing. This is particularly important as policy designers and decision makers deal with complex issues in their everyday and professional lives.
During the first years of their employment, graduates are a liability to industry. Employers go the extra mile to bridge the gap between exiting university and the profitable employment of engineering graduates. Unfortunately, some cannot take this risk. Given this scenario, this paper presents a learning factory approach as a platform for the application of knowledge so as to develop the required engineering competences in South African engineering graduates before they enter the labour market. It spells out the components of a Stellenbosch University Learning Factory geared towards producing engineering graduates with the required industrial skills. It elaborates on the didactics embedded in the learning factory environment, tailor-made to produce engineers who can productively contribute to the growth of industry upon exiting university.
The paper describes a new stimulus using learning factories and an academic research programme - an M.Sc. in Digital Industrial Management and Engineering (DIME) comprising a double degree - to enhance international collaboration between four partner universities. The programme will be structured in such a way as to maintain or improve the level of innovation at the learning factories of each partner. The partners agreed to use learning factory focus areas along with DIME learning modules to stimulate international collaboration. Furthermore, they identified several research areas within the framework of the DIME programme to encourage horizontal and vertical collaboration. Vertical collaboration connects faculty expertise across the learning factory network to advance knowledge in one of the focus areas, while horizontal collaboration connects knowledge and expertise across multiple focus areas. Together they offer a platform for students to develop the disciplinary and cross-disciplinary applied research skills necessary for addressing the complex challenges faced by industry. Hence, the university partners have the opportunity to develop the learning factory capabilities in alignment with the smart manufacturing concept. The learning factory is thus an important pillar in this venture. While postgraduate students/researchers in the DIME programme are the enablers who ensure the success of entire projects, the learning factory provides a learning environment which is entirely conducive to fostering these successful collaborations. Ultimately, the partners are focussed on utilising smart technologies in line with the digitalization of the production process.
Internet of things innovations and the industrial internet are these days becoming more and more decisive factors of future success for companies. Especially manufacturing-oriented SMEs will face the challenge of developing innovative technology-driven business models alongside technology innovations in this field, which will be essential for future competitiveness. Failing to develop these technology-driven business models in an internationally highly competitive environment will have a serious impact both on companies and on society. Hence, securing the economic stability and success of these technology-driven business models is an indispensable task. To identify challenges for innovative industrial internet business models, it is first necessary to understand what the industrial internet means to the leading parties as well as to the applying companies and start-ups in the field. Second, challenges from general business model development are outlined. In a third step, risks and challenges in business model development are discussed with regard to the special characteristics of technology-driven business models in the context of the industrial internet and the important role of the technological key component of the business model. In particular, the capability to deal with an integrated consideration of the indivisibly linked economic and technological dimensions of these business models is questioned. Fourth, the specific challenges for industrial internet business models are derived. On the basis of these results, it is also discussed what might be done to handle these challenges successfully, with the goal of turning them into chances. The need for future research on the integration of the risk management perspective into the development of these technology-driven business models is derived. This will help established companies and start-ups to realize great technological innovations for the industrial internet in sound and successful innovative business models.
Purpose
Injury or inflammation of the middle ear often results in persistent tympanic membrane (TM) perforations, leading to conductive hearing loss (HL). In some cases, however, the magnitude of the HL exceeds what is attributable to the TM perforation alone. The aim of this study is to better understand the effects of the location and size of TM perforations on the sound transmission properties of the middle ear.
Methods
The middle ear transfer functions (METF) of six human temporal bones (TB) were compared before and after perforating the TM at different locations (anterior or posterior lower quadrant) and to different degrees (1 mm, ¼ of the TM, ½ of the TM, and full ablation). The sound-induced velocity of the stapes footplate was measured using single-point laser Doppler vibrometry (LDV). The METF were correlated with a finite element (FE) model of the middle ear in which similar alterations were simulated.
Results
The measured and calculated METF showed frequency- and perforation-size-dependent losses at all perforation locations. Starting at low frequencies, the loss extended to higher frequencies as perforation size increased. In direct comparison, posterior TM perforations affected the transmission properties more strongly than anterior perforations: the asymmetry of the TM causes the malleus-incus complex to rotate, resulting in larger deflections in the posterior than in the anterior TM quadrants. Simulations in the FE model with a sealed cavity show that small perforations reduce TM rigidity and thus increase the oscillation amplitude of the TM, mainly above 1 kHz.
Conclusion
The size and location of TM perforations have a characteristic influence on the METF. Correlating the experimental LDV measurements with an FE model contributes to a better understanding of the pathological mechanisms of middle-ear diseases. If small perforations with significant HL are observed in daily clinical practice, additional middle-ear pathologies should be considered. Further investigation of the loss of TM pretension due to perforations may be informative.
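To make the compared quantity concrete, the sketch below shows one common way an METF magnitude is derived from LDV data: the ratio of stapes footplate velocity to ear-canal sound pressure per frequency, expressed in dB. This is a minimal illustration of the concept, not code or data from the study; all numbers and variable names are invented placeholders.

import numpy as np

def metf_db(stapes_velocity, sound_pressure):
    # METF magnitude in dB: 20 * log10(|v_stapes / p_ec|)
    return 20 * np.log10(np.abs(stapes_velocity / sound_pressure))

freqs = np.array([250, 500, 1000, 2000, 4000, 8000])          # measurement frequencies, Hz
p_ec = np.ones(freqs.shape, dtype=complex)                    # ear-canal pressure (Pa), normalized placeholder
v_intact = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 0.5]) * 1e-4    # stapes velocity, m/s (invented values)
v_perf = v_intact * np.array([0.2, 0.3, 0.6, 0.9, 1.0, 1.0])  # invented low-frequency attenuation after perforation

loss_db = metf_db(v_intact, p_ec) - metf_db(v_perf, p_ec)
for f, loss in zip(freqs, loss_db):
    print(f"{f:5d} Hz: perforation-induced loss {loss:5.1f} dB")

The placeholder attenuation pattern mirrors the qualitative finding above: the loss is largest at low frequencies and vanishes toward high frequencies for a small perforation.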
Due to the wide variety of benign and malignant salivary gland tumors, classifying them and determining their malignant behavior on the basis of histomorphological criteria can be difficult and sometimes impossible. Spectroscopic procedures can acquire molecular biological information without destroying the tissue during the measurement process. Since several tissue preparation procedures exist, our study investigated the impact of these preparations on the chemical composition of healthy and tumorous salivary gland tissue using Fourier-transform infrared (FTIR) microspectroscopy. Sequential tissue cross-sections were prepared from native, formalin-fixed, and formalin-fixed paraffin-embedded (FFPE) tissue and analyzed; the FFPE cross-sections were then dewaxed and remeasured. Using principal component analysis (PCA) combined with discriminant analysis (DA), robust models for distinguishing the sample preparations were built individually for each parotid tissue type. The PCA-DA model evaluation showed a high similarity between native and formalin-fixed tissues based on their chemical composition. Formalin-fixed tissues are thus highly representative of native samples, and their robustness facilitates the transfer of the analysis from the scientific laboratory into clinical routine. Furthermore, dewaxing the cross-sections entails a loss of molecular information. Our study successfully demonstrated how FTIR microspectroscopy can be used as a powerful tool within existing clinical workflows.
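As a minimal sketch of the PCA-DA approach described (not the authors' actual pipeline or data), the example below reduces synthetic FTIR-like spectra with PCA and then separates the three preparation classes with linear discriminant analysis, using scikit-learn. The spectra, class offsets, and component count are invented placeholders.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_points = 40, 600  # spectra per preparation, spectral points (placeholders)

# Synthetic absorbance spectra for three preparations: a shared baseline
# plus a small class-specific offset and noise (stand-ins for real FTIR data).
base = np.sin(np.linspace(0, 12, n_points))
X = np.vstack([base + 0.05 * k + 0.1 * rng.standard_normal((n_per_class, n_points))
               for k in range(3)])
y = np.repeat(["native", "formalin", "FFPE"], n_per_class)

# PCA compresses the high-dimensional spectra; LDA then discriminates the classes.
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

Cross-validation is used here because a PCA-DA model fitted and evaluated on the same spectra would overstate its robustness.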
Over the past decade, the rapid proliferation and widespread adoption of social media for marketing purposes has been observable across all technological and digital touch points. This paper focuses on the implementation of social media marketing during mega sports events. We examine its impact by analyzing Adidas' and Nike's social media campaigns during the FIFA World Cup 2014 in Brazil. What impact did the social media activities of Nike and Adidas have on their Twitter and Facebook presence? What additional value did these activities contribute to the targets of their overall marketing campaigns? To answer these questions, an empirical study was conducted in which several hypotheses were formulated and tested.
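As a hedged illustration of how one such hypothesis could be tested (this is not the paper's actual analysis, data, or metric), the sketch below applies a Welch two-sample t-test to invented per-post engagement counts for two campaigns.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical engagement counts (e.g. retweets per post) for two brands;
# both samples are synthetic placeholders, not observed campaign data.
brand_a = rng.poisson(lam=120, size=200)
brand_b = rng.poisson(lam=135, size=200)

# H0: equal mean engagement. Welch's test avoids assuming equal variances.
t_stat, p_value = stats.ttest_ind(brand_a, brand_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # reject H0 at the 5% level if p < 0.05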
Perivascular stromal cells, including mesenchymal stem/stromal cells (MSCs), secrete paracrine factors in response to exercise training that can facilitate improvements in muscle remodeling. This study was designed to test the capacity of muscle-resident MSCs (mMSCs) isolated from young mice to release regenerative proteins in response to mechanical strain in vitro, and subsequently to determine the extent to which strain-stimulated mMSCs can enhance skeletal muscle and cognitive performance in a mouse model of uncomplicated aging. Protein arrays confirmed a robust increase in protein release 24 h after an acute bout of mechanical strain in vitro (10%, 1 Hz, 5 h) compared to non-strain controls. Aged (24-month-old) C57BL/6 mice received bilateral intramuscular injections of saline, non-strain control mMSCs, or mMSCs subjected to a single bout of mechanical strain in vitro (4 × 10⁴ cells). No significant changes were observed between groups in muscle weight, myofiber size, maximal force, or satellite cell quantity at 1 or 4 weeks. Peripheral perfusion in muscle was significantly increased 4 weeks post-mMSC injection (p < 0.05), yet no difference was noted between control and preconditioned mMSCs. Intramuscular injection of preconditioned mMSCs increased the number of new neurons and astrocytes in the dentate gyrus of the hippocampus compared to both control groups (p < 0.05), with a trend toward improved water maze performance (p = 0.07). These results demonstrate that acute injection of exogenously stimulated muscle-resident stromal cells does not robustly affect aged muscle structure and function, yet increases the survival of new neurons in the hippocampus.
IT Governance (ITG) is crucial because of its significant impact on enabling innovation and enhancing firm performance. Accordingly, ITG has become important in both academic and practical research over the last decade. Although several studies have investigated individual aspects of ITG success and its impact on single determinants, the causal relationship through which ITG promotes firm performance remains unclear, so a more comprehensive understanding of the link between ITG and firm performance is needed. To address this gap, this research aims to understand how ITG and firm performance are related. We therefore conducted a systematic literature review (1) to create an overview of how current research structures the link between ITG mechanisms and firm performance, (2) to uncover key constructs as potential mediators or moderators of the general link between ITG and performance, and (3) to lay the groundwork for future studies on the ITG-firm performance relationship.