Informatik
Towards Automated Surgical Documentation using automatically generated checklists from BPMN models
(2021)
The documentation of surgeries is usually created from memory only after the operation, which imposes additional effort on the surgeon and carries the risk of imprecise, shortened reports. Displaying process steps in the form of checklists and automatically creating surgical documentation from the completed process steps could serve as a reminder, standardize the surgical procedure, and save time for the surgeon. Based on two works from Reutlingen University, which implemented the creation of dynamic checklists from Business Process Model and Notation (BPMN) models and the storage of the times at which a process step was completed, a prototype was developed for an Android tablet that extends the dynamic checklists with functions such as uploading photos and files, manual user entries, the interception of foreseeable deviations from the normal course of operations, and the automatic creation of OR documentation.
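The core idea, deriving checklist items from a BPMN process model, can be illustrated with a minimal sketch. The snippet below parses the task elements of a BPMN 2.0 XML file and turns them into timestamped checklist entries; the file name and the reduction to plain task elements are illustrative assumptions, not the prototype's actual implementation.

```python
import xml.etree.ElementTree as ET
from datetime import datetime

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def checklist_from_bpmn(path):
    """Extract the task elements of a BPMN model as checklist items."""
    root = ET.parse(path).getroot()
    return [{
        "id": task.get("id"),
        "label": task.get("name", "unnamed step"),
        "done": False,
        "completed_at": None,   # set when the step is checked off
    } for task in root.iter(f"{{{BPMN_NS}}}task")]

def complete_step(items, step_id):
    """Mark a step as done and record the time for the OR report."""
    for item in items:
        if item["id"] == step_id:
            item["done"] = True
            item["completed_at"] = datetime.now().isoformat()

# Usage with a hypothetical model file:
# steps = checklist_from_bpmn("appendectomy.bpmn")
# complete_step(steps, steps[0]["id"])
```

The recorded timestamps are exactly the data from which a surgical report could later be generated automatically.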
Science-based analysis for climate action: how HSBC Bank uses the En-ROADS climate policy simulation
(2021)
In 2018, the Intergovernmental Panel on Climate Change (IPCC, 2018) found that rapid decarbonization and net negative greenhouse gas (GHG) emissions by mid-century are required to "hold the increase in global average temperature to well below 2°C above pre-industrial levels and pursue efforts to limit the temperature increase to 1.5°C," as stipulated by the Paris Agreement (UNFCCC, 2015, p. 2). Meeting these goals reduces physical climate-related risks from, for example, sea-level rise, ocean acidification, extreme weather, water shortages, declining crop yields, and other impacts. These impacts threaten our economy, security, health, and lives.
At the same time, policies to mitigate these harms by rapidly reducing GHG emissions can create transition risks for businesses - for example, stranded assets and loss of market value for fossil fuel producers and firms dependent on fossil energy (Carney, 2019). Rapid decarbonization requires an unprecedented energy transition (IEA, 2021a) driven by and affecting economic players including businesses, asset managers, and investors in all sectors and all countries (Kriegler et al., 2014).
However, GHG emissions are not falling rapidly enough to meet the goals of the Paris Agreement (Holz et al., 2018). The UNFCCC (2021) found that the emissions reductions pledged by all nations as of early 2021 "fall far short of what is required, demonstrating the need for Parties to further strengthen their mitigation commitments under the Paris Agreement" (2021, p. 5). Businesses are faring no better. Despite high-profile calls to action from influential firms such as BlackRock (Fink, 2018, 2021), corporate action to meet climate goals has thus far fallen short (e.g., the analysis of the German DAX 30 companies' emissions targets by the NGO "right." (Right, 2019)). Instead of implementing climate strategies that might mitigate the risks, managers are often caught up in "firefighting" and capability traps that erode the resources needed for ambitious climate action (Sterman, 2015). Firms may also exaggerate environmental accomplishments, leading to greenwashing (Lyon and Maxwell, 2011); implement policies that are vague, rely on unproven offsets, or are not climate neutral (e.g. Sterman et al., 2018); or simply take no action at all (Delmas and Burbano, 2011; Sterman, 2015).
Adding to the confusion are difficulties evaluating the effectiveness of different climate policies. Misperceptions include wait-and-see approaches (Dutt and Gonzalez, 2012; Sterman, 2008), underestimating time delays and ignoring the unintended consequences of policies (Sterman, 2008), and beliefs in "silver bullet" solutions (Gilbert, 2009; Kriegler et al., 2013; Shackley and Dütschke, 2012). These beliefs arise in part because the climate–energy system is a high-dimensional dynamic system characterized by long time delays, multiple feedback loops, and nonlinearities (Sterman, 2011), while even simple systems are difficult for people to understand (Booth Sweeney and Sterman, 2000; Cronin et al., 2009; Kapmeier et al., 2017). Although senior executives might receive briefings on climate change, simply providing more information does not necessarily lead to more effective action (Pearce et al., 2015; Sterman, 2011).
Alternatively, interactive approaches to learning about climate change and policies to mitigate it can trigger climate action (Creutzig and Kapmeier, 2020). Decision-makers require tools and methods grounded in science that enable them to learn for themselves how a low-carbon economy can be achieved and how climate policies condition physical and transition risks. The system dynamics climate–energy simulation En-ROADS (Energy-Rapid Overview and Decision Support; Jones et al., 2019b), codeveloped by the climate think-tank Climate Interactive and the MIT Sloan Sustainability Initiative, provides such a tool.
Here we show how En-ROADS helps HSBC Bank U.S.A., the American subsidiary of U.K.-based multinational financial services company HSBC Holdings plc, focus its global sustainability strategy on activities with higher impact and relevance, communicate and implement the strategy, understand transition risks, and better align the strategy with global climate goals. We show how the versatility and interactivity of En-ROADS increases its reach throughout the organization. Finally, we discuss challenges and lessons learned that may be helpful to other organizations.
Intra-operative fluoroscopy-guided assistance system for transcatheter aortic valve implantation
(2014)
A new surgical assistance system has been developed to assist the correct positioning of the AVP during transapical TAVI. The developed assistance system automatically defines the target area for implanting the AVP under live 2-D fluoroscopy guidance. Moreover, this surgical assistance system works with low levels of contrast agent for the final deployment of the AVP, thereby reducing long-term negative effects, such as renal failure, in elderly and high-risk patients.
Background and purpose: Transapical aortic valve replacement (TAVR) is a recent minimally invasive surgical treatment technique for elderly and high-risk patients with severe aortic stenosis. In this paper, a simple and accurate image-based method is introduced to aid the intra-operative guidance of the TAVR procedure under 2-D X-ray fluoroscopy.
Methods: The proposed method fuses a 3-D aortic mesh model and anatomical valve landmarks with live 2-D fluoroscopic images. The 3-D aortic mesh model and landmarks are reconstructed from an interventional X-ray C-arm CT system, and a target area for valve implantation is automatically estimated using these aortic mesh models. Based on a template-based tracking approach, the overlay of the visualized 3-D aortic mesh model, landmarks, and target area of implantation is updated onto the fluoroscopic images by approximating the aortic root motion from the motion of a pigtail catheter without contrast agent. In addition, a rigid intensity-based registration algorithm is used to continuously track the aortic root motion in the presence of contrast agent. Furthermore, sensorless tracking of the aortic valve prosthesis is provided to guide the physician in placing the prosthesis accurately within the estimated target area of implantation.
Results: Retrospective experiments were carried out on fifteen patient datasets from the clinical routine of TAVR. The maximum displacement errors were less than 2.0 mm for both the dynamic overlay of aortic mesh models and the image-based tracking of the prosthesis, which is within clinically accepted ranges. Moreover, success rates of the proposed method above 91.0% were obtained for all tested patient datasets.
Conclusion: The results showed that the proposed method for computer-aided TAVR is potentially a helpful tool for physicians by automatically defining the accurate placement position of the prosthesis during the surgical procedure.
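As an illustration of the template-based tracking step mentioned in the methods, the following sketch matches a catheter template against a search window in a fluoroscopic frame via normalized cross-correlation. It is a simplified stand-in for the paper's approach, with NumPy arrays standing in for actual image data.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation of two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def track_template(frame, template, prev_xy, search=20):
    """Find the template near its previous position in the new frame."""
    h, w = template.shape
    best_score, best_xy = -1.0, prev_xy
    x0, y0 = prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = x0 + dx, y0 + dy
            if 0 <= x and 0 <= y and y + h <= frame.shape[0] and x + w <= frame.shape[1]:
                score = ncc(frame[y:y + h, x:x + w], template)
                if score > best_score:
                    best_score, best_xy = score, (x, y)
    return best_xy  # translation applied to the overlaid mesh model
```

In the paper's setting, the displacement of the tracked pigtail catheter approximates the aortic root motion, so the overlay can follow the anatomy without contrast agent.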
"Civil rights activists sue against the sharing of health data" was the headline on spiegel.de (spiegel.de, 2022) on 29 April 2022. The case concerns the sharing of pseudonymized data of 73 million insured persons by the statutory health insurers. These data are to be made available for research. The plaintiffs doubt that the data cannot be de-anonymized. This current example illustrates a concrete and relevant use case of anonymization/pseudonymization in an actuarial context. It can be assumed that its relevance will continue to grow in the coming years.
At the latest since the GDPR (DSGVO) came into force, data protection has been omnipresent and poses major challenges for us actuaries. European initiatives to create a single market for data are intended to make it easier to share data and, for example, to make it available to third parties for research purposes, but they also raise many questions. An obvious solution is to anonymize or pseudonymize data. But what does that mean in concrete terms, and what consequences follow from it? To what degree must data be anonymized, and which re-identification risks remain?
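What pseudonymization means in concrete terms can be made tangible with a small sketch: identifiers are replaced by keyed hashes, so records remain linkable across datasets while only the key holder can reverse the mapping. This is a minimal illustration of the concept, not a statement about the procedure the statutory health insurers actually use.

```python
import hmac
import hashlib

SECRET_KEY = b"held-by-a-trusted-third-party"  # illustrative key

def pseudonymize(insured_id: str) -> str:
    """Replace an identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, the pseudonym cannot be recomputed without
    the key, which mitigates dictionary attacks on known IDs."""
    return hmac.new(SECRET_KEY, insured_id.encode(), hashlib.sha256).hexdigest()

record = {"insured_id": "A123456789", "diagnosis": "E11.9"}
record["insured_id"] = pseudonymize(record["insured_id"])
```

Note that pseudonymized records can often still be re-identified by combining quasi-identifiers such as age, postal code, and rare diagnoses, which is precisely the residual risk the plaintiffs point to.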
Parallel applications are the computational backbone of major industry trends and grand challenges in science. Whereas these applications are typically constructed for dedicated high-performance computing clusters and supercomputers, the cloud is emerging as an attractive execution environment that provides on-demand resource provisioning and a pay-per-use model. However, cloud environments require specific application properties that may restrict parallel application design. As a result, design trade-offs are required to simultaneously maximize parallel performance and benefit from cloud-specific characteristics.
In this paper, we present a novel approach to assess the cloud readiness of parallel applications based on the design decisions made. By discovering and understanding the implications of these parallel design decisions for an application's cloud readiness, our approach supports the migration of parallel applications to the cloud. We introduce an assessment procedure, its underlying meta-model, and a corresponding instantiation to structure this multi-dimensional design space. For evaluation purposes, we present an extensive case study comprising three parallel applications and discuss their cloud readiness based on our approach.
Container virtualization has evolved into a key technology for deployment automation in line with the DevOps paradigm. Whereas container management systems facilitate the deployment of cloud applications by employing container-based artifacts, parts of the deployment logic have already been applied earlier to build these artifacts. Current approaches do not integrate these two deployment phases in a comprehensive manner. Limited knowledge of the application software and middleware encapsulated in container-based artifacts leads to maintainability and configuration issues. Besides, the deployment of cloud applications is based on custom orchestration solutions, leading to lock-in problems. In this paper, we propose a two-phase deployment method based on the TOSCA standard. We present integration concepts for TOSCA-based orchestration and deployment automation using container-based artifacts. Our two-phase deployment method enables capturing and aligning all the deployment logic related to a software release, leading to better maintainability. Furthermore, we build a container management system, composed of a TOSCA-based orchestrator on top of Apache Mesos, to deploy container-based cloud applications automatically.
Elasticity is considered the most beneficial characteristic of cloud environments, distinguishing the cloud from clusters and grids. Whereas elasticity has become mainstream for web-based, interactive applications, how to leverage elasticity for applications from the high-performance computing (HPC) domain, which heavily rely on efficient parallel processing techniques, is still a major research challenge. In this work, we specifically address the challenges of elasticity for parallel tree search applications. Well-known meta-algorithms based on this parallel processing technique include branch-and-bound and backtracking search. We show that their characteristics render static resource provisioning inappropriate and make the capability of elastic scaling desirable. Moreover, we discuss how to construct an elasticity controller that reasons about the scaling behavior of a parallel system at runtime and dynamically adapts the number of processing units according to user-defined cost and efficiency thresholds. We evaluate a prototypical elasticity controller based on our findings by employing several benchmarks for parallel tree search and discuss the applicability of the proposed approach. Our experimental results show that, by means of elastic scaling, the performance can be controlled according to user-defined thresholds, which cannot be achieved with static resource provisioning.
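A minimal sketch of such an elasticity controller, assuming the parallel system exposes its current efficiency and a scaling operation (the interface below is an illustrative assumption): when efficiency stays above an upper threshold the controller scales out, and when it falls below a lower bound it scales in.

```python
import time

class ParallelSystem:
    """Illustrative stand-in for a running parallel tree search."""
    def __init__(self, units=4, steps=10):
        self.units, self.steps = units, steps
    def processing_units(self):
        return self.units
    def measured_efficiency(self):
        return 0.9 / self.units ** 0.3   # toy model: efficiency drops with scale
    def scale_to(self, n):
        self.units = n
    def search_finished(self):
        self.steps -= 1
        return self.steps <= 0

def elasticity_controller(system, eff_high=0.8, eff_low=0.5,
                          min_units=1, max_units=64, interval=1):
    """Keep the measured parallel efficiency inside a user-defined
    corridor by growing or shrinking the number of processing units."""
    while not system.search_finished():
        eff = system.measured_efficiency()
        n = system.processing_units()
        if eff > eff_high and n < max_units:
            system.scale_to(min(n * 2, max_units))    # efficient: add units
        elif eff < eff_low and n > min_units:
            system.scale_to(max(n // 2, min_units))   # wasteful: release units
        time.sleep(interval)

elasticity_controller(ParallelSystem())
```

The corridor bounds correspond to the user-defined cost and efficiency thresholds described above; the doubling/halving policy is one simple choice among many.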
The cloud evolved into an attractive execution environment for parallel applications, which make use of compute resources to speed up the computation of large problems in science and industry. Whereas Infrastructure as a Service (IaaS) offerings have been commonly employed, more recently, serverless computing emerged as a novel cloud computing paradigm with the goal of freeing developers from resource management issues. However, as of today, serverless computing platforms are mainly used to process computations triggered by events or user requests that can be executed independently of each other and benefit from on-demand and elastic compute resources as well as per-function billing. In this work, we discuss how to employ serverless computing platforms to operate parallel applications. We specifically focus on the class of parallel task farming applications and introduce a novel approach to free developers from both parallelism and resource management issues. Our approach includes a proactive elasticity controller that adapts the physical parallelism per application run according to user-defined goals. Specifically, we show how to consider a user-defined execution time limit after which the result of the computation needs to be present while minimizing the associated monetary costs. To evaluate our concepts, we present a prototypical elastic parallel system architecture for self-tuning serverless task farming and implement two applications based on our framework. Moreover, we report on performance measurements for both applications as well as the prediction accuracy of the proposed proactive elasticity control mechanism and discuss our key findings.
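The proactive part, choosing a degree of parallelism that meets a user-defined time limit at minimal cost, can be sketched as follows. The runtime model (evenly divisible tasks, a fixed startup overhead, per-second billing) is a deliberate simplification for illustration, not the paper's actual prediction model.

```python
import math

def plan_parallelism(total_task_time_s, deadline_s,
                     startup_overhead_s=1.0, max_functions=1000):
    """Smallest number of serverless function instances that finishes a
    task farm within the deadline, assuming even task distribution."""
    usable = deadline_s - startup_overhead_s
    if usable <= 0:
        raise ValueError("deadline shorter than startup overhead")
    n = math.ceil(total_task_time_s / usable)
    if n > max_functions:
        raise ValueError("deadline not reachable within platform limits")
    return n

# With per-second billing, total cost is roughly proportional to total
# compute time, so the smallest n that meets the deadline also roughly
# minimizes cost under this simplified model.
print(plan_parallelism(total_task_time_s=3600, deadline_s=120))  # -> 31
```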
Background: Internationally, teledermatology has proven to be a viable alternative to conventional physical referrals. Travel costs and referral times are reduced while patient safety is preserved. Patients from rural areas in particular benefit from this healthcare innovation. Despite these established facts and positive experiences from neighboring EU countries such as the Netherlands and the United Kingdom, Germany has not yet implemented store-and-forward teledermatology in routine care.
Methods: The TeleDerm study will implement and evaluate store-and-forward teledermatology in 50 general practitioner (GP) practices as an alternative to conventional referrals. TeleDerm aims to confirm that offering store-and-forward teledermatology in GP practices leads to a 15% (n = 260) reduction in referrals in the intervention arm. The study uses a cluster-randomized controlled trial design. Randomization is planned at the cluster level "county". The main observational unit is the GP practice. A Poisson distribution of referrals is assumed. The evaluation of secondary outcomes such as acceptance, enablers, and barriers uses a mixed-methods design with questionnaires and interviews.
Discussion: Due to the heterogeneity of GP practice organization, patient management software, information technology service providers, and GPs' personal technical affinity and training, we expect several challenges in implementing teledermatology in German GP routine care. Therefore, we plan to recruit 30% more GPs than required by the power calculation. The implementation design and accompanying evaluation are expected to deliver vital insights into the specifics of implementing telemedicine in German routine care.
Background
Although teledermatology has been proven internationally to be an effective and safe addition to the care of patients in primary care, there are few pilot projects implementing teledermatology in routine outpatient care in Germany. The aim of this cluster randomized controlled trial was to evaluate whether referrals to dermatologists are reduced by implementing a store-and-forward teleconsultation system in general practitioner practices.
Methods
Eight counties were cluster randomized to the intervention and control conditions. During the 1-year intervention period between July 2018 and June 2019, 46 general practitioner practices in the 4 intervention counties implemented a store-and-forward teledermatology system with Patient Data Management System interoperability. It allowed practice teams to initiate teleconsultations for patients with dermatologic complaints. In the four control counties, treatment as usual was performed. As primary outcome, number of referrals was calculated from routine health care data. Poisson regression was used to compare referral rates between the intervention practices and 342 control practices.
Results
The primary analysis revealed no significant difference in referral rates (relative risk = 1.02; 95% confidence interval = 0.911–1.141; p = .74). Secondary analyses accounting for sociodemographic and practice characteristics but omitting county pairing resulted in significant differences in referral rates between intervention practices and control practices. Matched county pair, general practitioner age, patient age, and patient sex distribution in the practices were significantly related to referral rates.
Conclusions
While a store-and-forward teleconsultation system was successfully implemented in the German primary health care setting, the intervention's effect was overshadowed by regional factors. Such regional factors should be considered in future teledermatology research.
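For readers who want to retrace the primary analysis in outline, a Poisson regression of referral counts with patient volume as exposure can be set up as below. The data frame and column names are illustrative; the study's actual model additionally accounted for the matched county pairs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative practice-level data: referral counts and patient volumes.
df = pd.DataFrame({
    "referrals":    [120, 95, 130, 110, 88, 140],
    "patients":     [2400, 2000, 2500, 2300, 1900, 2600],
    "intervention": [1, 1, 1, 0, 0, 0],
})

model = smf.glm(
    "referrals ~ intervention",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["patients"]),   # patient volume as exposure
).fit()

# exp(coefficient) is the referral rate of intervention relative to
# control practices, analogous to the reported relative risk of 1.02.
print(np.exp(model.params["intervention"]))
```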
Background: One of the most promising health care development areas is introducing telemedicine services and creating solutions based on blockchain technology. The study of systems combining both these domains indicates the ongoing expansion of digital technologies in this market segment.
Objective: This paper aims to review the feasibility of blockchain technology for telemedicine.
Methods: The authors identified relevant studies via systematic searches of databases including PubMed, Scopus, Web of Science, IEEE Xplore, and Google Scholar. The suitability of each for inclusion in this review was assessed independently. Owing to the lack of publications, available blockchain-based tokens were discovered via conventional web search engines (Google, Yahoo, and Yandex).
Results: Of the 40 discovered projects, only 18 met the selection criteria. The 5 most prevalent features of the available solutions (N=18) were medical data access (14/18, 78%), medical service processing (14/18, 78%), diagnostic support (10/18, 56%), payment transactions (10/18, 56%), and fundraising for telemedical instrument development (5/18, 28%).
Conclusions: These different features (eg, medical data access, medical service processing, epidemiology reporting, diagnostic support, and treatment support) allow us to discuss the possibilities for integration of blockchain technology into telemedicine and health care on different levels. In this area, a wide range of tasks can be identified that could be accomplished based on digital technologies using blockchains.
Market conditions are changing rapidly, and competition in the business sector is intensifying. As a result, corporate marketing decisions are based on creating greater value for the consumer, which builds competitiveness and provides an advantage in competing for future customer loyalty. The purpose of this study is to determine whether there is a link between marketing communication tools and consumer perceived value in the pursuit of consumer loyalty. Qualitative (observational research) and quantitative (a questionnaire survey) research methods were used to investigate the problem empirically. The observational research elucidated the value provided to the consumer by the research objects through marketing communication tools, supplementing the key questions for the quantitative study. Correlation and regression analysis were used in the study, and the results show a statistically significant relationship between marketing communication tools and consumer perceived value in terms of user loyalty. The analysis also shows that consumer value is created most strongly when a package of marketing communication tools is used in an appropriate, mutually coordinated, and complementary way, achieving synergies that create the preconditions for increasing consumer loyalty in a competitive market.
Software process improvement (SPI) has been around for decades: frameworks are proposed, success factors are studied, and experiences have been reported. However, the sheer mass of concepts, approaches, and standards published over the years overwhelms practitioners as well as researchers. What is out there? Are there new trends and emerging approaches? What are the open issues? Still, we struggle to answer these questions about the current state of SPI and related research. In this article, we present results from an updated systematic mapping study to shed light on the field of SPI, to develop a big picture of the state of the art, and to draw conclusions for future research directions. An analysis of 769 publications draws a big picture of SPI-related research of the past quarter-century. Our study shows a high number of solution proposals, experience reports, and secondary studies, but only few theories and models on SPI in general. In particular, standard SPI models like CMMI and ISO/IEC 15504 are analyzed, enhanced, and evaluated for applicability in practice, but these standards are also critically discussed, e.g., from the perspective of SPI in small- to medium-sized companies, which leads to new specialized frameworks. New and specialized frameworks account for the majority of the contributions found (approx. 38%). Furthermore, we find a growing interest in success factors (approx. 16%) to aid companies in conducting SPI and in adapting agile principles and practices for SPI (approx. 10%). Beyond these specific topics, the study results also show an increasing interest in secondary studies with the purpose of aggregating and structuring SPI-related knowledge. Finally, the present study helps direct future research by identifying under-researched topics awaiting further investigation.
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to the used development methods and practices. Our findings suggest that only a small number of participants operate their projects in a purely traditional or agile manner (under 15%). That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
The emergence of agile methods and practices has not only changed development processes but might also have affected how companies conduct software process improvement (SPI). Through a set of complementary studies, we aim to understand how SPI has changed in times of agile software development. Specifically, we aim (a) to identify and characterize the set of publications that connect elements of agility to SPI, (b) to explore to which extent agile methods/practices have been used in the context of SPI, and (c) to understand whether the topics addressed in the literature are relevant and useful for industry professionals. To study these questions, we conducted an in-depth analysis of the literature identified in a previous mapping study, an interview study, and an analysis of the responses given by industry professionals to SPI-related questions stemming from an independently conducted survey study. Regarding the first question, we identified 55 publications that focus on both SPI and agility, of which 48 present and discuss how agile methods/practices are used to steer SPI initiatives. Regarding the second question, we found that the two most frequently mentioned agile methods in the context of SPI are Scrum and Extreme Programming (XP), while the most frequently mentioned agile practices are integrate often, test-first, daily meeting, pair programming, retrospective, on-site customer, and product backlog. Regarding the third question, we found that a majority of the interviewed and surveyed industry professionals see SPI as a continuous activity. They agree with the agile SPI literature that agile methods/practices play an important role in SPI activities, but the importance given to specific agile methods/practices does not always coincide with the frequency with which these methods/practices are mentioned in the literature.
In recent years, the Graph Model has become increasingly popular, especially in the application domain of social networks. The model has been semantically augmented with properties and labels attached to the graph elements. Because the model does not require a schema, it is difficult to ensure data quality for the properties and the data structure. In this paper, we propose a schema-bound Typed Graph Model with properties and labels. These enhancements improve not only data quality but also the quality of graph analysis. The power of this model comes from hyper-nodes and hyper-edges, which allow data structures to be presented at different abstraction levels. We prove that the model is at least equivalent in expressive power to the most popular data models. Therefore, it can be used as a supermodel for model management and data integration. We illustrate by example the superiority of this model over the property graph data model of Hidders and other prevalent data models, namely the relational, object-oriented, and XML models and RDF Schema.
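A minimal sketch of the schema idea: node types declare the properties their instances must carry, the graph rejects elements that violate the schema, and hyper-edges connect arbitrary sets of nodes. The class names are illustrative, not the paper's formal definition.

```python
from dataclasses import dataclass, field

@dataclass
class NodeType:
    name: str
    required_props: set          # property names every instance must carry

@dataclass
class TypedGraph:
    node_types: dict = field(default_factory=dict)
    nodes: dict = field(default_factory=dict)
    hyper_edges: list = field(default_factory=list)

    def add_node(self, node_id, type_name, props):
        """Reject nodes whose properties violate the declared schema."""
        missing = self.node_types[type_name].required_props - props.keys()
        if missing:
            raise ValueError(f"schema violation: missing {missing}")
        self.nodes[node_id] = (type_name, props)

    def add_hyper_edge(self, label, node_ids):
        """A hyper-edge may connect any number of nodes."""
        if not all(n in self.nodes for n in node_ids):
            raise ValueError("unknown node in hyper-edge")
        self.hyper_edges.append((label, frozenset(node_ids)))

g = TypedGraph(node_types={"Person": NodeType("Person", {"name"})})
g.add_node("p1", "Person", {"name": "Ada"})
g.add_node("p2", "Person", {"name": "Grace"})
g.add_hyper_edge("collaborates", ["p1", "p2"])
```

Enforcing the schema at insertion time is what ties the model's data-quality claim to its structure: invalid properties never enter the graph, so downstream analyses operate on validated data.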
"Learning by doing" in Higher Education in technical disciplines is mostly realized by hands-on labs. It challenges the exploratory aptitude and curiosity of a person. But, exploratory learning is hindered by technical situations that are not easy to establish and to verify. Technical skills are, however, mandatory for employees in this area. On the other side, theoretical concepts are often compromised by commercial products. The challenge is to contrast and reconcile theory with practice. Another challenge is to implement a self-assessment and grading scheme that keeps up with the scalability of e-learning courses. In addition, it should allow the use of different commercial products in the labs and still grade the assignment results automatically in a uniform way. In two European Union funded projects we designed, implemented, and evaluated a unique e-learning reference model, which realizes a modularized teaching concept that provides easily reproducible virtual hands-on labs. The novelty of the approach is to use software products of industrial relevance to compare with theory and to contrast different implementations. In a sample case study, we demonstrate the automated assessment for the creative database modeling and design task. Pilot applications in several European countries demonstrated that the participants gained highly sustainable competences that improved their attractiveness for employment.
This paper presents a concurrency control mechanism that does not follow a "one concurrency control mechanism fits all needs" strategy. With the presented mechanism, a transaction runs under several concurrency control mechanisms, and the appropriate one is chosen based on the accessed data. For this purpose, the data is divided into four classes based on its access type and usage (semantics). Class O (the optimistic class) implements a first-committer-wins strategy, class R (the reconciliation class) implements a first-n-committers-win strategy, class P (the pessimistic class) implements a first-reader-wins strategy, and class E (the escrow class) implements a first-n-readers-win strategy. Accordingly, the model is called O|R|P|E. The selected concurrency control mechanism may be automatically adapted at run-time according to the current load or a known usage profile. This run-time adaptation allows O|R|P|E to balance the commit rate and the response time even under changing conditions. O|R|P|E outperforms Snapshot Isolation concurrency control in terms of response time by a factor of approximately 4.5 under heavy transactional load (4,000 concurrent transactions). As a consequence, the degree of concurrency is 3.2 times higher.
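Schematically, the class-based selection can be read as a dispatch table: each data item is assigned to one of the four classes, and an access is handled by the mechanism registered for that class. The sketch below implements two of the four strategies; the class names and interfaces are illustrative, not the paper's implementation.

```python
class Item:
    def __init__(self, data_class, value=0):
        self.data_class = data_class    # "O", "R", "P", or "E"
        self.version = 0
        self.value = value

class FirstCommitterWins:
    """Class O (optimistic): validate at commit time."""
    def commit(self, item, read_version, new_value):
        if item.version != read_version:
            raise RuntimeError("write-write conflict: abort")
        item.value, item.version = new_value, item.version + 1

class FirstReaderWins:
    """Class P (pessimistic): the first reader locks the item."""
    def __init__(self):
        self.locked_by = {}
    def read(self, item, txn_id):
        owner = self.locked_by.setdefault(id(item), txn_id)
        if owner != txn_id:
            raise RuntimeError("item locked by another transaction")
        return item.value

MECHANISMS = {"O": FirstCommitterWins(), "P": FirstReaderWins()}
# Classes R and E analogously bound the number of winning committers
# or readers (first-n-committers-win / first-n-readers-win).

def mechanism_for(item):
    """Dispatch by data class; the assignment of items to classes
    could be adapted at run-time to the current load."""
    return MECHANISMS[item.data_class]

account = Item("O")
mechanism_for(account).commit(account, read_version=0, new_value=42)
```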