Informatik
Year of publication
- 2018 (64)
Document Type
- Conference proceeding (46)
- Journal article (15)
- Book chapter (3)
Language
- English (64)
Is part of the Bibliography
- yes (64)
Institute
- Informatik (64)
Publisher
- Springer (18)
- IEEE (11)
- RWTH Aachen (8)
- Gesellschaft für Informatik e.V. (3)
- Association for Computing Machinery (2)
- Elsevier (2)
- HTWG Konstanz (2)
- SciTePress (2)
- ARVO (1)
- American Marketing Association (1)
The Tenth International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA 2018), held May 20-24, 2018 in Nice, France, continued a series of international events covering a broad spectrum of topics related to advances in database fundamentals, the evolving relationship between databases and other domains, database technologies and content processing, as well as the specifics of application-domain databases.
Advances in different technologies and domains related to databases have triggered substantial improvements in content processing, information indexing, and data, process, and knowledge mining. The push came from Web services, artificial intelligence, and agent technologies, as well as from the widespread adoption of XML.
High-speed communications and computations, large storage capacities, and load balancing for distributed database access allow new approaches to content processing with incomplete patterns, advanced ranking algorithms, and advanced indexing methods.
Evolution in e-business, e-health and telemedicine, bioinformatics, finance and marketing, and geographical positioning systems puts pressure on database communities to push the 'de facto' methods to support new requirements in terms of scalability, privacy, performance, indexing, and heterogeneity of both content and technology.
Early reduction of risks in a startup or an innovation project is highly important. Appropriate means for risk reduction, such as testing business models with different kinds of experiments, exist. However, deciding what to test and how to select the right test is challenging for many startups and innovation projects. This article presents the so-called Business Experiments Navigator (BEN), a toolkit to assist startup and innovation processes. It complements other tools such as the Business Model Canvas or the Lean Startup process. The main contribution of BEN is to bridge the gap between the riskiest assumptions of a business model and the multitude of available testing techniques by providing assumption templates. The Business Experiments Navigator has been validated in several workshops. Results show that it creates awareness among workshop participants that a business model is based on assumptions which impose risks and need to be validated. Further, users of BEN were able to identify relevant assumptions and map different kinds of assumptions to appropriate testing techniques. The process applied in the workshops, as well as the assumption templates, helped the participants understand the main concepts and transfer their learnings to their own business ideas.
A transaction is a demarcated sequence of application operations for which the following properties are guaranteed by the underlying transaction processing system (TPS): atomicity, consistency, isolation, and durability (ACID). Transactions are therefore a general abstraction, provided by the TPS, that simplifies application development by relieving transactional applications of the burden of concurrency and failure handling. Apart from the ACID properties, a TPS must guarantee high and robust performance (high transactional throughput and low response times), high reliability (no data loss, ability to recover the last consistent state, fault tolerance), and high availability (infrequent outages, short recovery times).
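As a side note for readers, the atomicity guarantee described above is easy to illustrate with a minimal sketch (our own toy example using Python's sqlite3, not a system from the paper): a transfer between two accounts either commits completely or not at all.

```python
import sqlite3

# Toy illustration of atomicity (hypothetical example, not from the paper):
# a money transfer either fully commits or is fully rolled back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

with conn:  # implicit transaction: commit on success, rollback on exception
    conn.execute("UPDATE accounts SET balance = balance - 70 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 70 WHERE id = 2")
    # an exception raised here would undo both updates

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# -> [(1, 30), (2, 70)]
```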
The architectures and workhorse algorithms of a high-performance TPS are built around the properties of the underlying hardware. The introduction of non-volatile memories (NVM) as a novel storage technology opens an entirely new problem space, requiring a revision of aspects such as the virtual memory hierarchy, storage management and data placement, access paths, and indexing. NVM is also referred to as storage-class memory (SCM).
Active storage
(2018)
In brief, Active Storage refers to an architectural hardware and software paradigm based on the collocation of storage and compute units. Ideally, it allows executing application-defined data ... within the physical data storage. Active Storage thus seeks to minimize expensive data movement, improving performance, scalability, and resource efficiency. The effective use of Active Storage mandates new architectures, algorithms, interfaces, and development toolchains.
Blockchains yield new workloads for database management systems and K/V-stores. Distributed Ledger Technology (DLT) is a technique for managing transactions in 'trustless' distributed systems. Yet, clients of nodes in blockchain networks are backed by 'trustworthy' K/V-stores, like LevelDB or RocksDB in Ethereum, which are based on Log-Structured Merge Trees (LSM-trees). However, LSM-trees do not fully match the properties of blockchains and enterprise workloads.
In this paper, we claim that Partitioned B-Trees (PBTs) fit the properties of this DLT: uniformly distributed hash keys, immutability, consensus, invalid blocks, unspent and off-chain transactions, reorganization, and data state/version ordering in a distributed log structure. PBTs can locate records of newly inserted key-value pairs, as well as data of unspent transactions, in separate partitions in main memory. Once several blocks acquire consensus, PBTs evict a whole partition, which becomes immutable, to secondary storage. This behavior minimizes write amplification and enables a beneficial sequential write pattern on modern hardware. Furthermore, DLT implies some type of log-based versioning. PBTs can serve as a multi-version store for the data storage of logical blocks and for indexing in multi-version concurrency control (MVCC) transaction processing.
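To make the described partition behavior concrete, here is a heavily simplified conceptual sketch (our own, in Python; not the authors' implementation): new key-value pairs go into a mutable in-memory partition, and once the associated blocks reach consensus, the whole partition is frozen and appended sequentially, so existing data is never rewritten.

```python
# Conceptual sketch of the partition lifecycle described above (our own
# simplification, not the paper's PBT implementation).
class PartitionedTree:
    def __init__(self):
        self.active = {}   # mutable in-memory partition for new/unspent data
        self.frozen = []   # immutable partitions on "secondary storage"

    def put(self, key, value):
        self.active[key] = value

    def evict(self):
        # Freeze the active partition: sort once and append sequentially,
        # which avoids rewriting old data (low write amplification).
        self.frozen.append(sorted(self.active.items()))
        self.active = {}

    def get(self, key):
        if key in self.active:
            return self.active[key]
        # Search newest frozen partition first (newest version wins).
        for partition in reversed(self.frozen):
            for k, v in partition:
                if k == key:
                    return v
        return None

tree = PartitionedTree()
tree.put("tx42", "pending")
tree.evict()                 # blocks reached consensus; partition is immutable
print(tree.get("tx42"))      # -> "pending"
```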
Modern persistent Key/Value stores are designed to meet the demand for high transactional throughput and high data ingestion rates. Still, they rely on a backwards-compatible storage stack and abstractions to ease space management and to foster seamless proliferation and system integration. Their dependence on the traditional I/O stack has a negative impact on performance, causes unacceptably high write amplification, and limits storage longevity.
In this paper we present NoFTL-KV, an approach that results in a lean I/O stack, integrating physical storage management natively into the Key/Value store. NoFTL-KV eliminates backwards compatibility, allowing the Key/Value store to directly exploit the characteristics of modern storage technologies. NoFTL-KV is implemented under RocksDB. The performance evaluation under LinkBench shows that NoFTL-KV improves transactional throughput by 33%, while response times improve up to 2.3x. Furthermore, NoFTL-KV reduces write amplification 19x and improves storage longevity by approximately the same factor.
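For readers unfamiliar with the metric: write amplification relates device-level writes to application-level writes. A toy calculation (our own, with made-up numbers that merely echo the reported 19x; these are not measurements from the paper):

```python
# Write amplification (WA) = bytes physically written by the storage stack
# divided by bytes logically written by the application. Made-up numbers
# for illustration only.
app_bytes_written = 1 * 2**30        # 1 GiB written by the K/V store
device_bytes_written = 19 * 2**30    # 19 GiB actually hitting the device

wa = device_bytes_written / app_bytes_written
print(f"write amplification: {wa:.1f}x")  # -> write amplification: 19.0x

# Since flash cells endure a fixed number of program/erase cycles,
# reducing WA by some factor extends device longevity by roughly that factor.
```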
Background: Internationally, teledermatology has proven to be a viable alternative to conventional physical referrals. Travel costs and referral times are reduced while patient safety is preserved. Patients from rural areas in particular benefit from this healthcare innovation. Despite these established facts and positive experiences from neighboring EU countries such as the Netherlands and the United Kingdom, Germany has not yet implemented store-and-forward teledermatology in routine care.
Methods: The TeleDerm study will implement and evaluate store-and-forward teledermatology in 50 general practitioner (GP) practices as an alternative to conventional referrals. TeleDerm aims to confirm that offering store-and-forward teledermatology in GP practices will lead to a 15% (n = 260) reduction in referrals in the intervention arm. The study uses a cluster-randomized controlled trial design. Randomization is planned at the cluster level "county". The main observational unit is the GP practice. A Poisson distribution of referrals is assumed. The evaluation of secondary outcomes such as acceptance, enablers, and barriers uses a mixed-methods design with questionnaires and interviews.
Discussion: Due to the heterogeneity of GP practice organization, patient management software, information technology service providers, and GPs' personal technical affinity and training, we expect several challenges in implementing teledermatology in German GP routine care. Therefore, we plan to recruit 30% more GPs than required by the power calculation. The implementation design and the accompanying evaluation are expected to deliver vital insights into the specifics of implementing telemedicine in German routine care.
We present an approach for segmenting individual cells and lamellipodia in epithelial cell clusters using fully convolutional neural networks. The method sets the basis for measuring cell cluster dynamics and expansion to improve the investigation of collective cell migration phenomena. The fully learning-based front end avoids classical feature engineering, yet the network architecture needs to be designed carefully. Our network predicts, for each pixel, the likelihood of belonging to each class and is thus able to segment the image. Besides characterizing segmentation performance, we discuss how the network will be employed further.
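The per-pixel prediction idea can be sketched in a few lines (our own toy model in PyTorch, not the authors' architecture): padding-preserving convolutions yield a class-score map of the same spatial size as the input, and a softmax over the channel dimension gives per-pixel class likelihoods.

```python
import torch
import torch.nn as nn

# Toy fully convolutional network (our own sketch, not the paper's
# architecture). Padding keeps the spatial size, so the output assigns
# every pixel a score per class (e.g., background / cell / lamellipodium).
class TinyFCN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-pixel logits
        )

    def forward(self, x):
        return self.net(x)

model = TinyFCN()
image = torch.randn(1, 1, 128, 128)   # one grayscale microscopy image
logits = model(image)                 # shape (1, 3, 128, 128)
probs = logits.softmax(dim=1)         # per-pixel class likelihoods
segmentation = probs.argmax(dim=1)    # hard label per pixel
print(segmentation.shape)             # -> torch.Size([1, 128, 128])
```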
This work reports on practical experiences with interoperability in German practice management systems (PMSs), drawn from TeleDerm, an ongoing clinical trial on teledermatology. A proprietary, established web platform for store-and-forward telemedicine is integrated with the IT in GPs' offices for the automatic exchange of basic patient data. Most of the 19 different PMSs in the study sample lack support for modern health data exchange standards; therefore, the relatively old but widely available German health data exchange interface "Gerätedatentransfer" (GDT) is used. Due to the lack of enforcement and regulation of the GDT standard, several obstacles to interoperability are encountered. As a partial but reusable working solution to these issues, we present a custom middleware used in conjunction with GDT. We describe the design, the technical implementation, and the hindrances observed with the existing infrastructure, and conclude with a discussion of healthcare interfacing standards and the current state of interoperability in German PMS software.
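GDT, like the other xDT formats, is line-oriented. Below is a hedged sketch of reading such records (our own helper, not the project's middleware), assuming the common xDT line layout of a 3-digit length, a 4-digit field identifier, and the payload; the field identifiers in the example follow usual GDT conventions (3000 = patient id, 3101 = last name) and should be checked against the specification.

```python
# Hedged sketch of parsing GDT records, assuming the usual xDT line layout:
# 3-digit total length, 4-digit field identifier (Feldkennung), payload,
# CR/LF termination. Not the project's actual middleware.
def parse_gdt(raw: bytes, encoding: str = "cp437") -> list[tuple[str, str]]:
    fields = []
    for line in raw.splitlines():
        text = line.decode(encoding)
        if len(text) < 7:
            continue  # too short to carry length + field identifier
        length, field_id, payload = text[:3], text[3:7], text[7:]
        # The declared length covers the whole line incl. CR/LF (payload + 9);
        # mismatches are one of the interoperability problems discussed above.
        if int(length) != len(payload) + 9:
            raise ValueError(f"length mismatch in field {field_id}")
        fields.append((field_id, payload))
    return fields

sample = b"0123000123\r\n0193101Mustermann\r\n"
print(parse_gdt(sample))  # -> [('3000', '123'), ('3101', 'Mustermann')]
```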
Many movies are produced every year, too many to watch all of them and, in particular, to get an overview of the evolution of typical movie genres and the actors playing in them. Moreover, it is a challenging problem to detect correlations among movies and the actors in those movies, in particular if we are interested in time-varying data patterns like trends, countertrends, anomalies, and outliers. Those correlations are especially interesting if they can be inspected at different levels of granularity, e.g., temporal, but also hierarchical in the form of country- or continent-based correlations. In this paper we describe the IMDb Explorer, a web-based visualization tool that consists of two major views, the movie cosmos and the career lines. Both views are linked and interactively manipulable, while a list of user-defined metrics can be explored. We illustrate the usefulness of the visualization tool by applying it to the entire movie database provided by IMDb.