This book is about the challenges that emerge for organizations from an ever-faster-changing world. While useful in their time, several management tools, including classic strategic planning processes, will no longer suffice to address these challenges in a timely and comprehensive fashion. While individual management tools are still valid for solving specific problems, they need to be employed based on a clear understanding of what the greater challenge is and how they need to be combined and prioritized with other approaches. To do so, companies can apply the military's clarity of thinking about which leadership level is responsible for what and how these levels need to interact to produce a single, aligned response to an outside opportunity or threat. Finally, the tool of business wargaming, while known for some time, proves to be an ideal approach to quickly and effectively bring all leadership levels together, align them around a common objective, and lay the groundwork for effective implementation of targeted responses that will keep the organization competitive and in the game for the long run. The book offers a comprehensive introduction to business wargaming, including a historical account, a classification of different types of games, and a number of specific real-world examples. It is targeted at practicing managers dealing with the aforementioned challenges, as well as at students of business and strategy at every level.
Several studies have analyzed existing Web APIs against the constraints of REST to estimate the degree of REST compliance among state-of-the-art APIs. These studies revealed that only a small number of Web APIs are truly RESTful. Moreover, identified mismatches between theoretical REST concepts and practical implementations lead us to believe that practitioners perceive many rules and best practices aligned with these REST concepts differently in terms of their importance and impact on software quality. We therefore conducted a Delphi study in which we confronted eight Web API experts from industry with a catalog of 82 REST API design rules. For each rule, we let them rate its importance and its impact on software quality. As a consensus, our experts rated 28 rules with high, 17 with medium, and 37 with low importance. Moreover, they perceived usability, maintainability, and compatibility as the most impacted quality attributes. The detailed analysis revealed that the experts saw rules for reaching Richardson maturity level 2 as critical, while reaching level 3 was less important. As the acquired consensus data may serve as valuable input for designing a tool-supported approach for the automatic quality evaluation of RESTful APIs, we briefly discuss requirements for such an approach and comment on the applicability of the most important rules.
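To make the idea of tool-supported, rule-based quality evaluation concrete, the following minimal sketch checks a few lexical rules against URI paths. The three rules and all identifiers are illustrative assumptions, not taken from the study's 82-rule catalog:

```python
import re

# Illustrative lexical design rules (hypothetical examples, not the
# study's actual catalog): lowercase paths, no trailing slash, no CRUD
# verbs embedded in the URI.
RULES = {
    "lowercase_path": lambda p: p == p.lower(),
    "no_trailing_slash": lambda p: not (len(p) > 1 and p.endswith("/")),
    "no_crud_verbs": lambda p: not re.search(
        r"/(get|create|update|delete)", p, re.IGNORECASE
    ),
}

def check_path(path: str) -> dict:
    """Return a rule name -> pass/fail map for a single URI path."""
    return {name: rule(path) for name, rule in RULES.items()}

for path in ["/orders/42", "/getOrder/42/", "/Orders"]:
    print(path, check_path(path))
```

A real checker would of course cover structural and semantic rules as well; this sketch only illustrates how individual rules could be expressed as independent predicates over an API description.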
Together with many success stories, promises such as the increase in production speed and the improvement in stakeholders' collaboration have contributed to making agile a transformation in the software industry in which many companies want to take part. However, driven either by a natural and expected evolution or by contextual factors that challenge the adoption of agile methods as prescribed by their creator(s), software processes in practice mutate into hybrids over time. Are these still agile? In this article, we investigate the question: what makes a software development method agile? We present an empirical study grounded in a large-scale international survey that aims to identify software development methods and practices that improve or tame agility. Based on 556 data points, we analyze the perceived degree of agility in the implementation of standard project disciplines and its relation to the development methods and practices used. Our findings suggest that only a small number of participants operate their projects in a purely traditional or agile manner (under 15%). That said, most project disciplines and most practices show a clear trend towards increasing degrees of agility. Compared to the methods used to develop software, the selection of practices has a stronger effect on the degree of agility of a given discipline. Finally, there are no methods or practices that explicitly guarantee or prevent agility. We conclude that agility cannot be defined solely at the process level. Additional factors need to be taken into account when trying to implement or improve agility in a software company. Finally, we discuss the field of software process-related research in the light of our findings and present a roadmap for future research.
Hyperspectral imaging and reflectance spectroscopy in the range from 200–380 nm were used to rapidly detect and characterize copper oxidation states and their layer thicknesses on direct bonded copper in a non-destructive way. Single-point UV reflectance spectroscopy, as a well-established method, was utilized to compare the quality of the hyperspectral imaging results. For the laterally resolved measurements of the copper surfaces, a UV hyperspectral imaging setup based on a pushbroom imager was used. Six different types of direct bonded copper were studied. Each type had a different oxide layer thickness and was analyzed by depth profiling using X-ray photoelectron spectroscopy (XPS). In total, 28 samples were measured to develop multivariate models to characterize and predict the oxide layer thicknesses. The principal component analysis (PCA) models enabled a general differentiation between the sample types on the first two PCs, with 100.0% and 96% explained variance for UV spectroscopy and hyperspectral imaging, respectively. Partial least squares regression (PLS-R) models showed reliable performance, with R2c = 0.94 and 0.94 and RMSEC = 1.64 nm and 1.76 nm, respectively. The developed in-line prototype system combined with multivariate data modeling shows high potential for further development of this technique towards real large-scale processes.
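As a rough sketch of the modeling step, the snippet below fits a two-component PLS regression to spectra and reports calibration metrics. The data shapes and random values are placeholders, not the study's measurements:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder data: 28 mean reflectance spectra (200-380 nm, 1 nm steps)
# and the corresponding XPS-derived oxide layer thicknesses in nm.
rng = np.random.default_rng(0)
X = rng.random((28, 181))
y = rng.uniform(2, 25, 28)

# Two latent variables, mirroring the two PCs discussed above.
pls = PLSRegression(n_components=2)
pls.fit(X, y)
y_pred = pls.predict(X).ravel()

print("R2c:", r2_score(y, y_pred))
print("RMSEC (nm):", np.sqrt(mean_squared_error(y, y_pred)))
```

With real spectra, the number of latent variables would be chosen by cross-validation rather than fixed in advance.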
The paper explains a workflow to simulate the food-energy-water (FEW) nexus for an urban district, combining various data sources such as 3D city models, in particular the City Geography Markup Language (CityGML) data model from the Open Geospatial Consortium, OpenStreetMap, and census data. A long-term vision is to extend the CityGML data model by developing a FEW Application Domain Extension (FEW ADE) to support future FEW simulation workflows such as the one explained in this paper. Together with the mentioned simulation workflow, this paper also identifies some necessary FEW-related parameters for the future development of a FEW ADE. Furthermore, relevant key performance indicators are investigated, and the datasets necessary to calculate these indicators are studied. Finally, different calculations are performed for the downtown borough Ville-Marie in the city of Montréal (Canada) for the domains of food waste (FW) and wastewater (WW) generation. For this study, a workflow is developed to calculate the energy generation from anaerobic digestion of FW and WW. In the first step, data collection and preparation were carried out: relevant data for georeferencing, for model set-up, and for creating the required usage libraries, such as food waste and wastewater generation per person, were collected. The next step was data integration and the calculation of the relevant parameters; lastly, the results were visualized for analysis purposes. As a use case to support such calculations, the CityGML level of detail 2 (LoD2) model of Montréal is enriched with information such as building functions and building usages from OpenStreetMap. The calculation of the total residents based on the CityGML model as the main input for Ville-Marie results in a population of 72,606. The statistical value for 2016 was 89,170, which corresponds to a deviation of 15.3%. The energy recovery potential of FW is about 24,024 GJ/year, and that of wastewater is about 1,629 GJ/year, adding up to 25,653 GJ/year. Relating these values to the calculated number of inhabitants in Ville-Marie results in 330.9 MJ/year for FW and 22.4 MJ/year for wastewater per person, respectively.
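As a quick sanity check, the per-capita figures follow directly from the totals reported above; note that the arithmetic works out in MJ/year per person (a minimal sketch using only values from the abstract):

```python
# Per-capita energy recovery, using the totals reported above.
population = 72606        # residents derived from the CityGML LoD2 model
fw_energy_gj = 24024      # food waste energy recovery potential, GJ/year
ww_energy_gj = 1629       # wastewater energy recovery potential, GJ/year

GJ_TO_MJ = 1000
print(f"FW: {fw_energy_gj * GJ_TO_MJ / population:.1f} MJ/year per person")  # ~330.9
print(f"WW: {ww_energy_gj * GJ_TO_MJ / population:.1f} MJ/year per person")  # ~22.4
```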
Avatars are used when interacting in virtual environments in different contexts: in collaborative work, in gaming, and in virtual meetings with friends. It is therefore important to understand how the relationship between user and avatar works. In this study, an online survey is used to determine how the perception of an avatar changes in different contexts by relating it to existing avatar relationship typologies. Additionally, it is determined whether a realistic, abstract, or comic-like representation is preferred by the participants in each context. One result was a preference for low-poly representations in the work context, which are associated with the perception of the avatar as a tool. In the context of meeting friends, a realistic representation is perceived as more appropriate and as an accurate self-representation. In the gaming context, the results are less clear, which can be attributed to different gaming preferences. Here, unlike in the other contexts, a comic-like representation is also perceived as appropriate, which is associated with the perception of the avatar as a friend. A symbiotic user-avatar relationship is not directly related to any form of representation but always falls in the middle range, which is attributed to the fact that it spans a whole spectrum between the other categories.
To correctly assess the cleanliness of technical surfaces in a production process, corresponding online monitoring systems must provide sufficient data. A promising method for fast, large-area, and non-contact monitoring is hyperspectral imaging (HSI), which was used in this paper for the detection and quantification of organic surface contaminations. Depending on the cleaning parameter constellation, different levels of organic residues remained on the surface. Afterwards, the cleanliness was determined by the carbon content, in atomic percent, on the sample surfaces, characterized by X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES). The HSI data and the XPS measurements were correlated using machine learning methods to generate a predictive model for the carbon content of the surface. The regression algorithms elastic net, random forest regression, and support vector machine regression were used. Overall, the developed method was able to quantify organic contaminations on technical surfaces. The best regression model found was a random forest model, which achieved an R2 of 0.7 and an RMSE of 7.65 at.% C. Due to the easy-to-use measurement and the fast evaluation by machine learning, the method seems suitable for an online monitoring system. However, the results also show that further experiments are necessary to improve the quality of the prediction models.
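A minimal sketch of the final regression step is shown below. The array shapes and values are stand-ins, assuming one averaged HSI spectrum per measurement spot paired with an XPS carbon value:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

# Stand-in data: one mean HSI spectrum per spot (X) and the XPS carbon
# content in at.% C (y); shapes and values are assumptions, not the
# paper's measurements.
rng = np.random.default_rng(1)
X = rng.random((120, 200))   # 120 spots x 200 spectral bands
y = rng.uniform(5, 60, 120)  # carbon content, at.% C

# Random forest regression, evaluated with 5-fold cross-validation.
model = RandomForestRegressor(n_estimators=300, random_state=1)
y_pred = cross_val_predict(model, X, y, cv=5)

print("R2:", r2_score(y, y_pred))
print("RMSE (at.% C):", np.sqrt(mean_squared_error(y, y_pred)))
```

The same evaluation loop would apply to the elastic net and support vector machine regressors mentioned above by swapping the estimator.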
Unprecedented formation of sterically stabilized phospholipid liposomes of cuboidal morphology
(2021)
Sterically stabilized phospholipid liposomes of unprecedented cuboidal morphology are formed upon introduction into the bilayer membrane of novel polymers based on polyglycidol bearing a lipid-mimetic residue. Strong hydrogen bonding in the polyglycidol sublayers creates attractive forces which, facilitated by fluidization of the membrane, bring about the flattening of the bilayers and the formation of cuboid vesicles.
Forecasting demand is challenging. Various products exhibit different demand patterns. While demand may be constant and regular for one product, it may be sporadic for another, and when demand does occur, it may fluctuate significantly. Forecasting errors are costly and result in obsolete inventory or unsatisfied demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for a given demand pattern. Therefore, even today, a large number of models are run on a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper, we show that a machine learning classification algorithm can be used to predict the best-performing forecasting model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The classification algorithm achieves a mean ROC-AUC of 89%, which underlines the skill of the model.
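A minimal sketch of such a model-selection classifier is shown below. The feature set (average inter-demand interval and squared coefficient of variation, the classical demand-pattern descriptors, plus basic statistics) and all data shapes are assumptions, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ts_features(series: np.ndarray) -> list:
    """Demand-pattern descriptors for one series: ADI, CV^2, mean, std."""
    nonzero = series[series > 0]
    adi = len(series) / max(len(nonzero), 1)  # average inter-demand interval
    cv2 = (nonzero.std() / nonzero.mean()) ** 2 if len(nonzero) else 0.0
    return [adi, cv2, series.mean(), series.std()]

# Placeholder training data: 500 series of 104 weekly demands; the label
# is the index of the forecasting model that performed best on a series.
rng = np.random.default_rng(2)
X = np.array([ts_features(rng.poisson(0.8, 104)) for _ in range(500)])
y = rng.integers(0, 4, 500)

clf = RandomForestClassifier(n_estimators=200, random_state=2)
print("CV ROC-AUC (ovr):",
      cross_val_score(clf, X, y, cv=5, scoring="roc_auc_ovr").mean())
```

In practice, the labels would come from backtesting each candidate forecasting model on historical data, and the classifier would then replace that expensive backtest for new series.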
Intermittent time series forecasting is a challenging task that still requires particular attention from researchers. The more irregularly events occur, the more difficult they are to predict. With Croston's approach in 1972 (Operational Research Quarterly 23(3):289–303), the intermittence and the demand of a time series were for the first time investigated separately. He proposed exponential smoothing to generate a forecast that corresponds to the average demand per period. Although this algorithm produces good results in the field of stock control, it does not capture the typical characteristics of intermittent time series within the final prediction. In this paper, we investigate a time series' intermittence and demand individually, forecast the upcoming demand value and inter-demand interval length using recent machine learning algorithms such as long short-term memory networks (LSTMs) and light gradient-boosting machines (LightGBM), and recombine both pieces of information to generate a prediction that preserves the characteristics of an intermittent time series. We compare the results against Croston's approach, as well as against recent forecasting procedures in which no split is performed.
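For reference, a minimal implementation of Croston's original scheme (separate exponential smoothing of demand sizes and inter-demand intervals) could look as follows; the smoothing constant and the toy series are arbitrary choices for illustration:

```python
import numpy as np

def croston(demand: np.ndarray, alpha: float = 0.1) -> float:
    """Croston's (1972) forecast: smooth demand sizes and inter-demand
    intervals separately; return the expected demand per period."""
    z = None   # smoothed demand size
    p = None   # smoothed inter-demand interval
    q = 1      # periods since the last positive demand
    for d in demand:
        if d > 0:
            z = d if z is None else z + alpha * (d - z)
            p = q if p is None else p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return 0.0 if z is None else z / p

# Toy intermittent series: mostly zeros with occasional demands.
print(croston(np.array([0, 3, 0, 0, 5, 0, 4, 0, 0, 0, 6])))  # ~1.56
```

The split into sizes and intervals is exactly the structure that the paper's machine learning variant keeps, replacing the two exponential smoothers with learned models for each component.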