To develop an instant price calculation for CNC turned parts, machine-learning approaches and a deterministic algorithm are investigated. The deterministic algorithm works only for turned parts of low complexity. The machine-learning models, by contrast, are more future-proof, as initial results already achieve very small deviations from the defined reference prices. As the volume of data grows, both machine-learning models can be improved further with little effort.
Forecasting demand is challenging. Different products exhibit different demand patterns: while demand may be constant and regular for one product, it may be sporadic for another, and when demand does occur, it may fluctuate considerably. Forecasting errors are costly, resulting in obsolete inventory or unmet demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Nevertheless, it is not clear which algorithm achieves the best forecast for which demand pattern. Therefore, even today, a large number of models are run on a test period, and the model with the best result on the test period is used for the actual forecast. This approach is computationally and time intensive and, in most cases, uneconomical. In our paper we show that a machine-learning classification algorithm can predict the best-performing model based on the characteristics of a time series. The approach was developed and evaluated on a dataset from a B2B technical retailer. The classification algorithm achieves a mean ROC AUC of 89%, which underlines the skill of the model.
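The abstract does not list the time-series characteristics fed to the classifier; purely as an illustration of the kind of features such a model-selection classifier might consume, the sketch below computes two standard demand-pattern descriptors, the average inter-demand interval (ADI) and the squared coefficient of variation of demand sizes (CV²), and applies the common Syntetos–Boylan cut-offs (ADI ≥ 1.32, CV² ≥ 0.49) to label a series as smooth, erratic, intermittent, or lumpy. The cut-off values and class names are standard in the literature, not taken from this paper.

```python
from statistics import mean, pstdev

def demand_features(series):
    """ADI and CV^2 of a demand history (list of per-period demands)."""
    nonzero = [x for x in series if x > 0]
    if not nonzero:
        raise ValueError("series contains no demand")
    adi = len(series) / len(nonzero)              # avg. periods per demand event
    cv2 = (pstdev(nonzero) / mean(nonzero)) ** 2  # variability of demand sizes
    return adi, cv2

def classify_pattern(series, adi_cut=1.32, cv2_cut=0.49):
    """Syntetos-Boylan demand-pattern class from ADI and CV^2."""
    adi, cv2 = demand_features(series)
    if adi < adi_cut:
        return "smooth" if cv2 < cv2_cut else "erratic"
    return "intermittent" if cv2 < cv2_cut else "lumpy"

# A sporadic series with strongly varying demand sizes:
history = [0, 0, 12, 0, 0, 0, 1, 0, 0, 40, 0, 0]
print(classify_pattern(history))  # → lumpy
```

In a feature-based model-selection pipeline, descriptors like these (alongside, e.g., trend and seasonality statistics) would form the input vector of the classifier that predicts the best forecasting model.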
Prior to the introduction of AI-based forecast models in the procurement department of an industrial retail company, we assessed the digital skills of the procurement employees and surveyed their attitudes toward a new digital technology. The aim of the survey was to ascertain important contextual factors that are likely to influence the acceptance and successful use of the new forecast tool. We find that the employees' digital skills are at an intermediate level and that their attitudes toward key aspects of new digital technologies are largely positive. Thus, the conditions for high acceptance and successful use of the models are good, as evidenced by the procurement staff's high intention to use them. In line with previous research, we find that the perceived usefulness of a new technology and its perceived ease of use are significant drivers of the willingness to use the new forecast tool.
Intermittent time series forecasting is a challenging task that still requires particular attention from researchers. The more irregularly events occur, the more difficult they are to predict. With Croston's approach in 1972 (Operational Research Quarterly 23(3):289–303), the intermittence and the demand of a time series were investigated separately for the first time. He proposed exponential smoothing in his attempt to generate a forecast corresponding to the average demand per period. Although this algorithm produces good results in the field of stock control, it does not capture the typical characteristics of intermittent time series in the final prediction. In this paper, we investigate a time series' intermittence and demand individually, forecast the upcoming demand value and inter-demand interval length using recent machine-learning algorithms such as long short-term memories and light gradient-boosting machines, and reassemble both pieces of information to generate a prediction that preserves the characteristics of an intermittent time series. We compare the results against Croston's approach as well as recent forecasting procedures in which no split is performed.
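Croston's separation of demand size and inter-demand interval, described above, can be sketched in a few lines. This is a minimal illustration assuming the textbook form of the method with a single smoothing constant alpha for both components, not the implementation used in the paper:

```python
def croston(series, alpha=0.1):
    """Croston's method: exponentially smooth demand sizes and inter-demand
    intervals separately; the forecast is size / interval, i.e. the
    average demand per period."""
    z = p = None     # smoothed demand size and smoothed interval
    q = 1            # periods elapsed since the last demand event
    forecasts = []
    for demand in series:
        if demand > 0:
            if z is None:                # initialise on the first demand
                z, p = demand, q
            else:
                z += alpha * (demand - z)
                p += alpha * (q - p)
            q = 1
        else:
            q += 1
        forecasts.append(z / p if z is not None else 0.0)
    return forecasts

print(croston([0, 4, 0, 0, 4])[-1])
```

Note how the output is a flat per-period rate: exactly the property criticized above, since the forecast itself never reproduces the zero/non-zero structure of an intermittent series.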
The aim of this paper is to show to what extent Artificial Intelligence can be used to optimize forecasting capability in procurement, and to compare AI with traditional statistical methods. At the same time, this article presents the status quo of the research project ANIMATE. The project applies Artificial Intelligence to forecast customer orders in medium-sized companies.
Precise forecasts are essential for companies' planning, decision making, and controlling. Forecasts are applied, e.g., in the areas of supply chain, production, and purchasing. Medium-sized companies face major challenges in using suitable methods to improve their forecasting capability.
Companies often use proven methods from classical statistics, such as the ARIMA algorithm. However, simple statistical methods often fail when applied to complex non-linear prediction problems.
Initial results show that even a simple MLP neural network produces better results than traditional statistical methods. Furthermore, a company baseline (Implicit Sales Expectation) was used to compare performance. This comparison also shows that the proposed AI method is superior.
Before the developed method can become part of corporate practice, it must be optimized further. The model has difficulties with sharp declines, for example due to holidays. The authors are confident that the model can be improved further, for example through more advanced methods such as a FilterNet, but also through more data, such as external data on holiday periods.
Detection of defective parts and tools is essential in large-scale industrial manufacturing, playing a vital role in predictive maintenance, quality assurance, and safety hazard minimization. While traditionally performed by humans, the automation of visual anomaly detection using neural networks has gained prominence due to their increasing performance capabilities. However, deep learning models require extensive data for training, while acquiring annotated data is both costly and labor-intensive, especially for defect variations in industrial scenarios. Unsupervised methods, trained without labels or annotations, offer a potential solution but struggle to distinguish true anomalies from irrelevant impurities. To address the limitations of data dependency and spurious correlations in deep learning models, we introduce a demonstrator utilizing Human Importance-aware Network Tuning (HINT) to incorporate domain knowledge during training, and Explainable Artificial Intelligence (XAI) to provide insights into the model’s decision-making process.
This study explores the application of the PatchCore algorithm for anomaly classification in hobbing tools, an area of keen interest in industrial artificial intelligence applications. Despite utilizing limited training images, the algorithm demonstrates the capability to recognize a variety of anomalies, promising to reduce the time-intensive labeling process traditionally undertaken by domain experts. The algorithm achieved an accuracy of 92%, a precision of 84%, a recall of 100%, and a balanced F1 score of 91%, showcasing its proficiency in identifying anomalies. However, the investigation also highlights that while the algorithm effectively identifies anomalies, it does not primarily recognize domain-specific wear issues. Thus, the presented approach is used only for pre-classification, with domain experts subsequently segmenting the images that indicate significant wear. The intention is to employ a supervised learning procedure to identify actual wear. This premise will be investigated further in future research.
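As a quick consistency check on the figures above, the balanced F1 score follows directly from the stated precision and recall as their harmonic mean:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported precision 84% and recall 100% yield the reported F1 of 91%:
print(round(f1_score(0.84, 1.00), 2))  # → 0.91
```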
The maintenance of special tools is an expensive business: either manual inspection by an expert consumes valuable resources, or the loss of a tool to irreparable wear incurs high replacement costs, while reconditioning requires only a fraction of them. To avoid these higher costs and drive forward the automation of production, a German gear manufacturer wants to create an automatic condition evaluation of skiving gears. As a sub-step of this automated condition detection, the wheels must be aligned automatically within a vision-based inspection cell. Extending a study conducted last year, this publication implements further image-preprocessing steps and evaluates a new alignment algorithm from the autoencoder family. By using an additional synthetic dataset, previous limitations could be clarified. The results show that thorough data preparation benefits all solution approaches and that neural networks can even beat a brute-force algorithm.
Forecasting intermittent and lumpy demand is challenging. Demand occurs only sporadically and, when it does, it can vary considerably. Forecast errors are costly, resulting in obsolete stock or unmet demand. Methods from statistics, machine learning, and deep learning have been used to predict such demand patterns. Traditional accuracy metrics are often employed to evaluate the forecasts; however, these come with major drawbacks, such as not taking horizontal and vertical shifts over the forecasting horizon into account, nor stock-keeping or opportunity costs. This results in a disadvantageous selection of methods in the context of intermittent and lumpy demand forecasts. In our study, we compare methods from statistics, machine learning, and deep learning by applying a novel metric called Stock-keeping-oriented Prediction Error Costs (SPEC), which overcomes the drawbacks associated with traditional metrics. Under the SPEC metric, the Croston algorithm achieves the best result, just ahead of a Long Short-Term Memory neural network.
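The exact SPEC definition is given in the underlying study; purely to illustrate the idea that motivates such a metric, the deliberately simplified sketch below (an assumption-laden stand-in, not the SPEC formula) scores a forecast by the stock-keeping and opportunity costs it would cause if the forecast quantity were procured each period:

```python
def cost_based_error(actuals, forecasts, holding=0.5, shortage=1.0):
    """Illustrative cost-based forecast evaluation (NOT the SPEC formula):
    procure the forecast each period, carry any surplus as stock at a
    holding cost, and charge unmet demand at an opportunity cost."""
    stock, total = 0.0, 0.0
    for y, f in zip(actuals, forecasts):
        stock += f - y                    # procure forecast, serve demand
        if stock >= 0:
            total += holding * stock      # cost of carrying surplus stock
        else:
            total += shortage * (-stock)  # cost of unmet demand
            stock = 0.0                   # lost sales are not backlogged
    return total / len(actuals)
```

Unlike a symmetric error such as MAE, a score of this kind penalizes a forecast that arrives too early (stock accumulates) differently from one that arrives too late (demand goes unmet), which is the asymmetry SPEC is designed to capture.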