This paper aims to model wind speed time series at multiple sites. The five-parameter Johnson distribution is deployed to relate the wind speed at each site to a Gaussian time series, and the resulting m-dimensional Gaussian stochastic vector process Z(t) is employed to model the temporal-spatial correlation of wind speeds at m different sites. In general, it is computationally tedious to obtain the autocorrelation functions (ACFs) and cross-correlation functions (CCFs) of Z(t), which differ from those of the wind speed time series. To circumvent this correlation distortion problem, the rank ACF and rank CCF are introduced to characterize the temporal-spatial correlation of wind speeds, whereby the ACFs and CCFs of Z(t) can be obtained analytically. Then, Fourier transformation is applied to establish the cross-spectral density matrix of Z(t), and an analytical approach is proposed to generate samples of wind speeds at the m sites. Finally, simulation experiments are performed to validate the proposed methods, and the results verify that the five-parameter Johnson distribution accurately matches the distribution functions of wind speeds and that the spectral representation method reproduces the temporal-spatial correlation of wind speeds well.
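The core of such translation-process models is the memoryless transform V(t) = F⁻¹(Φ(Z(t))), which maps each Gaussian sample to a wind speed with the target marginal distribution. A minimal sketch of that mapping, using a Weibull marginal as a stand-in (the paper itself uses a five-parameter Johnson distribution, whose exact form is not reproduced here):

```python
import math

def std_normal_cdf(z):
    # Phi(z), the standard normal CDF, via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def weibull_inv_cdf(u, shape=2.0, scale=8.0):
    # Inverse CDF of a Weibull wind-speed marginal; a stand-in for the
    # paper's five-parameter Johnson distribution
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def gaussian_to_wind_speed(z, shape=2.0, scale=8.0):
    # Memoryless translation V = F^{-1}(Phi(Z)): a sample z of the
    # Gaussian process becomes a wind speed with the target marginal
    return weibull_inv_cdf(std_normal_cdf(z), shape, scale)
```

Because the transform is monotone, it preserves rank correlations, which is exactly why the rank ACF/CCF approach avoids the correlation distortion problem.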
We present a fully automatic approach to real-time 3D face reconstruction from monocular in-the-wild videos. Using cascaded-regressor-based face tracking and 3D morphable face model shape fitting, we obtain a semi-dense 3D face shape. We further use texture information from multiple frames to build a holistic 3D face representation from the video footage. Our system captures facial expressions and does not require any person-specific training. We demonstrate the robustness of our approach on the challenging 300 Videos in the Wild (300-VW) dataset. Our real-time fitting framework is available as an open-source library at http://4dface.org.
With the rapid development of globalization, the demand for translation between different languages is increasing. Although pre-training has achieved excellent results in neural machine translation, existing neural machine translation systems have almost no high-quality alignment information suitable for specific domains. This paper therefore proposes pre-training neural machine translation with alignment information via optimal transport. First, it narrows the representation gap between different languages by using OTAP to generate domain-specific data for information alignment, learning richer semantic information. Second, it proposes a lightweight model, DR-Reformer, which uses Reformer as the backbone network and adds Dropout layers and Reduction layers, reducing model parameters without losing accuracy and improving computational efficiency. Experiments on the Chinese and English datasets of AI Challenger 2018 and WMT-17 show that the proposed algorithm outperforms existing algorithms.
In clothing e-commerce, optimally recommending clothing that suits a user’s unique characteristics remains a pressing issue. Many platforms simply recommend best-selling or popular clothing without taking into account important attributes such as the user’s face color, pupil color, face shape, and age. To solve this problem, this paper proposes a personalized clothing recommendation algorithm that incorporates the established 4-Season Color System and user-specific biological characteristics. Firstly, the attributes and colors of clothing are classified by the Fnet network, which can learn disjoint label combinations and mitigates the issue of excessive labels. Secondly, on the basis of the 4-Season Color System, the user’s face color model is trained by a combined MobileNetV3_DTL, which ensures the model’s generalization and improves training speed. Thirdly, the user’s face shape and age are divided into categories by an Inception network. Finally, according to the user’s face color, age, face shape, and other information, personalized clothing is recommended in a coarse-to-fine manner. Experiments on five datasets demonstrate that the proposed algorithm achieves state-of-the-art results.
With the continuous development of the economy, consumers pay more attention to personalized clothing. However, the recommendation quality of existing clothing recommendation systems is insufficient to meet users’ needs. When browsing online clothing, facial expression is salient information for understanding the user’s preference. In this paper, we propose a novel method to automatically personalize clothing recommendation based on user emotional analysis. Firstly, the facial expression is classified by a multiclass SVM. Next, the user’s multi-interest value is calculated using the expression intensity obtained by a hybrid RCNN. Finally, the multi-interest values are fused to carry out personalized recommendation. The experimental results show that the proposed method achieves a significant improvement over other algorithms.
We present our robot framework and our efforts to make face analysis more robust against self-occlusion caused by head pose. By using a lightweight linear fitting algorithm, we are able to obtain 3D models of human faces in real time. The combination of adaptive tracking and 3D face modelling for the analysis of human faces serves as a basis for further research on human-machine interaction on our SCITOS robot platform.
Deep learning-based fabric defect detection methods have been widely investigated to improve production efficiency and product quality. Although deep learning-based methods have proved to be powerful tools for classification and segmentation, some key issues remain to be addressed in real applications. Firstly, the actual fabric production conditions in factories demand higher real-time performance from such methods. Moreover, fabric defects, as abnormal samples, are very rare compared with normal samples, which results in data imbalance and makes deep-learning-based model training challenging. To solve these problems, an extremely efficient convolutional neural network, Mobile-Unet, is proposed to achieve end-to-end defect segmentation. The median frequency balancing loss function is used to overcome the challenge of sample imbalance. Additionally, Mobile-Unet introduces depth-wise separable convolution, which dramatically reduces the computational cost and model size of the network. It comprises two parts: an encoder and a decoder. The MobileNetV2 feature extractor is used as the encoder, and five deconvolution layers are added as the decoder. Finally, a softmax layer generates the segmentation mask. The performance of the proposed model has been evaluated on public and self-built fabric datasets. The experimental results demonstrate that the proposed method achieves state-of-the-art segmentation accuracy and detection speed in comparison with other methods.
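Median frequency balancing counters the rarity of defect pixels by weighting each class in the loss by median_freq / freq_c, so rare classes contribute more per pixel. A minimal sketch of the weight computation (the class frequencies shown are illustrative, not from the paper's datasets):

```python
def median_frequency_weights(class_freqs):
    # class_freqs[c]: fraction of pixels belonging to class c (e.g.
    # background vs. defect). weight_c = median_freq / freq_c, so the
    # rare defect class receives a proportionally larger loss weight.
    sorted_f = sorted(class_freqs)
    n = len(sorted_f)
    if n % 2:
        median = sorted_f[n // 2]
    else:
        median = 0.5 * (sorted_f[n // 2 - 1] + sorted_f[n // 2])
    return [median / f for f in class_freqs]
```

With 90% background and 10% defect pixels, the defect class gets weight 5.0 while background gets about 0.56, rebalancing the gradient contributions.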
In visual adaptive tracking, the tracker adapts to the target, the background, and the conditions of the image sequence. Each update introduces some error, so the tracker might drift away from the target over time. To increase robustness against this drifting problem, we present three ideas on top of a particle filter framework: an optical-flow-based motion estimation, a learning strategy for preventing bad updates while staying adaptive, and a sliding window detector for failure detection and for finding the best training examples. We experimentally evaluate these ideas using the BoBoT dataset. The code of our tracker is available online.
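A particle filter of this kind alternates between propagating particles with the estimated motion and resampling them by appearance likelihood. A minimal, generic sketch of those two steps (the function names and noise model are illustrative, not the tracker's actual implementation):

```python
import random

def propagate(particles, flow, noise=2.0):
    # Shift each (x, y) particle by the optical-flow motion estimate,
    # then add Gaussian diffusion noise to keep the particle set diverse
    return [(x + flow[0] + random.gauss(0.0, noise),
             y + flow[1] + random.gauss(0.0, noise))
            for x, y in particles]

def resample(particles, weights):
    # Multinomial resampling proportional to appearance-likelihood weights
    return random.choices(particles, weights=weights, k=len(particles))
```

The flow-based propagation replaces a blind random-walk motion model, which is one way such a tracker can reduce drift between updates.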
For autonomous cars and intelligent vehicles, it is crucial to understand the scene context, including objects in the surroundings. A fundamental technique for accomplishing this is scene labeling, that is, assigning a semantic class to each pixel of a scene image. This task is commonly tackled quite well by fully convolutional neural networks (FCNs). Crucial factors are a small model size and a low execution time. This work presents the first method that exploits depth cues together with confidence estimates in a CNN. To this end, a novel, experimentally grounded network architecture is proposed that performs robust scene labeling without requiring costly preprocessing such as CRFs or LSTMs, as commonly used in related work. The effectiveness of this approach is demonstrated in an extensive evaluation on a challenging real-world dataset. The new architecture is highly optimized for high accuracy and low execution time.
Annotations of subject IDs in images are very important as ground truth for face recognition applications and news retrieval systems. Face naming is becoming a significant research topic in news image indexing. By exploiting the uniqueness of names, face naming is transformed into a multiple instance learning (MIL) problem with an exclusive constraint, namely the eMIL problem. First, the positive bags and the negative bags are automatically annotated by a hybrid recurrent convolutional neural network and a distributed affinity propagation cluster. Next, positive instance selection and updating are used to reduce the influence of false-positive bags and to improve performance. Finally, the max exclusive density (Max-ED) and iterative Max-ED algorithms are proposed to solve the eMIL problem. The experimental results show that the proposed algorithms achieve a significant improvement over other algorithms.
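The exclusive constraint means each name labels at most one face in an image, and each face takes at most one name. A toy greedy illustration of such a one-to-one assignment over a name-face affinity matrix (this is only meant to make the constraint concrete; it is not the paper's Max-ED algorithm):

```python
def exclusive_assign(scores):
    # scores[n][f]: affinity between name n and face f in one image.
    # Greedily pick the highest-affinity pairs subject to the exclusive
    # constraint: each name and each face is used at most once.
    pairs = sorted(((s, n, f) for n, row in enumerate(scores)
                    for f, s in enumerate(row)), reverse=True)
    used_names, used_faces, assignment = set(), set(), {}
    for s, n, f in pairs:
        if n not in used_names and f not in used_faces:
            assignment[n] = f
            used_names.add(n)
            used_faces.add(f)
    return assignment
```

For example, with two names and two faces where name 0 strongly matches face 0, the greedy pass forces name 1 onto the remaining face even if its raw best match was also face 0.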