Predictive maintenance, i.e., anticipating a failure several steps before the fault occurs, is one of the pillars of Industry 4.0. An effective approach is to track early signs of degradation before a failure happens. This paper presents a failure prediction scheme for machines that combines the full spectrum of machine vibration data with data visualization technology; it requires no training data and can be put into service quickly after installation. First, we propose to use the full spectrum (as a high-dimensional data vector), with no cropping and no complex feature extraction, and to visualize data behavior by mapping the high-dimensional vectors onto a 2D map. This keeps the processing simple, reduces the risk of overlooking important information, and yields a human-friendly, human-understandable output. Second, we propose the Real-Time Data Tracker (RTDT), which predicts failure early enough to leave sufficient time for maintenance by plotting real-time frequency-spectrum data of the target machine on the 2D map built from normal data. Third, we present test results of our proposal using bearing vibration data from real-world test-to-failure measurements in the public IMS dataset.
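As an illustration of the map-building step, the sketch below turns full, uncropped magnitude spectra of normal-operation vibration segments into a 2D map. The abstract does not name the dimensionality-reduction algorithm, so t-SNE is used here purely as an assumed stand-in, and the signal data and parameters are placeholders.

```python
# Hedged sketch: full-spectrum vectors mapped to a 2D visualization.
# The mapping algorithm (t-SNE) and all signal parameters are assumptions.
import numpy as np
from sklearn.manifold import TSNE

def full_spectrum(segment: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of one vibration segment, used whole (no cropping)."""
    return np.abs(np.fft.rfft(segment * np.hanning(len(segment))))

# Normal-operation segments -> high-dimensional spectrum vectors.
rng = np.random.default_rng(0)
normal_segments = rng.normal(size=(200, 2048))        # placeholder data
spectra = np.stack([full_spectrum(s) for s in normal_segments])

# 2D map of normal behaviour; incoming real-time spectra would be
# plotted against this map to flag drift toward failure.
map_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(spectra)
print(map_2d.shape)                                   # (200, 2)
```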
Deep learning (DL) has greatly broadened the scope of data-driven fault diagnosis models. However, classical convolutional and recurrent structures have shortcomings in computational efficiency and feature representation, while the recent attention-based Transformer architecture has not yet been applied in this field. To address these problems, we propose a novel time-frequency Transformer (TFT) model inspired by the success of the standard Transformer in sequence processing. Specifically, we design a new tokenizer and encoder module to extract effective abstractions from the time-frequency representation (TFR) of vibration signals. On this basis, we present a new end-to-end fault diagnosis framework based on the time-frequency Transformer. Through case studies on experimental bearing datasets, we construct the optimal Transformer structure and verify the performance of the diagnostic method. The superiority of the proposed method is demonstrated in comparison with a benchmark model and other state-of-the-art methods.
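The sketch below shows one plausible reading of the tokenizer-plus-encoder idea: patch-embedding a time-frequency representation and passing the tokens through a standard Transformer encoder with a classification head. The paper's actual structure is not reproduced; patch size, embedding dimension, layer counts and class count are assumptions, and positional embeddings are omitted for brevity.

```python
# Hedged sketch of a time-frequency Transformer classifier (assumed design).
import torch
import torch.nn as nn

class TFTransformer(nn.Module):
    def __init__(self, patch=16, dim=64, heads=4, layers=2, n_classes=10):
        super().__init__()
        # Tokenizer: split the time-frequency representation into patches
        # and project each patch to an embedding vector.
        self.tokenize = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, tfr):                              # tfr: (B, 1, F, T)
        tok = self.tokenize(tfr).flatten(2).transpose(1, 2)   # (B, N, dim)
        tok = torch.cat([self.cls.expand(tok.size(0), -1, -1), tok], dim=1)
        z = self.encoder(tok)
        return self.head(z[:, 0])                        # logits from CLS token

logits = TFTransformer()(torch.randn(8, 1, 64, 64))
print(logits.shape)                                      # torch.Size([8, 10])
```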
We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare its robot-trajectory estimates to those of an Extended Kalman Filter. We validate all models on a real robotic system.
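A minimal sketch of the inverse-model side is given below: a small CNN that regresses a state vector from an image, whose trajectory estimates could then be compared against an EKF baseline. The architecture and the assumed 7-dimensional state are illustrative only; the forward state-to-image model and the EKF itself are not shown.

```python
# Hedged sketch: CNN state estimator (image -> state). Architecture and
# state dimensionality are assumptions, not the paper's model.
import torch
import torch.nn as nn

class StateEstimator(nn.Module):
    def __init__(self, state_dim=7):          # e.g. joint angles of a 7-DoF arm (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.regressor = nn.Linear(32 * 4 * 4, state_dim)

    def forward(self, img):                   # img: (B, 3, H, W)
        return self.regressor(self.features(img))

state = StateEstimator()(torch.randn(2, 3, 128, 128))
print(state.shape)                            # torch.Size([2, 7])
```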
This work presents a prediction model for rolling noise in multi-story buildings, such as that generated by a rolling delivery trolley. Until now, models of mechanical excitation in multi-story buildings have been limited to impact sources such as the tapping machine, and rolling noise models have been limited to outdoor sources such as trains and road vehicles. The model presented here represents the physical phenomena unique to indoor rolling noise, taking into account influencing factors such as the roughness of the wheel and the floor, the material and geometric properties of the wheel and the floor, the rolling velocity of the trolley, and the load on the trolley. The model may be used as a tool to investigate how different flooring systems (including multi-layer systems) respond to rolling excitation, with the aim of developing multi-story building solutions that are better equipped to combat this kind of noise source.
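The sketch below illustrates only one generic ingredient common to rolling-noise models, combining wheel and floor roughness spectra energetically into an effective roughness level per frequency band; it is not the paper's formulation, and the contact filtering, structural response, load and velocity dependence are omitted. All numbers are placeholders.

```python
# Hedged sketch: effective roughness per frequency band from wheel and floor
# roughness (energetic sum). Values are placeholders, not measured data.
import numpy as np

freqs = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0])     # band centres, Hz
r_wheel = np.array([3.0, 2.0, 1.5, 1.0, 0.8]) * 1e-6         # m, assumed
r_floor = np.array([5.0, 3.5, 2.0, 1.2, 0.9]) * 1e-6         # m, assumed

r_eff = np.sqrt(r_wheel**2 + r_floor**2)                      # combined roughness
L_r = 20 * np.log10(r_eff / 1e-6)                             # level re 1 micron, dB
for f, L in zip(freqs, L_r):
    print(f"{f:6.0f} Hz : {L:5.1f} dB re 1 um")
```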
Tactile sensing is important for robots to perceive the world, as it captures the texture and hardness of the object in contact and is robust to variations in illumination and colour. However, owing to their limited sensing area and the resistance of their fixed surface, current tactile sensors must be tapped against the target object many times when assessing a large surface, i.e., pressed, lifted and shifted to another region. This process is inefficient and time-consuming. It is also undesirable to drag such sensors, as this often damages the sensitive membrane of the sensor or the object. To address these problems, we propose a cylindrical optical tactile sensor named TouchRoller that can roll around its central axis. It maintains contact with the assessed surface throughout the motion, allowing the object to be measured continuously and efficiently. Extensive experiments show that the TouchRoller sensor can cover a textured surface of 8 cm × 11 cm in only 10 s, much faster than a flat optical tactile sensor (196 s). The map of the texture reconstructed from the collected tactile images has a high Structural Similarity Index (SSIM) of 0.31 on average when compared with the visual texture. In addition, contacts on the sensor can be localised with a low error of 2.63 mm in the centre regions and 7.66 mm on average. The proposed sensor will enable fast assessment of large surfaces with high-resolution tactile sensing, as well as efficient collection of tactile images.
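The texture-quality metric cited in the abstract can be computed with a standard SSIM routine, sketched below. The stitching of tactile images into a texture map is not shown, and the arrays are random placeholders standing in for the visual texture and the reconstructed tactile map.

```python
# Hedged sketch: SSIM between a reconstructed tactile texture map and the
# visual texture. Arrays are placeholders; real data would come from the sensor.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
visual_texture = rng.random((220, 160))       # placeholder for the camera image
tactile_map = rng.random((220, 160))          # placeholder for the stitched tactile map

score = ssim(visual_texture, tactile_map, data_range=1.0)
print(f"SSIM: {score:.2f}")                   # the abstract reports ~0.31 on average
```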
Manifold learning techniques for dynamical systems and time series have shown their utility for a broad spectrum of applications in recent years. While these methods are effective at learning a low-dimensional representation, they are often insufficient for visualizing the global and local structure of the data. In this paper, we present DIG (Dynamical Information Geometry), a visualization method for multivariate time series data that extracts an information geometry from a diffusion framework. Specifically, we implement a novel family of distances in the context of diffusion operators, which can reveal structure in the data that may not be accessible with the commonly used diffusion distances. Finally, we present a case study applying our visualization tool to EEG data to visualize sleep stages.
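As a rough sketch of the diffusion backbone that DIG builds on (not its information-geometric distances), the snippet below forms a Gaussian-kernel diffusion operator over time points and extracts a 2D spectral embedding for visualization; the bandwidth choice and the data are assumptions.

```python
# Hedged sketch: diffusion operator over time points and a 2D spectral embedding.
# DIG's information-geometric distances are not reproduced here.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                 # placeholder multivariate time series

D = squareform(pdist(X))                      # pairwise distances between time points
eps = np.median(D) ** 2                       # assumed kernel bandwidth
K = np.exp(-D**2 / eps)                       # Gaussian affinity kernel
d = K.sum(axis=1)                             # degrees; K / d would be the
                                              # row-stochastic diffusion operator P
S = K / np.sqrt(np.outer(d, d))               # symmetric conjugate of P (same spectrum)
vals, vecs = eigh(S)
order = np.argsort(vals)[::-1]                # eigenvalues, largest first
phi = vecs[:, order] / np.sqrt(d)[:, None]    # eigenvectors of P (up to scaling)
embedding = phi[:, 1:3] * vals[order][1:3]    # 2D embedding, trivial eigenvector skipped
print(embedding.shape)                        # (300, 2)
```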