Although CNNs have achieved satisfactory performance on image-related tasks, applying them to videos is much more challenging due to the enormous size of raw video streams. In this work, we propose to use the motion vectors and residuals produced by modern video compression techniques to learn representations of the raw frames effectively and to remove much of the temporal redundancy, yielding a faster video processing model. Compressed Video Action Recognition (CoViAR) explored training deep neural networks directly on compressed video, using motion vectors to represent temporal information. However, motion vectors are designed to minimize video size, where precise motion information is not obligatory; compared with optical flow, they contain noisy and unreliable motion information. Inspired by the mechanism of video compression codecs, we propose an approach that refines the motion vectors, removing unreliable movement while largely preserving temporal information. We show that replacing the original motion vectors with the refined ones, while using the same network as CoViAR, achieves state-of-the-art performance on UCF-101 and HMDB-51 with negligible efficiency degradation compared with the original CoViAR.
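The abstract above does not specify the refinement rule, but its core idea, suppressing motion vectors where the codec's motion compensation was unreliable, can be sketched with a simple heuristic: where the residual energy is large, the motion vector evidently failed to predict the block well, so it is zeroed out. This is an illustrative NumPy sketch under that assumption, not the paper's actual algorithm; the function name, threshold, and per-pixel layout are all hypothetical.

```python
import numpy as np

def refine_motion_vectors(mv, residual, thresh=20.0):
    """Zero out motion vectors whose co-located residual energy is high.

    mv:       (H, W, 2) array of per-pixel motion vectors (dx, dy).
    residual: (H, W) array of residual magnitudes from the codec.
    thresh:   residual level above which a vector is deemed unreliable.

    Returns a copy of `mv` with unreliable vectors removed (set to zero).
    Illustrative heuristic only; the paper's refinement may differ.
    """
    mv = mv.copy()
    unreliable = residual > thresh  # large residual => poor motion prediction
    mv[unreliable] = 0.0
    return mv

# Toy example: a 4x4 frame with uniform rightward motion, where one
# location has a large residual (motion compensation failed there).
mv = np.tile(np.array([1.0, 0.0]), (4, 4, 1))
residual = np.zeros((4, 4))
residual[0, 0] = 50.0

refined = refine_motion_vectors(mv, residual)
print(refined[0, 0])  # unreliable vector removed -> [0. 0.]
print(refined[1, 1])  # reliable vector kept      -> [1. 0.]
```

In practice the motion vectors and residuals would come from the compressed bitstream itself (e.g. via FFmpeg's `+export_mvs` flag), so no optical-flow computation is needed, which is the efficiency argument the abstract makes.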
Training robust deep video representations has proven to be much more challenging than learning deep image representations. This is in part due to the enormous size of raw video streams and their high temporal redundancy; the true and interesting signal
Motion has been shown to be useful for video understanding, where it is typically represented by optical flow. However, computing flow from video frames is very time-consuming. Recent works directly leverage the motion vectors and residuals readily available
Two-stream networks have achieved great success in video recognition. A two-stream network combines a spatial stream of RGB frames and a temporal stream of optical flow to make predictions. However, the temporal redundancy of RGB frames as well as the
Video action recognition, a topical problem in computer vision and video analysis, aims to assign a short video clip to a pre-defined category such as brushing hair or climbing stairs. Recent works focus on action recognition with deep neural networks
Deep learning has achieved great success in recognizing video actions, but the collection and annotation of training data remain quite laborious, mainly in two aspects: (1) the amount of required annotated data is large; (2) temporally