
Learning Spatio-Appearance Memory Network for High-Performance Visual Tracking

Posted by: Fei Xie
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Existing visual object trackers usually learn a bounding-box based template to match the target across frames, which cannot accurately learn a pixel-wise representation and is therefore limited in handling severe appearance variations. To address this issue, much effort has been devoted to segmentation-based tracking, which learns a pixel-wise object-aware template and can achieve higher accuracy than bounding-box based tracking. However, existing segmentation-based trackers are ineffective at learning the spatio-temporal correspondence across frames because they do not use the rich temporal information. To overcome this limitation, this paper presents a novel segmentation-based tracking architecture equipped with a spatio-appearance memory network that learns accurate spatio-temporal correspondence. In this architecture, an appearance memory network exploits spatio-temporal non-local similarity to learn the dense correspondence between the segmentation masks and the current frame, while a spatial memory network is modeled as a discriminative correlation filter that learns the mapping between the feature map and a spatial map. The appearance memory network helps to filter out noisy samples in the spatial memory network, while the latter provides the former with a more accurate target geometric center; this mutual promotion greatly boosts tracking performance. Without bells and whistles, our simple yet effective tracking architecture sets a new state of the art on the VOT2016, VOT2018, VOT2019, GOT-10K, TrackingNet, and VOT2020 benchmarks. In addition, our tracker outperforms the leading segmentation-based trackers SiamMask and D3S by a large margin on the two video object segmentation benchmarks DAVIS16 and DAVIS17. The source code is available at https://github.com/phiphiphi31/DMB.
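For readers who want a concrete picture of the appearance-memory read, the sketch below shows the generic space-time non-local attention it is described as performing: every pixel of the current frame attends to every pixel of the memorised frames and aggregates their mask-aware values. The tensor names, shapes, and key/value split are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
# Minimal sketch of a space-time non-local memory read of the kind the appearance
# memory network is described as performing. Shapes and projections are assumed.
import torch
import torch.nn.functional as F

def memory_read(mem_key, mem_val, query_key):
    """Attend from the current frame (query) to all stored frames (memory).

    mem_key:   (B, C_k, T, H, W)  keys of T memorised frames
    mem_val:   (B, C_v, T, H, W)  values (e.g. mask-aware features)
    query_key: (B, C_k, H, W)     key of the current frame
    returns:   (B, C_v, H, W)     value read for every query pixel
    """
    B, Ck, T, H, W = mem_key.shape
    Cv = mem_val.shape[1]

    k = mem_key.view(B, Ck, T * H * W)                 # (B, Ck, THW)
    v = mem_val.view(B, Cv, T * H * W)                 # (B, Cv, THW)
    q = query_key.view(B, Ck, H * W)                   # (B, Ck, HW)

    # Dense pixel-to-pixel similarity between the current frame and all memory pixels.
    affinity = torch.einsum('bct,bcq->btq', k, q) / (Ck ** 0.5)   # (B, THW, HW)
    affinity = F.softmax(affinity, dim=1)              # normalise over memory pixels

    read = torch.einsum('bct,btq->bcq', v, affinity)   # (B, Cv, HW)
    return read.view(B, Cv, H, W)

if __name__ == "__main__":
    mk = torch.randn(1, 64, 3, 30, 30)   # 3 memorised frames
    mv = torch.randn(1, 128, 3, 30, 30)
    qk = torch.randn(1, 64, 30, 30)
    print(memory_read(mk, mv, qk).shape)  # torch.Size([1, 128, 30, 30])
```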




Read also

In this paper, we present a new tracking architecture with an encoder-decoder transformer as the key component. The encoder models the global spatio-temporal feature dependencies between target objects and search regions, while the decoder learns a query embedding to predict the spatial positions of the target objects. Our method casts object tracking as a direct bounding-box prediction problem, without using any proposals or predefined anchors. With the encoder-decoder transformer, object prediction uses only a simple fully-convolutional network that estimates the corners of objects directly. The whole method is end-to-end and does not need any post-processing steps such as cosine windowing or bounding-box smoothing, thus largely simplifying existing tracking pipelines. The proposed tracker achieves state-of-the-art performance on five challenging short-term and long-term benchmarks while running at real-time speed, 6x faster than Siam R-CNN. Code and models are open-sourced at https://github.com/researchmm/Stark.
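As a rough illustration of the proposal-free corner prediction mentioned in the Stark abstract above, the hypothetical head below turns two probability heatmaps into a box via a soft-argmax. Layer sizes and names are assumptions for the sketch, not the released Stark code.

```python
# Sketch of a corner-prediction head: two heatmaps (top-left, bottom-right) whose
# soft-argmax expectations give the box corners in normalised coordinates.
import torch
import torch.nn as nn

class CornerHead(nn.Module):
    def __init__(self, in_ch=256, feat_size=20):
        super().__init__()
        self.tl = nn.Sequential(nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(64, 1, 1))
        self.br = nn.Sequential(nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(64, 1, 1))
        coords = torch.arange(feat_size, dtype=torch.float32) / feat_size
        self.register_buffer('xs', coords.view(1, 1, -1))   # normalised x grid
        self.register_buffer('ys', coords.view(1, -1, 1))   # normalised y grid

    def soft_argmax(self, score):                    # score: (B, H, W)
        prob = torch.softmax(score.flatten(1), dim=1).view_as(score)
        x = (prob * self.xs).sum(dim=(1, 2))
        y = (prob * self.ys).sum(dim=(1, 2))
        return x, y

    def forward(self, feat):                         # feat: (B, C, H, W)
        x1, y1 = self.soft_argmax(self.tl(feat).squeeze(1))
        x2, y2 = self.soft_argmax(self.br(feat).squeeze(1))
        return torch.stack([x1, y1, x2, y2], dim=1)  # (B, 4), normalised box

if __name__ == "__main__":
    head = CornerHead()
    print(head(torch.randn(2, 256, 20, 20)))   # two predicted boxes in [0, 1]
```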
Most existing trackers rely on either a multi-scale search scheme or pre-defined anchor boxes to estimate the scale and aspect ratio of a target accurately. Unfortunately, they typically call for tedious and heuristic configuration. To address this issue, we propose a simple yet effective visual tracking framework (named Siamese Box Adaptive Network, SiamBAN) that exploits the expressive power of the fully convolutional network (FCN). SiamBAN views visual tracking as a parallel classification and regression problem, and thus directly classifies objects and regresses their bounding boxes in a unified FCN; a small sketch of this idea follows below. The no-prior-box design avoids hyper-parameters associated with candidate boxes, making SiamBAN more flexible and general. Extensive experiments on visual tracking benchmarks including VOT2018, VOT2019, OTB100, NFS, UAV123, and LaSOT demonstrate that SiamBAN achieves state-of-the-art performance and runs at 40 FPS, confirming its effectiveness and efficiency. The code will be available at https://github.com/hqucv/siamban.
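The anchor-free "classify and regress in one FCN" idea can be sketched as a depth-wise cross-correlation between template and search features feeding two parallel 1x1 heads. Channel sizes and the exponential box parameterisation below are illustrative assumptions, not the SiamBAN release.

```python
# Sketch of an anchor-free Siamese head: depth-wise correlation, then parallel
# per-location classification and box-side regression.
import torch
import torch.nn as nn
import torch.nn.functional as F

def xcorr_depthwise(search, kernel):
    """Depth-wise cross-correlation: each template channel filters its own search channel."""
    b, c, h, w = search.shape
    out = F.conv2d(search.view(1, b * c, h, w),
                   kernel.view(b * c, 1, *kernel.shape[2:]), groups=b * c)
    return out.view(b, c, out.shape[-2], out.shape[-1])

class BoxAdaptiveHead(nn.Module):
    def __init__(self, ch=256):
        super().__init__()
        self.cls = nn.Conv2d(ch, 2, 1)   # foreground / background per location
        self.reg = nn.Conv2d(ch, 4, 1)   # distances to the four box sides (l, t, r, b)

    def forward(self, template_feat, search_feat):
        corr = xcorr_depthwise(search_feat, template_feat)
        return self.cls(corr), torch.exp(self.reg(corr))   # exp keeps distances positive

if __name__ == "__main__":
    head = BoxAdaptiveHead()
    z = torch.randn(1, 256, 7, 7)      # template features
    x = torch.randn(1, 256, 31, 31)    # search-region features
    cls, reg = head(z, x)
    print(cls.shape, reg.shape)        # (1, 2, 25, 25) (1, 4, 25, 25)
```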
A number of techniques exist to use an ensemble of atoms as a quantum memory for light. Many of these propose to use backward retrieval as a way to improve the storage and recall efficiency. We report on a demonstration of an off-resonant Raman memory that uses backward retrieval to achieve an efficiency of $65\pm6\%$ at a storage time of one pulse duration. The memory has a characteristic decay time of 60 $\mu$s, corresponding to a delay-bandwidth product of $160$.
Deep learning-based visual tracking algorithms such as MDNet achieve high performance by leveraging the feature extraction ability of a deep neural network. However, the tracking efficiency of these trackers is not very high because feature extraction is slow for every frame of a video. In this paper, we propose an effective tracking algorithm to alleviate this time-consuming problem. Specifically, we design a deep flow collaborative network, which executes the expensive feature network only on sparse keyframes and transfers the feature maps to other frames via optical flow. Moreover, we introduce an effective adaptive keyframe scheduling mechanism to select the most appropriate keyframes. We evaluate the proposed approach on large-scale datasets: OTB2013 and OTB2015. The experimental results show that our algorithm achieves considerable speedup as well as high precision.
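The core trick here, propagating keyframe features with optical flow instead of recomputing them, amounts to a bilinear warp. A generic version is sketched below under the assumption of a precomputed flow field; the flow estimator and keyframe scheduler are omitted, and this is not the paper's exact operator.

```python
# Sketch: warp features computed on a keyframe to the current frame using optical flow.
import torch
import torch.nn.functional as F

def warp_features(key_feat, flow):
    """Warp keyframe features to the current frame.

    key_feat: (B, C, H, W) features extracted on the keyframe
    flow:     (B, 2, H, W) per-pixel displacement (dx, dy) mapping current-frame pixels
              back to keyframe locations
    """
    B, _, H, W = key_feat.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing='ij')
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).to(key_feat)   # (1, 2, H, W)
    coords = grid + flow                                            # sample locations
    # Normalise to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)         # (B, H, W, 2)
    return F.grid_sample(key_feat, sample_grid, align_corners=True)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    flow = torch.zeros(1, 2, 32, 32)          # zero flow: output equals input
    print(torch.allclose(warp_features(feat, flow), feat, atol=1e-5))
```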
Discriminative Correlation Filter (DCF) based methods have become a dominant approach to online object tracking. The features used in these methods, however, are either hand-crafted features such as HOGs, or convolutional features trained separately on other tasks such as image classification. In this work, we present an end-to-end lightweight network architecture, namely DCFNet, to learn the convolutional features and perform the correlation tracking process simultaneously. Specifically, we treat the DCF as a special correlation filter layer added to a Siamese network and carefully derive the backpropagation through it by defining the network output as the probability heatmap of the object location. Since the derivation is still carried out in the Fourier frequency domain, the efficiency of the DCF is preserved. This enables our tracker to run at more than 60 FPS at test time, while achieving a significant accuracy gain compared with KCF using HOGs. Extensive evaluations on the OTB-2013, OTB-2015, and VOT2015 benchmarks demonstrate that the proposed DCFNet tracker is competitive with several state-of-the-art trackers, while being more compact and much faster.
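For context, the closed-form ridge-regression correlation filter that DCFNet wraps as a differentiable layer can be written in a few lines. The single-channel NumPy version below is the textbook frequency-domain formulation, shown for clarity; it is not the paper's multi-channel, end-to-end trainable layer.

```python
# Sketch of a discriminative correlation filter: ridge regression solved
# element-wise in the Fourier domain, then applied to a new search patch.
import numpy as np

def train_filter(x, y, lam=1e-4):
    """x: (H, W) training patch features, y: (H, W) desired Gaussian response."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)   # filter in the frequency domain

def detect(w_hat, z):
    """Correlate the learned filter with a new search patch z; return the response map."""
    Z = np.fft.fft2(z)
    return np.real(np.fft.ifft2(w_hat * Z))

if __name__ == "__main__":
    h, w = 64, 64
    ys, xs = np.mgrid[0:h, 0:w]
    target = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * 3.0 ** 2))
    patch = np.random.randn(h, w)
    w_hat = train_filter(patch, target)
    resp = detect(w_hat, patch)                      # peak should sit at the target centre
    print(np.unravel_index(resp.argmax(), resp.shape))
```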