Robust and accurate six degree-of-freedom (6DoF) tracking remains a challenging problem on portable devices, especially small hand-held ones such as smartphones. To improve robustness and accuracy, complementary motion information from an IMU and a camera is often fused. Conventional visual-inertial methods fuse the IMU data with a sparse cloud of feature points tracked by the device camera. We consider a visually dense approach instead, in which the IMU data are fused with the dense optical flow field estimated from the camera frames. Learning-based methods applied to the full image frames can leverage visual cues and the global consistency of the flow field to improve the flow estimates. We show how a learning-based optical flow model can be combined with conventional inertial navigation, and how ideas from probabilistic deep learning can improve the robustness of the measurement updates. The practical applicability is demonstrated on real-world data acquired with an iPad in a challenging low-texture environment.
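As a concrete illustration of the kind of fusion the abstract describes, the sketch below shows a generic extended Kalman filter (EKF) measurement update in which a flow network's per-component predictive variances both set the measurement noise covariance and gate unreliable components. This is a minimal sketch under stated assumptions: the function name, the hard variance gate, and the plain-EKF formulation are illustrative choices, not the paper's exact method.

    import numpy as np

    def ekf_flow_update(x, P, H, flow_pred, flow_meas, flow_var, var_gate=4.0):
        """One EKF measurement update driven by learned optical flow.

        x         : (n,) state mean from the inertial propagation step
        P         : (n, n) state covariance
        H         : (m, n) Jacobian of the flow measurement model at x
        flow_pred : (m,) flow predicted from the current state estimate
        flow_meas : (m,) flow estimated by the learned network
        flow_var  : (m,) per-component predictive variance from the network
        var_gate  : drop components whose predicted variance exceeds this
        """
        # Keep only flow components the network itself deems reliable
        # (a simple hard gate; soft down-weighting is equally possible).
        keep = flow_var < var_gate
        H_k = H[keep]
        r = (flow_meas - flow_pred)[keep]    # innovation (residual)

        # Heteroscedastic measurement noise: the learned variances form R,
        # so uncertain flow components influence the state less.
        R = np.diag(flow_var[keep])

        # Standard EKF update equations.
        S = H_k @ P @ H_k.T + R              # innovation covariance
        K = P @ H_k.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ r
        P = (np.eye(len(x)) - K @ H_k) @ P
        return x, P

In practice the Jacobian H would come from linearizing the flow induced by camera motion around the inertially propagated state; here it is simply taken as an input so the uncertainty-weighted update itself stays self-contained.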