
Visual motion processing and human tracking behavior

Posted by Laurent Perrinet
Publication date: 2016
Research field: Biology
Language: English
Author: Anna Montagnini





The accurate visual tracking of a moving object is a fundamental human skill: it reduces the relative slip and instability of the object's image on the retina, thus granting stable, high-quality vision. To optimize tracking performance over time, a quick estimate of the object's global motion properties must be fed to the oculomotor system and dynamically updated. Concurrently, performance can be greatly improved, in terms of both latency and accuracy, by taking predictive cues into account, especially under variable visibility conditions and in the presence of ambiguous retinal information. Here, we review several recent studies on the integration of retinal and extra-retinal information for the control of human smooth pursuit. By dynamically probing tracking performance with paradigms well established in the visual-perception and oculomotor literature, we provide a basis for testing theoretical hypotheses within the framework of dynamic probabilistic inference. In particular, we present applications of these results in light of state-of-the-art computer vision algorithms.
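As a concrete illustration of what dynamic probabilistic inference over retinal and extra-retinal signals can look like, here is a minimal sketch: a 1-D constant-velocity Kalman filter in which noisy retinal slip measurements drive the update step, while the prediction step stands in for extra-retinal (predictive) control during occlusions. The parameter values and the NaN-for-occlusion convention are illustrative assumptions, not the model used in the reviewed studies.

```python
import numpy as np

def kalman_pursuit(retinal_slip, dt=0.01, q=1e-3, r=0.05):
    """Track target velocity from noisy retinal slip with a 1-D
    constant-velocity Kalman filter. NaN entries mark occlusions:
    the filter then runs on prediction alone, a simple stand-in
    for extra-retinal (predictive) control."""
    x = np.zeros(2)                          # state: [position, velocity]
    P = np.eye(2)                            # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics
    H = np.array([[0.0, 1.0]])               # only velocity is measured
    Q = q * np.eye(2)                        # process noise
    R = np.array([[r]])                      # measurement noise
    out = []
    for z in retinal_slip:
        x = F @ x                            # predict (extra-retinal)
        P = F @ P @ F.T + Q
        if not np.isnan(z):                  # update only with retinal input
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        out.append(x[1])
    return np.array(out)
```

Feeding the filter a velocity signal with a NaN gap (a target blanking) shows the estimate persisting through the gap, which is the qualitative signature of prediction-driven pursuit.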




Read also

Many organisms, from flies to humans, use visual signals to estimate their motion through the world. To explore the motion-estimation problem, we have constructed a camera/gyroscope system that allows us to sample, at high temporal resolution, the joint distribution of input images and rotational motions during a long walk in the woods. From these data we construct the optimal estimator of velocity based on spatial and temporal derivatives of image intensity in small patches of the visual world. Over the bulk of the naturally occurring dynamic range, the optimal estimator exhibits the same systematic errors seen in neural and behavioral responses, including the confounding of velocity and contrast. These results suggest that apparent errors of sensory processing may reflect an optimal response to the physical signals in the environment.
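An estimator built from spatial and temporal intensity derivatives can be illustrated with the classic gradient-based form of local motion estimation. The sketch below is a toy 1-D version under a brightness-constancy assumption; the paper's estimator is derived from natural signal statistics, which this version does not attempt. Note how the regularizer makes low-contrast estimates shrink toward zero, mirroring the velocity/contrast confound described above.

```python
import numpy as np

def local_velocity(patch_t0, patch_t1, dt=1.0, eps=1e-6):
    """Gradient-based velocity estimate for a small 1-D image patch.

    Uses the brightness-constancy relation I_x * v + I_t = 0, solved
    in the least-squares sense over the patch:
        v_hat = -sum(I_x * I_t) / (sum(I_x**2) + eps)
    The regularizer eps pulls the estimate toward zero at low
    contrast, reproducing the velocity/contrast confound.
    """
    I_x = np.gradient((patch_t0 + patch_t1) / 2.0)   # spatial derivative
    I_t = (patch_t1 - patch_t0) / dt                 # temporal derivative
    return -np.sum(I_x * I_t) / (np.sum(I_x ** 2) + eps)
```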
Humans race drones faster than algorithms, despite being limited to a fixed camera angle, body-rate control, and response latencies on the order of hundreds of milliseconds. A better understanding of how human pilots select appropriate motor commands from highly dynamic visual information may provide key insights for solving current challenges in vision-based autonomous navigation. This paper investigates the relationship between human eye movements, control behavior, and flight performance in a drone racing task. We collected a multimodal dataset from 21 experienced drone pilots using a highly realistic drone racing simulator, also used to recruit professional pilots. Our results show task-specific improvements in drone racing performance over time. In particular, we found that eye gaze tracks future waypoints (i.e., gates), with first fixations occurring on average 1.5 seconds and 16 meters before reaching the gate. Moreover, human pilots consistently looked at the inside of the future flight path for lateral (i.e., left and right turns) and vertical (i.e., ascending and descending) maneuvers. Finally, we found a strong correlation between pilots' eye movements and the commanded direction of quadrotor flight, with an average visual-motor response latency of 220 ms. These results highlight the importance of coordinated eye movements in human-piloted drone racing. We make our dataset publicly available.
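A visual-motor latency of this kind can be estimated from such data by cross-correlating a gaze signal with a control signal and locating the best-correlated lag. The sketch below is a generic version of that analysis; the signal names, sampling rate, and lag window are assumptions rather than details taken from the paper.

```python
import numpy as np

def visuomotor_latency(gaze, command, fs=500.0, max_lag_s=0.5):
    """Estimate the lag (in seconds) at which a gaze signal best
    predicts a control signal, via normalized cross-correlation.

    gaze, command: 1-D arrays sampled at fs Hz (e.g., horizontal
    gaze angle and commanded yaw). A positive result means gaze
    leads the command, as expected for visual-motor control.
    """
    g = (gaze - gaze.mean()) / gaze.std()
    c = (command - command.mean()) / command.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(0, max_lag + 1)
    # correlate gaze at time t with the command at time t + lag
    corr = [np.mean(g[:len(g) - k] * c[k:]) for k in lags]
    return lags[int(np.argmax(corr))] / fs
```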
Jianren Wang, Yihui He (2020)
Visual object tracking (VOT) is an essential component of many applications, such as autonomous driving and assistive robotics. However, recent works tend to develop accurate systems based on increasingly expensive feature extractors for better instance matching. In contrast, this work addresses the importance of motion prediction in VOT. We use an off-the-shelf object detector to obtain instance bounding boxes. Then, a combination of camera-motion decoupling and a Kalman filter is used for state estimation. Although our baseline system is a straightforward combination of standard methods, we obtain state-of-the-art results: our method improves the EAO on VOT-2016 from the prior art's 0.472 to 0.505, and on VOT-2018 from 0.410 to 0.431. To show generalizability, we also test our method on video object segmentation (VOS: DAVIS-2016 and DAVIS-2017) and observe consistent improvement.
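A minimal sketch of the kind of pipeline described: a constant-velocity Kalman filter over a detected box center, with camera motion approximated here by subtracting a global offset from each measurement before the update. The class and its parameters are illustrative; the paper's actual camera-motion decoupling may differ.

```python
import numpy as np

class BoxKalman:
    """Constant-velocity Kalman filter over a box center (cx, cy).

    A toy stand-in for a detector-plus-filter tracker: the state is
    [cx, cy, vx, vy]; camera motion is "decoupled" here by removing
    a global motion offset from each measurement (an assumption, not
    the paper's method)."""
    def __init__(self, cx, cy, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt         # position += velocity*dt
        self.H = np.eye(2, 4)                    # we observe the center only
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def step(self, z, camera_offset=(0.0, 0.0)):
        self.x = self.F @ self.x                 # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        z = np.asarray(z, float) - np.asarray(camera_offset)  # decouple
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)           # update
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                        # estimated center
```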
Li Zhaoping, Li Zhe (2014)
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field that guides attention exogenously. This hypothesis has so far provided only qualitative predictions and their confirmations. We report the first quantitative prediction of this hypothesis, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location in guiding attention exogenously. In a visual input containing many bars, one of which differs saliently from all the other, mutually identical bars, saliency at the singleton's location can be measured by the shortness of the reaction time to find the singleton in a visual search task. The hypothesis quantitatively predicts the whole distribution of reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The predicted distribution matches the experimentally observed distribution in all six human observers. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to the color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention, so that they can be devoted to other functional roles such as visual decoding or endogenous attention.
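One way to read a parameter-free prediction of this form is as a race model: if no V1 neuron is tuned to all three features at once, the response to a triple-feature singleton is governed by whichever feature-specific population responds first, so its reaction-time distribution can be simulated as the minimum over samples from the contributing singleton RT distributions. The sketch below is that simplified reading only; the paper's exact construction (which also involves double-feature singletons) should be consulted.

```python
import numpy as np

def predict_triple_rt(rt_color, rt_orient, rt_motion, n=100_000, seed=0):
    """Race-model prediction of the RT distribution for a singleton
    unique in color, orientation, and motion direction.

    Each simulated trial draws one RT from each observed single-feature
    distribution and takes the minimum: the fastest feature-specific
    V1 response determines saliency, under the assumption that no
    neuron is conjunctively tuned to all three features."""
    rng = np.random.default_rng(seed)
    samples = np.stack([rng.choice(rt_color, n),
                        rng.choice(rt_orient, n),
                        rng.choice(rt_motion, n)])
    return samples.min(axis=0)        # predicted triple-singleton RTs
```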
Understanding primate behavior is a mission-critical goal of both biology and biomedicine. Despite the importance of behavior, our ability to rigorously quantify it has heretofore been limited to low-information measures like preference, looking time, and reaction time, or to non-scalable measures like ethograms. However, recent technological advances have led to a major revolution in behavioral measurement. Specifically, digital video cameras and automated pose-tracking software can provide detailed measures of the full body position (i.e., pose) of multiple primates over time (i.e., behavior) with high spatial and temporal resolution. Pose-tracking technology can in turn be used to detect behavioral states, such as eating, sleeping, and mating. The availability of such data has in turn spurred developments in data-analysis techniques. Together, these changes are poised to lead to major advances in scientific fields that rely on behavior as a dependent variable. In this review, we situate the tracking revolution in the history of the study of behavior, argue for investment in and development of analytical and research techniques that can profit from the advent of the era of big behavior, and propose that zoos will have a central role to play in this era.
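As a rough sketch of the pipeline this review describes (video, then pose, then behavioral states), the snippet below trains a generic classifier on flattened keypoint positions plus frame-to-frame movement features. All names and choices here are illustrative; no specific pose-tracking tool or dataset from the review is implied.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_state_detector(poses, labels):
    """Fit a per-frame behavioral-state classifier from tracked poses.

    poses: (n_frames, n_keypoints, 2) array of body positions from a
    pose tracker; labels: per-frame states ("eating", "sleeping", ...).
    Features are the raw keypoints plus frame-to-frame displacements,
    since movement often separates states better than posture alone."""
    feats = poses.reshape(len(poses), -1)            # flatten keypoints
    vel = np.vstack([np.zeros((1, feats.shape[1])),  # frame-to-frame
                     np.diff(feats, axis=0)])        # movement features
    X = np.hstack([feats, vel])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(X, labels)
```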