
Angular Velocity Estimation of Image Motion Mimicking the Honeybee Tunnel Centring Behaviour

Added by Qinbing Fu
Publication date: 2019
Field: Biology
Language: English





Insects use visual information to estimate the angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show large spatial independence from the visual stimuli, an ability that previous models have not replicated. To address this issue, we propose a biologically plausible model for estimating image motion velocity, based on behavioural experiments with honeybees flying through patterned tunnels. The proposed model consists of three main parts: a texture estimation layer for spatial information extraction, a delay-and-correlate layer for temporal information extraction, and a decoding layer for angular velocity estimation. The model produces responses that are largely independent of spatial frequency in grating experiments, and it has been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electrophysiological neural spike recordings and behavioural path recordings, which indicates that our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.
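The delay-and-correlate stage described in the abstract follows the classic Hassenstein-Reichardt correlation scheme. The paper's exact layers are not reproduced here; the following is a minimal illustrative sketch (the function name and the (time, space) array layout are assumptions) of how delayed neighbouring photoreceptor signals can be correlated to yield a direction-selective output:

```python
import numpy as np

def delay_and_correlate(movie, delay=1):
    """Hassenstein-Reichardt-style correlator on a (time, space)
    luminance array: each photoreceptor's delayed signal is
    correlated with its neighbour's current signal, and the two
    mirror-symmetric arms are subtracted so that the sign of the
    output encodes motion direction."""
    delayed = np.roll(movie, delay, axis=0)  # temporal delay
    delayed[:delay] = 0.0                    # discard wrap-around rows
    rightward = delayed[:, :-1] * movie[:, 1:]
    leftward = movie[:, :-1] * delayed[:, 1:]
    return rightward - leftward

# A rightward-drifting sinusoidal grating gives a positive mean output.
t, x = np.meshgrid(np.arange(60), np.arange(40), indexing="ij")
grating = np.sin(2 * np.pi * (x - t) / 10.0)
response = delay_and_correlate(grating)
print(response.mean() > 0)
```

Note that a bare correlator of this kind responds to temporal frequency rather than angular velocity directly, which is exactly the spatial-frequency dependence that the paper's texture estimation and decoding layers are introduced to remove.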




Related research

Paula Sanz Leon, 2012
Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system to a particular functional dimension, such as the eye movement following the motion of a visual scene. Here, we describe a framework to generate random texture movies with controlled information content, i.e., Motion Clouds. These stimuli are defined using a generative model that is based on controlled experimental parametrization. We show that Motion Clouds correspond to dense mixing of localized moving gratings with random positions. Their global envelope is similar to natural-like stimulation with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically and propose an open-source Python-based implementation. Examples of the use of this framework are shown. We also propose extensions to other modalities such as color vision, touch, and audition.
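The "dense mixing of localized moving gratings with random positions" can be illustrated with a toy generator. This is not the Motion Clouds package's API; it is a minimal 1-D-plus-time sketch under the assumption that all components share a common velocity and draw spatial frequencies around a central value:

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_cloud_1d(n_t=32, n_x=64, n_components=200,
                    f0=0.1, bandwidth=0.02, v=2.0):
    """Toy 1-D + time 'motion cloud': a dense sum of drifting
    gratings with random phases and spatial frequencies drawn
    around a central frequency f0, all sharing velocity v, so the
    global envelope approximates a full-field translation."""
    t = np.arange(n_t)[:, None]   # time axis (column vector)
    x = np.arange(n_x)[None, :]   # space axis (row vector)
    movie = np.zeros((n_t, n_x))
    for _ in range(n_components):
        f = rng.normal(f0, bandwidth)     # random spatial frequency
        phi = rng.uniform(0, 2 * np.pi)   # random position/phase
        movie += np.cos(2 * np.pi * f * (x - v * t) + phi)
    return movie / np.sqrt(n_components)  # keep variance O(1)

movie = motion_cloud_1d()
```

The shared velocity mimics the "retinal slip" the abstract describes, while the random phases and frequency jitter produce the naturalistic, texture-free appearance.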
Recently, we have proposed a redox molecular hypothesis about the natural biophysical substrate of visual perception and imagery (Bokkon, 2009, BioSystems; Bokkon and D'Angiulli, 2009, Bioscience Hypotheses). Namely, the retina transforms external photon signals into electrical signals that are carried to V1 (striate cortex). Then, V1 retinotopic electrical signals (spike-related electrical signals along classical axonal-dendritic pathways) can be converted into regulated ultraweak bioluminescent photons (biophotons) through redox processes within retinotopic visual neurons, which makes it possible to create intrinsic biophysical pictures during visual perception and imagery. However, the consensus opinion is to consider biophotons as by-products of cellular metabolism. This paper argues that biophotons are not mere by-products but rather originate from regulated cellular radical/redox processes. It also shows that biophoton intensity can be considerably higher inside cells than outside. Our simple calculations suggest, within a level of accuracy, that the real biophoton intensity in retinotopic neurons may be sufficient for creating an intrinsic biophysical picture representation of a single-object image during visual perception.
The input-output behaviour of the Wiener neuronal model subject to alternating input is studied under the assumption that the effect of such an input is to make the drift itself of an alternating type. Firing densities and related statistics are obtained via simulations of the sample paths of the process in three cases, in which the drift changes occur during random periods characterized by (i) an exponential distribution, (ii) an Erlang distribution with a preassigned shape parameter, and (iii) a deterministic distribution. The obtained results are compared with those holding for the Wiener neuronal model subject to sinusoidal input.
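The sample-path simulation described above is straightforward to sketch. The function below is an illustrative assumption, not the authors' code: it simulates a discretized Wiener process whose drift flips sign at exponentially distributed epochs (case (i)) and records the first-passage time to a firing threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_passage_time(mu=0.5, sigma=1.0, threshold=1.0,
                       switch_rate=1.0, dt=0.01, t_max=20.0):
    """One sample path of a Wiener neuronal model whose drift
    alternates between +mu and -mu at exponentially distributed
    epochs; returns the first time the membrane variable reaches
    `threshold`, or NaN if it does not fire before t_max."""
    x, t, drift = 0.0, 0.0, mu
    next_switch = rng.exponential(1.0 / switch_rate)
    while t < t_max:
        if t >= next_switch:                 # alternate the drift
            drift = -drift
            next_switch += rng.exponential(1.0 / switch_rate)
        # Euler-Maruyama step of the drifted Wiener process
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return t
    return np.nan

# A histogram of `times` estimates the firing density.
times = np.array([first_passage_time() for _ in range(200)])
```

Cases (ii) and (iii) would only change the `rng.exponential(...)` draws to Erlang (gamma with integer shape) or constant inter-switch periods.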
In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated by the detection of motion in the aperture problem: as the luminance of an elongated line is symmetrical along its axis, tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. We observe in simulations a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independent of their texture. Second, we observe that incoherent features are explained away, while coherent information diffuses progressively to the global scale. Most previous models included ad hoc mechanisms such as end-stopped cells or a selection layer to track specific luminance-based features as necessary conditions to solve the aperture problem. Here, we have shown that motion-based predictive coding, as implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights into the role of prediction underlying a large class of sensory computations.
Since initial reports regarding the impact of motion artifact on measures of functional connectivity, there has been a proliferation of confound regression methods to limit its impact. However, recent techniques have not been systematically evaluated using consistent outcome measures. Here, we provide a systematic evaluation of 12 commonly used confound regression methods in 193 young adults. Specifically, we compare methods according to three benchmarks, including the residual relationship between motion and connectivity, distance-dependent effects of motion on connectivity, and additional degrees of freedom lost in confound regression. Our results delineate two clear trade-offs among methods. First, methods that include global signal regression minimize the relationship between connectivity and motion, but unmask distance-dependent artifact. In contrast, censoring methods mitigate both motion artifact and distance-dependence, but use additional degrees of freedom. Taken together, these results emphasize the heterogeneous efficacy of proposed methods, and suggest that different confound regression strategies may be appropriate in the context of specific scientific goals.
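At its core, each confound regression strategy compared above reduces to ordinary least squares on nuisance time courses. A minimal sketch follows (the function name and toy data are assumptions, not any specific pipeline's API):

```python
import numpy as np

rng = np.random.default_rng(2)

def regress_confounds(timeseries, confounds):
    """Remove confound time courses (e.g. motion parameters or
    the global signal) from each region's time series by
    ordinary least squares, returning the residuals."""
    X = np.column_stack([np.ones(len(confounds)), confounds])
    beta, *_ = np.linalg.lstsq(X, timeseries, rcond=None)
    return timeseries - X @ beta

# Toy data: 2 regions contaminated by one motion regressor.
motion = rng.standard_normal(100)
data = np.outer(motion, [0.8, -0.5]) + 0.1 * rng.standard_normal((100, 2))
clean = regress_confounds(data, motion[:, None])
# The residuals are orthogonal to the regressed-out motion trace.
```

The benchmarks in the study then differ in what goes into `confounds` (motion parameters, their expansions, the global signal) and whether high-motion timepoints are censored before the fit, each choice spending degrees of freedom differently.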
