
Optimal local estimates of visual motion in a natural environment

Published by: William Bialek
Publication date: 2018
Research field: Biology, Physics
Paper language: English





Many organisms, from flies to humans, use visual signals to estimate their motion through the world. To explore the motion estimation problem, we have constructed a camera/gyroscope system that allows us to sample, at high temporal resolution, the joint distribution of input images and rotational motions during a long walk in the woods. From these data we construct the optimal estimator of velocity based on spatial and temporal derivatives of image intensity in small patches of the visual world. Over the bulk of the naturally occurring dynamic range, the optimal estimator exhibits the same systematic errors seen in neural and behavioral responses, including the confounding of velocity and contrast. These results suggest that apparent errors of sensory processing may reflect an optimal response to the physical signals in the environment.
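As a rough, hypothetical illustration of the kind of local estimator discussed in the abstract (not the authors' construction, which is derived from the measured joint statistics of images and rotations), the Python sketch below computes a standard gradient-based velocity estimate from spatial and temporal derivatives of a one-dimensional image patch. The regularizer `sigma2` is an assumed stand-in for a prior over velocities and is what makes the estimate confound velocity with contrast.

```python
import numpy as np

def local_velocity_estimate(patch_t0, patch_t1, x, dt=1.0, sigma2=0.5):
    """Gradient-based velocity estimate for a small 1D image patch.

    Built only from spatial and temporal derivatives of image intensity;
    sigma2 is a hypothetical regularizer (a stand-in for a prior over
    velocities), not a quantity taken from the paper.
    """
    I_t = (patch_t1 - patch_t0) / dt                    # temporal derivative
    I_x = np.gradient(0.5 * (patch_t0 + patch_t1), x)   # spatial derivative
    # At low contrast mean(I_x**2) is small, the regularizer dominates,
    # and the estimate is pulled toward zero: velocity and contrast are
    # confounded, mirroring the systematic errors described above.
    return -np.mean(I_x * I_t) / (np.mean(I_x ** 2) + sigma2)

# Toy usage: the same drift at two contrasts yields different estimates.
x = np.linspace(0.0, 2.0 * np.pi, 200)
true_shift = 0.05                                       # displacement per frame
for contrast in (1.0, 0.1):
    f0 = contrast * np.sin(5 * x)
    f1 = contrast * np.sin(5 * (x - true_shift))
    v_hat = local_velocity_estimate(f0, f1, x)
    print(f"contrast {contrast:.1f}: estimated shift {v_hat:.4f} "
          f"(true shift {true_shift})")
```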




Read also

It has recently been discovered that single neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain, in neuronal circuits at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate the population activity.
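A small, hypothetical sketch of the "developmentally inspired constraints" mentioned above: neurons receive a maturation variable that correlates their connectivity with their excitability, and the most connected, most excitable cells are flagged as candidate hubs. All probabilities and the hub score are illustrative assumptions, not the model or the functional-hub criterion used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100

# Hypothetical "maturation" variable: older neurons receive both more
# outgoing connections and higher excitability, which produces the
# correlation between connectivities and excitabilities described above.
age = rng.uniform(0.0, 1.0, size=N)
excitability = 0.2 + 0.8 * age + 0.05 * rng.standard_normal(N)

# Directed adjacency matrix; connection probability of neuron i grows
# with its age (illustrative choice, not the construction of the paper).
p_out = 0.02 + 0.18 * age
adjacency = rng.random((N, N)) < p_out[:, None]
np.fill_diagonal(adjacency, False)
out_degree = adjacency.sum(axis=1)

print("corr(out-degree, excitability) =",
      round(float(np.corrcoef(out_degree, excitability)[0, 1]), 2))

# Candidate hubs: the few cells that are both highly connected and highly
# excitable (a structural proxy; in the paper, functional hubs are defined
# by the effect of their stimulation/deletion on population bursts).
score = out_degree / out_degree.max() + excitability / excitability.max()
print("candidate hub neurons:", np.argsort(score)[-5:][::-1])
```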
Visually induced neuronal activity in V1 displays a marked gamma-band component which is modulated by stimulus properties. It has been argued that synchronized oscillations contribute to this gamma-band activity [... however,] even when oscillations are observed, they undergo temporal decorrelation over very few cycles. This is not easily accounted for in previous network modeling of gamma oscillations. We argue here that interactions between cortical layers can be responsible for this fast decorrelation. We study a model of a V1 hypercolumn, embedding a simplified description of the multi-layered structure of the cortex. When the stimulus contrast is low, the induced activity is only weakly synchronous and the network resonates transiently without developing collective oscillations. When the contrast is high, on the other hand, the induced activity undergoes synchronous oscillations with an irregular spatiotemporal structure expressing a synchronous chaotic state. As a consequence the population activity undergoes fast temporal decorrelation, with concomitant rapid damping of the oscillations in LFP autocorrelograms and peak broadening in LFP power spectra. [...] Finally, we argue that the mechanism underlying the emergence of synchronous chaos in our model is in fact very general. It stems from the fact that gamma oscillations induced by local delayed inhibition tend to develop chaos when coupled by sufficiently strong excitation.
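The closing mechanism, gamma oscillations induced by local delayed inhibition, can be illustrated with a far simpler toy than the multi-layered hypercolumn model: a single rate unit with delayed inhibitory feedback. The sketch below is only a generic demonstration of that mechanism; the time constant, delay, gain and drive are assumed values, not parameters of the paper's model.

```python
import numpy as np

# Single rate unit with delayed inhibitory feedback:
#     tau * dr/dt = -r + max(I0 - w * r(t - D), 0)
# Sufficiently strong, sufficiently delayed negative feedback makes the
# fixed point unstable and produces sustained oscillations whose period
# is set by D and tau (roughly the gamma range for these assumed values).
tau, D, w, I0 = 4.0, 4.0, 5.0, 10.0      # ms, ms, gain, drive (a.u.)
dt, T = 0.05, 400.0                      # Euler step and duration (ms)

steps = int(T / dt)
delay_steps = int(D / dt)
r = np.zeros(steps)
for t in range(1, steps):
    r_delayed = r[t - 1 - delay_steps] if t - 1 >= delay_steps else 0.0
    drive = max(I0 - w * r_delayed, 0.0)
    r[t] = r[t - 1] + dt * (-r[t - 1] + drive) / tau

# Crude period estimate from the autocorrelation of the second half of
# the trace (first clear peak beyond a 5 ms lag).
x = r[steps // 2:] - r[steps // 2:].mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]
skip = int(5.0 / dt)
period_ms = (skip + int(np.argmax(ac[skip:]))) * dt
print(f"approximate oscillation period: {period_ms:.1f} ms "
      f"(~{1000.0 / period_ms:.0f} Hz)")
```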
An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable regimes of network activation, typically arising from a limited parameter range. In this range of limited sustained activity (LSA), the activity of neural populations in the network persists between the extremes of either quickly dying out or activating the whole network. Hierarchical modular networks were previously found to show a wider parameter range for LSA than random or small-world networks not possessing hierarchical organization or multiple modules. Here we explored how variation in the number of hierarchical levels and modules per level influenced network dynamics and occurrence of LSA. We tested hierarchical configurations of different network sizes, approximating the large-scale networks linking cortical columns in one hemisphere of the rat, cat, or macaque monkey brain. Scaling of the network size affected the number of hierarchical levels and modules in the optimal networks, also depending on whether global edge density or the numbers of connections per node were kept constant. For constant edge density, only few network configurations, possessing an intermediate number of levels and a large number of modules, led to a large range of LSA independent of brain size. For a constant number of node connections, there was a trend for optimal configurations in larger-size networks to possess a larger number of hierarchical levels or more modules. These results may help to explain the trend to greater network complexity apparent in larger brains and may indicate that this complexity is required for maintaining stable levels of neural activation.
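A hypothetical miniature of the kind of test described above: build a two-level hierarchical modular graph and run a simple stochastic spreading process on it, checking whether activity dies out, saturates, or persists at an intermediate level. The construction, spreading rule and all parameters below are illustrative, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hierarchical levels: 4 modules, each containing 4 sub-modules of 16
# nodes (256 nodes).  Connection density falls with hierarchical distance.
# All densities and rates below are illustrative, not those of the study.
n_mod, n_sub, n_node = 4, 4, 16
N = n_mod * n_sub * n_node
module = np.repeat(np.arange(n_mod), n_sub * n_node)
submodule = np.repeat(np.arange(n_mod * n_sub), n_node)

p_same_sub, p_same_mod, p_diff_mod = 0.30, 0.04, 0.004
prob = np.where(submodule[:, None] == submodule[None, :], p_same_sub,
                np.where(module[:, None] == module[None, :],
                         p_same_mod, p_diff_mod))
upper = np.triu(rng.random((N, N)) < prob, k=1)
adj = (upper | upper.T).astype(np.int32)          # undirected graph

def run_spreading(adj, p_spread=0.06, p_decay=0.30, steps=300):
    """Stochastic spreading: active nodes excite their neighbours and may
    switch off; returns the fraction of active nodes at each step."""
    active = np.zeros(adj.shape[0], dtype=bool)
    active[:16] = True                            # seed one sub-module
    trace = []
    for _ in range(steps):
        n_active_nb = adj @ active.astype(np.int32)
        p_on = 1.0 - (1.0 - p_spread) ** n_active_nb
        newly_on = rng.random(active.size) < p_on
        stays_on = active & (rng.random(active.size) > p_decay)
        active = newly_on | stays_on
        trace.append(active.mean())
    return np.array(trace)

trace = run_spreading(adj)
print(f"mean fraction active over the last 100 steps: {trace[-100:].mean():.2f}")
print("limited sustained activity corresponds to a value well between 0 and 1")
```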
The ability to store continuous variables in the state of a biological system (e.g. a neural network) is critical for many behaviours. Most models for implementing such a memory manifold require hand-crafted symmetries in the interactions or precise fine-tuning of parameters. We present a general principle that we refer to as "frozen stabilisation", which allows a family of neural networks to self-organise to a critical state exhibiting memory manifolds without parameter fine-tuning or symmetries. These memory manifolds exhibit a true continuum of memory states and can be used as general purpose integrators for inputs aligned with the manifold. Moreover, frozen stabilisation allows robust memory manifolds in small networks, which is relevant to debates about implementing continuous attractors with a small number of neurons in light of recent experimental discoveries.
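To see the fine-tuning problem that the abstract contrasts against, the sketch below implements the classical baseline: a single rate unit whose recurrent weight must equal exactly 1 to hold a continuous value. This illustrates only the background claim about parameter fine-tuning; it is not an implementation of frozen stabilisation, and the time constant, pulse values and weights are assumptions.

```python
import numpy as np

def run_integrator(w, tau=10.0, dt=0.1, T=2000.0):
    """Leaky rate unit with recurrent weight w:
        tau * dx/dt = -(1 - w) * x + input(t)
    With w exactly 1 the leak cancels and the unit integrates its input,
    i.e. it stores a continuous value after the input ends.  Any mistuning
    of w makes that value decay or grow, which is the fine-tuning problem
    referred to above (this is NOT the frozen-stabilisation mechanism of
    the paper, only the classical baseline it improves on).
    """
    steps = int(T / dt)
    inp = np.zeros(steps)
    inp[1000:1100] = 0.5          # 10 ms input pulse starting at t = 100 ms
    inp[5000:5100] = -0.2         # second pulse at t = 500 ms
    x = 0.0
    for t in range(steps):
        x += dt * (-(1.0 - w) * x + inp[t]) / tau
    return x

for w in (1.0, 0.99, 1.01):
    print(f"w = {w:.2f}: stored value at t = 2 s is {run_integrator(w):+.3f}")
```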
The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to $\simeq 10^6$ nodes and $\simeq 10^8$ edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension $D$ and the small-world coefficient $\sigma$ of these networks. While $\sigma$ suggests a small-world topology, we found that $D < 4$, showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
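A hedged sketch of the model-selection step described above: fit a three-parameter Weibull (the stretched-exponential family) to a degree sample with scipy and compare its log-likelihood against a pure power law. The degree data here are synthetic stand-ins, and the comparison recipe is illustrative, not the exact procedure applied to the Open Connectome graphs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic "degree" sample standing in for a real degree sequence (the
# actual Open Connectome degree lists are not reproduced here).
degrees = stats.weibull_min.rvs(0.7, loc=0.0, scale=30.0,
                                size=20000, random_state=rng)

# Three-parameter generalized Weibull (stretched exponential) fit; with a
# shape parameter below 1 and a free location the MLE can be delicate, so
# the recovered parameters should be read as approximate.
c, loc, scale = stats.weibull_min.fit(degrees)
ll_weibull = stats.weibull_min.logpdf(degrees, c, loc, scale).sum()

# Pure power law (Pareto) for comparison, as in the model selection above.
b, loc_p, scale_p = stats.pareto.fit(degrees, floc=0.0)
ll_pareto = stats.pareto.logpdf(degrees, b, loc_p, scale_p).sum()

print(f"Weibull fit: shape={c:.2f}, loc={loc:.2f}, scale={scale:.2f}")
print(f"log-likelihood: Weibull {ll_weibull:.0f} vs power law {ll_pareto:.0f}")
ks = stats.kstest(degrees, "weibull_min", args=(c, loc, scale))
print(f"KS statistic against the fitted Weibull: {ks.statistic:.3f}")
```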