
Emergence of Intrinsic Representations of Images by Feedforward and Feedback Processes and Bioluminescent Photons in Early Retinotopic Areas

Posted by: Vahid Salari
Publication date: 2010
Research field: Biology
Paper language: English





Recently, we put forward a redox molecular hypothesis concerning the natural biophysical substrate of visual perception and imagery. Here, we explicitly propose that the iterative feedforward and feedback processes can be interpreted in terms of a homunculus looking at the biophysical picture in our brain during visual imagery. We further propose that the brain can use both picture-like and language-like representation processes. In our interpretation, visualization (imagery) is a special kind of representation, i.e., visual imagery requires a peculiar inherent biophysical (picture-like) mechanism. We also conjecture that the evolution of higher levels of complexity made the biophysical picture representation of the external visual world possible through controlled redox and bioluminescent nonlinear (iterative) biochemical reactions in the V1 and V2 areas during visual imagery. Our proposal deals only with the primary level of visual representation (i.e., the perceived scene).
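As a toy illustration only: the "nonlinear (iterative) biochemical reactions" invoked above can be read as an iterated nonlinear map. The Python sketch below iterates a logistic-style update over a grid of activation values; the logistic rule and the parameter r are illustrative assumptions, not the paper's model.

import numpy as np

# Hypothetical sketch: an iterated nonlinear map standing in for the
# "controlled redox and bioluminescent nonlinear (iterative) biochemical
# reactions" mentioned in the abstract. The logistic rule and r = 3.2
# are illustrative assumptions, not taken from the paper.
r = 3.2
grid = np.random.default_rng(0).random((8, 8))  # initial "activations"
for _ in range(50):
    grid = r * grid * (1.0 - grid)              # nonlinear iteration
# At r = 3.2 each cell settles onto a period-2 cycle; which phase a cell
# lands on depends on its initial value, so iteration turns the random
# grid into a stable, binary-like spatial pattern.
print(grid.round(3))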




Read also

The abundant recurrent horizontal and feedback connections in the primate visual cortex are thought to play an important role in bringing global and semantic contextual information to early visual areas during perceptual inference, helping to resolve local ambiguity and fill in missing details. In this study, we find that introducing feedback loops and horizontal recurrent connections to a deep convolutional neural network (VGG16) allows the network to become more robust against noise and occlusion during inference, even in the initial feedforward pass. This suggests that recurrent feedback and contextual modulation transform the feedforward representations of the network in a meaningful way. We study the population codes of neurons in the network before and after learning with feedback, and find that learning with feedback yields an increase in discriminability (measured by d-prime) between the different object classes in the population codes of the neurons in the feedforward path, even at the earliest layer that receives feedback. We find that recurrent feedback, by injecting top-down semantic meaning into the population activities, helps the network learn better feedforward paths that robustly map noisy image patches to the latent representations corresponding to the important visual concepts of each object class, resulting in greater robustness against noise and occlusion as well as better fine-grained recognition.
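A minimal sketch of the d-prime discriminability measure mentioned above, assuming a readout that projects population responses onto the difference of class means (a common choice; the paper's exact readout may differ):

import numpy as np

def d_prime(pop_a, pop_b):
    # pop_a, pop_b: (trials, neurons) population responses for two classes.
    w = pop_a.mean(axis=0) - pop_b.mean(axis=0)  # discriminant axis
    proj_a, proj_b = pop_a @ w, pop_b @ w        # 1-D projections
    pooled_var = 0.5 * (proj_a.var() + proj_b.var())
    return abs(proj_a.mean() - proj_b.mean()) / np.sqrt(pooled_var)

# Toy usage: two Gaussian "object classes" in a 100-neuron population.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 100))
b = rng.normal(0.3, 1.0, size=(200, 100))
print(f"d' = {d_prime(a, b):.2f}")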
Several studies with brain signals have suggested that bottom-up and top-down influences are exerted through distinct frequency bands among visual cortical areas. It has recently been shown in primates that theta and gamma rhythms subserve feedforward influence, whereas feedback influence is dominated by the alpha-beta rhythm. A few theoretical models reproducing these effects have been proposed so far. Here we show that a simple but biophysically plausible two-network motif composed of spiking-neuron models and chemical synapses can exhibit feedforward and feedback influences through distinct frequency bands. Differently from previous studies, this kind of model allows us to study directed influences not only at the population level, by using a proxy for the local field potential, but also at the cellular level, by using the neuronal spiking series.
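A minimal sketch of such a two-network motif, assuming current-based leaky integrate-and-fire populations coupled by a feedforward projection (population 1 to 2) and a feedback projection (population 2 to 1); all names and parameters are illustrative, and the paper's chemical-synapse model is richer than this simplification.

import numpy as np

def simulate(T=1000, N=100, dt=1.0, tau=20.0, v_th=1.0,
             drive=0.06, g_ff=0.05, g_fb=0.02, noise=0.1):
    rng = np.random.default_rng(1)
    v1, v2 = rng.random(N), rng.random(N)     # membrane potentials
    s1, s2 = np.zeros(N), np.zeros(N)         # previous-step spikes
    lfp1, lfp2 = [], []
    for _ in range(T):
        # Feedback current into population 1, feedforward into population 2.
        i1 = drive + g_fb * s2.mean() + noise * rng.standard_normal(N)
        i2 = g_ff * s1.mean() + noise * rng.standard_normal(N)
        v1 += dt / tau * -v1 + i1
        v2 += dt / tau * -v2 + i2
        s1, s2 = (v1 >= v_th).astype(float), (v2 >= v_th).astype(float)
        v1[v1 >= v_th] = 0.0                  # reset after spiking
        v2[v2 >= v_th] = 0.0
        # LFP proxy: mean synaptic input to each population.
        lfp1.append(i1.mean())
        lfp2.append(i2.mean())
    return np.array(lfp1), np.array(lfp2)

lfp1, lfp2 = simulate()
# Band-specific directed influence between lfp1 and lfp2 could then be
# probed, e.g. with spectral Granger causality.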
The cerebral cortex is composed of multiple cortical areas that exert a wide variety of brain functions. Although human brain neurons are genetically and areally mosaic, the three-dimensional structural differences between neurons in different brain areas, or between the neurons of different individuals, have not been delineated. Here, we report a nanometer-scale geometric analysis of brain tissues of the superior temporal gyrus of 4 schizophrenia and 4 control cases by using synchrotron radiation nanotomography. The results of the analysis and a comparison with results for the anterior cingulate cortex indicated that 1) neuron structures are dissimilar between brain areas and 2) the dissimilarity varies from case to case. The structural diversity was mainly observed in the neurite curvature, which inversely correlates with the diameters of the neurites and spines. The analysis also revealed geometric differences between the neurons of the schizophrenia and control cases, suggesting that neuron structure is associated with brain function. The area dependence of neuron structure and its diversity between individuals should represent the individuality of brain functions.
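A hedged sketch of the kind of geometric measure involved: discrete curvature along an ordered 3D neurite trace, correlated with diameter. The trace and the diameters below are synthetic and purely illustrative; real analyses would use skeletons and radii extracted from the tomograms.

import numpy as np

def discrete_curvature(points):
    # points: (n, 3) ordered trace; curvature = |r' x r''| / |r'|^3.
    d1 = np.gradient(points, axis=0)
    d2 = np.gradient(d1, axis=0)
    num = np.linalg.norm(np.cross(d1, d2), axis=1)
    den = np.linalg.norm(d1, axis=1) ** 3
    return num / np.maximum(den, 1e-12)

# Toy neurite: a spiral whose curvature decreases along its length.
t = np.linspace(0.5, 4 * np.pi, 200)
trace = np.stack([t * np.cos(t), t * np.sin(t), 0.3 * t], axis=1)
kappa = discrete_curvature(trace)
diameters = 1.0 / (kappa + 0.5)   # synthetic inverse relation, for illustration
r = np.corrcoef(kappa, diameters)[0, 1]
print(f"curvature-diameter correlation: {r:.2f}")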
The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that intrinsic dimensionality reflects information about the latent variables encoded in collective activity, while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing specific hypotheses about the computational principles reflected in intrinsic and embedding dimensionality.
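As one concrete estimator: embedding dimensionality is often summarised by the participation ratio of the covariance eigenvalue spectrum. The sketch below is a standard construction rather than code from the review, and intrinsic dimensionality would typically require a nonlinear (e.g. nearest-neighbour) estimator instead.

import numpy as np

def participation_ratio(X):
    # X: (samples, neurons) activity; PR = (sum lam)^2 / sum(lam^2).
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

# Toy data: 100 "neurons" driven by 3 latent variables plus weak noise,
# so the estimated embedding dimensionality should come out near 3.
rng = np.random.default_rng(2)
latents = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 100))
X = latents @ mixing + 0.05 * rng.standard_normal((500, 100))
print(f"participation ratio: {participation_ratio(X):.1f}")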
The ability to store continuous variables in the state of a biological system (e.g. a neural network) is critical for many behaviours. Most models for implementing such a memory manifold require hand-crafted symmetries in the interactions or precise fine-tuning of parameters. We present a general principle that we refer to as "frozen stabilisation", which allows a family of neural networks to self-organise to a critical state exhibiting memory manifolds without parameter fine-tuning or symmetries. These memory manifolds exhibit a true continuum of memory states and can be used as general-purpose integrators for inputs aligned with the manifold. Moreover, frozen stabilisation allows robust memory manifolds in small networks, which is relevant to debates about implementing continuous attractors with a small number of neurons in light of recent experimental discoveries.
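To make the integrator behaviour concrete, here is a hand-tuned line attractor, deliberately the kind of fine-tuned construction that frozen stabilisation is said to make unnecessary; it illustrates only what using a memory manifold as a general-purpose integrator means.

import numpy as np

# A line attractor: connectivity with eigenvalue 1 along one direction m
# and 0 elsewhere, so the component of the state along m is preserved.
N = 50
rng = np.random.default_rng(3)
m = rng.standard_normal(N)
m /= np.linalg.norm(m)            # manifold (memory) direction
W = np.outer(m, m)                # hand-tuned, fine-tuned connectivity

x = np.zeros(N)
inputs = [0.1, 0.1, -0.05, 0.0, 0.2]
for u in inputs:
    x = W @ x + u * m             # inputs aligned with the manifold
print(f"stored value: {x @ m:.2f}  (sum of inputs = {sum(inputs):.2f})")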