
Role of homeostasis in learning sparse representations

Published by Laurent Perrinet
Publication date: 2016
Research field: Biology
Paper language: English
Author: Laurent Perrinet





Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, one in which only a relatively small number of neurons is simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
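The mechanism described in the abstract can be illustrated with a minimal sketch: greedy sparse coding (matching pursuit) in which a per-atom homeostatic gain biases the competition so that all atoms are selected at comparable rates. This is a simplified toy, not the paper's implementation; the dictionary, gain-update rule, and all parameter names (`gain`, `eta_h`, `n_active`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a random unit-norm dictionary of N atoms for patches of dimension M.
M, N = 64, 128                 # patch dimension, number of atoms
n_patches, n_active = 500, 10  # patches per run, coefficients kept per patch

dico = rng.standard_normal((N, M))
dico /= np.linalg.norm(dico, axis=1, keepdims=True)

gain = np.ones(N)    # homeostatic gains, one per atom (assumed mechanism)
usage = np.zeros(N)  # running count of how often each atom is selected
eta_h = 0.01         # homeostasis learning rate (illustrative value)

def sparse_code(patch):
    """Matching pursuit: greedily pick atoms, with gains biasing the selection."""
    residual = patch.copy()
    coeffs = np.zeros(N)
    for _ in range(n_active):
        corr = dico @ residual
        k = np.argmax(gain * np.abs(corr))  # gain-modulated competition
        coeffs[k] += corr[k]
        residual -= corr[k] * dico[k]
    return coeffs

for _ in range(n_patches):
    patch = rng.standard_normal(M)
    c = sparse_code(patch)
    usage[c != 0] += 1
    # "Fair" competition: atoms selected more often than average have their
    # gain lowered, under-used atoms are boosted, equalizing selection rates.
    p = usage / usage.sum()
    gain *= np.exp(-eta_h * (p - 1.0 / N) * N)
```

With the gain update disabled (`eta_h = 0`), a few well-matched atoms tend to dominate the selection; with it enabled, the selection histogram flattens, which is the sense in which the homeostasis makes the competition fair.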




Read also

Despite a rise in the use of learning-by-doing pedagogical methods in practice, little is known as to how these methods improve learning outcomes. Here we show that visual association cortex causally contributes to the performance benefits of a learning-by-doing method. This finding derives from transcranial magnetic stimulation (TMS) and a gesture-enriched foreign language (L2) vocabulary learning paradigm performed by 22 young adults. Inhibitory TMS of visual motion cortex reduced learning outcomes for abstract and concrete gesture-enriched words in comparison to sham stimulation. There were no TMS effects on words learned with pictures. These results adjudicate between opposing predictions of two neuroscientific learning theories: while reactivation-based theories predict no functional role of visual motion cortex in vocabulary learning outcomes, the current study supports the predictive coding theory view that specialized sensory cortices precipitate sensorimotor-based learning benefits.
Karol Gregor, Yann LeCun (2011)
We propose a simple and efficient algorithm for learning sparse invariant representations from unlabeled data with fast inference. When trained on short movie sequences, the learned features are selective to a range of orientations and spatial frequencies, but robust to a wide range of positions, similar to complex cells in the primary visual cortex. We give a hierarchical version of the algorithm, and give guarantees of fast convergence under certain conditions.
Many representation systems on the sphere have been proposed in the past, such as spherical harmonics, wavelets, or curvelets. Each of these data representations is designed to extract a specific set of features, and choosing the best fixed representation system for a given scientific application is challenging. In this paper, we show that we can learn directly a representation system from given data on the sphere. We propose two new adaptive approaches: the first is a (potentially multi-scale) patch-based dictionary learning approach, and the second consists in selecting a representation among a parametrized family of representations, the α-shearlets. We investigate their relative performance to represent and denoise complex structures on different astrophysical data sets on the sphere.
We present a reinforcement learning algorithm for learning sparse non-parametric controllers in a Reproducing Kernel Hilbert Space. We improve the sample complexity of this approach by imposing a structure of the state-action function through a normalized advantage function (NAF). This representation of the policy enables efficiently composing multiple learned models without additional training samples or interaction with the environment. We demonstrate the performance of this algorithm on learning obstacle-avoidance policies in multiple simulations of a robot equipped with a laser scanner while navigating in a 2D environment. We apply the composition operation to various policy combinations and test them to show that the composed policies retain the performance of their components. We also transfer the composed policy directly to a physical platform operating in an arena with obstacles in order to demonstrate a degree of generalization.
We study the storage of multiple phase-coded patterns as stable dynamical attractors in recurrent neural networks with sparse connectivity. To determine the synaptic strength of existent connections and store the phase-coded patterns, we introduce a learning rule inspired by spike-timing dependent plasticity (STDP). We find that, after learning, the spontaneous dynamics of the network replay one of the stored dynamical patterns, depending on the network initialization. We study the network capacity as a function of topology, and find that a small-world-like topology may be optimal, as a compromise between the high wiring cost of long-range connections and the capacity increase.