
Towards deep learning with segregated dendrites

Added by Blake Richards
Publication date: 2016
Field: Biology
Language: English





Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the brain optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, the neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network can learn to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful representations, the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the dendritic morphology of neocortical pyramidal neurons.
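The full model in the paper uses spiking multi-compartment neurons; the sketch below is only a minimal, rate-based caricature of the core idea, with hypothetical layer sizes and learning rate. Feedforward input drives a basal compartment, feedback from the output layer arrives at a segregated apical compartment through fixed random weights (a feedback-alignment-style assumption), and the difference in apical potential between a forward phase and a target phase stands in for the backpropagated error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 4 inputs, 5 hidden units, 3 outputs.
n_in, n_hid, n_out = 4, 5, 3
W0 = rng.normal(0, 0.1, (n_hid, n_in))   # feedforward weights onto basal dendrites
W1 = rng.normal(0, 0.1, (n_out, n_hid))  # hidden-to-output weights
Y  = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback onto apical dendrites

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
lr = 0.1  # illustrative learning rate

def trial(x, target):
    global W0, W1
    # Forward phase: basal input alone drives the somatic rate;
    # the apical compartment passively integrates top-down feedback.
    h = sigmoid(W0 @ x)
    out = sigmoid(W1 @ h)
    apical_fwd = Y @ out

    # Target phase: the output layer is driven toward the teaching
    # signal (clamped here for brevity; the paper nudges it instead).
    apical_tgt = Y @ target

    # Local updates: each layer uses only potentials available in its own
    # compartments; the apical difference plays the role of the hidden error.
    delta_hid = (apical_tgt - apical_fwd) * h * (1 - h)
    W1 += lr * np.outer(target - out, h)
    W0 += lr * np.outer(delta_hid, x)
    return out

# Toy usage: learn to map a fixed input to a fixed target.
x, target = rng.random(n_in), np.array([1.0, 0.0, 0.0])
for _ in range(200):
    out = trial(x, target)
print(np.round(out, 2))
```

Because the apical compartment modulates plasticity rather than directly driving the somatic output, the feedback can coordinate weight updates across layers without corrupting the feedforward computation.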



Related research

V. Mendez, A. Iomin (2014)
This chapter is a contribution to the Handbook of Applications of Chaos Theory, edited by Prof. Christos H. Skiadas. The chapter is organized as follows. First, we study the statistical properties of combs and explain how the effect of the teeth on movement along the backbone can be reduced to a waiting-time distribution between consecutive jumps. Second, we justify the use of a comb-like structure as a paradigm for further exploration of a spiny dendrite. In particular, we show how a comb-like structure can sustain anomalous diffusion, reaction-diffusion, and Levy walks. Finally, we illustrate how the same models are also useful for dealing with the mechanism of translocation waves of CaMKII and their propagation failure. We also present a brief introduction to fractional integro-differentiation in an appendix at the end of the chapter.
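As a rough illustration of the first point, here is a minimal continuous-time random walk along a comb backbone (all parameters hypothetical, not from the chapter): the trapping of a walker in the teeth is modelled as a power-law waiting time with tail exponent alpha < 1, which yields subdiffusion, i.e. a mean-squared displacement growing like t^alpha rather than t.

```python
import numpy as np

rng = np.random.default_rng(1)

def comb_walk(t_max, alpha=0.5, n_walkers=2000):
    """Random walk along the backbone where time spent trapped in the
    teeth appears as a Pareto-distributed waiting time between jumps."""
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        # Waiting time with survival probability P(tau > s) ~ s**(-alpha).
        tau = (1.0 - rng.random(idx.size)) ** (-1.0 / alpha)
        t[idx] += tau
        # Walkers whose next jump would fall beyond t_max stop where they are.
        done = t[idx] >= t_max
        active[idx[done]] = False
        move = idx[~done]
        x[move] += rng.choice([-1.0, 1.0], size=move.size)
    return x

# The mean-squared displacement grows roughly as t**alpha (subdiffusion).
for t_max in (10.0, 100.0, 1000.0):
    print(f"t = {t_max:6.0f}   MSD = {np.mean(comb_walk(t_max) ** 2):7.1f}")
```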
In the perception of tones, colors, and other stimuli, the surround inhibition (or lateral inhibition) mechanism is crucial. It enhances the signal of the strongest tone, color, or other stimulus by reducing and inhibiting the surrounding, less important signals. This mechanism is well studied in the physiology of sensory systems. We construct a neural network with two hidden layers in addition to the input and output layers, with 60 neurons (units) in each of the four layers. The label (correct answer) is prepared from an input signal by applying the Hartline mechanism seven times, that is, by sending inhibitory signals from the neighboring neurons and then amplifying all the signals. The implications obtained by deep learning in this network are compared with the standard physiological understanding of the surround inhibition mechanism.
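A minimal sketch of this label-construction step, assuming a nearest-neighbour form of the Hartline mechanism with circular boundary conditions and a hypothetical inhibition strength k; the paper's exact kernel and amplification rule may differ.

```python
import numpy as np

def hartline_step(s, k=0.3):
    """One round of lateral inhibition: each unit is suppressed by its two
    neighbours (circular boundary), then the pattern is amplified back to
    its previous peak value."""
    out = np.clip(s - k * (np.roll(s, 1) + np.roll(s, -1)), 0.0, None)
    if out.max() > 0:
        out *= s.max() / out.max()
    return out

def make_label(signal, n_iter=7):
    """Apply the inhibition operation seven times, as described above."""
    s = signal.astype(float)
    for _ in range(n_iter):
        s = hartline_step(s)
    return s

# A broad bump over 60 units sharpens into a narrow peak.
units = np.arange(60)
x = np.exp(-0.5 * ((units - 30) / 6.0) ** 2)
print(np.round(make_label(x), 2))
```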
A central challenge in neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). Examination of trained CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also yield information about the circuit's internal structure and function.
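A minimal PyTorch-style sketch of such a model; all layer counts, kernel sizes, input resolution, and the noise level here are hypothetical stand-ins, not the authors' architecture. Movie frames enter as input channels, Gaussian latent noise is injected between convolutional layers, and a softplus readout keeps the predicted firing rates positive.

```python
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    """Sketch of a CNN retinal model: stimulus clips in, ganglion-cell
    firing rates out, with latent noise in the intermediate layers."""
    def __init__(self, n_frames=40, n_cells=10, noise_sd=0.1):
        super().__init__()
        self.noise_sd = noise_sd
        self.conv1 = nn.Conv2d(n_frames, 8, kernel_size=15)  # frames as channels
        self.conv2 = nn.Conv2d(8, 8, kernel_size=11)
        self.readout = nn.Linear(8 * 26 * 26, n_cells)       # assumes 50x50 input
        self.act = nn.Softplus()                             # keeps rates positive

    def forward(self, x):
        h = self.act(self.conv1(x))
        h = h + self.noise_sd * torch.randn_like(h)          # latent noise source
        h = self.act(self.conv2(h))
        h = h + self.noise_sd * torch.randn_like(h)
        return self.act(self.readout(h.flatten(1)))

rates = RetinaCNN()(torch.randn(2, 40, 50, 50))  # batch of 2 stimulus clips
print(rates.shape)                               # torch.Size([2, 10])
```

Training such a model would fit the predicted rates to recorded ganglion-cell spike counts, e.g. with a Poisson likelihood.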
Replay is the reactivation of one or more neural patterns, which are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated into deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this paper, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be utilized to improve artificial neural networks.
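For concreteness, the simplest artificial counterpart is a uniform experience-replay buffer, sketched below; the biological replay discussed above (temporally structured, prioritized, sleep-dependent reactivation) is far richer than this.

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform replay: store past examples and mix them into later training."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest memories are overwritten

    def store(self, example):
        self.buffer.append(example)

    def sample(self, batch_size):
        # Interleaving replayed samples with new data is what protects
        # previously learned knowledge from catastrophic forgetting.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer()
for step in range(100):
    buf.store((step, step % 10))   # stand-in for (input, label) pairs
print(buf.sample(4))
```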
Deep supervised neural networks trained to classify objects have emerged as popular models of computation in the primate ventral stream. These models represent information with a high-dimensional distributed population code, implying that inferotemporal (IT) responses are also too complex to interpret at the single-neuron level. We challenge this view by modelling neural responses to faces in the macaque IT with a deep unsupervised generative model, beta-VAE. Unlike deep classifiers, beta-VAE disentangles sensory data into interpretable latent factors, such as gender or hair length. We found a remarkable correspondence between the generative factors discovered by the model and those coded by single IT neurons. Moreover, we were able to reconstruct face images using the signals from just a handful of cells. This suggests that the ventral visual stream may be optimising the disentangling objective, producing a neural code that is low-dimensional and semantically interpretable at the single-unit level.
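The disentangling pressure in a beta-VAE comes from upweighting the KL term of the standard variational objective. A minimal sketch of the loss, assuming a diagonal-Gaussian posterior, a unit-Gaussian prior, and a squared-error reconstruction term (beta = 4 is just an illustrative value):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """ELBO-style loss for a beta-VAE with posterior N(mu, sigma^2) and
    unit-Gaussian prior; beta > 1 upweights the KL term, which is what
    pressures the latent factors to disentangle."""
    recon = np.sum((x - x_recon) ** 2)  # reconstruction error
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon + beta * kl

# Toy check with a 3-dimensional latent code (KL term is zero here).
x = np.array([0.2, 0.8, 0.5])
print(beta_vae_loss(x, x * 0.9, mu=np.zeros(3), log_var=np.zeros(3)))
```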