
Influence of autapses on synchronisation in neural networks with chemical synapses

Added by Paulo Protachevicz
Publication date: 2020
Field: Biology
Language: English





A great deal of research has been devoted to the investigation of neural dynamics in various network topologies. However, only a few studies have focused on the influence of autapses, synapses from a neuron onto itself formed by closed loops, on neural synchronisation. Here, we build a random network of adaptive exponential integrate-and-fire neurons coupled with chemical synapses and equipped with autapses to study their effect on synchronous behaviour. We consider a time delay in the conductance of the pre-synaptic neuron for both excitatory and inhibitory connections. Interestingly, in neural networks consisting of both excitatory and inhibitory neurons, we uncover that synchronous behaviour depends on the type of the autaptic synapse. Our results provide evidence of the synchronous and desynchronous activities that emerge in random neural networks with chemical, inhibitory and excitatory synapses where neurons are equipped with autapses.
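The ingredients named above, adaptive exponential (AdEx) integrate-and-fire dynamics and a delayed, conductance-based chemical autapse, can be sketched for a single neuron in a few lines. This is a minimal illustration, not the paper's network model: all parameter values, the forward-Euler scheme, and the buffer-based delay are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: one AdEx neuron whose spikes feed back onto itself
# through a delayed excitatory chemical autapse. All parameters below
# are illustrative assumptions, not values from the study.
def simulate_adex_autapse(T=500.0, dt=0.01, I_ext=400.0,
                          g_aut=5.0, delay=2.0, tau_s=5.0, E_syn=0.0):
    C, g_L, E_L = 200.0, 12.0, -70.0      # capacitance (pF), leak (nS, mV)
    V_T, Delta_T = -50.0, 2.0             # exponential threshold, slope (mV)
    a, tau_w, b = 2.0, 300.0, 60.0        # adaptation: nS, ms, pA
    V_r, V_peak = -58.0, 0.0              # reset and spike-cutoff voltages (mV)

    n, d = int(T / dt), int(delay / dt)   # steps; delay in steps
    V, w, s = E_L, 0.0, 0.0               # voltage, adaptation, synaptic gate
    arrivals = np.zeros(n + d + 1)        # buffer of delayed self-spikes
    spikes = []
    for i in range(n):
        I_syn = g_aut * s * (E_syn - V)   # autaptic current (pA)
        dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T)
              - w + I_ext + I_syn) / C
        dw = (a * (V - E_L) - w) / tau_w
        ds = -s / tau_s + arrivals[i]     # gate driven by delayed spikes
        V += dt * dV
        w += dt * dw
        s += dt * ds
        if V >= V_peak:                   # spike: reset and adapt
            V, w = V_r, w + b
            arrivals[i + d] += 1.0 / dt   # unit impulse arrives after the delay
            spikes.append(i * dt)
    return np.array(spikes)

spikes = simulate_adex_autapse()
print(f"{len(spikes)} spikes in 500 ms")
```

The excitatory autapse (E_syn = 0 mV) transiently boosts the drive a couple of milliseconds after each spike; switching E_syn to a hyperpolarised value would make the self-loop inhibitory, the contrast the abstract is concerned with.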



Related research


Conventionally, information is represented by spike rates in the neural system. Here, we consider the ability of temporally modulated activities in neuronal networks to carry information beyond spike rates. These temporal modulations, commonly known as population spikes, are due to the presence of synaptic depression in a neuronal network model. We discuss their relevance to an experiment on transparent motion in macaque monkeys by Treue et al. in 2000. They found that if the moving directions of objects are too close, the firing-rate profile is very similar to that for a single direction. When the difference in the moving directions of the objects is large enough, the network responds by enhancing the resolution of the objects' moving directions. In this paper, we propose that this behavior can be reproduced by neural networks with dynamical synapses when there are multiple external inputs. We demonstrate how resolution enhancement can be achieved, and discuss the conditions under which temporally modulated activities can enhance information-processing performance in general.
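The synaptic depression invoked here can be illustrated with a Tsodyks-Markram-style resource variable: each presynaptic spike consumes a fraction of the available resources, which then recover slowly, so the transmitted efficacy drops spike by spike. The recovery time constant and release fraction below are illustrative assumptions, not values from the study.

```python
import math

# Sketch of short-term synaptic depression (Tsodyks-Markram style).
# tau_rec and U are illustrative assumptions.
def depressed_efficacy(spike_times, tau_rec=800.0, U=0.5):
    """Return the synaptic efficacy U*x at each presynaptic spike (times in ms).

    x is the fraction of available synaptic resources: each spike
    consumes a fraction U of x, and x recovers toward 1 with tau_rec.
    """
    x, last_t, eff = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)  # recovery
        eff.append(U * x)      # fraction transmitted on this spike
        x -= U * x             # depletion caused by the spike
        last_t = t
    return eff

eff = depressed_efficacy(range(0, 500, 50))   # a regular 20 Hz train
print([round(e, 3) for e in eff])
```

Because efficacy depends on recent history, the network's response transiently overshoots when input changes, which is the raw material for the temporally modulated population spikes the abstract discusses.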
In this work we studied the combined action of chemical and electrical synapses in small networks of Hindmarsh-Rose (HR) neurons, focusing on synchronous behaviour and on the rate of information produced (per time unit) by the networks. We show that if the chemical synapse is excitatory, the larger the chemical synapse strength, the smaller the electrical synapse strength needed to achieve complete synchronisation, and for moderate synaptic strengths one should expect to find desynchronous behaviour. Conversely, if the chemical synapse is inhibitory, the larger the chemical synapse strength, the larger the electrical synapse strength needed to achieve complete synchronisation, and for moderate synaptic strengths one should expect to find synchronous behaviour. Finally, we show how to calculate semi-analytically an upper bound for the rate of information produced per time unit (the Kolmogorov-Sinai entropy) in larger networks. As an application, we show that this upper bound is linearly proportional to the number of neurons in a network whose neurons are highly connected.
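A two-neuron version of this setup can be sketched directly: each HR neuron receives a diffusive electrical coupling plus a sigmoidally gated chemical coupling, and complete synchronisation can be quantified as the time-averaged difference between the membrane variables. The HR parameters, coupling form, initial conditions, and strengths below are illustrative assumptions, not the paper's exact choices.

```python
import math

# Sketch: two Hindmarsh-Rose neurons with electrical (diffusive) and
# excitatory chemical coupling; returns the mean |x1 - x2| over the
# second half of the run as a synchronisation error. All values are
# illustrative assumptions.
def hr_pair(g_el, g_ch=0.05, T=1000.0, dt=0.01):
    a, b, c, d = 1.0, 3.0, 1.0, 5.0            # HR fast-subsystem parameters
    r, s, x0, I = 0.006, 4.0, -1.6, 3.0        # slow adaptation and drive
    V_syn, lam, th = 2.0, 10.0, -0.25          # chemical synapse parameters
    state = [(-1.3, -2.0, 2.0), (-1.0, 4.0, 2.5)]   # distinct initial conditions
    n = int(T / dt)
    acc = 0.0
    for k in range(n):
        gate = [1.0 / (1.0 + math.exp(-lam * (x - th))) for x, _, _ in state]
        new = []
        for i in (0, 1):
            x, y, z = state[i]
            xo, go = state[1 - i][0], gate[1 - i]
            dx = (y - a * x**3 + b * x**2 - z + I
                  + g_el * (xo - x)                 # electrical coupling
                  - g_ch * (x - V_syn) * go)        # chemical coupling
            dy = c - d * x**2 - y
            dz = r * (s * (x - x0) - z)
            new.append((x + dt * dx, y + dt * dy, z + dt * dz))
        state = new
        if k >= n // 2:                             # discard the transient
            acc += abs(state[0][0] - state[1][0])
    return acc / (n - n // 2)

print("sync error, coupled:  ", hr_pair(g_el=2.0))
print("sync error, uncoupled:", hr_pair(g_el=0.0, g_ch=0.0))
```

Sweeping `g_el` against `g_ch` (and flipping the sign conventions to make the chemical synapse inhibitory) is the kind of two-parameter exploration the abstract summarises.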
Networks of fast-spiking interneurons are crucial for the generation of neural oscillations in the brain. Here we study the synchronous behavior of interneuronal networks that are coupled by delayed inhibitory and fast electrical synapses. We find that both coupling modes play a crucial role in the synchronization of the network. In addition, delayed inhibitory synapses affect the emerging oscillatory patterns. By increasing the inhibitory synaptic delay, we observe a transition from regular to mixed oscillatory patterns at a critical value. We also examine how the unreliability of inhibitory synapses influences the emergence of synchronization and the oscillatory patterns. We find that low levels of reliability tend to destroy synchronization, and moreover, that interneuronal networks with long inhibitory synaptic delays require a minimal level of reliability for the mixed oscillatory pattern to be maintained.
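The synaptic unreliability varied in this study is commonly modelled as Bernoulli release: each presynaptic spike is transmitted with probability `p_release` and, if transmitted, arrives after the synaptic delay. The reliability levels and delay below are illustrative assumptions.

```python
import random

# Sketch of an unreliable, delayed inhibitory synapse: each presynaptic
# spike survives with probability p_release and arrives `delay` ms later.
def transmitted(spike_times, p_release, delay, rng):
    """Arrival times of the spikes that survive unreliable release."""
    return [t + delay for t in spike_times if rng.random() < p_release]

rng = random.Random(1)
pre = list(range(0, 1000, 10))        # 100 presynaptic spikes at 100 Hz
for p in (1.0, 0.5, 0.1):
    post = transmitted(pre, p, delay=2.0, rng=rng)
    print(p, len(post))
```

Feeding such thinned, delayed spike trains into the inhibitory conductances of an interneuron model is how one would probe the reliability thresholds the abstract reports.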
How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in environments with dynamic rewards. Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. Finally, we show that in a Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP), a basal ganglia structure involved in motivated behaviors. We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.
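The core idea, a scalar reward obtained as the dot product of a motivational salience vector with a vector of outcome saliences, can be sketched with tabular Q-learning on a toy corridor task. The task, hyperparameters, and motivation encoding are illustrative assumptions, not the paper's network model.

```python
import random

# Sketch: Q-learning conditioned on a motivation vector. Two goals sit at
# the ends of a corridor; the reward an agent experiences is the dot
# product of its motivation with the goal's salience vector. Everything
# here (task, parameters) is an illustrative assumption.
rng = random.Random(0)
N = 7                                           # corridor states 0..6
OUTCOME = {0: (1.0, 0.0), N - 1: (0.0, 1.0)}    # salience vectors at the goals
MOTIVATIONS = [(1.0, 0.0), (0.0, 1.0)]          # e.g. "thirsty" vs "hungry"
Q = {}                                          # Q[(motivation, state)] -> [L, R]

def q(m, s):
    return Q.setdefault((m, s), [0.0, 0.0])

def greedy(m, s):
    qv = q(m, s)
    return rng.randrange(2) if qv[0] == qv[1] else (0 if qv[0] > qv[1] else 1)

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(5000):
    m = rng.choice(MOTIVATIONS)                 # motivation changes per episode
    s = N // 2
    while s not in OUTCOME:
        a = rng.randrange(2) if rng.random() < eps else greedy(m, s)
        s2 = s - 1 if a == 0 else s + 1
        r = sum(mi * oi for mi, oi in zip(m, OUTCOME.get(s2, (0.0, 0.0))))
        target = r if s2 in OUTCOME else r + gamma * max(q(m, s2))
        q(m, s)[a] += alpha * (target - q(m, s)[a])
        s = s2

def greedy_goal(m):
    s = N // 2
    for _ in range(4 * N):                      # safety cap on path length
        if s in OUTCOME:
            break
        s = s - 1 if greedy(m, s) == 0 else s + 1
    return s

print(greedy_goal((1.0, 0.0)), greedy_goal((0.0, 1.0)))
```

Note that after training, behaviour switches between the two goals purely by changing the motivation input, with no further weight updates, which is the "adapt without changes in synaptic strength" point of the abstract.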
B. Cessac, T. Vieville (2008)
We present a mathematical analysis of networks of Integrate-and-Fire neurons with adaptive conductances. Taking into account the realistic fact that the spike time is only known within some finite precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale $\delta$, where $\delta$ can be arbitrarily small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic-weight space traditionally called the edge of chaos, a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane-potential trajectories and the raster plot. This shows that in this case the neural code lies entirely in the spikes. As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, which provides a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky Integrate-and-Fire models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions.
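Because spikes are effective only at multiples of the time step, the network becomes a deterministic map, and the periodic asymptotic orbits the abstract describes can be found by iterating until a state repeats. The sketch below uses a simple discrete-time leaky IF map in that spirit; the weights, inputs, leak and threshold are illustrative assumptions, not the paper's.

```python
# Sketch of a discretised leaky Integrate-and-Fire network: membrane
# potentials leak with factor gamma, reset on spiking, and receive
# synaptic input from the neurons that spiked on the previous step.
# Weights, inputs, gamma and theta are illustrative assumptions.
def step(V, W, I, gamma=0.8, theta=1.0):
    """One step of the discrete-time map; theta is the firing threshold."""
    Z = [1.0 if v >= theta else 0.0 for v in V]            # spike indicators
    return tuple(gamma * V[i] * (1.0 - Z[i])               # leak, reset on spike
                 + sum(W[i][j] * Z[j] for j in range(len(V)))  # synaptic input
                 + I[i]
                 for i in range(len(V)))

def find_period(V, W, I, max_steps=100000):
    """Iterate until a state recurs; return the length of the periodic orbit."""
    seen = {}
    for t in range(max_steps):
        key = tuple(round(v, 12) for v in V)               # quantised state
        if key in seen:
            return t - seen[key]
        seen[key] = t
        V = step(V, W, I)
    return None

W = [[0.0, 0.5], [-0.3, 0.0]]       # neuron 2 excites 1, neuron 1 inhibits 2
I = [0.4, 0.6]                      # constant external input
print(find_period((0.0, 0.0), W, I))
```

For a finite network this recurrence search always terminates away from the edge of chaos, reflecting the result that the asymptotic dynamics consists of finitely many stable periodic orbits.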
