
Adaptive Synchronization and Anticipatory Dynamical System

 Published by Chun-Chung Chen
 Publication date: 2015
 Research field: Biology
 Paper language: English





Many biological systems can sense periodical variations in a stimulus input and produce well-timed, anticipatory responses after the input is removed. Such systems show memory effects for retaining timing information in the stimulus and cannot be understood from traditional synchronization considerations of passive oscillatory systems. To understand these anticipatory phenomena, we consider oscillators built from excitable systems with the addition of an adaptive dynamics. With such systems, well-timed post-stimulus responses similar to those from experiments can be obtained. Furthermore, a well-known model of working memory is shown to possess similar anticipatory dynamics when the adaptive mechanism is identified with synaptic facilitation. The last finding suggests that this type of oscillator can be common in neuronal systems with plasticity.
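The abstract does not give the model equations, but the core idea, that an adaptive state variable can store the timing of a periodic stimulus and sustain well-timed responses after the stimulus is removed, can be illustrated with an adaptive-frequency Hopf oscillator. This is a standard construction from the synchronization literature, not the authors' model, and all parameter values below are illustrative assumptions:

```python
import math
import numpy as np

def learn_stimulus_frequency(omega_true=4.0, omega0=3.0,
                             eps=2.0, gamma=8.0, mu=1.0,
                             dt=1e-3, steps=400_000):
    """Adaptive Hopf oscillator driven by F(t) = sin(omega_true * t).

    The frequency state 'omega' drifts toward the stimulus frequency
    and, because d(omega)/dt = 0 once F vanishes, the learned timing
    persists after the stimulus is removed -- a simple memory effect.
    """
    x, y, omega = 1.0, 0.0, omega0
    tail = []
    for n in range(steps):
        F = math.sin(omega_true * n * dt)      # periodic stimulus input
        r2 = x * x + y * y
        r = max(math.sqrt(r2), 1e-9)
        dx = gamma * (mu - r2) * x - omega * y + eps * F
        dy = gamma * (mu - r2) * y + omega * x
        domega = -eps * F * y / r              # frequency-adaptation rule
        x, y = x + dt * dx, y + dt * dy
        omega += dt * domega
        if n >= steps - 50_000:                # average over the final stretch
            tail.append(omega)
    return float(np.mean(tail))
```

Starting from an intrinsic frequency of 3.0, the adaptation rule pulls omega toward the stimulus frequency of 4.0; the retained value of omega after the drive ends plays the role of the stored timing information.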


Read also

136 - Sungho Hong, 2006
White noise methods are a powerful tool for characterizing the computation performed by neural systems. These methods allow one to identify the feature or features that a neural system extracts from a complex input, and to determine how these features are combined to drive the system's spiking response. These methods have also been applied to characterize the input/output relations of single neurons driven by synaptic inputs, simulated by direct current injection. To interpret the results of white noise analysis of single neurons, we would like to understand how the obtained feature space of a single neuron maps onto the biophysical properties of the membrane, in particular the dynamics of ion channels. Here, through analysis of a simple dynamical model neuron, we draw explicit connections between the output of a white noise analysis and the underlying dynamical system. We find that under certain assumptions, the form of the relevant features is well defined by the parameters of the dynamical system. Further, we show that under some conditions, the feature space is spanned by the spike-triggered average and its successive order time derivatives.
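The spike-triggered average mentioned above can be sketched in a few lines. The following toy model, my own illustration with an assumed biphasic filter and threshold, drives a linear/nonlinear neuron with Gaussian white noise and recovers the filter from the average stimulus preceding each spike:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical biphasic linear filter (illustrative, not from the paper)
L = 30
tt = np.arange(L)
k = np.sin(2 * np.pi * tt / L) * np.exp(-tt / 10.0)
k /= np.linalg.norm(k)

# Gaussian white-noise stimulus and filtered "membrane" signal
T = 200_000
s = rng.standard_normal(T)
g = np.convolve(s, k)[:T]                # g[t] = sum_i k[i] * s[t - i]

# "Spikes" wherever the filtered signal crosses a fixed threshold
spike_times = np.flatnonzero(g > 2.0)
spike_times = spike_times[spike_times >= L]

# Spike-triggered average: mean stimulus window preceding each spike
windows = np.stack([s[t - L + 1 : t + 1] for t in spike_times])
sta = windows.mean(axis=0)

# Read backwards in time, the STA is proportional to the filter k
similarity = float(np.corrcoef(sta[::-1], k)[0, 1])
```

For a Gaussian white-noise input and this kind of LN model, the STA points along the underlying filter, which is the sense in which the recovered feature space reflects the dynamical system's parameters.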
110 - Sungho Hong, 2008
In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change in gain with respect to both mean and variance to the receptive fields obtained by reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
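The variance-dependent gain change described above can be demonstrated with a minimal sketch, assuming a simple noisy threshold unit rather than the conductance-based models of the paper: the slope of the firing-probability curve near threshold shrinks as the background variance grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def firing_prob(mean_current, sigma, theta=1.0, n=200_000):
    """Fraction of samples on which a noisy threshold unit 'spikes':
    spike if mean_current + sigma * noise exceeds threshold theta."""
    noise = rng.standard_normal(n)
    return float(np.mean(mean_current + sigma * noise > theta))

def gain_at_threshold(sigma, theta=1.0, d=0.1):
    """Slope of the f-I-like curve near threshold (finite difference)."""
    return (firing_prob(theta + d, sigma) - firing_prob(theta - d, sigma)) / (2 * d)

g_low = gain_at_threshold(sigma=0.5)   # small background variance -> steep
g_high = gain_at_threshold(sigma=2.0)  # large background variance -> shallow
```

For Gaussian noise the slope at threshold is analytically proportional to 1/sigma, so doubling the background standard deviation halves the gain, a divisive modulation of the f-I curve of the kind the abstract refers to.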
We investigate the synchronization features of a network of spiking neurons under a distance-dependent coupling following a power-law model. The interplay between topology and coupling strength leads to the existence of different spatiotemporal patterns, corresponding to either non-synchronized or phase-synchronized states. Particularly interesting is what we call synchronization malleability, in which the system exhibits significantly different phase synchronization degrees for the same parameters as a consequence of a different ordering of neural inputs. We analyze the functional connectivity of the network by calculating the mutual information between neuronal spike trains, allowing us to characterize the structures of synchronization in the network. We show that these structures are dependent on the ordering of the inputs for the parameter regions where the network presents synchronization malleability and we suggest that this is due to a complex interplay between coupling, connection architecture, and individual neural inputs.
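The mutual-information measure of functional connectivity used above can be sketched for binned, binarized spike trains with a plug-in estimator. The spike-train statistics below are my own toy assumptions (a shared drive producing partial synchrony), not the paper's network:

```python
import numpy as np

def mutual_information_bits(a, b):
    """Plug-in mutual information (bits) between two binary spike trains."""
    a, b = np.asarray(a, int), np.asarray(b, int)
    joint = np.zeros((2, 2))
    for i in (0, 1):
        for j in (0, 1):
            joint[i, j] = np.mean((a == i) & (b == j))
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in (0, 1):
        for j in (0, 1):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (pa[i] * pb[j]))
    return mi

rng = np.random.default_rng(2)
n = 100_000
shared = rng.random(n) < 0.2             # common drive -> partial synchrony
a = shared | (rng.random(n) < 0.05)      # two neurons sharing the drive
b = shared | (rng.random(n) < 0.05)
c = rng.random(n) < 0.25                 # independent train, matched rate
mi_sync = mutual_information_bits(a, b)
mi_indep = mutual_information_bits(a, c)
```

Pairs that phase-synchronize through shared input carry substantial mutual information, while rate-matched but independent trains carry essentially none, which is what lets this quantity map out synchronization structure in the network.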
The preBötzinger Complex, the mammalian inspiratory rhythm generator, encodes inspiratory time as motor pattern. Spike synchronization throughout this sparsely connected network generates inspiratory bursts albeit with variable latencies after preinspiratory activity onset in each breathing cycle. Using preBötC rhythmogenic microcircuit minimal models, we examined the variability in probability and latency to burst, mimicking experiments. Among various physiologically plausible graphs of 1000 point neurons with experimentally determined neuronal and synaptic parameters, directed Erdős–Rényi graphs best captured the experimentally observed dynamics. Mechanistically, preBötC (de)synchronization and oscillatory dynamics are regulated by the efferent connectivity of spiking neurons that gates the amplification of modest preinspiratory activity through input convergence. Furthermore, to replicate experiments, a lognormal distribution of synaptic weights was necessary to augment the efficacy of convergent coincident inputs. These mechanisms enable exceptionally robust yet flexible preBötC attractor dynamics that, we postulate, represent universal temporal-processing and decision-making computational motifs throughout the brain.
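The network ingredients named above, a directed Erdős–Rényi graph of 1000 point neurons with lognormally distributed synaptic weights, can be constructed directly. The connection probability and lognormal parameters below are illustrative placeholders, not the experimentally determined values the paper uses:

```python
import numpy as np

rng = np.random.default_rng(3)

N, p = 1000, 0.01                       # 1000 point neurons; p is illustrative
adj = rng.random((N, N)) < p            # directed Erdos-Renyi adjacency
np.fill_diagonal(adj, False)            # no self-connections

# Lognormal synaptic weights on existing edges (illustrative parameters);
# the heavy right tail makes a few convergent inputs disproportionately strong
weights = np.where(adj, rng.lognormal(mean=0.0, sigma=1.0, size=(N, N)), 0.0)

mean_out_degree = float(adj.sum(axis=1).mean())   # ~ p * (N - 1)
w = weights[adj]
heavy_tailed = w.mean() > np.median(w)            # mean pulled up by rare strong synapses
```

The mechanistic claim in the abstract is that this skew matters: a handful of strong convergent, coincident inputs can push a neuron over threshold where uniform weights of the same mean would not, amplifying modest preinspiratory activity into a synchronized burst.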
The role of synchronous firing in sensory coding and cognition remains controversial. While studies focusing on its mechanistic consequences in attentional tasks suggest that synchronization dynamically boosts sensory processing, others failed to find significant synchronization levels in such tasks. We attempt to understand both lines of evidence within a coherent theoretical framework. We conceptualize synchronization as an independent control parameter to study how the postsynaptic neuron transmits the average firing activity of a presynaptic population, in the presence of synchronization. We apply the Berger-Levy theory of energy-efficient information transmission to interpret simulations of a Hodgkin-Huxley-type postsynaptic neuron model, where we varied the firing rate and synchronization level in the presynaptic population independently. We find that for a fixed presynaptic firing rate the simulated postsynaptic interspike interval distribution depends on the synchronization level and is well described by a generalized extreme value distribution. Across synchronization levels of 15% to 50%, the mutual information per unit cost under the optimal distribution of presynaptic firing rates is maximized at a ~30% synchronization level. These results suggest that the statistics and energy efficiency of neuronal communication channels, through which the input rate is communicated, can be dynamically adapted by the synchronization level.