
Perspective: network-guided pattern formation of neural dynamics

Added by Marcus Kaiser
Publication date: 2014
Fields: Biology, Physics
Language: English





The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs, or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings, lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks: evaluating how the self-organized dynamics are confined by the network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
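As a concrete illustration of how topology can confine collective dynamics, the sketch below (not the paper's actual simulations; the Kuramoto model, all parameters, and the graph choices are assumptions for illustration) runs phase oscillators on a ring lattice and on a modular graph of the same size and compares the resulting global order parameter.

```python
# Illustrative sketch: identical oscillator dynamics on two architectures,
# a ring lattice and a modular graph, yield different collective states.
import numpy as np
import networkx as nx

def simulate_kuramoto(A, K=1.0, T=50.0, dt=0.01, seed=0):
    """Euler integration of Kuramoto phase dynamics on adjacency matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
    omega = rng.normal(0.0, 0.1, n)          # heterogeneous natural frequencies
    deg = np.maximum(A.sum(axis=1), 1.0)     # degree, for coupling normalization
    for _ in range(int(T / dt)):
        # coupling: (K / deg_i) * sum_j A_ij * sin(theta_j - theta_i)
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling / deg)
    return theta

def order_parameter(theta):
    """Global synchrony r in [0, 1]; r = 1 means fully phase-locked."""
    return np.abs(np.exp(1j * theta).mean())

# Same number of nodes, similar density, different architecture.
ring = nx.to_numpy_array(nx.watts_strogatz_graph(100, 6, p=0.0, seed=1))
modular = nx.to_numpy_array(nx.planted_partition_graph(4, 25, 0.4, 0.02, seed=1))

for name, A in [("ring lattice", ring), ("modular graph", modular)]:
    r = order_parameter(simulate_kuramoto(A))
    print(f"{name}: global order parameter r = {r:.2f}")
```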



Related research

We propose a single-chunk model of long-term memory that combines the basic features of the ACT-R theory and the multiple-trace memory architecture. The pivot point of the developed theory is a mathematical description of the creation of new memory traces caused by learning a certain fragment of an information pattern, as affected by the fragments of this pattern already retained up to the current moment of time. These constructions are justified using the available psychological and physiological data. The final equation governing the learning and forgetting processes takes the form of a differential equation with a Caputo-type fractional time derivative. Several characteristic situations of the learning (continuous and discontinuous) and forgetting processes are studied numerically. In particular, it is demonstrated that, first, the learning and forgetting exponents of the corresponding power laws of the fractional memory dynamics should be regarded as independent system parameters. Second, as far as spacing effects are concerned, the longer the discontinuous learning process, the longer the time interval within which a subject remembers the information without considerable loss; moreover, this relationship is a linear proportionality.
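The paper's governing equation is not reproduced in this abstract, so the sketch below illustrates only the generic ingredient it names: a Caputo-type fractional relaxation equation D^alpha y = s(t) - y with 0 < alpha < 1, discretized with the standard explicit L1 scheme. The stimulus s(t), the parameters, and the relaxation form itself are assumptions for illustration; the point is that the power-law forgetting tail comes from the fractional derivative's memory of the whole trajectory.

```python
# Hedged sketch: explicit L1 discretization of a Caputo fractional ODE,
# D^alpha y = f(t, y), used here as a toy learning/forgetting dynamic.
import math
import numpy as np

def solve_caputo(f, y0, alpha, dt, n_steps):
    """Explicit L1 scheme for the Caputo fractional ODE D^alpha y = f(t, y)."""
    y = np.empty(n_steps + 1)
    y[0] = y0
    c = dt**alpha * math.gamma(2.0 - alpha)
    j = np.arange(1, n_steps + 1)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 memory weights
    for n in range(1, n_steps + 1):
        # History term sum_{j=1}^{n-1} b_j (y_{n-j} - y_{n-j-1}): the whole
        # past contributes, which makes the dynamics non-Markovian and
        # produces power-law (rather than exponential) forgetting.
        hist = np.dot(b[:n - 1], y[n - 1:0:-1] - y[n - 2::-1]) if n > 1 else 0.0
        y[n] = y[n - 1] - hist + c * f((n - 1) * dt, y[n - 1])
    return y

# Hypothetical stimulus: a learning pulse for t < 1, pure forgetting afterwards.
def stimulus(t, y):
    return (1.0 if t < 1.0 else 0.0) - y

trace = solve_caputo(stimulus, y0=0.0, alpha=0.6, dt=0.01, n_steps=5000)
print("trace retained at t = 50:", round(float(trace[-1]), 4))
```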
J.B. Satinover (2008)
Using an artificial neural network (ANN), a fixed universe of approximately 1500 equities from the Value Line index is rank-ordered by predicted price change over the next quarter. Inputs to the network consist only of the ten prior quarterly percentage changes in price and in earnings for each equity (by quarter, not accumulated), converted to a relative rank scaled around zero. Thirty simulated portfolios are constructed from, respectively, the 10, 20, ..., 100 top-ranking equities (long portfolios), the 10, 20, ..., 100 bottom-ranking equities (short portfolios), and their hedged sets (long-short portfolios). In a 29-quarter simulation from the end of the third quarter of 1994 through the fourth quarter of 2001, which duplicates real-world trading of the same method employed during 2002, all portfolios are held fixed for one quarter. Results are compared to the S&P 500, the Value Line universe itself, trading the universe of equities using the proprietary "Value Line Ranking System" (to which this method is in some ways similar), and a Martingale method of ranking the same equities. The cumulative returns generated by the network predictor significantly exceed those generated by the S&P 500, the overall universe, and the Martingale and Value Line prediction methods, and are not eroded by trading costs. The ANN shows significantly positive Jensen's alpha, i.e., anomalous risk-adjusted expected return. A time series of its global performance shows clear antipersistence; nevertheless, its performance is significantly better than a simple one-step Martingale predictor, than the Value Line system itself, and than a simple buy-and-hold strategy, even when transaction costs are accounted for.
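The abstract specifies the preprocessing (rank-scaled quarterly changes) and the portfolio construction, but not the ANN architecture. The sketch below therefore shows only those two specified steps, with synthetic data and a placeholder linear predictor standing in for the trained network; every number and name in it is made up for illustration.

```python
# Hedged sketch: cross-sectional rank preprocessing and long/short/hedged
# portfolio assembly, with a stub predictor in place of the trained ANN.
import numpy as np

rng = np.random.default_rng(0)
n_equities, n_lags = 1500, 10

def cross_sectional_rank(x):
    """Rank each column across equities, scaled to roughly [-0.5, 0.5]."""
    order = x.argsort(axis=0).argsort(axis=0)
    return order / (len(x) - 1) - 0.5

# Synthetic stand-ins for the ten prior quarterly % changes in price/earnings.
price_chg = rng.normal(0, 0.10, (n_equities, n_lags))
earn_chg = rng.normal(0, 0.20, (n_equities, n_lags))
X = np.hstack([cross_sectional_rank(price_chg), cross_sectional_rank(earn_chg)])

# Placeholder predictor (the paper trains an ANN on historical quarters).
w = rng.normal(0, 1, X.shape[1])
predicted = X @ w

# Long, short, and hedged portfolios of the k top- and bottom-ranked equities.
k = 100
order = predicted.argsort()
short_idx, long_idx = order[:k], order[-k:]
next_q_return = rng.normal(0.02, 0.15, n_equities)   # synthetic realized returns
long_ret = next_q_return[long_idx].mean()
hedged_ret = long_ret - next_q_return[short_idx].mean()
print(f"long: {long_ret:.3f}, long-short: {hedged_ret:.3f}")
```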
We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow/fast analysis to derive an averaged system whose dynamics derive from an energy function and therefore always converge to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a distance matrix in a high-dimensional space, with the neurons labelled by spatial position within this space. The resulting network structure turns out to be roughly convolutional; the residual of the projection defines the non-convolutional part of the connectivity, which is minimized in the process. Third, we show how restricting the dimension of the space in which the neurons live gives rise to patterns similar to cortical maps, motivated by an energy-efficiency argument based on wire-length minimization. Finally, we show how this approach leads to the emergence of ocular dominance and orientation columns in primary visual cortex, and we establish that the non-convolutional (or long-range) connectivity is patchy and, in the case of orientation learning, co-aligned.
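A minimal sketch of the pipeline this abstract describes, under strong assumptions (ring-structured inputs, and a plain averaged Hebbian update in place of the authors' slow/fast system): slow averaging drives the recurrent weights toward the input correlation structure, and classical MDS on the learned weights then recovers the hidden geometry.

```python
# Hedged sketch: slow Hebbian averaging learns input correlations; classical
# MDS on the learned weights recovers the hidden ring the inputs live on.
import numpy as np

rng = np.random.default_rng(0)
n = 60
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)

def input_pattern():
    """Smooth bump at a random ring position: correlations decay with distance."""
    c = rng.uniform(0, 2 * np.pi)
    d = np.angle(np.exp(1j * (angles - c)))    # wrapped angular distance
    return np.exp(-d**2 / 0.5)

# Slow Hebbian averaging: W relaxes toward the input correlation matrix.
W = np.zeros((n, n))
eta = 0.01
for _ in range(5000):
    u = input_pattern()
    W += eta * (np.outer(u, u) - W)

# Classical MDS: treat high weights as small distances and embed in 2D.
D = W.max() - W                         # similarity -> dissimilarity
J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
B = -0.5 * J @ (D**2) @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))

# A recovered ring shows up as a nearly constant embedding radius.
r = np.linalg.norm(coords, axis=1)
print("radius spread (std/mean):", round(float(r.std() / r.mean()), 3))
```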
Biological neurons receive multiple noisy oscillatory signals, and their dynamical response to the superposition of these signals is of fundamental importance for information processing in the brain. Here we study the response of neural systems to a weak envelope-modulation signal formed by the superposition of two periodic signals with different frequencies. We show that stochastic resonance occurs at the beat frequency in neural systems at the single-neuron as well as the population level. The performance of this frequency-difference-dependent stochastic resonance is influenced by both the beat frequency and the two forcing frequencies. Compared to a single neuron, a population of neurons is more efficient in detecting the information carried by the weak envelope-modulation signal at the beat frequency. Furthermore, an appropriate fine-tuning of the excitation-inhibition balance can further optimize the response of a neural ensemble to the superimposed signal. Our results thus introduce and provide insights into the generation and modulation mechanisms of frequency-difference-dependent stochastic resonance in neural systems.
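A single-neuron sketch of the effect (not the authors' model; the neuron type, parameters, and locking metric are assumptions): a noisy leaky integrate-and-fire unit driven by two subthreshold sinusoids, with the spike train's phase locking at the beat frequency |f1 - f2| evaluated across noise levels. Stochastic resonance shows up as a locking peak at intermediate noise.

```python
# Hedged sketch: noisy LIF neuron driven by two sinusoids; vector strength
# at the beat frequency is maximal at an intermediate noise intensity.
import numpy as np

def beat_locking(noise_sigma, f1=1.00, f2=1.05, T=2000.0, dt=0.05, seed=0):
    """Phase locking of an LIF spike train at the beat frequency |f1 - f2|."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    # Two subthreshold sinusoids; their sum carries a slow beat envelope.
    drive = 0.3 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
    v, spike_times = 0.0, []
    for i in range(n):
        v += dt * (-v + drive[i]) + np.sqrt(dt) * noise_sigma * rng.normal()
        if v >= 1.0:                  # threshold crossing: spike and reset
            spike_times.append(t[i])
            v = 0.0
    if not spike_times:
        return 0.0
    ts = np.asarray(spike_times)
    f_beat = abs(f1 - f2)
    # Vector strength: 1 = spikes perfectly locked to the beat cycle.
    return float(np.abs(np.exp(2j * np.pi * f_beat * ts).mean()))

for sigma in (0.05, 0.2, 0.5, 1.5):
    print(f"sigma = {sigma}: vector strength at beat = {beat_locking(sigma):.2f}")
```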
The activity of a sparse network of leaky integrate-and-fire neurons is carefully revisited with reference to a regime of bona fide asynchronous dynamics. The study is preceded by a finite-size scaling analysis, carried out to identify a setup where collective synchronization is negligible. The comparison between quenched and annealed networks reveals the emergence of substantial differences when the coupling strength is increased, via a scenario somewhat reminiscent of a phase transition. For sufficiently strong synaptic coupling, quenched networks exhibit highly bursting neural activity, well reproduced by a self-consistent approach based on the assumption that the input synaptic current is a superposition of independent renewal processes. The distribution of interspike intervals turns out to be relatively long-tailed, a crucial feature required for the self-sustainment of the bursting activity in a regime where neurons operate on average (much) below threshold. A semi-quantitative analogy with Ornstein-Uhlenbeck processes helps validate this interpretation. Finally, an alternative explanation in terms of Poisson processes is offered under the additional assumption of mutual correlations among excitatory and inhibitory spikes.
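A hedged sketch of a quenched sparse network in the same spirit (a Brunel-style toy, not the paper's exact setup; all parameters are assumptions): fixed random in-degree, inhibition-dominated pulse coupling, and the interspike-interval coefficient of variation as a crude burstiness measure (CV well above 1 hints at bursting).

```python
# Hedged sketch: sparse LIF network with quenched (fixed) connectivity and
# instantaneous pulses; the ISI coefficient of variation measures irregularity.
import numpy as np

rng = np.random.default_rng(1)
N, k = 400, 20                  # neurons, fixed in-degree (quenched)
J, g = 0.02, 5.0                # excitatory weight, relative inhibition
mu, dt, T = 1.5, 0.01, 100.0    # suprathreshold drive, time step, duration

# Quenched connectivity: each neuron keeps the same k presynaptic partners,
# a quarter of which deliver inhibitory pulses (inhibition-dominated overall).
pre = np.array([rng.choice(N, k, replace=False) for _ in range(N)])
w = np.where(rng.random((N, k)) < 0.25, -g * J, J)

v = rng.uniform(0.0, 1.0, N)
last = np.full(N, np.nan)
isis = []
for step in range(int(T / dt)):
    t = step * dt
    spiked = v >= 1.0
    seen = spiked & ~np.isnan(last)          # neurons with a previous spike
    isis.extend(t - last[seen])
    last[spiked] = t
    v[spiked] = 0.0                          # reset after threshold crossing
    v += dt * (mu - v)                       # leaky drift toward mu > threshold
    v += (w * spiked[pre]).sum(axis=1)       # quenched recurrent pulses

isis = np.asarray(isis)
print(f"{isis.size} ISIs, CV = {isis.std() / isis.mean():.2f}")
```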
