Understanding the functioning of a neural system in terms of its underlying circuitry is an important problem in neuroscience. Recent developments in electrophysiology and imaging make it possible to record the activities of hundreds of neurons simultaneously. Inferring the underlying neuronal connectivity patterns from such multi-neuronal spike train data streams is a challenging statistical and computational problem, as it involves finding significant temporal patterns in vast amounts of symbolic time series data. In this paper we show that frequent episode mining methods from the field of temporal data mining can be very useful in this context. In the frequent episode discovery framework, the data are viewed as a sequence of events, each characterized by an event type and a time of occurrence, and episodes are certain types of temporal patterns in such data. We show that, from the set of frequent episodes discovered in multi-neuronal data, one can infer different types of connectivity patterns in the neural system that generated it. For this purpose, we introduce the notion of mining frequent episodes under temporal constraints whose structure is motivated by the application, and we present algorithms for discovering serial and parallel episodes under these constraints. Through extensive simulation studies we demonstrate that these methods are useful for unearthing patterns of neuronal network connectivity.
Discovering frequent episodes in event sequences is an interesting data mining task. In this paper, we argue that this framework is very effective for analyzing multi-neuronal spike train data, an important problem in neuroscience for which no data mining approaches have yet been reported. Motivated by this application, we introduce different temporal constraints on the occurrences of episodes and present algorithms for discovering frequent episodes under these constraints. Through simulations, we show that our method is very effective for unearthing the connectivity patterns underlying spike train data.
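To make the pattern being counted concrete, here is a minimal sketch of counting non-overlapped occurrences of a single serial episode under an inter-event gap constraint. This is an illustration only, not the algorithms of the papers above (which handle many candidate episodes efficiently); the episode tuple, the `max_gap` constraint, and the `(event_type, time)` encoding are illustrative assumptions:

```python
def count_serial_episode(events, episode, max_gap):
    """Count non-overlapped occurrences of a serial episode.

    events  : list of (event_type, time) pairs, sorted by time
    episode : tuple of event types that must occur in this order
    max_gap : maximum allowed time between consecutive episode events
    """
    count, state, last_time = 0, 0, None
    for etype, t in events:
        if state > 0 and t - last_time > max_gap:
            state, last_time = 0, None      # stale partial match: discard it
        if etype == episode[state]:
            state, last_time = state + 1, t
            if state == len(episode):       # one full occurrence completed
                count += 1
                state, last_time = 0, None  # non-overlapped: start afresh
    return count

spikes = [('A', 0.1), ('B', 0.4), ('C', 0.6), ('A', 2.0), ('C', 9.0)]
count_serial_episode(spikes, ('A', 'B', 'C'), max_gap=1.0)  # -> 1
```

In the neuroscience reading, the event types are neuron labels, and a frequently occurring serial episode A -> B -> C with small allowed gaps is suggestive of an excitatory chain from A to B to C.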
We investigated the influence of the efficacy of synaptic interaction on firing synchronization in excitatory neuronal networks. We found a spike death phenomenon: the state of a neuron transits from a limit cycle to a fixed point or a transient state. The phenomenon occurs under the perturbation of an excitatory synaptic interaction of high efficacy. We showed that the decrease of synaptic current results in spike death by depressing the feedback of the sodium ionic current. In networks with the spike death property, the degree of synchronization is lower and insensitive to the heterogeneity of the neurons. The mechanism of the influence is that the transition of the neuron state disrupts the adjustment of the rhythm of neuronal oscillation and prevents a further increase of firing synchronization.
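The limit-cycle-to-fixed-point transition behind spike death can be illustrated with a generic spiking model. The sketch below uses the FitzHugh-Nagumo equations as a stand-in (the paper's conductance-based model and its parameter values are not reproduced here): lowering the input current below the oscillation threshold silences the neuron, i.e. the trajectory collapses from a limit cycle onto a stable fixed point and no further spikes occur.

```python
def count_spikes(current, t_max=300.0, dt=0.05):
    """Euler-integrate a FitzHugh-Nagumo neuron and count upward
    threshold crossings of v; zero spikes corresponds to 'spike death'."""
    v, w = -1.2, -0.625          # start near the resting state
    spikes, prev_v = 0, v
    for _ in range(int(t_max / dt)):
        dv = v - v ** 3 / 3 - w + current
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        if prev_v <= 1.0 < v:    # upward crossing of the spike threshold
            spikes += 1
        prev_v = v
    return spikes

count_spikes(0.5)   # sustained current: repetitive firing (limit cycle)
count_spikes(0.0)   # reduced current: quiescence (fixed point) -> 0
```

In the paper's setting the reduction of effective input arises from the synaptic perturbation itself rather than from an externally clamped current, but the dynamical picture is the same.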
Reconstructing network connectivity from the collective dynamics of a system typically requires access to its complete continuous-time evolution, which is often experimentally inaccessible. Here we propose a theory for revealing the physical connectivity of networked systems from only the event time series their intrinsic collective dynamics generate. Representing the patterns of event timings in an event space spanned by inter-event and cross-event intervals, we reveal which other units directly influence the inter-event times of any given unit. For illustration, we linearize an event space mapping constructed from the spiking patterns in model neural circuits to reveal the presence or absence of synapses between any pair of neurons, as well as whether the coupling acts in an inhibiting or an activating (excitatory) manner. The proposed model-independent reconstruction theory is scalable to larger networks and may thus play an important role in the reconstruction of networks from biology to social science and engineering.
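The core idea, expressing each unit's inter-event intervals as a function of cross-event intervals to the other units and then linearizing, can be caricatured in a two-neuron toy problem. Everything below (the exponential interaction kernel, the coupling strength of -0.4, the rates and noise level) is an illustrative assumption, not the paper's event-space construction: a significantly negative regression coefficient means that recent spikes of the other unit shorten the inter-spike interval, i.e. an activating (excitatory) influence, while a positive one would indicate inhibition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Neuron 0: irregular spiking.
t0 = np.cumsum(rng.uniform(0.5, 1.5, size=400))

# Neuron 1: its inter-spike interval shortens when neuron 0 fired recently
# (an excitatory influence with assumed coupling coefficient -0.4).
t1 = [0.3]
while t1[-1] < t0[-1] - 2.0:
    prev0 = t0[t0 <= t1[-1]]
    cross = t1[-1] - prev0[-1] if len(prev0) else 1.0
    isi = 1.0 - 0.4 * np.exp(-cross) + 0.05 * rng.standard_normal()
    t1.append(t1[-1] + max(isi, 0.05))
t1 = np.array(t1)

# Linearized event-space regression: predict each inter-spike interval of
# neuron 1 from the cross-event interval to neuron 0's most recent spike.
X, y = [], []
for k in range(1, len(t1)):
    prev0 = t0[t0 <= t1[k - 1]]
    if len(prev0) == 0:
        continue
    cross = t1[k - 1] - prev0[-1]
    X.append([1.0, np.exp(-cross)])   # intercept + interaction feature
    y.append(t1[k] - t1[k - 1])
coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
# coef[1] < 0: recent neuron-0 spikes shorten neuron 1's intervals,
# so the inferred coupling 0 -> 1 is activating (excitatory).
```

A full reconstruction would include one such feature per candidate presynaptic unit and decide presence or absence of a synapse from the fitted coefficients.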
The brain is believed to operate near a non-equilibrium point and to generate critical, self-organized avalanches of neuronal activity. Recent experimental evidence has revealed significant heterogeneity in both synaptic input and output connectivity, but whether this structural heterogeneity participates in the regulation of neuronal avalanches remains poorly understood. Using computational modelling, we predict that different types of structural heterogeneity have distinct effects on avalanche dynamics. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, whereas heterogeneous output connectivity cannot evoke avalanche dynamics. In the criticality region, we observe the co-emergence of multi-scale cortical activities, and both the avalanche dynamics and the neuronal oscillations are modulated by the input heterogeneity. Remarkably, we show that similar results can be reproduced in networks with various types of in- and out-degree distributions. Overall, these findings provide details on the circuitry mechanisms by which nonrandom synaptic connectivity regulates neuronal avalanches, and they suggest testable hypotheses for future experimental studies.
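Avalanche statistics of the kind analyzed here are conventionally extracted from binned population activity. The sketch below shows that standard extraction (an avalanche is a maximal run of non-empty bins), together with a toy branching process in which the branching ratio `sigma` plays the role of the distance from criticality; the particular offspring rule is an illustrative assumption, not the paper's network model.

```python
import random

def avalanche_sizes(activity):
    """Split a binned spike-count series into avalanches: maximal runs
    of non-empty bins separated by empty bins; size = spikes in the run."""
    sizes, current = [], 0
    for n in activity:
        if n > 0:
            current += n
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes

def branching_avalanche(sigma, rng, cap=10_000):
    """One avalanche of a toy branching process: each active unit spawns
    two offspring independently with probability sigma / 2 each, so the
    expected offspring per unit (the branching ratio) is sigma."""
    active, size = 1, 1
    while active > 0 and size < cap:   # cap guards against runaway cascades
        active = sum(int(rng.random() < sigma / 2) +
                     int(rng.random() < sigma / 2) for _ in range(active))
        size += active
    return size

rng = random.Random(42)
sub = [branching_avalanche(0.5, rng) for _ in range(500)]   # subcritical
near = [branching_avalanche(0.9, rng) for _ in range(500)]  # near-critical
# Approaching criticality (sigma -> 1), avalanches grow much larger on
# average, and their size distribution develops the characteristic heavy tail.
```

For a subcritical branching process the mean avalanche size is 1 / (1 - sigma), which is why the near-critical run above yields markedly larger avalanches.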
A method of data assimilation (DA) is employed to estimate electrophysiological parameters of neurons simultaneously with their synaptic connectivity in a small model biological network. The DA procedure is cast as an optimization, with a cost function consisting of both a measurement error and a model error term. An iterative reweighting of these terms permits a systematic method to identify the lowest minimum, within a local region of state space, on the surface of a non-convex cost function. In the model, two sets of parameter values are associated with two particular functional modes of network activity: simultaneous firing of all neurons, and a pattern-generating mode wherein the neurons burst in sequence. The DA procedure is able to recover these modes if: i) the stimulating electrical currents have chaotic waveforms, and ii) the measurements consist of the membrane voltages of all neurons in the circuit. Further, this method is able to prune a model of unnecessarily high dimensionality to a representation that contains the maximum dimensionality required to reproduce the provided measurements. This paper offers a proof of concept that DA has the potential to inform laboratory designs for estimating properties of small, isolatable functional circuits.
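The structure of such a cost function, and the iterative reweighting of its two terms, can be sketched on a scalar toy problem. The linear dynamics, noise level, learning rates, and annealing schedule below are illustrative assumptions, not the paper's neuron model or optimizer; the point is that gradually increasing the model-error weight `alpha` steers the joint state-and-parameter estimate from a measurement-dominated solution toward a dynamics-consistent minimum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic truth: linear dynamics x[t+1] = p * x[t] with p = 0.9,
# observed through additive measurement noise.
p_true, T = 0.9, 40
x_true = p_true ** np.arange(T)
y = x_true + 0.02 * rng.standard_normal(T)

def cost(x, p, alpha):
    meas = np.sum((y - x) ** 2)                  # measurement error term
    model = np.sum((x[1:] - p * x[:-1]) ** 2)    # model (dynamics) error term
    return meas + alpha * model

x, p = y.copy(), 0.5                 # initial guess: trust the data, wrong p
lr_x, lr_p = 1e-3, 2e-4
for alpha in [0.1, 1.0, 10.0, 100.0]:            # iterative reweighting
    for _ in range(2000):                        # plain gradient descent
        r = x[1:] - p * x[:-1]                   # dynamics residual
        gx = -2 * (y - x)
        gx[1:] += 2 * alpha * r
        gx[:-1] += -2 * alpha * p * r
        gp = -2 * alpha * np.sum(r * x[:-1])
        x -= lr_x * gx
        p -= lr_p * gp
# After annealing, p has moved close to the true dynamics parameter 0.9.
```

In the paper's setting the states are membrane voltages and gating variables, the dynamics are nonlinear neuron equations, and the reweighting schedule is chosen to track the lowest local minimum of the non-convex surface; the toy above only mirrors the two-term cost structure and the annealing idea.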