In this work, we study the dynamic range of a neuronal network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses, respectively. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations guided by Hebbian learning rules. Our results show that chemical synapses can abruptly enhance the sensitivity of the neural network, an effect that can become even more pronounced when the Hebbian learning rules are applied to the chemical synapses.
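As a rough illustration of this model class, the sketch below implements a Kinouchi–Copelli-style excitable cellular automaton in which electrical synapses transmit deterministically and chemical synapses transmit probabilistically after a fixed delay. The topology, parameter values, and the omission of the Hebbian update are assumptions made for brevity, not the paper's exact rules.

```python
import numpy as np
from collections import deque

# Excitable cellular automaton with two synapse types (illustrative only).
rng = np.random.default_rng(0)
N, K = 1000, 10                   # neurons, neighbours per neuron
N_STATES = 5                      # 0 quiescent, 1 spiking, 2-4 refractory
P_CHEM, DELAY = 0.3, 2            # chemical transmission prob. and delay

neighbours = rng.integers(0, N, size=(N, K))       # random topology (assumed)
is_chemical = rng.random((N, K)) < 0.5             # half the synapses chemical

state = np.zeros(N, dtype=int)
past_spikes = deque([np.zeros(N, dtype=bool)] * DELAY, maxlen=DELAY)

def step(state, stimulus_rate=0.01):
    spiking_now = state == 1
    spiking_delayed = past_spikes[0]               # spikes DELAY steps ago
    nxt = np.where(state > 0, (state + 1) % N_STATES, 0)
    for i in np.flatnonzero(state == 0):
        if rng.random() < stimulus_rate:           # external Poisson-like input
            nxt[i] = 1
            continue
        for k in range(K):
            j = neighbours[i, k]
            if is_chemical[i, k]:
                # chemical synapse: delayed and probabilistic
                fired = spiking_delayed[j] and rng.random() < P_CHEM
            else:
                # electrical synapse: instantaneous and deterministic
                fired = spiking_now[j]
            if fired:
                nxt[i] = 1
                break
    past_spikes.append(spiking_now)
    return nxt

for t in range(300):
    state = step(state)
print("active fraction:", (state == 1).mean())
```

The dynamic range itself would then be obtained by sweeping `stimulus_rate`, recording the stationary response curve, and taking the standard measure Δ = 10·log₁₀(s₀.₉/s₀.₁) between the stimuli producing 10% and 90% of the maximal response.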
Excessively high neural synchronisation has been associated with epileptic seizures, one of the most common brain diseases worldwide. A better understanding of neural synchronisation mechanisms can thus help control or even treat epilepsy. In this paper, we study neural synchronisation in a random network whose nodes are neurons with excitatory and inhibitory synapses, and whose activity is given by the adaptive exponential integrate-and-fire model. In this framework, we verify that a decrease in the influence of inhibition can generate synchronisation originating from a pattern of desynchronised spikes. The transition from desynchronised spikes to synchronous bursts of activity, induced by varying the synaptic coupling, emerges within a hysteresis loop due to bistability, where abnormal (excessively synchronous) regimes exist. We verify that, for parameters in the bistable regime, a square current pulse can trigger excessively high (abnormal) synchronisation, a process that can reproduce features of epileptic seizures. We then show that it is possible to suppress such abnormal synchronisation by applying a small-amplitude external current to less than 10% of the neurons in the network. Our results demonstrate that external electrical stimulation can not only trigger synchronous behaviour but, more importantly, can be used to reduce abnormal synchronisation and thus effectively control or treat epileptic seizures.
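For reference, here is a minimal sketch of the node dynamics: forward-Euler integration of a single adaptive exponential integrate-and-fire neuron with the standard Brette–Gerstner parameters, driven by a square current pulse. The network coupling, inhibition, and the suppression protocol described above are omitted, and the pulse amplitude and timing are illustrative assumptions.

```python
import numpy as np

# Single AdEx neuron, forward-Euler integration.
# Units: C in pF, conductances in nS, voltages in mV, currents in pA, time in ms.
C, gL, EL = 281.0, 30.0, -70.6          # capacitance, leak, resting potential
VT, DT, tauw = -50.4, 2.0, 144.0        # threshold, slope factor, adaptation time
a, b, Vr, Vpeak = 4.0, 80.5, -70.6, 20.0  # adaptation, reset, spike cutoff

dt, T = 0.01, 500.0
V, w = EL, 0.0
spikes = []
for n in range(int(T / dt)):
    t = n * dt
    I = 800.0 if 100.0 <= t < 400.0 else 0.0   # square current pulse (pA)
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tauw
    V += dt * dV
    w += dt * dw
    if V >= Vpeak:                   # spike detected: reset and adapt
        V, w = Vr, w + b
        spikes.append(t)
print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms" if spikes
      else "no spikes")
```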
Computers have been endowed with a measure of human-like intelligence owing to the rapid development of artificial intelligence technology, represented by neural networks. Facing the challenge of making machines more imaginative, we consider a quantum stochastic neural network (QSNN) and propose a learning algorithm to update the parameters governing the network's evolution. The QSNN can be applied to a class of classification problems; we investigate its performance in sentence classification and find that the coherent part of the quantum evolution can accelerate training and improve the accuracy of verse recognition, which can be regarded as a quantum-enhanced associative memory. In addition, the coherent QSNN is found to be more robust against both label noise and device noise, making it a more suitable option for practical implementation.
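As a hedged toy picture of such dynamics, the snippet below Euler-integrates a small Lindblad-type master equation that mixes a coherent Hamiltonian term with incoherent jump operators, in the spirit of a quantum stochastic walk. The three-node topology, rates, mixing weight, and sink readout are illustrative assumptions, and the parameter-learning loop is not shown.

```python
import numpy as np

# Density-matrix evolution mixing coherent (Hamiltonian) and incoherent
# (Lindblad) parts; a toy analogue, not the paper's architecture.
dim = 3                                   # 0: input, 1: hidden, 2: readout sink
H = np.zeros((dim, dim), dtype=complex)
H[0, 1] = H[1, 0] = 1.0                   # coherent hopping 0 <-> 1

def jump(i, j, rate):                     # incoherent transfer i -> j
    L = np.zeros((dim, dim), dtype=complex)
    L[j, i] = np.sqrt(rate)
    return L

jumps = [jump(0, 2, 0.3), jump(1, 2, 0.3)]

rho = np.zeros((dim, dim), dtype=complex)
rho[0, 0] = 1.0                           # all population on the input node

p, dt = 0.5, 0.001                        # dissipation weight, Euler step
for _ in range(5000):
    drho = -1j * (1 - p) * (H @ rho - rho @ H)
    for L in jumps:
        LdL = L.conj().T @ L
        drho += p * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    rho += dt * drho

print("readout population:", rho[2, 2].real)   # class score read from the sink
```

In this picture, the coherent term opens an additional pathway through the hidden node, which is the kind of effect the abstract attributes to the coherent part of the QSNN evolution.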
We show that discrete synaptic weights can be efficiently used for learning in large-scale neural systems and lead to unanticipated computational performance. We focus on the representative case of learning random patterns with binary synapses in single-layer networks. The standard statistical analysis shows that this problem is exponentially dominated by isolated solutions that are extremely hard to find algorithmically. Here, we introduce a novel method that allows us to find analytical evidence for the existence of subdominant and extremely dense regions of solutions. Numerical experiments confirm these findings. We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions. These outcomes extend to synapses with multiple states and to deeper neural architectures. The large-deviation measure also suggests how to design novel algorithmic schemes for optimization based on local entropy maximization.
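To make the setting concrete, here is a minimal sketch of the learning problem: random ±1 patterns classified by a single-layer network whose binary weights are the signs of hidden integer states, trained with a clipped-perceptron rule. This is a plain baseline in the spirit of the "simple learning protocols" mentioned above, not the authors' local-entropy method, and it is only expected to succeed at low pattern loads.

```python
import numpy as np

# Learning random patterns with binary synapses via a clipped-perceptron
# rule: hidden states h are updated, the visible weights are sign(h).
rng = np.random.default_rng(1)
N, alpha = 101, 0.3
P = int(alpha * N)                       # number of patterns (low load)

X = rng.choice([-1, 1], size=(P, N))     # random +/-1 patterns
y = rng.choice([-1, 1], size=P)          # random target labels
h = np.zeros(N)                          # hidden integer-valued states

for epoch in range(2000):
    errs = 0
    for mu in rng.permutation(P):
        w = np.where(h >= 0, 1, -1)      # binary synaptic weights
        if y[mu] * (X[mu] @ w) <= 0:     # pattern misclassified
            h += y[mu] * X[mu]           # perceptron-style hidden update
            errs += 1
    if errs == 0:
        print(f"all {P} patterns learned at epoch {epoch}")
        break
else:
    print("did not converge (expected at higher loads)")
```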
Spiking neural networks (SNNs) have attracted much attention due to their great potential for modeling time-dependent signals. The firing rate of spiking neurons is determined by a control rate that is fixed manually in advance; thus, whether the firing rate is adequate for modeling actual time series depends on luck. Although an adaptive control rate is desirable, achieving one is a non-trivial task because the control rate and the connection weights learned during training are usually entangled. In this paper, we show that the firing rate is related to the eigenvalue of the spike generation function. Inspired by this insight, we enable the spike generation function to have adaptable eigenvalues rather than parametric control rates, and develop the Bifurcation Spiking Neural Network (BSNN), which has an adaptive firing rate and is insensitive to the setting of control rates. Experiments validate the effectiveness of BSNN on a broad range of tasks, showing that it achieves superior performance to existing SNNs and is robust to the setting of control rates.
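A minimal sketch of the underlying idea, assuming a discrete-time leaky integrate-and-fire formulation: the membrane decay factor, i.e. the eigenvalue of the linear spike-generation dynamics, is made a trainable parameter instead of a hand-fixed control rate. The actual BSNN parameterisation and training setup may differ; the surrogate gradient used here is also an assumption.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a smooth surrogate gradient for backprop."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()
    @staticmethod
    def backward(ctx, grad):
        v, = ctx.saved_tensors
        return grad / (1 + 10 * v.abs()) ** 2

class AdaptiveLIF(nn.Module):
    """LIF layer whose decay eigenvalue is learned, not fixed."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w = nn.Linear(n_in, n_out, bias=False)
        self.raw_lam = nn.Parameter(torch.zeros(n_out))  # trainable decay

    def forward(self, x):                  # x: (time, batch, n_in)
        lam = torch.sigmoid(self.raw_lam)  # keep eigenvalue in (0, 1)
        v = torch.zeros(x.shape[1], self.w.out_features)
        spikes = []
        for xt in x:
            v = lam * v + self.w(xt)       # linear membrane dynamics
            s = SurrogateSpike.apply(v - 1.0)
            v = v * (1 - s)                # reset membrane after a spike
            spikes.append(s)
        return torch.stack(spikes)

layer = AdaptiveLIF(8, 4)
out = layer(torch.rand(20, 2, 8))          # 20 time steps, batch of 2
print(out.shape, out.mean().item())        # spike tensor and mean firing rate
```

Because `raw_lam` receives gradients through the surrogate spike, the firing rate can adapt to the data rather than being pinned to a manually chosen control rate.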
This paper presents a new approach for assembling graph neural networks based on framelet transforms, which provide a multi-scale representation of graph-structured data. We decompose an input graph into low-pass and high-pass frequency coefficients for network training, which then defines a framelet-based graph convolution. The framelet decomposition naturally induces a graph pooling strategy that aggregates the graph features into low-pass and high-pass spectra; this considers both the feature values and the geometry of the graph data and conserves the total information. Graph neural networks with the proposed framelet convolution and pooling achieve state-of-the-art performance on many node and graph prediction tasks. Moreover, we propose shrinkage as a new activation for the framelet convolution, which thresholds high-frequency information at different scales. Compared to ReLU, the shrinkage activation improves model performance on denoising and signal compression: noise in both nodes and structure can be significantly reduced by accurately cutting off the high-pass coefficients from the framelet decomposition, and the signal can be compressed to less than half its original size with well-preserved prediction performance.
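As a simplified illustration of shrinkage on spectral coefficients, the snippet below splits a toy graph signal into low- and high-frequency parts via the Laplacian eigenbasis and soft-thresholds the high-pass part. Real framelet transforms use multi-scale, Chebyshev-approximated filter banks, so this is only a schematic analogue with an assumed graph, signal, and threshold.

```python
import numpy as np

# Toy spectral low-/high-pass split of a graph signal plus soft-threshold
# ("shrinkage") on the high-pass coefficients.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # small example graph
L = np.diag(A.sum(1)) - A                   # combinatorial Laplacian
lam, U = np.linalg.eigh(L)                  # graph Fourier basis

x = np.array([1.0, 1.2, 0.9, 3.0])          # node signal with a "noise" spike
coef = U.T @ x                              # spectral coefficients

low = lam <= lam.mean()                     # crude low-/high-frequency split

def shrink(c, t=0.5):                       # soft-thresholding
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

coef_hat = np.where(low, coef, shrink(coef))  # shrink only the high-pass part
x_denoised = U @ coef_hat
print(x_denoised)                           # the outlier node is attenuated
```

Zeroing small high-pass coefficients in this way is also what enables the compression behaviour described above: coefficients below the threshold need not be stored at all.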