The metrization of the space of neural responses is an ongoing research program seeking natural ways to describe, in geometrical terms, the sets of possible activities in the brain. One component of this program is the family of \emph{spike metrics}: notions of distance between two spike trains recorded from a neuron. Alignment spike metrics work by identifying equivalent spikes in the two trains. We present an alignment spike metric with an underlying $\mathcal{L}_p$ geometrical structure; the $\mathcal{L}_2$ version is Euclidean and is suitable for further embedding in Euclidean spaces by Multidimensional Scaling methods or related procedures. We show how to implement a fast algorithm for computing this metric, based on bipartite graph matching theory.
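As a concrete illustration of the kind of computation involved, the sketch below implements a generic $\mathcal{L}_p$ alignment distance by reducing spike matching to minimum-cost bipartite assignment (SciPy's `linear_sum_assignment`). The `penalty` cost for leaving a spike unmatched and the dummy-node padding construction are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def alignment_distance(a, b, p=2.0, penalty=1.0):
    """L_p alignment distance between two spike trains (lists of spike times).
    Each spike may be matched to at most one spike in the other train at cost
    |t_i - s_j|**p, or left unmatched at cost penalty**p.  The distance is the
    p-th root of the minimum total cost over all partial matchings."""
    n, m = len(a), len(b)
    size = n + m
    big = penalty ** p                     # cost of an unmatched spike
    C = np.zeros((size, size))
    # top-left block: pairwise match costs between real spikes
    C[:n, :m] = np.abs(np.subtract.outer(np.asarray(a), np.asarray(b))) ** p
    # matching a real spike to a dummy node means deleting it
    C[:n, m:] = big
    C[n:, :m] = big
    # dummy-dummy matches are free (bottom-right block stays 0)
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum() ** (1.0 / p)
```

The assignment solver automatically prefers deleting two spikes over matching them whenever the match cost exceeds twice the deletion cost, which is what gives the metric its local-alignment character.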
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called weight transport problem for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
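To make the backward-weight issue concrete, here is a minimal feedback-alignment baseline on a toy linear network: the backward pass uses a fixed random matrix `B` in place of the transposed forward weights `W2.T`. The layer sizes, learning rate, and linear teacher setup are illustrative assumptions; this is the baseline the abstract's spiking rule improves upon, not the proposed algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network trained with feedback alignment.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
B = rng.normal(scale=0.1, size=(n_hid, n_out))   # fixed random feedback weights

X = rng.normal(size=(n_in, 64))
W_true = rng.normal(size=(n_out, n_in))
Y = W_true @ X                                   # targets from a linear teacher

lr = 0.01
losses = []
for _ in range(200):
    H = W1 @ X                                   # forward pass
    Yhat = W2 @ H
    E = Yhat - Y                                 # output error
    losses.append(float(np.mean(E ** 2)))
    dW2 = E @ H.T / X.shape[1]                   # exact gradient for W2
    delta = B @ E                                # feedback alignment: B replaces W2.T
    dW1 = delta @ X.T / X.shape[1]
    W2 -= lr * dW2
    W1 -= lr * dW1
```

Despite the mismatched backward weights, the loss decreases on this linear problem because the forward weights gradually align with the fixed feedback matrix; the abstract's point is that this alignment is poor in large networks, motivating learned backward weights.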
The Robinson-Foulds (RF) distance is by far the most widely used measure of dissimilarity between trees. Although the distribution of these distances has been investigated for twenty years, an algorithm that is explicitly polynomial time has yet to be described for computing this distribution (which is also the distribution of trees around a given tree under the popular Robinson-Foulds metric). In this paper we derive a polynomial-time algorithm for this distribution. We show how the distribution can be approximated by a Poisson distribution determined by the proportion of leaves that lie in `cherries' of the given tree. We also describe how our results can be used to derive normalization constants that are required in a recently proposed maximum likelihood approach to supertree construction.
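For reference, the RF distance itself is the size of the symmetric difference of the two trees' sets of non-trivial splits (bipartitions). The sketch below canonicalises each split as the side not containing a fixed reference leaf; this is one of several equivalent conventions, and the paper's polynomial-time algorithm for the full distribution is not reproduced here.

```python
def splits(clades, leaves):
    """Canonical non-trivial splits induced by a collection of clades.
    Each split is represented by the side NOT containing a fixed
    reference leaf; trivial splits (single leaf vs rest) are dropped."""
    ref = min(leaves)
    out = set()
    for c in clades:
        c = frozenset(c)
        side = leaves - c if ref in c else c
        if 1 < len(side) < len(leaves) - 1:
            out.add(side)
    return out

def rf_distance(clades_a, clades_b, leaves):
    """Robinson-Foulds distance: number of splits present in exactly
    one of the two trees (symmetric difference of split sets)."""
    leaves = frozenset(leaves)
    return len(splits(clades_a, leaves) ^ splits(clades_b, leaves))
```

For example, the unrooted trees ((a,b),c,(d,e)) and ((a,c),b,(d,e)) share the split {d,e} but disagree on the other internal edge, giving an RF distance of 2.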
Sugar beet crop models have rarely taken into account the morphogenetic processes generating plant architecture, despite the fact that architectural plasticity plays a key role during growth, especially under stress conditions. The objective of this paper is to develop this approach by applying the GreenLab model of plant growth to sugar beet and to study its potential advantages for applied purposes. Experiments were conducted under standard husbandry practices in 2006. The study of sugar beet development, chiefly phytomer appearance, organ expansion and leaf senescence, allowed us to define a morphogenetic model of sugar beet growth based on GreenLab. It simulates organogenesis, biomass production and biomass partitioning. The functional parameters controlling source-sink relationships during plant growth were estimated from organ and compartment dry masses, measured at seven different times on samples of plants. The fitting results are good, showing that the proposed framework is well suited to analysing source-sink dynamics and shoot-root allocation throughout the season. However, this approach still needs to be fully validated, particularly across seasons.
Entropy is a classical measure quantifying the amount of information or complexity in a system. Various entropy-based measures, such as functional and spectral entropies, have been proposed in brain network analysis. However, they are less widely used than traditional graph-theoretic measures such as global and local efficiency, because they are either not well defined on a graph or their biological meaning is difficult to interpret. In this paper, we propose a new entropy-based graph invariant, called volume entropy. It measures the exponential growth rate of the number of paths in a graph, a relevant measure if information flows through the graph indefinitely. We model information propagation on a graph by the generalized Markov system associated with the weighted edge-transition matrix, and we estimate the volume entropy using the stationary equation of this system. A prominent advantage of using the stationary equation is that it assigns a distribution of weights to the edges of the brain graph, which we call the stationary distribution. The stationary distribution reveals the information capacity of edges and the direction of information flow on a brain graph. Simulation results show that volume entropy distinguishes the underlying graph topology and geometry better than existing graph measures. In an application to brain imaging data, the volume entropy of brain graphs was significantly related to healthy normal aging from the 20s to the 60s. In addition, the stationary distribution of information propagation gives new insight into the information flow of functional brain graphs.
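The exponential growth rate of the number of paths equals the logarithm of the largest eigenvalue of the relevant transition matrix. The sketch below estimates this by power iteration on a plain weighted adjacency matrix, as a simplified stand-in for the paper's stationary-equation estimator on the edge-transition matrix.

```python
import numpy as np

def volume_entropy(A, iters=500):
    """Exponential growth rate of the number of (weighted) paths in the
    graph with non-negative square matrix A: the log of the largest
    eigenvalue (spectral radius), estimated by power iteration."""
    v = np.ones(A.shape[0])
    lam = 1.0
    for _ in range(iters):
        w = A @ v
        lam = np.linalg.norm(w)   # converges to the spectral radius
        v = w / lam
    return np.log(lam)
```

On the complete graph $K_4$ (spectral radius 3) this returns $\log 3$, while on the 4-cycle (spectral radius 2) it returns $\log 2$, illustrating how the invariant separates graph topologies with the same number of nodes.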
A core goal of functional neuroimaging is to study how the environment is processed in the brain. The mainstream paradigm involves concurrently measuring a broad spectrum of brain responses to a small set of environmental features preselected with reference to previous studies or a theoretical framework. As a complement, we invert this approach, allowing the investigator to record the modulation of a preselected brain response by a broad spectrum of environmental features. Our approach is optimal when theoretical frameworks or previous empirical data are impoverished. By using a prespecified closed-loop design, the approach addresses fundamental challenges of reproducibility and generalisability in brain research. These challenges are particularly acute when studying the developing brain, where theories based on adult brain function may fundamentally misrepresent the topography of infant cognition and where there are substantial practical obstacles to data acquisition. Our methodology employs machine learning to map the modulation of a neural feature across a space of experimental stimuli. It collects, processes and analyses EEG brain data in real time, and uses a neuro-adaptive Bayesian optimisation algorithm to adjust the stimulus presented depending on the prior samples of a given participant. Responses to unsampled stimuli can be interpolated by fitting a Gaussian process regression to the dataset. We show that our method can automatically identify the face of an infant's mother through online recording of the infant's Nc brain response to a face continuum. We can retrieve model statistics of individualised responses for each participant, opening the door to early identification of atypical development. This approach has substantial potential in infancy research and beyond, improving the power and generalisability of mapping the individual cognitive topography of brain function.
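A minimal sketch of the interpolation step: Gaussian process regression with an RBF kernel over a one-dimensional stimulus continuum, plus a deliberately simplified greedy acquisition rule (a full neuro-adaptive optimiser would also weigh posterior uncertainty). The kernel length-scale, noise level, and stimulus grid are illustrative assumptions.

```python
import numpy as np

def gp_posterior_mean(X_train, y_train, X_query, length=0.2, noise=1e-4):
    """Posterior mean of a GP with an RBF kernel: interpolates the
    recorded responses across the 1-D stimulus continuum."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length ** 2))
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    return k(X_query, X_train) @ np.linalg.solve(K, y_train)

grid = np.linspace(0.0, 1.0, 101)   # hypothetical stimulus continuum

def next_stimulus(X_train, y_train):
    """Greedy acquisition: present the stimulus with the largest
    predicted response on the grid."""
    return grid[np.argmax(gp_posterior_mean(X_train, y_train, grid))]
```

In the closed loop, each new EEG response would be appended to `(X_train, y_train)` and the acquisition rule re-run, so sampling concentrates where the neural feature is most strongly modulated.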