
213 - Liane Gabora 2019
EVOC (for EVOlution of Culture) is a computer model of culture that enables us to investigate how various factors, such as barriers to cultural diffusion, the presence and choice of leaders, or changes in the ratio of innovation to imitation, affect the diversity and effectiveness of ideas. It consists of neural-network-based agents that invent ideas for actions and imitate their neighbors' actions. The model is based on a theory of culture according to which what evolves through culture is not memes or artifacts, but the internal models of the world that give rise to them, and they evolve not through a Darwinian process of competitive exclusion but through a Lamarckian process involving exchange of innovation protocols. EVOC shows an increase in the mean fitness of actions over time, and an increase and then decrease in the diversity of actions. Diversity of actions is positively correlated with population size and density, and with barriers between populations. Slowly eroding borders increase fitness without sacrificing diversity by fostering specialization followed by sharing of fit actions. Introducing a leader that broadcasts its actions throughout the population increases the fitness of actions but reduces their diversity. Increasing the number of leaders reduces this effect. Efforts are underway to simulate the conditions under which an agent immigrating from one culture to another contributes new ideas while still fitting in.
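The invent-versus-imitate dynamic described above can be sketched in a few lines of Python. This is not the EVOC implementation (which uses neural-network-based agents); it is a toy illustration with a hypothetical one-dimensional action space and fitness function, a ring neighbourhood, and an `innovation_rate` parameter standing in for the innovation-to-imitation ratio:

```python
import random

def fitness(action):
    """Hypothetical fitness: actions closer to an arbitrary optimum score higher."""
    return 1.0 - abs(action - 0.7)

def step(actions, innovation_rate=0.2):
    """One generation: each agent either invents a new action or imitates
    the fitter of its two neighbours on a ring."""
    n = len(actions)
    new_actions = []
    for i in range(n):
        if random.random() < innovation_rate:
            new_actions.append(random.random())                      # invent
        else:
            neighbours = [actions[(i - 1) % n], actions[(i + 1) % n]]
            new_actions.append(max(neighbours, key=fitness))          # imitate best neighbour
    return new_actions

actions = [random.random() for _ in range(100)]
for _ in range(50):
    actions = step(actions)
mean_fit = sum(fitness(a) for a in actions) / len(actions)
diversity = len({round(a, 2) for a in actions})
print(f"mean fitness: {mean_fit:.2f}, distinct actions: {diversity}")
```

Raising `innovation_rate` in this toy model trades off fitness against diversity, mirroring the kind of ratio-of-innovation-to-imitation effects the abstract reports.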
154 - Laurent Perrinet 2016
Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, a representation in which a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
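A minimal sketch of gain-based homeostasis in a sparse coding loop, using matching pursuit for the coding step. The dictionary here is random and is not learned (the Hebbian step is omitted), and the gain-update rule is an assumption, not the rule derived in the paper; the point is only to show how a homeostatic gain can bias the competition so that all atoms end up selected about equally often:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_atoms = 64, 32                        # 8x8 patches, dictionary of 32 atoms
D = rng.standard_normal((n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms (not learned here)
gain = np.ones(n_atoms)                           # homeostatic gain (assumed multiplicative)
usage = np.zeros(n_atoms)                         # how often each atom has been selected

def sparse_code(x, n_active=5):
    """Matching pursuit: repeatedly pick the atom with the largest
    gain-modulated correlation and subtract its contribution."""
    residual, coeffs = x.copy(), np.zeros(n_atoms)
    for _ in range(n_active):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(gain * corr)))   # competition biased by the gain
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
        usage[k] += 1
    return coeffs

for _ in range(1000):
    patch = rng.standard_normal(n_pixels)         # stand-in for a natural-image patch
    sparse_code(patch)
    # assumed homeostasis rule: atoms selected more often than average get a
    # lower gain, under-used atoms a higher one, equalising selection rates
    p = usage / usage.sum()
    gain = np.exp(-n_atoms * (p - 1.0 / n_atoms))

print("selection rates per atom:", np.round(p, 3))
```

With the gain clamped to one, a few atoms dominate the selections; with the homeostatic update, selection rates converge toward uniform, which is the "fair competition" the abstract refers to.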
Energy efficiency is closely related to the evolution of biological systems and is important to their information processing. In this paper, we calculated the excitation probability of a simple model of a bistable biological unit in response to pulsatile inputs, and its spontaneous excitation rate due to noise perturbation. We then analytically calculated the mutual information, energy cost, and energy efficiency of an array of these bistable units. We found that an optimal number of units maximizes the array's energy efficiency in encoding pulse inputs, and that this optimum depends on the fixed energy cost. We conclude that the demand for energy efficiency in biological systems may strongly influence the size of these systems under the pressure of natural selection.
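A sketch of this kind of efficiency calculation, under simplifying assumptions: a binary pulse input, conditionally independent units, excitation probability `p_exc` when the pulse is present and spontaneous excitation probability `p_spont` otherwise, and an energy budget with a fixed per-unit cost plus a per-excitation cost (all parameter values are hypothetical, not the paper's). Mutual information between the input and the population count is divided by the mean energy cost, and the number of units maximising this ratio is reported (requires SciPy):

```python
import numpy as np
from scipy.stats import binom

def entropy(p):
    """Shannon entropy in bits of a discrete distribution given as an array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def efficiency(n_units, p_exc=0.9, p_spont=0.05, p_input=0.5,
               e_fixed=1.0, e_spike=5.0):
    """Mutual information per unit energy for an array of n_units bistable
    units encoding a binary pulse input."""
    k = np.arange(n_units + 1)
    p_k_given_1 = binom.pmf(k, n_units, p_exc)     # pulse present
    p_k_given_0 = binom.pmf(k, n_units, p_spont)   # spontaneous excitation only
    p_k = p_input * p_k_given_1 + (1 - p_input) * p_k_given_0
    mi = entropy(p_k) - (p_input * entropy(p_k_given_1)
                         + (1 - p_input) * entropy(p_k_given_0))
    mean_spikes = n_units * (p_input * p_exc + (1 - p_input) * p_spont)
    energy = n_units * e_fixed + mean_spikes * e_spike
    return mi / energy

best = max(range(1, 51), key=efficiency)
print("optimal number of units:", best)
```

Increasing `e_fixed` in this sketch shifts the optimum toward smaller arrays, illustrating the stated dependence of the optimal size on the fixed energy cost.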
The electrical properties of extracellular space around neurons are important for understanding the genesis of extracellular potentials, as well as for localizing neuronal activity from extracellular recordings. However, the exact nature of these extracellular properties is still uncertain. We introduce a method to measure the impedance of the tissue that preserves the intact cell-medium interface, using whole-cell patch-clamp recordings in vivo and in vitro. We find that neural tissue has marked non-ohmic and frequency-filtering properties, which are not consistent with a resistive (ohmic) medium, as often assumed. In contrast, traditional metal electrodes give very different results, more consistent with a resistive medium. The amplitude and phase profiles of the measured impedance are consistent with a contribution of ionic diffusion. We also show that such frequency-filtering properties may have an important impact on the genesis of local field potentials, as well as on the cable properties of neurons. The present results show non-ohmic properties of the extracellular medium around neurons, and suggest that source estimation methods, as well as the cable properties of neurons, which all assume an ohmic extracellular medium, may need to be re-evaluated.
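A toy numerical contrast between the two pictures above: a purely resistive (ohmic) medium has a flat impedance amplitude and zero phase, whereas ionic diffusion contributes a Warburg-type impedance whose amplitude falls as 1/sqrt(f) with a phase near -45 degrees. The magnitudes used here are arbitrary stand-ins, not the measured values from the study:

```python
import numpy as np

freqs = np.logspace(0, 3, 50)                     # 1 Hz to 1 kHz
R = 1.0e6                                         # ohmic resistance (arbitrary, 1 MOhm)
k = 1.0e6                                         # diffusive scale factor (assumption)

z_ohmic = np.full(freqs.shape, R, dtype=complex)          # flat amplitude, zero phase
z_diffusive = k / np.sqrt(1j * 2 * np.pi * freqs)         # Warburg-type: |Z| ~ 1/sqrt(f)

for z, name in [(z_ohmic, "ohmic"), (z_diffusive, "diffusive")]:
    print(f"{name:10s} |Z| at 1 Hz = {abs(z[0]):.3g}, at 1 kHz = {abs(z[-1]):.3g}, "
          f"phase = {np.degrees(np.angle(z[0])):.1f} deg")
```

The diffusive case shows the frequency-dependent attenuation and non-zero phase that an ohmic medium cannot produce, which is the qualitative signature the abstract attributes to ionic diffusion.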
Chronic pain affects about 100 million adults in the US. Despite the great need for them, neuropharmacological and neurostimulation therapies for chronic pain have been associated with suboptimal efficacy and limited long-term success, as their mechanisms of action are unclear. Moreover, current computational models of pain transmission suffer from several limitations. In particular, dorsal column models do not include the underlying sensory activity traveling in these nerve fibers. We developed a simple simulation test bed of electrical neurostimulation of myelinated nerve fibers with underlying sensory activity. This paper reports our findings so far. Interactions between stimulation-evoked and underlying activities are mainly due to collisions of action potentials and to losses of excitability during the refractory period following an action potential. In addition, and intuitively, the reliability of sensory activity decreases as the stimulation frequency increases. This first step opens the door to a better understanding of pain transmission and its modulation by neurostimulation therapies.
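A toy sketch of the interaction described above, not the authors' fiber model. A sensory spike arriving at the stimulation site is counted as blocked if a stimulation pulse fell while the spike was en route (collision with the antidromically propagating evoked spike) or within the refractory period just before arrival; all time constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def reliability(stim_freq_hz, sensory_rate_hz=20.0, duration_s=10.0,
                transit_s=0.005, refractory_s=0.002):
    """Fraction of underlying sensory spikes that pass the stimulation site."""
    n = rng.poisson(sensory_rate_hz * duration_s)
    sensory = np.sort(rng.uniform(0, duration_s, n))       # entry times at the periphery
    arrivals = sensory + transit_s                          # arrival at the stimulation site
    stim = np.arange(0, duration_s, 1.0 / stim_freq_hz)     # periodic stimulation pulses
    window = max(transit_s, refractory_s)                   # blocking window before arrival
    blocked = [np.any((stim > t - window) & (stim < t)) for t in arrivals]
    return 1.0 - np.mean(blocked)

for f in (10, 50, 100, 200):
    print(f"{f:4d} Hz stimulation -> sensory reliability {reliability(f):.2f}")
```

Even this crude rule reproduces the qualitative finding stated above: the higher the stimulation frequency, the larger the fraction of underlying sensory spikes lost to collisions and refractoriness.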
Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks.
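The texture statistic described above is, in essence, one Gram matrix per layer: the matrix of correlations between that layer's feature maps. The sketch below computes such a matrix for a single layer of random (untrained) filters, which stands in for a convolutional layer of a network optimised for object recognition; the loop-based convolution is deliberately naive:

```python
import numpy as np

def gram_matrix(features):
    """Texture statistic: correlations between feature maps.
    features has shape (n_maps, height, width)."""
    n_maps = features.shape[0]
    f = features.reshape(n_maps, -1)                  # flatten spatial dimensions
    return f @ f.T / f.shape[1]                       # (n_maps, n_maps) correlation matrix

rng = np.random.default_rng(0)
image = rng.random((3, 64, 64))                       # stand-in for an input image
filters = rng.standard_normal((16, 3, 5, 5))          # stand-in for learned conv filters

# naive valid convolution followed by ReLU, yielding 16 feature maps
h = w = 64 - 5 + 1
features = np.zeros((16, h, w))
for k in range(16):
    for i in range(h):
        for j in range(w):
            features[k, i, j] = np.sum(filters[k] * image[:, i:i+5, j:j+5])
features = np.maximum(features, 0)

G = gram_matrix(features)
print("Gram matrix shape:", G.shape)
```

In the full model, one such matrix is computed at each of several layers of the trained network, and a new texture is synthesised by optimising an image so that its Gram matrices match those of the example texture.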
Synaptic plasticity is the capacity of a preexisting connection between two neurons to change in strength as a function of neural activity. Because synaptic plasticity is the major candidate mechanism for learning and memory, the elucidation of its constituent mechanisms is of crucial importance for many aspects of normal and pathological brain function. In particular, a prominent aspect that remains debated is how plasticity mechanisms, which span a broad spectrum of temporal and spatial scales, come to act together in a concerted fashion. Here we review and discuss evidence that points to a possible non-neuronal, glial candidate for such orchestration: the regulation of synaptic plasticity by astrocytes.
Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking their multivariate nature: this neglects both that investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables, and that combinations of sources may act jointly on a given target. Bivariate analyses therefore produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable due to the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximate methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximate approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm flags potentially spurious edges, which may then be pruned from the network. This produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation to test its performance. We discuss the algorithm in relation to other approximate multivariate methods and highlight suitable application scenarios. Our approach is a tractable and data-efficient way of reconstructing approximate networks of multivariate interactions. It is preferable when available data are limited or when fully multivariate approaches are computationally infeasible.
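A minimal sketch of the post-hoc flagging idea, not the authors' algorithm: here partial correlation stands in for whatever conditional dependence measure is actually used, and an edge onto a target is flagged as potentially spurious if the dependence vanishes once the target's other sources are conditioned on:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing out the
    conditioning variables z (columns of a 2-D array)."""
    def residual(v):
        A = np.column_stack([z, np.ones(len(v))])
        beta, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ beta
    return np.corrcoef(residual(x), residual(y))[0, 1]

def flag_spurious(data, edges, threshold=0.1):
    """For each bivariate edge (source, target), condition on all other
    sources with an edge onto the same target; flag the edge if the
    conditional dependence falls below threshold."""
    flagged = []
    for s, t in edges:
        others = [o for o, tt in edges if tt == t and o != s]
        if not others:
            continue
        if abs(partial_corr(data[:, s], data[:, t], data[:, others])) < threshold:
            flagged.append((s, t))
    return flagged

# toy data: source 0 drives both 1 and 2, so the bivariate edge 1 -> 2 is spurious
rng = np.random.default_rng(0)
x0 = rng.standard_normal(2000)
x1 = x0 + 0.3 * rng.standard_normal(2000)
x2 = x0 + 0.3 * rng.standard_normal(2000)
data = np.column_stack([x0, x1, x2])
print(flag_spurious(data, edges=[(0, 2), (1, 2)]))    # expect [(1, 2)] to be flagged
```

Flagging rather than silently deleting such edges is what makes the resulting network a conservative approximation: the pruning decision stays explicit and under the analyst's control.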
Participants in an eye-movement experiment performed a modified version of the Landolt-C paradigm (Williams & Pollatsek, 2007) in which they searched for target squares embedded in linear arrays of spatially contiguous words (i.e., short sequences of squares having missing segments of variable size and orientation). Although the distributions of single- and first-of-multiple fixation locations replicated previous patterns suggesting saccade targeting (e.g., Yan, Kliegl, Richter, Nuthmann, & Shu, 2010), the distribution of all forward fixation locations was uniform, suggesting the absence of specific saccade targets. Furthermore, properties of the words (e.g., gap size) also influenced fixation durations and forward saccade length, suggesting that on-going processing affects decisions about when and where (i.e., how far) to move the eyes. The theoretical implications of these results for existing and future accounts of eye-movement control are discussed.
159 - Yanping Liu, Huan Wei 2015
The word-based account of saccade targeting, according to which saccades are drawn toward a preferred viewing location (PVL) near the word center, is supported by two pillars of evidence. The first is the finding that the distribution of initial fixation locations on a word resembles a normal distribution (Rayner, 1979). The other is the finding of a moderate slope coefficient relating launch site to landing site (b = 0.49; see McConkie, Kerr, Reddix, & Zola, 1988). We conducted four simulations of different saccade-targeting strategies and one eye-movement experiment on Chinese reading to evaluate these two findings. We demonstrate that the current understanding of the word-based account is not conclusive, by showing an alternative strategy consistent with the word-based account and by identifying a problem with the calculation of the slope coefficient. Although almost all computational models of eye-movement control during reading are built on these two findings, future efforts should be directed at understanding the precise contribution of different saccade-targeting strategies, and at how their weighting might vary across disparate writing systems.
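For readers unfamiliar with the second finding, the slope coefficient is an ordinary least-squares slope relating launch site to landing site across fixations. The sketch below estimates it from synthetic data with a known true slope; it is not the experimental data, nor the corrected calculation proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic launch sites (distance from the word centre, in characters) and
# landing sites generated with a true slope of 0.5 plus oculomotor noise
launch = rng.uniform(-10, -1, 500)
landing = 0.5 * launch + rng.normal(0, 1.0, 500)

# ordinary least-squares slope of landing site on launch site
b, a = np.polyfit(launch, landing, 1)
print(f"estimated slope b = {b:.2f}")          # compare with b = 0.49 in McConkie et al. (1988)
```

The abstract's point is that the value of such a coefficient depends on how launch and landing sites are defined and aggregated, so a moderate slope alone does not single out one saccade-targeting strategy.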