
Unsupervised deep learning identifies semantic disentanglement in single inferotemporal neurons

Posted by: Irina Higgins
Publication date: 2020
Research field: Biology
Paper language: English





Deep supervised neural networks trained to classify objects have emerged as popular models of computation in the primate ventral stream. These models represent information with a high-dimensional distributed population code, implying that inferotemporal (IT) responses are also too complex to interpret at the single-neuron level. We challenge this view by modelling neural responses to faces in the macaque IT with a deep unsupervised generative model, beta-VAE. Unlike deep classifiers, beta-VAE disentangles sensory data into interpretable latent factors, such as gender or hair length. We found a remarkable correspondence between the generative factors discovered by the model and those coded by single IT neurons. Moreover, we were able to reconstruct face images using the signals from just a handful of cells. This suggests that the ventral visual stream may be optimising the disentangling objective, producing a neural code that is low-dimensional and semantically interpretable at the single-unit level.
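For reference, here is a minimal sketch of the disentangling objective that beta-VAE optimises, written in PyTorch under the usual Gaussian-encoder assumptions; the decoder argument, the Bernoulli pixel likelihood, and the value of beta are illustrative, not taken from the authors' implementation.

    import torch
    import torch.nn.functional as F

    def beta_vae_loss(x, decoder, mu, log_var, beta=4.0):
        """beta-VAE objective: reconstruction term plus a beta-weighted KL
        divergence pulling the approximate posterior towards an isotropic
        Gaussian prior. beta > 1 encourages disentangled latent factors."""
        # Reparameterisation trick: differentiable sample z ~ N(mu, sigma^2)
        std = torch.exp(0.5 * log_var)
        z = mu + std * torch.randn_like(std)
        x_hat = decoder(z)
        # Reconstruction term (Bernoulli pixel likelihood assumed here)
        recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
        # Closed-form KL( N(mu, sigma^2) || N(0, I) )
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return recon + beta * kl

With beta = 1 this reduces to the standard VAE evidence lower bound; increasing beta trades reconstruction fidelity for more factorised latents, which is the property whose alignment with single IT neurons is reported above.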




Read also

As an emerging technology, transcranial focused ultrasound has been demonstrated to successfully evoke motor responses in mice and rabbits, and sensory/motor responses in humans. Yet the spatial resolution of ultrasound does not allow for high-precision stimulation. Here, we developed a tapered fiber optoacoustic emitter (TFOE) for optoacoustic stimulation of neurons with an unprecedented spatial resolution of 20 microns, enabling selective activation of single neurons or subcellular structures, such as axons and dendrites. A single acoustic pulse of 1 microsecond, converted by the TFOE from a single laser pulse of 3 nanoseconds, is the shortest acoustic stimulus so far shown to successfully activate neurons. The highly localized ultrasound generated by the TFOE made it possible to integrate optoacoustic stimulation with highly stable patch clamp recording on single neurons. Direct measurements of the electrical response of single neurons to acoustic stimulation, which are difficult with conventional ultrasound stimulation, have been demonstrated for the first time. By coupling the TFOE with ex vivo brain slice electrophysiology, we unveil cell-type-specific responses of excitatory and inhibitory neurons to acoustic stimulation. These results demonstrate that the TFOE is a non-genetic single-cell and subcellular modulation technology, which could provide new insights into the mechanism of neurostimulation.
Neurons generate magnetic fields which can be recorded with macroscopic techniques such as magneto-encephalography. The theory that accounts for the genesis of neuronal magnetic fields involves dendritic cable structures in homogeneous resistive extracellular media. Here, we generalize this model by considering dendritic cables in extracellular media with arbitrarily complex electric properties. This method is based on a multi-scale mean-field theory in which the neuron is considered in interaction with a mean extracellular medium characterized by a specific impedance. We first show that, as expected, both the generalized and the standard cable models generate magnetic fields that mostly depend on the axial current in the cable, with a moderate contribution from extracellular currents. Less expectedly, we also show that the nature of the extracellular and intracellular media influences the axial current, and thus also influences neuronal magnetic fields. We illustrate these properties with numerical simulations and suggest experiments to test these findings.
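For orientation, the standard passive cable model referred to above can be summarised in textbook notation as follows; this is only the resistive baseline, not the authors' generalized impedance formalism, with the usual symbols (lambda: electrotonic length, tau_m: membrane time constant, r_a: axial resistance per unit length).

    \lambda^2 \frac{\partial^2 V}{\partial x^2} = \tau_m \frac{\partial V}{\partial t} + V,
    \qquad
    I_a(x,t) = -\frac{1}{r_a} \frac{\partial V(x,t)}{\partial x},
    \qquad
    B(\rho) \approx \frac{\mu_0 I_a}{2 \pi \rho}

The last expression is Ampère's law for a line current and captures why the magnetic field is dominated by the axial current I_a; the generalization described above replaces the purely resistive extracellular return path with a frequency-dependent impedance, which is how the medium can feed back on I_a and hence on the recorded field.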
Neuroimaging data analysis often involves a priori selection of data features to study the underlying neural activity. Since this can lead to sub-optimal feature selection, and thereby prevent the detection of subtle patterns in neural activity, data-driven methods have recently gained popularity for optimizing neuroimaging data analysis pipelines and thereby improving our understanding of neural mechanisms. In this context, we developed a deep convolutional architecture that can identify discriminating patterns in neuroimaging data and applied it to electroencephalography (EEG) recordings collected from 25 subjects performing a hand motor task before and after a rest period or a bout of exercise. The deep network was trained to classify subjects into exercise and control groups based on differences in their EEG signals. Subsequently, we developed a novel method, termed cue-combination for Class Activation Map (ccCAM), which enabled us to identify discriminating spatio-temporal features within definite frequency bands (23-33 Hz) and assess the effects of exercise on the brain. Additionally, the proposed architecture allowed us to visualize the differences in the propagation of the underlying neural activity across the cortex between the two groups, for the first time to our knowledge. Our results demonstrate the feasibility of using deep network architectures for neuroimaging analysis in different contexts, such as the identification of robust brain biomarkers to better characterize and potentially treat neurological disorders.
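The ccCAM method itself is the paper's contribution; as background, here is a minimal sketch of the standard Class Activation Map computation it extends (Zhou et al., 2016), written in PyTorch with illustrative tensor shapes and variable names that are not from the paper.

    import torch

    def class_activation_map(feature_maps, fc_weights, class_idx):
        """Standard CAM: weight the final convolutional feature maps by the
        classifier weights of the target class.

        feature_maps: (C, H, W) activations from the last conv layer
        fc_weights:   (num_classes, C) weights of the global-average-pool classifier
        class_idx:    index of the class to explain (e.g. exercise vs. control)
        """
        w = fc_weights[class_idx]                       # (C,)
        cam = torch.einsum('c,chw->hw', w, feature_maps)
        cam = torch.relu(cam)                           # keep class-positive evidence
        cam = cam / (cam.max() + 1e-8)                  # normalise to [0, 1] for display
        return cam

The map highlights which positions of the final convolutional features push the network towards the chosen class; the proposed ccCAM builds on this idea to localise discriminating spatio-temporal EEG features within specific frequency bands.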
Hideaki Shimazaki, 2015
We show that dynamical gain modulation of neurons' stimulus responses is described as an information-theoretic cycle that generates entropy associated with the stimulus-related activity from entropy produced by the modulation. To articulate this theory, we describe stimulus-evoked activity of a neural population based on the maximum entropy principle with constraints on two types of overlapping activities: one that is controlled by stimulus conditions and the other, termed internal activity, that is regulated internally in the organism. We demonstrate that modulation of the internal activity realises gain control of the stimulus response and controls stimulus information. A cycle of neural dynamics is then introduced to model information processing by the neurons, during which the stimulus information is dynamically enhanced by the internal gain-modulation mechanism. Based on the conservation law for entropy production, we demonstrate that the cycle generates entropy ascribed to the stimulus-related activity using entropy supplied by the internal mechanism, analogously to a heat engine that produces work from heat. We provide an efficient cycle that achieves the highest entropic efficiency in retaining stimulus information. The theory allows us to quantify the efficiency of the internal computation and its theoretical limit.
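A generic sketch of the kind of maximum entropy model described above, with two constrained feature sets, stimulus-related f_s and internal f_i; the specific features and parameterisation are the paper's, so this is only the general form.

    p(\mathbf{x} \mid \boldsymbol{\theta}_s, \boldsymbol{\theta}_i)
      = \exp\!\left[ \boldsymbol{\theta}_s^{\top} \mathbf{f}_s(\mathbf{x})
        + \boldsymbol{\theta}_i^{\top} \mathbf{f}_i(\mathbf{x})
        - \psi(\boldsymbol{\theta}_s, \boldsymbol{\theta}_i) \right]

Here psi is the log-partition function; modulating the internal parameters theta_i reshapes the population distribution and thereby the gain of the stimulus response, which is the mechanism the information-theoretic cycle exploits.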
Circuits using superconducting single-photon detectors and Josephson junctions to perform signal reception, synaptic weighting, and integration are investigated. The circuits convert photon-detection events into flux quanta, the number of which is determined by the synaptic weight. The current from many synaptic connections is inductively coupled to a superconducting loop that implements the neuronal threshold operation. Designs are presented for synapses and neurons that perform integration as well as detect coincidence events for temporal coding. Both excitatory and inhibitory connections are demonstrated. It is shown that a neuron with a single integration loop can receive input from 1000 such synaptic connections, and neurons of similar design could employ many loops for dendritic processing.
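As a toy illustration of the signal chain described above (photon detection, weighted conversion to flux quanta, loop integration, threshold), here is a small numerical sketch; the threshold, the optional leak term, and the reset behaviour are illustrative assumptions, not circuit parameters from the paper.

    import numpy as np

    def simulate_loop(photon_events, weights, threshold=1000.0, leak=0.0):
        """Toy integrate-and-fire model of a superconducting optoelectronic neuron.

        photon_events: (steps, n_synapses) 0/1 array of photon-detection events
        weights:       (n_synapses,) synaptic weights, in flux quanta per photon
                       (negative values stand in for inhibitory connections)
        threshold:     stored flux quanta needed in the integration loop to fire
        leak:          optional per-step decay of the stored signal (illustrative)
        """
        stored = 0.0
        spikes = []
        for t in range(photon_events.shape[0]):
            # Each detection event adds a weight-determined number of flux quanta
            stored += np.dot(photon_events[t], weights)
            stored *= (1.0 - leak)
            if stored >= threshold:   # neuronal threshold operation
                spikes.append(t)
                stored = 0.0          # reset after firing
        return spikes

    # Example: 1000 excitatory synapses firing sparsely over 200 time steps
    rng = np.random.default_rng(0)
    events = rng.random((200, 1000)) < 0.01
    print(simulate_loop(events, weights=np.ones(1000)))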