
Verifying Design through Generative Visualization of Neural Activities

Added by Pan Wang
Publication date: 2021
Field: Biology
Language: English





Current neuroscience-based approaches for evaluating the effectiveness of a design do not directly visualize the associated mental activity. We use a recurrent neural network as an encoder to learn a latent representation from electroencephalogram (EEG) signals recorded while subjects viewed 50 categories of images. A generative adversarial network (GAN) conditioned on this EEG latent representation is then trained to reconstruct the viewed images. After training, the network can reconstruct images directly from brain-activity recordings. To demonstrate the proposed method in the context of mental associations with a design, we performed a study indicating that an iconic design image can prompt subjects to form cognitive associations with branding and valued products. The proposed method has the potential to verify designs by visualizing the cognitive understanding encoded in the underlying brain activity.
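The pipeline described above (a recurrent encoder over an EEG sequence, followed by a generator conditioned on the resulting latent code) can be sketched with toy numpy stand-ins. All shapes, weight matrices, and the single-layer "generator" below are hypothetical illustrations, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_encode(eeg, W_h, W_x):
    """Run a simple recurrent encoder over an EEG sequence.

    eeg: (T, C) array of T time steps with C channels.
    Returns the final hidden state as the latent representation.
    """
    h = np.zeros(W_h.shape[0])
    for x_t in eeg:
        h = np.tanh(W_h @ h + W_x @ x_t)
    return h

T, C, H = 100, 32, 16          # time steps, EEG channels, latent size (assumed)
W_h = rng.normal(0, 0.1, (H, H))
W_x = rng.normal(0, 0.1, (H, C))

eeg = rng.normal(size=(T, C))  # stand-in for one recorded EEG trial
z = rnn_encode(eeg, W_h, W_x)  # latent code that would condition the GAN

def generate(z, noise, W_g):
    """Toy conditional generator: maps (latent, noise) to a flat 'image'."""
    return np.tanh(W_g @ np.concatenate([z, noise]))

W_g = rng.normal(0, 0.1, (64, H + 8))
img = generate(z, rng.normal(size=8), W_g)
print(img.shape)
```

In the paper's setting the generator would be an adversarially trained deconvolutional network; the point here is only the data flow: EEG → latent code → conditioned image synthesis.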




Related Research

Chronic pain affects about 100 million adults in the US. Despite the great need, neuropharmacological and neurostimulation therapies for chronic pain have shown suboptimal efficacy and limited long-term success, as their mechanisms of action remain unclear. Current computational models of pain transmission also suffer from several limitations; in particular, dorsal column models do not include the fundamental underlying sensory activity traveling in these nerve fibers. We developed a simple simulation test bed of electrical neurostimulation of myelinated nerve fibers with underlying sensory activity; this paper reports our findings so far. Interactions between stimulation-evoked and underlying activities are mainly due to collisions of action potentials and to losses of excitability during the refractory period that follows an action potential. In addition, and intuitively, the reliability of sensory-activity transmission decreases as the stimulation frequency increases. This first step opens the door to a better understanding of pain transmission and its modulation by neurostimulation therapies.
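The two interaction mechanisms named above (collision of action potentials and loss of excitability during the refractory period) can be illustrated with a toy discrete-time, single-fiber model. The function, grid size, and refractory window below are hypothetical simplifications for illustration, not the paper's simulation test bed:

```python
def simulate_fiber(length, steps, sensory_times, stim_times, refractory=5):
    """Toy 1D fiber: sensory spikes enter at x=0 moving right;
    stimulation-evoked (antidromic) spikes enter at x=length-1 moving left.
    A spike arriving within the refractory window of the previous one at the
    same entry point is lost; opposing spikes that meet annihilate.
    Returns how many sensory spikes reach the far end."""
    right, left = [], []                    # positions of each spike population
    last_sensory = last_stim = -float("inf")
    delivered = 0
    for t in range(steps):
        # inject new spikes, respecting refractoriness at each entry point
        if t in sensory_times:
            if t - last_sensory >= refractory:
                right.append(0)
            last_sensory = t
        if t in stim_times:
            if t - last_stim >= refractory:
                left.append(length - 1)
            last_stim = t
        # propagate one grid step per time step
        right = [x + 1 for x in right]
        left = [x - 1 for x in left]
        # collision: the lead spikes annihilate once they meet or cross
        while right and left and max(right) >= min(left):
            right.remove(max(right))
            left.remove(min(left))
        # count sensory spikes that exit at the far (central) end
        delivered += sum(1 for x in right if x >= length - 1)
        right = [x for x in right if x < length - 1]
        left = [x for x in left if x > 0]
    return delivered

print(simulate_fiber(10, 30, {0, 2}, set()))  # no stimulation
print(simulate_fiber(10, 30, {0, 2}, {0}))    # one stimulation pulse
```

In this sketch the second sensory spike (at t=2) is lost to refractoriness, and adding a stimulation pulse annihilates the surviving sensory spike by collision, reproducing both mechanisms qualitatively.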
We present a theoretical application of optimal experiment design (OED) methodology to developing mathematical models of the stimulus-response relationship of sensory neurons. Although there are a few related studies in the computational neuroscience literature, most involve either non-linear static maps or simple linear filters cascaded with a static non-linearity. Linear filters may capture some aspects of neural processing, but the highly non-linear nature of stimulus-response data can render them inadequate; moreover, modelling with a static non-linear input-output map may mask important dynamical (time-dependent) features of the response. For these reasons we prefer a non-linear, continuous-time dynamic recurrent neural network that models the excitatory and inhibitory membrane-potential dynamics. The main goal of this research is to estimate the parameters of this model from the available stimulus-response data. To design an efficient estimator, we propose an optimal experiment design scheme that computes a pre-shaped stimulus maximizing a measure of the Fisher information matrix. This measure depends on the parameter values estimated in the current step; the optimal stimuli are then used in a maximum-likelihood procedure to re-estimate the network parameters, and the loop repeats until reasonable convergence. The response data are discontinuous, consisting of neural spiking instants assumed to follow Poisson statistics, so the likelihood functions are based on the Poisson distribution. To validate the approach and evaluate its performance, we also present a comparison with estimation based on randomly generated stimuli.
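The estimation loop described above (pick a stimulus that maximizes a Fisher-information measure at the current estimate, collect a Poisson response, refit by maximum likelihood, repeat) can be sketched for a deliberately simple log-linear rate model. The model, candidate-stimulus grid, D-optimality criterion, and gradient-ascent fitter are hypothetical stand-ins for the paper's recurrent-network setting:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.5, 1.2])   # hypothetical ground-truth parameters

def rate(s, theta):
    """Poisson firing rate under a toy log-linear model."""
    return np.exp(theta[0] + theta[1] * s)

def fisher(s, theta):
    """Fisher information contributed by one Poisson trial at stimulus s."""
    x = np.array([1.0, s])
    return rate(s, theta) * np.outer(x, x)

def mle(stims, counts, iters=1500, lr=0.01):
    """Maximum-likelihood fit of theta by gradient ascent (concave problem)."""
    theta = np.zeros(2)
    X = np.column_stack([np.ones(len(stims)), stims])
    y = np.asarray(counts, dtype=float)
    for _ in range(iters):
        theta += lr * X.T @ (y - np.exp(X @ theta)) / len(stims)
    return theta

candidates = np.linspace(-1.0, 1.0, 21)   # admissible stimuli (assumed)
stims, counts = [], []
theta_hat = np.zeros(2)
total_info = 1e-6 * np.eye(2)
for trial in range(150):
    # D-optimal choice: the stimulus that most increases the determinant of
    # the accumulated Fisher information, evaluated at the current estimate
    gains = [np.linalg.det(total_info + fisher(s, theta_hat))
             for s in candidates]
    s = candidates[int(np.argmax(gains))]
    y = rng.poisson(rate(s, theta_true))   # simulated spike count
    stims.append(s)
    counts.append(y)
    total_info += fisher(s, theta_hat)
    theta_hat = mle(stims, counts)         # re-estimate, then loop
print(theta_hat)
```

The estimate converges toward `theta_true` as trials accumulate; in the paper this rate model is replaced by the recurrent membrane-potential dynamics and the pre-shaped stimulus is optimized over a richer space than a scalar grid.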
Sophisticated visualization tools are essential for the presentation and exploration of human neuroimaging data. While two-dimensional orthogonal views of neuroimaging data are conventionally used to display activity and statistical analysis, three-dimensional (3D) representation is useful for showing the spatial distribution of a functional network, as well as its temporal evolution. For these purposes, there is currently no open-source, 3D neuroimaging tool that can simultaneously visualize desired combinations of MRI, CT, EEG, MEG, fMRI, PET, and intracranial EEG (i.e., ECoG, depth electrodes, and DBS). Here we present the Multi-Modal Visualization Tool (MMVT), which is designed for researchers to interact with their neuroimaging functional and anatomical data through simultaneous visualization of these existing imaging modalities. MMVT contains two separate modules: The first is an add-on to the open-source, 3D-rendering program Blender. It is an interactive graphical interface that enables users to simultaneously visualize multi-modality functional and statistical data on cortical and subcortical surfaces as well as MEEG sensors and intracranial electrodes. This tool also enables highly accurate 3D visualization of neuroanatomy, including the location of invasive electrodes relative to brain structures. The second module includes complete stand-alone pre-processing pipelines, from raw data to statistical maps. Each of the modules and module features can be integrated, separate from the tool, into existing data pipelines. This gives the tool a distinct advantage in both clinical and research domains as each has highly specialized visual and processing needs. MMVT leverages open-source software to build a comprehensive tool for data visualization and exploration.
Neuroimaging data analysis often involves a priori selection of data features to study the underlying neural activity. Since this can lead to sub-optimal feature selection and thereby prevent the detection of subtle patterns in neural activity, data-driven methods have recently gained popularity for optimizing neuroimaging analysis pipelines and improving our understanding of neural mechanisms. In this context, we developed a deep convolutional architecture that can identify discriminating patterns in neuroimaging data and applied it to electroencephalography (EEG) recordings collected from 25 subjects performing a hand motor task before and after a rest period or a bout of exercise. The deep network was trained to classify subjects into exercise and control groups based on differences in their EEG signals. Subsequently, we developed a novel method, termed cue-combination for Class Activation Map (ccCAM), which enabled us to identify discriminating spatio-temporal features within a specific frequency band (23-33 Hz) and assess the effects of exercise on the brain. Additionally, the proposed architecture allowed the visualization of differences in the propagation of underlying neural activity across the cortex between the two groups, for the first time to our knowledge. Our results demonstrate the feasibility of using deep network architectures for neuroimaging analysis in different contexts, such as the identification of robust brain biomarkers to better characterize and potentially treat neurological disorders.
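The class-activation-map idea underlying ccCAM (weighting the network's final-layer feature maps by a class's classifier weights to localize discriminative activity) can be shown in a few lines of numpy. The array shapes are hypothetical, and the paper's cue-combination extension is not reproduced here:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Standard CAM: weight each final-layer feature map by that channel's
    weight in the global-average-pooling classifier, then sum over channels.

    feature_maps: (K, H, W) activations from the last convolutional layer.
    class_weights: (K,) weights connecting pooled features to one class.
    Returns an (H, W) saliency map."""
    return np.tensordot(class_weights, feature_maps, axes=1)

rng = np.random.default_rng(0)
K, H, W = 8, 6, 10                  # channels and a toy spatio-temporal grid
maps = rng.random((K, H, W))        # stand-in network activations
w = rng.normal(size=K)              # stand-in classifier weights
cam = class_activation_map(maps, w)
print(cam.shape)
```

High-magnitude regions of `cam` indicate where (and, for EEG-style inputs, when) the evidence for the chosen class is concentrated; this is the quantity such methods visualize on the cortex.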
In recent years, artificial neural networks have achieved state-of-the-art performance in predicting the responses of neurons in the visual cortex to natural stimuli. However, they require a time-consuming parameter optimization process to accurately model the tuning function of newly observed neurons, which prohibits many applications, including real-time, closed-loop experiments. We overcome this limitation by formulating the problem as $K$-shot prediction, directly inferring a neuron's tuning function from a small set of stimulus-response pairs using a Neural Process. This required us to develop a Factorized Neural Process, which embeds the observed set into a latent space partitioned into the receptive field location and the tuning function properties. We show on simulated responses that the predictions and reconstructed receptive fields from the Factorized Neural Process approach ground truth as the number of trials increases. Critically, the latent representation that summarizes a neuron's tuning function is inferred in a single, quick forward pass through the network. Finally, we validate this approach on real neural data from visual cortex and find that the predictive accuracy is comparable to -- and for small $K$ even greater than -- optimization-based approaches, while being substantially faster. We believe this novel deep-learning system-identification framework will facilitate better real-time integration of artificial neural network modeling into neuroscience experiments.
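The core mechanism described above (summarizing a set of stimulus-response pairs into a latent code in one forward pass, via a permutation-invariant set embedding) can be sketched with random numpy weights. The dimensions, mean-pooling encoder, and linear decoder are illustrative assumptions, not the Factorized Neural Process itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_pair(s, r, W):
    """Encode one (stimulus, response) pair into a feature vector."""
    return np.tanh(W @ np.append(s, r))

def infer_latent(stimuli, responses, W):
    """Permutation-invariant set embedding: average the per-pair encodings.
    One forward pass -- no per-neuron parameter optimization."""
    return np.mean([encode_pair(s, r, W) for s, r in zip(stimuli, responses)],
                   axis=0)

def predict(s, z, V):
    """Decode a response prediction from a stimulus and the latent code."""
    return float(V @ np.tanh(np.append(z, s)))

d, H, K = 4, 8, 5                    # stimulus dim, latent dim, shots (assumed)
W = rng.normal(0, 0.5, (H, d + 1))   # pair-encoder weights (hypothetical)
V = rng.normal(0, 0.5, H + d)        # decoder weights (hypothetical)

stimuli = rng.normal(size=(K, d))    # K observed stimulus-response pairs
responses = rng.normal(size=K)
z = infer_latent(stimuli, responses, W)  # the neuron's tuning summary
pred = predict(stimuli[0], z, V)         # prediction for a presented stimulus
```

Because the embedding is an average, reordering the $K$ observed trials leaves the latent code unchanged, which is the property that lets the inference run as a single forward pass over an unordered set.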