
Statistical Neuroscience in the Single Trial Limit

Added by Alex Williams
Publication date: 2021
Language: English





Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic noise and systematic changes in the animal's cognitive and behavioral state. In addition to investigating how noise and state changes impact neural computation, statistical models of trial-to-trial variability are becoming increasingly important as experimentalists aspire to study naturalistic animal behaviors, which never repeat themselves exactly and may rarely do so even approximately. Estimating the basic features of neural response distributions may seem impossible in this trial-limited regime. Fortunately, by identifying and leveraging simplifying structure in neural data -- e.g., shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions -- statistical estimation often remains tractable in practice. We review recent advances in statistical neuroscience that illustrate this trend and have enabled novel insights into the trial-by-trial operation of neural circuits.
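Of the simplifying structures listed above, temporal smoothness is the easiest to illustrate: rather than estimating a firing rate independently in each time bin, one can pool statistical strength across neighboring bins. The sketch below is our own minimal illustration (not a method from the review), using Gaussian-kernel smoothing of binned spike counts from just two simulated trials:

```python
import numpy as np

def smoothed_rate(spike_counts, bin_width=0.01, sigma=0.05):
    """Estimate a smooth firing rate (Hz) from binned spike counts.

    spike_counts : (n_trials, n_bins) array of counts per time bin
    bin_width    : bin size in seconds
    sigma        : Gaussian smoothing kernel width in seconds
    """
    mean_counts = spike_counts.mean(axis=0)        # trial-averaged counts
    half = int(4 * sigma / bin_width)              # truncate kernel at 4 sigma
    t = np.arange(-half, half + 1) * bin_width
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()                         # normalize so counts are preserved
    return np.convolve(mean_counts, kernel, mode="same") / bin_width

# Two noisy Poisson trials drawn from the same underlying rate
rng = np.random.default_rng(0)
true_rate = 20 + 15 * np.sin(np.linspace(0, 2 * np.pi, 200))  # Hz, 2 s of data
counts = rng.poisson(true_rate * 0.01, size=(2, 200))
rate_hat = smoothed_rate(counts)
```

The smoothing kernel trades temporal resolution for variance reduction: with only two trials, the raw per-bin rate estimate is dominated by Poisson noise, while the smoothed estimate tracks the slow underlying modulation.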



Related research

Within computational neuroscience, informal interactions with modelers often reveal wildly divergent goals. In this opinion piece, we explicitly address the diversity of goals that motivate and ultimately influence modeling efforts. We argue that a wide range of goals can be meaningfully taken to be of highest importance. A simple informal survey conducted on the Internet confirmed the diversity of goals in the community. However, different priorities or preferences of individual researchers can lead to divergent model evaluation criteria. We propose that many disagreements in evaluating the merit of computational research stem from differences in goals and not from the mechanics of constructing, describing, and validating models. We suggest that authors state explicitly their goals when proposing models so that others can judge the quality of the research with respect to its stated goals.
In recent years, the field of neuroscience has gone through rapid experimental advances and extensive use of quantitative and computational methods. This accelerating growth has created a need for methodological analysis of the role of theory and the modeling approaches currently used in this field. Toward that end, we start from the general view that the primary role of science is to solve empirical problems, and that it does so by developing theories that can account for phenomena within their domain of application. We propose a commonly-used set of terms - descriptive, mechanistic, and normative - as methodological designations that refer to the kind of problem a theory is intended to solve. Further, we find that models of each kind play distinct roles in defining and bridging the multiple levels of abstraction necessary to account for any neuroscientific phenomenon. We then discuss how models play an important role to connect theory and experiment, and note the importance of well-defined translation functions between them. Furthermore, we describe how models themselves can be used as a form of experiment to test and develop theories. This report is the summary of a discussion initiated at the conference Present and Future Theoretical Frameworks in Neuroscience, which we hope will contribute to a much-needed discussion in the neuroscientific community.
Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: 1) creating solutions to engineering problems, 2) identifying predictive variables, 3) setting benchmarks for simple models of the brain, and 4) serving itself as a model for the brain. The breadth and ease of its applicability suggest that machine learning should be in the toolbox of most systems neuroscientists.
The estimation of causal network architectures in the brain is fundamental for understanding cognitive information processes. However, access to the dynamic processes underlying cognition is limited to indirect measurements of the hidden neuronal activity, for instance through fMRI data. Thus, estimating the network structure of the underlying process is challenging. In this article, we embed an adaptive importance sampler called the Adaptive Path Integral Smoother (APIS) into the Expectation-Maximization algorithm to obtain point estimates of causal connectivity. We demonstrate on synthetic data that this procedure finds not only the correct network structure but also the direction of effective connections from random initializations of the connectivity matrix. In addition--motivated by contradictory claims in the literature--we examine the effect of the neuronal timescale on the sensitivity of the BOLD signal to changes in the connectivity and on the maximum likelihood solutions of the connectivity. We conclude with two warnings: First, connectivity estimates made under the assumption of slow dynamics can be extremely biased if the data were generated by fast neuronal processes. Second, the faster the timescale, the less sensitive the BOLD signal is to changes in the incoming connections to a node. Hence, connectivity estimation at realistic neural-dynamics timescales requires extremely high-quality data and seems infeasible in many practical data sets.
Functional MRI (fMRI) is a powerful technique that has allowed us to characterize visual cortex responses to stimuli, yet such experiments are by nature constructed based on a priori hypotheses, are limited to the set of images presented to the individual while they are in the scanner, are subject to noise in the observed brain responses, and may vary widely across individuals. In this work, we propose a novel computational strategy, which we call NeuroGen, to overcome these limitations and develop a powerful tool for human vision neuroscience discovery. NeuroGen combines an fMRI-trained neural encoding model of human vision with a deep generative network to synthesize images predicted to achieve a target pattern of macro-scale brain activation. We demonstrate that the reduction of noise that the encoding model provides, coupled with the generative network's ability to produce images of high fidelity, results in a robust discovery architecture for visual neuroscience. By using only a small number of synthetic images created by NeuroGen, we demonstrate that we can detect and amplify differences in regional and individual human brain response patterns to visual stimuli. We then verify that these discoveries are reflected in the several thousand observed image responses measured with fMRI. We further demonstrate that NeuroGen can create synthetic images predicted to achieve regional response patterns not achievable by the best-matching natural images. The NeuroGen framework extends the utility of brain encoding models and opens up a new avenue for exploring, and possibly precisely controlling, the human visual system.
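The core idea behind this kind of discovery architecture -- ascending the gradient of a differentiable encoding model's predicted response to synthesize a preferred stimulus -- can be sketched in a few lines. The sketch below is our illustration only: NeuroGen pairs an fMRI-trained deep encoder with a deep generative network, whereas here we substitute a toy linear "encoding model" and optimize the stimulus vector directly under a unit-norm constraint:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 64

# Stand-in "encoding model": predicted regional response to a stimulus s.
# (NeuroGen uses an fMRI-trained deep encoder; a linear model keeps the sketch tiny.)
w = rng.standard_normal(n_pix)

def predicted_response(s):
    return float(w @ s)

# Gradient ascent on the stimulus, projected back to the unit sphere so the
# optimum is well defined (analytically, the maximizer is s* = w / ||w||).
s = rng.standard_normal(n_pix)
s /= np.linalg.norm(s)
lr = 0.1
for _ in range(500):
    grad = w                    # gradient of w @ s with respect to s
    s = s + lr * grad
    s /= np.linalg.norm(s)      # project back onto the constraint set

# s now aligns with w / ||w||, the stimulus with the largest predicted response
```

With a deep encoder in place of `w @ s`, the gradient comes from backpropagation and the norm constraint is replaced by the generative network, which restricts the search to natural-looking images; the optimization logic is otherwise the same.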
