
A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons

Posted by Gabriel Mahuas
Publication date: 2020
Paper language: English





Generalized linear models are one of the most efficient paradigms for predicting the correlated stochastic activity of neuronal networks in response to external stimuli, with applications in many brain areas. However, when dealing with complex stimuli, the inferred coupling parameters often do not generalize across different stimulus statistics, leading to degraded performance and blowup instabilities. Here, we develop a two-step inference strategy that allows us to train robust generalized linear models of interacting neurons, by explicitly separating the effects of correlations in the stimulus from network interactions in each training step. Applying this approach to the responses of retinal ganglion cells to complex visual stimuli, we show that, compared to classical methods, the models trained in this way exhibit improved performance, are more stable, yield robust interaction networks, and generalize well across complex visual statistics. The method can be extended to deep convolutional neural networks, leading to models with high predictive accuracy for both the neuron firing rates and their correlations.
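
The abstract does not spell out the implementation, but the general two-step idea (fit the stimulus-driven part of a Poisson GLM first, then fit the neuron-to-neuron coupling filters with the stimulus part held fixed) can be sketched as follows. The exponential nonlinearity, the single-bin coupling lag, and the scipy-based maximum-likelihood fits are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a two-step fit for a Poisson GLM of interacting neurons.
# Assumptions (not from the paper): exponential nonlinearity, one-bin lagged
# couplings, and generic maximum-likelihood fits via scipy.optimize.
import numpy as np
from scipy.optimize import minimize

def poisson_nll(rate, spikes, dt):
    """Negative log-likelihood of spike counts under a Poisson GLM."""
    return np.sum(rate * dt - spikes * np.log(rate * dt + 1e-12))

def fit_stimulus_model(stim, spikes, dt):
    """Step 1: fit bias + stimulus filter for one neuron, couplings switched off."""
    n_t, d = stim.shape

    def nll(theta):
        b, k = theta[0], theta[1:]
        rate = np.exp(b + stim @ k)
        return poisson_nll(rate, spikes, dt)

    res = minimize(nll, np.zeros(d + 1), method="L-BFGS-B")
    return res.x[0], res.x[1:]

def fit_couplings(stim, spikes_all, i, b, k, dt):
    """Step 2: with the stimulus part of neuron i frozen, fit couplings J_ij
    to the population activity in the previous time bin."""
    past = np.roll(spikes_all, 1, axis=0)   # lagged population activity
    drive = b + stim @ k                    # fixed stimulus-driven input

    def nll(J):
        rate = np.exp(drive + past @ J)
        return poisson_nll(rate, spikes_all[:, i], dt)

    res = minimize(nll, np.zeros(spikes_all.shape[1]), method="L-BFGS-B")
    return res.x
```

Freezing the stimulus parameters in the second step is what, in this toy version, keeps stimulus-driven correlations from being absorbed into the couplings; the paper's actual strategy separates the two contributions more carefully and extends to deep convolutional stimulus models.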


Read also

Neuronal ensemble inference is a significant problem in the study of biological neural networks. Various methods have been proposed for ensemble inference from experimental data of neuronal activity. Among them, a Bayesian inference approach with a generative model was proposed recently. However, this method requires a large computational cost for appropriate inference. In this work, we give an improved Bayesian inference algorithm by modifying the update rule in the Markov chain Monte Carlo method and introducing the idea of simulated annealing for hyperparameter control. We compare the performance of ensemble inference between our algorithm and the original one, and discuss the advantages of our method.
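
As a generic illustration of the second ingredient (running a Metropolis-style sampler whose acceptance rule is controlled by a temperature that is gradually annealed), a minimal sketch might look as follows; the placeholder log-posterior, the geometric cooling schedule, and the proposal scale are assumptions, not the authors' generative model for neuronal ensembles.

```python
# Generic Metropolis sampler with a simulated-annealing temperature schedule.
# The target log-posterior and the cooling schedule are illustrative placeholders.
import numpy as np

def annealed_metropolis(log_post, x0, n_steps, t0=5.0, t_min=1.0,
                        cooling=0.999, step=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    temp, samples = t0, []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        # Accept with probability exp((lp_prop - lp) / temp): at high temperature
        # the chain explores broadly, and as temp -> 1 it targets the posterior.
        if np.log(rng.random()) < (lp_prop - lp) / temp:
            x, lp = prop, lp_prop
        temp = max(t_min, temp * cooling)   # geometric cooling
        samples.append(x.copy())
    return np.array(samples)

# Example: sample a 2-D Gaussian posterior, starting far from the mode.
samples = annealed_metropolis(lambda x: -0.5 * np.sum(x**2),
                              x0=[3.0, -3.0], n_steps=5000)
```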
Zhe Fei, Yi Li (2019)
The focus of modern biomedical studies has gradually shifted to explanation and estimation of the joint effects of high-dimensional predictors on disease risks. Quantifying uncertainty in these estimates may provide valuable insight into prevention strategies or treatment decisions for both patients and physicians. High-dimensional inference, including confidence intervals and hypothesis testing, has sparked much interest. While much work has been done in the linear regression setting, there is a lack of literature on inference for high-dimensional generalized linear models. We propose a novel and computationally feasible method, which accommodates a variety of outcome types, including normal, binomial, and Poisson data. We use a splitting and smoothing approach, which splits samples into two parts, performs variable selection using one part and conducts partial regression with the other part. Averaging the estimates over multiple random splits, we obtain the smoothed estimates, which are numerically stable. We show that the estimates are consistent and asymptotically normal, and we construct confidence intervals with proper coverage probabilities for all predictors. We examine the finite-sample performance of our method by comparing it with existing methods and applying it to analyze a lung cancer cohort study.
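
A rough sketch of the splitting-and-smoothing recipe, specialized to a binomial outcome, is given below; the choice of an l1-penalized logistic regression for the selection step (scikit-learn) and of statsmodels for the low-dimensional refit are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of splitting and smoothing for high-dimensional logistic regression:
# split the sample, select variables on one half, refit a low-dimensional GLM
# on the other half, and average the coefficient of interest over splits.
# Library choices (scikit-learn, statsmodels) are illustrative assumptions.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def split_smooth_estimate(X, y, j, n_splits=50, seed=0):
    """Smoothed estimate of the coefficient of predictor j."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_splits):
        X1, X2, y1, y2 = train_test_split(
            X, y, test_size=0.5, random_state=int(rng.integers(1 << 31)))
        # Selection half: l1-penalized logistic regression.
        sel = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        sel.fit(X1, y1)
        cols = sorted(set(np.flatnonzero(sel.coef_.ravel())) | {j})
        # Inference half: ordinary low-dimensional GLM refit on selected columns.
        fit = sm.GLM(y2, sm.add_constant(X2[:, cols]),
                     family=sm.families.Binomial()).fit()
        estimates.append(fit.params[1 + cols.index(j)])
    return np.mean(estimates)
```

Averaging the per-split estimates is what stabilizes the estimator; the paper additionally derives standard errors and confidence intervals for every coefficient, which this sketch omits.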
Functional networks provide a topological description of activity patterns in the brain, as they stem from the propagation of neural activity on the underlying anatomical or structural network of synaptic connections. The latter is well known to be organized in a hierarchical and modular way. While it is assumed that structural networks shape their functional counterparts, it is also hypothesized that alterations of brain dynamics come with transformations of functional connectivity. In this computational study, we introduce a novel methodology to monitor the persistence and breakdown of hierarchical order in functional networks, generated from computational models of activity spreading on both synthetic and real structural connectomes. We show that hierarchical connectivity appears in functional networks in a persistent way if the dynamics is set to the quasi-critical regime associated with optimal processing capabilities and normal brain function, while it breaks down in other (supercritical) dynamical regimes, often associated with pathological conditions. Our results offer important clues for the study of optimal neurocomputing architectures and processes, which are capable of controlling patterns of activity and information flow. We conclude that functional connectivity patterns achieve an optimal balance between local specialized processing (i.e. segregation) and global integration by inheriting the hierarchical organization of the underlying structural architecture.
Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in the data analyst's repertoire and are often applied to sensitive datasets. A large body of prior work that investigates GLMs under differential privacy (DP) constraints provides only private point estimates of the regression coefficients and is not able to quantify parameter uncertainty. In this work, with logistic and Poisson regression as running examples, we introduce a generic noise-aware DP Bayesian inference method for a GLM at hand, given a noisy sum of summary statistics. Quantifying uncertainty allows us to determine which of the regression coefficients are statistically significantly different from zero. We provide a previously unknown tight privacy analysis and experimentally demonstrate that the posteriors obtained from our model, while adhering to strong privacy guarantees, are close to the non-private posteriors.
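
To give a flavor of the setup, the sketch below shows the first ingredient only: releasing a noisy sum of clipped per-record summary statistics under the standard Gaussian mechanism, which a noise-aware posterior would then condition on. The clipping bound, the choice of statistic, and the classical noise calibration are assumptions; the paper's privacy analysis is tighter than what is used here.

```python
# Sketch: differentially private release of GLM summary statistics via
# per-record clipping plus the standard Gaussian mechanism. The clipping bound
# and statistic choice are illustrative; the paper's analysis is tighter than
# the classical calibration used here.
import numpy as np

def noisy_summary_stats(X, y, epsilon, delta, clip=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    # Per-record statistic s_i = y_i * x_i, clipped in l2 norm so that each
    # record's contribution (the l2 sensitivity of the sum) is at most `clip`.
    s = y[:, None] * X
    norms = np.linalg.norm(s, axis=1, keepdims=True)
    s = s * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Classical Gaussian-mechanism noise scale (valid for epsilon < 1).
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return s.sum(axis=0) + rng.normal(0.0, sigma, size=X.shape[1])
```

The noise-aware step then treats the noise scale as known and includes the Gaussian perturbation in the likelihood when inferring the regression coefficients, rather than adding noise to a non-private posterior after the fact.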
Estimators computed from adaptively collected data do not behave like their non-adaptive brethren. Rather, the sequential dependence of the collection policy can lead to severe distributional biases that persist even in the infinite data limit. We develop a general method -- $\mathbf{W}$-decorrelation -- for transforming the bias of adaptive linear regression estimators into variance. The method uses only coarse-grained information about the data collection policy and does not need access to propensity scores or exact knowledge of the policy. We bound the finite-sample bias and variance of the $\mathbf{W}$-estimator and develop asymptotically correct confidence intervals based on a novel martingale central limit theorem. We then demonstrate the empirical benefits of the generic $\mathbf{W}$-decorrelation procedure in two different adaptive data settings: the multi-armed bandit and the autoregressive time series.
