
Empirical Bayes Formulation of the Elastic Net and Mixed-Norm Models: Application to the EEG Inverse Problem

 Added by Deirel Paz-Linares
 Publication date 2016
Language: English





The estimation of EEG-generating sources constitutes an Inverse Problem (IP) in neuroscience. The problem is ill-posed, owing to the non-uniqueness of the solution, and many kinds of prior information have been used to constrain it. Combining smoothness (L2 norm-based) and sparseness (L1 norm-based) constraints is a flexible approach that has been pursued in important examples such as the Elastic Net (ENET) and mixed-norm (MXN) models. The former is used to find solutions with a small number of smooth non-zero patches, while the latter imposes sparseness and smoothness simultaneously along different dimensions of the spatio-temporal solution matrices. Both models have been addressed within the penalized-regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but relies on computationally intensive Monte Carlo/Expectation-Maximization methods. In this work we solve the EEG IP within a Bayesian framework for models based on mixtures of L1/L2-norm penalization functions (Laplace/Normal priors), such as ENET and MXN. We propose a Sparse Bayesian Learning algorithm that combines Empirical Bayes with an iterative coordinate-descent procedure to estimate both the parameters and the hyperparameters. In simple but realistic simulations, our methods recover complicated source setups more accurately, and with more robust variable selection, than the ENET and LASSO solutions obtained with classical algorithms. We also solve the EEG IP on data from a visual-attention experiment, finding more interpretable neurophysiological patterns with our methods than with other known methods such as LORETA, ENET and LASSO FUSION under the classical regularization approach.
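The abstract does not spell out the authors' Sparse Bayesian Learning algorithm. As background for the classical penalized-regression ENET it compares against, here is a minimal coordinate-descent sketch in plain NumPy (the function names and the fixed regularization parameters `lam` and `alpha` are illustrative assumptions, not from the paper):

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def elastic_net_cd(X, y, lam=0.1, alpha=0.5, n_iter=200):
    """Coordinate descent for the elastic net objective
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # per-feature curvature terms
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j removed
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam * alpha) / (col_sq[j] + lam * (1 - alpha))
    return b
```

In the classical approach criticized above, `lam` and `alpha` are chosen heuristically (e.g. by cross-validation); replacing that heuristic choice with Empirical Bayes hyperparameter learning is precisely the gap the paper addresses.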




Related research

A new class of survival frailty models based on the Generalized Inverse-Gaussian (GIG) distributions is proposed. We show that the GIG frailty models are flexible and mathematically convenient, like the popular gamma frailty model. Furthermore, the proposed class is robust and does not present some of the computational issues experienced by the gamma model. By assuming a piecewise-exponential baseline hazard function, which gives our frailty class a semiparametric flavour, we propose an EM algorithm for estimating the model parameters and provide an explicit expression for the information matrix. Simulation results are presented to check the finite-sample behavior of the EM estimators and to study the performance of the GIG models under misspecification. We apply our methodology to TARGET (Therapeutically Applicable Research to Generate Effective Treatments) data on the survival times of patients with neuroblastoma and show some advantages of the GIG frailties over existing models in the literature.
Time series datasets often contain heterogeneous signals, composed of both continuously changing quantities and discretely occurring events. The coupling between these measurements may provide insights into key underlying mechanisms of the systems under study. To better extract this information, we investigate the asymptotic statistical properties of coupling measures between continuous signals and point processes. We first introduce martingale stochastic integration theory as a mathematical model for a family of statistical quantities that includes the Phase Locking Value, a classical coupling measure used to characterize complex dynamics. Based on the martingale Central Limit Theorem, we then derive the asymptotic Gaussian distribution of estimates of such coupling measures, which can be exploited for statistical testing. Second, based on multivariate extensions of this result and Random Matrix Theory, we establish a principled way to analyze the low-rank coupling between a large number of point processes and continuous signals. Under a null hypothesis of no coupling, we establish sufficient conditions for the empirical distribution of the squared singular values of the matrix to converge, as the number of measured signals increases, to the well-known Marchenko-Pastur (MP) law, with the largest squared singular value converging to the upper end of the MP support. This justifies a simple thresholding approach to assess the significance of multivariate coupling. Finally, we illustrate with simulations the relevance of our univariate and multivariate results in the context of neural time series, addressing how to reliably quantify the interplay between multi-channel Local Field Potential signals and the spiking activity of a large population of neurons.
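As a concrete anchor for the univariate case, the Phase Locking Value between a continuous signal and a point process can be estimated as the magnitude of the mean unit phasor of the signal's instantaneous phase sampled at the event times. A minimal sketch (Hilbert-transform phase; the function name and the representation of spikes as sample indices are our assumptions, and this is not the paper's martingale estimator):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(lfp, spike_idx):
    """PLV between a continuous signal and a point process:
    |mean over events of exp(i * instantaneous phase at the event)|."""
    phase = np.angle(hilbert(lfp))          # instantaneous phase of the signal
    return np.abs(np.mean(np.exp(1j * phase[spike_idx])))
```

Spikes locked to a fixed phase of an oscillation give a PLV near 1, while spikes at uniformly random times give a PLV near 0; the abstract's contribution is the asymptotic distribution theory needed to test where between those extremes an empirical value falls.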
We propose an algorithm to select parameter subset combinations that can be estimated, for a given data set, using an ordinary least-squares (OLS) inverse problem formulation. First, the algorithm selects the parameter combinations whose sensitivity matrices have full rank. Second, it quantifies uncertainty using the inverse of the Fisher Information Matrix. Nominal parameter values are used to construct synthetic data sets and to explore the effects of removing certain parameters from the set to be estimated by OLS. We quantify these effects with a score for a parameter vector, defined as the norm of the vector of standard errors of the component estimates divided by the estimates themselves. In some cases the method reduces the standard error of a parameter to less than 1% of its estimate.
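The two checks described can be sketched as follows, assuming a precomputed sensitivity matrix `S` (observations by parameters) and a nominal parameter vector `theta`; the function name and the iid-noise form of the Fisher Information Matrix, `S^T S / sigma^2`, are our illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def selection_score(S, theta, sigma=1.0):
    """Score a parameter subset from its sensitivity matrix S (n_obs x p).
    Step 1: require full column rank (otherwise the subset is not identifiable).
    Step 2: invert the Fisher Information Matrix to get standard errors,
    then return the norm of the vector of relative standard errors."""
    if np.linalg.matrix_rank(S) < S.shape[1]:
        return np.inf                       # rank-deficient: reject this subset
    fim = S.T @ S / sigma**2                # FIM for OLS with iid Gaussian noise
    se = np.sqrt(np.diag(np.linalg.inv(fim)))
    return np.linalg.norm(se / theta)
```

Smaller scores indicate subsets whose components are estimated with small standard errors relative to their nominal values, matching the selection criterion in the abstract.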
Yue Yang, Ryan Martin (2020)
In high dimensions, the prior tails can have a significant effect on both posterior computation and asymptotic concentration rates. To achieve optimal rates while keeping the posterior computations relatively simple, an empirical Bayes approach has recently been proposed, featuring thin-tailed conjugate priors with data-driven centers. While conjugate priors ease some of the computational burden, Markov chain Monte Carlo methods are still needed, which can be expensive when the dimension is high. In this paper, we develop a variational approximation to the empirical Bayes posterior that is fast to compute and retains the optimal concentration rate properties of the original. In simulations, our method shows superior performance compared to existing variational approximations in the literature across a wide range of high-dimensional settings.
Xiuwen Duan (2021)
Empirical Bayes methods have been around for a long time and have a wide range of applications. These methods provide a way in which historical data can be aggregated to provide estimates of the posterior mean. This thesis revisits some of the empirical Bayes methods and develops new applications. We first look at a linear empirical Bayes estimator and apply it to ranking and symbolic data. Next, we consider Tweedie's formula and show how it can be applied to analyze a microarray dataset. The application of the formula is simplified with the Pearson system of distributions. Saddlepoint approximations enable us to generalize several results in this direction. The results show that the proposed methods perform well in applications to real data sets.
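Tweedie's formula itself is simple to state: for z ~ N(theta, sigma^2), the posterior mean is E[theta | z] = z + sigma^2 * d/dz log f(z), where f is the marginal density of z. A minimal sketch, using a Gaussian kernel density estimate of f in place of the Pearson-system fit described in the thesis (the function name and the finite-difference derivative are our assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde

def tweedie_posterior_mean(z, sigma=1.0):
    """Tweedie's formula E[theta_i | z_i] = z_i + sigma^2 * d/dz log f(z_i),
    with the marginal density f estimated by a Gaussian KDE and its
    log-derivative taken by central finite differences."""
    kde = gaussian_kde(z)
    eps = 1e-4
    score = (np.log(kde(z + eps)) - np.log(kde(z - eps))) / (2 * eps)
    return z + sigma**2 * score
```

Because the estimated score points toward regions of high marginal density, the formula automatically shrinks noisy observations toward the bulk of the data, which is the behavior exploited in the microarray application.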