
Tuning parameter selection for penalized likelihood estimation of inverse covariance matrix

Posted by: Dr. Xin Gao
Publication date: 2009
Research field: Mathematical Statistics
Paper language: English





In a Gaussian graphical model, the conditional independence between two variables is characterized by the corresponding zero entries in the inverse covariance matrix. Maximum likelihood methods using the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and the adaptive LASSO penalty (Zou, 2006) have been proposed in the literature. In this article, we establish the result that using the Bayesian information criterion (BIC) to select the tuning parameter in penalized likelihood estimation with both types of penalties can lead to consistent graphical model selection. We compare the empirical performance of BIC with the cross-validation method and demonstrate, through simulation studies, the advantageous performance of the BIC criterion for tuning parameter selection.
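As a rough illustration of the idea (not code from the paper), the sketch below scores an l1-penalized precision-matrix fit with BIC = -2·loglik + log(n)·df, where df counts the nonzero off-diagonal entries of the estimated precision matrix. scikit-learn's GraphicalLasso is used only as a convenient stand-in for the SCAD/adaptive-LASSO estimators, and the penalty grid and tolerance are arbitrary choices for the example.

import numpy as np
from sklearn.covariance import GraphicalLasso, empirical_covariance

def bic_score(X, lam, tol=1e-8):
    """BIC for an l1-penalized precision-matrix fit at penalty level lam.

    Illustrative only: GraphicalLasso stands in for the SCAD / adaptive-LASSO
    penalized estimators discussed in the abstract.
    """
    n, p = X.shape
    S = empirical_covariance(X)
    theta = GraphicalLasso(alpha=lam, max_iter=200).fit(X).precision_
    # Gaussian log-likelihood, up to an additive constant
    _, logdet = np.linalg.slogdet(theta)
    loglik = 0.5 * n * (logdet - np.trace(S @ theta))
    # degrees of freedom: nonzero off-diagonal entries (upper triangle)
    df = np.sum(np.abs(theta[np.triu_indices(p, k=1)]) > tol)
    return -2.0 * loglik + np.log(n) * df

# choose the penalty that minimizes BIC over a small (arbitrary) grid
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
grid = [0.02, 0.05, 0.1, 0.2, 0.4]
best_lam = min(grid, key=lambda lam: bic_score(X, lam))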




Read also

Libo Sun, Chihoon Lee, 2013
We consider the problem of estimating parameters of stochastic differential equations (SDEs) with discrete-time observations that are either completely or partially observed. The transition density between two observations is generally unknown. We propose an importance sampling approach with an auxiliary parameter when the transition density is unknown. We embed the auxiliary importance sampler in a penalized maximum likelihood framework which produces more accurate and computationally efficient parameter estimates. Simulation studies in three different models illustrate promising improvements of the new penalized simulated maximum likelihood method. The new procedure is designed for the challenging case when some state variables are unobserved and, moreover, observed states are sparse over time, which commonly arises in ecological studies. We apply this new approach to two epidemics of chronic wasting disease in mule deer.
High-dimensional predictive models, those with more measurements than observations, require regularization to be well defined, perform well empirically, and possess theoretical guarantees. The amount of regularization, often determined by tuning parameters, is integral to achieving good performance. One can choose the tuning parameter in a variety of ways, such as through resampling methods or generalized information criteria. However, the theory supporting many regularized procedures relies on an estimate for the variance parameter, which is complicated in high dimensions. We develop a suite of information criteria for choosing the tuning parameter in lasso regression by leveraging the literature on high-dimensional variance estimation. We derive intuition showing that existing information-theoretic approaches work poorly in this setting. We compare our risk estimators to existing methods with an extensive simulation and derive some theoretical justification. We find that our new estimators perform well across a wide range of simulation conditions and evaluation criteria.
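To make the shape of such a criterion concrete (this is not the paper's proposal), a minimal BIC-style score for a lasso fit can be written as follows, treating the variance estimate sigma2 as given; obtaining that estimate reliably in high dimensions is precisely the difficulty the abstract above addresses.

import numpy as np
from sklearn.linear_model import Lasso

def lasso_bic(X, y, lam, sigma2):
    """BIC-style risk estimate for a lasso fit at penalty level lam.

    Simplifying assumptions: centred data (no intercept) and a plug-in
    variance estimate sigma2 supplied by the user.
    """
    n = X.shape[0]
    beta = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    rss = np.sum((y - X @ beta) ** 2)
    df = np.count_nonzero(beta)  # active-set size as degrees of freedom
    return rss / sigma2 + np.log(n) * df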
The regularization approach for variable selection was well developed for a completely observed data set in the past two decades. In the presence of missing values, this approach needs to be tailored to different missing data mechanisms. In this paper, we focus on a flexible and generally applicable missing data mechanism, which contains both ignorable and nonignorable missing data mechanism assumptions. We show how the regularization approach for variable selection can be adapted to the situation under this missing data mechanism. The computational and theoretical properties for variable selection consistency are established. The proposed method is further illustrated by comprehensive simulation studies and real data analyses, for both low and high dimensional settings.
Edouard Ollier, 2021
Nonlinear mixed effects models are hidden variable models that are widely used in many fields, such as pharmacometrics. In such models, the distribution characteristics of the hidden variables can be specified by including several parameters, such as covariates or correlations, which must be selected. Recent developments in pharmacogenomics have brought averaged/high dimensional problems to the field of nonlinear mixed effects modeling, for which standard covariate selection techniques like stepwise methods are not well suited. This work proposes to select covariates and correlation parameters using a penalized likelihood approach. The penalized likelihood problem is solved using a stochastic proximal gradient algorithm to avoid inner-outer iterations. The speed of convergence of the proximal gradient algorithm is improved by the use of component-wise adaptive gradient step sizes. The practical implementation and tuning of the proximal gradient algorithm are explored using simulations. Calibration of the regularization parameters is performed by minimizing the Bayesian information criterion using particle swarm optimization, a zero-order optimization procedure. The use of warm restarts and parallelization significantly reduces computing time. The performance of the proposed method compared to the traditional grid search strategy is explored using simulated data. Finally, an application to real data from two pharmacokinetic studies is provided, one studying an antifibrinolytic and the other an antibiotic.
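The algorithmic core mentioned here, a proximal gradient update for an l1-type penalty, fits in a few lines. The sketch below is a generic single update with (possibly component-wise) step sizes; it is not the paper's stochastic, warm-restarted implementation.

import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm, applied element-wise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient_step(theta, grad, step, lam):
    """One proximal gradient update for an l1-penalized objective:
    a gradient step on the smooth part followed by soft-thresholding.
    `step` may be a vector, mimicking the component-wise adaptive step
    sizes mentioned in the abstract."""
    return soft_threshold(theta - step * grad, step * lam)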
Non-parametric maximum likelihood estimation encompasses a group of classic methods to estimate distribution-associated functions from potentially censored and truncated data, with extensive applications in survival analysis. These methods, including the Kaplan-Meier estimator and Turnbull's method, often result in overfitting, especially when the sample size is small. We propose an improvement to these methods by applying kernel smoothing to their raw estimates, based on a BIC-type loss function that balances the trade-off between optimizing model fit and controlling model complexity. In the context of a longitudinal study with repeated observations, we detail our proposed smoothing procedure and optimization algorithm. With extensive simulation studies over multiple realistic scenarios, we demonstrate that our smoothing-based procedure provides better overall accuracy in both survival function estimation and individual-level time-to-event prediction by reducing overfitting. Our smoothing procedure decreases the discrepancy between the estimated and true simulated survival function using interval-censored data by up to 49% compared to the raw unsmoothed estimate, with similar improvements of up to 41% and 23% in within-sample and out-of-sample prediction, respectively. Finally, we apply our method to real data on censored breast cancer diagnosis, which similarly shows improvement when compared to empirical survival estimates from uncensored data. We provide an R package, SISE, for implementing our penalized likelihood method.