
Online Sparse Sliced Inverse Regression

Published by Haoyang Cheng
Publication date: 2020
Research field: Mathematical statistics
Paper language: English





Due to the demand for tackling streaming data with high-dimensional covariates, we propose an online sparse sliced inverse regression (OSSIR) method for online sufficient dimension reduction. Existing online sufficient dimension reduction methods focus on the case where the dimension $p$ is small; in this article, we show that our method achieves better statistical accuracy and computation speed when $p$ is large. There are two important steps in our method: one is to extend online principal component analysis to iteratively obtain the eigenvalues and eigenvectors of the kernel matrix; the other is to use the truncated gradient to achieve online $L_{1}$ regularization. We also analyze the convergence of the extended candid covariance-free incremental PCA (CCIPCA) and of our method. Comparisons with several existing methods in simulations and real data applications demonstrate the effectiveness and efficiency of our method.
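Both building blocks named in the abstract are standard and can be sketched compactly. The Python snippet below is a minimal, illustrative version of a CCIPCA eigenvector update (Weng, Zhang and Hwang, 2003) and of the truncated-gradient operator (Langford, Li and Zhang, 2009); how OSSIR forms the streaming SIR kernel matrix and feeds it into these updates follows the paper itself, so the function names and parameters here are assumptions for illustration, not the authors' code.

```python
import numpy as np

def ccipca_update(V, n, u):
    """One CCIPCA step: V holds k unnormalized eigenvector estimates
    (rows of length p), n is the number of samples seen so far, and u
    is the new centered observation. ||V[i]|| estimates eigenvalue i."""
    u = u.copy()
    for i in range(V.shape[0]):
        norm = np.linalg.norm(V[i]) + 1e-12
        # convex combination of the old estimate and the new rank-one term
        V[i] = (n - 1) / n * V[i] + (u @ V[i]) / (n * norm) * u
        # deflate u so the next component is estimated orthogonally
        vhat = V[i] / (np.linalg.norm(V[i]) + 1e-12)
        u -= (u @ vhat) * vhat
    return V

def truncated_gradient(w, shrink, theta):
    """Truncated-gradient operator (Langford et al., 2009): pull
    coefficients inside [-theta, theta] toward zero by `shrink`,
    without crossing zero; larger coefficients are left untouched."""
    w = w.copy()
    pos = (w > 0) & (w <= theta)
    neg = (w < 0) & (w >= -theta)
    w[pos] = np.maximum(w[pos] - shrink, 0.0)
    w[neg] = np.minimum(w[neg] + shrink, 0.0)
    return w
```

A natural way to combine the two, consistent with the abstract, is to apply the truncation periodically to the estimated directions so that the online $L_{1}$ penalty zeroes out small coordinates; the paper's exact schedule may differ.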




Read also

This work considers variational Bayesian inference as an inexpensive and scalable alternative to a fully Bayesian approach in the context of sparsity-promoting priors. In particular, the priors considered arise from scale mixtures of Normal distributions with a generalized inverse Gaussian mixing distribution. This includes the variational Bayesian LASSO as an inexpensive and scalable alternative to the Bayesian LASSO introduced in [56]. It also includes priors which more strongly promote sparsity. For linear models the method requires only the iterative solution of deterministic least-squares problems. Furthermore, for $n \rightarrow \infty$ data points and $p$ unknown covariates the method can be implemented exactly online with a cost of $O(p^3)$ in computation and $O(p^2)$ in memory. For large $p$, an approximation is able to achieve promising results for a cost of $O(p)$ in both computation and memory. Strategies for hyper-parameter tuning are also considered. The method is implemented for real and simulated data. It is shown that the performance in terms of variable selection and uncertainty quantification of the variational Bayesian LASSO can be comparable to the Bayesian LASSO for problems which are tractable with that method, and for a fraction of the cost. The present method comfortably handles $n = p = 131{,}073$ on a laptop in minutes, and $n = 10^5$, $p = 10^6$ overnight.
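The claimed exact online implementation at $O(p^3)$ computation and $O(p^2)$ memory is consistent with keeping only the sufficient statistics of the least-squares subproblems, updated one observation at a time. A minimal sketch of that bookkeeping, assuming this mechanism (the paper's actual recursion is not reproduced here):

```python
import numpy as np

class OnlineLeastSquares:
    """Accumulate the sufficient statistics of a linear model one
    observation at a time: A = X^T X (p x p, hence O(p^2) memory)
    and b = X^T y. Solving A w = b costs O(p^3), independent of n."""
    def __init__(self, p, ridge=1e-8):
        self.A = ridge * np.eye(p)   # tiny ridge keeps A invertible early on
        self.b = np.zeros(p)

    def update(self, x, y):
        self.A += np.outer(x, x)     # rank-one update, O(p^2) per data point
        self.b += y * x

    def solve(self):
        return np.linalg.solve(self.A, self.b)
```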
Sliced inverse regression is one of the most popular sufficient dimension reduction methods. Originally designed for independent and identically distributed data, it was recently extended to serially and spatially dependent data. In this work we extend it to the case of spatially dependent data where the response might also depend on neighbouring covariates when the observations are taken on a grid-like structure, as is often the case in econometric spatial regression applications. We suggest guidelines on how to decide upon the dimension of the subspace of interest and also on which spatial lag might be of interest when modeling the response. These guidelines are supported by a simulation study.
Yue Yu, Zhihong Chen, Jie Yang (2011)
This article concerns dimension reduction in regression for large data sets. We introduce a new method based on the sliced inverse regression approach, called cluster-based regularized sliced inverse regression. Our method not only retains the merit of considering both response and predictor information, but also enhances the capability of handling highly correlated variables. It is justified under certain linearity conditions. An empirical application on a macroeconomic data set shows that our method outperforms the dynamic factor model and other shrinkage methods.
Zhishen Ye, Jie Yang (2013)
We propose a new method for dimension reduction in regression using the first two inverse moments. We develop corresponding weighted chi-squared tests for the dimension of the regression. The proposed method considers linear combinations of Sliced Inverse Regression (SIR) and a method using a new candidate matrix designed to recover the entire inverse second-moment subspace. The optimal combination may be selected based on the p-values derived from the dimension tests. Theoretically, the proposed method, as well as Sliced Average Variance Estimation (SAVE), is more capable of recovering the complete central dimension reduction subspace than SIR and Principal Hessian Directions (pHd). It can therefore substitute for SIR, pHd, SAVE, or any linear combination of them at a theoretical level. A simulation study indicates that the proposed method may have consistently greater power than SIR, pHd, and SAVE.
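For reference, the basic SIR estimator that the papers above extend fits in a few lines. This is the textbook recipe (Li, 1991) with illustrative parameter choices, not any of the cited papers' implementations:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, d=2):
    """Textbook sliced inverse regression: standardize X, slice y,
    and eigen-decompose the covariance of the slice means."""
    n, p = X.shape
    mu, cov = X.mean(0), np.cov(X, rowvar=False)
    # whiten the predictors (assumes cov is well-conditioned)
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = (X - mu) @ L
    # slice the response into roughly equal-count slices
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(0)
        M += (len(idx) / n) * np.outer(m, m)   # weighted slice-mean outer product
    # leading eigenvectors of the kernel matrix M span the estimated subspace
    vals, vecs = np.linalg.eigh(M)
    B = L @ vecs[:, ::-1][:, :d]               # map back to the original scale
    return B
```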
Deterministic interpolation and quadrature methods are often unsuitable for Bayesian inverse problems that depend on computationally expensive forward mathematical models. While interpolation may give precise posterior approximations, deterministic quadrature is usually unable to efficiently investigate an informative, and thus concentrated, likelihood. This leads to a large number of required expensive evaluations of the mathematical model. To overcome these challenges, we formulate and test a multilevel adaptive sparse Leja algorithm. At each level, adaptive sparse grid interpolation and quadrature are used to approximate the posterior and to perform all quadrature operations, respectively. Specifically, our algorithm uses coarse discretizations of the underlying mathematical model to investigate the parameter space and to identify areas of high posterior probability. Adaptive sparse grid algorithms are then used to place points in these areas, and to ignore other areas of small posterior probability. The points are weighted Leja points. As the model discretization is coarse, the construction of the sparse grid is computationally efficient. On this sparse grid, the posterior measure can be approximated accurately with few expensive, fine model discretizations. The efficiency of the algorithm can be enhanced further by exploiting more than two discretization levels. We apply the proposed multilevel adaptive sparse Leja algorithm in numerical experiments involving elliptic inverse problems in 2D and 3D space, comparing it with Markov chain Monte Carlo sampling and a standard multilevel approximation.
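The weighted Leja points mentioned above come from a simple greedy recursion: each new point maximizes a weight times the product of distances to the points already chosen. A one-dimensional sketch on a candidate grid follows (illustrative only; conventions differ on whether the integration weight or its square root is used, and the paper's adaptive sparse-grid machinery is far more involved):

```python
import numpy as np

def weighted_leja(n_points, weight=lambda x: np.exp(-x**2 / 2),
                  grid=np.linspace(-4, 4, 4001)):
    """Greedy construction of weighted Leja points on a 1-D candidate
    grid: each new point maximizes weight(x) * prod |x - chosen points|.
    Production codes replace the grid search with a proper optimizer."""
    pts = [grid[np.argmax(weight(grid))]]        # start at the weight's mode
    for _ in range(n_points - 1):
        obj = weight(grid) * np.prod(
            np.abs(grid[:, None] - np.array(pts)[None, :]), axis=1)
        pts.append(grid[np.argmax(obj)])
    return np.array(pts)
```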