
Hierarchical sparse Cholesky decomposition with applications to high-dimensional spatio-temporal filtering

Posted by Marcin Jurek
Publication date: 2020
Research field: Mathematical Statistics
Paper language: English





Spatial statistics often involves Cholesky decomposition of covariance matrices. To ensure scalability to high dimensions, several recent approximations have assumed a sparse Cholesky factor of the precision matrix. We propose a hierarchical Vecchia approximation, whose conditional-independence assumptions imply sparsity in the Cholesky factors of both the precision and the covariance matrix. This remarkable property is crucial for applications to high-dimensional spatio-temporal filtering. We present a fast and simple algorithm to compute our hierarchical Vecchia approximation, and we provide extensions to non-linear data assimilation with non-Gaussian data based on the Laplace approximation. In several numerical comparisons, our methods strongly outperformed alternative approaches.
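As a rough illustration of the general Vecchia idea (not the paper's hierarchical variant), the sketch below builds a sparse upper-triangular factor U of the precision matrix by conditioning each point on its m nearest previously ordered neighbors, so that U U^T approximates the precision. The exponential kernel, the lexicographic ordering, and the neighbor count m are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def exponential_cov(X, Y, range_=0.3):
    """Exponential covariance kernel (assumed here for illustration)."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-d / range_)

def vecchia_precision_cholesky(locs, m=5):
    """Generic Vecchia construction: returns an upper-triangular U with
    at most m+1 nonzeros per column such that U @ U.T approximates the
    precision matrix of a GP observed at `locs`."""
    n = locs.shape[0]
    U = np.zeros((n, n))  # dense container; the pattern is sparse
    for i in range(n):
        # condition point i on (at most) m nearest previously ordered points
        past = np.arange(i)
        if i > 0:
            d = np.linalg.norm(locs[past] - locs[i], axis=1)
            nn = past[np.argsort(d)[:m]]
        else:
            nn = np.array([], dtype=int)
        idx = np.append(nn, i)
        C = exponential_cov(locs[idx], locs[idx])
        # regress y_i on its conditioning set to get the conditional variance
        Cnn, ci = C[:-1, :-1], C[:-1, -1]
        b = np.linalg.solve(Cnn, ci) if len(nn) else np.array([])
        cond_var = C[-1, -1] - ci @ b
        # fill column i of the upper-triangular precision factor
        U[i, i] = 1.0 / np.sqrt(cond_var)
        U[nn, i] = -b * U[i, i]
    return U

# example: U = vecchia_precision_cholesky(np.random.rand(200, 2), m=5)
```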




Read also

Riemann manifold Hamiltonian Monte Carlo (RMHMC) has the potential to produce high-quality Markov chain Monte Carlo output even for very challenging target distributions. To this end, a symmetric positive definite scaling matrix for RMHMC is proposed, which derives, via a modified Cholesky factorization, from the potentially indefinite negative Hessian of the target log-density. The methodology is able to exploit the sparsity of the Hessian, stemming from conditional-independence modeling assumptions, and thus admits fast implementation of RMHMC even for high-dimensional target distributions. Moreover, the methodology can exploit log-concave conditional target densities, often encountered in Bayesian hierarchical models, for faster sampling and more straightforward tuning. The proposed methodology is compared to alternatives for some challenging targets, and is illustrated by applying a state space model to real data.
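A minimal sketch of one standard way to get a positive definite scaling matrix from a possibly indefinite negative Hessian: Cholesky with an increasing diagonal shift (Cholesky with added multiple of the identity). The paper's modified Cholesky factorization also exploits Hessian sparsity, which this toy version does not; the shift schedule and tolerances are assumptions.

```python
import numpy as np

def pd_scaling_matrix(neg_hessian, beta=1e-6, max_tries=30):
    """Return an SPD modification of a possibly indefinite symmetric
    matrix by shifting the diagonal until Cholesky succeeds."""
    A = 0.5 * (neg_hessian + neg_hessian.T)  # symmetrize defensively
    # initial shift: zero if the diagonal already looks positive
    tau = 0.0 if np.all(np.diag(A) > 0) else beta - np.min(np.diag(A))
    I = np.eye(A.shape[0])
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(A + tau * I)
            return L @ L.T            # SPD scaling matrix G = L L^T
        except np.linalg.LinAlgError:
            tau = max(2 * tau, beta)  # increase the shift and retry
    raise np.linalg.LinAlgError("could not make matrix positive definite")
```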
The smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP) penalized regression models are two important and widely used nonconvex sparse learning tools that can handle variable selection and parameter estimation simultaneously, and thus have potential applications in various fields such as mining biological data in high-throughput biomedical studies. Theoretically, these two models enjoy the oracle property even in high-dimensional settings, where the number of predictors $p$ may be much larger than the number of observations $n$. However, numerically, it is quite challenging to develop fast and stable algorithms due to their non-convexity and non-smoothness. In this paper we develop a fast algorithm for SCAD and MCP penalized learning problems. First, we show that the global minimizers of both models are roots of nonsmooth equations. Then, a semi-smooth Newton (SSN) algorithm is employed to solve the equations. We prove that the SSN algorithm converges locally and superlinearly to the Karush-Kuhn-Tucker (KKT) points. Computational complexity analysis shows that the cost of the SSN algorithm per iteration is $O(np)$. Combined with the warm-start technique, the SSN algorithm can be very efficient and accurate. Simulation studies and a real data example suggest that our SSN algorithm, with solution accuracy comparable to the coordinate descent (CD) and difference-of-convex (DC) proximal Newton algorithms, is more computationally efficient.
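For intuition about the penalties themselves (not the semi-smooth Newton solver the paper develops), here are the well-known univariate SCAD and MCP thresholding operators; the defaults a = 3.7 and gamma = 3.0 are conventional choices from the literature, not values taken from this paper.

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """Univariate SCAD thresholding operator (Fan & Li, 2001):
    soft-thresholds small values, blends in the middle, leaves
    large values unpenalized (the source of the oracle property)."""
    az = np.abs(z)
    return np.where(az <= 2 * lam,
                    np.sign(z) * np.maximum(az - lam, 0.0),
                    np.where(az <= a * lam,
                             ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),
                             z))

def mcp_threshold(z, lam, gamma=3.0):
    """Univariate MCP thresholding operator (Zhang, 2010)."""
    az = np.abs(z)
    soft = np.sign(z) * np.maximum(az - lam, 0.0)
    return np.where(az <= gamma * lam, soft / (1.0 - 1.0 / gamma), z)
```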
Lei Gong, James M. Flegal (2014)
A current challenge for many Bayesian analyses is determining when to terminate high-dimensional Markov chain Monte Carlo simulations. To this end, we propose using an automated sequential stopping procedure that terminates the simulation when the computational uncertainty is small relative to the posterior uncertainty. Such a stopping rule has previously been shown to work well in settings with posteriors of moderate dimension. In this paper, we illustrate its utility in high-dimensional simulations while overcoming some current computational issues. Further, we investigate the relationship between the stopping rule and effective sample size. As examples, we consider two complex Bayesian analyses on spatially and temporally correlated datasets. The first involves a dynamic space-time model on weather station data and the second a spatial variable selection model on fMRI brain imaging data. Our results show the sequential stopping rule is easy to implement, provides uncertainty estimates, and performs well in high-dimensional settings.
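A minimal sketch of a relative fixed-width stopping rule in this spirit: terminate once the Monte Carlo standard error of the posterior mean (here estimated via non-overlapping batch means) is small relative to the posterior standard deviation. The thresholds eps, z, and n_min are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def batch_means_se(x):
    """Monte Carlo standard error of the mean via non-overlapping
    batch means, with batch size ~ sqrt(n)."""
    n = len(x)
    b = int(np.floor(np.sqrt(n)))   # batch size
    a = n // b                      # number of batches
    means = x[: a * b].reshape(a, b).mean(axis=1)
    return np.sqrt(b * np.var(means, ddof=1) / n)

def should_stop(chain, eps=0.05, z=1.96, n_min=1000):
    """Stop when the CI half-width of the MC error is a small
    fraction of the posterior spread (illustrative sketch)."""
    chain = np.asarray(chain)
    if len(chain) < n_min:
        return False
    half_width = z * batch_means_se(chain)
    return half_width < eps * np.std(chain, ddof=1)
```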
Anru Zhang, Rungang Han (2018)
In this article, we consider the sparse tensor singular value decomposition, which aims for dimension reduction on high-dimensional high-order data with certain sparsity structure. A method named Sparse Tensor Alternating Thresholding for Singular Value Decomposition (STAT-SVD) is proposed. The proposed procedure features a novel double projection & thresholding scheme, which provides a sharp criterion for thresholding in each iteration. Compared with the regular tensor SVD model, STAT-SVD permits more robust estimation under weaker assumptions. Both the upper and lower bounds for estimation accuracy are developed. The proposed procedure is shown to be minimax rate-optimal in a general class of situations. Simulation studies show that STAT-SVD performs well under a variety of configurations. We also illustrate the merits of the proposed procedure on a longitudinal tensor dataset on European country mortality rates.
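To make the alternating-thresholding idea concrete, here is a matrix-case caricature: a rank-1 sparse SVD via alternating power iterations with hard thresholding. This is not STAT-SVD's double projection & thresholding scheme for tensors; the relative thresholding rule and iteration count are assumptions for illustration only.

```python
import numpy as np

def sparse_rank1_svd(A, lam=0.2, n_iter=50):
    """Rank-1 sparse SVD sketch: alternate power iterations for the
    singular vectors, hard-thresholding small entries each pass."""
    u = None
    v = np.linalg.svd(A)[2][0]      # warm start from the dense SVD
    for _ in range(n_iter):
        u = A @ v
        u[np.abs(u) < lam * np.abs(u).max()] = 0.0  # hard-threshold
        u /= np.linalg.norm(u) + 1e-12
        v = A.T @ u
        v[np.abs(v) < lam * np.abs(v).max()] = 0.0
        v /= np.linalg.norm(v) + 1e-12
    s = u @ A @ v                    # leading sparse singular value
    return u, s, v
```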
Algorithms involving Gaussian processes or determinantal point processes typically require computing the determinant of a kernel matrix. Frequently, the latter is computed from the Cholesky decomposition, an algorithm of cubic complexity in the size of the matrix. We show that, under mild assumptions, it is possible to estimate the determinant from only a sub-matrix, with a probabilistic guarantee on the relative error. We present an augmentation of the Cholesky decomposition that stops under certain conditions before processing the whole matrix. Experiments demonstrate that this can save a considerable amount of time while having an overhead of less than $5\%$ when not stopping early. More generally, we present a probabilistic stopping strategy for the approximation of a sum of known length where addends are revealed sequentially. We do not assume independence between addends, only that they are bounded from below and decrease in conditional expectation.
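A crude sketch of the underlying mechanics: the log-determinant is the running sum of the addends 2 log L[j,j] produced by the Cholesky diagonal, so the factorization can return early once the projected tail is negligible. The stopping test below is a naive placeholder, not the paper's probabilistic guarantee.

```python
import numpy as np

def logdet_stopped_cholesky(K, rel_tol=1e-3):
    """Estimate log det(K) from an in-place Cholesky run that may stop
    early, extrapolating the remaining addends by the last one."""
    n = K.shape[0]
    L = np.array(K, dtype=float)
    logdet = 0.0
    for j in range(n):
        # standard left-looking column update of the Cholesky factor
        L[j, j] = np.sqrt(L[j, j] - L[j, :j] @ L[j, :j])
        addend = 2.0 * np.log(L[j, j])
        logdet += addend
        # naive stopping test: projected tail is relatively negligible
        if j > 10 and abs((n - 1 - j) * addend) < rel_tol * abs(logdet):
            return logdet + (n - 1 - j) * addend  # extrapolated tail
        L[j + 1:, j] = (L[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
    return logdet
```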