
Identification of unknown parameters and prediction with hierarchical matrices

Posted by: Alexander Litvinenko
Publication date: 2021
Research language: English





Statistical analysis of massive datasets often entails expensive linear algebra operations on large dense matrices. Typical tasks are the estimation of the unknown parameters of the underlying statistical model and the prediction of missing values. We developed the H-MLE procedure, which solves these tasks. The unknown parameters can be estimated by maximizing the joint Gaussian log-likelihood function, which depends on a covariance matrix. To reduce the high computational cost, we approximate the covariance matrix in the hierarchical (H-) matrix format. The H-matrix technique allows us to work with inhomogeneous covariance matrices and almost arbitrary locations. In particular, H-matrices can be applied even when the matrices under consideration are dense and unstructured. For validation purposes, we implemented three machine learning methods: k-nearest neighbors (kNN), random forest, and a deep neural network. The best results (for the given datasets) were obtained by the kNN method with three or seven neighbors, depending on the dataset. The results computed with the H-MLE method were compared with those obtained by the kNN method. The developed H-matrix code and all datasets are freely available online.
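For concreteness, below is a minimal dense-matrix sketch of the log-likelihood maximization that H-MLE accelerates. The exponential covariance model, the synthetic locations, and the names `exp_cov` and `neg_log_likelihood` are illustrative assumptions rather than the paper's exact setup; the $O(n^3)$ dense Cholesky factorization is precisely the step that the H-matrix approximation replaces with a near-linear-cost analogue.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))                   # observation locations

def exp_cov(X, ell, sigma2, nugget=1e-6):
    """Exponential covariance C_ij = sigma2 * exp(-|x_i - x_j| / ell)."""
    return sigma2 * np.exp(-cdist(X, X) / ell) + nugget * np.eye(len(X))

# Synthetic data drawn from the model at "true" parameter values.
z = np.linalg.cholesky(exp_cov(X, ell=0.3, sigma2=1.0)) @ rng.standard_normal(len(X))

def neg_log_likelihood(theta):
    """Negative joint Gaussian log-likelihood (up to a constant).
    The O(n^3) dense Cholesky below is what the H-matrix format avoids."""
    ell, sigma2 = np.exp(theta)                  # optimize on the log scale
    L = np.linalg.cholesky(exp_cov(X, ell, sigma2))
    alpha = np.linalg.solve(L, z)                # L^{-1} z
    return 0.5 * (2.0 * np.log(np.diag(L)).sum() + alpha @ alpha)

res = minimize(neg_log_likelihood, x0=np.log([0.1, 0.5]), method="Nelder-Mead")
print("estimated (ell, sigma2):", np.exp(res.x))
```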




Read also

We introduce an ensemble Markov chain Monte Carlo approach to sampling from a probability density with known likelihood. This method upgrades an underlying Markov chain by allowing an ensemble of such chains to interact via a process in which one chain's state is cloned as another's is deleted. This effective teleportation of states can overcome issues of metastability in the underlying chain, as the scheme enjoys rapid mixing once the modes of the target density have been populated. We derive a mean-field limit for the evolution of the ensemble. We analyze the global and local convergence of this mean-field limit, showing asymptotic convergence independent of the spectral gap of the underlying Markov chain, and moreover we interpret the limiting evolution as a gradient flow. We explain how interaction can be applied selectively to a subset of state variables in order to maintain advantage on very high-dimensional problems. Finally we present the application of our methodology to Bayesian hyperparameter estimation for Gaussian process regression.
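A heavily simplified sketch of the interacting-ensemble idea, under assumptions not taken from the paper: walkers alternate local random-walk Metropolis steps with a "teleport" move that proposes another walker's jittered state and is Metropolis-corrected against a KDE-style proposal density, so the product target is preserved. This stands in for, but is not identical to, the clone/delete process the abstract describes; the bimodal target and all tuning constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):
    """Bimodal target: mixture of two well-separated unit Gaussians."""
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

N, steps, h = 50, 2000, 0.5                 # walkers, sweeps, jitter std

def kde_logpdf(y, centers, h):
    """log of a Gaussian KDE built from the other walkers' states."""
    return (np.logaddexp.reduce(-0.5 * ((y - centers) / h) ** 2)
            - np.log(len(centers)) - 0.5 * np.log(2 * np.pi * h * h))

x = rng.standard_normal(N)
for _ in range(steps):
    for i in range(N):
        if rng.random() < 0.8:              # local random-walk Metropolis
            prop = x[i] + 0.5 * rng.standard_normal()
            if np.log(rng.random()) < log_pi(prop) - log_pi(x[i]):
                x[i] = prop
        else:                               # teleport: clone a jittered walker
            others = np.delete(x, i)
            prop = rng.choice(others) + h * rng.standard_normal()
            log_acc = (log_pi(prop) + kde_logpdf(x[i], others, h)
                       - log_pi(x[i]) - kde_logpdf(prop, others, h))
            if np.log(rng.random()) < log_acc:
                x[i] = prop

print("fraction of walkers in the right-hand mode:", np.mean(x > 0))
```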
In this work we introduce a reduced-rank algorithm for Gaussian process regression. Our numerical scheme converts a Gaussian process on a user-specified interval to its Karhunen-Loève expansion, the $L^2$-optimal reduced-rank representation. Numerical evaluation of the Karhunen-Loève expansion is performed once during precomputation and involves computing a numerical eigendecomposition of an integral operator whose kernel is the covariance function of the Gaussian process. The Karhunen-Loève expansion is independent of observed data and depends only on the covariance kernel and the size of the interval on which the Gaussian process is defined. The scheme of this paper does not require translation invariance of the covariance kernel. We also introduce a class of fast algorithms for Bayesian fitting of hyperparameters, and demonstrate the performance of our algorithms with numerical experiments in one and two dimensions. Extensions to higher dimensions are mathematically straightforward but suffer from the standard curses of high dimensions.
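A minimal sketch of the described precomputation, assuming a squared-exponential kernel on $[0,1]$: discretize the covariance integral operator with a Gauss-Legendre rule (a Nyström-type scheme), eigendecompose once, and extend the leading eigenfunctions off the quadrature nodes to obtain a truncated Karhunen-Loève representation. The kernel, interval, and rank are illustrative choices, not those of the paper.

```python
import numpy as np

a, b, m, rank = 0.0, 1.0, 200, 10
nodes, w = np.polynomial.legendre.leggauss(m)    # quadrature on [-1, 1]
nodes = 0.5 * (b - a) * nodes + 0.5 * (a + b)    # map nodes to [a, b]
w = 0.5 * (b - a) * w

def kernel(s, t, ell=0.2):
    """Squared-exponential covariance kernel (an illustrative choice)."""
    return np.exp(-0.5 * ((s[:, None] - t[None, :]) / ell) ** 2)

# Symmetrized Nystrom discretization B = W^{1/2} K W^{1/2} of the operator.
K = kernel(nodes, nodes)
sw = np.sqrt(w)
lam, U = np.linalg.eigh(sw[:, None] * K * sw[None, :])
lam, U = lam[::-1][:rank], U[:, ::-1][:, :rank]  # keep largest eigenvalues

def eigfuns(t):
    """Nystrom extension of the leading eigenfunctions to arbitrary points."""
    return kernel(t, nodes) @ (sw[:, None] * U) / lam[None, :]

# Sample a reduced-rank GP path: f(t) = sum_k sqrt(lam_k) * xi_k * phi_k(t).
rng = np.random.default_rng(2)
t = np.linspace(a, b, 400)
f = eigfuns(t) @ (np.sqrt(lam) * rng.standard_normal(rank))
print("rank-%d KL keeps %.5f of the variance" % (rank, lam.sum() / np.sum(w * np.diag(K))))
```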
The stochastic partial differential equation approach to Gaussian processes (GPs) represents Matérn GP priors in terms of $n$ finite element basis functions and Gaussian coefficients with sparse precision matrix. Such representations enhance the scalability of GP regression and classification to datasets of large size $N$ by setting $n \approx N$ and exploiting sparsity. In this paper we reconsider the standard choice $n \approx N$ through an analysis of the estimation performance. Our theory implies that, under certain smoothness assumptions, one can reduce the computation and memory cost without hindering the estimation accuracy by setting $n \ll N$ in the large-$N$ asymptotics. Numerical experiments illustrate the applicability of our theory and the effect of the prior lengthscale in the pre-asymptotic regime.
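A 1-D toy, not the paper's finite element construction, illustrating why a sparse precision matrix over $n \ll N$ basis coefficients is cheap: the posterior precision stays sparse, so the posterior mean costs one sparse solve. The hat basis, the random-walk-2 penalty standing in for a Matérn-type SPDE precision, and all scales are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(3)
N, n, noise = 5000, 100, 0.1
x = rng.uniform(size=N)
y = np.sin(2 * np.pi * x) + noise * rng.standard_normal(N)

# Sparse N x n design matrix of piecewise-linear "hat" basis functions.
knots = np.linspace(0.0, 1.0, n)
h = knots[1] - knots[0]
j = np.clip(((x - knots[0]) / h).astype(int), 0, n - 2)
t = (x - knots[j]) / h
A = sp.csr_matrix((np.r_[1 - t, t],
                   (np.r_[np.arange(N), np.arange(N)], np.r_[j, j + 1])),
                  shape=(N, n))

# Banded (sparse) prior precision: squared second differences plus a ridge.
D = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
Q = 1e-5 * sp.eye(n) + 10.0 * (D.T @ D)

# The posterior precision stays sparse; one sparse solve gives the mean.
Q_post = Q + (A.T @ A) / noise ** 2
mu = spsolve(Q_post.tocsc(), (A.T @ y) / noise ** 2)
err = mu - np.sin(2 * np.pi * knots)
print("RMS error of the posterior mean at the knots: %.4f" % np.sqrt(np.mean(err ** 2)))
```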
We describe a numerical scheme for evaluating the posterior moments of Bayesian linear regression models with partial pooling of the coefficients. The principal analytical tool of the evaluation is a change of basis from coefficient space to the space of singular vectors of the matrix of predictors. After this change of basis and an analytical integration, we reduce the problem of finding moments of a density over $k + m$ dimensions, to finding moments of an $m$-dimensional density, where $k$ is the number of coefficients and $k + m$ is the dimension of the posterior. Moments can then be computed using, for example, MCMC, the trapezoid rule, or adaptive Gaussian quadrature. An evaluation of the SVD of the matrix of predictors is the dominant computational cost and is performed once during the precomputation stage. We demonstrate numerical results of the algorithm. The scheme described in this paper generalizes naturally to multilevel and multi-group hierarchical regression models where normal-normal parameters appear.
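A simplified sketch of the SVD idea for the non-hierarchical special case $\beta \sim N(0, \tau^2 I)$: one SVD of the predictor matrix, computed during precomputation, makes the posterior covariance diagonal in the rotated basis, so posterior moments for any scales $(\tau, \sigma)$ follow at negligible cost. The partial-pooling structure of the paper is deliberately simplified away here.

```python
import numpy as np

rng = np.random.default_rng(4)
N, k = 500, 20
X = rng.standard_normal((N, k))
beta_true = rng.standard_normal(k)
y = X @ beta_true + 0.5 * rng.standard_normal(N)

# Dominant cost: one SVD of the matrix of predictors, done once.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uty = U.T @ y

def posterior_moments(tau, sigma):
    """Posterior mean and marginal variances of beta given (tau, sigma).
    In the singular-vector basis the posterior covariance is diagonal,
    with entries d_i = 1 / (s_i^2 / sigma^2 + 1 / tau^2)."""
    d = 1.0 / (s ** 2 / sigma ** 2 + 1.0 / tau ** 2)
    mean = Vt.T @ (d * s * Uty / sigma ** 2)
    var = (Vt.T ** 2) @ d                 # diagonal of V diag(d) V^T
    return mean, var

mean, var = posterior_moments(tau=1.0, sigma=0.5)
print("max |posterior mean - beta_true|:", np.abs(mean - beta_true).max())
```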
Markov chain Monte Carlo methods have become increasingly popular in applied mathematics as a tool for numerical integration with respect to complex and high-dimensional distributions. However, the application of MCMC methods to heavy-tailed distributions and distributions with analytically intractable densities turns out to be rather problematic. In this paper, we propose a novel approach to the use of MCMC algorithms for distributions with analytically known Fourier transforms and, in particular, heavy-tailed distributions. The main idea of the proposed approach is to use MCMC methods in the Fourier domain to sample from a density proportional to the absolute value of the underlying characteristic function. A subsequent application of Parseval's formula leads to an efficient algorithm for the computation of integrals with respect to the underlying density. We show that the resulting Markov chain in the Fourier domain may be geometrically ergodic even in the case of heavy-tailed original distributions. We illustrate our approach by several numerical examples including multivariate elliptically contoured stable distributions.
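A 1-D toy of the Fourier-domain idea for a Cauchy target, whose characteristic function $\phi(u) = e^{-|u|}$ is known in closed form: run random-walk Metropolis on the density proportional to $|\phi(u)|$ and recover $E[g(X)]$ via Parseval's identity. The test function and tuning constants are illustrative; for this toy the normalizer $\int |\phi| = 2$ is available analytically, whereas in general it would have to be estimated.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(5)
log_absphi = lambda u: -abs(u)                  # log|phi| for the Cauchy law
ghat = lambda u: np.sqrt(2 * np.pi) * np.exp(-0.5 * u ** 2)  # FT of exp(-x^2/2)

# Random-walk Metropolis on q(u) proportional to |phi(u)| = exp(-|u|).
u, samples = 0.0, []
for _ in range(100_000):
    prop = u + rng.standard_normal()
    if np.log(rng.random()) < log_absphi(prop) - log_absphi(u):
        u = prop
    samples.append(u)
U = np.asarray(samples[10_000:])                # discard burn-in

# Parseval: E[g] = (1/2pi) * int ghat(u) phi(u) du
#                = (Z/2pi) * E_q[ghat(U) * phi(U)/|phi(U)|],
# where phi >= 0 here and the normalizer Z = int |phi| du = 2 is known.
estimate = (2.0 / (2.0 * np.pi)) * np.mean(ghat(U))
exact = np.exp(0.5) * erfc(1.0 / np.sqrt(2.0))  # closed-form E[exp(-X^2/2)]
print(f"Fourier-domain MCMC estimate: {estimate:.4f}   exact: {exact:.4f}")
```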