
The stochastic partial differential equation (SPDE) approach to Gaussian processes (GPs) represents Matern GP priors in terms of $n$ finite element basis functions and Gaussian coefficients with a sparse precision matrix. Such representations enhance the scalability of GP regression and classification to datasets of large size $N$ by setting $n \approx N$ and exploiting sparsity. In this paper we reconsider the standard choice $n \approx N$ through an analysis of the estimation performance. Our theory implies that, under certain smoothness assumptions, one can reduce the computation and memory cost without hindering the estimation accuracy by setting $n \ll N$ in the large $N$ asymptotics. Numerical experiments illustrate the applicability of our theory and the effect of the prior lengthscale in the pre-asymptotic regime.
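As a rough illustration of the sparsity the abstract refers to, the sketch below assembles a 1D Matern-like precision matrix from piecewise-linear finite elements, following the general SPDE recipe (mass and stiffness matrices, with a lumped mass matrix). This is a minimal sketch, not the paper's method: the helper `spde_precision`, the choice of operator power $\alpha = 2$, and the regular-grid discretisation are all illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def spde_precision(n, kappa, domain=(0.0, 1.0)):
    """Sparse precision matrix of a 1D Matern-like GP via the SPDE/FEM
    route (hypothetical helper): discretise (kappa^2 - Laplacian) with
    n piecewise-linear finite elements on a regular grid."""
    h = (domain[1] - domain[0]) / (n - 1)
    # lumped (diagonal) mass matrix C and tridiagonal stiffness matrix G
    C = sp.diags(np.full(n, h))
    main = np.full(n, 2.0 / h)
    main[0] = main[-1] = 1.0 / h
    off = np.full(n - 1, -1.0 / h)
    G = sp.diags([off, main, off], offsets=[-1, 0, 1])
    K = kappa**2 * C + G                        # alpha = 1 operator
    Q = K @ sp.diags(1.0 / C.diagonal()) @ K    # alpha = 2 (illustrative)
    return Q.tocsc()

Q = spde_precision(200, kappa=20.0)
# Q is pentadiagonal: O(n) nonzeros instead of the n*n of a dense
# covariance, which is what makes n ~ N (or n << N) computations cheap.
print(Q.shape, Q.nnz)
```

Because the precision stays banded, Cholesky-based sampling and likelihood evaluations scale linearly in $n$ here, which is the computational lever behind choosing $n$ smaller than $N$.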
Bryon Aragam, Ruiyi Yang (2021)
We study uniform consistency in nonparametric mixture models as well as closely related mixture of regression (also known as mixed regression) models, where the regression functions are allowed to be nonparametric and the error distributions are assumed to be convolutions of a Gaussian density. We construct uniformly consistent estimators under general conditions while simultaneously highlighting several pain points in extending existing pointwise consistency results to uniform results. The resulting analysis turns out to be nontrivial, and several novel technical tools are developed along the way. In the case of mixed regression, we prove $L^1$ convergence of the regression functions while allowing the component regression functions to intersect arbitrarily often, which presents additional technical challenges. We also consider generalizations to general (i.e. non-convolutional) nonparametric mixtures.
