This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models, in the spirit of Beran and Millar (1987). The idea is to generate a random finite subset of the parameter space that automatically contains points very close to the unknown true parameter. The motivation for this procedure comes from recent work of Duembgen, Samworth and Schuhmacher (2011) on regression models with log-concave error distributions.
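
To make the search idea concrete, here is a minimal illustrative sketch (not the authors' algorithm: the Gaussian candidate cloud around an OLS pilot estimate and the residual-sum-of-squares selection criterion are assumptions made for this sketch):

    import numpy as np

    def stochastic_search_fit(X, y, n_candidates=1000, scale=1.0, rng=None):
        """Illustrative stochastic search for linear regression coefficients.

        Draws a random finite subset of the parameter space around an OLS
        pilot estimate and returns the candidate with the smallest residual
        sum of squares. Both choices are assumptions of this sketch, not
        the procedure analyzed in the paper.
        """
        rng = np.random.default_rng(rng)
        n, d = X.shape
        beta_pilot, *_ = np.linalg.lstsq(X, y, rcond=None)
        # Candidate cloud: pilot estimate plus Gaussian perturbations whose
        # spread shrinks at the parametric rate n^(-1/2), so the cloud
        # contains points close to the true parameter with high probability.
        candidates = beta_pilot + rng.normal(scale=scale / np.sqrt(n),
                                             size=(n_candidates, d))
        rss = ((y[None, :] - candidates @ X.T) ** 2).sum(axis=1)
        return candidates[np.argmin(rss)]

    # Usage on simulated data with log-concave (Laplace) errors
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.laplace(size=200)
    print(stochastic_search_fit(X, y, rng=1))
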
We study the approximation of arbitrary distributions $P$ on $d$-dimensional space by distributions with log-concave density. Approximation means minimizing a Kullback--Leibler-type functional. We show that such an approximation exists if and only if $P$ has finite first moments and is not supported by some hyperplane. Furthermore, we show that this approximation depends continuously on $P$ with respect to the Mallows distance $D_1(\cdot,\cdot)$. This result implies consistency of the maximum likelihood estimator of a log-concave density under fairly general conditions. It also allows us to prove existence and consistency of estimators in regression models with a response $Y=\mu(X)+\epsilon$, where $X$ and $\epsilon$ are independent, $\mu(\cdot)$ belongs to a certain class of regression functions, and $\epsilon$ is a random error with log-concave density and mean zero.
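
For orientation, one common way to write such a Kullback--Leibler-type criterion (a sketch; the paper's exact functional may include additional normalization terms) is to choose, among log-concave densities $f$, a maximizer of the log-likelihood functional
$$ f^*(P) \in \operatorname*{arg\,max}_{f\ \text{log-concave}} \int \log f \, dP, $$
which, whenever $P$ itself has a density $p$, amounts to minimizing the Kullback--Leibler divergence
$$ D_{\mathrm{KL}}(P \,\|\, f) = \int p(x) \log \frac{p(x)}{f(x)} \, dx. $$
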
In this paper we show that the family $\mathcal{P}_d$ of probability distributions on $\mathbb{R}^d$ with log-concave densities satisfies a strong continuity condition. In particular, it turns out that weak convergence within this family entails (i) convergence in total variation distance, (ii) convergence of arbitrary moments, and (iii) pointwise convergence of Laplace transforms. Hence the nonparametric model $\mathcal{P}_d$ has properties similar to those of parametric models such as, for instance, the family of all $d$-variate Gaussian distributions.
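
Spelled out in symbols (our notation, a formalization sketch of (i)--(iii)): if $P_n, P \in \mathcal{P}_d$ have densities $p_n, p$ and $P_n \to P$ weakly, then
$$ \int_{\mathbb{R}^d} |p_n(x) - p(x)| \, dx \to 0, \qquad \int x^\alpha \, dP_n(x) \to \int x^\alpha \, dP(x) \ \text{for every multi-index } \alpha, $$
and $\int e^{t^\top x} \, dP_n(x) \to \int e^{t^\top x} \, dP(x)$ pointwise in $t$ (for those $t$ at which the limiting Laplace transform is finite, an assumption of this sketch).
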