
Limit distribution theory for maximum likelihood estimation of a log-concave density

Added by Kaspar Rufibach
Publication date: 2009
Language: English





We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, that is, a density of the form $f_0=\exp\varphi_0$ where $\varphi_0$ is a concave function on $\mathbb{R}$. The pointwise limiting distributions depend on the second and third derivatives at 0 of $H_k$, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of $\varphi_0=\log f_0$ at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode $M(f_0)$ and prove a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both the rate of convergence and the dependence of constants on population values.
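To fix ideas, the pointwise limits have the following schematic shape (a hedged reading of the abstract, with constants $c_1(x_0)$, $c_2(x_0)$ depending on $f_0$ and left unspecified here; the paper gives them explicitly). If $\varphi_0^{(j)}(x_0)=0$ for $2\le j<k$ and $\varphi_0^{(k)}(x_0)\neq 0$, then

$$n^{k/(2k+1)}\bigl(\hat f_n(x_0)-f_0(x_0)\bigr) \to_d c_1(x_0)\,H_k''(0), \qquad n^{(k-1)/(2k+1)}\bigl(\hat f_n'(x_0)-f_0'(x_0)\bigr) \to_d c_2(x_0)\,H_k'''(0),$$

so the smooth case $k=2$ gives the familiar rates $n^{2/5}$ for the density and $n^{1/5}$ for its derivative.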



Related research

We present theoretical properties of the log-concave maximum likelihood estimator of a density based on an independent and identically distributed sample in $\mathbb{R}^d$. Our study covers both the case where the true underlying density is log-concave and the case where this model is misspecified. We begin by showing that for a sequence of log-concave densities, convergence in distribution implies much stronger types of convergence -- in particular, it implies convergence in Hellinger distance and even in certain exponentially weighted total variation norms. In our main result, we prove the existence and uniqueness of a log-concave density that minimises the Kullback--Leibler divergence from the true density over the class of all log-concave densities, and we also show that the log-concave maximum likelihood estimator converges almost surely in these exponentially weighted total variation norms to this minimiser. In the case of a correctly specified model, this demonstrates a strong type of consistency for the estimator; in a misspecified model, it shows that the estimator converges to the log-concave density that is closest in the Kullback--Leibler sense to the true density.
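In symbols (our notation, not the paper's), the minimiser in the main result is the Kullback--Leibler projection

$$f^* = \operatorname*{arg\,min}_{f\ \text{log-concave}} \int f_0 \log\frac{f_0}{f},$$

whose existence and uniqueness the theorem asserts, and to which the log-concave MLE converges almost surely in the exponentially weighted total variation norms.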
We study nonparametric maximum likelihood estimation of a log-concave probability density and of its distribution and hazard function. Some general properties of these estimators are derived from two characterizations. It is shown that the rate of convergence with respect to the supremum norm on a compact interval for the density and hazard rate estimators is at least $(\log(n)/n)^{1/3}$ and typically $(\log(n)/n)^{2/5}$, whereas the difference between the empirical and estimated distribution functions vanishes with rate $o_{\mathrm{p}}(n^{-1/2})$ under certain regularity assumptions.
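Written out (our notation), the supremum-norm statement for the density estimator $\hat f_n$ on a compact interval $K$ reads

$$\sup_{x\in K}\,|\hat f_n(x)-f_0(x)| = O_{\mathrm{p}}\bigl((\log(n)/n)^{1/3}\bigr)\ \text{in general},\qquad O_{\mathrm{p}}\bigl((\log(n)/n)^{2/5}\bigr)\ \text{typically},$$

with the analogous bound for the hazard rate estimator, and $\sup_x |\hat F_n(x)-\mathbb{F}_n(x)| = o_{\mathrm{p}}(n^{-1/2})$ for the estimated versus the empirical distribution function.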
Let $X_1, \dots, X_n$ be independent and identically distributed random vectors with a log-concave (Lebesgue) density $f$. We first prove that, with probability one, there exists a unique maximum likelihood estimator of $f$. The use of this estimator is attractive because, unlike kernel density estimation, the method is fully automatic, with no smoothing parameters to choose. Although the existence proof is non-constructive, we are able to reformulate the issue of computation in terms of a non-differentiable convex optimisation problem, and thus combine techniques of computational geometry with Shor's $r$-algorithm to produce a sequence that converges to the maximum likelihood estimate. For the moderate or large sample sizes in our simulations, the maximum likelihood estimator is shown to provide an improvement in performance compared with kernel-based methods, even when we allow the use of a theoretical, optimal fixed bandwidth for the kernel estimator that would not be available in practice. We also present a real-data clustering example, which shows that our methodology can be used in conjunction with the Expectation--Maximisation (EM) algorithm to fit finite mixtures of log-concave densities. An R version of the algorithm is available in the package LogConcDEAD -- Log-Concave Density Estimation in Arbitrary Dimensions.
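One standard way to pose the underlying optimisation, sketched here in terms of the well-known "adjusted" likelihood criterion (not necessarily the paper's exact parametrisation): writing $\hat f_n = e^{\hat\varphi_n}$, the estimator solves

$$\hat\varphi_n = \operatorname*{arg\,max}_{\varphi\ \text{concave}} \Bigl\{\frac{1}{n}\sum_{i=1}^n \varphi(X_i) - \int_{\mathbb{R}^d} e^{\varphi(x)}\,dx\Bigr\},$$

a criterion whose maximiser automatically integrates to one; restricting $\varphi$ to functions that are piecewise affine over a triangulation of the data turns this into the finite-dimensional, non-differentiable convex problem to which Shor's $r$-algorithm is applied.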
Let $\{P_{\theta} : \theta \in \mathbb{R}^d\}$ be a log-concave location family with $P_{\theta}(dx)=e^{-V(x-\theta)}\,dx$, where $V:\mathbb{R}^d \mapsto \mathbb{R}$ is a known convex function, and let $X_1,\dots,X_n$ be i.i.d. random vectors sampled from the distribution $P_{\theta}$ with an unknown location parameter $\theta$. The goal is to estimate the value $f(\theta)$ of a smooth functional $f:\mathbb{R}^d \mapsto \mathbb{R}$ based on the observations $X_1,\dots,X_n$. In the case when $V$ is sufficiently smooth and $f$ is a functional from a ball in a Hölder space $C^s$, we develop estimators of $f(\theta)$ with minimax optimal error rates measured by the $L_2(\mathbb{P}_{\theta})$-distance as well as by more general Orlicz norm distances. Moreover, we show that if $d\leq n^{\alpha}$ and $s>\frac{1}{1-\alpha}$, then the resulting estimators are asymptotically efficient in the Hájek-Le Cam sense with the convergence rate $\sqrt{n}$. This generalizes earlier results on the estimation of smooth functionals in Gaussian shift models. The estimators have the form $f_k(\hat\theta)$, where $\hat\theta$ is the maximum likelihood estimator and $f_k:\mathbb{R}^d \mapsto \mathbb{R}$ (with $k$ depending on $s$) are functionals defined in terms of $f$ and designed to provide a higher-order bias reduction in the functional estimation problem. The method of bias reduction is based on an iterative parametric bootstrap and has been used successfully before in the case of Gaussian models.
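A common form of iterative bootstrap bias reduction, stated as an assumption about the general recipe rather than the paper's exact definitions: let $\mathcal{T}g(\theta)=\mathbb{E}_{\theta}\,g(\hat\theta)$ and $\mathcal{B}=\mathcal{T}-\mathcal{I}$ (the bias operator); then

$$f_k = \sum_{j=0}^{k} (-1)^j\,\mathcal{B}^j f,$$

for which a telescoping computation gives $\mathbb{E}_{\theta} f_k(\hat\theta) - f(\theta) = (-1)^k (\mathcal{B}^{k+1} f)(\theta)$, a higher-order bias whenever $\mathcal{B}$ is suitably contractive; in practice $\mathcal{T}$ is approximated by parametric bootstrap resampling from $P_{\hat\theta}$.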
We study the problem of computing the maximum likelihood estimator (MLE) of multivariate log-concave densities. Our main result is the first computationally efficient algorithm for this problem. In more detail, we give an algorithm that, given a set of $n$ points in $\mathbb{R}^d$ and an accuracy parameter $\epsilon>0$, runs in time $\mathrm{poly}(n, d, 1/\epsilon)$ and outputs a log-concave density that with high probability maximizes the log-likelihood up to an additive $\epsilon$. Our approach relies on a natural convex optimization formulation of the underlying problem that can be efficiently solved by a projected stochastic subgradient method. The main challenge lies in showing that a stochastic subgradient of our objective function can be efficiently approximated. To achieve this, we rely on structural results on the approximation of log-concave densities and leverage classical algorithmic tools for volume approximation of convex bodies and uniform sampling from convex sets.
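The optimisation engine here is a projected stochastic subgradient method. The following is a minimal generic sketch of that iteration in Python; the subgradient oracle and projection are hypothetical placeholders standing in for the paper's actual construction (approximating a stochastic subgradient of the log-likelihood objective is the technically demanding step there).

    import numpy as np

    def projected_stochastic_subgradient(x0, subgrad_oracle, project,
                                         n_iters=1000, step0=1.0):
        """Generic projected stochastic subgradient descent (a sketch).

        subgrad_oracle(x): returns an unbiased stochastic subgradient at x
                           (hypothetical placeholder for the paper's oracle).
        project(x):        Euclidean projection onto the feasible convex set.
        """
        x = project(np.asarray(x0, dtype=float))
        avg = x.copy()
        for t in range(1, n_iters + 1):
            g = subgrad_oracle(x)
            x = project(x - (step0 / np.sqrt(t)) * g)  # diminishing step size
            avg += (x - avg) / t                       # running average of iterates
        return avg

    # Toy usage: minimise sum_i |x_i - 1| over the Euclidean unit ball.
    oracle = lambda x: np.sign(x - 1.0)                # a subgradient
    proj = lambda x: x / max(1.0, np.linalg.norm(x))   # unit-ball projection
    x_hat = projected_stochastic_subgradient(np.zeros(3), oracle, proj)

Averaging the iterates is a standard choice for convex objectives; with steps proportional to $1/\sqrt{t}$ the averaged iterate attains the usual $O(1/\sqrt{t})$ suboptimality for Lipschitz convex problems.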