
An Efficient Algorithm for High-Dimensional Log-Concave Maximum Likelihood

Published by Brian Axelrod
Publication date: 2018
Paper language: English





The log-concave maximum likelihood estimator (MLE) problem asks: given a set of points $X_1, \ldots, X_n \in \mathbb{R}^d$, which log-concave density maximizes their likelihood? We present a characterization of the log-concave MLE that leads to an algorithm with runtime $\mathrm{poly}(n, d, \frac{1}{\epsilon}, r)$ for computing a log-concave distribution whose log-likelihood is at most $\epsilon$ less than that of the MLE, where $r$ is a parameter of the problem bounded by the $\ell_2$ norm of the vector of log-likelihoods that the MLE assigns to $X_1, \ldots, X_n$.
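For context, work in this line operates on a standard convex reformulation of the problem (the tent-function objective of Cule, Samworth and Stewart), stated here as background rather than quoted from the paper:

$$\sigma(y) \;=\; -\frac{1}{n}\sum_{i=1}^{n} y_i \;+\; \int_{\mathrm{conv}(X_1,\ldots,X_n)} \exp\big(\bar h_y(x)\big)\,dx,$$

where $\bar h_y$ is the least concave function with $\bar h_y(X_i) \ge y_i$ for all $i$. The function $\sigma$ is convex in $y \in \mathbb{R}^n$, and at its minimizer $y^*$ the tent density $\exp(\bar h_{y^*})$ integrates to one automatically and is the log-concave MLE.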




Read also

We consider the problem of computing the maximum likelihood multivariate log-concave distribution for a set of points. Specifically, we present an algorithm which, given $n$ points in $\mathbb{R}^d$ and an accuracy parameter $\epsilon > 0$, runs in time $\mathrm{poly}(n, d, 1/\epsilon)$, and returns a log-concave distribution which, with high probability, has the property that the likelihood of the $n$ points under the returned distribution is at most an additive $\epsilon$ less than the maximum likelihood that could be achieved via any log-concave distribution. This is the first computationally efficient (polynomial time) algorithm for this fundamental and practically important task. Our algorithm rests on a novel connection with exponential families: the maximum likelihood log-concave distribution belongs to a class of structured distributions which, while not an exponential family, locally possesses key properties of exponential families. This connection then allows the problem of computing the log-concave maximum likelihood distribution to be formulated as a convex optimization problem, and solved via an approximate first-order method. Efficiently approximating the (sub)gradients of the objective function of this optimization problem is quite delicate, and is the main technical challenge in this work.
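To see why a first-order method is natural here, recall the standard exponential-family identity:

$$f_\theta(x) = \exp\big(\theta^\top T(x) - A(\theta)\big) \quad\Longrightarrow\quad \nabla A(\theta) = \mathbb{E}_{X \sim f_\theta}\,[T(X)],$$

so gradients of the log-likelihood are moments of the current distribution and can be estimated by sampling from it. The abstract's structured distributions are not globally an exponential family, but locally retain exactly this gradient-as-moment structure.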
We study the problem of computing the maximum likelihood estimator (MLE) of multivariate log-concave densities. Our main result is the first computationally efficient algorithm for this problem. In more detail, we give an algorithm that, on input a set of $n$ points in $\mathbb{R}^d$ and an accuracy parameter $\epsilon > 0$, runs in time $\mathrm{poly}(n, d, 1/\epsilon)$ and outputs a log-concave density that with high probability maximizes the log-likelihood up to an additive $\epsilon$. Our approach relies on a natural convex optimization formulation of the underlying problem that can be efficiently solved by a projected stochastic subgradient method. The main challenge lies in showing that a stochastic subgradient of our objective function can be efficiently approximated. To achieve this, we rely on structural results on approximation of log-concave densities and leverage classical algorithmic tools on volume approximation of convex bodies and uniform sampling from convex sets.
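As a hedged sketch of the optimization template this abstract describes (not the paper's exact algorithm: the step-size schedule, the iterate averaging, and the `stoch_subgrad` and `project` callbacks are illustrative assumptions), a projected stochastic subgradient loop looks like this:

```python
import numpy as np

def projected_stochastic_subgradient(stoch_subgrad, project, y0, steps, eta0):
    """Minimal projected stochastic subgradient method with iterate averaging.

    stoch_subgrad(y) -- returns an unbiased (or low-bias) estimate of a
                        subgradient of the convex objective at y; in the
                        papers above this is obtained by sampling from the
                        current log-concave density.
    project(y)       -- Euclidean projection onto the feasible set.
    """
    y = project(np.asarray(y0, dtype=float))
    avg = y.copy()
    for t in range(1, steps + 1):
        g = stoch_subgrad(y)
        y = project(y - (eta0 / np.sqrt(t)) * g)  # diminishing step size
        avg += (y - avg) / (t + 1)                # running average of iterates:
                                                  # for subgradient methods the
                                                  # average attains the standard
                                                  # O(1/sqrt(T)) suboptimality rate
    return avg
```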
Let $X_1, \ldots, X_n$ be independent and identically distributed random vectors with a log-concave (Lebesgue) density $f$. We first prove that, with probability one, there exists a unique maximum likelihood estimator of $f$. The use of this estimator is attractive because, unlike kernel density estimation, the method is fully automatic, with no smoothing parameters to choose. Although the existence proof is non-constructive, we are able to reformulate the issue of computation in terms of a non-differentiable convex optimisation problem, and thus combine techniques of computational geometry with Shor's r-algorithm to produce a sequence that converges to the maximum likelihood estimate. For the moderate or large sample sizes in our simulations, the maximum likelihood estimator is shown to provide an improvement in performance compared with kernel-based methods, even when we allow the use of a theoretical, optimal fixed bandwidth for the kernel estimator that would not be available in practice. We also present a real data clustering example, which shows that our methodology can be used in conjunction with the Expectation--Maximisation (EM) algorithm to fit finite mixtures of log-concave densities. An R version of the algorithm is available in the package LogConcDEAD -- Log-Concave Density Estimation in Arbitrary Dimensions.
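The non-differentiable convex programme is easiest to see in one dimension, where the MLE is piecewise linear with knots at the data points and the integral of the exponential of each linear piece has a closed form. The toy sketch below illustrates that formulation with a generic constrained solver (SciPy's SLSQP); it is not the Shor r-algorithm / LogConcDEAD implementation the abstract describes:

```python
import numpy as np
from scipy.optimize import minimize

def seg_integral(y0, y1, dx):
    # closed form for the integral of exp(linear interpolation) over a segment:
    # dx * (exp(y1) - exp(y0)) / (y1 - y0), with a stable limit when y0 ~ y1
    d = y1 - y0
    if abs(d) < 1e-9:
        return dx * np.exp(0.5 * (y0 + y1))
    return dx * (np.exp(y1) - np.exp(y0)) / d

def logconcave_mle_1d(x, steps=500):
    """Toy 1-D log-concave MLE: minimize sigma(y) = -mean(y) + int exp(h_y)
    over the values y of the log-density at the sorted data points, subject
    to concavity of the piecewise-linear interpolant h_y."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    dx = np.diff(x)

    def sigma(y):
        integral = sum(seg_integral(y[j], y[j + 1], dx[j]) for j in range(n - 1))
        return -y.mean() + integral

    # concavity: successive slopes of h_y must be non-increasing
    cons = [{"type": "ineq",
             "fun": (lambda y, j=j: (y[j + 1] - y[j]) / dx[j]
                                    - (y[j + 2] - y[j + 1]) / dx[j + 1])}
            for j in range(n - 2)]
    res = minimize(sigma, np.zeros(n), constraints=cons, method="SLSQP",
                   options={"maxiter": steps})
    return x, res.x  # knots and fitted log-density values

# usage: knots, logdens = logconcave_mle_1d(np.random.standard_normal(50))
```

At the optimum the fitted density integrates to one automatically, so no separate normalisation constraint is needed.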
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, that is, a density of the form $f_0 = \exp\varphi_0$ where $\varphi_0$ is a concave function on $\mathbb{R}$. The pointwise limiting distributions depend on the second and third derivatives at 0 of $H_k$, the lower invelope of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of $\varphi_0 = \log f_0$ at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode $M(f_0)$ and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.
We introduce a notion called entropic independence for distributions $\mu$ defined on pure simplicial complexes, i.e., subsets of size $k$ of a ground set of elements. Informally, we call a background measure $\mu$ entropically independent if for any (possibly randomly chosen) set $S$, the relative entropy of an element of $S$ drawn uniformly at random carries at most an $O(1/k)$ fraction of the relative entropy of $S$, a constant multiple of its ``share of entropy''. Entropic independence is the natural analog of spectral independence, another recently established notion, if one replaces variance by entropy. In our main result, we show that $\mu$ is entropically independent exactly when a transformed version of the generating polynomial of $\mu$ can be upper bounded by its linear tangent, a property implied by concavity of the said transformation. We further show that this concavity is equivalent to spectral independence under arbitrary external fields, an assumption that also goes by the name of fractional log-concavity. Our result can be seen as a new tool to establish entropy contraction from the much simpler variance contraction inequalities. A key differentiating feature of our result is that we make no assumptions on the marginals of $\mu$ or the degrees of the underlying graphical model when $\mu$ is based on one. We leverage our results to derive tight modified log-Sobolev inequalities for multi-step down-up walks on fractionally log-concave distributions. As our main application, we establish the tight mixing time of $O(n\log n)$ for Glauber dynamics on Ising models with interaction matrix of operator norm smaller than $1$, improving upon the prior quadratic dependence on $n$.
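As a concrete reference point for that final claim, single-site Glauber dynamics on an Ising model $\pi(\sigma) \propto \exp(\frac{1}{2}\sigma^\top J \sigma + h^\top \sigma)$ repeatedly resamples one uniformly random spin from its conditional distribution; the abstract's result bounds the mixing time of this chain by $O(n\log n)$ when $\|J\|_{\mathrm{op}} < 1$. A minimal sketch, with illustrative model parameters:

```python
import numpy as np

def glauber_sweep(sigma, J, h, steps, rng):
    """Single-site Glauber dynamics for an Ising model with spins in {-1,+1},
    interaction matrix J (symmetric, zero diagonal) and external field h.
    Each step resamples one uniformly random spin from its conditional law."""
    n = len(sigma)
    for _ in range(steps):
        i = rng.integers(n)
        field = J[i] @ sigma - J[i, i] * sigma[i] + h[i]  # local field at site i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))       # P(sigma_i = +1 | rest)
        sigma[i] = 1 if rng.random() < p_plus else -1
    return sigma

# usage (illustrative): a chain on n = 100 spins with operator norm of J below 1
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
J = (A + A.T) / (4 * np.sqrt(n))
np.fill_diagonal(J, 0.0)
J /= max(1.0, 1.1 * np.linalg.norm(J, 2))  # rescale so the spectral norm is < 1
sigma = glauber_sweep(rng.choice([-1, 1], size=n), J, np.zeros(n),
                      steps=10 * n, rng=rng)
```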
