
Entropy of convex functions on $\mathbb{R}^d$

Added by Jon A. Wellner
Publication date: 2015
Language: English





Let $\Omega$ be a bounded closed convex set in $\mathbb{R}^d$ with non-empty interior, and let $\mathcal{C}_r(\Omega)$ be the class of convex functions on $\Omega$ with $L^r$-norm bounded by $1$. We obtain sharp estimates of the $\epsilon$-entropy of $\mathcal{C}_r(\Omega)$ under $L^p(\Omega)$ metrics, $1\le p<r\le \infty$. In particular, the results imply that the universal lower bound $\epsilon^{-d/2}$ is also an upper bound for all $d$-polytopes, and the universal upper bound of $\epsilon^{-\frac{d-1}{2}\cdot \frac{pr}{r-p}}$ for $p>\frac{dr}{d+(d-1)r}$ is attained by the closed unit ball. While a general convex body can be approximated by inscribed polytopes, the entropy rate does not carry over to the limiting body. Our results have applications to questions concerning rates of convergence of nonparametric estimators of high-dimensional shape-constrained functions.
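For readers skimming the notation (this gloss is ours, not part of the abstract), the $\epsilon$-entropy is the logarithm of the smallest $\epsilon$-covering number,

\[
H(\epsilon, \mathcal{C}_r(\Omega), L^p) = \log N(\epsilon, \mathcal{C}_r(\Omega), L^p),
\qquad
N(\epsilon, \mathcal{F}, d) = \min\left\{ n : \exists\, f_1,\dots,f_n,\ \sup_{f \in \mathcal{F}} \min_{i \le n} d(f, f_i) \le \epsilon \right\},
\]

so the results above say $H(\epsilon) \asymp \epsilon^{-d/2}$ when $\Omega$ is a $d$-polytope, while the closed unit ball attains the worst-case rate $\epsilon^{-\frac{d-1}{2}\cdot\frac{pr}{r-p}}$ for $p > \frac{dr}{d+(d-1)r}$.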



Related research


We address the problem of adaptive minimax density estimation on $\mathbb{R}^d$ with $\mathbb{L}_p$-loss on the anisotropic Nikolskii classes. We fully characterize the behavior of the minimax risk for different relationships between the regularity parameters and the norm indexes in the definitions of the functional class and of the risk. In particular, we show that there are four different regimes with respect to the behavior of the minimax risk. We develop a single estimator which is (nearly) optimal in order over the complete scale of the anisotropic Nikolskii classes. Our estimation procedure is based on a data-driven selection of an estimator from a fixed family of kernel estimators.
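As a sketch of the family being selected from (the product-kernel form below is the standard anisotropic construction; the notation is ours, and the particular selection rule named at the end is an assumption, not a quotation from the paper), a kernel estimator with bandwidth vector $h=(h_1,\dots,h_d)$ and i.i.d. observations $X_1,\dots,X_n$ is

\[
\hat f_h(x) = \frac{1}{n}\sum_{i=1}^{n} \prod_{j=1}^{d} \frac{1}{h_j}\, K\!\left(\frac{x_j - X_{i,j}}{h_j}\right),
\]

and the data-driven step chooses $\hat h$ from a fixed grid, for instance by a Lepski-type comparison of estimators computed at pairs of bandwidths.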
We discuss a general approach to handling multiple hypothesis testing in the case when a particular hypothesis states that the vector of parameters identifying the distribution of observations belongs to a convex compact set associated with the hypothesis. With our approach, this problem reduces to testing the hypotheses pairwise. Our central result is a test for a pair of hypotheses of the outlined type which, under appropriate assumptions, is provably nearly optimal. The test is yielded by a solution to a convex programming problem, so that our construction admits computationally efficient implementation. We further demonstrate that our assumptions are satisfied in several important and interesting applications. Finally, we show how our approach can be applied to a rather general detection problem encompassing several classical statistical settings such as detection of abrupt signal changes, cusp detection and multi-sensor detection.
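To make the pairwise reduction concrete (a standard Gaussian special case stated in our own notation, not the paper's exact construction): to test $\theta \in X_1$ against $\theta \in X_2$ from an observation $x \sim N(\theta, \sigma^2 I)$, with $X_1, X_2$ convex and compact, one may solve a convex program for the closest pair of points and threshold along the connecting direction,

\[
(x_1^*, x_2^*) \in \operatorname*{arg\,min}_{x_1 \in X_1,\, x_2 \in X_2} \|x_1 - x_2\|_2,
\qquad
\phi(x) = \mathbf{1}\!\left\{ \big\langle x_1^* - x_2^*,\; x - \tfrac{1}{2}(x_1^* + x_2^*) \big\rangle \ge 0 \right\},
\]

which illustrates how a near-optimal pairwise test can be produced by convex programming and hence implemented efficiently.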
We consider the problem of estimating the mean vector $\theta$ of a $d$-dimensional spherically symmetric distributed $X$ based on balanced loss functions of the forms: (i) $\omega \rho(\|\delta-\delta_{0}\|^{2}) +(1-\omega)\rho(\|\delta - \theta\|^{2})$ and (ii) $\ell\left(\omega \|\delta - \delta_{0}\|^{2} +(1-\omega)\|\delta - \theta\|^{2}\right)$, where $\delta_0$ is a target estimator, and where $\rho$ and $\ell$ are increasing and concave functions. For $d\geq 4$ and the target estimator $\delta_0(X)=X$, we provide Baranchik-type estimators that dominate $\delta_0(X)=X$ and are minimax. The findings represent extensions of those of Marchand & Strawderman (2020) in two directions: (a) from scale mixtures of normals to the spherical class of distributions with Lebesgue densities and (b) from completely monotone to concave $\rho$ and $\ell$.
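For context (this is the classical form, stated in our notation rather than the paper's), a Baranchik-type estimator shrinks $X$ toward the origin as

\[
\delta_{a,r}(X) = \left(1 - \frac{a\, r(\|X\|^{2})}{\|X\|^{2}}\right) X,
\]

where $r(\cdot)$ is nondecreasing and bounded, say $0 \le r \le 1$; taking $r \equiv 1$ and $a = d-2$ recovers the James–Stein estimator in the normal case.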
We consider the problem of selective inference after solving a (randomized) convex statistical learning program in the form of a penalized or constrained loss function. Our first main result is a change-of-measure formula that describes many conditional sampling problems of interest in selective inference. Our approach is model-agnostic in the sense that users may provide their own statistical model for inference; we simply provide the modification of each distribution in the model after the selection. Our second main result describes the geometric structure in the Jacobian appearing in the change of measure, drawing connections to curvature measures appearing in Weyl-Steiner volume-of-tubes formulae. This Jacobian is necessary for problems in which the convex penalty is not polyhedral, with the prototypical examples being the group LASSO and the nuclear norm. We derive explicit formulae for the Jacobian of the group LASSO. To illustrate the generality of our method, we consider many examples throughout, varying both the penalty or constraint in the statistical learning problem and the loss function, and also considering selective inference after solving multiple statistical learning programs. Penalties considered include the LASSO, forward stepwise, stagewise algorithms, marginal screening and the generalized LASSO. Loss functions considered include squared-error, logistic, and log-det for covariance matrix estimation. Having described the appropriate distribution we wish to sample from through our first two results, we outline a framework for sampling using a projected Langevin sampler in the (commonly occurring) case that the distribution is log-concave.
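The sampler mentioned in the last sentence is, in generic form (our summary of the standard algorithm, not the paper's exact implementation), the projected Langevin iteration for a log-concave target $\pi \propto e^{-U}$ supported on a convex set $K$:

\[
x_{k+1} = \Pi_{K}\!\left( x_{k} - \eta\, \nabla U(x_{k}) + \sqrt{2\eta}\, \xi_{k} \right),
\qquad \xi_{k} \sim N(0, I),
\]

where $\Pi_K$ denotes Euclidean projection onto $K$ and $\eta > 0$ is a step size.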
We investigate two important properties of M-estimators, namely robustness and tractability, in the linear regression setting, when the observations are contaminated by some arbitrary outliers. Specifically, robustness means the statistical property that the estimator should always be close to the underlying true parameters regardless of the distribution of the outliers, and tractability indicates the computational property that the estimator can be computed efficiently, even if the objective function of the M-estimator is non-convex. In this article, by characterizing the landscape of the empirical risk, we show that under mild conditions many M-estimators enjoy nice robustness and tractability properties simultaneously when the percentage of outliers is small. We further extend our analysis to the high-dimensional setting, where the number of parameters is greater than the number of samples, $p \gg n$, and prove that when the proportion of outliers is small, penalized M-estimators with an $L_1$ penalty enjoy robustness and tractability simultaneously. Our research provides an analytic approach to assessing the effects of outliers and tuning parameters on the robustness and tractability of some families of M-estimators. A simulation and a case study are presented to illustrate the usefulness of our theoretical results for M-estimators under Welsch's exponential squared loss.
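The loss in the final sentence is, to our understanding, the exponential squared (Welsch) loss

\[
\rho_{\gamma}(t) = 1 - \exp\!\left(-t^{2}/\gamma\right),
\]

which is bounded, so that any single outlier contributes at most $1$ to the empirical risk; this boundedness is the source of the robustness, and the resulting non-convexity is precisely what makes the tractability analysis above nontrivial.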
