Approximate separability of symmetrically penalized least squares in high dimensions: characterization and consequences


Abstract

We show that the high-dimensional behavior of symmetrically penalized least squares with a possibly non-separable, symmetric, convex penalty in both (i) the Gaussian sequence model and (ii) the linear model with uncorrelated Gaussian designs nearly matches the behavior of least squares with an appropriately chosen separable penalty in these same models. The similarity in behavior is precisely quantified by a finite-sample concentration inequality in both cases. Our results help clarify the role non-separability can play in high-dimensional M-estimation. In particular, if the empirical distribution of the coordinates of the parameter is known, exactly or approximately, there are at most limited advantages to using non-separable, symmetric penalties over separable ones. In contrast, if the empirical distribution of the coordinates of the parameter is unknown, we argue that non-separable, symmetric penalties automatically implement an adaptive procedure, which we characterize. We also provide a partial converse which characterizes the adaptive procedures that can be implemented in this way.