We consider the problem of selective inference after solving a (randomized) convex statistical learning program in the form of a penalized or constrained loss function. Our first main result is a change-of-measure formula that describes many conditional sampling problems of interest in selective inference. Our approach is model-agnostic in the sense that users may provide their own statistical model for inference; we simply describe the modification of each distribution in the model after selection. Our second main result describes the geometric structure of the Jacobian appearing in the change of measure, drawing connections to curvature measures appearing in Weyl-Steiner volume-of-tubes formulae. This Jacobian is necessary for problems in which the convex penalty is not polyhedral, the prototypical examples being the group LASSO and the nuclear norm. We derive explicit formulae for the Jacobian of the group LASSO. To illustrate the generality of our method, we consider many examples throughout, varying both the penalty or constraint in the statistical learning problem and the loss function, and we also consider selective inference after solving multiple statistical learning programs. Penalties and selection procedures considered include the LASSO, forward stepwise, stagewise algorithms, marginal screening, and the generalized LASSO. Loss functions considered include squared-error, logistic, and log-det for covariance matrix estimation. Having described, through our first two results, the appropriate distribution we wish to sample from, we outline a framework for sampling with a projected Langevin sampler in the (commonly occurring) case that the distribution is log-concave.
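To make the last step concrete, here is a minimal sketch of a projected Langevin sampler, assuming a log-concave target pi(x) proportional to exp(-f(x)) supported on a convex set K with a computable Euclidean projection. The target, step size, and projection below are illustrative placeholders (a standard Gaussian restricted to the nonnegative orthant, a simple stand-in for a selection event), not the selective-inference distribution constructed in the paper.

```python
# A minimal sketch of projected Langevin sampling, assuming a log-concave
# target on a convex set K. All specifics here (target, step size, set K)
# are illustrative placeholders, not the paper's actual distribution.
import numpy as np

def projected_langevin(grad_log_pi, project, x0, step=1e-3, n_steps=10_000, rng=None):
    """Iterate x_{k+1} = Proj_K( x_k + step * grad log pi(x_k)
    + sqrt(2*step) * xi_k ), with xi_k ~ N(0, I). Returns the chain."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = project(x + step * grad_log_pi(x) + np.sqrt(2 * step) * noise)
        chain[k] = x
    return chain

# Example: N(0, I) restricted to the nonnegative orthant {x >= 0}.
grad_log_pi = lambda x: -x               # gradient of log N(0, I) density
project = lambda x: np.maximum(x, 0.0)   # Euclidean projection onto K
samples = projected_langevin(grad_log_pi, project, x0=np.ones(3))
print(samples[-5:])
```

After discarding burn-in, the chain's iterates serve as (approximate) draws from the conditional law; for a selective-inference target one would substitute the post-selection gradient and the projection onto the selection event.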
Inspired by sample splitting and the reusable holdout introduced in the field of differential privacy, we consider selective inference with a randomized response. We discuss two major advantages of using a randomized response for model selection. First, …
Selective inference is a recent research topic that tries to perform valid inference after using the data to select a reasonable statistical model. We propose MAGIC, a new method for selective inference that is general, powerful and tractable. MAGIC …
Estimation of the prediction error of a linear estimation rule is difficult if the data analyst also uses the data to select a set of variables and construct the estimation rule using only the selected variables. In this work, we propose an asymptotically …
We consider the problem of choosing the best of $n$ samples, out of a large random pool, when the sampling of each member is associated with a certain cost. The quality (worth) of the best sample clearly increases with $n$, but so do the sampling costs. …
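The trade-off just stated can be made concrete with a small numerical sketch, under illustrative assumptions that are not from the paper: sample qualities are i.i.d. standard normal and each draw costs a fixed amount c, so the net worth of sampling $n$ members is the expected best quality minus $cn$.

```python
# A minimal sketch of the quality-vs-cost trade-off, assuming i.i.d.
# standard-normal qualities and a fixed per-sample cost c (both are
# illustrative assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
c = 0.05                                   # assumed per-sample cost
ns = np.arange(1, 201)

# Monte Carlo estimate of E[max of n samples] for each n.
pool = rng.standard_normal((5000, ns[-1]))
exp_best = np.array([pool[:, :n].max(axis=1).mean() for n in ns])

net = exp_best - c * ns                    # net worth = quality - sampling cost
print("optimal n ≈", ns[np.argmax(net)])   # quality grows ~sqrt(2 log n), cost linearly
```

Since the expected maximum of $n$ standard normals grows only like $\sqrt{2\log n}$ while the cost grows linearly, the net worth is eventually decreasing and an interior optimal $n$ exists.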
We introduce a new sufficient statistic for the population parameter vector by allowing the sampling design to first be selected at random from a set of candidate sampling designs. In contrast to the traditional approach in survey sampling, we …