Selective inference is a recent research topic that aims to perform valid inference after using the data to select a reasonable statistical model. We propose MAGIC, a new method for selective inference that is general, powerful and tractable. MAGIC performs selective inference after solving a convex optimization problem with a smooth loss and an $\ell_1$ penalty. Randomization is incorporated into the optimization problem to boost statistical power. Through reparametrization, MAGIC reduces the problem to a sampling problem with simple constraints. MAGIC applies to many $\ell_1$ penalized optimization problems, including the Lasso, logistic Lasso and neighborhood selection in graphical models, all of which we consider in this paper.
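As a rough illustration of the kind of randomized program MAGIC starts from, here is a minimal sketch of a Lasso objective perturbed by an additive Gaussian randomization term, solved by proximal gradient descent. The function names (`randomized_lasso`, `soft_threshold`) and the randomization scale are illustrative choices, not the paper's implementation; MAGIC's reparametrization and constrained sampler are not reproduced here.

```python
# Illustrative sketch: a randomized Lasso of the form
#   min_b (1/2n)||y - Xb||^2 + lam*||b||_1 - w^T b,   w ~ N(0, scale^2 I).
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1: shrink each coordinate toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def randomized_lasso(X, y, lam, scale=1.0, n_iter=2000, seed=None):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = scale * rng.standard_normal(p)               # injected randomization
    step = n / np.linalg.norm(X, 2) ** 2             # 1/L for the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):                          # ISTA iterations
        grad = X.T @ (X @ b - y) / n - w             # gradient of smooth part
        b = soft_threshold(b - step * grad, step * lam)
    return b, w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.standard_normal(100)
b_hat, w = randomized_lasso(X, y, lam=0.1, seed=1)
selected = np.flatnonzero(b_hat)   # the selected support defines the event
```

Conditioning on the selected support (and its signs) is the event that selective inference must account for; the point of the randomization is that the resulting conditional law is better behaved and yields higher power.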
Inspired by sample splitting and the reusable holdout introduced in the field of differential privacy, we consider selective inference with a randomized response. We discuss two major advantages of using a randomized response for model selection. […]
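The reusable holdout referenced above is usually presented via the Thresholdout mechanism of Dwork et al. (2015). Below is a simplified sketch; the exact noise distributions and parameter choices vary across presentations, so treat this as background rather than as the abstract's own procedure.

```python
# Simplified sketch of Thresholdout, the mechanism behind the
# "reusable holdout". Noise scales here are illustrative.
import numpy as np

def thresholdout(train_stats, holdout_stats, threshold=0.04, sigma=0.01, seed=0):
    """For each queried statistic, release the training value unless it
    deviates from the holdout value, in which case release a noised
    holdout value. This limits information leaked about the holdout."""
    rng = np.random.default_rng(seed)
    answers = []
    for t, h in zip(train_stats, holdout_stats):
        if abs(t - h) > threshold + rng.laplace(0, 2 * sigma):
            answers.append(h + rng.laplace(0, sigma))  # noised holdout answer
        else:
            answers.append(t)                          # training value is safe
    return answers
```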
Covariance matrix testing for high-dimensional data is a fundamental problem. A large class of covariance test statistics based on certain averaged spectral statistics of the sample covariance matrix is known to obey central limit theorems under the […]
We propose statistical inferential procedures for panel data models with interactive fixed effects in a kernel ridge regression framework. Compared with traditional sieve methods, our method is automatic in the sense that it does not require the choice […]
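For orientation, here is a minimal sketch of plain kernel ridge regression, the base framework the abstract builds on; the panel-data estimator with interactive fixed effects is not reproduced, and the function names and kernel choice are illustrative.

```python
# Minimal kernel ridge regression: dual coefficients alpha solve
# (K + n*lam*I) alpha = y, and predictions are K(X_new, X_train) @ alpha.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-2, gamma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
alpha = krr_fit(X, y, lam=1e-2, gamma=0.5)
y_hat = krr_predict(X, alpha, X, gamma=0.5)
```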
Conditional selective inference (SI) has been actively studied as a new statistical inference framework for data-driven hypotheses. The basic idea of conditional SI is to make inferences conditional on the selection event characterized by a set of linear […]
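When the selection event is a set of linear inequalities $\{Ay \le b\}$ on Gaussian data $y$, the classical polyhedral approach of Lee et al. (2016) reduces inference on a linear target $\eta^\top y$ to a truncated Gaussian computation. The sketch below, with hypothetical function names, shows that reduction; it is standard background for conditional SI, not the specific method of the abstract above. Here `sd` is $\sigma\|\eta\|$ when $y \sim N(\mu, \sigma^2 I)$.

```python
# Polyhedral lemma: holding the part of y orthogonal to eta fixed,
# {A y <= b} holds iff eta^T y lies in an interval [v_lo, v_hi].
import numpy as np
from scipy.stats import norm

def truncation_interval(A, b, y, eta):
    c = eta / (eta @ eta)            # direction carrying eta^T y (iid noise)
    z = y - c * (eta @ y)            # component independent of eta^T y
    alpha, resid = A @ c, b - A @ z
    lo = (resid / alpha)[alpha < 0]  # rows with alpha < 0 give lower bounds
    hi = (resid / alpha)[alpha > 0]  # rows with alpha > 0 give upper bounds
    return (lo.max() if lo.size else -np.inf,
            hi.min() if hi.size else np.inf)

def truncated_gauss_pvalue(stat, v_lo, v_hi, sd):
    """Two-sided p-value for stat ~ N(0, sd^2) truncated to [v_lo, v_hi]."""
    mass = norm.cdf(v_hi / sd) - norm.cdf(v_lo / sd)
    cdf = (norm.cdf(stat / sd) - norm.cdf(v_lo / sd)) / mass
    return 2 * min(cdf, 1 - cdf)
```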
We consider the problem of selective inference after solving a (randomized) convex statistical learning program in the form of a penalized or constrained loss function. Our first main result is a change-of-measure formula that describes many conditional […]
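To make the change-of-measure idea concrete, here is a schematic version for an $\ell_1$-penalized program with additive randomization; the paper's statement is more general, so the display below is an assumption-laden sketch rather than the paper's formula. For data $D$ with density $f$ and randomization $\omega \sim g$, solving
$$\hat\beta \in \arg\min_\beta \; \ell(\beta; D) + \lambda \|\beta\|_1 - \omega^\top \beta$$
gives the KKT condition $\omega = \nabla \ell(\hat\beta; D) + \lambda u$ with $u \in \partial \|\hat\beta\|_1$. Changing variables from $\omega$ to the pair $(\hat\beta, u)$, the joint density of $(D, \hat\beta, u)$ is
$$f(D)\, g\bigl(\nabla \ell(\hat\beta; D) + \lambda u\bigr)\, |J(\hat\beta, D)|$$
up to the Jacobian factor $|J|$, and conditioning on a selected model restricts $(\hat\beta, u)$ to simple sign and interval constraints, which is what makes sampling from the conditional law tractable.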