One estimator, many estimands: fine-grained quantification of uncertainty using conditional inference


Abstract

Statistical uncertainty has many components, such as measurement error, temporal variation, or sampling. Not all of these sources are relevant in a given application, since practitioners may view some attributes of the observations as fixed. We study the statistical inference problem that arises when data are drawn conditionally on some attributes: the attributes are assumed to be sampled from a super-population but are viewed as fixed when quantifying uncertainty. The estimand is thus defined as a parameter of a conditional distribution. We propose methods to construct conditionally valid p-values and confidence intervals for such conditional estimands based on asymptotically linear estimators. In this setting, a given estimator is conditionally unbiased for potentially many conditional estimands, which can be seen as parameters of different populations; testing across these populations raises multiple-testing questions. We discuss simple procedures that control novel conditional error rates. In addition, we introduce a bias-correction technique that enables the transfer of estimators across conditional distributions arising from the same super-population, which can be used to infer parameters and estimators on future datasets from newly observed data. The validity and applicability of the proposed methods are demonstrated on simulated and real-world data.
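To make the setting concrete, the following is a minimal illustrative sketch (not the paper's actual procedure): a binary attribute `A` is drawn from a super-population, outcomes `Y` are drawn conditionally on `A`, and we form a normal-approximation confidence interval for the conditional mean within each attribute group. The sample mean is the simplest asymptotically linear estimator, and each group defines a different conditional estimand. The data-generating values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: attribute A is sampled from a super-population,
# but is treated as fixed when quantifying uncertainty about Y.
n = 500
A = rng.binomial(1, 0.4, size=n)        # binary attribute, viewed as fixed
Y = 2.0 + 1.5 * A + rng.normal(size=n)  # outcomes drawn conditionally on A

def conditional_mean_ci(y, alpha=0.05):
    """Normal-approximation CI for a mean: the sample mean is
    asymptotically linear with influence function y - mu."""
    mu = y.mean()
    se = y.std(ddof=1) / np.sqrt(len(y))
    z = 1.959963984540054  # Phi^{-1}(0.975) for a 95% interval
    return mu - z * se, mu + z * se

# One estimator, two conditional estimands: the mean within each
# attribute group is a parameter of a different conditional population.
for a in (0, 1):
    lo, hi = conditional_mean_ci(Y[A == a])
    print(f"A={a}: 95% CI ({lo:.2f}, {hi:.2f})")
```

Inference is conditional in the sense that the interval for each group treats the observed attribute values, and hence the group sizes, as fixed rather than as a further source of sampling variation.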
