
Randomization Inference beyond the Sharp Null: Bounded Null Hypotheses and Quantiles of Individual Treatment Effects

Added by Xinran Li
Publication date: 2021
Language: English





Randomization (a.k.a. permutation) inference is typically interpreted as testing Fisher's sharp null hypothesis that all effects are exactly zero. This hypothesis is often criticized as uninteresting and implausible. We show, however, that many randomization tests are also valid for a bounded null hypothesis under which effects are negative (or positive) for all units but otherwise heterogeneous. The bounded null is closely related to important concepts such as monotonicity and Pareto efficiency. Inverting tests of this hypothesis yields confidence intervals for the maximum (or minimum) individual treatment effect. We then extend randomization tests to infer other quantiles of individual effects, which can be used to infer the proportion of units with effects larger (or smaller) than any threshold. The proposed confidence intervals for all quantiles of individual effects are simultaneously valid, in the sense that no correction for multiple analyses is needed. In sum, we provide a broader justification for Fisher randomization tests, and develop exact nonparametric inference for quantiles of heterogeneous individual effects. We illustrate our methods with simulations and applications, where we find that Stephenson rank statistics often provide the most informative results.
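To make the testing procedure concrete, below is a minimal sketch (Python with NumPy and the standard library only) of a one-sided Fisher randomization test in a completely randomized experiment, using a Stephenson rank sum as the test statistic. The function names, the simple tie handling, and the choice s = 6 are illustrative assumptions, not taken from the paper; the p-value is computed under the sharp null of zero effects, and by the paper's argument such tests remain valid for the bounded null that all individual effects are non-positive.

```python
import numpy as np
from math import comb

def stephenson_rank_sum(y, z, s=6):
    """Stephenson rank sum over treated units: the unit with rank q gets weight
    C(q-1, s-1), which emphasizes the largest outcomes (ties ignored for simplicity)."""
    ranks = np.argsort(np.argsort(y)) + 1              # ranks 1..n
    weights = np.array([comb(int(q) - 1, s - 1) for q in ranks], dtype=float)
    return weights[z == 1].sum()

def randomization_p_value(y, z, statistic=stephenson_rank_sum,
                          n_draws=10_000, seed=0):
    """Monte Carlo Fisher randomization p-value for a completely randomized design:
    re-draw the assignment, recompute the statistic, and count how often it is at
    least as large as the observed value."""
    rng = np.random.default_rng(seed)
    n, n_treated = len(y), int(z.sum())
    observed = statistic(y, z)
    exceed = 0
    for _ in range(n_draws):
        z_star = np.zeros(n, dtype=int)
        z_star[rng.choice(n, size=n_treated, replace=False)] = 1
        exceed += statistic(y, z_star) >= observed
    return (exceed + 1) / (n_draws + 1)
```

In the spirit of the test inversion described in the abstract, applying such a test to shifted outcomes y - delta*z over a grid of delta values would trace out a one-sided confidence interval for the maximum individual treatment effect.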




Read More

81 - Baoluo Sun, Zhiqiang Tan (2020)
Consider the problem of estimating the local average treatment effect with an instrumental variable, where instrument unconfoundedness holds after adjusting for a set of measured covariates. Several unknown functions of the covariates need to be estimated through regression models, such as the instrument propensity score and the treatment and outcome regression models. We develop a computationally tractable method in high-dimensional settings where the numbers of regression terms are close to or larger than the sample size. Our method exploits regularized calibrated estimation, which involves Lasso penalties but carefully chosen loss functions for estimating coefficient vectors in these regression models, and then employs a doubly robust estimator for the treatment parameter through augmented inverse probability weighting. We provide rigorous theoretical analysis to show that the resulting Wald confidence intervals are valid for the treatment parameter under suitable sparsity conditions if the instrument propensity score model is correctly specified, but the treatment and outcome regression models may be misspecified. For existing high-dimensional methods, valid confidence intervals are obtained for the treatment parameter only if all three models are correctly specified. We evaluate the proposed methods via extensive simulation studies and an empirical application to estimate the returns to education.
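For orientation, the following sketch shows the general augmented-inverse-probability-weighting idea with generic L1-penalized nuisance models from scikit-learn, for an average treatment effect under unconfoundedness; it is only a rough stand-in, not the paper's regularized calibrated estimation of the local average treatment effect, and all function names and tuning choices here are assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV

def aipw_ate(X, d, y):
    """Doubly robust AIPW point estimate of an average treatment effect.
    Nuisances use off-the-shelf L1-penalized regressions; the paper instead
    uses regularized *calibrated* estimation and targets the LATE."""
    ps_model = LogisticRegressionCV(penalty="l1", solver="saga", max_iter=5000)
    ps = ps_model.fit(X, d).predict_proba(X)[:, 1]        # propensity score
    mu1 = LassoCV(cv=5).fit(X[d == 1], y[d == 1]).predict(X)  # treated outcome model
    mu0 = LassoCV(cv=5).fit(X[d == 0], y[d == 0]).predict(X)  # control outcome model
    return np.mean((mu1 - mu0)
                   + d * (y - mu1) / ps
                   - (1 - d) * (y - mu0) / (1 - ps))
```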
458 - Yichong Zhang, Xin Zheng (2018)
In this paper, we study the estimation and inference of the quantile treatment effect under covariate-adaptive randomization. We propose two estimation methods: (1) simple quantile regression and (2) inverse propensity score weighted quantile regression. For the two estimators, we derive their asymptotic distributions uniformly over a compact set of quantile indexes, and show that, when the treatment assignment rule does not achieve strong balance, the inverse propensity score weighted estimator has a smaller asymptotic variance than the simple quantile regression estimator. For inference with method (1), we show that the Wald test using a weighted bootstrap standard error under-rejects; for method (2), however, its asymptotic size equals the nominal level. We also show that, for both methods, the asymptotic size of the Wald test using a covariate-adaptive bootstrap standard error equals the nominal level. We illustrate the finite sample performance of the new estimation and inference methods using both simulated and real datasets.
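For intuition, when treatment status is the only regressor, the simple quantile regression estimator of method (1) reduces to a difference of sample quantiles between the two arms. The sketch below shows that unadjusted version only; it omits the covariate-adaptive-randomization corrections and bootstrap inference discussed in the abstract, and the function name is illustrative.

```python
import numpy as np

def simple_qte(y, d, taus=(0.25, 0.5, 0.75)):
    """Unadjusted quantile treatment effects: the tau-th sample quantile of the
    treated outcomes minus that of the controls, for each tau."""
    return {tau: float(np.quantile(y[d == 1], tau) - np.quantile(y[d == 0], tau))
            for tau in taus}
```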
In science, the most widespread statistical quantities are perhaps $p$-values. The typical advice is to reject the null hypothesis $H_0$ if the corresponding p-value is sufficiently small (usually smaller than 0.05). Many criticisms regarding p-values have arisen in the scientific literature. The main issue is that in general optimal p-values (based on likelihood ratio statistics) are not measures of evidence over the parameter space $\Theta$. Here, we propose an objective measure of evidence for very general null hypotheses that satisfies logical requirements (i.e., operations on the subsets of $\Theta$) that are not met by p-values (e.g., it is a possibility measure). We study the proposed measure in the light of the abstract belief calculus formalism and conclude that it can be used to establish objective states of belief on the subsets of $\Theta$. Based on its properties, we strongly recommend this measure as an additional summary of significance tests. At the end of the paper we give a short listing of possible open problems.
108 - Ruoqi Yu, Shulei Wang (2020)
In observational studies, balancing covariates across treatment groups is essential for estimating treatment effects, and weighting is one of the most commonly used methods for this purpose. The performance of this class of methods usually depends on strong regularity conditions for the underlying model, which might not hold in practice. In this paper, we investigate weighting methods from a functional estimation perspective and argue that the weights needed for covariate balancing could differ from those needed for treatment effect estimation under low regularity conditions. Motivated by this observation, we introduce a new weighting framework that directly targets treatment effect estimation. Unlike existing methods, the resulting estimator for a treatment effect under this new framework is a simple kernel-based $U$-statistic after applying a data-driven transformation to the observed covariates. We characterize the theoretical properties of the new estimators of treatment effects under a nonparametric setting and show that they are able to work robustly under low regularity conditions. The new framework is also applied to several numerical examples to demonstrate its practical merits.
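To convey the flavor of a kernel-based U-statistic estimator, here is a toy version with a single scalar covariate: it averages treated-minus-control outcome differences over all pairs, weighted by covariate proximity. This is a schematic stand-in only; the paper's estimator additionally applies a data-driven transformation to the covariates, and the bandwidth and kernel here are arbitrary assumptions.

```python
import numpy as np

def kernel_pairwise_effect(x, d, y, bandwidth=0.5):
    """Gaussian-kernel-weighted average of treated-vs-control outcome differences
    over all treated/control pairs (a two-sample U-statistic-style estimator)."""
    xt, yt = x[d == 1], y[d == 1]
    xc, yc = x[d == 0], y[d == 0]
    w = np.exp(-0.5 * ((xt[:, None] - xc[None, :]) / bandwidth) ** 2)
    return float(np.sum(w * (yt[:, None] - yc[None, :])) / np.sum(w))
```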
Causal effect sizes may vary among individuals, and they can even be of opposite signs. When there is serious effect heterogeneity, the population average causal effect (ACE) is not very informative. It is well known that individual causal effects (ICEs) cannot be determined in cross-sectional studies, but we show that ICEs can be retrieved from longitudinal data under certain conditions. We present a general framework for individual causality in which effect heterogeneity is viewed as an individual-specific effect modification that can be parameterized with a latent variable, the receptiveness factor. The distribution of the receptiveness factor can be retrieved, and it enables us to study the contrast of an individual's potential outcomes under stationarity assumptions. Within the framework, we study the joint distribution of an individual's potential outcomes conditioned on all individuals' factual data and, subsequently, the distribution of the cross-world causal effect (CWCE). We discuss conditions under which the latter converges to a degenerate distribution, in which case the ICE can be estimated consistently. To demonstrate the use of this general framework, we present examples in which the outcome process can be parameterized as a (generalized) linear mixed model.
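As a rough sketch of the kind of longitudinal model this framework accommodates, one could fit a linear mixed model with a random treatment slope per subject, so that the subject-specific slope plays the role of an individual-level effect modifier (a stand-in for the latent receptiveness factor). The simulated data, variable names, and model below are purely illustrative assumptions, not the paper's method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a small longitudinal dataset with individual-specific treatment effects.
rng = np.random.default_rng(1)
n_subjects, n_periods = 100, 6
subject = np.repeat(np.arange(n_subjects), n_periods)
treat = rng.integers(0, 2, size=n_subjects * n_periods)
receptiveness = rng.normal(1.0, 0.5, size=n_subjects)   # latent individual effect
y = 2.0 + receptiveness[subject] * treat + rng.normal(0, 1, size=len(treat))
df = pd.DataFrame({"y": y, "treat": treat, "subject": subject})

# Random intercept and random treatment slope per subject; the estimated
# subject-specific slopes approximate individual-level effect modification.
model = smf.mixedlm("y ~ treat", df, groups=df["subject"], re_formula="~treat")
result = model.fit()
print(result.summary())
```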
