
Non-Asymptotic Inference in Instrumental Variables Estimation

Added by Joel Horowitz
Publication date: 2018
Field: Economics
Language: English





This paper presents a simple method for carrying out inference in a wide variety of possibly nonlinear IV models under weak assumptions. The method is non-asymptotic in the sense that it provides a finite sample bound on the difference between the true and nominal probabilities of rejecting a correct null hypothesis. The method is a non-Studentized version of the Anderson-Rubin test but is motivated and analyzed differently. In contrast to the conventional Anderson-Rubin test, the method proposed here does not require restrictive distributional assumptions, linearity of the estimated model, or simultaneous equations. Nor does it require knowledge of whether the instruments are strong or weak. It does not require testing or estimating the strength of the instruments. The method can be applied to quantile IV models that may be nonlinear and can be used to test a parametric IV model against a nonparametric alternative. The results presented here hold in finite samples, regardless of the strength of the instruments.
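As a rough illustration of the idea (not the paper's exact statistic, normalization, or finite-sample critical value, all of which differ), an AR-type test evaluates the moment condition directly at the hypothesized parameter value, so no estimate of instrument strength is ever needed:

```python
import numpy as np

def ar_type_statistic(Z, residuals):
    """Non-Studentized AR-type statistic: a scaled quadratic form in the
    sample moment vector (1/n) sum_i Z_i u_i(theta0). Illustrative only."""
    n = Z.shape[0]
    m = Z.T @ residuals / n                            # sample moments E_n[Z u]
    return n * float(m @ m)

rng = np.random.default_rng(0)
n = 500
Z = rng.normal(size=(n, 2))                            # two instruments
X = Z @ np.array([0.8, 0.5]) + rng.normal(size=n)      # first stage
theta0 = 1.0                                           # true coefficient
Y = theta0 * X + rng.normal(size=n)

stat_null = ar_type_statistic(Z, Y - theta0 * X)       # correct null: small
stat_alt = ar_type_statistic(Z, Y - 3.0 * X)           # wrong null: large
```

Because the statistic is evaluated at the hypothesized value rather than at an estimate, its null behavior does not depend on how informative the first stage is, which is the feature a finite-sample analysis can exploit.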



Related research

81 - Baoluo Sun, Zhiqiang Tan (2020)
Consider the problem of estimating the local average treatment effect with an instrumental variable, where instrument unconfoundedness holds after adjusting for a set of measured covariates. Several unknown functions of the covariates need to be estimated through regression models, such as the instrument propensity score and the treatment and outcome regression models. We develop a computationally tractable method in high-dimensional settings where the numbers of regression terms are close to or larger than the sample size. Our method exploits regularized calibrated estimation, which involves Lasso penalties but carefully chosen loss functions for estimating coefficient vectors in these regression models, and then employs a doubly robust estimator for the treatment parameter through augmented inverse probability weighting. We provide rigorous theoretical analysis to show that the resulting Wald confidence intervals are valid for the treatment parameter under suitable sparsity conditions if the instrument propensity score model is correctly specified, even if the treatment and outcome regression models are misspecified. For existing high-dimensional methods, valid confidence intervals are obtained for the treatment parameter only if all three models are correctly specified. We evaluate the proposed methods via extensive simulation studies and an empirical application to estimate the returns to education.
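The paper's estimator targets the local average treatment effect with regularized calibrated nuisance fits; the doubly robust AIPW step it builds on can be sketched in a simpler average-treatment-effect setting with hand-supplied nuisance functions (the names and data-generating process below are purely illustrative):

```python
import numpy as np

def aipw_ate(y, d, pscore, mu1, mu0):
    """Augmented IPW (doubly robust) estimate of the average treatment
    effect, given fitted propensity scores and outcome regressions."""
    t1 = mu1 + d * (y - mu1) / pscore
    t0 = mu0 + (1 - d) * (y - mu0) / (1 - pscore)
    return float(np.mean(t1 - t0))

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))            # true propensity score
d = rng.binomial(1, p)
y = 2.0 * d + x + rng.normal(size=n)    # true treatment effect = 2

est_both = aipw_ate(y, d, p, mu1=2.0 + x, mu0=x)              # both nuisances correct
est_dr = aipw_ate(y, d, p, mu1=np.zeros(n), mu0=np.zeros(n))  # outcome model wrong, pscore right
```

The second call illustrates double robustness: with a correct propensity score, the estimate stays near 2 even though the outcome regressions are misspecified, mirroring the asymmetry in the paper, where only the instrument propensity score model must be correct.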
A general asymptotic theory is given for the panel data AR(1) model with time series independent across cross sections. The theory covers stationary, nearly non-stationary, unit root, mildly integrated, mildly explosive and explosive processes. It is assumed that the cross-sectional dimension and time-series dimension are respectively $N$ and $T$. The results in this paper illustrate that, whichever process holds, with an appropriate regularization the least squares estimator of the autoregressive coefficient converges to a normal distribution at a rate of at least $O(N^{-1/3})$. Since the variance is the key to characterizing the normal distribution, it is important to discuss the variance of the least squares estimator. We show that when the autoregressive coefficient $\rho$ satisfies $|\rho|<1$, the variance declines at the rate $O((NT)^{-1/2})$, while the rate changes to $O(N^{-1/2}T^{-1})$ when $\rho=1$ and $O(N^{-1/2}\rho^{-T+2})$ when $|\rho|>1$. $\rho=1$ is the critical point where the convergence rate changes radically. The transition is studied by letting $\rho$ depend on $T$ and tend to $1$. An interesting phenomenon discovered in this paper is that, in the explosive case, the least squares estimator of the autoregressive coefficient has a standard normal limiting distribution in the panel data case, while it may not have a limiting distribution in the univariate time series case.
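The pooled least squares estimator at the center of these results is straightforward to compute; a small stationary-case ($|\rho|<1$) simulation, with $N$ and $T$ chosen only for the demonstration, looks like this:

```python
import numpy as np

def panel_ar1_ls(Y):
    """Pooled least squares estimate of rho from an N x (T+1) panel:
    regress y_{i,t} on y_{i,t-1} across all units and periods."""
    lag, cur = Y[:, :-1], Y[:, 1:]
    return float((lag * cur).sum() / (lag ** 2).sum())

rng = np.random.default_rng(2)
N, T, rho = 200, 50, 0.5
Y = np.zeros((N, T + 1))
for t in range(T):
    Y[:, t + 1] = rho * Y[:, t] + rng.normal(size=N)

rho_hat = panel_ar1_ls(Y)   # close to 0.5; noise shrinks like (NT)^{-1/2}
```

In this stationary case the estimation error is of order $(NT)^{-1/2}$, consistent with the rate stated above; the unit root and explosive cases require the different normalizations the abstract describes.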
109 - Wenjie Wang, Yichong Zhang (2021)
We study wild bootstrap inference for instrumental variable (quantile) regressions in the framework of a small number of large clusters, in which the number of clusters is viewed as fixed and the number of observations for each cluster diverges to infinity. For subvector inference, we show that the wild bootstrap Wald test, with or without the cluster-robust covariance matrix, controls size asymptotically up to a small error as long as the parameters of the endogenous variables are strongly identified in at least one of the clusters. We further develop a wild bootstrap Anderson-Rubin (AR) test for full-vector inference and show that it controls size asymptotically up to a small error even under weak or partial identification in all clusters. We illustrate the good finite-sample performance of the new inference methods using simulations and provide an empirical application to a well-known dataset about U.S. local labor markets.
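The wild cluster bootstrap mechanics can be sketched for a plain regression coefficient (the paper treats IV and IV quantile regression with few large clusters; everything below, including the Rademacher weights and the null-imposed residuals, is a generic illustration rather than the authors' procedure):

```python
import numpy as np

def wild_cluster_bootstrap_pvalue(y, x, cluster, beta0=0.0, B=999, seed=0):
    """Wild cluster bootstrap p-value for H0: beta = beta0 in y = x*beta + u.
    The null is imposed, and each cluster's residuals are flipped jointly
    by a Rademacher weight."""
    rng = np.random.default_rng(seed)
    beta_hat = float(x @ y / (x @ x))
    u0 = y - x * beta0                       # null-restricted residuals
    ids = np.unique(cluster)
    exceed = 0
    for _ in range(B):
        w = rng.choice([-1.0, 1.0], size=len(ids))
        y_star = x * beta0 + u0 * w[np.searchsorted(ids, cluster)]
        beta_star = float(x @ y_star / (x @ x))
        if abs(beta_star - beta0) >= abs(beta_hat - beta0):
            exceed += 1
    return (1 + exceed) / (1 + B)

rng = np.random.default_rng(3)
G, m = 10, 50                                # few large clusters
cluster = np.repeat(np.arange(G), m)
x = rng.normal(size=G * m)
u = rng.normal(size=G)[cluster] + rng.normal(size=G * m)   # clustered errors
p_null = wild_cluster_bootstrap_pvalue(0.0 * x + u, x, cluster)  # H0 true
p_alt = wild_cluster_bootstrap_pvalue(1.0 * x + u, x, cluster)   # H0 false
```

Flipping residuals cluster by cluster preserves the within-cluster dependence in each bootstrap draw, which is why the method can work with a fixed, small number of clusters.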
108 - Xinwei Ma, Jingshen Wang (2018)
Inverse Probability Weighting (IPW) is widely used in empirical work in economics and other disciplines. As Gaussian approximations perform poorly in the presence of small denominators, trimming is routinely employed as a regularization strategy. However, ad hoc trimming of the observations renders usual inference procedures invalid for the target estimand, even in large samples. In this paper, we first show that the IPW estimator can have different (Gaussian or non-Gaussian) asymptotic distributions, depending on how close to zero the probability weights are and on how large the trimming threshold is. As a remedy, we propose an inference procedure that is robust not only to small probability weights entering the IPW estimator but also to a wide range of trimming threshold choices, by adapting to these different asymptotic distributions. This robustness is achieved by employing resampling techniques and by correcting a non-negligible trimming bias. We also propose an easy-to-implement method for choosing the trimming threshold by minimizing an empirical analogue of the asymptotic mean squared error. In addition, we show that our inference procedure remains valid with the use of a data-driven trimming threshold. We illustrate our method by revisiting a dataset from the National Supported Work program.
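A toy example of the tension the paper addresses: with a propensity score that can be close to zero, the untrimmed IPW estimator is noisy, while trimming stabilizes it but shifts what it estimates, and this trimming bias is what must be corrected (the data-generating process and threshold below are purely illustrative):

```python
import numpy as np

def ipw_trimmed(y, d, pscore, threshold):
    """IPW estimate of E[Y(1)], discarding observations whose propensity
    score falls below the trimming threshold."""
    keep = pscore >= threshold
    return float(np.mean(d[keep] * y[keep] / pscore[keep]))

rng = np.random.default_rng(4)
n = 20_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-2.0 * x))       # propensity, can be near zero
d = rng.binomial(1, p)
y = 1.0 + x + rng.normal(size=n)         # E[Y(1)] = 1 by construction

est_raw = ipw_trimmed(y, d, p, 0.0)      # unbiased but unstable
est_trim = ipw_trimmed(y, d, p, 0.05)    # stabler, but biased upward here
```

Here trimming drops exactly the low-propensity (low-$x$) units, so the trimmed mean targets a systematically higher subpopulation; the paper's procedure corrects this non-negligible trimming bias and keeps inference valid across a range of threshold choices.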
In this work, we study a new recursive stochastic algorithm for the joint estimation of the quantile and superquantile of an unknown distribution. The novelty of this algorithm is to use the Cesàro average of the quantile estimates inside the recursive approximation of the superquantile. We provide sharp non-asymptotic bounds on the quadratic risk of the superquantile estimator for different step size sequences. We also prove new non-asymptotic $L^p$-controls on the Robbins-Monro algorithm for quantile estimation and its averaged version. Finally, we derive a central limit theorem for our joint procedure using the diffusion-approximation point of view hidden behind our stochastic algorithm.
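A minimal sketch of such a joint recursion (the step sizes, initializations, and Gaussian example are illustrative; the paper's exact algorithm and step-size conditions differ):

```python
import numpy as np

def quantile_superquantile(sample, alpha, c=1.0):
    """Joint recursive estimation: a Robbins-Monro step for the
    alpha-quantile, its Cesaro average, and a running-mean update for
    the superquantile that plugs in the averaged quantile."""
    q, q_bar, s = 0.0, 0.0, 0.0
    for n, y in enumerate(sample, start=1):
        gamma = c / n ** 0.75                    # quantile step size
        q -= gamma * ((y <= q) - alpha)          # Robbins-Monro step
        q_bar += (q - q_bar) / n                 # Cesaro average
        target = q_bar + max(y - q_bar, 0.0) / (1.0 - alpha)
        s += (target - s) / n                    # superquantile recursion
    return q_bar, s

rng = np.random.default_rng(5)
sample = rng.normal(size=200_000)
q_hat, s_hat = quantile_superquantile(sample, alpha=0.9)
# For N(0,1): 0.9-quantile ~ 1.2816, superquantile ~ 1.7550
```

Averaging the quantile iterates before they enter the superquantile update is the key device: the superquantile step sees a lower-variance quantile estimate at every iteration.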