Decomposing Identification Gains and Evaluating Instrument Identification Power for Partially Identified Average Treatment Effects

Added by David Frazier
Publication date: 2020
Field: Economics
Language: English





This paper studies the instrument identification power for the average treatment effect (ATE) in partially identified binary outcome models with an endogenous binary treatment. We propose a novel approach that measures an instrument's identification power by its ability to reduce the width of the ATE bounds. We show that instrument strength, as determined by the extreme values of the conditional propensity score, and its interplay with the degree of endogeneity and the exogenous covariates all play a role in bounding the ATE. We decompose the ATE identification gains into a sequence of measurable components and construct a standardized quantitative measure of instrument identification power ($IIP$). The decomposition and the $IIP$ evaluation are illustrated with finite-sample simulation studies and an empirical example of childbearing and women's labor supply. Our simulations show that the $IIP$ is a useful tool for detecting irrelevant instruments.
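The bounding logic behind the $IIP$ can be sketched numerically. The snippet below is a minimal Python sketch, assuming worst-case (Manski-type) bounds for a binary outcome and taking the instrument's identification power to be the relative shrinkage of the bound width once the bounds are intersected across instrument values; the toy DGP, the function names `manski_bounds`, `iv_intersection_bounds`, and `iip`, and this particular standardization are illustrative and only loosely mirror the paper's decomposition.

```python
import numpy as np

def manski_bounds(y, d):
    """Worst-case bounds on E[Y(1)] - E[Y(0)] for a binary outcome Y and binary treatment D."""
    p1 = d.mean()                            # P(D=1)
    ey1_lo = np.mean(y * d)                  # E[Y(1)] >= E[Y*D]
    ey1_hi = np.mean(y * d) + (1 - p1)       # E[Y(1)] <= E[Y*D] + P(D=0)
    ey0_lo = np.mean(y * (1 - d))
    ey0_hi = np.mean(y * (1 - d)) + p1
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

def iv_intersection_bounds(y, d, z):
    """Intersect the worst-case bounds across values of the instrument Z."""
    lbs, ubs = [], []
    for val in np.unique(z):
        lo, hi = manski_bounds(y[z == val], d[z == val])
        lbs.append(lo); ubs.append(hi)
    return max(lbs), min(ubs)

def iip(y, d, z):
    """Illustrative identification-power measure: relative shrinkage of the bound width."""
    lo0, hi0 = manski_bounds(y, d)
    lo1, hi1 = iv_intersection_bounds(y, d, z)
    return ((hi0 - lo0) - (hi1 - lo1)) / (hi0 - lo0)

# toy example: the instrument shifts the conditional propensity score
rng = np.random.default_rng(0)
n = 50_000
z = rng.integers(0, 2, n)
u = rng.normal(size=n)                       # unobserved confounder
d = (0.5 * z + 0.3 * u + rng.normal(size=n) > 0.4).astype(int)
y = (0.8 * d + u + rng.normal(size=n) > 0.5).astype(int)
print(f"IIP (share of bound width removed by Z): {iip(y, d, z):.3f}")
```

Because the no-instrument worst-case bounds for a binary outcome always have width one, the measure above equals one minus the width of the intersected bounds, so an irrelevant instrument yields a value near zero.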



Related research

In nonlinear panel data models, fixed effects methods are often criticized because they cannot identify average marginal effects (AMEs) in short panels. The common argument is that identification of AMEs requires knowledge of the distribution of unobserved heterogeneity, but this distribution is not identified in a fixed effects model with a short panel. In this paper, we derive identification results that contradict this argument. In a panel data dynamic logit model, and for T as small as four, we prove point identification of different AMEs, including causal effects of changes in the lagged dependent variable or in the duration in the last choice. Our proofs are constructive and provide simple closed-form expressions for the AMEs in terms of probabilities of choice histories. We illustrate our results using Monte Carlo experiments and an empirical application of a dynamic structural model of consumer brand choice with state dependence.
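The closed-form expressions themselves are in the paper, but the objects they are built from are easy to compute. The Python sketch below simulates a dynamic logit panel with T = 4 and individual fixed effects and tabulates the empirical probabilities of the 2^4 choice histories, which is the kind of ingredient the constructive proofs combine; the parameter values and initial-condition rule are invented for illustration, and the sketch does not reproduce the paper's AME formulas.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)
n, T = 20_000, 4
beta = 0.8                                    # state dependence on the lagged choice
alpha = rng.normal(scale=1.0, size=n)         # individual fixed effects

# dynamic logit: P(y_it = 1 | y_i,t-1, alpha_i) = logistic(alpha_i + beta * y_i,t-1)
y = np.zeros((n, T), dtype=int)
y_prev = rng.binomial(1, 1 / (1 + np.exp(-alpha)))   # illustrative initial condition
for t in range(T):
    p = 1 / (1 + np.exp(-(alpha + beta * y_prev)))
    y[:, t] = rng.binomial(1, p)
    y_prev = y[:, t]

# empirical probabilities of the 2^T choice histories
histories = Counter("".join(map(str, row)) for row in y)
for h, count in sorted(histories.items()):
    print(h, round(count / n, 4))
```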
Given the unconfoundedness assumption, we propose new nonparametric estimators for the reduced dimensional conditional average treatment effect (CATE) function. In the first stage, the nuisance functions necessary for identifying CATE are estimated by machine learning methods, allowing the number of covariates to be comparable to or larger than the sample size. The second stage consists of a low-dimensional local linear regression, reducing CATE to a function of the covariate(s) of interest. We consider two variants of the estimator depending on whether the nuisance functions are estimated over the full sample or over a hold-out sample. Building on Belloni et al. (2017) and Chernozhukov et al. (2018), we derive functional limit theory for the estimators and provide an easy-to-implement procedure for uniform inference based on the multiplier bootstrap. The empirical application revisits the effect of maternal smoking on a baby's birth weight as a function of the mother's age.
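As a rough illustration of the two-stage idea, here is a minimal Python sketch under unconfoundedness: the first stage fits the propensity score and the two outcome regressions with an off-the-shelf learner and forms doubly robust pseudo-outcomes, and the second stage runs a kernel-weighted local linear regression of those pseudo-outcomes on the covariate of interest. This corresponds to the full-sample variant (no hold-out or cross-fitting), and the learners, bandwidth, and helper names are placeholders rather than the authors' exact estimator.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def dr_pseudo_outcomes(y, d, X):
    """First stage: ML nuisance estimates -> doubly robust scores for the CATE."""
    e = GradientBoostingClassifier().fit(X, d).predict_proba(X)[:, 1]   # propensity score
    e = np.clip(e, 0.01, 0.99)
    m1 = GradientBoostingRegressor().fit(X[d == 1], y[d == 1]).predict(X)
    m0 = GradientBoostingRegressor().fit(X[d == 0], y[d == 0]).predict(X)
    return m1 - m0 + d * (y - m1) / e - (1 - d) * (y - m0) / (1 - e)

def local_linear_cate(psi, v, grid, h):
    """Second stage: local linear regression of the pseudo-outcomes on covariate v."""
    fits = []
    for v0 in grid:
        w = np.exp(-0.5 * ((v - v0) / h) ** 2)                # Gaussian kernel weights
        Xd = np.column_stack([np.ones_like(v), v - v0])
        beta = np.linalg.lstsq(Xd * np.sqrt(w)[:, None], psi * np.sqrt(w), rcond=None)[0]
        fits.append(beta[0])                                   # intercept = CATE at v0
    return np.array(fits)

# toy data: the true CATE equals the first covariate
rng = np.random.default_rng(1)
n, p = 5000, 20
X = rng.normal(size=(n, p))
d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = X[:, 0] * d + X[:, 1] + rng.normal(size=n)
psi = dr_pseudo_outcomes(y, d, X)
grid = np.linspace(-2, 2, 9)
print(np.round(local_linear_cate(psi, X[:, 0], grid, h=0.5), 2))
```

In this toy design the printed values should roughly track the true CATE function, which equals the covariate of interest itself.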
Yinchu Zhu (2021)
We consider the setting in which a strong binary instrument is available for a binary treatment. The traditional LATE approach assumes the monotonicity condition, which states that there are no defiers (or no compliers). Since this condition is not always obvious, we investigate its sensitivity and testability. In particular, we focus on the question: does a slight violation of monotonicity lead to a small problem or a big problem? We find a phase transition for the monotonicity condition. On one side of the phase-transition boundary it is easy to learn the sign of the LATE, while on the other side it is impossible. Unfortunately, the impossible side of the phase transition includes data-generating processes under which the proportion of defiers tends to zero. This boundary is explicitly characterized in the case of binary outcomes. Outside a special case, it is impossible to test whether the data-generating process is on the nice side of the boundary; however, in the special case that non-compliance is almost one-sided, such a test is possible. We also provide simple alternatives to monotonicity.
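The sign problem can be illustrated with a small simulation. The Python sketch below uses a stylized DGP in which only 5% of units are defiers, yet the Wald (IV) estimand has the opposite sign from the complier LATE; the type shares and effect sizes are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# compliance types: always-taker, never-taker, complier, defier (small share)
types = rng.choice(["at", "nt", "co", "de"], size=n, p=[0.45, 0.40, 0.10, 0.05])
z = rng.integers(0, 2, n)
d = np.where(types == "at", 1,
    np.where(types == "nt", 0,
    np.where(types == "co", z, 1 - z)))

# heterogeneous treatment effects: compliers +1, defiers +5, others +0.5
effect = np.select([types == "co", types == "de"], [1.0, 5.0], default=0.5)
y = effect * d + rng.normal(size=n)

wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(f"defier share: {np.mean(types == 'de'):.2f}")
print(f"complier LATE: 1.00, Wald estimand (sample): {wald:.2f}")
```

Here the complier LATE is +1 by construction, but the population Wald estimand is roughly -3, so a 5% defier share is already enough to flip the sign.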
Haitian Xie (2020)
This paper studies the econometric aspects of the generalized local IV framework defined using the unordered monotonicity condition, which accommodates multiple levels of treatment and instrument in program evaluations. The framework is explicitly developed to allow for conditioning covariates. Nonparametric identification results are obtained for a wide range of policy-relevant parameters. Semiparametric efficiency bounds are computed for these identified structural parameters, including the local average structural function and local average structural function on the treated. Two semiparametric estimators are introduced that achieve efficiency. One is the conditional expectation projection estimator defined through the nonparametric identification equation. The other is the double/debiased machine learning estimator defined through the efficient influence function, which is suitable for high-dimensional settings. More generally, for parameters implicitly defined by possibly non-smooth and overidentifying moment conditions, this study provides the calculation for the corresponding semiparametric efficiency bounds and proposes efficient semiparametric GMM estimators again using the efficient influence functions. Then an optimal set of testable implications of the model assumption is proposed. Previous results developed for the binary local IV model and the multivalued treatment model under unconfoundedness are encompassed as special cases in this more general framework. The theoretical results are illustrated by an empirical application investigating the return to schooling across different fields of study, and a Monte Carlo experiment.
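For the binary-instrument special case, the efficient-influence-function construction behind a double/debiased machine learning estimator can be sketched in Python as follows: cross-fitted random forests estimate the instrument propensity score and the conditional means of the outcome and the treatment given each instrument value, and the LATE is the ratio of the averaged numerator and denominator of the influence function. This is a generic sketch of the binary local IV case, not the paper's multivalued unordered-monotonicity framework; the function name `dml_late`, the learners, and the toy DGP are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dml_late(y, d, z, X, n_folds=2):
    """Cross-fitted doubly robust LATE for a binary instrument (generic sketch)."""
    num = np.zeros_like(y, dtype=float)
    den = np.zeros_like(y, dtype=float)
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X):
        pi = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[train], z[train])
        p = np.clip(pi.predict_proba(X[test])[:, 1], 0.01, 0.99)   # instrument propensity

        def fit_by_z(target):
            """Regress target on X separately for Z=0 and Z=1 on the training fold."""
            out = {}
            for zz in (0, 1):
                idx = train[z[train] == zz]
                out[zz] = RandomForestRegressor(n_estimators=200, random_state=0) \
                    .fit(X[idx], target[idx]).predict(X[test])
            return out

        mu = fit_by_z(y)                 # E[Y | Z=z, X]
        m = fit_by_z(d.astype(float))    # E[D | Z=z, X]

        zt, yt, dt = z[test], y[test], d[test]
        num[test] = mu[1] - mu[0] + zt * (yt - mu[1]) / p - (1 - zt) * (yt - mu[0]) / (1 - p)
        den[test] = m[1] - m[0] + zt * (dt - m[1]) / p - (1 - zt) * (dt - m[0]) / (1 - p)
    return num.mean() / den.mean()

# toy data with a confounder u and a homogeneous treatment effect of 2
rng = np.random.default_rng(3)
n = 4000
X = rng.normal(size=(n, 5))
z = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
u = rng.normal(size=n)
d = ((z + 0.5 * u + rng.normal(size=n)) > 0.8).astype(int)
y = 2.0 * d + X[:, 1] + u + rng.normal(size=n)
print(f"DML LATE estimate: {dml_late(y, d, z, X):.2f}")
```

With the homogeneous treatment effect of 2 in the toy design, the printed estimate should be close to 2.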
Economists are often interested in estimating averages with respect to distributions of unobservables, such as moments of individual fixed-effects, or average partial effects in discrete choice models. For such quantities, we propose and study posterior average effects (PAE), where the average is computed conditional on the sample, in the spirit of empirical Bayes and shrinkage methods. While the usefulness of shrinkage for prediction is well-understood, a justification of posterior conditioning to estimate population averages is currently lacking. We show that PAE have minimum worst-case specification error under various forms of misspecification of the parametric distribution of unobservables. In addition, we introduce a measure of informativeness of the posterior conditioning, which quantifies the worst-case specification error of PAE relative to parametric model-based estimators. As illustrations, we report PAE estimates of distributions of neighborhood effects in the US, and of permanent and transitory components in a model of income dynamics.
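A minimal normal-normal sketch in Python conveys the idea: noisy fixed-effects estimates are shrunk toward a fitted parametric (normal) heterogeneity distribution, and the target average is computed from the resulting posteriors rather than by plugging in the noisy estimates directly. The variances, the threshold, and the target functional below are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, sigma_eps = 5000, 1.0

# individual effects and their noisy fixed-effects estimates
alpha = rng.normal(loc=0.0, scale=0.7, size=n)
alpha_hat = alpha + sigma_eps * rng.normal(size=n)

# parametric (normal) model for the heterogeneity, fitted by a variance deconvolution
mu_hat = alpha_hat.mean()
tau2_hat = max(alpha_hat.var() - sigma_eps**2, 1e-8)        # Var(alpha) estimate

# posterior of alpha_i given alpha_hat_i under the normal-normal model
shrink = tau2_hat / (tau2_hat + sigma_eps**2)
post_mean = mu_hat + shrink * (alpha_hat - mu_hat)
post_var = shrink * sigma_eps**2

# target average: E[1{alpha_i > c}], a simple stand-in for an average partial effect
c = 0.5
plug_in = np.mean(alpha_hat > c)                              # ignores estimation noise
model_based = 1 - norm.cdf(c, mu_hat, np.sqrt(tau2_hat))      # purely parametric
pae = np.mean(1 - norm.cdf(c, post_mean, np.sqrt(post_var)))  # posterior average effect

print(f"true: {np.mean(alpha > c):.3f}  plug-in: {plug_in:.3f}  "
      f"model-based: {model_based:.3f}  PAE: {pae:.3f}")
```

In this toy design the plug-in average overstates the tail probability because it ignores estimation noise, while the posterior average effect and the purely model-based average both track the truth when the normal model is correct.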