
Evidence Aggregation for Treatment Choice

Added by Takuya Ishihara
Publication date: 2021
Field: Economics
Language: English





Consider a planner who has to decide whether or not to introduce a new policy to a certain local population. The planner has only limited knowledge of the policy's causal impact on this population due to a lack of data, but does have access to the publicized results of intervention studies performed for similar policies on different populations. How should the planner make use of and aggregate this existing evidence to make her policy decision? Building upon the paradigm of "patient-centered meta-analysis" proposed by Manski (2020; Towards Credible Patient-Centered Meta-Analysis, Epidemiology), we formulate the planner's problem as a statistical decision problem with a social welfare objective pertaining to the local population, and solve for an optimal aggregation rule under the minimax-regret criterion. We investigate the analytical properties, computational feasibility, and welfare regret performance of this rule. We also compare the minimax-regret decision rule with plug-in decision rules based upon a hierarchical Bayes meta-regression or stylized mean-squared-error optimal prediction. We apply the minimax-regret decision rule to two settings: whether to enact an active labor market policy given evidence from 14 randomized controlled trial studies; and whether to approve a drug (Remdesivir) for COVID-19 treatment using a meta-database of clinical trials.
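As a stylized illustration of the minimax-regret criterion the abstract invokes (a minimal sketch, not the paper's aggregation rule): suppose the aggregated evidence only brackets the local welfare effect of the policy in an interval [lo, hi]. Adopting when the true effect theta is negative incurs regret -theta; rejecting when theta is positive incurs regret theta. Minimizing the worst-case regret over the interval reduces to a simple midpoint rule.

```python
# Stylized minimax-regret treatment choice when the evidence only
# brackets the local welfare effect theta in [lo, hi].
# Illustrative sketch only -- not the paper's aggregation rule.

def minimax_regret_decision(lo, hi):
    """Return (adopt, worst_case_regret).

    Regret of adopting at state theta:     max(0, -theta), worst at theta = lo.
    Regret of not adopting at state theta: max(0,  theta), worst at theta = hi.
    """
    regret_adopt = max(0.0, -lo)
    regret_reject = max(0.0, hi)
    adopt = regret_adopt <= regret_reject  # equivalent to lo + hi >= 0
    return adopt, min(regret_adopt, regret_reject)

# Evidence bracketing the effect in [-0.2, 0.5]: the worst-case regret
# of adopting (0.2) is below that of rejecting (0.5), so adopt.
decision, regret = minimax_regret_decision(-0.2, 0.5)
```

The rule adopts exactly when the interval midpoint is nonnegative, matching the familiar result that minimax regret resolves interval ambiguity at the midpoint.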



Related Research

Yinchu Zhu (2021)
We consider the setting in which a strong binary instrument is available for a binary treatment. The traditional LATE approach assumes the monotonicity condition stating that there are no defiers (or no compliers). Since this condition is not always obvious, we investigate the sensitivity and testability of this condition. In particular, we focus on the question: does a slight violation of monotonicity lead to a small problem or a big problem? We find a phase transition for the monotonicity condition. On one side of the phase-transition boundary, it is easy to learn the sign of LATE; on the other side, it is impossible to learn the sign of LATE. Unfortunately, the impossible side of the phase transition includes data-generating processes under which the proportion of defiers tends to zero. This boundary of the phase transition is explicitly characterized in the case of binary outcomes. Outside a special case, it is impossible to test whether the data-generating process is on the nice side of the boundary. However, in the special case where non-compliance is almost one-sided, such a test is possible. We also provide simple alternatives to monotonicity.
This paper discusses the problem of estimation and inference on the effects of time-varying treatment. We propose a method for inference on the effects of treatment histories, introducing a dynamic covariate balancing method combined with penalized regression. Our approach allows for (i) treatments to be assigned based on arbitrary past information, with the propensity score being unknown; (ii) outcomes and time-varying covariates to depend on treatment trajectories; (iii) high-dimensional covariates; and (iv) heterogeneity of treatment effects. We study the asymptotic properties of the estimator and derive the parametric convergence rate of the proposed procedure. Simulations and an empirical application illustrate the advantage of the method over state-of-the-art competitors.
In this paper we derive locally D-optimal designs for discrete choice experiments based on multinomial probit models. These models include several discrete explanatory variables as well as a quantitative one. The commonly used multinomial logit model assumes independent utilities for different choice options. Thus, D-optimal designs for such multinomial logit models may comprise choice sets consisting, e.g., of alternatives which are identical in all discrete attributes but different in the quantitative variable. Obviously such designs are not appropriate for many empirical choice experiments. It will be shown that locally D-optimal designs for multinomial probit models supposing independent utilities consist of counterintuitive choice sets as well. However, locally D-optimal designs for multinomial probit models allowing for dependent utilities turn out to be reasonable for analyzing decisions using discrete choice studies.
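For readers unfamiliar with the D-criterion itself: a design is D-optimal when it maximizes the (log-)determinant of the information matrix. The sketch below uses the linear-model information F'F purely for illustration; the multinomial probit information used in the paper is more involved, and the two design matrices are hypothetical examples.

```python
import numpy as np

# The D-criterion: log-determinant of the information matrix. Here we
# use the linear-model information F'F for simplicity; the multinomial
# probit information in the paper is more involved. Designs below are
# hypothetical 3-parameter examples.

def d_criterion(F):
    """log det(F'F); -inf if the design is singular."""
    sign, logdet = np.linalg.slogdet(F.T @ F)
    return logdet if sign > 0 else -np.inf

# A design spreading its points over the factor space...
spread = np.array([[1., 0., 0.],
                   [1., 1., 0.],
                   [1., 0., 1.],
                   [1., 1., 1.]])
# ...versus one that repeats two collinear points (singular, useless).
collinear = np.array([[1., 0., 0.],
                      [1., 0., 0.],
                      [1., 1., 1.],
                      [1., 1., 1.]])
best = max([spread, collinear], key=d_criterion)  # spread wins
```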
Yeonwoo Rho, Xiaofeng Shao (2018)
In unit root testing, a piecewise locally stationary process is adopted to accommodate nonstationary errors that can have both smooth and abrupt changes in second- or higher-order properties. Under this framework, the limiting null distributions of the conventional unit root test statistics are derived and shown to contain a number of unknown parameters. To circumvent the difficulty of direct consistent estimation, we propose to use the dependent wild bootstrap to approximate the non-pivotal limiting null distributions and provide a rigorous theoretical justification for bootstrap consistency. The proposed method is compared through finite sample simulations with the recolored wild bootstrap procedure, which was developed for errors that follow a heteroscedastic linear process. Further, a combination of autoregressive sieve recoloring with the dependent wild bootstrap is shown to perform well. The validity of the dependent wild bootstrap in a nonstationary setting is demonstrated for the first time, showing the possibility of extensions to other inference problems associated with locally stationary processes.
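A minimal sketch of the dependent wild bootstrap step described above (the Bartlett kernel and bandwidth below are illustrative choices, not the paper's tuning): residuals are multiplied by a mean-zero Gaussian sequence whose covariance decays with the time lag, so that short-run dependence survives in the bootstrap errors.

```python
import numpy as np

# Dependent wild bootstrap sketch: multiply residuals by a mean-zero
# Gaussian multiplier sequence whose covariance decays in the lag,
# preserving short-run dependence in the bootstrap sample.

def dependent_wild_bootstrap(residuals, l, rng):
    n = len(residuals)
    t = np.arange(n)
    # cov(W_t, W_s) = max(0, 1 - |t - s| / l)  (Bartlett kernel)
    cov = np.maximum(0.0, 1.0 - np.abs(t[:, None] - t[None, :]) / l)
    w = rng.multivariate_normal(np.zeros(n), cov)
    return residuals * w

rng = np.random.default_rng(0)
eps_star = dependent_wild_bootstrap(np.ones(50), l=5, rng=rng)
```

Repeating this draw, re-estimating the test statistic on each bootstrap sample, and taking quantiles approximates the non-pivotal limiting null distribution without estimating its unknown parameters directly.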
Joel L. Horowitz (2018)
This paper presents a simple method for carrying out inference in a wide variety of possibly nonlinear IV models under weak assumptions. The method is non-asymptotic in the sense that it provides a finite sample bound on the difference between the true and nominal probabilities of rejecting a correct null hypothesis. The method is a non-Studentized version of the Anderson-Rubin test but is motivated and analyzed differently. In contrast to the conventional Anderson-Rubin test, the method proposed here does not require restrictive distributional assumptions, linearity of the estimated model, or simultaneous equations. Nor does it require knowledge of whether the instruments are strong or weak. It does not require testing or estimating the strength of the instruments. The method can be applied to quantile IV models that may be nonlinear and can be used to test a parametric IV model against a nonparametric alternative. The results presented here hold in finite samples, regardless of the strength of the instruments.
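To convey the flavor of a non-Studentized, non-asymptotic moment test (a Hoeffding-bound construction used here purely for illustration; it is not the test developed in the paper): under H0: theta = theta0 the instrument-weighted residual moments g_i = z_i * (y_i - f(x_i, theta0)) have mean zero, and if each |g_i| <= M for a known bound M, Hoeffding's inequality delivers a finite-sample size guarantee regardless of instrument strength.

```python
import math

# Finite-sample moment test via Hoeffding's inequality -- an
# illustration of a non-Studentized, non-asymptotic test in the same
# spirit; not the construction in the paper. Under H0 the moments
# g_i have mean zero and satisfy |g_i| <= M.

def ar_moment_test(g, M, alpha=0.05):
    """Reject H0 iff |mean(g)| exceeds the Hoeffding threshold.

    Hoeffding: P(|mean(g)| >= t) <= 2 * exp(-n * t**2 / (2 * M**2)),
    so rejecting beyond the threshold has size at most alpha for any n.
    """
    n = len(g)
    gbar = sum(g) / n
    threshold = M * math.sqrt(2.0 * math.log(2.0 / alpha) / n)
    return abs(gbar) > threshold
```

Because the bound is exact for every sample size, the test needs no knowledge of whether the instruments are strong or weak, echoing the finite-sample character of the method in the abstract.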
