We study the impact of weak identification in discrete choice models and provide insights into the determinants of identification strength in these models. Using these insights, we propose a novel test that can consistently detect weak identification in commonly applied discrete choice models, such as probit, logit, and many of their extensions. We further demonstrate that when the null hypothesis of weak identification is rejected, Wald-based inference can be carried out using standard formulas and critical values. A Monte Carlo study compares our proposed testing approach against commonly applied weak identification tests. The results simultaneously demonstrate the good performance of our approach and the fundamental failure of conventional weak identification tests designed for linear models when used in the discrete choice context. Finally, we compare our approach against those commonly applied in the literature in two empirical examples: married women's labor force participation, and US food aid and civil conflict.
This paper studies the instrument identification power for the average treatment effect (ATE) in partially identified binary outcome models with an endogenous binary treatment. We propose a novel approach that measures instrument identification power by the instruments' ability to reduce the width of the ATE bounds. We show that instrument strength, as determined by the extreme values of the conditional propensity score, and its interplay with the degree of endogeneity and the exogenous covariates, all play a role in bounding the ATE. We decompose the ATE identification gains into a sequence of measurable components and construct a standardized quantitative measure of instrument identification power ($IIP$). The decomposition and the $IIP$ evaluation are illustrated with finite-sample simulation studies and an empirical example of childbearing and women's labor supply. Our simulations show that the $IIP$ is a useful tool for detecting irrelevant instruments.
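The core idea of the abstract above, measuring an instrument by how much it narrows the ATE bounds, can be illustrated with a minimal simulation. The sketch below is not the paper's $IIP$ construction; it uses classical worst-case (Manski-style) bounds for a binary outcome, intersected across values of a binary instrument, and reports the fraction of the no-instrument bound width that the instrument removes. The data-generating process and all parameter values are hypothetical.

```python
import random

random.seed(1)
N = 200_000

# Hypothetical DGP: binary instrument Z, unobserved confounder U,
# endogenous binary treatment D, binary outcome Y. True ATE = 0.2.
rows = []
for _ in range(N):
    z = 1 if random.random() < 0.5 else 0
    u = random.random()                         # unobserved confounder
    p = 0.15 + 0.60 * z + 0.20 * u              # propensity moves strongly with Z
    d = 1 if random.random() < p else 0
    y1 = 1 if random.random() < 0.50 + 0.30 * u else 0  # potential outcome, D=1
    y0 = 1 if random.random() < 0.30 + 0.30 * u else 0  # potential outcome, D=0
    rows.append((z, d, y1 if d else y0))

def worst_case_bounds(sub):
    """Worst-case bounds on E[Y1] and E[Y0] for binary Y (Manski, 1990)."""
    m = len(sub)
    yd  = sum(y for _, d, y in sub if d) / m        # E[Y*D]
    ynd = sum(y for _, d, y in sub if not d) / m    # E[Y*(1-D)]
    p0  = sum(1 for _, d, _ in sub if not d) / m    # P(D=0)
    return (yd, yd + p0), (ynd, ynd + (1 - p0))

# Bounds ignoring the instrument: for binary Y the ATE bound width is exactly 1.
(a1, b1), (a0, b0) = worst_case_bounds(rows)
width_no_z = (b1 - a0) - (a1 - b0)

# Bounds using Z: intersect the per-z bounds across instrument values.
per_z = [worst_case_bounds([r for r in rows if r[0] == zv]) for zv in (0, 1)]
L1 = max(b[0][0] for b in per_z); U1 = min(b[0][1] for b in per_z)
L0 = max(b[1][0] for b in per_z); U0 = min(b[1][1] for b in per_z)
width_z = (U1 - L0) - (L1 - U0)

# Share of the no-instrument width removed by Z: an IIP-style measure.
iip = 1 - width_z / width_no_z
print(f"width without Z: {width_no_z:.3f}, with Z: {width_z:.3f}, "
      f"IIP-style measure: {iip:.3f}")
```

An irrelevant instrument leaves the conditional propensity score flat across its values, so the intersected bounds coincide with the unconditional ones and the measure is near zero; a strong instrument pushes the propensity score toward its extremes and the measure toward one.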
Gael M. Martin, 2020
The Bayesian statistical paradigm uses the language of probability to express uncertainty about the phenomena that generate observed data. Probability distributions thus characterize Bayesian analysis, with the rules of probability used to transform prior probability distributions for all unknowns - parameters, latent variables, models - into posterior distributions, subsequent to the observation of data. Conducting Bayesian analysis requires the evaluation of integrals in which these probability distributions appear. Bayesian computation is all about evaluating such integrals in the typical case where no analytical solution exists. This paper takes the reader on a chronological tour of Bayesian computation over the past two and a half centuries. Beginning with the one-dimensional integral first confronted by Bayes in 1763, through to recent problems in which the unknowns number in the millions, we place all computational problems into a common framework and describe all computational methods using a common notation. The aim is to help new researchers in particular - and more generally those interested in adopting a Bayesian approach to empirical work - make sense of the plethora of computational techniques that are now on offer; understand when and why different methods are useful; and see the links that exist between them all.
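The one-dimensional integral the abstract refers to, Bayes's 1763 problem of a binomial likelihood under a uniform prior, makes a compact illustration of what "Bayesian computation" means. The sketch below (illustrative only, not from the paper) evaluates the posterior mean of the success probability two ways, by deterministic grid integration and by simple Monte Carlo sampling from the prior, and both can be checked against the known Beta-posterior answer $(s+1)/(n+2)$.

```python
import random

def posterior_mean_grid(s, n, grid=200_000):
    """Posterior mean of theta under a uniform prior and a Binomial(n, theta)
    likelihood, evaluated by deterministic numerical integration."""
    num = den = 0.0
    for i in range(1, grid):
        t = i / grid
        w = t**s * (1.0 - t)**(n - s)   # unnormalised posterior density
        num += t * w
        den += w
    return num / den

def posterior_mean_mc(s, n, draws=100_000, seed=0):
    """The same integral by simple Monte Carlo: sample theta from the prior
    and weight each draw by its likelihood."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(draws):
        t = rng.random()                # draw from the U(0,1) prior
        w = t**s * (1.0 - t)**(n - s)
        num += t * w
        den += w
    return num / den

s, n = 7, 20                            # hypothetical data: 7 successes in 20 trials
exact = (s + 1) / (n + 2)               # Beta(s+1, n-s+1) posterior mean
print(exact, posterior_mean_grid(s, n), posterior_mean_mc(s, n))
```

For this one-dimensional problem the grid is both faster and more accurate; the point of the sampling-based route is that it survives the move to the high-dimensional posteriors the abstract describes, where grids become infeasible.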