We propose a computationally feasible way of deriving the identified features of models with multiple equilibria in pure or mixed strategies. It is shown that in the case of Shapley regular normal form games, the identified set is characterized by the inclusion of the true data distribution within the core of a Choquet capacity, which is interpreted as the generalized likelihood of the model. This inclusion is in turn characterized by a finite set of inequalities, and efficient, easily implementable combinatorial methods are described to check them. In all normal form games, the identified set is characterized in terms of the value of a submodular or convex optimization program. Efficient algorithms for checking whether a parameter belongs to this identified set are then given and compared. These algorithms are illustrated with family bargaining games and oligopoly entry games.
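When the outcome space is small, the finite set of inequalities can be checked by brute-force enumeration of subsets. Below is a minimal sketch of that check, assuming the capacity is available as a function on subsets of outcomes; the outcome labels, region probabilities, and tolerance are illustrative rather than taken from the paper, and larger games would call for the submodular optimization route instead.

```python
from itertools import combinations

def in_core(outcomes, P, capacity, tol=1e-12):
    """Check P(A) <= capacity(A) for every nonempty subset A of `outcomes`.

    These are the finitely many inequalities characterizing inclusion of the
    observed outcome distribution P in the core of the Choquet capacity (the
    model's generalized likelihood).  Brute force over 2^|outcomes| subsets,
    so intended only for small outcome spaces.
    """
    for r in range(1, len(outcomes) + 1):
        for A in combinations(outcomes, r):
            if sum(P[w] for w in A) > capacity(frozenset(A)) + tol:
                return False   # an inequality is violated: reject the parameter
    return True                # all inequalities hold: parameter is in the identified set

# Toy illustration with made-up numbers: four outcomes of a 2x2 entry game and
# a capacity built from hypothetical equilibrium-region probabilities.
outcomes = ["(0,0)", "(1,0)", "(0,1)", "(1,1)"]
P = {"(0,0)": 0.25, "(1,0)": 0.30, "(0,1)": 0.25, "(1,1)": 0.20}
regions = [({"(0,0)"}, 0.25), ({"(1,0)", "(0,1)"}, 0.55), ({"(1,1)"}, 0.20)]

def capacity(A):
    # Probability that the model's predicted set of outcomes intersects A.
    return sum(p for S, p in regions if S & A)

print(in_core(outcomes, P, capacity))  # True: P splits the mixed region's mass feasibly
```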
We study testable implications of multiple equilibria in discrete games with incomplete information. Unlike de Paula and Tang (2012), we allow the players' private signals to be correlated. In static games, we leverage the independence of private types across games whose equilibrium selection is correlated. In dynamic games with serially correlated discrete unobserved heterogeneity, our testable implication builds on the fact that the distribution of a sequence of choices and states is a mixture over equilibria and unobserved heterogeneity. The number of mixture components is a known function of the length of the sequence as well as the cardinality of the set of equilibria and of the unobserved heterogeneity support. In both static and dynamic cases, these testable implications can be implemented using existing statistical tools.
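As a rough illustration of how the static implication can be taken to data with off-the-shelf tools, the sketch below counts the non-negligible singular values of a cross-game frequency matrix: if outcomes are independent across games conditional on a market-level equilibrium selection, the population joint probability matrix has rank at most the number of equilibria. The hard threshold is a crude stand-in for a formal rank test, and the function name and tolerance are hypothetical.

```python
import numpy as np

def mixture_rank_check(joint_counts, k_max, tol=1e-2):
    """Informal check of the rank restriction implied by a finite mixture.

    joint_counts[i, j] counts markets in which game 1 produced outcome i and
    game 2 produced outcome j.  Under independence of outcomes across games
    given the (possibly correlated) equilibrium selection, the population
    matrix has rank at most the number of equilibria k_max, so at most k_max
    singular values of the frequency matrix should be non-negligible.
    """
    freq = joint_counts / joint_counts.sum()
    singular_values = np.linalg.svd(freq, compute_uv=False)
    return int((singular_values > tol).sum()) <= k_max, singular_values
```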
We study the rise in the acceptability of fiat money in a Kiyotaki-Wright economy by developing a method that can determine dynamic Nash equilibria for a class of search models with genuinely heterogeneous agents. We also address open issues regarding the stability properties of pure-strategy equilibria and the presence of multiple equilibria. Experiments illustrate the liquidity conditions that favor the transition from partial to full acceptance of fiat money, and the effects of inflationary shocks on production, liquidity, and trade.
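The paper's method for genuinely heterogeneous agents is not reproduced here; the fragment below is only a stylized illustration of the multiplicity and stability themes, assuming a hypothetical reduced-form payoff gain gain(pi) to accepting fiat money when a fraction pi of trading partners accept.

```python
def best_response_dynamics(gain, pi0, steps=200, damp=0.1):
    """Damped best-response dynamics for the economy-wide acceptance
    probability pi of fiat money (stylized sketch, not the paper's algorithm).
    Stable rest points indicate which pure- or mixed-strategy equilibria are
    robust to small perturbations in beliefs about acceptance."""
    pi = pi0
    for _ in range(steps):
        best_reply = 1.0 if gain(pi) > 0 else 0.0   # accept iff accepting pays
        pi = (1 - damp) * pi + damp * best_reply    # damped adjustment toward the best reply
    return pi

# With an increasing gain (strategic complementarity in acceptance), pi = 0 and
# pi = 1 are both stable, and the interior root of gain(pi) = 0 is an unstable
# mixed-strategy equilibrium separating the no-acceptance and full-acceptance basins.
gain = lambda pi: pi - 0.4                          # hypothetical gain with interior root at 0.4
print(best_response_dynamics(gain, 0.35), best_response_dynamics(gain, 0.45))  # ~0.0 and ~1.0
```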
We study the impact of weak identification in discrete choice models, and provide insights into the determinants of identification strength in these models. Using these insights, we propose a novel test that can consistently detect weak identification in commonly applied discrete choice models, such as probit, logit, and many of their extensions. Furthermore, we demonstrate that when the null hypothesis of weak identification is rejected, Wald-based inference can be carried out using standard formulas and critical values. A Monte Carlo study compares our proposed testing approach against commonly applied weak identification tests. The results simultaneously demonstrate the good performance of our approach and the fundamental failure of applying conventional weak identification tests designed for linear models in the discrete choice context. Finally, we compare our approach against those commonly applied in the literature in two empirical examples: married women's labor force participation, and US food aid and civil conflicts.
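The paper's test itself is not reproduced here, but the following sketch conveys the underlying notion of identification strength as curvature of the log-likelihood: it fits a probit with statsmodels and reports the smallest eigenvalue of the average information matrix, a crude analogue of a first-stage F statistic in linear IV rather than a substitute for the proposed test.

```python
import numpy as np
import statsmodels.api as sm

def probit_identification_diagnostic(y, X):
    """Crude diagnostic of identification strength in a probit model.

    Fits the model by MLE and returns the smallest eigenvalue of the average
    observed information matrix (minus the Hessian of the log-likelihood,
    divided by n).  Values near zero signal nearly flat likelihood directions,
    i.e. potentially weak identification of some parameter combination.
    """
    res = sm.Probit(y, X).fit(disp=0)
    avg_information = -res.model.hessian(res.params) / len(y)
    return float(np.linalg.eigvalsh(avg_information).min()), res.params
```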
This paper studies identification and estimation of a class of dynamic models in which the decision maker (DM) is uncertain about the data-generating process. The DM surrounds a benchmark model, which he or she fears is misspecified, with a set of alternative models. Decisions are evaluated under a worst-case model delivering the lowest utility among all models in this set. The DM's benchmark model and preference parameters are jointly underidentified. With the benchmark model held fixed, primitive conditions are established for identification of the DM's worst-case model and preference parameters. The key step in the identification analysis is to establish existence and uniqueness of the DM's continuation value function, allowing for an unbounded state space and unbounded utilities. To do so, fixed-point results are derived for monotone, convex operators that act on a Banach space of thin-tailed functions arising naturally from the structure of the continuation value recursion. The fixed-point results are quite general; applications to models with learning and Rust-type dynamic discrete choice models are also discussed. For estimation, a perturbation result is derived which provides a necessary and sufficient condition for consistent estimation of continuation values and the worst-case model. The result also allows convergence rates of estimators to be characterized. An empirical application studies an endowment economy where the DM's benchmark model may be interpreted as an aggregate of experts' forecasting models. The application reveals time variation in the way the DM pessimistically distorts benchmark probabilities. Consequences for asset pricing are explored and connections are drawn with the literature on macroeconomic uncertainty.
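For intuition, the sketch below works out one concrete finite-state special case of the continuation-value fixed point: multiplier (relative-entropy-penalized) robustness, in which the worst-case model tilts the benchmark transition probabilities exponentially in the continuation value. The discount factor, penalty parameter, and finite state space are illustrative assumptions; the paper's operators handle unbounded state spaces and utilities.

```python
import numpy as np

def robust_continuation_values(u, P, beta, theta, tol=1e-12, max_iter=10_000):
    """Finite-state sketch of a monotone, convex continuation-value operator
    under multiplier robustness (one special case, not the paper's general setup).

    Iterates V(x) = u(x) - beta*theta*log E_P[exp(-V(x')/theta) | x] to its
    fixed point and returns the implied worst-case transition kernel, which
    tilts the benchmark kernel P by exp(-V(x')/theta).
    """
    V = np.zeros(len(u))
    for _ in range(max_iter):
        V_new = u - beta * theta * np.log(P @ np.exp(-V / theta))
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    tilt = np.exp(-V / theta)
    worst_case_kernel = (P * tilt) / (P @ tilt)[:, None]   # row-normalized distortion
    return V, worst_case_kernel
```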
This paper studies the econometric aspects of the generalized local IV framework defined by the unordered monotonicity condition, which accommodates multiple levels of treatment and instrument in program evaluations. The framework is explicitly developed to allow for conditioning covariates. Nonparametric identification results are obtained for a wide range of policy-relevant parameters. Semiparametric efficiency bounds are computed for these identified structural parameters, including the local average structural function and the local average structural function on the treated. Two semiparametric estimators that achieve these efficiency bounds are introduced. One is the conditional expectation projection estimator defined through the nonparametric identification equation. The other is the double/debiased machine learning estimator defined through the efficient influence function, which is suitable for high-dimensional settings. More generally, for parameters implicitly defined by possibly non-smooth and overidentifying moment conditions, this study provides the calculation of the corresponding semiparametric efficiency bounds and proposes efficient semiparametric GMM estimators, again using the efficient influence functions. An optimal set of testable implications of the model assumptions is then proposed. Previous results developed for the binary local IV model and the multivalued treatment model under unconfoundedness are encompassed as special cases of this more general framework. The theoretical results are illustrated by an empirical application investigating the return to schooling across different fields of study, and a Monte Carlo experiment.
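For the binary local IV special case noted above, a cross-fitted double/debiased machine learning estimate of the local average treatment effect can be sketched with the standard Neyman-orthogonal score, as below. The random-forest learners, propensity clipping, and fold count are illustrative assumptions rather than the paper's recommendations, and the paper's estimators cover the more general multivalued setting.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dml_late(Y, D, Z, X, n_folds=5, seed=0):
    """Cross-fitted DML estimate of the LATE with binary instrument Z and
    binary treatment D, using efficient-influence-function-based scores
    (a sketch of the binary special case, not the paper's general estimator)."""
    psi_y, psi_d = np.zeros(len(Y)), np.zeros(len(Y))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        # Nuisance fits on the training folds: instrument propensity and
        # outcome/treatment regressions within each instrument arm.
        e = RandomForestClassifier(random_state=seed).fit(X[train], Z[train])
        mY1 = RandomForestRegressor(random_state=seed).fit(X[train][Z[train] == 1], Y[train][Z[train] == 1])
        mY0 = RandomForestRegressor(random_state=seed).fit(X[train][Z[train] == 0], Y[train][Z[train] == 0])
        mD1 = RandomForestRegressor(random_state=seed).fit(X[train][Z[train] == 1], D[train][Z[train] == 1])
        mD0 = RandomForestRegressor(random_state=seed).fit(X[train][Z[train] == 0], D[train][Z[train] == 0])
        ez = np.clip(e.predict_proba(X[test])[:, 1], 0.01, 0.99)
        z, y, d = Z[test], Y[test], D[test]
        # Orthogonal scores for the reduced-form and first-stage effects.
        psi_y[test] = (mY1.predict(X[test]) - mY0.predict(X[test])
                       + z * (y - mY1.predict(X[test])) / ez
                       - (1 - z) * (y - mY0.predict(X[test])) / (1 - ez))
        psi_d[test] = (mD1.predict(X[test]) - mD0.predict(X[test])
                       + z * (d - mD1.predict(X[test])) / ez
                       - (1 - z) * (d - mD0.predict(X[test])) / (1 - ez))
    return psi_y.mean() / psi_d.mean()   # ratio of reduced form to first stage
```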