
Testing homogeneity in dynamic discrete games in finite samples

Added by Takuya Ura
Publication date: 2020
Field: Economics
Language: English





The literature on dynamic discrete games often assumes that the conditional choice probabilities and the state transition probabilities are homogeneous across markets and over time. We refer to this as the homogeneity assumption in dynamic discrete games. This homogeneity assumption enables empirical studies to estimate the game's structural parameters by pooling data from multiple markets and many time periods. In this paper, we propose a hypothesis test to evaluate whether the homogeneity assumption holds in the data. Our test is an approximate randomization test, implemented via a Markov chain Monte Carlo (MCMC) algorithm. We show that our hypothesis test becomes valid as the (user-defined) number of MCMC draws diverges, for any fixed number of markets, time periods, and players. We apply our test to the empirical study of the U.S. Portland cement industry in Ryan (2012).
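The paper's actual test uses an MCMC algorithm tailored to dynamic discrete games; as a rough illustration of the underlying idea of a randomization test of homogeneity, the following is a minimal sketch. It assumes a toy setting (two markets, binary choices at a common state, observation counts and the number of draws are made up) in which, under the homogeneity null, market labels are exchangeable, so the null distribution of a test statistic can be approximated by re-randomizing labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: binary choices at a common state in two markets.
# Under the homogeneity null, both markets share the same conditional
# choice probability, so market labels are exchangeable.
choices_m1 = rng.binomial(1, 0.5, size=200)  # market 1 choices
choices_m2 = rng.binomial(1, 0.5, size=200)  # market 2 choices

def stat(a, b):
    # Test statistic: absolute gap between empirical choice frequencies.
    return abs(a.mean() - b.mean())

pooled = np.concatenate([choices_m1, choices_m2])
t_obs = stat(choices_m1, choices_m2)

B = 2000  # user-chosen number of randomization draws
exceed = 0
for _ in range(B):
    perm = rng.permutation(pooled)  # re-randomize market labels
    exceed += stat(perm[:200], perm[200:]) >= t_obs

# Randomization p-value; valid up to simulation error that
# vanishes as the number of draws B grows.
p_value = (1 + exceed) / (1 + B)
print(round(p_value, 3))
```

The `(1 + exceed) / (1 + B)` form keeps the p-value strictly positive at any finite number of draws, mirroring the paper's point that validity is driven by the user-defined number of draws rather than the number of markets or periods.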




Related research

Aureo de Paula, Xun Tang (2020)
We study testable implications of multiple equilibria in discrete games with incomplete information. Unlike de Paula and Tang (2012), we allow the players' private signals to be correlated. In static games, we leverage the independence of private types across games whose equilibrium selection is correlated. In dynamic games with serially correlated discrete unobserved heterogeneity, our testable implication builds on the fact that the distribution of a sequence of choices and states is a mixture over equilibria and unobserved heterogeneity. The number of mixture components is a known function of the length of the sequence as well as the cardinality of the equilibria and the support of unobserved heterogeneity. In both static and dynamic cases, these testable implications are implementable using existing statistical tools.
In nonlinear panel data models, fixed effects methods are often criticized because they cannot identify average marginal effects (AMEs) in short panels. The common argument is that the identification of AMEs requires knowledge of the distribution of unobserved heterogeneity, but this distribution is not identified in a fixed effects model with a short panel. In this paper, we derive identification results that contradict this argument. In a panel data dynamic logit model, and for T as small as four, we prove the point identification of different AMEs, including causal effects of changes in the lagged dependent variable or in the duration of the last choice. Our proofs are constructive and provide simple closed-form expressions for the AMEs in terms of probabilities of choice histories. We illustrate our results using Monte Carlo experiments and an empirical application of a dynamic structural model of consumer brand choice with state dependence.
We study the impact of weak identification in discrete choice models and provide insights into the determinants of identification strength in these models. Using these insights, we propose a novel test that can consistently detect weak identification in commonly applied discrete choice models, such as probit, logit, and many of their extensions. Furthermore, we demonstrate that when the null hypothesis of weak identification is rejected, Wald-based inference can be carried out using standard formulas and critical values. A Monte Carlo study compares our proposed testing approach against commonly applied weak identification tests. The results simultaneously demonstrate the good performance of our approach and the fundamental failure of using conventional weak identification tests for linear models in the discrete choice model context. Furthermore, we compare our approach against those commonly applied in the literature in two empirical examples: married women's labor force participation, and US food aid and civil conflicts.
Michael J. Longo (2013)
According to the cosmological principle, galaxy cluster sizes and cluster densities, when averaged over sufficiently large volumes of space, are expected to be constant everywhere, except for a slow variation with look-back time (redshift). Thus, average cluster sizes or correlation lengths provide a means of testing for homogeneity that is almost free of selection biases. Using ~10^6 galaxies from the SDSS DR7 survey, I show that regions of space separated by ~2 Gpc/h have the same average cluster size and density to 5 - 10 percent. I show that the average cluster size, averaged over many galaxies, remains constant to less than 10 percent from small redshifts out to redshifts of 0.25. The evolution of the cluster sizes with increasing redshift gives fair agreement when the same analysis is applied to the Millennium Simulation. However, the MS does not replicate the increase in cluster amplitudes with redshift seen in the SDSS data. This increase is shown to be caused by the changing composition of the SDSS sample with increasing redshifts. There is no evidence to support a model that attributes the SN Ia dimming to our happening to live in a large, nearly spherical void.
We study the rise in the acceptability of fiat money in a Kiyotaki-Wright economy by developing a method that can determine dynamic Nash equilibria for a class of search models with genuinely heterogeneous agents. We also address open issues regarding the stability properties of pure-strategy equilibria and the presence of multiple equilibria. Experiments illustrate the liquidity conditions that favor the transition from partial to full acceptance of fiat money, and the effects of inflationary shocks on production, liquidity, and trade.
