
Inference in Dynamic Discrete Choice Problems under Local Misspecification

Added by Takuya Ura
Publication date: 2016
Language: English





Single-agent dynamic discrete choice models are typically estimated using heavily parametrized econometric frameworks, making them susceptible to model misspecification. This paper investigates how misspecification affects the results of inference in these models. Specifically, we consider a local misspecification framework in which specification errors are assumed to vanish at an arbitrary and unknown rate with the sample size. Relative to global misspecification, the local misspecification analysis has two important advantages. First, it yields tractable and general results. Second, it allows us to focus on parameters with structural interpretation, instead of pseudo-true parameters. We consider a general class of two-step estimators based on the K-stage sequential policy function iteration algorithm, where K denotes the number of iterations employed in the estimation. This class includes Hotz and Miller's (1993) conditional choice probability estimator, Aguirregabiria and Mira's (2002) pseudo-likelihood estimator, and Pesendorfer and Schmidt-Dengler's (2008) asymptotic least squares estimator. We show that local misspecification can affect the asymptotic distribution and even the rate of convergence of these estimators. In principle, one might expect that the effect of local misspecification could change with the number of iterations K. One of our main findings is that this is not the case, i.e., the effect of local misspecification is invariant to K. In practice, this means that researchers cannot eliminate or even alleviate problems of model misspecification by changing K.
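To fix ideas, here is a minimal sketch of the K-stage sequential policy function iteration idea for a binary-choice model with a finite state space and logit errors. This is our own illustration, not code from the paper: the flow-utility function `u_fun`, the transition matrices `F0`/`F1`, the discount factor `beta`, the first-stage CCP estimate `P_hat`, and the two-dimensional parameter vector are all assumed inputs.

```python
# Sketch of a K-stage policy-iteration (pseudo-likelihood) estimator for a binary-choice
# dynamic discrete choice model with finite states and logit errors. All names are
# illustrative assumptions, not the paper's notation.
import numpy as np
from scipy.optimize import minimize

EULER = 0.5772156649015329  # Euler-Mascheroni constant (mean of a standard EV1 error)

def psi(P1, theta, u_fun, F0, F1, beta):
    """Policy-iteration mapping: best-response CCPs implied by current CCPs P1 = Pr(a=1|x)."""
    u0, u1 = u_fun(theta)                               # flow utilities for a=0 and a=1, shape (n_states,)
    P0 = 1.0 - P1
    # Value of behaving according to the current CCPs (logit expected error: EULER - log P_a)
    F_bar = P0[:, None] * F0 + P1[:, None] * F1
    rhs = P0 * (u0 + EULER - np.log(P0)) + P1 * (u1 + EULER - np.log(P1))
    V = np.linalg.solve(np.eye(len(P1)) - beta * F_bar, rhs)
    v0, v1 = u0 + beta * F0 @ V, u1 + beta * F1 @ V     # choice-specific values
    return 1.0 / (1.0 + np.exp(v0 - v1))                # logit best response, Pr(a=1|x)

def k_stage_estimator(a_obs, x_obs, P_hat, u_fun, F0, F1, beta, K=1, theta0=None):
    """K pseudo-likelihood iterations starting from a nonparametric CCP estimate P_hat."""
    theta = np.zeros(2) if theta0 is None else np.asarray(theta0, float)  # assumes 2 parameters
    P = np.clip(P_hat, 1e-6, 1 - 1e-6)
    for _ in range(K):
        def neg_loglik(th, P=P):
            p1 = np.clip(psi(P, th, u_fun, F0, F1, beta), 1e-10, 1 - 1e-10)
            return -np.sum(a_obs * np.log(p1[x_obs]) + (1 - a_obs) * np.log(1 - p1[x_obs]))
        theta = minimize(neg_loglik, theta, method="Nelder-Mead").x
        P = np.clip(psi(P, theta, u_fun, F0, F1, beta), 1e-6, 1 - 1e-6)   # CCPs for the next stage
    return theta, P
```

Under this sketch, K = 1 corresponds to a two-step pseudo-likelihood estimator based on first-stage CCPs, while larger K simply repeats the policy-iteration update; the abstract's point is that the effect of local misspecification on the resulting estimator does not change as K grows.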



Related research

Specifying reward functions for robots that operate in environments without a natural reward signal can be challenging, and incorrectly specified rewards can incentivise degenerate or dangerous behavior. A promising alternative to manually specifying reward functions is to enable robots to infer them from human feedback, like demonstrations or corrections. To interpret this feedback, robots treat as approximately optimal a choice the person makes from a choice set, like the set of possible trajectories they could have demonstrated or possible corrections they could have made. In this work, we introduce the idea that the choice set itself might be difficult to specify, and analyze choice set misspecification: what happens as the robot makes incorrect assumptions about the set of choices from which the human selects their feedback. We propose a classification of different kinds of choice set misspecification, and show that these different classes lead to meaningful differences in the inferred reward and resulting performance. While we would normally expect misspecification to hurt, we find that certain kinds of misspecification are neither helpful nor harmful (in expectation). However, in other situations, misspecification can be extremely harmful, leading the robot to believe the opposite of what it should believe. We hope our results will allow for better prediction and response to the effects of misspecification in real-world reward inference.
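As a toy illustration of the mechanism (our own sketch, not the authors' formalism), the snippet below performs Boltzmann-rational reward inference from a single demonstration under two different assumed choice sets; the trajectory features, candidate reward parameters, and rationality coefficient `beta` are all hypothetical.

```python
# Toy example: how an incorrectly assumed choice set changes the inferred reward.
import numpy as np
from scipy.special import logsumexp

def reward_posterior(chosen_phi, choice_set_phi, thetas, beta=5.0):
    """Posterior over candidate reward parameters (uniform prior), assuming the human
    picked `chosen_phi` Boltzmann-rationally from the rows of `choice_set_phi`."""
    log_lik = beta * (thetas @ chosen_phi) - logsumexp(beta * thetas @ choice_set_phi.T, axis=1)
    post = np.exp(log_lik - log_lik.max())
    return post / post.sum()

phi = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])   # features of three trajectories (made up)
thetas = np.array([[1.0, 0.0], [0.0, 1.0]])            # two candidate reward directions
chosen = phi[2]                                        # the demonstrated trajectory

true_set = phi[[0, 2]]    # the human really chose between trajectories 0 and 2
assumed_set = phi         # the robot wrongly assumes trajectory 1 was also available
print(reward_posterior(chosen, true_set, thetas))      # concentrates on the second reward
print(reward_posterior(chosen, assumed_set, thetas))   # roughly uniform: information is lost
```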
Multiple imputation has become one of the most popular approaches for handling missing data in statistical analyses. Part of this success is due to Rubin's simple combination rules. These give frequentist valid inferences when the imputation and analysis procedures are so-called congenial and the complete data analysis is valid, but otherwise may not. Roughly speaking, congeniality corresponds to whether the imputation and analysis models make different assumptions about the data. In practice, imputation and analysis procedures are often not congenial, such that tests may not have the correct size and confidence interval coverage deviates from the advertised level. We examine a number of recent proposals which combine bootstrapping with multiple imputation, and determine which are valid under uncongeniality and model misspecification. Imputation followed by bootstrapping generally does not result in valid variance estimates under uncongeniality or misspecification, whereas bootstrapping followed by imputation does. We recommend a particular computationally efficient variant of bootstrapping followed by imputation.
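A bare-bones sketch of the bootstrap-then-impute ordering (a generic illustration, not the specific computationally efficient variant recommended in the paper), using a toy normal-model imputation of a partially observed variable whose mean is the target:

```python
# Bootstrap first, impute within each resample, use the bootstrap spread as the variance estimate.
import numpy as np

rng = np.random.default_rng(0)

def impute_once(y, rng):
    """Toy single imputation: draw missing values from a normal fit to the observed data."""
    obs = y[~np.isnan(y)]
    out = y.copy()
    out[np.isnan(y)] = rng.normal(obs.mean(), obs.std(ddof=1), np.isnan(y).sum())
    return out

def boot_then_impute(y, B=200, M=2, rng=rng):
    ests = []
    for _ in range(B):
        yb = y[rng.integers(0, len(y), len(y))]         # bootstrap resample of the incomplete data
        ests.append(np.mean([impute_once(yb, rng).mean() for _ in range(M)]))
    ests = np.asarray(ests)
    return ests.mean(), ests.var(ddof=1)                # point estimate and bootstrap variance

# Example: sample mean with roughly 30% of values missing completely at random.
y = rng.normal(size=500)
y[rng.random(500) < 0.3] = np.nan
print(boot_then_impute(y))
```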
We use decision theory to confront uncertainty that is sufficiently broad to incorporate models as approximations. We presume the existence of a featured collection of what we call structured models that have explicit substantive motivations. The decision maker confronts uncertainty through the lens of these models, but also views these models as simplifications, and hence, as misspecified. We extend min-max analysis under model ambiguity to incorporate the uncertainty induced by acknowledging that the models used in decision-making are simplified approximations. Formally, we provide an axiomatic rationale for a decision criterion that incorporates model misspecification concerns.
Granger causality has been employed to investigate causal relations between components of stationary multivariate time series. We generalize this concept by developing statistical inference for local Granger causality for multivariate locally stationary processes. Our proposed local Granger causality approach captures time-evolving causality relationships in nonstationary processes. The proposed local Granger causality is well represented in the frequency domain and estimated based on the parametric time-varying spectral density matrix using the local Whittle likelihood. Under regularity conditions, we demonstrate that the estimators converge in distribution to a multivariate normal. Additionally, the test statistic for the local Granger causality is shown to be asymptotically distributed as a quadratic form of a multivariate normal distribution. The finite sample performance is confirmed with several simulation studies for multivariate time-varying autoregressive models. For practical demonstration, the proposed local Granger causality method uncovered new functional connectivity relationships between channels in brain signals. Moreover, the method was able to identify structural changes in financial data.
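For intuition only, the following crude rolling-window scan (not the paper's frequency-domain, local Whittle estimator) shows the kind of time-evolving Granger causality such a method is designed to detect; the simulated coupling, window length, and lag order are arbitrary choices, and statsmodels is assumed to be available.

```python
# Rolling-window Granger-causality scan on a simulated series whose coupling switches on mid-sample.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
T = 1500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    coupling = 0.8 if t > T // 2 else 0.0               # causality from x to y appears only later
    y[t] = 0.3 * y[t - 1] + coupling * x[t - 1] + rng.normal()

window, lag = 300, 2
for start in range(0, T - window + 1, window):
    seg = np.column_stack([y[start:start + window], x[start:start + window]])
    res = grangercausalitytests(seg, maxlag=lag)        # may print its full output, depending on version
    pval = res[lag][0]["ssr_ftest"][1]                  # does x Granger-cause y in this window?
    print(f"window starting at t={start}: p-value = {pval:.3f}")
```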
In nonlinear panel data models, fixed effects methods are often criticized because they cannot identify average marginal effects (AMEs) in short panels. The common argument is that the identification of AMEs requires knowledge of the distribution of unobserved heterogeneity, but this distribution is not identified in a fixed effects model with a short panel. In this paper, we derive identification results that contradict this argument. In a panel data dynamic logit model, and for T as small as four, we prove the point identification of different AMEs, including causal effects of changes in the lagged dependent variable or in the duration in last choice. Our proofs are constructive and provide simple closed-form expressions for the AMEs in terms of probabilities of choice histories. We illustrate our results using Monte Carlo experiments and with an empirical application of a dynamic structural model of consumer brand choice with state dependence.
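For concreteness, here is a small simulation (our own sketch, with assumed parameter values, not the paper's closed-form identification argument) of the object in question: the AME of the lagged choice in a dynamic panel logit with unobserved heterogeneity alpha_i.

```python
# The AME of the lagged choice is E[ Lambda(alpha_i + gamma) - Lambda(alpha_i) ],
# averaged over the distribution of unobserved heterogeneity.
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
gamma = 0.9                                             # state-dependence coefficient (assumed value)
alpha = rng.normal(loc=-0.5, scale=1.0, size=100_000)   # simulated unobserved heterogeneity

ame = np.mean(logistic(alpha + gamma) - logistic(alpha))
print(f"AME of switching the lagged choice from 0 to 1: {ame:.3f}")
```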