
A Generalized Focused Information Criterion for GMM

Posted by Francis DiTraglia
Publication date: 2020
Research language: English





This paper proposes a criterion for simultaneous GMM model and moment selection: the generalized focused information criterion (GFIC). Rather than attempting to identify the true specification, the GFIC chooses from a set of potentially mis-specified moment conditions and parameter restrictions to minimize the mean-squared error (MSE) of a user-specified target parameter. The intent of the GFIC is to formalize a situation common in applied practice. An applied researcher begins with a set of fairly weak baseline assumptions, assumed to be correct, and must decide whether to impose any of a number of stronger, more controversial suspect assumptions that yield parameter restrictions, additional moment conditions, or both. Provided that the baseline assumptions identify the model, we show how to construct an asymptotically unbiased estimator of the asymptotic MSE to select over these suspect assumptions: the GFIC. We go on to provide results for post-selection inference and model averaging that can be applied both to the GFIC and various alternative selection criteria. To illustrate how our criterion can be used in practice, we specialize the GFIC to the problem of selecting over exogeneity assumptions and lag lengths in a dynamic panel model, and show that it performs well in simulations. We conclude by applying the GFIC to a dynamic panel data model for the price elasticity of cigarette demand.
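To make the selection rule concrete, here is a minimal, self-contained numerical sketch of the "minimize estimated asymptotic MSE of the target parameter" logic in the simplest possible setting: choosing between a baseline IV estimator and a suspect OLS estimator of a scalar slope. The toy data, the OLS/IV framing, and the Hausman-style variance algebra in the squared-bias step are ours; the paper's GFIC formulas, its handling of parameter restrictions, and its dynamic panel application are considerably richer.

```python
import numpy as np

# Toy data: z is an instrument assumed valid (baseline), x is mildly endogenous.
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = 1.0 * x + u                                  # true slope = 1.0

# Baseline estimator: IV, assumed consistent but relatively noisy.
beta_iv = (z @ y) / (z @ x)
s2_iv = np.mean((y - beta_iv * x) ** 2)
avar_iv = s2_iv * np.mean(z ** 2) / (np.mean(z * x) ** 2 * n)

# Suspect estimator: OLS, more precise but biased if exogeneity fails.
beta_ols = (x @ y) / (x @ x)
s2_ols = np.mean((y - beta_ols * x) ** 2)
avar_ols = s2_ols / (np.mean(x ** 2) * n)

# (beta_ols - beta_iv)^2 overstates the squared bias of OLS by roughly the
# variance of the difference, so subtract it to get an (asymptotically)
# unbiased squared-bias estimate; truncate at zero in finite samples.
bias_sq_hat = max((beta_ols - beta_iv) ** 2 - (avar_iv - avar_ols), 0.0)

# Estimated asymptotic MSE of the target (the slope) under each candidate,
# then pick the minimizer -- the GFIC-style decision rule.
amse = {"IV (baseline)": avar_iv, "OLS (suspect)": avar_ols + bias_sq_hat}
print(amse, "->", min(amse, key=amse.get))
```

The key move shared with the GFIC is the bias correction: the naive squared difference between suspect and baseline estimates is debiased before being added to the variance, so precise-but-biased candidates are only chosen when the estimated bias is small relative to the variance saving.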




Read also

We propose the double robust Lagrange multiplier (DRLM) statistic for testing hypotheses specified on the pseudo-true value of the structural parameters in the generalized method of moments. The pseudo-true value is defined as the minimizer of the population continuous updating objective function and equals the true value of the structural parameters in the absence of misspecification (Hansen, Heaton and Yaron, 1996). The (bounding) chi-squared limiting distribution of the DRLM statistic is robust to both misspecification and weak identification of the structural parameters, hence its name. To emphasize its importance for applied work, we use the DRLM test to analyze the return on education, which is often perceived to be weakly identified, using data from Card (1995), where misspecification occurs in the case of treatment heterogeneity; and to analyze the risk premia associated with the risk factors proposed in Adrian et al. (2014) and He et al. (2017), where both misspecification and weak identification need to be addressed.
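For readers unfamiliar with the objects named here, one standard way to write the continuous-updating objective and the resulting pseudo-true value is shown below; the notation is ours and not necessarily the authors'.

```latex
% Sample CU-GMM objective, with moment function g(X_i,\theta) and
% \hat{\Omega}(\theta) the sample variance of the moments evaluated at \theta:
\hat{Q}_n(\theta) \;=\; n\,\bar{g}_n(\theta)'\,\hat{\Omega}(\theta)^{-1}\,\bar{g}_n(\theta),
\qquad
\bar{g}_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} g(X_i,\theta).

% The pseudo-true value minimizes the population counterpart and coincides
% with the true value \theta_0 whenever E[g(X_i,\theta_0)] = 0 (no misspecification):
\theta^{*} \;=\; \arg\min_{\theta}\; E[g(X_i,\theta)]'\,\Omega(\theta)^{-1}\,E[g(X_i,\theta)].
```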
Structural estimation is an important methodology in empirical economics, and a large class of structural models are estimated through the generalized method of moments (GMM). Traditionally, selection among structural models has been based on model fit upon estimation using the entire observed sample. In this paper, we propose a model selection procedure based on cross-validation (CV), which uses a sample-splitting technique to avoid issues such as over-fitting. While CV is widely used in the machine learning community, we are the first to prove its consistency for model selection in the GMM framework. Its empirical properties are compared to those of existing methods through simulations of IV regressions and an oligopoly market model. In addition, we propose a way to apply our method within the Mathematical Programming with Equilibrium Constraints (MPEC) approach. Finally, we apply our method to online-retail sales data to compare a dynamic market model with a static model.
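As a rough picture of how sample splitting can enter a GMM workflow, the sketch below scores each candidate specification by the quadratic form of its held-out sample moments, averaged over K folds. The function names (`gmm_fit`, `cv_score`), the identity weighting, and the toy moment conditions are ours; the paper's exact criterion and its MPEC extension may differ.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_fit(moments, theta0, data):
    """Minimize an identity-weighted GMM objective on `data` (a dict of arrays)."""
    def obj(theta):
        g = moments(theta, data).mean(axis=0)
        return float(g @ g)
    return minimize(obj, theta0, method="Nelder-Mead").x

def cv_score(moments, theta0, data, k=5, seed=0):
    """K-fold CV criterion: average held-out quadratic form of the sample moments."""
    n = len(next(iter(data.values())))
    idx = np.random.default_rng(seed).permutation(n)
    scores = []
    for held in np.array_split(idx, k):
        train = np.setdiff1d(idx, held)
        subset = lambda rows: {key: val[rows] for key, val in data.items()}
        theta_hat = gmm_fit(moments, theta0, subset(train))  # fit on training folds
        g_out = moments(theta_hat, subset(held)).mean(axis=0)  # score on held-out fold
        scores.append(float(g_out @ g_out))
    return float(np.mean(scores))

# Toy comparison: a misspecified linear model vs. the correct quadratic model,
# both scored with the same two moment conditions; pick the lower CV score.
rng = np.random.default_rng(1)
x = rng.normal(size=600)
y = 1.0 * x + 0.5 * x ** 2 + rng.normal(size=600)
data = {"y": y, "x": x}
lin = lambda t, d: (d["y"] - t[0] * d["x"])[:, None] * np.column_stack([d["x"], d["x"] ** 2])
quad = lambda t, d: (d["y"] - t[0] * d["x"] - t[1] * d["x"] ** 2)[:, None] * np.column_stack([d["x"], d["x"] ** 2])
print("linear:", cv_score(lin, [0.0], data), "quadratic:", cv_score(quad, [0.0, 0.0], data))
```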
Yong Li, Xiaobin Liu, Jun Yu (2018)
In this paper, a new and convenient $\chi^2$ Wald test based on MCMC output is proposed for hypothesis testing. The new statistic can be interpreted as an MCMC version of the Wald test and has several important advantages that make it very convenient in practical applications. First, it is well-defined under improper prior distributions and avoids the Jeffreys-Lindley paradox. Second, its asymptotic distribution can be shown to follow the $\chi^2$ distribution, so that threshold values can be easily calibrated from this distribution. Third, its statistical error can be derived using the Markov chain Monte Carlo (MCMC) approach. Fourth, and most importantly, it is based only on posterior MCMC random samples drawn from the posterior distribution; hence, it is a by-product of the posterior output and very easy to compute. In addition, when prior information is available, finite-sample theory is derived for the proposed test statistic. Finally, the usefulness of the test is illustrated with several applications to latent variable models widely used in economics and finance.
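Because the statistic is described as a by-product of posterior output, a generic way to build a Wald-type quadratic form from MCMC draws is sketched below. The function `mcmc_wald` and the scaling by the posterior covariance are our illustration of the idea, not necessarily the authors' exact construction.

```python
import numpy as np
from scipy import stats

def mcmc_wald(draws, restriction):
    """Wald-type quadratic form computed purely from posterior draws.

    draws: (S, p) array of MCMC samples of theta.
    restriction: function theta -> r-vector; the null is restriction(theta) = 0.
    Returns the statistic and a chi-squared(r) tail p-value."""
    g = np.asarray([np.atleast_1d(restriction(th)) for th in draws])  # (S, r)
    g_bar = g.mean(axis=0)                          # posterior mean of the restriction
    v = np.atleast_2d(np.cov(g, rowvar=False))      # posterior covariance of the restriction
    w = float(g_bar @ np.linalg.solve(v, g_bar))    # quadratic form
    return w, float(stats.chi2.sf(w, df=g.shape[1]))

# Toy usage with stand-in 'posterior draws' in place of real MCMC output:
draws = np.random.default_rng(2).normal(loc=[0.3, -0.1], scale=0.1, size=(5000, 2))
print(mcmc_wald(draws, lambda theta: theta))        # H0: theta = 0
```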
The Pareto model is very popular in risk management, since simple analytical formulas can be derived for financial downside risk measures (Value-at-Risk, Expected Shortfall) or reinsurance premiums and related quantities (Large Claim Index, Return Period). Nevertheless, in practice, distributions are (strictly) Pareto only in the tails, above a (possibly very) large threshold. Therefore, it could be interesting to take second-order behavior into account to provide a better fit. In this article, we present how to go from a strict Pareto model to Pareto-type distributions. We discuss inference, derive formulas for various measures and indices, and finally provide applications to insurance losses and financial risks.
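As a concrete reference point for the "simple analytical formulas" in the strict Pareto case, the sketch below pairs the Hill estimator of the tail index with the closed-form Value-at-Risk and Expected Shortfall of a Pareto tail. The helper names (`hill_alpha`, `pareto_var_es`) and the choice of k are ours; the threshold selection and second-order (Pareto-type) corrections discussed in the article are not modelled here.

```python
import numpy as np

def hill_alpha(losses, k):
    """Hill estimator of the Pareto tail index from the k largest losses."""
    x = np.sort(np.asarray(losses))
    top, u = x[-k:], x[-k - 1]                       # k exceedances above threshold u
    return k / np.sum(np.log(top) - np.log(u)), u

def pareto_var_es(losses, k, q=0.99):
    """VaR and ES at level q under a strict Pareto tail above the threshold."""
    alpha, u = hill_alpha(losses, k)
    n = len(losses)
    var_q = u * (k / (n * (1.0 - q))) ** (1.0 / alpha)   # tail quantile of the fitted Pareto
    es_q = var_q * alpha / (alpha - 1.0) if alpha > 1 else np.inf
    return var_q, es_q

# Toy usage on simulated strict-Pareto(alpha = 2) losses (true VaR_0.99 = 10, ES = 20):
rng = np.random.default_rng(3)
losses = (1.0 - rng.uniform(size=10_000)) ** (-1.0 / 2.0)   # inverse-CDF draw
print(pareto_var_es(losses, k=500, q=0.99))
```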
We develop monitoring procedures for cointegrating regressions, testing the null of no breaks against the alternatives that there is either a change in the slope or a change to non-cointegration. After observing the regression over a calibration sample m, we study a CUSUM-type statistic to detect the presence of a change during a monitoring horizon m+1,...,T. Our procedures use a class of boundary functions which depend on a parameter whose value affects the delay in detecting a possible break. Technically, these procedures are based on almost-sure limit theorems whose derivation is not straightforward. We therefore define a monitoring function which, at every point in time, diverges to infinity under the null and drifts to zero under the alternatives. We cast this sequence in a randomised procedure to construct an i.i.d. sequence, which we then employ to define the detector function. Our monitoring procedure rejects the null of no break, when it is correct, only with a small probability, whilst it rejects with probability one over the monitoring horizon in the presence of breaks.
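A heavily stylised version of residual-based monitoring might look like the sketch below: fit the regression on the calibration sample 1..m, then track the cumulative sum of standardised out-of-sample residuals against a simple linear boundary. The function `monitor_cusum`, the constant `crit`, and the boundary form are ours; the paper's boundary-function class, almost-sure asymptotics, and randomised detector are not reproduced.

```python
import numpy as np

def monitor_cusum(y, x, m, crit=3.0):
    """Return the first monitoring period whose residual CUSUM crosses an
    illustrative linear boundary, or None if no alarm is raised."""
    X = np.column_stack([np.ones(m), x[:m]])
    beta = np.linalg.lstsq(X, y[:m], rcond=None)[0]       # calibration-sample fit
    sigma = np.std(y[:m] - X @ beta, ddof=2)              # residual scale from calibration
    cusum = 0.0
    for t in range(m, len(y)):
        cusum += (y[t] - beta[0] - beta[1] * x[t]) / sigma          # standardised residual
        s = t - m + 1
        if abs(cusum) > crit * (np.sqrt(m) + s / np.sqrt(m)):       # illustrative boundary
            return t                                                # alarm: possible break
    return None

# Toy usage: cointegrated pair with a slope break during the monitoring horizon.
rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(size=400))                       # I(1) regressor
slope = np.where(np.arange(400) < 300, 1.0, 1.5)          # break at t = 300
y = 2.0 + slope * x + rng.normal(scale=0.5, size=400)
print(monitor_cusum(y, x, m=200))
```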