
Bootstrap-Assisted Unit Root Testing With Piecewise Locally Stationary Errors

Added by Yeonwoo Rho
Publication date: 2018
Field: Economics
Language: English





In unit root testing, a piecewise locally stationary process is adopted to accommodate nonstationary errors that can have both smooth and abrupt changes in second- or higher-order properties. Under this framework, the limiting null distributions of the conventional unit root test statistics are derived and shown to contain a number of unknown parameters. To circumvent the difficulty of direct consistent estimation, we propose to use the dependent wild bootstrap to approximate the non-pivotal limiting null distributions and provide a rigorous theoretical justification for bootstrap consistency. The proposed method is compared through finite sample simulations with the recolored wild bootstrap procedure, which was developed for errors that follow a heteroscedastic linear process. Further, a combination of autoregressive sieve recoloring with the dependent wild bootstrap is shown to perform well. The validity of the dependent wild bootstrap in a nonstationary setting is demonstrated for the first time, showing the possibility of extensions to other inference problems associated with locally stationary processes.
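The dependent wild bootstrap itself is a known device (residual-multiplying Gaussian weights that are dependent within a bandwidth window). A minimal sketch of how it could approximate the null distribution of a Dickey-Fuller coefficient statistic follows; the kernel, bandwidth, statistic, and toy data are illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def df_stat(y):
    # Dickey-Fuller coefficient statistic n*(rho_hat - 1) from regressing
    # y_t on y_{t-1}; no deterministic terms, purely for illustration
    ylag, ycur = y[:-1], y[1:]
    rho = (ylag @ ycur) / (ylag @ ylag)
    return len(ycur) * (rho - 1.0)

def dwb_unit_root_test(y, l=10, B=499, seed=0):
    # Approximate the null distribution of df_stat with the dependent wild
    # bootstrap: N(0,1) weights that are dependent within a window of
    # length l, here via a Bartlett-kernel covariance
    rng = np.random.default_rng(seed)
    e = np.diff(y)                      # residuals under the unit-root null
    n = len(e)
    idx = np.arange(n)
    cov = np.maximum(0.0, 1.0 - np.abs(idx[:, None] - idx[None, :]) / l)
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))   # jitter for stability
    stats = np.empty(B)
    for b in range(B):
        w = L @ rng.standard_normal(n)               # dependent wild weights
        ystar = np.concatenate(([0.0], np.cumsum(e * w)))  # rebuilt null series
        stats[b] = df_stat(ystar)
    t_obs = df_stat(y)
    return t_obs, np.mean(stats <= t_obs)            # left-tailed p-value

# toy data: a unit-root process with an abrupt change in error variance
rng = np.random.default_rng(1)
sigma = np.where(np.arange(300) < 150, 1.0, 3.0)
y = np.cumsum(sigma * rng.standard_normal(300))
t_obs, pval = dwb_unit_root_test(y)
```

Because the bootstrap weights inherit the ordering of the residuals, second-order features such as the abrupt variance change survive in the resampled series, which is the point of using a dependent rather than i.i.d. wild bootstrap here.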

Related research

This paper studies the asymptotic convergence of computed dynamic models when the shock is unbounded. Most dynamic economic models lack a closed-form solution. As such, approximate solutions by numerical methods are utilized. Since the researcher cannot directly evaluate the exact policy function and the associated exact likelihood, it is imperative to establish that, and under which conditions, the approximate likelihood converges asymptotically to the exact likelihood, in order to justify and validate its usage. In this regard, Fernandez-Villaverde, Rubio-Ramirez, and Santos (2006) show convergence of the likelihood when the shock has compact support. However, compact support implies that the shock is bounded, which is not an assumption met in most dynamic economic models, e.g., with normally distributed shocks. This paper provides theoretical justification for most dynamic models used in the literature by showing the conditions for convergence of the approximate invariant measure obtained from numerical simulations to the exact invariant measure, thus providing the conditions for convergence of the likelihood.
The Environmental Kuznets Curve (EKC) predicts an inverted U-shaped relationship between economic growth and environmental pollution. Current analyses frequently employ models that restrict the nonlinearities in the data to be explained by the economic growth variable only. We propose a Generalized Cointegrating Polynomial Regression (GCPR) with flexible time trends to proxy time effects such as technological progress and/or environmental awareness. More specifically, a GCPR includes flexible powers of deterministic trends and integer powers of stochastic trends. We estimate the GCPR by nonlinear least squares and derive its asymptotic distribution. Endogeneity of the regressors can introduce nuisance parameters into this limiting distribution, but a simulated approach nevertheless enables us to conduct valid inference. Moreover, a subsampling KPSS test can be used to check the stationarity of the errors. A comprehensive simulation study shows good performance of the simulated inference approach and the subsampling KPSS test. We illustrate the GCPR approach on a dataset of 18 industrialised countries containing GDP and CO2 emissions. We conclude that: (1) the evidence for an EKC is significantly reduced when a nonlinear time trend is included, and (2) a linear cointegrating relation between GDP and CO2 around a power law trend also provides an accurate description of the data.
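As an illustration of the GCPR model class only (not the paper's estimator, inference procedure, or data), a mean function with one flexible power of the deterministic trend plus integer powers of a simulated I(1) regressor can be fit by nonlinear least squares; all names and parameter values here are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# GCPR-style mean function: a flexible (non-integer) power of the
# deterministic trend plus integer powers of the stochastic regressor
def gcpr(X, mu, alpha, theta, b1, b2):
    t, x = X
    return mu + alpha * t**theta + b1 * x + b2 * x**2

rng = np.random.default_rng(0)
n = 200
t = np.arange(1, n + 1) / n                     # scaled deterministic trend
x = np.cumsum(rng.standard_normal(n))           # simulated I(1) "log GDP"
y = gcpr((t, x), 1.0, 2.0, 0.5, 0.8, -0.05) + 0.2 * rng.standard_normal(n)

# nonlinear least squares over (mu, alpha, theta, b1, b2)
popt, _ = curve_fit(gcpr, (t, x), y, p0=[0.0, 1.0, 1.0, 0.0, 0.0])
```

The trend power theta enters nonlinearly, which is why ordinary least squares does not suffice and the paper works with a nonlinear least squares estimator and its nonstandard limiting distribution.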
Zheng Fang (2021)
This paper makes the following original contributions. First, we develop a unifying framework for testing shape restrictions based on the Wald principle. The test has asymptotic uniform size control and is uniformly consistent. Second, we examine the applicability and usefulness of some prominent shape-enforcing operators in implementing our framework. In particular, in stark contrast to its use in point and interval estimation, the rearrangement operator is inapplicable due to a lack of convexity. The greatest convex minorization and the least concave majorization are shown to enjoy the analytic properties required to employ our framework. Third, we show that, although the projection operator may not be well defined or well behaved in general parameter spaces such as those defined by uniform norms, one may nonetheless employ a powerful distance-based test by applying our framework. Monte Carlo simulations confirm that our test works well. We further showcase the empirical relevance by investigating the relationship between weekly working hours and annual wage growth in the high-end labor market.
Joel L. Horowitz (2018)
This paper presents a simple method for carrying out inference in a wide variety of possibly nonlinear IV models under weak assumptions. The method is non-asymptotic in the sense that it provides a finite sample bound on the difference between the true and nominal probabilities of rejecting a correct null hypothesis. The method is a non-Studentized version of the Anderson-Rubin test but is motivated and analyzed differently. In contrast to the conventional Anderson-Rubin test, the method proposed here does not require restrictive distributional assumptions, linearity of the estimated model, or simultaneous equations. Nor does it require knowledge of whether the instruments are strong or weak. It does not require testing or estimating the strength of the instruments. The method can be applied to quantile IV models that may be nonlinear and can be used to test a parametric IV model against a nonparametric alternative. The results presented here hold in finite samples, regardless of the strength of the instruments.
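For context, the conventional Anderson-Rubin test that this paper departs from can be sketched as follows. This is the textbook version for a linear IV model under homoskedastic errors, with an F critical value; the paper's non-Studentized, finite-sample variant is different and is not reproduced here. The simulated design is hypothetical:

```python
import numpy as np
from scipy.stats import f as f_dist

def anderson_rubin(y, X, Z, beta0):
    # Test H0: beta = beta0 in y = X @ beta + u with instruments Z by
    # regressing the restricted residual u0 on Z and F-testing its fit
    n, k = Z.shape
    u0 = y - X @ beta0
    proj = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]   # projection of u0 onto Z
    ar = (proj @ proj / k) / ((u0 - proj) @ (u0 - proj) / (n - k))
    return ar, f_dist.sf(ar, k, n - k)                 # F(k, n-k) p-value

# simulated linear IV model with an endogenous regressor (beta = 2)
rng = np.random.default_rng(0)
n = 500
Z = rng.standard_normal((n, 3))
v = rng.standard_normal(n)
x = Z @ np.array([1.0, 0.5, 0.2]) + v                  # first stage
y = 2.0 * x + 0.5 * v + rng.standard_normal(n)         # endogenous error
ar_true, p_true = anderson_rubin(y, x[:, None], Z, np.array([2.0]))
ar_far,  p_far  = anderson_rubin(y, x[:, None], Z, np.array([0.0]))
```

A key feature the paper retains is that the test inverts a statement about the residuals at the hypothesized parameter value, so no first-stage estimation of instrument strength is needed; what the paper removes are the distributional and linearity assumptions behind the F calibration above.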
Yinchu Zhu (2021)
We consider the setting in which a strong binary instrument is available for a binary treatment. The traditional LATE approach assumes the monotonicity condition, which states that there are no defiers (or no compliers). Since this condition is not always obvious, we investigate its sensitivity and testability. In particular, we focus on the question: does a slight violation of monotonicity lead to a small problem or a big problem? We find a phase transition for the monotonicity condition. On one side of the phase-transition boundary it is easy to learn the sign of the LATE, while on the other side it is impossible. Unfortunately, the impossible side of the phase transition includes data-generating processes under which the proportion of defiers tends to zero. This boundary is explicitly characterized in the case of binary outcomes. Outside a special case, it is impossible to test whether the data-generating process is on the nice side of the boundary. However, in the special case in which non-compliance is almost one-sided, such a test is possible. We also provide simple alternatives to monotonicity.
