
Sequential monitoring for cointegrating regressions

Published by Lorenzo Trapani
Publication date: 2020
Research language: English





We develop monitoring procedures for cointegrating regressions, testing the null of no breaks against the alternatives that there is either a change in the slope, or a change to non-cointegration. After observing the regression over a calibration sample of size m, we study a CUSUM-type statistic to detect the presence of a change during a monitoring horizon m+1, ..., T. Our procedures use a class of boundary functions which depend on a parameter whose value affects the delay in detecting the possible break. Technically, these procedures are based on almost sure limit theorems whose derivation is not straightforward. We therefore define a monitoring function which, at every point in time, diverges to infinity under the null and drifts to zero under the alternatives. We cast this sequence into a randomised procedure to construct an i.i.d. sequence, which we then employ to define the detector function. Our monitoring procedure rejects the null of no break (when it is correct) with a small probability, whilst it rejects with probability one over the monitoring horizon in the presence of a break.
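The detector-versus-boundary logic described above can be illustrated with a short sketch. The Python code below is a minimal, generic CUSUM-with-boundary monitor, not the paper's randomised detector: the function name cusum_monitor, the boundary constant c, and the shape parameter gamma (which governs the detection-delay trade-off mentioned in the abstract) are all illustrative choices.

```python
import numpy as np

def cusum_monitor(y, x, m, c=1.0, gamma=0.25):
    """Generic CUSUM monitoring sketch for y_t = a + b*x_t + u_t.

    Estimate the regression on the calibration sample 1..m, then track
    the cumulated post-sample residuals against a boundary function
    whose parameter gamma trades off detection delay, as in classical
    sequential monitoring schemes.
    """
    T = len(y)
    Xm = np.column_stack([np.ones(m), x[:m]])
    beta = np.linalg.lstsq(Xm, y[:m], rcond=None)[0]   # calibration OLS
    u = y - np.column_stack([np.ones(T), x]) @ beta    # full-sample residuals
    sigma = u[:m].std(ddof=2)                          # scale from calibration
    for t in range(m, T):
        k = t - m + 1                                  # monitoring step
        detector = abs(u[m:t + 1].sum()) / (sigma * np.sqrt(m))
        boundary = c * (1 + k / m) * (k / (k + m)) ** gamma
        if detector > boundary:
            return t                                   # first boundary crossing
    return None                                        # no break signalled

# Usage: cointegrated pair with a slope change after t = 300.
rng = np.random.default_rng(1)
x = rng.normal(size=500).cumsum()
y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=500)
y[300:] += 0.4 * x[300:]
print(cusum_monitor(y, x, m=200))  # alarm time, or None under the null
```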




Read also

We propose a novel conditional quantile prediction method based on complete subset averaging (CSA) for quantile regressions. All models under consideration are potentially misspecified, and the dimension of the regressors goes to infinity as the sample size increases. Since we average over the complete subsets, the number of models is much larger than in the usual model averaging methods, which adopt sophisticated weighting schemes. We propose to use equal weights but to select the proper size of the complete subset based on the leave-one-out cross-validation method. Building upon the theory of Lu and Su (2015), we investigate the large sample properties of CSA and show its asymptotic optimality in the sense of Li (1987). We check the finite sample performance via Monte Carlo simulations and empirical applications.
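As a rough illustration of the CSA idea, the sketch below fits a quantile regression on every size-k subset of the regressors and averages the predictions with equal weights. It is a small-p toy assuming statsmodels is available; the paper's choice of k by leave-one-out cross-validation of the check loss is noted in a comment rather than implemented.

```python
import numpy as np
from itertools import combinations
import statsmodels.api as sm

def csa_quantile_predict(X, y, x_new, tau=0.5, k=2):
    """Equal-weight complete subset averaging for quantile prediction.

    Fits a quantile regression at level tau on every size-k subset of
    the columns of X and averages the predictions at x_new.  In the
    paper, k is selected by leave-one-out cross-validation of the
    check (pinball) loss; here k is passed in directly.
    """
    preds = []
    for subset in combinations(range(X.shape[1]), k):
        cols = list(subset)
        Xs = sm.add_constant(X[:, cols])          # intercept + subset
        fit = sm.QuantReg(y, Xs).fit(q=tau)
        row = np.r_[1.0, x_new[cols]].reshape(1, -1)
        preds.append(fit.predict(row)[0])
    return float(np.mean(preds))                  # equal-weight average
```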
The Environmental Kuznets Curve (EKC) predicts an inverted U-shaped relationship between economic growth and environmental pollution. Current analyses frequently employ models which restrict the nonlinearities in the data to be explained by the economic growth variable only. We propose a Generalized Cointegrating Polynomial Regression (GCPR) with flexible time trends to proxy time effects such as technological progress and/or environmental awareness. More specifically, a GCPR includes flexible powers of deterministic trends and integer powers of stochastic trends. We estimate the GCPR by nonlinear least squares and derive its asymptotic distribution. Endogeneity of the regressors can introduce nuisance parameters into this limiting distribution, but a simulated approach nevertheless enables us to conduct valid inference. Moreover, a subsampling KPSS test can be used to check the stationarity of the errors. A comprehensive simulation study shows good performance of the simulated inference approach and the subsampling KPSS test. We illustrate the GCPR approach on a dataset of 18 industrialised countries containing GDP and CO2 emissions. We conclude that: (1) the evidence for an EKC is significantly reduced when a nonlinear time trend is included, and (2) a linear cointegrating relation between GDP and CO2 around a power-law trend also provides an accurate description of the data.
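A minimal NLS sketch of the GCPR idea, assuming a toy specification with one integer power of the stochastic trend beyond the linear term and a single deterministic trend whose power theta is freely estimated (the paper's model and inference procedure are considerably richer):

```python
import numpy as np
from scipy.optimize import least_squares

def gcpr_fit(y, x, theta0=1.5):
    """Toy GCPR: y_t = c + b1*x_t + b2*x_t**2 + d*t**theta + u_t.

    The deterministic-trend power theta is estimated jointly with the
    remaining coefficients by nonlinear least squares.
    """
    t = np.arange(1.0, len(y) + 1)

    def resid(params):
        c, b1, b2, d, theta = params
        return y - (c + b1 * x + b2 * x**2 + d * t**theta)

    start = np.array([0.0, 0.0, 0.0, 0.0, theta0])
    return least_squares(resid, start).x  # (c, b1, b2, d, theta)
```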
Wenjie Wang, Yichong Zhang (2021)
We study the wild bootstrap inference for instrumental variable (quantile) regressions in the framework of a small number of large clusters, in which the number of clusters is viewed as fixed and the number of observations for each cluster diverges to infinity. For subvector inference, we show that the wild bootstrap Wald test, with or without using the cluster-robust covariance matrix, controls size asymptotically up to a small error as long as the parameters of the endogenous variables are strongly identified in at least one of the clusters. We further develop a wild bootstrap Anderson-Rubin (AR) test for full-vector inference and show that it controls size asymptotically up to a small error even under weak or partial identification for all clusters. We illustrate the good finite-sample performance of the new inference methods using simulations and provide an empirical application to a well-known dataset about U.S. local labor markets.
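The cluster-level resampling at the heart of such procedures can be sketched very simply. The code below is a generic wild (Rademacher) cluster bootstrap applied to per-cluster score contributions of a statistic; it is a stand-in for, not an implementation of, the paper's bootstrap Wald and AR tests, and the function name and inputs are hypothetical.

```python
import numpy as np

def wild_cluster_bootstrap_pvalue(score_by_cluster, B=999, rng=None):
    """Wild cluster bootstrap p-value from per-cluster score sums.

    With a small, fixed number of clusters, recompute the statistic
    under i.i.d. Rademacher sign flips at the cluster level and
    compare with the observed value.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    s = np.asarray(score_by_cluster, dtype=float)
    stat = abs(s.sum())                           # observed statistic
    boot = np.empty(B)
    for b in range(B):
        e = rng.choice([-1.0, 1.0], size=len(s))  # cluster sign flips
        boot[b] = abs((e * s).sum())
    return (1 + (boot >= stat).sum()) / (B + 1)
```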
Bartik regressions use locations' differential exposure to nationwide sector-level shocks as an instrument to estimate the effect of a location-level treatment on an outcome. In the canonical Bartik design, locations' differential exposure to industry-level employment shocks is used as an instrument to measure the effect of their employment evolution on their wage evolution. Some recent papers studying Bartik designs have assumed that the sector-level shocks are exogenous and all have the same expectation. This second assumption may sometimes be implausible. For instance, there could be industries whose employment is more likely to grow than that of other industries. We replace that second assumption by parallel trends assumptions. Under our assumptions, Bartik regressions identify weighted sums of location-specific effects, with weights that may be negative. Accordingly, such regressions may be misleading in the presence of heterogeneous effects, an issue that was not present under the assumptions maintained in previous papers. Estimating the weights attached to Bartik regressions is a way to assess their robustness to heterogeneous effects. We also propose an alternative estimator that is robust to location-specific effects. Finally, we revisit two applications. In both cases, Bartik regressions have fairly large negative weights attached to them. Our alternative estimator is substantially different from the Bartik regression coefficient in one application.
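For concreteness, here is a minimal sketch of the canonical shift-share construction and the resulting just-identified IV slope; the array shapes and function names are illustrative, and the estimator shown is the plain Bartik regression coefficient rather than the authors' robust alternative.

```python
import numpy as np

def bartik_instrument(shares, shocks):
    """Shift-share instrument: baseline location-by-sector shares
    (L x S) interacted with nationwide sector-level shocks (S,)."""
    return shares @ shocks

def bartik_iv_slope(z, x, y):
    """Just-identified IV slope cov(z, y) / cov(z, x), using the
    Bartik instrument z for the location-level treatment x."""
    zc, xc, yc = z - z.mean(), x - x.mean(), y - y.mean()
    return (zc @ yc) / (zc @ xc)
```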
The Pareto model is very popular in risk management, since simple analytical formulas can be derived for financial downside risk measures (Value-at-Risk, Expected Shortfall) or reinsurance premiums and related quantities (Large Claim Index, Return Period). Nevertheless, in practice, distributions are (strictly) Pareto only in the tails, above a (possibly very) large threshold. Therefore, it can be interesting to take second-order behavior into account to provide a better fit. In this article, we present how to go from a strict Pareto model to Pareto-type distributions. We discuss inference, derive formulas for various measures and indices, and finally provide applications to insurance losses and financial risks.
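For the strict Pareto benchmark the article starts from, the closed forms for the two downside risk measures are standard: with threshold u > 0 and tail index alpha > 1, so that P(X > x) = (x/u)^(-alpha) for x >= u,

```latex
\mathrm{VaR}_p = u\,(1-p)^{-1/\alpha},
\qquad
\mathrm{ES}_p = \mathbb{E}\left[X \mid X > \mathrm{VaR}_p\right]
              = \frac{\alpha}{\alpha-1}\,\mathrm{VaR}_p .
```

The Pareto-type (second-order) refinements discussed in the article perturb these formulas away from the strict power law.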