
Cluster-Based Regularized Sliced Inverse Regression for Forecasting Macroeconomic Variables

Published by: Yue Yu
Publication date: 2011
Research field: Mathematical statistics
Language: English





This article concerns dimension reduction in regression for large data sets. We introduce a new method based on the sliced inverse regression approach, called cluster-based regularized sliced inverse regression. Our method not only keeps the merit of considering both response and predictor information, but also enhances the capability of handling highly correlated variables. It is justified under certain linearity conditions. An empirical application to a macroeconomic data set shows that our method outperforms the dynamic factor model and other shrinkage methods.
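The paper's method builds on classical sliced inverse regression (SIR). As context, here is a minimal sketch of plain SIR (Li, 1991) — not the paper's cluster-based regularized variant — in which the response is sliced, the standardized predictors are averaged within each slice, and the leading eigenvectors of the slice-mean covariance give the dimension-reduction directions. The function name and signature are illustrative, not from the paper.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Classical sliced inverse regression: estimate effective
    dimension-reduction directions from the inverse regression E[X | y]."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) @ Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice the sorted response and average Z within each slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original X scale
    _, vecs = np.linalg.eigh(M)
    return inv_sqrt @ vecs[:, -n_dirs:][:, ::-1]
```

On a toy model y = X @ beta + noise, the first recovered direction should align closely with beta. The regularized, cluster-based extension in the paper modifies this core step to cope with highly correlated predictors.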



Read also

Sliced inverse regression is one of the most popular sufficient dimension reduction methods. Originally, it was designed for independent and identically distributed data and was recently extended to the case of serially and spatially dependent data. In this work we extend it to the case of spatially dependent data where the response might also depend on neighbouring covariates when the observations are taken on a grid-like structure, as is often the case in econometric spatial regression applications. We suggest guidelines on how to decide upon the dimension of the subspace of interest and also which spatial lag might be of interest when modeling the response. These guidelines are supported by a simulation study.
Due to the demand for tackling the problem of streaming data with high-dimensional covariates, we propose an online sparse sliced inverse regression (OSSIR) method for online sufficient dimension reduction. The existing online sufficient dimension reduction methods focus on the case when the dimension $p$ is small. In this article, we show that our method can achieve better statistical accuracy and computation speed when the dimension $p$ is large. There are two important steps in our method: one is to extend online principal component analysis to iteratively obtain the eigenvalues and eigenvectors of the kernel matrix, the other is to use the truncated gradient to achieve online $L_{1}$ regularization. We also analyze the convergence of the extended Candid covariance-free incremental PCA (CCIPCA) and our method. By comparing several existing methods in simulations and real-data applications, we demonstrate the effectiveness and efficiency of our method.
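The second step named in the abstract, the truncated gradient, is a standard device for online $L_1$ regularization (Langford, Li and Zhang, 2009): after each gradient step, coefficients below a threshold are shrunk toward zero but never past it, yielding sparse iterates. A minimal sketch of one such update, with illustrative names and parameters (not the OSSIR authors' code):

```python
import numpy as np

def truncated_gradient_step(w, grad, lr, gravity, theta):
    """One truncated-gradient update: gradient step, then shrink
    coefficients with |w| <= theta toward zero by lr * gravity,
    truncating at zero so no coefficient crosses the origin."""
    w = w - lr * grad
    small = np.abs(w) <= theta
    w[small] = np.sign(w[small]) * np.maximum(np.abs(w[small]) - lr * gravity, 0.0)
    return w
```

Large coefficients pass through untouched; small ones decay and eventually hit exactly zero, which is what makes the online iterates sparse.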
Zhishen Ye, Jie Yang (2013)
We propose a new method for dimension reduction in regression using the first two inverse moments. We develop corresponding weighted chi-squared tests for the dimension of the regression. The proposed method considers linear combinations of Sliced Inverse Regression (SIR) and the method using a new candidate matrix which is designed to recover the entire inverse second moment subspace. The optimal combination may be selected based on the p-values derived from the dimension tests. Theoretically, the proposed method, as well as Sliced Average Variance Estimate (SAVE), is more capable of recovering the complete central dimension reduction subspace than SIR and Principal Hessian Directions (pHd). Therefore it can substitute for SIR, pHd, SAVE, or any linear combination of them at a theoretical level. A simulation study indicates that the proposed method may have consistently greater power than SIR, pHd, and SAVE.
We move beyond "Is Machine Learning Useful for Macroeconomic Forecasting?" by adding the how. The current forecasting literature has focused on matching specific variables and horizons with a particularly successful algorithm. In contrast, we study the usefulness of the underlying features driving ML gains over standard macroeconometric methods. We distinguish four so-called features (nonlinearities, regularization, cross-validation and alternative loss functions) and study their behavior in both the data-rich and data-poor environments. To do so, we design experiments that allow us to identify the treatment effects of interest. We conclude that (i) nonlinearity is the true game changer for macroeconomic prediction, (ii) the standard factor model remains the best regularization, (iii) K-fold cross-validation is the best practice and (iv) the $L_2$ loss is preferred to the $\bar{\epsilon}$-insensitive in-sample loss. The forecasting gains of nonlinear techniques are associated with high macroeconomic uncertainty, financial stress and housing bubble bursts. This suggests that Machine Learning is useful for macroeconomic forecasting by mostly capturing important nonlinearities that arise in the context of uncertainty and financial frictions.
We present a model for generating probabilistic forecasts by combining kernel density estimation (KDE) and quantile regression techniques, as part of the probabilistic load forecasting track of the Global Energy Forecasting Competition 2014. The KDE method is initially implemented with a time-decay parameter. We later improve this method by conditioning on the temperature or the period of the week variables to provide more accurate forecasts. Second, we develop a simple but effective quantile regression forecast. The novel aspects of our methodology are two-fold. First, we introduce symmetry into the time-decay parameter of the kernel density estimation based forecast. Second, we combine three probabilistic forecasts with different weights for different periods of the month.
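The core ingredient of the KDE forecast above — a kernel density over past observations with exponential time-decay weights, read off at the desired quantiles — can be sketched as follows. This is an illustrative reconstruction under assumed names and a Gaussian kernel, not the competition entry's actual code; in particular, the symmetric time-decay refinement the abstract describes is omitted.

```python
import numpy as np

def weighted_kde_quantiles(history, bandwidth, decay, quantiles):
    """Quantiles of a Gaussian-kernel density over past observations,
    with exponential time-decay weights (latest observation weighted most)."""
    history = np.asarray(history, dtype=float)
    n = len(history)
    w = decay ** np.arange(n - 1, -1, -1)  # weight 1.0 for the newest point
    w /= w.sum()
    # Evaluate the weighted kernel mixture on a grid and invert its CDF
    grid = np.linspace(history.min() - 3 * bandwidth,
                       history.max() + 3 * bandwidth, 2000)
    pdf = sum(wi * np.exp(-0.5 * ((grid - xi) / bandwidth) ** 2)
              for wi, xi in zip(w, history))
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(quantiles, cdf, grid)
```

With `decay = 1.0` this reduces to an unweighted KDE; values below 1 discount older observations, which is the time-decay idea the abstract refers to.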