
Averaging orthogonal projectors

Published by: Eero Liski
Publication date: 2012
Research field: Mathematical Statistics
Paper language: English





Dimensionality is a major concern in analyzing large data sets. Some well known dimension reduction methods are, for example, principal component analysis (PCA), invariant coordinate selection (ICS), sliced inverse regression (SIR), sliced average variance estimate (SAVE), principal Hessian directions (PHD) and the inverse regression estimator (IRE). However, these methods are usually adequate for finding only certain types of structures or dependencies within the data, which calls for combining information coming from several different dimension reduction methods. We propose a generalization of the Crone and Crosby distance, a weighted distance that makes it possible to combine subspaces of different dimensions. Some natural choices of weights are considered in detail. Based on the weighted distance metric, we also discuss the concept of averages of subspaces as a way to combine various dimension reduction methods. The performance of the weighted distances and of the combining approach is illustrated via simulations.
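To make the idea concrete, the sketch below (not the authors' implementation) computes a projector-based distance between two subspaces and a simple average subspace, assuming the Crone and Crosby distance is taken as the scaled Frobenius norm of the difference of the orthogonal projectors; the weighting scheme and the rule for choosing the dimension of the average are illustrative choices only.

```python
# Minimal sketch: projector-based subspace distance and an "average" subspace.
# Assumes the Crone-Crosby distance is ||P_A - P_B||_F / sqrt(2); the weights
# and the dimension rule below are illustrative, not the paper's proposal.
import numpy as np

def projector(basis):
    """Orthogonal projector onto the column space of `basis` (p x k)."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

def crone_crosby(basis_a, basis_b):
    """Scaled Frobenius distance between projectors; well defined even when
    the two subspaces have different dimensions."""
    return np.linalg.norm(projector(basis_a) - projector(basis_b), ord="fro") / np.sqrt(2)

def average_subspace(bases, weights=None, dim=None):
    """Average the (weighted) projectors and return an orthonormal basis of the
    span of the leading eigenvectors of the averaged projector."""
    weights = np.ones(len(bases)) / len(bases) if weights is None else np.asarray(weights)
    p_bar = sum(w * projector(b) for w, b in zip(weights, bases))
    vals, vecs = np.linalg.eigh(p_bar)                      # eigenvalues in ascending order
    k = dim if dim is not None else int(np.sum(vals > 0.5))  # simple illustrative dimension rule
    return vecs[:, ::-1][:, :k]                             # leading eigenvectors

# Example: combine directions suggested by two hypothetical methods.
rng = np.random.default_rng(0)
B1, B2 = rng.normal(size=(5, 2)), rng.normal(size=(5, 3))
print(crone_crosby(B1, B2))
print(average_subspace([B1, B2]).shape)
```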


Read also

Miaomiao Wang, Guohua Zou (2019)
Model averaging accounts for model uncertainty and is an alternative to model selection. In this paper, we propose a frequentist model averaging estimator for composite quantile regression. In recent years, research on these two topics has been carried out separately, but no study has investigated them in combination. We apply a delete-one cross-validation method to estimate the model weights, and prove that the jackknife model averaging estimator is asymptotically optimal in terms of minimizing the out-of-sample composite final prediction error. Simulations are conducted to demonstrate the good finite-sample properties of our estimator and to compare it with commonly used model selection and averaging methods. The proposed method is applied to the analysis of stock returns data and wage data and performs well.
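As an illustration of the delete-one (jackknife) weight choice described above, the following sketch computes jackknife model averaging weights for ordinary least-squares candidate models; the composite quantile regression setting of the paper is not reproduced, and the function names and the simplex-constrained optimizer are illustrative choices.

```python
# Hedged sketch of jackknife (delete-one) model averaging for OLS candidates,
# not the paper's composite quantile regression estimator.
import numpy as np
from scipy.optimize import minimize

def loo_predictions(X, y):
    """Leave-one-out fitted values of an OLS model via the hat-matrix shortcut."""
    H = X @ np.linalg.solve(X.T @ X, X.T)
    fitted = H @ y
    return y - (y - fitted) / (1.0 - np.diag(H))

def jma_weights(candidates, y):
    """Weights on the simplex minimizing the delete-one prediction error."""
    P = np.column_stack([loo_predictions(X, y) for X in candidates])
    m = P.shape[1]
    obj = lambda w: np.sum((y - P @ w) ** 2)
    res = minimize(obj, np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

# Toy usage with three nested candidate models.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = X[:, :2] @ np.array([1.0, -0.5]) + rng.normal(size=100)
print(jma_weights([X[:, :1], X[:, :2], X], y))
```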
This paper is concerned with model averaging estimation for partially linear functional score models. These models predict a scalar response using both the parametric effects of scalar predictors and the nonparametric effect of a functional predictor. Within this context, we develop a Mallows-type criterion for choosing the weights. The resulting model averaging estimator is proved to be asymptotically optimal under certain regularity conditions, in the sense of achieving the smallest possible squared error loss. Simulation studies demonstrate its superiority or comparability to information-criterion-score-based model selection and averaging estimators. The proposed procedure is also applied to two real data sets for illustration. The fact that the components of the nonparametric part are unobservable leads to a more complicated situation than for ordinary partially linear models (PLM) and to a theoretical derivation different from that of PLM.
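For reference, the sketch below shows the classical Mallows model averaging criterion for linear candidate models (squared error plus a 2*sigma^2 penalty on the weighted number of parameters); the paper's criterion for partially linear functional score models is analogous but not reproduced here, and the helper names are illustrative.

```python
# Hedged sketch of the classical Mallows model averaging criterion for linear
# candidate models, not the paper's functional-data criterion.
import numpy as np
from scipy.optimize import minimize

def mallows_weights(candidates, y, sigma2):
    """Weights minimizing ||y - sum_m w_m yhat_m||^2 + 2*sigma2*sum_m w_m k_m."""
    fits, dims = [], []
    for X in candidates:
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        fits.append(X @ beta)
        dims.append(X.shape[1])
    F, k = np.column_stack(fits), np.asarray(dims, dtype=float)
    m = len(candidates)
    obj = lambda w: np.sum((y - F @ w) ** 2) + 2.0 * sigma2 * (w @ k)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

# Toy usage: estimate sigma^2 from the largest candidate model.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))
y = X[:, :2] @ np.array([1.0, -0.5]) + rng.normal(size=100)
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
print(mallows_weights([X[:, :1], X[:, :2], X], y, resid @ resid / (100 - 4)))
```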
Yufan Li (2019)
The Orthogonal Generalized Autoregressive Conditional Heteroskedasticity (OGARCH) model is widely used in the finance industry to produce volatility and correlation forecasts. We show that the classic OGARCH model nevertheless tends to be too slow in reflecting sudden changes in market conditions due to excessive persistence of the integral univariate GARCH processes. To obtain more flexibility to accommodate abrupt market changes, e.g. a financial crisis, we extend the classic OGARCH model by incorporating a two-state Markov regime-switching GARCH process. This novel construction allows us to capture recurrent systemic regime shifts. Empirical results show that this generalization resolves the problem of excessive persistence effectively and greatly enhances OGARCH's ability to adapt to sudden market breaks while preserving OGARCH's most attractive features, such as dimension reduction and multi-step-ahead forecasting. By constructing a global minimum variance portfolio (GMVP), we are able to demonstrate significant outperformance of the extended model over the classic OGARCH model and the commonly used Exponentially Weighted Moving Average (EWMA) model. In addition, we show that the extended model is superior to OGARCH and EWMA in terms of predictive accuracy.
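The benchmark pieces of that comparison, the EWMA covariance forecast and the GMVP weights, can be sketched as follows; the regime-switching OGARCH model itself is not reproduced, and the decay parameter 0.94 is an assumed RiskMetrics-style value rather than one taken from the paper.

```python
# Hedged sketch of the EWMA covariance recursion and GMVP weights used as the
# benchmark in the abstract; lam = 0.94 is an assumed decay parameter.
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """Sigma_t = lam * Sigma_{t-1} + (1 - lam) * r_{t-1} r_{t-1}'."""
    sigma = np.cov(returns, rowvar=False)      # sample covariance as the starting value
    for r in returns:
        sigma = lam * sigma + (1.0 - lam) * np.outer(r, r)
    return sigma

def gmvp_weights(sigma):
    """Global minimum variance portfolio: weights proportional to Sigma^{-1} 1."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

rng = np.random.default_rng(2)
rets = rng.normal(scale=0.01, size=(500, 4))
print(gmvp_weights(ewma_covariance(rets)))
```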
Model averaging is an alternative to model selection for dealing with model uncertainty, and is widely used and very valuable. However, most existing model averaging methods are based on the least squares loss function, which can be very sensitive to the presence of outliers in the data. In this paper, we propose an outlier-robust model averaging approach based on a Mallows-type criterion. The key idea is to develop weight choice criteria by minimizing an estimator of the expected prediction error for loss functions that are convex with a unique minimum and twice differentiable in expectation, rather than the expected squared error. Robust loss functions, such as least absolute deviation and Huber's function, reduce the effects of large residuals and poor samples. A simulation study and real data analysis are conducted to demonstrate the finite-sample performance of our estimators and to compare them with other model selection and averaging methods.
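The Huber loss mentioned above can be written down directly; it grows quadratically for small residuals and only linearly for large ones, which is what damps the influence of outliers. The threshold value below is an assumed common default, not one taken from the paper.

```python
# Huber loss: quadratic near zero, linear in the tails; delta = 1.345 is an
# assumed common default threshold.
import numpy as np

def huber(residuals, delta=1.345):
    r = np.abs(residuals)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

print(huber(np.array([0.1, 1.0, 10.0])))  # the outlier contributes linearly, not quadratically
```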
Smoothed AIC (S-AIC) and smoothed BIC (S-BIC) are very widely used in model averaging and are very easy to implement. In particular, the optimal model averaging methods MMA and JMA have been well developed only for linear models and can be applied to other models only after modification, whereas S-AIC and S-BIC can be used in all situations where AIC and BIC can be calculated. In this paper, we study the asymptotic behavior of these two commonly used model averaging estimators, the S-AIC and S-BIC estimators, under the standard asymptotics with a general fixed-parameter setup. In addition, the resulting coverage probability in Buckland et al. (1997) is not studied accurately there, although it is claimed to be close to the intended level; our derivation makes it possible to study it accurately. We also prove that the confidence interval construction method in Hjort and Claeskens (2003) still works in linear regression with normally distributed errors. Both the simulations and an applied example support our theoretical conclusions.
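The smoothed-AIC weights themselves are simple to compute: each candidate model receives a weight proportional to exp(-AIC/2). A minimal sketch follows; S-BIC is obtained the same way with BIC values in place of AIC.

```python
# Minimal sketch of smoothed-AIC (S-AIC) weights: w_m proportional to exp(-AIC_m / 2).
import numpy as np

def smoothed_aic_weights(aic_values):
    aic = np.asarray(aic_values, dtype=float)
    rel = np.exp(-0.5 * (aic - aic.min()))   # subtract the minimum for numerical stability
    return rel / rel.sum()

print(smoothed_aic_weights([100.0, 102.0, 110.0]))
```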