
Why Indexing Works

Published by: Jan Hendrik Witte
Publication date: 2015
Research field: Finance
Paper language: English





We develop a simple stock selection model to explain why active equity managers tend to underperform a benchmark index. We motivate our model with the empirical observation that the best performing stocks in a broad market index often perform much better than the other stocks in the index. Randomly selecting a subset of securities from the index may dramatically increase the chance of underperforming the index. The relative likelihood of underperformance by investors choosing active management is likely much more important than the loss to those same investors from the higher fees for active management relative to passive index investing. Thus, active management may be even more challenging than previously believed, and the stakes for finding the best active managers may be larger than previously assumed.
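
To make the intuition concrete, here is a minimal Monte Carlo sketch of the effect the abstract describes (not the authors' exact model): when cross-sectional stock returns are strongly right-skewed, so that a few stocks drive most of the index's gain, a randomly chosen subset of constituents underperforms an equal-weighted index more often than not. All distributional and size parameters below are illustrative assumptions.

```python
# Illustrative Monte Carlo sketch (assumed parameters, not the paper's model):
# with right-skewed cross-sectional returns, random subsets of an equal-weighted
# index underperform the index more often than they outperform it.
import numpy as np

rng = np.random.default_rng(0)

n_stocks = 500        # index constituents
n_picks = 10          # size of the randomly selected "active" portfolio
n_trials = 50_000     # Monte Carlo repetitions

# Right-skewed one-period returns: most stocks earn little, a few earn a lot.
returns = rng.lognormal(mean=0.0, sigma=1.0, size=n_stocks) - 1.0

index_return = returns.mean()   # equal-weighted index return

# Repeatedly pick random subsets and compare them with the index.
subset_returns = np.array([
    returns[rng.choice(n_stocks, size=n_picks, replace=False)].mean()
    for _ in range(n_trials)
])

underperform_rate = (subset_returns < index_return).mean()
print(f"Index return: {index_return:.1%}")
print(f"Share of random {n_picks}-stock portfolios underperforming the index: "
      f"{underperform_rate:.1%}")
```

Although each random subset has the same expected return as the index, the skewness pushes the median subset return below that mean, so the majority of subsets fall short of the index.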




Read also

In this study, we have investigated empirically the effects of market properties on the degree of diversification of investment weights among stocks in a portfolio. The weights of stocks within a portfolio were determined on the basis of Markowitz's portfolio theory. We identified a negative relationship between the influence of market properties and the degree of diversification of the weights among stocks in a portfolio. Furthermore, we noted that the random matrix theory method can control the properties of the correlation matrix between stocks; this may be useful in improving portfolio management in practical applications.
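
A hedged sketch of the kind of procedure this abstract refers to, run on simulated data: Markowitz minimum-variance weights computed from a correlation matrix cleaned with a Marchenko-Pastur (random matrix theory) filter. The filtering recipe below is one common variant, not necessarily the authors' exact method, and all data and parameters are simulated assumptions.

```python
# Sketch: RMT-filtered correlation matrix feeding Markowitz minimum-variance weights.
import numpy as np

rng = np.random.default_rng(1)
T, N = 1000, 50                                   # observations, assets

# Simulated returns: idiosyncratic noise plus one common "market" factor.
market = rng.normal(0.0, 0.01, size=(T, 1))
returns = rng.normal(0.0005, 0.02, size=(T, N)) + market

std = returns.std(axis=0)
corr = np.corrcoef(returns, rowvar=False)

# Marchenko-Pastur upper edge for a pure-noise correlation matrix.
q = N / T
lambda_max = (1.0 + np.sqrt(q)) ** 2

eigval, eigvec = np.linalg.eigh(corr)
noise = eigval < lambda_max

# Replace "noise" eigenvalues by their average (approximately trace-preserving),
# keep the informative ones, and rebuild the cleaned correlation matrix.
eigval_clean = eigval.copy()
eigval_clean[noise] = eigval[noise].mean()
corr_clean = eigvec @ np.diag(eigval_clean) @ eigvec.T
np.fill_diagonal(corr_clean, 1.0)

cov_clean = corr_clean * np.outer(std, std)

# Global minimum-variance (long/short) Markowitz weights: w = S^{-1}1 / (1'S^{-1}1).
ones = np.ones(N)
x = np.linalg.solve(cov_clean, ones)
w = x / (ones @ x)

print("largest absolute weight:", round(float(np.abs(w).max()), 4))
print("effective number of positions (1 / sum w_i^2):", round(float(1.0 / np.sum(w**2)), 1))
```

Replacing the noise eigenvalues by their average suppresses spurious correlation structure that would otherwise concentrate the optimized weights in a few positions.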
Optimal portfolio selection problems are determined by the (unknown) parameters of the data generating process. If an investor wants to realise the position suggested by the optimal portfolio, he/she needs to estimate the unknown parameters and account for the parameter uncertainty in the decision process. Most often, the parameters of interest are the population mean vector and the population covariance matrix of the asset return distribution. In this paper we characterise the exact sampling distribution of the estimated optimal portfolio weights and their characteristics by deriving a stochastic representation of that distribution. This approach has several advantages: (i) it expresses the sampling distribution of the estimated optimal portfolio weights in a form from which samples can be drawn efficiently; (ii) the derived stochastic representation provides an easy way to obtain the asymptotic approximation of the sampling distribution. The latter property is used to show that the high-dimensional asymptotic distribution of the optimal portfolio weights is multivariate normal and to determine its parameters. Moreover, a consistent estimator of the optimal portfolio weights and their characteristics is derived in the high-dimensional setting. Via an extensive simulation study, we investigate the finite-sample performance of the derived asymptotic approximation and study its robustness to violations of the model assumptions used in the derivation of the theoretical results.
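
The paper's exact stochastic representation is not reproduced here; the sketch below only illustrates, by brute-force Monte Carlo under an assumed normal return model, the object being characterised: the sampling distribution of estimated global minimum-variance weights. All model parameters are hypothetical.

```python
# Monte Carlo illustration of the sampling distribution of estimated GMV weights.
import numpy as np

rng = np.random.default_rng(2)
N, n, n_draws = 5, 120, 2000          # assets, sample size, Monte Carlo draws

# Assumed "true" data generating process (hypothetical parameters).
mu_true = np.full(N, 0.001)
A = rng.normal(size=(N, N))
sigma_true = 0.0004 * (A @ A.T + N * np.eye(N))   # well-conditioned covariance

def gmv_weights(cov):
    """Global minimum-variance weights w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)
    return x / (ones @ x)

# Re-estimate the weights on repeated samples to trace out their sampling distribution.
draws = np.empty((n_draws, N))
for i in range(n_draws):
    sample = rng.multivariate_normal(mu_true, sigma_true, size=n)
    draws[i] = gmv_weights(np.cov(sample, rowvar=False))

w_true = gmv_weights(sigma_true)
print("true GMV weights:          ", np.round(w_true, 3))
print("mean of estimated weights: ", np.round(draws.mean(axis=0), 3))
print("std of estimated weights:  ", np.round(draws.std(axis=0), 3))
```
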
In recent years, cryptocurrencies have gone from an obscure niche to a prominent place, with investment in these assets becoming increasingly popular. However, cryptocurrencies carry a high risk due to their high volatility. In this paper, criteria based on historical cryptocurrency data are defined in order to characterize returns and risks in different ways over short time windows (7 and 15 days); the importance of the criteria is then analyzed by various methods and their impact is evaluated. Finally, we plan to use the knowledge obtained to select investment portfolios by applying multi-criteria methods.
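
As a rough illustration of short-window criteria of the kind the abstract mentions, the sketch below computes total return, volatility, and maximum drawdown over 7- and 15-day windows of simulated price histories; the paper's actual criteria may differ.

```python
# Sketch: return/risk criteria over short windows of simulated crypto prices.
import numpy as np

rng = np.random.default_rng(3)
days, n_coins = 365, 4

# Simulated daily prices for a handful of coins (geometric random walk).
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.001, 0.05, size=(days, n_coins)), axis=0))

def window_criteria(prices, window):
    """Per-asset criteria over the last `window` days:
    total return, daily log-return volatility, and maximum drawdown."""
    p = prices[-window:]
    log_ret = np.diff(np.log(p), axis=0)
    total_return = p[-1] / p[0] - 1.0
    volatility = log_ret.std(axis=0)
    running_max = np.maximum.accumulate(p, axis=0)
    max_drawdown = ((p - running_max) / running_max).min(axis=0)
    return total_return, volatility, max_drawdown

for window in (7, 15):
    ret, vol, mdd = window_criteria(prices, window)
    print(f"{window}-day window: return={np.round(ret, 3)}, "
          f"volatility={np.round(vol, 3)}, max drawdown={np.round(mdd, 3)}")
```
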
Fengxiang He, Tongliang Liu, 2019
Residual connections significantly boost the performance of deep neural networks. However, there are few theoretical results that address the influence of residuals on the hypothesis complexity and the generalization ability of deep neural networks. This paper studies the influence of residual connections on the hypothesis complexity of the neural network in terms of the covering number of its hypothesis space. We prove that the upper bound of the covering number is the same as that of chain-like neural networks if the total numbers of weight matrices and nonlinearities are fixed, no matter whether they are in the residuals or not. This result demonstrates that residual connections may not increase the hypothesis complexity of the neural network compared with its chain-like counterpart. Based on the upper bound of the covering number, we then obtain an $\mathcal{O}(1/\sqrt{N})$ margin-based multi-class generalization bound for ResNet, as an exemplary case of any deep neural network with residual connections. Generalization guarantees for similar state-of-the-art neural network architectures, such as DenseNet and ResNeXt, follow straightforwardly. From our generalization bound, a practical implication follows: to achieve good generalization ability, we need regularization terms that keep the norms of the weight matrices from growing too large, which justifies the standard technique of weight decay.
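
A minimal PyTorch sketch of the practical takeaway only: a small chain of residual blocks trained with weight decay, which penalizes the weight-matrix norms that enter the covering-number bound. The architecture and hyperparameters are illustrative, not the paper's experimental setup.

```python
# Sketch: residual blocks trained with weight decay to control weight-matrix norms.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two linear layers with a ReLU, wrapped in a skip connection."""
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # Same count of weight matrices and nonlinearities as a chain-like block;
        # only the skip connection differs.
        return x + self.fc2(self.act(self.fc1(x)))

model = nn.Sequential(ResidualBlock(64), ResidualBlock(64), nn.Linear(64, 10))

# Weight decay penalizes the weight-matrix norms that enter the covering-number bound.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

x = torch.randn(32, 64)                    # toy batch
y = torch.randint(0, 10, (32,))            # toy labels
loss = nn.functional.cross_entropy(model(x), y)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```
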
Supplementary Training on Intermediate Labeled-data Tasks (STILTs) is a widely applied technique which first fine-tunes a pretrained language model on an intermediate task before fine-tuning on the target task of interest. While STILTs is able to further improve the performance of pretrained language models, it is still unclear why and when it works. Previous research shows that intermediate tasks involving complex inference, such as commonsense reasoning, work especially well for RoBERTa. In this paper, we discover that the improvement from an intermediate task can be orthogonal to whether it involves reasoning or other complex skills -- a simple real-fake discrimination task synthesized by GPT2 can benefit diverse target tasks. We conduct extensive experiments to study the impact of different factors on STILTs. These findings suggest rethinking the role of intermediate fine-tuning in the STILTs pipeline.
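
A conceptual, toy-scale sketch of the two-stage STILTs pipeline: fine-tune a shared encoder on an intermediate labeled task, then attach a fresh head and fine-tune on the target task. Real STILTs starts from a pretrained language model such as RoBERTa; the encoder, tasks, and data below are stand-ins for illustration only.

```python
# Sketch: two-stage (intermediate task -> target task) fine-tuning of a shared encoder.
import torch
import torch.nn as nn

# A toy "encoder" standing in for the pretrained language model.
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))

def fine_tune(encoder, head, x, y, steps=50, lr=1e-3):
    """Jointly train the shared encoder and a task-specific head on one task."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(head(encoder(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# Stage 1: intermediate task (e.g. a binary real-vs-fake discrimination task).
x_int, y_int = torch.randn(256, 128), torch.randint(0, 2, (256,))
fine_tune(encoder, nn.Linear(256, 2), x_int, y_int)

# Stage 2: target task of interest, with a fresh head on the adapted encoder.
x_tgt, y_tgt = torch.randn(256, 128), torch.randint(0, 5, (256,))
final_loss = fine_tune(encoder, nn.Linear(256, 5), x_tgt, y_tgt)
print(f"target-task loss after STILTs-style two-stage training: {final_loss:.3f}")
```
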