
Parameter Estimation using Empirical Likelihood combined with Market Information

Published by: Tony Sit
Publication date: 2012
Research field: Mathematical Statistics; Finance
Paper language: English





During the last decade, Lévy processes with jumps have become increasingly popular for modelling market behaviour for both derivative pricing and risk management purposes. Chan et al. (2009) introduced the use of empirical likelihood methods to estimate the parameters of various diffusion processes via their characteristic functions, which are readily available in most cases. Return series from the market are used for estimation. In addition to the return series, many derivatives are actively traded in the market whose prices also contain information about the parameters of the underlying process. This observation motivates us, in this paper, to combine the return series with the associated derivative prices observed in the market, so as to provide an estimate that better reflects market movements and to achieve a gain in efficiency. The usual asymptotic properties, including consistency and asymptotic normality, are established under suitable regularity conditions. Simulation and case studies are performed to demonstrate the feasibility and effectiveness of the proposed method.
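
As a rough illustration of the estimation machinery referred to above, the sketch below (not the authors' implementation) builds moment conditions from the real and imaginary parts of an assumed characteristic function and profiles them with empirical likelihood. The Gaussian return model, the frequency grid `u_grid` and all helper names are illustrative assumptions; the derivative-price moment conditions the paper adds would enter as extra columns of the same moment matrix.

```python
import numpy as np
from scipy.optimize import minimize

def log_star(z, eps):
    """Owen's log*: log(z) for z > eps, quadratic continuation below eps."""
    z = np.asarray(z, dtype=float)
    quad = np.log(eps) - 1.5 + 2.0 * z / eps - z**2 / (2.0 * eps**2)
    return np.where(z > eps, np.log(np.maximum(z, eps)), quad)

def moments(x, theta, u_grid):
    """Moment functions E[g(X; theta)] = 0 built from a characteristic function.

    Illustration: Gaussian returns, phi(u) = exp(i*u*mu - 0.5*sigma^2*u^2);
    for a Levy model one would plug in its characteristic function instead."""
    mu, sigma = theta
    phi = np.exp(1j * u_grid * mu - 0.5 * sigma**2 * u_grid**2)
    g_re = np.cos(np.outer(x, u_grid)) - phi.real   # n x m
    g_im = np.sin(np.outer(x, u_grid)) - phi.imag   # n x m
    return np.hstack([g_re, g_im])

def log_el_ratio(g):
    """Profile log empirical likelihood ratio for the moment matrix g (n x m)."""
    n = g.shape[0]
    obj = lambda lam: -np.sum(log_star(1.0 + g @ lam, 1.0 / n))
    res = minimize(obj, np.zeros(g.shape[1]), method="BFGS")
    return res.fun  # <= 0; -2 * res.fun is asymptotically chi-square at the truth

def mel_estimate(x, u_grid, theta0):
    """Maximum empirical likelihood estimate of theta."""
    neg_logelr = lambda th: -log_el_ratio(moments(x, th, u_grid))
    return minimize(neg_logelr, theta0, method="Nelder-Mead").x

rng = np.random.default_rng(0)
x = rng.normal(0.05, 0.2, size=500)            # simulated return-like data
print(mel_estimate(x, np.array([0.5, 1.0, 2.0]), theta0=np.array([0.0, 0.3])))
```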




Read also

The aim of this study is to investigate quantitatively whether share prices deviated from company fundamentals in the stock market crash of 2008. For this purpose, we use a large database containing the balance sheets and share prices of 7,796 worldwide companies for the period 2004 through 2013. We develop a panel regression model using three financial indicators--dividends per share, cash flow per share, and book value per share--as explanatory variables for share price. We then estimate individual company fundamentals for each year by removing the time fixed effects from the two-way fixed effects model, which we identified as the best of the panel regression models. One merit of our model is that we are able to extract unobservable factors of company fundamentals by using the individual fixed effects. Based on these results, we analyze the market anomaly quantitatively using the divergence rate--the rate of deviation of the share price from a company's fundamentals. We find that share prices on average were overvalued in the period from 2005 to 2007, and were significantly undervalued in 2008, when the global financial crisis occurred. Share prices were equivalent to the fundamentals on average in the subsequent period. Our empirical results clearly demonstrate that the worldwide stock market fluctuated excessively in the period before and just after the global financial crisis of 2008.
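
A minimal sketch, under assumed column names (price, dps, cfps, bvps, firm, year) and with a tiny synthetic panel, of the two-way fixed effects regression and divergence rate described above; for a panel of thousands of firms one would use a within transformation or a dedicated panel package rather than explicit dummies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_fundamentals(df):
    # Two-way fixed effects: price_it = b'x_it + alpha_i (firm) + gamma_t (year) + e_it
    model = smf.ols("price ~ dps + cfps + bvps + C(firm) + C(year)", data=df).fit()
    # Remove the estimated year effect from the fitted value (predict with the
    # base year everywhere) to get the firm-specific fundamental.
    year_effect = model.fittedvalues - model.predict(df.assign(year=df["year"].min()))
    fundamental = model.fittedvalues - year_effect
    divergence = (df["price"] - fundamental) / fundamental   # divergence rate
    return model, fundamental, divergence

# Tiny synthetic panel: 20 firms over 2004-2013, positive accounting indicators.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "firm": np.repeat(np.arange(20), 10),
    "year": np.tile(np.arange(2004, 2014), 20),
    "dps": rng.gamma(2.0, 1.0, 200),
    "cfps": rng.gamma(3.0, 1.0, 200),
    "bvps": rng.gamma(5.0, 2.0, 200),
})
df["price"] = 2 * df["dps"] + 1.5 * df["cfps"] + 0.8 * df["bvps"] + rng.normal(0, 1, 200)
model, fundamental, divergence = fit_fundamentals(df)
print(divergence.groupby(df["year"]).mean())   # average over/under-valuation by year
```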
Xin Gao, Daniel Q. Pu, Yuehua Wu (2009)
In a Gaussian graphical model, the conditional independence between two variables is characterized by the corresponding zero entries in the inverse covariance matrix. Maximum likelihood methods using the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and the adaptive LASSO penalty (Zou, 2006) have been proposed in the literature. In this article, we establish the result that using the Bayesian information criterion (BIC) to select the tuning parameter in penalized likelihood estimation with both types of penalties can lead to consistent graphical model selection. We compare the empirical performance of BIC with the cross-validation method and demonstrate the advantageous performance of the BIC criterion for tuning parameter selection through simulation studies.
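
The sketch below illustrates BIC-based tuning parameter selection for a penalized Gaussian graphical model. It substitutes scikit-learn's graphical lasso (an L1 penalty) for the SCAD and adaptive LASSO penalties discussed above, and uses the number of estimated edges as the effective degrees of freedom; both are simplifying assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso, empirical_covariance

def bic_select(X, alphas):
    n, p = X.shape
    S = empirical_covariance(X)
    best = None
    for alpha in alphas:
        theta = GraphicalLasso(alpha=alpha, max_iter=200).fit(X).precision_
        # Gaussian log-likelihood (up to an additive constant).
        loglik = 0.5 * n * (np.linalg.slogdet(theta)[1] - np.trace(S @ theta))
        # Degrees of freedom: number of estimated edges (nonzero upper-triangular entries).
        n_edges = int((np.abs(theta[np.triu_indices(p, k=1)]) > 1e-8).sum())
        bic = -2.0 * loglik + np.log(n) * n_edges
        if best is None or bic < best[0]:
            best = (bic, alpha, theta)
    return best  # (BIC value, selected alpha, estimated precision matrix)

rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(5), np.eye(5), size=200)
bic, alpha, theta = bic_select(X, alphas=np.logspace(-2, 0, 10))
print(alpha, (np.abs(theta) > 1e-8).sum())
```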
Nonparametric empirical Bayes methods provide a flexible and attractive approach to high-dimensional data analysis. One particularly elegant empirical Bayes methodology, involving the Kiefer-Wolfowitz nonparametric maximum likelihood estimator (NPMLE) for mixture models, has been known for decades. However, implementation and theoretical analysis of the Kiefer-Wolfowitz NPMLE are notoriously difficult. A fast algorithm was recently proposed that makes NPMLE-based procedures feasible for use in large-scale problems, but the algorithm calculates only an approximation to the NPMLE. In this paper we make two contributions. First, we provide upper bounds on the convergence rate of the approximate NPMLE's statistical error, which have the same order as the best known bounds for the true NPMLE. This suggests that the approximate NPMLE is just as effective as the true NPMLE for statistical applications. Second, we illustrate the promise of NPMLE procedures in a high-dimensional binary classification problem. We propose a new procedure and show that it vastly outperforms existing methods in experiments with simulated data. In real data analyses involving cancer survival and gene expression data, we show that it is very competitive with several recently proposed methods for regularized linear discriminant analysis, another popular approach to high-dimensional classification.
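
As a self-contained illustration of the Kiefer-Wolfowitz NPMLE, the sketch below fits a grid-restricted Gaussian location mixture by EM; the fast algorithm alluded to above solves the same grid-restricted problem by convex optimization, so EM is used here only for brevity, and the grid size and toy data are assumptions.

```python
import numpy as np
from scipy.stats import norm

def npmle_grid(x, grid, n_iter=500):
    """Mixing weights w on `grid` maximizing sum_i log sum_j w_j N(x_i; grid_j, 1)."""
    lik = norm.pdf(x[:, None] - grid[None, :])      # n x m likelihood matrix
    w = np.full(grid.size, 1.0 / grid.size)
    for _ in range(n_iter):
        post = lik * w                              # unnormalized posterior over grid points
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                       # EM update of the mixing weights
    return w

def posterior_means(x, grid, w):
    """Empirical Bayes posterior mean of theta_i given x_i under the fitted prior."""
    post = norm.pdf(x[:, None] - grid[None, :]) * w
    post /= post.sum(axis=1, keepdims=True)
    return post @ grid

rng = np.random.default_rng(3)
theta = rng.choice([-2.0, 0.0, 3.0], size=1000, p=[0.3, 0.5, 0.2])
x = theta + rng.normal(size=1000)
grid = np.linspace(x.min(), x.max(), 300)
w = npmle_grid(x, grid)
print(np.round(posterior_means(x[:5], grid, w), 2))  # shrunken estimates of theta_1..5
```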
A maximum likelihood methodology for a general class of models is presented, using an approximate Bayesian computation (ABC) approach. The typical targets of ABC methods are models with intractable likelihoods, and we combine an ABC-MCMC sampler with so-called data cloning for maximum likelihood estimation. The accuracy of ABC methods relies on the use of a small threshold value for comparing simulations from the model with observed data. The proposed methodology shows how to use large threshold values while the number of data clones is increased to ease convergence towards an approximate maximum likelihood estimate. We show how to exploit the methodology to reduce the number of iterations of a standard ABC-MCMC algorithm, and therefore the computational effort, while obtaining reasonable point estimates. Simulation studies show the good performance of our approach on models with intractable likelihoods such as g-and-k distributions, stochastic differential equations and state-space models.
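
The sketch below is one generic way to combine ABC-MCMC with data cloning, not the paper's algorithm: each proposal must reproduce the observed summaries within the threshold for K independently simulated ("cloned") datasets, which concentrates the chain near an approximate maximum likelihood estimate as K grows. The toy normal model, the flat prior and all tuning constants are assumptions.

```python
import numpy as np

def summaries(x):
    return np.array([x.mean(), x.std()])

def abc_mcmc_dc(y_obs, n_iter=5000, K=5, eps=0.5, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    s_obs, n = summaries(y_obs), y_obs.size
    theta = np.array([y_obs.mean(), y_obs.std()])      # (mu, sigma), crude start
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=2)
        if prop[1] > 0:
            # With a uniform ABC kernel, the cloned ABC "likelihood" is a product
            # of K indicators: accept only if every cloned dataset is close.
            dists = [np.linalg.norm(summaries(rng.normal(prop[0], prop[1], n)) - s_obs)
                     for _ in range(K)]
            if max(dists) < eps:
                theta = prop                            # symmetric proposal, flat prior
        chain.append(theta.copy())
    return np.array(chain)

rng = np.random.default_rng(4)
y_obs = rng.normal(2.0, 1.5, size=100)
chain = abc_mcmc_dc(y_obs)
print(chain[1000:].mean(axis=0))                        # approximate MLE of (mu, sigma)
```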
High-dimensional statistical inference with general estimating equations is challenging and remains relatively unexplored. In this paper, we study two problems in this area: confidence set estimation for multiple components of the model parameters, and model specification tests. For the first, we propose constructing a new set of estimating equations such that the impact of estimating the high-dimensional nuisance parameters becomes asymptotically negligible. The new construction enables us to estimate a valid confidence region via the empirical likelihood ratio. For the second, we propose a test statistic, the maximum of the marginal empirical likelihood ratios, to quantify data evidence against the model specification. Our theory establishes the validity of the proposed empirical likelihood approaches, accommodating over-identification and exponentially growing data dimensionality. Numerical studies demonstrate the promising performance and potential practical benefits of the new methods.
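
A rough sketch of the specification-test statistic mentioned above: the marginal empirical likelihood ratio is computed for each scalar estimating equation at the fitted parameter, and the maximum of -2 times these ratios is reported. The simulated moment matrix below is an illustrative assumption; in practice its columns would be the estimating equations evaluated at the estimated parameter, and the critical value would be calibrated as in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def marginal_log_el_ratio(g):
    """Log empirical likelihood ratio for a single scalar moment column g (length n)."""
    n = g.size
    c = 1.0 / n - 1.0                  # keep implied weights in (0, 1]: 1 + lam*g_i >= 1/n
    lo = c / g.max() if g.max() > 0 else -float(n)
    hi = c / g.min() if g.min() < 0 else float(n)
    obj = lambda lam: -np.sum(np.log(1.0 + lam * g))   # convex on [lo, hi]
    return minimize_scalar(obj, bounds=(lo, hi), method="bounded").fun  # <= 0

def max_marginal_el_statistic(G):
    """G is n x r, one column per estimating equation; larger values indicate
    stronger evidence against the model specification."""
    return max(-2.0 * marginal_log_el_ratio(G[:, j]) for j in range(G.shape[1]))

rng = np.random.default_rng(5)
G = rng.normal(size=(300, 8))          # correctly specified case: each column has mean ~ 0
print(max_marginal_el_statistic(G))
```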