
Characterizing the Functional Density Power Divergence Class

Published by: Souvik Ray
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





The density power divergence (DPD) and related measures have produced many useful statistical procedures which provide a good balance between model efficiency on one hand, and outlier stability or robustness on the other. The large number of citations received by the original DPD paper (Basu et al., 1998) and its many demonstrated applications indicate the popularity of these divergences and the related methods of inference. The estimators that are derived from this family of divergences are all M-estimators where the defining $\psi$ function is based explicitly on the form of the model density. The success of the minimum divergence estimators based on the density power divergence makes it imperative and meaningful to look for other, similar divergences in the same spirit. The logarithmic density power divergence (Jones et al., 2001), a logarithmic transform of the density power divergence, has also been very successful in producing inference procedures with a high degree of efficiency simultaneously with a high degree of robustness. This further strengthens the motivation to look for statistical divergences that are transforms of the density power divergence, or, alternatively, members of the functional density power divergence class. This note characterizes the functional density power divergence class, and thus identifies the available divergence measures within this construct that may possibly be explored for robust and efficient statistical inference.
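For context, a minimal sketch of the divergence in question (the notation is ours, not quoted from the paper): for a data density $g$ and model density $f$, the DPD of Basu et al. (1998) with tuning parameter $\alpha > 0$ is

$$ d_\alpha(g, f) = \int \Big\{ f^{1+\alpha} - \Big(1 + \tfrac{1}{\alpha}\Big)\, g f^{\alpha} + \tfrac{1}{\alpha}\, g^{1+\alpha} \Big\} \, dx, $$

which recovers the Kullback-Leibler divergence in the limit $\alpha \rightarrow 0$. Loosely speaking, a functional density power divergence applies a common transform to the integral segments of this expression, and the characterization result identifies which such transforms yield usable divergences.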




Read also

84 - Luai Al-Labadi, Ce Wang, 2019
This paper deals with measuring the Bayesian robustness of classes of contaminated priors. Two different classes of priors in the neighborhood of the elicited prior are considered. The first one is the well-known $\epsilon$-contaminated class, while the second one is the geometric mixing class. The proposed measure of robustness is based on computing the curvature of the Rényi divergence between posterior distributions. Examples based on simulated and real data sets illustrate the results.
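For concreteness, a sketch of the two prior classes under standard definitions (our notation; $\pi_0$ is the elicited prior, $q$ an arbitrary contaminating prior, $0 \le \epsilon \le 1$):

$$ \Gamma_\epsilon^{\mathrm{lin}} = \{ \pi : \pi = (1-\epsilon)\,\pi_0 + \epsilon\, q \}, \qquad \Gamma_\epsilon^{\mathrm{geo}} = \{ \pi : \pi \propto \pi_0^{\,1-\epsilon}\, q^{\,\epsilon} \}. $$

The robustness measure then examines how the Rényi divergence between the posteriors induced by $\pi_0$ and by a contaminated $\pi$ behaves as a function of $\epsilon$, through its curvature.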
Minimum divergence procedures based on the density power divergence and the logarithmic density power divergence have been extremely popular and successful in generating inference procedures which combine a high degree of model efficiency with strong outlier stability. Such procedures are always preferable in practical situations over procedures which achieve their robustness at a major cost of efficiency or are highly efficient but have poor robustness properties. The density power divergence (DPD) family of Basu et al. (1998) and the logarithmic density power divergence (LDPD) family of Jones et al. (2001) provide flexible classes of divergences where the adjustment between efficiency and robustness is controlled by a single, real, non-negative parameter. The usefulness of these two families of divergences in statistical inference makes it meaningful to search for other related families of divergences in the same spirit. The DPD family is a member of the class of Bregman divergences, and the LDPD family is obtained by log transformations of the different segments of the divergences within the DPD family. Both the DPD and LDPD families lead to the Kullback-Leibler divergence in the limiting case as the tuning parameter $\alpha \rightarrow 0$. In this paper we study this relation in detail, and demonstrate that such log transformations can only be meaningful in the context of the DPD (or the convex generating function of the DPD) within the general fold of Bregman divergences, giving us a limit to the extent to which the search for useful divergences could be successful.
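Concretely (again in our notation, not quoted from the papers), the LDPD applies a logarithm to each of the three segments of the DPD:

$$ d_\alpha^{\mathrm{LDPD}}(g, f) = \log \int f^{1+\alpha} \, dx - \Big(1 + \tfrac{1}{\alpha}\Big) \log \int g f^{\alpha} \, dx + \tfrac{1}{\alpha} \log \int g^{1+\alpha} \, dx, $$

while the Bregman representation takes the form $D_B(g, f) = \int \{ B(g) - B(f) - (g - f) B'(f) \} \, dx$ for a convex generator $B$; the choice $B(x) = x^{1+\alpha}$ yields $\alpha$ times the DPD.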
Bayesian nonparametric statistics is an area of considerable research interest. While there has recently been an extensive concentration on developing Bayesian nonparametric procedures for model checking, the use of the Dirichlet process, in its simplest form, along with the Kullback-Leibler divergence is still an open problem. This is mainly attributed to the discreteness property of the Dirichlet process and the fact that the Kullback-Leibler divergence between any discrete distribution and any continuous distribution is infinite. The approach proposed in this paper, which is based on incorporating the Dirichlet process, the Kullback-Leibler divergence and the relative belief ratio, is considered the first concrete solution to this issue. Applying the approach is simple and does not require obtaining a closed form of the relative belief ratio. A Monte Carlo study and real data examples show that the developed approach exhibits excellent performance.
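For readers unfamiliar with the last ingredient, a standard definition (not specific to this paper): the relative belief ratio of a parameter value $\theta$ given data $x$ is

$$ RB(\theta \mid x) = \frac{\pi(\theta \mid x)}{\pi(\theta)}, $$

the ratio of posterior to prior density, with values greater than 1 indicating that the data provide evidence in favor of $\theta$.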
147 - Hongjian Shi, Mathias Drton, 2020
Chatterjee (2021) introduced a simple new rank correlation coefficient that has attracted much recent attention. The coefficient has the unusual appeal that it not only estimates a population quantity first proposed by Dette et al. (2013) that is zero if and only if the underlying pair of random variables is independent, but also is asymptotically normal under independence. This paper compares Chatterjee's new correlation coefficient to three established rank correlations that also facilitate consistent tests of independence, namely, Hoeffding's $D$, Blum-Kiefer-Rosenblatt's $R$, and Bergsma-Dassios-Yanagimoto's $\tau^*$. We contrast their computational efficiency in light of recent advances, and investigate their power against local rotation and mixture alternatives. Our main results show that Chatterjee's coefficient is unfortunately rate sub-optimal compared to $D$, $R$, and $\tau^*$. The situation is more subtle for a related earlier estimator of Dette et al. (2013). These results favor $D$, $R$, and $\tau^*$ over Chatterjee's new correlation coefficient for the purpose of testing independence.
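For reference, Chatterjee's coefficient in the case of no ties (a standard definition, stated in our notation): sort the pairs $(X_i, Y_i)$ so that $X_{(1)} < \cdots < X_{(n)}$, let $r_i$ denote the rank of the $Y$-value paired with $X_{(i)}$, and set

$$ \xi_n = 1 - \frac{3 \sum_{i=1}^{n-1} |r_{i+1} - r_i|}{n^2 - 1}. $$

The rate sub-optimality discussed above concerns the power of the independence test based on $\xi_n$ against local alternatives.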
In this paper we consider the linear regression model $Y = SX + \varepsilon$ with functional regressors and responses. We develop new inference tools to quantify deviations of the true slope $S$ from a hypothesized operator $S_0$ with respect to the Hilbert-Schmidt norm $\|S - S_0\|^2$, as well as the prediction error $\mathbb{E}\|SX - S_0X\|^2$. Our analysis is applicable to functional time series and based on asymptotically pivotal statistics. This makes it particularly user friendly, because it avoids the choice of tuning parameters inherent in long-run variance estimation or bootstrap of dependent data. We also discuss two-sample problems as well as change point detection. Finite sample properties are investigated by means of a simulation study. Mathematically our approach is based on a sequential version of the popular spectral cut-off estimator $\hat S_N$ for $S$. It is well known that the $L^2$-minimax rates in the functional regression model, both in estimation and prediction, are substantially slower than $1/\sqrt{N}$ (where $N$ denotes the sample size) and that standard estimators for $S$ do not converge weakly to non-degenerate limits. However, we demonstrate that simple plug-in estimators - such as $\|\hat S_N - S_0\|^2$ for $\|S - S_0\|^2$ - are $\sqrt{N}$-consistent and their sequential versions satisfy weak invariance principles.
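As a sketch of the estimator underlying this approach (a standard construction, stated in our notation and under simplifying assumptions, not quoted from the paper): with $\hat C_X$ the empirical covariance operator of the regressors, its eigenpairs $(\hat\lambda_j, \hat v_j)$, and $\hat C_{YX}$ the empirical cross-covariance operator, the spectral cut-off estimator truncates the inversion of $\hat C_X$ at a level $k_N$:

$$ \hat S_N = \hat C_{YX} \sum_{j=1}^{k_N} \hat\lambda_j^{-1} \, \hat v_j \otimes \hat v_j. $$

The plug-in statistic $\|\hat S_N - S_0\|^2$ is then computed directly from this truncated estimator.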