
Stochastic Approximation Algorithm for Estimating Mixing Distribution for Dependent Observations

Published by Nilabja Guha
Publication date: 2020
Research field: Mathematical Statistics
Paper language: English





Estimating the mixing density of a mixture distribution remains an interesting problem in the statistics literature. Using a stochastic approximation method, Newton and Zhang (1999) introduced a fast recursive algorithm for estimating the mixing density of a mixture. Under suitably chosen weights, the stochastic approximation estimator converges to the true solution. Tokdar et al. (2009) established the consistency of this recursive estimation method; however, their proof of consistency assumed independence among the observations. Here, we extend the investigation of the performance of Newton's algorithm to several dependent scenarios. We first prove that, under certain conditions, the original algorithm remains consistent when the observations arise from a weakly dependent process whose fixed marginal density is the target mixture. For some common dependence structures where the original algorithm is no longer consistent, we provide a modification of the algorithm that generates a consistent estimator.
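For concreteness, the following is a minimal sketch of the kind of recursive update introduced by Newton and Zhang, written in Python for a mixture with a Gaussian component kernel. The grid discretization, the N(theta, 1) kernel, and the weight schedule w_i = (i + 1)^(-0.67) are illustrative assumptions, not the specification used in the paper.

```python
import numpy as np
from scipy.stats import norm

def predictive_recursion(x, theta_grid, weights=None, f0=None):
    """Recursive (Newton-Zhang style) estimate of a mixing density on a grid.

    x          : 1-D array of observations from the mixture.
    theta_grid : equally spaced grid of mixing-parameter values.
    weights    : optional sequence of weights w_i; defaults to (i + 1) ** (-0.67).
    f0         : optional initial guess for the mixing density; defaults to uniform.
    """
    theta_grid = np.asarray(theta_grid, dtype=float)
    d_theta = theta_grid[1] - theta_grid[0]              # grid spacing for the quadrature
    if f0 is None:
        f = np.full(theta_grid.size, 1.0 / (d_theta * theta_grid.size))
    else:
        f = np.asarray(f0, dtype=float).copy()

    for i, xi in enumerate(x):
        w_i = (i + 1) ** (-0.67) if weights is None else weights[i]
        kern = norm.pdf(xi, loc=theta_grid, scale=1.0)   # N(theta, 1) component kernel (an assumption)
        marg = np.sum(kern * f) * d_theta                # current estimate of the mixture density at xi
        f = (1.0 - w_i) * f + w_i * kern * f / marg      # stochastic-approximation update
    return f

# Toy usage: observations from a N(theta, 1) mixture with mixing density N(0, 2).
rng = np.random.default_rng(0)
theta = rng.normal(0.0, np.sqrt(2.0), size=2000)
x = rng.normal(theta, 1.0)
grid = np.linspace(-6.0, 6.0, 241)
f_hat = predictive_recursion(x, grid)
```

In the stochastic-approximation literature the weights are typically required to satisfy $\sum_i w_i = \infty$ and $\sum_i w_i^2 < \infty$; the exponent $0.67$ above is just one such choice, not the one dictated by the consistency results discussed here.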




Read also

Peggy Cenac, 2020
Nonlinear regression models are a standard tool for modeling real phenomena, with applications in machine learning, ecology, econometrics, and beyond. Estimating the parameters of such models has garnered a lot of attention over the years. We focus here on recursive methods for estimating the parameters of nonlinear regressions. Such methods, the most famous of which are probably the stochastic gradient algorithm and its averaged version, make it possible to deal efficiently with massive data arriving sequentially. Nevertheless, in practice they can be very sensitive to the case where the eigenvalues of the Hessian of the functional we would like to minimize are at different scales. To avoid this problem, we first introduce an online Stochastic Gauss-Newton algorithm. In order to improve the behavior of the estimates in case of bad initialization, we also introduce a new Averaged Stochastic Gauss-Newton algorithm and prove its asymptotic efficiency.
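As a rough illustration of the idea (and not the exact estimator analyzed in this work), an online stochastic Gauss-Newton step rescales each stochastic gradient by a running estimate of the Gauss-Newton curvature matrix, which mitigates sensitivity to Hessian eigenvalues living at different scales. The ridge initialization and the model/gradient callables below are illustrative assumptions.

```python
import numpy as np

def online_gauss_newton(data, model, grad, theta0, ridge=1.0):
    """Illustrative online stochastic Gauss-Newton recursion (a sketch, not the paper's estimator).

    data   : iterable of (x_i, y_i) pairs arriving sequentially.
    model  : callable f(x, theta) returning the scalar prediction.
    grad   : callable returning the gradient of f w.r.t. theta, shape (p,).
    theta0 : initial parameter vector of length p.
    ridge  : small multiple of the identity keeping the curvature matrix invertible.
    """
    theta = np.asarray(theta0, dtype=float).copy()
    H = ridge * np.eye(theta.size)                 # running Gauss-Newton curvature, sum of grad grad^T
    for x, y in data:
        g = grad(x, theta)
        residual = y - model(x, theta)
        H += np.outer(g, g)                        # accumulate curvature information
        theta += np.linalg.solve(H, g * residual)  # rescaled step; the 1/n decay is built into H
    return theta

# Toy usage on an exponential regression y = exp(a * x) + b + noise, theta = (a, b).
rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 2.0, size=5000)
ys = np.exp(0.5 * xs) + 1.0 + 0.1 * rng.standard_normal(5000)
f = lambda x, th: np.exp(th[0] * x) + th[1]
df = lambda x, th: np.array([x * np.exp(th[0] * x), 1.0])
theta_hat = online_gauss_newton(zip(xs, ys), f, df, theta0=[0.0, 0.0])
```

The averaged variant mentioned in the abstract additionally averages the iterates (Polyak-Ruppert style), which is what improves behavior under bad initialization while delivering asymptotic efficiency.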
We study the problem of non-parametric estimation of the density $\pi$ of the stationary distribution of a stochastic two-dimensional damping Hamiltonian system $(Z_t)_{t\in[0,T]}=(X_t,Y_t)_{t\in[0,T]}$. From continuous observation of the sampling path on $[0,T]$, we study the rate of estimation of $\pi(x_0,y_0)$ as $T\to\infty$. We show that kernel-based estimators can achieve the rate $T^{-v}$ for some explicit exponent $v\in(0,1/2)$. One finding is that the rate of estimation depends on the smoothness of $\pi$ and is completely different from the rate appearing in the standard i.i.d. setting or in the case of two-dimensional non-degenerate diffusion processes. In particular, this rate also depends on $y_0$. Moreover, we obtain a minimax lower bound on the $L^2$-risk for pointwise estimation, with the same rate $T^{-v}$, up to $\log(T)$ terms.
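For intuition only, here is a sketch of a kernel-based pointwise estimator of the stationary density, written for a discretized sample path and a Gaussian product kernel. Both are simplifying assumptions: the paper works with continuous observation and ties the bandwidths to the smoothness of $\pi$ to reach the rate $T^{-v}$.

```python
import numpy as np

def stationary_density_at(X, Y, x0, y0, dt, hx, hy):
    """Pointwise kernel estimate of pi(x0, y0) from a path sampled on a grid of step dt.

    The Riemann sum below approximates the time integral
    (1 / (T hx hy)) * int_0^T K((x0 - X_t)/hx) K((y0 - Y_t)/hy) dt
    with a Gaussian kernel K; the bandwidths hx, hy are illustrative choices.
    """
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    T = dt * X.size
    Kx = np.exp(-0.5 * ((x0 - X) / hx) ** 2) / np.sqrt(2.0 * np.pi)
    Ky = np.exp(-0.5 * ((y0 - Y) / hy) ** 2) / np.sqrt(2.0 * np.pi)
    return dt * np.sum(Kx * Ky) / (T * hx * hy)
```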
In this article we study the existence and strong consistency of GEE estimators, when the generalized estimating functions are martingales with random coefficients. Furthermore, we characterize estimating functions which are asymptotically optimal.
A sum of observations drawn by a simple random sampling design from a population of independent random variables is studied. A procedure for finding a general term of the Edgeworth asymptotic expansion is presented. The Lindeberg condition for asymptotic normality, a Berry-Esseen bound, Edgeworth asymptotic expansions under weakened conditions, and Cramér-type large deviation results are derived.
We study the existence, strong consistency and asymptotic normality of estimators obtained from estimating functions that are $p$-dimensional martingale transforms. The problem is motivated by the analysis of evolutionary clustered data, with distributions belonging to the exponential family, and which may also vary in terms of other component series. Within a quasi-likelihood approach, we construct estimating equations which accommodate different forms of dependency among the components of the response vector, and establish multivariate extensions of results on linear and generalized linear models with stochastic covariates. Furthermore, we characterize estimating functions which are asymptotically optimal, in that they lead to confidence regions for the regression parameters which are of minimum size, asymptotically. Results from a simulation study and an application to a real dataset are included.