
Strong consistency and optimality for generalized estimating equations with stochastic covariates

Published by: Laura Dumitrescu
Publication date: 2017
Research field: Mathematical Statistics
Language: English





In this article we study the existence and strong consistency of GEE estimators when the generalized estimating functions are martingales with random coefficients. Furthermore, we characterize the estimating functions that are asymptotically optimal.
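For orientation, a generalized estimating equation for clustered responses typically takes the form below; the notation is illustrative and not taken from the paper, which works with martingale estimating functions whose coefficients are random because the covariates are stochastic.

$$ g_n(\beta) \;=\; \sum_{i=1}^{n} D_i^{\top}(\beta)\, V_i^{-1}\,\bigl(Y_i - \mu_i(\beta)\bigr) \;=\; 0, \qquad D_i(\beta) = \frac{\partial \mu_i(\beta)}{\partial \beta^{\top}}, $$

where $Y_i$ is the response vector of cluster $i$, $\mu_i(\beta)$ its modelled mean and $V_i$ a working covariance matrix; the GEE estimator $\widehat{\beta}_n$ is a root of $g_n$, and existence and strong consistency concern the behaviour of such roots as $n \to \infty$.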


Read also

In the low-dimensional case, the generalized additive coefficient model (GACM) proposed by Xue and Yang [Statist. Sinica 16 (2006) 1423-1446] has been demonstrated to be a powerful tool for studying nonlinear interaction effects of variables. In this paper, we propose estimation and inference procedures for the GACM when the dimension of the variables is high. Specifically, we propose a groupwise penalization based procedure to distinguish significant covariates for the large $p$ small $n$ setting. The procedure is shown to be consistent for model structure identification. Further, we construct simultaneous confidence bands for the coefficient functions in the selected model based on a refined two-step spline estimator. We also discuss how to choose the tuning parameters. To estimate the standard deviation of the functional estimator, we adopt the smoothed bootstrap method. We conduct simulation experiments to evaluate the numerical performance of the proposed methods and analyze an obesity data set from a genome-wide association study as an illustration.
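As a rough sketch of the selection step (generic notation, not the authors' exact criterion), groupwise penalization in spline-based models of this kind minimizes a criterion of the form

$$ \min_{\theta}\; \mathcal{L}_n(\theta) \;+\; \sum_{j=1}^{p} p_{\lambda}\!\bigl(\lVert \theta_j \rVert_2\bigr), $$

where $\mathcal{L}_n$ is a least-squares or quasi-likelihood loss, $\theta_j$ collects the spline coefficients representing the functional contribution of the $j$-th covariate, and $p_\lambda$ is a group penalty such as the group LASSO or SCAD applied to the group norm; covariates whose entire coefficient group is shrunk to zero are screened out, and the refined two-step spline estimator and confidence bands are then computed on the selected components.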
We study the existence, strong consistency and asymptotic normality of estimators obtained from estimating functions, that are p-dimensional martingale transforms. The problem is motivated by the analysis of evolutionary clustered data, with distributions belonging to the exponential family, and which may also vary in terms of other component series. Within a quasi-likelihood approach, we construct estimating equations, which accommodate different forms of dependency among the components of the response vector and establish multivariate extensions of results on linear and generalized linear models, with stochastic covariates. Furthermore, we characterize estimating functions which are asymptotically optimal, in that they lead to confidence regions for the regression parameters which are of minimum size, asymptotically. Results from a simulation study and an application to a real dataset are included.
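As an illustration of what asymptotic optimality means here (generic notation, not necessarily the paper's), within a class of unbiased estimating functions $g(\beta) = \sum_{i} W_i(\beta)\,\bigl(Y_i - \mu_i(\beta)\bigr)$ an estimating function is typically called optimal when it maximizes the Godambe information

$$ J(g) \;=\; E\!\left[\frac{\partial g}{\partial \beta^{\top}}\right]^{\top} \Bigl(E\bigl[g\,g^{\top}\bigr]\Bigr)^{-1} E\!\left[\frac{\partial g}{\partial \beta^{\top}}\right] $$

in the ordering of nonnegative definite matrices; in the quasi-likelihood setting this is achieved by weights of the form $W_i^{*} = D_i^{\top} V_i^{-1}$ with $V_i$ the true conditional covariance, and a larger $J(g)$ translates into asymptotically smaller confidence regions for the regression parameters.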
Nilabja Guha, Anindya Roy (2020)
Estimating the mixing density of a mixture distribution remains an interesting problem in the statistics literature. Using a stochastic approximation method, Newton and Zhang (1999) introduced a fast recursive algorithm for estimating the mixing density of a mixture. Under suitably chosen weights the stochastic approximation estimator converges to the true solution. In Tokdar et al. (2009) the consistency of this recursive estimation method was established. However, the proof of consistency of the resulting estimator used independence among observations as an assumption. Here, we extend the investigation of the performance of Newton's algorithm to several dependent scenarios. We first prove that the original algorithm, under certain conditions, remains consistent when the observations arise from a weakly dependent process whose fixed marginal density is the target mixture. For some of the common dependence structures where the original algorithm is no longer consistent, we provide a modification of the algorithm that generates a consistent estimator.
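The recursive update at the heart of this algorithm is simple enough to sketch. The following is a minimal discretized version, assuming a normal location kernel and a uniform initial guess; the function name, weights and grid are illustrative choices, not the authors' implementation.

import numpy as np

def recursive_mixing_density(x, grid, sigma=1.0):
    # Newton-Zhang-type recursive estimator of a mixing density on a grid,
    # assuming the kernel p(x | theta) = N(x; theta, sigma^2).
    grid = np.asarray(grid, dtype=float)
    dtheta = grid[1] - grid[0]                           # grid spacing for quadrature
    f = np.full(grid.shape, 1.0 / (grid[-1] - grid[0]))  # uniform initial guess
    for i, xi in enumerate(x, start=1):
        w = 1.0 / (i + 1.0)                              # stochastic-approximation weight
        kern = np.exp(-0.5 * ((xi - grid) / sigma) ** 2)
        post = kern * f
        post /= post.sum() * dtheta                      # "posterior" of theta given x_i under current f
        f = (1.0 - w) * f + w * post                     # convex recursive update
    return f

# Example: observations from a two-component normal location mixture
rng = np.random.default_rng(0)
theta = rng.choice([-2.0, 2.0], size=2000)
x = theta + rng.normal(size=2000)
f_hat = recursive_mixing_density(x, np.linspace(-6.0, 6.0, 241))

The dependence question studied in the paper concerns what happens when the observations fed into this recursion are not independent.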
Knowledge gradient is a design principle for developing Bayesian sequential sampling policies to solve optimization problems. In this paper we consider the ranking and selection problem in the presence of covariates, where the best alternative is not universal but depends on the covariates. In this context, we prove that under minimal assumptions, the sampling policy based on knowledge gradient is consistent, in the sense that following the policy the best alternative as a function of the covariates will be identified almost surely as the number of samples grows. We also propose a stochastic gradient ascent algorithm for computing the sampling policy and demonstrate its performance via numerical experiments.
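For readers unfamiliar with knowledge gradient, the following is a toy sketch of the classical knowledge-gradient factor for finitely many alternatives with independent normal beliefs and known noise variance, without covariates; in the paper the posterior quantities depend on the covariates and the policy is computed via the proposed stochastic gradient ascent algorithm. All names below are illustrative.

import numpy as np
from scipy.stats import norm

def kg_factors(mu, var, noise_var):
    # Classical knowledge-gradient factor for independent normal beliefs:
    # expected one-step improvement in the best posterior mean.
    mu = np.asarray(mu, dtype=float)
    var = np.asarray(var, dtype=float)
    sigma_tilde = var / np.sqrt(var + noise_var)   # std. dev. of the change in the posterior mean
    kg = np.empty_like(mu)
    for i in range(mu.size):
        best_other = np.max(np.delete(mu, i))      # best competing alternative
        z = -abs(mu[i] - best_other) / sigma_tilde[i]
        kg[i] = sigma_tilde[i] * (z * norm.cdf(z) + norm.pdf(z))
    return kg

# Sample next the alternative with the largest KG factor
mu, var = [0.2, 0.5, 0.4], [1.0, 0.3, 0.8]
next_alternative = int(np.argmax(kg_factors(mu, var, noise_var=1.0)))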
Qiuping Wang (2021)
We are concerned here with unrestricted maximum likelihood estimation in a sparse $p_0$ model with covariates for directed networks. The model has a density parameter $\nu$, a $2n$-dimensional node parameter $\boldsymbol{\eta}$ and a fixed-dimensional regression coefficient $\boldsymbol{\gamma}$ of covariates. Previous studies focus on restricted likelihood inference. When the number of nodes $n$ goes to infinity, we derive the $\ell_\infty$-error between the maximum likelihood estimator (MLE) $(\widehat{\boldsymbol{\eta}}, \widehat{\boldsymbol{\gamma}})$ and its true value $(\boldsymbol{\eta}, \boldsymbol{\gamma})$. The errors are $O_p((\log n/n)^{1/2})$ for $\widehat{\boldsymbol{\eta}}$ and $O_p(\log n/n)$ for $\widehat{\boldsymbol{\gamma}}$, up to an additional factor. This explains the asymptotic bias phenomenon in the asymptotic normality of $\widehat{\boldsymbol{\gamma}}$ in Yan, Jiang, Fienberg and Leng (2018). Further, we derive the asymptotic normality of the MLE. Numerical studies and a data analysis demonstrate our theoretical findings.
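For orientation, a $p_0$-type model with covariates specifies the edge probabilities of a directed graph roughly as follows; the exact parametrization and sparsity device used in the paper may differ.

$$ \log \frac{P(a_{ij}=1)}{P(a_{ij}=0)} \;=\; \nu + \alpha_i + \beta_j + z_{ij}^{\top}\gamma, \qquad i \neq j, $$

where $\nu$ governs the overall density, $\boldsymbol{\eta} = (\alpha_1,\dots,\alpha_n,\beta_1,\dots,\beta_n)$ collects the $2n$ out-degree and in-degree (node) parameters, $z_{ij}$ is the covariate vector of the dyad $(i,j)$ and $\gamma$ the regression coefficient; the unrestricted MLE maximizes the resulting logistic likelihood jointly over $(\nu, \boldsymbol{\eta}, \gamma)$.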