
Asymptotic results with estimating equations for time-evolving clustered data

 Added by Laura Dumitrescu
Publication date: 2020
Language: English





We study the existence, strong consistency and asymptotic normality of estimators obtained from estimating functions that are p-dimensional martingale transforms. The problem is motivated by the analysis of evolutionary clustered data, with distributions belonging to the exponential family, which may also vary in terms of other component series. Within a quasi-likelihood approach, we construct estimating equations that accommodate different forms of dependency among the components of the response vector, and establish multivariate extensions of results on linear and generalized linear models with stochastic covariates. Furthermore, we characterize estimating functions that are asymptotically optimal, in that they lead to confidence regions for the regression parameters which are asymptotically of minimum size. Results from a simulation study and an application to a real dataset are included.
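For readers who want a concrete anchor, the sketch below solves a generic quasi-likelihood (GEE-type) estimating equation of the form sum_i D_i^T V_i^{-1} (y_i - mu_i(beta)) = 0 by Fisher scoring. The Poisson log-link mean, the independence working covariance, and all variable names are illustrative assumptions; this is not the paper's specific martingale-transform construction.

```python
# Minimal sketch (not the paper's construction): solving a quasi-likelihood
# estimating equation sum_i D_i' V_i^{-1} (y_i - mu_i(beta)) = 0 for clustered
# responses, with an assumed Poisson log-link mean and an independence working
# covariance, by Fisher scoring.
import numpy as np

def fisher_scoring_gee(y, X, beta, n_iter=25, tol=1e-8):
    """y: list of cluster response vectors, X: list of cluster design matrices."""
    for _ in range(n_iter):
        score = np.zeros_like(beta)
        info = np.zeros((beta.size, beta.size))
        for yi, Xi in zip(y, X):
            mu = np.exp(Xi @ beta)            # assumed log-link mean
            D = Xi * mu[:, None]              # derivative d mu / d beta
            Vinv = np.diag(1.0 / mu)          # working covariance: independence, var = mu
            score += D.T @ Vinv @ (yi - mu)
            info += D.T @ Vinv @ D
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.linalg.norm(step) < tol:
            break
    return beta

# Example usage with simulated clustered Poisson data
rng = np.random.default_rng(0)
X = [rng.normal(size=(4, 2)) for _ in range(200)]      # 200 clusters of size 4
beta_true = np.array([0.5, -0.3])
y = [rng.poisson(np.exp(Xi @ beta_true)) for Xi in X]
print(fisher_scoring_gee(y, X, beta=np.zeros(2)))
```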



Related research

In this article we study the existence and strong consistency of GEE estimators, when the generalized estimating functions are martingales with random coefficients. Furthermore, we characterize estimating functions which are asymptotically optimal.
Xin Gao, Grace Y. Yi (2012)
This paper investigates the properties of penalized estimating equations when both the mean and association structures are modelled. To select variables for the mean and association structures sequentially, we propose a hierarchical penalized generalized estimating equations (HPGEE2) approach. The first set of penalized estimating equations is solved for the selection of significant mean parameters. Conditional on the selected mean model, the second set of penalized estimating equations is solved for the selection of significant association parameters. The hierarchical approach is designed to accommodate possible model constraints relating the inclusion of covariates into the mean and the association models. This two-step penalization strategy enjoys the compelling advantage of easing the computational burden compared to solving the two sets of penalized equations simultaneously. HPGEE2 with a smoothly clipped absolute deviation (SCAD) penalty is shown to have the oracle property for the mean and association models. The asymptotic behavior of the penalized estimator under this hierarchical approach is established. An efficient two-stage penalized weighted least squares algorithm is developed to implement the proposed method. The empirical performance of the proposed HPGEE2 is demonstrated through Monte Carlo studies and the analysis of a clinical data set.
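As a point of reference for the penalty used above, the sketch below implements the standard SCAD penalty and its derivative (Fan and Li's form, with the conventional a = 3.7) on a vector of coefficients; it illustrates how the penalty leaves large coefficients essentially unpenalized while shrinking small ones, but it is not the HPGEE2 algorithm itself.

```python
# Minimal sketch of the SCAD penalty and its derivative (Fan & Li form, a = 3.7),
# as typically used inside penalized estimating equations. Illustration only,
# not the HPGEE2 selection algorithm.
import numpy as np

def scad_penalty(t, lam, a=3.7):
    t = np.abs(t)
    return np.where(
        t <= lam, lam * t,
        np.where(t <= a * lam,
                 (a * lam * t - 0.5 * (t**2 + lam**2)) / (a - 1),
                 0.5 * lam**2 * (a + 1)))

def scad_derivative(t, lam, a=3.7):
    t = np.abs(t)
    return lam * ((t <= lam) + np.maximum(a * lam - t, 0) / ((a - 1) * lam) * (t > lam))

beta = np.array([0.02, 0.4, 1.5, 3.0])
print(scad_penalty(beta, lam=0.5))
print(scad_derivative(beta, lam=0.5))   # near-zero derivative for large |beta|: low bias
```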
We study periodic review stochastic inventory control in the data-driven setting, in which the retailer makes ordering decisions based only on historical demand observations without any knowledge of the probability distribution of the demand. Since an $(s, S)$-policy is optimal when the demand distribution is known, we investigate the statistical properties of the data-driven $(s, S)$-policy obtained by recursively computing the empirical cost-to-go functions. This policy is inherently challenging to analyze because the recursion induces propagation of the estimation error backwards in time. In this work, we establish the asymptotic properties of this data-driven policy by fully accounting for the error propagation. First, we rigorously show the consistency of the estimated parameters by filling in some gaps (due to unaccounted error propagation) in the existing studies. On the other hand, empirical process theory cannot be directly applied to show asymptotic normality. To explain, the empirical cost-to-go functions for the estimated parameters are not i.i.d. sums, again due to the error propagation. Our main methodological innovation comes from an asymptotic representation for multi-sample $U$-processes in terms of i.i.d. sums. This representation enables us to apply empirical process theory to derive the influence functions of the estimated parameters and establish joint asymptotic normality. Based on these results, we also propose an entirely data-driven estimator of the optimal expected cost and we derive its asymptotic distribution. We demonstrate some useful applications of our asymptotic results, including sample size determination, as well as interval estimation and hypothesis testing on vital parameters of the inventory problem. The results from our numerical simulations conform to our theoretical analysis.
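A heavily simplified, single-period sketch of the data-driven idea described above: estimate the order-up-to level S by minimizing an empirical holding/backlog cost over demand samples, and take s as the lowest level whose empirical cost stays within the fixed ordering cost K of the minimum. The cost parameters, the candidate grid, and the single-period simplification are all assumptions; the paper's estimator is built from recursive multi-period empirical cost-to-go functions.

```python
# Heavily simplified single-period sketch of a data-driven (s, S) rule, plugging
# the empirical demand distribution into a holding/backlog cost. K (fixed order
# cost), h (holding), b (backlog) and the candidate grid are illustrative choices.
import numpy as np

def empirical_sS(demand_samples, K=10.0, h=1.0, b=9.0):
    d = np.asarray(demand_samples, dtype=float)
    grid = np.linspace(0.0, d.max() * 1.5, 400)          # candidate inventory levels
    # empirical one-period holding/backlog cost at each post-order level y
    cost = np.array([np.mean(h * np.maximum(y - d, 0) + b * np.maximum(d - y, 0))
                     for y in grid])
    S = grid[np.argmin(cost)]                            # order-up-to level
    # reorder point: lowest level whose cost is still within K of the minimum
    s = grid[cost <= cost.min() + K].min()
    return s, S

rng = np.random.default_rng(1)
print(empirical_sS(rng.gamma(shape=4.0, scale=5.0, size=500)))
```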
Let $X$ be a mean zero Gaussian random vector in a separable Hilbert space $\mathbb{H}$ with covariance operator $\Sigma := \mathbb{E}(X \otimes X)$. Let $\Sigma = \sum_{r \geq 1} \mu_r P_r$ be the spectral decomposition of $\Sigma$ with distinct eigenvalues $\mu_1 > \mu_2 > \dots$ and the corresponding spectral projectors $P_1, P_2, \dots$ Given a sample $X_1, \dots, X_n$ of size $n$ of i.i.d. copies of $X$, the sample covariance operator is defined as $\hat\Sigma_n := n^{-1} \sum_{j=1}^n X_j \otimes X_j$. The main goal of principal component analysis is to estimate the spectral projectors $P_1, P_2, \dots$ by their empirical counterparts $\hat P_1, \hat P_2, \dots$ properly defined in terms of the spectral decomposition of the sample covariance operator $\hat\Sigma_n$. The aim of this paper is to study the asymptotic distributions of important statistics related to this problem, in particular, of the statistic $\|\hat P_r - P_r\|_2^2$, where $\|\cdot\|_2^2$ is the squared Hilbert--Schmidt norm. This is done in a high-complexity asymptotic framework in which the so-called effective rank $\mathbf{r}(\Sigma) := \frac{\mathrm{tr}(\Sigma)}{\|\Sigma\|_{\infty}}$ ($\mathrm{tr}(\cdot)$ being the trace and $\|\cdot\|_{\infty}$ being the operator norm) of the true covariance $\Sigma$ becomes large simultaneously with the sample size $n$, but $\mathbf{r}(\Sigma) = o(n)$ as $n \to \infty$. In this setting, we prove that, in the case of a one-dimensional spectral projector $P_r$, the properly centered and normalized statistic $\|\hat P_r - P_r\|_2^2$ with \emph{data-dependent} centering and normalization converges in distribution to a Cauchy type limit. The proofs of this and other related results rely on perturbation analysis and Gaussian concentration.
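A finite-dimensional numerical illustration of the statistic studied above: simulate mean-zero Gaussian vectors, form the sample covariance, and compute the squared Hilbert--Schmidt (Frobenius) distance between the population and sample projectors onto the top eigenspace. The dimension, spectrum, and sample size are arbitrary choices; the paper's setting is a separable Hilbert space with growing effective rank.

```python
# Finite-dimensional illustration of ||P_r_hat - P_r||_2^2 for the top eigenvalue
# (r = 1) of a Gaussian covariance. The spectrum, dimension and sample size are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
p, n = 50, 2000
eigvals = 1.0 / np.arange(1, p + 1)                  # distinct, decreasing spectrum
U = np.linalg.qr(rng.normal(size=(p, p)))[0]         # random orthonormal eigenvectors
Sigma = U @ np.diag(eigvals) @ U.T

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Sigma_hat = X.T @ X / n                              # sample covariance operator

P1 = np.outer(U[:, 0], U[:, 0])                      # true projector onto top eigenspace
w, V = np.linalg.eigh(Sigma_hat)                     # eigenvalues in ascending order
v1 = V[:, -1]                                        # top sample eigenvector
P1_hat = np.outer(v1, v1)

print(np.sum((P1_hat - P1) ** 2))                    # squared Hilbert--Schmidt norm
```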
The last decades have seen an unprecedented increase in the availability of data sets that are inherently global and temporally evolving, from remotely sensed networks to climate model ensembles. This paper provides a view of statistical modeling techniques for space-time processes, where space is the sphere representing our planet. In particular, we make a distinction between (a) second-order-based and (b) practical approaches to modeling temporally evolving global processes. The former are based on the specification of a class of space-time covariance functions, with space being the two-dimensional sphere. The latter are based on an explicit description of the dynamics of the space-time process, i.e., by specifying its evolution as a function of its past history with added spatially dependent noise. We especially focus on approach (a), where the literature has been sparse. We provide new models of space-time covariance functions for random fields defined on spheres cross time. Practical approaches, (b), are also discussed, with special emphasis on models built directly on the sphere, without projecting the spherical coordinates onto the plane. We present a case study focused on the analysis of air pollution from the 2015 wildfires in Equatorial Asia, an event which was classified as the year's worst environmental disaster. The paper finishes with a list of the main theoretical and applied research problems in the area, where we expect the statistical community to engage over the next decade.
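A minimal sketch of approach (a): a separable space-time covariance built from great-circle distance on the sphere and exponential decay in time. The exponential forms and parameter values are simple illustrative choices (the exponential of great-circle distance is a valid covariance on the sphere), not one of the new covariance classes proposed in that paper.

```python
# Minimal sketch of a separable space-time covariance on the sphere cross time,
# using great-circle distance in space and exponential decay in time. The model
# and parameters are illustrative, not the paper's new covariance classes.
import numpy as np

def great_circle(lat1, lon1, lat2, lon2):
    """Great-circle distance (radians) between points given in degrees."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    cos_d = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(l1 - l2)
    return np.arccos(np.clip(cos_d, -1.0, 1.0))

def cov_space_time(x1, t1, x2, t2, c_space=1.0, c_time=0.5, sigma2=1.0):
    """Separable covariance: sigma^2 * exp(-c_s * d_GC) * exp(-c_t * |t1 - t2|)."""
    d = great_circle(x1[0], x1[1], x2[0], x2[1])
    return sigma2 * np.exp(-c_space * d) * np.exp(-c_time * abs(t1 - t2))

# Covariance between two illustrative Equatorial Asia locations, two days apart
print(cov_space_time((-6.2, 106.8), 0.0, (1.35, 103.8), 2.0))
```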