
A Global Bias-Correction DC Method for Biased Estimation under Memory Constraint

Added by Feng Li
Publication date: 2019
Language: English





This paper establishes a global bias-correction divide-and-conquer (GBC-DC) rule for biased estimation under memory constraints. To construct the new estimator, a closed-form representation of the local estimators obtained from the data in each batch is adopted, aiming to formulate a pro forma linear regression between the local estimators and the true parameter of interest. The least squares method is then used within this framework to combine the local estimators into a global estimator of the parameter. The main advantage over the classical DC method is that the new GBC-DC method can absorb the information hidden in the statistical structure and the variables in each batch of data. Consequently, the resulting global estimator is strictly unbiased even if the local estimators have non-negligible bias. Moreover, the global estimator is consistent, and can even achieve root-$n$ consistency, without any constraint on the number of batches. Another attractive feature of the new method is that it is computationally simple and efficient, requiring neither an iterative algorithm nor local bias correction. Specifically, the proposed GBC-DC method applies to various biased estimators, such as shrinkage-type estimators and nonparametric regression estimators. Detailed simulation studies demonstrate that the proposed GBC-DC approach substantially reduces the bias, performs comparably with the full-data estimation, and clearly outperforms its competitors.
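To make the construction concrete, the following is a minimal sketch of the bias-correction idea as described in the abstract, using ridge regression as the biased local estimator. The matrix A_k, the batch loop, and the least-squares combination below are illustrative assumptions drawn from the phrase "pro forma linear regression between the local estimators and the true parameter"; they are not the paper's exact formulation.

import numpy as np

# Sketch (illustrative assumptions, not the paper's construction): each batch
# yields a biased ridge estimator beta_k whose closed form gives
# E[beta_k] = A_k @ beta, so stacking beta_k against A_k defines a linear
# regression in the unknown beta that least squares can solve without bias.
rng = np.random.default_rng(0)
n, p, K, lam = 10_000, 5, 50, 5.0              # sample size, dimension, batches, ridge penalty
beta = rng.normal(size=p)                      # true parameter
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(scale=0.5, size=n)

AtA = np.zeros((p, p))                         # accumulates sum_k A_k' A_k
Atb = np.zeros(p)                              # accumulates sum_k A_k' beta_k
naive = np.zeros(p)                            # classical DC: plain average of local estimators

for idx in np.array_split(np.arange(n), K):    # process the data batch by batch
    Xk, yk = X[idx], y[idx]
    Gk = Xk.T @ Xk
    Rk = np.linalg.inv(Gk + lam * np.eye(p))
    beta_k = Rk @ Xk.T @ yk                    # biased local (ridge) estimator
    Ak = Rk @ Gk                               # closed form: E[beta_k] = Ak @ beta
    AtA += Ak.T @ Ak
    Atb += Ak.T @ beta_k
    naive += beta_k / K

gbc = np.linalg.solve(AtA, Atb)                # least-squares combination of local estimators

print("error of naive DC average :", np.linalg.norm(naive - beta))
print("error of bias-corrected DC:", np.linalg.norm(gbc - beta))

Under these assumptions the combined estimator is exactly unbiased even though each local ridge estimator is not, which mirrors the claim made in the abstract.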



Related research


Hanqing Cai, Tengyao Wang (2021)
Change-points are a routine feature of big data observed in the form of high-dimensional data streams. In many such data streams, the component series possess group structures, and it is natural to assume that changes occur only in a small number of all groups. We propose a new change-point procedure, called groupInspect, that exploits the group sparsity structure to estimate a projection direction so as to aggregate information across the component series and successfully estimate the change-point in the mean structure of the series. We prove that the estimated projection direction is minimax optimal, up to logarithmic factors, when all group sizes are of comparable order. Moreover, our theory provides strong guarantees on the rate of convergence of the change-point location estimator. Numerical studies demonstrate the competitive performance of groupInspect in a wide range of settings, and a real data example confirms the practical usefulness of our procedure.
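As a rough illustration of the projection-then-aggregate idea, the sketch below computes a CUSUM transform of a high-dimensional panel, takes its leading singular vector as a projection direction, and locates the change-point on the projected series. It omits the group-sparsity step that groupInspect adds; the function and variable names are illustrative assumptions, not the authors' algorithm.

import numpy as np

def cusum(Z):
    # CUSUM transform of a (p, n) panel: column t contrasts the mean of the
    # first t observations with the mean of the remaining n - t observations.
    n = Z.shape[1]
    csum = np.cumsum(Z, axis=1)
    total = csum[:, -1][:, None]
    t = np.arange(1, n)
    return np.sqrt(t * (n - t) / n) * (csum[:, :-1] / t - (total - csum[:, :-1]) / (n - t))

rng = np.random.default_rng(1)
p, n, true_cp = 20, 200, 120
X = rng.normal(size=(p, n))
X[:5, true_cp:] += 1.0                               # mean shift in five coordinates after time 120

T = cusum(X)
v = np.linalg.svd(T, full_matrices=False)[0][:, 0]   # projection direction (no sparsity here)
projected = v @ X                                    # aggregate the panel into one series
cp_hat = int(np.argmax(np.abs(cusum(projected[None, :])))) + 1
print("estimated change-point:", cp_hat)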
Missing data and confounding are two problems researchers face in observational studies for comparative effectiveness. Williamson et al. (2012) recently proposed a unified approach to handle both issues concurrently using a multiply-robust (MR) methodology under the assumption that confounders are missing at random. Their approach considers a union of models in which any submodel has a parametric component while the remaining models are unrestricted. We show that while their estimating function is MR in theory, the possibility for multiply robust inference is complicated by the fact that parametric models for different components of the union model are not variation independent, and therefore the MR property is unlikely to hold in practice. To address this, we propose an alternative transparent parametrization of the likelihood function, which makes explicit the model dependencies between the various nuisance functions needed to evaluate the MR efficient score. The proposed method is genuinely doubly robust (DR) in that it is consistent and asymptotically normal if one of two sets of modeling assumptions holds. We evaluate the performance and double-robustness property of the DR method via a simulation study.
We offer a survey of recent results on covariance estimation for heavy-tailed distributions. By unifying ideas scattered in the literature, we propose user-friendly methods that facilitate practical implementation. Specifically, we introduce element-wise and spectrum-wise truncation operators, as well as their $M$-estimator counterparts, to robustify the sample covariance matrix. Different from the classical notion of robustness that is characterized by the breakdown property, we focus on the tail robustness which is evidenced by the connection between nonasymptotic deviation and confidence level. The key observation is that the estimators need to adapt to the sample size, the dimensionality of the data, and the noise level to achieve an optimal tradeoff between bias and robustness. Furthermore, to facilitate their practical use, we propose data-driven procedures that automatically calibrate the tuning parameters. We demonstrate their applications to a series of structured models in high dimensions, including the bandable and low-rank covariance matrices and sparse precision matrices. Numerical studies lend strong support to the proposed methods.
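For intuition, here is a minimal sketch of element-wise truncation applied to a heavy-tailed sample. The threshold tau is a fixed placeholder rather than the data-driven calibration the survey proposes, and the data are assumed mean-zero for simplicity.

import numpy as np

def psi(x, tau):
    # Truncation operator: shrink values so that |psi(x)| <= tau while keeping the sign.
    return np.sign(x) * np.minimum(np.abs(x), tau)

rng = np.random.default_rng(2)
n, p = 500, 10
X = rng.standard_t(df=3, size=(n, p))            # heavy-tailed, mean-zero observations

tau = 2.0 * np.sqrt(n / np.log(p))               # illustrative threshold; calibrate in practice
prods = X[:, :, None] * X[:, None, :]            # entry-wise products X_ij * X_ik, shape (n, p, p)
S_trunc = psi(prods, tau).mean(axis=0)           # element-wise truncated covariance estimator
S_plain = X.T @ X / n                            # ordinary sample covariance for comparison

# The true covariance of standard t_3 data is 3 * I.
print("max entry error, truncated:", np.abs(S_trunc - 3 * np.eye(p)).max())
print("max entry error, plain    :", np.abs(S_plain - 3 * np.eye(p)).max())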
The variance of noise plays an important role in many change-point detection procedures and the associated inferences. Most commonly used variance estimators require strong assumptions on the true mean structure or normality of the error distribution, which may not hold in applications. More importantly, the qualities of these estimators have not been discussed systematically in the literature. In this paper, we introduce a framework of equivariant variance estimation for multiple change-point models. In particular, we characterize the set of all equivariant unbiased quadratic variance estimators for a family of change-point model classes, and develop a minimax theory for such estimators.
Sensitivity indices when the inputs of a model are not independent are estimated by local polynomial techniques. Two original estimators based on local polynomial smoothers are proposed. Both have good theoretical properties, which are established and also illustrated through analytical examples. They are used to carry out a sensitivity analysis on a real case of a kinetic model with correlated parameters.