
Density Deconvolution with Additive Measurement Errors using Quadratic Programming

Posted by Ran Yang
Publication date: 2018
Research field: Mathematical statistics
Paper language: English





Distribution estimation for noisy data via density deconvolution is a notoriously difficult problem for typical noise distributions like Gaussian. We develop a density deconvolution estimator based on quadratic programming (QP) that can achieve better estimation than kernel density deconvolution methods. The QP approach appears to have a more favorable regularization tradeoff between oversmoothing and oscillation, especially at the tails of the distribution. An additional advantage is that it is straightforward to incorporate a number of common density constraints such as nonnegativity, integration-to-one, unimodality, tail convexity, tail monotonicity, and support constraints. We demonstrate that the QP approach has outstanding estimation performance relative to existing methods. Its performance is superior when only the universally applicable nonnegativity and integration-to-one constraints are incorporated, and incorporating additional common constraints when applicable (e.g., nonnegative support, unimodality, tail monotonicity or convexity, etc.) can further substantially improve the estimation.
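To illustrate the idea (a minimal sketch, not the authors' implementation), the deconvolution problem can be discretized on a grid and solved as a nonnegativity-constrained least-squares problem, which is a QP. The grid, the Gaussian noise level, and the use of SciPy's `nnls` below are all assumptions for the sketch; integration-to-one is imposed here by renormalizing rather than as an explicit equality constraint:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
x_true = rng.normal(0.0, 1.0, n)        # latent variable of interest
w = x_true + rng.normal(0.0, 0.5, n)    # observed = true + Gaussian error

grid = np.linspace(-4.0, 4.0, 81)
dx = grid[1] - grid[0]

# empirical density of the *observed* data on the grid
y_hat = np.histogram(
    w, bins=len(grid),
    range=(grid[0] - dx / 2, grid[-1] + dx / 2),
    density=True,
)[0]

# convolution matrix: (C f)_i ~ integral of phi_sigma(grid_i - t) f(t) dt
C = norm.pdf(grid[:, None] - grid[None, :], scale=0.5) * dx

# nonnegative least squares: min ||C f - y_hat||^2 s.t. f >= 0 (a QP)
f, _ = nnls(C, y_hat)
f /= f.sum() * dx   # enforce integration-to-one by renormalization
```

Additional shape constraints (unimodality, tail monotonicity, support) would enter as further linear inequality constraints on `f`, which is what makes the QP formulation convenient.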




Read also

We consider the problem of multivariate density deconvolution when the interest lies in estimating the distribution of a vector-valued random variable but precise measurements of the variable of interest are not available, observations being contaminated with additive measurement errors. The existing sparse literature on the problem assumes the density of the measurement errors to be completely known. We propose robust Bayesian semiparametric multivariate deconvolution approaches when the measurement error density is not known but replicated proxies are available for each unobserved value of the random vector. Additionally, we allow the variability of the measurement errors to depend on the associated unobserved value of the vector of interest through unknown relationships which also automatically includes the case of multivariate multiplicative measurement errors. Basic properties of finite mixture models, multivariate normal kernels and exchangeable priors are exploited in many novel ways to meet the modeling and computational challenges. Theoretical results that show the flexibility of the proposed methods are provided. We illustrate the efficiency of the proposed methods in recovering the true density of interest through simulation experiments. The methodology is applied to estimate the joint consumption pattern of different dietary components from contaminated 24-hour recalls.
This paper aims to build an estimate of an unknown density of the data with measurement error as a linear combination of functions from a dictionary. Inspired by the penalization approach, we propose the weighted Elastic-net penalized minimal $\ell_2$-distance method for sparse coefficient estimation, where the adaptive weights come from sharp concentration inequalities. The optimal weighted tuning parameters are obtained by the first-order conditions holding with high probability. Under local coherence or minimal eigenvalue assumptions, non-asymptotic oracle inequalities are derived. These theoretical results are transposed to obtain support recovery with high probability. Then, numerical experiments for discrete and continuous distributions confirm the significant improvement obtained by our procedure when compared with other conventional approaches. Finally, an application to a meteorology data set shows that our method is more powerful at detecting the shape of multi-modal densities than other conventional approaches.
This paper develops a method for estimating parameters of a vector autoregression (VAR) observed in white noise. The estimation method assumes the noise variance matrix is known and does not require any iterative process. This study provides consistent estimators and shows the asymptotic distribution of the parameters required for conducting tests of Granger causality. Methods in the existing statistical literature cannot be used for testing Granger causality, since under the null hypothesis the model becomes unidentifiable. Measurement error effects on parameter estimates were evaluated by using computational simulations. The results show that the proposed approach produces empirical false positive rates close to the adopted nominal level (even for small samples) and has a good performance around the null hypothesis. The applicability and usefulness of the proposed approach are illustrated using a functional magnetic resonance imaging dataset.
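The attenuation that motivates this kind of correction can be seen in a small univariate simulation (the AR(1) coefficient, noise level, and moment-based correction below are illustrative assumptions in the spirit of the known-noise-variance setting, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi, tau = 5000, 0.8, 1.0   # assumed AR coefficient and known noise std

# latent AR(1): x_t = phi * x_{t-1} + e_t
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(0.0, tau, n)   # series observed in white noise

# naive OLS of y_t on y_{t-1} is attenuated toward zero by the noise
phi_naive = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

# moment correction using the known noise variance tau^2:
# cov(y_t, y_{t-1}) = phi * gamma_x(0) and var(y) = gamma_x(0) + tau^2
gamma0 = y.var()
phi_corrected = phi_naive * gamma0 / (gamma0 - tau**2)
```

The naive estimate lands well below 0.8, while the corrected one recovers it, matching the paper's point that ignoring observation noise distorts the autoregressive coefficients that Granger-causality tests depend on.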
We consider nonparametric measurement error density deconvolution subject to heteroscedastic measurement errors as well as symmetry about zero and shape constraints, in particular unimodality. The problem is motivated by applications where the observed data are estimated effect sizes from regressions on multiple factors, where the target is the distribution of the true effect sizes. We exploit the fact that any symmetric and unimodal density can be expressed as a mixture of symmetric uniform densities, and model the mixing density in a new way using a Dirichlet process location-mixture of Gamma distributions. We do the computations within a Bayesian context, describe a simple scalable implementation that is linear in the sample size, and show that the estimate of the unknown target density is consistent. Within our application context of regression effect sizes, the target density is likely to have a large probability near zero (the near null effects) coupled with a heavy-tailed distribution (the actual effects). Simulations show that unlike standard deconvolution methods, our Constrained Bayesian Deconvolution method does a much better job of reconstruction of the target density. Applications to a genome-wide association study (GWAS) and microarray data reveal similar results.
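The uniform-mixture fact the abstract relies on is Khintchine's classical characterization: a density $f$ symmetric about zero is unimodal at zero if and only if it is a scale mixture of symmetric uniforms for some mixing distribution $G$ on $(0,\infty)$,

```latex
f(x) \;=\; \int_0^\infty \frac{1}{2\theta}\,\mathbf{1}\{|x| \le \theta\}\,\mathrm{d}G(\theta),
```

so placing a flexible prior on $G$ (here, a Dirichlet process mixture of Gammas) automatically enforces symmetry and unimodality on $f$.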
Zhichao Jiang, Peng Ding (2019)
Instrumental variable methods can identify causal effects even when the treatment and outcome are confounded. We study the problem of imperfect measurements of the binary instrumental variable, treatment or outcome. We first consider non-differential measurement errors, that is, the mis-measured variable does not depend on other variables given its true value. We show that the measurement error of the instrumental variable does not bias the estimate, the measurement error of the treatment biases the estimate away from zero, and the measurement error of the outcome biases the estimate toward zero. Moreover, we derive sharp bounds on the causal effects without additional assumptions. These bounds are informative because they exclude zero. We then consider differential measurement errors, and focus on sensitivity analyses in those settings.
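The direction of these biases can be checked in a small simulation (the effect size, first-stage strength, and 10% flip rates below are illustrative assumptions, and the Wald ratio stands in for the general IV estimand):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

z = rng.binomial(1, 0.5, n)            # binary instrument
d = rng.binomial(1, 0.2 + 0.5 * z)     # binary treatment, strong first stage
y = d + rng.normal(size=n)             # true causal effect of d on y is 1

def wald(z, d, y):
    """IV (Wald) estimate: ITT on outcome / ITT on treatment."""
    return (y[z == 1].mean() - y[z == 0].mean()) / \
           (d[z == 1].mean() - d[z == 0].mean())

# non-differential misclassification: flip each variable with prob 0.1
d_err = np.where(rng.binomial(1, 0.1, n).astype(bool), 1 - d, d)
z_err = np.where(rng.binomial(1, 0.1, n).astype(bool), 1 - z, z)

est = wald(z, d, y)          # ~1: no measurement error
est_d = wald(z, d_err, y)    # away from zero: the denominator ITT shrinks
est_z = wald(z_err, d, y)    # still ~1: both ITTs shrink by the same factor
```

Misclassifying the treatment attenuates only the first-stage denominator, inflating the ratio away from zero, while misclassifying the instrument attenuates numerator and denominator alike, leaving the estimate unbiased, as the abstract states.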