
Robust and Efficient Empirical Bayes Confidence Intervals using $\gamma$-Divergence

Published by Shonosuke Sugasawa
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





Although parametric empirical Bayes confidence intervals of multiple normal means are fundamental tools for compound decision problems, their performance can be sensitive to misspecification of the parametric prior distribution (typically a normal distribution), especially when some strong signals are included. We suggest a simple modification of the standard confidence intervals such that the proposed interval is robust against misspecification of the prior distribution. Our main idea is to use the well-known Tweedie's formula with a robust likelihood based on $\gamma$-divergence. An advantage of the new interval is that its length is always smaller than or equal to that of the parametric empirical Bayes confidence interval, so the new interval is both efficient and robust. We prove asymptotic validity: the coverage probability of the proposed confidence intervals attains the nominal level even when the true underlying distribution of signals is contaminated, and the coverage accuracy is less sensitive to the contamination ratio. The numerical performance of the proposed method is demonstrated through simulation experiments and a real data application.
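For context, the interval the abstract modifies is the standard parametric empirical Bayes interval obtained from Tweedie's formula under a normal prior, in which case the formula reduces to linear shrinkage. The sketch below (Python, with illustrative function names and toy data, not code from the paper) shows that baseline; the paper's proposal replaces the Gaussian marginal fit with a robust fit based on $\gamma$-divergence, whose details follow the paper itself.

```python
import numpy as np
from scipy.stats import norm

def parametric_eb_intervals(x, s2=1.0, alpha=0.05):
    """Standard parametric EB intervals for normal means.

    Model: x_i | theta_i ~ N(theta_i, s2), theta_i ~ N(mu, A).
    With a normal prior, Tweedie's formula
    E[theta | x] = x + s2 * d/dx log m(x) reduces to linear shrinkage.
    """
    mu = x.mean()
    A = max(x.var(ddof=1) - s2, 0.0)        # method-of-moments prior variance
    shrink = A / (A + s2)                   # shrinkage factor A / (A + s2)
    post_mean = mu + shrink * (x - mu)      # posterior mean via Tweedie's formula
    post_sd = np.sqrt(s2 * shrink)          # posterior standard deviation
    z = norm.ppf(1 - alpha / 2)
    return post_mean - z * post_sd, post_mean + z * post_sd

# toy usage: mostly null means contaminated by a few strong signals
rng = np.random.default_rng(0)
theta = np.concatenate([np.zeros(95), np.full(5, 6.0)])
x = theta + rng.standard_normal(theta.size)
lo, hi = parametric_eb_intervals(x)
print(np.mean((lo <= theta) & (theta <= hi)))   # empirical coverage
```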




Read also

Jinghao Sun (2020)
Capture-recapture (CRC) surveys are widely used to estimate the size of a population whose members cannot be enumerated directly. When $k$ capture samples are obtained, counts of unit captures in subsets of samples are represented naturally by a $2^k$ contingency table in which one element -- the number of individuals appearing in none of the samples -- remains unobserved. In the absence of additional assumptions, the population size is not point-identified. Assumptions about independence between samples are often used to achieve point-identification. However, real-world CRC surveys often use convenience samples in which independence cannot be guaranteed, and population size estimates under independence assumptions may lack empirical credibility. In this work, we apply the theory of partial identification to show that weak assumptions or qualitative knowledge about the nature of dependence between samples can be used to characterize a non-trivial set in which the true population size lies with high probability. We construct confidence sets for the population size under bounds on pairwise capture probabilities, and bounds on the highest order interaction term in a log-linear model using two methods: test inversion bootstrap confidence intervals, and profile likelihood confidence intervals. We apply these methods to recent survey data to estimate the number of people who inject drugs in Brussels, Belgium.
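To make the independence caveat concrete, the classical two-sample (Lincoln-Petersen) estimator below assumes the capture samples are independent, which is exactly the assumption the abstract argues may lack empirical credibility; it is a minimal illustrative sketch, not the authors' partial-identification method.

```python
def lincoln_petersen(n1, n2, m):
    """Two-sample capture-recapture estimate of population size.

    n1: units captured in sample 1, n2: units captured in sample 2,
    m: units captured in both samples. Valid only if the two samples
    are independent, the assumption the abstract cautions against.
    """
    if m == 0:
        raise ValueError("no recaptures: population size is not point-identified")
    return n1 * n2 / m

# example: 200 and 150 captures with 30 recaptures -> estimate of 1000
print(lincoln_petersen(200, 150, 30))
```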
Christoph Dalitz (2018)
Introductory texts on statistics typically only cover the classical two-sigma confidence interval for the mean value and do not describe methods to obtain confidence intervals for other estimators. The present technical report fills this gap by first defining different methods for the construction of confidence intervals, and then by their application to a binomial proportion, the mean value, and to arbitrary estimators. Besides the frequentist approach, the likelihood ratio and the highest posterior density approach are explained. Two methods to estimate the variance of general maximum likelihood estimators are described (Hessian, Jackknife), and for arbitrary estimators the bootstrap is suggested. For three examples, the different methods are evaluated by means of Monte Carlo simulations with respect to their coverage probability and interval length. R code is given for all methods, and the practitioner obtains a guideline as to which method should be used in which cases.
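As one instance of the bootstrap the report recommends for arbitrary estimators, here is a minimal percentile-bootstrap sketch in Python (the report itself supplies R code; the function name and toy data are illustrative).

```python
import numpy as np

def bootstrap_percentile_ci(data, estimator, alpha=0.05, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary estimator."""
    rng = np.random.default_rng(seed)
    stats = np.array([
        estimator(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# example: 95% interval for the median of a skewed sample
rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=100)
print(bootstrap_percentile_ci(sample, np.median))
```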
Yunan Wu, Lan Wang (2019)
We propose a new procedure for inference on optimal treatment regimes in the model-free setting, which does not require specifying an outcome regression model. Existing model-free estimators for optimal treatment regimes are usually not suitable for the purpose of inference, because they either have nonstandard asymptotic distributions or do not necessarily guarantee consistent estimation of the parameter indexing the Bayes rule due to the use of surrogate loss. We first study a smoothed robust estimator that directly targets the parameter corresponding to the Bayes decision rule for estimating optimal treatment regimes. This estimator is shown to have an asymptotic normal distribution. Furthermore, we verify that a resampling procedure provides asymptotically accurate inference for both the parameter indexing the optimal treatment regime and the optimal value function. A new algorithm is developed to calculate the proposed estimator with substantially improved speed and stability. Numerical results demonstrate the satisfactory performance of the new methods.
Ryan Chen, Javier Cabrera (2020)
This study evaluates the power of the likelihood ratio test for changepoint detection via bootstrap sampling, and proposes a hypothesis test based on bootstrapped confidence interval lengths. Assuming i.i.d. normally distributed errors, the sampling distribution of the changepoint is estimated using the bootstrap method. Furthermore, this study describes how to estimate, from the data, a no-changepoint data set used to form the null sampling distribution. With the null sampling distribution and the distribution of the estimated changepoint, critical values and power calculations can be made over the lengths of confidence intervals.
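The abstract's specific test based on bootstrapped confidence interval lengths is not reproduced here, but the two ingredients it mentions, a likelihood-ratio-type changepoint statistic for a mean shift and a bootstrapped null distribution built from a no-changepoint data set, can be sketched as follows (illustrative names, assuming a single mean shift with i.i.d. normal errors).

```python
import numpy as np

def lr_changepoint_stat(y, min_seg=5):
    """Likelihood-ratio-type statistic for a single mean shift with
    i.i.d. normal errors: n * log(RSS0 / min_k RSS1(k))."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)
    rss1 = min(
        np.sum((y[:k] - y[:k].mean()) ** 2) + np.sum((y[k:] - y[k:].mean()) ** 2)
        for k in range(min_seg, n - min_seg)
    )
    return n * np.log(rss0 / rss1)

def bootstrap_null_critical_value(y, alpha=0.05, n_boot=500, seed=0):
    """Approximate the null distribution by resampling residuals around the
    overall mean (a no-changepoint data set) and recomputing the statistic."""
    rng = np.random.default_rng(seed)
    resid = y - y.mean()
    stats = np.array([
        lr_changepoint_stat(y.mean() + rng.choice(resid, size=len(y), replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(stats, 1 - alpha)
```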
We consider a linear regression model with regression parameter $\beta=(\beta_1,\ldots,\beta_p)$ and independent and identically $N(0,\sigma^2)$ distributed errors. Suppose that the parameter of interest is $\theta = a^T\beta$, where $a$ is a specified vector. Define the parameter $\tau = c^T\beta - t$, where the vector $c$ and the number $t$ are specified and $a$ and $c$ are linearly independent. Also suppose that we have uncertain prior information that $\tau = 0$. We present a new frequentist $1-\alpha$ confidence interval for $\theta$ that utilizes this prior information. We require this confidence interval to (a) have endpoints that are continuous functions of the data and (b) coincide with the standard $1-\alpha$ confidence interval when the data strongly contradict this prior information. This interval is optimal in the sense that it has minimum weighted average expected length, where the largest weight is given to this expected length when $\tau=0$. This minimization leads to an interval with the following desirable properties: its expected length (a) is relatively small when the prior information about $\tau$ is correct and (b) has a maximum value that is not too large. The following problem illustrates the application of this new confidence interval. Consider a $2 \times 2$ factorial experiment with 20 replicates. Suppose that the parameter of interest $\theta$ is a specified simple effect and that we have uncertain prior information that the two-factor interaction is zero. Our aim is to find a frequentist 0.95 confidence interval for $\theta$ that utilizes this prior information.
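The new interval is constructed to coincide with the standard $1-\alpha$ interval for $\theta = a^T\beta$ when the data strongly contradict $\tau = 0$; the optimal weighted-average-length construction itself follows the paper. A minimal sketch of that standard interval (illustrative names, not the authors' code):

```python
import numpy as np
from scipy.stats import t

def standard_ci_for_linear_combo(X, y, a, alpha=0.05):
    """Classical 1-alpha confidence interval for theta = a^T beta in the
    normal linear model y = X beta + eps, eps ~ N(0, sigma^2 I)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - p)              # unbiased error variance
    theta_hat = a @ beta_hat
    se = np.sqrt(sigma2_hat * (a @ XtX_inv @ a))      # standard error of a^T beta_hat
    q = t.ppf(1 - alpha / 2, df=n - p)
    return theta_hat - q * se, theta_hat + q * se
```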