
Sharpening Jensen's Inequality

Posted by Arthur Berg
Published 2017
Research field: Mathematical Statistics
Paper language: English





This paper proposes a new sharpened version of Jensen's inequality. The proposed bound is simple and insightful, is broadly applicable under minimal assumptions, and gives fairly accurate results in spite of its simple form. Applications to the moment generating function, power mean inequalities, and Rao-Blackwell estimation are presented. This presentation can be incorporated into any calculus-based statistics course.
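For orientation, the classical inequality being sharpened is Jensen's inequality: for a convex function $\varphi$ and an integrable random variable $X$,

$\varphi(\mathbb{E}[X]) \le \mathbb{E}[\varphi(X)].$

Applied to $\varphi(x) = e^{tx}$, this yields the moment-generating-function bound $\mathbb{E}[e^{tX}] \ge e^{t\,\mathbb{E}[X]}$, the starting point for the application mentioned in the abstract; the sharpened bound itself is developed in the paper and is not reproduced here.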




Read also

We establish Bernstein inequalities for functions of general (general-state-space, not necessarily reversible) Markov chains. These inequalities achieve sharp variance proxies and recover the classical Bernstein inequality under independence. The key analysis lies in upper bounding the operator norm of a perturbed Markov transition kernel by the limiting operator norm of a sequence of finite-rank, perturbed Markov transition kernels. For each finite-rank, perturbed Markov kernel, we bound its norm by the sum of two convex functions. One coincides with what delivers the classical Bernstein inequality, and the other reflects the influence of the Markov dependence. A convex analysis of the conjugates of these two functions then yields our Bernstein inequalities.
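For comparison, the classical Bernstein inequality recovered in the independent case can be stated as follows: if $X_1, \ldots, X_n$ are independent, mean-zero, and bounded by $|X_i| \le M$, then for all $t \ge 0$,

$\mathbb{P}\!\left(\sum_{i=1}^{n} X_i \ge t\right) \le \exp\!\left(-\frac{t^2/2}{\sum_{i=1}^{n} \mathbb{E}[X_i^2] + Mt/3}\right).$

Per the abstract, the Markov-chain versions replace the independent variance proxy $\sum_{i} \mathbb{E}[X_i^2]$ with sharp variance proxies that reflect the dependence induced by the transition kernel.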
Guangyan Jia, Shige Peng (2008)
A real-valued function defined on $\mathbb{R}$ is called $g$-convex if it satisfies the following "generalized Jensen's inequality" under a given $g$-expectation, i.e., $h(\mathbb{E}^{g}[X]) \leq \mathbb{E}^{g}[h(X)]$ for all random variables $X$ such that both sides of the inequality are meaningful. In this paper we give necessary and sufficient conditions for a $C^{2}$-function to be $g$-convex. We also study some more general situations, as well as $g$-concave and $g$-affine functions.
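For context (standard background, not part of the abstract): Peng's $g$-expectation is defined through a backward stochastic differential equation. Given a Brownian motion $W$ and a suitable driver $g(t, y, z)$, one solves

$Y_t = X + \int_t^T g(s, Y_s, Z_s)\, ds - \int_t^T Z_s\, dW_s, \qquad 0 \le t \le T,$

and sets $\mathbb{E}^{g}[X] := Y_0$; the ordinary expectation is recovered when $g \equiv 0$.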
We extend Fano's inequality, which controls the average probability of events in terms of the average of some $f$-divergences, to work with arbitrary events (not necessarily forming a partition) and even with arbitrary $[0,1]$-valued random variables, possibly uncountably many. We provide two applications of these extensions, in which the consideration of random variables is particularly handy: we offer new and elegant proofs of existing lower bounds on Bayesian posterior concentration rates (minimax or distribution-dependent) and on the regret in non-stochastic sequential learning.
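The classical form being extended can be stated, in its information-theoretic version, as follows: if $X$ is uniform on a finite set of size $M \ge 2$ and $\hat{X}$ is any estimator of $X$ based on an observation $Y$, then

$\mathbb{P}(\hat{X} \neq X) \ge 1 - \frac{I(X; Y) + \log 2}{\log M}.$

The extensions described in the abstract replace the underlying partition structure with arbitrary events or $[0,1]$-valued random variables.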
David M. Kaplan (2016)
Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (e.g., Gaussian). For the corresponding limit experiment, we characterize the frequentist size of a certain Bayesian hypothesis test of (possibly nonlinear) inequalities. If the null hypothesis is that the (possibly infinite-dimensional) parameter lies in a certain half-space, then the Bayesian test's size is $\alpha$; if the null hypothesis is a subset of a half-space, then size is at least $\alpha$ (sometimes strictly greater); and in other cases, size may be above, below, or equal to $\alpha$. Two examples illustrate our results: testing stochastic dominance and testing curvature of a translog cost function.
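For concreteness, a half-space null for a (possibly infinite-dimensional) parameter $\theta$ can be written with a continuous linear functional $\lambda$ and threshold $c$ (illustrative notation, not the paper's):

$H_0 : \langle \lambda, \theta \rangle \le c \qquad \text{versus} \qquad H_1 : \langle \lambda, \theta \rangle > c.$

The abstract's first result concerns nulls of exactly this form, for which the Bayesian test's frequentist size is $\alpha$.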
Olkin [3] obtained a neat upper bound for the determinant of a correlation matrix. In this note, we present an extension and improvement of his result.
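As a baseline (Olkin's bound itself is stated in the note and is nontrivial, unlike the following): since a correlation matrix $R \in \mathbb{R}^{n \times n}$ is positive semidefinite with unit diagonal, Hadamard's inequality already gives

$\det R \le \prod_{i=1}^{n} R_{ii} = 1,$

with equality if and only if $R$ is the identity.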