
Sharpening Jensen's Inequality

Added by Arthur Berg
Publication date: 2017
Language: English





This paper proposes a new sharpened version of Jensen's inequality. The proposed bound is simple and insightful, is broadly applicable under minimal assumptions, and is fairly accurate despite its simple form. Applications to the moment generating function, power mean inequalities, and Rao-Blackwell estimation are presented. This presentation can be incorporated into any calculus-based statistics course.
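
As a quick illustration of the moment generating function application mentioned above: since $e^{tx}$ is convex in $x$, Jensen's inequality gives $E[e^{tX}] \geq e^{tE[X]}$. The Python sketch below checks this by Monte Carlo; the Gamma distribution and the value of $t$ are illustrative choices of ours, and the sketch demonstrates only the classical inequality, not the paper's sharpened bound.

import numpy as np

# Jensen's inequality for the moment generating function:
# exp is convex, so E[exp(t*X)] >= exp(t*E[X]).
rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=100_000)  # illustrative distribution
t = 0.3                                            # any t with a finite MGF

lhs = np.exp(t * x).mean()   # Monte Carlo estimate of E[exp(tX)]
rhs = np.exp(t * x.mean())   # exp(t * E[X])
print(f"E[exp(tX)] ~= {lhs:.4f} >= exp(t E[X]) ~= {rhs:.4f}")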



Related research

We establish Bernstein inequalities for functions of general (general-state-space, not necessarily reversible) Markov chains. These inequalities achieve sharp variance proxies and recover the classical Bernstein inequality under independence. The key step of the analysis is upper bounding the operator norm of a perturbed Markov transition kernel by the limiting operator norm of a sequence of finite-rank, perturbed Markov transition kernels. For each finite-rank, perturbed Markov kernel, we bound its norm by the sum of two convex functions: one coincides with the function that delivers the classical Bernstein inequality, and the other reflects the influence of the Markov dependence. A convex analysis of the conjugates of these two functions then yields our Bernstein inequalities.
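
For orientation, the classical i.i.d. Bernstein inequality recovered under independence states: for i.i.d. $X_i$ with mean $\mu$, variance $\sigma^2$, and $|X_i-\mu| \leq b$, $P(|\bar{X}_n-\mu| \geq t) \leq 2\exp(-nt^2/(2(\sigma^2+bt/3)))$. The Python sketch below checks this bound by Monte Carlo in a Bernoulli setup of our choosing; the Markov-chain version itself is not reproduced here.

import numpy as np

# Classical Bernstein inequality for i.i.d. bounded variables:
# P(|mean - mu| >= t) <= 2 exp(-n t^2 / (2 (sigma^2 + b t / 3))).
rng = np.random.default_rng(1)
n, reps, t = 200, 50_000, 0.1
p = 0.3                                  # Bernoulli(p) samples
mu, sigma2, b = p, p * (1 - p), max(p, 1 - p)

means = rng.binomial(n, p, size=reps) / n          # sample means
empirical = np.mean(np.abs(means - mu) >= t)       # empirical tail probability
bound = 2 * np.exp(-n * t**2 / (2 * (sigma2 + b * t / 3)))
print(f"empirical tail {empirical:.5f} <= Bernstein bound {bound:.5f}")
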
Guangyan Jia, Shige Peng (2008)
A real-valued function $h$ defined on $\mathbb{R}$ is called $g$-convex if it satisfies the generalized Jensen's inequality under a given $g$-expectation, i.e., $h(\mathbb{E}^{g}[X]) \leq \mathbb{E}^{g}[h(X)]$, for all random variables $X$ such that both sides of the inequality are meaningful. In this paper we give necessary and sufficient conditions for a $C^{2}$-function to be $g$-convex. We also study some more general situations, as well as $g$-concave and $g$-affine functions.
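
A concrete special case: when $g \equiv 0$, the $g$-expectation reduces to the ordinary expectation, so $g$-convexity reduces to classical convexity and the generalized Jensen inequality becomes $h(\mathbb{E}[X]) \leq \mathbb{E}[h(X)]$. A quick Python check with the $C^{2}$ convex function $h(x)=x^{2}$ (our illustrative choice):

import numpy as np

# For g = 0, the generalized Jensen inequality is the classical one:
# h(E[X]) <= E[h(X)] for convex h; here h(x) = x^2.
rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)

lhs = x.mean() ** 2      # h(E[X])
rhs = (x ** 2).mean()    # E[h(X)] = E[X]^2 + Var(X)
print(f"h(E[X]) ~= {lhs:.3f} <= E[h(X)] ~= {rhs:.3f}")
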
We extend Fano's inequality, which controls the average probability of events in terms of the average of some $f$-divergences, to work with arbitrary events (not necessarily forming a partition) and even with arbitrary $[0,1]$-valued random variables, possibly a continuum of them. We provide two applications of these extensions, in which the consideration of random variables is particularly handy: we offer new and elegant proofs of existing lower bounds on Bayesian posterior concentration rates (minimax or distribution-dependent) and on the regret in non-stochastic sequential learning.
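
For context, a standard form of the Fano inequality being extended: if a parameter is uniform over $M$ alternatives whose observation distributions satisfy $KL(P_i \| P_j) \leq \beta$ for all $i,j$, then every test's average error probability is at least $1-(\beta+\log 2)/\log M$. The Python sketch below evaluates this bound for unit-variance Gaussians, an illustrative choice of ours for which $KL(P_i \| P_j) = (\mu_i-\mu_j)^2/2$; the paper's $f$-divergence generalization is not reproduced.

import numpy as np

# Classical Fano lower bound: average error >= 1 - (beta + log 2) / log M,
# where beta bounds all pairwise KL divergences among the M alternatives.
M = 8
mus = 0.1 * np.arange(M)   # hypothetical means of N(mu_i, 1) alternatives
beta = max((mi - mj) ** 2 / 2 for mi in mus for mj in mus)
bound = 1 - (beta + np.log(2)) / np.log(M)
print(f"Fano lower bound on the average error: {bound:.3f}")
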
David M. Kaplan (2016)
Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (e.g., Gaussian). For the corresponding limit experiment, we characterize the frequentist size of a certain Bayesian hypothesis test of (possibly nonlinear) inequalities. If the null hypothesis is that the (possibly infinite-dimensional) parameter lies in a certain half-space, then the Bayesian test's size is $\alpha$; if the null hypothesis is a subset of a half-space, then the size is above $\alpha$ (sometimes strictly); and in other cases, the size may be above, below, or equal to $\alpha$. Two examples illustrate our results: testing stochastic dominance and testing curvature of a translog cost function.
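
The half-space case admits a short Monte Carlo illustration. In a Gaussian limit experiment, observe $x \sim N(\theta, 1)$ with null $H_0: \theta \leq 0$; under a flat prior the posterior is $N(x, 1)$, and the Bayesian test rejects when the posterior probability of $H_0$ is at most $\alpha$. Its frequentist size, the rejection rate at the boundary $\theta = 0$, should then equal $\alpha$. This toy setup is our own simplification, not the paper's general construction.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
alpha, reps = 0.05, 200_000

x = rng.normal(loc=0.0, scale=1.0, size=reps)  # data at the boundary theta = 0
post_h0 = norm.cdf(0.0, loc=x, scale=1.0)      # posterior P(theta <= 0 | x)
size = np.mean(post_h0 <= alpha)               # frequentist rejection rate
print(f"frequentist size ~= {size:.4f} (nominal alpha = {alpha})")
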
Olkin [3] obtained a neat upper bound for the determinant of a correlation matrix. In this note, we present an extension and improvement of his result.
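
The note above does not restate Olkin's bound, but an elementary baseline is easy to verify numerically: by Hadamard's inequality, any correlation matrix $R$ (positive semidefinite with unit diagonal) satisfies $0 \leq \det(R) \leq 1$. A quick Python check on a random sample correlation matrix:

import numpy as np

# Hadamard's inequality: det(R) <= product of diagonal entries = 1,
# and det(R) >= 0 since R is positive semidefinite.
rng = np.random.default_rng(4)
A = rng.normal(size=(50, 5))        # 50 observations of 5 variables
R = np.corrcoef(A, rowvar=False)    # 5 x 5 sample correlation matrix
d = np.linalg.det(R)
print(f"det(R) = {d:.4f}, within [0, 1]: {0 <= d <= 1}")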