
Some compact notations for concentration inequalities and user-friendly results

Added by Kaizheng Wang
Publication date: 2019
Language: English
Authors: Kaizheng Wang





This paper presents compact notations for concentration inequalities and convenient results that streamline probabilistic analysis. The new expressions describe the typical sizes and tails of random variables, allowing simple operations without heavy use of inessential constants. They bridge classical asymptotic notations and modern non-asymptotic tail bounds. Examples of several kinds demonstrate their efficacy.
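As a generic illustration of the two regimes the paper connects (the paper's own compact notation is defined therein; the formulas below are standard textbook statements, not the paper's notation), a classical asymptotic statement and a modern non-asymptotic tail bound might read:

```latex
% Classical asymptotic notation: boundedness in probability
X_n = O_{\mathbb{P}}(a_n)
  \iff \forall \varepsilon > 0,\ \exists M > 0:\
      \sup_n \mathbb{P}\bigl(|X_n| > M a_n\bigr) < \varepsilon .

% Modern non-asymptotic tail bound: a sub-Gaussian variable
% with variance proxy \sigma^2
\mathbb{P}\bigl(|X| > t\bigr) \le 2 \exp\!\left(-\frac{t^2}{2\sigma^2}\right)
  \quad \text{for all } t > 0 .
```

The first statement hides all constants and holds only in the limit; the second is explicit at every sample size but accumulates constants under repeated operations, which is the bookkeeping burden the paper's notation aims to remove.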



Related research

This paper gives a review of concentration inequalities, which are widely employed in non-asymptotic analyses in mathematical statistics across a wide range of settings: from distribution-free to distribution-dependent, from sub-Gaussian to sub-exponential, sub-gamma, and sub-Weibull random variables, and from concentration of the mean to concentration of the maximum. The review provides results in these settings along with some new ones. Given the increasing popularity of high-dimensional data and inference, results for high-dimensional linear and Poisson regression are also provided. We aim to state the concentration inequalities with explicit constants and to improve existing bounds with sharper constants.
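The simplest distribution-free bound of the kind surveyed above is Hoeffding's inequality: for i.i.d. variables in [0, 1], the sample mean satisfies P(|X̄ − p| ≥ t) ≤ 2 exp(−2nt²). A minimal sketch (standard library only; the function names are illustrative, not from the paper) compares the bound to a Monte Carlo estimate of the tail probability:

```python
import math
import random

def hoeffding_bound(n, t):
    # Hoeffding: P(|mean - p| >= t) <= 2 exp(-2 n t^2) for iid variables in [0, 1]
    return 2.0 * math.exp(-2.0 * n * t * t)

def empirical_tail(n, t, p=0.5, trials=2000, seed=0):
    # Monte Carlo estimate of P(|mean - p| >= t) for Bernoulli(p) samples
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if abs(mean - p) >= t:
            hits += 1
    return hits / trials

n, t = 200, 0.1
print(empirical_tail(n, t), "<=", hoeffding_bound(n, t))
```

Since Hoeffding's inequality is distribution-free, the bound (about 0.037 here) is necessarily looser than the true Bernoulli tail probability, which the simulation makes visible.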
Xinjia Chen, 2014
We explore applications of our previously established likelihood-ratio method for deriving concentration inequalities for a wide variety of univariate and multivariate distributions. New concentration inequalities for various distributions are developed without minimizing moment generating functions.
Tomohiro Nishiyama, 2019
Log-concave distributions include important distributions such as the normal and exponential distributions. In this note, we show inequalities between two Lp-norms for log-concave distributions on Euclidean space. These inequalities generalize the upper and lower bounds on the differential entropy and can also be interpreted as an extension of the inequality between two Lp-norms on a measurable set of finite measure.
In many applications it is useful to replace the Moore-Penrose pseudoinverse (MPP) by a different generalized inverse with more favorable properties. We may want, for example, to have many zero entries, but without giving up too much of the stability of the MPP. One way to quantify stability is by how much the Frobenius norm of a generalized inverse exceeds that of the MPP. In this paper we derive finite-size concentration bounds for the Frobenius norm of $\ell^p$-minimal generalized inverses of iid Gaussian matrices, with $1 \leq p \leq 2$. For $p = 1$ we prove exponential concentration of the Frobenius norm of the sparse pseudoinverse; for $p = 2$, we get a similar concentration bound for the MPP. Our proof is based on the convex Gaussian min-max theorem, but unlike previous applications which give asymptotic results, we derive finite-size bounds.
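The $p = 2$ case above can be observed numerically: the Frobenius norm of the MPP of an iid Gaussian matrix fluctuates little around its mean. A minimal sketch (assuming NumPy; this is a simulation, not the paper's proof technique, and the function name is illustrative):

```python
import numpy as np

def mpp_frobenius(m, n, trials=200, seed=0):
    # Frobenius norms of the Moore-Penrose pseudoinverse of iid N(0,1) matrices
    rng = np.random.default_rng(seed)
    norms = []
    for _ in range(trials):
        a = rng.standard_normal((m, n))
        norms.append(np.linalg.norm(np.linalg.pinv(a), "fro"))
    return np.array(norms)

norms = mpp_frobenius(40, 80)
print(norms.mean(), norms.std())  # the norms cluster tightly around their mean
```

For a wide 40 x 80 matrix the relative spread of the norms is small, consistent with the concentration phenomenon the paper quantifies with finite-size bounds.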
We present some new results on the joint distribution of an arbitrary subset of the ordered eigenvalues of complex Wishart, double Wishart, and Gaussian Hermitian random matrices of finite dimensions, using a tensor pseudo-determinant operator. Specifically, we derive compact expressions for the joint probability distribution function of the eigenvalues and the expectation of functions of the eigenvalues, including joint moments, for the case of both ordered and unordered eigenvalues.
