
Sharp bounds on $p$-norms for sums of independent uniform random variables, $0 < p < 1$

Published by: Tomasz Tkocz
Publication date: 2021
Paper language: English





We provide a sharp lower bound on the $p$-norm of a sum of independent uniform random variables in terms of its variance when $0 < p < 1$. We address an analogous question for the $p$-Rényi entropy for $p$ in the same range.
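As a quick numerical illustration (not taken from the paper), the quantity compared here is the $p$-norm $(\mathbb{E}|S|^p)^{1/p}$ of a sum $S$ of $n$ independent Uniform$(-1/2,1/2)$ variables, which has variance $n/12$. The sketch below is a hypothetical Monte Carlo estimator; the function name and parameters are illustrative, not from the source.

```python
import random

def p_norm_of_uniform_sum(n, p, trials=200_000, seed=0):
    """Monte Carlo estimate of (E|S|^p)^(1/p), where S is a sum of n
    independent Uniform(-1/2, 1/2) random variables, so Var(S) = n/12.
    Illustrative sketch only: accuracy is limited by the number of trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.random() - 0.5 for _ in range(n))
        total += abs(s) ** p
    return (total / trials) ** (1.0 / p)
```

For $0 < p < 1$ the estimate falls below the standard deviation $\sqrt{n/12}$, consistent with the power-mean inequality; the paper's contribution is the sharp constant in the lower bound, which this sketch does not reproduce.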


Read also

We consider the problem of bounding large deviations for non-i.i.d. random variables that are allowed to have arbitrary dependencies. Previous works typically assumed a specific dependence structure, namely the existence of independent components. Bounds that depend on the degree of dependence between the observations have only been studied in the theory of mixing processes, where variables are time-ordered. Here, we introduce a new way of measuring dependences within an unordered set of variables. We prove concentration inequalities that apply to any set of random variables but benefit from the presence of weak dependencies. We also discuss applications and extensions of our results to related problems of machine learning and large deviations.
We establish several optimal moment comparison inequalities (Khinchin-type inequalities) for weighted sums of independent identically distributed symmetric discrete random variables which are uniform on sets of consecutive integers. Specifically, we obtain sharp constants for the second moment and any moment of order at least 3 (using convex dominance by Gaussian random variables). In the case of only 3 atoms, we also establish a Schur-convexity result. For moments of order less than 2, we get sharp constants in two cases by exploiting Haagerup's arguments for random signs.
Luc Devroye, Gábor Lugosi (2007)
It is shown that functions defined on $\{0,1,\ldots,r-1\}^n$ satisfying certain conditions of bounded differences that guarantee sub-Gaussian tail behavior also satisfy a much stronger "local" sub-Gaussian property. For self-bounding and configuration functions we derive analogous locally subexponential behavior. The key tool is Talagrand's [Ann. Probab. 22 (1994) 1576--1587] variance inequality for functions defined on the binary hypercube, which we extend to functions of uniformly distributed random variables defined on $\{0,1,\ldots,r-1\}^n$ for $r \ge 2$.
For an $n\times n$ matrix $A_n$, the $r\to p$ operator norm is defined as $$\|A_n\|_{r\to p} := \sup_{\boldsymbol{x}\in\mathbb{R}^n : \|\boldsymbol{x}\|_r \leq 1} \|A_n\boldsymbol{x}\|_p \quad\text{for}\quad r,p \geq 1.$$ For different choices of $r$ and $p$, this norm corresponds to key quantities that arise in diverse applications, including matrix condition number estimation, clustering of data, and finding oblivious routing schemes in transportation networks. This article considers $r\to p$ norms of symmetric random matrices with nonnegative entries, including adjacency matrices of Erdős–Rényi random graphs, matrices with positive sub-Gaussian entries, and certain sparse matrices. For $1 < p \leq r < \infty$, the asymptotic normality, as $n\to\infty$, of the appropriately centered and scaled norm $\|A_n\|_{r\to p}$ is established. When $p \geq 2$, this is shown to imply, as a corollary, asymptotic normality of the solution to the $\ell_p$ quadratic maximization problem, also known as the $\ell_p$ Grothendieck problem. Furthermore, a sharp $\ell_\infty$-approximation bound for the unique maximizing vector in the definition of $\|A_n\|_{r\to p}$ is obtained. This result, which may be of independent interest, is in fact shown to hold for a broad class of deterministic sequences of matrices having certain asymptotic expansion properties. The results obtained can be viewed as a generalization of the seminal results of Füredi and Komlós (1981) on asymptotic normality of the largest singular value of a class of symmetric random matrices, which corresponds to the special case $r=p=2$ considered here. In the general case with $1 < p \leq r < \infty$, spectral methods are no longer applicable, and so a new approach is developed, which involves a refined convergence analysis of a nonlinear power method and a perturbation bound on the maximizing vector.
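The nonlinear power method mentioned in this abstract can be sketched numerically. The following is a minimal, hypothetical implementation of a Boyd-style power iteration for $\|A\|_{r\to p}$ with a nonnegative matrix and $1 < p \leq r$; it illustrates the definition of the norm only, not the paper's refined convergence analysis or perturbation bounds.

```python
def r_to_p_norm(A, r, p, iters=200):
    """Estimate ||A||_{r->p} = sup_{||x||_r <= 1} ||A x||_p for a
    nonnegative square matrix A (list of lists) and 1 < p <= r,
    via a Boyd-style nonlinear power iteration. Illustrative sketch:
    convergence is guaranteed for entrywise-positive matrices."""
    n = len(A)
    x = [1.0 / n ** (1.0 / r)] * n  # start at a vector with ||x||_r = 1
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # y = A x
        z = [abs(v) ** (p - 1) for v in y]          # componentwise gradient of ||y||_p^p / p
        w = [sum(A[i][j] * z[i] for i in range(n)) for j in range(n)]  # w = A^T z
        x = [abs(v) ** (1.0 / (r - 1)) for v in w]  # invert the r-norm gradient map
        norm_r = sum(abs(v) ** r for v in x) ** (1.0 / r)
        x = [v / norm_r for v in x]                 # renormalize to ||x||_r = 1
    y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(abs(v) ** p for v in y) ** (1.0 / p)
```

For $r = p = 2$ this reduces to the classical power method, so on the all-ones $2\times 2$ matrix the estimate recovers the largest singular value, $2$.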
We obtain new estimates on the maximal operator applied to the Weyl sums. We also consider the quadratic case (that is, Gauss sums) in more detail. In wide ranges of parameters our estimates are optimal and match lower bounds. Our approach is based on a combination of ideas of Baker (2021) and Chen and Shparlinski (2020).