
Dimension-free Bounds for Sums of Independent Matrices and Simple Tensors via the Variational Principle

Added by Nikita Zhivotovskiy
Publication date: 2021
Language: English





We consider deviation inequalities for sums of independent $d$ by $d$ random matrices, as well as rank-one random tensors. Our focus is on the non-isotropic case and on bounds that do not depend explicitly on the dimension $d$, but rather on the effective rank. In a rather elementary and unified way, we show the following results: 1) A deviation bound for sums of independent positive semi-definite matrices of any rank. This result generalizes the dimension-free bound of Koltchinskii and Lounici [Bernoulli, 23(1): 110-133, 2017] on the sample covariance matrix in the sub-Gaussian case. 2) Dimension-free bounds for the operator norm of sums of rank-one random tensors formed either by sub-Gaussian or log-concave random vectors. This extends the result of Guédon and Rudelson [Adv. in Math., 208: 798-823, 2007]. 3) A non-isotropic version of the result of Alesker [Geom. Asp. of Funct. Anal., 77: 1-4, 1995] on the concentration of the norm of sub-exponential random vectors. 4) A dimension-free lower tail bound for sums of positive semi-definite matrices with heavy-tailed entries, sharpening the bound of Oliveira [Prob. Th. and Rel. Fields, 166: 1175-1194, 2016]. Our approach is based on the duality formula between entropy and moment generating functions. In contrast to the known proofs of dimension-free bounds, we avoid Talagrand's majorizing measure theorem, as well as generic chaining bounds for empirical processes. Some of our tools were pioneered by O. Catoni and co-authors in the context of robust statistical estimation.
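The variational principle in the title refers to the classical duality between relative entropy and the moment generating function. A standard statement of this identity (the Donsker--Varadhan formula), given here as background rather than as the paper's exact argument, reads:

```latex
% Duality between entropy and moment generating functions:
% for a probability measure P and any measurable f with \mathbb{E}_P e^{f} < \infty,
\log \mathbb{E}_{P}\, e^{f}
  \;=\; \sup_{Q \ll P} \Big( \mathbb{E}_{Q}[f] \;-\; \mathrm{KL}(Q \,\|\, P) \Big)
```

Taking $f$ to be a scaled linear functional of the random matrix and optimizing over the scale parameter is the standard route from this identity to deviation bounds.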



Related research


In this paper we obtain the limit distribution for partial sums with a random number of terms following a class of mixed Poisson distributions. The resulting weak limit is a mixing between a normal distribution and an exponential family, which we call normal exponential family (NEF) laws. A new stability concept is introduced and a relationship between $\alpha$-stable distributions and NEF laws is established. We propose estimation of the parameters of the NEF models by the method of moments and by maximum likelihood, the latter performed via an Expectation-Maximization algorithm. Monte Carlo simulation studies are conducted to check the performance of the proposed estimators, and an empirical illustration on financial market data is presented.
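As a rough numerical illustration of a random-index partial sum (with an exponential mixing law chosen purely for illustration, not necessarily the class studied in the paper), a few lines of simulation show how mixed Poisson indexing produces a heavier-tailed, non-Gaussian limit:

```python
import numpy as np

rng = np.random.default_rng(42)

def mixed_poisson_partial_sum(n, n_rep=20000):
    """Monte Carlo sketch: partial sums S_N = X_1 + ... + X_N where the
    random index N is mixed Poisson, N | L ~ Poisson(n * L) with L ~ Exp(1).
    (The Exp(1) mixing law is an illustrative assumption.)"""
    L = rng.exponential(size=n_rep)   # random Poisson rate
    N = rng.poisson(n * L)            # mixed Poisson number of terms
    # With standard normal summands, S_N | N ~ Normal(0, N), so S_N can be
    # sampled directly without an inner loop over terms.
    S = rng.standard_normal(n_rep) * np.sqrt(N)
    return S / np.sqrt(n)             # CLT-style normalization

S = mixed_poisson_partial_sum(n=1000)
# Since N/n concentrates near L, the limit is a normal variance mixture
# with exponential mixing variance (a Laplace law), heavier-tailed than N(0,1).
print(np.mean(np.abs(S) > 3))
```

The mixing variable survives in the limit as a random variance, which is exactly the "mixing between a normal distribution and an exponential family" described above.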
Christian Houdré, Hua Xu (2007)
We derive concentration inequalities for functions of the empirical measure of large random matrices with infinitely divisible entries and, in particular, stable ones. We also give concentration results for some other functionals of these random matrices, such as the largest eigenvalue or the largest singular value.
Debraj Das (2020)
In this article, we are interested in the normal approximation of the self-normalized random vector $\Big(\frac{\sum_{i=1}^{n}X_{i1}}{\sqrt{\sum_{i=1}^{n}X_{i1}^2}},\dots,\frac{\sum_{i=1}^{n}X_{ip}}{\sqrt{\sum_{i=1}^{n}X_{ip}^2}}\Big)$ in $\mathcal{R}^p$ uniformly over the class of hyper-rectangles $\mathcal{A}^{re}=\{\prod_{j=1}^{p}[a_j,b_j]\cap\mathcal{R}:-\infty\leq a_j\leq b_j\leq \infty,\ j=1,\ldots,p\}$, where $X_1,\dots,X_n$ are non-degenerate independent $p$-dimensional random vectors, each with independent and identically distributed (iid) components. We investigate the optimal cut-off rate of $\log p$ in the uniform central limit theorem (UCLT) under a variety of moment conditions. When the $X_{ij}$ have a $(2+\delta)$th absolute moment for some $0<\delta\leq 1$, the optimal rate of $\log p$ is $o\big(n^{\delta/(2+\delta)}\big)$. When the $X_{ij}$ are iid across $(i,j)$, even a $(2+\delta)$th absolute moment of $X_{11}$ is not needed: under the sole condition that $X_{11}$ is in the domain of attraction of the normal distribution, the growth rate of $\log p$ can be made $o(\eta_n)$ for some $\eta_n\rightarrow 0$ as $n\rightarrow\infty$. We also establish that the rate of $\log p$ can be pushed to $\log p = o(n^{1/2})$ if we assume the existence of the fourth moment of the $X_{ij}$. By an example, it is shown, however, that the rate of growth of $\log p$ cannot be improved beyond $n^{1/2}$ as a power of $n$. As an application, we found respecti…
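A minimal numerical sketch of the object being approximated (the Gaussian data and the function name are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def self_normalized_vector(X):
    """Coordinate-wise self-normalized sums of an n x p data matrix X:
    T_j = (sum_i X_ij) / sqrt(sum_i X_ij^2)."""
    return X.sum(axis=0) / np.sqrt((X ** 2).sum(axis=0))

rng = np.random.default_rng(0)
n, p = 1000, 5
X = rng.standard_normal((n, p))   # independent rows, iid coordinates
T = self_normalized_vector(X)
# For large n each T_j is approximately N(0, 1); note |T_j| <= sqrt(n)
# always holds by the Cauchy-Schwarz inequality, with no moment assumptions.
```

The abstract's question is how fast the dimension $p$ may grow with $n$ (measured through $\log p$) while this vector remains uniformly close to a standard Gaussian over hyper-rectangles.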
In this paper, we present a new framework for obtaining tail inequalities for sums of random matrices. Compared with existing works, our tail inequalities have the following characteristics: 1) high feasibility: they can be used to study the tail behavior of various matrix functions, e.g., arbitrary matrix norms, or the absolute value of the sum of the $j$ largest singular values (resp. eigenvalues) of complex matrices (resp. Hermitian matrices); and 2) independence of the matrix dimension: they do not contain the matrix dimension as a product factor, and thus are suitable for high-dimensional or infinite-dimensional random matrices. The price we pay for these advantages is that the convergence rate of the resulting inequalities becomes slow when the number of summand random matrices is large. We also develop tail inequalities for matrix random series and matrix martingale difference sequences, and demonstrate the usefulness of our tail bounds in several fields. In compressed sensing, we employ the resulting tail inequalities to prove the restricted isometry property when the measurement matrix is a sum of random matrices, without any assumption on the distributions of the matrix entries. In probability theory, we derive a new upper bound on the supremum of stochastic processes. In machine learning, we prove new expectation bounds for sums of random matrices and obtain matrix approximation schemes via random sampling. In quantum information, we give a new analysis of the fractional cover number of quantum hypergraphs. In theoretical computer science, we obtain randomness-efficient samplers using matrix expander graphs that can be implemented efficiently in time without dependence on the matrix dimensions.
We establish a quantitative version of the Tracy--Widom law for the largest eigenvalue of high dimensional sample covariance matrices. To be precise, we show that the fluctuations of the largest eigenvalue of a sample covariance matrix $X^*X$ converge to its Tracy--Widom limit at a rate nearly $N^{-1/3}$, where $X$ is an $M \times N$ random matrix whose entries are independent real or complex random variables, assuming that both $M$ and $N$ tend to infinity at a constant rate. This result improves the previous estimate $N^{-2/9}$ obtained by Wang [73]. Our proof relies on a Green function comparison method [27] using iterative cumulant expansions, the local laws for the Green function, and asymptotic properties of the correlation kernel of the white Wishart ensemble.
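A quick Monte Carlo sketch of the edge behavior described here (real Gaussian entries and a square aspect ratio are illustrative assumptions): the largest eigenvalue of $X^*X$ sits near the Marchenko--Pastur soft edge, with fluctuations on the Tracy--Widom scale $N^{-2/3}$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 400, 400                                # both large, fixed ratio M/N = 1
X = rng.standard_normal((M, N)) / np.sqrt(N)   # normalized iid entries
lam_max = np.linalg.eigvalsh(X.T @ X).max()    # largest eigenvalue of X^T X

edge = (1 + np.sqrt(M / N)) ** 2               # Marchenko-Pastur soft edge (= 4 here)
# Tracy-Widom law: lam_max = edge + O(N^{-2/3}) in distribution, so for
# N = 400 the deviation from the edge is on the order of a few percent.
print(lam_max, edge)
```

The quantitative question in the abstract is how fast the law of the rescaled quantity $(\mathrm{lam\_max} - \mathrm{edge})$ approaches its Tracy--Widom limit as $N$ grows.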