
Concentration and convergence rates for spectral measures of random matrices

Posted by: Elizabeth Meckes
Publication date: 2011
Research field: Physics
Paper language: English





The topic of this paper is the typical behavior of the spectral measures of large random matrices drawn from several ensembles of interest, including in particular matrices drawn from Haar measure on the classical Lie groups, random compressions of random Hermitian matrices, and the so-called random sum of two independent random matrices. In each case, we estimate the expected Wasserstein distance from the empirical spectral measure to a deterministic reference measure, and prove a concentration result for that distance. As a consequence we obtain almost sure convergence of the empirical spectral measures in all cases.
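For the Haar-unitary case, here is a minimal numerical sketch (not from the paper) of the quantity being controlled: the Wasserstein distance from the empirical spectral measure to the uniform measure on the unit circle, approximated by coupling the sorted eigenvalue angles with an equispaced grid.

```python
import numpy as np

def haar_unitary(n, rng):
    # Sample a Haar-distributed unitary via QR of a complex Ginibre matrix.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix the column phases so the law is exactly Haar

def w1_to_uniform(u):
    # Couple the sorted eigenvalue angles with an equispaced grid on [0, 2*pi):
    # the mean absolute difference is the L1-Wasserstein distance to a
    # discretized uniform measure, a simple proxy for the distance in the paper.
    n = u.shape[0]
    angles = np.sort(np.angle(np.linalg.eigvals(u)) % (2 * np.pi))
    grid = 2 * np.pi * (np.arange(n) + 0.5) / n
    return np.mean(np.abs(angles - grid))

rng = np.random.default_rng(0)
for n in (100, 300, 1000):
    dists = [w1_to_uniform(haar_unitary(n, rng)) for _ in range(3)]
    print(n, np.mean(dists))  # the distance should shrink as n grows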




Read also

This paper considers the empirical spectral measure of a power of a random matrix drawn uniformly from one of the compact classical matrix groups. We give sharp bounds on the $L_p$-Wasserstein distances between this empirical measure and the uniform measure on the circle, which show a smooth transition in behavior when the power increases and yield rates of almost sure convergence as the dimension grows. Along the way, we prove the sharp logarithmic Sobolev inequality on the unitary group.
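A short sketch of the same distance for powers of a Haar unitary, using scipy's unitary_group sampler; the power m, the dimension, and the grid coupling are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import unitary_group

def w1_power_to_uniform(u, m):
    # Eigenvalue angles of U^m are m*theta mod 2*pi; compare the sorted angles
    # with an equispaced grid as a proxy for the W1 distance to uniform.
    n = u.shape[0]
    angles = np.sort((m * np.angle(np.linalg.eigvals(u))) % (2 * np.pi))
    grid = 2 * np.pi * (np.arange(n) + 0.5) / n
    return np.mean(np.abs(angles - grid))

n = 500
u = unitary_group.rvs(n, random_state=1)
for m in (1, 5, 50, n):
    # small powers keep the strong eigenvalue repulsion of U itself;
    # powers of order n behave more like i.i.d. uniform angles
    print(m, w1_power_to_uniform(u, m))
```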
Let $A$ and $B$ be two $N$ by $N$ deterministic Hermitian matrices and let $U$ be an $N$ by $N$ Haar distributed unitary matrix. It is well known that the spectral distribution of the sum $H=A+UBU^*$ converges weakly to the free additive convolution of the spectral distributions of $A$ and $B$, as $N$ tends to infinity. We establish the optimal convergence rate $\frac{1}{N}$ in the bulk of the spectrum.
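As a hedged illustration of the weak convergence (the concrete matrices below are an assumed example, not the paper's setting): for $A = B = \mathrm{diag}(\pm 1)$ the free additive convolution of the two spectral laws is the arcsine law on $[-2,2]$, so the empirical CDF of $H = A + UBU^*$ should approach the arcsine CDF.

```python
import numpy as np
from scipy.stats import unitary_group

n = 1000
a = np.diag(np.repeat([1.0, -1.0], n // 2))  # spectral law: (delta_{-1} + delta_{+1}) / 2
b = a.copy()
u = unitary_group.rvs(n, random_state=2)
h = a + u @ b @ u.conj().T
eigs = np.sort(np.linalg.eigvalsh(h))

# Free convolution of two symmetric Bernoulli laws is the arcsine law on [-2, 2],
# with CDF F(x) = 1/2 + arcsin(x/2) / pi.
grid = np.linspace(-1.99, 1.99, 200)
emp_cdf = np.searchsorted(eigs, grid) / n
arcsine_cdf = 0.5 + np.arcsin(grid / 2) / np.pi
print("max CDF deviation:", np.max(np.abs(emp_cdf - arcsine_cdf)))
```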
Boris Landa, 2020
It is well known that any positive matrix can be scaled to have prescribed row and column sums by multiplying its rows and columns by certain positive scaling factors (which are unique up to a positive scalar). This procedure is known as matrix scaling, and has found numerous applications in operations research, economics, image processing, and machine learning. In this work, we investigate the behavior of the scaling factors and the resulting scaled matrix when the matrix to be scaled is random. Specifically, letting $\widetilde{A}\in\mathbb{R}^{M\times N}$ be a positive and bounded random matrix whose entries assume a certain type of independence, we provide a concentration inequality for the scaling factors of $\widetilde{A}$ around those of $A = \mathbb{E}[\widetilde{A}]$. This result is employed to bound the convergence rate of the scaling factors of $\widetilde{A}$ to those of $A$, as well as the concentration of the scaled version of $\widetilde{A}$ around the scaled version of $A$ in operator norm, as $M,N\rightarrow\infty$. When the entries of $\widetilde{A}$ are independent, $M=N$, and all prescribed row and column sums are $1$ (i.e., doubly-stochastic matrix scaling), both of the previously-mentioned bounds are $\mathcal{O}(\sqrt{\log N / N})$ with high probability. We demonstrate our results in several simulations.
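The scaling procedure here is the classical Sinkhorn iteration. The sketch below (with an assumed i.i.d. uniform-$[1,2]$ entry model chosen purely for illustration) scales a random matrix to doubly stochastic form and compares the operator-norm deviation from the scaled expectation with the $\sqrt{\log N / N}$ rate quoted above.

```python
import numpy as np

def sinkhorn(a, n_iter=200):
    # Alternate row/column normalization; returns vectors (r, c) such that
    # diag(r) @ a @ diag(c) is approximately doubly stochastic.
    r = np.ones(a.shape[0])
    c = np.ones(a.shape[1])
    for _ in range(n_iter):
        r = 1.0 / (a @ c)
        c = 1.0 / (a.T @ r)
    return r, c

rng = np.random.default_rng(3)
for n in (100, 400, 1600):
    a_tilde = rng.uniform(1.0, 2.0, size=(n, n))  # assumed entry model
    r, c = sinkhorn(a_tilde)
    scaled = r[:, None] * a_tilde * c[None, :]
    # E[a_tilde] is the constant matrix with entries 1.5, whose doubly
    # stochastic scaling is simply the flat matrix with entries 1/n.
    deviation = np.linalg.norm(scaled - 1.0 / n, ord=2)
    print(n, deviation, np.sqrt(np.log(n) / n))
```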
We consider the sum of two large Hermitian matrices $A$ and $B$ with a Haar unitary conjugation bringing them into a general relative position. We prove that the eigenvalue density on the scale slightly above the local eigenvalue spacing is asymptotically given by the free convolution of the laws of $A$ and $B$ as the dimension of the matrix increases. This implies optimal rigidity of the eigenvalues and optimal rate of convergence in Voiculescu's theorem. Our previous works [3,4] established these results in the bulk spectrum, the current paper completely settles the problem at the spectral edges provided they have the typical square-root behavior. The key element of our proof is to compensate the deterioration of the stability of the subordination equations by sharp error estimates that properly account for the local density near the edge. Our results also hold if the Haar unitary matrix is replaced by the Haar orthogonal matrix.
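A rough numerical check of the edge location for one convenient (assumed) choice: if $A$ and $B$ both have approximately semicircular spectra on $[-2,2]$, their free additive convolution is the semicircle law of radius $2\sqrt{2}$, so the largest eigenvalue of $H$ should settle near that edge.

```python
import numpy as np
from scipy.stats import unitary_group

def semicircle_quantiles(n):
    # Deterministic diagonal whose empirical measure approximates the
    # semicircle law on [-2, 2], obtained by inverting its CDF numerically.
    grid = np.linspace(-2.0, 2.0, 20001)
    cdf = np.cumsum(np.sqrt(4.0 - grid**2))
    cdf /= cdf[-1]
    probs = (np.arange(n) + 0.5) / n
    return np.interp(probs, cdf, grid)

n = 1000
a = np.diag(semicircle_quantiles(n))
u = unitary_group.rvs(n, random_state=4)
h = a + u @ a @ u.conj().T
# Free convolution of two radius-2 semicircles is a semicircle of radius 2*sqrt(2).
print("top eigenvalue:", np.max(np.linalg.eigvalsh(h)), "edge:", 2 * np.sqrt(2))
```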
Zhigang Bao, Yukun He, 2019
Let $F_N$ and $F$ be the empirical and limiting spectral distributions of an $N\times N$ Wigner matrix. The Cramér-von Mises (CvM) statistic is a classical goodness-of-fit statistic that characterizes the distance between $F_N$ and $F$ in $\ell^2$-norm. In this paper, we consider a mesoscopic approximation of the CvM statistic for Wigner matrices, and derive its limiting distribution. In the appendix, we also give the limiting distribution of the CvM statistic (without approximation) for the toy model CUE.
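A sketch of a global (non-mesoscopic) Cramér-von Mises-type statistic for a GUE Wigner matrix against the semicircle law; the grid discretization and normalization below are assumptions made for illustration, not the statistic analyzed in the paper.

```python
import numpy as np

def gue(n, rng):
    # GUE Wigner matrix normalized so that the spectrum fills [-2, 2].
    x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (x + x.conj().T) / (2 * np.sqrt(n))

def cvm_statistic(eigs):
    # integral of (F_N - F)^2 dF, with F the semicircle CDF, approximated on a grid
    grid = np.linspace(-2.0, 2.0, 4001)
    density = np.sqrt(np.maximum(4.0 - grid**2, 0.0)) / (2 * np.pi)
    semicircle_cdf = np.cumsum(density)
    semicircle_cdf /= semicircle_cdf[-1]
    emp_cdf = np.searchsorted(np.sort(eigs), grid) / len(eigs)
    return np.trapz((emp_cdf - semicircle_cdf) ** 2 * density, grid)

rng = np.random.default_rng(5)
for n in (200, 800):
    print(n, cvm_statistic(np.linalg.eigvalsh(gue(n, rng))))
```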