
A generalized Lieb's theorem and its applications to spectrum estimates for a sum of random matrices

Added by: De Huang
Publication date: 2018
Language: English
Authors: De Huang





In this paper we prove the concavity of the $k$-trace functions, $A\mapsto\big(\mathrm{Tr}_k[\exp(H+\ln A)]\big)^{1/k}$, on the convex cone of all positive definite matrices. Here $\mathrm{Tr}_k[A]$ denotes the $k$-th elementary symmetric polynomial of the eigenvalues of $A$. As an application, we use the concavity of these $k$-trace functions to derive tail bounds and expectation estimates on the sum of the $k$ largest (or smallest) eigenvalues of a sum of random matrices.
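As a concrete illustration (our own numerical sketch, not code from the paper; the helper names are hypothetical), the objects in the abstract can be evaluated directly: $\mathrm{Tr}_k[A]$ as the $k$-th elementary symmetric polynomial of the eigenvalues of $A$, and the $k$-trace function by composing with $\exp(H+\ln A)$. A random midpoint test is then consistent with the claimed concavity.

```python
import numpy as np
from scipy.linalg import expm, logm, eigvalsh

def elementary_symmetric(vals, k):
    """k-th elementary symmetric polynomial of the entries of vals."""
    e = np.zeros(k + 1)
    e[0] = 1.0
    for x in vals:
        for j in range(k, 0, -1):   # update highest order first
            e[j] += e[j - 1] * x
    return e[k]

def k_trace(A, k):
    """Tr_k[A]: k-th elementary symmetric polynomial of the eigenvalues of A."""
    return elementary_symmetric(eigvalsh(A), k)

def k_trace_function(A, H, k):
    """The map A -> (Tr_k[exp(H + ln A)])**(1/k) from the abstract."""
    M = expm(H + logm(A))
    M = (M + M.conj().T) / 2        # symmetrize away round-off
    return k_trace(M, k) ** (1.0 / k)

# Midpoint concavity check on random positive definite inputs.
rng = np.random.default_rng(0)
n, k = 5, 3
G = rng.standard_normal((n, n))
H = (G + G.T) / 2                   # a fixed Hermitian H

def rand_pd():
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B = rand_pd(), rand_pd()
mid = k_trace_function((A + B) / 2, H, k)
avg = (k_trace_function(A, H, k) + k_trace_function(B, H, k)) / 2
print(mid >= avg)                   # concavity predicts True
```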




96 - De Huang 2019
We show that Lieb's concavity theorem holds more generally for any unitarily invariant matrix function $\phi:\mathbf{H}_+^n\rightarrow\mathbb{R}_+$ that is concave and satisfies Hölder's inequality. Concretely, we prove the joint concavity of the function $(A,B)\mapsto\phi\big[(B^{\frac{qs}{2}}K^*A^{ps}KB^{\frac{qs}{2}})^{\frac{1}{s}}\big]$ on $\mathbf{H}_+^n\times\mathbf{H}_+^m$, for any $K\in\mathbb{C}^{n\times m}$ and any $s,p,q\in(0,1]$ with $p+q\leq 1$. This result improves a recent work by Huang for a more specific class of $\phi$.
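A quick numerical sanity check of this joint concavity (our own sketch; the choice $\phi=\mathrm{Tr}$, which is unitarily invariant and satisfies Hölder's inequality, and all parameter values are illustrative):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

rng = np.random.default_rng(1)
n, m = 4, 3
s, p, q = 0.5, 0.5, 0.4            # s, p, q in (0, 1], p + q <= 1
K = rng.standard_normal((n, m))    # real K, so K^* = K.T

def rand_pd(d):
    M = rng.standard_normal((d, d))
    return M @ M.T + d * np.eye(d)

def F(A, B):
    """phi = trace instance of the jointly concave function above."""
    inner = mpow(B, q*s/2) @ K.T @ mpow(A, p*s) @ K @ mpow(B, q*s/2)
    return np.trace(mpow(inner, 1.0/s)).real

A1, A2 = rand_pd(n), rand_pd(n)
B1, B2 = rand_pd(m), rand_pd(m)
mid = F((A1 + A2) / 2, (B1 + B2) / 2)
avg = (F(A1, B1) + F(A2, B2)) / 2
print(mid >= avg)                  # joint concavity predicts True
```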
We present some new results on the joint distribution of an arbitrary subset of the ordered eigenvalues of complex Wishart, double Wishart, and Gaussian Hermitian random matrices of finite dimensions, using a tensor pseudo-determinant operator. Specifically, we derive compact expressions for the joint probability distribution function of the eigenvalues and the expectation of functions of the eigenvalues, including joint moments, for the case of both ordered and unordered eigenvalues.
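The paper's expressions are closed form; as a loose empirical counterpart (entirely our own sketch, with illustrative dimensions), one can sample complex Wishart matrices and estimate joint moments of the ordered eigenvalues by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, trials = 8, 4, 10_000   # X is n x p, so W = X^* X is p x p complex Wishart

samples = np.empty((trials, p))
for t in range(trials):
    X = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    samples[t] = np.sort(np.linalg.eigvalsh(X.conj().T @ X))   # ordered eigenvalues

# Empirical joint moment E[l_(1) * l_(2)] of the two largest ordered eigenvalues.
print((samples[:, -1] * samples[:, -2]).mean())
```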
314 - Lihua You, Man Yang, Jinxi Li 2016
In this paper, we determine the spectrum of a matrix by using its quotient matrix, and then apply this result to various matrices associated with a graph or a digraph, including the adjacency matrix, the (signless) Laplacian matrix, the distance matrix, and the distance (signless) Laplacian matrix, to obtain some known and new results. Moreover, we propose some problems for further research.
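A small illustration of the quotient-matrix idea under an equitable partition (the graph and numbers are our own example, not from the paper): for the complete bipartite graph $K_{2,3}$, the two parts form an equitable partition whose quotient matrix has eigenvalues $\pm\sqrt{6}$, and these reappear in the spectrum of the full adjacency matrix.

```python
import numpy as np

# Adjacency matrix of the complete bipartite graph K_{2,3}.
m, n = 2, 3
A = np.zeros((m + n, m + n))
A[:m, m:] = 1
A[m:, :m] = 1

# Quotient matrix of the equitable partition {part of size 2, part of size 3}:
# each vertex in part 1 has n neighbours in part 2, and vice versa.
Q = np.array([[0.0, n],
              [m, 0.0]])

print(np.sort(np.linalg.eigvals(Q).real))   # +/- sqrt(6)
print(np.sort(np.linalg.eigvalsh(A)))       # contains +/- sqrt(6); the rest are 0
```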
We consider the problem of learning a coefficient vector $x_0\in\mathbb{R}^N$ from a noisy linear observation $y=Ax_0+w\in\mathbb{R}^n$. In many contexts (ranging from model selection to image processing) it is desirable to construct a sparse estimator $\hat{x}$. In this case, a popular approach consists in solving an $\ell_1$-penalized least squares problem known as the LASSO or Basis Pursuit DeNoising (BPDN). For sequences of matrices $A$ of increasing dimensions, with independent Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of AMP (approximate message passing), a recently developed efficient algorithm inspired by ideas from graphical models. Simulations on real data matrices suggest that our results can be relevant in a broad array of practical applications.
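A minimal simulation of the quantity the theorem describes (our own sketch; the sizes, noise level, and regularization weight are illustrative, and we use scikit-learn's Lasso rather than any code from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, N, k, sigma, lam = 200, 400, 20, 0.1, 0.1

# Sparse signal x0 and sensing matrix A with i.i.d. Gaussian entries.
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ x0 + sigma * rng.standard_normal(n)

# LASSO/BPDN: min_x ||y - Ax||^2 / (2n) + lam * ||x||_1 (sklearn's parametrization).
xhat = Lasso(alpha=lam, fit_intercept=False).fit(A, y).coef_

print(np.sum((xhat - x0) ** 2) / N)   # normalized risk ||xhat - x0||^2 / N
```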
275 - Piero Barone 2010
Pencils of Hankel matrices whose elements have a joint Gaussian distribution with nonzero mean and non-identical covariance are considered. An approximation to the distribution of the squared modulus of their determinant is computed, which allows one to obtain a closed-form approximation of the condensed density of the generalized eigenvalues of the pencils. Implications of this result for solving several moment problems are discussed, and some numerical examples are provided.
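For concreteness, a sketch of the objects involved (our own construction; `random_hankel` and all parameters are hypothetical): a pencil $H_0 - zH_1$ of Gaussian Hankel matrices with nonzero mean, whose generalized eigenvalues are the quantities whose condensed density the paper approximates.

```python
import numpy as np
from scipy.linalg import hankel, eig

rng = np.random.default_rng(4)
n = 5

def random_hankel(mean=1.0, scale=0.5):
    """n x n Hankel matrix built from 2n-1 independent Gaussian entries."""
    c = mean + scale * rng.standard_normal(2 * n - 1)
    return hankel(c[:n], c[n - 1:])

H0, H1 = random_hankel(), random_hankel()
# Generalized eigenvalues z of the pencil H0 - z * H1.
print(eig(H0, H1, right=False))
```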
