
Concentration of the Frobenius norm of generalized matrix inverses

Added by Remi Gribonval
Publication date: 2018
Language: English





In many applications it is useful to replace the Moore-Penrose pseudoinverse (MPP) by a different generalized inverse with more favorable properties. We may want, for example, to have many zero entries, but without giving up too much of the stability of the MPP. One way to quantify stability is by how much the Frobenius norm of a generalized inverse exceeds that of the MPP. In this paper we derive finite-size concentration bounds for the Frobenius norm of $\ell^p$-minimal general inverses of iid Gaussian matrices, with $1 \leq p \leq 2$. For $p = 1$ we prove exponential concentration of the Frobenius norm of the sparse pseudoinverse; for $p = 2$, we get a similar concentration bound for the MPP. Our proof is based on the convex Gaussian min-max theorem, but unlike previous applications which give asymptotic results, we derive finite-size bounds.
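As a quick illustration of the objects compared above, the sketch below (a minimal Python illustration assuming NumPy and SciPy; sparse_pinv is a hypothetical helper, not the authors' code) computes the $\ell^1$-minimal generalized inverse of an iid Gaussian matrix by linear programming and compares its Frobenius norm with that of the MPP.

import numpy as np
from scipy.optimize import linprog

def sparse_pinv(A):
    # l1-minimal generalized inverse: minimize ||X||_1 subject to X @ A = I,
    # solved row by row as a basis pursuit linear program.
    m, n = A.shape                      # tall matrix, m > n
    X = np.zeros((n, m))
    for i in range(n):
        # row i: min ||x||_1 s.t. A.T @ x = e_i; split x = u - v with u, v >= 0
        c = np.ones(2 * m)
        A_eq = np.hstack([A.T, -A.T])
        b_eq = np.zeros(n)
        b_eq[i] = 1.0
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        X[i] = res.x[:m] - res.x[m:]
    return X

rng = np.random.default_rng(0)
m, n = 60, 20
A = rng.standard_normal((m, n))         # iid Gaussian, tall
X1 = sparse_pinv(A)                     # p = 1: sparse pseudoinverse
X2 = np.linalg.pinv(A)                  # p = 2: the MPP itself
print(np.linalg.norm(X1, "fro") / np.linalg.norm(X2, "fro"))

The MPP is the Frobenius-minimal generalized inverse, so the printed ratio is at least 1; it quantifies how much stability is traded for sparsity, and the paper's bounds say it concentrates sharply already at finite sizes.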



Related research

Estimating the rank of a corrupted data matrix is an important task in data science, most notably for choosing the number of components in principal component analysis. Significant progress on this task has been made using random matrix theory by characterizing the spectral properties of large noise matrices. However, utilizing such tools is not straightforward when the data matrix consists of count random variables, such as Poisson or binomial, in which case the noise can be heteroskedastic with an unknown variance in each entry. In this work, focusing on a Poisson random matrix with independent entries, we propose a simple procedure termed \textit{biwhitening} that makes it possible to estimate the rank of the underlying data matrix (i.e., the Poisson parameter matrix) without any prior knowledge of its structure. Our approach is based on the key observation that one can scale the rows and columns of the data matrix simultaneously so that the spectrum of the corresponding noise agrees with the standard Marchenko-Pastur (MP) law, justifying the use of the MP upper edge as a threshold for rank selection. Importantly, the required scaling factors can be estimated directly from the observations by solving a matrix scaling problem via the Sinkhorn-Knopp algorithm. Aside from the Poisson distribution, we extend our biwhitening approach to other discrete distributions, such as the generalized Poisson, binomial, multinomial, and negative binomial. We conduct numerical experiments that corroborate our theoretical findings, and demonstrate our approach on real single-cell RNA sequencing (scRNA-seq) data, where we show that our results agree with a slightly overdispersed generalized Poisson model.
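The following simplified sketch (an illustration only, assuming NumPy; sinkhorn_factors is a hypothetical helper and the sizes are chosen for demonstration, not the authors' reference implementation) applies this recipe to a synthetic Poisson matrix with a rank-3 parameter matrix and counts the singular values above the Marchenko-Pastur edge.

import numpy as np

def sinkhorn_factors(Y, iters=500):
    # Sinkhorn-Knopp iteration: find r, c so that diag(r) @ Y @ diag(c)
    # has row sums n and column sums m.
    m, n = Y.shape
    c = np.ones(n)
    for _ in range(iters):
        r = n / (Y @ c)
        c = m / (Y.T @ r)
    return r, c

rng = np.random.default_rng(1)
m, n, k = 400, 200, 3
L = rng.uniform(1, 5, (m, k)) @ rng.uniform(1, 5, (k, n))  # rank-k mean matrix
Y = rng.poisson(L).astype(float)                           # observed counts

r, c = sinkhorn_factors(Y)
# For Poisson data the entrywise variance equals the mean, so scaling by
# the square roots of the Sinkhorn factors approximately equalizes the
# noise variance across the matrix (average variance ~ 1 per entry).
Yw = np.sqrt(r)[:, None] * Y * np.sqrt(c)[None, :]
sv = np.linalg.svd(Yw / np.sqrt(n), compute_uv=False)
mp_edge = 1 + np.sqrt(m / n)          # Marchenko-Pastur upper edge
print("estimated rank:", int(np.sum(sv > mp_edge)))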
De Huang (2018)
In this paper we prove the concavity of the $k$-trace functions, $A \mapsto (\text{Tr}_k[\exp(H+\ln A)])^{1/k}$, on the convex cone of all positive definite matrices. $\text{Tr}_k[A]$ denotes the $k_{\mathrm{th}}$ elementary symmetric polynomial of the eigenvalues of $A$. As an application, we use the concavity of these $k$-trace functions to derive tail bounds and expectation estimates on the sum of the $k$ largest (or smallest) eigenvalues of a sum of random matrices.
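A small numerical sanity check of this concavity statement is easy to run. The snippet below (an illustration only, assuming NumPy and SciPy; the test matrices and helper names are ours) evaluates the $k$-trace function at the midpoint of a segment between two positive definite matrices and compares it with the chord, which concavity requires it to dominate.

import numpy as np
from itertools import combinations
from scipy.linalg import expm, logm

def tr_k(A, k):
    # k-th elementary symmetric polynomial of the eigenvalues of A
    ev = np.linalg.eigvalsh(A)
    return sum(np.prod(cmb) for cmb in combinations(ev, k))

def f(A, H, k):
    # the k-trace function A -> (Tr_k[exp(H + log A)])^(1/k)
    return tr_k(expm(H + logm(A)), k) ** (1.0 / k)

rng = np.random.default_rng(2)
d, k = 5, 2
H = rng.standard_normal((d, d)); H = (H + H.T) / 2   # symmetric H
def random_pd():
    B = rng.standard_normal((d, d))
    return B @ B.T + d * np.eye(d)                   # positive definite
A0, A1 = random_pd(), random_pd()
t = 0.5
midpoint = f((1 - t) * A0 + t * A1, H, k)
chord = (1 - t) * f(A0, H, k) + t * f(A1, H, k)
print(midpoint >= chord)   # concavity predicts True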
In this paper, we explicitly prove that statistical manifolds related to exponential families and equipped with a flat structure connection have a Frobenius manifold structure. This latter object, which sits at the interplay between topology and quantum field theory, raises natural questions concerning the existence of Gromov--Witten invariants for those statistical manifolds. We prove that an analog of Gromov--Witten invariants for those statistical manifolds (GWS) exists. As with the original invariants, these new invariants have a geometric interpretation in terms of intersection points of para-holomorphic curves. They also play an important role in the learning process, since they determine whether a system has succeeded in learning or failed.
In this paper, we introduce two new generalized inverses of matrices, namely, the $\bra{i}{m}$-core inverse and the $\pare{j}{m}$-core inverse. The $\bra{i}{m}$-core inverse of a complex matrix extends the notions of the core inverse defined by Baksalary and Trenkler \cite{BT} and the core-EP inverse defined by Manjunatha Prasad and Mohana \cite{MM}. The $\pare{j}{m}$-core inverse of a complex matrix extends the notions of the core inverse and the ${\rm DMP}$-inverse defined by Malik and Thome \cite{MT}. Moreover, the formulae and properties of these two new concepts are investigated by using matrix decompositions and matrix powers.
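For context, the classical core inverse that both new notions extend can be computed numerically for an index-1 matrix. The sketch below is an illustration under our own assumptions, not this paper's method: it uses the known representations $A^{\#} = A (A^3)^{\dagger} A$ for the group inverse and $A^{\#} A A^{\dagger}$ for the core inverse, and verifies that the result is a {1,2}-inverse of $A$.

import numpy as np

def core_inverse(A):
    A = np.asarray(A, dtype=float)
    # group inverse of an index-1 matrix via the Moore-Penrose pseudoinverse
    A_group = A @ np.linalg.pinv(A @ A @ A) @ A
    # core inverse (Baksalary-Trenkler characterization)
    return A_group @ A @ np.linalg.pinv(A)

# index-1 example: rank(A) == rank(A @ A)
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
X = core_inverse(A)
print(np.allclose(A @ X @ A, A))   # {1}-inverse property
print(np.allclose(X @ A @ X, X))   # {2}-inverse property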
We present some new results on the joint distribution of an arbitrary subset of the ordered eigenvalues of complex Wishart, double Wishart, and Gaussian Hermitian random matrices of finite dimensions, using a tensor pseudo-determinant operator. Specifically, we derive compact expressions for the joint probability distribution function of the eigenvalues and the expectation of functions of the eigenvalues, including joint moments, for the case of both ordered and unordered eigenvalues.
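Quantities of this kind are easy to probe by simulation. The snippet below (a Monte Carlo illustration with parameters of our choosing, not the paper's closed-form expressions) estimates a joint moment of the two largest eigenvalues of a finite-dimensional complex Wishart matrix.

import numpy as np

rng = np.random.default_rng(3)
n, p, trials = 6, 4, 20000
moments = []
for _ in range(trials):
    # n x p iid standard complex Gaussian, W = G G^H is complex Wishart
    G = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    ev = np.linalg.eigvalsh(G @ G.conj().T)   # eigenvalues, ascending
    moments.append(ev[-1] * ev[-2])           # product of the two largest
print("E[lambda_1 * lambda_2] ~", np.mean(moments))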