
Diagonality Measures of Hermitian Positive-Definite Matrices with Application to the Approximate Joint Diagonalization Problem

Added by Maher Moakher
Publication date: 2016
Language: English





In this paper, we introduce properly invariant diagonality measures of Hermitian positive-definite matrices. These diagonality measures are defined as distances or divergences between a given positive-definite matrix and its diagonal part. We then give closed-form expressions for these diagonality measures and discuss their invariance properties. The diagonality measure based on the log-determinant $\alpha$-divergence is general in that it includes, as a special case, a diagonality criterion used by the signal processing community. These diagonality measures are then used to formulate minimization problems for finding the approximate joint diagonalizer of a given set of Hermitian positive-definite matrices. Numerical computations based on a modified Newton method are presented and discussed.
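As an illustration, the simplest member of this family, the log-determinant divergence between a matrix and its diagonal part (which corresponds to the diagonality criterion used in the signal processing literature), can be sketched in a few lines. The function name and this NumPy implementation are ours, not the paper's:

```python
import numpy as np

def logdet_diagonality(A):
    """Diagonality measure from the log-det (KL) divergence between a
    Hermitian positive-definite A and its diagonal part Diag(A):
        d(A) = log det Diag(A) - log det A.
    By Hadamard's inequality d(A) >= 0, with equality iff A is diagonal."""
    diag = np.real(np.diag(A))
    _, logdet = np.linalg.slogdet(A)
    return float(np.sum(np.log(diag)) - logdet)
```

A diagonal matrix scores exactly zero; any off-diagonal mass makes the measure strictly positive, which is what makes it usable as an objective for approximate joint diagonalization.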



Related research

Marco Congedo (2015)
We explore the connection between two problems that have arisen independently in signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices, and their approximate joint diagonalization (AJD). Today there is considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has a closed-form algebraic solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms simultaneously features fast convergence, low computational complexity per iteration, and guaranteed convergence. For this reason, other definitions of the geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have recently been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time series, assuming that the data are generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low, and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.
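For two SPD matrices the closed-form mean mentioned above is $A \# B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}$. A minimal NumPy sketch, assuming real SPD inputs (the helper names are ours):

```python
import numpy as np

def _spd_sqrt(M):
    """Symmetric square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def geometric_mean(A, B):
    """Fisher-information (affine-invariant) geometric mean of two SPD
    matrices: A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = _spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ _spd_sqrt(Ais @ B @ Ais) @ As
```

A quick sanity check of the determinant identity $\det(A \# B) = \sqrt{\det A \, \det B}$ confirms the implementation; for more than two matrices no such closed form exists, which is where the AJD-based approximation of the paper comes in.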
Suvrit Sra (2019)
We study metric properties of symmetric divergences on Hermitian positive definite matrices. In particular, we prove that the square root of these divergences is a distance metric. As a corollary we obtain a proof of the metric property for Quantum Jensen-Shannon-(Tsallis) divergences (parameterized by $\alpha \in [0,2]$), which in turn (for $\alpha = 1$) yields a proof of the metric property of the Quantum Jensen-Shannon divergence that was conjectured by Lamberti \emph{et al.} a decade ago (\emph{Metric character of the quantum Jensen-Shannon divergence}, Phys. Rev. A, \textbf{79} (2008)). A somewhat more intricate argument also establishes metric properties of Jensen-Rényi divergences (for $\alpha \in (0,1)$), and outlines a technique that may be of independent interest.
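The Quantum Jensen-Shannon divergence in question is $\mathrm{QJS}(\rho,\sigma) = S\!\left(\tfrac{\rho+\sigma}{2}\right) - \tfrac{1}{2}\left(S(\rho) + S(\sigma)\right)$, with $S$ the von Neumann entropy. A sketch of our own (natural-log entropy) whose square root can be probed numerically for metric behaviour:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]  # drop numerically-zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(w * np.log(w)))

def qjsd(rho, sigma):
    """Quantum Jensen-Shannon divergence between density matrices.
    Per the paper above, sqrt(qjsd) is a distance metric."""
    m = 0.5 * (rho + sigma)
    return von_neumann_entropy(m) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))
```

The divergence itself does not satisfy the triangle inequality; the paper's point is that its square root does, which the test below spot-checks on diagonal density matrices.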
Let $\mathbb{C}^{n\times n}$ be the set of all $n \times n$ complex matrices. For any Hermitian positive semi-definite matrices $A$ and $B$ in $\mathbb{C}^{n\times n}$, a new common upper bound for $A$ and $B$ that is less than $A+B-A:B$ is constructed, where $(A+B)^\dagger$ denotes the Moore-Penrose inverse of $A+B$, and $A:B = A(A+B)^\dagger B$ is the parallel sum of $A$ and $B$. A factorization formula for $(A+X):(B+Y) - A:B - X:Y$ is derived, where $X, Y \in \mathbb{C}^{n\times n}$ are arbitrary Hermitian positive semi-definite perturbations of $A$ and $B$, respectively. Based on the derived factorization formula and the constructed common upper bound of $X$ and $Y$, some new and sharp norm upper bounds for $(A+X):(B+Y) - A:B$ are provided. Numerical examples are also provided to illustrate the sharpness of the obtained norm upper bounds.
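The parallel sum itself is a one-liner with NumPy's pseudoinverse. This is only a sketch; for invertible $A$, $B$ it reduces to the "resistors in parallel" formula $(A^{-1}+B^{-1})^{-1}$:

```python
import numpy as np

def parallel_sum(A, B):
    """Parallel sum A:B = A (A+B)^+ B of Hermitian positive
    semi-definite matrices, via the Moore-Penrose pseudoinverse."""
    return A @ np.linalg.pinv(A + B) @ B
```

Note that despite the asymmetric-looking formula, $A:B = B:A$ holds for positive semi-definite arguments.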
In this paper, a geometric method based on Grassmann manifolds and matrix Riccati equations for diagonalizing Hermitian matrices is presented. We call it Riccati Diagonalization.
Geostatistical modeling for continuous point-referenced data has been extensively applied to neuroimaging because it produces efficient and valid statistical inference. However, diffusion tensor imaging (DTI), a neuroimaging modality that characterizes brain structure, produces a positive definite (p.d.) matrix for each voxel. Current geostatistical modeling has not been extended to p.d. matrices because properly introducing spatial dependence among positive definite matrices is challenging. In this paper, we use the spatial Wishart process, a spatial stochastic process (random field) in which each p.d. matrix-variate marginally follows a Wishart distribution and spatial dependence between random matrices is induced by latent Gaussian processes. This process is valid on an uncountable collection of spatial locations and is almost surely continuous, providing a reasonable means of modeling spatial dependence. Motivated by a DTI dataset of cocaine users, we propose a spatial matrix-variate regression model based on the spatial Wishart process. A problematic issue is that the spatial Wishart process has no closed-form density function. Hence, we propose approximation methods to obtain a feasible working model. A local likelihood approximation method is also applied to achieve fast computation. The simulation studies and real data analysis demonstrate that the working model produces reliable inference and improved performance compared to other methods.
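A toy construction makes the latent-Gaussian idea concrete: draw Gaussian processes correlated across locations and sum outer products at each site, so that every matrix is marginally Wishart while neighbouring matrices are dependent. All parameter names and the squared-exponential kernel below are illustrative choices of ours, not the paper's model:

```python
import numpy as np

def sample_spatial_wishart(locs, nu=5, p=2, length_scale=1.0, seed=0):
    """Toy spatial Wishart process on 1-D locations `locs`: latent
    Gaussian processes with a squared-exponential covariance induce
    spatial dependence; at each location the p x p matrix is a sum of
    nu outer products, hence marginally Wishart(nu, I_p)."""
    rng = np.random.default_rng(seed)
    locs = np.asarray(locs, dtype=float)
    n = len(locs)
    d2 = (locs[:, None] - locs[None, :]) ** 2
    K = np.exp(-0.5 * d2 / length_scale**2) + 1e-9 * np.eye(n)
    L = np.linalg.cholesky(K)
    # nu * p independent latent processes, each an n-vector ~ N(0, K)
    Z = rng.standard_normal((nu, p, n)) @ L.T
    # S[i] = sum_k z_k(x_i) z_k(x_i)^T  (output shape: n x p x p)
    return np.einsum('kpi,kqi->ipq', Z, Z)
```

Nearby locations share strongly correlated latent draws and therefore produce similar matrices, which is exactly the kind of spatial smoothness the paper's regression model exploits.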