
Perturbation estimation for the parallel sum of Hermitian positive semi-definite matrices

Posted by: Qingxiang Xu
Publication date: 2018
Language: English





Let $\mathbb{C}^{n\times n}$ be the set of all $n\times n$ complex matrices. For any Hermitian positive semi-definite matrices $A$ and $B$ in $\mathbb{C}^{n\times n}$, a new common upper bound of $A$ and $B$ that is less than $A+B-A:B$ is constructed, where $(A+B)^\dag$ denotes the Moore-Penrose inverse of $A+B$, and $A:B=A(A+B)^\dag B$ is the parallel sum of $A$ and $B$. A factorization formula for $(A+X):(B+Y)-A:B-X:Y$ is derived, where $X,Y\in\mathbb{C}^{n\times n}$ are arbitrary Hermitian positive semi-definite perturbations of $A$ and $B$, respectively. Based on the derived factorization formula and the constructed common upper bound of $X$ and $Y$, some new and sharp norm upper bounds for $(A+X):(B+Y)-A:B$ are provided. Numerical examples are also given to illustrate the sharpness of the obtained norm upper bounds.
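As a quick illustration of the quantity being estimated, the following sketch computes the parallel sum $A:B=A(A+B)^\dag B$ with NumPy's pseudo-inverse and measures the spectral-norm perturbation $\|(A+X):(B+Y)-A:B\|$. The random test matrices and the 0.01 scaling of the perturbations are illustrative choices of ours, not data from the paper.

import numpy as np

def parallel_sum(A, B):
    # Parallel sum A:B = A (A+B)^+ B, with ^+ the Moore-Penrose inverse.
    return A @ np.linalg.pinv(A + B) @ B

rng = np.random.default_rng(0)

def random_psd(n):
    # M M^* is always Hermitian positive semi-definite.
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return M @ M.conj().T

A, B = random_psd(4), random_psd(4)
X, Y = 0.01 * random_psd(4), 0.01 * random_psd(4)

# Perturbation of the parallel sum, measured in the spectral norm.
diff = parallel_sum(A + X, B + Y) - parallel_sum(A, B)
print(np.linalg.norm(diff, 2))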




Read also

Palle Jorgensen, Feng Tian (2019)
With a view to applications in stochastic analysis and geometry, we introduce a new correspondence for positive definite (p.d.) kernels $K$ and their associated reproducing kernel Hilbert spaces. With this we establish two kinds of factorizations: (i) Probabilistic: starting with a positive definite kernel $K$, we analyze associated Gaussian processes $V$; properties of the Gaussian processes are derived from certain factorizations of $K$, arising as a covariance kernel of $V$. (ii) Geometric analysis: we discuss families of measure spaces arising as boundaries for $K$. Our results entail an analysis of a partial order on families of p.d. kernels, a duality for operators and frames, optimization, Karhunen-Loève expansions, and factorizations. Applications include a new boundary analysis for the Drury-Arveson kernel, and for certain fractals arising as iterated function systems; and an identification of optimal feature spaces in machine learning models.
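The probabilistic correspondence in (i) can be made concrete numerically: a p.d. kernel $K$ sampled on a grid gives a covariance matrix, and any square-root factorization of that matrix turns i.i.d. Gaussians into a process $V$ whose covariance kernel is $K$. The sketch below uses a Gaussian kernel and a Cholesky factor as stand-in examples; it illustrates the factorization idea only, not the paper's specific construction.

import numpy as np

def gaussian_kernel(s, t):
    # A standard positive definite kernel, used here only as an example.
    return np.exp(-0.5 * (s - t) ** 2)

# Sample the kernel on a grid to get a covariance matrix K.
grid = np.linspace(0.0, 1.0, 50)
K = gaussian_kernel(grid[:, None], grid[None, :])

# Factor K = L L^T; a small jitter keeps the Cholesky numerically stable.
L = np.linalg.cholesky(K + 1e-10 * np.eye(len(grid)))

# V = L z has covariance E[V V^T] = L L^T = K, i.e. K is the covariance kernel of V.
rng = np.random.default_rng(1)
V = L @ rng.standard_normal(len(grid))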
In this paper, we introduce properly-invariant diagonality measures of Hermitian positive-definite matrices. These diagonality measures are defined as distances or divergences between a given positive-definite matrix and its diagonal part. We then give closed-form expressions for these diagonality measures and discuss their invariance properties. The diagonality measure based on the log-determinant $\alpha$-divergence is general enough that it includes a diagonality criterion used by the signal processing community as a special case. These diagonality measures are then used to formulate minimization problems for finding the approximate joint diagonalizer of a given set of Hermitian positive-definite matrices. Numerical computations based on a modified Newton method are presented and commented on.
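As a concrete instance, one classical diagonality criterion from the signal processing literature is $\log\det\mathrm{Diag}(M) - \log\det M$, which is nonnegative by Hadamard's inequality and vanishes exactly when $M$ is diagonal; our reading is that this is the kind of special case the log-det $\alpha$-divergence family recovers. A minimal sketch, with an arbitrarily chosen test matrix:

import numpy as np

def logdet_diagonality(M):
    # log det Diag(M) - log det M: >= 0 by Hadamard's inequality,
    # and == 0 iff the Hermitian positive-definite M is diagonal.
    d = np.log(np.diag(M).real).sum()
    sign, logdet = np.linalg.slogdet(M)
    return d - logdet

A = np.array([[2.0, 0.5], [0.5, 1.0]])
print(logdet_diagonality(A))                   # > 0: A is not diagonal
print(logdet_diagonality(np.diag([2.0, 1.0]))) # 0.0 for a diagonal matrix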
Representations in the form of Symmetric Positive Definite (SPD) matrices have been popularized in a variety of visual learning applications due to their demonstrated ability to capture rich second-order statistics of visual data. Several similarity measures for comparing SPD matrices exist, with documented benefits. However, selecting an appropriate measure for a given problem remains a challenge and, in most cases, is the result of a trial-and-error process. In this paper, we propose to learn similarity measures in a data-driven manner. To this end, we capitalize on the $\alpha\beta$-log-det divergence, a meta-divergence parametrized by scalars $\alpha$ and $\beta$ that subsumes a wide family of popular information divergences on SPD matrices for distinct and discrete values of these parameters. Our key idea is to cast these parameters in a continuum and learn them from data. We systematically extend this idea to learn vector-valued parameters, thereby increasing the expressiveness of the underlying non-linear measure. We conjoin the divergence learning problem with several standard tasks in machine learning, including supervised discriminative dictionary learning and unsupervised SPD matrix clustering. We present Riemannian gradient descent schemes for optimizing our formulations efficiently, and show the usefulness of our method on eight standard computer vision tasks.
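For reference, the $\alpha\beta$-log-det divergence between SPD matrices $P$ and $Q$ (in the form given by Cichocki, Cruces, and Amari) reduces, via the generalized eigenvalues $\lambda_i$ of the pencil $(P,Q)$, to $\frac{1}{\alpha\beta}\sum_i \log\frac{\alpha\lambda_i^{\beta}+\beta\lambda_i^{-\alpha}}{\alpha+\beta}$ for $\alpha,\beta\neq 0$, $\alpha+\beta\neq 0$. The sketch below implements that scalar form; the formula and parameter choices reflect our reading of the meta-divergence, not code from the paper.

import numpy as np
from scipy.linalg import eigh

def ab_logdet_divergence(P, Q, alpha, beta):
    # Alpha-beta log-det divergence for SPD P, Q (alpha, beta != 0),
    # computed from the generalized eigenvalues of the pencil (P, Q).
    lam = eigh(P, Q, eigvals_only=True)  # eigenvalues of Q^{-1} P
    terms = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.log(terms).sum() / (alpha * beta)

P = np.array([[2.0, 0.3], [0.3, 1.0]])
Q = np.eye(2)
print(ab_logdet_divergence(P, Q, alpha=0.5, beta=0.5))
print(ab_logdet_divergence(P, P, alpha=0.5, beta=0.5))  # 0 when P == Q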
We give explicit transforms for Hilbert spaces associated with positive definite functions on $\mathbb{R}$ and positive definite tempered distributions, including generalizations to non-abelian locally compact groups. Applications to the theory of extensions of p.d. functions/distributions are included. We obtain explicit representation formulas for positive definite tempered distributions in the sense of L. Schwartz, and we give applications to Dirac combs and to diffraction. As further applications, we draw parallels between Bochner's theorem (for continuous p.d. functions) on the one hand, and its generalization to Bochner/Schwartz representations for positive definite tempered distributions on the other; in the latter case, via tempered positive measures. Via our transforms, we make precise the respective reproducing kernel Hilbert spaces (RKHSs): that of N. Aronszajn and that of L. Schwartz. Further applications are given to stationary-increment Gaussian processes.
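Bochner's theorem says a continuous function is positive definite exactly when it is the Fourier transform of a finite positive measure; numerically, this shows up as the positive semi-definiteness of every Gram matrix $[f(t_j - t_k)]$. A small check, using $f(t)=e^{-t^2/2}$ (the Fourier transform of a Gaussian measure) as an example of our choosing:

import numpy as np

# f(t) = exp(-t^2/2) is the Fourier transform of a Gaussian measure,
# so Bochner's theorem makes it positive definite on R.
f = lambda t: np.exp(-0.5 * t**2)

t = np.linspace(-3.0, 3.0, 40)
gram = f(t[:, None] - t[None, :])  # Gram matrix [f(t_j - t_k)]

# All eigenvalues of the Gram matrix are (numerically) nonnegative.
print(np.linalg.eigvalsh(gram).min() >= -1e-10)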
This paper presents two new approaches to Fourier series and spectral analysis of singular measures.