
Approximate Joint Matrix Triangularization

Posted by Nicolo Colombo
Publication date: 2016
Research field: Information Engineering
Language: English





We consider the problem of approximate joint triangularization of a set of noisy jointly diagonalizable real matrices. Approximate joint triangularizers are commonly used in the estimation of the joint eigenstructure of a set of matrices, with applications in signal processing, linear algebra, and tensor decomposition. By assuming the input matrices to be perturbations of noise-free, simultaneously diagonalizable ground-truth matrices, the approximate joint triangularizers are expected to be perturbations of the exact joint triangularizers of the ground-truth matrices. We provide a priori and a posteriori perturbation bounds on the distance between an approximate joint triangularizer and its exact counterpart. The a priori bounds are theoretical inequalities that involve functions of the ground-truth matrices and noise matrices, whereas the a posteriori bounds are given in terms of observable quantities that can be computed from the input matrices. From a practical perspective, the problem of finding the best approximate joint triangularizer of a set of noisy matrices amounts to solving a nonconvex optimization problem. We show that, under a condition on the noise level of the input matrices, it is possible to find a good initial triangularizer such that the solution obtained by any local descent-type algorithm has certain global guarantees. Finally, we discuss the application of approximate joint matrix triangularization to canonical tensor decomposition and we derive novel estimation error bounds.
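To make the setting concrete, here is a minimal NumPy sketch: it generates noisy perturbations of simultaneously diagonalizable ground-truth matrices, computes an initial orthogonal triangularizer from the real Schur form of a random linear combination of the inputs, and evaluates the off-triangular residual. The helper names (init_triangularizer, lower_residual) and the Schur-based initialization are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.linalg import schur

def lower_residual(U, mats):
    """Sum of squared strictly lower-triangular entries of U.T @ M @ U,
    i.e., how far U is from being an exact joint triangularizer."""
    return sum(np.sum(np.tril(U.T @ M @ U, k=-1) ** 2) for M in mats)

def init_triangularizer(mats, seed=None):
    """Initial orthogonal U: real Schur factor of a random linear
    combination of the inputs (an assumed, standard initialization)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(len(mats))
    combo = sum(wk * M for wk, M in zip(w, mats))
    _, U = schur(combo, output='real')   # combo = U @ T @ U.T
    return U

# Toy instance: noisy perturbations of simultaneously diagonalizable matrices.
rng = np.random.default_rng(0)
d, K, sigma = 6, 4, 1e-3
V = rng.standard_normal((d, d))                      # shared eigenvectors
truth = [V @ np.diag(rng.standard_normal(d)) @ np.linalg.inv(V)
         for _ in range(K)]
noisy = [A + sigma * rng.standard_normal((d, d)) for A in truth]

U0 = init_triangularizer(noisy, seed=1)
print("off-triangular residual:", lower_residual(U0, noisy))  # O(sigma^2)
```

At zero noise the residual vanishes, since a generic combination of the ground-truth matrices has real distinct eigenvalues and its Schur basis triangularizes every matrix in the set; with noise the residual scales with the perturbation, which is the regime the paper's bounds address.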


Read also

Matrix computations, especially iterative PDE solving (and the sparse matrix-vector multiplication subproblem within) using the conjugate gradient algorithm, and LU/Cholesky decomposition for solving systems of linear equations, form the kernel of many applications, such as circuit simulators, computational fluid dynamics, and structural analysis. The problem of designing approaches for parallelizing these computations, to obtain speedups as close as possible to the limits set by Amdahl's law, has been researched continuously. In this paper, we discuss approaches based on the use of finite projective-geometry graphs for these two problems. For the conjugate gradient algorithm, the approach considers an alternative data distribution based on projective-geometry concepts. It is proved that this data distribution is optimal for scheduling the main problem of dense matrix-vector multiplication. For parallel LU/Cholesky decomposition of general matrices, the approach is motivated by the recently published scheme for interconnects of distributed systems, perfect difference networks. We find that projective-geometry-based graphs indeed offer an exciting way of parallelizing these computations, and in fact many others. Moreover, their applications range from architectural ones (interconnect choice) to algorithmic ones (data distributions).
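For context on the kernel being parallelized, a minimal serial conjugate gradient in Python is sketched below; each iteration is dominated by one matrix-vector product, the subproblem whose data distribution the paper optimizes. This is the textbook algorithm only and contains none of the projective-geometry machinery.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Textbook CG for symmetric positive definite A. Each iteration
    is dominated by one matrix-vector product (the kernel whose data
    distribution the projective-geometry scheme targets)."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                       # the (Sp)MV kernel
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD test system.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```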
The problem of transmitting a common message to multiple users over the Gaussian multiple-input multiple-output broadcast channel is considered, where each user is equipped with an arbitrary number of antennas. A closed-loop scenario is assumed, for which a practical capacity-approaching scheme is developed. By applying judiciously chosen unitary operations at the transmit and receive nodes, the channel matrices are triangularized so that the resulting matrices have equal diagonals, up to a possible multiplicative scalar factor. This, along with the utilization of successive interference cancellation, reduces the coding and decoding tasks to those of coding and decoding over the single-antenna additive white Gaussian noise channel. Over the resulting effective channel, any off-the-shelf code may be used. For the two-user case, it was recently shown that such joint unitary triangularization is always possible. In this paper, it is shown that for more than two users, it is necessary to carry out the unitary linear processing jointly over multiple channel uses, i.e., space-time processing is employed. It is further shown that exact triangularization, where all resulting diagonals are equal, is still not always possible, and appropriate conditions for its existence are established for certain cases. When exact triangularization is not possible, an asymptotic construction is proposed that achieves the desired property of equal diagonals, up to edge effects that can be made arbitrarily small, at the price of processing a sufficiently large number of channel uses together.
Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization, $\ell_1$ norm regularized optimization, and $\ell_0$ norm regularized optimization as special cases. This paper proposes and analyzes a new Generalized Matrix Splitting Algorithm (GMSA) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Our algorithm is derived from a novel triangle operator mapping, which can be computed exactly using a new generalized Gaussian elimination procedure. We establish the global convergence, convergence rate, and iteration complexity of GMSA for convex problems. In addition, we also discuss several important extensions of GMSA. Finally, we validate the performance of our proposed method on three particular applications: nonnegative matrix factorization, $\ell_0$ norm regularized sparse coding, and the $\ell_1$ norm regularized Dantzig selector problem. Extensive experiments show that our method achieves state-of-the-art performance in terms of both efficiency and efficacy.
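For reference, the classical special case that GMSA generalizes can be written down directly: Successive Over-Relaxation for a linear system, which reduces to Gauss-Seidel at omega = 1. The sketch below shows only this classical splitting iteration, not GMSA's triangle operator or generalized Gaussian elimination.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=1000):
    """Classical Successive Over-Relaxation for Ax = b; omega = 1.0
    recovers Gauss-Seidel. Converges for SPD A with 0 < omega < 2."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Forward sweep: uses already-updated entries x[:i].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# SPD test system.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
print("residual norm:", np.linalg.norm(A @ sor(A, b) - b))
```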
In this paper, we present several descent methods that can be applied to nonnegative matrix factorization, and we analyze a recently developed fast block coordinate method called Rank-one Residue Iteration (RRI). We also give a comparison of these different methods and show that the new block coordinate method has better properties in terms of approximation error and complexity. By interpreting this method as a rank-one approximation of the residue matrix, we prove that it converges, and we also extend it to nonnegative tensor factorization and introduce some variants of the method by imposing additional controllable constraints such as sparsity, discreteness, and smoothness.
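A minimal sketch of the rank-one residue idea, assuming each update is an exact block coordinate step with a closed-form nonnegative best response; the function name rri_nmf, the initialization, and the fixed iteration count are illustrative choices, and details may differ from the published RRI.

```python
import numpy as np

def rri_nmf(V, r, n_iter=200, seed=None):
    """Sketch of Rank-one Residue Iteration: V ~ U @ W.T with U, W >= 0,
    cycling over rank-one terms and solving each subproblem in closed
    form (nonnegative best response). Details may differ from the paper."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    U = rng.random((m, r))
    W = rng.random((n, r))
    for _ in range(n_iter):
        for t in range(r):
            # Residue matrix with the t-th rank-one term removed.
            R = V - U @ W.T + np.outer(U[:, t], W[:, t])
            u = U[:, t]
            W[:, t] = np.maximum(R.T @ u, 0) / max(u @ u, 1e-12)
            w = W[:, t]
            U[:, t] = np.maximum(R @ w, 0) / max(w @ w, 1e-12)
    return U, W

# Toy usage on a random nonnegative matrix.
rng = np.random.default_rng(0)
V = rng.random((30, 20))
U, W = rri_nmf(V, r=5, seed=1)
print("relative error:", np.linalg.norm(V - U @ W.T) / np.linalg.norm(V))
```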
Stephen A. Vavasis, 2007
Nonnegative matrix factorization (NMF) has become a prominent technique for the analysis of image databases, text databases, and other information retrieval and clustering applications. In this report, we define an exact version of NMF. Then we establish several results about exact NMF: (1) that it is equivalent to a problem in polyhedral combinatorics; (2) that it is NP-hard; and (3) that a polynomial-time local search heuristic exists.
