
Making matrices better: Geometry and topology of polar and singular value decomposition

Published by Dennis DeTurck
Publication date: 2017
Research language: English





Our goal here is to see the space of matrices of a given size from a geometric and topological perspective, with emphasis on the families of various ranks and how they fit together. We pay special attention to the nearest orthogonal neighbor and nearest singular neighbor of a given matrix, both of which play central roles in matrix decompositions, and then against this visual backdrop examine the polar and singular value decompositions and some of their applications.
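For a concrete sense of how the SVD produces these nearest neighbors, here is a minimal numpy sketch (illustrative helper names, not code from the paper, and restricted to square matrices): the orthogonal polar factor Q = U V^T is the closest orthogonal matrix to A in the Frobenius norm, and zeroing the smallest singular value gives the closest singular matrix.

import numpy as np

def polar_decomposition(A):
    # SVD A = U diag(s) V^T; Q = U V^T is the nearest orthogonal matrix to A,
    # and P = V diag(s) V^T is symmetric positive semidefinite with A = Q P.
    U, s, Vt = np.linalg.svd(A)
    Q = U @ Vt
    P = Vt.T @ np.diag(s) @ Vt
    return Q, P

def nearest_singular_neighbor(A):
    # Zeroing the smallest singular value gives the closest singular matrix;
    # its distance from A (in the spectral norm) is that singular value.
    U, s, Vt = np.linalg.svd(A)
    s[-1] = 0.0
    return U @ np.diag(s) @ Vt

A = np.random.randn(3, 3)
Q, P = polar_decomposition(A)
print(np.allclose(A, Q @ P), np.allclose(Q.T @ Q, np.eye(3)))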




Read also

We present the analytical singular value decomposition of the stoichiometry matrix for a spatially discrete reaction-diffusion system on a one-dimensional domain. The domain has two subregions which share a single common boundary. Each of the subregions is further partitioned into a finite number of compartments. Chemical reactions can occur within a compartment, whereas diffusion is represented as movement between adjacent compartments. Inspired by biology, we study both 1) the case where the reactions on each side of the boundary are different and only certain species diffuse across the boundary, as well as 2) the case with spatially homogeneous reactions and diffusion. We write the stoichiometry matrix for these two classes of systems using a Kronecker product formulation. For the first scenario, we apply linear perturbation theory to derive an approximate singular value decomposition in the limit as diffusion becomes much faster than reactions. For the second scenario, we derive an exact analytical singular value decomposition for all relative diffusion and reaction time scales. By writing the stoichiometry matrix using Kronecker products, we show that the singular vectors and values can also be written concisely using Kronecker products. Ultimately, we find that the singular value decomposition of the reaction-diffusion stoichiometry matrix depends on the singular value decompositions of smaller matrices. These smaller matrices represent modified
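As a rough illustration of the Kronecker-product structure mentioned above, the following numpy check (synthetic random factors, not the paper's stoichiometry blocks) verifies that the singular values of a Kronecker product are the pairwise products of the factors' singular values, which is what allows the SVD of the assembled matrix to be expressed through SVDs of smaller matrices.

import numpy as np

A = np.random.randn(3, 4)   # stand-in for one small factor
B = np.random.randn(2, 5)   # stand-in for the other factor
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)

# Singular values of kron(A, B) are the products sA[i] * sB[j], in some order.
s_from_factors = np.sort(np.outer(sA, sB).ravel())[::-1]
s_direct = np.linalg.svd(np.kron(A, B), compute_uv=False)
print(np.allclose(s_from_factors, s_direct))   # True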
Michael E. Wall, 2002
This chapter describes gene expression analysis by Singular Value Decomposition (SVD), emphasizing initial characterization of the data. We describe SVD methods for visualization of gene expression data, representation of the data using a smaller number of variables, and detection of patterns in noisy gene expression data. In addition, we describe the precise relation between SVD analysis and Principal Component Analysis (PCA) when PCA is calculated using the covariance matrix, enabling our descriptions to apply equally well to either method. Our aim is to provide definitions, interpretations, examples, and references that will serve as resources for understanding and extending the application of SVD and PCA to gene expression analysis.
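The SVD/PCA relation described here can be sketched in a few lines of numpy (synthetic data standing in for an expression matrix, with rows as samples and columns as genes): the eigenvalues of the covariance matrix coincide with the squared singular values of the mean-centered data matrix divided by n - 1.

import numpy as np

X = np.random.randn(20, 6)                # synthetic stand-in, rows = samples
Xc = X - X.mean(axis=0)                   # center each gene (column)

# PCA from the covariance matrix
cov = Xc.T @ Xc / (Xc.shape[0] - 1)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

# The same variances from the SVD of the centered data matrix
s = np.linalg.svd(Xc, compute_uv=False)
print(np.allclose(eigvals, s**2 / (Xc.shape[0] - 1)))   # True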
Ehsan Rohani, Gwan Choi, Mi Lu, 2017
In this paper we introduce the algorithm and the fixed-point hardware to calculate the normalized singular value decomposition of non-symmetric matrices using Givens fast (approximate) rotations. This algorithm only uses basic combinational logic modules such as adders, multiplexers, encoders, Barrel shifters (B-shifters), and comparators, and does not use any lookup table. The method in fact combines the iterative properties of the singular value decomposition method and the CORDIC method in a single iteration. The introduced architecture is a systolic architecture that uses two different types of processors, diagonal and non-diagonal. The diagonal processor calculates, transmits, and applies the horizontal and vertical rotations, while the non-diagonal processor uses a fully combinational architecture to receive and apply the rotations. The diagonal processor uses priority encoders, Barrel shifters, and comparators to calculate the rotation angles. Both processors use a series of adders to apply the rotation angles. The design presented in this work provides $2.83\sim649$ times better energy-per-matrix performance compared to the state-of-the-art designs. This performance is achieved without the employment of pipelining; a better performance advantage is expected when pipelining is employed.
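Since the hardware details are hard to convey in prose alone, here is a small software sketch of the underlying idea: a one-sided Jacobi SVD built from Givens rotations. It uses exact floating-point rotations and invented function names, whereas the paper's contribution is to replace these rotations with fast approximate shift-and-add (CORDIC-like) rotations in fixed-point hardware; the sketch is only meant to show where the rotations enter.

import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    # One-sided (Hestenes) Jacobi SVD: repeatedly orthogonalize column pairs
    # with Givens rotations, accumulating them into the right singular vectors.
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                t = 1.0 if zeta == 0 else np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                R = np.array([[c, s], [-s, c]])   # Givens rotation
                U[:, [p, q]] = U[:, [p, q]] @ R   # orthogonalize the column pair
                V[:, [p, q]] = V[:, [p, q]] @ R   # accumulate right singular vectors
        if converged:
            break
    sigma = np.linalg.norm(U, axis=0)             # singular values (assumes full rank)
    return U / sigma, sigma, V.T                  # A = U diag(sigma) V^T

A = np.random.randn(4, 4)
U, s, Vt = one_sided_jacobi_svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))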
This paper introduces the functional tensor singular value decomposition (FTSVD), a novel dimension reduction framework for tensors with one functional mode and several tabular modes. The problem is motivated by high-order longitudinal data analysis. Our model assumes the observed data to be a random realization of an approximate CP low-rank functional tensor measured on a discrete time grid. Incorporating tensor algebra and the theory of Reproducing Kernel Hilbert Spaces (RKHS), we propose a novel RKHS-based constrained power iteration with spectral initialization. Our method can successfully estimate both singular vectors and functions of the low-rank structure in the observed data. Under mild assumptions, we establish non-asymptotic contractive error bounds for the proposed algorithm. The superiority of the proposed framework is demonstrated via extensive experiments on both simulated and real data.
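As a rough analogue of the constrained power iteration described above, the sketch below runs a plain rank-1 CP power iteration on a 3-way array in numpy. It omits the RKHS smoothing applied to the functional mode and the spectral initialization, and all names are illustrative rather than taken from the paper.

import numpy as np

def rank1_power_iteration(T, n_iter=100, seed=0):
    # Alternating power updates for the best rank-1 CP approximation of T.
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(d) for d in T.shape)
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)   # leading "singular value"
    return lam, u, v, w

# Noisy rank-1 tensor: the iteration recovers its factors up to sign.
a, b, c = np.random.randn(5), np.random.randn(6), np.random.randn(7)
T = np.einsum('i,j,k->ijk', a, b, c) + 0.01 * np.random.randn(5, 6, 7)
lam, u, v, w = rank1_power_iteration(T)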
Jouhayna Harmouch, 2017
We study the decomposition of a multivariate Hankel matrix $H_\sigma$ as a sum of Hankel matrices of small rank in correlation with the decomposition of its symbol $\sigma$ as a sum of polynomial-exponential series. We present a new algorithm to compute the low-rank decomposition of the Hankel operator and the decomposition of its symbol exploiting the properties of the associated Artinian Gorenstein quotient algebra $A_\sigma$. A basis of $A_\sigma$ is computed from the Singular Value Decomposition of a sub-matrix of the Hankel matrix $H_\sigma$. The frequencies and the weights are deduced from the generalized eigenvectors of pencils of shifted sub-matrices of $H_\sigma$. Explicit formulas for the weights in terms of the eigenvectors allow us to avoid solving a Vandermonde system. This new method is a multivariate generalization of the so-called Pencil method for solving Prony-type decomposition problems. We analyse its numerical behaviour in the presence of noisy input moments, and describe a rescaling technique which improves the numerical quality of the reconstruction for frequencies of high amplitudes. We also present a new Newton iteration, which converges locally to the closest multivariate Hankel matrix of low rank, and show its impact for correcting errors on input moments.
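To make the pencil idea concrete, here is a minimal one-dimensional numpy sketch (not the paper's multivariate algorithm): frequencies are recovered as generalized eigenvalues of a pencil of shifted Hankel matrices, and the weights are then obtained here from a small Vandermonde solve, the very step the paper's eigenvector formulas avoid.

import numpy as np

# Recover frequencies z_j and weights w_j from moments m_k = sum_j w_j * z_j**k.
z_true = np.array([0.9, -0.5, 0.3])
w_true = np.array([2.0, 1.0, 0.5])
m = np.array([(w_true * z_true**k).sum() for k in range(8)])

r = len(z_true)
H0 = np.array([[m[i + j] for j in range(r)] for i in range(r)])      # Hankel matrix
H1 = np.array([[m[i + j + 1] for j in range(r)] for i in range(r)])  # shifted Hankel matrix

# Frequencies are the generalized eigenvalues of the pencil (H1, H0).
z_est = np.linalg.eigvals(np.linalg.solve(H0, H1))

# Weights from a small Vandermonde solve (the paper derives them directly
# from the eigenvectors instead).
V = np.vander(z_est, N=r, increasing=True).T
w_est = np.linalg.solve(V, m[:r])
print(np.sort(z_est.real), w_est.real)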
