
Some fast algorithms multiplying a matrix by its adjoint

Publication date: 2021
Language: English





We present a non-commutative algorithm for the multiplication of a 2 x 2 block-matrix by its adjoint, where the adjoint is defined by a matrix ring anti-homomorphism. This algorithm uses 5 block products (3 recursive calls and 2 general products) over C or in positive characteristic. The resulting algorithm for arbitrary dimensions is a reduction of the multiplication of a matrix by its adjoint to general matrix products, improving on previously known reductions by a constant factor. We also prove that there is no algorithm derived from bilinear forms using only four products and the adjoint of one of them. Second, we give novel dedicated algorithms for the complex field and the quaternions that instead compute the multiplication by taking advantage of the structure of the matrix-polynomial arithmetic involved. We then analyze the respective ranges of predominance of the two strategies. Finally, we propose schedules with low memory footprint that support a fast and memory-efficient practical implementation over a prime field.
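To make the shape of the reduction concrete, here is a minimal Python/NumPy sketch of the straightforward block decomposition that the paper improves on: splitting A into 2 x 2 blocks reduces the Gram product A·A^H to 4 recursive calls on the diagonal terms plus 2 general products for the off-diagonal block, with the lower-left block recovered by symmetry since the result is Hermitian. The paper's contribution is a Strassen-like scheme that needs only 5 block products (3 recursive and 2 general); its exact formulas are in the paper and are not reproduced here.

```python
import numpy as np

def gram(A):
    """Compute A @ A^H by 2x2 blocking.

    Naive block scheme for illustration: 4 recursive Gram calls on the
    half-size blocks plus 2 general products; the paper's algorithm
    replaces this with 5 block products (3 recursive + 2 general).
    """
    n = A.shape[0]
    if n % 2 != 0 or n <= 2:          # base case: plain product
        return A @ A.conj().T
    h = n // 2
    A11, A12 = A[:h, :h], A[:h, h:]
    A21, A22 = A[h:, :h], A[h:, h:]
    C11 = gram(A11) + gram(A12)       # recursive Gram calls
    C22 = gram(A21) + gram(A22)
    C12 = A11 @ A21.conj().T + A12 @ A22.conj().T  # 2 general products
    C21 = C12.conj().T                # result is Hermitian
    return np.block([[C11, C12], [C21, C22]])

A = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)
assert np.allclose(gram(A), A @ A.conj().T)
```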

Related research

We present a non-commutative algorithm for the multiplication of a 2 x 2 block-matrix by its transpose using 5 block products (3 recursive calls and 2 general products) over C or any finite field. We use geometric considerations on the space of bilinear forms describing 2 x 2 matrix products to obtain this algorithm, and we show how to reduce the number of additions involved. The resulting algorithm for arbitrary dimensions is a reduction of the multiplication of a matrix by its transpose to general matrix products, improving on previously known reductions by a constant factor. Finally, we propose schedules with low memory footprint that support a fast and memory-efficient practical implementation over a finite field. To conclude, we show how to use our result in LDL^T factorization.
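As a pointer to the LDL^T connection mentioned above: once the symmetric product A·A^T is formed, it can be handed to a symmetric-indefinite factorization routine. A small illustrative sketch using SciPy's scipy.linalg.ldl (our choice of routine here, not something prescribed by the paper):

```python
import numpy as np
from scipy.linalg import ldl

# Form the symmetric Gram matrix G = A @ A.T (the product that the
# fast algorithm computes), then factor G = L D L^T.
A = np.random.rand(6, 4)
G = A @ A.T

L, D, perm = ldl(G)            # SciPy's Bunch-Kaufman-style LDL^T
assert np.allclose(L @ D @ L.T, G)
```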
136 - Gilles Villard 2008
In 1992, Kaltofen proposed a new approach for computing matrix determinants without divisions. The algorithm is based on a baby-steps/giant-steps construction of Krylov subspaces, and computes the determinant as the constant term of a characteristic polynomial. For matrices over an abstract ring, by the results of Baur and Strassen, the determinant algorithm, actually a straight-line program, leads to an algorithm with the same complexity for computing the adjoint of a matrix. However, the latter adjoint algorithm is obtained by the reverse mode of automatic differentiation, and hence is somewhat implicit. We present an alternative (still closely related) algorithm for the adjoint that can be implemented directly, that is, without resorting to an automatic transformation. The algorithm is deduced by applying program differentiation techniques by hand to Kaltofen's method, and is completely described. As a subproblem, we study the differentiation of programs that compute minimum polynomials of linearly generated sequences, and we use a lazy polynomial evaluation mechanism to reduce the cost of Strassen's avoidance of divisions in our case.
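The Baur-Strassen connection invoked above rests on Jacobi's formula: the gradient of det(A) with respect to the entries of A is the transpose of the adjugate (classical adjoint), so differentiating a determinant program yields an adjoint program of the same complexity. A small numerical check of that identity (not Kaltofen's division-free algorithm itself), assuming an invertible A so that adj(A) = det(A)·A^{-1}:

```python
import numpy as np

def adjugate(A):
    # For invertible A, the adjugate satisfies adj(A) = det(A) * inv(A).
    return np.linalg.det(A) * np.linalg.inv(A)

# Jacobi's formula: d det(A) / d A[i, j] = adj(A)[j, i].
A = np.random.rand(4, 4)
grad = np.empty_like(A)
eps = 1e-6
for i in range(4):
    for j in range(4):
        E = np.zeros_like(A)
        E[i, j] = eps
        # central finite difference of the determinant
        grad[i, j] = (np.linalg.det(A + E) - np.linalg.det(A - E)) / (2 * eps)

assert np.allclose(grad, adjugate(A).T, atol=1e-5)
```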
The complexity of matrix multiplication (hereafter MM) has been intensively studied since 1969, when Strassen surprisingly decreased the exponent 3 in the cubic cost of straightforward classical MM to log2(7) ≈ 2.8074. Applications to some fundamental problems of Linear Algebra and Computer Science were immediately recognized, but researchers in Computer Algebra keep discovering more and more applications even today, with no sign of slowdown. We survey the unfinished history of decreasing the exponent towards its information lower bound 2, recall some important techniques discovered in this process and linked to other fields of computing, reveal some surprising applications to the fast computation of inner products of two vectors and summation of integers, and discuss the curse of recursion, which separates the progress in fast MM into its most acclaimed, purely theoretical part and the valuable acceleration of MM of feasible sizes. Then, in the second part of our paper, we cover fast MM in realistic symbolic computations and discuss applications and implementations of fast exact matrix multiplication. We first review how most of exact linear algebra can be reduced to matrix multiplication over small finite fields. Then we highlight the differences in the design of approximate and exact implementations of fast MM, taking into account today's processor and memory hierarchies. In the concluding section we comment on current perspectives of the study of fast MM.
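For reference, the 1969 scheme mentioned above: Strassen's algorithm multiplies two 2 x 2 (block) matrices with 7 multiplications instead of 8, which gives the log2(7) exponent when applied recursively. A minimal sketch of one recursion level on NumPy blocks:

```python
import numpy as np

def strassen_2x2(A, B):
    """One level of Strassen's 7-multiplication scheme on 2x2 blocks."""
    h = A.shape[0] // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
assert np.allclose(strassen_2x2(A, B), A @ B)
```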
156 - Xavier Caruso 2021
We design algorithms for computing values of many p-adic elementary and special functions, including logarithms, exponentials, polylogarithms, and hypergeometric functions. All our algorithms feature quasi-linear complexity with respect to the target precision, and most of them are based on an adaptation to the p-adic setting of the binary splitting and bit-burst strategies.
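Binary splitting, mentioned above, evaluates a series of rationals by recursively combining numerator/denominator pairs over halves of the index range, so that large-integer multiplications dominate and the cost becomes quasi-linear in the precision with fast integer arithmetic. A minimal sketch for the exponential series defining e, using exact Python integers (a generic illustration, not the paper's p-adic variant):

```python
from fractions import Fraction

def binary_split(a, b):
    """Return (P, Q) with P/Q = sum_{k=a+1}^{b} a!/k!.

    Each half of the range is merged with a couple of big-integer
    multiplications, which is what makes the method quasi-linear
    in the precision when fast integer arithmetic is available.
    """
    if b == a + 1:
        return 1, b
    m = (a + b) // 2
    Pl, Ql = binary_split(a, m)
    Pr, Qr = binary_split(m, b)
    return Pl * Qr + Pr, Ql * Qr

# Partial sum of e = sum_{k>=0} 1/k!, in exact rational arithmetic.
P, Q = binary_split(0, 30)
print(float(1 + Fraction(P, Q)))   # 2.718281828459045...
```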
107 - Daniel S. Roche 2018
We present new algorithms to detect and correct errors in the product of two matrices, or the inverse of a matrix, over an arbitrary field. Our algorithms do not require any additional information or encoding other than the original inputs and the erroneous output. Their running time is softly linear in the number of nonzero entries in these matrices when the number of errors is sufficiently small, and they also incorporate fast matrix multiplication so that the cost scales well when the number of errors is large. These algorithms build on the recent result of Gasieniec et al. (2017) on correcting matrix products, as well as existing work on verification algorithms, sparse low-rank linear algebra, and sparse polynomial interpolation.
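The verification-algorithm ingredient mentioned above is classically illustrated by Freivalds' check: to test whether C = A·B, multiply both sides by a random vector and compare, at the cost of a few matrix-vector products per trial. This is the classic technique, sketched below, not the error-correcting algorithm of the paper itself:

```python
import numpy as np

def freivalds(A, B, C, trials=20):
    """Probabilistic check that C == A @ B (Freivalds' algorithm).

    Each trial uses three matrix-vector products; a wrong C survives
    a trial with probability at most 1/2 over a random 0/1 vector,
    so 20 trials give error probability at most 2**-20.
    """
    n = C.shape[1]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))
        if not np.array_equal(A @ (B @ r), C @ r):
            return False    # definitely not the product
    return True             # correct with high probability

A = np.random.randint(0, 10, (50, 50))
B = np.random.randint(0, 10, (50, 50))
C = A @ B
assert freivalds(A, B, C)
C[3, 7] += 1                # inject a single error
assert not freivalds(A, B, C)
```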
