We propose to store several integers modulo a small prime into a single machine word. Modular addition is performed by addition, and possibly subtraction of a word containing several copies of the modulus. Modular multiplication is not directly accessible, but a modular dot product can be performed by an integer multiplication by the reversed integer. Modular multiplication by a word containing a single residue is also possible. Therefore matrix multiplication can be performed on such a compressed storage. We give bounds on the sizes of primes and matrices for which such a compression is possible, and we detail the required compressed arithmetic routines.
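As an illustration of this compression, here is a minimal Python sketch that packs k = 4 residues modulo p = 3 into one word with radix B = 2^16; these parameter values are illustrative choices of ours, not the bounds derived in the paper, which determine which combinations avoid digit overflow. The dot product uses the multiplication-by-the-reversed-integer trick described above.

```python
p, k, LOG_B = 3, 4, 16           # illustrative choices, not the paper's bounds
B = 1 << LOG_B                   # radix: each residue occupies one base-B digit

def pack(residues):
    """Pack k residues mod p into one integer, residue i at weight B**i."""
    w = 0
    for r in reversed(residues):
        w = w * B + (r % p)
    return w

def unpack(w):
    """Recover the k residues, reducing each base-B digit mod p."""
    return [(w >> (LOG_B * i)) % B % p for i in range(k)]

def cadd(u, v):
    """Compressed addition: one word addition; digits stay below B here,
    so the reduction mod p can be delayed until unpacking."""
    return u + v

def cdot(u, residues):
    """Compressed dot product: multiply by the word holding the residues in
    reversed order; the base-B digit of the product at weight B**(k-1) is
    exactly sum_i u_i * v_i, which is then reduced mod p."""
    v_rev = pack(list(reversed(residues)))
    return ((u * v_rev) >> (LOG_B * (k - 1))) % B % p

a, b = [1, 2, 0, 2], [2, 2, 1, 1]
assert unpack(cadd(pack(a), pack(b))) == [(x + y) % p for x, y in zip(a, b)]
assert cdot(pack(a), b) == sum(x * y for x, y in zip(a, b)) % p
```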
The complexity of matrix multiplication (hereafter MM) has been intensively studied since 1969, when Strassen surprisingly decreased the exponent 3 in the cubic cost of the straightforward classical MM to $\log_2(7) \approx 2.8074$. Applications to some fundamental problems of linear algebra and computer science were recognized immediately, and researchers in computer algebra keep discovering new applications even today, with no sign of slowdown. We survey the unfinished history of decreasing the exponent towards its information lower bound 2, recall some important techniques discovered in this process and linked to other fields of computing, reveal sample surprising applications to fast computation of the inner product of two vectors and summation of integers, and discuss the curse of recursion, which separates the progress in fast MM into its most acclaimed, purely theoretical part and the valuable acceleration of MM of feasible sizes. Then, in the second part of our paper, we cover fast MM in realistic symbolic computations and discuss applications and implementations of fast exact matrix multiplication. We first review how most of exact linear algebra can be reduced to matrix multiplication over small finite fields. Then we highlight the differences in the design of approximate and exact implementations of fast MM, taking into account today's processor and memory hierarchies. In the concluding section we comment on current perspectives of the study of fast MM.
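As a reminder of the survey's starting point, here is a Python sketch of the 2x2 step of Strassen's 1969 algorithm, which trades the 8 scalar multiplications of the classical formula for 7 and, applied recursively to blocks, yields the $\log_2(7)$ exponent. The variable names are ours; the seven products are Strassen's original ones.

```python
# The 2x2 step of Strassen's algorithm: 7 products instead of 8.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```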
We present new algorithms to detect and correct errors in the product of two matrices, or in the inverse of a matrix, over an arbitrary field. Our algorithms do not require any additional information or encoding beyond the original inputs and the erroneous output. Their running time is softly linear in the number of nonzero entries in these matrices when the number of errors is sufficiently small, and they also incorporate fast matrix multiplication so that the cost scales well when the number of errors is large. These algorithms build on the recent result of Gasieniec et al. (2017) on correcting matrix products, as well as existing work on verification algorithms, sparse low-rank linear algebra, and sparse polynomial interpolation.
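The best-known verification primitive of the kind this work builds on is Freivalds' probabilistic check, sketched below in Python under the simplifying assumption of integer matrices and {0,1} test vectors. This only detects that C differs from A*B in O(n^2) time per trial; the algorithms above go further and locate and correct the erroneous entries.

```python
import random

def freivalds(A, B, C, trials=20):
    """Test C == A*B without recomputing the product: for a random 0/1
    vector v, compare A*(B*v) with C*v. Each trial costs O(n^2)."""
    n = len(A)
    for _ in range(trials):
        v = [random.randint(0, 1) for _ in range(n)]
        Bv = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        ABv = [sum(A[i][j] * Bv[j] for j in range(n)) for i in range(n)]
        Cv = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        if ABv != Cv:
            return False     # a discrepancy proves C != A*B
    return True              # wrong with probability <= 2**(-trials)

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert freivalds(A, B, [[19, 22], [43, 50]])
assert not freivalds(A, B, [[19, 22], [43, 51]])   # fails w.p. <= 2**-20
```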
We present a non-commutative algorithm for the multiplication of a 2x2 block matrix by its transpose using 5 block products (3 recursive calls and 2 general products) over C or any finite field. We use geometric considerations on the space of bilinear forms describing 2x2 matrix products to obtain this algorithm, and we show how to reduce the number of additions involved. The resulting algorithm for arbitrary dimensions is a reduction of the multiplication of a matrix by its transpose to general matrix product, improving by a constant factor previously known reductions. Finally, we propose schedules with low memory footprint that support a fast and memory-efficient practical implementation over a finite field. To conclude, we show how to use our result in LDL^T factorization.
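For context, the following Python/NumPy sketch shows the classical divide-and-conquer baseline that this result improves on: with a 2x2 block splitting and the symmetry of A*A^T, each level costs 4 recursive symmetric calls plus 2 general block products, versus the 3 + 2 of the algorithm above, whose 5-product formulas we do not reproduce here.

```python
# Baseline reduction of A*A^T to general products via 2x2 blocking:
# 4 recursive calls + 2 general products per level (the paper needs 3 + 2).
import numpy as np

def aat(A):
    n = A.shape[0]
    if n <= 1 or n % 2:
        return A @ A.T                      # base case: plain product
    m = n // 2
    A11, A12 = A[:m, :m], A[:m, m:]
    A21, A22 = A[m:, :m], A[m:, m:]
    C11 = aat(A11) + aat(A12)               # 2 recursive symmetric calls
    C22 = aat(A21) + aat(A22)               # 2 recursive symmetric calls
    C12 = A11 @ A21.T + A12 @ A22.T         # 2 general products
    return np.block([[C11, C12], [C12.T, C22]])

A = np.random.randint(-5, 5, (8, 8))
assert np.array_equal(aat(A), A @ A.T)
```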
We propose a new algorithm for multiplying dense polynomials with integer coefficients in a parallel fashion, targeting multi-core processor architectures. Complexity estimates and experimental comparisons demonstrate the advantages of this new approach.
In the sparse polynomial multiplication problem, one is asked to multiply two sparse polynomials f and g in time that is proportional to the size of the input plus the size of the output. The polynomials are given via lists of their coefficients, F and G respectively. Cole and Hariharan (STOC 02) gave a nearly optimal algorithm when the coefficients are positive, and Arnold and Roche (ISSAC 15) devised an algorithm running in time proportional to the structural sparsity of the product, i.e. the size of the sumset supp(F)+supp(G). The latter algorithm is particularly efficient when there are not too many cancellations of coefficients in the product. In this work we give a clean, nearly optimal algorithm for the sparse polynomial multiplication problem.
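For contrast with the nearly optimal algorithm above, here is the folklore quadratic baseline in Python, with polynomials stored as exponent-to-coefficient dictionaries: it spends time on all |F|*|G| term pairs regardless of the output size, and the test case illustrates the cancellation phenomenon discussed above.

```python
# Quadratic baseline: every term of f meets every term of g.
def sparse_mul(f, g):
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = ef + eg
            c = h.get(e, 0) + cf * cg
            if c:
                h[e] = c
            else:
                h.pop(e, None)    # cancellation: drop zero coefficients
    return h

# (1 + x^100) * (1 - x^100) = 1 - x^200: the x^100 terms cancel.
assert sparse_mul({0: 1, 100: 1}, {0: 1, 100: -1}) == {0: 1, 200: -1}
```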