Computational problem certificates are additional data structures for each output, which can be used by a (possibly randomized) verification algorithm that proves the correctness of each output. In this paper, we give an algorithm that computes a certificate for the minimal polynomial of sparse or structured $n\times n$ matrices over an abstract field of sufficiently large cardinality, whose Monte Carlo verification requires a single matrix-vector multiplication and a linear number of extra field operations. We also propose a novel preconditioner that ensures irreducibility of the characteristic polynomial of the generically preconditioned matrix. This preconditioner takes linear time to apply and uses only two random entries. We then combine these two techniques to give algorithms that compute certificates for the determinant, and thus for the characteristic polynomial, whose Monte Carlo verification complexity is therefore also linear.
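For illustration, the sketch below shows the simpler randomized annihilation test that such certificates build on: checking that a claimed polynomial annihilates the matrix on a random vector. This naive check costs one matrix-vector product per coefficient, not the single product achieved by the certificate above, and it does not verify minimality; the prime, the sparse-matrix encoding, and the helper names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: Monte Carlo test that a claimed polynomial m annihilates A,
# i.e. m(A)*v = 0 for a random vector v, evaluated by Horner's rule.
# This costs deg(m) matrix-vector products (not the single one of the paper's
# certificate) and does not check that m has minimal degree.
import random

def sparse_matvec(entries, x, n, p):
    """y = A*x for a sparse A stored as {(i, j): a_ij}, over GF(p)."""
    y = [0] * n
    for (i, j), a in entries.items():
        y[i] = (y[i] + a * x[j]) % p
    return y

def annihilates(entries, n, minpoly, p, trials=1, rng=random):
    """Monte Carlo check that minpoly(A)*v = 0 for random v.
    minpoly is a coefficient list in descending degree, leading coefficient first."""
    for _ in range(trials):
        v = [rng.randrange(p) for _ in range(n)]
        acc = [minpoly[0] * vi % p for vi in v]      # Horner: start with the lead coefficient
        for c in minpoly[1:]:
            acc = sparse_matvec(entries, acc, n, p)  # acc = A*acc
            acc = [(a + c * vi) % p for a, vi in zip(acc, v)]
        if any(acc):
            return False                             # m(A)*v != 0: reject
    return True                                      # accept (may err with small probability)

# Example: A = diag(2, 3) over GF(101) has minimal polynomial (x-2)(x-3) = x^2 - 5x + 6.
A = {(0, 0): 2, (1, 1): 3}
print(annihilates(A, 2, [1, -5 % 101, 6], 101))      # True
```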
Certificates to a linear algebra computation are additional data structures for each output, which can be used by a (possibly randomized) verification algorithm that proves the correctness of each output. Wiedemann's algorithm projects the Krylov sequence obtained by repeatedly multiplying a vector by a matrix to obtain a linearly recurrent sequence. The minimal polynomial of this sequence divides the minimal polynomial of the matrix. For instance, if the $n\times n$ input matrix is sparse with $n^{1+o(1)}$ non-zero entries, the computation of the sequence is quadratic in the dimension of the matrix, while the computation of the minimal polynomial is $n^{1+o(1)}$ once that projected Krylov sequence is obtained. In this paper we give algorithms that compute certificates for the Krylov sequence of sparse or structured $n\times n$ matrices over an abstract field, whose Monte Carlo verification complexity can be made essentially linear. As an application, this gives certificates for the determinant and the minimal and characteristic polynomials of sparse or structured matrices at the same cost.
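As a point of reference for the Krylov projection described above, here is a minimal sketch of scalar Wiedemann over a prime field: project the Krylov sequence with random vectors u and v, then recover the minimal polynomial of the projected sequence with Berlekamp-Massey. The sparse-matrix encoding, the choice of prime, and the small example are illustrative assumptions; this is the computation being certified, not the certificate itself.

```python
# Scalar Wiedemann sketch over GF(p): build the projected Krylov sequence
# u^T A^i v, then find its minimal polynomial with Berlekamp-Massey.
import random

def berlekamp_massey(S, p):
    """Minimal polynomial of the linearly recurrent sequence S over GF(p),
    returned as coefficients in descending degree: [1, a_{L-1}, ..., a_0]
    for x^L + a_{L-1} x^{L-1} + ... + a_0."""
    n = len(S)
    L, m, b = 0, 0, 1
    C, B = [0] * (n + 1), [0] * (n + 1)
    C[0] = B[0] = 1
    for i in range(n):
        m += 1
        d = S[i] % p
        for j in range(1, L + 1):
            d = (d + C[j] * S[i - j]) % p
        if d == 0:
            continue
        T = C[:]
        coef = d * pow(b, p - 2, p) % p
        for j in range(m, n + 1):
            C[j] = (C[j] - coef * B[j - m]) % p
        if 2 * L > i:
            continue
        L, B, b, m = i + 1 - L, T, d, 0
    return [1] + [C[j] for j in range(1, L + 1)]

def sparse_matvec(entries, x, n, p):
    """y = A*x for A given as a dict {(i, j): a_ij} over GF(p)."""
    y = [0] * n
    for (i, j), a in entries.items():
        y[i] = (y[i] + a * x[j]) % p
    return y

def wiedemann_minpoly_of_sequence(entries, n, p, rng=random):
    """Minimal polynomial of the projected sequence u^T A^i v, which divides
    the minimal polynomial of A (and equals it for generic u, v over a large field)."""
    u = [rng.randrange(p) for _ in range(n)]
    v = [rng.randrange(p) for _ in range(n)]
    seq, x = [], v[:]
    for _ in range(2 * n):                      # 2n terms suffice
        seq.append(sum(ui * xi for ui, xi in zip(u, x)) % p)
        x = sparse_matvec(entries, x, n, p)     # one matrix-vector product per term
    return berlekamp_massey(seq, p)

if __name__ == "__main__":
    p = 2_147_483_647                           # a large prime field (illustrative choice)
    # 4x4 sparse example: a cyclic shift plus one extra diagonal entry
    A = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1, (0, 0): 5}
    print(wiedemann_minpoly_of_sequence(A, 4, p))
```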
We present an algorithm for computing a Smith form with multipliers of a regular matrix polynomial over a field. This algorithm differs from previous ones in that it computes a local Smith form for each irreducible factor in the determinant separately and then combines them into a global Smith form, whereas other algorithms apply a sequence of unimodular row and column operations to the original matrix. The performance of the algorithm in exact arithmetic is reported for several test cases.
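To illustrate the local-to-global idea in a simpler setting, the sketch below uses SymPy over the integers, where primes play the role that irreducible factors of the determinant play for matrix polynomials: each invariant factor splits into prime-power local parts, and the global Smith form recombines them. This is an analogy under stated assumptions, not the paper's algorithm for regular matrix polynomials.

```python
# Integer analogue of local Smith forms: compute the Smith normal form over ZZ
# and split each invariant factor into its prime-power "local" parts.
from sympy import Matrix, ZZ, factorint
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[12, 6, 4],
            [3, 9, 6],
            [2, 16, 14]])
S = smith_normal_form(M, domain=ZZ)
print(S)                                   # diagonal matrix of invariant factors
for k in range(M.rows):
    d = S[k, k]
    if d != 0:
        # local data at each prime dividing the determinant
        print(d, "=", factorint(abs(d)))
```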
We present randomized algorithms to compute the sumset (Minkowski sum) of two integer sets, and to multiply two univariate integer polynomials given by sparse representations. Our algorithm for sumset has cost softly linear in the combined size of the inputs and output. This is used as part of our sparse multiplication algorithm, whose cost is softly linear in the combined size of the inputs, output, and the sumset of the supports of the inputs. As a subroutine, we present a new method for computing the coefficients of a sparse polynomial, given a set containing its support. Our multiplication algorithm extends to multivariate Laurent polynomials over finite fields and rational numbers. Our techniques are based on sparse interpolation algorithms and results from analytic number theory.
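As a baseline for what these algorithms compute (not the softly-linear algorithms themselves), the following sketch forms the sumset of two integer sets and multiplies two sparse polynomials given as exponent-to-coefficient dictionaries, both in quadratic time; the representation is an illustrative choice.

```python
# Quadratic-time baselines for the two objects in question: the sumset of two
# integer sets, and the product of two sparse polynomials.
def sumset(A, B):
    """Minkowski sum {a + b : a in A, b in B}."""
    return {a + b for a in A for b in B}

def sparse_mul(f, g):
    """Product of sparse polynomials given as {exponent: coefficient} dicts."""
    h = {}
    for ea, ca in f.items():
        for eb, cb in g.items():
            h[ea + eb] = h.get(ea + eb, 0) + ca * cb
    return {e: c for e, c in h.items() if c != 0}    # drop cancelled terms

f = {0: 1, 1: 1, 100: 2}          # 1 + x + 2x^100
g = {0: -1, 1: 1}                 # -1 + x
print(sorted(sumset(set(f), set(g))))   # sumset of the supports: [0, 1, 2, 100, 101]
print(sparse_mul(f, g))                 # the x terms cancel: support is a subset of the sumset
```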
This work is a comprehensive extension of Abu-Salem et al. (2015) that investigates the prowess of the Funnel Heap for implementing sums of products in the polytope method for factoring polynomials, when the polynomials are in sparse distributed representation. We exploit the fact that the work and cache complexity of an Insert operation using Funnel Heap can be refined to depend on the rank of the inserted monomial product, where rank corresponds to its lifetime in Funnel Heap. By optimising the pattern by which insertions and extractions occur during the Hensel lifting phase of the polytope method, we are able to obtain an adaptive Funnel Heap that minimises all of the work, cache, and space complexity of this phase. Additionally, we conduct a detailed empirical study confirming the superiority of Funnel Heap over the generic Binary Heap once swaps to external memory begin to take place. We demonstrate that Funnel Heap is a more efficient merger than the cache-oblivious k-merger, which fails to achieve its optimal (and amortised) cache complexity when used for performing sums of products. This provides an empirical proof of concept that the overlapping approach for performing sums of products using one global Funnel Heap is more suited than the serialised approach, even when the latter uses the best merging structures available.
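The sums-of-products pattern being optimised can be sketched with an ordinary priority queue: each term of one factor drives a sorted stream of monomial products, and the streams are merged through the heap. Python's heapq below is a binary-heap stand-in, not the cache-oblivious Funnel Heap studied in the paper, and the term representation is an illustrative assumption.

```python
# Heap-driven sums of products for sparse polynomial multiplication:
# merge the |f| sorted streams (f_i * g) through a priority queue.
import heapq

def sparse_mul_heap(f, g):
    """f, g: lists of (exponent, coefficient) sorted by exponent; returns the product."""
    if not f or not g:
        return []
    # One stream per term of f; each stream walks through g in order.
    heap = [(f[i][0] + g[0][0], i, 0) for i in range(len(f))]
    heapq.heapify(heap)
    result = []
    while heap:
        e, i, j = heapq.heappop(heap)
        c = f[i][1] * g[j][1]
        if result and result[-1][0] == e:
            result[-1] = (e, result[-1][1] + c)       # accumulate equal exponents
        else:
            result.append((e, c))
        if j + 1 < len(g):                            # advance this stream
            heapq.heappush(heap, (f[i][0] + g[j + 1][0], i, j + 1))
    return [(e, c) for e, c in result if c != 0]      # drop cancelled terms

f = [(0, 1), (2, 3), (5, -1)]                         # 1 + 3x^2 - x^5
g = [(1, 2), (4, 1)]                                  # 2x + x^4
print(sparse_mul_heap(f, g))                          # [(1, 2), (3, 6), (4, 1), (6, 1), (9, -1)]
```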
In the sparse polynomial multiplication problem, one is asked to multiply two sparse polynomials f and g in time that is proportional to the size of the input plus the size of the output. The polynomials are given via lists of their coefficients F and G, respectively. Cole and Hariharan (STOC '02) gave a nearly optimal algorithm when the coefficients are positive, and Arnold and Roche (ISSAC '15) devised an algorithm running in time proportional to the structural sparsity of the product, i.e. the size of the set supp(F) + supp(G). The latter algorithm is particularly efficient when there are not too many cancellations of coefficients in the product. In this work we give a clean, nearly optimal algorithm for the sparse polynomial multiplication problem.
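A tiny worked example of why structural sparsity and output sparsity differ: cancellations can make the support of the product a proper subset of the sumset of the supports. The representation below is an illustrative choice.

```python
# Structural sparsity versus actual output sparsity for (1 + x + x^2)(x - 1) = x^3 - 1.
f = {0: 1, 1: 1, 2: 1}                     # 1 + x + x^2
g = {0: -1, 1: 1}                          # x - 1
structural = {ef + eg for ef in f for eg in g}
h = {}
for ef, cf in f.items():
    for eg, cg in g.items():
        h[ef + eg] = h.get(ef + eg, 0) + cf * cg
support = {e for e, c in h.items() if c != 0}
print(sorted(structural))                  # [0, 1, 2, 3]: structural sparsity 4
print(sorted(support))                     # [0, 3]: x^3 - 1 has sparsity 2
```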