
On exact division and divisibility testing for sparse polynomials

Posted by Armelle Perret du Cray
Publication date: 2021
Research field: Computer science
Paper language: English





No polynomial-time algorithm is known to test whether a sparse polynomial $G$ divides another sparse polynomial $F$. While computing the quotient $Q = F \,\mathrm{quo}\, G$ can be done in polynomial time with respect to the sparsities of $F$, $G$ and $Q$, this is not yet sufficient to obtain a polynomial-time divisibility test in general. Indeed, the sparsity of the quotient $Q$ can be exponentially larger than those of $F$ and $G$. In the favorable case where the sparsity $\#Q$ of the quotient is polynomial, the best known algorithm to compute $Q$ has a non-linear factor $\#G\,\#Q$ in its complexity, which is not optimal. In this work, we are interested in both aspects of this problem. First, we propose a new randomized algorithm that computes the quotient of two sparse polynomials when the division is exact. Its complexity is quasi-linear in the sparsities of $F$, $G$ and $Q$. Our approach relies on sparse interpolation and works over any finite field or the ring of integers. Then, as a step toward faster divisibility testing, we provide a new polynomial-time algorithm when the divisor has a specific shape. More precisely, we reduce the problem to finding a polynomial $S$ such that $QS$ is sparse and testing divisibility by $S$ can be done in polynomial time. We identify some structural patterns in the divisor $G$ for which we can efficiently compute such a polynomial $S$.
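As a hedged illustration of why computing the quotient alone does not give a polynomial-time divisibility test (a minimal SymPy sketch, not the paper's algorithm): dividing the 2-term polynomial $x^n - 1$ by the 2-term polynomial $x - 1$ yields a quotient with $n$ terms, so $\#Q$ is exponential in the bit-size of the input exponents even though $\#F = \#G = 2$.

# Illustration only: the exact quotient of two sparse polynomials can be dense.
from sympy import symbols, quo, Poly

x = symbols('x')
n = 64                                # F and G below each have 2 nonzero terms
F = x**n - 1
G = x - 1
Q = quo(F, G, x)                      # exact division: (x - 1) divides (x^n - 1)
print(len(Poly(F, x).terms()), len(Poly(G, x).terms()), len(Poly(Q, x).terms()))
# prints: 2 2 64 -- the quotient 1 + x + ... + x^63 is dense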




Read also

Daniel S. Roche, 2018
Simply put, a sparse polynomial is one whose zero coefficients are not explicitly stored. Such objects are ubiquitous in exact computing, and so naturally we would like to have efficient algorithms to handle them. However, with this compact storage come new algorithmic challenges, as fast algorithms for dense polynomials may no longer be efficient. In this tutorial we examine the state of the art for sparse polynomial algorithms in three areas: arithmetic, interpolation, and factorization. The aim is to highlight recent progress both in theory and in practice, as well as opportunities for future work.
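As a small hedged sketch of the compact storage the tutorial refers to (our own illustration, not code from the tutorial): a sparse polynomial can be held as a map from exponents to nonzero coefficients, so that arithmetic costs depend on the number of terms rather than on the degree.

# Hypothetical dict-based sparse representation: exponent -> nonzero coefficient.
def sparse_mul(f, g):
    """Multiply two sparse univariate polynomials given as {exponent: coefficient} dicts."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            c = h.get(e1 + e2, 0) + c1 * c2
            if c:
                h[e1 + e2] = c
            else:
                h.pop(e1 + e2, None)  # drop coefficients that cancel to zero
    return h

# (x^1000000 + 3) * (x^5 - 1): the cost depends on the 2x2 term pairs, not on the degree
print(sparse_mul({1000000: 1, 0: 3}, {5: 1, 0: -1}))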
The alternating descent statistic on permutations was introduced by Chebikin as a variant of the descent statistic. We show that the alternating descent polynomials on permutations are unimodal via a five-term recurrence relation. We also find a quadratic recursion for the alternating major index $q$-analog of the alternating descent polynomials. As an interesting application of this quadratic recursion, we show that $(1+q)^{\lfloor n/2\rfloor}$ divides $\sum_{\pi\in\mathfrak{S}_n}q^{\mathrm{altmaj}(\pi)}$, where $\mathfrak{S}_n$ is the set of all permutations of $\{1,2,\ldots,n\}$ and $\mathrm{altmaj}(\pi)$ is the alternating major index of $\pi$. This leads us to discover a $q$-analog of $n!=2^{\ell}m$, $m$ odd, using the statistic of the alternating major index. Moreover, we study the $\gamma$-vectors of the alternating descent polynomials by using these two recursions and the $\mathbf{cd}$-index. Further intriguing conjectures are formulated, which indicate that the alternating descent statistic deserves more work.
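A hedged brute-force spot-check of the stated divisibility for small $n$ (the definition of alternating descent used below, $\pi(i) > \pi(i+1)$ for odd $i$ and $\pi(i) < \pi(i+1)$ for even $i$, is our reading of Chebikin's statistic and should be checked against the original papers):

# Spot-check: (1+q)^floor(n/2) divides the altmaj generating polynomial over S_n.
from itertools import permutations
from sympy import symbols, expand, rem

q = symbols('q')

def altmaj(pi):
    # sum of the 1-indexed positions that are alternating descents (assumed definition)
    return sum(i + 1 for i in range(len(pi) - 1)
               if (pi[i] > pi[i + 1]) == (i % 2 == 0))

for n in range(1, 7):
    poly = expand(sum(q**altmaj(pi) for pi in permutations(range(1, n + 1))))
    print(n, rem(poly, (1 + q)**(n // 2), q) == 0)   # True: remainder is zero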
The aim of the paper is to produce new families of irreducible polynomials, generalizing previous results in the area. One instance of our general result is that for a near-separated polynomial, i.e., a polynomial of the form $F(x,y)=f_1(x)f_2(y)-f_2(x)f_1(y)$, the polynomial $F(x,y)+r$ is always irreducible for any nonzero constant $r$. We also provide the largest known family of HIP polynomials in several variables. These are polynomials $p(x_1,\ldots,x_n) \in K[x_1,\ldots,x_n]$ over a field $K$ of characteristic zero such that $p(h_1(x_1),\ldots,h_n(x_n))$ is irreducible over $K$ for every $n$-tuple $h_1(x_1),\ldots,h_n(x_n)$ of non-constant univariate polynomials over $K$. The results can also be applied to fields of positive characteristic, with some modifications.
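A hedged check of one concrete instance of the near-separated family (the choice $f_1(t) = t$, $f_2(t) = t^2$ and the constants tried below are ours, for illustration only, not taken from the paper):

# Check irreducibility over Q of F(x,y) + r for F = f1(x)f2(y) - f2(x)f1(y).
from sympy import symbols, expand, factor_list

x, y = symbols('x y')
f1 = lambda t: t
f2 = lambda t: t**2
F = expand(f1(x)*f2(y) - f2(x)*f1(y))     # F = x*y**2 - x**2*y

for r in (1, -2, 5):
    _, factors = factor_list(F + r)
    # a single irreducible factor of multiplicity 1 means F + r is irreducible over Q
    print(r, len(factors) == 1 and factors[0][1] == 1)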
Gilles Villard, 2008
In 1992, Kaltofen proposed a new approach for computing matrix determinants without divisions. The algorithm is based on a baby steps/giant steps construction of Krylov subspaces, and computes the determinant as the constant term of a characteristic polynomial. For matrices over an abstract ring, by the results of Baur and Strassen, the determinant algorithm, actually a straight-line program, leads to an algorithm with the same complexity for computing the adjoint of a matrix. However, the latter adjoint algorithm is obtained by the reverse mode of automatic differentiation, hence is somehow not explicit. We present an alternative (still closely related) algorithm for the adjoint that can be implemented directly, i.e., without resorting to an automatic transformation. The algorithm is deduced by applying program differentiation techniques by hand to Kaltofen's method, and is completely described. As a subproblem, we study the differentiation of programs that compute minimum polynomials of linearly generated sequences, and we use a lazy polynomial evaluation mechanism for reducing the cost of Strassen's avoidance of divisions in our case.
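As a hedged illustration of the determinant-to-adjoint link that the differentiation argument exploits (Jacobi's formula $\partial\det(A)/\partial a_{ij} = \mathrm{adj}(A)_{ji}$; this is not Kaltofen's division-free algorithm, only a small symbolic check):

# The gradient of det(A) with respect to the entries of A is the transposed adjoint.
from sympy import symbols, Matrix, diff, simplify

n = 3
entries = symbols(f'a0:{n*n}')            # n*n independent symbolic entries
A = Matrix(n, n, entries)
d = A.det()
grad = Matrix(n, n, lambda i, j: diff(d, A[i, j]))
print(simplify(grad.T - A.adjugate()))    # zero matrix: gradient^T equals adj(A)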