
Root Separation for Trinomials

Added by: Pascal Koiran
Publication date: 2017
Language: English
Authors: Pascal Koiran





We give a separation bound for the complex roots of a trinomial $f \in \mathbb{Z}[X]$. The logarithm of the inverse of our separation bound is polynomial in the size of the sparse encoding of $f$; in particular, it is polynomial in $\log(\deg f)$. It is known that no such bound is possible for 4-nomials (polynomials with 4 monomials). For trinomials, the classical results (which are based on the degree of $f$ rather than the number of monomials) give separation bounds that are exponentially worse. As an algorithmic application, we show that the number of real roots of a trinomial $f$ can be computed in time polynomial in the size of the sparse encoding of $f$. The same problem is open for 4-nomials.
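To make the algorithmic statement concrete, here is a minimal sketch (not the paper's algorithm) of counting the real roots of a trinomial $a x^m + b x^n + c$ with $m > n \ge 1$ and nonzero integer coefficients: on each open half-line the derivative has at most one zero, so the count follows from a handful of sign evaluations. The hard part, which the paper's separation bound addresses, is carrying out those sign tests rigorously in time polynomial in the sparse encoding size; the sketch below just uses floating point for illustration, and the function names and normalization (nonzero constant term) are mine.

```python
def positive_roots(a, m, b, n, c):
    """Number of roots of a*x^m + b*x^n + c on (0, +inf), assuming m > n >= 1
    and a, b, c nonzero.  The derivative has at most one positive zero, so the
    signs of f at 0+, at the critical point and at +inf determine the count.
    Floating point is used for illustration only."""
    sa, sb, sc = (a > 0) - (a < 0), (b > 0) - (b < 0), (c > 0) - (c < 0)
    if sa == sb:                      # f is strictly monotone on (0, inf)
        return 1 if sc != sa else 0
    # unique critical point x* where a*m*x^(m-n) + b*n = 0
    xstar = (-b * n / (a * m)) ** (1.0 / (m - n))
    fstar = a * xstar**m + b * xstar**n + c   # overflows for huge exponents
    if sc != sa:                      # f(0+) and f(+inf) have opposite signs
        return 1
    if fstar == 0:
        return 1                      # double root at x*
    return 2 if (fstar > 0) != (sa > 0) else 0

def count_real_roots(a, m, b, n, c):
    """Real roots of the trinomial a*x^m + b*x^n + c (m > n >= 1, a*b*c != 0)."""
    pos = positive_roots(a, m, b, n, c)
    neg = positive_roots(a * (-1) ** m, m, b * (-1) ** n, n, c)  # roots of f(-x)
    return pos + neg                  # 0 is never a root since c != 0

# Example: x^6 - 5x^2 + 4 has 4 real roots, two of them at x = +/-1
print(count_real_roots(1, 6, -5, 2, 4))   # -> 4
```

For very large exponents the evaluation of f at the critical point overflows or loses all precision; deciding such signs exactly, with cost polynomial in the bit size of the exponents, is precisely where the new separation bound enters.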



Related Research

118 - Liyun Dai, Bican Xia 2012
This paper revisits an algorithm for isolating real roots of univariate polynomials based on continued fractions. It follows the work of Vincent, Uspensky, Collins and Akritas, Johnson and Krandick. We use some tricks, especially a new algorithm for computing an upper bound of positive roots. In this way, the algorithm for isolating real roots is improved. The complexity of our method for computing an upper bound of positive roots is O(n log(u+1)), where u is the optimal upper bound satisfying Theorem 3 and n is the degree of the polynomial. Our method has been implemented as a software package, logcf, in C++. For many benchmarks logcf is two or three times faster than the function RootIntervals of Mathematica, and it is much faster than another continued-fractions-based software, CF, which seems to be one of the fastest available open-source tools for exact real root isolation. For benchmarks that have only real roots, logcf is much faster than Sleeve and eigensolve, which are based on numerical computation.
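As a point of reference for the "upper bound of positive roots" step mentioned above, the snippet below implements the classical Cauchy bound, the kind of coarse bound a continued-fraction (Vincent-Collins-Akritas style) isolator starts from; it is not the improved O(n log(u+1)) bound of the paper, and the function name and coefficient-list convention are illustrative choices.

```python
def cauchy_root_bound(coeffs):
    """Classical Cauchy bound: every complex root z of
    a_n x^n + ... + a_1 x + a_0 satisfies |z| <= 1 + max_i |a_i / a_n|.
    In particular it bounds the positive real roots, which is what a
    continued-fraction root isolator needs before it starts subdividing.
    coeffs = [a_0, a_1, ..., a_n], with a_n != 0."""
    an = abs(coeffs[-1])
    return 1 + max(abs(c) for c in coeffs[:-1]) / an

# Example: x^3 - 7x + 6 = (x-1)(x-2)(x+3); all roots lie in |z| <= 8
print(cauchy_root_bound([6, -7, 0, 1]))   # -> 8.0
```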
The polynomial multiplication problem has attracted considerable attention since the early days of computer algebra, and several algorithms have been designed to achieve the best possible time complexity. More recently, efforts have been made to improve the space complexity, developing modifi…
We present randomized algorithms to compute the sumset (Minkowski sum) of two integer sets, and to multiply two univariate integer polynomials given by sparse representations. Our algorithm for sumset has cost softly linear in the combined size of the inputs and output. This is used as part of our sparse multiplication algorithm, whose cost is softly linear in the combined size of the inputs, output, and the sumset of the supports of the inputs. As a subroutine, we present a new method for computing the coefficients of a sparse polynomial, given a set containing its support. Our multiplication algorithm extends to multivariate Laurent polynomials over finite fields and rational numbers. Our techniques are based on sparse interpolation algorithms and results from analytic number theory.
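To fix notation, the following sketch shows the naive quadratic-time versions of the two objects discussed above, using a dictionary representation for sparse polynomials; the softly-linear randomized algorithms of the paper (based on sparse interpolation) are not reproduced here, and the function names are mine.

```python
def sumset(A, B):
    """Minkowski sum of two integer sets: {a + b for a in A, b in B}.
    The paper computes this in softly linear time; this is the naive version."""
    return {a + b for a in A for b in B}

def sparse_mul(f, g):
    """Multiply two sparse polynomials given as {exponent: coefficient} dicts.
    The support of the product is contained in the sumset of the supports,
    which is why a fast sumset algorithm is the key subroutine."""
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            h[ef + eg] = h.get(ef + eg, 0) + cf * cg
    return {e: c for e, c in h.items() if c != 0}   # drop cancelled terms

# Example: (x^100 + 3)(x^50 - 3) has support inside {150, 100, 50, 0}
f = {100: 1, 0: 3}
g = {50: 1, 0: -3}
print(sumset(f.keys(), g.keys()))   # {0, 50, 100, 150}
print(sparse_mul(f, g))             # {150: 1, 100: -3, 50: 3, 0: -9}
```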
147 - Gilles Villard 2008
In 1992, Kaltofen proposed a new approach for computing matrix determinants without divisions. The algorithm is based on a baby steps/giant steps construction of Krylov subspaces, and computes the determinant as the constant term of a characteristic polynomial. For matrices over an abstract ring, by the results of Baur and Strassen, the determinant algorithm, actually a straight-line program, leads to an algorithm with the same complexity for computing the adjoint of a matrix. However, the latter adjoint algorithm is obtained by the reverse mode of automatic differentiation, and hence is somehow not explicit. We present an alternative (still closely related) algorithm for the adjoint that can be implemented directly, that is, without resorting to an automatic transformation. The algorithm is deduced by applying program differentiation techniques by hand to Kaltofen's method, and is completely described. As a subproblem, we study the differentiation of programs that compute minimum polynomials of linearly generated sequences, and we use a lazy polynomial evaluation mechanism for reducing the cost of Strassen's avoidance of divisions in our case.
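The determinant/adjoint connection invoked above can already be seen in the Cayley-Hamilton theorem: writing the characteristic polynomial as p(x) = x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0, one has det(A) = (-1)^n c_0 and adj(A) = (-1)^{n+1} (A^{n-1} + c_{n-1} A^{n-2} + ... + c_1 I). The sketch below merely checks this identity with SymPy; it is not Kaltofen's baby steps/giant steps algorithm nor the hand-differentiated program of the paper.

```python
import sympy as sp

def adjugate_from_charpoly(A):
    """Adjugate via Cayley-Hamilton: with characteristic polynomial
    p(x) = x^n + c[n-1] x^(n-1) + ... + c[1] x + c[0],
    adj(A) = (-1)^(n+1) * (A^(n-1) + c[n-1] A^(n-2) + ... + c[1] I)."""
    n = A.rows
    coeffs = A.charpoly().all_coeffs()      # [1, c[n-1], ..., c[1], c[0]]
    B = sp.eye(n)
    for c in coeffs[1:-1]:                  # Horner scheme on c[n-1], ..., c[1]
        B = A * B + c * sp.eye(n)
    return (-1) ** (n + 1) * B

A = sp.Matrix([[2, 1, 0], [0, 3, 1], [1, 0, 1]])
print(adjugate_from_charpoly(A) == A.adjugate())                   # True
print(A.charpoly().all_coeffs()[-1] * (-1) ** A.rows == A.det())   # det = (-1)^n c[0]
```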
