In his 2006 paper, Jin proves that Kalantari's bounds on polynomial zeros, indexed by $m \geq 2$ and called $L_m$ and $U_m$ respectively, become sharp as $m \rightarrow \infty$. That is, given a degree $n$ polynomial $p(z)$ not vanishing at the origin and an error tolerance $\epsilon > 0$, Jin proves that there exists an $m$ such that $\frac{L_m}{\rho_{\min}} > 1-\epsilon$, where $\rho_{\min} := \min_{\rho : p(\rho) = 0} \left|\rho\right|$. In this paper we derive a formula that yields such an $m$, thereby constructively proving Jin's theorem. In fact, we prove the stronger theorem that this convergence is uniform in a certain sense, with a rate depending only on $n$ and a few other parameters. We also give experimental results that suggest an optimal $m$ of (asymptotically) $O\left(\frac{1}{\epsilon^d}\right)$ for some $d \ll 2$. A proof of these results would show that Jin's method runs in $O\left(\frac{n}{\epsilon^d}\right)$ time, making it efficient for isolating polynomial zeros of high degree.
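To make the quantities concrete, here is a minimal Python sketch of the sharpness ratio $L/\rho_{\min}$ from the abstract. Kalantari's $L_m$ itself is not reproduced; as a stand-in lower bound we use the classical Cauchy-type estimate $|a_0|/(|a_0| + \max_{k\geq 1}|a_k|)$, and the helper names are our own.

```python
import numpy as np

def cauchy_lower_bound(coeffs):
    """Classical Cauchy-type lower bound on the smallest root modulus.

    `coeffs` lists a_0, ..., a_n in ascending powers, with a_0 != 0.
    This is a stand-in for Kalantari's L_m, which is not reproduced here.
    """
    a = np.asarray(coeffs, dtype=float)
    return abs(a[0]) / (abs(a[0]) + np.max(np.abs(a[1:])))

def sharpness_ratio(coeffs):
    """The ratio L / rho_min from the abstract; a sharp bound pushes it to 1."""
    a = np.asarray(coeffs, dtype=float)
    rho_min = np.min(np.abs(np.roots(a[::-1])))  # np.roots wants descending powers
    return cauchy_lower_bound(a) / rho_min

# p(z) = (z - 1)(z - 2)(z - 3) = -6 + 11z - 6z^2 + z^3, so rho_min = 1.
print(sharpness_ratio([-6.0, 11.0, -6.0, 1.0]))  # ~0.35: far from sharp
```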
The Modified Szpiro Conjecture, equivalent to the $abc$ Conjecture, states that for each $\epsilon > 0$, there are finitely many rational elliptic curves satisfying $N_{E}^{6+\epsilon} < \max\left\{ \left\vert c_{4}^{3}\right\vert, c_{6}^{2}\right\}$, where $c_{4}$ and $c_{6}$ are the invariants associated to a minimal model of $E$ and $N_{E}$ is the conductor of $E$. We say $E$ is a good elliptic curve if $N_{E}^{6} < \max\left\{ \left\vert c_{4}^{3}\right\vert, c_{6}^{2}\right\}$. Masser showed that there are infinitely many good Frey curves. Here we give a constructive proof of this assertion.
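The inequality being tested is purely computational once $c_4$, $c_6$, and the conductor are known. The sketch below (our own illustrative code, not the paper's) computes $c_4$ and $c_6$ of the Frey curve $y^2 = x(x-a)(x+b)$ from the standard Weierstrass formulas, and uses $\operatorname{rad}(ab(a+b))$ as a proxy for $N_E$; the true conductor differs from this radical by a bounded power of $2$, and the model need not be minimal, so this is only a heuristic filter rather than a replacement for Tate's algorithm.

```python
def c4_c6(a1, a2, a3, a4, a6):
    """c4 and c6 of a Weierstrass model, via the standard formulas."""
    b2 = a1**2 + 4*a2
    b4 = 2*a4 + a1*a3
    b6 = a3**2 + 4*a6
    return b2**2 - 24*b4, -b2**3 + 36*b2*b4 - 216*b6

def radical(n):
    """Product of the distinct primes dividing n (trial-division sketch)."""
    n, r, p = abs(n), 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

def looks_good(a, b):
    """Heuristic test of N^6 < max(|c4^3|, c6^2) for y^2 = x(x - a)(x + b).

    rad(ab(a + b)) stands in for the conductor N_E, which it matches only
    up to a bounded power of 2; the model is also not necessarily minimal.
    """
    c4, c6 = c4_c6(0, b - a, 0, -a * b, 0)
    N = radical(a * b * (a + b))
    return N**6 < max(abs(c4**3), c6**2)

print(looks_good(1, 8))  # heuristic check on the abc triple (1, 8, 9)
```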
A 1993 result of Alon and Füredi gives a sharp upper bound on the number of zeros of a multivariate polynomial over an integral domain in a finite grid, in terms of the degree of the polynomial. This result was recently generalized to polynomials over an arbitrary commutative ring, assuming a certain Condition (D) on the grid which holds vacuously when the ring is a domain. In the first half of this paper we give a further Generalized Alon-Füredi Theorem which provides a sharp upper bound when the degrees of the polynomial in each variable are also taken into account. In particular, this yields a new proof of the Alon-Füredi Theorem. We then discuss the relationship between Alon-Füredi and the results of DeMillo-Lipton, Schwartz, and Zippel. We also give a direct coding-theoretic interpretation of the Alon-Füredi Theorem and its generalization in terms of Reed-Muller-type affine variety codes, which yields the minimum Hamming distance of these codes. Then we apply the Alon-Füredi Theorem to quickly recover (and sometimes strengthen) old and new results in finite geometry, including the Jamison/Brouwer-Schrijver bound on affine blocking sets. We end with a discussion of multiplicity enhancements.
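For intuition, the Alon-Füredi bound itself is a small optimization problem: a nonzero polynomial of degree at most $d$ is nonvanishing on at least $\min \prod_i y_i$ grid points, the minimum over integers $1 \leq y_i \leq |A_i|$ with $\sum_i y_i = \sum_i |A_i| - d$. A brute-force check in Python (our own illustrative code) over $\mathbb{F}_5$:

```python
from itertools import product
from math import prod

def alon_furedi_bound(sizes, d):
    """min of prod(y_i) over integers 1 <= y_i <= sizes[i] with
    sum(y_i) = sum(sizes) - d; for a fixed sum, concentrating mass in
    the largest factors minimizes the product, hence the greedy fill."""
    y, left = [1] * len(sizes), sum(sizes) - d - len(sizes)
    for i in sorted(range(len(sizes)), key=lambda i: -sizes[i]):
        take = min(left, sizes[i] - 1)
        y[i] += take
        left -= take
    return prod(y)

# Over F_5, p(x, y) = x(x - 1) has degree 2 and is nonzero exactly when
# x is not in {0, 1}: 3 * 5 = 15 grid points, matching the bound.
q = 5
nonzeros = sum(1 for x, y in product(range(q), repeat=2)
               if (x * (x - 1)) % q != 0)
print(nonzeros, alon_furedi_bound([q, q], 2))  # 15 15
```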
Let $A$ be an algebra and let $f(x_1, \dots, x_d)$ be a multilinear polynomial in noncommuting indeterminates $x_i$. We consider the problem of describing linear maps $\phi : A \to A$ that preserve zeros of $f$. Under certain technical restrictions we solve the problem for general polynomials $f$ in the case where $A = M_n(F)$. We also consider quite general algebras $A$, but only for specific polynomials $f$.
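As a toy instance of the preserver property, take the simplest multilinear polynomial, the commutator $f(x_1, x_2) = x_1 x_2 - x_2 x_1$ on $A = M_n(F)$: zeros of $f$ are exactly the commuting pairs, and any conjugation $\phi(X) = TXT^{-1}$ preserves them. A quick numerical illustration (our own, with numpy; it demonstrates the notion only, not the paper's classification):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))            # invertible with probability 1
phi = lambda X: T @ X @ np.linalg.inv(T)   # a conjugation map

# A and B are polynomials in the same matrix M, hence commute: f(A, B) = 0.
M = rng.standard_normal((3, 3))
A, B = M @ M, M + 2 * np.eye(3)
f = lambda X, Y: X @ Y - Y @ X

print(np.allclose(f(A, B), 0))             # True
print(np.allclose(f(phi(A), phi(B)), 0))   # True: phi preserves zeros of f
```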
In this article, we consider the preconditioned Hamiltonian Monte Carlo (pHMC) algorithm defined directly on an infinite-dimensional Hilbert space. In this context, and under a condition reminiscent of strong log-concavity of the target measure, we prove convergence bounds for adjusted pHMC in the standard 1-Wasserstein distance. The arguments rely on a synchronous coupling of two copies of pHMC, which is controlled by adapting elements from arXiv:1805.00452.
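For readers who want to see the moving parts, the following is a finite-dimensional caricature of one adjusted HMC step with a constant preconditioner (mass matrix), written by us for illustration; it is not the infinite-dimensional algorithm of the paper, and the function signature is our own.

```python
import numpy as np

def phmc_step(x, U, gradU, Minv, rng, eps=0.1, L=10):
    """One adjusted HMC step with a constant preconditioner (mass matrix M).

    Finite-dimensional caricature only: momenta are drawn from N(0, M) and
    Minv preconditions the position updates; the paper's algorithm lives on
    an infinite-dimensional Hilbert space.
    """
    M = np.linalg.inv(Minv)
    p = rng.multivariate_normal(np.zeros(len(x)), M)
    x_new, p_new = x.copy(), p - 0.5 * eps * gradU(x)  # initial half kick
    # Leapfrog integration of the preconditioned Hamiltonian dynamics.
    for k in range(L):
        x_new = x_new + eps * Minv @ p_new
        if k < L - 1:
            p_new = p_new - eps * gradU(x_new)
    p_new = p_new - 0.5 * eps * gradU(x_new)           # final half kick
    # Metropolis adjustment (the "adjusted" in adjusted pHMC).
    H = lambda q, r: U(q) + 0.5 * r @ Minv @ r
    if np.log(rng.uniform()) < H(x, p) - H(x_new, p_new):
        return x_new
    return x
```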
In this paper, we develop a symmetric accelerated stochastic Alternating Direction Method of Multipliers (SAS-ADMM) for solving separable convex optimization problems with linear constraints. The objective function is the sum of a possibly nonsmooth convex function and an average of many smooth convex functions. Our proposed algorithm combines the ideas of ADMM with techniques from accelerated stochastic gradient methods, using variance reduction to solve the smooth subproblem. One main feature of SAS-ADMM is that its dual variable is symmetrically updated after each update of the separated primal variables, which allows a more flexible and larger convergence region of the dual variable than that of standard deterministic or stochastic ADMM. This new stochastic optimization algorithm is shown to converge in expectation with an $\mathcal{O}(1/T)$ convergence rate, where $T$ is the number of outer iterations. In addition, 3-block extensions of the algorithm and a variant given by an accelerated stochastic augmented Lagrangian method are also discussed. Our preliminary numerical experiments indicate that the proposed algorithm is very effective for solving separable optimization problems arising in big-data applications.
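To fix ideas, here is an illustrative Python sketch (ours, not the paper's exact scheme) of the symmetric dual update combined with an SVRG-style variance-reduced inner solver, on a lasso-type consensus problem $\min \frac{1}{2N}\|Ax-b\|^2 + \lambda\|y\|_1$ subject to $x - y = 0$; all step sizes and names are our own choices.

```python
import numpy as np

def sas_admm_lasso(A, b, lam, beta=1.0, s=0.5, t=0.5, T=100, rng=None):
    """Illustrative symmetric stochastic ADMM for
    min (1/2N)||Ax - b||^2 + lam ||y||_1  s.t.  x - y = 0.
    The SVRG inner loop and all step sizes are our own choices.
    """
    rng = rng or np.random.default_rng(0)
    N, d = A.shape
    x, y, u = np.zeros(d), np.zeros(d), np.zeros(d)
    for _ in range(T):
        # x-step: a few SVRG-style variance-reduced gradient steps on the
        # smooth augmented-Lagrangian subproblem.
        snap, full = x.copy(), A.T @ (A @ x - b) / N
        for _ in range(10):
            i = rng.integers(N)
            g = A[i] * (A[i] @ x - b[i]) - A[i] * (A[i] @ snap - b[i]) + full
            x = x - 0.1 * (g + u + beta * (x - y))
        u = u + s * beta * (x - y)   # first (symmetric) dual update
        v = x + u / beta             # y-step: soft-thresholding
        y = np.sign(v) * np.maximum(np.abs(v) - lam / beta, 0.0)
        u = u + t * beta * (x - y)   # second dual update
    return x, y
```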