In this paper we give an algorithm to round the floating-point output of a semidefinite programming solver to a solution over the rationals or a quadratic extension of the rationals. We apply this to get sharp bounds for packing problems, and we use these sharp bounds to prove that certain optimal packing configurations are unique up to rotations. In particular, we show that the configuration coming from the $\mathsf{E}_8$ root lattice is the unique optimal code with minimal angular distance $\pi/3$ on the hemisphere in $\mathbb{R}^8$, and we prove that the three-point bound for the $(3, 8, \vartheta)$-spherical code, where $\vartheta$ is such that $\cos \vartheta = (2\sqrt{2}-1)/7$, is sharp by rounding to $\mathbb{Q}[\sqrt{2}]$. We also use our machinery to compute sharp upper bounds on the number of spheres that can be packed into a larger sphere.
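To make the rounding step concrete, here is a minimal brute-force sketch, not the algorithm of the paper (which works with the full SDP solution and exact linear algebra): given a floating-point value, it searches for a nearby element $(p + q\sqrt{2})/r$ of $\mathbb{Q}[\sqrt{2}]$ with small integers $p, q, r$. The function name and search bounds are illustrative only.

```python
from math import sqrt

SQRT2 = sqrt(2.0)

def round_to_q_sqrt2(x, max_coeff=50, tol=1e-9):
    """Toy search for integers p, q, r with |x - (p + q*sqrt(2))/r| < tol.

    Brute-force illustration only: an actual rounding procedure for SDP
    output would use exact linear algebra (and lattice-reduction ideas)
    rather than enumeration.
    """
    best = None
    for r in range(1, max_coeff + 1):
        for q in range(-max_coeff, max_coeff + 1):
            # Pick the integer p that best matches x for this choice of (q, r).
            p = round(x * r - q * SQRT2)
            err = abs(x - (p + q * SQRT2) / r)
            if best is None or err < best[0]:
                best = (err, p, q, r)
            if err < tol:
                return p, q, r
    return best[1:]

# The inner product value (2*sqrt(2) - 1)/7 appearing in the abstract:
x = (2 * SQRT2 - 1) / 7
print(round_to_q_sqrt2(x))   # (-1, 2, 7), i.e. x = (-1 + 2*sqrt(2))/7
```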
We provide a condition-based analysis of two interior-point methods for unconstrained geometric programs, a class of convex programs that arise naturally in applications including matrix scaling, matrix balancing, and entropy maximization. Our condition numbers are natural geometric quantities associated with the Newton polytope of the geometric program, and lead to diameter bounds on approximate minimizers. We also provide effective bounds on the condition numbers both in general and under combinatorial assumptions on the Newton polytope. In this way, we generalize the iteration complexity of recent interior-point methods for matrix scaling and matrix balancing. Recently, there has been much work on algorithms for certain optimization problems on Lie groups, known as capacity and scaling problems. For commutative groups, these problems reduce to unconstrained geometric programs, which serves as a particular source of motivation for our work.
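As a hedged illustration of the problem class, not of the interior-point methods or the condition-number analysis above, the sketch below minimizes an unconstrained geometric program in log-sum-exp form, $f(x) = \log \sum_i c_i e^{\langle a_i, x\rangle}$, by a damped Newton iteration; the exponent vectors $a_i$ are exactly the points whose convex hull is the Newton polytope, and the instance data are made up for the example.

```python
import numpy as np

def gp_value(A, c, x):
    """f(x) = log(sum_i c_i * exp(<a_i, x>)) for rows a_i of A and c_i > 0."""
    z = A @ x + np.log(c)
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def gp_grad_hess(A, c, x):
    z = A @ x + np.log(c)
    p = np.exp(z - z.max())
    p /= p.sum()                                   # softmax weights
    g = A.T @ p                                    # gradient: weighted mean of the exponents
    H = A.T @ (A * p[:, None]) - np.outer(g, g)    # Hessian: weighted covariance
    return g, H

def newton_minimize(A, c, x0, iters=50):
    """Damped Newton iteration; the minimum is attained iff 0 lies in the
    relative interior of the Newton polytope conv{a_i}."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        g, H = gp_grad_hess(A, c, x)
        step = np.linalg.solve(H + 1e-10 * np.eye(len(x)), g)
        t = 1.0
        while gp_value(A, c, x - t * step) > gp_value(A, c, x) and t > 1e-8:
            t *= 0.5                               # simple backtracking for a descent step
        x -= t * step
    return x

# Toy instance: exponents (1,0), (0,1), (-1,-1) contain 0 in their convex hull.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
c = np.array([1.0, 2.0, 3.0])
x = newton_minimize(A, c, np.zeros(2))
print(gp_value(A, c, x), gp_grad_hess(A, c, x)[0])  # gradient ~ 0 at the minimizer
```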
We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both primal and dual oracles, the proposed methods are optimal in terms of the number of communication steps. However, for all classes of objectives, optimality in terms of the number of oracle calls per node holds only up to a logarithmic factor and up to the notion of smoothness. Using a mini-batching technique, we show that the proposed methods with a stochastic oracle can additionally be parallelized at each node. The considered algorithms can be applied to many data science problems and inverse problems.
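The sketch below is a generic decentralized stochastic gradient loop with gossip averaging over a fixed mixing matrix, intended only to illustrate the setting of local stochastic (primal) oracle calls interleaved with communication steps; it is not one of the optimal primal or dual methods proposed in the paper, and all problem data and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each of n nodes holds a local least-squares objective f_i(x) = 0.5*||B_i x - y_i||^2;
# the network minimizes (1/n) * sum_i f_i(x) using only neighbor communication.
n_nodes, dim, samples = 5, 10, 40
B = [rng.normal(size=(samples, dim)) for _ in range(n_nodes)]
y = [b @ rng.normal(size=dim) + 0.01 * rng.normal(size=samples) for b in B]

# Gossip (mixing) matrix of a ring graph: symmetric and doubly stochastic.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

def stochastic_grad(i, x, batch=8):
    """Mini-batched stochastic gradient of f_i at x (one primal oracle call)."""
    idx = rng.choice(samples, size=batch, replace=False)
    return (samples / batch) * (B[i][idx].T @ (B[i][idx] @ x - y[i][idx]))

X = np.zeros((n_nodes, dim))        # row i holds the local iterate of node i
step = 1e-3
for t in range(2000):
    G = np.stack([stochastic_grad(i, X[i]) for i in range(n_nodes)])
    X = W @ X - step * G            # one gossip (communication) step + local update

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
```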
Many papers in the field of integer linear programming (ILP, for short) are devoted to problems of the type $\max\{c^\top x \colon A x = b,\, x \in \mathbb{Z}^n_{\geq 0}\}$, where all the entries of $A, b, c$ are integer, parameterized by the number of rows of $A$ and $\|A\|_{\max}$. This class of problems is known under the name of ILP problems in the standard form, with the word "bounded" added if $x \leq u$ for some integer vector $u$. Recently, many new sparsity, proximity, and complexity results were obtained for bounded and unbounded ILP problems in the standard form. In this paper, we consider ILP problems in the canonical form $$\max\{c^\top x \colon b_l \leq A x \leq b_r,\, x \in \mathbb{Z}^n\},$$ where $b_l$ and $b_r$ are integer vectors. We assume that the integer matrix $A$ has rank $n$, with $(n + m)$ rows and $n$ columns, and we parameterize the problem by $m$ and $\Delta(A)$, where $\Delta(A)$ is the maximum absolute value of the $n \times n$ sub-determinants of $A$. We show that any ILP problem in the standard form can be polynomially reduced to an ILP problem in the canonical form, preserving $m$ and $\Delta(A)$, but the reverse reduction is not always possible. More precisely, we define the class of generalized ILP problems in the standard form, which includes an additional group constraint, and prove its equivalence to ILP problems in the canonical form. We generalize known sparsity, proximity, and complexity bounds to ILP problems in the canonical form. In some cases we strengthen previously known results for ILP problems in the canonical form, and in others we give shorter proofs. Finally, we consider the special cases $m \in \{0, 1\}$. In this way, we give specialized sparsity, proximity, and complexity bounds for problems on simplices, Knapsack problems, and Subset-Sum problems.
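For illustration, the parameter $\Delta(A)$ can be computed by brute force as the largest absolute $n \times n$ sub-determinant over all choices of $n$ rows; the sketch below does exactly that and is practical only for tiny instances (the matrix and the function name are made up for the example).

```python
from itertools import combinations
import numpy as np

def max_subdeterminant(A):
    """Delta(A): largest absolute n x n sub-determinant of an integer matrix
    with n columns and (n + m) rows.  Brute force over all row subsets,
    so only practical when n + m is small."""
    A = np.asarray(A, dtype=float)
    rows, n = A.shape
    best = 0
    for subset in combinations(range(rows), n):
        d = abs(round(np.linalg.det(A[list(subset), :])))
        best = max(best, d)
    return best

# Canonical-form example with n = 2 columns and n + m = 4 rows.
A = [[1, 0],
     [0, 1],
     [1, 1],
     [2, -3]]
print(max_subdeterminant(A))   # prints 5 (attained by rows 2 and 3, 0-indexed)
```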