In seminal work, Lovász, Spencer, and Vesztergombi [European J. Combin., 1986] proved a lower bound for the hereditary discrepancy of a matrix $A \in \mathbb{R}^{m \times n}$ in terms of the maximum $|\det(B)|^{1/k}$ over all $k \times k$ submatrices $B$ of $A$. We show algorithmically that this determinant lower bound can be off by at most a factor of $O(\sqrt{\log(m) \cdot \log(n)})$, improving over the previous bound of $O(\log(mn) \cdot \sqrt{\log(n)})$ given by Matoušek [Proc. of the AMS, 2013]. Our result immediately implies $\mathrm{herdisc}(\mathcal{F}_1 \cup \mathcal{F}_2) \leq O(\sqrt{\log(m) \cdot \log(n)}) \cdot \max(\mathrm{herdisc}(\mathcal{F}_1), \mathrm{herdisc}(\mathcal{F}_2))$, for any two set systems $\mathcal{F}_1, \mathcal{F}_2$ over $[n]$ satisfying $|\mathcal{F}_1 \cup \mathcal{F}_2| = m$. Our bounds are tight up to constants when $m = O(\mathrm{poly}(n))$ due to a construction of Pálvölgyi [Discrete Comput. Geom., 2010] or the counterexample to Beck's three permutations conjecture by Newman, Neiman and Nikolov [FOCS, 2012].
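To make the quantities concrete: discrepancy asks for a $\pm 1$ coloring of the columns minimizing the largest row imbalance, and hereditary discrepancy takes the worst case over all column restrictions. The following brute-force sketch (my illustration only; it is exponential in $n$ and is not the paper's algorithmic result) computes both for a tiny prefix-interval incidence matrix. The function names `disc` and `herdisc` are mine.

```python
from itertools import product

def disc(A):
    """Discrepancy of 0/1 matrix A: min over +-1 colorings x of max_i |(Ax)_i|."""
    m, n = len(A), len(A[0])
    best = float("inf")
    for x in product([-1, 1], repeat=n):
        worst = max(abs(sum(A[i][j] * x[j] for j in range(n))) for i in range(m))
        best = min(best, worst)
    return best

def herdisc(A):
    """Hereditary discrepancy: max of disc over all nonempty column restrictions."""
    n = len(A[0])
    best = 0
    for keep in product([0, 1], repeat=n):
        S = [j for j in range(n) if keep[j]]
        if not S:
            continue
        sub = [[row[j] for j in S] for row in A]
        best = max(best, disc(sub))
    return best

# Incidence matrix of the prefix intervals {1}, {1,2}, {1,2,3} over [3].
A = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(herdisc(A))  # prefix/interval systems have hereditary discrepancy 1
```

Prefix systems are a standard low-discrepancy example: alternating signs balance every prefix to within 1, on every column restriction.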
We study the approximation properties of convolutional architectures applied to time series modelling, which can be formulated mathematically as a functional approximation problem. In the recurrent setting, recent results reveal an intricate connection between approximation efficiency and memory structures in the data generation process. In this paper, we derive parallel results for convolutional architectures, with WaveNet being a prime example. Our results reveal that in this new setting, approximation efficiency is not only characterised by memory, but also by additional fine structures in the target relationship. This leads to a novel definition of spectrum-based regularity that measures the complexity of temporal relationships under the convolutional approximation scheme. These analyses provide a foundation to understand the differences between architectural choices for time series modelling and can give theoretically grounded guidance for practical applications.
Semidefinite programs (SDPs) are a fundamental class of optimization problems with important recent applications in approximation algorithms, quantum complexity, robust learning, algorithmic rounding, and adversarial deep learning. This paper presents a faster interior point method to solve generic SDPs with variable size $n \times n$ and $m$ constraints in time \begin{align*} \widetilde{O}(\sqrt{n}(mn^2 + m^{\omega} + n^{\omega}) \log(1/\epsilon)), \end{align*} where $\omega$ is the exponent of matrix multiplication and $\epsilon$ is the relative accuracy. In the predominant case of $m \geq n$, our runtime outperforms that of the previous fastest SDP solver, which is based on the cutting plane method of Jiang, Lee, Song, and Wong [JLSW20]. Our algorithm's runtime can be naturally interpreted as follows: $\widetilde{O}(\sqrt{n} \log(1/\epsilon))$ is the number of iterations needed for our interior point method, $mn^2$ is the input size, and $m^{\omega} + n^{\omega}$ is the time to invert the Hessian and slack matrix in each iteration. These constitute natural barriers to further improving the runtime of interior point methods for solving generic SDPs.
In the stochastic online vector balancing problem, vectors $v_1, v_2, \ldots, v_T$ chosen independently from an arbitrary distribution in $\mathbb{R}^n$ arrive one-by-one and must be immediately given a $\pm$ sign. The goal is to keep the norm of the discrepancy vector, i.e., the signed prefix-sum, as small as possible for a given target norm. We consider some of the most well-known problems in discrepancy theory in the above online stochastic setting, and give algorithms that match the known offline bounds up to $\mathsf{polylog}(nT)$ factors. This substantially generalizes and improves upon the previous results of Bansal, Jiang, Singla, and Sinha (STOC 20). In particular, for the Komlós problem where $\|v_t\|_2 \leq 1$ for each $t$, our algorithm achieves $\tilde{O}(1)$ discrepancy with high probability, improving upon the previous $\tilde{O}(n^{3/2})$ bound. For Tusnády's problem of minimizing the discrepancy of axis-aligned boxes, we obtain an $O(\log^{d+4} T)$ bound for an arbitrary distribution over points. Previous techniques only worked for product distributions and gave a weaker $O(\log^{2d+1} T)$ bound. We also consider the Banaszczyk setting, where given a symmetric convex body $K$ with Gaussian measure at least $1/2$, our algorithm achieves $\tilde{O}(1)$ discrepancy with respect to the norm given by $K$ for input distributions with sub-exponential tails. Our key idea is to introduce a potential that also enforces constraints on how the discrepancy vector evolves, allowing us to maintain certain anti-concentration properties. For the Banaszczyk setting, we further enhance this potential by combining it with ideas from generic chaining. Finally, we also extend these results to the setting of online multi-color discrepancy.
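As a point of comparison for the potential-based method described above, here is the simplest online baseline (my sketch, not the paper's algorithm): greedily pick the sign that minimizes the $\ell_2$ norm of the running discrepancy vector. Since $\|d + sv\|^2 = \|d\|^2 + 2s\langle d, v\rangle + \|v\|^2$, choosing $s = -\mathrm{sign}(\langle d, v\rangle)$ guarantees $\|d_T\|_2 \leq \sqrt{\sum_t \|v_t\|_2^2} \leq \sqrt{T}$ for unit vectors, far from the $\tilde{O}(1)$ the abstract achieves.

```python
def online_signs(vectors):
    """Greedily sign each arriving vector so the discrepancy (signed
    prefix-sum) vector stays small in l2 norm."""
    d = [0.0] * len(vectors[0])
    signs = []
    for v in vectors:
        # Compare the squared l2 norm of d + v versus d - v.
        plus = sum((di + vi) ** 2 for di, vi in zip(d, v))
        minus = sum((di - vi) ** 2 for di, vi in zip(d, v))
        s = 1 if plus <= minus else -1
        d = [di + s * vi for di, vi in zip(d, v)]
        signs.append(s)
    return signs, d

signs, d = online_signs([[1, 0], [1, 0], [0, 1]])
print(signs, d)  # [1, -1, 1] [0.0, 1.0]
```

The interesting part of the stochastic results is precisely that a more careful potential, exploiting anti-concentration of the input distribution, beats this generic $\sqrt{T}$ guarantee exponentially.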
Haotian Jiang, 2020
Given a separation oracle $\mathsf{SO}$ for a convex function $f$ that has an integral minimizer inside a box with radius $R$, we show how to find an exact minimizer of $f$ using at most (a) $O(n(n + \log(R)))$ calls to $\mathsf{SO}$ and $\mathsf{poly}(n, \log(R))$ arithmetic operations, or (b) $O(n \log(nR))$ calls to $\mathsf{SO}$ and $\exp(n) \cdot \mathsf{poly}(\log(R))$ arithmetic operations. When the set of minimizers of $f$ has integral extreme points, our algorithm outputs an integral minimizer of $f$. This improves upon the previously best oracle complexity of $O(n^2(n + \log(R)))$ for polynomial time algorithms obtained by [Grötschel, Lovász and Schrijver, Prog. Comb. Opt. 1984, Springer 1988] over thirty years ago. For the Submodular Function Minimization problem, our result immediately implies a strongly polynomial algorithm that makes at most $O(n^3)$ calls to an evaluation oracle, and an exponential time algorithm that makes at most $O(n^2 \log(n))$ calls to an evaluation oracle. These improve upon the previously best $O(n^3 \log^2(n))$ oracle complexity for strongly polynomial algorithms given in [Lee, Sidford and Wong, FOCS 2015] and [Dadush, Végh and Zambelli, SODA 2018], and an exponential time algorithm with oracle complexity $O(n^3 \log(n))$ given in the former work. Our result is achieved via a reduction to the Shortest Vector Problem in lattices. We show how an approximately shortest vector of a certain lattice can be used to effectively reduce the dimension of the problem. Our analysis of the oracle complexity is based on a potential function that captures simultaneously the size of the search set and the density of the lattice, which we analyze via technical tools from convex geometry.
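To see what a separation-oracle guarantee means in the simplest possible case, here is a one-dimensional toy (my sketch; the paper's contribution is the high-dimensional setting, where bisection does not apply and lattice techniques are needed). The oracle reports which side of the query point a minimizer lies on, and bisection finds an integral minimizer in $O(\log R)$ calls. The names `minimize_integral` and `make_oracle` are mine.

```python
def minimize_integral(sep_oracle, R):
    """Find an integral minimizer of a convex function on {-R, ..., R}.
    sep_oracle(x) returns -1 if a minimizer lies left of x, +1 if right,
    and 0 if x itself is a minimizer. Returns (minimizer, oracle calls)."""
    lo, hi = -R, R
    calls = 0
    while lo < hi:
        mid = (lo + hi) // 2
        calls += 1
        g = sep_oracle(mid)
        if g == 0:
            return mid, calls
        if g > 0:
            lo = mid + 1
        else:
            hi = mid
    return lo, calls

def make_oracle(f):
    """Build a separation oracle for a convex f on the integers by
    comparing neighboring function values (a discrete subgradient sign)."""
    def oracle(x):
        if f(x - 1) < f(x):
            return -1
        if f(x + 1) < f(x):
            return +1
        return 0
    return oracle

x, calls = minimize_integral(make_oracle(lambda t: (t - 3) ** 2), 100)
print(x, calls)  # 3 8
```

Convexity is what makes the local comparison a valid global separation: if $f(x-1) < f(x)$, every minimizer lies strictly to the left of $x$.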
Given a metric $(V, d)$ and a $\textsf{root} \in V$, the classic $\textsf{$k$-TSP}$ problem is to find a tour originating at the $\textsf{root}$ of minimum length that visits at least $k$ nodes in $V$. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic variants of this problem.
Consider a unit interval $[0,1]$ in which $n$ points arrive one-by-one independently and uniformly at random. On arrival of a point, the problem is to immediately and irrevocably color it in $\{+1,-1\}$ while ensuring that every interval $[a,b] \subseteq [0,1]$ is nearly-balanced. We define \emph{discrepancy} as the largest imbalance of any interval during the entire process. If all the arriving points were known upfront then we can color them alternately to achieve a discrepancy of $1$. What is the minimum possible expected discrepancy when we color the points online? We show that the discrepancy of the above problem is sub-polynomial in $n$ and that no algorithm can achieve a constant discrepancy. This is a substantial improvement over the trivial random coloring that only gets an $\widetilde{O}(\sqrt{n})$ discrepancy. We then obtain similar results for a natural generalization of this problem to $2$-dimensions where the points arrive uniformly at random in a unit square. This generalization allows us to improve recent results of Benade et al.~\cite{BenadeKPP-EC18} for the online envy minimization problem when the arrivals are stochastic.
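The offline benchmark mentioned above is easy to verify directly: sorting the points and coloring them alternately makes every interval's imbalance at most $1$, since the signed sum of any contiguous block of an alternating sequence lies in $\{-1, 0, 1\}$. A quick sketch (mine, for illustration; the online setting is where the difficulty lies):

```python
def offline_discrepancy(points):
    """Color points (positions in [0,1]) alternately in sorted order and
    return the largest |signed sum| over any contiguous block, which equals
    the worst interval imbalance."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    colors = [0] * len(points)
    for rank, i in enumerate(order):
        colors[i] = 1 if rank % 2 == 0 else -1
    # Max absolute contiguous sum in sorted order = max interval imbalance.
    best = 0
    for a in range(len(order)):
        s = 0
        for b in range(a, len(order)):
            s += colors[order[b]]
            best = max(best, abs(s))
    return best

print(offline_discrepancy([0.3, 0.9, 0.1, 0.5]))  # 1
```

An online algorithm cannot sort first, which is exactly why the gap between $1$ and $\widetilde{O}(\sqrt{n})$ is the interesting range.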
The central limit theorem for convex bodies says that with high probability the marginal of an isotropic log-concave distribution along a random direction is close to a Gaussian, with the quantitative difference determined asymptotically by the Cheeger/Poincaré/KLS constant. Here we propose a generalized CLT for marginals along random directions drawn from any isotropic log-concave distribution; namely, for $x, y$ drawn independently from isotropic log-concave densities $p, q$, the random variable $\langle x, y\rangle$ is close to Gaussian. Our main result is that this generalized CLT is quantitatively equivalent (up to a small factor) to the KLS conjecture. Any polynomial improvement in the current KLS bound of $n^{1/4}$ in $\mathbb{R}^n$ implies the generalized CLT, and vice versa. This tight connection suggests that the generalized CLT might provide insight into basic open questions in asymptotic convex geometry.
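The claimed phenomenon is easy to observe empirically for a concrete isotropic log-concave density, say the uniform distribution on the cube $[-\sqrt{3}, \sqrt{3}]^n$ (variance $1$ per coordinate). A Monte Carlo sketch (mine; purely illustrative, not a proof of anything): samples of $\langle x, y\rangle / \sqrt{n}$ should look approximately standard Gaussian.

```python
import math
import random
import statistics

def sample_inner_products(n, trials, seed=0):
    """Sample <x, y> for x, y drawn i.i.d. uniform on the isotropic cube
    [-sqrt(3), sqrt(3)]^n, an isotropic log-concave density."""
    rng = random.Random(seed)
    a = math.sqrt(3.0)
    out = []
    for _ in range(trials):
        x = [rng.uniform(-a, a) for _ in range(n)]
        y = [rng.uniform(-a, a) for _ in range(n)]
        out.append(sum(xi * yi for xi, yi in zip(x, y)))
    return out

n = 50
samples = sample_inner_products(n, trials=2000)
# Var(<x, y>) = n for isotropic x, y, so normalize by sqrt(n).
z = [s / math.sqrt(n) for s in samples]
# Empirical mean should be near 0 and standard deviation near 1.
print(round(statistics.mean(z), 2), round(statistics.pstdev(z), 2))
```

Checking the first two moments of course says nothing about the Gaussian tail behavior that the KLS equivalence actually concerns; it only makes the normalization in the statement concrete.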
Minimum storage regenerating (MSR) codes are MDS codes which allow for recovery of any single erased symbol with optimal repair bandwidth, based on the smallest possible fraction of the contents downloaded from each of the other symbols. Recently, certain Reed-Solomon codes were constructed which are MSR. However, the sub-packetization of these codes is exponentially large, growing like $n^{\Omega(n)}$ in the constant-rate regime. In this work, we study the relaxed notion of $\epsilon$-MSR codes, which incur a repair bandwidth a factor of $(1+\epsilon)$ higher than optimal, in the context of Reed-Solomon codes. We give constructions of constant-rate $\epsilon$-MSR Reed-Solomon codes with polynomial sub-packetization of $n^{O(1/\epsilon)}$, thereby giving an explicit tradeoff between the repair bandwidth and sub-packetization.
Suppose there are $n$ Markov chains and we need to pay a per-step \emph{price} to advance them. The destination states of the Markov chains contain rewards; however, we can only get rewards for a subset of them that satisfy a combinatorial constraint, e.g., at most $k$ of them, or they are acyclic in an underlying graph. What strategy should we choose to advance the Markov chains if our goal is to maximize the total reward \emph{minus} the total price that we pay? In this paper we introduce a Markovian price of information model to capture settings such as the above, where the input parameters of a combinatorial optimization problem are given via Markov chains. We design optimal/approximation algorithms that jointly optimize the value of the combinatorial problem and the total paid price. We also study \emph{robustness} of our algorithms to the distribution parameters and how to handle the \emph{commitment} constraint. Our work brings together two classical lines of investigation: getting optimal strategies for Markovian multi-armed bandits, and getting exact and approximation algorithms for discrete optimization problems using combinatorial as well as linear-programming relaxation ideas.