
Online Discrepancy Minimization for Stochastic Arrivals

Submitted by Haotian Jiang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





In the stochastic online vector balancing problem, vectors $v_1, v_2, \ldots, v_T$, chosen independently from an arbitrary distribution in $\mathbb{R}^n$, arrive one-by-one and must be immediately given a $\pm$ sign. The goal is to keep the norm of the discrepancy vector, i.e., the signed prefix-sum, as small as possible for a given target norm. We consider some of the most well-known problems in discrepancy theory in the above online stochastic setting, and give algorithms that match the known offline bounds up to $\mathsf{polylog}(nT)$ factors. This substantially generalizes and improves upon the previous results of Bansal, Jiang, Singla, and Sinha (STOC 2020). In particular, for the Komlós problem where $\|v_t\|_2 \leq 1$ for each $t$, our algorithm achieves $\tilde{O}(1)$ discrepancy with high probability, improving upon the previous $\tilde{O}(n^{3/2})$ bound. For Tusnády's problem of minimizing the discrepancy of axis-aligned boxes, we obtain an $O(\log^{d+4} T)$ bound for an arbitrary distribution over points. Previous techniques only worked for product distributions and gave a weaker $O(\log^{2d+1} T)$ bound. We also consider the Banaszczyk setting, where, given a symmetric convex body $K$ with Gaussian measure at least $1/2$, our algorithm achieves $\tilde{O}(1)$ discrepancy with respect to the norm given by $K$ for input distributions with sub-exponential tails. Our key idea is to introduce a potential that also enforces constraints on how the discrepancy vector evolves, allowing us to maintain certain anti-concentration properties. For the Banaszczyk setting, we further enhance this potential by combining it with ideas from generic chaining. Finally, we also extend these results to the setting of online multi-color discrepancy.
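As a concrete illustration of the online signing task described above, the sketch below implements only the generic potential-based template: on each arrival it picks the sign that keeps a soft-max-style potential $\Phi(d) = \sum_i \cosh(\lambda d_i)$ small. The potential, the parameter `lam`, and the random Komlós-style input are illustrative assumptions of ours; the paper's actual potential additionally constrains how the discrepancy vector evolves and is combined with generic chaining in the Banaszczyk setting, none of which is reproduced here.

```python
import numpy as np

def sign_online(vectors, lam=0.1):
    """Toy potential-based online signing: choose the sign of each arriving
    vector greedily so that Phi(d) = sum_i cosh(lam * d_i) stays small.
    This is only the generic template, not the paper's algorithm."""
    n = vectors.shape[1]
    d = np.zeros(n)                       # discrepancy vector (signed prefix-sum)
    phi = lambda x: np.cosh(lam * x).sum()
    signs = []
    for v in vectors:
        eps = 1 if phi(d + v) <= phi(d - v) else -1
        d += eps * v
        signs.append(eps)
    return np.array(signs), d

# Komlós-style input: T random unit vectors in R^n (illustrative only).
rng = np.random.default_rng(0)
T, n = 1000, 20
V = rng.standard_normal((T, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)    # ensures ||v_t||_2 <= 1
signs, disc = sign_online(V)
print("final discrepancy ||d||_inf =", np.abs(disc).max())
```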


Read also

Consider a unit interval $[0,1]$ in which $n$ points arrive one-by-one independently and uniformly at random. On arrival of a point, the problem is to immediately and irrevocably color it in $\{+1,-1\}$ while ensuring that every interval $[a,b] \subseteq [0,1]$ is nearly balanced. We define \emph{discrepancy} as the largest imbalance of any interval during the entire process. If all the arriving points were known upfront, then we could color them alternately to achieve a discrepancy of $1$. What is the minimum possible expected discrepancy when we color the points online? We show that the discrepancy of the above problem is sub-polynomial in $n$ and that no algorithm can achieve a constant discrepancy. This is a substantial improvement over the trivial random coloring, which only achieves $\widetilde{O}(\sqrt{n})$ discrepancy. We then obtain similar results for a natural generalization of this problem to two dimensions, where the points arrive uniformly at random in a unit square. This generalization allows us to improve the recent results of Benade et al. \cite{BenadeKPP-EC18} for the online envy minimization problem when the arrivals are stochastic.
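To make the quantities above concrete, here is a small self-contained harness (our own, not code from the paper) that simulates uniform arrivals, colors them with the trivial independent random signs mentioned in the abstract, and tracks the largest interval imbalance seen during the process; any smarter online coloring rule can be dropped into the same loop.

```python
import random

def interval_disc(points):
    """Largest imbalance over all intervals [a,b] for (position, sign) pairs:
    with prefix sums taken in position order, this equals
    max(prefix) - min(prefix), where the empty prefix 0 is included."""
    prefix, lo, hi = 0, 0, 0
    for _, s in sorted(points):
        prefix += s
        lo, hi = min(lo, prefix), max(hi, prefix)
    return hi - lo

def random_coloring_discrepancy(n, seed=None):
    """Baseline: color each arriving uniform point with an independent random
    sign and record the worst interval imbalance seen during the process."""
    rng = random.Random(seed)
    colored, worst = [], 0
    for _ in range(n):
        colored.append((rng.random(), rng.choice((+1, -1))))
        worst = max(worst, interval_disc(colored))
    return worst

# Grows roughly like sqrt(n) up to log factors, matching the abstract's baseline.
print(random_coloring_discrepancy(500, seed=0))
```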
Online bipartite matching with edge arrivals remained a major open question for a long time until a recent negative result by Gamlath et al. (FOCS 2019), who showed that no online policy is better than the straightforward greedy algorithm, i.e., no online algorithm has a worst-case competitive ratio better than $0.5$. In this work, we consider the bipartite matching problem with edge arrivals in a natural stochastic framework, i.e., the Bayesian setting where each edge of the graph is independently realized according to a known probability distribution. We focus on a natural class of prune & greedy online policies motivated by practical considerations from a multitude of online matching platforms. Any prune & greedy algorithm consists of two stages: first, it decreases the probabilities of some edges in the stochastic instance, and then it runs the greedy algorithm on the pruned graph. We propose prune & greedy algorithms that are $0.552$-competitive on instances that can be pruned to a $2$-regular stochastic bipartite graph, and $0.503$-competitive on arbitrary bipartite graphs. The algorithms and our analysis deviate significantly from prior work. We first obtain an analytically manageable lower bound on the size of the matching, which leads to a nonlinear optimization problem. We further reduce this problem to a continuous optimization with a constant number of parameters that can be solved using standard software tools.
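The following skeleton sketches the two-stage structure described above under our own simplifying assumptions: `prune` maps each edge to a reduced probability $p'_e \le p_e$ (the paper's specific pruning to a $2$-regular stochastic graph is not implemented), and realization plus sub-sampling are collapsed into a single coin flip per arriving edge.

```python
import random

def prune_and_greedy(arrival_order, probs, prune, seed=None):
    """Skeleton of a prune & greedy policy (illustrative, not the paper's tuned rule).
    arrival_order : list of edges (u, v) in their online arrival order
    probs         : dict edge -> realization probability p_e
    prune         : dict edge -> pruned probability p'_e with p'_e <= p_e
    An edge realized w.p. p_e and then kept w.p. p'_e / p_e survives w.p. p'_e,
    so a single coin with that probability suffices; kept edges are matched greedily."""
    rng = random.Random(seed)
    matched, matching = set(), []
    for (u, v) in arrival_order:
        assert prune[(u, v)] <= probs[(u, v)]
        if rng.random() < prune[(u, v)] and u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching
```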
We study the online discrepancy minimization problem for vectors in $\mathbb{R}^d$ in the oblivious setting, where an adversary is allowed to fix the vectors $x_1, x_2, \ldots, x_n$ in arbitrary order ahead of time. We give an algorithm that maintains $O(\sqrt{\log(nd/\delta)})$ discrepancy with probability $1-\delta$, matching the lower bound given in [Bansal et al. 2020] up to an $O(\sqrt{\log \log n})$ factor in the high-probability regime. We also provide results for the weighted and multi-color settings.
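For intuition on the oblivious setting, one well-known randomized rule is the self-balancing walk of Alweiss, Liu, and Sawhney, which signs each vector with a probability that leans against the current discrepancy. The sketch below shows that rule with an illustrative threshold `c`; it should not be read as the algorithm analyzed in this abstract or as achieving its exact bound.

```python
import numpy as np

def self_balancing_walk(vectors, c=30.0, seed=0):
    """Self-balancing walk: sign v as +1 with probability (1 - <d, v>/c)/2,
    biasing against further growth of the discrepancy d.
    The threshold c is an illustrative parameter here."""
    rng = np.random.default_rng(seed)
    d = np.zeros(vectors.shape[1])
    signs = []
    for v in vectors:
        inner = float(np.dot(d, v))
        if abs(inner) > c:                # low-probability failure event
            raise RuntimeError("threshold exceeded; increase c")
        eps = 1 if rng.random() < 0.5 * (1 - inner / c) else -1
        d += eps * v
        signs.append(eps)
    return np.array(signs), d
```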
Benjamin Doerr (2021)
For $m, d \in \mathbb{N}$, a jittered sampling point set $P$ having $N = m^d$ points in $[0,1)^d$ is constructed by partitioning the unit cube $[0,1)^d$ into $m^d$ axis-aligned cubes of equal size and then placing one point independently and uniformly at random in each cube. We show that there are constants $c > 0$ and $C$ such that for all $d$ and all $m \ge d$ the expected non-normalized star discrepancy of a jittered sampling point set satisfies \[ c \, d m^{\frac{d-1}{2}} \sqrt{1 + \log(\tfrac{m}{d})} \;\le\; \mathbb{E}\, D^*(P) \;\le\; C \, d m^{\frac{d-1}{2}} \sqrt{1 + \log(\tfrac{m}{d})}. \] This discrepancy is thus smaller by a factor of $\Theta\big(\sqrt{\tfrac{1+\log(m/d)}{m/d}}\big)$ than that of a uniformly distributed random point set of $m^d$ points. This result improves both the upper and the lower bound for the discrepancy of jittered sampling given by Pausinger and Steinerberger (Journal of Complexity, 2016). It also removes the asymptotic requirement that $m$ be sufficiently large compared to $d$.
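The construction described above is simple enough to state directly in code; the sketch below is our own minimal version (with an optional seed for reproducibility): it partitions $[0,1)^d$ into $m^d$ congruent cubes and jitters one point uniformly inside each.

```python
import itertools
import random

def jittered_sampling(m, d, seed=None):
    """Place one uniform random point in each of the m^d axis-aligned
    subcubes of side length 1/m that partition [0,1)^d."""
    rng = random.Random(seed)
    points = []
    for cell in itertools.product(range(m), repeat=d):
        # Jitter uniformly inside the cell [i/m, (i+1)/m) in every coordinate.
        points.append(tuple((i + rng.random()) / m for i in cell))
    return points

# Example: N = 4^2 = 16 points in the unit square.
P = jittered_sampling(m=4, d=2, seed=0)
```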
Yuntao Du, Zhiwen Tan, Qian Chen (2019)
Transfer learning has been demonstrated to be successful and essential in diverse applications; it transfers knowledge from related but different source domains to the target domain. Online transfer learning (OTL) is a more challenging problem where the target data arrive in an online manner. Most OTL methods combine a source classifier and a target classifier directly by assigning a weight to each classifier and adjusting the weights constantly. However, these methods pay little attention to reducing the distribution discrepancy between domains. In this paper, we propose a novel online transfer learning method that seeks to find a new feature representation so that the marginal and conditional distribution discrepancies can be reduced online simultaneously. We focus on online transfer learning with multiple source domains and use the Hedge strategy to leverage knowledge from the source domains. We analyze the theoretical properties of the proposed algorithm and provide an upper mistake bound. Comprehensive experiments on two real-world datasets show that our method outperforms state-of-the-art methods by a large margin.
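For reference, the Hedge strategy mentioned above can be sketched as a multiplicative-weights combination of the available classifiers. The version below is a generic illustration under our own assumptions (binary labels in $\{-1,+1\}$, a fixed discount `beta`) and omits the paper's learned feature representation and discrepancy reduction.

```python
import numpy as np

def hedge_combine(classifiers, stream, beta=0.9):
    """Hedge-style weighting of several (source) classifiers on an online
    stream of (x, y) pairs with y in {-1, +1}. Classifiers that err on a
    round have their weights discounted by beta."""
    w = np.ones(len(classifiers))
    mistakes = 0
    for x, y in stream:
        preds = np.array([clf(x) for clf in classifiers])   # each prediction in {-1, +1}
        y_hat = 1 if np.dot(w, preds) >= 0 else -1           # weighted-majority vote
        mistakes += int(y_hat != y)
        w *= np.where(preds != y, beta, 1.0)                 # multiplicative update
        w /= w.sum()
    return w, mistakes
```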
