
Online Discrepancy Minimization via Persistent Self-Balancing Walks

Published by: Tung Mai
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We study the online discrepancy minimization problem for vectors in $\mathbb{R}^d$ in the oblivious setting, where an adversary is allowed to fix the vectors $x_1, x_2, \ldots, x_n$ in arbitrary order ahead of time. We give an algorithm that maintains $O(\sqrt{\log(nd/\delta)})$ discrepancy with probability $1-\delta$, matching the lower bound given in [Bansal et al. 2020] up to an $O(\sqrt{\log\log n})$ factor in the high-probability regime. We also provide results for the weighted and multi-color settings.
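To make the title's mechanism concrete, here is a minimal sketch of a self-balancing walk in the style of Alweiss, Liu, and Sawhney: each arriving vector is signed by a coin whose bias pushes the running discrepancy vector back toward the origin. This is an illustration only, not the paper's persistent variant, and the scale parameter `c` (assumed here to be on the order of the target discrepancy bound) does not follow the paper's analysis.

```python
import numpy as np

def self_balancing_walk(vectors, c, rng=None):
    """Sign vectors online, biasing each coin against the running discrepancy.

    Illustrative sketch in the style of Alweiss-Liu-Sawhney, not the paper's
    persistent variant.  `c` is an assumed scale parameter on the order of
    the target discrepancy bound.
    """
    rng = rng or np.random.default_rng()
    w = np.zeros(len(vectors[0]))  # discrepancy vector (signed prefix sum)
    signs = []
    for x in vectors:
        x = np.asarray(x, dtype=float)
        # Coin biased toward the sign that shrinks <w, x>.
        p = 0.5 - np.dot(w, x) / (2 * c)
        p = min(max(p, 0.0), 1.0)  # clip in the rare event |<w, x>| > c
        s = 1 if rng.random() < p else -1
        w += s * x
        signs.append(s)
    return signs, w

# Example: 1000 random unit vectors in R^20.
rng = np.random.default_rng(0)
xs = rng.standard_normal((1000, 20))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
_, w = self_balancing_walk(xs, c=10.0, rng=rng)
print(np.abs(w).max())  # max-norm discrepancy of the maintained signing
```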




Read also

Shachar Lovett, Raghu Meka (2012)
Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS 1985): in any system of $n$ sets over a universe of size $n$, there always exists a coloring which achieves discrepancy $6\sqrt{n}$. The original proof of Spencer was existential in nature and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal (FOCS 2010) gave an efficient algorithm which finds such a coloring. His algorithm was based on an SDP relaxation of the discrepancy problem and a clever rounding procedure. In this work we give a new randomized algorithm to find a coloring as in Spencer's result, based on a restricted random walk we call Edge-Walk. Our algorithm and its analysis use only basic linear algebra and are truly constructive in that they do not appeal to the existential arguments, giving a new proof of Spencer's theorem and the partial coloring lemma.
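The Edge-Walk described above admits a short toy implementation: start at the origin of the cube $[-1,1]^n$ and take small Gaussian steps projected away from constraints that have become tight (frozen coordinates and sets whose discrepancy has reached the target). The sketch below is illustrative; the step size, iteration count, and tolerances are placeholders and do not follow the paper's parameter choices.

```python
import numpy as np

def edge_walk(A, delta, step=0.02, iters=5000, tol=1e-3, rng=None):
    """Toy Edge-Walk for partial coloring of the set system with incidence
    matrix A (rows = sets).  Takes Gaussian steps projected onto the
    subspace orthogonal to tight constraints: frozen coordinates
    (|x_i| near 1) and sets whose discrepancy has reached delta.
    Placeholder parameters; not the paper's analyzed choices.
    """
    rng = rng or np.random.default_rng()
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        tight_vars = np.abs(x) >= 1 - tol          # frozen coordinates
        tight_sets = np.abs(A @ x) >= delta - tol  # saturated set constraints
        normals = [np.eye(n)[i] for i in np.flatnonzero(tight_vars)]
        normals += [A[j] for j in np.flatnonzero(tight_sets)]
        g = rng.standard_normal(n)
        if normals:
            N = np.asarray(normals, dtype=float)
            # Project g onto the orthogonal complement of the tight normals.
            g -= N.T @ np.linalg.lstsq(N @ N.T, N @ g, rcond=None)[0]
        x = np.clip(x + step * g, -1.0, 1.0)
    return x  # a constant fraction of coordinates should end up near +-1

# Example: 30 random sets over 30 elements, target ~6*sqrt(30).
rng = np.random.default_rng(1)
A = (rng.random((30, 30)) < 0.5).astype(float)
x = edge_walk(A, delta=6 * np.sqrt(30), rng=rng)
print((np.abs(x) > 0.99).sum(), "coordinates rounded to +-1")
```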
In the stochastic online vector balancing problem, vectors $v_1, v_2, \ldots, v_T$ chosen independently from an arbitrary distribution in $\mathbb{R}^n$ arrive one-by-one and must be immediately given a $\pm$ sign. The goal is to keep the norm of the discrepancy vector, i.e., the signed prefix-sum, as small as possible for a given target norm. We consider some of the most well-known problems in discrepancy theory in the above online stochastic setting, and give algorithms that match the known offline bounds up to $\mathsf{polylog}(nT)$ factors. This substantially generalizes and improves upon the previous results of Bansal, Jiang, Singla, and Sinha (STOC 2020). In particular, for the Komlós problem where $\|v_t\|_2 \leq 1$ for each $t$, our algorithm achieves $\tilde{O}(1)$ discrepancy with high probability, improving upon the previous $\tilde{O}(n^{3/2})$ bound. For Tusnády's problem of minimizing the discrepancy of axis-aligned boxes, we obtain an $O(\log^{d+4} T)$ bound for an arbitrary distribution over points. Previous techniques only worked for product distributions and gave a weaker $O(\log^{2d+1} T)$ bound. We also consider the Banaszczyk setting, where given a symmetric convex body $K$ with Gaussian measure at least $1/2$, our algorithm achieves $\tilde{O}(1)$ discrepancy with respect to the norm given by $K$ for input distributions with sub-exponential tails. Our key idea is to introduce a potential that also enforces constraints on how the discrepancy vector evolves, allowing us to maintain certain anti-concentration properties. For the Banaszczyk setting, we further enhance this potential by combining it with ideas from generic chaining. Finally, we also extend these results to the setting of online multi-color discrepancy.
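A standard building block in this line of work, which the abstract's potential-based approach refines, is greedy signing against a soft-max potential such as $\Phi(w) = \sum_i \cosh(\lambda w_i)$, a smooth proxy for the max-norm of the discrepancy vector. The sketch below shows only that greedy step; the paper's potential additionally enforces anti-concentration constraints, which are omitted here.

```python
import numpy as np

def sign_by_potential(vectors, lam=0.1):
    """Greedy online signing against Phi(w) = sum_i cosh(lam * w_i),
    a soft-max proxy for the max-norm of the discrepancy vector.
    The paper's potential also enforces anti-concentration constraints,
    omitted in this sketch.
    """
    w = np.zeros(len(vectors[0]))
    signs = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        # Pick the sign that keeps the potential smaller.
        s = 1 if np.cosh(lam * (w + v)).sum() <= np.cosh(lam * (w - v)).sum() else -1
        w += s * v
        signs.append(s)
    return signs, w
```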
Consider a unit interval $[0,1]$ in which $n$ points arrive one-by-one independently and uniformly at random. On arrival of a point, the problem is to immediately and irrevocably color it in $\{+1,-1\}$ while ensuring that every interval $[a,b] \subseteq [0,1]$ is nearly balanced. We define the discrepancy as the largest imbalance of any interval during the entire process. If all the arriving points were known upfront then we could color them alternately to achieve a discrepancy of $1$. What is the minimum possible expected discrepancy when we color the points online? We show that the discrepancy of the above problem is sub-polynomial in $n$ and that no algorithm can achieve a constant discrepancy. This is a substantial improvement over the trivial random coloring that only gets an $\widetilde{O}(\sqrt{n})$ discrepancy. We then obtain similar results for a natural generalization of this problem to two dimensions, where the points arrive uniformly at random in a unit square. This generalization allows us to improve recent results of Benade et al. (EC 2018) for the online envy minimization problem when the arrivals are stochastic.
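The objective in this abstract has a simple computational form: once the points are sorted by position, every interval $[a,b]$ picks out a contiguous run of signs, so the discrepancy equals the spread of the signed prefix sums. The helper below computes it and demonstrates the $\widetilde{O}(\sqrt{n})$ behavior of random coloring; it does not reproduce the paper's online algorithm.

```python
import numpy as np

def interval_discrepancy(points, signs):
    """Largest imbalance |sum of signs| over all intervals [a, b] in [0, 1].

    Sorted by position, every interval selects a contiguous run of signs,
    so the discrepancy is max(prefix) - min(prefix) over signed prefix sums.
    """
    order = np.argsort(points)
    prefix = np.concatenate(([0], np.cumsum(np.asarray(signs)[order])))
    return prefix.max() - prefix.min()

# Random coloring of n uniform points suffers ~sqrt(n) discrepancy.
rng = np.random.default_rng(0)
pts = rng.random(1000)
cols = rng.choice([-1, 1], size=1000)
print(interval_discrepancy(pts, cols))
```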
In bipartite matching problems, vertices on one side of a bipartite graph are paired with those on the other. In the online variant, one side of the graph is available offline, while the vertices on the other side arrive online. When a vertex arrives, an irrevocable and immediate decision must be made by the algorithm: either match it to an available vertex or drop it. Examples of such problems include matching workers to firms, advertisers to keywords, organs to patients, and so on. Much of the literature focuses on maximizing the total relevance of the matching, modeled via total weight. However, in many real-world problems it is also important to account for diversity: hiring a diverse pool of candidates, displaying a relevant but diverse set of ads, and so on. In this paper, we propose the Online Submodular Bipartite Matching (OSBM) problem, where the goal is to maximize a submodular function $f$ over the set of matched edges. This objective is general enough to capture the notion of both diversity (e.g., a weighted coverage function) and relevance (e.g., the traditional linear function), as well as many other natural objective functions occurring in practice (e.g., a limited total budget in advertising settings). We propose novel algorithms that have provable guarantees and are essentially optimal when restricted to various special cases. We also run experiments on real-world and synthetic datasets to validate our algorithms.
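As a point of reference for the OSBM objective, here is the natural greedy baseline: match each arriving vertex to the free offline vertex with the largest marginal gain of $f$, or drop it. The `marginal_gain` interface is a hypothetical stand-in for a submodular oracle, and this greedy rule is not the paper's algorithm, just the obvious starting point it improves on.

```python
def greedy_osbm(offline, arrivals, marginal_gain):
    """Greedy baseline for Online Submodular Bipartite Matching.

    Matches each arriving vertex to the free offline vertex with the largest
    marginal gain of f, or drops it when no gain is positive.  The
    `marginal_gain(matched, edge)` interface, returning f(M + e) - f(M),
    is a hypothetical stand-in for an actual submodular oracle.
    """
    matched = []            # edges chosen so far
    free = set(offline)     # offline vertices still available
    for u in arrivals:
        best, best_gain = None, 0.0
        for v in free:
            gain = marginal_gain(matched, (u, v))
            if gain > best_gain:
                best, best_gain = v, gain
        if best is not None:  # irrevocable decision: match or drop
            matched.append((u, best))
            free.remove(best)
    return matched

# Example with a coverage function: f(M) = number of topics covered (hypothetical data).
topics = {"a": {1, 2}, "b": {2, 3}, "c": {3}}  # offline vertex -> topics it covers

def coverage_gain(matched, edge):
    covered = set().union(*(topics[v] for _, v in matched)) if matched else set()
    return len(topics[edge[1]] - covered)

print(greedy_osbm(["a", "b", "c"], ["u1", "u2"], coverage_gain))
```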
Yuntao Du, Zhiwen Tan, Qian Chen (2019)
Transfer learning has been demonstrated to be successful and essential in diverse applications; it transfers knowledge from related but different source domains to the target domain. Online transfer learning (OTL) is a more challenging problem where the target data arrive in an online manner. Most OTL methods combine the source and target classifiers directly by assigning a weight to each classifier and adjusting the weights constantly. However, these methods pay little attention to reducing the distribution discrepancy between domains. In this paper, we propose a novel online transfer learning method which seeks to find a new feature representation, so that the marginal and conditional distribution discrepancies can be reduced online simultaneously. We focus on online transfer learning with multiple source domains and use the Hedge strategy to leverage knowledge from the source domains. We analyze the theoretical properties of the proposed algorithm and provide an upper mistake bound. Comprehensive experiments on two real-world datasets show that our method outperforms state-of-the-art methods by a large margin.
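The Hedge step the abstract refers to is the classic multiplicative-weights update over experts, here instantiated with one classifier per source domain. This sketch shows only that generic combination step; the paper's method additionally learns a new feature representation to reduce the distribution discrepancy, which is not modeled here.

```python
import numpy as np

def hedge_combine(classifiers, stream, eta=0.5):
    """Hedge (multiplicative weights) over per-source-domain classifiers.

    Each round: predict with the weighted majority, then multiplicatively
    penalize the experts that erred.  Generic combination step only; the
    paper's method also learns a new feature representation.
    """
    w = np.ones(len(classifiers))
    mistakes = 0
    for x, y in stream:                      # labels y in {-1, +1}
        preds = np.array([c(x) for c in classifiers])
        y_hat = 1 if np.dot(w, preds) >= 0 else -1
        mistakes += int(y_hat != y)
        w *= np.exp(-eta * (preds != y))     # penalize wrong experts
        w /= w.sum()                         # keep weights normalized
    return mistakes, w

# Example: two hypothetical source classifiers on a toy 1-D stream.
clfs = [lambda x: 1 if x > 0 else -1, lambda x: 1 if x > 0.5 else -1]
stream = [(x, 1 if x > 0 else -1) for x in np.linspace(-1, 1, 50)]
print(hedge_combine(clfs, stream))
```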