We prove novel algorithmic guarantees for several online problems in the smoothed analysis model. In this model, at each time step an adversary chooses an input distribution with density function bounded above by $\tfrac{1}{\sigma}$ times that of the uniform distribution; nature then samples an input from this distribution. Crucially, our results hold for \emph{adaptive} adversaries that can choose an input distribution based on the decisions of the algorithm and the realizations of the inputs in the previous time steps. This paper presents a general technique for proving smoothed algorithmic guarantees against adaptive adversaries, in effect reducing the setting of adaptive adversaries to the simpler case of oblivious adversaries. We apply this technique to prove strong smoothed guarantees for three problems:

- Online learning: We consider the online prediction problem, where instances are generated from an adaptive sequence of $\sigma$-smooth distributions and the hypothesis class has VC dimension $d$. We bound the regret by $\tilde{O}\big(\sqrt{T d \ln(1/\sigma)} + d\sqrt{\ln(T/\sigma)}\big)$. This answers open questions of [RST11, Hag18].

- Online discrepancy minimization: We consider the online Komlós problem, where the input is generated from an adaptive sequence of $\sigma$-smooth and isotropic distributions on the $\ell_2$ unit ball. We bound the $\ell_\infty$ norm of the discrepancy vector by $\tilde{O}\big(\ln^2\big(\tfrac{nT}{\sigma}\big)\big)$.

- Dispersion in online optimization: We consider online optimization of piecewise Lipschitz functions, where functions with $\ell$ discontinuities are chosen by a smoothed adaptive adversary, and show that the resulting sequence is $\big(\sigma/\sqrt{T\ell},\, \tilde{O}\big(\sqrt{T\ell}\big)\big)$-dispersed. This matches the parameters of [BDV18] for oblivious adversaries, up to log factors.
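For concreteness, here is a minimal formalization of the smoothness condition described above (the symbols $\mathcal{X}$ for the domain and $\mathcal{U}$ for the uniform distribution on it are our notation, not fixed by the abstract): a distribution $\mathcal{D}$ on $\mathcal{X}$ is $\sigma$-smooth if
\[
  \frac{d\mathcal{D}}{d\mathcal{U}}(x) \;\le\; \frac{1}{\sigma}
  \qquad \text{for all } x \in \mathcal{X},
\]
or equivalently, $\mathcal{D}(A) \le \mathcal{U}(A)/\sigma$ for every measurable $A \subseteq \mathcal{X}$. For example, the uniform distribution on any subset of $\mathcal{X}$ of measure $\sigma$ is exactly $\sigma$-smooth, and $\sigma = 1$ forces $\mathcal{D} = \mathcal{U}$, so smaller $\sigma$ gives the adversary more power.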