The study of strategic or adversarial manipulation of testing data to fool a classifier has attracted much recent attention. Most previous works focus on two extreme situations, in which every testing data point is either completely adversarial or always equally prefers the positive label. In this paper, we generalize both of these settings through a unified framework for strategic classification, and introduce the notion of strategic VC-dimension (SVC) to capture PAC-learnability in our general strategic setup. SVC provably generalizes the recent concept of adversarial VC-dimension (AVC) introduced by Cullina et al. (arXiv:1806.01471). We instantiate our framework for the fundamental problem of strategic linear classification. We fully characterize: (1) the statistical learnability of linear classifiers, by pinning down its SVC; and (2) its computational tractability, by pinning down the complexity of the empirical risk minimization problem. Interestingly, the SVC of linear classifiers is always upper bounded by their standard VC-dimension. This characterization also strictly generalizes the AVC bound for linear classifiers in arXiv:1806.01471.
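To make the strategic setting concrete, below is a minimal sketch of one extreme case the abstract mentions, in which every testing point prefers the positive label and manipulates its features against a linear classifier subject to an $\ell_2$ movement budget. The cost model, the `budget` parameter, and the function names are illustrative assumptions for this sketch, not the paper's unified preference framework:

```python
import numpy as np

def strategic_response(x, w, b, budget):
    """Best response of a positively-inclined agent to the linear
    classifier sign(w.x + b): cross the decision boundary if it is
    reachable within the movement budget, otherwise stay put.
    (Illustrative cost model; the paper's framework is more general.)"""
    score = w @ x + b
    if score >= 0:
        return x                       # already labeled positive
    dist = -score / np.linalg.norm(w)  # Euclidean distance to boundary
    if dist <= budget:
        # move along the normal direction just past the boundary
        return x + (dist + 1e-9) * w / np.linalg.norm(w)
    return x                           # boundary unreachable

def strategic_empirical_risk(X, y, w, b, budget):
    """0-1 loss of (w, b) measured after each point best-responds."""
    errors = 0
    for x, label in zip(X, y):
        pred = 1 if w @ strategic_response(x, w, b, budget) + b >= 0 else -1
        errors += (pred != label)
    return errors / len(X)
```

Under these assumptions, the empirical risk minimization problem whose complexity the abstract characterizes amounts to: given labeled samples, find $(w, b)$ minimizing this strategic loss. The completely adversarial extreme instead lets each point move so as to flip the prediction away from its true label.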
We establish an uncertainty principle for functions $f: \mathbb{Z}/p \rightarrow \mathbb{F}_q$ with constant support (where $p \mid q-1$). In particular, we show that for any constant $S > 0$, functions $f: \mathbb{Z}/p \rightarrow \mathbb{F}_q$ for which $|\text{supp}\, f| = S$ must satisfy $|\text{supp}\, \hat{f}| = (1 - o(1))p$. The proof relies on an application of Szemerédi's theorem; the celebrated improvements by Gowers translate into slightly stronger statements, permitting conclusions for functions whose support grows slowly as a function of $p$.
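For concreteness, the $\mathbb{F}_q$-valued Fourier transform on $\mathbb{Z}/p$ implicit in this statement can be taken as the standard one below; fixing a primitive $p$-th root of unity $\omega \in \mathbb{F}_q^{\times}$ (which exists precisely because $p \mid q-1$) is an assumption about the paper's conventions, since the abstract does not spell it out:

\[
  \hat{f}(t) \;=\; \sum_{x \in \mathbb{Z}/p} f(x)\, \omega^{xt}, \qquad t \in \mathbb{Z}/p.
\]

In this notation, the theorem says that if $|\text{supp}\, f| = S$ is constant, then all but $o(p)$ of the coefficients $\hat{f}(t)$ are nonzero as $p \rightarrow \infty$.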