
Adversarially Robust Streaming via Dense–Sparse Trade-offs

Added by Talya Eden
Publication date: 2021
Language: English





A streaming algorithm is adversarially robust if it is guaranteed to perform correctly even in the presence of an adaptive adversary. Recently, several sophisticated frameworks for robustification of classical streaming algorithms have been developed. One of the main open questions in this area is whether efficient adversarially robust algorithms exist for moment estimation problems under the turnstile streaming model, where both insertions and deletions are allowed. So far, the best known space complexity for streams of length $m$, achieved using differential privacy (DP) based techniques, is of order $\tilde{O}(m^{1/2})$ for computing a constant-factor approximation with high constant probability. In this work, we propose a new simple approach to tracking moments by alternating between two different regimes: a sparse regime, in which we can explicitly maintain the current frequency vector and use standard sparse recovery techniques, and a dense regime, in which we make use of existing DP-based robustification frameworks. The results obtained using our technique break the previous $m^{1/2}$ barrier for any fixed $p$. More specifically, our space complexity for $F_2$-estimation is $\tilde{O}(m^{2/5})$ and for $F_0$-estimation, i.e., counting the number of distinct elements, it is $\tilde{O}(m^{1/3})$. All existing robustness frameworks have their space complexity depend multiplicatively on a parameter $\lambda$ called the \emph{flip number} of the streaming problem, where $\lambda = m$ in turnstile moment estimation. The best known dependence in these frameworks (for constant-factor approximation) is of order $\tilde{O}(\lambda^{1/2})$, and it is known to be tight for certain problems. Again, our approach breaks this barrier, achieving a dependence of order $\tilde{O}(\lambda^{1/2 - c(p)})$ for $F_p$-estimation, where $c(p) > 0$ depends only on $p$.
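To make the alternation concrete, here is a minimal Python skeleton of the regime-switching logic for $F_2$ under turnstile updates. Everything specific is an assumption for illustration: the threshold value, the class and method names, and the dense-regime estimator, which is a stub standing in for the paper's DP-robustified sketch (likewise, the sparse-recovery handoff when the vector re-sparsifies is only noted in comments).

```python
from collections import defaultdict

# Illustrative threshold; the paper tunes it to balance the space of the
# explicit vector against the space of the DP-robustified dense sketch.
SPARSITY_THRESHOLD = 1000

class DenseSparseMomentTracker:
    """Skeleton of the dense/sparse alternation for turnstile F_2 tracking.

    Sparse regime: the frequency vector is kept explicitly, so F_2 is exact
    and trivially robust. Dense regime: a real implementation would hand off
    to a DP-robustified sketch and, on re-sparsification, recover the
    explicit vector with standard sparse recovery; both are stubbed here.
    """

    def __init__(self):
        self.freq = defaultdict(int)  # explicit frequency vector
        self.dense = False            # which regime we are currently in

    def update(self, item, delta):
        """Process one turnstile update: frequency of item changes by delta."""
        self.freq[item] += delta
        if self.freq[item] == 0:
            del self.freq[item]       # keep only the support
        support = len(self.freq)
        if not self.dense and support > SPARSITY_THRESHOLD:
            self.dense = True         # switch to the DP-based dense sketch
        elif self.dense and support <= SPARSITY_THRESHOLD // 2:
            self.dense = False        # sparse recovery would rebuild `freq`

    def estimate_f2(self):
        if not self.dense:
            return sum(v * v for v in self.freq.values())  # exact, robust
        # Stub: stands in for querying the DP-robustified dense sketch.
        return sum(v * v for v in self.freq.values())
```

The hysteresis (switching back only once the support drops to half the threshold) is a standard way to avoid thrashing between regimes; the paper's actual switching rule and parameters may differ.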



Related research

A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary. We establish a connection between adversarial robustness of streaming algorithms and the notion of differential privacy. This connection allows us to design new adversarially robust streaming algorithms that outperform the current state-of-the-art constructions for many interesting regimes of parameters.
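Concretely, frameworks built on this connection run several independent copies of a classical sketch and answer queries through a differentially private aggregate of their outputs, so an adaptive adversary learns little about any single copy's internal randomness. Below is a minimal sketch of one such aggregation step: `private_median` is a simplified exponential-mechanism median over a fixed grid, and the `copies`/`estimate()` interface is hypothetical.

```python
import math
import random

def private_median(values, low, high, eps, grid=100):
    """Simplified exponential-mechanism median over [low, high].

    Each grid point is scored by how evenly it splits `values`; the score
    has sensitivity 1, so sampling with weight exp(eps * score / 2) gives
    an eps-differentially private median (illustrative, not optimized).
    """
    candidates = [low + (high - low) * k / grid for k in range(grid + 1)]
    n = len(values)
    scores = [-abs(sum(v <= c for v in values) - n / 2) for c in candidates]
    top = max(scores)  # shift for numerical stability; distribution unchanged
    weights = [math.exp(eps * (s - top) / 2) for s in scores]
    return random.choices(candidates, weights=weights)[0]

def robust_estimate(copies, eps=1.0, low=0.0, high=1e9):
    """Answer a query by privately aggregating independent sketch copies;
    `copies` is a hypothetical list of objects exposing .estimate()."""
    return private_median([c.estimate() for c in copies], low, high, eps)
```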
Unlike traditional file transfer, where only the total delay matters, streaming applications impose delay constraints on each packet and require packets to arrive in order. To achieve fast in-order packet decoding, we have to compromise on throughput. We study this trade-off between throughput and smoothness in packet decoding. We first consider point-to-point streaming and analyze how the trade-off is affected by the frequency of block-wise feedback, whereby the source receives full channel state feedback at periodic intervals. We show that frequent feedback can drastically improve the throughput-smoothness trade-off. Then we consider the problem of multicasting a packet stream to two users. For both point-to-point and multicast streaming, we propose a spectrum of coding schemes that span different throughput-smoothness trade-offs. One can choose an appropriate coding scheme from these, depending upon the delay sensitivity and bandwidth limitations of the application. This work introduces a novel style of analysis using renewal processes and Markov chains to analyze coding schemes.
We revisit the longest common extension (LCE) problem, that is, preprocess a string $T$ into a compact data structure that supports fast LCE queries. An LCE query takes a pair $(i,j)$ of indices in $T$ and returns the length of the longest common prefix of the suffixes of $T$ starting at positions $i$ and $j$. We study the time-space trade-offs for the problem, that is, the space used for the data structure vs. the worst-case time for answering an LCE query. Let $n$ be the length of $T$. Given a parameter $\tau$, $1 \leq \tau \leq n$, we show how to achieve either $O(\frac{n}{\sqrt{\tau}})$ space and $O(\tau)$ query time, or $O(\frac{n}{\tau})$ space and $O(\tau \log(|LCE(i,j)|/\tau))$ query time, where $|LCE(i,j)|$ denotes the length of the LCE returned by the query. These bounds provide the first smooth trade-offs for the LCE problem and almost match the previously known bounds at the extremes when $\tau=1$ or $\tau=n$. We apply the result to obtain improved bounds for several applications where the LCE problem is the computational bottleneck, including approximate string matching and computing palindromes. We also present an efficient technique to reduce LCE queries on two strings to one string. Finally, we give a lower bound on the time-space product for LCE data structures in the non-uniform cell probe model showing that our second trade-off is nearly optimal.
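For concreteness, the preprocessing-free baseline that these data structures improve on is a direct character-by-character scan; the function name below is ours.

```python
def lce(T, i, j):
    """Length of the longest common prefix of the suffixes T[i:] and T[j:].

    Naive scan: O(1) extra space and O(|LCE(i,j)|) time per query. The
    trade-offs in the paper buy better worst-case query time by spending
    extra space on a precomputed data structure.
    """
    n = len(T)
    k = 0
    while i + k < n and j + k < n and T[i + k] == T[j + k]:
        k += 1
    return k
```

For example, `lce("abracadabra", 0, 7)` returns 4, since the suffixes "abracadabra" and "abra" share the prefix "abra".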
Neural networks are proven to be remarkably successful for classification and diagnosis in medical applications. However, the ambiguity in the decision-making process and the interpretability of the learned features are a matter of concern. In this work, we propose a method for improving the feature interpretability of neural network classifiers. Initially, we propose a baseline convolutional neural network with state-of-the-art performance in terms of accuracy and weakly supervised localization. Subsequently, the loss is modified to integrate robustness to adversarial examples into the training process. Here, feature interpretability is quantified via evaluating the weakly supervised localization using the ground truth bounding boxes. Interpretability is also visually assessed using class activation maps and saliency maps. The method is applied to NIH ChestX-ray14, the largest publicly available chest X-ray dataset. We demonstrate that the adversarially robust optimization paradigm improves feature interpretability both quantitatively and visually.
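The abstract does not spell out the robust loss; a common way to integrate adversarial robustness into training is to add a loss term on FGSM-perturbed inputs, sketched below in PyTorch. The step function, the epsilon value, and the clean-plus-adversarial loss combination are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=0.01):
    """One training step with an FGSM-style adversarial term (illustrative)."""
    # Craft an FGSM adversarial example against the current model.
    x_pert = x.clone().detach().requires_grad_(True)
    loss_for_grad = F.cross_entropy(model(x_pert), y)
    grad, = torch.autograd.grad(loss_for_grad, x_pert)
    x_adv = (x_pert + eps * grad.sign()).detach()

    # Train on the clean and adversarial batches jointly.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```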
The study of interactive proofs in the context of distributed network computing is a novel topic, recently introduced by Kol, Oshman, and Saxena [PODC 2018]. In the spirit of sequential interactive proofs theory, we study the power of distributed interactive proofs. This is achieved via a series of results establishing trade-offs between various parameters impacting the power of interactive proofs, including the number of interactions, the certificate size, the communication complexity, and the form of randomness used. Our results also connect distributed interactive proofs with the established field of distributed verification. In general, our results contribute to providing structure to the landscape of distributed interactive proofs.
