Website fingerprinting attacks enable an adversary to infer which website a victim is visiting, even if the victim uses an encrypting proxy, such as Tor. Previous work has shown that all proposed defenses against website fingerprinting attacks are ineffective. This paper advances the study of website fingerprinting attacks and defenses in two ways. First, we develop bounds on the trade-off between security and bandwidth overhead that any fingerprinting defense scheme can achieve. This enables us to compare schemes with different security/overhead trade-offs by comparing how close they are to the lower bound. Second, we refine, implement, and evaluate the Congestion-Sensitive BuFLO (CS-BuFLO) scheme outlined by Cai et al. CS-BuFLO, which is based on the provably secure BuFLO defense proposed by Dyer et al., was not fully specified by Cai et al., but has nonetheless attracted the attention of the Tor developers. Our experiments find that CS-BuFLO has high overhead (around 2.3-2.8x) but comes 6x closer to the bandwidth/security trade-off lower bound than Tor or plain SSH.
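To make the mechanism concrete: BuFLO-style defenses send fixed-size cells at a fixed interval, transmitting dummy padding whenever less than a full cell of real data is queued, and the bandwidth overhead above is the resulting wire-bytes-to-real-bytes ratio. Below is a minimal illustrative sketch of that idea in Python; the cell size, tick schedule, and example trace are assumptions for illustration, not CS-BuFLO's actual parameters.

    # Illustrative sketch of a BuFLO-style constant-rate defense: every tick,
    # exactly one fixed-size cell goes on the wire, padded with dummy bytes
    # when less than a full cell of real data is queued. CELL_SIZE, the tick
    # schedule, and the trace below are assumptions, not CS-BuFLO parameters.
    CELL_SIZE = 512  # bytes per fixed-size cell (assumed)

    def buflo_pad(real_bytes_per_tick):
        """Return the bytes sent on the wire at each tick: one full cell per
        tick, carrying as much queued real data as fits, padding the rest."""
        sent, queue = [], 0
        for real in real_bytes_per_tick:
            queue += real
            queue -= min(queue, CELL_SIZE)  # real payload carried this tick
            sent.append(CELL_SIZE)          # the wire always sees a full cell
        while queue > 0:                    # keep sending until data drains
            queue -= min(queue, CELL_SIZE)
            sent.append(CELL_SIZE)
        return sent

    def bandwidth_overhead(real_bytes_per_tick):
        """Overhead factor = bytes on the wire / real bytes carried."""
        return sum(buflo_pad(real_bytes_per_tick)) / sum(real_bytes_per_tick)

    # A bursty example trace (bytes of real payload arriving at each tick).
    trace = [1400, 0, 0, 300, 0, 900, 0, 0, 0, 200]
    print(f"overhead: {bandwidth_overhead(trace):.2f}x")  # ~1.83x

On this toy trace the constant-rate schedule costs roughly 1.83x; the 2.3-2.8x figure reported above is this kind of ratio, measured for CS-BuFLO on real traffic.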
Website Fingerprinting (WF) attacks raise major concerns about users' privacy. They employ Machine Learning (ML) to allow a local passive adversary to uncover the Web browsing behavior of a user, even if she browses through an encrypted tunnel (e.g., Tor).
Website fingerprinting attacks, which use statistical analysis on network traffic to compromise user privacy, have been shown to be effective even if the traffic is sent over anonymity-preserving networks such as Tor. The classical attack model used …
Browser fingerprinting is an invasive and opaque stateless tracking technique. Browser vendors, academics, and standards bodies have long struggled to provide meaningful protections against browser fingerprinting that are both accurate and do not degrade user experience.
Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, …
It is critical to understand the privacy and robustness vulnerabilities of machine learning models, as their implementation expands in scope. In membership inference attacks, adversaries can determine whether a particular set of data was used in training.
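As a concrete illustration of the membership inference idea, here is a hedged sketch of the simplest confidence-threshold variant, not the method of the work summarized above: models are often more confident on records they were trained on, so thresholding the maximum class probability yields a crude membership guess. The function names, threshold, and toy model are illustrative assumptions.

    import numpy as np

    # Hedged sketch of the simplest membership inference heuristic:
    # threshold the model's maximum class probability, since models are
    # often more confident on records they were trained on. The cutoff
    # and the toy model below are illustrative assumptions.
    def infer_membership(predict_proba, record, threshold=0.9):
        """Guess True ('training member') if the model's top class
        probability on `record` exceeds an assumed cutoff."""
        return float(np.max(predict_proba(record))) >= threshold

    # Toy model that is overconfident on one memorized record.
    toy_model = lambda x: np.array([0.97, 0.03]) if x == "seen" else np.array([0.6, 0.4])
    print(infer_membership(toy_model, "seen"))    # True  -> guessed member
    print(infer_membership(toy_model, "unseen"))  # False -> guessed non-member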