With current technology, a number of entities have access to user mobility traces at different levels of spatio-temporal granularity. At the same time, users frequently reveal their location through different means, including geo-tagged social media posts and mobile app usage. Such leaks are often bound to a pseudonym or a fake identity in an attempt to preserve one's privacy. In this work, we investigate how large-scale mobility traces can de-anonymize anonymous location leaks. By mining the country-wide mobility traces of tens of millions of users, we aim to understand how many location leaks are required to uniquely match a trace, how spatio-temporal obfuscation decreases the matching quality, and how the popularity of a location and the time of a leak influence de-anonymization. We also study the mobility characteristics of those individuals whose anonymous leaks are more prone to identification. Finally, by extending our matching methodology to full traces, we show that large-scale human mobility is highly unique. Our quantitative results have implications for the privacy of users' traces, and may serve as a guideline for future policies regarding the management and publication of mobility data.
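The matching question at the core of this abstract (how many leaked points pin down a single trace) can be stated very compactly. The sketch below is a minimal illustration of that idea, not the paper's pipeline: it assumes a hypothetical data layout in which each trace is a set of (spatial cell, time bucket) points, and it models spatio-temporal obfuscation as coarsening the buckets.

```python
# Minimal uniqueness-matching sketch (hypothetical data layout, not the
# paper's method): a leak is de-anonymized when exactly one trace
# contains all leaked spatio-temporal points.
from typing import Dict, Set, Tuple

Point = Tuple[int, int]  # (spatial cell id, coarse time bucket)

def matching_users(traces: Dict[str, Set[Point]],
                   leaks: Set[Point]) -> Set[str]:
    """Return the users whose trace covers every leaked point."""
    return {user for user, trace in traces.items() if leaks <= trace}

def is_unique_match(traces: Dict[str, Set[Point]],
                    leaks: Set[Point]) -> bool:
    """True when the leaks identify exactly one candidate trace."""
    return len(matching_users(traces, leaks)) == 1

def coarsen(p: Point, factor: int = 4) -> Point:
    """Model temporal obfuscation by widening the time bucket; coarser
    buckets enlarge the candidate set and degrade matching quality."""
    cell, hour = p
    return (cell, hour // factor)
```

Under this toy model, adding leaked points can only shrink the candidate set, while coarsening can only grow it, which mirrors the trade-off the abstract quantifies.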
In this paper we aim to compare Kurepa trees and Aronszajn trees. Moreover, we analyze the effect of large cardinal assumptions on this comparison. Using the method of walks on ordinals, we show that, assuming an inaccessible cardinal, it is consistent with ZFC that there is a Kurepa tree and every Kurepa tree contains a Souslin subtree. This is stronger than Komj\'ath's theorem, which asserts the same consistency from two inaccessible cardinals. We show that our large cardinal assumption is optimal, i.e. if every Kurepa tree has an Aronszajn subtree then $\omega_2$ is inaccessible in the constructible universe $\textsc{L}$. Moreover, we prove it is consistent with ZFC that there is a Kurepa tree $T$ such that if $U \subset T$ is a Kurepa tree with the order inherited from $T$, then $U$ has an Aronszajn subtree. This theorem uses no large cardinal assumption. Our last theorem immediately implies the following: assume $\textrm{MA}_{\omega_2}$ holds and $\omega_2$ is not a Mahlo cardinal in $\textsc{L}$. Then there is a Kurepa tree with the property that every Kurepa subset has an Aronszajn subtree. Our work entails proving a new lemma about Todorcevic's $\rho$ function which might be useful in other contexts.
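For readers unfamiliar with the $\rho$ function mentioned above, the following is the standard recursive definition from the theory of walks on ordinals, included as background only; the paper's new lemma is not reproduced here, and this formulation (relative to a fixed $C$-sequence) is our assumption about which variant is meant.

```latex
% Background: Todorcevic's rho function on [omega_2]^2, defined relative
% to a fixed C-sequence <C_alpha : alpha < omega_2> with
% otp(C_alpha) <= omega_1. (Standard definition; the paper's new lemma
% about rho is not restated here.)
\rho(\beta,\alpha) = \sup\bigl\{\, \operatorname{otp}(C_\alpha \cap \beta),\;
    \rho\bigl(\beta, \min(C_\alpha \setminus \beta)\bigr),\;
    \rho(\xi,\beta) : \xi \in C_\alpha \cap \beta \,\bigr\},
\qquad \rho(\alpha,\alpha) = 0 .
```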
Adversarial patch attacks against image-classification deep neural networks (DNNs), in which the attacker can inject arbitrary distortions within a bounded region of an image, are able to generate adversarial perturbations that are robust (i.e., remain adversarial in the physical world) and universal (i.e., remain adversarial on any input). It is thus important to detect and mitigate such attacks to ensure the security of DNNs. This work proposes Jujutsu, a technique to detect and mitigate robust and universal adversarial patch attacks. Jujutsu leverages the universal property of the patch attack for detection. It uses explainable-AI techniques to identify suspicious features that are potentially malicious, and verifies their maliciousness by transplanting the suspicious features onto new images. An adversarial patch continues to exhibit its malicious behavior on the new images and thus can be detected based on prediction consistency. Jujutsu leverages the localized nature of the patch attack for mitigation, by randomly masking the suspicious features to remove the adversarial perturbations. However, the network might fail to classify the image once some of its content is removed (masked). Therefore, Jujutsu uses image inpainting to synthesize alternative content for the masked pixels, which can reconstruct the clean image for correct prediction. We evaluate Jujutsu on five DNNs and two datasets, and show that Jujutsu achieves superior performance and significantly outperforms existing techniques. Jujutsu can further defend against various variants of the basic attack, including 1) physical-world attacks; 2) attacks that target diverse classes; 3) attacks that use patches of different shapes; and 4) adaptive attacks.
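A simplified sketch of the two stages described above may help make the detect-then-mitigate flow concrete. This is a hypothetical reconstruction, not the paper's implementation: plain input gradients stand in for the explainable-AI component, a fixed square region stands in for the suspicious-feature extraction, and OpenCV inpainting stands in for the reconstruction step.

```python
# Illustrative sketch of Jujutsu-style detection and mitigation under
# simplifying assumptions (gradient saliency, square regions, Telea
# inpainting); not the authors' implementation.
import torch
import torch.nn.functional as F
import numpy as np
import cv2

def salient_box(model, x, box=48):
    """Top-left corner of the most gradient-salient box-sized region."""
    x = x.clone().requires_grad_(True)
    model(x.unsqueeze(0)).max().backward()
    sal = x.grad.abs().sum(0)                       # (H, W) saliency map
    heat = F.avg_pool2d(sal[None, None], box, stride=1).squeeze()
    idx = heat.argmax()
    return int(idx // heat.shape[1]), int(idx % heat.shape[1])

def is_suspicious(model, x, holdout, box=48):
    """Transplant the salient region onto a held-out image; a universal
    patch keeps forcing the same prediction there (consistency test)."""
    r, c = salient_box(model, x, box)
    grafted = holdout.clone()
    grafted[:, r:r + box, c:c + box] = x[:, r:r + box, c:c + box]
    pred_x = model(x.unsqueeze(0)).argmax()
    pred_g = model(grafted.unsqueeze(0)).argmax()
    return bool(pred_x == pred_g)

def mitigate(x, r, c, box=48):
    """Mask the suspicious region and inpaint replacement content."""
    img = (x.permute(1, 2, 0).numpy() * 255).astype(np.uint8)
    mask = np.zeros(img.shape[:2], np.uint8)
    mask[r:r + box, c:c + box] = 255
    clean = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
    return torch.from_numpy(clean).permute(2, 0, 1).float() / 255
```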
Today, two-factor authentication (2FA) is a widely deployed mechanism to counter phishing attacks. Although much effort has been invested in 2FA, most 2FA systems are still vulnerable to carefully designed phishing attacks, and some even require special hardware, which limits their wide deployment. Recently, real-time phishing (RTP) has made the situation even worse, because an adversary can effortlessly set up a phishing website replicating a target website without any background in web page design. Traditional 2FA can be easily bypassed by such RTP attacks. In this work, we propose PhotoAuth, a novel 2FA system to counter RTP attacks. The main idea is to ask the user to take a photo of the web browser with the domain name in the address bar as the second authentication factor. The web server extracts the domain name using optical character recognition (OCR) and then determines whether the user is visiting the genuine website or a fake one, thus defeating RTP attacks, in which the adversary must set up a fake website with a different domain. We prototyped our system and evaluated its performance in various environments. The results show that PhotoAuth is an effective technique with good scalability. We also show that, compared to other 2FA systems, PhotoAuth has several advantages; in particular, no special hardware or software support is needed on the client side except a phone, making it readily deployable.
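The server-side check described above reduces to "OCR the photo, find the domain, compare". The sketch below illustrates that core check under our own tooling assumptions (pytesseract and Pillow are illustrative choices, not necessarily the paper's stack, and the regex is a deliberately naive domain matcher).

```python
# Minimal sketch of the PhotoAuth-style server check, assuming
# pytesseract/Pillow as the OCR stack (an illustrative choice).
import re
import pytesseract
from PIL import Image

DOMAIN_RE = re.compile(r"([a-z0-9-]+\.)+[a-z]{2,}", re.IGNORECASE)

def extract_domains(photo_path: str) -> set:
    """OCR the browser photo and collect domain-looking strings."""
    text = pytesseract.image_to_string(Image.open(photo_path))
    return {m.group(0).lower() for m in DOMAIN_RE.finditer(text)}

def second_factor_ok(photo_path: str, legit_domain: str) -> bool:
    """An RTP proxy must serve from a different domain, so a photo taken
    on the phishing page will not contain the legitimate domain."""
    return legit_domain.lower() in extract_domains(photo_path)

# e.g. second_factor_ok("login_photo.jpg", "bank.example.com")
```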
Smartphones and laptops can be unlocked by face or fingerprint recognition, whereas neural networks, which confront numerous requests every day, have little capability to distinguish untrustworthy from credible users. This makes a model risky to trade as a commodity. Existing research either focuses on the intellectual property rights of the commercialized model or traces the source of a leak after pirated models appear. Nevertheless, actively verifying a user's legitimacy before producing output has not been considered yet. In this paper, we propose Model-Lock (M-LOCK) to realize an end-to-end neural network with local dynamic access control, similar to the automatic locking function of a smartphone, which actively prevents malicious attackers from obtaining usable performance while the owner is away. Three kinds of model-training strategy are essential to achieve the tremendous performance divergence between certified and suspect inputs within one neural network. Extensive experiments on the MNIST, FashionMNIST, CIFAR10, CIFAR100, SVHN, and GTSRB datasets demonstrate the feasibility and effectiveness of the proposed scheme.
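To make the certified-versus-suspect divergence concrete, here is a hypothetical sketch of one plausible training strategy in this spirit (the paper proposes three; none is reproduced here): inputs stamped with a secret trigger are trained on true labels, while unstamped inputs are pushed toward uninformative labels, so the deployed model performs well only for certified queries.

```python
# Hypothetical M-LOCK-style training step (our illustration, not the
# paper's strategies): trigger present -> real task; absent -> noise.
import torch
import torch.nn.functional as F

def stamp(x, trigger):
    """Overlay the secret trigger pattern in a fixed corner of the batch."""
    x = x.clone()
    h, w = trigger.shape[-2:]
    x[..., :h, :w] = trigger
    return x

def mlock_step(model, optimizer, x, y, trigger, num_classes):
    optimizer.zero_grad()
    # Certified path: stamped inputs learn the real labels.
    loss_cert = F.cross_entropy(model(stamp(x, trigger)), y)
    # Suspect path: unstamped inputs are trained toward random labels,
    # destroying usable accuracy for anyone without the trigger.
    rand_y = torch.randint(0, num_classes, y.shape, device=y.device)
    loss_lock = F.cross_entropy(model(x), rand_y)
    loss = loss_cert + loss_lock
    loss.backward()
    optimizer.step()
    return loss.item()
```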
Deep learning techniques have made tremendous progress in a variety of challenging tasks, such as image recognition and machine translation, during the past decade. Training deep neural networks is computationally expensive and demands substantial human and intellectual resources. It is therefore necessary to protect the intellectual property of the model and to verify its ownership externally. However, previous studies either fail to defend against evasion attacks or do not explicitly deal with fraudulent claims of ownership by adversaries. Furthermore, they cannot establish a clear association between the model and the creator's identity. To fill these gaps, in this paper we propose a novel intellectual property protection (IPP) framework based on blind watermarks for watermarking deep neural networks that meets the requirements of security and feasibility. Our framework accepts ordinary samples and an exclusive logo as inputs, and outputs newly generated samples as watermarks that are almost indistinguishable from the originals; it infuses these watermarks into DNN models by assigning them specific labels, leaving a backdoor as the basis for our copyright claim. We evaluated our IPP framework on two benchmark datasets and 15 popular deep learning models. The results show that our framework successfully verifies the ownership of all the models without a noticeable impact on their primary task. Most importantly, we are the first to successfully design and implement a blind-watermark-based framework that achieves state-of-the-art performance in undetectability against evasion attacks and unforgeability against fraudulent claims of ownership. Further, our framework shows remarkable robustness and establishes a clear association between the model and the author's identity.
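The embed-then-verify workflow described above can be sketched in a few lines. This is an illustrative simplification under stated assumptions: a fixed low-intensity blend stands in for the paper's learned watermark generator, and ownership is verified by checking that a suspect model returns the chosen key label on watermarked queries.

```python
# Illustrative blind-watermark workflow (simplified stand-in for the
# paper's generator): blend logo -> backdoor-embed -> query to verify.
import torch

def make_watermark(sample, logo, alpha=0.03):
    """Blend the exclusive logo into an ordinary sample so the result
    stays visually near-indistinguishable from the original."""
    return (1 - alpha) * sample + alpha * logo

def embed_step(model, optimizer, loss_fn, samples, logo, key_label):
    """One embedding step: watermarked samples map to the key label."""
    wm = make_watermark(samples, logo)
    target = torch.full((samples.size(0),), key_label, dtype=torch.long)
    optimizer.zero_grad()
    loss = loss_fn(model(wm), target)
    loss.backward()
    optimizer.step()

def verify_ownership(model, probes, logo, key_label, threshold=0.9):
    """Claim copyright if the suspect model consistently emits the key
    label on watermarked probes."""
    preds = model(make_watermark(probes, logo)).argmax(dim=1)
    return (preds == key_label).float().mean().item() >= threshold
```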