We first propose a privacy-protection cache policy that enforces the duty to delete personal information on a hybrid main memory system. The policy generates random data and overwrites the personal information with it. The proposed cache policy is more economical and effective with respect to the complete deletion of data.
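The core mechanism, generating random data and overwriting it onto the personal information, can be illustrated with a minimal sketch. The buffer layout and function name below are hypothetical; the paper targets hybrid main memory hardware, not Python objects:

```python
import os

def overwrite_with_random(buf: bytearray, start: int, length: int) -> None:
    """Overwrite a region holding personal information with random bytes,
    so the original values cannot be recovered from the memory cells."""
    buf[start:start + length] = os.urandom(length)

# Toy usage: a buffer standing in for a cache line that holds a name.
cache_line = bytearray(b"name=Alice Smith;rest-of-line...")
overwrite_with_random(cache_line, 5, 11)  # erase "Alice Smith"
print(cache_line)
```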
Location privacy has been extensively studied in the literature. However, existing location privacy models are either not rigorous or not customizable, which limits the trade-off between privacy and utility in many real-world applications. To address this issue, we propose a new location privacy notion called PGLP, i.e., Policy Graph based Location Privacy, which provides a rich interface for releasing private locations with a customizable and rigorous privacy guarantee. First, we design the privacy metrics of PGLP by extending differential privacy. Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable. Second, we investigate how to satisfy an arbitrarily given location policy graph under adversarial knowledge. We find that a location policy graph may not always be viable and may suffer location exposure when the attacker knows the user's mobility pattern. We propose efficient methods to detect location exposure and repair the policy graph with optimal utility. Third, we design a private location-trace release framework that pipelines the detection of location exposure, policy-graph repair, and private trajectory release with customizable and rigorous location privacy. Finally, we conduct experiments on real-world datasets to verify the effectiveness of the privacy-utility trade-off and the efficiency of the proposed algorithms.
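To make the policy-graph idea concrete, here is a toy sketch: nodes are locations, an edge means two locations must remain indistinguishable, and a location is "exposed" when the adversary's knowledge leaves it nothing to hide among. The data structures and the exposure criterion are simplified assumptions; PGLP's actual definitions are more involved:

```python
from typing import Dict, Set

# A location policy graph: nodes are locations; an edge (u, v) means the
# release mechanism must keep u and v indistinguishable.
PolicyGraph = Dict[str, Set[str]]

def exposed_locations(graph: PolicyGraph, feasible: Set[str]) -> Set[str]:
    """A location is 'exposed' if, after restricting to the locations the
    adversary considers feasible (e.g., from a mobility model), it has no
    remaining neighbor to hide among."""
    return {u for u in feasible if not (graph.get(u, set()) & feasible)}

policy = {"home": {"cafe"}, "cafe": {"home", "park"}, "park": {"cafe"}}
print(exposed_locations(policy, feasible={"home"}))          # {'home'}: exposure
print(exposed_locations(policy, feasible={"home", "cafe"}))  # set(): safe
```

Repairing a non-viable graph would then amount to adding or merging edges until no feasible location is exposed, at minimal utility cost.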
Privacy protection is an important research area, which is especially critical in this big-data era. To a large extent, the privacy of visual classification data lies in the mapping between an image and its corresponding label, since this relation provides a great amount of information and can be used in other scenarios. In this paper, we propose mapping distortion based protection (MDP) and its augmentation-based extension (AugMDP) to protect data privacy by modifying the original dataset. In the modified dataset generated by MDP, the image and its label are not consistent (e.g., a cat-like image is labeled as a dog), whereas DNNs trained on it can still achieve good performance on the benign testing set. As such, this method can protect privacy even when the dataset is leaked. Extensive experiments are conducted, which verify the effectiveness and feasibility of our method. The code for reproducing the main results is available at https://github.com/PerdonLiu/Visual-Privacy-Protection-via-Mapping-Distortion.
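A highly simplified sketch of the label-side inconsistency follows: a fixed class derangement relabels every sample, so leaked (image, label) pairs look wrong. Note this is only an assumption-laden illustration; the actual MDP method distorts the images themselves so that training on the modified pairs still yields good benign accuracy:

```python
import random

def distort_labels(labels, num_classes, seed=0):
    """Relabel every sample with a fixed class derangement, so stored
    (image, label) pairs look inconsistent to anyone who leaks the data."""
    rng = random.Random(seed)
    perm = list(range(num_classes))
    while True:  # draw a derangement: no class maps to itself
        rng.shuffle(perm)
        if all(p != i for i, p in enumerate(perm)):
            break
    return [perm[y] for y in labels], perm

labels, perm = distort_labels([0, 1, 2, 1], num_classes=3)
print(perm, labels)  # e.g. a cat (class 0) is now stored as class perm[0]
```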
Recently, a medical privacy protection scheme (MPPS) based on DNA coding and chaos was proposed in [IEEE Trans. Nanobioscience, vol. 16, pp. 850--858, 2017], which uses two coupled chaotic systems to generate cryptographic primitives for encrypting color DICOM images. Relying on several statistical experimental results and some theoretical analyses, the designers of MPPS claimed that it is secure against chosen-plaintext attack and other classic attacks. However, this conclusion is insufficient without cryptanalysis. In this paper, we first study some properties of MPPS and DNA coding and then propose a chosen-plaintext attack to reveal its equivalent secret key. We prove that the attack needs only $\lceil \log_{256}(3\cdot M\cdot N)\rceil+4$ chosen plain-images, where $M \times N$ is the size of the RGB color image and 3 is the number of color channels. The other claimed superiorities are also questioned from the viewpoint of modern cryptography. Both theoretical and experimental results are provided to support the feasibility of the attack and the other reported security defects. The proposed cryptanalysis will promote the proper application of DNA encoding in protecting multimedia data, including DICOM images.
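The stated bound is easy to evaluate. A small worked example of the arithmetic (the function name is ours, not from the paper):

```python
from math import ceil, log

def chosen_images_needed(M: int, N: int) -> int:
    """Chosen plain-images the attack needs for an M x N RGB image:
    ceil(log_256(3*M*N)) + 4."""
    return ceil(log(3 * M * N, 256)) + 4

print(chosen_images_needed(512, 512))    # ceil(log_256(786432)) + 4 = 3 + 4 = 7
print(chosen_images_needed(4096, 4096))  # ceil(log_256(50331648)) + 4 = 4 + 4 = 8
```

The logarithmic growth means even very large images require only a handful of chosen plain-images, which is what makes the attack practical.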
Privacy is a major concern in sharing human subject data with researchers for secondary analyses. A simple binary consent (opt-in or not) may significantly reduce the amount of sharable data, since many patients might only be concerned about a few sensitive medical conditions rather than the entire medical record. We propose event-level privacy protection and develop a feature ablation method to protect event-level privacy in electronic medical records. Using a list of 13 sensitive diagnoses, we evaluate the feasibility and efficacy of the proposed method. As feature ablation progresses, the identifiability of a sensitive medical condition decreases at varying speeds for different diseases. We find that these sensitive diagnoses can be divided into 3 categories: (1) 5 diseases have fast declining identifiability (AUC below 0.6 with fewer than 400 features excluded); (2) 7 diseases have progressively declining identifiability (AUC below 0.7 with between 200 and 700 features excluded); and (3) 1 disease has slowly declining identifiability (AUC above 0.7 with 1000 features excluded). The fact that the majority (12 out of 13) of the sensitive diseases fall into the first two categories suggests the potential of the proposed feature ablation method as a solution for event-level record privacy protection.
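A minimal sketch of such a greedy ablation loop follows, with synthetic data and a logistic model standing in for the paper's actual EMR features, classifier, and stopping rule (all placeholders are our assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))                 # stand-in for EMR features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in sensitive diagnosis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
kept = list(range(X.shape[1]))
while kept:
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, kept], y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, kept])[:, 1])
    if auc < 0.6:  # identifiability is low enough: stop ablating
        break
    # ablate the currently most predictive feature
    kept.pop(int(np.argmax(np.abs(clf.coef_[0]))))
print(f"features removed: {X.shape[1] - len(kept)}, final AUC: {auc:.2f}")
```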
Governments around the world have become increasingly frustrated with tech giants dictating public health policy. The software created by Apple and Google enables individuals to track their own potential exposure through collated exposure notifications. However, the same software prohibits location tracking, denying key information needed by public health officials for robust contact tracing. This information is needed to treat and isolate COVID-19-positive people, identify transmission hotspots, and protect against the continued spread of infection. In this article, we present two simple ideas, the lighthouse and the covid-commons, that address the needs of public health authorities while preserving the privacy-sensitive goals of the Apple and Google exposure notification protocols.