
The Effectiveness of Privacy Enhancing Technologies against Fingerprinting

 Added by Michael Tschantz
Publication date: 2018
Language: English





We measure how effective Privacy Enhancing Technologies (PETs) are at protecting users from website fingerprinting. Our measurements use both experimental and observational methods. Experimental methods allow control, precision, and use on new PETs that currently lack a user base. Observational methods enable scale and drawing from the browsers currently in real-world use. By applying experimentally created models of a PET's behavior to an observational data set, our novel hybrid method offers the best of both worlds. We find the Tor Browser Bundle to be the most effective PET among those we tested. We find that some PETs have inconsistent behaviors, which can do more harm than good.
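To make the hybrid method concrete, here is a minimal Python sketch of the underlying idea: learn, experimentally, how a PET rewrites fingerprint attributes, apply that rewrite to an observational fingerprint data set, and compare identifiability before and after. The attribute names, the pet_model mapping, and the surprisal metric are illustrative assumptions, not the paper's actual model or data.

    from collections import Counter
    from math import log2

    # Illustrative attribute order for a simplified fingerprint tuple.
    ATTRIBUTES = ("user_agent", "fonts", "screen")

    def surprisal_bits(fingerprints):
        """Average surprisal (bits) of each fingerprint in the population;
        higher values mean fingerprints single users out more precisely."""
        counts = Counter(fingerprints)
        total = len(fingerprints)
        return sum(-log2(counts[fp] / total) for fp in fingerprints) / total

    def apply_pet_model(fingerprint, pet_model):
        """Rewrite one observed fingerprint the way the PET would have,
        using per-attribute transformations derived experimentally."""
        return tuple(pet_model.get(attr, lambda v: v)(value)
                     for attr, value in zip(ATTRIBUTES, fingerprint))

    # Hypothetical model of a PET that blanks the font list and
    # coarsens the reported screen resolution.
    pet_model = {"fonts": lambda v: "", "screen": lambda v: "coarse"}

    observed = [("UA1", "f1,f2", "1920x1080"),
                ("UA1", "f3", "1366x768"),
                ("UA2", "f1", "1920x1080")]
    protected = [apply_pet_model(fp, pet_model) for fp in observed]
    print(surprisal_bits(observed), surprisal_bits(protected))

Applied to the toy data, the PET model collapses distinct fingerprints together, so the average surprisal drops; a PET with inconsistent behavior would instead split users into rarer, more identifying fingerprints.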




Related research

The AN.ON-Next project aims to integrate privacy-enhancing technologies into the internet's infrastructure and establish them in the consumer mass market. The technologies in focus include baseline protection at the internet-service-provider level, improved overlay-network-based protection, and a concept for privacy protection in the emerging 5G mobile network. A crucial success factor will be the viable adjustment and development of standards, business models, and pricing strategies for these new technologies.
Multisite medical data sharing is critical in modern clinical practice and medical research. The challenge is to conduct data sharing that preserves individual privacy and data usability. The shortcomings of traditional privacy-enhancing technologies mean that institutions rely on bespoke data-sharing contracts. These contracts increase the inefficiency of data sharing and may disincentivize important clinical treatment and medical research. This paper provides a synthesis of two advanced privacy-enhancing technologies (PETs): Homomorphic Encryption (HE) and Secure Multiparty Computation (SMC), combined as Multiparty Homomorphic Encryption (MHE). These PETs provide a mathematical guarantee of privacy, with MHE offering a performance advantage over using HE or SMC separately. We argue that MHE fulfills the legal requirements for medical data sharing under the General Data Protection Regulation (GDPR), which has set a global benchmark for data protection. Specifically, data processed and shared using MHE can be considered anonymized. We explain how MHE can reduce the reliance on customized contractual measures between institutions. The proposed approach can accelerate the pace of medical research while offering additional incentives for healthcare and research institutes to adopt common data-interoperability standards.
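The abstract does not spell out the cryptography, but the secure-multiparty-computation half of MHE can be illustrated with a toy additive secret-sharing sum in pure Python. This is a didactic sketch only: production MHE uses lattice-based homomorphic encryption via dedicated libraries, and the names P, share, and secure_sum are ours.

    import secrets

    P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

    def share(value, n_parties):
        """Split a value into n additive shares that sum to it mod P;
        any n-1 shares together reveal nothing about the value."""
        shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    def secure_sum(values):
        """Each site secret-shares its private value; sites only ever see
        shares, and only the aggregate total is reconstructed."""
        n = len(values)
        all_shares = [share(v, n) for v in values]
        # Party i locally sums the i-th share it received from every site.
        partials = [sum(s[i] for s in all_shares) % P for i in range(n)]
        return sum(partials) % P

    site_counts = [120, 75, 230]  # e.g., per-hospital cohort sizes
    assert secure_sum(site_counts) == sum(site_counts)

The design point the paper leans on is visible even here: no site's raw value ever leaves its premises, only uniformly random-looking shares do, so the aggregate can be computed without a bespoke trust agreement about the inputs.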
Yufei Chen, Chao Shen, Cong Wang (2021)
Transfer learning has become a common solution to training-data scarcity in practice. It trains a specified student model by reusing or fine-tuning the early layers of a well-trained teacher model that is usually publicly available. However, besides improving utility, the transferred public knowledge also threatens model confidentiality and can raise further security and privacy issues. In this paper, we present the first comprehensive investigation of the teacher-model exposure threat in the transfer learning context, aiming to gain deeper insight into the tension between public knowledge and model confidentiality. To this end, we propose a teacher model fingerprinting attack that infers the origin of a student model, i.e., the teacher model it transfers from. Specifically, we propose a novel optimization-based method to carefully generate queries that probe the student model and realize our attack. Unlike existing model reverse-engineering approaches, our fingerprinting method relies neither on fine-grained model outputs, e.g., posteriors, nor on auxiliary information about the model architecture or training dataset. We systematically evaluate the effectiveness of the proposed attack. The empirical results demonstrate that our attack can accurately identify the model origin with few probing queries. Moreover, we show that the proposed attack can serve as a stepping stone to other attacks against machine learning models, such as model stealing.
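As a rough illustration of the attack's shape, the sketch below attributes a student model to the candidate teacher whose hard labels it matches most often on probing inputs. The paper generates probes with a dedicated optimization; random probes and label agreement are simplifying assumptions here, and all names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def agreement(student, teacher, probes):
        """Fraction of probes on which the two models' hard labels agree."""
        s = np.array([student(x) for x in probes])
        t = np.array([teacher(x) for x in probes])
        return float((s == t).mean())

    def infer_teacher(student, candidates, n_probes=64, dim=16):
        """Attribute the student to the candidate teacher it matches best."""
        probes = rng.normal(size=(n_probes, dim))
        scores = {name: agreement(student, f, probes)
                  for name, f in candidates.items()}
        return max(scores, key=scores.get), scores

    # Toy demo: the "student" reuses teacher A's decision rule outright.
    teacher_a = lambda x: int(x[0] > 0)
    teacher_b = lambda x: int(x[1] > 0)
    best, scores = infer_teacher(teacher_a, {"A": teacher_a, "B": teacher_b})
    print(best, scores)  # expected: A, with agreement 1.0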
Dashan Gao, Ben Tan, Ce Ju (2020)
Matrix factorization has been very successful in practical recommendation applications and e-commerce. Due to data shortages and stringent regulations, it can be hard for a single company to collect sufficient data to build performant recommender systems. Federated learning makes it possible to bridge data silos and build machine learning models without compromising privacy and security. Participants sharing common users or items collaboratively build a model over the data of all participants. Some work has explored applying federated learning to recommender systems and the privacy issues in collaborative filtering systems; however, the privacy threats in federated matrix factorization have not been studied. In this paper, we categorize federated matrix factorization into three types based on the partition of the feature space, analyze the privacy threats against each type of federated matrix factorization model, and discuss privacy-preserving approaches. As far as we are aware, this is the first study of the privacy threats of the matrix factorization method in the federated learning framework.
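To fix ideas, here is a minimal numpy sketch of one partitioning such a categorization could cover: a horizontal split in which each party holds the rating rows of its own users, keeps the user factors local, and shares only item-factor gradients with a coordinator. The shapes, learning rate, and naive plaintext-gradient aggregation are illustrative assumptions; that shared gradient is exactly the kind of channel whose leakage such an analysis examines.

    import numpy as np

    rng = np.random.default_rng(1)
    n_users, n_items, k, lr = 6, 5, 3, 0.05

    ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
    parties = [range(0, 3), range(3, 6)]  # horizontal partition by user

    U = rng.normal(scale=0.1, size=(n_users, k))  # user factors stay local
    V = rng.normal(scale=0.1, size=(n_items, k))  # item factors are shared

    for _ in range(200):
        grad_V = np.zeros_like(V)
        for users in parties:
            for u in users:
                err = ratings[u] - U[u] @ V.T        # local prediction error
                U[u] += lr * (err @ V)               # user update stays on-site
                grad_V += lr * np.outer(err, U[u])   # item gradient is shared
        V += grad_V / len(parties)                   # coordinator aggregates

    print(np.abs(ratings - U @ V.T).mean())  # reconstruction error shrinks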
Blockchains are turning into decentralized computing platforms and are gaining worldwide recognition for their unique advantages. There is an emerging trend beyond payments: blockchains could enable a new breed of decentralized applications and serve as the foundation of the Internet's security infrastructure. The immutable nature of the blockchain makes it a winner on security and transparency; it is nearly inconceivable for ledgers to be altered in a way not instantly clear to every user involved. However, most blockchains fall short on privacy, particularly data protection. Garlic Routing and Onion Routing are two major Privacy Enhancing Techniques (PETs) popular for anonymization and security. Garlic Routing, used by the I2P Anonymous Network, hides the identities of the sender and receiver of data packets by bundling multiple messages into a layered encryption structure. Onion Routing attempts to provide low-latency Internet-based connections that resist traffic analysis, deanonymization attacks, eavesdropping, and other attacks by both outsiders (e.g., Internet routers) and insiders (the Onion Routing servers themselves). As there is some controversy over how well these two techniques resist privacy attacks, we propose a PET-Enabled Sidechain (PETES) as a new privacy-enhancing technique, integrating Garlic Routing and Onion Routing into a Garlic Onion Routing (GOR) framework suited to the structure of blockchains. The preliminary GOR design aims to improve the privacy of blockchain transactions via the PETES structure.
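The layered-encryption idea shared by both techniques is easy to sketch. The fragment below wraps a message in one symmetric layer per hop, using the Python cryptography package's Fernet as a stand-in cipher; real Tor/I2P circuits negotiate keys per hop and carry routing headers, garlic routing additionally bundles several messages ("cloves") into one structure, and the proposed GOR framework itself is not shown.

    from cryptography.fernet import Fernet

    # One symmetric key per relay on the circuit (entry, middle, exit).
    hops = [Fernet(Fernet.generate_key()) for _ in range(3)]

    def wrap_onion(message, hops):
        """Encrypt in reverse path order, producing E1(E2(E3(m))), so the
        entry relay peels the outer layer and the exit recovers the payload."""
        layer = message
        for hop in reversed(hops):
            layer = hop.encrypt(layer)
        return layer

    onion = wrap_onion(b"tx payload", hops)
    for hop in hops:            # each relay removes exactly one layer
        onion = hop.decrypt(onion)
    assert onion == b"tx payload"

Because each relay can remove only its own layer, no single hop sees both the sender and the plaintext destination, which is the property the proposed sidechain would import into blockchain transaction handling.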