
PPaaS: Privacy Preservation as a Service

Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Personally identifiable information (PII) can find its way into cyberspace through various channels, and many potential sources can leak such information. Data sharing (e.g., cross-agency data sharing) for machine learning and analytics is an important component of data science. However, due to privacy concerns, data should be enforced with strong privacy guarantees before sharing. Many privacy-preserving approaches have been developed for privacy-preserving data sharing; however, identifying the best approach for a given dataset remains a challenge. Several parameters influence the efficacy of the process, such as the characteristics of the input dataset, the strength of the privacy-preservation approach, and the expected level of utility of the resulting dataset on the corresponding data mining application (e.g., classification). This paper presents a framework named Privacy Preservation as a Service (PPaaS) to reduce this complexity. The proposed method employs selective privacy preservation via data perturbation and considers the different dynamics that can influence the quality of the privacy preservation of a dataset. PPaaS maintains pools of data perturbation methods and, for each application and input dataset, selects the most suitable perturbation approach after rigorous evaluation. It enhances the usability of the privacy-preserving methods within its pool: it is a generic platform that can sanitize big data in a granular, application-specific manner by employing a suitable combination of diverse privacy-preserving algorithms, providing a proper balance between privacy and utility.
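The selection loop at the heart of PPaaS can be pictured as below. This is a minimal sketch, not the authors' implementation: the two perturbation methods, the privacy scorer (mean per-record distortion), and the utility scorer (cross-validated classifier accuracy on the perturbed data) are all hypothetical stand-ins for the pools and evaluation the paper describes.

# Minimal sketch of a PPaaS-style selector (hypothetical API, not the paper's code).
# For each candidate perturbation method: perturb the dataset, score the result on
# privacy and on utility for the target task, and keep the best trade-off.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def additive_noise(X, scale=0.5):
    """Perturb features with Gaussian noise."""
    return X + np.random.normal(0.0, scale, X.shape)

def random_rotation(X):
    """Perturb features with a random orthogonal rotation."""
    q, _ = np.linalg.qr(np.random.randn(X.shape[1], X.shape[1]))
    return X @ q

def utility_score(X_pert, y):
    """Utility proxy: accuracy of the downstream classifier on perturbed data."""
    return cross_val_score(LogisticRegression(max_iter=1000), X_pert, y, cv=5).mean()

def privacy_score(X, X_pert):
    """Privacy proxy (placeholder): mean per-record distortion, higher = more private."""
    return np.mean(np.linalg.norm(X - X_pert, axis=1))

def select_method(X, y, methods, alpha=0.5):
    """Pick the perturbation whose output best balances privacy and utility.
    Note: the two scores live on different scales; a real system would
    normalize them before weighting."""
    best_name, best_score = None, -np.inf
    for method in methods:
        X_pert = method(X)
        score = alpha * privacy_score(X, X_pert) + (1 - alpha) * utility_score(X_pert, y)
        if score > best_score:
            best_name, best_score = method.__name__, score
    return best_name

# chosen = select_method(X_train, y_train, [additive_noise, random_rotation])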




Read also

Aiming at the privacy preservation of dynamic Web service composition, this paper proposes an SDN-based runtime security enforcement approach. The main idea is that the owner of the service composition uses a security policy model (SPM) to define, in the application plane, the access-control relationships the composition must comply with; the SPM is then transformed into a low-level security policy model (RSPM) containing SDN data-plane information, and the RSPM is uploaded to the SDN controller. After uploading, a virtual-machine access-control algorithm integrated in the SDN controller monitors all access requests towards the service composition at runtime. Only requests that satisfy the RSPM are forwarded to the target terminal; any request that violates it is automatically blocked by OpenFlow switches or deleted by the SDN controller. This approach thus effectively addresses network-layer illegal access, identity-theft attacks, and service leakage while a Web service composition is running. To verify feasibility, the paper implements an experimental system using the POX controller and the Mininet virtual network simulator, and evaluates the approach's effectiveness and performance on this system. The experimental results show that the method is effective and consistently produces correct results in acceptable time as the scale of the RSPM model grows.
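The runtime check can be sketched in controller-agnostic form as follows. The structures are hypothetical (the paper's RSPM model and POX integration are not reproduced here): RSPM is reduced to a whitelist of permitted flows, and any request outside it is dropped, mirroring the OpenFlow blocking described above.

# Controller-agnostic sketch of RSPM enforcement (hypothetical structures).
from collections import namedtuple

Flow = namedtuple("Flow", ["src", "dst", "dst_port"])

class RspmEnforcer:
    def __init__(self, permitted_flows):
        # permitted_flows: set of Flow tuples compiled from the high-level SPM.
        self.permitted = set(permitted_flows)

    def on_request(self, src, dst, dst_port):
        """Return 'FORWARD' for flows allowed by RSPM, 'DROP' otherwise."""
        if Flow(src, dst, dst_port) in self.permitted:
            return "FORWARD"
        return "DROP"

# Example: only the composition gateway may reach the payment service.
rspm = RspmEnforcer({Flow("10.0.0.1", "10.0.0.9", 443)})
assert rspm.on_request("10.0.0.1", "10.0.0.9", 443) == "FORWARD"
assert rspm.on_request("10.0.0.7", "10.0.0.9", 443) == "DROP"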
Differential privacy is a definition of privacy for algorithms that analyze and publish information about statistical databases. It is often claimed that differential privacy provides guarantees against adversaries with arbitrary side information. In this paper, we provide a precise formulation of these guarantees in terms of the inferences drawn by a Bayesian adversary. We show that this formulation is satisfied by both vanilla differential privacy as well as a relaxation known as (ε, δ)-differential privacy. Our formulation follows the ideas originally due to Dwork and McSherry [Dwork 2006]. This paper is, to our knowledge, the first place such a formulation appears explicitly. The analysis of the relaxed definition is new to this paper, and provides some concrete guidance for setting parameters when using (ε, δ)-differential privacy.
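For reference, the two guarantees the paper compares admit the following standard statements (textbook formulations, not quoted from the paper). The Bayesian reading is that an adversary's posterior odds about any single record can shift by a factor of at most about e^ε, whatever side information they hold.

% A mechanism M is ε-differentially private if, for every pair of
% neighbouring databases D, D' (differing in one record) and every
% set of outputs S:
\Pr[M(D) \in S] \;\le\; e^{\epsilon}\,\Pr[M(D') \in S]

% The relaxation, (ε, δ)-differential privacy, permits an additive
% slack δ bounding the probability that the guarantee fails:
\Pr[M(D) \in S] \;\le\; e^{\epsilon}\,\Pr[M(D') \in S] + \delta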
Differential privacy offers a formal framework for reasoning about the privacy and accuracy of computations on private data. It also offers a rich set of building blocks for constructing data analyses. When carefully calibrated, these analyses simultaneously guarantee the privacy of the individuals contributing their data and the accuracy of the results for inferring useful properties about the population. The compositional nature of differential privacy has motivated the design and implementation of several programming languages aimed at helping data analysts program differentially private analyses. However, most of the programming languages for differential privacy proposed so far provide support for reasoning about privacy but not about the accuracy of data analyses. To overcome this limitation, in this work we present DPella, a programming framework providing data analysts with support for reasoning about privacy, accuracy, and their trade-offs. The distinguishing feature of DPella is a novel component that statically tracks the accuracy of different data analyses. To make tighter accuracy estimations, this component leverages taint analysis to automatically infer statistical independence of the different noise quantities added for guaranteeing privacy. We show the flexibility of our approach not only by implementing classical counting queries (e.g., CDFs) but also by analyzing hierarchical counting queries (like those done by Census Bureaus), where accuracy has different constraints per level and the data analyst must figure out how best to calibrate privacy to meet the accuracy requirements.
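The privacy/accuracy bookkeeping idea can be illustrated with a toy in a few lines. This is a hypothetical API, not DPella's interface (and its taint-based independence analysis is omitted): a counting query with Laplace(1/ε) noise satisfies ε-DP, its error exceeds (1/ε)·ln(1/β) with probability at most β, and basic sequential composition simply sums the epsilons.

# Toy privacy/accuracy tracker (hypothetical API, not DPella's).
import math
import numpy as np

class NoisyCount:
    def __init__(self, true_count, eps):
        self.eps = eps                      # privacy cost of this query
        self.value = true_count + np.random.laplace(0.0, 1.0 / eps)

    def error_bound(self, beta=0.05):
        """|noise| exceeds this bound with probability at most beta."""
        return (1.0 / self.eps) * math.log(1.0 / beta)

def total_privacy_cost(queries):
    """Basic sequential composition: epsilons add up."""
    return sum(q.eps for q in queries)

q1 = NoisyCount(true_count=120, eps=0.5)
q2 = NoisyCount(true_count=340, eps=0.5)
print(total_privacy_cost([q1, q2]))   # 1.0 total epsilon
print(q1.error_bound())               # ~5.99 at 95% confidence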
Security monitoring via ubiquitous cameras, and its wider deployment in intelligent buildings, stands to gain from advances in signal processing and machine learning. While these innovative and ground-breaking applications can be considered a boon, they also raise significant privacy concerns. Indeed, the recent GDPR (General Data Protection Regulation) legislation has highlighted these concerns and become an incentive for privacy-preserving solutions. Typical privacy-preserving video monitoring schemes address them by anonymizing the sensitive data. However, these approaches suffer from limitations: they are usually non-reversible, do not provide multiple levels of decryption, and are computationally costly. In this paper, we provide a novel privacy-preserving method that is reversible, supports de-identification at multiple privacy levels, and can efficiently perform data acquisition, encryption, and data hiding by combining multi-level encryption with compressive sensing. The effectiveness of the proposed approach in protecting the identity of the users has been validated in terms of reconstruction quality and strong anonymization of the faces.
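The core idea of compressive sensing as keyed acquisition can be sketched as follows. This is illustrative only and does not reproduce the paper's multi-level encryption and data-hiding pipeline: the sensing matrix Φ is derived from a secret seed, so the measurements y = Φx reveal no directly viewable image without the seed, while a seed holder can regenerate Φ and reconstruct x with a sparse recovery solver (e.g., basis pursuit, omitted here). Multiple privacy levels could, under this reading, correspond to multiple seeds or keys.

# Sketch of compressive sensing as keyed acquisition (illustrative only).
import numpy as np

def sensing_matrix(seed, m, n):
    """Gaussian sensing matrix generated from a secret seed (the 'key')."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))

def acquire(x, seed, m):
    """Compressive measurement y = Phi @ x: acquisition and masking in one step."""
    phi = sensing_matrix(seed, m, x.size)
    return phi @ x

n, m = 1024, 256               # 4x compression
x = np.zeros(n); x[:10] = 1.0  # a sparse test signal standing in for an image patch
y = acquire(x, seed=42, m=m)   # only holders of seed=42 can regenerate Phi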
Blockchain offers traceability and transparency for supply chain event data and hence can help overcome many challenges in supply chain management, such as data integrity, provenance, and traceability. However, data privacy concerns such as the protection of trade secrets have hindered adoption of blockchain technology. Although consortium blockchains only allow authorised supply chain entities to read/write to the ledger, privacy preservation of trade secrets cannot be ascertained. In this work, we propose a privacy-preservation framework, PrivChain, to protect sensitive data on blockchain using zero-knowledge proofs. PrivChain provides provenance and traceability without revealing any sensitive information to end-consumers or supply chain entities. Its novelty stems from: a) its ability to allow data owners to protect trade-related information and instead provide proofs on the data, and b) an integrated incentive mechanism for entities providing valid proofs over provenance data. In particular, PrivChain uses Zero Knowledge Range Proofs (ZKRPs), an efficient variant of ZKPs, to provide origin information without disclosing the exact location of a supply chain product. Furthermore, the framework allows proofs and commitments to be computed off-line, decoupling the computational overhead from the blockchain. The proof verification process and incentive payment initiation are automated using blockchain transactions, smart contracts, and events. A proof-of-concept implementation on Hyperledger Fabric reveals a minimal overhead of using PrivChain for blockchain-enabled supply chains.
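Zero-knowledge range proofs are typically built on top of commitments to the hidden value. The toy Pedersen commitment below only shows that underlying hiding/binding primitive; the parameters are illustrative and far too small for real use, and the range proof itself (e.g., a Bulletproofs-style protocol) is omitted. In a PrivChain-like flow, the data owner would publish the commitment on-chain and prove off-chain that the committed value lies in a permitted range (e.g., coordinates within an approved region) without revealing it.

# Toy Pedersen commitment C = g^v * h^r mod p (demo parameters only).
import secrets

P = 2**127 - 1   # toy prime modulus
G, H = 5, 7      # toy generators; in practice h's discrete log wrt g must be unknown

def commit(value, blinding):
    """Hide 'value' under randomness 'blinding'."""
    return (pow(G, value, P) * pow(H, blinding, P)) % P

def open_commitment(c, value, blinding):
    """Verifier checks a revealed (value, blinding) pair against the commitment."""
    return c == commit(value, blinding)

r = secrets.randbelow(P)
c = commit(1042, r)                    # e.g. an encoded origin coordinate
assert open_commitment(c, 1042, r)     # opens correctly
assert not open_commitment(c, 9999, r) # binding: a different value fails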