
Drynx: Decentralized, Secure, Verifiable System for Statistical Queries and Machine Learning on Distributed Datasets

Published by David Froelicher
Publication date: 2019
Research field: Computer and information engineering
Paper language: English





Data sharing has become of primary importance in many domains such as big-data analytics, economics and medical research, but remains difficult to achieve when the data are sensitive. In fact, sharing personal information requires individuals' unconditional consent or is often simply forbidden for privacy and security reasons. In this paper, we propose Drynx, a decentralized system for privacy-conscious statistical analysis on distributed datasets. Drynx relies on a set of computing nodes to enable the computation of statistics such as standard deviation or extrema, and the training and evaluation of machine-learning models on sensitive and distributed data. To ensure data confidentiality and the privacy of the data providers, Drynx combines interactive protocols, homomorphic encryption, zero-knowledge proofs of correctness, and differential privacy. It enables an efficient and decentralized verification of the input data and of all the system's computations, and thus provides auditability in a strong adversarial model in which no entity has to be individually trusted. Drynx is highly modular, dynamic and parallelizable. Our evaluation shows that it enables the training of a logistic regression model on a dataset (12 features and 600,000 records) distributed among 12 data providers in less than 2 seconds. The computations are distributed among 6 computing nodes, and Drynx enables the verification of the query execution's correctness in less than 22 seconds.
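To make the encrypted-aggregation idea concrete, here is a minimal sketch of additively homomorphic aggregation in Python. It uses a toy Paillier cryptosystem with tiny, insecure demo parameters; Drynx itself builds on elliptic-curve ElGamal with distributed keys, zero-knowledge proofs and differential privacy, none of which are reproduced here.

```python
# Toy additively homomorphic aggregation: each data provider encrypts its
# local value, an untrusted computing node sums ciphertexts, and only the
# key holder decrypts the total. INSECURE demo parameters; a real deployment
# would use >= 2048-bit moduli (or, as in Drynx, EC-ElGamal with collective keys).
import math
import random

p, q = 1009, 1013            # toy primes, far too small for real use
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Three "data providers" encrypt their sensitive values locally.
values = [17, 42, 8]
ciphertexts = [encrypt(v) for v in values]

# A computing node aggregates without ever seeing a plaintext:
# multiplying Paillier ciphertexts adds the underlying plaintexts.
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % n2

assert decrypt(aggregate) == sum(values)   # 67
```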




Read also

Wenxiu Ding, Wei Sun, Zheng Yan (2021)
Cloud computing offers resource-constrained users large-volume data storage and energy-consuming, complicated computation. However, owing to the lack of full trust in the cloud, cloud users prefer privacy-preserving outsourced data computation with correctness verification. Cryptography-based schemes, however, introduce high computational costs to both the cloud and its users for verifiable computation with privacy preservation, which makes it difficult to support complicated computations in practice. Intel Software Guard Extensions (SGX), as a trusted execution environment, is widely researched in various fields (such as secure data analytics and computation) and is regarded as a promising way to achieve efficient outsourced data computation with privacy preservation over the cloud. However, we find two types of threats to computation with SGX: the Disarranging Data-Related Code threat and the Output Tampering and Misrouting threat. In this paper, we describe these threats using formal methods and successfully mount both attacks on an enclave program constructed with the Rust SGX SDK to demonstrate their impact on the correctness of computations over SGX enclaves. To provide countermeasures, we propose an efficient and secure scheme that resists these threats and realizes verifiable computation for Intel SGX. We prove the security and show the efficiency and correctness of our proposed scheme through theoretical analysis and extensive experiments. Furthermore, we compare the performance of our scheme with that of some cryptography-based schemes to show its high efficiency.
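As a rough illustration of the output-binding idea only (not the paper's actual scheme), the sketch below has the enclave tag each result with a MAC over the request identifier and the output, under a key assumed to be provisioned during remote attestation; the client can then reject tampered or misrouted results. The names (`attested_key`, the record layout) are assumptions of this sketch.

```python
# Generic countermeasure sketch against output tampering/misrouting:
# bind each enclave result to the request it answers with an HMAC.
import hmac, hashlib, os, json

attested_key = os.urandom(32)   # assumed shared with the client via remote attestation

def enclave_compute(request_id: str, inputs: list[int]) -> dict:
    """Runs inside the enclave: compute, then tag (request_id, output)."""
    output = sum(inputs)         # stand-in for the real outsourced computation
    msg = json.dumps({"request_id": request_id, "output": output}).encode()
    tag = hmac.new(attested_key, msg, hashlib.sha256).hexdigest()
    return {"request_id": request_id, "output": output, "tag": tag}

def client_verify(request_id: str, response: dict) -> bool:
    """Runs on the client: reject results that were tampered with or misrouted."""
    msg = json.dumps({"request_id": response["request_id"],
                      "output": response["output"]}).encode()
    ok_tag = hmac.compare_digest(
        hmac.new(attested_key, msg, hashlib.sha256).hexdigest(), response["tag"])
    return ok_tag and response["request_id"] == request_id

resp = enclave_compute("req-42", [1, 2, 3])
assert client_verify("req-42", resp)       # genuine result accepted
assert not client_verify("req-43", resp)   # misrouted result rejected
```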
Kai Zhou, M. H. Afifi, Jian Ren (2016)
Discrete exponential operations, such as modular exponentiation and scalar multiplication on elliptic curves, are basic operations of many public-key cryptosystems. However, these exponential operations are considered prohibitively expensive for resource-constrained mobile devices. In this paper, we address the problem of securely outsourcing exponentiation operations to a single untrusted server. Our proposed scheme (ExpSOS) requires only a very limited number of modular multiplications in the local mobile environment and can thus achieve an impressive computational gain. ExpSOS also provides a secure verification scheme with probability approximately 1 to ensure that mobile end-users always receive valid results. A comprehensive analysis as well as simulation results on a real mobile device demonstrate that ExpSOS significantly improves on existing schemes in efficiency, security and result verifiability. We apply ExpSOS to securely outsource several cryptographic protocols, showing that it is widely applicable to many cryptographic computations.
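For flavor, a toy Python sketch of probabilistic result checking for outsourced modular exponentiation follows. It is not the ExpSOS construction itself (ExpSOS additionally blinds the base and exponent before outsourcing); it only shows how a cheap, locally verifiable test query can catch an untrusted server that returns invalid results.

```python
# Toy verifiable outsourcing of modular exponentiation: alongside the real
# query, the client sends a test query with a small exponent it can verify
# cheaply on its own, so a cheating server is caught with some probability.
import random

def untrusted_server(base: int, exp: int, mod: int, cheat: bool = False) -> int:
    result = pow(base, exp, mod)
    return (result + 1) % mod if cheat else result

def outsourced_exp(u: int, a: int, p: int, cheat: bool = False) -> int:
    t = random.randrange(2, 2**16)     # small test exponent, cheap to verify locally
    queries = [(u, a), (u, t)]
    random.shuffle(queries)            # a real protocol would also blind the queries
    answers = {q: untrusted_server(q[0], q[1], p, cheat) for q in queries}
    if answers[(u, t)] != pow(u, t, p):
        raise ValueError("server returned an invalid result")
    return answers[(u, a)]

p = 2**127 - 1                 # a Mersenne prime, used only as a demo modulus
u, a = 123456789, 2**100 + 3
assert outsourced_exp(u, a, p) == pow(u, a, p)
```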
In IPFS, content identifiers are constructed from an item's data, so the binding between an item's identifier and its data can be deterministically verified. Nevertheless, once an item is modified, its identifier also changes; for mutable content there is therefore a need to keep track of the latest IPFS identifier. This is achieved using naming protocols on top of IPFS, such as IPNS and DNSlink, that map a constant name to an IPFS identifier while allowing content owners to update these mappings. However, IPNS relies on a cryptographic key pair that cannot be rotated, and DNSlink does not provide content authenticity protection. In this paper, we propose a naming protocol that combines DNSlink and decentralized identifiers to enable self-verifiable content items. Our protocol provides content authenticity without imposing any security requirements on DNSlink. Furthermore, it prevents fake content even if attackers have access to the DNS server of the content owner or to the content owner's secret keys. Our proof-of-concept implementation shows that the protocol is feasible and can be used with existing IPFS tools.
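A minimal sketch of the self-verifiable record idea, under stated assumptions: SHA-256 stands in for the IPFS CID, Ed25519 (via the `cryptography` package) stands in for the owner's DID key, and the record layout is invented for illustration rather than taken from the paper.

```python
# Self-verifiable name record sketch: the content identifier is derived from
# the item's bytes, and the (name -> identifier) mapping is signed with the
# owner's key, so a tampered DNSlink record or a fake item can be detected.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def content_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()   # stand-in for an IPFS CID

# Content owner: publish the item and sign the name -> CID mapping.
owner_key = Ed25519PrivateKey.generate()
owner_pub = owner_key.public_key()            # assumed published in the owner's DID document

data = b"hello, decentralized web"
record = {"name": "example.org/article", "cid": content_id(data)}
signature = owner_key.sign(json.dumps(record, sort_keys=True).encode())

# Client: resolve the name (e.g. via DNSlink), fetch the item, then check
# both the signature on the record and that the bytes match the CID.
def verify(record: dict, signature: bytes, fetched: bytes) -> bool:
    try:
        owner_pub.verify(signature, json.dumps(record, sort_keys=True).encode())
    except InvalidSignature:
        return False
    return content_id(fetched) == record["cid"]

assert verify(record, signature, data)
assert not verify(record, signature, b"forged content")
```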
E-voting systems are a powerful technology for improving democracy. Unfortunately, prior voting systems have single points of failure, which may compromise the availability, privacy, or integrity of the election results. We present the design, implementation, security analysis, and evaluation of the D-DEMOS suite of distributed, privacy-preserving, and end-to-end verifiable e-voting systems. We present two systems: one asynchronous, and one with minimal timing assumptions but better performance. Our systems include a distributed vote collection subsystem that does not require cryptographic operations on behalf of the voter. We also include a distributed, replicated, and fault-tolerant Bulletin Board component that stores all necessary election-related information and allows any party to read and verify the complete election process. Finally, we incorporate trustees, who control result production while guaranteeing privacy and end-to-end verifiability as long as a strong majority of them is honest. Our suite of e-voting systems is the first whose voting operation is human verifiable: a voter can vote over the web, even when her web client stack is potentially unsafe, without sacrificing her privacy, and still be assured her vote was recorded as cast. Additionally, a voter can outsource election auditing to third parties, still without sacrificing privacy. We provide a model and security analysis of the systems, implement complete prototypes, measure their performance experimentally, and demonstrate their ability to handle large-scale elections. Finally, we demonstrate the performance trade-offs between the two systems.
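The sketch below illustrates only the replicated Bulletin Board idea (it is not the D-DEMOS protocol): a record counts as posted once a strong majority of replicas acknowledges it, so a minority of faulty nodes cannot lose or suppress election data. Replica count and threshold are arbitrary demo values.

```python
# Replicated, fault-tolerant bulletin board sketch with a strong-majority threshold.
class Replica:
    def __init__(self, faulty: bool = False):
        self.log: list[str] = []
        self.faulty = faulty

    def append(self, record: str) -> bool:
        if self.faulty:
            return False           # crashed or Byzantine: no acknowledgment
        self.log.append(record)
        return True

class BulletinBoard:
    def __init__(self, replicas: list[Replica]):
        self.replicas = replicas
        # "Strong majority": more than two thirds of replicas must acknowledge.
        self.threshold = (2 * len(replicas)) // 3 + 1

    def post(self, record: str) -> bool:
        acks = sum(r.append(record) for r in self.replicas)
        return acks >= self.threshold

    def read(self, record: str) -> bool:
        holders = sum(record in r.log for r in self.replicas)
        return holders >= self.threshold

board = BulletinBoard([Replica(), Replica(), Replica(faulty=True), Replica()])
assert board.post("ballot#1: vote-code 5f3a")   # 3 of 4 replicas acknowledge
assert board.read("ballot#1: vote-code 5f3a")
```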
Yusen Wu, Hao Chen, Xin Wang (2021)
Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence and machine learning models in large-scale distributed machine learning systems, which creates security risks for their prediction outcomes. For example, attackers may attempt to poison a model by presenting inaccurate, misrepresentative data or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network issues, occur in distributed systems and also negatively affect the prediction outcome. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and/or tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against ML models and a Byzantine fault during the training phase. Our results show that, using ParSGD, ML models can still produce accurate predictions as if they were neither attacked nor failing, even when almost half of the nodes are compromised or have failed. We report experimental evaluations of ParSGD in comparison with other algorithms.
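As an illustrative sketch (the paper's exact aggregation rule is not reproduced here), the following Python/NumPy snippet combines the two ingredients named above: partial synchrony, by proceeding with the first k gradients that arrive, and robustness, via a coordinate-wise trimmed mean that bounds the influence of poisoned updates.

```python
# Robust, partially synchronous gradient aggregation sketch.
import numpy as np

def trimmed_mean(grads: np.ndarray, trim: int) -> np.ndarray:
    """Drop the `trim` largest and smallest values per coordinate, then average."""
    sorted_grads = np.sort(grads, axis=0)
    return sorted_grads[trim: len(grads) - trim].mean(axis=0)

def robust_sgd_step(w: np.ndarray, worker_grads: list[np.ndarray],
                    k: int, trim: int, lr: float = 0.1) -> np.ndarray:
    # Partial synchrony: proceed as soon as the first k gradients arrive,
    # instead of waiting for all n workers.
    first_k = np.stack(worker_grads[:k])
    return w - lr * trimmed_mean(first_k, trim)

# Demo: 7 of 8 workers send the true gradient, one Byzantine worker sends junk.
true_grad = np.array([1.0, -2.0])
grads = [true_grad + 0.01 * np.random.randn(2) for _ in range(7)]
grads.insert(3, np.array([1e6, -1e6]))        # poisoned update

w = np.zeros(2)
w_next = robust_sgd_step(w, grads, k=6, trim=1)
print(w_next)   # close to -0.1 * true_grad despite the attack
```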