
Distributed and Secure ML with Self-tallying Multi-party Aggregation

Posted by Tanmay Gangwani
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Privacy-preserving multi-party computation has many applications in areas such as medicine and online advertising. In this work, we propose a framework for distributed, secure machine learning among untrusted individuals. The framework consists of two parts: a two-step training protocol based on homomorphic addition and a zero-knowledge proof of data validity. By combining these two techniques, our framework preserves the privacy of per-user data, prevents a malicious user from contributing corrupted data to the shared pool, enables each user to self-compute the results of the algorithm without relying on an external trusted third party, and requires no private channels between groups of users. We show how different ML algorithms, such as Latent Dirichlet Allocation, Naive Bayes, and Decision Trees, fit our framework for distributed, secure computing.
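To make the self-tallying aggregation idea concrete, the sketch below shows a standard masking-based additive aggregation in Python: each user publishes its value hidden under pairwise masks that cancel when every contribution is summed, so anyone can tally the total without learning individual inputs. This is an illustration of the general technique under assumed parameters (the modulus, the toy pairwise key agreement, and all function names are ours), not the paper's exact two-step protocol.

import secrets

Q = 2**61 - 1  # public modulus (an assumed parameter); all arithmetic is mod Q

def pairwise_masks(my_id, all_ids, seed_with):
    """Sum of this user's pairwise masks; over all users the masks cancel to zero."""
    total = 0
    for other in all_ids:
        if other == my_id:
            continue
        shared = seed_with(my_id, other)   # secret shared by the pair (e.g. via DH)
        # Sign convention: the lower id adds the mask, the higher id subtracts it,
        # so each pair's two contributions cancel in the global sum.
        total += shared if my_id < other else -shared
    return total % Q

def masked_contribution(my_id, all_ids, value, seed_with):
    """What each user publishes: its private value hidden under the masks."""
    return (value + pairwise_masks(my_id, all_ids, seed_with)) % Q

# Toy run with 3 users and a stand-in pairwise key agreement.
ids = [1, 2, 3]
values = {1: 10, 2: 20, 3: 12}             # per-user private inputs

_pair_keys = {}
def seed_with(a, b):
    pair = tuple(sorted((a, b)))
    if pair not in _pair_keys:             # placeholder for a real key exchange
        _pair_keys[pair] = secrets.randbelow(Q)
    return _pair_keys[pair]

published = [masked_contribution(i, ids, values[i], seed_with) for i in ids]
print(sum(published) % Q)                  # 42 -- anyone can self-tally the total
# In the paper's framework, each published value would additionally carry a
# zero-knowledge proof that the hidden input is valid, so corrupted data is
# rejected without ever being revealed.

In a training protocol, such sums would typically be computed coordinate-wise over each algorithm's sufficient statistics, for example per-class word counts for Naive Bayes.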


Read also

The purpose of Secure Multi-Party Computation is to enable protocol participants to compute a public function of their private inputs while keeping their inputs secret, without resorting to any trusted third party. However, opening the public output of such computations inevitably reveals some information about the private inputs. We propose a measure generalising both Rényi entropy and g-entropy so as to quantify this information leakage. In order to control and restrain such information flows, we introduce the notion of function substitution, which replaces the computation of a function that reveals sensitive information with that of an approximate function. We exhibit theoretical bounds for the privacy gains that this approach provides and experimentally show that this enhances the confidentiality of the inputs while controlling the distortion of computed output values. Finally, we investigate the inherent compromise between accuracy of computation and privacy of inputs and we demonstrate how to realise such optimal trade-offs.
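For reference, the standard Rényi entropy that this leakage measure generalises is $H_{\alpha}(X) = \frac{1}{1-\alpha}\log\sum_{x} p(x)^{\alpha}$ for $\alpha > 0$, $\alpha \neq 1$, which recovers Shannon entropy as $\alpha \to 1$ and min-entropy as $\alpha \to \infty$. The paper's generalised measure itself, which also subsumes g-entropy, is not spelled out in the abstract.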
In this work, we study how to securely evaluate the value of trading data without requiring a trusted third party. We focus on the important machine learning task of classification. This leads us to propose a provably secure four-round protocol that computes the value of the data to be traded without revealing the data to the potential acquirer. The theoretical results demonstrate a number of important properties of the proposed protocol. In particular, we prove the security of the proposed protocol in the honest-but-curious adversary model.
We consider the problem of computing an aggregation function in a \emph{secure} and \emph{scalable} way. Whereas previous distributed solutions with similar security guarantees have a communication cost of $O(n^3)$, we present a distributed protocol that requires only a communication complexity of $O(n \log^3 n)$, which we prove is near-optimal. Our protocol ensures perfect security against a computationally-bounded adversary, tolerates $(1/2-\epsilon)n$ malicious nodes for any constant $1/2 > \epsilon > 0$ (not depending on $n$), and outputs the exact value of the aggregated function with high probability.
Many applications and protocols depend on the ability to generate a pool of servers to conduct majority-based consensus mechanisms, and often this is done via plain DNS queries. A recent off-path attack [1] against NTP and security-enhanced NTP with Chronos [2] showed that relying on DNS for generating the pool of NTP servers introduces a weak link. In this work, we propose a secure, backward-compatible address pool generation method using distributed DNS-over-HTTPS (DoH) resolvers, which aims to prevent such attacks against server pool generation.
We consider the task of secure multi-party distributed quantum computation on a quantum network. We propose a protocol based on quantum error correction which reduces the number of necessary qubits. That is, each of the $n$ nodes in our protocol requires an operational workspace of $n^2 + 4n$ qubits, as opposed to the previously shown $\Omega\big((n^3+n^2 s^2)\log n\big)$ qubits, where $s$ is a security parameter. Additionally, we reduce the communication complexity by a factor of $\mathcal{O}(n^3\log(n))$ qubits per node, as compared to existing protocols. To achieve universal computation, we develop a distributed procedure for verifying magic states, which allows us to apply distributed gate teleportation and which may be of independent interest. We showcase our protocol on a small example for a 7-node network.
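As an illustrative instantiation of the stated bound (our arithmetic, not a figure from the paper): for the 7-node example, the workspace per node would be $7^2 + 4\cdot 7 = 77$ qubits.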