
Privacy-Preserving Decentralized Multi-Agent Cooperative Optimization -- Paradigm Design and Privacy Analysis

 Added by Xiang Huo
 Publication date 2021
Language: English





Large-scale multi-agent cooperative control problems have benefited materially from the scalability, adaptivity, and flexibility of decentralized optimization. However, because of the mandatory iterative communications between the agents and the system operator, the decentralized architecture is vulnerable to malicious attacks and privacy breaches. Research on preserving the privacy of both the agents and the system operator in cooperative decentralized optimization with strongly coupled objective functions and constraints is still in its infancy. To fill this gap, this paper proposes a novel privacy-preserving decentralized optimization paradigm based on the Paillier cryptosystem. The proposed paradigm achieves ideal correctness and security, and resists attacks from a range of adversaries. The efficacy and efficiency of the proposed approach are verified via numerical simulations and on a real-world physical platform.
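
The property such a paradigm relies on is the additive homomorphism of the Paillier cryptosystem: the system operator can aggregate encrypted agent contributions without decrypting any individual one. The following is a minimal sketch of that homomorphism with toy primes and illustrative values; it is not the paper's actual protocol or key sizes.

```python
# Minimal sketch of Paillier's additive homomorphism (toy key size, illustrative
# values only; not the paper's actual protocol or parameters).
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=1789, q=1867):                # toy primes, for illustration only
    n = p * q
    n2 = n * n
    g = n + 1                              # standard simple choice of generator
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^{-1} mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pk, sk = keygen()
# Each agent encrypts its local update; multiplying ciphertexts corresponds to
# adding the underlying plaintexts, so the operator aggregates without decrypting.
agent_updates = [12, 7, 30]
n2 = pk[0] * pk[0]
aggregate_cipher = 1
for u in agent_updates:
    aggregate_cipher = (aggregate_cipher * encrypt(pk, u)) % n2

assert decrypt(pk, sk, aggregate_cipher) == sum(agent_updates)  # 49
```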



Related Research

Privacy concerns with sensitive data are receiving increasing attention. In this paper, we study local differential privacy (LDP) in interactive decentralized optimization. By constructing random local aggregators, we propose a framework that amplifies LDP by a constant factor. We take the Alternating Direction Method of Multipliers (ADMM) and decentralized gradient descent as two concrete examples, and experiments support our theory. From an asymptotic viewpoint, we address the following question: under LDP, is it possible to design a distributed private minimizer for arbitrary closed convex constraints whose utility loss does not depend explicitly on the dimensionality? As an affiliated result, we also show that, with merely linear secret sharing, information-theoretic privacy is achievable against a bounded number of colluding agents.
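
As a rough illustration of injecting noise locally before anything leaves an agent (the basic local-noise idea, not the paper's amplification construction), the sketch below runs decentralized gradient descent on simple quadratic losses, where each agent shares only a Laplace-perturbed copy of its iterate with its neighbors; the mixing matrix, noise scale, and losses are illustrative choices.

```python
# Illustrative decentralized gradient descent where each agent only shares a
# Laplace-perturbed copy of its iterate (local-noise idea, not the paper's
# LDP amplification construction).
import numpy as np

rng = np.random.default_rng(0)
targets = np.array([1.0, 3.0, 5.0, 7.0])       # local minimizers a_i of f_i(x) = (x - a_i)^2
W = np.array([[0.50, 0.25, 0.00, 0.25],        # doubly stochastic mixing matrix (4-agent ring)
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.zeros(4)                                # each agent's local estimate
noise_scale = 0.05                             # Laplace scale; larger means stronger local privacy

for t in range(1, 301):
    shared = x + rng.laplace(0.0, noise_scale, size=4)   # only noisy iterates leave an agent
    grad = 2.0 * (x - targets)                           # each agent's local gradient
    x = W @ shared - (1.0 / t) * grad                    # mix neighbors' noisy values, diminishing step

# Agents agree closely with one another and land near the global minimizer
# (the mean of the targets, 4.0); the residual offset reflects the privacy noise.
print(x)
```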
This document describes and analyzes a system for secure and privacy-preserving proximity tracing at large scale. The system, referred to as DP3T, provides a technological foundation to help slow the spread of SARS-CoV-2 by simplifying and accelerating the process of notifying people who might have been exposed to the virus so that they can take appropriate measures to break its transmission chain. The system aims to minimise privacy and security risks for individuals and communities and to guarantee the highest level of data protection. The goal of the proximity tracing system is to determine who has been in close physical proximity to a COVID-19-positive person, and thus exposed to the virus, without revealing the contact's identity or where the contact occurred. To achieve this goal, users run a smartphone app that continually broadcasts an ephemeral, pseudo-random ID representing the user's phone and also records the pseudo-random IDs observed from smartphones in close proximity. When a patient is diagnosed with COVID-19, she can upload the pseudo-random IDs previously broadcast from her phone to a central server. Prior to the upload, all data remains exclusively on the user's phone. Other users' apps can use data from the server to locally estimate whether the device's owner was exposed to the virus through close-range physical proximity to a COVID-19-positive person who has uploaded their data. If the app detects a high risk, it informs the user.
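
The core mechanics can be conveyed with a short, simplified sketch: ephemeral IDs derived from a secret day key, local recording of observed IDs, and local matching against IDs published by diagnosed users. The key derivation, rotation schedule, and risk scoring below are placeholders, not the DP3T specification.

```python
# Simplified sketch of ephemeral-ID broadcast and local exposure matching;
# key derivation, rotation, and risk scoring are placeholders, not the DP3T spec.
import hashlib
import hmac
import secrets

def ephemeral_ids(day_key: bytes, slots: int = 96) -> list:
    """Derive one short pseudo-random ID per broadcast slot from a secret day key."""
    return [hmac.new(day_key, slot.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for slot in range(slots)]

# Alice's phone broadcasts IDs derived from her secret day key.
alice_day_key = secrets.token_bytes(32)
alice_ids = ephemeral_ids(alice_day_key)

# Bob's phone records the IDs it observes nearby (here, a few of Alice's slots).
bob_observed = set(alice_ids[10:13])

# If Alice is diagnosed, she uploads the IDs her phone previously broadcast.
published_by_positives = set(ephemeral_ids(alice_day_key))

# Bob checks for exposure locally; the server never learns whom Bob met or where.
exposure_count = len(bob_observed & published_by_positives)
print("possible exposure" if exposure_count > 0 else "no exposure")
```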
Autonomous exploration is an application of growing importance in robotics. A promising strategy is ergodic trajectory planning, whereby an agent spends in each area a fraction of time proportional to the probability density of information in that area. In this paper, a decentralized ergodic multi-agent trajectory planning algorithm that accounts for limited communication constraints is proposed. The agents' trajectories are designed by optimizing a weighted cost encompassing ergodicity, control energy, and close-distance operation objectives. To solve the underlying optimal control problem, a second-order iterative descent method coupled with a projection operator in the form of an optimal feedback controller is used. Exhaustive numerical analyses show that the multi-agent solution enables much more efficient exploration, in terms of task completion time and control energy distribution, by leveraging collaboration among agents.
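
A discretized toy version of the ergodicity objective helps convey the idea: the cost compares the fraction of time a trajectory spends in each grid cell with a target information density. The grid, target density, and trajectories below are illustrative; the paper's spectral ergodic metric and second-order solver are not reproduced here.

```python
# Toy discretized ergodicity cost: mismatch between a trajectory's time-average
# cell occupancy and a target information density (illustrative only).
import numpy as np

def ergodic_cost(trajectory, target_density, bins=20):
    """Squared mismatch between visit frequencies and the target density."""
    visits, _, _ = np.histogram2d(trajectory[:, 0], trajectory[:, 1],
                                  bins=bins, range=[[0, 1], [0, 1]])
    time_average = visits / visits.sum()
    return float(np.sum((time_average - target_density) ** 2))

# Target density concentrated around (0.7, 0.7) on the unit square.
centers = (np.arange(20) + 0.5) / 20.0
gx, gy = np.meshgrid(centers, centers, indexing="ij")
target = np.exp(-10.0 * ((gx - 0.7) ** 2 + (gy - 0.7) ** 2))
target /= target.sum()

rng = np.random.default_rng(1)
uniform_traj = rng.uniform(0.0, 1.0, size=(2000, 2))          # wanders everywhere
focused_traj = 0.7 + 0.22 * rng.standard_normal((2000, 2))    # dwells near the peak, spread similar to the target's

# The focused trajectory scores lower: its time-average matches the target better.
print(ergodic_cost(uniform_traj, target), ergodic_cost(focused_traj, target))
```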
Distributed Virtual Private Networks (dVPNs) are new VPN solutions that aim to solve the trust-privacy concern of a VPN's central authority by leveraging a distributed architecture. In this paper, we first review the existing dVPN ecosystem and discuss its privacy requirements. We then present VPN0, a dVPN with strong privacy guarantees and minimal performance impact on its users. VPN0 guarantees that a dVPN node only carries traffic it has whitelisted, without revealing its whitelist or knowing the traffic it tunnels. This is achieved via three main innovations. First, an attestation mechanism that leverages TLS to certify a user's visit to a specific domain. Second, a zero-knowledge proof to certify that some incoming traffic is authorized, e.g., falls in a node's whitelist, without disclosing the target domain. Third, a dynamic chain of VPN tunnels that both increases privacy and guarantees service continuation while traffic certification is in place. The paper demonstrates VPN0 functioning when integrated with several production systems, namely the BitTorrent DHT and ProtonVPN.
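
One ingredient of such designs, committing to a whitelist so that later claims can be checked against it without publishing the list, can be sketched with a Merkle tree. Note that this plain Merkle proof reveals the queried entry to the verifier, whereas VPN0's zero-knowledge construction does not; the sketch only illustrates the commitment idea, not the paper's mechanism.

```python
# Loose illustration of committing to a whitelist with a Merkle root and proving
# membership against it. Unlike VPN0's zero-knowledge proof, this simple proof
# reveals the queried entry to the verifier.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of a Merkle tree, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                          # duplicate last node on odd levels
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels, index):
    """Collect sibling hashes from leaf to root for the given leaf index."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2))  # (sibling hash, am-I-the-right-child)
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = leaf
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

whitelist = [b"example.org", b"wikipedia.org", b"python.org", b"ietf.org"]
levels = build_tree([h(d) for d in whitelist])
root = levels[-1][0]                              # the node publishes only this commitment

proof = prove(levels, whitelist.index(b"python.org"))
print(verify(root, h(b"python.org"), proof))      # True: entry is in the committed whitelist
```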
Ye Yuan, Ruijuan Chen, Chuan Sun (2021)
Federated learning enables a large number of clients to participate in learning a shared model while the training data remains stored at each client, which protects data privacy and security. To date, federated learning frameworks have been built in a centralized way, in which a central client is needed to collect and distribute information from every other client. This not only leads to high communication pressure at the central client, but also renders the central client highly vulnerable to failure and attack. Here we propose a principled decentralized federated learning algorithm (DeFed), which removes the central client from the classical Federated Averaging (FedAvg) setting and relies only on information transmission between clients and their local neighbors. The proposed DeFed algorithm is proven to reach the global minimum with a convergence rate of $O(1/T)$ when the loss function is smooth and strongly convex, where $T$ is the number of iterations of gradient descent. Finally, the proposed algorithm is applied to a number of toy examples to demonstrate its effectiveness.
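
The decentralized update can be illustrated with a small sketch in which each client takes a local gradient step and then averages its model only with its graph neighbors, with no central server. The ring topology, step-size schedule, and quadratic losses below are illustrative choices, not the paper's experimental setup.

```python
# Illustrative decentralized learning over a ring of clients: a local gradient
# step followed by averaging with immediate neighbors only, no central server.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 8, 5
# Each client's data is summarized by the minimizer theta_i of its local loss
# f_i(w) = 0.5 * ||w - theta_i||^2; the shared goal is to minimize their average.
local_optima = rng.normal(size=(num_clients, dim))
global_optimum = local_optima.mean(axis=0)

# Ring mixing matrix: each client averages itself with its two neighbors.
W = np.zeros((num_clients, num_clients))
for i in range(num_clients):
    W[i, i] = 0.5
    W[i, (i - 1) % num_clients] = 0.25
    W[i, (i + 1) % num_clients] = 0.25

models = np.zeros((num_clients, dim))          # every client starts from the same model
for t in range(2000):
    grads = models - local_optima              # gradient of each client's local quadratic loss
    models = W @ models - (1.0 / (t + 10)) * grads   # neighbor-only averaging plus a diminishing step

# Every client's model ends up close to the minimizer of the average loss.
print(np.linalg.norm(models - global_optimum, axis=1))
```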