
Partial Disclosure of Private Dependencies in Privacy Preserving Planning

Posted by Rotem Lev Lehman
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In collaborative privacy preserving planning (CPPP), a group of agents jointly creates a plan to achieve a set of goals while preserving each other's privacy. During planning, agents often reveal to other agents the private dependencies between their public actions, that is, which public action achieves a precondition of another public action. Previous work in CPPP does not limit the disclosure of such dependencies. In this paper, we explicitly limit the amount of disclosed dependencies, allowing agents to publish only a part of their private dependencies. We investigate different strategies for deciding which dependencies to publish and how they affect the ability to find solutions. We evaluate the ability of two solvers -- distributed forward search and centralized planning based on a single-agent projection -- to produce plans under this constraint. Experiments over standard CPPP domains show that the proposed dependency-sharing strategies enable generating plans while sharing only a small fraction of all private dependencies.
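To make the idea of a dependency-sharing strategy concrete, here is a minimal Python sketch of two ways an agent might choose which private dependencies to publish. The triple encoding of a dependency and both strategy functions are illustrative assumptions for this sketch, not the strategies actually evaluated in the paper.

```python
import random

# A private dependency links one of an agent's public actions to a public
# action whose precondition it achieves. This triple encoding is an
# illustrative assumption, not the paper's exact representation.
Dependency = tuple  # (producer_action, fact, consumer_action)

def publish_random_fraction(deps, fraction, seed=0):
    """Baseline strategy: reveal a fixed fraction of dependencies at random."""
    rng = random.Random(seed)
    k = max(1, int(len(deps) * fraction)) if deps else 0
    return rng.sample(deps, k)

def publish_most_used(deps, usage_count, fraction):
    """Hypothetical heuristic: prefer dependencies that relaxed plans use
    most often, on the intuition that the joint search will need them."""
    ranked = sorted(deps, key=lambda d: usage_count.get(d, 0), reverse=True)
    k = max(1, int(len(deps) * fraction)) if deps else 0
    return ranked[:k]
```

The trade-off such strategies navigate is the one the abstract names: publishing fewer dependencies discloses less private structure, but may leave the solvers unable to find a joint plan.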




Read also

In this paper, we study the problem of deceptive reinforcement learning to preserve the privacy of a reward function. Reinforcement learning is the problem of finding a behaviour policy based on rewards received from exploratory behaviour. A key ingredient in reinforcement learning is a reward function, which determines how much reward (negative or positive) is given and when. However, in some situations, we may want to keep a reward function private; that is, to make it difficult for an observer to determine the reward function used. We define the problem of privacy-preserving reinforcement learning and present two models for solving it. These models are based on dissimulation -- a form of deception that 'hides the truth'. We evaluate our models both computationally and via human behavioural experiments. Results show that the resulting policies are indeed deceptive, and that participants can determine the true reward function less reliably than that of an honest agent.
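As a rough illustration of dissimulation, the toy policy below averages the softmax policies induced by several candidate reward functions, so observed behaviour stays plausible under many rewards and the true one is harder to infer. This construction is an assumption for intuition only; the two models in the paper are not specified here.

```python
import numpy as np

def dissimulating_action(q_tables, state, temperature=1.0, rng=None):
    """Pick an action from a uniform mixture of softmax policies, one per
    candidate reward function, so no single reward explains the behaviour."""
    rng = rng or np.random.default_rng()
    q = np.stack([qt[state] for qt in q_tables])       # (n_rewards, n_actions)
    p = np.exp((q - q.max(axis=1, keepdims=True)) / temperature)
    p /= p.sum(axis=1, keepdims=True)                  # per-reward softmax
    mixed = p.mean(axis=0)                             # mixture over rewards
    return rng.choice(len(mixed), p=mixed)
```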
Distributed Virtual Private Networks (dVPNs) are new VPN solutions that aim to solve the trust-privacy concern of a VPN's central authority by leveraging a distributed architecture. In this paper, we first review the existing dVPN ecosystem and discuss its privacy requirements. Then, we present VPN0, a dVPN with strong privacy guarantees and minimal performance impact on its users. VPN0 guarantees that a dVPN node only carries traffic it has whitelisted, without revealing its whitelist or knowing the traffic it tunnels. This is achieved via three main innovations. First, an attestation mechanism which leverages TLS to certify a user's visit to a specific domain. Second, a zero-knowledge proof to certify that some incoming traffic is authorized, e.g., falls in a node's whitelist, without disclosing the target domain. Third, a dynamic chain of VPN tunnels to both increase privacy and guarantee service continuation while traffic certification is in place. The paper demonstrates VPN0's functioning when integrated with several production systems, namely the BitTorrent DHT and ProtonVPN.
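Some intuition for proving whitelist membership without publishing the whitelist comes from a Merkle commitment: the node publishes a single root and can later prove that one entry belongs to the committed list. Note that this toy reveals the queried entry to the verifier, which VPN0's zero-knowledge construction avoids; the sketch is illustrative only, not the paper's mechanism.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    """Commit to a whitelist as a single Merkle root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])           # pad odd levels
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, am-right-child)
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling, node) if is_right else h(node, sibling)
    return node == root
```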
We consider a resource allocation problem involving a large number of agents with individual constraints subject to privacy, and a central operator whose objective is to optimize a global, possibly non-convex, cost while satisfying the agents' constraints. We focus on the practical case of the management of energy consumption flexibilities by the operator of a microgrid. This paper provides a privacy-preserving algorithm that computes the optimal allocation of resources without any agent revealing her private information (constraints and individual solution profile) to either the central operator or a third party. Our method relies on an aggregation procedure: we maintain a global allocation of resources and gradually disaggregate this allocation to enforce the satisfaction of private constraints, by a protocol involving the generation of polyhedral cuts and secure multiparty computation (SMC). To obtain these cuts, we use an alternating projections method à la von Neumann, which is implemented locally by each agent, preserving her privacy needs. Our theoretical and numerical results show that the method scales well as the number of agents gets large, and thus can be used to solve the allocation problem in high dimension while addressing privacy issues.
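The alternating-projections step is classical and easy to sketch. Below, a point is repeatedly projected onto two convex sets: a box standing in for an agent's local constraints and a hyperplane standing in for a global resource balance. The choice of sets and all names are illustrative assumptions, and the paper's polyhedral cuts and SMC layer are omitted.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (local constraints)."""
    return np.clip(x, lo, hi)

def project_hyperplane(x, a, b):
    """Euclidean projection onto {x : a.x = b} (global resource balance)."""
    return x - ((a @ x - b) / (a @ a)) * a

def alternating_projections(x0, a, b, lo, hi, iters=500):
    """von Neumann alternating projections: project onto each set in turn;
    for closed convex sets with nonempty intersection, the iterates
    converge to a point in the intersection."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_hyperplane(project_box(x, lo, hi), a, b)
    return x

# Example: find x in [0, 1]^3 whose coordinates sum to 2.
x = alternating_projections(np.zeros(3), np.ones(3), 2.0, 0.0, 1.0)
```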
When it comes to large-scale multi-agent systems with a diverse set of agents, traditional differential privacy (DP) mechanisms are ill-matched because they consider a very broad class of adversaries and protect all users, independent of their characteristics, by the same guarantee. Achieving meaningful privacy then leads to a pronounced reduction in solution quality. Such assumptions are unnecessary in many real-world applications for three key reasons: (i) users might be willing to disclose less sensitive information (e.g., city of residence, but not exact location), (ii) the attacker might possess auxiliary information (e.g., city of residence in a mobility-on-demand system, or reviewer expertise in a paper assignment problem), and (iii) domain characteristics might exclude a subset of solutions (an expert on auctions would not be assigned to review a robotics paper, so there is no need for indistinguishability between reviewers in different fields). We introduce Piecewise Local Differential Privacy (PLDP), a privacy model designed to protect the utility function in applications where the attacker possesses additional information on the characteristics of the utility space. PLDP enables a high degree of privacy while being applicable to real-world, unboundedly large settings. Moreover, we propose PALMA, a privacy-preserving heuristic for maximum-weight matching. We evaluate PALMA in a vehicle-passenger matching scenario using real data and demonstrate that it provides strong privacy, $\varepsilon \leq 3$ and a median of $\varepsilon = 0.44$, and high-quality matchings ($10.8\%$ worse than the non-private optimal).
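For contrast, here is a minimal sketch of the classical local-DP baseline that the abstract argues is overly conservative: every utility value receives Laplace noise of the same scale, regardless of user characteristics or the structure of the utility space. PLDP itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def ldp_utilities(utilities, sensitivity, epsilon, rng=None):
    """Uniform Laplace mechanism: perturb every utility value with noise of
    scale sensitivity / epsilon, the one-size-fits-all guarantee PLDP avoids."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(0.0, sensitivity / epsilon, size=len(utilities))
    return np.asarray(utilities, dtype=float) + noise
```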
Searching for available parking spaces is a major problem for drivers, especially in big, crowded cities, causing traffic congestion and air pollution and wasting drivers' time. Smart parking systems are a novel solution that enables drivers to obtain real-time parking information for pre-booking. However, current smart parking requires drivers to disclose private information, such as their desired destinations. Moreover, the existing schemes are centralized and vulnerable to single points of failure and data breaches. In this paper, we propose a distributed privacy-preserving smart parking system using blockchain. A consortium blockchain created by different parking-lot owners, ensuring security, transparency, and availability, is used to store their parking offers. To preserve drivers' location privacy, we adopt a private information retrieval (PIR) technique to enable drivers to retrieve parking offers from blockchain nodes privately, without revealing which parking offers are retrieved. Furthermore, a short randomizable signature is used to enable drivers to reserve available parking slots in an anonymous manner. In addition, we introduce an anonymous payment system that cannot link drivers to specific parking locations. Finally, our performance evaluations demonstrate that the proposed scheme preserves drivers' privacy with low communication and computation overhead.
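The private retrieval step can be illustrated with the textbook two-server XOR-based PIR scheme: the client sends each (non-colluding) server a random-looking share of a selection vector, and the XOR of the two answers yields the desired record while neither server learns which offer was fetched. The abstract does not specify which PIR construction the paper uses, so this is a generic sketch.

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pir_query(n, index):
    """Client: share the selection vector over GF(2). Each share alone is a
    uniformly random bit vector and reveals nothing about `index`."""
    share1 = [secrets.randbits(1) for _ in range(n)]
    share2 = list(share1)
    share2[index] ^= 1          # shares differ only at the queried position
    return share1, share2

def pir_answer(db, share):
    """Server: XOR together the equal-length records its share selects."""
    acc = bytes(len(db[0]))
    for record, bit in zip(db, share):
        if bit:
            acc = xor_bytes(acc, record)
    return acc

# Client reconstruction: the XOR of both answers equals db[index].
db = [b"offer-A!", b"offer-B!", b"offer-C!"]
q1, q2 = pir_query(len(db), 1)
assert xor_bytes(pir_answer(db, q1), pir_answer(db, q2)) == db[1]
```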
