
Prospect Theoretic Analysis of Privacy-Preserving Mechanism

Published by: Guocheng Liao
Publication date: 2019
Research field: Informatics Engineering
Language: English





We study a problem of privacy-preserving mechanism design. A data collector wants to obtain data from individuals to perform some computations. To mitigate the privacy threat to the contributors, the data collector adopts a privacy-preserving mechanism that adds random noise to the computation result, at the cost of reduced accuracy. Individuals decide whether to contribute data when faced with this privacy issue. Due to the intrinsic uncertainty in privacy protection, we model individuals' privacy-related decisions using Prospect Theory. This theory models individuals' behavior under uncertainty more accurately than the traditional expected utility theory, whose predictions often deviate from actual human behavior. We show that the data collector's utility maximization problem involves a polynomial of high and fractional order, whose root is difficult to compute analytically. We get around this issue by considering a large-population approximation, and obtain a closed-form solution that well approximates the precise solution. We find that a data collector who adopts the more realistic Prospect Theory based model of individual decisions would choose a more conservative privacy-preserving mechanism than one who relies on expected utility theory. We also study the impact of the Prospect Theory parameters, and conclude that more loss-averse or more risk-seeking individuals trigger a more conservative mechanism. When individuals have heterogeneous Prospect Theory parameters, simulations demonstrate that the privacy protection first becomes stronger and then weaker as the heterogeneity increases from a low value to a high one.
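As a rough illustration of the kind of modeling the abstract refers to, the sketch below evaluates a "contribute data or not" decision with the standard Kahneman-Tversky value and probability-weighting functions. The functional forms, the parameter values (alpha, beta, lam, gamma), and the contribute-or-not payoff structure are illustrative assumptions, not the paper's actual model.

    # Minimal sketch of a Prospect Theory evaluation of the "contribute data
    # or not" decision. Functional forms and parameters are the standard
    # Kahneman-Tversky ones and are assumptions, not the paper's model.

    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        """S-shaped value function: concave in gains, convex in losses,
        with loss-aversion coefficient lam > 1."""
        return x ** alpha if x >= 0 else -lam * (-x) ** beta

    def weight(p, gamma=0.61):
        """Inverse-S probability weighting: small probabilities are
        overweighted, large ones underweighted."""
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    def prospect_value(p_leak, reward, privacy_cost):
        """Prospect value of contributing: gain `reward` if the added noise
        masks the contribution, suffer `privacy_cost` (a loss) otherwise.
        `p_leak` is the individual's perceived probability of a privacy
        breach; all three arguments are hypothetical quantities."""
        return (weight(1 - p_leak) * value(reward)
                + weight(p_leak) * value(-privacy_cost))

    # An individual contributes only if the prospect value is non-negative;
    # stronger noise lowers p_leak but also lowers the collector's accuracy.
    print(prospect_value(p_leak=0.1, reward=1.0, privacy_cost=3.0))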




Read also

216 - Jiajun Sun 2013
In crowdsourcing markets, there are two different types of jobs, i.e., homogeneous jobs and heterogeneous jobs, which need to be allocated to workers. Incentive mechanisms are essential to attract extensive user participation and achieve good service quality, especially under a given budget constraint. To this end, Singer et al. recently proposed a novel class of auction mechanisms for determining near-optimal prices of tasks in crowdsourcing markets constrained by a given budget. Their mechanisms are very useful for motivating extensive users to truthfully participate in crowdsourcing markets. Despite their importance, many security and privacy challenges remain in real-life environments. In this paper, we present a general privacy-preserving verifiable incentive mechanism for crowdsourcing markets with a budget constraint, not only to explore how to protect the privacy of bids, assignments, and chosen winners in crowdsourcing markets with homogeneous and heterogeneous jobs, as well as users' identity privacy, but also to enable verifiable payments between the platform and users in crowdsourcing applications. Results show that our general privacy-preserving verifiable incentive mechanisms achieve the same results as the generic one without privacy preservation.
185 - Jiajun Sun 2013
Recently, a novel class of incentive mechanisms has been proposed to attract extensive users to truthfully participate in crowd sensing applications under a given budget constraint. These mechanisms also bring good service quality for the requesters in crowd sensing applications. Despite their importance, many verification and privacy challenges remain, including the privacy of users' bids, subtask information, and identities, the privacy of the platform's winner set, and the security of the payment outcomes. In this paper, we present a privacy-preserving verifiable incentive mechanism for crowd sensing applications with a budget constraint, not only to explore how to protect the privacy of users and the platform, but also to ensure correct and verifiable payments between the platform and users in crowd sensing applications. Results indicate that our privacy-preserving verifiable incentive mechanism achieves the same results as the generic one without privacy preservation.
360 - Shuyuan Zheng, Yang Cao, 2021
Federated learning (FL) is an emerging paradigm for machine learning in which data owners can collaboratively train a model by sharing gradients instead of their raw data. Two fundamental research problems in FL are incentive mechanisms and privacy protection. The former focuses on how to incentivize data owners to participate in FL. The latter studies how to protect data owners' privacy while maintaining high utility of the trained models. However, incentive mechanisms and privacy protection in FL have been studied separately, and no work solves both problems at the same time. In this work, we address the two problems simultaneously with FL-Market, which incentivizes data owners' participation by providing appropriate payments and privacy protection. FL-Market enables data owners to obtain compensation according to their privacy loss, quantified by local differential privacy (LDP). Our insight is that, by meeting data owners' personalized privacy preferences and providing appropriate payments, we can (1) incentivize privacy risk-tolerant data owners to set larger privacy parameters (i.e., gradients with less noise) and (2) provide preferred privacy protection for privacy risk-averse data owners. To achieve this, we design a personalized LDP-based FL framework with a deep learning-empowered auction mechanism for incentivizing the trading of gradients with less noise, and optimal aggregation mechanisms for model updates. Our experiments verify the effectiveness of the proposed framework and mechanisms.
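For intuition on the LDP-based perturbation this abstract mentions, here is a minimal sketch of a data owner adding Laplace noise to a clipped gradient before sharing it. The clipping bound, the per-coordinate Laplace calibration, and the function name ldp_perturb are illustrative assumptions; FL-Market's actual perturbation and aggregation mechanisms are more elaborate.

    import numpy as np

    def ldp_perturb(gradient, epsilon, clip=1.0):
        """Sketch of local differential privacy via the Laplace mechanism:
        clip the gradient to L1 norm <= clip, then add per-coordinate
        Laplace noise with scale 2*clip/epsilon (the L1 distance between
        any two clipped gradients is at most 2*clip). A larger epsilon
        means less noise and weaker privacy."""
        g = np.asarray(gradient, dtype=float)
        norm = np.abs(g).sum()
        if norm > clip:
            g = g * (clip / norm)
        return g + np.random.laplace(0.0, 2.0 * clip / epsilon, size=g.shape)

    # A privacy risk-tolerant owner may report with a larger epsilon (less
    # noise); a risk-averse owner chooses a smaller epsilon.
    print(ldp_perturb([0.5, -1.2, 0.3], epsilon=2.0))
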
In this paper, we study incentive mechanism design for real-time data aggregation, which underlies a large spectrum of crowdsensing applications. Despite extensive studies on static incentive mechanisms, none of these are applicable to real-time data aggregation because they cannot maintain PUs' long-term participation. We emphasize that, to maintain PUs' long-term participation, it is of significant importance to protect their privacy as well as to provide them with a desirable cumulative compensation. Thus motivated, in this paper we propose LEPA, an efficient incentive mechanism to stimulate long-term participation in real-time data aggregation. Specifically, we allow PUs to preserve their privacy by reporting noisy data, whose impact on the aggregation accuracy is quantified with proper privacy and accuracy measures. Then, we provide a framework that jointly optimizes the incentive schemes in different time slots to ensure desirable cumulative compensation for PUs and thereby prevent PUs from leaving the system halfway. Considering PUs' strategic behaviors and the combinatorial nature of the sensing tasks, we propose a computationally efficient online auction with close-to-optimal performance despite the NP-hardness of winner selection. We further show that the proposed online auction satisfies the desirable properties of truthfulness and individual rationality. The performance of LEPA is validated by both theoretical analysis and extensive simulations.
Low transaction throughput and poor scalability are significant issues in public blockchain consensus protocols such as Bitcoin's. Recent research efforts in this direction have proposed shard-based consensus protocols, where the key idea is to split the transactions among multiple committees (or shards), which then process these shards, or sets of transactions, in parallel. Such parallel processing of disjoint sets of transactions by multiple committees significantly improves the overall scalability and transaction throughput of the system. However, one significant research gap is a lack of understanding of the strategic behavior of rational processors within committees in such shard-based consensus protocols. Such an understanding is critical for designing appropriate incentives that will foster cooperation within committees and prevent free-riding. In this paper, we address this research gap by analyzing the behavior of processors using a game-theoretic model, where each processor aims to maximize its reward at a minimum cost of participating in the protocol. We first analyze the Nash equilibria in an N-player static game model of the sharding protocol. We show that, depending on the reward sharing approach employed, processors can potentially increase their payoff by unilaterally behaving in a defective fashion, thus resulting in a social dilemma. To overcome this social dilemma, we propose a novel incentive-compatible reward sharing mechanism to promote cooperation among processors. Our numerical results show that a majority of cooperating processors (required to ensure a healthy state of the blockchain network) is easier to achieve with the proposed incentive-compatible reward sharing mechanism than with other reward sharing mechanisms.
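To make the free-riding tension concrete, the sketch below computes a single processor's payoff in a committee under an equal reward-sharing rule. The payoff structure (a committee reward shared evenly whenever at least one member does the validation work, an effort cost for cooperating) and all numbers are simplifying assumptions for illustration, not the paper's exact game model.

    def payoff(cooperate, others_cooperating, n=10, reward=10.0, cost=2.0):
        """Payoff of one processor in an n-member committee: the committee
        reward is shared equally whenever at least one member cooperates
        (does the validation work); cooperating costs `cost`. All numbers
        are hypothetical."""
        committee_succeeds = cooperate or others_cooperating > 0
        share = reward / n if committee_succeeds else 0.0
        return share - (cost if cooperate else 0.0)

    # If someone else already cooperates, defection strictly dominates; this
    # is the social dilemma an incentive-compatible reward sharing rule must
    # overcome.
    print(payoff(True, others_cooperating=5))   # 1.0 - 2.0 = -1.0
    print(payoff(False, others_cooperating=5))  # 1.0
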