
General Privacy-Preserving Verifiable Incentive Mechanism for Crowdsourcing Markets

Posted by: David Sun
Publication date: 2013
Research field: Informatics Engineering
Paper language: English
Author: Jiajun Sun

In crowdsourcing markets, two different types of jobs, i.e., homogeneous jobs and heterogeneous jobs, need to be allocated to workers. Incentive mechanisms are essential to attract extensive user participation and achieve good service quality, especially under a given budget constraint. To this end, Singer et al. recently proposed a novel class of auction mechanisms for determining near-optimal task prices in crowdsourcing markets constrained by a given budget. Their mechanisms are very useful for motivating users to participate truthfully in crowdsourcing markets. Despite their importance, many security and privacy challenges remain in real-life environments. In this paper, we present a general privacy-preserving verifiable incentive mechanism for crowdsourcing markets under a budget constraint, which not only protects the privacy of bids, assignments, chosen winners, and user identities in crowdsourcing markets with homogeneous and heterogeneous jobs, but also makes the payments between the platform and users verifiable. Results show that our general privacy-preserving verifiable incentive mechanism achieves the same results as the generic one without privacy preservation.
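As a concrete illustration of the kind of budget-feasible auction this line of work builds on, the sketch below implements the proportional-share rule for homogeneous jobs in the spirit of Singer's mechanisms; the function and variable names are ours, and the privacy layer this paper adds on top is omitted.

```python
def proportional_share_auction(bids, budget):
    """Budget-feasible auction sketch for homogeneous jobs: sort bids
    ascending, keep the largest prefix of workers whose bids stay below
    the proportional share budget / k, and pay every winner the same
    threshold price.  Truthfulness follows because the payment does not
    depend on a winner's own bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    k = 0
    while k < len(order) and bids[order[k]] <= budget / (k + 1):
        k += 1
    if k == 0:
        return [], 0.0
    # Threshold payment: the (k+1)-th lowest bid caps the price, and
    # k * price <= budget keeps the mechanism budget feasible.
    next_bid = bids[order[k]] if k < len(order) else float("inf")
    price = min(budget / k, next_bid)
    return [order[i] for i in range(k)], price
```

For example, with bids = [2, 3, 4, 10] and budget = 9, the two lowest bidders win and each is paid min(9/2, 4) = 4, so the total payment of 8 stays within the budget.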

Read also

Jiajun Sun (2013)
Recently, a novel class of incentive mechanisms was proposed to attract extensive users to participate truthfully in crowd sensing applications under a given budget constraint. These mechanisms also bring good service quality for the requesters in crowd sensing applications. Despite their importance, many verification and privacy challenges remain, including the privacy of users' bids, subtask information, and identities, the privacy of the platform's winner set, and the security of the payment outcomes. In this paper, we present a privacy-preserving verifiable incentive mechanism for crowd sensing applications under a budget constraint, not only to protect the privacy of both users and the platform, but also to make the payments between the platform and users correct and verifiable.  Results indicate that our privacy-preserving verifiable incentive mechanism achieves the same results as the generic one without privacy preservation.
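One standard building block for keeping bids hidden until payments can be checked is a commit-then-reveal step. The sketch below is a generic hash-commitment illustration with hypothetical helper names, not necessarily the construction used in this paper.

```python
import hashlib
import os

def commit(bid: int) -> tuple[bytes, bytes]:
    """Commit to a bid: publish the digest now, reveal (bid, nonce)
    only after winners are chosen.  Hiding: the random nonce masks the
    bid.  Binding: opening the commitment to a different bid would
    require a SHA-256 collision.  Assumes a non-negative bid < 2**64."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + bid.to_bytes(8, "big")).digest()
    return digest, nonce

def verify(digest: bytes, bid: int, nonce: bytes) -> bool:
    """Anyone holding the published digest can check the revealed bid."""
    return hashlib.sha256(nonce + bid.to_bytes(8, "big")).digest() == digest
```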
Shuyuan Zheng, Yang Cao (2021)
Federated learning (FL) is an emerging paradigm for machine learning in which data owners can collaboratively train a model by sharing gradients instead of their raw data. Two fundamental research problems in FL are incentive mechanisms and privacy protection. The former focuses on how to incentivize data owners to participate in FL. The latter studies how to protect data owners' privacy while maintaining high utility of trained models. However, incentive mechanisms and privacy protection in FL have been studied separately, and no work solves both problems at the same time. In this work, we address the two problems simultaneously with FL-Market, which incentivizes data owners' participation by providing appropriate payments and privacy protection. FL-Market enables data owners to obtain compensation according to their privacy loss, quantified by local differential privacy (LDP). Our insight is that, by meeting data owners' personalized privacy preferences and providing appropriate payments, we can (1) incentivize privacy risk-tolerant data owners to set larger privacy parameters (i.e., gradients with less noise) and (2) provide preferred privacy protection for privacy risk-averse data owners. To achieve this, we design a personalized LDP-based FL framework with a deep learning-empowered auction mechanism for incentivizing the trading of gradients with less noise, and optimal aggregation mechanisms for model updates. Our experiments verify the effectiveness of the proposed framework and mechanisms.
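The LDP ingredient is easy to make concrete. The sketch below, assuming the Laplace mechanism on L1-clipped gradients, shows how each owner's personal epsilon sets the noise scale: a larger epsilon yields a less noisy, hence more valuable, gradient, which is the handle the auction prices. The paper's auction and optimal aggregation mechanisms are not reproduced here.

```python
import numpy as np

def ldp_gradient(grad, epsilon, clip=1.0, rng=None):
    """Report one data owner's gradient under epsilon-LDP.

    Clipping bounds the L1 norm, so any two possible clipped reports
    differ by at most 2 * clip in L1 distance; Laplace noise with scale
    2 * clip / epsilon then satisfies epsilon-LDP (Laplace mechanism).
    """
    rng = rng or np.random.default_rng()
    grad = np.asarray(grad, dtype=float)
    norm = np.abs(grad).sum()
    if norm > clip:
        grad = grad * (clip / norm)   # project into the L1 ball
    return grad + rng.laplace(0.0, 2.0 * clip / epsilon, size=grad.shape)
```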
Incentive mechanisms play a critical role in privacy-aware crowdsensing. Most previous studies on the co-design of incentive mechanisms and privacy preservation assume a trustworthy fusion center (FC). Very recent work has taken steps to relax the assumption of a trustworthy FC and allows participatory users (PUs) to add well-calibrated noise to their raw sensing data before reporting it, but the focus there is on the equilibrium behavior of data subjects with binary data. Making a paradigm shift, this paper aims to quantify the privacy compensation for continuous data sensing while allowing the FC to directly control PUs. There are two conflicting objectives in such a scenario: the FC desires better-quality data in order to achieve higher aggregation accuracy, whereas PUs prefer adding larger noise for higher privacy-preserving levels (PPLs). To achieve a good balance, we design an efficient incentive mechanism to REconcile the FC's Aggregation accuracy and individual PUs' data Privacy (REAP). Specifically, we adopt the celebrated notion of differential privacy to measure PUs' PPLs and quantify their impact on the FC's aggregation accuracy. Then, appealing to contract theory, we design an incentive mechanism to maximize the FC's aggregation accuracy under a given budget. The proposed incentive mechanism offers different contracts to PUs with different privacy preferences, by which the FC can directly control PUs. It can further overcome the information asymmetry, i.e., the fact that the FC typically does not know each PU's precise privacy preference. We derive closed-form solutions for the optimal contracts in both the complete information and incomplete information scenarios. Further, the results are generalized to the continuous case, where PUs' privacy preferences take values in a continuous domain. Extensive simulations are provided to validate the feasibility and advantages of our proposed incentive mechanism.
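The accuracy side of this trade-off admits a one-line calculation: if PU i perturbs a bounded scalar reading with Laplace noise of scale sensitivity / eps_i, the variance of the FC's averaged estimate is sum_i 2 * (sensitivity / eps_i)**2 / n**2. The sketch below is this toy calculation, not the paper's contract design.

```python
import numpy as np

def aggregation_error(epsilons, sensitivity=1.0):
    """Variance of the FC's averaged estimate when PU i adds
    Laplace(sensitivity / eps_i) noise to its scalar reading: each
    Laplace term has variance 2 * scale**2, and the n independent
    noises average down by a factor of 1 / n**2."""
    scales = sensitivity / np.asarray(epsilons, dtype=float)
    return float((2.0 * scales**2).sum()) / len(scales) ** 2
```

Smaller epsilons (stronger PPLs) inflate this error, which is exactly why the FC must pay more for higher-epsilon contracts.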
With the proliferation of the digital data economy, digital data is considered the crude oil of the twenty-first century, and its value is increasing. Keeping pace with this trend, the data market model, in which data providers trade with data consumers, is starting to emerge as a process for obtaining high-quality personal information in exchange for some compensation. However, the risk of privacy violations caused by personal data analysis hinders data providers' participation in the data market. Differential privacy, a de facto standard for privacy protection, can solve this problem, but, on the other hand, it deteriorates data utility. In this paper, we introduce a pricing mechanism that takes into account the trade-off between privacy and accuracy. We propose a method to induce the data provider to report her privacy price accurately, and we optimize the mechanism to maximize the data consumer's profit within budget constraints. We show formally that the proposed mechanism achieves these properties, and we also validate them experimentally.
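To make the consumer's optimization concrete under stated assumptions: suppose provider i charges a reported price p_i per unit of privacy budget, and the consumer minimizes the total Laplace-noise variance sum_i 2 * (s / eps_i)**2 subject to spending exactly the budget B. A Lagrangian argument gives eps_i proportional to p_i**(-1/3). The sketch below implements that closed form; it illustrates the trade-off being priced, not the paper's mechanism, which must additionally keep the price reports truthful.

```python
import numpy as np

def buy_privacy(prices, budget):
    """Allocate a budget across providers: eps_i = c * p_i ** (-1/3),
    with c chosen so that sum(p_i * eps_i) == budget.  Cheaper privacy
    is bought in larger quantities, but only sublinearly so."""
    p = np.asarray(prices, dtype=float)
    c = budget / (p ** (2.0 / 3.0)).sum()
    return c * p ** (-1.0 / 3.0)
```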
Outsourcing neural network inference tasks to an untrusted cloud raises data privacy and integrity concerns. To address these challenges, several privacy-preserving and verifiable inference techniques have been proposed, based on replacing non-polynomial activation functions such as the rectified linear unit (ReLU) with polynomial activation functions. Such techniques usually require polynomials with integer coefficients or polynomials over finite fields. Motivated by such requirements, several works have proposed replacing the ReLU activation function with the square activation function. In this work, we empirically show that the square function is not the best degree-$2$ polynomial that can replace the ReLU function, even when restricting the polynomials to integer coefficients. We instead propose a degree-$2$ polynomial activation function with a first-order term and empirically show that it can lead to much better models. Our experiments with various architectures on the CIFAR-$10$ and CIFAR-$100$ datasets show that our proposed activation function improves test accuracy by up to $9.4\%$ compared to the square function.
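Such an activation is easy to state in code. Below is a minimal PyTorch sketch with a quadratic and a first-order term; the initial coefficients are illustrative assumptions rather than the paper's choices, and for the encrypted or finite-field setting the paper targets they would ultimately need to be integers.

```python
import torch
import torch.nn as nn

class Poly2Act(nn.Module):
    """Degree-2 polynomial activation with a first-order term,
    a * x**2 + b * x, as a drop-in replacement for ReLU.  Unlike the
    plain square activation x**2, the linear term breaks the symmetry
    around zero, which is one intuition for why it can train better."""

    def __init__(self, a: float = 0.1, b: float = 1.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(a))  # quadratic coefficient
        self.b = nn.Parameter(torch.tensor(b))  # linear coefficient

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.a * x * x + self.b * x
```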