
How to Incentivize Data-Driven Collaboration Among Competing Parties

Posted by: Sunoo Park
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





The availability of vast amounts of data is changing how we can make medical discoveries, predict global market trends, save energy, and develop educational strategies. In some settings, such as Genome-Wide Association Studies or deep learning, the sheer size of data seems critical. When data is held in distributed fashion by many parties, they must share it to reap its full benefits. One obstacle to this revolution is the unwillingness of different parties to share data, for reasons such as loss of privacy or competitive edge. Cryptographic works address privacy aspects, but shed no light on individual parties' losses/gains when access to data carries tangible rewards. Even if it is clear that better overall conclusions can be drawn from collaboration, are individual collaborators better off by collaborating? Addressing this question is the topic of this paper.

* We formalize a model of n-party collaboration for computing functions over private inputs, in which participants receive their outputs in sequence and the order depends on their private inputs. Each output improves on preceding outputs according to a score function.

* We say a mechanism for collaboration achieves collaborative equilibrium if it ensures a higher reward for all participants when collaborating (rather than working alone). We show that, in general, computing a collaborative equilibrium is NP-complete, yet we design efficient algorithms to compute it in a range of natural model settings.

Our collaboration mechanisms are in the standard model, and thus require a central trusted party; however, we show this assumption is unnecessary under standard cryptographic assumptions. We show how to implement the mechanisms in a decentralized way with new extensions of secure multiparty computation that impose order/timing constraints on output delivery to different players, as well as privacy and correctness.
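As a toy illustration of the collaborative-equilibrium condition (a sketch, not the paper's mechanism), suppose the joint output is computed from all pooled inputs, larger contributors receive it earlier, and later delivery carries a utility cost. Collaboration is an equilibrium only if every party still beats its stand-alone score; the square-root score function and linear delay cost below are illustrative assumptions:

import math

def is_collaborative_equilibrium(sizes, delay_cost=0.4):
    # Toy score with diminishing returns: more pooled data -> better output.
    score = lambda n: math.sqrt(n)
    total = sum(sizes)
    # Larger contributors receive the joint output earlier (rank 0 = first).
    order = sorted(range(len(sizes)), key=lambda i: -sizes[i])
    for rank, i in enumerate(order):
        collaborative = score(total) - delay_cost * rank  # later delivery costs utility
        alone = score(sizes[i])                           # working alone: own data, no delay
        if collaborative <= alone:
            return False                                  # party i prefers to opt out
    return True

print(is_collaborative_equilibrium([100, 50, 10]))              # True: everyone gains
print(is_collaborative_equilibrium([100, 90], delay_cost=5.0))  # False: the runner-up opts out

The paper's mechanisms determine the delivery order from the private inputs themselves and, via secure multiparty computation with order/timing constraints on output delivery, avoid the trusted party this sketch implicitly assumes.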




Read also

The Bitcoin protocol prescribes certain behavior by the miners who are responsible for maintaining and extending the underlying blockchain; in particular, miners who successfully solve a puzzle, and hence can extend the chain by a block, are supposed to release that block immediately. Eyal and Sirer showed, however, that a selfish miner is incentivized to deviate from the protocol and withhold its blocks under certain conditions. The analysis by Eyal and Sirer, as well as followup work, considers a \emph{single} deviating miner (who may control a large fraction of the hashing power in the network) interacting with a remaining pool of honest miners. Here, we extend this analysis to the case where there are \emph{multiple} (non-colluding) selfish miners. We find that with multiple strategic miners, specific deviations from honest mining can outperform it, even when no miner would individually be incentivized to be dishonest. This effectively renders the Bitcoin protocol less secure than previously thought.
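For context, the single-deviator baseline that this abstract generalizes can be reproduced with a short Monte Carlo simulation of the Eyal-Sirer withholding strategy. The state machine below is a minimal sketch (hash fraction alpha, tie-breaking parameter gamma), not the multi-miner analysis itself:

import random

def selfish_revenue_share(alpha, gamma=0.5, blocks=500_000, seed=1):
    # Share of main-chain blocks won by a single selfish miner with hash
    # fraction alpha; gamma is the share of honest power that mines on the
    # selfish block during a published 1-vs-1 race.
    rng = random.Random(seed)
    lead, tie = 0, False   # private lead; tie = an unresolved 1-vs-1 race
    selfish = honest = 0   # blocks credited on the eventual main chain
    for _ in range(blocks):
        if rng.random() < alpha:               # selfish miner finds the next block
            if tie:
                selfish += 2; tie = False      # wins the race: both its blocks count
            else:
                lead += 1                      # keep withholding
        else:                                  # honest network finds the next block
            if tie:
                if rng.random() < gamma:
                    selfish += 1; honest += 1  # honest block extends the selfish branch
                else:
                    honest += 2                # the honest branch wins the race
                tie = False
            elif lead == 0:
                honest += 1
            elif lead == 1:
                tie, lead = True, 0            # publish and race the honest block
            elif lead == 2:
                selfish += 2; lead = 0         # publish all: orphan the honest block
            else:
                selfish += 1; lead -= 1        # publish one block, stay ahead
    return selfish / (selfish + honest)

print(selfish_revenue_share(0.3))  # > 0.30: withholding beats honest mining
print(selfish_revenue_share(0.2))  # < 0.20: below the profitability threshold

With gamma = 0.5 the classical profitability threshold is alpha > 1/4, which the two sample runs straddle.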
All proper scoring rules incentivize an expert to predict \emph{accurately} (report their true estimate), but not all proper scoring rules equally incentivize \emph{precision}. Rather than treating the expert's belief as exogenously given, we consider a model where a rational expert can endogenously refine their belief by repeatedly paying a fixed cost, and is incentivized to do so by a proper scoring rule. Specifically, our expert aims to predict the probability that a biased coin flipped tomorrow will land heads, and can flip the coin any number of times today at a cost of $c$ per flip. Our first main result defines an \emph{incentivization index} for proper scoring rules, and proves that this index measures the expected error of the expert's estimate (where the number of flips today is chosen adaptively to maximize the predictor's expected payoff). Our second main result finds the unique scoring rule which optimizes the incentivization index over all proper scoring rules. We also consider extensions to minimizing the $\ell^{th}$ moment of error, and again provide an incentivization index and optimal proper scoring rule. In some cases, the resulting scoring rule is differentiable, but not infinitely differentiable. In these cases, we further prove that the optimum can be uniformly approximated by polynomial scoring rules. Finally, we compare common scoring rules via our measure, and include simulations confirming the relevance of our measure even in domains beyond those where it provably applies.
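A worked toy instance of this trade-off, using the familiar quadratic (Brier) rule in place of the paper's optimal rule: under a uniform prior on the coin's bias, the expert's expected squared error after n flips (reporting the posterior mean) is exactly 1/(6(n+2)), so the net payoff is easy to maximize. The rule, prior, and cost here are illustrative assumptions:

def expected_error(n):
    # E[(p - report)^2] after n flips under a uniform Beta(1,1) prior, where the
    # expert reports the posterior mean (k+1)/(n+2); this works out to 1/(6(n+2)).
    return 1 / (6 * (n + 2))

def net_payoff(n, c):
    # Expected Brier payoff 1 - E[(outcome - report)^2] minus flipping costs:
    # 1 - E[p(1-p)] - expected_error(n) - c*n, with E[p(1-p)] = 1/6 under this prior.
    return 1 - 1/6 - expected_error(n) - c * n

# The marginal value of a flip falls off like 1/(6(n+2)^2), so the optimal n is finite.
best = max(range(200), key=lambda n: net_payoff(n, c=0.002))
print(best)  # 7: flip seven times, then report the posterior mean

The incentivization index in the abstract generalizes exactly this quantity, the expected error of the estimate under adaptively chosen effort, to arbitrary proper scoring rules.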
We consider the problem of designing a survey to aggregate non-verifiable information from a privacy-sensitive population: an analyst wants to compute some aggregate statistic from the private bits held by each member of a population, but cannot verify the correctness of the bits reported by participants in his survey. Individuals in the population are strategic agents with a cost for privacy; i.e., in determining their utilities, they account not only for the payments they expect to receive from the mechanism, but also for their privacy costs from any information revealed about them by the mechanism's outcome (the computed statistic as well as the payments). How can the analyst design payments to obtain an accurate estimate of the population statistic when individuals strategically decide both whether to participate and whether to truthfully report their sensitive information? We design a differentially private peer-prediction mechanism that supports accurate estimation of the population statistic as a Bayes-Nash equilibrium in settings where agents have explicit preferences for privacy. The mechanism requires knowledge of the marginal prior distribution on bits $b_i$, but does not need full knowledge of the marginal distribution on the costs $c_i$, instead requiring only an approximate upper bound. Our mechanism guarantees $\epsilon$-differential privacy to each agent $i$ against any adversary who can observe the statistical estimate output by the mechanism, as well as the payments made to the $n-1$ other agents $j \neq i$. Finally, we show that with slightly more structured assumptions on the privacy cost functions of each agent, the cost of running the survey goes to $0$ as the number of agents diverges.
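The differential-privacy guarantee at the heart of this abstract (though not the peer-prediction payments) can be illustrated with the classic randomized-response mechanism, which is $\epsilon$-differentially private and still lets the analyst debias the aggregate; the population fraction below is an arbitrary choice:

import math, random

def randomized_response(bit, epsilon, rng):
    # Report the true bit with probability e^eps / (1 + e^eps); flipping the
    # input changes the probability of any output by at most a factor of e^eps.
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if rng.random() < p_truth else 1 - bit

def estimate_mean(reports, epsilon):
    # E[report] = (1 - p) + mean * (2p - 1), so invert that map to debias.
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
bits = [int(rng.random() < 0.3) for _ in range(100_000)]  # 30% of the population holds a 1
reports = [randomized_response(b, 1.0, rng) for b in bits]
print(estimate_mean(reports, 1.0))                        # close to 0.3

The paper's contribution is to combine such a privacy guarantee with peer-prediction payments so that truthful participation is also a Bayes-Nash equilibrium for agents who explicitly value privacy.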
Since training a large-scale backdoored model from scratch requires a large training dataset, several recent attacks have considered injecting backdoors into a trained clean model without altering model behaviors on the clean data. Previous work finds that backdoors can be injected into a trained clean model with Adversarial Weight Perturbations (AWPs), i.e., parameter variations that remain small during backdoor learning. In this work, we observe the interesting phenomenon that the parameter variations are always AWPs when tuning the trained clean model to inject backdoors. We further provide theoretical analysis to explain this phenomenon. We formulate the behavior of maintaining accuracy on clean data as the consistency of backdoored models, which includes both global consistency and instance-wise consistency. We extensively analyze the effects of AWPs on the consistency of backdoored models. In order to achieve better consistency, we propose a novel anchoring loss to anchor or freeze the model behaviors on the clean data, with a theoretical guarantee. Both the analytical and the empirical results validate the effectiveness of the anchoring loss in improving the consistency, especially the instance-wise consistency.
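A minimal PyTorch-style sketch of an anchoring-style objective; the distance measure and weighting here are assumptions for illustration, not the paper's exact loss. The backdoor is fit on triggered inputs while per-example outputs on clean data are anchored to a frozen copy of the clean model, targeting instance-wise consistency:

import torch
import torch.nn.functional as F

def anchored_backdoor_loss(model, frozen_clean_model, clean_x, trigger_x, target_y, lam=1.0):
    # Fit the trigger -> target mapping (the backdoor task).
    backdoor_loss = F.cross_entropy(model(trigger_x), target_y)
    # Frozen reference behavior: the clean model's outputs on clean data.
    with torch.no_grad():
        clean_logits = frozen_clean_model(clean_x)
    # Instance-wise anchoring: keep each clean example's output close to the reference.
    anchor = F.mse_loss(model(clean_x), clean_logits)
    return backdoor_loss + lam * anchor

Pinning clean outputs example by example is one way to encourage the small (AWP-like) parameter variations the abstract describes, since large weight changes would move the clean predictions.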
Prediction is a well-studied machine learning task, and prediction algorithms are core ingredients in online products and services. Despite their centrality in the competition between online companies who offer prediction-based products, the strategic use of prediction algorithms remains unexplored. The goal of this paper is to examine the strategic use of prediction algorithms. We introduce a novel game-theoretic setting, based on the PAC learning framework, where each player (i.e., a competing prediction algorithm) seeks to maximize the number of points for which it produces an accurate prediction and the others do not. We show that algorithms aiming at generalization may wittingly mispredict some points in order to outperform the others in expectation. We analyze the empirical game, i.e., the game induced on a given sample, prove that it always possesses a pure Nash equilibrium, and show that every better-response learning process converges. Moreover, our learning-theoretic analysis suggests that players can, with high probability, learn an approximate pure Nash equilibrium for the whole population using a small number of samples.
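The empirical game admits a direct sketch: with a finite hypothesis class, each player's payoff counts the sample points it alone predicts correctly, and better-response dynamics, which the abstract proves always converge, can be run until a pure Nash equilibrium is reached. The 1-D sample and threshold classifiers below are illustrative assumptions:

import random

def payoff(i, profile, sample, labels, hypotheses):
    # Points where player i is correct and every other player is wrong.
    total = 0
    for x, y in zip(sample, labels):
        correct = [hypotheses[h](x) == y for h in profile]
        if correct[i] and not any(correct[j] for j in range(len(profile)) if j != i):
            total += 1
    return total

def better_response_dynamics(n_players, hypotheses, sample, labels, seed=0):
    rng = random.Random(seed)
    profile = [rng.randrange(len(hypotheses)) for _ in range(n_players)]
    improved = True
    while improved:  # the abstract proves every better-response process converges
        improved = False
        for i in range(n_players):
            current = payoff(i, profile, sample, labels, hypotheses)
            for h in range(len(hypotheses)):
                trial = profile[:]
                trial[i] = h
                if payoff(i, trial, sample, labels, hypotheses) > current:
                    profile, improved = trial, True
                    break
    return profile  # indices of a pure-Nash profile of hypotheses

# Toy instance: 1-D points and threshold classifiers h_t(x) = [x >= t].
sample, labels = [0.1, 0.4, 0.6, 0.9], [0, 0, 1, 1]
hypotheses = [lambda x, t=t: int(x >= t) for t in (0.0, 0.3, 0.5, 0.8)]
print(better_response_dynamics(2, hypotheses, sample, labels))

Note that a player may settle on a hypothesis that deliberately mispredicts some points: being right where the competitor is also right earns nothing under this payoff.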