In a multi-party machine learning system, different parties cooperate on optimizing towards better models by sharing data in a privacy-preserving way. A major challenge in such learning is the incentive issue. For example, if there is competition among the parties, one may strategically hide his data to prevent other parties from getting better models. In this paper, we study the problem through the lens of mechanism design and incorporate the features of multi-party learning into our setting. First, each agent's valuation has externalities that depend on other agents' types and actions. Second, each agent can only misreport a type lower than his true type, but not the other way round. We call this setting interdependent values with type-dependent action spaces. We provide the optimal truthful mechanism in the quasi-monotone utility setting. We also provide necessary and sufficient conditions for truthful mechanisms in the most general case. Finally, we show that the existence of such mechanisms is highly affected by the market growth rate, and we provide an empirical analysis.
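A minimal sketch, not the paper's mechanism, of what the type-dependent action space means in practice: each agent may only report a type no higher than the true type, and truthfulness requires that no downward misreport is profitable. The type grid, allocation, valuation, and payment rules below are illustrative assumptions chosen so the brute-force check passes.

```python
"""Illustrative sketch of downward-only misreporting and a truthfulness check.
All rules here (TYPES, allocation, valuation, payment) are toy assumptions,
not the mechanism proposed in the paper."""

import itertools

TYPES = [0, 1, 2, 3]  # discretized types, e.g. amount/quality of data an agent holds


def allocation(reports):
    # Toy rule: every agent receives a model whose quality grows with the
    # total reported data -- an externality, since others' reports matter.
    total = sum(reports)
    return [total for _ in reports]


def valuation(i, alloc, true_types):
    # Agent i values its own model, minus a competition penalty that depends
    # on rivals' models (interdependent values); the own true type enters as
    # a report-independent term.
    rivals = sum(a for j, a in enumerate(alloc) if j != i)
    return 2.0 * alloc[i] - 0.5 * rivals + 0.1 * true_types[i]


def payment(i, reports):
    # Toy payment charging agent i for the reported contributions of others.
    return 0.25 * sum(r for j, r in enumerate(reports) if j != i)


def utility(i, reports, true_types):
    alloc = allocation(reports)
    return valuation(i, alloc, true_types) - payment(i, reports)


def is_truthful(true_types):
    """Check that no agent gains by the only feasible deviation in this
    setting: reporting a type strictly lower than the true type."""
    for i, t in enumerate(true_types):
        u_truth = utility(i, list(true_types), true_types)
        for misreport in (x for x in TYPES if x < t):
            deviated = list(true_types)
            deviated[i] = misreport
            if utility(i, deviated, true_types) > u_truth + 1e-9:
                return False
    return True


if __name__ == "__main__":
    profiles = itertools.product(TYPES, repeat=2)
    print(all(is_truthful(list(p)) for p in profiles))  # True for this toy rule
```

In this toy rule under-reporting only shrinks the model an agent receives, so truthful reporting dominates; the paper's results characterize when such truthful mechanisms exist in the general interdependent-value setting.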
We study the problem of allocating impressions to sellers in e-commerce websites, such as Amazon, eBay or Taobao, aiming to maximize the total revenue generated by the platform. We employ a general framework of reinforcement mechanism design, which u
A distributed machine learning platform needs to recruit many heterogeneous worker nodes to finish computation simultaneously. As a result, the overall performance may be degraded due to straggling workers. By introducing redundancy into computation,
Secure multi-party computation (MPC) allows parties to perform computations on data while keeping that data private. This capability has great potential for machine-learning applications: it facilitates training of machine-learning models on private
Agent technology, a new paradigm in software engineering, has received attention from research and industry since the 1990s. However, it is still not used widely to date because it requires expertise in both programming and agent technology; gaps among r
The performance of machine learning algorithms heavily relies on the availability of a large amount of training data. However, in reality, data usually reside in distributed parties such as different institutions and may not be directly gathered and