
Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness

Posted by: Faidra Monachou
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Decision-making systems increasingly orchestrate our world: how to intervene on the algorithmic components to build fair and equitable systems is therefore a question of utmost importance; one that is substantially complicated by the context-dependent nature of fairness and discrimination. Modern decision-making systems that involve allocating resources or information to people (e.g., school choice, advertising) incorporate machine-learned predictions in their pipelines, raising concerns about potential strategic behavior or constrained allocation, concerns usually tackled in the context of mechanism design. Although both machine learning and mechanism design have developed frameworks for addressing issues of fairness and equity, in some complex decision-making systems, neither framework is individually sufficient. In this paper, we develop the position that building fair decision-making systems requires overcoming these limitations, which, we argue, are inherent to each field. Our ultimate objective is to build an encompassing framework that cohesively bridges the individual frameworks of mechanism design and machine learning. We begin to lay the groundwork towards this goal by comparing the perspective each discipline takes on fair decision-making, teasing out the lessons each field has taught and can teach the other, and highlighting application domains that require a strong collaboration between these disciplines.
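To make the interplay concrete, here is a minimal sketch (illustrative only, not from the paper) of a pipeline where a machine-learned score drives selection and a mechanism-design-style constraint caps each group's share of a limited resource. The function `allocate_with_parity`, its parameters, and the data are all hypothetical.

```python
import numpy as np

def allocate_with_parity(scores, groups, capacity, max_share):
    """Greedily allocate `capacity` seats by predicted score, subject to
    a cap on the share of seats any single group may take.

    Hypothetical illustration: `scores` stand in for ML predictions,
    and the share cap plays the role of an allocation constraint."""
    order = np.argsort(-scores)              # best predicted candidates first
    taken = {g: 0 for g in set(groups)}
    selected = []
    for i in order:
        if len(selected) >= capacity:
            break
        g = groups[i]
        # Skip a candidate whose group already holds its maximum share.
        if (taken[g] + 1) / capacity > max_share:
            continue
        selected.append(int(i))
        taken[g] += 1
    return selected

rng = np.random.default_rng(0)
scores = rng.uniform(size=20)                # stand-in for model outputs
groups = rng.choice(["A", "B"], size=20)     # protected attribute
print(allocate_with_parity(scores, groups, capacity=5, max_share=0.6))
```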




Read also

A distributed machine learning platform needs to recruit many heterogeneous worker nodes to finish computation simultaneously. As a result, the overall performance may be degraded due to straggling workers. By introducing redundancy into computation, coded machine learning can effectively improve the runtime performance by recovering the final computation result through the first $k$ (out of the total $n$) workers who finish computation. While existing studies focus on designing efficient coding schemes, the issue of designing proper incentives to encourage worker participation is still under-explored. This paper studies the platform's optimal incentive mechanism for motivating proper worker participation in coded machine learning, despite the incomplete information about heterogeneous workers' computation performance and costs. A key contribution of this work is to summarize workers' multi-dimensional heterogeneity as a one-dimensional metric, which guides the platform's efficient selection of workers under incomplete information with linear computation complexity. Moreover, we prove that the optimal recovery threshold $k$ is linearly proportional to the number of participants $n$ if we use the widely adopted MDS (Maximum Distance Separable) codes for data encoding. We also show that the platform's increased cost due to incomplete information disappears when the number of workers is sufficiently large, but it does not decrease monotonically in the number of workers.
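For readers unfamiliar with coded machine learning, here is a minimal numerical sketch of the recovery idea the abstract relies on (the coding scheme, not the paper's incentive mechanism): split a matrix-vector job into $k$ blocks, encode them into $n$ coded blocks with a Vandermonde generator (an MDS-style code), and decode the full result from any $k$ finished workers.

```python
import numpy as np

k, n = 3, 5                      # need any k of n workers to finish
A = np.random.default_rng(1).normal(size=(6, 4))
x = np.ones(4)

# Split A into k row blocks and encode them with a Vandermonde
# generator G (n x k); with distinct nodes, every k x k submatrix
# of G is invertible, which is the MDS property we need.
blocks = np.split(A, k)
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker i computes its coded block times x; suppose only
# workers {0, 2, 4} (any k of them) finish first.
partial = {i: coded[i] @ x for i in (0, 2, 4)}

# Decode by inverting the k x k submatrix of G for the finished workers.
idx = sorted(partial)
decoded = np.linalg.inv(G[idx, :]) @ np.stack([partial[i] for i in idx])
result = decoded.reshape(-1)

assert np.allclose(result, A @ x)    # matches the uncoded job
```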
The design of revenue-maximizing auctions with strong incentive guarantees is a core concern of economic theory. Computational auctions enable online advertising, sourcing, spectrum allocation, and myriad financial markets. Analytic progress in this space is notoriously difficult; since Myerson's 1981 work characterizing single-item optimal auctions, there has been limited progress outside of restricted settings. A recent paper by Dutting et al. circumvents these analytic difficulties by applying deep learning techniques to, instead, approximate optimal auctions. In parallel, new research from Ilvento et al. and other groups has developed notions of fairness in the context of auction design. Inspired by these advances, in this paper we extend techniques for approximating auctions using deep learning to address concerns of fairness while maintaining high revenue and strong incentive guarantees.
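As context for the Myerson result mentioned above, here is a minimal sketch of the classical single-item optimal auction for i.i.d. Uniform[0,1] bidders, the textbook benchmark that learned auctions approximate in richer settings; this is not the deep-learning method of Dutting et al.

```python
import numpy as np

def myerson_uniform_auction(bids, reserve=0.5):
    """Single-item optimal auction for i.i.d. Uniform[0,1] values
    (Myerson 1981): second-price with the monopoly reserve 1/2,
    where the virtual value phi(v) = 2v - 1 crosses zero.

    Returns (winner index or None, payment)."""
    bids = np.asarray(bids, dtype=float)
    winner = int(np.argmax(bids))
    if bids[winner] < reserve:
        return None, 0.0                    # item stays unsold
    others = np.delete(bids, winner)
    # Winner pays the larger of the reserve and the runner-up bid,
    # which keeps truthful bidding a dominant strategy.
    payment = max(reserve, others.max()) if others.size else reserve
    return winner, float(payment)

print(myerson_uniform_auction([0.3, 0.8, 0.6]))   # -> (1, 0.6)
print(myerson_uniform_auction([0.2, 0.4]))        # -> (None, 0.0)
```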
In 2001, Leo Breiman wrote of a divide between the data modeling and algorithmic modeling cultures. Twenty years later this division feels far more ephemeral, both in terms of assigning individuals to camps and in terms of intellectual boundaries. We argue that this is largely due to the data modelers incorporating algorithmic methods into their toolbox, particularly driven by recent developments in the statistical understanding of Breiman's own Random Forest methods. While this can be simplistically described as "Breiman won", these same developments also expose the limitations of the prediction-first philosophy that he espoused, making careful statistical analysis all the more important. This paper outlines these exciting recent developments in the random forest literature which, in our view, occurred as a result of a necessary blending of the two ways of thinking Breiman originally described. We also ask what areas statistics and statisticians might currently overlook.
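A toy sketch of the flavor of per-tree analysis behind such statistical developments, assuming scikit-learn is available; the between-tree spread used below is only a naive uncertainty proxy, not the more careful (e.g., infinitesimal-jackknife) estimators the literature develops.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy data: y = sin(x) + noise.
rng = np.random.default_rng(3)
X = rng.uniform(0, 6, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=400)

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Per-tree predictions at a query point; their spread is a crude
# measure of the forest's predictive uncertainty there.
x0 = np.array([[1.5]])
per_tree = np.array([t.predict(x0)[0] for t in forest.estimators_])
print(per_tree.mean(), per_tree.std())
```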
Game theory is often used as a tool to analyze decentralized systems and their properties, in particular blockchains. In this note, we take the opposite view. We argue that blockchains can and should be used to implement economic mechanisms, because they can help to overcome problems that occur if trust in the mechanism designer cannot be assumed. Mechanism design deals with the allocation of resources to agents, often by extracting private information from them. Some mechanisms are immune to early information disclosure, while others may heavily depend on it. Some mechanisms have to randomize to achieve fairness and efficiency. Both issues, information disclosure and randomness, require trust in the mechanism designer. If there is no trust, mechanisms can be manipulated. We claim that mechanisms that use randomness or sequential information disclosure are much harder, if not impossible, to audit. Therefore, centralized implementation is often not a good solution. We consider some of the most frequently used mechanisms in practice and identify circumstances under which manipulation is possible. We propose a decentralized implementation of such mechanisms that can, in practical terms, be realized with blockchain technology. Moreover, we identify the environments in which a decentralized implementation of a mechanism brings a significant advantage.
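One standard building block for the auditable randomness such decentralized implementations need is a commit-reveal scheme. The sketch below is illustrative (plain Python, no actual chain) and is not the authors' construction: each party publishes a hash commitment to a random share, reveals it later, and the joint coin is the XOR of the shares, so neither party can bias the outcome after seeing the other's commitment.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit to `value` by publishing H(nonce || value); the nonce
    stays secret until the reveal phase."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + value).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == digest

# Commit phase: both parties publish digests (e.g., on-chain).
share_a, share_b = secrets.token_bytes(8), secrets.token_bytes(8)
com_a, nonce_a = commit(share_a)
com_b, nonce_b = commit(share_b)

# Reveal phase: shares and nonces are published and checked.
assert verify(com_a, nonce_a, share_a) and verify(com_b, nonce_b, share_b)
joint_coin = bytes(x ^ y for x, y in zip(share_a, share_b))
print(joint_coin.hex())
```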
Increasingly, scholars seek to integrate legal and technological insights to combat bias in AI systems. In recent years, many different definitions for ensuring non-discrimination in algorithmic decision systems have been put forward. In this paper, we first briefly describe the EU law framework covering cases of algorithmic discrimination. Second, we present an algorithm that harnesses optimal transport to provide a flexible framework to interpolate between different fairness definitions. Third, we show that important normative and legal challenges remain for the implementation of algorithmic fairness interventions in real-world scenarios. Overall, the paper seeks to contribute to the quest for flexible technical frameworks that can be adapted to varying legal and normative fairness constraints.
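To illustrate the interpolation idea, here is a toy sketch under strong assumptions (one-dimensional scores, two groups, and the quantile-based "geometric repair" toward the groups' barycenter, which is one standard optimal-transport construction and not necessarily the paper's algorithm): $t = 0$ leaves scores unchanged, $t = 1$ equalizes the group distributions in the spirit of demographic parity.

```python
import numpy as np

def geometric_repair(scores, groups, t):
    """Move each group's 1-D score distribution a fraction t of the
    way along the Wasserstein geodesic toward the cross-group
    barycenter (mean of the group quantile functions)."""
    scores = np.asarray(scores, dtype=float)
    qs = np.linspace(0, 1, 101)
    group_q = {g: np.quantile(scores[groups == g], qs) for g in np.unique(groups)}
    bary_q = np.mean(list(group_q.values()), axis=0)

    repaired = scores.copy()
    for g in group_q:
        mask = groups == g
        # Empirical CDF rank of each score within its own group ...
        ranks = np.searchsorted(np.sort(scores[mask]), scores[mask],
                                side="right") / mask.sum()
        # ... mapped through the barycenter's quantile function.
        target = np.interp(ranks, qs, bary_q)
        repaired[mask] = (1 - t) * scores[mask] + t * target
    return repaired

rng = np.random.default_rng(2)
groups = np.repeat(["A", "B"], 500)
scores = np.concatenate([rng.normal(0.4, 0.1, 500), rng.normal(0.6, 0.1, 500)])
for t in (0.0, 0.5, 1.0):
    r = geometric_repair(scores, groups, t)
    print(t, round(r[groups == "A"].mean(), 3), round(r[groups == "B"].mean(), 3))
```

At $t = 1$ both group means converge to the pooled barycenter, while intermediate $t$ trades off fidelity to the original scores against distributional parity, which is exactly the kind of dial a legal analysis can then set.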
