Increasingly, scholars seek to integrate legal and technological insights to combat bias in AI systems. In recent years, many different definitions for ensuring non-discrimination in algorithmic decision systems have been put forward. In this paper, we first briefly describe the EU law framework covering cases of algorithmic discrimination. Second, we present an algorithm that harnesses optimal transport to provide a flexible framework to interpolate between different fairness definitions. Third, we show that important normative and legal challenges remain for the implementation of algorithmic fairness interventions in real-world scenarios. Overall, the paper seeks to contribute to the quest for flexible technical frameworks that can be adapted to varying legal and normative fairness constraints.
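The paper's own algorithm is not reproduced here, but the general idea of using optimal transport to interpolate between "no intervention" and "full distributional repair" can be sketched for one-dimensional scores, where the transport map reduces to a quantile map. The function name partial_ot_repair, the pooled-distribution target, and the interpolation parameter lam below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def partial_ot_repair(scores, groups, lam):
    """Partially repair 1-D scores toward a common target distribution.

    For one-dimensional scores, the optimal transport map from a group's
    score distribution to a target distribution is the quantile map.
    lam = 0 keeps the original scores (no fairness intervention);
    lam = 1 pushes every group onto the pooled quantile function
    (equalized score distributions); intermediate values interpolate
    between the two regimes.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    repaired = scores.copy()

    # Target distribution: pooled scores over all groups (a simple stand-in
    # for a Wasserstein barycenter when group sizes are comparable).
    pooled = np.sort(scores)

    for g in np.unique(groups):
        mask = groups == g
        group_scores = scores[mask]
        # Empirical quantile of each score within its own group.
        ranks = group_scores.argsort().argsort()
        quantiles = (ranks + 0.5) / mask.sum()
        # Quantile map onto the pooled distribution = 1-D OT map.
        target = np.quantile(pooled, quantiles)
        # Geodesic interpolation between original and fully repaired scores.
        repaired[mask] = (1.0 - lam) * group_scores + lam * target

    return repaired

# Usage example: two groups with shifted score distributions.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.4, 0.1, 500), rng.normal(0.6, 0.1, 500)])
groups = np.array([0] * 500 + [1] * 500)
half_repaired = partial_ot_repair(scores, groups, lam=0.5)
```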
Using the concept of principal stratification from the causal inference literature, we introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making. The key idea is that one should not discriminate among individuals who would be similarly affected by the decision.
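As a rough illustration of how such a criterion could be audited, the sketch below groups units by their joint potential outcomes (the principal strata) and compares decision rates across protected groups within each stratum; under principal fairness those rates should coincide. The column names and the helper principal_fairness_gaps are hypothetical, and obtaining the potential outcomes (here assumed known or imputed) is the hard causal-inference step, not something observed data provide directly.

```python
import pandas as pd

def principal_fairness_gaps(df, stratum_cols=("y_if_decided", "y_if_not_decided"),
                            decision_col="decision", group_col="group"):
    """Compare decision rates across groups within each principal stratum.

    A principal stratum is a joint configuration of the potential outcomes.
    Under principal fairness, the per-stratum gap between the highest and
    lowest group decision rate should be (near) zero.
    """
    rates = (df.groupby([*stratum_cols, group_col])[decision_col]
               .mean()
               .unstack(group_col))
    rates["gap"] = rates.max(axis=1) - rates.min(axis=1)
    return rates

# Illustrative data: potential outcomes are assumed to be imputed upstream.
df = pd.DataFrame({
    "y_if_decided":     [1, 1, 0, 0, 1, 1, 0, 0],
    "y_if_not_decided": [0, 0, 0, 0, 1, 1, 1, 1],
    "group":            ["a", "b", "a", "b", "a", "b", "a", "b"],
    "decision":         [1, 1, 0, 1, 1, 0, 0, 0],
})
print(principal_fairness_gaps(df))
```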
Society increasingly relies on machine learning models for automated decision making. Yet, efficiency gains from automation have come paired with concern for algorithmic discrimination that can systematize inequality. Recent work has proposed optimal …
Conventional algorithmic fairness is Western in its sub-groups, values, and optimizations. In this paper, we ask how portable the assumptions of this largely Western take on algorithmic fairness are to a different geo-cultural context such as India.
It has become trivial to point out how decision-making processes in various social, political, and economic spheres are assisted by automated systems. Improved efficiency, the hallmark of these systems, drives the mass-scale integration of automated …
As the role of algorithmic systems and processes increases in society, so does the risk of bias, which can result in discrimination against individuals and social groups. Research on algorithmic bias has exploded in recent years, highlighting both …