
On the Problem of Underranking in Group-Fair Ranking

 Added by Sruthi Gorantla
 Publication date 2020
Language: English





Search and recommendation systems, such as search engines, recruiting tools, online marketplaces, news, and social media, output ranked lists of content, products, and sometimes, people. Credit ratings, standardized tests, and risk assessments output only a score, but are also used implicitly for ranking. Bias in such ranking systems, especially among the top ranks, can worsen social and economic inequalities, polarize opinions, and reinforce stereotypes. On the other hand, a bias correction for minority groups can cause more harm if perceived as favoring group-fair outcomes over meritocracy. In this paper, we formulate the problem of underranking in group-fair rankings, which was not addressed in previous work. Most group-fair ranking algorithms post-process a given ranking and output a group-fair ranking. We define underranking based on how close the group-fair rank of each item is to its original rank, and prove a lower bound on the trade-off achievable for simultaneous underranking and group fairness in ranking. We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees comparable to the lower bound we prove. Our algorithm works with group fairness constraints for any number of groups. Our experimental results confirm the theoretical trade-off between underranking and group fairness, and also show that our algorithm achieves the best of both when compared to the state-of-the-art baselines.
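To make the post-processing setting concrete, here is a minimal sketch of one simple approach in this family (not the paper's algorithm): a greedy re-ranker that caps each group's share of every top-k prefix while keeping each item as close to its original rank as the caps allow. The `caps` parameter and all names are illustrative assumptions.

```python
import math
from typing import Dict, List

def greedy_fair_rerank(items: List[str],
                       groups: Dict[str, str],
                       caps: Dict[str, float]) -> List[str]:
    """Re-rank `items` (given in original rank order) so that in every
    top-k prefix each group g fills at most ceil(caps[g] * k) positions,
    greedily picking the highest-originally-ranked eligible item so that
    each item's displacement from its original rank stays small.
    """
    result: List[str] = []
    counts = {g: 0 for g in caps}
    remaining = list(items)
    while remaining:
        k = len(result) + 1
        for item in remaining:            # scan in original rank order
            g = groups[item]
            if counts[g] + 1 <= math.ceil(caps[g] * k):
                remaining.remove(item)
                break
        else:                             # caps infeasible at this prefix
            item = remaining.pop(0)       # fall back to the top item
        counts[groups[item]] += 1
        result.append(item)
    return result

# Example: the original ranking places group A's items at the top.
items = ["a1", "a2", "a3", "b1", "a4", "b2"]
groups = {"a1": "A", "a2": "A", "a3": "A", "a4": "A", "b1": "B", "b2": "B"}
caps = {"A": 0.6, "B": 0.6}   # no group may exceed ~60% of any prefix
print(greedy_fair_rerank(items, groups, caps))
# -> ['a1', 'a2', 'b1', 'a3', 'b2', 'a4']
```

Scanning in original rank order is what keeps each item's displacement (its underranking) small; tightening the caps forces larger displacements, which is exactly the trade-off the paper's lower bound quantifies.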




Read More

While implicit feedback (e.g., clicks, dwell times, etc.) is an abundant and attractive source of data for learning to rank, it can produce unfair ranking policies for both exogenous and endogenous reasons. Exogenous reasons typically manifest themselves as biases in the training data, which then get reflected in the learned ranking policy and often lead to rich-get-richer dynamics. Moreover, even after the correction of such biases, reasons endogenous to the design of the learning algorithm can still lead to ranking policies that do not allocate exposure among items in a fair way. To address both exogenous and endogenous sources of unfairness, we present the first learning-to-rank approach that addresses both presentation bias and merit-based fairness of exposure simultaneously. Specifically, we define a class of amortized fairness-of-exposure constraints that can be chosen based on the needs of an application, and we show how these fairness criteria can be enforced despite the selection biases in implicit feedback data. The key result is an efficient and flexible policy-gradient algorithm, called FULTR, which is the first to enable the use of counterfactual estimators for both utility estimation and fairness constraints. Beyond the theoretical justification of the framework, we show empirically that the proposed algorithm can learn accurate and fair ranking policies from biased and noisy feedback.
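For a feel of how counterfactual estimators correct for presentation bias, the sketch below (an illustrative inverse-propensity-scoring estimator, not the FULTR implementation) up-weights each logged click by the inverse of an assumed position-based examination propensity; `propensities` and `new_exposure` are assumed inputs, not quantities from the paper.

```python
import numpy as np

def ips_utility(clicks: np.ndarray,
                positions: np.ndarray,
                propensities: np.ndarray,
                new_exposure: np.ndarray) -> float:
    """Inverse-propensity-scored estimate of the utility a candidate
    ranking policy would obtain from logged click feedback.

    clicks[i]       -- 1 if logged item i was clicked, else 0
    positions[i]    -- rank at which item i was shown in the log
    propensities[k] -- assumed examination probability at rank k
    new_exposure[i] -- exposure the candidate policy gives item i
    Each click is up-weighted by 1 / Pr(examined at its logged rank),
    which corrects for position bias in expectation.
    """
    weights = clicks / propensities[positions]
    return float(np.sum(weights * new_exposure))

# Toy log: 4 items, clicks observed at ranks 0 and 2.
clicks = np.array([1, 0, 1, 0])
positions = np.array([0, 1, 2, 3])
propensities = np.array([1.0, 0.6, 0.4, 0.3])    # assumed position bias
new_exposure = np.array([0.5, 0.25, 0.15, 0.1])  # candidate policy
print(ips_utility(clicks, positions, propensities, new_exposure))
```

The same re-weighting idea can be applied inside the fairness constraints, which is what lets both the utility estimate and the exposure constraints be computed from biased logs.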
We consider the problem of fairly allocating indivisible public goods. We model the public goods as elements with feasibility constraints on what subsets of elements can be chosen, and assume that agents have additive utilities across elements. Our model generalizes existing frameworks such as fair public decision making and participatory budgeting. We study a groupwise fairness notion called the core, which generalizes well-studied notions of proportionality and Pareto efficiency, and requires that each subset of agents must receive an outcome that is fair relative to its size. In contrast to the case of divisible public goods (where fractional allocations are permitted), the core is not guaranteed to exist when allocating indivisible public goods. Our primary contributions are the notion of an additive approximation to the core (with a tiny multiplicative loss), and polynomial time algorithms that achieve a small additive approximation, where the additive factor is relative to the largest utility of an agent for an element. If the feasibility constraints define a matroid, we show an additive approximation of 2. A similar approach yields a constant additive bound when the feasibility constraints define a matching. More generally, if the feasibility constraints define an arbitrary packing polytope with mild restrictions, we show an additive guarantee that is logarithmic in the width of the polytope. Our algorithms are based on variants of the convex program for maximizing the Nash social welfare, but differ significantly from previous work in how it is used. Our guarantees are meaningful even when there are fewer elements than the number of agents. As far as we are aware, our work is the first to approximate the core in indivisible settings.
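As a toy illustration of the Nash-social-welfare objective underlying these algorithms (the paper works with convex-programming relaxations rather than brute force), the sketch below enumerates feasible subsets under a cardinality constraint, a simple stand-in for the matroid case, and returns the one maximizing the product of agents' additive utilities.

```python
from itertools import combinations
import math

def max_nash_welfare(utilities, budget):
    """Brute-force Nash-social-welfare maximizer for indivisible
    public goods under a cardinality constraint.

    utilities[a][e] -- additive utility of agent a for element e
    budget          -- maximum number of elements that may be chosen
    Returns the feasible subset S maximizing prod_a sum_{e in S} u[a][e].
    """
    n_elems = len(utilities[0])
    best, best_val = (), -1.0
    for k in range(budget + 1):
        for subset in combinations(range(n_elems), k):
            # Each agent's utility is additive over the chosen elements.
            vals = [sum(u[e] for e in subset) for u in utilities]
            nsw = math.prod(vals)
            if nsw > best_val:
                best, best_val = subset, nsw
    return best, best_val

# 2 agents, 4 candidate elements, pick at most 2.
utilities = [[3, 1, 0, 2],
             [0, 2, 3, 1]]
print(max_nash_welfare(utilities, budget=2))   # -> ((0, 2), 9)
```

Maximizing the product (rather than the sum) of utilities is what pushes the outcome toward groupwise-fair allocations, which is why the paper's additive core approximations are built on variants of this program.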
Ranked search results have become the main mechanism by which we find content, products, places, and people online. Thus their ordering contributes not only to the satisfaction of the searcher, but also to career and business opportunities, educational placement, and even social success of those being ranked. Researchers have become increasingly concerned with systematic biases in data-driven ranking models, and various post-processing methods have been proposed to mitigate discrimination and inequality of opportunity. This approach, however, has the disadvantage that it still allows an unfair ranking model to be trained. In this paper we explore a new in-processing approach: DELTR, a learning-to-rank framework that addresses potential issues of discrimination and unequal opportunity in rankings at training time. We measure these problems in terms of discrepancies in the average group exposure and design a ranker that optimizes search results in terms of relevance and in terms of reducing such discrepancies. We perform an extensive experimental study showing that being colorblind can be among the best or the worst choices from the perspective of relevance and exposure, depending on how much and which kind of bias is present in the training set. We show that our in-processing method performs better in terms of relevance and exposure than a pre-processing and a post-processing method across all tested scenarios.
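To illustrate the kind of objective an in-processing approach optimizes (a hand-rolled surrogate for illustration, not the DELTR loss), the sketch below combines a listwise relevance term with a penalty on the gap between two groups' average exposure, where an item's exposure is approximated by its softmax probability of being ranked first.

```python
import numpy as np

def combined_loss(scores, relevance, group, gamma=1.0):
    """Listwise objective in the spirit of in-processing fair LTR:
    a relevance term plus a penalty on the discrepancy in average
    group exposure. Exposure is approximated by the softmax of the
    scores (a common differentiable surrogate). `gamma` trades off
    relevance against the exposure discrepancy.
    """
    p = np.exp(scores - scores.max())
    p /= p.sum()                    # soft "prob. of top rank" per item
    # Relevance term: cross-entropy against the normalized relevance.
    q = relevance / relevance.sum()
    rel_loss = -np.sum(q * np.log(p + 1e-12))
    # Fairness term: squared gap between the groups' mean exposure.
    exp0 = p[group == 0].mean()
    exp1 = p[group == 1].mean()
    return rel_loss + gamma * (exp0 - exp1) ** 2

scores = np.array([2.0, 1.5, 0.3, 0.1])     # model scores for one query
relevance = np.array([1.0, 1.0, 0.0, 0.0])  # graded relevance labels
group = np.array([0, 0, 1, 1])              # protected-group membership
print(combined_loss(scores, relevance, group, gamma=5.0))
```

Because the penalty is part of the training objective, the ranker itself learns to reduce exposure discrepancies, rather than having them corrected after the fact by a post-processing step.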
In participatory budgeting, communities collectively decide on the allocation of public tax dollars for local public projects. In this work, we consider the question of fairly aggregating the preferences of community members to determine an allocation of funds to projects. This problem is different from standard fair resource allocation because of public goods: The allocated goods benefit all users simultaneously. Fairness is crucial in participatory decision making, since generating equitable outcomes is an important goal of these processes. We argue that the classic game theoretic notion of core captures fairness in the setting. To compute the core, we first develop a novel characterization of a public goods market equilibrium called the Lindahl equilibrium, which is always a core solution. We then provide the first (to our knowledge) polynomial time algorithm for computing such an equilibrium for a broad set of utility functions; our algorithm also generalizes (in a non-trivial way) the well-known concept of proportional fairness. We use our theoretical insights to perform experiments on real participatory budgeting voting data. We empirically show that the core can be efficiently computed for utility functions that naturally model our practical setting, and examine the relation of the core with the familiar welfare objective. Finally, we address concerns of incentives and mechanism design by developing a randomized approximately dominant-strategy truthful mechanism building on the exponential mechanism from differential privacy.
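As a small illustration of the proportional-fairness objective that the Lindahl/core machinery generalizes (not the paper's algorithm), the sketch below computes a fractional budget split maximizing the sum of log utilities using `scipy.optimize.minimize`; the utility matrix and unit budget normalization are assumed for the example.

```python
import numpy as np
from scipy.optimize import minimize

def proportional_fair_allocation(U, budget=1.0):
    """Fractional public-goods allocation maximizing sum_a log(u_a),
    the proportional-fairness objective. U[a][e] is agent a's utility
    per unit of funding for project e; the allocation x must be
    nonnegative and sum to the budget.
    """
    n_agents, n_elems = U.shape

    def neg_log_welfare(x):
        utils = U @ x                         # each agent's total utility
        return -np.sum(np.log(utils + 1e-12))

    cons = ({"type": "eq", "fun": lambda x: x.sum() - budget},)
    bounds = [(0.0, budget)] * n_elems
    x0 = np.full(n_elems, budget / n_elems)   # start from an even split
    res = minimize(neg_log_welfare, x0, bounds=bounds, constraints=cons)
    return res.x

# Two agents with opposed preferences and one compromise project.
U = np.array([[3.0, 0.5, 0.0],
              [0.0, 0.5, 3.0]])
print(np.round(proportional_fair_allocation(U), 3))
```

The log-utility objective protects small coalitions: no subset of agents can be starved without a large drop in the objective, which is the intuition behind using it as a route to core outcomes.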
Rankings of people and items are widely used in selection, matchmaking, and recommendation algorithms deployed on platforms ranging from employment websites to search tools. A candidate's ranking position affects the number of opportunities the candidate receives. Several works have observed that ranking candidates by their scores can be biased against candidates belonging to minority communities. Recent work proposed fairness-aware representative ranking for computing fairness-aware re-rankings of results; the proposed algorithm achieves a desired distribution of top-ranked results with respect to one or more protected attributes. In this work, we highlight the bias in fairness-aware representative ranking for an individual as well as for a group when the group is under-active on the platform. We define individual unfairness and group unfairness and propose methods to generate ideal individually fair and group-fair representative rankings when the universal representation ratio is known or unknown. Simulation results provide a quantified analysis of the fairness of the proposed solutions. We conclude with open challenges and directions for future work.
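To make the representative-ranking idea concrete, here is a minimal quota-style sketch (an illustration, not the paper's method) that builds a top-k list whose prefixes track a known target representation ratio per group while preserving each group's internal score order; `target_ratio` and the per-group lists are assumed inputs.

```python
def representative_rerank(ranked_by_group, target_ratio, k):
    """Build a top-k ranking where, at every prefix, each group's
    share tracks its target ratio as closely as possible, preserving
    the within-group order of `ranked_by_group[g]` (each a list of
    items already sorted by score).
    """
    result, taken = [], {g: 0 for g in target_ratio}
    for pos in range(1, k + 1):
        # Pick the group furthest below its target share of the prefix.
        def deficit(g):
            return target_ratio[g] * pos - taken[g]
        eligible = [g for g in target_ratio
                    if taken[g] < len(ranked_by_group[g])]
        if not eligible:          # all groups exhausted before rank k
            break
        g = max(eligible, key=deficit)
        result.append(ranked_by_group[g][taken[g]])
        taken[g] += 1
    return result

ranked_by_group = {"maj": ["m1", "m2", "m3", "m4"],
                   "min": ["n1", "n2"]}
target_ratio = {"maj": 2 / 3, "min": 1 / 3}
print(representative_rerank(ranked_by_group, target_ratio, k=6))
# -> ['m1', 'n1', 'm2', 'm3', 'n2', 'm4']
```

A scheme like this assumes the target (universal) representation ratio is known; the unknown-ratio case the paper discusses requires estimating that ratio before any such quota can be enforced.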