In this report we provide an improvement of the significance adjustment from the FA*IR algorithm of Zehlike et al., which did not work for very short rankings in combination with a low minimum proportion $p$ for the protected group. We show how the minimum number of protected candidates per ranking position can be calculated exactly, and provide a mapping from the continuous space of significance levels ($\alpha$) to a discrete space of tables, which allows us to find $\alpha_c$ using a binary search heuristic.
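To make the construction concrete, the sketch below shows one way the pieces described above could fit together: an exact per-prefix minimum table derived from the binomial CDF, an exact success probability for a fair ranking computed by dynamic programming, and a binary search over significance levels. This is a minimal illustration under stated assumptions, not the paper's reference implementation; all function names are ours, and SciPy is assumed for the binomial CDF.

```python
# Minimal sketch of the adjustment described above (function names are
# illustrative; SciPy is assumed for the binomial CDF).
from scipy.stats import binom

def minimum_protected(k_max, p, alpha):
    """Exact minimum number of protected candidates required in every
    prefix of length k, i.e. the smallest m with F(m; k, p) > alpha."""
    table = []
    for k in range(1, k_max + 1):
        m = 0
        while binom.cdf(m, k, p) <= alpha:
            m += 1
        table.append(m)
    return table

def success_probability(table, p):
    """Probability that a ranking whose protected-group indicators are
    i.i.d. Bernoulli(p) satisfies every prefix constraint in the table,
    computed exactly by dynamic programming over prefix counts."""
    dist = {0: 1.0}  # dist[s] = P(exactly s protected candidates so far)
    for m_k in table:
        new = {}
        for s, pr in dist.items():
            for step, weight in ((1, p), (0, 1.0 - p)):
                t = s + step
                if t >= m_k:  # prune states that already violate the table
                    new[t] = new.get(t, 0.0) + pr * weight
        dist = new
    return sum(dist.values())

def adjust_alpha(k_max, p, alpha, tol=1e-10):
    """Binary-search heuristic for alpha_c: since many alpha values map
    to the same discrete table, search for the largest per-test level
    whose table rejects a fair ranking with probability <= alpha."""
    lo, hi = 0.0, alpha  # lo = 0 yields an all-zero table, which never rejects
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        fail = 1.0 - success_probability(minimum_protected(k_max, p, mid), p)
        if fail <= alpha:
            lo = mid  # feasible: try a less conservative level
        else:
            hi = mid
    return lo

# Example: a short ranking (k = 10) with a low minimum proportion p = 0.2.
alpha_c = adjust_alpha(10, 0.2, 0.1)
print(alpha_c, minimum_protected(10, 0.2, alpha_c))
```

Because the table changes only at finitely many values of $\alpha$, the binary search effectively walks the discrete space of tables rather than the continuum of significance levels, which is what makes the heuristic terminate quickly.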
Ranking algorithms are widely employed by online hiring platforms such as LinkedIn, TaskRabbit, and Fiverr. Prior research has demonstrated that the ranking algorithms employed by these platforms are prone to a variety of undesirable biases, which has led to the proposal of fair ranking algorithms (e.g., Det-Greedy) that increase the exposure of underrepresented candidates. However, there is little to no work exploring whether fair ranking algorithms actually improve real-world outcomes (e.g., hiring decisions) for underrepresented groups. Furthermore, there is no clear understanding of how other factors (e.g., job context, inherent biases of the employers) may affect the efficacy of fair ranking in practice. In this work, we analyze various sources of gender bias in online hiring platforms, including the job context and inherent biases of employers, and establish how these factors interact with ranking algorithms to affect hiring decisions. To the best of our knowledge, this work is the first attempt at studying the interplay between the aforementioned factors in the context of online hiring. We carry out a large-scale user study simulating online hiring scenarios with data from TaskRabbit, a popular online freelancing site. Our results demonstrate that while fair ranking algorithms generally improve the selection rates of underrepresented minorities, their effectiveness relies heavily on the job contexts and candidate profiles.
Ranked search results and recommendations have become the main mechanism by which we find content, products, places, and people online. With hiring, selecting, purchasing, and dating increasingly mediated by algorithms, rankings may determine career and business opportunities, educational placement, access to benefits, and even social and reproductive success. It is therefore of societal and ethical importance to ask whether search results can demote, marginalize, or exclude individuals of unprivileged groups or promote products with undesired features. In this paper we present FairSearch, the first fair open source search API to provide fairness notions in ranked search results. We implement two algorithms from the fair ranking literature, namely FA*IR (Zehlike et al., 2017) and DELTR (Zehlike and Castillo, 2018), and provide them as stand-alone libraries in Python and Java. Additionally, we implement Elasticsearch interfaces for both algorithms, which use the aforementioned Java libraries and are provided as Elasticsearch plugins. Elasticsearch is a well-known search engine API based on Apache Lucene. Our plugins enable search engine developers who wish to ensure fair search results of different styles to easily integrate DELTR and FA*IR into their existing Elasticsearch environment.
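For developers who prefer the stand-alone Python library over the Elasticsearch plugins, a brief usage sketch follows. The class and method names are assumed from the fairsearchcore package's documentation and may differ across versions; the ranking data is a toy example.

```python
# Hedged usage sketch of the stand-alone Python library. Class and
# method names (Fair, FairScoreDoc, create_adjusted_mtable, is_fair,
# re_rank) are assumed from the fairsearchcore documentation and may
# differ across versions.
import fairsearchcore as fsc
from fairsearchcore.models import FairScoreDoc

k, p, alpha = 20, 0.25, 0.1       # ranking length, minimum proportion, significance
fair = fsc.Fair(k, p, alpha)

# Minimum number of protected candidates per ranking prefix,
# with the multiple-testing adjustment applied.
mtable = fair.create_adjusted_mtable()

# A toy ranking: (document id, score, is_protected).
ranking = [FairScoreDoc(i, 100 - i, i % 3 == 0) for i in range(k)]

print(fair.is_fair(ranking))      # test ranked group fairness
reranked = fair.re_rank(ranking)  # FA*IR re-ranking when the test fails
```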