
Two-Sided Fairness in Non-Personalised Recommendations

Posted by: Sayan Sinha
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Recommender systems are one of the most widely used services on several online platforms to suggest potential items to end-users. These services often use different machine learning techniques for which fairness is a concerning factor, especially when the downstream services have the ability to cause social ramifications. Thus, focusing on non-personalised (global) recommendations in news media platforms (e.g., top-k trending topics on Twitter, top-k news on a news platform, etc.), we discuss two specific fairness concerns together (traditionally studied separately): user fairness and organisational fairness. While user fairness captures the idea of representing the choices of all individual users in global recommendations, organisational fairness tries to ensure politically/ideologically balanced recommendation sets. This makes user fairness a user-side requirement and organisational fairness a platform-side requirement. For user fairness, we experiment with methods from social choice theory, i.e., various voting rules known to better represent user choices in their results. When we apply these voting rules to the recommendation setup, we observe high user satisfaction scores. For organisational fairness, we propose a bias metric which measures the aggregate ideological bias of a recommended set of items (articles). Analysing the results obtained from voting-rule-based recommendation, we find that while the well-known voting rules are better from the user side, they show high bias values and are clearly not suitable for the organisational requirements of the platforms. Thus, there is a need to build an encompassing mechanism that cohesively bridges the ideas of user fairness and organisational fairness. In this abstract paper, we frame the elementary ideas along with the motivation behind the requirement of such a mechanism.
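
The abstract does not spell out the exact voting rule or bias metric, but the two ingredients can be illustrated with a minimal sketch: Borda count as one example of a user-side voting rule, and a signed average as a stand-in for the aggregate ideological bias of the recommended set. All names and the averaging-based metric below are illustrative assumptions, not the authors' definitions.

```python
# Minimal sketch, assuming Borda count and an average-leaning bias score.
from collections import defaultdict

def borda_top_k(user_rankings, k):
    """Aggregate individual users' ranked preferences into a global top-k list.

    user_rankings: list of lists, each an ordered ranking of item ids
                   (most preferred first) for one user.
    """
    scores = defaultdict(float)
    for ranking in user_rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position  # Borda points: n-1 for the top item
    return sorted(scores, key=scores.get, reverse=True)[:k]

def aggregate_bias(recommended, item_bias):
    """Mean ideological leaning of the recommended set.

    item_bias: dict mapping item id -> leaning in [-1, +1].
    A value near 0 indicates a politically balanced set.
    """
    return sum(item_bias[i] for i in recommended) / len(recommended)

# Toy usage: three users rank four articles; articles carry an assumed leaning score.
rankings = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["a", "c", "b", "d"]]
bias = {"a": 0.8, "b": 0.6, "c": -0.4, "d": -0.7}
top2 = borda_top_k(rankings, k=2)
print(top2, aggregate_bias(top2, bias))  # ['a', 'b'] with a high (unbalanced) bias of 0.7
```

The toy output mirrors the paper's observation: the voting rule picks the items users rank highest, yet the resulting set can carry a strong aggregate ideological lean.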


Read also

We investigate the problem of fair recommendation in the context of two-sided online platforms, comprising customers on one side and producers on the other. Traditionally, recommendation services in these platforms have focused on maximizing customer satisfaction by tailoring the results according to the personalized preferences of individual customers. However, our investigation reveals that such customer-centric design may lead to unfair distribution of exposure among the producers, which may adversely impact their well-being. On the other hand, a producer-centric design might become unfair to the customers. Thus, we consider fairness issues that span both customers and producers. Our approach involves a novel mapping of the fair recommendation problem to a constrained version of the problem of fairly allocating indivisible goods. Our proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer. Extensive evaluations over multiple real-world datasets show the effectiveness of FairRec in ensuring two-sided fairness while incurring a marginal loss in the overall recommendation quality.
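
The FairRec abstract above can be made concrete with a simplified round-robin sketch in its spirit (not the published algorithm): every customer repeatedly picks their highest-relevance producer that still has exposure slots left, so producers' exposure is spread while customers pick in a round-robin order. The variable names and the per-producer cap rule are assumptions made here.

```python
# Simplified two-sided allocation sketch, assuming a fixed per-producer exposure cap.
def round_robin_recommend(relevance, k, producer_cap):
    """relevance: dict customer -> dict producer -> score
    k: number of recommendations per customer
    producer_cap: maximum number of customers a producer may be shown to."""
    exposure = {p: 0 for c in relevance for p in relevance[c]}
    recs = {c: [] for c in relevance}
    for _ in range(k):  # one round-robin pass per recommendation slot
        for c in relevance:
            candidates = [p for p in relevance[c]
                          if p not in recs[c] and exposure[p] < producer_cap]
            if not candidates:
                continue
            best = max(candidates, key=lambda p: relevance[c][p])
            recs[c].append(best)
            exposure[best] += 1
    return recs

# Toy usage: two customers, three producers, one slot each, each producer shown at most once.
print(round_robin_recommend({"u1": {"p1": 0.9, "p2": 0.5, "p3": 0.2},
                             "u2": {"p1": 0.8, "p2": 0.7, "p3": 0.6}},
                            k=1, producer_cap=1))  # {'u1': ['p1'], 'u2': ['p2']}
```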
Many interesting problems in the Internet industry can be framed as a two-sided marketplace problem. Examples include search applications and recommender systems showing people, jobs, movies, products, restaurants, etc. Incorporating fairness while building such systems is crucial and can have a deep social and economic impact (applications include job recommendations, recruiters searching for candidates, etc.). In this paper, we propose a definition and develop an end-to-end framework for achieving fairness while building such machine learning systems at scale. We extend prior work to develop an optimization framework that can tackle fairness constraints from both the source and destination sides of the marketplace, as well as dynamic aspects of the problem. The framework is flexible enough to adapt to different definitions of fairness and can be implemented in very large-scale settings. We perform simulations to show the efficacy of our approach.
Major online platforms today can be thought of as two-sided markets with producers and customers of goods and services. There have been concerns that over-emphasis on customer satisfaction by the platforms may affect the well-being of the producers. To counter such issues, a few recent works have attempted to incorporate fairness for the producers. However, these studies have overlooked an important issue in such platforms -- to supposedly improve customer utility, the underlying algorithms are frequently updated, causing abrupt changes in the exposure of producers. In this work, we focus on the fairness issues arising out of such frequent updates, and argue for incremental updates of the platform algorithms so that the producers have enough time to adjust (both logistically and mentally) to the change. However, naive incremental updates may become unfair to the customers. Thus, focusing on recommendations deployed on two-sided platforms, we formulate an ILP-based online optimization to deploy changes incrementally in n steps, where we can ensure smooth transition of the exposure of items while guaranteeing a minimum utility for every customer. Evaluations over multiple real-world datasets show that our proposed mechanism for platform updates can be efficient and fair to both the producers and the customers in two-sided platforms.
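
As a rough illustration of deploying a change gradually, the following is a hedged sketch of a single incremental step written as a small integer program with PuLP. It is not the authors' formulation: the variable names, the exposure-change bound `delta`, and the per-customer utility floor `alpha` are assumptions made here for concreteness.

```python
# Sketch of one incremental-update step as an ILP, under assumed constraints.
import pulp

def incremental_step(relevance, prev_exposure, k=2, delta=1, alpha=0.7):
    """relevance: dict customer -> dict item -> score
    prev_exposure: dict item -> exposure (number of customers shown) in the previous step."""
    customers, items = list(relevance), list(prev_exposure)
    prob = pulp.LpProblem("incremental_update", pulp.LpMaximize)
    # x[c, p] = 1 if item p is recommended to customer c in this step.
    x = {(c, p): pulp.LpVariable(f"x_{c}_{p}", cat="Binary")
         for c in customers for p in items}
    # Objective: overall relevance of the recommendations deployed in this step.
    prob += pulp.lpSum(relevance[c][p] * x[c, p] for c in customers for p in items)
    for c in customers:
        prob += pulp.lpSum(x[c, p] for p in items) == k  # k slots per customer
        best = sum(sorted(relevance[c].values(), reverse=True)[:k])
        # Minimum utility guarantee for every customer.
        prob += pulp.lpSum(relevance[c][p] * x[c, p] for p in items) >= alpha * best
    for p in items:
        # Smooth transition: exposure may move only `delta` away from its previous value.
        exposure = pulp.lpSum(x[c, p] for c in customers)
        prob += exposure <= prev_exposure[p] + delta
        prob += exposure >= prev_exposure[p] - delta
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {c: [p for p in items if x[c, p].value() == 1] for c in customers}
```

Running such a step n times, each time tightening the exposure targets toward the new algorithm's output, is one simple way to read the "deploy changes incrementally in n steps" idea.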
In online platforms, recommender systems are responsible for directing users to relevant content. In order to enhance the users' engagement, recommender systems adapt their output to the reactions of the users, who are in turn affected by the recommended content. In this work, we study a tractable analytical model of a user that interacts with an online news aggregator, with the purpose of making explicit the feedback loop between the evolution of the user's opinion and the personalised recommendation of content. More specifically, we assume that the user is endowed with a scalar opinion about a certain issue and seeks news about it on a news aggregator: this opinion is influenced by all received news, which are characterized by a binary position on the issue at hand. The user is affected by a confirmation bias, that is, a preference for news that confirm her current opinion. The news aggregator recommends items with the goal of maximizing the number of the user's clicks (as a measure of her engagement): in order to fulfil its goal, the recommender has to compromise between exploring the user's preferences and exploiting what it has learned so far. After defining suitable metrics for the effectiveness of the recommender system (such as the click-through rate) and for its impact on the opinion, we perform both extensive numerical simulations and a mathematical analysis of the model. We find that personalised recommendations markedly affect the evolution of opinions and favor the emergence of more extreme ones: the intensity of these effects is inherently related to the effectiveness of the recommender. We also show that by tuning the amount of randomness in the recommendation algorithm, one can seek a balance between the effectiveness of the recommendation system and its impact on the opinions.
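
As a toy illustration of the feedback loop described above, the sketch below simulates a single user with a scalar opinion interacting with an epsilon-greedy recommender. The click model, the opinion-update rule, and all parameter values are assumptions made here for illustration, not the paper's analytical model.

```python
# Toy opinion/recommender feedback-loop simulation, under assumed dynamics.
import random

def simulate(steps=1000, epsilon=0.1, learning_rate=0.05, seed=0):
    rng = random.Random(seed)
    opinion = 0.0                          # scalar opinion in [-1, 1]
    clicks_per_position = {+1: 1, -1: 1}   # recommender's (smoothed) click counts
    total_clicks = 0
    for _ in range(steps):
        # Epsilon-greedy recommendation: mostly exploit the position clicked more often.
        if rng.random() < epsilon:
            position = rng.choice([+1, -1])
        else:
            position = max(clicks_per_position, key=clicks_per_position.get)
        # Confirmation bias: clicking is more likely when the item agrees with the opinion.
        p_click = 0.5 + 0.4 * opinion * position
        if rng.random() < p_click:
            clicks_per_position[position] += 1
            total_clicks += 1
            opinion += learning_rate * (position - opinion)  # opinion drifts toward the item
    return opinion, total_clicks / steps   # final opinion and click-through rate

print(simulate())  # with less randomness (smaller epsilon) the opinion tends toward an extreme
```

Varying `epsilon` in this sketch plays the role of the "amount of randomness in the recommendation algorithm" mentioned in the abstract: more exploration moderates the opinion drift at the cost of a lower click-through rate.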
Ophir Flomenbom (2011)
Models that explain the economic and political realities of today's societies should help all the world's citizens. Yet, the last four years showed that the current models are lacking. Here we develop a dynamical society-deciders model showing that long-lasting economic stress can be resolved by increasing fairness in nations. Fairness is computed for each nation using indicators from economy and politics. Rather than austerity versus spending, the dynamical model suggests that solving crises in western societies is possible with regulations that reduce the stability of the deciders, while shifting wealth in the direction of the people. This should increase the dynamics among socio-economic classes, further increasing fairness.
