
The closed loop between opinion formation and personalised recommendations

Published by Paolo Frasca
Publication date: 2018
Research field: information engineering
Paper language: English





In online platforms, recommender systems are responsible for directing users to relevant content. To enhance user engagement, recommender systems adapt their output to the reactions of the users, who are in turn affected by the recommended content. In this work, we study a tractable analytical model of a user who interacts with an online news aggregator, with the purpose of making explicit the feedback loop between the evolution of the user's opinion and the personalised recommendation of content. More specifically, we assume that the user holds a scalar opinion about a certain issue and seeks news about it on a news aggregator: this opinion is influenced by all received news items, each characterized by a binary position on the issue at hand. The user is affected by a confirmation bias, that is, a preference for news that confirms her current opinion. The news aggregator recommends items with the goal of maximizing the number of the user's clicks (as a measure of her engagement): in order to fulfil this goal, the recommender has to compromise between exploring the user's preferences and exploiting what it has learned so far. After defining suitable metrics for the effectiveness of the recommender system (such as the click-through rate) and for its impact on the opinion, we perform both extensive numerical simulations and a mathematical analysis of the model. We find that personalised recommendations markedly affect the evolution of opinions and favour the emergence of more extreme ones: the intensity of these effects is inherently related to the effectiveness of the recommender. We also show that, by tuning the amount of randomness in the recommendation algorithm, one can strike a balance between the effectiveness of the recommendation system and its impact on the opinions.
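The feedback loop described in the abstract can be sketched in a few lines. The following is an illustrative simulation, not the paper's exact model: the linear click-probability form, the opinion-update rule, and the epsilon-greedy recommender (with `epsilon` playing the role of the tunable randomness) are all assumptions made for the sketch.

```python
import random

def simulate(n_steps=10_000, epsilon=0.1, step=0.02, seed=0):
    """Sketch of the closed loop: a user with scalar opinion x in [-1, 1]
    receives news with binary position p in {-1, +1} from an epsilon-greedy
    recommender that maximises clicks; clicking is more likely when the
    news confirms the current opinion (confirmation bias)."""
    rng = random.Random(seed)
    x = 0.0                                # user's scalar opinion
    stats = {-1: [0, 1], +1: [0, 1]}       # per-position [clicks, shows]
    total_clicks = 0
    for _ in range(n_steps):
        if rng.random() < epsilon:         # explore: random position
            p = rng.choice((-1, +1))
        else:                              # exploit: best empirical CTR
            p = max((-1, +1), key=lambda a: stats[a][0] / stats[a][1])
        # assumed confirmation-bias model: agreement raises click probability
        click_prob = 0.5 * (1 + p * x)
        stats[p][1] += 1
        if rng.random() < click_prob:
            stats[p][0] += 1
            total_clicks += 1
            # clicked news pulls the opinion toward its position
            x = max(-1.0, min(1.0, x + step * p))
    return x, total_clicks / n_steps       # final opinion, click-through rate
```

Running the sketch with small `epsilon` tends to show the self-reinforcing drift the paper describes: once the opinion leans one way, that side's click-through rate dominates and the recommender keeps serving it.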




Read also

Recommender systems are one of the most widely used services on several online platforms to suggest potential items to end-users. These services often use different machine learning techniques for which fairness is a concerning factor, especially when the downstream services have the ability to cause social ramifications. Thus, focusing on non-personalised (global) recommendations in news media platforms (e.g., top-k trending topics on Twitter, top-k news on a news platform, etc.), we discuss two specific fairness concerns together (traditionally studied separately): user fairness and organisational fairness. While user fairness captures the idea of representing the choices of all the individual users in the case of global recommendations, organisational fairness tries to ensure politically/ideologically balanced recommendation sets. This makes user fairness a user-side requirement and organisational fairness a platform-side requirement. For user fairness, we test with methods from social choice theory, i.e., various voting rules known to better represent user choices in their results. Even in our application of voting rules to the recommendation setup, we observe high user satisfaction scores. For organisational fairness, we propose a bias metric which measures the aggregate ideological bias of a recommended set of items (articles). Analysing the results obtained from voting-rule-based recommendation, we find that while the well-known voting rules are better from the user side, they show high bias values and are clearly not suitable for the organisational requirements of the platforms. Thus, there is a need to build an encompassing mechanism that cohesively bridges the ideas of user fairness and organisational fairness. In this abstract paper, we frame the elementary ideas along with the clear motivation behind the requirement of such a mechanism.
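The two sides discussed above can be made concrete with a small sketch. Both functions are illustrative assumptions, not the paper's actual definitions: a plurality voting rule for building the global top-k list, and a naive bias metric that averages per-article leaning scores in [-1, +1].

```python
from collections import Counter

def plurality_top_k(ballots, k):
    """Plurality rule from social choice theory: each user names one
    preferred item; the k most-named items form the global top-k list."""
    return [item for item, _ in Counter(ballots).most_common(k)]

def ideological_bias(leanings):
    """Naive aggregate bias of a recommended set: the mean of per-article
    leaning scores in [-1, +1], so 0 is balanced and +/-1 fully one-sided.
    (Illustrative only; the paper's metric may be defined differently.)"""
    return sum(leanings) / len(leanings)
```

A user-fair rule can still produce a one-sided set: if the plurality winners all lean the same way, `ideological_bias` approaches 1 even though every item was popular, which is the tension the abstract describes.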
The pervasive use of social media has grown to over two billion users to date, and is commonly utilized as a means to share information and shape world events. Evidence suggests that passive social media usage (i.e., viewing without taking action) has an impact on the user's perspective. This empirical influence over perspective could have significant impact on social events. Therefore, it is important to understand how social media contributes to the formation of an individual's perspective. A set of experimental tasks was designed to investigate empirically derived thresholds for opinion formation as a result of passive interactions with different social media data types (i.e., videos, images, and messages). With a better understanding of how humans passively interact with social media information, a paradigm can be developed that allows the exploitation of this interaction and plays a significant role in future military plans and operations.
Recent times have seen data analytics software applications become an integral part of the decision-making process of analysts. The users of these software applications generate a vast amount of unstructured log data. These logs contain clues to the users' goals, which traditional recommender systems may find difficult to model implicitly from the log data. With this assumption, we would like to assist the analytics process of a user through command recommendations. We categorize the commands into software and data categories based on their purpose in fulfilling the task at hand. On the premise that the sequence of commands leading up to a data command is a good predictor of the latter, we design, develop, and validate various sequence modeling techniques. In this paper, we propose a framework to provide goal-driven data command recommendations to the user by leveraging unstructured logs. We use the log data of a web-based analytics software to train our neural network models and quantify their performance in comparison to relevant and competitive baselines. We propose a custom loss function to tailor the recommended data commands according to goal information provided exogenously. We also propose an evaluation metric that captures the degree of goal orientation of the recommendations. We demonstrate the promise of our approach by evaluating the models with the proposed metric and showcasing the robustness of our models in the case of adversarial examples, where the user activity is misaligned with the selected goal, through offline evaluation.
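The premise above, that the preceding commands predict the next data command, can be illustrated with the simplest possible sequence model. This bigram sketch is a stand-in baseline, not the paper's neural architecture, and the command names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(sessions):
    """Count, for each command, which command users issue next.
    `sessions` is a list of per-session command sequences from the logs."""
    model = defaultdict(Counter)
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            model[prev][nxt] += 1
    return model

def recommend_next(model, command, k=3):
    """Recommend the k commands most often observed right after `command`."""
    return [c for c, _ in model[command].most_common(k)]
```

A neural sequence model replaces these bigram counts with probabilities conditioned on the whole command prefix, which is where a goal-aware loss function, as proposed in the paper, can be introduced.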
Stance detection, which aims to determine whether an individual is for or against a target concept, promises to uncover public opinion from large streams of social media data. Yet even human annotation of social media content does not always capture stance as measured by public opinion polls. We demonstrate this by directly comparing an individual's self-reported stance to the stance inferred from their social media data. Leveraging a longitudinal public opinion survey with respondents' Twitter handles, we conducted this comparison for 1,129 individuals across four salient targets. We find that recall is high for both Pro and Anti stance classifications, but precision is variable in a number of cases. We identify three factors leading to the disconnect between text and author stance: temporal inconsistencies, differences in constructs, and measurement errors from both survey respondents and annotators. By presenting a framework for assessing the limitations of stance detection models, this work provides important insight into what stance detection truly measures.
The recent growth of digital interventions for mental well-being prompts a call-to-arms to explore the delivery of personalised recommendations from a user's perspective. In a randomised placebo study with a two-way factorial design, we analysed the difference between an autonomous user experience and personalised guidance, with respect to both users' preferences and their actual usage of a mental well-being app. Furthermore, we explored users' preferences in sharing their data for receiving personalised recommendations, by juxtaposing questionnaires and mobile sensor data. Interestingly, self-reported results indicate a preference for personalised guidance, whereas behavioural data suggests that a blend of autonomous choice and recommended activities results in higher engagement. Additionally, although users reported a strong preference for filling out questionnaires instead of sharing their mobile data, the data source did not have any impact on actual app use. We discuss the implications of our findings and provide takeaways for designers of mental well-being applications.
