
Assessing top-$k$ preferences

Published by: Charles Clarke
Publication date: 2020
Research field: Information Engineering
Paper language: English





Assessors make preference judgments faster and more consistently than graded judgments. Preference judgments can also recognize distinctions between items that appear equivalent under graded judgments. Unfortunately, preference judgments can require more than linear effort to fully order a pool of items, and evaluation measures for preference judgments are not as well established as those for graded judgments, such as NDCG. In this paper, we explore the assessment process for partial preference judgments, with the aim of identifying and ordering the top items in the pool, rather than fully ordering the entire pool. To measure the performance of a ranker, we compare its output to this preferred ordering by applying a rank similarity measure. We demonstrate the practical feasibility of this approach by crowdsourcing partial preferences for the TREC 2019 Conversational Assistance Track, replacing NDCG with a new measure named compatibility. This new measure has its most striking impact when comparing modern neural rankers, where it is able to recognize significant improvements in quality that would otherwise be missed by NDCG.
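The abstract's comparison step, scoring a ranker's output against a preferred ordering of the top items with a rank similarity measure, can be illustrated with rank-biased overlap (RBO), a standard top-weighted rank similarity measure. The sketch below is only an illustration under that assumption: the function name, the persistence parameter p, and the truncated (non-extrapolated) form are choices made here, not the paper's exact definition of compatibility.

def rbo(run, ideal, p=0.95):
    """Truncated rank-biased overlap between two rankings (no extrapolation).

    run, ideal: lists of item ids, most preferred first.
    p: persistence; larger values give more weight to deeper ranks.
    """
    depth = max(len(run), len(ideal))
    score = 0.0
    for d in range(1, depth + 1):
        # Agreement at depth d: how much the two top-d prefixes overlap.
        overlap = len(set(run[:d]) & set(ideal[:d]))
        score += (p ** (d - 1)) * (overlap / d)
    return (1 - p) * score

# Hypothetical data: a preferred ordering of the top pool items obtained from
# partial preference judgments, and two system rankings scored against it.
preferred = ["d3", "d7", "d1", "d9"]
system_a = ["d3", "d1", "d7", "d2", "d5"]
system_b = ["d2", "d5", "d3", "d7", "d1"]
print(rbo(system_a, preferred))  # higher: agrees with the preferred top items
print(rbo(system_b, preferred))  # lower: preferred items are pushed down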




Read also

Top-k recommendation is a fundamental task in recommender systems, generally learned by comparing positive and negative pairs. The contrastive loss (CL), the key component of contrastive learning, has received growing attention recently, and we find it well suited to top-k recommendation. However, CL treats positive and negative samples as equally important, which raises two problems. On the one hand, CL faces an imbalance between one positive sample and many negative samples. On the other hand, positive items are so scarce in sparser datasets that their importance should be emphasized. A further issue is that these sparse positive items are still not sufficiently exploited in recommendation. We therefore propose a Multi-Sample based Contrastive Loss (MSCL), which addresses both problems by balancing the importance of positive and negative samples and by a data augmentation method that uses multiple positive items (or samples) simultaneously with the CL loss function. Built on a graph convolutional network (GCN) backbone, experimental results demonstrate the state-of-the-art performance of MSCL. MSCL is simple and can be applied in many methods. We will release our code on GitHub upon acceptance.
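The abstract above does not spell out the MSCL formulation, so the following sketch only illustrates the underlying idea: an InfoNCE-style contrastive loss that contrasts several positive items per user against a pool of negatives at once and lets the scarce positives be re-weighted. The temperature, the positive weight, and the function name are illustrative assumptions, not the authors' loss.

import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(user_emb, pos_emb, neg_emb,
                                    temperature=0.1, pos_weight=1.0):
    """Contrast P positive items against N negatives for a single user.

    user_emb: (d,)   user embedding
    pos_emb:  (P, d) embeddings of several positive items used together
                     (the data-augmentation idea)
    neg_emb:  (N, d) embeddings of sampled negative items
    """
    user_emb = F.normalize(user_emb, dim=-1)
    pos_emb = F.normalize(pos_emb, dim=-1)
    neg_emb = F.normalize(neg_emb, dim=-1)

    pos_scores = pos_emb @ user_emb / temperature  # (P,)
    neg_scores = neg_emb @ user_emb / temperature  # (N,)

    # InfoNCE term per positive (positive vs. all negatives), averaged over
    # the positives and up-weighted to counter the positive/negative imbalance.
    neg_lse = torch.logsumexp(neg_scores, dim=0)
    per_pos = -(pos_scores - torch.logaddexp(pos_scores, neg_lse))
    return pos_weight * per_pos.mean()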
Answering complex questions over knowledge bases (KB-QA) faces huge input data with billions of facts, involving millions of entities and thousands of predicates. For efficiency, QA systems first reduce the answer search space by identifying a set of facts that is likely to contain all answers and relevant cues. The most common technique is to apply named entity disambiguation (NED) systems to the question, and retrieve KB facts for the disambiguated entities. This work presents ECQA, an efficient method that prunes irrelevant parts of the search space using KB-aware signals. ECQA is based on top-k query processing over score-ordered lists of KB items that combine signals about lexical matching, relevance to the question, coherence among candidate items, and connectivity in the KB graph. Experiments with two recent QA benchmarks demonstrate the superiority of ECQA over state-of-the-art baselines with respect to answer presence, size of the search space, and runtimes.
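ECQA's concrete scoring and pruning are not given in the abstract; the sketch below only shows the generic machinery it refers to, Fagin-style threshold processing over score-ordered lists, with a plain sum assumed as the score-combination function.

import heapq

def threshold_top_k(sorted_lists, k):
    """Fagin-style threshold algorithm over score-ordered lists (sketch).

    sorted_lists: lists of (item, score) pairs, each sorted by descending score,
                  e.g. one list per signal (lexical match, relevance, coherence, ...).
    Returns the k items with the highest summed score, usually without reading
    the lists to the end.
    """
    lookups = [dict(lst) for lst in sorted_lists]  # random access by item

    def aggregate(item):
        return sum(lu.get(item, 0.0) for lu in lookups)

    seen = {}
    for depth in range(max(len(lst) for lst in sorted_lists)):
        threshold = 0.0  # best total any still-unseen item could reach
        for lst in sorted_lists:
            if depth < len(lst):
                item, score = lst[depth]
                threshold += score
                if item not in seen:
                    seen[item] = aggregate(item)
        best = heapq.nlargest(k, seen.items(), key=lambda kv: kv[1])
        if len(best) == k and best[-1][1] >= threshold:
            return best  # safe to stop early
    return heapq.nlargest(k, seen.items(), key=lambda kv: kv[1])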
A recommender system generates personalized recommendations for a user by computing preference scores for items, sorting the items by score, and filtering the top-K items with the highest scores. While sorting and ranking items are integral to this recommendation procedure, it is nontrivial to incorporate them into end-to-end model training, since sorting is nondifferentiable and hard to optimize with gradient descent. This creates an inconsistency between existing learning objectives and the ranking metrics of recommenders. In this work, we present DRM (differentiable ranking metric), which mitigates the inconsistency and improves recommendation performance by employing a differentiable relaxation of ranking metrics. Via experiments with several real-world datasets, we demonstrate that jointly learning the DRM objective on top of existing factor-based recommenders significantly improves the quality of recommendations, in comparison with other state-of-the-art recommendation methods.
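The abstract states that DRM relaxes ranking metrics to make them differentiable, but not how. One common relaxation, shown here as an assumed illustration rather than the paper's method, replaces the hard rank of each item with a sum of sigmoids over pairwise score differences, so a DCG-style objective can be optimized by gradient descent.

import torch

def smooth_ranks(scores, temperature=1.0):
    """Differentiable approximation of 1-based ranks.

    rank_i ~ 1 + sum_j sigmoid((s_j - s_i) / temperature); a lower temperature
    makes this closer to the hard, non-differentiable rank.
    """
    diff = (scores.unsqueeze(0) - scores.unsqueeze(1)) / temperature  # diff[i, j] = s_j - s_i
    return 1.0 + torch.sigmoid(diff).sum(dim=1) - 0.5  # drop the j == i self term

def smooth_dcg_loss(scores, relevance, temperature=1.0):
    """Negative smoothed DCG: minimizing it pushes relevant items toward the top."""
    ranks = smooth_ranks(scores, temperature)
    gains = (2.0 ** relevance - 1.0) / torch.log2(ranks + 1.0)
    return -gains.sum()

# Scores would come from any factor-based recommender; relevance from observed
# interactions. The whole objective stays differentiable end to end.
scores = torch.tensor([2.0, 0.5, 1.5], requires_grad=True)
relevance = torch.tensor([1.0, 0.0, 1.0])
smooth_dcg_loss(scores, relevance).backward()
print(scores.grad)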
With the increasing popularity of location-based social media applications and devices that automatically tag generated content with locations, large repositories of collaborative geo-referenced data are appearing on-line. Efficiently extracting user preferences from these data to determine what information to recommend is challenging because of the sheer volume of data as well as the frequency of updates. Traditional recommender systems focus on the interplay between users and items, but ignore contextual parameters such as location. In this paper we take a geospatial approach to determine locational preferences and similarities between users. We propose to capture the geographic context of user preferences for items using a relational graph, through which we are able to derive many new and state-of-the-art recommendation algorithms, including combinations of them, requiring changes only in the definition of the edge weights. Furthermore, we discuss several solutions for cold-start scenarios. Finally, we conduct experiments using two real-world datasets and provide empirical evidence that many of the proposed algorithms outperform existing location-aware recommender algorithms.
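The relational-graph formulation above leaves the concrete algorithms to the edge-weight definitions. Purely as an illustration, the sketch below runs a random walk with restart (personalized PageRank) over such a graph; the adjacency construction and parameters are assumptions, not the paper's specific algorithms.

import numpy as np

def random_walk_with_restart(adj, start_node, alpha=0.15, iters=50):
    """Personalized PageRank scores over a relational graph (sketch).

    adj: row-stochastic adjacency matrix over user, item, and location nodes;
         the edge weights (check-in counts, distance decay, ratings, ...) are
         what define a particular recommendation algorithm.
    start_node: index of the user node to personalize for.
    """
    n = adj.shape[0]
    restart = np.zeros(n)
    restart[start_node] = 1.0
    scores = restart.copy()
    for _ in range(iters):
        scores = alpha * restart + (1 - alpha) * adj.T @ scores
    return scores  # rank item nodes by score, excluding items the user already has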
In this paper, we propose a two-stage ranking approach for recommending linear TV programs. The proposed approach first leverages user viewing patterns regarding time and TV channels to identify potential candidates for recommendation, and then further leverages user preferences to rank these candidates given textual information about programs. To evaluate the method, we conduct empirical studies on a real-world TV dataset, the results of which demonstrate the superior performance of our model in terms of both recommendation accuracy and time efficiency.
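A minimal sketch of the two-stage idea described above, with hypothetical field names and a toy keyword-overlap score standing in for the textual preference model; the real candidate rules and ranking features are not specified in the abstract.

def content_score(profile_keywords, description):
    """Toy preference score: overlap between a set of profile keywords and program text."""
    return len(set(description.lower().split()) & profile_keywords)

def recommend_programs(user, programs, k=10):
    """Stage 1: filter candidates by viewing patterns (channel and time of day).
    Stage 2: re-rank the candidates by textual preference."""
    candidates = [p for p in programs
                  if p["channel"] in user["frequent_channels"]
                  and p["start_hour"] in user["active_hours"]]
    candidates.sort(key=lambda p: content_score(user["keywords"], p["description"]),
                    reverse=True)
    return candidates[:k]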
