
A Novel User Representation Paradigm for Making Personalized Candidate Retrieval

Added by Jianxun Lian
Publication date: 2019
Language: English





Candidate retrieval is a fundamental problem in recommender systems. Given a user's recommendation request, relevant candidates need to be retrieved in real time for subsequent ranking operations. Because retrieval is conducted over a very large item corpus, it has to be both precise and scalable so that high-quality candidates can be acquired within tolerable latency. Unfortunately, conventional methods trade precision for running efficiency, which leads to inferior retrieval quality. In contrast, deep learning-based approaches can be highly accurate in identifying relevant items, yet they are unsuitable for candidate retrieval due to their inherent limitations in scalability. In this work, a novel framework is proposed to address the above challenges. The underlying intuition is to rely on a well-trained ranking model to supervise an efficient retrieval model, so that scalability and precision are unified as a whole. We have implemented our conceptual framework and conducted a comprehensive evaluation of it, where promising results are achieved against representative baselines. Our work is undergoing anonymous review and will be released after the notification. If you are also interested in this problem, please feel free to contact us.
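To make the core idea concrete, here is a minimal sketch under our own assumptions rather than the authors' (unreleased) implementation: a slow but accurate ranking model acts as a teacher whose scores supervise a fast two-tower retrieval model, and the student's user and item embeddings can then be indexed for approximate nearest-neighbor search. All names (TwoTowerStudent, teacher_score, the dimensions) are illustrative.

```python
# Illustrative distillation sketch: a teacher ranker supervises a two-tower student.
import torch
import torch.nn as nn

EMB_DIM, N_USERS, N_ITEMS = 64, 1000, 5000

class TwoTowerStudent(nn.Module):
    """Efficient retrieval model: relevance = dot(user_vec, item_vec)."""
    def __init__(self):
        super().__init__()
        self.user_emb = nn.Embedding(N_USERS, EMB_DIM)
        self.item_emb = nn.Embedding(N_ITEMS, EMB_DIM)

    def forward(self, users, items):
        return (self.user_emb(users) * self.item_emb(items)).sum(-1)

def teacher_score(users, items):
    # Stand-in for the well-trained (but slow) ranking model; in practice this
    # would be a deep, feature-rich ranker evaluated offline on sampled pairs.
    return torch.randn(users.shape[0])

student = TwoTowerStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):                       # distillation loop on sampled pairs
    users = torch.randint(0, N_USERS, (256,))
    items = torch.randint(0, N_ITEMS, (256,))
    loss = nn.functional.mse_loss(student(users, items), teacher_score(users, items))
    opt.zero_grad(); loss.backward(); opt.step()

# After training, student.item_emb.weight can be indexed (e.g., with an ANN library)
# so that candidates are retrieved by maximum inner product within tolerable latency.
```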



Related research

In this paper, we explore the problem of developing personalized chatbots. A personalized chatbot is designed as a digital chatting assistant for a user, and its key characteristic is a personality consistent with that of the corresponding user: it can talk the same way the user does when it is delegated to respond to others' messages. We present a retrieval-based personalized chatbot model, namely IMPChat, to learn an implicit user profile from the user's dialogue history. We argue that the implicit user profile is superior to the explicit user profile regarding accessibility and flexibility. IMPChat learns an implicit user profile by modeling the user's personalized language style and personalized preferences separately. To learn a user's personalized language style, we elaborately build language models from shallow to deep using the user's historical responses; to model a user's personalized preferences, we explore the conditional relations underneath each post-response pair of the user. The personalized preferences are dynamic and context-aware: when aggregating them, we assign higher weights to the historical pairs that are topically related to the current query. We match each response candidate against the personalized language style and the personalized preferences, respectively, and fuse the two matching signals to determine the final ranking score. Comprehensive experiments on two large datasets show that our method outperforms all baseline models.
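As a rough illustration of two of these ingredients, the sketch below (our own simplification with assumed vector representations, not the IMPChat architecture) weights historical post-response pairs by their topical similarity to the current query and fuses a style-matching score with a preference-matching score.

```python
# Illustrative sketch: query-aware aggregation of history and fusion of two signals.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def preference_score(query_vec, history_post_vecs, history_resp_vecs, cand_vec):
    # Higher weight for historical pairs whose post is topically close to the query.
    weights = np.array([cosine(query_vec, p) for p in history_post_vecs])
    weights = np.exp(weights) / np.exp(weights).sum()          # softmax normalization
    profile = (weights[:, None] * history_resp_vecs).sum(axis=0)
    return cosine(profile, cand_vec)

def final_score(style_score, pref_score, alpha=0.5):
    # Fuse the two matching signals; alpha is an assumed mixing weight.
    return alpha * style_score + (1 - alpha) * pref_score

rng = np.random.default_rng(0)
q, cand = rng.normal(size=32), rng.normal(size=32)
posts, resps = rng.normal(size=(5, 32)), rng.normal(size=(5, 32))
print(final_score(style_score=cosine(q, cand),
                  pref_score=preference_score(q, posts, resps, cand)))
```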
Classical recommender system methods typically face the filter bubble problem: users only receive recommendations of items they are already familiar with, leaving them bored and dissatisfied. To address this problem, unexpected recommendation has been proposed, which recommends items that deviate significantly from users' prior expectations and thus surprises them with fresh and previously unexplored items. In this paper, we describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process by modeling user interests as multiple clusters in the latent space and personalizing unexpectedness via a self-attention mechanism and the selection of an appropriate unexpected activation function. Extensive offline experiments on three real-world datasets show that the proposed PURS model significantly outperforms state-of-the-art baseline approaches in terms of both accuracy and unexpectedness measures. In addition, we conduct an online A/B test at Alibaba-Youku, a major video platform, where our model achieves a more than 3% increase in the average video views per user metric. The proposed model is in the process of being deployed by the company.
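A highly simplified sketch of how unexpectedness can enter the score is shown below; PURS itself learns the interest clusters and uses self-attention, whereas here the clusters, the bell-shaped activation, and the mixing weight are all illustrative assumptions.

```python
# Illustrative sketch: unexpectedness as distance to user interest clusters.
import numpy as np

def unexpectedness(item_vec, interest_clusters):
    # Distance from the candidate to the closest centroid of the user's interest clusters.
    dists = np.linalg.norm(interest_clusters - item_vec, axis=1)
    return dists.min()

def unexpected_activation(u, center=1.25, width=1.0):
    # Assumed bell-shaped activation: reward items that are novel but not totally
    # unrelated to the user's interests (both extremes contribute little).
    return np.exp(-((u - center) / width) ** 2)

def unexpectedness_aware_score(relevance, item_vec, interest_clusters, weight=0.3):
    return relevance + weight * unexpected_activation(unexpectedness(item_vec, interest_clusters))

rng = np.random.default_rng(1)
clusters = rng.normal(size=(4, 16))          # multi-cluster model of user interests
item = rng.normal(size=16)
print(unexpectedness_aware_score(relevance=0.8, item_vec=item, interest_clusters=clusters))
```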
In this paper, we propose a two-stage ranking approach for recommending linear TV programs. The approach first leverages users' viewing patterns with respect to time and TV channel to identify potential candidates for recommendation, and then uses user preferences over textual program information to rank these candidates. To evaluate the method, we conduct empirical studies on a real-world TV dataset; the results demonstrate the superior performance of our model in terms of both recommendation accuracy and time efficiency.
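The two-stage idea can be sketched as follows, with a hypothetical data model (viewing logs as (hour, channel) pairs and programs with textual descriptions) that is ours, not the paper's.

```python
# Illustrative two-stage pipeline: time/channel candidate generation, then textual ranking.
from collections import Counter

def stage1_candidates(viewing_log, programs, hour):
    # viewing_log: list of (hour, channel); programs: dicts with channel/title/description.
    favorite_channels = {ch for (h, ch), _ in Counter(viewing_log).most_common(10) if h == hour}
    return [p for p in programs if p["channel"] in favorite_channels]

def stage2_rank(candidates, preferred_terms):
    # Score each candidate by overlap between its description and the user's preferred terms.
    def score(p):
        return len(set(p["description"].lower().split()) & preferred_terms)
    return sorted(candidates, key=score, reverse=True)

log = [(20, "BBC One"), (20, "BBC One"), (21, "ITV")]
progs = [{"channel": "BBC One", "title": "Nature Doc", "description": "wildlife and nature film"},
         {"channel": "BBC One", "title": "Quiz Night", "description": "live quiz show"}]
print(stage2_rank(stage1_candidates(log, progs, hour=20), {"nature", "wildlife"}))
```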
General-purpose representation learning through large-scale pre-training has shown promising results in various machine learning fields. In the e-commerce domain, the goal of a general-purpose (i.e., one-for-all) representation is to serve a wide range of downstream tasks efficiently, such as user profiling, targeting, and recommendation. In this paper, we systematically compare the generalizability of two learning strategies: transfer learning through the proposed model, ShopperBERT, versus learning from scratch. ShopperBERT learns nine pretext tasks with 79.2M parameters from 0.8B user behaviors collected over two years to produce user embeddings. As a result, MLPs that employ our embeddings outperform more complex models trained from scratch on five out of six tasks. Specifically, the pre-trained embeddings are superior to task-specific supervised features and to strong baselines that are trained on an auxiliary dataset for the cold-start problem. We also report the computational efficiency of the pre-trained features and provide embedding visualizations.
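The downstream-usage pattern described above can be sketched as follows; the embedding matrix is a random stand-in for ShopperBERT's output, and only a lightweight MLP head is trained per task.

```python
# Illustrative sketch: frozen pre-trained user embeddings + a small per-task MLP head.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
pretrained_user_emb = rng.normal(size=(10_000, 128))   # stand-in for ShopperBERT embeddings
labels = rng.integers(0, 2, size=10_000)               # a downstream task, e.g. churn prediction

head = MLPClassifier(hidden_layer_sizes=(64,), max_iter=50)
head.fit(pretrained_user_emb[:8_000], labels[:8_000])  # only the lightweight head is trained
print("held-out accuracy:", head.score(pretrained_user_emb[8_000:], labels[8_000:]))
```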
An effective email search engine can facilitate users' search tasks and improve their communication efficiency. Users can have varied preferences over the ranking signals of an email, such as relevance and recency, depending on the tasks at hand and even their jobs. Thus a uniform matching pattern is not optimal for all users. Instead, an effective email ranker should conduct personalized ranking by taking users' characteristics into account. Existing studies have explored user characteristics from various angles to personalize email search results. However, little attention has been paid to users' search history as a way of characterizing them. Although users' historical behaviors have been shown to be beneficial as context in Web search, their effect in email search has not been studied and remains unknown. Given these observations, we propose to leverage user search history as query context to characterize users and build a context-aware ranking model for email search. In contrast to previous context-dependent ranking techniques that are based on raw texts, we use ranking features in the search history. This frees us from potential privacy leakage while giving better generalization power to unseen users. Accordingly, we propose a context-dependent neural ranking model (CNRM) that encodes the ranking features in users' search history as query context and show that it can significantly outperform a baseline neural model that does not use the context. We also investigate the benefit of the query-context vectors obtained from CNRM for the state-of-the-art learning-to-rank model LambdaMART by clustering the vectors and incorporating the cluster information. Experimental results show that significantly better results can be achieved with LambdaMART as well, indicating that the query clusters can characterize different users and effectively make the ranking model personalized.
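The two steps around LambdaMART can be sketched as follows; the mean-pooled context encoder and the feature shapes are our own simplifications (CNRM learns the encoding), and the resulting cluster id would be appended to the per-document features given to a LambdaMART implementation.

```python
# Illustrative sketch: summarize history ranking features, then cluster the context vectors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# history_features[u] holds the ranking-feature vectors (e.g. relevance, recency) of
# user u's previous queries; here they are random stand-ins.
history_features = [rng.normal(size=(rng.integers(3, 8), 10)) for _ in range(200)]

# Step 1: a simple context encoder -- mean-pool the history (CNRM learns this instead).
context_vectors = np.stack([h.mean(axis=0) for h in history_features])

# Step 2: cluster the context vectors; the cluster id characterizes the user/query and
# can be concatenated to the per-document features fed to LambdaMART.
cluster_ids = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(context_vectors)
print(cluster_ids[:10])
```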
