
Factor-level Attentive ICF for Recommendation

Added by Zhiyong Cheng
Publication date: 2021
Research language: English





Item-based collaborative filtering (ICF) enjoys the advantages of high recommendation accuracy and ease of online personalization, and thus is favored by industrial recommender systems. ICF recommends items to a target user based on their similarities to the items the user has previously interacted with. Great progress has been achieved for ICF in recent years by applying advanced machine learning techniques (e.g., deep neural networks) to learn the item similarity from data. Early methods simply treat all historical items equally, while recent ones distinguish the different importance of items for a prediction. Despite this progress, we argue that these ICF models neglect the diverse intents of users in adopting items (e.g., watching a movie because of the director, the leading actors, or the visual effects). As a result, they fail to estimate the item similarity at a finer-grained level when predicting a user's preference for an item, resulting in sub-optimal recommendation. In this work, we propose a general factor-level attention method for ICF models. The key of our method is to distinguish the importance of different factors when computing the item similarity for a prediction. To demonstrate its effectiveness, we design a light attention neural network that integrates both item-level and factor-level attention for neural ICF models. It is model-agnostic and easy to implement. We apply it to two baseline ICF models and evaluate its effectiveness on six public datasets. Extensive experiments show that the factor-level attention-enhanced models consistently outperform their counterparts, demonstrating the potential of differentiating user intents at the factor level for ICF recommendation models.
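To make the idea concrete, below is a minimal PyTorch sketch of how item-level and factor-level attention can be combined in an ICF predictor. All module and variable names (FactorLevelAttentiveICF, factor_att, item_att, and so on) are illustrative assumptions, not the authors' released implementation; the sketch only shows how a per-pair weight vector over latent factors can reweight each dimension of the item-item similarity before item-level attention aggregates the historical items.

```python
import torch
import torch.nn as nn

class FactorLevelAttentiveICF(nn.Module):
    """Minimal sketch: item-level plus factor-level attention for ICF.
    Names and layer sizes are illustrative assumptions."""

    def __init__(self, num_items, dim, att_dim=32):
        super().__init__()
        self.target_emb = nn.Embedding(num_items, dim)   # item in its "prediction target" role
        self.hist_emb = nn.Embedding(num_items, dim)     # item in its "historical interaction" role
        # scores every latent factor of a (target, historical item) pair
        self.factor_att = nn.Sequential(
            nn.Linear(2 * dim, att_dim), nn.ReLU(), nn.Linear(att_dim, dim))
        # scores every historical item for the target
        self.item_att = nn.Sequential(
            nn.Linear(2 * dim, att_dim), nn.ReLU(), nn.Linear(att_dim, 1))

    def forward(self, target, history):
        # target: (B,) item ids; history: (B, H) previously interacted item ids
        p = self.target_emb(target)                      # (B, D)
        q = self.hist_emb(history)                       # (B, H, D)
        p_exp = p.unsqueeze(1).expand_as(q)              # (B, H, D)
        pair = torch.cat([p_exp, q], dim=-1)             # (B, H, 2D)

        beta = torch.softmax(self.factor_att(pair), dim=-1)   # (B, H, D): weight per factor
        sim = (beta * p_exp * q).sum(dim=-1)                   # (B, H): factor-weighted similarity

        alpha = torch.softmax(self.item_att(pair).squeeze(-1), dim=-1)  # (B, H): weight per item
        return (alpha * sim).sum(dim=-1)                  # (B,): predicted preference score
```

In this sketch the factor-level softmax is taken over the embedding dimensions of each (target, historical) pair, so each interaction can emphasize different factors (e.g., director vs. visual effects), while the item-level softmax weighs the historical items themselves.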




Related research

Modern deep neural networks (DNNs) have greatly facilitated the development of sequential recommender systems by achieving state-of-the-art recommendation performance on various sequential recommendation tasks. Given a sequence of interacted items, existing DNN-based sequential recommenders commonly embed each item into a unique vector to support subsequent computations of the user interest. However, due to the potentially large number of items, the over-parameterised item embedding matrix of a sequential recommender has become a memory bottleneck for efficient deployment in resource-constrained environments, e.g., smartphones and other edge devices. Furthermore, we observe that the widely used multi-head self-attention, though effective in modelling sequential dependencies among items, heavily relies on redundant attention units to fully capture both global and local item-item transition patterns within a sequence. In this paper, we introduce a novel lightweight self-attentive network (LSAN) for sequential recommendation. To aggressively compress the original embedding matrix, LSAN leverages the notion of compositional embeddings, where each item embedding is composed by merging a group of selected base embedding vectors derived from substantially smaller embedding matrices. Meanwhile, to account for the intrinsic dynamics of each item, we further propose a temporal context-aware embedding composition scheme. In addition, we develop an innovative twin-attention network that alleviates the redundancy of the traditional multi-head self-attention while retaining full capacity for capturing long- and short-term (i.e., global and local) item dependencies. Comprehensive experiments demonstrate that LSAN significantly advances the accuracy and memory efficiency of existing sequential recommenders.
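For intuition, the following sketch shows one common way to realize compositional item embeddings, using a quotient/remainder split of the item id over two small codebooks. The splitting scheme, codebook sizes, and additive merging are assumptions for illustration; they are not claimed to be LSAN's exact composition or its temporal context-aware variant.

```python
import torch
import torch.nn as nn

class CompositionalItemEmbedding(nn.Module):
    """Minimal sketch of compositional embeddings via a quotient/remainder
    split; the scheme and sizes are illustrative assumptions."""

    def __init__(self, num_items, dim, num_buckets=1024):
        super().__init__()
        self.num_buckets = num_buckets
        # two small codebooks replace one num_items x dim embedding matrix
        self.quotient_emb = nn.Embedding(num_items // num_buckets + 1, dim)
        self.remainder_emb = nn.Embedding(num_buckets, dim)

    def forward(self, item_ids):
        # item_ids: (B, L) item id sequence; each embedding is merged from two base vectors
        return (self.quotient_emb(item_ids // self.num_buckets)
                + self.remainder_emb(item_ids % self.num_buckets))
```

Under these assumed sizes, one million items with 64-dimensional embeddings would need 64M parameters as a full table, while the two codebooks need roughly (977 + 1024) x 64, or about 0.13M.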
Recently, deep learning has made significant progress in the task of sequential recommendation. Existing neural sequential recommenders typically adopt a generative approach trained with Maximum Likelihood Estimation (MLE). When context information (called factors) is involved, it is difficult to analyze when and how each individual factor affects the final recommendation performance. To this end, we take a new perspective and introduce adversarial learning into sequential recommendation. In this paper, we present a Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the effect of context information on sequential recommendation. Specifically, our proposed MFGAN has two kinds of modules: a Transformer-based generator that takes user behavior sequences as input to recommend the possible next items, and multiple factor-specific discriminators that evaluate the generated sub-sequence from the perspectives of different factors. To learn the parameters, we adopt the classic policy gradient method and utilize the reward signals of the discriminators to guide the learning of the generator. Our framework is flexible enough to incorporate multiple kinds of factor information and is able to trace how each factor contributes to the recommendation decision over time. Extensive experiments conducted on three real-world datasets demonstrate the superiority of our proposed model over state-of-the-art methods in terms of effectiveness and interpretability.
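As a rough illustration of the policy-gradient training described above, the sketch below averages factor-specific discriminator scores into a reward that weights the generator's log-likelihood of the sampled next item. The function signature, the callables generator and discriminators, and the simple mean reward are hypothetical stand-ins; MFGAN's actual update rule may differ.

```python
import torch

def mfgan_style_update(generator, discriminators, seq, next_item, optimizer):
    """Sketch of a REINFORCE-style step: factor-specific discriminator scores
    are averaged into a reward that weights the generator's log-likelihood.
    All names and signatures here are hypothetical, not MFGAN's reference code."""
    logits = generator(seq)                                   # (num_items,) next-item scores
    log_prob = torch.log_softmax(logits, dim=-1)[next_item]   # log-probability of the sampled item
    with torch.no_grad():
        # each discriminator judges the extended sequence from one factor's perspective
        reward = torch.stack([d(seq, next_item) for d in discriminators]).mean()
    loss = -reward * log_prob                                  # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```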
Fan Liu, Zhiyong Cheng, Lei Zhu (2020)
As important side information, attributes have been widely exploited in existing recommender systems for better performance. In real-world scenarios, it is common that some attributes of items/users are missing (e.g., some movies lack genre data). Prior studies usually use a default value (i.e., "other") to represent the missing attribute, resulting in sub-optimal performance. To address this problem, we present an attribute-aware attentive graph convolution network (A$^2$-GCN) in this paper. In particular, we first construct a graph in which users, items, and attributes are three types of nodes and their associations are edges. Thereafter, we leverage the graph convolution network to characterize the complicated interactions among <users, items, attributes>. To learn the node representations, we turn to the message-passing strategy to aggregate the messages passed from the other directly linked types of nodes (e.g., a user or an attribute). In this way, we can incorporate associated attributes to strengthen the user and item representations, and thus naturally address the attribute missing problem. Considering that the attributes of an item influence different users' preferences for that item differently, we design a novel attention mechanism to filter the message passed from an item to a target user by taking the attribute information into account. Extensive experiments have been conducted on several publicly accessible datasets to justify our model. Results show that our model outperforms several state-of-the-art methods and demonstrate the effectiveness of our attention method.
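The attention described above can be pictured as follows: when aggregating messages from a user's linked items, the score of each item depends on the user, the item, and that item's (pooled) attribute embeddings. The function name attribute_aware_aggregation, the three-way MLP scorer, and the tensor shapes are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

def attribute_aware_aggregation(user_vec, item_vecs, attr_vecs, score_mlp):
    """Sketch of attribute-aware attention for one user node.
    user_vec: (D,), item_vecs: (N, D), attr_vecs: (N, D) pooled attribute embeddings."""
    user_rep = user_vec.unsqueeze(0).expand_as(item_vecs)                  # (N, D)
    scores = score_mlp(torch.cat([user_rep, item_vecs, attr_vecs], dim=-1)).squeeze(-1)
    weights = torch.softmax(scores, dim=0)                                 # attention over linked items
    return (weights.unsqueeze(-1) * item_vecs).sum(dim=0)                  # message aggregated into the user

# example scorer: maps [user; item; attribute] to a single attention logit
D = 64
score_mlp = nn.Sequential(nn.Linear(3 * D, D), nn.ReLU(), nn.Linear(D, 1))
```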
To alleviate the data sparsity and cold-start problems of traditional recommender systems (RSs), incorporating knowledge graphs (KGs) to supplement auxiliary information has attracted considerable attention recently. However, simply integrating KGs into current KG-based RS models does not necessarily guarantee improved recommendation performance and may even weaken the holistic model capability. This is because the construction of these KGs is independent of the collection of historical user-item interactions; hence, the information in these KGs may not always be helpful for recommendation to all users. In this paper, we propose attentive Knowledge-aware Graph convolutional networks with Collaborative Guidance for personalized Recommendation (CG-KGR). CG-KGR is a novel knowledge-aware recommendation model that enables ample and coherent learning of KGs and user-item interactions via our proposed Collaborative Guidance Mechanism. Specifically, CG-KGR first encapsulates historical interactions into an interactive information summarization. Then CG-KGR utilizes it as guidance to extract information from the KGs, which eventually yields more precise personalized recommendations. We conduct extensive experiments on four real-world datasets over two recommendation tasks, i.e., Top-K recommendation and Click-Through Rate (CTR) prediction. The experimental results show that the CG-KGR model significantly outperforms recent state-of-the-art models by 4.0-53.2% and 0.4-3.2%, in terms of the Recall metric on Top-K recommendation and AUC on CTR prediction, respectively.
Hansi Zeng, Qingyao Ai (2020)
Using reviews to learn user and item representations is important for recommender systems. Current review-based methods can be divided into two categories: (1) Convolutional Neural Network (CNN)-based models that extract n-gram features from user/item reviews; and (2) Recurrent Neural Network (RNN)-based models that learn global contextual representations from reviews for users and items. Despite their success, both CNN- and RNN-based models suffer from their own drawbacks. While CNN-based models are weak at modeling long-dependency relations in text, RNN-based models are slow in training and inference due to their incompatibility with parallel computing. To alleviate these problems, we propose a new text encoder module for review modeling in recommendation that combines convolution networks with self-attention networks to model local and global interactions in text together. As different words, sentences, and reviews have different importance for modeling user and item representations, we construct review models hierarchically at the sentence level, review level, and user/item level by encoding words into sentences, encoding sentences into reviews, and encoding reviews into user and item representations. Experiments on the Amazon Product Benchmark show that our model achieves significantly better performance compared to state-of-the-art review-based recommendation models.
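A minimal sketch of the combined local/global encoder idea is given below: a 1-D convolution branch captures n-gram (local) patterns while a self-attention branch captures long-range (global) dependencies, and the two are fused before pooling. Layer sizes, the concatenation-based fusion, and mean pooling are assumptions; the paper's hierarchical word/sentence/review design would apply this kind of block at each level.

```python
import torch
import torch.nn as nn

class LocalGlobalEncoder(nn.Module):
    """Sketch of one encoder block mixing a convolution branch (local n-gram
    features) with a self-attention branch (global dependencies)."""

    def __init__(self, dim, kernel_size=3, heads=4):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, x):
        # x: (B, L, D) word embeddings of one sentence (or sentence vectors of one review)
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (B, L, D) local patterns
        global_, _ = self.attn(x, x, x)                        # (B, L, D) long-range interactions
        fused = torch.relu(self.fuse(torch.cat([local, global_], dim=-1)))
        return fused.mean(dim=1)                               # pooled vector passed to the next level
```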
