Recommender systems play an increasingly important role in providing users with tailored suggestions based on their preferences. However, conventional offline recommender systems cannot handle ubiquitous data streams well. To address this issue, Streaming Recommender Systems (SRSs) have emerged in recent years; they incrementally train recommendation models on newly received data to deliver effective real-time recommendations. Focusing on new data only helps address concept drift, i.e., changing user preferences towards items, but it impedes capturing long-term user preferences. In addition, the commonly occurring underload and overload problems must be tackled well to achieve higher accuracy of streaming recommendations. To address these problems, we propose a Stratified and Time-aware Sampling based Adaptive Ensemble Learning framework, called STS-AEL, to improve the accuracy of streaming recommendations. In STS-AEL, we first devise stratified and time-aware sampling to extract representative data from both new data and historical data, thereby addressing concept drift while capturing long-term user preferences; incorporating historical data also makes better use of idle resources in the underload scenario. After that, we propose adaptive ensemble learning to efficiently process the overloaded data in parallel with multiple individual recommendation models, and then effectively fuse the results of these models with a sequential adaptive mechanism. Extensive experiments conducted on three real-world datasets demonstrate that STS-AEL significantly outperforms the state-of-the-art SRSs in all cases.
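The central sampling idea above is to build each training batch from both the incoming stream and a reservoir of historical interactions, biasing the historical draw toward more recent data. The sketch below is a minimal Python illustration of that time-aware mixing only, not the authors' implementation; the stratification step and the hyperparameters `new_ratio` and `decay` are assumptions for illustration.

```python
import numpy as np

def sample_training_batch(new_data, reservoir, timestamps, batch_size,
                          new_ratio=0.7, decay=1e-6, rng=None):
    """Illustrative time-aware sampling: mix new interactions with
    recency-weighted samples from a reservoir of historical interactions.

    new_data, reservoir: lists of (user, item, rating) tuples
    timestamps:          array of timestamps aligned with `reservoir`
    """
    rng = rng or np.random.default_rng()
    n_new = min(int(batch_size * new_ratio), len(new_data))
    n_old = min(batch_size - n_new, len(reservoir))

    # take the newest incoming interactions as-is
    batch = list(new_data[-n_new:]) if n_new > 0 else []

    if n_old > 0:
        # exponentially decayed weights favour recent historical interactions
        ts = np.asarray(timestamps, dtype=float)
        weights = np.exp(-decay * (ts.max() - ts))
        probs = weights / weights.sum()
        idx = rng.choice(len(reservoir), size=n_old, replace=False, p=probs)
        batch.extend(reservoir[i] for i in idx)

    return batch  # new interactions followed by sampled historical ones
```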
Streaming Recommender Systems (SRSs) commonly train recommendation models on newly received data only, so as to address user preference drift, i.e., changing user preferences towards items. However, this practice overlooks the long-term user preferences embedded in historical data. More importantly, the heterogeneity commonly found in data streams greatly reduces the accuracy of streaming recommendations, because the different preferences (or characteristics) of different types of users (or items) cannot be well learned by a single unified model. To address these two issues, we propose a Variational and Reservoir-enhanced Sampling based Double-Wing Mixture of Experts framework, called VRS-DWMoE, to improve the accuracy of streaming recommendations. In VRS-DWMoE, we first devise variational and reservoir-enhanced sampling to judiciously complement new data with historical data, thus addressing user preference drift while capturing long-term user preferences. After that, we propose a Double-Wing Mixture of Experts (DWMoE) model that first learns heterogeneous user preferences and item characteristics effectively and then makes recommendations based on them. Specifically, DWMoE contains two Mixtures of Experts (MoE, an effective ensemble learning model) to learn user preferences and item characteristics, respectively. Moreover, the multiple experts in each MoE learn the preferences (or characteristics) of different types of users (or items), where each expert specializes in one underlying type. Extensive experiments demonstrate that VRS-DWMoE consistently outperforms the state-of-the-art SRSs.
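A Mixture of Experts combines several expert networks through a learned gating function that softly assigns each input to the experts, so that each expert can specialize in one type of user or item. The toy numpy sketch below shows the forward pass of one such MoE on user embeddings; the linear experts, shapes, and gating are hypothetical simplifications, and DWMoE's double-wing structure, prediction layer, and training procedure are not reproduced.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def moe_forward(x, expert_weights, gate_weights):
    """Forward pass of a toy mixture of experts.

    x:              (batch, d_in) input embeddings (e.g. raw user vectors)
    expert_weights: list of (d_in, d_out) matrices, one linear expert each
    gate_weights:   (d_in, n_experts) gating matrix
    Returns (batch, d_out): gate-weighted sum of the expert outputs.
    """
    gates = softmax(x @ gate_weights)                          # (batch, n_experts)
    outs = np.stack([x @ w for w in expert_weights], axis=1)   # (batch, n_experts, d_out)
    return (gates[:, :, None] * outs).sum(axis=1)

# Tiny usage example: 4 users, 8-dim inputs, 3 experts, 5-dim outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
experts = [rng.normal(size=(8, 5)) for _ in range(3)]
gate = rng.normal(size=(8, 3))
print(moe_forward(x, experts, gate).shape)  # (4, 5)
```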
Science and engineering problems subject to uncertainty are frequently both computationally expensive and nonsmooth in their parameter dependence, which makes standard Monte Carlo too slow and rules out accelerated uncertainty quantification methods that rely on strict smoothness assumptions. To remedy these challenges, we propose an adaptive stratification method that is suitable for nonsmooth problems and achieves significantly reduced variance compared to Monte Carlo sampling. The stratification is iteratively refined, and samples are added sequentially to satisfy an allocation criterion combining the benefits of proportional and optimal sampling. Theoretical estimates are provided for the expected performance and for the probability of failing to correctly estimate essential statistics. We devise a practical adaptive stratification method with strata of the same geometrical shape and cost-effective refinement satisfying a greedy variance reduction criterion. Numerical experiments corroborate the theoretical findings and exhibit speedups of up to three orders of magnitude compared to standard Monte Carlo sampling.
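To make the allocation idea concrete, the sketch below estimates the mean of a discontinuous function with stratified sampling over a fixed partition of [0, 1], allocating samples by a blend of proportional and Neyman (optimal) allocation driven by pilot estimates of the per-stratum variance. This is a simplified illustration under assumed constants; the adaptive refinement loop, stratum geometry, and failure-probability estimates of the proposed method are omitted.

```python
import numpy as np

def stratified_estimate(f, edges, n_total, alpha=0.5, n_pilot=20, rng=None):
    """Stratified Monte Carlo on [0, 1] with a proportional/Neyman blend.

    f:      integrand, vectorised over numpy arrays
    edges:  stratum boundaries, e.g. np.linspace(0, 1, 11)
    alpha:  0 -> purely proportional, 1 -> purely Neyman allocation
    """
    rng = rng or np.random.default_rng()
    lo, hi = edges[:-1], edges[1:]
    p = hi - lo                                   # stratum probabilities

    # pilot samples give rough per-stratum standard deviations
    pilot = lo[:, None] + p[:, None] * rng.random((len(p), n_pilot))
    sigma = f(pilot).std(axis=1) + 1e-12

    prop = p / p.sum()
    neyman = p * sigma / (p * sigma).sum()
    alloc = (1 - alpha) * prop + alpha * neyman
    n_k = np.maximum(2, np.round(alloc * n_total).astype(int))

    means = np.array([
        f(l + (h - l) * rng.random(n)).mean() for l, h, n in zip(lo, hi, n_k)
    ])
    return float((p * means).sum())

# Example: a discontinuous integrand where plain Monte Carlo converges slowly.
f = lambda x: np.where(x < 0.3, 0.0, np.sin(10 * x))
print(stratified_estimate(f, np.linspace(0, 1, 11), n_total=2000))
```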
News articles usually contain knowledge entities such as celebrities or organizations. Important entities in articles carry key messages and help readers understand the content more directly. An industrial news recommender system contains various key applications, such as personalized recommendation, item-to-item recommendation, news category classification, news popularity prediction, and local news detection. We find that incorporating knowledge entities for better document understanding benefits these applications consistently. However, existing document understanding models either represent news articles without considering knowledge entities (e.g., BERT) or rely on a specific type of text encoding model (e.g., DKN), compromising generalization ability and efficiency. In this paper, we propose KRED, a fast and effective model that enhances arbitrary document representations with a knowledge graph. KRED first enriches entity embeddings by attentively aggregating information from their neighborhoods in the knowledge graph. A context embedding layer is then applied to annotate the dynamic context of different entities, such as frequency, category, and position. Finally, an information distillation layer aggregates the entity embeddings under the guidance of the original document representation and transforms the document vector into a new one. We advocate optimizing the model within a multi-task framework, so that different news recommendation applications are united and useful information is shared across tasks. Experiments on a real-world Microsoft News dataset demonstrate that KRED greatly benefits a variety of news recommendation applications.
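The first step described above attentively aggregates an entity's knowledge-graph neighbourhood into its embedding. The snippet below is a simplified numpy illustration of that aggregation step only; the bilinear attention score and the residual combination are assumptions made for the example, and KRED's context embedding and information distillation layers are not shown.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def aggregate_entity(entity_vec, neighbor_vecs, relation_mat):
    """Enrich one entity embedding with its knowledge-graph neighbourhood.

    entity_vec:    (d,) embedding of the entity
    neighbor_vecs: (n, d) embeddings of its neighbours in the knowledge graph
    relation_mat:  (d, d) bilinear scoring matrix (illustrative attention)
    """
    scores = neighbor_vecs @ relation_mat @ entity_vec   # (n,) attention logits
    attn = softmax(scores)
    context = attn @ neighbor_vecs                        # (d,) weighted neighbourhood
    return entity_vec + context                           # residual-style enrichment

rng = np.random.default_rng(1)
e = rng.normal(size=16)
neigh = rng.normal(size=(5, 16))
rel = rng.normal(size=(16, 16))
print(aggregate_entity(e, neigh, rel).shape)  # (16,)
```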
Deep learning based recommender systems (DLRSs) often have embedding layers, which reduce the dimensionality of categorical variables (e.g., user/item identifiers) and transform them meaningfully into a low-dimensional space. Most existing DLRSs empirically pre-define a fixed and unified dimension for all user/item embeddings. However, recent research shows that different embedding sizes are highly desirable for different users/items according to their popularity, and manually selecting embedding sizes is very challenging due to the large number of users/items and the dynamic nature of their popularity. Thus, in this paper, we propose an AutoML based end-to-end framework (AutoEmb) that enables varying embedding dimensions according to popularity in an automated and dynamic manner. Specifically, we first enhance a typical DLRS to allow embeddings of various dimensions; we then propose an end-to-end differentiable framework that automatically selects different embedding dimensions according to user/item popularity; finally, we propose an AutoML based optimization algorithm in a streaming recommendation setting. Experimental results on widely used benchmark datasets demonstrate the effectiveness of the AutoEmb framework.
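The key mechanism is making the choice among candidate embedding sizes differentiable: embeddings of several sizes are projected into a common space and mixed with learned soft weights (e.g., computed from popularity). The numpy sketch below shows only that soft-selection step for a single item; the projection matrices, the source of the selection logits, and the AutoML optimization loop are simplifying assumptions.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def soft_select_embedding(embeddings, projections, selection_logits):
    """Differentiable mixture over candidate embedding dimensions.

    embeddings:       list of candidate embeddings, e.g. sizes [2, 8, 32]
    projections:      list of (d_i, d_out) matrices mapping each candidate
                      embedding into a shared output space
    selection_logits: (n_candidates,) logits, e.g. produced from popularity
    """
    weights = softmax(selection_logits)
    projected = np.stack([e @ W for e, W in zip(embeddings, projections)])
    return weights @ projected        # (d_out,) mixed embedding

rng = np.random.default_rng(2)
cands = [rng.normal(size=d) for d in (2, 8, 32)]
projs = [rng.normal(size=(d, 16)) for d in (2, 8, 32)]
logits = np.array([0.1, 1.5, -0.3])  # would come from a popularity-aware controller
print(soft_select_embedding(cands, projs, logits).shape)  # (16,)
```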
Sequential recommendation systems model the dynamic preferences of users based on their historical interactions with platforms. Despite recent progress, modeling the short-term and long-term behavior of users in such systems remains nontrivial and challenging. To address this, we present a solution enhanced by a knowledge graph, called KATRec (Knowledge Aware aTtentive sequential Recommendations). KATRec learns the short- and long-term interests of users by modeling their sequences of interacted items and leveraging pre-existing side information through a knowledge graph attention network. Our novel knowledge graph-enhanced sequential recommender captures item multi-relations at the entity level and users' dynamic sequences at the item level. KATRec improves item representation learning by considering higher-order connections and incorporating them into the user preference representation while recommending the next item. Experiments on three public datasets show that KATRec outperforms state-of-the-art recommendation models and demonstrates the importance of modeling both temporal and side information for high-quality recommendations.
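At a high level, the next-item score combines a representation of the user's interaction sequence with item embeddings that have already been enriched through the knowledge graph. The numpy sketch below illustrates that scoring idea only; the single-head self-attention, last-position pooling, and dot-product scoring are simplifying assumptions, and KATRec's knowledge graph attention network and training objective are not reproduced.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def next_item_scores(seq_item_vecs, candidate_item_vecs):
    """Score candidate next items for one user.

    seq_item_vecs:       (t, d) KG-enhanced embeddings of items the user
                         interacted with, in chronological order
    candidate_item_vecs: (m, d) KG-enhanced embeddings of candidate items
    """
    # single-head self-attention over the sequence (simplified, no projections)
    d = seq_item_vecs.shape[1]
    attn = softmax(seq_item_vecs @ seq_item_vecs.T / np.sqrt(d), axis=-1)
    contextualised = attn @ seq_item_vecs            # (t, d)
    user_repr = contextualised[-1]                   # last position summarises the sequence
    return candidate_item_vecs @ user_repr           # (m,) dot-product scores

rng = np.random.default_rng(3)
seq = rng.normal(size=(6, 32))
cands = rng.normal(size=(100, 32))
print(next_item_scores(seq, cands).argmax())  # index of the top-scoring candidate
```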