
Mixture of Virtual-Kernel Experts for Multi-Objective User Profile Modeling

Added by Zhenhui Xu
Publication date: 2021
Language: English





In many industrial applications such as online advertising and recommendation systems, diverse and accurate user profiles can greatly improve personalization. To build user profiles, deep learning is widely used to mine expressive tags that describe users' preferences from their historical actions. For example, tags mined from users' click-action history can represent the categories of ads that users are interested in and are therefore likely to click again in the future. Traditional solutions usually introduce multiple independent Two-Tower models to mine tags from different actions, e.g., click and conversion. However, such independent models cannot learn from one another, which hinders effective training on data-sparse actions. Moreover, the lack of information fusion between the two towers leaves the learned models unable to represent users' preferences across various topics well. This paper introduces a novel multi-task model called Mixture of Virtual-Kernel Experts (MVKE) to jointly learn multiple topic-related user preferences from different actions. In MVKE, we propose the concept of a Virtual-Kernel Expert, each of which focuses on modeling one particular facet of a user's preference, with all experts learned in coordination. In addition, the gate-based structure in MVKE builds an information fusion bridge between the two towers, substantially improving the model's capability while maintaining high efficiency. We apply the model in the Tencent Advertising System, where both online and offline evaluations show that our method significantly improves on existing approaches and delivers a clear lift in actual advertising revenue.
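To make the idea concrete, here is a minimal PyTorch sketch of how virtual-kernel experts and a tag-side gate could be wired between two towers. All names, dimensions, and the attention-based formulation are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MVKESketch(nn.Module):
    """Hypothetical sketch of Mixture of Virtual-Kernel Experts (MVKE).

    Each expert owns a learnable "virtual kernel" that attends over the
    user's behavior features; a tag-conditioned gate mixes the expert
    outputs. Dimensions and structure are illustrative assumptions only.
    """

    def __init__(self, dim: int = 64, num_experts: int = 4):
        super().__init__()
        # One learnable virtual kernel (query vector) per expert.
        self.kernels = nn.Parameter(torch.randn(num_experts, dim))
        self.user_proj = nn.Linear(dim, dim)   # user-tower projection
        self.tag_proj = nn.Linear(dim, dim)    # tag-tower projection

    def forward(self, user_feats: torch.Tensor, tag_emb: torch.Tensor):
        # user_feats: (batch, seq_len, dim) behavior features
        # tag_emb:    (batch, dim) embedding of the candidate tag
        keys = self.user_proj(user_feats)                         # (B, S, D)
        # Each virtual kernel attends over the behavior sequence.
        attn = torch.softmax(
            torch.einsum("ed,bsd->bes", self.kernels, keys), dim=-1
        )                                                         # (B, E, S)
        experts = torch.einsum("bes,bsd->bed", attn, user_feats)  # (B, E, D)
        # Tag-conditioned gate: the information-fusion bridge between towers.
        gate = torch.softmax(
            torch.einsum("bd,ed->be", self.tag_proj(tag_emb), self.kernels),
            dim=-1,
        )                                                         # (B, E)
        user_vec = torch.einsum("be,bed->bd", gate, experts)      # (B, D)
        # Score: similarity between the fused user vector and the tag.
        return torch.sigmoid((user_vec * tag_emb).sum(-1))
```

Because the gate depends on the tag embedding, different tags read out different mixtures of the experts, so each expert can specialize in one facet of user preference while the two towers still exchange information.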



Related Research

Iyad Batal, Akshay Soni (2020)
Multiple content providers rely on native advertisement for revenue by placing ads within the organic content of their pages. We refer to this setting as "queryless" to differentiate it from search advertisement, where a user submits a search query and gets back related ads. Understanding user intent is critical because relevant ads improve user experience and increase the likelihood of delivering clicks that have value to our advertisers. This paper presents Multi-Channel Sequential Behavior Network (MC-SBN), a deep learning approach for embedding users and ads in a semantic space in which relevance can be evaluated. Our proposed user encoder architecture summarizes user activities from multiple input channels, such as previous search queries, visited pages, or clicked ads, into a user vector. It uses multiple RNNs to encode sequences of event sessions from the different channels and then applies an attention mechanism to create the user representation. A key property of our approach is that user vectors can be maintained and updated incrementally, which makes the model feasible to deploy for large-scale serving. We conduct extensive experiments on real-world datasets. The results demonstrate that MC-SBN can improve the ranking of relevant ads and boost the performance of both click prediction and conversion prediction in the queryless native advertising setting.
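As a rough illustration, the following PyTorch sketch encodes several event channels with per-channel GRUs and pools them with attention into a single user vector. The channel count, encoder choice, and pooling scheme are assumptions for exposition, not MC-SBN's exact design.

```python
import torch
import torch.nn as nn

class MCSBNSketch(nn.Module):
    """Hypothetical multi-channel sequential user encoder in the spirit
    of MC-SBN: one RNN per channel, attention pooling over channel
    summaries. Illustrative assumptions, not the paper's architecture.
    """

    def __init__(self, dim: int = 64, num_channels: int = 3):
        super().__init__()
        # One RNN per input channel (e.g., queries, pages, clicked ads).
        self.rnns = nn.ModuleList(
            [nn.GRU(dim, dim, batch_first=True) for _ in range(num_channels)]
        )
        self.attn = nn.Linear(dim, 1)  # scores each channel summary

    def forward(self, channels: list) -> torch.Tensor:
        # channels[i]: (batch, seq_len_i, dim) event sequence for channel i
        summaries = []
        for rnn, seq in zip(self.rnns, channels):
            _, h = rnn(seq)               # h: (1, batch, dim) final state
            summaries.append(h.squeeze(0))
        stacked = torch.stack(summaries, dim=1)         # (B, C, D)
        weights = torch.softmax(self.attn(stacked), 1)  # (B, C, 1)
        return (weights * stacked).sum(dim=1)           # user vector (B, D)
```

Incremental serving then only needs to update the per-channel RNN states as new events arrive, rather than re-encoding the full history.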
Yan Zhao, Shoujin Wang, Yan Wang (2020)
Streaming Recommender Systems (SRSs) commonly train recommendation models on newly received data only to address user preference drift, i.e., the changing user preferences towards items. However, this practice overlooks the long-term user preferences embedded in historical data. More importantly, the common heterogeneity in the data stream greatly reduces the accuracy of streaming recommendations, because the different preferences (or characteristics) of different types of users (or items) cannot be well learned by a unified model. To address these two issues, we propose a Variational and Reservoir-enhanced Sampling based Double-Wing Mixture of Experts framework, called VRS-DWMoE, to improve the accuracy of streaming recommendations. In VRS-DWMoE, we first devise variational and reservoir-enhanced sampling to judiciously complement new data with historical data, addressing the user preference drift issue while capturing long-term user preferences. After that, we propose a Double-Wing Mixture of Experts (DWMoE) model to first effectively learn heterogeneous user preferences and item characteristics, and then make recommendations based on them. Specifically, DWMoE contains two Mixture-of-Experts modules (MoE, an effective ensemble learning model) that learn user preferences and item characteristics, respectively. Moreover, the multiple experts in each MoE learn the preferences (or characteristics) of different types of users (or items), where each expert specializes in one underlying type. Extensive experiments demonstrate that VRS-DWMoE consistently outperforms the state-of-the-art SRSs.
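The reservoir side of the sampling strategy can be illustrated with classic reservoir sampling (Algorithm R), which maintains a uniform sample of the stream that new data can be mixed with at training time. This is only the basic primitive; the paper's variational and reservoir-enhanced scheme is more elaborate, and the names below are purely illustrative.

```python
import random

def reservoir_update(reservoir: list, capacity: int, item, seen: int) -> None:
    """One step of classic reservoir sampling (Algorithm R): after this
    call, `reservoir` holds a uniform sample of all `seen` items so far.
    """
    if len(reservoir) < capacity:
        reservoir.append(item)
    else:
        # Keep the new item with probability capacity / seen, replacing
        # a uniformly chosen slot.
        j = random.randrange(seen)
        if j < capacity:
            reservoir[j] = item

# Usage: mix sampled historical interactions with the newest ones.
reservoir, seen = [], 0
for interaction in range(1000):  # stand-in for a stream of interactions
    seen += 1
    reservoir_update(reservoir, capacity=100, item=interaction, seen=seen)
training_batch = reservoir[:50] + list(range(950, 1000))  # old + new data
```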
Faicel Chamroukhi (2016)
Mixture of Experts (MoE) is a popular framework for modeling heterogeneity in data for regression, classification, and clustering. For regression and cluster analyses of continuous data, MoE usually uses normal experts following the Gaussian distribution. However, for a set of data containing a group or groups of observations with heavy tails or atypical observations, the use of normal experts is unsuitable and can unduly affect the fit of the MoE model. We introduce a robust MoE modeling approach using the $t$ distribution. The proposed $t$ MoE (TMoE) deals with these issues regarding heavy-tailed and noisy data. We develop a dedicated expectation-maximization (EM) algorithm to estimate the parameters of the proposed model by monotonically maximizing the observed-data log-likelihood. We describe how the presented model can be used in prediction and in model-based clustering of regression data. The proposed model is validated in numerical experiments carried out on simulated data, which show the effectiveness and robustness of the proposed model in modeling non-linear regression functions as well as in model-based clustering. It is then applied to real-world tone-perception data for musical data analysis and to temperature anomalies for the analysis of climate-change data. The obtained results show the usefulness of the TMoE model for practical applications.
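As a sketch of the modeling idea, the following computes a TMoE-style mixture density with Student-t experts and the E-step responsibilities that an EM algorithm would use. Parameter names are illustrative, and the gating probabilities are taken as given rather than computed from covariates as in the paper.

```python
import numpy as np
from scipy.stats import t as student_t

def tmoe_density(x, gates, locs, scales, dfs):
    """Mixture density with Student-t experts at a single point x:
    sum_k gates[k] * t(x; dfs[k], locs[k], scales[k]).
    """
    comps = np.array([
        student_t.pdf(x, df=df, loc=loc, scale=sc)
        for loc, sc, df in zip(locs, scales, dfs)
    ])
    return float(np.asarray(gates) @ comps)

def responsibilities(x, gates, locs, scales, dfs):
    """E-step posterior expert memberships for one observation."""
    comps = np.array([
        student_t.pdf(x, df=df, loc=loc, scale=sc)
        for loc, sc, df in zip(locs, scales, dfs)
    ])
    weighted = np.asarray(gates) * comps
    return weighted / weighted.sum()

# Two experts: a narrow central one and a heavy-tailed one.
print(responsibilities(3.0, [0.7, 0.3], [0.0, 0.0], [1.0, 3.0], [30.0, 3.0]))
```

The heavy tails of the $t$ experts keep outlying points from dominating the fit, which is the robustness property the paper exploits.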
Faicel Chamroukhi (2016)
Mixture of Experts (MoE) is a popular framework in statistics and machine learning for modeling heterogeneity in data for regression, classification, and clustering. MoE for continuous data are usually based on the normal distribution. However, it is known that for data with asymmetric behavior, heavy tails, and atypical observations, the use of the normal distribution is unsuitable. We introduce a new robust non-normal mixture-of-experts model using the skew $t$ distribution. The proposed skew $t$ mixture of experts, named STMoE, handles the issues that normal mixtures of experts face with possibly skewed, heavy-tailed, and noisy data. We develop a dedicated expectation conditional maximization (ECM) algorithm to estimate the model parameters by monotonically maximizing the observed-data log-likelihood. We describe how the presented model can be used in prediction and in model-based clustering of regression data. Numerical experiments carried out on simulated data show the effectiveness and robustness of the proposed model in fitting non-linear regression functions as well as in model-based clustering. The model is then applied to real-world tone-perception data for musical data analysis and to temperature anomalies for the analysis of climate-change data. The obtained results confirm the usefulness of the model for practical data analysis applications.
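The key ingredient is the skew-t expert density. The snippet below evaluates a standardized Azzalini-type skew-t density using SciPy; location and scale parameters, and the full ECM machinery, are omitted, so this is a sketch rather than the paper's complete parameterization.

```python
import numpy as np
from scipy.stats import t as student_t

def skew_t_pdf(x, skew, df):
    """Standardized Azzalini-type skew-t density:
    2 * t_df(x) * T_{df+1}(skew * x * w), with
    w = sqrt((df + 1) / (df + x**2)).
    skew = 0 recovers the symmetric Student-t density.
    """
    x = np.asarray(x, dtype=float)
    w = np.sqrt((df + 1.0) / (df + x**2))
    return 2.0 * student_t.pdf(x, df) * student_t.cdf(skew * x * w, df + 1)

# A positive skew parameter shifts probability mass to the right.
print(skew_t_pdf(np.array([-1.0, 0.0, 1.0]), skew=2.0, df=4.0))
```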
Learning generative models that span multiple data modalities, such as vision and language, is often motivated by the desire to learn more useful, generalisable representations that faithfully capture common underlying factors between the modalities. In this work, we characterise successful learning of such models as the fulfillment of four criteria: i) implicit latent decomposition into shared and private subspaces, ii) coherent joint generation over all modalities, iii) coherent cross-generation across individual modalities, and iv) improved model learning for individual modalities through multi-modal integration. Here, we propose a mixture-of-experts multimodal variational autoencoder (MMVAE) to learn generative models on different sets of modalities, including a challenging image-language dataset, and demonstrate its ability to satisfy all four criteria, both qualitatively and quantitatively.
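As a minimal sketch of the mixture-of-experts posterior at the heart of such a model, the following samples the joint latent by choosing a modality's unimodal Gaussian posterior uniformly at random and drawing a reparameterized sample from it. Shapes and names are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.distributions as D

def moe_joint_posterior_sample(mus, logvars):
    """Sample from a mixture-of-experts joint posterior
    q(z | x_1..x_M) = (1/M) * sum_m q(z | x_m)
    by picking one modality expert uniformly, then drawing a
    reparameterized sample from its Gaussian posterior.

    mus, logvars: lists of (batch, latent_dim) per-modality parameters.
    """
    m = torch.randint(len(mus), (1,)).item()  # pick one expert/modality
    q_m = D.Normal(mus[m], torch.exp(0.5 * logvars[m]))
    return q_m.rsample()                       # differentiable sample

# Usage with two hypothetical modalities (e.g., image and caption encoders):
mu_img, lv_img = torch.zeros(8, 16), torch.zeros(8, 16)
mu_txt, lv_txt = torch.ones(8, 16), torch.zeros(8, 16)
z = moe_joint_posterior_sample([mu_img, mu_txt], [lv_img, lv_txt])
```

Decoding such a sample through every modality's decoder yields joint generation, while encoding one modality and decoding another yields cross-generation, two of the four criteria above.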
