
Ultra accurate personalized recommendation via eliminating redundant correlations

Added by Tao Zhou
Publication date: 2009
Field: Physics
Language: English





In this paper, based on a weighted projection of the bipartite user-object network, we introduce a personalized recommendation algorithm called network-based inference (NBI), which has higher accuracy than the classical collaborative filtering algorithm. In the NBI, the correlation resulting from a specific attribute may be counted repeatedly in the cumulative recommendations from different objects. By considering higher-order correlations, we design an improved algorithm that can, to some extent, eliminate these redundant correlations. We test our algorithm on two benchmark data sets, MovieLens and Netflix. Compared with the NBI, the algorithmic accuracy, measured by the ranking score, is further improved by 23% for MovieLens and 22% for Netflix, respectively. The present algorithm can even outperform the Latent Dirichlet Allocation algorithm, which requires much longer computational time. Furthermore, while most previous studies considered algorithmic accuracy only, in this paper we argue that diversity and popularity, as two significant criteria of algorithmic performance, should also be taken into account. With more or less the same accuracy, an algorithm giving higher diversity and lower popularity is more favorable. Numerical results show that the present algorithm outperforms the standard one simultaneously in all five adopted metrics: lower ranking score and higher precision for accuracy, larger Hamming distance and lower intra-similarity for diversity, as well as smaller average degree for popularity.
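To make the redundancy-elimination step concrete, here is a minimal NumPy sketch of NBI with a second-order correction of the form W' = W + a·W² (a < 0), which is one way to read the abstract's "considering the higher order correlations"; the function name, the default value of a, and the exact form of the correction are our assumptions rather than the paper's verbatim specification.

```python
import numpy as np

def nbi_scores(A, a=-0.75):
    """Network-based inference (NBI) scores with a second-order correction.

    A : (n_objects, n_users) binary matrix, A[i, l] = 1 if user l collected
        object i; assumes every row and column has at least one 1.
    a : coefficient of the correction term (hypothetical default; in practice
        it would be tuned on held-out data, with negative values suppressing
        the redundantly counted correlations).
    """
    k_obj = A.sum(axis=1)            # object degrees k(o_i)
    k_usr = A.sum(axis=0)            # user degrees k(u_l)

    # NBI weight: W[i, j] = (1 / k(o_j)) * sum_l A[i, l] * A[j, l] / k(u_l)
    W = ((A / k_usr) @ A.T) / k_obj

    # Redundancy-eliminating correction (assumed form): W' = W + a * W @ W
    W_corr = W + a * (W @ W)

    # Score every object for every user; the initial resource vector of each
    # user is simply the column of objects they have already collected.
    return W_corr @ A
```

For instance, after `scores = nbi_scores(A)`, setting `scores[A == 1] = -np.inf` and sorting each column in descending order yields the ranked recommendation list per user.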



Related research

This paper describes the application of statistical methods to political polling data in order to look for correlations and memory effects. We propose measures for quantifying political memory using the correlation function and scaling analysis. These methods reveal time correlations and self-affine scaling properties, respectively, and they have been applied to polling data from Norway. Power-law dependencies have been found between correlation measures and party size, and different scaling behaviour has been found for large and small parties.
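As a rough illustration of the correlation analysis this abstract refers to, a generic sample-autocorrelation estimator is sketched below; it is a standard estimator, not necessarily the exact measure used in the paper.

```python
import numpy as np

def autocorrelation(series, max_lag):
    """Sample autocorrelation C(t) of a time series (e.g., monthly poll
    numbers for one party). A standard estimator, assumed here:
    C(t) = <(x_s - <x>)(x_{s+t} - <x>)> / Var(x).
    """
    x = np.asarray(series, dtype=float) - np.mean(series)
    var = np.mean(x ** 2)
    return np.array([np.mean(x[:-t] * x[t:]) / var
                     for t in range(1, max_lag + 1)])
```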
Recommendation algorithms typically build models based on historical user-item interactions (e.g., clicks, likes, or ratings) to provide a personalized ranked list of items. These interactions are often distributed unevenly over different groups of items due to varying user preferences. However, we show that recommendation algorithms can inherit or even amplify this imbalanced distribution, leading to unfair recommendations to item groups. Concretely, we formalize the concepts of ranking-based statistical parity and equal opportunity as two measures of fairness in personalized ranking recommendation for item groups. Then, we empirically show that one of the most widely adopted algorithms -- Bayesian Personalized Ranking -- produces unfair recommendations, which motivates our effort to propose a novel fairness-aware personalized ranking model. The debiased model is able to improve the two proposed fairness metrics while preserving recommendation performance. Experiments on three public datasets show strong fairness improvement of the proposed model versus state-of-the-art alternatives. This paper is an extended and reorganized version of our SIGIR 2020 paper [zhu2020measuring]. Here we re-frame the studied problem as 'item recommendation fairness' in personalized ranking recommendation systems, and provide more details about the training process of the proposed model and the experimental setup.
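As a simplified reading of the group-fairness idea above (the paper's formal definitions of ranking-based statistical parity and equal opportunity differ in detail), one might compare how top-k recommendation slots are shared between two item groups:

```python
import numpy as np

def topk_exposure_gap(topk_items, item_group):
    """Absolute gap in top-k exposure between two item groups.

    topk_items : (n_users, k) array of recommended item ids per user.
    item_group : 1-D array mapping item id -> group id in {0, 1}.
    Returns |share_0 - share_1|, where share_g is group g's fraction of all
    recommendation slots; 0 means parity. A hypothetical, simplified
    stand-in for ranking-based statistical parity, not the paper's metric.
    """
    groups = np.asarray(item_group)[topk_items]   # group of each slot
    share1 = groups.mean()                        # fraction of slots in group 1
    return abs((1.0 - share1) - share1)
```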
Heterogeneity of both the source and target objects is taken into account in a network-based algorithm for the directional resource transformation between objects. Based on the biased heat conduction (BHC) recommendation method, which considers the heterogeneity of the target object, we propose a heterogeneous heat conduction (HHC) algorithm that further takes the source object's degree as the weight of diffusion. Tested on three real datasets, Netflix, RYM, and MovieLens, the HHC algorithm gives better recommendations, in both accuracy and personalization, than two excellent algorithms, namely the original BHC and a hybrid of heat conduction and mass diffusion (HHM), while requiring no additional information or parameters. Moreover, the HHC even improves recommendation accuracy on cold objects (the so-called cold-start problem), effectively relieving the recommendation bias on objects with different levels of popularity.
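From the abstract alone, the HHC recipe can be sketched as: take the heat-conduction correlation, bias it by the target object's degree (the BHC ingredient), and additionally weight the diffusion by the source object's degree (the HHC ingredient). The exponents `gamma` and `theta`, their defaults, and the exact functional form below are illustrative assumptions, not the paper's stated formula.

```python
import numpy as np

def hhc_scores(A, gamma=0.8, theta=1.0):
    """Heterogeneous heat conduction (HHC) scores, sketched from the abstract.

    A     : (n_objects, n_users) binary adjacency matrix.
    gamma : bias on the target object's degree (the BHC ingredient).
    theta : weight on the source object's degree (the HHC ingredient).
    Both exponents and this exact form are illustrative assumptions.
    """
    k_obj = A.sum(axis=1)                         # object degrees
    k_usr = A.sum(axis=0)                         # user degrees

    # Heat-conduction core: C[i, j] = sum_l A[i, l] * A[j, l] / k(u_l)
    C = (A / k_usr) @ A.T

    # Target-degree bias on rows, source-degree weight on columns.
    W = (k_obj ** theta) * C / (k_obj[:, None] ** gamma)

    return W @ A                                  # object-by-user score matrix
```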
Xin Qian, Ryan A. Rossi, Fan Du (2021)
Visualization recommendation work has focused solely on scoring visualizations based on the underlying dataset, not on the actual user and their past visualization feedback. These systems recommend the same visualizations for every user, even though the underlying user interests, intent, and visualization preferences are likely to be fundamentally different, yet vitally important. In this work, we formally introduce the problem of personalized visualization recommendation and present a generic learning framework for solving it. In particular, we focus on recommending visualizations personalized for each individual user based on their past visualization interactions (e.g., viewed, clicked, manually created) along with the data from those visualizations. More importantly, the framework can learn from visualizations relevant to other users, even if those visualizations were generated from completely different datasets. Experiments demonstrate the effectiveness of the approach, as it leads to higher-quality visualization recommendations tailored to the specific user's intent and preferences. To support research on this new problem, we release our user-centric visualization corpus consisting of 17.4k users exploring 94k datasets with 2.3 million attributes, and 32k user-generated visualizations.
We introduce Q-space, the tensor product of an index space with a primary space, to achieve a more general mathematical description of correlations in terms of q-tuples. Topics discussed include the decomposition of Q-space into a sum-variable (location) subspace S plus an orthogonal difference-variable subspace D, and a systematisation of q-tuple size estimation in terms of p-norms. The GHP sum prescription for q-tuple size emerges naturally as the 2-norm of difference-space vectors. Maximum- and minimum-size prescriptions are found to be special cases of a continuum of p-sizes.
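The continuum of size prescriptions mentioned above can be written out explicitly; the notation is our assumption, since the abstract does not fix one. For a q-tuple whose difference-space vector is d = (d_1, ..., d_m), a p-size via the p-norm would read:

```latex
% Assumed notation: p-size of a q-tuple as the p-norm of its
% difference-space vector d = (d_1, ..., d_m)
\mathrm{size}_p(d) = \Big(\sum_{i=1}^{m} |d_i|^{p}\Big)^{1/p}
% p = 2 recovers the GHP sum prescription; the maximum-size
% prescription appears as the p -> infinity limit:
\mathrm{size}_2(d) = \sqrt{\sum_{i=1}^{m} d_i^{2}},
\qquad
\lim_{p\to\infty}\mathrm{size}_p(d) = \max_i |d_i|
```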