
Projected Hamming Dissimilarity for Bit-Level Importance Coding in Collaborative Filtering

Added by Casper Hansen
Publication date: 2021
Language: English





When reasoning about tasks that involve large amounts of data, a common approach is to represent data items as objects in the Hamming space, where operations can be done efficiently and effectively. Object similarity can then be computed by learning binary representations (hash codes) of the objects and computing their Hamming distance. While this is highly efficient, each bit dimension is equally weighted, which means that potentially discriminative information of the data is lost. A more expressive alternative is to use real-valued vector representations and compute their inner product; this allows varying the weight of each dimension but is many orders of magnitude slower. To address this, we derive a new way of measuring the dissimilarity between two objects in the Hamming space with binary weighting of each dimension (i.e., disabling bits): we consider a field-agnostic dissimilarity that projects the vector of one object onto the vector of the other. When working in the Hamming space, this results in a novel projected Hamming dissimilarity, which, by choice of projection, effectively allows a binary importance weighting of the hash code of one object through the hash code of the other. We propose a variational hashing model for learning hash codes optimized for this projected Hamming dissimilarity, and experimentally evaluate it in collaborative filtering experiments. The resulting hash codes lead to effectiveness gains of up to +7% in NDCG and +14% in MRR compared to state-of-the-art hashing-based collaborative filtering baselines, while requiring no additional storage and no computational overhead compared to using the Hamming distance.
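The abstract does not give the exact definition of the projected Hamming dissimilarity, but the core idea it describes (one hash code acting as a binary importance weighting over the bits of the other, at the same bitwise cost as the Hamming distance) can be illustrated with a small Python sketch. The function names and the mask interpretation below are assumptions for illustration, not the paper's formula.

# Illustrative sketch only: hash codes are packed into Python ints; the exact
# definition of the projected Hamming dissimilarity is given in the paper.
def hamming_distance(a: int, b: int) -> int:
    # Standard Hamming distance: every bit dimension is weighted equally.
    return bin(a ^ b).count("1")

def masked_hamming_dissimilarity(user_code: int, item_code: int) -> int:
    # Assumed masked variant: only bits set to 1 in the user code contribute,
    # i.e. the user code acts as a binary importance weighting ("disabled bits")
    # of the item code. Cost is still just bitwise operations plus a popcount.
    return bin(user_code & (user_code ^ item_code)).count("1")

u, i = 0b10110100, 0b11010110
print(hamming_distance(u, i))              # 3: all differing bits count
print(masked_hamming_dissimilarity(u, i))  # 1: differing bits count only where u has a 1

Both functions use only bitwise operations and a popcount, which is consistent with the abstract's claim of no computational overhead compared to the plain Hamming distance.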


Related research

A growing proportion of human interactions are digitized on social media platforms and subjected to algorithmic decision-making, and it has become increasingly important to ensure fair treatment from these algorithms. In this work, we investigate gender bias in collaborative-filtering recommender systems trained on social media data. We develop neural fair collaborative filtering (NFCF), a practical framework for mitigating gender bias in recommending sensitive items (e.g. jobs, academic concentrations, or courses of study) using a pre-training and fine-tuning approach to neural collaborative filtering, augmented with bias correction techniques. We show the utility of our methods for gender de-biased career and college major recommendations on the MovieLens dataset and a Facebook dataset, respectively, and achieve better performance and fairer behavior than several state-of-the-art models.
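The NFCF abstract describes a pre-train/fine-tune recipe augmented with bias correction but does not detail the correction step. The sketch below shows one generic, hypothetical correction that fits such a recipe (removing the component of pre-trained user embeddings along an estimated gender direction before fine-tuning); it is not necessarily the technique used in NFCF.

import numpy as np

def debias_user_embeddings(user_emb: np.ndarray, gender_direction: np.ndarray) -> np.ndarray:
    # user_emb: (n_users, dim) embeddings pre-trained on non-sensitive items.
    # gender_direction: (dim,) direction estimated e.g. from the difference of
    # mean embeddings of two gender groups (an assumption for illustration).
    g = gender_direction / np.linalg.norm(gender_direction)
    # Remove each user's component along the gender direction before
    # fine-tuning on sensitive items (jobs, majors, courses of study).
    return user_emb - np.outer(user_emb @ g, g)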
Latent factor models play a dominant role among recommendation techniques. However, most existing latent factor models assume that both historical interactions and embedding dimensions are independent of each other, and thus regrettably ignore the high-order interaction information among historical interactions and embedding dimensions. In this paper, we propose a novel latent factor model called COMET (COnvolutional diMEnsion inTeraction), which simultaneously models the high-order interaction patterns among historical interactions and embedding dimensions. To be specific, COMET first stacks the embeddings of historical interactions horizontally, which results in two embedding maps. In this way, internal interactions and dimensional interactions can be exploited simultaneously by convolutional neural networks with kernels of different sizes. A fully connected multi-layer perceptron is then applied to obtain two interaction vectors. Lastly, the representations of users and items are enriched by the learnt interaction vectors, which can further be used to produce the final prediction. Extensive experiments and ablation studies on various public implicit feedback datasets clearly demonstrate the effectiveness and rationality of our proposed method.
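As a rough illustration of applying convolutions with kernels of different sizes over a stacked embedding map, here is a minimal PyTorch sketch; the dimensions, kernel sizes, and layer choices are assumptions for illustration, not the COMET architecture.

import torch
import torch.nn as nn

n_hist, dim = 10, 32                        # 10 historical interactions, 32-d embeddings (assumed)
emb_map = torch.randn(1, 1, n_hist, dim)    # one stacked embedding map: (batch, channel, rows, cols)

convs = nn.ModuleList([nn.Conv2d(1, 4, kernel_size=k) for k in (2, 3, 4)])
features = [torch.relu(conv(emb_map)).flatten(1) for conv in convs]   # kernels of different sizes
mlp = nn.Linear(sum(f.shape[1] for f in features), dim)               # stand-in for the MLP
interaction_vector = mlp(torch.cat(features, dim=1))                  # used to enrich user/item representations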
In recent years, text-aware collaborative filtering methods have been proposed to address essential challenges in recommendation such as data sparsity, the cold-start problem, and long-tail distributions. However, many of these text-oriented methods rely heavily on the availability of text information for every user and item, which obviously does not hold in real-world scenarios. Furthermore, specially designed network structures for text processing are highly inefficient for online serving and are hard to integrate into current systems. In this paper, we propose a flexible neural recommendation framework named Review Regularized Recommendation (R3 for short). It consists of a neural collaborative filtering part that focuses on prediction output, and a text processing part that serves as a regularizer. This modular design incorporates text information as a richer data source in the training phase while being highly friendly for online serving, as it needs no on-the-fly text processing at serving time. Our preliminary results show that, by using a simple text processing approach, it can achieve better prediction performance than state-of-the-art text-aware methods.
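A minimal sketch of the modular "CF predictor plus text regularizer" objective described above, with assumed tensor names and an MSE-based regularizer; the actual R3 losses may differ. Because the text term only shapes training, serving needs no on-the-fly text processing.

import torch
import torch.nn.functional as F

def r3_objective(rating_pred, rating_true, item_emb, review_text_emb, lam=0.1):
    pred_loss = F.mse_loss(rating_pred, rating_true)            # neural CF prediction loss
    text_reg = F.mse_loss(item_emb, review_text_emb.detach())   # pull item embeddings toward text-derived embeddings
    return pred_loss + lam * text_reg                           # lam balances prediction vs. text regularization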
The item cold-start problem seriously limits the recommendation performance of Collaborative Filtering (CF) methods when new items have either no interactions or very few. To solve this issue, many modern Internet applications propose to predict a new item's interactions from its contents. However, it is difficult to design and learn a map between an item's interaction history and the corresponding contents. In this paper, we apply the Wasserstein distance to address the item cold-start problem. Given item content information, we can calculate the similarity between interacted items and cold-start ones, so that a user's preference on cold-start items can be inferred by minimizing the Wasserstein distance between the distributions over these two types of items. We further adopt the idea of CF and propose Wasserstein CF (WCF) to improve the recommendation performance on cold-start items. Experimental results demonstrate the superiority of WCF over state-of-the-art approaches.
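The idea of inferring a preference for a cold-start item by comparing content distributions can be sketched with SciPy's one-dimensional Wasserstein distance; the feature construction and scoring below are illustrative assumptions, not the WCF model itself.

import numpy as np
from scipy.stats import wasserstein_distance

def cold_start_score(interacted_item_features: np.ndarray, cold_item_features: np.ndarray) -> float:
    # interacted_item_features: (n_items, n_features) content features of items the user interacted with.
    # cold_item_features: (n_features,) content features of the cold-start item.
    dists = [wasserstein_distance(f, cold_item_features) for f in interacted_item_features]
    # Smaller average distance -> more similar content -> higher inferred preference.
    return -float(np.mean(dists))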
With the increasing and extensive use of electronic health records, clinicians are often under time pressure when they need to retrieve important information efficiently from large amounts of patient health records in clinics. While a search function can be a useful alternative to browsing through a patient's record, it is cumbersome for clinicians to search repeatedly for the same or similar information on similar patients. Under such circumstances, there is a critical need to build effective recommender systems that can generate accurate search term recommendations for clinicians. In this manuscript, we developed a hybrid collaborative filtering model that uses patients' encounter and search term information to recommend the next search terms for clinicians, so that important information can be retrieved quickly in clinics. For each patient, the model recommends terms that either have high co-occurrence frequencies with his/her most recent ICD codes or are highly relevant to the most recent search terms on this patient. We have conducted comprehensive experiments to evaluate the proposed model, and the experimental results demonstrate that our model outperforms all state-of-the-art baseline methods for top-N search term recommendation on different datasets.
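The co-occurrence half of the hybrid model described above can be sketched as follows; the data structures and names are assumptions for illustration, and the relevance-to-recent-search-terms component is omitted.

from collections import Counter

def recommend_search_terms(cooccurrence, recent_icd_codes, top_n=5):
    # cooccurrence: dict mapping an ICD code to a Counter of search terms that
    # historically co-occurred with it (built offline from encounter logs).
    scores = Counter()
    for code in recent_icd_codes:
        scores.update(cooccurrence.get(code, Counter()))
    # Return the top-N terms by accumulated co-occurrence count.
    return [term for term, _ in scores.most_common(top_n)]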
