We propose to formulate multi-label learning as the estimation of class distributions in a non-linear embedding space, where for each label its positive and negative data embeddings distribute compactly, forming a positive component and a negative component respectively, while the two components are pushed apart from each other. Due to the embedding space being shared across all labels, the distribution of embeddings preserves instances' label memberships and the feature matrix, and thus encodes the feature-label relation and non-linear label dependencies. The labels of a given instance are inferred in the embedding space by measuring the probability of its belonging to the positive or negative component of each label. Specifically, these probabilities are modeled via the distance from the given instance to representative positive or negative prototypes. Extensive experiments validate that the proposed solution provides distinctly more accurate multi-label classification than other state-of-the-art algorithms.
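The inference step described in this abstract can be sketched minimally as follows. This is only an illustration of prototype-distance decision making, not the paper's implementation: the function and variable names are hypothetical, and the actual method learns the embedding map and prototypes jointly, whereas here both are assumed given.

```python
import numpy as np

def infer_labels(z, pos_protos, neg_protos):
    """Prototype-based multi-label inference sketch.

    z          : (d,) embedding of a test instance, assumed already
                 mapped into the shared non-linear embedding space.
    pos_protos : (L, d) one representative positive prototype per label.
    neg_protos : (L, d) one representative negative prototype per label.

    Returns a boolean (L,) vector: label l is predicted relevant when
    z lies closer to that label's positive prototype than to its
    negative prototype (i.e., higher modeled membership probability).
    """
    d_pos = np.linalg.norm(pos_protos - z, axis=1)
    d_neg = np.linalg.norm(neg_protos - z, axis=1)
    return d_pos < d_neg
```

With multiple prototypes per component, the same idea extends by taking the minimum distance over each label's prototype set.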
Multi-label classification (MLC) studies the problem where each instance is associated with multiple relevant labels, which leads to exponential growth of the output space. This has encouraged a popular framework named label compression (LC) for capturing
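A classic instance of the label compression idea (in the style of principal label space transformation) can be sketched as below. This is a hedged illustration, not the specific LC method of the abstract: labels are projected onto their top-k principal directions, a regressor would then be trained to predict the k-dimensional codes, and predictions are decoded by rounding.

```python
import numpy as np

def compress_labels(Y, k):
    """Compress a binary label matrix Y (n x L) into k-dimensional codes
    by projecting onto the top-k right singular vectors of centered Y."""
    mean = Y.mean(axis=0)
    Yc = Y - mean
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    V = Vt[:k].T              # (L, k) shared encoding/decoding matrix
    codes = Yc @ V            # (n, k) compressed label codes
    return codes, V, mean

def decompress(codes, V, mean, threshold=0.5):
    """Map (predicted) codes back to binary label vectors by
    reconstructing real values and rounding at the threshold."""
    return (codes @ V.T + mean >= threshold).astype(int)
```

When the labels are strongly correlated, a small k loses little information, which is exactly what LC exploits to shrink the output space.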
Embedding approaches have become one of the most pervasive techniques for multi-label classification. However, the training process of embedding methods usually involves a complex quadratic or semidefinite programming problem, or the model may even i
Partial multi-label learning (PML) models the scenario where each training instance is annotated with a set of candidate labels, and only some of the labels are relevant. The PML problem is practical in real-world scenarios, as it is difficult and ev
Multi-typed objects Multi-view Multi-instance Multi-label Learning (M4L) deals with interconnected multi-typed objects (or bags) that are made of diverse instances, represented with heterogeneous feature views and annotated with a set of non-exclusiv
Label space expansion for multi-label classification (MLC) is a methodology that encodes the original label vectors to higher dimensional codes before training and decodes the predicted codes back to the label vectors during testing. The methodology
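The encode/decode pipeline described in this abstract can be sketched as below. This is a hypothetical illustration using a systematic random parity code, not the specific code the methodology employs: original label bits are kept and redundant parity bits are appended, and a predicted (possibly noisy) code is decoded back by nearest-codeword search.

```python
import numpy as np

def make_codebook(num_labels, redundancy, seed=0):
    """Systematic encoding matrix over GF(2): an identity block keeps
    the original label bits, random parity columns add redundancy, so
    codes have length num_labels + redundancy."""
    rng = np.random.default_rng(seed)
    parity = rng.integers(0, 2, size=(num_labels, redundancy))
    return np.hstack([np.eye(num_labels, dtype=int), parity])

def encode(y, G):
    """Expand a binary label vector into a higher-dimensional code."""
    return (y @ G) % 2

def decode(c_pred, G, candidates):
    """Nearest-codeword decoding by Hamming distance over an
    enumerated candidate label set (feasible only for small L)."""
    codes = (candidates @ G) % 2
    dists = np.abs(codes - c_pred).sum(axis=1)
    return candidates[np.argmin(dists)]
```

The redundancy is what lets decoding tolerate some bit errors in the predicted codes; practical methods replace the brute-force candidate search with a proper decoder.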