
Collective decision for open set recognition

Added by Chuanxing Geng
Publication date: 2018
Language: English





In open set recognition (OSR), almost all existing methods are designed specifically for recognizing individual instances, even when these instances arrive collectively in a batch. At decision time, such recognizers either reject each instance or categorize it into some known class using an empirically set threshold, so the decision threshold plays a key role. However, the threshold is usually selected based on knowledge of the known classes alone, which inevitably incurs risk because no information from the unknown classes is available. On the other hand, a more realistic OSR system should not simply rest on a reject decision but should go further, in particular by discovering the hidden unknown classes among the rejected instances, a problem to which existing OSR methods pay little attention. In this paper, we introduce a novel collective/batch decision strategy that aims to extend existing OSR to new class discovery while considering the correlations among the testing instances. Specifically, a collective decision-based OSR framework (CD-OSR) is proposed by slightly modifying the hierarchical Dirichlet process (HDP). Thanks to the HDP, CD-OSR does not need a predefined decision threshold and can perform open set recognition and new class discovery simultaneously. Finally, extensive experiments on benchmark datasets demonstrate the validity of CD-OSR.
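
The collective decision can be illustrated with a small sketch. The snippet below is not the authors' CD-OSR implementation: it approximates the HDP co-clustering step with scikit-learn's Dirichlet-process Gaussian mixture, and the function name collective_decision and all parameter values are hypothetical. Training features and an unlabeled test batch are clustered jointly, so correlations among test instances influence the grouping and no per-instance rejection threshold is needed:

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def collective_decision(X_train, y_train, X_test, max_components=20, seed=0):
    # Co-cluster the labeled training features and the unlabeled test batch with a
    # truncated Dirichlet-process Gaussian mixture (a stand-in for the modified HDP).
    X_all = np.vstack([X_train, X_test])
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        random_state=seed,
    ).fit(X_all)
    z_train = dpgmm.predict(X_train)
    z_test = dpgmm.predict(X_test)
    # Each mixture component inherits the majority label of its training members;
    # components that contain no training samples represent candidate unknown classes.
    comp_to_label = {}
    for c in np.unique(z_train):
        labels, counts = np.unique(y_train[z_train == c], return_counts=True)
        comp_to_label[c] = labels[np.argmax(counts)]
    return [comp_to_label.get(c, f"unknown_cluster_{c}") for c in z_test]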



Related research

In real-world recognition/classification tasks, limited by various objective factors, it is usually difficult to collect training samples that exhaust all classes when training a recognizer or classifier. A more realistic scenario is open set recognition (OSR), where incomplete knowledge of the world exists at training time and unknown classes can be submitted to the algorithm during testing, requiring classifiers not only to accurately classify the seen classes but also to deal effectively with the unseen ones. This paper provides a comprehensive survey of existing open set recognition techniques covering various aspects, including related definitions, representations of models, datasets, evaluation criteria, and algorithm comparisons. Furthermore, we briefly analyze the relationships between OSR and related tasks, including zero-shot and one-shot (few-shot) recognition/learning techniques, classification with a reject option, and so forth. Additionally, we review open world recognition, which can be seen as a natural extension of OSR. Importantly, we highlight the limitations of existing approaches and point out promising subsequent research directions in this field.
In open set learning, a model must be able to generalize to novel classes when it encounters a sample that does not belong to any of the classes it has seen before. Open set learning poses a realistic learning scenario that is receiving growing attention. Existing studies mainly focus on detecting novel classes, but few try to model them so that different novel classes can be distinguished from one another. In this paper, we recognize that novel classes should differ from each other, and we propose distribution networks for open set learning that model different novel classes with probability distributions. We hypothesize that, through a suitable mapping, samples from different classes under the same classification criterion should follow different probability distributions from the same distribution family. A deep neural network is learned to map samples from the original feature space to a latent space, where the distributions of the known classes are learned jointly with the network. We additionally propose a distribution parameter transfer and updating strategy for modeling a novel class once it is detected in the latent space; through this novel class modeling, detected novel classes can serve as known classes for subsequent classification. Our experimental results on the image datasets MNIST and CIFAR10 show that distribution networks detect novel classes accurately and model them well for subsequent classification tasks.
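
As a rough illustration of this decision rule (not the paper's implementation), the sketch below fits a class-conditional Gaussian to each known class in a latent space produced by some trained encoder, flags a sample as novel when its log-likelihood is low under every known class, and then fits a new Gaussian to the detected novel samples so that the new class can be recognized afterwards. The function names and the threshold value are hypothetical:

import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(Z, y, eps=1e-6):
    # One Gaussian per known class, estimated from latent embeddings Z with labels y.
    dists = {}
    for c in np.unique(y):
        Zc = Z[y == c]
        cov = np.cov(Zc, rowvar=False) + eps * np.eye(Zc.shape[1])
        dists[c] = multivariate_normal(Zc.mean(axis=0), cov)
    return dists

def classify_or_flag(z, dists, log_lik_threshold=-50.0):
    # Assign to the most likely known class, or flag as novel if every class is unlikely.
    scores = {c: d.logpdf(z) for c, d in dists.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > log_lik_threshold else "novel"

def absorb_novel_class(dists, Z_novel, name="novel_0", eps=1e-6):
    # Model the detected novel samples with their own Gaussian so that the new class
    # behaves like a known class for subsequent test samples.
    cov = np.cov(Z_novel, rowvar=False) + eps * np.eye(Z_novel.shape[1])
    dists[name] = multivariate_normal(Z_novel.mean(axis=0), cov)
    return dists
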
Open set recognition (OSR), aiming to simultaneously classify the seen classes and identify the unseen classes as unknown, is essential for reliable machine learning. The key challenge of OSR is how to simultaneously reduce the empirical classification risk on the labeled known data and the open space risk on the potential unknown data. To handle this challenge, we formulate the open space risk problem from the perspective of multi-class integration and model the unexploited extra-class space with a novel concept, the reciprocal point. Following this, a novel learning framework, termed Adversarial Reciprocal Point Learning (ARPL), is proposed to minimize the overlap between the known and unknown distributions without loss of known-class classification accuracy. Specifically, each reciprocal point is learned from the extra-class space of the corresponding known category, and the confrontation among multiple known categories is employed to reduce the empirical classification risk. Then, an adversarial margin constraint is proposed to reduce the open space risk by limiting the latent open space constructed by the reciprocal points. To further estimate the unknown distribution from the open space, an instantiated adversarial enhancement method is designed to generate diverse and confusing training samples, based on the adversarial mechanism between the reciprocal points and the known classes. This effectively enhances the model's ability to distinguish the unknown classes. Extensive experimental results on various benchmark datasets indicate that the proposed method is significantly superior to existing approaches and achieves state-of-the-art performance.
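
A minimal PyTorch sketch of the reciprocal-point idea follows; it is written from the abstract rather than from the official ARPL code, the class name, learnable radius, and loss weight are assumptions, and the adversarial sample generation step is omitted. Each known class owns a learnable reciprocal point representing its extra-class space: a feature is scored by its squared distance to each reciprocal point (being far from a class's reciprocal point means the feature likely belongs to that class), and a margin term keeps the own-class distance close to a learnable radius to bound the open space:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReciprocalPointHead(nn.Module):
    def __init__(self, feat_dim, num_classes, margin_weight=0.1):
        super().__init__()
        self.points = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.1)  # reciprocal points
        self.radius = nn.Parameter(torch.tensor(1.0))                         # margin radius R
        self.margin_weight = margin_weight

    def forward(self, feats, labels=None):
        # Squared Euclidean distance from each feature to every reciprocal point.
        dist = torch.cdist(feats, self.points) ** 2       # (batch, num_classes)
        logits = dist                                      # larger distance => higher class score
        if labels is None:
            return logits
        cls_loss = F.cross_entropy(logits, labels)
        # Margin constraint: keep the distance to the own-class reciprocal point near R,
        # which limits the latent open space spanned by the reciprocal points.
        d_own = dist.gather(1, labels.unsqueeze(1)).squeeze(1)
        margin_loss = ((d_own - self.radius) ** 2).mean()
        return logits, cls_loss + self.margin_weight * margin_loss
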
Open set recognition is designed to identify known classes and to reject unknown classes simultaneously; identifying known classes and rejecting unknown classes correspond to reducing the empirical risk and the open space risk, respectively. First, the motorial prototype framework (MPF) is proposed, which classifies known classes according to the prototype classification idea. Moreover, a motorial margin constraint term is added to the loss function of the MPF, which further improves the clustering compactness of the known classes in the feature space and thereby reduces both risks. Second, this paper proposes the adversarial motorial prototype framework (AMPF), based on the MPF. On the one hand, the model generates adversarial samples and adds them to the training phase; on the other hand, it further improves the model's ability to map known and unknown classes differently through the adversarial motion of the margin constraint radius. Finally, this paper proposes an upgraded version of the AMPF, AMPF++, which adds many more generated unknown samples to the training phase. Extensive experiments show that the performance of the proposed models is superior to that of other current works.
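
For contrast with the reciprocal-point sketch above, a prototype-based counterpart in the spirit of the MPF is sketched below, again written only from the abstract; the class name and the hinge-style margin are assumptions, and the adversarial sample generation of AMPF/AMPF++ is omitted. Here features are pulled toward their own-class prototype (negative distance serves as the logit), and a learnable radius penalizes own-class features that drift outside it, tightening the known-class clusters:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    def __init__(self, feat_dim, num_classes, margin_weight=0.1):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.1)
        self.radius = nn.Parameter(torch.tensor(1.0))      # margin constraint radius
        self.margin_weight = margin_weight

    def forward(self, feats, labels=None):
        dist = torch.cdist(feats, self.prototypes) ** 2    # (batch, num_classes)
        logits = -dist                                      # near the prototype => high class score
        if labels is None:
            return logits
        cls_loss = F.cross_entropy(logits, labels)
        d_own = dist.gather(1, labels.unsqueeze(1)).squeeze(1)
        # Hinge-style margin: penalize own-class distances that exceed the learnable radius,
        # so known classes stay compact in the feature space.
        margin_loss = F.relu(d_own - self.radius).mean()
        return logits, cls_loss + self.margin_weight * margin_loss
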
Zhen Fang, Jie Lu, Anjin Liu (2021)
Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. In this paper, we target a more challenging and realistic setting: open-set learning (OSL), in which test samples may come from classes unseen during training. Although researchers have designed many methods from the algorithmic perspective, few provide generalization guarantees on their ability to achieve consistent performance across different training samples drawn from the same distribution. Motivated by transfer learning and probably approximately correct (PAC) theory, we make a bold attempt to study OSL by bounding its generalization error: given training samples of size n, the estimation error approaches order O_p(1/sqrt(n)). This is the first study to provide a generalization bound for OSL, which we do by theoretically investigating the risk of the target classifier on unknown classes. Guided by this theory, a novel algorithm, called auxiliary open-set risk (AOSR), is proposed to address the OSL problem. Experiments verify the efficacy of AOSR. The code is available at github.com/Anjin-Liu/Openset_Learning_AOSR.
