Few-shot classification is a challenging task that aims to emulate the human ability to learn new concepts from very limited prior data, and it has drawn considerable attention in machine learning. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model of a learning algorithm is defined and trained over an extremely large or effectively infinite set of episodes, each representing a different classification task with a small labeled support set and a corresponding query set. In this work, we advance this few-shot classification paradigm by formulating it as a standard supervised classification learning problem. We further propose multi-episode and cross-way training techniques, which correspond respectively to minibatch training and pretraining in conventional classification. Experimental results with a state-of-the-art few-shot classification method (prototypical networks) demonstrate that both proposed training strategies substantially accelerate training without loss of accuracy across a range of few-shot classification problems on Omniglot and miniImageNet.
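To make the episodic setup and the multi-episode idea concrete, the following is a minimal sketch (not the authors' code) of prototypical-network training in which several episodes are sampled and their losses averaged before a single parameter update, by analogy with a minibatch in ordinary supervised classification. The encoder architecture, dimensions, and helper names are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of multi-episode training for prototypical networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy feature extractor standing in for the paper's embedding network."""
    def __init__(self, in_dim=784, hid=64, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, out_dim))

    def forward(self, x):
        return self.net(x)

def episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    """Prototypical-network loss for one N-way episode.

    support_x: (n_way * k_shot, in_dim), support_y with labels in [0, n_way)
    query_x:   (n_query, in_dim),        query_y with labels in [0, n_way)
    """
    z_support = encoder(support_x)          # embed the labeled support set
    z_query = encoder(query_x)              # embed the query set
    # Class prototype = mean embedding of that class's support examples.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_way)])
    # Negative squared Euclidean distance to each prototype acts as the logit.
    logits = -torch.cdist(z_query, prototypes) ** 2
    return F.cross_entropy(logits, query_y)

def multi_episode_step(encoder, optimizer, episodes):
    """One parameter update over a batch of episodes (multi-episode training)."""
    optimizer.zero_grad()
    loss = torch.stack([episode_loss(encoder, *ep) for ep in episodes]).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Averaging over several episodes per update mirrors how minibatches reduce gradient variance in supervised learning; the single-episode case is recovered by passing a list with one episode.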
A meta-model is trained on a distribution of similar tasks such that it learns an algorithm that can quickly adapt to a novel task with only a handful of labeled examples. Most current meta-learning methods assume that the meta-training set consis
Recent few-shot learning works focus on training a model with prior meta-knowledge to fast adapt to new tasks with unseen classes and samples. However, conventional time-series classification algorithms fail to tackle the few-shot scenario. Existing
Recent algorithms with state-of-the-art few-shot classification results start their procedure by computing data features output by a large pretrained model. In this paper we systematically investigate which models provide the best representations for
This paper investigates the effectiveness of pre-training for few-shot intent classification. While existing paradigms commonly further pre-train language models such as BERT on a vast amount of unlabeled corpus, we find it highly effective and effic
This paper studies few-shot relation extraction, which aims at predicting the relation for a pair of entities in a sentence by training with a few labeled examples in each relation. To more effectively generalize to new relations, in this paper we st