
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples

Published by Pascal Lamblin
Publication date: 2019
Paper language: English





Few-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle it, we find the procedure and datasets that are used to assess their progress lacking. To address this limitation, we propose Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks. We experiment with popular baselines and meta-learners on Meta-Dataset, along with a competitive method that we propose. We analyze performance as a function of various characteristics of test tasks and examine the models' ability to leverage diverse training sources for improving their generalization. We also propose a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset. Our extensive experimentation has uncovered important research challenges, and we hope to inspire work in these directions.
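To make the episodic task format concrete, the following is a minimal sketch of how a few-shot episode (a support set for adaptation plus a query set for evaluation) is typically sampled. It is an illustration under simplifying assumptions: the real Meta-Dataset sampler varies the number of classes and shots per episode and draws class-imbalanced support sets, which this sketch omits.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, n_query=10, rng=random):
    """Sample one few-shot episode from a list of (example, label) pairs.

    Assumes the dataset has at least n_way classes, each with at least
    k_shot + n_query examples. Classes are relabeled 0..n_way-1 so the
    episode looks like a fresh classification problem.
    """
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for new_label, c in enumerate(classes):
        pool = rng.sample(by_class[c], k_shot + n_query)
        support += [(x, new_label) for x in pool[:k_shot]]
        query += [(x, new_label) for x in pool[k_shot:]]
    return support, query
```

A learner adapts on `support` and is scored on `query`; averaging this score over many sampled episodes gives the few-shot evaluation metric.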




See also

We present a visualization tool to exhaustively search and browse through a set of large-scale machine learning datasets. Built on top of the VizWiz dataset, our dataset browser tool has the potential to support and enable a variety of qualitative and quantitative research, and open new directions for visualizing and researching with multimodal information. The tool is publicly available at https://vizwiz.org/browse.
Hierarchical Bayesian methods can unify many related tasks (e.g. k-shot classification, conditional and unconditional generation) as inference within a single generative model. However, when this generative model is expressed as a powerful neural network such as a PixelCNN, we show that existing learning techniques typically fail to effectively use latent variables. To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class. This technique, which we call a Variational Homoencoder (VHE), produces a hierarchical latent variable model which better utilises latent variables. We use the VHE framework to learn a hierarchical PixelCNN on the Omniglot dataset, which outperforms all existing models on test set likelihood and achieves strong performance on one-shot generation and classification tasks. We additionally validate the VHE on natural images from the YouTube Faces database. Finally, we develop extensions of the model that apply to richer dataset structures such as factorial and hierarchical categories.
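To illustrate the VHE training signal described above (reconstructing a different member of the same class from an encoded one, so the latent must carry class-level content), here is a hedged PyTorch sketch. The MLP encoder/decoder and the Gaussian-style reconstruction loss are stand-ins for the paper's PixelCNN, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalHomoencoder(nn.Module):
    """Toy VHE: encode one class member, decode a *different* member
    of the same class. MLPs and MSE loss are illustrative stand-ins
    for the paper's PixelCNN likelihood."""

    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x_support, x_target):
        h = self.enc(x_support)                     # encode one class member
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.dec(z)                         # decode a different member
        recon = F.mse_loss(x_hat, x_target, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl                           # negative ELBO (up to consts)
```

Because `x_target` is never the encoded example, the latent cannot simply memorize pixels; it is pushed toward the shared, class-level factors.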
Few-shot dataset generalization is a challenging variant of the well-studied few-shot classification problem where a diverse training set of several datasets is given, for the purpose of training an adaptable model that can then learn classes from new datasets using only a few examples. To this end, we propose to utilize the diverse training set to construct a universal template: a partial model that can define a wide array of dataset-specialized models, by plugging in appropriate components. For each new few-shot classification problem, our approach therefore only requires inferring a small number of parameters to insert into the universal template. We design a separate network that produces an initialization of those parameters for each given task, and we then fine-tune its proposed initialization via a few steps of gradient descent. Our approach is more parameter-efficient, scalable and adaptable compared to previous methods, and achieves state-of-the-art performance on the challenging Meta-Dataset benchmark.
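A hedged sketch of the "universal template" idea from the abstract above: a shared block plus a small set of per-task parameters, which a separate network initializes and a few gradient steps then refine. The FiLM-style scale/shift parametrization, the mean-pooled task encoder, and the placeholder loss are our assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FiLMBlock(nn.Module):
    """Shared 'template' weights modulated by small per-task
    parameters (assumed here to be FiLM scale/shift vectors)."""

    def __init__(self, dim):
        super().__init__()
        self.shared = nn.Linear(dim, dim)        # dataset-generic weights

    def forward(self, x, gamma, beta):
        return torch.relu(self.shared(x) * gamma + beta)

dim = 64
block = FiLMBlock(dim)
task_encoder = nn.Linear(dim, 2 * dim)           # hypothetical initializer net

support = torch.randn(25, dim)                   # toy embedded support set
init = task_encoder(support.mean(dim=0))         # task summary -> proposed init
gamma = init[:dim].detach().clone().requires_grad_(True)
beta = init[dim:].detach().clone().requires_grad_(True)

opt = torch.optim.SGD([gamma, beta], lr=0.1)     # only task params are tuned
for _ in range(5):
    out = block(support, gamma, beta)
    loss = out.pow(2).mean()                     # placeholder; in practice a
    opt.zero_grad()                              # classification loss on the
    loss.backward()                              # support set
    opt.step()
```

The point of the design is parameter efficiency: per task, only `gamma` and `beta` (128 numbers here) are inferred and fine-tuned, while the template stays fixed.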
Meta-Learning (ML) has proven to be a useful tool for training Few-Shot Learning (FSL) algorithms by exposure to batches of tasks sampled from a meta-dataset. However, the standard training procedure overlooks the dynamic nature of the real world, where object classes are likely to occur at different frequencies. While it is generally understood that imbalanced tasks harm the performance of supervised methods, there is no significant research examining the impact of imbalanced meta-datasets on the FSL evaluation task. This study exposes the magnitude and extent of this problem. Our results show that ML methods are more robust against meta-dataset imbalance than against task-level imbalance with a similar imbalance ratio ($\rho < 20$), with the effect holding even in long-tail datasets under a larger imbalance ($\rho = 65$). Overall, these results highlight an implicit strength of ML algorithms: they are capable of learning generalizable features under dataset imbalance and domain shift. The code to reproduce the experiments is released under an open-source license.
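For reference, the imbalance ratio used above is simply the largest class size divided by the smallest, so a balanced task has $\rho = 1$. A minimal Python illustration:

```python
from collections import Counter

def imbalance_ratio(labels):
    """rho = (largest class size) / (smallest class size).
    rho = 1 means perfectly balanced labels."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

print(imbalance_ratio([0] * 65 + [1] * 1))   # long-tail task: rho = 65.0
```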
This paper presents a meta-learning framework for few-shot One-Class Classification (OCC) at test time, a setting where labeled examples are only available for the positive class and no supervision is given for the negative class. We consider that we have a set of "one-class classification" objective tasks with only a small set of positive examples available for each task, and a set of training tasks with full supervision (i.e. highly imbalanced classification). We propose an approach using order-equivariant networks to learn a meta binary classifier. The model takes as input an example to classify from a given task, as well as the corresponding supervised set of positive examples for this OCC task. The output of the model is thus conditioned on the available positive examples of a given task, allowing it to predict on new tasks and new examples without labeled negative examples. In this paper, we are motivated by an astronomy application: our goal is to identify whether stars belong to a specific stellar group (the one class for a given task), called a stellar stream, where each stellar stream is a different OCC task. We show that our method transfers well to unseen (test) synthetic streams and outperforms the baselines, even though it is not retrained and accesses a much smaller part of the data per task to predict (only positive supervision). We see, however, that it does not transfer as well to the real stream GD-1. This could come from intrinsic differences between the synthetic and real streams, highlighting the need for consistency in the nature of the task for this method. However, light fine-tuning improves performance and outperforms our baselines. Our experiments show encouraging results that motivate further exploration of meta-learning methods for OCC tasks.
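To illustrate conditioning a prediction on a positive-only support set, here is a hedged PyTorch sketch that uses a Deep Sets-style permutation-invariant summary as a stand-in for the paper's order-equivariant architecture; the dimensions, mean pooling, and feature format are our assumptions.

```python
import torch
import torch.nn as nn

class SetConditionedOCC(nn.Module):
    """Toy meta one-class classifier: the score for a query is
    conditioned on a set of positive examples via a permutation-
    invariant summary (a stand-in for the paper's order-equivariant
    network). No negative labels are needed at prediction time."""

    def __init__(self, x_dim=16, h_dim=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(2 * h_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 1))

    def forward(self, query, positives):
        # Mean-pool the embedded positive set: order does not matter.
        context = self.embed(positives).mean(dim=0)
        q = self.embed(query)
        logits = self.head(torch.cat([q, context.expand_as(q)], dim=-1))
        return logits.squeeze(-1)    # > 0 means "belongs to the class"

model = SetConditionedOCC()
queries = torch.randn(8, 16)         # stars to classify (toy features)
positives = torch.randn(20, 16)      # known members of one stream
scores = model(queries, positives)   # conditioned on positives only
```

Training would sample many fully supervised tasks and optimize a binary cross-entropy loss on queries; at test time, a new task only has to supply its positive set.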
