Deep neural networks have been shown to overfit easily to biased training data with label noise or class imbalance. Meta-learning algorithms commonly alleviate this issue through sample reweighting, by learning a meta weighting network that takes training losses as inputs and generates sample weights. In this paper, we argue that choosing proper inputs for the meta weighting network is crucial for obtaining the desired sample weights in a specific task, and that the training loss is not always the right choice. In view of this, we propose a novel meta-learning algorithm, MetaInfoNet, which automatically learns effective representations as inputs for the meta weighting network by emphasizing task-related information with an information bottleneck strategy. Extensive experiments on benchmark datasets with label noise or class imbalance show that MetaInfoNet outperforms many state-of-the-art methods.
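To make the sample-reweighting setup concrete, below is a minimal PyTorch sketch of a meta weighting network that maps per-sample training losses to weights in (0, 1) and uses them to reweight the classification loss. The layer sizes, the class name MetaWeightNet, and the use of raw losses as inputs are illustrative assumptions; this does not reproduce MetaInfoNet's learned representations, its information-bottleneck objective, or the full bi-level optimization.

```python
# Minimal sketch of loss-based sample reweighting with a meta weighting network.
# Architecture and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaWeightNet(nn.Module):
    """Maps a per-sample training loss to a weight in (0, 1)."""
    def __init__(self, in_dim: int = 1, hidden: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def reweighted_loss(logits, targets, weight_net):
    # Per-sample cross-entropy losses are fed to the meta weighting network.
    losses = F.cross_entropy(logits, targets, reduction="none")    # shape (B,)
    weights = weight_net(losses.detach().unsqueeze(1)).squeeze(1)  # shape (B,)
    return (weights * losses).mean()

# Toy usage: in practice the classifier and the weight net are updated in an
# alternating bi-level loop, with the weight net tuned on a small clean
# validation set.
classifier = nn.Linear(32, 10)   # stand-in for the main model
weight_net = MetaWeightNet()
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
loss = reweighted_loss(classifier(x), y, weight_net)
loss.backward()
```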
Supervised learning requires a large amount of training data, limiting its application where labeled data is scarce. To compensate for data scarcity, one possible method is to utilize auxiliary tasks to provide additional supervision for the main task…
While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged…
Multi-task learning (MTL) optimizes several learning tasks simultaneously and leverages their shared information to improve generalization and the model's predictions for each task. Auxiliary tasks can be added to the main task to ultimately boost…
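As a reference for the shared-information setup described above, here is a minimal PyTorch sketch of hard parameter sharing: one shared encoder feeds a main-task head and an auxiliary-task head, and the two losses are combined with a trade-off coefficient. The dimensions, head structure, and aux_weight value are assumptions for illustration, not taken from any of the listed papers.

```python
# Minimal sketch of multi-task learning with an auxiliary task via a shared encoder.
# All sizes and the aux_weight coefficient are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedMTLModel(nn.Module):
    """One shared encoder with a main-task head and an auxiliary-task head."""
    def __init__(self, in_dim=64, hidden=128, main_classes=10, aux_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.main_head = nn.Linear(hidden, main_classes)
        self.aux_head = nn.Linear(hidden, aux_classes)

    def forward(self, x):
        z = self.encoder(x)                        # shared representation
        return self.main_head(z), self.aux_head(z)

model = SharedMTLModel()
x = torch.randn(16, 64)
y_main = torch.randint(0, 10, (16,))
y_aux = torch.randint(0, 4, (16,))

main_logits, aux_logits = model(x)
aux_weight = 0.3   # assumed trade-off coefficient for the auxiliary loss
loss = F.cross_entropy(main_logits, y_main) + aux_weight * F.cross_entropy(aux_logits, y_aux)
loss.backward()
```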
Progress in deep reinforcement learning (RL) research is largely enabled by benchmark task environments. However, analyzing the nature of those environments is often overlooked. In particular, we still do not have agreed-upon ways to measure the difficulty…
Machine learning is gaining popularity in a broad range of areas working with geographic data, such as ecology or the atmospheric sciences. Here, data often exhibit spatial effects, which can be difficult for neural networks to learn. In this study, we …