Models for Visual Question Answering (VQA) are notorious for their tendency to rely on dataset biases: the large and unbalanced diversity of questions and concepts involved tends to prevent models from learning to reason, leading them to make educated guesses instead. In this paper, we claim that the standard evaluation metric, which consists of measuring overall in-domain accuracy, is misleading. Since questions and concepts are unbalanced, it favors models that exploit subtle training-set statistics. Alternatively, naively introducing artificial distribution shifts between train and test splits is also not completely satisfying. First, the shifts do not reflect real-world tendencies, resulting in unsuitable models; second, since the shifts are handcrafted, trained models are specifically designed for this particular setting and do not generalize to other configurations. We propose the GQA-OOD benchmark, designed to overcome these concerns: we measure and compare accuracy over both rare and frequent question-answer pairs and argue that the former is better suited to evaluating reasoning abilities, which we validate experimentally with models trained to exploit biases to varying degrees. In a large-scale study involving 7 VQA models and 3 bias-reduction techniques, we also experimentally demonstrate that these models fail to address questions involving infrequent concepts, and we provide recommendations for future directions of research.
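A minimal sketch (not the benchmark's official evaluation code) of how accuracy can be stratified into frequent ("head") and rare ("tail") question-answer groups, in the spirit described above; the tail fraction, the grouping key, and the function name are illustrative assumptions.

```python
from collections import Counter

def stratified_accuracy(examples, predictions, tail_fraction=0.2):
    """Report accuracy separately on rare ('tail') and frequent ('head') groups.

    `examples` is a list of (question_group, ground_truth_answer) pairs and
    `predictions` a parallel list of predicted answers. The 20% tail
    threshold is an illustrative assumption, not the benchmark's protocol.
    """
    group_counts = Counter(group for group, _ in examples)
    # Rank groups by frequency; the least frequent ones form the tail.
    ranked = sorted(group_counts, key=group_counts.get)
    n_tail = max(1, int(len(ranked) * tail_fraction))
    tail_groups = set(ranked[:n_tail])

    scores = {"head": [], "tail": []}
    for (group, answer), pred in zip(examples, predictions):
        split = "tail" if group in tail_groups else "head"
        scores[split].append(float(pred == answer))

    return {split: sum(v) / len(v) if v else float("nan")
            for split, v in scores.items()}
```

A model that merely exploits training-set statistics will typically show a large gap between the "head" and "tail" numbers returned here.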
Standard Knowledge Distillation (KD) approaches distill the knowledge of a cumbersome teacher model into the parameters of a student model with a pre-defined architecture. However, the knowledge of a neural network, which is represented by the networ
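For context, a minimal sketch of the standard KD objective referred to above (temperature-softened KL divergence between teacher and student predictions combined with the hard-label loss, following Hinton et al.); the temperature, weighting, and function name are illustrative assumptions, not this paper's method.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard knowledge-distillation loss: a weighted sum of the
    hard-label cross-entropy and the KL divergence between the
    temperature-softened teacher and student distributions.
    T and alpha are illustrative hyper-parameters."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    distill = F.kl_div(log_student, soft_targets, reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * hard
```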
Triplet loss is an extremely common approach to distance metric learning. Representations of images from the same class are optimized to be mapped closer together in an embedding space than representations of images from different classes. Much work
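A minimal sketch of the standard triplet loss described above, written in PyTorch; the margin value and the use of Euclidean distance are assumptions for illustration (PyTorch also ships an equivalent `torch.nn.TripletMarginLoss`).

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: push the anchor-positive distance below the
    anchor-negative distance by at least `margin`. Inputs are embedding
    tensors of shape (batch, dim); the margin is an illustrative choice."""
    d_pos = F.pairwise_distance(anchor, positive)  # same-class distance
    d_neg = F.pairwise_distance(anchor, negative)  # different-class distance
    return F.relu(d_pos - d_neg + margin).mean()
```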
Understanding the links between the activity of supermassive black holes (SMBH) at the centres of galaxies and their host dark matter haloes is a key question in modern astrophysics. The final data release of the SDSS-IV eBOSS provides the largest co
Given a set $P$ of $n$ points in $\mathbb{R}^d$, consider the problem of computing $k$ subsets of $P$ that form clusters that are well-separated from each other, and each of them is large (cardinality-wise). We provide tight uppe
Luminous Compact Blue Galaxies (LCBGs) are common at z~1, contributing significantly to the total star formation rate density. By z~0, they are a factor of ten rarer. While we know that LCBGs evolve rapidly, we do not know what drives their evolution