We address the problem of severe class imbalance in unsupervised domain adaptation, where the class spaces of the source and target domains diverge considerably. Until recently, domain adaptation methods assumed aligned class spaces, so that reducing the distribution divergence makes transfer between domains easier. This alignment assumption is invalidated in real-world scenarios, where some source classes are often under-represented or simply absent in the target domain. We revise the current approaches to class imbalance and propose a new one that uses latent codes in the adversarial domain adaptation framework. We show how the latent codes can be used to disentangle the latent structure of the target domain and to identify under-represented classes. We show how to learn the latent code reconstruction jointly with the domain-invariant representation, and use both to accurately estimate the target labels.
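As a rough illustration of the label-estimation step described above (not the paper's actual method), target labels can be estimated by matching target features to per-class source prototypes in a shared representation space. The function name and the use of cosine similarity are assumptions for this sketch:

```python
import numpy as np

def estimate_target_labels(source_feats, source_labels, target_feats):
    """Assign each target sample the label of its nearest source class
    prototype (mean feature vector) under cosine similarity.

    A minimal pseudo-labeling baseline, assuming source and target
    features already live in a shared (e.g. domain-invariant) space.
    """
    classes = np.unique(source_labels)
    # Class prototype = mean source feature per class.
    protos = np.stack(
        [source_feats[source_labels == c].mean(axis=0) for c in classes]
    )
    # Normalize rows so the dot product equals cosine similarity.
    protos_n = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    target_n = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sims = target_n @ protos_n.T          # (n_target, n_classes)
    return classes[np.argmax(sims, axis=1)]
```

In a class-imbalance setting, a method like the one above would additionally reweight or filter prototypes for classes suspected to be absent in the target domain; this sketch omits that step.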
We address the problem of unsupervised domain adaptation (UDA) by learning a domain-agnostic embedding space, where the distance between the probability distributions of the source and target visual domains is minimized. We use the output s
Deep neural networks, trained with large amounts of labeled data, can fail to generalize well when tested on examples from a target domain whose distribution differs from that of the training data, referred to as the source domain. It
In this paper, we address the Online Unsupervised Domain Adaptation (OUDA) problem, where the target data are unlabelled and arrive sequentially. Traditional methods for the OUDA problem mainly focus on transforming each arriving target data to
Despite remarkable progress, it is very difficult to induce a supervised classifier without any labeled data. Unsupervised domain adaptation is able to overcome this challenge by transferring knowledge from a labeled source domain to an un
Unsupervised domain adaptation aims to transfer a classifier learned on the source domain to the target domain in an unsupervised manner. With the help of target pseudo-labels, aligning class-level distributions and learning the classifier in the