Unsupervised domain adaptation aims at transferring knowledge from a labeled source domain to an unlabeled target domain. Previous adversarial domain adaptation methods mostly adopt a discriminator with binary or $K$-dimensional output to perform marginal or conditional alignment independently. Recent experiments have shown that when the discriminator is provided with domain information from both domains and label information from the source domain, it is able to preserve complex multimodal structure and high-level semantic information in both domains. Following this idea, we adopt a discriminator with $2K$-dimensional output to perform both domain-level and class-level alignment simultaneously in a single discriminator. However, a single discriminator cannot capture all the useful information across domains, and the relationship between target examples and the decision boundary has rarely been explored. Inspired by multi-view learning and recent advances in domain adaptation, in addition to the standard adversarial process between the discriminator and the feature extractor, we design a novel mechanism that pits two discriminators against each other, so that they provide diverse information to each other and avoid generating target features that fall outside the support of the source domain. To the best of our knowledge, this is the first work to explore a dual adversarial strategy for domain adaptation. Moreover, we employ semi-supervised learning regularization to make the representations more discriminative. Comprehensive experiments on two real-world datasets verify that our method outperforms several state-of-the-art domain adaptation methods.
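To make the $2K$-dimensional discriminator concrete, the sketch below shows one plausible PyTorch wiring: the first $K$ outputs correspond to source-domain classes and the last $K$ to target-domain classes, so a single cross-entropy objective enforces domain-level and class-level alignment at once, and a second discriminator supplies the dual adversarial signal. All names here (`FeatureExtractor`, `Discriminator`, the pseudo-labeling step, and the discrepancy-based dual term) are illustrative assumptions, not the authors' released implementation; the paper's exact losses may differ.

```python
# Minimal sketch of a 2K-way discriminator plus a dual adversarial term.
# Assumed architecture for illustration only; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10  # number of classes shared by source and target domains

class FeatureExtractor(nn.Module):
    def __init__(self, in_dim=256, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """2K-dimensional output: indices 0..K-1 are 'source, class k',
    indices K..2K-1 are 'target, class k'."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Linear(feat_dim, 2 * K)

    def forward(self, f):
        return self.net(f)

G = FeatureExtractor()
D1, D2 = Discriminator(), Discriminator()

xs, ys = torch.randn(32, 256), torch.randint(0, K, (32,))  # labeled source batch
xt = torch.randn(32, 256)                                  # unlabeled target batch
fs, ft = G(xs), G(xt)

# Crude pseudo-labels for the target batch (an assumption; any pseudo-labeling
# scheme could be substituted here).
yt_pseudo = D1(ft)[:, K:].argmax(dim=1).detach()

# Discriminator loss: place each example in the correct (domain, class) slot.
loss_d1 = (F.cross_entropy(D1(fs.detach()), ys)
           + F.cross_entropy(D1(ft.detach()), yt_pseudo + K))

# Adversarial loss for G: swap the domain halves so that, class-wise,
# target features look like source features and vice versa.
loss_g_adv = (F.cross_entropy(D1(ft), yt_pseudo)
              + F.cross_entropy(D1(fs), ys + K))

# Dual adversarial term: pit the two discriminators against each other by
# maximizing the discrepancy of their target predictions (one plausible
# instantiation of the mechanism described in the abstract).
loss_dual = -F.l1_loss(F.softmax(D1(ft.detach()), dim=1),
                       F.softmax(D2(ft.detach()), dim=1))
```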