Typical adversarial-training-based unsupervised domain adaptation methods are vulnerable when the source and target datasets are highly complex or exhibit a large discrepancy between their data distributions. Recently, several Lipschitz-constraint-based methods have been explored, since satisfying Lipschitz continuity guarantees remarkable performance on a target domain. However, these methods lack a mathematical analysis of why a Lipschitz constraint is beneficial to unsupervised domain adaptation, and they usually perform poorly on large-scale datasets. In this paper, we take the principle of utilizing a Lipschitz constraint further by discussing how it affects the error bound of unsupervised domain adaptation. We build a connection between the Lipschitz constraint and the error bound, and illustrate how Lipschitzness reduces the error bound. A local smooth discrepancy is defined to measure the Lipschitzness of a target distribution in a pointwise way. When constructing a deep end-to-end model, to ensure the effectiveness and stability of unsupervised domain adaptation, our proposed optimization strategy considers three critical factors: the sample amount of the target domain, and the dimension and batch size of samples. Experimental results demonstrate that our model performs well on several standard benchmarks. Our ablation study shows that the sample amount of the target domain and the dimension and batch size of samples indeed greatly impact the ability of Lipschitz-constraint-based methods to handle large-scale datasets. Code is available at https://github.com/CuthbertCai/SRDA.
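The abstract does not spell out the exact formula for the local smooth discrepancy, but a pointwise smoothness measure of this kind is commonly realized by comparing a model's predictions on a target sample with its predictions on a slightly perturbed copy of that sample. The sketch below illustrates one such formulation in PyTorch; the function name, the random-perturbation scheme, and the KL-divergence choice are assumptions made for illustration, not the paper's actual definition.

```python
import torch
import torch.nn.functional as F

def local_smooth_discrepancy(model, x_target, radius=1e-2):
    """Pointwise smoothness proxy: how much the model's prediction on each
    target sample changes under a small perturbation within a given radius.

    NOTE: illustrative sketch only -- the paper's exact definition may
    differ (e.g., adversarial rather than random perturbation directions).
    """
    with torch.no_grad():
        p_clean = F.softmax(model(x_target), dim=1)  # predictions on clean samples

    # Random perturbation, normalized per sample and scaled to `radius`.
    r = torch.randn_like(x_target)
    norms = r.flatten(1).norm(dim=1).view(-1, *([1] * (x_target.dim() - 1)))
    r = radius * r / (norms + 1e-12)

    # Predictions on the perturbed samples (log-space for numerical safety).
    log_p_pert = F.log_softmax(model(x_target + r), dim=1)

    # KL divergence between clean and perturbed predictions, averaged over
    # the batch: small values indicate local Lipschitz-like smoothness.
    return F.kl_div(log_p_pert, p_clean, reduction="batchmean")
```

In a full adaptation loop, such a term would typically be added, with a weighting coefficient, to the supervised loss on the source domain, penalizing target samples whose predictions change sharply under small perturbations.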