An essential problem in domain adaptation is to understand and make use of distribution changes across domains. For this purpose, we first propose a flexible Generative Domain Adaptation Network (G-DAN) with specific latent variables to capture changes in the generating process of features across domains. By explicitly modeling the changes, one can even generate data in new domains by running the generating process with new values of the latent variables in G-DAN. In practice, generating all features jointly may involve high-dimensional latent variables, which requires handling distributions in high dimensions and makes it difficult to learn domain changes from a few source domains. Interestingly, by further making use of the causal representation of joint distributions, we then decompose the joint distribution into separate modules, each of which involves different low-dimensional latent variables and can be learned separately, leading to a Causal G-DAN (CG-DAN). This improves both the statistical and the computational efficiency of the learning procedure. Finally, by matching the feature distribution in the target domain, we can recover the target-domain joint distribution and derive the learning machine for the target domain. We demonstrate the efficacy of both G-DAN and CG-DAN in domain generation and cross-domain prediction in experiments on both synthetic and real data.
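To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of the G-DAN principle: a single shared generator maps noise together with a low-dimensional domain-specific latent vector theta to features, and each domain's theta is fitted by matching generated and observed feature distributions with a kernel MMD. Labels are omitted for brevity, and all names and hyperparameters here (FeatureGenerator, mmd, theta_dim, the toy Gaussian domains) are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of the G-DAN idea: one generator shared across domains, plus a
# low-dimensional latent vector theta per domain that captures how the
# feature-generating process changes from domain to domain.
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    def __init__(self, noise_dim=8, theta_dim=2, feat_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + theta_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, z, theta):
        # Broadcast the domain latent theta to every sample in the batch.
        t = theta.expand(z.shape[0], -1)
        return self.net(torch.cat([z, t], dim=1))

def mmd(x, y, sigma=1.0):
    # Gaussian-kernel maximum mean discrepancy between two samples.
    def k(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Toy source domains: 2-D Gaussian features whose mean shifts across domains.
domains = [torch.randn(256, 2) + shift for shift in (0.0, 2.0)]

gen = FeatureGenerator()
thetas = [torch.zeros(1, 2, requires_grad=True) for _ in domains]
opt = torch.optim.Adam(list(gen.parameters()) + thetas, lr=1e-3)

for step in range(500):
    loss = 0.0
    for x, theta in zip(domains, thetas):
        z = torch.randn(x.shape[0], 8)
        loss = loss + mmd(gen(z, theta), x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# For a new (target) domain with only unlabeled features, one would optimize a
# fresh theta while keeping the generator fixed, matching the target feature
# distribution as described in the abstract.
```

The sketch also suggests why the causal decomposition in CG-DAN helps: instead of one generator over all features with a potentially high-dimensional theta, each causal module would get its own small generator and its own low-dimensional latent, so the per-module distribution matching is easier to learn from a few source domains.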