
Multi-Agent Diverse Generative Adversarial Networks

Published by: Viveka Kulharia
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





We propose MAD-GAN, an intuitive generalization of Generative Adversarial Networks (GANs) and their conditional variants that addresses the well-known problem of mode collapse. First, MAD-GAN is a multi-agent GAN architecture incorporating multiple generators and one discriminator. Second, to enforce that different generators capture diverse high-probability modes, the discriminator of MAD-GAN is designed so that, along with distinguishing real samples from fake ones, it must also identify the generator that produced a given fake sample. Intuitively, to succeed at this task the discriminator must learn to push different generators towards different identifiable modes. We perform extensive experiments on synthetic and real datasets and compare MAD-GAN with different variants of GAN. We show high-quality, diverse sample generations for challenging tasks such as image-to-image translation and face generation. In addition, we show that MAD-GAN is able to disentangle different modalities when trained on a highly challenging diverse-class dataset (e.g., a dataset containing images of forests, icebergs, and bedrooms). Finally, we show its efficacy on the unsupervised feature representation task. In the Appendix, we introduce a similarity-based competing objective (MAD-GAN-Sim) that encourages different generators to generate diverse samples according to a user-defined similarity metric. We show its performance on image-to-image translation and its effectiveness on the unsupervised feature representation task.
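To make the multi-generator objective concrete, below is a minimal PyTorch sketch of the (k+1)-way discriminator idea the abstract describes. The toy fully-connected networks, layer sizes, and loss arrangement are illustrative assumptions, not the paper's exact architecture or training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

k, z_dim, x_dim = 3, 64, 784  # assumed toy dimensions

generators = nn.ModuleList(
    [nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
     for _ in range(k)]
)
# The discriminator predicts k+1 classes: which generator made a fake
# sample (classes 0..k-1), or "real" (class k).
discriminator = nn.Sequential(
    nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, k + 1)
)

def d_loss(real_batch):
    """Discriminator step: classify real data as class k and each fake
    sample as the index of the generator that produced it."""
    losses = [F.cross_entropy(
        discriminator(real_batch),
        torch.full((real_batch.size(0),), k, dtype=torch.long))]
    for i, g in enumerate(generators):
        z = torch.randn(real_batch.size(0), z_dim)
        fake = g(z).detach()
        losses.append(F.cross_entropy(
            discriminator(fake),
            torch.full((fake.size(0),), i, dtype=torch.long)))
    return torch.stack(losses).mean()

def g_loss(batch_size):
    """Each generator tries to have its samples classified as "real";
    to fool the identification task, generators are pushed apart onto
    different modes."""
    losses = []
    for g in generators:
        logits = discriminator(g(torch.randn(batch_size, z_dim)))
        losses.append(F.cross_entropy(
            logits, torch.full((batch_size,), k, dtype=torch.long)))
    return torch.stack(losses).mean()
```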




Read also

Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, they have not been well visualized or understood. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts using a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. We examine the contextual relationship between these units and their surroundings by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in a scene. We provide open source interpretation tools to help researchers and practitioners better understand their GAN models.
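The intervention step this abstract describes can be sketched with a forward hook that switches units off during generation; the causal effect is then measured by comparing object pixels (e.g., via a segmenter) before and after ablation. The layer object and unit indices below are hypothetical, assuming a pretrained PyTorch generator with an intermediate (N, C, H, W) feature map.

```python
import torch

def ablate_units(generator, layer, unit_indices, z):
    """Zero out chosen units of `layer` during one forward pass and
    return the generated image, so the units' causal effect on objects
    in the output can be measured."""
    def hook(module, inputs, output):
        output = output.clone()
        output[:, unit_indices] = 0.0  # intervention: switch units off
        return output                  # returned tensor replaces the output

    handle = layer.register_forward_hook(hook)
    try:
        with torch.no_grad():
            image = generator(z)
    finally:
        handle.remove()                # restore the unmodified network
    return image
```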
Fei Ye, Adrian G. Bors (2021)
In this paper, we propose a new continually learning generative model, called the Lifelong Twin Generative Adversarial Networks (LT-GANs). LT-GANs learns a sequence of tasks from several databases, and its architecture consists of three components: two identical generators, namely the Teacher and Assistant, and one Discriminator. To allow LT-GANs to learn new concepts without forgetting, we introduce a new lifelong training approach, namely Lifelong Adversarial Knowledge Distillation (LAKD), which encourages the Teacher and Assistant to alternately teach each other while learning a new database. This training approach favours transferring knowledge from a more knowledgeable player to another player which knows less about a previously given task.
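The alternating teach-each-other idea might be sketched roughly as below; the simple L2 replay loss and the per-epoch role swap are simplifying assumptions rather than the exact LAKD objective.

```python
import torch
import torch.nn.functional as F

def lakd_step(teacher, assistant, z_batch, epoch):
    """Alternate roles each epoch: freeze the current 'knower' and train
    the other generator to reproduce its samples, transferring knowledge
    of earlier tasks without replaying their real data."""
    knower, learner = (teacher, assistant) if epoch % 2 == 0 else (assistant, teacher)
    with torch.no_grad():
        target = knower(z_batch)       # samples embodying past tasks
    loss = F.mse_loss(learner(z_batch), target)
    return loss                        # add the new-task GAN loss separately
```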
Disentanglement is defined as the problem of learning a representation that can separate the distinct, informative factors of variation of data. Learning such a representation may be critical for developing explainable and human-controllable Deep Generative Models (DGMs) in artificial intelligence. However, disentanglement in GANs is not a trivial task, as the absence of sample likelihood and posterior inference for latent variables seems to prohibit the forward step. Inspired by contrastive learning (CL), this paper, from a new perspective, proposes contrastive disentanglement in generative adversarial networks (CD-GAN). It aims at disentangling the factors of inter-class variation of visual data through contrasting image features, since the same factor values produce images in the same class. More importantly, we probe a novel way to make use of a limited amount of supervision to the largest extent, to promote inter-class disentanglement performance. Extensive experimental results on many well-known datasets demonstrate the efficacy of CD-GAN for disentangling inter-class variation.
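An InfoNCE-style contrastive term in this spirit might look like the following sketch, assuming `feats_a` and `feats_b` are feature embeddings of two generated batches whose i-th rows share the same factor value (positives); the temperature and pairing scheme are illustrative assumptions, not CD-GAN's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(feats_a, feats_b, temperature=0.1):
    """Pull together features of images generated with the same factor
    value and push apart those generated with different values."""
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    logits = a @ b.t() / temperature   # (N, N) similarity matrix
    labels = torch.arange(a.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```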
Generative Adversarial Networks (GANs) are able to generate high-quality images, but it remains difficult to explicitly specify the semantics of synthesized images. In this work, we aim to better understand the semantic representation of GANs, and thereby enable semantic control in GANs generation process. Interestingly, we find that a well-trained GAN encodes image semantics in its internal feature maps in a surprisingly simple way: a linear transformation of feature maps suffices to extract the generated image semantics. To verify this simplicity, we conduct extensive experiments on various GANs and datasets; and thanks to this simplicity, we are able to learn a semantic segmentation model for a trained GAN from a small number (e.g., 8) of labeled images. Last but not least, leveraging our findings, we propose two few-shot image editing approaches, namely Semantic-Conditional Sampling and Semantic Image Editing. Given a trained GAN and as few as eight semantic annotations, the user is able to generate diverse images subject to a user-provided semantic layout, and control the synthesized image semantics. We have made the code publicly available.
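The "a linear transformation suffices" finding translates naturally into a 1x1-convolution probe on frozen GAN feature maps, trainable from a handful of labeled images. The channel count, class count, and the feature-extraction step that produces `feature_maps` are assumed for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, feat_channels = 5, 512  # assumed values
# A 1x1 conv is exactly a per-pixel linear transformation of the features.
linear_head = nn.Conv2d(feat_channels, num_classes, kernel_size=1)
optimizer = torch.optim.Adam(linear_head.parameters(), lr=1e-3)

def probe_step(feature_maps, label_maps):
    """One training step: feature_maps (N, C, h, w) from a frozen GAN
    layer, label_maps (N, H, W) of per-pixel class indices (long)."""
    logits = linear_head(feature_maps)
    logits = F.interpolate(logits, size=label_maps.shape[-2:],
                           mode="bilinear", align_corners=False)
    loss = F.cross_entropy(logits, label_maps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```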
Aidin Ferdowsi, Walid Saad (2020)
To achieve a high learning accuracy, generative adversarial networks (GANs) must be fed by large datasets that adequately represent the data space. However, in many scenarios, the available datasets may be limited and distributed across multiple agents, each of which is seeking to learn the distribution of the data on its own. In such scenarios, the local datasets are inherently private and agents often do not wish to share them. In this paper, to address this multi-agent GAN problem, a novel brainstorming GAN (BGAN) architecture is proposed, with which multiple agents can generate real-like data samples while operating in a fully distributed manner and preserving their data privacy. BGAN allows the agents to gain information from other agents without sharing their real datasets, but by brainstorming via the sharing of their generated data samples. In contrast to existing distributed GAN solutions, the proposed BGAN architecture is designed to be fully distributed, and it does not need any centralized controller. Moreover, BGANs are shown to be scalable and not dependent on the hyperparameters of the agents' deep neural networks (DNNs), thus enabling the agents to have different DNN architectures. Theoretically, the interactions between BGAN agents are analyzed as a game whose unique Nash equilibrium is derived. Experimental results show that BGAN can generate real-like data samples with higher quality and lower Jensen-Shannon divergence (JSD) and Fréchet Inception Distance (FID) compared to other distributed GAN architectures.
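A heavily simplified sketch of the sample-sharing idea follows: agents exchange only generated batches, never real data. Mixing neighbors' generated samples into the discriminator's "real" pool is a stand-in for the paper's actual aggregation scheme, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def agent_d_step(disc, gen, local_real, neighbor_fakes, z_dim=64):
    """One discriminator step for a single agent. `neighbor_fakes` is a
    list of generated batches received from neighboring agents; pooling
    them with local real data lets the agent learn from others without
    ever seeing their private datasets."""
    brainstormed = torch.cat([local_real] + neighbor_fakes, dim=0)
    fake = gen(torch.randn(brainstormed.size(0), z_dim)).detach()
    real_logit, fake_logit = disc(brainstormed), disc(fake)
    # Standard GAN losses on the brainstormed pool vs. the agent's own fakes.
    loss = (F.binary_cross_entropy_with_logits(
                real_logit, torch.ones_like(real_logit)) +
            F.binary_cross_entropy_with_logits(
                fake_logit, torch.zeros_like(fake_logit)))
    return loss
```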
