We propose a novel loss for generative models, dubbed GRaWD (Generative Random Walk Deviation), to improve the learned representations of unexplored visual spaces. Learning high-quality representations of unseen classes (or styles) is crucial for novel image generation and for a better generative understanding of unseen visual classes (a.k.a. Zero-Shot Learning, ZSL). By generating representations of unseen classes from their semantic descriptions, such as attributes or text, generative ZSL aims to identify unseen categories discriminatively from seen ones. We define GRaWD by constructing a dynamic graph that includes the seen class/style centers and the generated samples in the current mini-batch. Our loss starts a random walk from each center through visual generations produced from hallucinated unseen classes. As a deviation signal, we encourage the random walk to land, after t steps, at a feature representation that is hard to classify as any of the seen classes. We show that our loss improves unseen-class representation quality on four text-based ZSL benchmarks (CUB and NABirds) and three attribute-based ZSL benchmarks (AWA2, SUN, and aPY). We also study our loss's ability to produce meaningful novel visual art generations on the WikiArt dataset. Our experiments and human studies show that our loss improves StyleGAN1 and StyleGAN2 generation quality, creating novel art that is significantly more preferred. Code will be made available.
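The following is a minimal sketch of how a GRaWD-style random-walk deviation loss could be computed, based only on the description above. The specific choices here (dot-product similarities, softmax transition probabilities, a uniform target distribution as the "hard to classify to any seen class" signal, and the function name `grawd_loss_sketch`) are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a random-walk deviation loss (assumptions noted above).
import torch
import torch.nn.functional as F


def grawd_loss_sketch(seen_centers, generated_feats, t=3):
    """seen_centers:    (K, d) feature centers of the seen classes/styles.
    generated_feats: (N, d) features of generations from hallucinated unseen classes.
    t:               number of random-walk steps among the generated samples."""
    k = seen_centers.shape[0]
    n = generated_feats.shape[0]

    # Transition probabilities: seen centers -> generated samples.
    p_center_to_gen = F.softmax(seen_centers @ generated_feats.T, dim=1)        # (K, N)

    # Transition probabilities among generated samples (self-transitions masked).
    sim_gen = generated_feats @ generated_feats.T
    mask = torch.eye(n, dtype=torch.bool, device=sim_gen.device)
    p_gen_to_gen = F.softmax(sim_gen.masked_fill(mask, float('-inf')), dim=1)    # (N, N)

    # Walk for t steps through the generated samples.
    walk = p_center_to_gen
    for _ in range(t):
        walk = walk @ p_gen_to_gen                                               # (K, N)

    # Landing distribution over seen classes after the walk.
    p_gen_to_center = F.softmax(generated_feats @ seen_centers.T, dim=1)         # (N, K)
    landing = walk @ p_gen_to_center                                             # (K, K)

    # Deviation signal: push the landing distribution toward uniform over seen
    # classes, i.e. make the terminal representation hard to attribute to any seen class.
    uniform = torch.full_like(landing, 1.0 / k)
    return F.kl_div(landing.clamp_min(1e-12).log(), uniform, reduction='batchmean')


# Usage with random features (stand-ins for class centers and generator outputs):
if __name__ == "__main__":
    centers = torch.randn(10, 128)        # 10 seen-class centers
    fakes = torch.randn(32, 128)          # 32 generated samples in the mini-batch
    print(grawd_loss_sketch(centers, fakes, t=3))
```

In this sketch the graph is rebuilt from the current mini-batch at every step, matching the "dynamic graph" described above, and the uniform-target KL term acts as the deviation signal that discourages the walk from settling near any seen class.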
We study the evolution of a random walker on a conservative dynamic random environment composed of independent particles performing simple symmetric random walks, generalizing results of [16] to higher dimensions and more general transition kernels w
We consider a random walker in a dynamic random environment given by a system of independent simple symmetric random walks. We obtain ballisticity results under two types of perturbations: low particle density, and strong local drift on particles. Su
Unsupervised representation learning has recently received lots of interest due to its powerful generalizability through effectively leveraging large-scale unlabeled data. There are two prevalent approaches for this, contrastive learning and generati
As a kind of generative self-supervised learning methods, generative adversarial nets have been widely studied in the field of anomaly detection. However, the representation learning ability of the generator is limited since it pays too much attentio
In this paper, following the recent paper on Walk/Zeta Correspondence by the first author and his coworkers, we compute the zeta function for the three- and four-state quantum walk and correlated random walk, and the multi-state random walk on the on