
Imaginative Walks: Generative Random Walk Deviation Loss for Improved Unseen Learning Representation

Added by Kai Yi
Publication date: 2021
Language: English





We propose a novel loss for generative models, dubbed GRaWD (Generative Random Walk Deviation), to improve the learned representations of unexplored visual spaces. High-quality representations of unseen classes (or styles) are crucial for novel image generation and for a better generative understanding of unseen visual classes, a.k.a. Zero-Shot Learning (ZSL). By generating representations of unseen classes from their semantic descriptions, such as attributes or text, generative ZSL aims to discriminate unseen categories from seen ones. We define GRaWD over a dynamic graph that includes the seen class/style centers and the generated samples in the current mini-batch. Our loss initiates a random walk from each center through the visual generations produced from hallucinated unseen classes. As a deviation signal, we encourage the random walk to land, after t steps, in a feature representation that is hard to classify as any of the seen classes. We show that our loss improves unseen-class representation quality on four text-based ZSL benchmarks (CUB and NABirds) and three attribute-based ZSL benchmarks (AWA2, SUN, and aPY). We also study our loss's ability to produce meaningful novel visual art on the WikiArt dataset. Our experiments and human studies show that our loss improves StyleGAN1 and StyleGAN2 generation quality, creating novel art that is significantly more preferred. Code will be made available.
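As a rough illustration of the idea, the sketch below builds a mini-batch graph from seen class centers and generated unseen-class samples, runs a t-step random walk starting from each center, and penalizes landing distributions that are confidently classified as any seen class. All tensor names, the similarity-based transition matrices, and the uniform-posterior target are our assumptions for illustration; this is not the authors' released implementation.

# Minimal sketch of a GRaWD-style random walk deviation loss (PyTorch).
# The t-step walk over a softmax similarity graph and the uniform target
# are assumptions based on the abstract, not the paper's exact loss.
import torch
import torch.nn.functional as F

def grawd_loss(centers, generated, t=3):
    """centers: (K, d) seen class centers; generated: (B, d) features of
    generated unseen-class samples in the current mini-batch."""
    # Row-stochastic transition probabilities from centers to generations
    # and between generations, derived from pairwise similarities.
    sim_cg = centers @ generated.t()        # (K, B)
    sim_gg = generated @ generated.t()      # (B, B)
    P_cg = F.softmax(sim_cg, dim=1)         # step 1: center -> generation
    P_gg = F.softmax(sim_gg, dim=1)         # later steps: generation -> generation
    walk = P_cg
    for _ in range(t - 1):                  # t steps in total
        walk = walk @ P_gg                  # (K, B) landing distribution
    # Expected landing feature per starting center, and its posterior
    # over the K seen classes.
    landing = walk @ generated              # (K, d)
    logits = landing @ centers.t()          # (K, K)
    probs = F.softmax(logits, dim=1)
    # Deviation signal: landing representations should be hard to classify
    # as any seen class, i.e. the posterior should be close to uniform.
    uniform = torch.full_like(probs, 1.0 / probs.size(1))
    return F.kl_div(probs.log(), uniform, reduction="batchmean")

Pushing the landing posterior toward uniform is one concrete reading of "hard to classify as any of the seen classes"; maximizing the posterior's entropy would serve the same purpose.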



Related Research

We study the evolution of a random walker on a conservative dynamic random environment composed of independent particles performing simple symmetric random walks, generalizing results of [16] to higher dimensions and more general transition kernels without the assumption of uniform ellipticity or nearest-neighbour jumps. Specifically, we obtain a strong law of large numbers, a functional central limit theorem and large deviation estimates for the position of the random walker under the annealed law in a high-density regime. The main obstacle is the intrinsic lack of monotonicity in higher-dimensional, non-nearest-neighbour settings. Here we develop more general renormalization and renewal schemes that allow us to overcome this issue. As a second application of our methods, we provide an alternative proof of the ballistic behaviour of the front of (the discrete-time version of) the infection model introduced in [23].
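As a toy illustration of this kind of model (not the paper's exact setting), the sketch below simulates a walker on a one-dimensional torus whose jump bias depends on whether it currently sits on a particle of an environment of independent simple symmetric random walks; the jump rule and all parameters are illustrative assumptions.

# Toy simulation: walker on a dynamic environment of independent simple
# symmetric random walks on a 1-d torus (the paper treats higher
# dimensions and more general kernels). Jump rule is an assumption.
import random
from collections import Counter

def simulate(n_particles=200, width=100, steps=1000, p_on=0.8, p_off=0.3):
    env = Counter(random.randrange(width) for _ in range(n_particles))
    x = 0
    for _ in range(steps):
        # Environment: every particle performs a simple symmetric random walk.
        new_env = Counter()
        for site, k in env.items():
            for _ in range(k):
                new_env[(site + random.choice((-1, 1))) % width] += 1
        env = new_env
        # Walker: rightward drift depends on local particle occupancy.
        p = p_on if env[x % width] > 0 else p_off
        x += 1 if random.random() < p else -1
    return x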
We consider a random walker in a dynamic random environment given by a system of independent simple symmetric random walks. We obtain ballisticity results under two types of perturbations: low particle density, and strong local drift on particles. Surprisingly, the random walker may behave very differently depending on whether the underlying environment particles perform lazy or non-lazy random walks, which is related to a notion of permeability of the system. We also provide a strong law of large numbers, a functional central limit theorem and large deviation bounds under an ellipticity condition.
Unsupervised representation learning has recently received considerable interest owing to its strong generalization, achieved by effectively leveraging large-scale unlabeled data. There are two prevalent approaches: contrastive learning, which learns representations from instance-wise discrimination tasks, and generative pre-training, which learns them by estimating the data likelihood. These seemingly orthogonal approaches have complementary strengths and weaknesses. Contrastive learning tends to extract semantic information and discard details irrelevant to classifying objects, making the representations effective for discriminative tasks while degrading robustness to out-of-distribution data. Generative pre-training, on the other hand, directly estimates the data distribution, so the representations tend to be robust but not optimal for discriminative tasks. In this paper, we show that we can achieve the best of both worlds through a hybrid training scheme. Specifically, we demonstrate that a transformer-based encoder-decoder architecture trained with both contrastive and generative losses learns highly discriminative and robust representations without hurting generative performance. We extensively validate our approach on various tasks.
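A minimal sketch of such a hybrid objective follows, pairing an instance-wise contrastive term (InfoNCE) with a reconstruction term as a stand-in for the likelihood. The encoder/decoder modules and the weight lam are illustrative assumptions; the paper's transformer-based architecture is not reproduced here.

# Sketch of a hybrid contrastive + generative training objective.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Instance-wise contrastive loss between two views' embeddings (B, d)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)  # positives on the diagonal

def hybrid_loss(encoder, decoder, x1, x2, lam=0.5):
    z1, z2 = encoder(x1), encoder(x2)       # two augmented views of a batch
    l_con = info_nce(z1, z2)                # discriminative term
    l_gen = F.mse_loss(decoder(z1), x1)     # reconstruction proxy for likelihood
    return l_con + lam * l_gen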
138 - Xuan Xia, Xizhou Pan, Xing He 2021
As a kind of generative self-supervised learning method, generative adversarial nets have been widely studied in the field of anomaly detection. However, the representation learning ability of the generator is limited, since it pays too much attention to pixel-level details, and it is difficult for the generator to learn abstract semantic representations from label-prediction pretext tasks as effectively as the discriminator does. To improve the representation learning ability of the generator, we propose a self-supervised learning framework combining generative and discriminative methods. The generator no longer learns representations from reconstruction error but from the guidance of the discriminator, and can thus benefit from pretext tasks designed for discriminative methods. Our discriminative-generative representation learning method performs close to discriminative methods and has a great advantage in speed. Applied to one-class anomaly detection, it significantly outperforms several state-of-the-art methods on multiple benchmark data sets, improving the top-performing GAN-based baseline by 6% on CIFAR-10 and 2% on MVTAD.
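One way to realize "guidance of the discriminator instead of reconstruction error" is a feature-matching-style objective, sketched below; the module names are placeholders and this is our reading of the abstract, not the paper's exact loss.

# Sketch: generator learns from intermediate discriminator features
# rather than pixel-level reconstruction, pushing it toward abstract
# semantics instead of pixel detail.
import torch.nn.functional as F

def generator_loss(discriminator_features, x_real, x_generated):
    f_real = discriminator_features(x_real).detach()  # no grad to discriminator
    f_fake = discriminator_features(x_generated)
    return F.mse_loss(f_fake, f_real)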
167 - Norio Konno, Shunya Tamura 2021
In this paper, following the recent paper on Walk/Zeta Correspondence by the first author and his coworkers, we compute the zeta function for the three- and four-state quantum walk and correlated random walk, and for the multi-state random walk, on the one-dimensional torus using Fourier analysis. We also deal with the four-state quantum walk and correlated random walk on the two-dimensional torus. In addition, we introduce a new class of models determined by the generalized Grover matrix, bridging the gap between the Grover matrix and the positive support of the Grover matrix. Finally, we give a generalized version of the Konno-Sato theorem for this new class. As a corollary, we calculate the zeta function for the generalized Grover matrix on the d-dimensional torus.
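For background (our addition, not a formula from this paper): the Ihara zeta function of a finite connected graph G and its Bass determinant expression, which Konno-Sato-type results extend to Grover-type matrices, read

% Ihara zeta function and the Ihara-Bass formula for a connected graph G
% with n vertices, m edges, adjacency matrix A and degree matrix D.
\zeta_G(u) = \prod_{[C]} \bigl(1 - u^{|C|}\bigr)^{-1},
\qquad
\zeta_G(u)^{-1} = (1 - u^2)^{m-n} \,\det\!\bigl(I_n - uA + u^2 (D - I_n)\bigr),

where the product runs over equivalence classes of prime, backtrackless, tailless cycles C in G.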
