An explicit discriminator trained on observable in-distribution (ID) samples can make high-confidence predictions on out-of-distribution (OOD) samples due to its distributional vulnerability. This is primarily caused by the limited ID samples observable for training discriminators when OOD samples are unavailable. To address this issue, state-of-the-art methods train the discriminator with OOD samples generated under general assumptions, without considering the data and network characteristics. However, different network architectures and training ID datasets may cause diverse vulnerabilities, so the generated OOD samples usually misaddress the specific distributional vulnerability of the explicit discriminator. To reveal and patch the distributional vulnerabilities, we propose a novel method of fine-tuning explicit discriminators by implicit generators (FIG). According to the Shannon entropy, an explicit discriminator can construct its corresponding implicit generator to generate specific OOD samples without extra training cost. A Langevin dynamics sampler then draws high-quality OOD samples from the generator to reveal the vulnerability. Finally, a regularizer, constructed according to the design principle of the implicit generator, patches the distributional vulnerability by encouraging high entropy on those generated OOD samples. Our experiments on four networks, four ID datasets, and seven OOD datasets demonstrate that FIG achieves state-of-the-art OOD detection performance and maintains competitive classification capability.
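The construction above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes a toy linear classifier as the explicit discriminator, takes the implicit generator's energy to be the standard free-energy form E(x) = -log Σ_y exp f_y(x) (so low energy corresponds to regions where the discriminator is confident), and draws samples with plain unadjusted Langevin dynamics. All sizes and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for the explicit discriminator:
# f(x) returns class logits. The implicit generator's unnormalized
# density is p(x) ∝ exp(-E(x)) with E(x) = -logsumexp(f(x)).
W = rng.normal(size=(3, 5))  # 3 classes, 5 input dims (hypothetical)
b = rng.normal(size=3)

def logits(x):
    return W @ x + b

def energy(x):
    z = logits(x)
    m = z.max()  # stabilized logsumexp
    return -(m + np.log(np.exp(z - m).sum()))

def energy_grad(x):
    # For a linear model, grad_x E(x) = -W^T softmax(f(x)) in closed form;
    # a deep network would use autodiff here instead.
    z = logits(x)
    p = np.exp(z - z.max())
    p /= p.sum()
    return -W.T @ p

def langevin_sample(steps=100, step_size=0.01):
    """One Langevin dynamics sample from the implicit generator:
    x <- x - (eps/2) * grad E(x) + sqrt(eps) * N(0, I)."""
    x = rng.normal(size=5)
    for _ in range(steps):
        noise = rng.normal(size=5)
        x = x - 0.5 * step_size * energy_grad(x) + np.sqrt(step_size) * noise
    return x

sample = langevin_sample()
```

Samples drawn this way land where the discriminator assigns low energy despite seeing no such training data, which is exactly the vulnerability the entropy regularizer then patches by pushing the predictive distribution on these samples toward uniform.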
To improve the sample efficiency of policy-gradient-based reinforcement learning algorithms, we propose implicit distributional actor-critic (IDAC) that consists of a distributional critic, built on two deep generator networks (DGNs), and a semi-impl
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis usi
Combinatorial features are essential for the success of many commercial models. Manually crafting these features usually comes with high cost due to the variety, volume and velocity of raw data in web-scale systems. Factorization based models, which
Deep ReLU networks trained with the square loss have been observed to perform well in classification tasks. We provide here a theoretical justification based on analysis of the associated gradient flow. We show that convergence to a solution with the
A basic question in learning theory is to identify if two distributions are identical when we have access only to examples sampled from the distributions. This basic task is considered, for example, in the context of Generative Adversarial Networks (