Our work focuses on unsupervised and generative methods that address the following goals: (a) learning unsupervised generative representations that discover latent factors controlling image semantic attributes; (b) studying how this ability to control attributes formally relates to the issue of latent factor disentanglement, clarifying related but dissimilar concepts that have been confounded in the past; and (c) developing anomaly detection methods that leverage the representations learned in (a). For (a), we propose a network architecture that combines multiscale generative models with mutual information (MI) maximization. For (b), we derive an analytical result (Lemma 1) that brings clarity to two related but distinct concepts: the ability of generative networks to control semantic attributes of the images they generate, which results from MI maximization, and the ability to disentangle latent-space representations, obtained via total correlation minimization. More specifically, we demonstrate that maximizing semantic attribute control encourages disentanglement of latent factors. Using Lemma 1 and adopting MI in our loss function, we then show empirically that, for image generation tasks, the proposed approach exhibits superior performance in the quality-disentanglement trade-off space when compared to other state-of-the-art methods, with quality assessed via the Fréchet Inception Distance (FID) and disentanglement via the mutual information gap. For (c), we design several anomaly detection systems exploiting the representations learned in (a), and demonstrate their performance benefits when compared to state-of-the-art generative and discriminative algorithms. These contributions in representation learning have potential applications to other important problems in computer vision, such as bias and privacy in AI.
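To make the two concepts contrasted in (b) concrete, they correspond to two standard information-theoretic quantities. The notation below ($c$ for a latent code, $x$ for a generated image, $z$ for the full latent vector, $q$ for the aggregate latent distribution) is our illustrative choice, not necessarily the paper's own:

```latex
% Semantic attribute control: maximize the mutual information between a
% latent code c and the generated image x:
I(c; x) = H(c) - H(c \mid x)

% Disentanglement: minimize the total correlation of the latent factors
% z = (z_1, \dots, z_d), which vanishes exactly when the factors are
% statistically independent:
\mathrm{TC}(z) = D_{\mathrm{KL}}\!\left( q(z) \,\middle\|\, \prod_{j=1}^{d} q(z_j) \right)
```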
We demonstrate how to explore phase diagrams with automated and unsupervised machine learning to find regions of interest for possible new phases. In contrast to supervised learning, where data is classified using predetermined labels, we here perform anomaly detection, where the task is to differentiate a normal data set, composed of one or several classes, from anomalous data. As a paradigmatic example, we explore the phase diagram of the extended Bose-Hubbard model in one dimension at exact integer filling and employ deep neural networks to determine the entire phase diagram in a completely unsupervised and automated fashion. As input data for learning, we first use the entanglement spectra and central tensors derived from tensor-network algorithms for ground-state computation, and later we extend our method to experimentally accessible data such as low-order correlation functions. Our method reveals a phase-separated region between the supersolid and superfluid parts with unexpected properties, which appears in the system in addition to the standard superfluid, Mott insulator, Haldane-insulating, and density wave phases.
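The abstract does not fix a network architecture; a common way to realize this kind of anomaly detection is an autoencoder trained only on data from a region treated as "normal", with reconstruction error as the anomaly score over the rest of the phase diagram. The sketch below follows that pattern; the layer sizes and the random stand-in tensor `normal_spectra` are illustrative assumptions, not the paper's setup:

```python
# Illustrative autoencoder-based anomaly detection (PyTorch). Architecture,
# sizes, and the stand-in data are assumptions for illustration only.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, dim_in: int, dim_latent: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                 nn.Linear(64, dim_latent))
        self.dec = nn.Sequential(nn.Linear(dim_latent, 64), nn.ReLU(),
                                 nn.Linear(64, dim_in))

    def forward(self, x):
        return self.dec(self.enc(x))

# Train only on samples from the region currently treated as "normal".
normal_spectra = torch.randn(1024, 32)  # stand-in for entanglement-spectrum features
model = AE(dim_in=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal_spectra), normal_spectra)
    loss.backward()
    opt.step()

def anomaly_score(x: torch.Tensor) -> torch.Tensor:
    """Reconstruction error; high values flag candidate new phases."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```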
We propose FineGAN, a novel unsupervised GAN framework that disentangles the background, object shape, and object appearance to hierarchically generate images of fine-grained object categories. To disentangle the factors without supervision, our key idea is to use information theory to associate each factor with a latent code, and to condition the relationships between the codes in a specific way to induce the desired hierarchy. Through extensive experiments, we show that FineGAN achieves the desired disentanglement and generates realistic and diverse images belonging to fine-grained classes of birds, dogs, and cars. Using FineGAN's automatically learned features, we also cluster real images as a first attempt at solving the novel problem of unsupervised fine-grained object category discovery. Our code/models/demo can be found at https://github.com/kkanshul/finegan
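The information-theoretic pairing of factors and codes is typically realized with a variational lower bound on I(code; image), maximized via an auxiliary head that predicts the code from the generated image (InfoGAN-style), while a hierarchy can be induced by tying child codes to parent codes. The following is a minimal sketch under those assumptions; `q_logits`, the code sizes, and the tying scheme are illustrative, not FineGAN's actual interface:

```python
# Minimal sketch: maximizing the accuracy of an auxiliary predictor
# Q(code | image) maximizes a variational lower bound on I(code; image).
import torch
import torch.nn.functional as F

def info_loss(q_logits: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between Q's prediction and the sampled categorical code;
    minimizing it tightens the mutual-information lower bound."""
    return F.cross_entropy(q_logits, code)

# One way to induce a hierarchy: each child (e.g., appearance) code is tied
# to a group determined by its parent (e.g., shape) code.
n_parent, children_per_parent = 20, 10
parent = torch.randint(0, n_parent, (64,))
child = parent * children_per_parent + torch.randint(0, children_per_parent, (64,))
```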
Anomaly detection from a single image is challenging, since anomalous data is rare and highly unpredictable in type. With only anomaly-free data available, most existing methods train an autoencoder to reconstruct the input image and use the difference between the input and output to identify the anomalous region. However, such methods face a potential problem: a coarse reconstruction introduces spurious differences, while a high-fidelity one may reproduce the anomaly itself. In this paper, we resolve this contradiction by proposing a two-stage approach that generates high-fidelity yet anomaly-free reconstructions. Our Unsupervised Two-stage Anomaly Detection (UTAD) relies on two technical components, namely the Impression Extractor (IE-Net) and the Expert-Net. The IE-Net and Expert-Net accomplish the two-stage anomaly-free image reconstruction task while also generating intuitive intermediate results, making the whole UTAD pipeline interpretable. Extensive experiments show that our method outperforms state-of-the-art methods on four anomaly detection datasets with different types of real-world objects and textures.
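A minimal sketch of the reconstruct-and-compare step that these methods share appears below; here `recon` stands in for the anomaly-free reconstruction (in UTAD it would come from the IE-Net / Expert-Net pipeline), and the thresholding rule is an illustrative assumption:

```python
# Reconstruction-based anomaly localization sketch (NumPy). The threshold
# rule and the provenance of `recon` are assumptions for illustration.
import numpy as np

def anomaly_map(image: np.ndarray, recon: np.ndarray, sigma: float = 2.0):
    """Per-pixel |input - reconstruction|, plus a simple statistical mask."""
    diff = np.abs(image.astype(np.float32) - recon.astype(np.float32))
    if diff.ndim == 3:                       # average over channels if present
        diff = diff.mean(axis=-1)
    mask = diff > diff.mean() + sigma * diff.std()
    return diff, mask
```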
Human communication takes many forms, including speech, text, and instructional videos. It typically has an underlying structure, with a starting point, an ending, and certain objective steps between them. In this paper, we consider instructional videos, of which there are tens of millions on the Internet. We propose a method for parsing a video into such semantic steps in an unsupervised way. Our method is capable of providing a semantic storyline of the video composed of its objective steps. We accomplish this using both visual and language cues in a joint generative model. Our method can also provide a textual description for each of the identified semantic steps and video segments. We evaluate our method on a large number of complex YouTube videos and show that it discovers semantically correct instructions for a variety of tasks.
Both high-level and high-resolution feature representations are of great importance in various visual understanding tasks. To acquire high-resolution feature maps with high-level semantic information, one common strategy is to adopt dilated convolutions in the backbone network, as in the dilatedFCN-based methods for semantic segmentation. However, because many convolution operations are conducted on high-resolution feature maps, such methods have large computational complexity and memory consumption. In this paper, we propose a novel holistically-guided decoder that obtains high-resolution, semantic-rich feature maps from the multi-scale features of the encoder. The decoding is achieved via novel holistic codeword generation and codeword assembly operations, which take advantage of both the high-level and low-level encoder features. With the proposed holistically-guided decoder, we implement the EfficientFCN architecture for semantic segmentation and HGD-FPN for object detection and instance segmentation. EfficientFCN achieves comparable or even better performance than state-of-the-art methods with only 1/3 of their computational cost for semantic segmentation on the PASCAL Context, PASCAL VOC, and ADE20K datasets. Meanwhile, the proposed HGD-FPN achieves $>2\%$ higher mean Average Precision (mAP) when integrated into several object detection frameworks with ResNet-50 encoding backbones.
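The abstract names but does not specify the codeword operations. A common attention-style reading is to pool a small set of codewords from the low-resolution, high-level features (generation) and then redistribute them at higher resolution under guidance from higher-resolution features (assembly). The module below is a sketch under that reading; the shapes, 1x1-convolution parameterization, and softmax placements are our assumptions, not the published EfficientFCN operators:

```python
# Attention-style sketch of holistic codeword generation and assembly
# (PyTorch). All design choices here are illustrative assumptions.
import torch
import torch.nn as nn

class HolisticDecoderSketch(nn.Module):
    def __init__(self, c_high: int, c_guide: int, n_codes: int = 64):
        super().__init__()
        self.to_gen = nn.Conv2d(c_high, n_codes, 1)   # codeword generation weights
        self.to_asm = nn.Conv2d(c_guide, n_codes, 1)  # codeword assembly weights

    def forward(self, feat_high, feat_guide):
        # feat_high:  (B, c_high, H, W)    low-resolution, high-level features
        # feat_guide: (B, c_guide, Hg, Wg) higher-resolution guidance features
        B, C, H, W = feat_high.shape
        gen = torch.softmax(self.to_gen(feat_high).flatten(2), dim=-1)   # (B, K, HW)
        codes = torch.einsum('bkl,bcl->bkc', gen, feat_high.flatten(2))  # (B, K, C)
        Hg, Wg = feat_guide.shape[-2:]
        asm = torch.softmax(self.to_asm(feat_guide).flatten(2), dim=1)   # (B, K, HgWg)
        out = torch.einsum('bkl,bkc->bcl', asm, codes)                   # (B, C, HgWg)
        return out.reshape(B, C, Hg, Wg)

# Usage: lift 1/32-scale semantics to 1/8 scale under guidance.
dec = HolisticDecoderSketch(c_high=512, c_guide=256)
y = dec(torch.randn(2, 512, 8, 8), torch.randn(2, 256, 32, 32))  # -> (2, 512, 32, 32)
```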