We demonstrate how to explore phase diagrams with automated and unsupervised machine learning to find regions of interest for possible new phases. In contrast to supervised learning, where data are classified using predetermined labels, we here perform anomaly detection, where the task is to differentiate a normal data set, composed of one or several classes, from anomalous data. As a paradigmatic example, we explore the phase diagram of the extended Bose-Hubbard model in one dimension at exact integer filling and employ deep neural networks to determine the entire phase diagram in a completely unsupervised and automated fashion. As input data for learning, we first use the entanglement spectra and central tensors derived from tensor-network algorithms for ground-state computation; we later extend our method to experimentally accessible data such as low-order correlation functions. Our method reveals a phase-separated region with unexpected properties between the supersolid and superfluid parts, which appears in the system in addition to the standard superfluid, Mott insulator, Haldane-insulating, and density-wave phases.
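The anomaly-detection scheme described above (train a model only on "normal" data, then flag inputs it reconstructs poorly) can be illustrated in a few lines. The following is a hedged, minimal sketch, not the paper's method: it uses a linear PCA reconstruction in place of a deep neural network, and synthetic feature vectors stand in for entanglement spectra or low-order correlators; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: feature vectors drawn from a single phase
# (illustrative stand-ins for entanglement spectra or correlators).
normal = rng.normal(0.0, 0.1, size=(500, 8)) + np.linspace(0, 1, 8)

# Fit a low-rank linear model (PCA) to the normal class only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # keep two principal directions

def anomaly_score(x):
    """Reconstruction error of x under the normal-data model."""
    z = (x - mean) @ components.T
    recon = z @ components + mean
    return np.linalg.norm(x - recon, axis=-1)

# A held-out normal point scores low; a point from a different
# "phase" (reversed feature profile) scores high.
test_normal = rng.normal(0.0, 0.1, size=8) + np.linspace(0, 1, 8)
test_anomaly = rng.normal(0.0, 0.1, size=8) + np.linspace(1, 0, 8)
print(anomaly_score(test_normal) < anomaly_score(test_anomaly))  # True
```

A threshold on this score then separates "same phase as the training data" from "possible new phase", which is how sweeping the score across parameter space can trace out phase boundaries.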
One of the most promising applications of quantum computing is simulating quantum many-body systems. However, there is still a need for methods that efficiently investigate these systems in a native way, capturing their full complexity. Here, we propose variational quantum anomaly detection, an unsupervised quantum machine learning algorithm to analyze quantum data from quantum simulation. The algorithm is used to extract the phase diagram of a system with no prior physical knowledge and can be performed end-to-end on the same quantum device that the system is simulated on. We showcase its capabilities by mapping out the phase diagram of the one-dimensional extended Bose-Hubbard model with dimerized hoppings, which exhibits a symmetry-protected topological phase. Further, we show that it is compatible with currently available devices by performing the algorithm on a real quantum computer.
Random projection is a common technique for designing algorithms in a variety of areas, including information retrieval, compressive sensing, and outlyingness measurement. In this work, the original random projection outlyingness measure is modified and combined with a neural network to obtain an unsupervised anomaly detection method able to handle multimodal normality. Theoretical and experimental arguments are presented to justify the choice of the anomaly-score estimator. The performance of the proposed neural network approach is comparable to that of a state-of-the-art anomaly detection method. Experiments conducted on the MNIST, Fashion-MNIST, and CIFAR-10 datasets show the relevance of the proposed approach.
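For context, the original random projection outlyingness measure that the work above modifies is a Stahel-Donoho-type score: the worst robust z-score of a point over many random one-dimensional projections. The sketch below is illustrative only, not the paper's neural-network variant; the data, number of projections, and the small MAD guard are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rp_outlyingness(X, x, n_proj=200):
    """Max over random unit directions u of |u.x - median(u.X)| / MAD(u.X)."""
    d = X.shape[1]
    U = rng.normal(size=(n_proj, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # random unit directions
    proj_X = X @ U.T                 # shape (n_samples, n_proj)
    proj_x = x @ U.T                 # shape (n_proj,)
    med = np.median(proj_X, axis=0)
    mad = np.median(np.abs(proj_X - med), axis=0)
    return np.max(np.abs(proj_x - med) / (mad + 1e-12))

# Normal data: a 2-D Gaussian blob; the outlier sits far from it.
X = rng.normal(size=(1000, 2))
inlier = np.array([0.1, -0.2])
outlier = np.array([6.0, 6.0])
print(rp_outlyingness(X, inlier) < rp_outlyingness(X, outlier))  # True
```

Taking the maximum over directions is what makes the score affine-invariant in the limit of many projections; the paper's contribution is, roughly, replacing this fixed construction with a learned one that copes with several normal modes at once.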
Our work focuses on unsupervised and generative methods that address the following goals: (a) learning unsupervised generative representations that discover latent factors controlling image semantic attributes, (b) studying how this ability to control attributes formally relates to the issue of latent-factor disentanglement, clarifying related but distinct concepts that had previously been conflated, and (c) developing anomaly detection methods that leverage the representations learned in (a). For (a), we propose a network architecture that exploits the combination of multiscale generative models with mutual information (MI) maximization. For (b), we derive an analytical result (Lemma 1) that brings clarity to two related but distinct concepts: the ability of generative networks to control semantic attributes of the images they generate, which results from MI maximization, and the ability to disentangle latent-space representations, obtained via total correlation minimization. More specifically, we demonstrate that maximizing semantic attribute control encourages disentanglement of latent factors. Using Lemma 1 and adopting MI in our loss function, we then show empirically that, for image generation tasks, the proposed approach exhibits superior performance in the quality-disentanglement trade-off space compared to other state-of-the-art methods, with quality assessed via the Fréchet Inception Distance (FID) and disentanglement via the mutual information gap. For (c), we design several systems for anomaly detection that exploit the representations learned in (a), and demonstrate their performance benefits compared to state-of-the-art generative and discriminative algorithms. The above contributions in representation learning have potential applications to other important problems in computer vision, such as bias and privacy in AI.
Surrogate-task-based methods have recently shown great promise for unsupervised image anomaly detection. However, there is no guarantee that a surrogate task shares a consistent optimization direction with anomaly detection. In this paper, we return to a direct, information-theoretic objective function for anomaly detection, which maximizes the distance between normal and anomalous data in terms of the joint distribution of images and their representations. Unfortunately, this objective is not directly optimizable in the unsupervised setting, where no anomalous data are provided during training. Through mathematical analysis, we decompose the objective into four components. To enable unsupervised optimization, we show that, under the assumption that the distributions of normal and anomalous data are separable in the latent space, its lower bound can be viewed as a function that weights the trade-off between mutual information and entropy. This objective explains why surrogate-task-based methods are effective for anomaly detection and further points out potential directions for improvement. Based on this objective, we introduce a novel information-theoretic framework for unsupervised image anomaly detection. Extensive experiments demonstrate that the proposed framework significantly outperforms several state-of-the-art methods on multiple benchmark data sets.
Clustering is essential to many tasks in pattern recognition and computer vision. With the advent of deep learning, there is increasing interest in learning deep unsupervised representations for clustering analysis. Many works in this domain rely on variants of auto-encoders and use the encoder outputs as representations/features for clustering. In this paper, we show that an l2 normalization constraint on these representations during auto-encoder training makes the representations more separable and compact in Euclidean space after training. This greatly improves clustering accuracy when k-means is applied to the representations. We also propose a clustering-based unsupervised anomaly detection method using l2-normalized deep auto-encoder representations and show the effect of l2 normalization on anomaly detection accuracy. We further show that the proposed method substantially improves accuracy compared to previously proposed deep methods, such as reconstruction-error-based anomaly detection.
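The clustering-based scoring idea described above can be sketched in a few lines, assuming the representations are already available: l2-normalize them, cluster with k-means, and score a new point by its distance to the nearest centroid. The sketch below is a toy stand-in, not the paper's pipeline; it uses synthetic 2-D points in place of encoder outputs, and k, the iteration count, and the data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def l2_normalize(Z):
    """Project representations onto the unit sphere."""
    return Z / np.linalg.norm(Z, axis=-1, keepdims=True)

def kmeans(Z, k, n_iter=50):
    """Plain k-means; multiple centroids model multimodal normality."""
    centroids = Z[rng.choice(len(Z), size=k, replace=False)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(Z[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = Z[labels == j]
            if len(members):          # keep old centroid if cluster empties
                centroids[j] = members.mean(axis=0)
    return centroids

def anomaly_score(z, centroids):
    """Distance from a normalized point to its nearest cluster centre."""
    return np.min(np.linalg.norm(centroids - z, axis=-1))

# Toy "encoder outputs": two clusters of normal data.
normal = np.concatenate([
    rng.normal([3.0, 0.0], 0.2, size=(200, 2)),
    rng.normal([0.0, 3.0], 0.2, size=(200, 2)),
])
centroids = kmeans(l2_normalize(normal), k=2)

inlier = l2_normalize(np.array([2.8, 0.1]))
outlier = l2_normalize(np.array([-3.0, -3.0]))
print(anomaly_score(inlier, centroids) < anomaly_score(outlier, centroids))
```

The normalization step matters because it removes radial scale from the representations, so cluster membership, and hence the anomaly score, depends only on direction in the latent space, which is what makes the clusters compact and separable.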