
Learning Energy-Based Models in High-Dimensional Spaces with Multi-scale Denoising Score Matching

Posted by Zengyi Li
Publication date: 2019
Paper language: English





Energy-Based Models (EBMs) assign unnormalized log-probability to data samples. This functionality has a variety of applications, such as sample synthesis, data denoising, sample restoration, outlier detection, Bayesian reasoning, and many more. But training EBMs by standard maximum likelihood is extremely slow because it requires sampling from the model distribution. Score matching potentially alleviates this problem. In particular, denoising score matching (Vincent, 2011) has been successfully used to train EBMs. Using noisy data samples with one fixed noise level, these models learn fast and yield good results in data denoising (Saremi & Hyvärinen, 2019). However, demonstrations of such models in high-quality sample synthesis of high-dimensional data have been lacking. Recently, Song and Ermon (2019) showed that a generative model trained by denoising score matching accomplishes excellent sample synthesis when trained with data samples corrupted with multiple levels of noise. Here we provide analysis and empirical evidence showing that training with multiple noise levels is necessary when the data dimension is high. Leveraging this insight, we propose a novel EBM trained with multi-scale denoising score matching. Our model exhibits data generation performance comparable to state-of-the-art techniques such as GANs, and sets a new baseline for EBMs. The proposed model also provides density information and performs well in an image inpainting task.
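As a rough illustration of the multi-scale denoising score matching idea described in the abstract, the sketch below perturbs each sample with a randomly chosen noise level, matches the model score (the negative gradient of the energy) to the score of the Gaussian smoothing kernel, and weights each level by sigma squared. The network, noise schedule, and weighting are illustrative assumptions, not the authors' exact objective.

```python
# Hedged sketch: a simplified multi-scale denoising score matching (DSM) loss
# for an energy-based model E_theta(x). Not the paper's exact formulation.
import torch

def multiscale_dsm_loss(energy_net, x, sigmas):
    """x: clean data batch (N, ...); sigmas: 1-D tensor of noise levels."""
    # Draw one noise level per sample from the multi-scale schedule.
    idx = torch.randint(0, len(sigmas), (x.shape[0],), device=x.device)
    sigma = sigmas[idx].view(-1, *([1] * (x.dim() - 1)))

    noise = torch.randn_like(x)
    x_noisy = (x + sigma * noise).requires_grad_(True)

    # Model score = -grad_x E(x), obtained by autograd through the energy.
    energy = energy_net(x_noisy).sum()
    score = -torch.autograd.grad(energy, x_noisy, create_graph=True)[0]

    # DSM target: score of the Gaussian smoothing kernel, -(x_noisy - x)/sigma^2.
    target = -(x_noisy - x) / sigma ** 2

    # Weight each level by sigma^2 so all scales contribute comparably.
    per_sample = ((score - target) ** 2).flatten(1).sum(dim=1)
    return (sigma.flatten() ** 2 * per_sample).mean()
```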




Read also

Deep neural networks have proven extremely efficient at solving a wide range of inverse problems, but most often the uncertainty on the solution they provide is hard to quantify. In this work, we propose a generic Bayesian framework for solving inverse problems, in which we limit the use of deep neural networks to learning a prior distribution on the signals to recover. We adopt recent denoising score matching techniques to learn this prior from data, and subsequently use it as part of an annealed Hamiltonian Monte-Carlo scheme to sample the full posterior of image inverse problems. We apply this framework to Magnetic Resonance Image (MRI) reconstruction and illustrate how this approach not only yields high-quality reconstructions but can also be used to assess the uncertainty on particular features of a reconstructed image.
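A rough sketch of the general recipe this abstract describes: the posterior score combines a learned prior score with the likelihood score of a known linear forward model, and sampling is annealed over noise levels. The paper uses an annealed Hamiltonian Monte-Carlo scheme; for brevity this sketch uses a simpler annealed Langevin update, and `score_net`, the forward operator `A`, and the schedule are assumed placeholders.

```python
# Hedged sketch: posterior sampling with a learned score-based prior and a
# Gaussian likelihood for y = A x + n. Simplified; not the paper's sampler.
import torch

def posterior_score(x, y, A, noise_std, score_net, sigma):
    prior = score_net(x, sigma)               # learned prior score at level sigma
    residual = y - x @ A.T                    # data-fit term of the linear model
    likelihood = (residual @ A) / noise_std ** 2
    return prior + likelihood

def annealed_langevin_posterior(y, A, noise_std, score_net, sigmas,
                                steps_per_level=50, step_size=1e-4):
    """sigmas: list of floats, from largest to smallest noise level."""
    x = torch.randn(y.shape[0], A.shape[1])
    for sigma in sigmas:
        eps = step_size * (sigma / sigmas[-1]) ** 2   # shrink steps as sigma anneals
        for _ in range(steps_per_level):
            grad = posterior_score(x, y, A, noise_std, score_net, sigma)
            x = x + 0.5 * eps * grad + (eps ** 0.5) * torch.randn_like(x)
    return x
```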
A new low-dimensional parameterization based on principal component analysis (PCA) and convolutional neural networks (CNN) is developed to represent complex geological models. The CNN-PCA method is inspired by recent developments in computer vision using deep learning. CNN-PCA can be viewed as a generalization of an existing optimization-based PCA (O-PCA) method. Both CNN-PCA and O-PCA entail post-processing a PCA model to better honor complex geological features. In CNN-PCA, rather than use a histogram-based regularization as in O-PCA, a new regularization involving a set of metrics for multipoint statistics is introduced. The metrics are based on summary statistics of the nonlinear filter responses of geological models to a pre-trained deep CNN. In addition, in the CNN-PCA formulation presented here, a convolutional neural network is trained as an explicit transform function that can post-process PCA models quickly. CNN-PCA is shown to provide both unconditional and conditional realizations that honor the geological features present in reference SGeMS geostatistical realizations for a binary channelized system. Flow statistics obtained through simulation of random CNN-PCA models closely match results for random SGeMS models for a demanding case in which O-PCA models lead to significant discrepancies. Results for history matching are also presented. In this assessment CNN-PCA is applied with derivative-free optimization, and a subspace randomized maximum likelihood method is used to provide multiple posterior models. Data assimilation and significant uncertainty reduction are achieved for existing wells, and physically reasonable predictions are also obtained for new wells. Finally, the CNN-PCA method is extended to a more complex non-stationary bimodal deltaic fan system, and is shown to provide high-quality realizations for this challenging example.
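The sketch below illustrates, in very simplified form, the two ingredients this abstract mentions: a low-dimensional PCA parameterization and a CNN transform trained with a regularization built from summary statistics (here, Gram matrices) of a pre-trained CNN's filter responses. All function names and the specific loss form are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch of the CNN-PCA idea: post-process a PCA geomodel with a CNN
# transform trained to match multipoint statistics of reference models.
import torch

def pca_realization(xi, m_mean, U, S):
    """Low-dimensional PCA model from latent xi ~ N(0, I); U: (D, k), S: (k,)."""
    return m_mean + U @ (S * xi)

def gram_stats(feature_maps):
    """Summary statistics (Gram matrices) of CNN filter responses."""
    stats = []
    for f in feature_maps:                 # each f: (C, H, W)
        c = f.flatten(1)                   # (C, H*W)
        stats.append(c @ c.T / c.shape[1])
    return stats

def transform_loss(transform_net, feature_extractor, m_pca, m_reference):
    """Train transform_net so transformed PCA models match reference statistics."""
    m_post = transform_net(m_pca)
    loss = 0.0
    for g_hat, g_ref in zip(gram_stats(feature_extractor(m_post)),
                            gram_stats(feature_extractor(m_reference))):
        loss = loss + ((g_hat - g_ref) ** 2).mean()
    return loss
```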
Ruiqi Gao, Yang Lu, Junpei Zhou (2017)
This paper proposes a multi-grid method for learning energy-based generative ConvNet models of images. For each grid, we learn an energy-based probabilistic model where the energy function is defined by a bottom-up convolutional neural network (ConvNet or CNN). Learning such a model requires generating synthesized examples from the model. Within each iteration of our learning algorithm, for each observed training image, we generate synthesized images at multiple grids by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of the training image. The synthesized image at each subsequent grid is obtained by a finite-step MCMC initialized from the synthesized image generated at the previous coarser grid. After obtaining the synthesized examples, the parameters of the models at multiple grids are updated separately and simultaneously based on the differences between synthesized and observed examples. We show that this multi-grid method can learn realistic energy-based generative ConvNet models, and it outperforms the original contrastive divergence (CD) and persistent CD.
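A hedged sketch of the multi-grid sampling loop described above: synthesis starts from a 1 x 1 version of each training image and is refined by finite-step Langevin MCMC at successively finer grids, with each grid initialized from the upsampled result of the previous one. Grid sizes, step sizes, and the Langevin details are illustrative assumptions.

```python
# Hedged sketch: multi-grid synthesis for energy-based ConvNet learning.
import torch
import torch.nn.functional as F

def langevin(energy_net, x, steps=30, step_size=0.01):
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy_net(x).sum(), x)[0]
        x = x - 0.5 * step_size ** 2 * grad + step_size * torch.randn_like(x)
    return x.detach()

def multigrid_synthesis(energy_nets, image, grid_sizes=(1, 4, 16, 64)):
    """energy_nets[i] is the EBM at grid_sizes[i]; image: (N, C, H, W)."""
    # Start from the minimal 1 x 1 version of each training image.
    x = F.interpolate(image, size=grid_sizes[0], mode='bilinear', align_corners=False)
    synthesized = []
    for net, size in zip(energy_nets, grid_sizes):
        x = F.interpolate(x, size=size, mode='bilinear', align_corners=False)
        x = langevin(net, x)               # finite-step MCMC at this grid
        synthesized.append(x)
    return synthesized                      # used to update each grid's parameters
```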
This paper studies a training method to jointly estimate an energy-based model and a flow-based model, in which the two models are iteratively updated based on a shared adversarial value function. This joint training method has the following traits. (1) The update of the energy-based model is based on noise contrastive estimation, with the flow model serving as a strong noise distribution. (2) The update of the flow model approximately minimizes the Jensen-Shannon divergence between the flow model and the data distribution. (3) Unlike generative adversarial networks (GANs), which estimate an implicit probability distribution defined by a generator model, our method estimates two explicit probabilistic distributions on the data. Using the proposed method we demonstrate a significant improvement in the synthesis quality of the flow model, and show the effectiveness of unsupervised feature learning by the learned energy-based model. Furthermore, the proposed training method can be easily adapted to semi-supervised learning, where we achieve results competitive with state-of-the-art semi-supervised learning methods.
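The sketch below illustrates the two updates in simplified form: the energy-based model is updated by noise contrastive estimation with the flow as the noise distribution, and the flow is updated adversarially against the same value function. The treatment of the normalizing constant and the exact value function are simplifying assumptions, not the paper's precise method.

```python
# Hedged sketch: NCE for the EBM with a flow as the noise distribution,
# plus an adversarial-style flow update against the same classifier.
import torch
import torch.nn.functional as F

def nce_logit(x, energy_net, flow_log_prob, log_z):
    # log p_ebm(x) - log q_flow(x), with log Z treated as a learned scalar.
    return -energy_net(x) - log_z - flow_log_prob(x)

def ebm_step(x_data, x_flow, energy_net, flow_log_prob, log_z):
    """EBM update: classify data (label 1) against flow samples (label 0)."""
    logits = torch.cat([nce_logit(x_data, energy_net, flow_log_prob, log_z),
                        nce_logit(x_flow, energy_net, flow_log_prob, log_z)])
    labels = torch.cat([torch.ones(len(x_data)), torch.zeros(len(x_flow))])
    return F.binary_cross_entropy_with_logits(logits, labels)

def flow_step(x_flow, energy_net, flow_log_prob, log_z):
    """Flow update: move flow samples toward being classified as data."""
    logits = nce_logit(x_flow, energy_net, flow_log_prob, log_z)
    return F.binary_cross_entropy_with_logits(logits, torch.ones(len(x_flow)))
```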
Single Index Models (SIMs) are simple yet flexible semi-parametric models for machine learning, where the response variable is modeled as a monotonic function of a linear combination of features. Estimation in this context requires learning both the feature weights and the nonlinear function that relates features to observations. While methods have been described to learn SIMs in the low dimensional regime, a method that can efficiently learn SIMs in high dimensions, and under general structural assumptions, has not been forthcoming. In this paper, we propose computationally efficient algorithms for SIM inference in high dimensions with structural constraints. Our general approach specializes to sparsity, group sparsity, and low-rank assumptions among others. Experiments show that the proposed method enjoys superior predictive performance when compared to generalized linear models, and achieves results comparable to or better than single layer feedforward neural networks with significantly less computational cost.
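As a generic illustration of SIM estimation with a structural constraint, the sketch below alternates between fitting the monotonic link by isotonic regression and updating the weights with a surrogate gradient step followed by hard thresholding for sparsity. This is not the paper's specific algorithm; the names, update rule, and thresholding scheme are assumptions.

```python
# Hedged sketch: alternating estimation for a sparse Single Index Model
# y ~ g(X w) with g monotonic.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_sparse_sim(X, y, sparsity, n_iters=100, lr=0.1):
    n, d = X.shape
    w = np.zeros(d)
    g = IsotonicRegression(out_of_bounds="clip")
    for _ in range(n_iters):
        z = X @ w
        y_hat = g.fit(z, y).predict(z)      # monotonic link via isotonic regression
        grad = X.T @ (y_hat - y) / n        # surrogate gradient using the fitted residual
        w = w - lr * grad
        # Structural constraint: keep only the `sparsity` largest-magnitude coordinates.
        keep = np.argsort(np.abs(w))[-sparsity:]
        mask = np.zeros(d, dtype=bool)
        mask[keep] = True
        w[~mask] = 0.0
    return w, g
```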
