
In All Likelihood, Deep Belief Is Not Enough

Published by: Lucas Theis
Publication date: 2010
Paper language: English





Statistical models of natural stimuli provide an important tool for researchers in the fields of machine learning and computational neuroscience. A canonical way to quantitatively assess and compare the performance of statistical models is given by the likelihood. One class of statistical models which has recently gained increasing popularity and has been applied to a variety of complex data is the deep belief network. Analyses of these models, however, have typically been limited to qualitative assessments based on samples, owing to the computationally intractable nature of the model likelihood. Motivated by these circumstances, the present article provides a consistent estimator for the likelihood that is both computationally tractable and simple to apply in practice. Using this estimator, a deep belief network which has been suggested for the modeling of natural image patches is quantitatively investigated and compared to other models of natural image patches. Contrary to earlier claims based on qualitative results, the results presented in this article provide evidence that the model under investigation is not a particularly good model for natural images.
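As a rough illustration of what a tractable, consistent likelihood estimator can look like, the sketch below estimates log p(v) by importance sampling for a single sigmoid belief layer with a factorial Bernoulli prior over its hidden units. It is not the estimator proposed in the paper; every name (W_gen, b_gen, W_rec, b_rec, prior_p) is a hypothetical stand-in introduced only for this example.

```python
import numpy as np

# Minimal sketch (NOT the paper's estimator): importance-sampling estimate of
# log p(v) for one sigmoid belief layer with a factorial Bernoulli prior.
# All names (W_gen, b_gen, W_rec, b_rec, prior_p) are hypothetical stand-ins.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_bernoulli(x, p, eps=1e-12):
    # log probability of a binary vector x under a factorial Bernoulli(p)
    return np.sum(x * np.log(p + eps) + (1 - x) * np.log(1 - p + eps), axis=-1)

def estimate_log_likelihood(v, W_gen, b_gen, W_rec, b_rec, prior_p, n_samples=5000):
    """Consistent importance-sampling estimate of log p(v), using the
    recognition model q(h|v) = Bernoulli(sigmoid(v @ W_rec + b_rec)) as proposal."""
    q_p = sigmoid(v @ W_rec + b_rec)                         # proposal probabilities
    h = (np.random.rand(n_samples, q_p.size) < q_p).astype(float)
    log_w = (log_bernoulli(h, prior_p)                       # log p(h)
             + log_bernoulli(v, sigmoid(h @ W_gen + b_gen))  # + log p(v|h)
             - log_bernoulli(h, q_p))                        # - log q(h|v)
    m = log_w.max()                                          # log-mean-exp of the weights
    return m + np.log(np.mean(np.exp(log_w - m)))

# Toy usage with random parameters (illustration only)
np.random.seed(0)
D, H = 16, 8
W_gen, b_gen = 0.1 * np.random.randn(H, D), np.zeros(D)
W_rec, b_rec = 0.1 * np.random.randn(D, H), np.zeros(H)
v = (np.random.rand(D) < 0.5).astype(float)
print(estimate_log_likelihood(v, W_gen, b_gen, W_rec, b_rec, prior_p=np.full(H, 0.5)))
```

Averaging the importance weights gives an estimate of p(v) whose logarithm is consistent as the number of samples grows, which is the property that makes a quantitative comparison between models possible in the first place.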




Read also

Pulsar glitches are traditionally viewed as a manifestation of vortex dynamics associated with a neutron superfluid reservoir confined to the inner crust of the star. In this Letter we show that the non-dissipative entrainment coupling between the neutron superfluid and the nuclear lattice leads to a less mobile crust superfluid, effectively reducing the moment of inertia associated with the angular momentum reservoir. Combining the latest observational data for prolific glitching pulsars with theoretical results for the crust entrainment we find that the required superfluid reservoir exceeds that available in the crust. This challenges our understanding of the glitch phenomenon, and we discuss possible resolutions to the problem.
Deep neural networks (deep learning) have emerged as a technology of choice to tackle problems in natural language processing, computer vision, speech recognition and gameplay, and in just a few years have led to superhuman-level performance and ushered in a new wave of AI. Buoyed by these successes, researchers in the physical sciences have made steady progress in incorporating deep learning into their respective domains. However, such adoption brings substantial challenges that need to be recognized and confronted. Here, we discuss both opportunities and roadblocks to implementation of deep learning within materials science, focusing on the relationship between the correlative nature of machine learning and the causal, hypothesis-driven nature of physical sciences. We argue that deep learning and AI are now well positioned to revolutionize fields where causal links are known, as is the case for applications in theory. When confounding factors are frozen or change only weakly, this leaves open the pathway for effective deep learning solutions in experimental domains. Similarly, these methods offer a pathway towards understanding the physics of real-world systems, either via deriving reduced representations, deducing algorithmic complexity, or recovering generative physical models. However, extending deep learning and AI to models with unclear causal relationships can produce misleading and potentially incorrect results. Here, we argue that the broad adoption of Bayesian methods incorporating prior knowledge, the development of DL solutions with incorporated physical constraints, and ultimately the adoption of causal models offer a path forward for fundamental and applied research. Most notably, while these advances can change the way science is carried out in ways we cannot imagine, machine learning is not going to substitute science any time soon.
Many structured prediction tasks in machine vision have a collection of acceptable answers, instead of one definitive ground truth answer. Segmentation of images, for example, is subject to human labeling bias. Similarly, there are multiple possible pixel values that could plausibly complete occluded image regions. State-of-the-art supervised learning methods are typically optimized to make a single test-time prediction for each query, failing to find other modes in the output space. Existing methods that allow for sampling often sacrifice speed or accuracy. We introduce a simple method for training a neural network, which enables diverse structured predictions to be made for each test-time query. For a single input, we learn to predict a range of possible answers. We compare favorably to methods that seek diversity through an ensemble of networks. Such stochastic multiple choice learning faces mode collapse, where one or more ensemble members fail to receive any training signal. Our best performing solution can be deployed for various tasks, and just involves small modifications to the existing single-mode architecture, loss function, and training regime. We demonstrate that our method results in quantitative improvements across three challenging tasks: 2D image completion, 3D volume estimation, and flow prediction.
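As a point of reference for the ensemble baseline this abstract mentions, the snippet below sketches the winner-takes-all (multiple-hypothesis) loss at the heart of multiple choice learning: only the best of K hypotheses is penalised, which encourages specialisation but can also starve some heads of gradient (the mode collapse noted above). This is not the paper's proposed method, and the names and squared-error loss are illustrative.

```python
import numpy as np

# Sketch of a winner-takes-all (multiple-hypothesis) loss, the core idea behind
# the multiple choice learning ensembles mentioned in the abstract. NOT the
# paper's method; the squared-error loss and names are illustrative only.

def wta_loss(predictions, target):
    """predictions: (K, ...) hypotheses; target: (...) ground truth.
    Returns the loss of the best hypothesis and its index; only that head
    would receive a gradient during training."""
    axes = tuple(range(1, predictions.ndim))
    per_head = np.mean((predictions - target[None]) ** 2, axis=axes)
    return per_head.min(), int(per_head.argmin())

# Toy usage: three hypotheses for a two-pixel completion
preds = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
target = np.array([1.0, 0.0])
print(wta_loss(preds, target))   # head 1 wins with the smallest error
```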
When a missing process depends on the missing values themselves, it needs to be explicitly modelled and taken into account while doing likelihood-based inference. We present an approach for building and fitting deep latent variable models (DLVMs) in cases where the missing process is dependent on the missing data. Specifically, a deep neural network enables us to flexibly model the conditional distribution of the missingness pattern given the data. This allows for incorporating prior information about the type of missingness (e.g. self-censoring) into the model. Our inference technique, based on importance-weighted variational inference, involves maximising a lower bound of the joint likelihood. Stochastic gradients of the bound are obtained by using the reparameterisation trick both in latent space and data space. We show on various kinds of data sets and missingness patterns that explicitly modelling the missing process can be invaluable.
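For a rough, self-contained picture of the general idea (not the paper's architecture or estimator), the sketch below computes an importance-weighted estimate of log p(x_obs, s) for a toy linear-Gaussian latent variable model whose missingness mask follows a self-censoring Bernoulli mechanism. The proposal is simply the model prior, so no gradients or reparameterisation are shown, and all names (W, sigma_x, a, b) are hypothetical.

```python
import numpy as np

# Toy sketch (not the paper's model): importance-weighted estimate of the joint
# log p(x_obs, s) for a linear-Gaussian latent variable model whose missingness
# mask s follows a self-censoring Bernoulli mechanism. The prior serves as the
# proposal; W, sigma_x, a, b are hypothetical parameters for illustration only.

rng = np.random.default_rng(0)
D, Zdim = 4, 2
W = rng.normal(size=(Zdim, D))        # generative weights: x = z @ W + noise
sigma_x = 0.5                         # observation noise scale
a, b = 2.0, 0.0                       # p(missing | x_d) = sigmoid(a * x_d + b)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def iw_log_joint(x_obs, s, K=5000):
    """Importance-weighted estimate of log p(x_obs, s); prior and imputation
    terms cancel against the proposal, leaving log p(x_obs|z) + log p(s|x_full)."""
    z = rng.normal(size=(K, Zdim))                       # z ~ N(0, I)
    mu = z @ W                                           # (K, D) predicted means
    x_mis = mu + sigma_x * rng.normal(size=(K, D))       # imputations from p(x|z)
    x_full = np.where(s, x_obs, x_mis)                   # keep observed entries
    log_px_obs = np.where(
        s,
        -0.5 * ((x_obs - mu) / sigma_x) ** 2 - np.log(sigma_x * np.sqrt(2 * np.pi)),
        0.0,
    ).sum(axis=1)                                        # likelihood of observed dims
    p_miss = sigmoid(a * x_full + b)
    log_ps = np.where(s, np.log1p(-p_miss), np.log(p_miss)).sum(axis=1)  # log p(s|x)
    log_w = log_px_obs + log_ps
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))        # log-mean-exp of weights

# Example: the last two entries of x are missing (their x_obs values are ignored)
x_obs = np.array([0.3, -1.2, 0.0, 0.0])
s = np.array([True, True, False, False])
print(iw_log_joint(x_obs, s))
```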
Marginal-likelihood based model selection, even though promising, is rarely used in deep learning due to estimation difficulties. Instead, most approaches rely on validation data, which may not be readily available. In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone. Some hyperparameters can be estimated online during training, simplifying the procedure. Our marginal-likelihood estimate is based on Laplace's method and Gauss-Newton approximations to the Hessian, and it outperforms cross-validation and manual tuning on standard regression and image classification datasets, especially in terms of calibration and out-of-distribution detection. Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable (e.g., in nonstationary settings).
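For a concrete, low-dimensional picture of how a Laplace/Gauss-Newton marginal-likelihood estimate is assembled (a hedged sketch, not the paper's scalable method), the example below computes it for Bayesian logistic regression with a Gaussian prior of precision alpha; all names here are illustrative.

```python
import numpy as np

# Hedged sketch: Laplace-approximation estimate of the log marginal likelihood
# for Bayesian logistic regression with prior N(0, alpha^{-1} I), using a
# Gauss-Newton/Fisher form for the Hessian. Names (alpha, X, y) are illustrative.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def log_joint(w, X, y, alpha, eps=1e-12):
    p = np.clip(sigmoid(X @ w), eps, 1 - eps)
    log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    log_prior = -0.5 * alpha * w @ w + 0.5 * len(w) * np.log(alpha / (2 * np.pi))
    return log_lik + log_prior

def laplace_log_marginal(X, y, alpha, iters=25):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):                       # MAP estimate via Newton's method
        p = sigmoid(X @ w)
        grad = X.T @ (y - p) - alpha * w
        H = X.T @ (X * (p * (1 - p))[:, None]) + alpha * np.eye(d)
        w += np.linalg.solve(H, grad)
    p = sigmoid(X @ w)
    H = X.T @ (X * (p * (1 - p))[:, None]) + alpha * np.eye(d)  # GN/Fisher Hessian at the MAP
    _, logdet = np.linalg.slogdet(H)
    # Laplace: log p(D) ~= log p(D, w_MAP) + (d/2) log 2*pi - (1/2) log |H|
    return log_joint(w, X, y, alpha) + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

# Toy model selection: compare two prior precisions on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(float)
print(laplace_log_marginal(X, y, alpha=1.0), laplace_log_marginal(X, y, alpha=100.0))
```

The hyperparameter with the larger estimated log marginal likelihood would be preferred, with no validation split required, which is the kind of selection the abstract describes at much larger scale.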
