
Probabilistic orientation estimation with matrix Fisher distributions

Posted by David Mohlin
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This paper focuses on estimating probability distributions over the set of 3D rotations ($SO(3)$) using deep neural networks. Learning to regress models to the set of rotations is inherently difficult due to differences in topology between $\mathbb{R}^N$ and $SO(3)$. We overcome this issue by using a neural network to output the parameters for a matrix Fisher distribution, since these parameters are homeomorphic to $\mathbb{R}^9$. By using a negative log likelihood loss for this distribution we get a loss which is convex with respect to the network outputs. By optimizing this loss we improve the state of the art on several challenging datasets, namely Pascal3D+, ModelNet10-$SO(3)$ and UPNA head pose.
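As a rough illustration of the loss described above, the sketch below evaluates the negative log likelihood of ground-truth rotations under a matrix Fisher distribution $p(R \mid A) \propto \exp(\mathrm{tr}(A^T R))$, where $A$ is the unconstrained $3 \times 3$ network output. The paper derives an accurate analytic approximation of the normalizing constant; this sketch instead falls back on a crude Monte Carlo estimate over Haar-uniform rotations, and the helper quat_to_rotmat together with the sample count are illustrative assumptions, not the authors' implementation.

```python
import math
import torch

def quat_to_rotmat(q):
    # convert unit quaternions (w, x, y, z) to 3x3 rotation matrices
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
        2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
        2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
    ], dim=-1).reshape(*q.shape[:-1], 3, 3)

def matrix_fisher_nll(A, R_gt, n_mc=4096):
    """NLL of rotations R_gt under p(R | A) ∝ exp(tr(Aᵀ R)) on SO(3).

    A    : (B, 3, 3) unconstrained network output (homeomorphic to R^9)
    R_gt : (B, 3, 3) ground-truth rotation matrices
    The log normalizer is estimated by Monte Carlo over Haar-uniform rotations,
    which is only a stand-in for the paper's analytic approximation.
    """
    log_unnorm = (A * R_gt).sum(dim=(-2, -1))          # tr(Aᵀ R_gt) per example
    q = torch.randn(n_mc, 4)
    q = q / q.norm(dim=-1, keepdim=True)               # uniform unit quaternions
    R = quat_to_rotmat(q)                              # (n_mc, 3, 3) Haar samples
    logits = torch.einsum('bij,nij->bn', A, R)         # tr(Aᵀ R) for every sample
    log_c = torch.logsumexp(logits, dim=-1) - math.log(n_mc)
    return (log_c - log_unnorm).mean()
```

Since $\mathrm{tr}(A^T R_{gt})$ is linear in $A$ and $\log c(A)$ is a log-partition function (a log-sum-exp of linear functions of $A$), this loss is convex in the network output, which is the property the abstract highlights.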




Read also

Fisher forecasts are a common tool in cosmology with applications ranging from survey planning to the development of new cosmological probes. While frequently adopted, they are subject to numerical instabilities that need to be carefully investigated to ensure accurate and reproducible results. This research note discusses these challenges using the example of a weak lensing data vector and proposes procedures that can help in their solution.
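A minimal sketch of the kind of computation this note is concerned with: a Fisher matrix assembled from central finite differences of a model data vector. The model callable, step sizes and covariance here are placeholders; the point is that the choice of step and the conditioning of the matrices are exactly where the numerical instabilities mentioned above enter.

```python
import numpy as np

def fisher_forecast(model, theta0, cov, steps):
    """Fisher matrix F_ij = d_iᵀ C⁻¹ d_j with d_i = ∂(data vector)/∂θ_i.

    model  : callable θ -> data vector (placeholder, e.g. a weak-lensing C_ell vector)
    theta0 : fiducial parameter values, shape (p,)
    cov    : data covariance, shape (n, n)
    steps  : finite-difference step per parameter, shape (p,)
    """
    p = len(theta0)
    derivs = []
    for i in range(p):
        dp = np.zeros(p)
        dp[i] = steps[i]
        # central differences; step sizes dominate the numerical error
        derivs.append((model(theta0 + dp) - model(theta0 - dp)) / (2.0 * steps[i]))
    derivs = np.asarray(derivs)                  # (p, n)

    # solve against the covariance instead of forming an explicit inverse
    cinv_d = np.linalg.solve(cov, derivs.T)      # (n, p)
    F = derivs @ cinv_d                          # (p, p)

    # report conditioning before F itself is inverted for forecasts
    print("condition number of F:", np.linalg.cond(F))
    return F
```

Solving against the covariance rather than inverting it explicitly, and checking the condition number before the Fisher matrix is inverted for forecasts, are simple safeguards of the kind such procedures rely on.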
Canonical Correlation Analysis (CCA) is a classic technique for multi-view data analysis. To overcome the deficiency of linear correlation in practical multi-view learning tasks, various CCA variants were proposed to capture nonlinear dependency. However, it is non-trivial to have an in-principle understanding of these variants due to their inherent restrictive assumption on the data and latent code distributions. Although some works have studied probabilistic interpretation for CCA, these models still require the explicit form of the distributions to achieve a tractable solution for the inference. In this work, we study probabilistic interpretation for CCA based on implicit distributions. We present Conditional Mutual Information (CMI) as a new criterion for CCA to consider both linear and nonlinear dependency for arbitrarily distributed data. To eliminate direct estimation for CMI, in which explicit form of the distributions is still required, we derive an objective which can provide an estimation for CMI with efficient inference methods. To facilitate Bayesian inference of multi-view analysis, we propose Adversarial CCA (ACCA), which achieves consistent encoding for multi-view data with the consistent constraint imposed on the marginalization of the implicit posteriors. Such a model would achieve superiority in the alignment of the multi-view data with implicit distributions. It is interesting to note that most of the existing CCA variants can be connected with our proposed CCA model by assigning specific form for the posterior and likelihood distributions. Extensive experiments on nonlinear correlation analysis and cross-view generation on benchmark and real-world datasets demonstrate the superiority of our model.
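For orientation, the sketch below is plain linear CCA, the baseline that the nonlinear and probabilistic variants discussed above (including the proposed ACCA) are designed to generalize; it is not the adversarial model itself, and the ridge regularization constant is an illustrative choice.

```python
import numpy as np

def linear_cca(X, Y, k, reg=1e-6):
    """Classical linear CCA: projections maximizing correlation between two views.

    X : (n, dx) view one, Y : (n, dy) view two, k : number of components.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    Cxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)

    def inv_sqrt(C):
        # inverse square root of a symmetric positive-definite matrix
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)

    Wx = inv_sqrt(Cxx) @ U[:, :k]      # projection for view X
    Wy = inv_sqrt(Cyy) @ Vt.T[:, :k]   # projection for view Y
    return Wx, Wy, s[:k]               # s[:k] are the canonical correlations
```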
Statistical models of real world data typically involve continuous probability distributions such as normal, Laplace, or exponential distributions. Such distributions are supported by many probabilistic modelling formalisms, including probabilistic database systems. Yet, the traditional theoretical framework of probabilistic databases focusses entirely on finite probabilistic databases. Only recently, we set out to develop the mathematical theory of infinite probabilistic databases. The present paper is an exposition of two recent papers which are cornerstones of this theory. In (Grohe, Lindner; ICDT 2020) we propose a very general framework for probabilistic databases, possibly involving continuous probability distributions, and show that queries have a well-defined semantics in this framework. In (Grohe, Kaminski, Katoen, Lindner; PODS 2020) we extend the declarative probabilistic programming language Generative Datalog, proposed by (Barany et al. 2017) for discrete probability distributions, to continuous probability distributions and show that such programs yield generative models of continuous probabilistic databases.
The Fisher Information Matrix (FIM) has been the standard approximation to the accuracy of parameter estimation on gravitational-wave signals from merging compact binaries due to its ease-of-use and rapid computation time. While the theoretical failings of this method, such as the signal-to-noise ratio (SNR) limit on the validity of the lowest-order expansion and the difficulty of using non-Gaussian priors, are well understood, the practical effectiveness compared to a real parameter estimation technique (e.g. Markov-chain Monte Carlo) remains an open question. We present a direct comparison between the FIM error estimates and the Bayesian probability density functions produced by the parameter estimation code lalinference_mcmc. In addition to the low-SNR issues usually considered, we find that the FIM can greatly overestimate the uncertainty in parameter estimation achievable by the MCMC. This was found to be a systematic effect for systems composed of binary black holes, with the disagreement increasing with total mass. In some cases, the MCMC search returned standard deviations on the marginalized posteriors that were smaller by several orders of magnitude than the FIM estimates. We conclude that the predictions of the FIM do not represent the capabilities of real gravitational-wave parameter estimation.
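On the FIM side, the comparison described above amounts to reading 1-sigma errors off $\sqrt{\mathrm{diag}(F^{-1})}$ (the Cramer-Rao bound) and placing them next to the marginalized posterior spreads from the MCMC. The sketch below shows that bookkeeping with hypothetical inputs; it does not reproduce lalinference_mcmc or the waveform derivatives needed to build the FIM.

```python
import numpy as np

def compare_fim_to_mcmc(fisher, mcmc_samples, names):
    """Put FIM-predicted 1-sigma errors next to MCMC marginalized spreads.

    fisher       : (p, p) Fisher information matrix at the injected parameters
    mcmc_samples : (n, p) posterior samples, e.g. from an MCMC run
    names        : length-p list of parameter names (hypothetical labels)
    """
    fim_sigma = np.sqrt(np.diag(np.linalg.inv(fisher)))   # Cramer-Rao 1-sigma errors
    mcmc_sigma = mcmc_samples.std(axis=0)                  # marginalized posterior spreads
    for name, f, m in zip(names, fim_sigma, mcmc_sigma):
        print(f"{name:>12s}: FIM = {f:.3e}  MCMC = {m:.3e}  ratio = {f / m:.2f}")
```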
Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain's white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for fiber tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs with a relatively small number of diffusion gradients. However, accurate FO estimation in regions with complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent the diffusion signals. To estimate the mixture fractions of the dictionary atoms (and thus coarse FOs), a deep network is designed specifically for solving the sparse reconstruction problem. Here, the smaller dictionary is used to reduce the computational cost of training. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding dense basis FOs is used and a weighted l1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and real dMRI data, and the results demonstrate the benefit of using a deep network for FO estimation.
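The second FORDN step described above is a weighted l1-norm regularized least squares problem. The sketch below solves a generic instance of that problem with plain ISTA (proximal gradient descent); the dictionary, signal and weight vector are placeholders, and the authors' actual solver and weight construction are not reproduced.

```python
import numpy as np

def weighted_l1_ls(D, s, w, lam=0.1, n_iter=200):
    """Minimize 0.5 * ||D x - s||_2^2 + lam * sum_i w_i |x_i| with ISTA.

    D : (m, n) dictionary of dense basis fiber orientations (columns)
    s : (m,)   measured diffusion signal for one voxel
    w : (n,)   per-atom weights, small for atoms consistent with the coarse FOs
    """
    x = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, ord=2) ** 2       # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ x - s)                     # gradient of the least-squares term
        z = x - step * grad
        thresh = step * lam * w
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)   # weighted soft-thresholding
    return x
```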
