Inverse problems defined on the sphere arise in many fields, and are generally high-dimensional and computationally very complex. As a result, sampling the posterior of spherical inverse problems is a challenging task. In this work, we describe a framework that leverages a proximal Markov chain Monte Carlo algorithm to efficiently sample the high-dimensional space of spherical inverse problems with a sparsity-promoting wavelet prior. We detail the modifications needed for the algorithm to be applied to spherical problems, and give special consideration to the crucial forward modelling step which contains spherical harmonic transforms that are computationally expensive. By sampling the posterior, our framework allows for full and flexible uncertainty quantification, something which is not possible with other methods based on, for example, convex optimisation. We demonstrate our framework in practice on a common problem in global seismic tomography. We find that our approach is potentially useful for a wide range of applications at moderate resolutions.
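To make the proximal MCMC machinery concrete, here is a minimal sketch of a Moreau-Yosida unadjusted Langevin (MYULA) step for a Gaussian likelihood with a sparsity-promoting $\ell_1$ prior on a flat Euclidean parameter. This is an illustration of the generic proximal Langevin update only, not the authors' spherical wavelet implementation; the function names and the parameters `gamma` (step size), `lam` (Moreau-Yosida smoothing) and `mu` (prior strength) are illustrative choices.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def myula_sample(y, sigma=1.0, mu=1.0, lam=0.1, gamma=0.01, n_iter=5000, seed=0):
    """MYULA chain targeting (a smoothed version of)
    posterior(x) ∝ exp(-||y - x||^2 / (2 sigma^2) - mu * ||x||_1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(y)
    samples = np.empty((n_iter, len(y)))
    for k in range(n_iter):
        grad_f = (x - y) / sigma**2          # gradient of Gaussian negative log-likelihood
        prox = soft_threshold(x, lam * mu)   # prox of the l1 prior, smoothing parameter lam
        x = (x - gamma * grad_f
               - (gamma / lam) * (x - prox)  # gradient of the Moreau-Yosida envelope
               + np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape))
        samples[k] = x
    return samples

y = np.array([2.0, 0.0, -1.5])
chain = myula_sample(y)
post_mean = chain[1000:].mean(axis=0)        # discard burn-in before summarising
```

Having the full chain, rather than a single point estimate, is what enables the flexible uncertainty quantification described above: credible intervals or exceedance probabilities can be read off directly from the samples.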
The Fisher-Bingham distribution ($\mathrm{FB}_8$) is an eight-parameter family of probability density functions (PDFs) on $S^2$ that, under certain conditions, reduces to spherical analogues of bivariate normal PDFs. Due to difficulties in computing its overall normalization constant, applications have been mainly restricted to subclasses of $\mathrm{FB}_8$, such as the Kent ($\mathrm{FB}_5$) or von Mises-Fisher (vMF) distributions. However, these subclasses often do not adequately describe directional data that are not symmetric along great circles. The normalizing constant of $\mathrm{FB}_8$ can be numerically integrated, and recently Kume and Sei showed that it can be computed using an adjusted holonomic gradient method. Both approaches, however, can be computationally expensive. In this paper, I show that the normalization of $\mathrm{FB}_8$ can be expressed as an infinite sum of hypergeometric functions, similar to that of $\mathrm{FB}_5$. This allows the normalization to be computed by truncated summation with adequate stopping conditions. I then fit the $\mathrm{FB}_8$ to a synthetic dataset using a maximum-likelihood approach and show its improvements over a fit with the more restrictive $\mathrm{FB}_5$ distribution.
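The truncated-summation-with-stopping-condition idea can be sketched on a simpler member of this family whose constant is known in closed form: the vMF distribution on $S^2$, where $c(\kappa) = \kappa / (4\pi \sinh\kappa)$. The sketch below is not the $\mathrm{FB}_8$ hypergeometric series itself, only an illustration of the same pattern: sum successive terms via a recurrence and stop once the next term is negligible relative to the running total.

```python
import math

def sinh_series(kappa, rtol=1e-12, max_terms=200):
    # sinh(kappa) = sum_{k>=0} kappa^(2k+1) / (2k+1)!
    # Each term is obtained from the previous one by a cheap recurrence,
    # and summation stops once the next term is negligible.
    term = kappa
    total = 0.0
    for k in range(max_terms):
        total += term
        term *= kappa * kappa / ((2 * k + 2) * (2 * k + 3))
        if abs(term) < rtol * abs(total):
            break
    return total

def vmf_norm_s2(kappa):
    # Normalizing constant of the von Mises-Fisher density on S^2.
    return kappa / (4.0 * math.pi * sinh_series(kappa))
```

For the actual $\mathrm{FB}_8$ series the terms involve hypergeometric functions rather than factorials, but the stopping logic is of the same relative-tolerance form.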
We derive the second-order sampling properties of certain autocovariance and autocorrelation estimators for sequences of independent and identically distributed samples. Specifically, the estimators we consider are the classic lag-windowed correlogram, the correlogram with subtracted sample mean, and the fixed-length summation correlogram. For each correlogram we derive explicit formulas for the bias, covariance, mean square error and consistency for generalised higher-order white noise sequences. In particular, this class of sequences may have non-zero means, be complex-valued, and also includes non-analytic noise signals. We find that these commonly used correlograms exhibit lag-dependent covariance despite the fact that these processes are white and hence by definition do not depend on lag.
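For reference, one of the estimators of this type, a biased ($1/N$-normalised) correlogram with subtracted sample mean, can be written in a few lines. This is a generic textbook form to fix notation, not necessarily the exact windowing conventions analysed in the paper.

```python
import numpy as np

def correlogram(x, max_lag):
    # Biased (1/N-normalised) sample autocovariance with subtracted
    # sample mean; np.conj handles complex-valued sequences.
    x = np.asarray(x)
    n = len(x)
    xc = x - x.mean()
    return np.array([np.sum(xc[tau:] * np.conj(xc[:n - tau])) / n
                     for tau in range(max_lag + 1)])

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)   # real white noise, unit variance
s = correlogram(x, max_lag=5)     # s[0] ~ 1, s[tau>0] ~ 0 for white noise
```

For white noise the estimates at non-zero lags are close to zero in expectation, but, as the abstract notes, their covariance still depends on the lag.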
We study the effect of additive noise on the inversion of Fourier integral operators (FIOs) associated with a diffeomorphic canonical relation. We use microlocal defect measures to quantify the power spectrum of the noise and analyze how that power spectrum is transformed under the inversion. In particular, we compute the standard deviation of the noise added to the inversion as a function of the standard deviation of the noise added to the data. As an example, we study the Radon transform in the plane in parallel and fan-beam coordinates, and present numerical examples.
Deep neural networks have been applied successfully to a wide variety of inverse problems arising in computational imaging. These networks are typically trained using a forward model that describes the measurement process to be inverted, which is often incorporated directly into the network itself. However, these approaches are sensitive to changes in the forward model: if at test time the forward model varies (even slightly) from the one the network was trained for, the reconstruction performance can degrade substantially. Given a network trained to solve an initial inverse problem with a known forward model, we propose two novel procedures that adapt the network to a change in the forward model, even without full knowledge of the change. Our approaches do not require access to more labeled data (i.e., ground truth images). We show these simple model adaptation approaches achieve empirical success in a variety of inverse problems, including deblurring, super-resolution, and undersampled image reconstruction in magnetic resonance imaging.
We propose a novel method for computing $p$-values based on nested sampling (NS) applied to the sampling space rather than the parameter space of the problem, in contrast to its usage in Bayesian computation. The computational cost of NS scales as $\log^2{1/p}$, which compares favorably to the $1/p$ scaling for Monte Carlo (MC) simulations. For significances greater than about $4\sigma$ in both a toy problem and a simplified resonance search, we show that NS requires orders of magnitude fewer simulations than ordinary MC estimates. This is particularly relevant for high-energy physics, which adopts a $5\sigma$ gold standard for discovery. We conclude with remarks on new connections between Bayesian and frequentist computation and possibilities for tuning NS implementations for still better performance in this setting.
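The favourable scaling comes from NS compressing the tail volume geometrically, by a factor of roughly $e^{-1/n}$ per iteration with $n$ live points. A toy sketch of this idea, not the paper's implementation: take a Uniform(0,1) test statistic, so that drawing a replacement conditioned to exceed the current minimum is trivial, and estimate the tail probability from the iteration count.

```python
import math
import random

def ns_pvalue(threshold, n_live=1000, seed=0):
    """Toy nested-sampling estimate of p = P(T > threshold) for a
    Uniform(0,1) test statistic T (so the true p is 1 - threshold).

    Each iteration discards the lowest live point and replaces it with a
    draw conditioned to exceed it, compressing the tail volume by roughly
    exp(-1/n_live); the iteration count therefore grows like log(1/p)."""
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]
    iters = 0
    while min(live) < threshold:
        i = min(range(n_live), key=live.__getitem__)
        t = live[i]
        # For Uniform(0,1), the conditional law X | X > t is Uniform(t, 1).
        live[i] = t + (1.0 - t) * rng.random()
        iters += 1
    return math.exp(-iters / n_live), iters

p_est, n_sims = ns_pvalue(0.999)   # true p = 1e-3
```

With $p = 10^{-3}$ this needs on the order of $n \ln(1/p) \approx 7000$ conditional draws, whereas a plain MC estimate with comparable relative precision needs on the order of $100/p = 10^5$ simulations; the gap widens rapidly toward the $5\sigma$ regime.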