The Fisher-Bingham distribution ($\mathrm{FB}_8$) is an eight-parameter family of probability density functions (PDFs) on $S^2$ that, under certain conditions, reduces to spherical analogues of bivariate normal PDFs. Due to difficulties in computing its overall normalization constant, applications have mainly been restricted to subclasses of $\mathrm{FB}_8$, such as the Kent ($\mathrm{FB}_5$) or von Mises-Fisher (vMF) distributions. However, these subclasses often do not adequately describe directional data that are not symmetric along great circles. The normalizing constant of $\mathrm{FB}_8$ can be numerically integrated, and recently Kume and Sei showed that it can be computed using an adjusted holonomic gradient method. Both approaches, however, can be computationally expensive. In this paper, I show that the normalization of $\mathrm{FB}_8$ can be expressed as an infinite sum of hypergeometric functions, similar to that of $\mathrm{FB}_5$. This allows the normalization to be computed by direct summation with adequate stopping conditions. I then fit the $\mathrm{FB}_8$ to a synthetic dataset using a maximum-likelihood approach and show its improvements over a fit with the more restrictive $\mathrm{FB}_5$ distribution.
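The truncated-summation strategy can be seen in miniature on the $\mathrm{FB}_5$ subfamily, whose normalizing constant has a classical series in modified Bessel functions (Kent, 1982). The sketch below is an illustration of that simpler series with a stopping condition, not the paper's $\mathrm{FB}_8$ expansion:

```python
import math
from scipy.special import iv  # modified Bessel function of the first kind

def kent_norm(kappa, beta, tol=1e-12, max_terms=200):
    """Normalizing constant of the Kent (FB5) density
    exp(kappa*cos(theta) + beta*sin(theta)^2*cos(2*phi)) on S^2,
    via Kent's (1982) series, truncated once terms become negligible."""
    total = 0.0
    for j in range(max_terms):
        term = (math.gamma(j + 0.5) / math.gamma(j + 1.0)
                * beta ** (2 * j)
                * (2.0 / kappa) ** (2 * j + 0.5)
                * iv(2 * j + 0.5, kappa))
        total += term
        if term < tol * total:  # adequate stopping condition
            break
    return 2.0 * math.pi * total

# Sanity check: with beta = 0 the series collapses to the
# von Mises-Fisher constant 4*pi*sinh(kappa)/kappa.
print(kent_norm(2.0, 0.0))
print(4.0 * math.pi * math.sinh(2.0) / 2.0)
```

The same pattern (sum terms, stop when the relative contribution drops below a tolerance) carries over to richer series, with the caveat that convergence speed depends on the concentration parameters.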
We present the asymptotic distribution for two-sided tests based on the profile likelihood ratio with lower and upper boundaries on the parameter of interest. This situation is relevant for branching ratios and the elements of unitary matrices such as the CKM matrix.
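As background to the two-boundary result, the classical one-boundary case is instructive: by Chernoff's theorem, the profile likelihood ratio statistic for a parameter pinned at a lower boundary is asymptotically a half-half mixture of $\chi^2_0$ and $\chi^2_1$. A toy Gaussian Monte Carlo (my illustration, not the paper's derivation) shows this directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: one observation x ~ N(mu, 1) with a physical boundary mu >= 0.
# Under the true value mu = 0, the MLE is mu_hat = max(x, 0), and the
# profile likelihood ratio statistic is q = x^2 if x > 0, else 0.
x = rng.standard_normal(200_000)
q = np.where(x > 0.0, x**2, 0.0)

# Chernoff: q ~ (1/2) chi^2_0 + (1/2) chi^2_1 asymptotically, so half of
# the mass sits at zero and tail probabilities are halved relative to chi^2_1.
print("P(q = 0)     =", np.mean(q == 0.0))   # ~0.5
print("P(q > 2.706) =", np.mean(q > 2.706))  # ~0.05 (half the chi^2_1 tail of 0.10)
```

With both a lower and an upper boundary, as for branching ratios or CKM elements, the mixture structure changes, which is the situation the asymptotic distribution above addresses.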
Fitting a simplifying model with several parameters to real data of complex objects is a highly nontrivial task, but it makes it possible to gain insight into the objects' physics. Here, we present a method to infer the parameters of the model, the model error, and the statistics of the model error. This method relies on using many data sets in a simultaneous analysis in order to overcome the problems caused by the degeneracy between model parameters and model error. Errors in the modelling of the measurement instrument can be absorbed into the model error, allowing for applications with complex instruments.
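The degeneracy-breaking idea can be sketched on a deliberately simple toy problem (all names, values, and the model form are my assumptions, not the paper's method): each data set shares one model parameter but carries its own model-error offset; with a single data set the two are indistinguishable, while a joint likelihood over many data sets separates them and additionally estimates the model-error variance:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy setup: K data sets share one model parameter theta; each data set
# carries its own model-error offset b_k ~ N(0, sigma_m^2), on top of
# known measurement noise with sigma_n = 0.1.
theta_true, sigma_m_true, sigma_n, K, N = 3.0, 0.5, 0.1, 50, 40
b = rng.normal(0.0, sigma_m_true, K)
data = theta_true + b[:, None] + rng.normal(0.0, sigma_n, (K, N))

def neg_log_like(params):
    theta, log_sigma_m = params
    # Marginalizing the per-data-set offset b_k makes each data-set mean
    # Gaussian with variance sigma_m^2 + sigma_n^2 / N.
    var = np.exp(2.0 * log_sigma_m) + sigma_n**2 / N
    ybar = data.mean(axis=1)
    return 0.5 * np.sum((ybar - theta) ** 2 / var + np.log(var))

res = minimize(neg_log_like, x0=[0.0, 0.0])
theta_hat, sigma_m_hat = res.x[0], np.exp(res.x[1])
print(theta_hat, sigma_m_hat)  # close to 3.0 and 0.5
```

With K = 1 the joint likelihood is flat along the theta/offset degeneracy; many data sets pin down theta as the ensemble mean and sigma_m as the ensemble spread.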
Inverse problems defined on the sphere arise in many fields, and are generally high-dimensional and computationally very complex. As a result, sampling the posterior of spherical inverse problems is a challenging task. In this work, we describe a framework that leverages a proximal Markov chain Monte Carlo algorithm to efficiently sample the high-dimensional space of spherical inverse problems with a sparsity-promoting wavelet prior. We detail the modifications needed for the algorithm to be applied to spherical problems, and give special consideration to the crucial forward modelling step which contains spherical harmonic transforms that are computationally expensive. By sampling the posterior, our framework allows for full and flexible uncertainty quantification, something which is not possible with other methods based on, for example, convex optimisation. We demonstrate our framework in practice on a common problem in global seismic tomography. We find that our approach is potentially useful for a wide range of applications at moderate resolutions.
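The core proximal MCMC idea can be illustrated in miniature with a MYULA-style update: the non-smooth sparsity-promoting prior enters through its proximal operator (soft thresholding for an L1 penalty), while the smooth likelihood enters through its gradient. The toy problem and all names below are my assumptions, not the framework's implementation, and in particular there is no spherical forward model here:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1 (sparsity-promoting, as a
    wavelet-domain L1 prior would be)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def myula(y, A, lam, sigma, n_steps=20_000, delta=0.01, seed=0):
    """MYULA-style sampler for p(x|y) prop. to
    exp(-||Ax - y||^2 / (2 sigma^2) - lam*||x||_1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    samples = []
    for _ in range(n_steps):
        grad = A.T @ (A @ x - y) / sigma**2  # smooth likelihood part
        # Moreau-Yosida smoothing of the prior: its gradient is
        # (x - prox(x)) / delta with the prox of delta*lam*||.||_1.
        grad += (x - soft_threshold(x, delta * lam)) / delta
        x = x - delta * grad + np.sqrt(2.0 * delta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples[n_steps // 2:])  # discard burn-in

# Tiny denoising example: one strong component, four pure-noise ones.
A = np.eye(5)
y = np.array([5.0, 0.0, 0.0, 0.0, 0.0])
post_mean = myula(y, A, lam=1.0, sigma=1.0).mean(axis=0)
print(post_mean)  # first entry shrunk toward ~4, the rest near 0
```

In the spherical setting the expensive part is the likelihood gradient, since each evaluation of `A` involves spherical harmonic transforms; the proximal step itself stays cheap.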
LISA is the upcoming space-based gravitational-wave telescope. LISA Pathfinder, to be launched in the coming years, will prove and verify the detection principle of the fundamental Doppler link of LISA on flight hardware identical in design to that of LISA. LISA Pathfinder will collect a picture of all noise disturbances possibly affecting LISA, achieving the unprecedented pureness of geodesic motion necessary for the detection of gravitational waves. The first steps of both missions will crucially depend on a very precise calibration of the key system parameters. Moreover, robust parameter estimation is of fundamental importance for the correct assessment of the residual force noise, an essential part of the data processing for LISA. In this paper we present a time-domain maximum-likelihood parameter estimation technique devised for this calibration, demonstrate its performance on simulated data, and validate it through Monte Carlo realizations of independent noise runs. We discuss its robustness to non-standard scenarios that may arise during the real-life mission, as well as its independence of the initial guess and its robustness to non-Gaussianities. Furthermore, we apply the same technique to data produced in mission-like fashion during operational exercises with a realistic simulator provided by ESA.
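The shape of such a calibration can be sketched generically: simulate the response of a parameterized system to a known stimulus, then recover the parameters by maximizing a time-domain Gaussian likelihood, which for white noise reduces to least squares on the residuals. The first-order system, stimulus, and all values below are my stand-ins, not the mission's actual dynamics:

```python
import numpy as np
from scipy.optimize import minimize

def model(gain, tau, u, dt):
    """Toy first-order instrument response y' = (gain*u - y)/tau,
    integrated with forward Euler (a stand-in for the real dynamics)."""
    y = np.zeros(u.size)
    for i in range(1, u.size):
        y[i] = y[i - 1] + dt * (gain * u[i - 1] - y[i - 1]) / tau
    return y

rng = np.random.default_rng(3)
dt, n = 0.01, 1000
u = np.ones(n)                       # calibration stimulus: a step input
truth = (2.0, 0.5)                   # (gain, tau) to be recovered
data = model(*truth, u, dt) + rng.normal(0.0, 0.05, n)

# With white Gaussian noise, the time-domain maximum-likelihood fit
# reduces to least squares on the residuals.
nll = lambda p: np.sum((data - model(p[0], p[1], u, dt)) ** 2)
res = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
gain_hat, tau_hat = res.x
print(gain_hat, tau_hat)  # close to 2.0 and 0.5
```

Repeating the fit over independent noise realizations, as the Monte Carlo validation above does, checks that the estimator is unbiased and that its scatter matches the expected statistical uncertainty.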
Using the latest numerical simulations of rotating stellar core collapse, we present a Bayesian framework to extract the physical information encoded in noisy gravitational wave signals. We fit Bayesian principal component regression models with known and unknown signal arrival times to reconstruct gravitational wave signals, and subsequently fit known astrophysical parameters on the posterior means of the principal component coefficients using a linear model. We predict the ratio of rotational kinetic energy to gravitational energy of the inner core at bounce by sampling from the posterior predictive distribution, and find that these predictions are generally very close to the true parameter values, with $90\%$ credible intervals $\sim 0.04$ and $\sim 0.06$ wide for the known and unknown arrival time models respectively. Two supervised machine learning methods are implemented to classify precollapse differential rotation, and we find that these methods discriminate rapidly rotating progenitors particularly well. We also introduce a constrained optimization approach to model selection to find an optimal number of principal components in the signal reconstruction step. Using this approach, we select 14 principal components as the most parsimonious model.
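The non-Bayesian skeleton of principal component regression is short: take the SVD of a waveform catalog, project onto the leading components, and regress the physical parameter on the coefficients. The toy catalog below (two fixed shapes weighted by a parameter, so two components suffice; the abstract selects 14 for the real signal space) is my illustration, not the paper's pipeline, whose Bayesian version places priors on the regression weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "catalog": each signal combines two fixed shapes with weights that
# depend on a parameter p (a stand-in for a physical quantity such as the
# rotational energy ratio).
t = np.linspace(0.0, 1.0, 200)
shape1 = np.sin(2.0 * np.pi * 3.0 * t)
shape2 = np.exp(-((t - 0.5) * 6.0) ** 2)
params = rng.uniform(0.5, 2.0, 100)
catalog = np.array([p * shape1 + p**2 * shape2 for p in params])

# Principal components of the catalog via SVD.
mean = catalog.mean(axis=0)
U, S, Vt = np.linalg.svd(catalog - mean, full_matrices=False)
pcs = Vt[:2]

# Regress the parameter on the PC coefficients by ordinary least squares.
coeffs = (catalog - mean) @ pcs.T
X = np.column_stack([np.ones(params.size), coeffs])
beta, *_ = np.linalg.lstsq(X, params, rcond=None)

# Predict the parameter of a new, noisy signal.
p_new = 1.3
noisy = p_new * shape1 + p_new**2 * shape2 + rng.normal(0.0, 0.02, t.size)
pred = np.concatenate(([1.0], (noisy - mean) @ pcs.T)) @ beta
print(pred)  # close to p_new = 1.3
```

The number of retained components trades reconstruction fidelity against overfitting, which is exactly the model-selection question the constrained optimization approach above addresses.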