We have developed a maximum likelihood source detection method capable of detecting ultra-faint streaks with surface brightnesses approximately an order of magnitude fainter than the pixel-level noise. Our maximum likelihood detection method is a model-based approach that requires no a priori knowledge of the streak location, orientation, length, or surface brightness. This method enables the discovery of objects that would otherwise go undetected, and permits the use of low-cost (i.e., higher-noise) sensors. The method also easily accommodates multi-epoch co-addition. We present results from applying this method to simulations, as well as to real observations of objects in low Earth orbit.
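As a rough illustration of the form such a detection takes, the sketch below evaluates a matched-filter statistic, which is the maximum likelihood detection statistic for a streak of known shape in white Gaussian pixel noise, maximized over a grid of streak parameters. The template profile, parameter grid, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def streak_template(shape, x0, y0, theta, length, width=1.0):
    """Unit-normalized streak model: a line segment of the given length,
    starting at (x0, y0) with orientation theta, convolved with a
    Gaussian cross-section of the given width (pixels)."""
    yy, xx = np.indices(shape)
    dx, dy = xx - x0, yy - y0
    # Coordinates along and across the streak direction.
    along = dx * np.cos(theta) + dy * np.sin(theta)
    across = -dx * np.sin(theta) + dy * np.cos(theta)
    on_segment = (along > 0) & (along < length)
    t = on_segment * np.exp(-0.5 * (across / width) ** 2)
    norm = np.sqrt((t ** 2).sum())
    return t / norm if norm > 0 else t

def matched_filter_snr(image, sigma, params):
    """Maximum likelihood detection statistic (matched-filter S/N)
    maximized over a grid of candidate (x0, y0, theta, length) tuples,
    assuming white Gaussian pixel noise of known rms sigma."""
    best = -np.inf
    for (x0, y0, theta, length) in params:
        t = streak_template(image.shape, x0, y0, theta, length)
        snr = (t * image).sum() / sigma  # template is unit-normalized
        best = max(best, snr)
    return best
```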
The AGILE space mission (whose instrument is sensitive in the energy ranges 18-60 keV and 30 MeV-50 GeV) has been operating since 2007. Assessing the statistical significance of the time variability of gamma-ray sources above 100 MeV is a primary task of the AGILE data analysis. In particular, it is important to check the instrument sensitivity in terms of Poisson modeling of the data background, and to determine the post-trial confidence of detections. The goals of this work are: (i) evaluating the distributions of the likelihood ratio test for empty fields and for regions of the Galactic plane; (ii) calculating the probability of false detection over multiple time intervals. In this paper we describe in detail the techniques used to search for short-term variability in the AGILE gamma-ray source database. We describe the binned maximum likelihood method used for the analysis of AGILE data, and the numerical simulations that support the characterization of the statistical analysis. We apply our method to both Galactic and extragalactic transients, and provide a few examples. After checking the reliability of the statistical description against real AGILE data, we obtain the distribution of p-values for blind and specific source searches. We apply our results to the determination of the post-trial statistical significance of detections of transient gamma-ray sources in terms of pre-trial values. The results of our analysis allow a precise determination of the post-trial significance of gamma-ray sources detected by AGILE.
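To illustrate the pre-trial to post-trial conversion, the sketch below applies the standard correction for N independent trials; the numbers are hypothetical, and the actual trials factor in the paper is calibrated from simulations of empty fields rather than an independence assumption.

```python
import numpy as np
from scipy.stats import norm

def post_trial_p(p_pre, n_trials):
    """Post-trial false-detection probability for n_trials independent
    searches (illustrative; the paper calibrates the effective trials
    factor from simulated empty fields)."""
    return 1.0 - (1.0 - p_pre) ** n_trials

def sigma_from_p(p):
    """One-sided Gaussian-equivalent significance for a p-value."""
    return norm.isf(p)

# Example: a 4-sigma pre-trial detection searched over 100 time bins.
p_pre = norm.sf(4.0)
print(sigma_from_p(post_trial_p(p_pre, 100)))  # ~2.7 sigma post-trial
```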
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by $\sim 21\%$ when an unbinned, maximum likelihood method is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, non-ideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by $20\%$ when the maximum likelihood method is used instead of the standard method.
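A minimal sketch of the unbinned approach, assuming the ideal modulation curve with a fixed, known modulation factor; a real analysis such as the one described here folds the full per-event instrument response into the likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, eta):
    """Unbinned negative log-likelihood for azimuthal scattering angles
    eta (radians), assuming the ideal modulation curve
    f(eta) = [1 + mu * p * cos(2 * (eta - phi0))] / (2 * pi).
    The modulation factor mu is taken as known for the instrument;
    the value and sign convention here are illustrative."""
    p, phi0 = params
    mu = 0.5  # illustrative modulation factor
    f = (1.0 + mu * p * np.cos(2.0 * (eta - phi0))) / (2.0 * np.pi)
    return -np.sum(np.log(f))

def fit_polarization(eta):
    """Maximize the likelihood over polarization fraction p and angle phi0."""
    res = minimize(neg_log_likelihood, x0=[0.1, 0.0], args=(eta,),
                   bounds=[(0.0, 1.0), (-np.pi / 2, np.pi / 2)])
    return res.x  # (p_hat, phi0_hat)
```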
The problem of astrometry is revisited from the perspective of analyzing the attainability of well-known performance limits (the Cramér-Rao bound) for the estimation of the relative position of light-emitting (usually point-like) sources on a CCD-like detector using commonly adopted estimators such as weighted least squares and maximum likelihood. Novel technical results are presented to determine the performance of an estimator that corresponds to the solution of an optimization problem in the context of astrometry. Using these results we are able to place stringent bounds on the bias and the variance of the estimators in closed form as a function of the data. We confirm these results through comparisons with numerical simulations under a broad range of realistic observing conditions. The maximum likelihood and weighted least-squares estimators are analyzed. We confirm the sub-optimality of the weighted least-squares scheme at medium to high signal-to-noise ratios, as found in an earlier study for the (unweighted) least-squares method. We find that the maximum likelihood estimator achieves the optimal performance limits across a wide range of relevant observational conditions. Furthermore, from our results, we provide concrete insights for adopting an adaptive weighted least-squares estimator that can be regarded as a computationally efficient alternative to the optimal maximum likelihood solution. We provide, for the first time, closed-form analytical expressions that bound the bias and the variance of the weighted least-squares and maximum likelihood implicit estimators for astrometry using a Poisson-driven detector. These expressions can be used to formally assess the precision attainable by these estimators in comparison with the minimum variance bound.
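For context, the Cramér-Rao bound in this setting follows from the Fisher information of the Poisson pixel counts, $\mathrm{Var}(\hat{x}_c) \ge \left[\sum_i (d\lambda_i/dx_c)^2/\lambda_i\right]^{-1}$. The sketch below evaluates it for a toy 1-D Gaussian PSF with a flat background, which greatly simplifies the paper's detector model; all names and parameter values are illustrative.

```python
import numpy as np

def cramer_rao_bound(xc, pixels, flux, sigma_psf, background, eps=1e-6):
    """Cramer-Rao lower bound on the variance of a 1-D source position
    estimate under Poisson pixel noise. Toy setup: Gaussian PSF sampled
    at pixel centers, flat background per pixel, unit pixel size."""
    def expected_counts(x0):
        g = np.exp(-0.5 * ((pixels - x0) / sigma_psf) ** 2)
        return flux * g / g.sum() + background
    lam = expected_counts(xc)
    # Numerical derivative of the expected counts w.r.t. position.
    dlam = (expected_counts(xc + eps) - expected_counts(xc - eps)) / (2 * eps)
    fisher = np.sum(dlam ** 2 / lam)
    return 1.0 / fisher

# Example: a bright source centered on a 21-pixel window.
pix = np.arange(21.0)
print(np.sqrt(cramer_rao_bound(10.0, pix, flux=1e4, sigma_psf=1.5,
                               background=100.0)))  # positional precision in pixels
```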
Dwarf spheroidal galaxies are the smallest known stellar systems in which, under Newtonian interpretations, a significant amount of dark matter is required to explain the observed kinematics. In fact, they are in this sense the most heavily dark matter dominated objects known. That, plus the increasingly small sizes of the newly discovered ultra-faint dwarfs, puts these systems in the regime where dynamical friction on individual stars starts to become relevant. We calculate the dynamical friction timescales for pressure-supported, isotropic, spherical, dark matter dominated stellar systems, yielding $\tau_{DF} = 0.93\, (r_{h}/10\,\mathrm{pc})^{2}\, (\sigma/\mathrm{km\,s^{-1}})\, \mathrm{Gyr}$, where $r_{h}$ is the half-light radius. For a stellar velocity dispersion of $3\,\mathrm{km\,s^{-1}}$, typical of the smallest of the recently detected ultra-faint dwarf spheroidals, the dynamical friction timescale becomes smaller than the $10\,\mathrm{Gyr}$ typical of the stellar ages of these systems for $r_{h} < 19\,\mathrm{pc}$. This radius thus becomes a theoretical lower limit below which dark matter dominated stellar systems become unstable to dynamical friction. We present a comparison with the structural parameters of the smallest ultra-faint dwarf spheroidals known, showing that these are already close to the derived stability limit; any future detection of yet smaller such systems would be inconsistent with a particle dark matter hypothesis.
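Inverting the timescale formula for the radius at which $\tau_{DF}$ equals a given stellar age reproduces the quoted limit; a minimal check (function name is ours):

```python
import numpy as np

# Invert tau_DF = 0.93 * (r_h / 10 pc)^2 * (sigma / km s^-1) Gyr
# for the half-light radius at which tau_DF equals a given age.
def critical_radius_pc(sigma_kms, age_gyr=10.0):
    """Half-light radius (pc) below which the dynamical friction
    timescale drops under the assumed stellar age."""
    return 10.0 * np.sqrt(age_gyr / (0.93 * sigma_kms))

print(critical_radius_pc(3.0))  # ~18.9 pc, matching the quoted 19 pc limit
```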
We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al. (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting WMAP as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loève and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase in any single multipole of 3.8% at $\ell \le 32$, and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of $0.006\sigma$. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation that requires less than 3 GB of memory and 2 CPU minutes per iteration for $\ell \le 32$, rendering low-$\ell$ QML CMB power spectrum analysis fully tractable on a standard laptop.
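A minimal sketch of signal-to-noise eigenbasis compression, assuming dense signal and noise pixel-pixel covariance matrices are available; the eigenvalue cut shown is an illustrative placeholder for the tuning against multipole errors described in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def sn_compress(data, S, N, lam_min=1e-3):
    """Compress map data into the signal-to-noise eigenvector basis.
    Solves the generalized eigenproblem S v = lam N v and keeps modes
    with eigenvalue above lam_min. S and N are the signal and noise
    pixel-pixel covariance matrices; lam_min is an illustrative cut."""
    lam, V = eigh(S, N)   # eigenvalues in ascending order
    keep = lam > lam_min
    B = V[:, keep]        # retained basis vectors
    d_c = B.T @ data      # compressed data vector
    S_c = B.T @ S @ B     # covariances projected into the compressed basis
    N_c = B.T @ N @ B
    return d_c, S_c, N_c
```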