Strong lensing is a sensitive probe of the small-scale density fluctuations in the Universe. We implement a novel approach to modeling strongly lensed systems using probabilistic cataloging, which is a transdimensional, hierarchical, and Bayesian framework to sample from a metamodel (union of models with different dimensionality) consistent with observed photon count maps. Probabilistic cataloging allows us to robustly characterize modeling covariances within and across lens models with different numbers of subhalos. Unlike traditional cataloging of subhalos, it does not require model subhalos to improve the goodness of fit above the detection threshold. Instead, it allows the exploitation of all information contained in the photon count maps, for instance, when constraining the subhalo mass function. We further show that, by not including these low-significance subhalos in the lens model, fixed-dimensional inference methods can significantly mismodel the data. Using a simulated Hubble Space Telescope (HST) dataset, we show that the subhalo mass function can be probed even when many subhalos in the sample catalogs are individually below the detection threshold and would be absent from a traditional catalog. With the planned Wide Field Infrared Survey Telescope (WFIRST), simultaneous probabilistic cataloging of dark subhalos in high-resolution, deep strong lens images has the potential to constrain the subhalo mass function at even lower masses.
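As an illustration of the transdimensional idea (not the authors' implementation), the sketch below runs a toy Metropolis sampler over a catalog of up to N_MAX Gaussian "perturbers" in a noisy one-dimensional signal, with on/off flags standing in for the birth and death of subhalos, so that the number of active components varies across the chain as in a probabilistic catalog. The profile shape, flat priors, noise level, and proposal scales are all assumptions made for the example; a full analysis would use a lensing forward model and reversible-jump moves with carefully matched proposal densities.

```python
# Toy sketch (not the authors' code): Metropolis sampling over a "catalog" of
# up to N_MAX Gaussian perturbers with on/off flags, fit to a noisy 1D signal.
# Toggling a flag lets the effective number of components vary across the
# chain, emulating the transdimensional metamodel.
import numpy as np

rng = np.random.default_rng(0)
N_MAX, N_PIX, SIGMA = 8, 200, 0.05
x = np.linspace(0.0, 1.0, N_PIX)

def model(active, pos, amp):
    """Sum of Gaussian bumps for the currently active components."""
    y = np.zeros_like(x)
    for on, p, a in zip(active, pos, amp):
        if on:
            y += a * np.exp(-0.5 * ((x - p) / 0.05) ** 2)
    return y

# Simulated data: two true components plus white noise.
data = model([1, 1], [0.3, 0.7], [0.3, 0.15]) + rng.normal(0.0, SIGMA, N_PIX)

def log_post(active, pos, amp):
    # Flat priors: positions in [0, 1], amplitudes in [0, 0.5].
    if np.any((pos < 0) | (pos > 1) | (amp < 0) | (amp > 0.5)):
        return -np.inf
    resid = data - model(active, pos, amp)
    return -0.5 * np.sum((resid / SIGMA) ** 2)

active = np.zeros(N_MAX, dtype=bool)          # start with an empty catalog
pos = rng.uniform(0.0, 1.0, N_MAX)
amp = rng.uniform(0.0, 0.5, N_MAX)
lp = log_post(active, pos, amp)
n_active = []

for step in range(20000):
    a2, p2, m2 = active.copy(), pos.copy(), amp.copy()
    k = rng.integers(N_MAX)
    if rng.random() < 0.3:                    # "birth/death": flip one flag
        a2[k] = not a2[k]
    else:                                     # within-model move: jitter one component
        p2[k] += 0.05 * rng.normal()
        m2[k] += 0.05 * rng.normal()
    lp2 = log_post(a2, p2, m2)
    if np.log(rng.random()) < lp2 - lp:       # symmetric proposals -> plain Metropolis
        active, pos, amp, lp = a2, p2, m2, lp2
    n_active.append(active.sum())

print("posterior mean number of active components:", np.mean(n_active[5000:]))
```

Because flipping a flag is a symmetric proposal, the plain Metropolis acceptance ratio suffices in this toy; the histogram of the number of active components across the chain is the analogue of the posterior over catalog dimensionality.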
We present a proof of concept of a novel, fully Bayesian methodology designed to detect halos of different masses in cosmological observations subject to noise and systematic uncertainties. Our methodology combines the previously published Bayesian large-scale structure inference algorithm HADES with a Bayesian chain rule (the Blackwell-Rao estimator), which we use to connect the inferred density field to the properties of dark matter halos. To demonstrate the capability of our approach, we construct a realistic mock galaxy catalogue emulating the wide-area 6-degree Field Galaxy Survey, which has a median redshift of approximately 0.05. Application of HADES to the catalogue provides us with accurately inferred three-dimensional density fields and a corresponding quantification of the uncertainties inherent to any cosmological observation. We then use a cosmological simulation to relate the amplitude of the density field to the probability of detecting a halo with mass above a specified threshold. With this information we can sum over the HADES density field realisations to construct maps of detection probabilities, and we demonstrate the validity of this approach within our mock scenario. We find that the probability of successful detection of halos in the mock catalogue increases with the signal-to-noise ratio of the local galaxy observations. Our proposed methodology can easily be extended to address more complex scientific questions and is a promising new tool for analysing the cosmic large-scale structure in observations.
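The final marginalization step can be sketched with all inputs mocked up: given posterior samples of the density field (here Gaussian stand-ins rather than actual HADES output) and a simulation-calibrated relation between the local density amplitude and the detection probability (here an assumed logistic form with invented parameters), the detection-probability map is simply the average of that relation over the realisations.

```python
# Toy sketch (hypothetical numbers, not the HADES pipeline): average a
# calibrated detection-probability relation over an ensemble of density-field
# realisations to obtain a per-voxel halo detection-probability map.
import numpy as np

rng = np.random.default_rng(1)
N_SAMPLES, GRID = 200, (32, 32, 32)

# Stand-in for posterior samples of the density contrast delta(x): a common
# underlying field plus sample-to-sample scatter mimicking the observational
# uncertainty encoded in the posterior.
delta_true = rng.normal(0.0, 1.0, GRID)
samples = delta_true + rng.normal(0.0, 0.5, (N_SAMPLES,) + GRID)

def p_detect_given_delta(delta, delta0=1.0, width=0.3):
    """Hypothetical calibration from simulations: probability that a voxel
    hosts a detectable halo above the mass threshold, as a function of the
    local density amplitude (logistic form assumed for illustration)."""
    return 1.0 / (1.0 + np.exp(-(delta - delta0) / width))

# Marginalise over the posterior samples: the detection-probability map is
# the mean of p(detect | delta) over the realisations.
p_map = p_detect_given_delta(samples).mean(axis=0)
print("map shape:", p_map.shape, " mean detection probability:", p_map.mean())
```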
The analysis of optical images of galaxy-galaxy strong gravitational lensing systems can provide important information about the distribution of dark matter at small scales. However, the modeling and statistical analysis of these images is extraordinarily complex, bringing together source image and main-lens reconstruction, hyperparameter optimization, and marginalization over small-scale structure realizations. We present a new analysis pipeline that tackles these diverse challenges by bringing together many recent machine learning developments in one coherent approach, including variational inference, Gaussian processes, differentiable probabilistic programming, and neural likelihood-to-evidence ratio estimation. Our pipeline enables: (a) fast reconstruction of the source image and lens mass distribution, (b) variational estimation of uncertainties, (c) efficient optimization of source regularization and other hyperparameters, and (d) marginalization over stochastic model components such as the distribution of substructure. We present preliminary results that demonstrate the validity of our approach.
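To make the last ingredient concrete, here is a toy sketch of likelihood-to-evidence ratio estimation on a one-dimensional Gaussian problem, not the pipeline itself: a classifier trained to separate matched pairs $(\theta, x)$ from shuffled pairs learns the ratio $p(x|\theta)/p(x)$. A logistic regression on quadratic features stands in for the neural network, and the simulator, prior, and feature choices are assumptions made for the example.

```python
# Minimal sketch of likelihood-to-evidence ratio estimation on a toy Gaussian
# problem.  A classifier trained to separate joint pairs (theta, x) from
# shuffled pairs learns p(x|theta)/p(x); a logistic regression on quadratic
# features plays the role of the neural network.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
N, SIGMA = 20_000, 0.5

# Toy simulator: theta ~ N(0, 1), x ~ N(theta, SIGMA^2).
theta = rng.normal(0.0, 1.0, N)
x = theta + rng.normal(0.0, SIGMA, N)

def features(t, x):
    # Quadratic features suffice for a Gaussian problem; a real pipeline
    # would use a neural network acting on images instead.
    return np.stack([np.ones_like(t), t, x, t * t, x * x, t * x], axis=-1)

# Class 1: matched (joint) pairs.  Class 0: theta paired with a shuffled x,
# i.e. samples from the product of the marginals.
phi = np.concatenate([features(theta, x), features(theta, rng.permutation(x))])
y = np.concatenate([np.ones(N), np.zeros(N)])

# Train the classifier by gradient descent on the binary cross-entropy.
w = np.zeros(phi.shape[1])
for _ in range(10_000):
    p = 1.0 / (1.0 + np.exp(-phi @ w))
    w -= 0.2 * phi.T @ (p - y) / len(y)

# The classifier's logit is an estimate of log p(x|theta) - log p(x).
def log_ratio(t, x):
    return features(np.atleast_1d(t), np.atleast_1d(x)) @ w

t0, x0 = 0.5, 0.8
analytic = (-0.5 * ((x0 - t0) / SIGMA) ** 2 - 0.5 * np.log(2 * np.pi * SIGMA**2)
            + 0.5 * x0**2 / (1 + SIGMA**2) + 0.5 * np.log(2 * np.pi * (1 + SIGMA**2)))
print("estimated log-ratio:", log_ratio(t0, x0)[0], " analytic:", analytic)
```

With enough training the estimated log-ratio should roughly track the analytic value; the same trick scales to image-level data once the linear model is replaced by a network.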
In this paper, we constrain the dimensionless Compton wavelength parameter $B_0$ of $f(R)$ gravity as well as the mass of the sterile neutrino by using cosmic microwave background observations, baryon acoustic oscillation surveys, and linear growth rate measurements. Since both the $f(R)$ model and the sterile neutrino generally predict scale-dependent growth rates, we utilize growth rate data measured in different wavenumber bins, within each of which the theoretical growth rate is approximately scale-independent. The employed growth rate data come from peculiar velocity measurements at $z=0$ in five wavenumber bins, and from redshift space distortion measurements at $z=0.25$ and $z=0.37$ in one wavenumber bin. By constraining the $f(R)$ model alone, we obtain a tight 95% upper limit of $\log_{10}B_0<-4.1$. This result is slightly weakened to $\log_{10}B_0<-3.8$ (at the $2\sigma$ level) once we simultaneously constrain the $f(R)$ model and the sterile neutrino mass, owing to the degeneracy between the parameters of the two models. For the massive sterile neutrino parameters, we obtain an effective sterile neutrino mass $m_{\nu,{\rm sterile}}^{\rm eff}<0.62$ eV ($2\sigma$) and an effective number of relativistic species $N_{\rm eff}<3.90$ ($2\sigma$) in the $f(R)$ model. As a comparison, we also obtain $m_{\nu,{\rm sterile}}^{\rm eff}<0.56$ eV ($2\sigma$) and $N_{\rm eff}<3.92$ ($2\sigma$) in the standard $\Lambda$CDM model.
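The fitting logic can be illustrated with a deliberately simplified sketch: hypothetical binned $f\sigma_8$ measurements are compared with a toy scale-dependent growth model, and a 95% upper limit on $\log_{10}B_0$ is read off from $\Delta\chi^2$. The enhancement function, data values, and errors below are invented for illustration and do not reproduce the $f(R)$ perturbation calculation or the combined CMB, BAO, and growth likelihood used in the paper.

```python
# Illustrative sketch (all numbers hypothetical): compare a toy scale-dependent
# growth model against mock f*sigma8 measurements in a few wavenumber bins and
# read off a 95% upper limit on log10(B0) from delta-chi^2.
import numpy as np

# Hypothetical binned measurements: bin-centre k [h/Mpc], f*sigma8, 1-sigma error.
k_bins = np.array([0.02, 0.05, 0.1, 0.2, 0.3])
fs8_obs = np.array([0.42, 0.43, 0.44, 0.46, 0.47])
fs8_err = np.array([0.05, 0.04, 0.04, 0.05, 0.06])

FS8_LCDM = 0.43  # toy scale-independent baseline

def fs8_model(k, log10_B0):
    """Toy enhancement of the growth rate below a B0-dependent scale; this
    simple form only mimics the qualitative f(R) behaviour."""
    B0 = 10.0 ** log10_B0
    k_c = 0.05 / np.sqrt(B0 + 1e-12)      # hypothetical transition scale
    return FS8_LCDM * (1.0 + 0.3 / (1.0 + (k_c / k) ** 2))

def chi2(log10_B0):
    resid = (fs8_obs - fs8_model(k_bins, log10_B0)) / fs8_err
    return np.sum(resid ** 2)

grid = np.linspace(-8.0, 0.0, 400)
dchi2 = np.array([chi2(b) for b in grid])
dchi2 -= dchi2.min()

# Approximate one-sided 95% bound from delta-chi^2 = 2.71 for one parameter.
allowed = grid[dchi2 < 2.71]
print("toy 95% upper limit: log10(B0) <", allowed.max())
```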
We argue that the global signal of the neutral hydrogen 21cm line can be a powerful probe of the primordial power spectrum on small scales. Since the amplitude of small-scale primordial fluctuations determines the onset of early structure formation, and hence the timing at which the sources of Lyman $\alpha$ photons appear, it in turn affects the neutral hydrogen 21cm line signal. We show that the position of the absorption trough can tightly constrain the small-scale amplitude of primordial fluctuations once the astrophysical parameters relevant to the 21cm line signal are fixed. We also discuss how uncertainties in these astrophysical parameters affect the constraints.
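As a minimal numerical illustration of why the trough position is informative: the observed trough frequency maps directly to the redshift of efficient Lyman $\alpha$ coupling via $\nu_{\rm obs}=\nu_{21}/(1+z)$, so a shift of the trough in frequency traces the timing of early star formation. The frequencies used below are purely illustrative.

```python
# Minimal illustration (toy numbers): convert an observed 21cm absorption
# trough frequency into the redshift it corresponds to.
NU_REST_MHZ = 1420.405751  # rest-frame 21cm frequency in MHz

def trough_redshift(nu_obs_mhz):
    """Redshift corresponding to an observed trough frequency."""
    return NU_REST_MHZ / nu_obs_mhz - 1.0

# Hypothetical example: enhanced small-scale power -> earlier Lyman-alpha
# sources -> the trough appears at higher redshift (lower frequency).
for nu in (78.0, 68.0):   # MHz, illustrative values
    print(f"trough at {nu:.0f} MHz corresponds to z = {trough_redshift(nu):.1f}")
```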
Gravitational lensing has emerged as a powerful probe of the matter distribution on subgalactic scales, which itself may contain important clues about the fundamental origins and properties of dark matter. Broadly speaking, two different approaches have been taken in the literature to map the small-scale structure of the Universe using strong lensing, with one focused on measuring the position and mass of a small number of discrete massive subhalos appearing close in projection to lensed images, and the other focused on detecting the collective effect of all the small-scale structure between the lensed source and the observer. In this paper, we follow the latter approach and perform a detailed study of the sensitivity of galaxy-scale gravitational lenses to the ensemble properties of small-scale structure. As in some previous studies, we adopt the language of the substructure power spectrum to characterize the statistical properties of the small-scale density field. We present a comprehensive theory that treats lenses with extended sources as well as those with time-dependent compact sources (such as quasars) in a unified framework for the first time. Our approach uses mode functions to provide both computational advantages and insights about couplings between the lens and source. The goal of this paper is to develop the theory and gain the intuition necessary to understand how the sensitivity to the substructure power spectrum depends on the source and lens properties, with the eventual aim of identifying the most promising targets for such studies.
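For readers unfamiliar with the statistic itself, the short sketch below measures the azimuthally averaged power spectrum of a toy two-dimensional convergence-perturbation map with an FFT, i.e. the quantity that the substructure power spectrum formalism constrains. The map is a Gaussian random field generated for the purpose, and normalisation conventions (box size, pixel area) are glossed over.

```python
# Toy sketch: azimuthally averaged power spectrum of a 2D map via FFT.
import numpy as np

rng = np.random.default_rng(3)
N = 256

# Toy isotropic Gaussian random field with a power-law spectrum P(k) ~ k^-2,
# standing in for a convergence-perturbation map.
kx = np.fft.fftfreq(N)
kgrid = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
amp = np.zeros_like(kgrid)
amp[kgrid > 0] = kgrid[kgrid > 0] ** -1.0            # sqrt(P) ~ k^-1
noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
field = np.fft.ifft2(amp * noise).real

# Bin |FFT|^2 in annuli of |k| (normalisation conventions omitted).
power2d = np.abs(np.fft.fft2(field)) ** 2
bins = np.linspace(0.0, 0.5, 25)
which = np.digitize(kgrid.ravel(), bins)
p_flat = power2d.ravel()
k_centres = 0.5 * (bins[1:] + bins[:-1])
pk = np.array([p_flat[which == i].mean() for i in range(1, len(bins))])

for k, p in zip(k_centres[:5], pk[:5]):
    print(f"k = {k:.3f}   binned power = {p:.3g}")
```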