
Weak lensing mass reconstruction using sparsity and a Gaussian random field

Added by Jean-Luc Starck
Publication date: 2021
Field: Physics
Language: English





We introduce a novel approach to reconstruct dark matter mass maps from weak gravitational lensing measurements. The cornerstone of the proposed method lies in a new modelling of the matter density field in the Universe as a mixture of two components: (1) a sparsity-based component that captures the non-Gaussian structure of the field, such as peaks or halos at different spatial scales; and (2) a Gaussian random field, which is known to represent well the linear characteristics of the field. We propose an algorithm, called MCAlens, that jointly estimates these two components. MCAlens is based on an alternating minimization incorporating both sparse recovery and a proximal iterative Wiener filtering. Experimental results on simulated data show that the proposed method exhibits improved estimation accuracy compared to state-of-the-art mass map reconstruction methods.
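To make the two-component model concrete, the sketch below alternates a Wiener-filtering step for the Gaussian part with a soft-thresholding (sparsity) step for the peak-like part. This is a minimal illustration of the idea, not the published MCAlens code: the direct thresholding of the residual (rather than of wavelet coefficients), the assumed power spectra, and all parameter values are my own illustrative choices.

```python
# Illustrative two-component reconstruction: sparse peaks + Gaussian field.
# This is a simplified sketch of the idea behind MCAlens, NOT the published
# algorithm; the priors, transforms and operators below are assumptions.
import numpy as np

def wiener_step(data, p_signal, p_noise):
    """Fourier-space Wiener filter assuming a known (isotropic) signal power."""
    d_hat = np.fft.fft2(data)
    w = p_signal / (p_signal + p_noise)          # Wiener gain per mode
    return np.fft.ifft2(w * d_hat).real

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (sparsity prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def two_component_reconstruction(kappa_obs, p_signal, p_noise,
                                 lam=0.05, n_iter=50):
    """Alternately fit a sparse component and a Gaussian component to the map."""
    sparse = np.zeros_like(kappa_obs)
    gauss = np.zeros_like(kappa_obs)
    for _ in range(n_iter):
        # Gaussian part: Wiener-filter the residual left by the sparse part.
        gauss = wiener_step(kappa_obs - sparse, p_signal, p_noise)
        # Sparse part: threshold the residual left by the Gaussian part
        # (a full implementation would threshold wavelet coefficients instead).
        sparse = soft_threshold(kappa_obs - gauss, lam)
    return sparse, gauss

# Toy usage on a random "observed" convergence map.
rng = np.random.default_rng(0)
kappa_obs = rng.normal(size=(128, 128))
ky, kx = np.meshgrid(np.fft.fftfreq(128), np.fft.fftfreq(128), indexing="ij")
k2 = kx**2 + ky**2
p_signal = 1.0 / (k2 + 1e-3)     # assumed power-law-like signal spectrum
p_noise = 1.0                    # assumed flat noise spectrum
sparse_map, gauss_map = two_component_reconstruction(kappa_obs, p_signal, p_noise)
```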



Related research

Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12 per cent improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated $\Lambda$CDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the concentration of the phase residuals is improved by 17 per cent by GLIMPSE and by 18 per cent by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased by 18 per cent by the Wiener filter and by 32 per cent by GLIMPSE.
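For reference, the KS step mentioned above is a direct Fourier-space inversion of the shear-to-convergence relation. A minimal flat-sky sketch is given below; the gridding convention and the handling of the $k=0$ mode are my own choices, and no masking or noise treatment is included, which is precisely the limitation the comparison highlights.

```python
# Minimal flat-sky Kaiser-Squires inversion of a gridded shear field.
# A sketch only: no treatment of masks, borders or noise.
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Recover the convergence kappa from the two shear components."""
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[np.newaxis, :]
    k2 = np.fft.fftfreq(ny)[:, np.newaxis]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0                       # avoid division by zero at k = 0
    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)
    kappa_hat = ((k1**2 - k2**2) * g1_hat + 2.0 * k1 * k2 * g2_hat) / k_sq
    kappa_hat[0, 0] = 0.0                  # the mean of kappa is unconstrained
    return np.fft.ifft2(kappa_hat).real
```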
We propose a novel method to reconstruct high-resolution three-dimensional mass maps using data from photometric weak-lensing surveys. We apply an adaptive LASSO algorithm to perform a sparsity-based reconstruction on the assumption that the underlying cosmic density field is represented by a sum of Navarro-Frenk-White halos. We generate realistic mock galaxy shape catalogues by considering the shear distortions from isolated halos for configurations matched to the Subaru Hyper Suprime-Cam Survey with its photometric redshift estimates. We show that the adaptive method significantly reduces line-of-sight smearing that is caused by the correlation between the lensing kernels at different redshifts. Lensing clusters with lower mass limits of $10^{14.0}\,h^{-1}M_{\odot}$, $10^{14.7}\,h^{-1}M_{\odot}$, and $10^{15.0}\,h^{-1}M_{\odot}$ can be detected with 1.5-$\sigma$ confidence at the low ($z<0.3$), median ($0.3\leq z<0.6$), and high ($0.6\leq z<0.85$) redshifts, respectively, with an average false detection rate of 0.022 deg$^{-2}$. The estimated redshifts of the detected clusters are systematically lower than the true values by $\Delta z \sim 0.03$ for halos at $z\leq 0.4$, but the relative redshift bias is below $0.5\%$ for clusters at $0.4<z\leq 0.85$. The standard deviation of the redshift estimation is $0.092$. Our method enables direct three-dimensional cluster detection with accurate redshift estimates.
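The adaptive LASSO mentioned above reweights the $\ell_1$ penalty using a pilot estimate, which suppresses the small spurious amplitudes responsible for line-of-sight smearing. A hedged sketch of that two-stage reweighting is shown below; the design matrix `A` stands in for the lensing response of candidate NFW halos on a 3D grid, and `lam`, `gamma` are illustrative values rather than those used in the paper.

```python
# Sketch of the adaptive LASSO reweighting trick used for sparse halo fitting.
# The design matrix A would contain the lensing response of candidate NFW
# halos; here it is a random placeholder, and lam, gamma are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(A, y, lam=0.01, gamma=1.0, eps=1e-6):
    """Two-stage adaptive LASSO: ordinary LASSO, then a reweighted fit."""
    # Stage 1: ordinary LASSO gives pilot amplitudes for each candidate halo.
    pilot = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(A, y)
    weights = 1.0 / (np.abs(pilot.coef_) ** gamma + eps)
    # Stage 2: rescale columns so a plain LASSO solves the weighted problem.
    A_scaled = A / weights[np.newaxis, :]
    refit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(A_scaled, y)
    return refit.coef_ / weights           # undo the column rescaling

# Toy usage: 500 shear measurements, 200 candidate halos, 5 of them real.
rng = np.random.default_rng(1)
A = rng.normal(size=(500, 200))
x_true = np.zeros(200)
x_true[:5] = 3.0
y = A @ x_true + 0.1 * rng.normal(size=500)
x_hat = adaptive_lasso(A, y)
```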
We introduce a novel method for reconstructing the projected matter distributions of galaxy clusters with weak-lensing (WL) data based on a convolutional neural network (CNN). Training datasets are generated with ray-tracing through cosmological simulations. We control the noise level of the galaxy shear catalog such that it mimics the typical properties of existing ground-based WL observations of galaxy clusters. We find that the mass reconstruction by our multi-layered CNN, with an architecture of alternating convolution and trans-convolution filters, significantly outperforms the traditional reconstruction methods. The CNN method provides better pixel-to-pixel correlations with the truth, restores more accurate positions of the mass peaks, and more efficiently suppresses artifacts near the field edges. In addition, the CNN mass reconstruction lifts the mass-sheet degeneracy when applied to sufficiently large fields. This implies that the CNN algorithm can be used to measure cluster masses in a model-independent way for future wide-field WL surveys.
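A toy version of an encoder-decoder with alternating convolution and transposed-convolution (trans-convolution) layers is sketched below in PyTorch. The depth, channel counts, and kernel sizes are assumptions for illustration only and do not reproduce the published architecture.

```python
# Illustrative encoder-decoder with alternating convolution and
# transposed-convolution layers; NOT the published network.
import torch
import torch.nn as nn

class MassMapCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, stride=2, padding=1),   # shear g1, g2 in
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),  # kappa out
        )

    def forward(self, shear):
        return self.net(shear)

# Toy forward pass on a batch of 64x64 shear maps.
model = MassMapCNN()
shear = torch.randn(4, 2, 64, 64)
kappa = model(shear)          # shape (4, 1, 64, 64)
```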
In this paper, we compare three methods to reconstruct galaxy cluster density fields with weak lensing data. The first method, called FLens, integrates an inpainting concept to invert the shear field with possible gaps, and a multi-scale entropy denoising procedure to remove the noise contained in the final reconstruction, which arises mostly from the random intrinsic shapes of the galaxies. The second and third methods are based on a model of the density field made of a multi-scale grid of radial basis functions. In one case, the model parameters are computed with a linear inversion involving a singular value decomposition (SVD). In the other case, the model parameters are estimated using a Bayesian MCMC optimization implemented in the lensing software Lenstool. Methods are compared on simulated data with varying galaxy density fields. We pay particular attention to the errors estimated with resampling. We find the multi-scale grid model optimized with MCMC to provide the best results, but at high computational cost, especially when considering resampling. The SVD method is much faster but yields noisy maps, although this can be mitigated with resampling. The FLens method is a good compromise, with fast computation and high signal-to-noise reconstructions, but lower-resolution maps. All three methods are applied to the MACS J0717+3745 galaxy cluster field, and reveal the filamentary structure discovered in Jauzac et al. 2012. We conclude that sensitive priors can help to obtain high signal-to-noise, unbiased reconstructions.
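The SVD-based linear inversion mentioned above amounts to a truncated least-squares solution for the radial-basis-function amplitudes. The sketch below illustrates that step with a generic design matrix; the truncation level `rcond` and the toy data are placeholders, and the Lenstool/MCMC pipeline is not reproduced here.

```python
# Sketch of an SVD-based linear inversion for a radial-basis-function model
# of the density field: solve y = A x in the least-squares sense, truncating
# small singular values to control noise.
import numpy as np

def svd_inversion(A, y, rcond=1e-2):
    """Truncated-SVD solution of the linear model y = A @ x."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s.max()                  # drop noise-dominated modes
    s_inv = np.where(keep, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ y))

# Toy usage: 300 shear data points, 50 RBF amplitudes.
rng = np.random.default_rng(2)
A = rng.normal(size=(300, 50))
x_true = rng.normal(size=50)
y = A @ x_true + 0.05 * rng.normal(size=300)
x_hat = svd_inversion(A, y)
```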
We use a dense redshift survey in the foreground of the Subaru GTO 2 deg$^2$ weak lensing field (centered at $\alpha_{2000} = 16^{\rm h}04^{\rm m}44^{\rm s}$; $\delta_{2000} = 43^\circ 11' 24''$) to assess the completeness and comment on the purity of massive halo identification in the weak lensing map. The redshift survey (published here) includes 4541 galaxies; 4405 are new redshifts measured with the Hectospec on the MMT. Among the weak lensing peaks with a signal-to-noise greater than 4.25, 2/3 correspond to individual massive systems; this result is essentially identical to the Geller et al. (2010) test of the Deep Lens Survey field F2. The Subaru map, based on images in substantially better seeing than the DLS, enables detection of less massive halos at fixed redshift, as expected. We demonstrate that the procedure adopted by Miyazaki et al. (2007) for removing some contaminated peaks from the weak lensing map improves agreement between the lensing map and the redshift survey in the identification of candidate massive systems.
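As a rough illustration of the peak selection described above (local maxima of the signal-to-noise map above 4.25), a minimal peak finder is sketched below; the smoothing scale and window size are assumptions, not the values used in the survey analysis.

```python
# Minimal local-maximum peak finder on a signal-to-noise map, mimicking a
# "peaks with S/N > 4.25" selection. Smoothing and window size are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_peaks(snr_map, threshold=4.25, size=5):
    """Return (row, col) positions of local maxima above the S/N threshold."""
    smoothed = gaussian_filter(snr_map, sigma=1.0)
    local_max = maximum_filter(smoothed, size=size) == smoothed
    return np.argwhere(local_max & (smoothed > threshold))

# Toy usage on a noise map with one injected peak.
rng = np.random.default_rng(3)
snr = rng.normal(size=(200, 200))
snr[100, 100] += 8.0
peaks = find_peaks(snr)
```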