
High Resolution Weak Lensing Mass-Mapping Combining Shear and Flexion

Added by: Francois Lanusse
Publication date: 2016
Field: Physics
Language: English





We propose a new mass-mapping algorithm, specifically designed to recover small-scale information from a combination of gravitational shear and flexion. Including flexion allows us to supplement the shear on small scales, increasing the sensitivity to substructures and the overall resolution of the convergence map without relying on strong lensing constraints. In order to preserve all available small-scale information, we avoid any binning of the irregularly sampled input shear and flexion fields and treat the mass-mapping problem as a general ill-posed inverse problem, regularised using a robust multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators. We test our reconstruction method on a set of realistic weak lensing simulations corresponding to typical HST/ACS cluster observations and demonstrate that including flexion allows us to recover substructures that are lost if only shear information is used. In particular, we can detect substructures at the 15$^{\prime\prime}$ scale well outside of the critical region of the clusters. In addition, flexion also helps to constrain the shape of the central regions of the main dark matter halos. Our mass-mapping software, called Glimpse2D, is made freely available at http://www.cosmostat.org/software/glimpse.
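The reconstruction described above is, at its core, a sparsity-regularised linear inverse problem. Below is a minimal sketch of that idea on a regular grid, not the actual Glimpse2D algorithm (which avoids binning, works on irregularly sampled reduced shear and flexion per galaxy, and uses a multi-scale wavelet prior): an iterative soft-thresholding (ISTA) loop alternating a Fourier-space Kaiser-Squires gradient step with a wavelet sparsity constraint. The db2 wavelet, threshold, and iteration count are illustrative assumptions; it requires numpy and PyWavelets.

```python
import numpy as np
import pywt

def reconstruct(gamma, n_iter=200, lam=0.05, wavelet='db2', level=4):
    """Recover a convergence map from a gridded complex shear field
    gamma = gamma1 + 1j*gamma2 (grid sides assumed powers of two)."""
    ny, nx = gamma.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing='ij')
    ksq = kx**2 + ky**2
    ksq[0, 0] = 1.0                      # avoid division by zero at the DC mode
    D = ((kx**2 - ky**2) + 2j * kx * ky) / ksq   # Kaiser-Squires operator, |D| = 1
    kappa = np.zeros((ny, nx))
    for _ in range(n_iter):
        # gradient step on the data-fidelity term ||gamma - D kappa||^2
        residual = gamma - np.fft.ifft2(D * np.fft.fft2(kappa))
        kappa = kappa + np.real(np.fft.ifft2(np.conj(D) * np.fft.fft2(residual)))
        # proximal step: soft-threshold the wavelet coefficients (sparsity prior)
        coeffs = pywt.wavedec2(kappa, wavelet, level=level)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, lam, mode='soft') for c in detail)
            for detail in coeffs[1:]
        ]
        kappa = pywt.waverec2(coeffs, wavelet)
    return kappa
```

A unit step size is safe here because the Kaiser-Squires operator has unit modulus away from the DC mode; extending the data term with flexion would add a second, $k$-weighted Fourier operator to the same loop.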




Read More

Gravitational lensing has long been considered a valuable tool for determining the total mass of galaxy clusters. The shear profile inferred from the ellipticity statistics of background galaxies probes the intermediate and outer regions of a cluster and thus yields a virial mass estimate. However, the mass-sheet degeneracy and the need for a large number of background galaxies motivate the search for alternative tracers that can break the degeneracy among model parameters and hence improve the accuracy of the mass estimate. Lensing flexion, i.e. the third derivative of the lensing potential, has been suggested as a good answer to this quest, since it probes the details of the mass profile. We investigate here whether this is indeed the case by jointly using weak lensing shear, magnification, and flexion. We use a Fisher matrix analysis to forecast the relative improvement in the mass accuracy for different assumptions on the shear and flexion signal-to-noise (S/N) ratios, also varying the cluster mass, redshift, and ellipticity. It turns out that the error on the cluster mass may be reduced by up to a factor of 2 for reasonable values of the flexion S/N ratio. As a general result, we find that the improvement in mass accuracy is larger for more flattened haloes, but extracting general trends is difficult because of the many parameters at play. We nevertheless find that flexion is as efficient as magnification at increasing the accuracy of both the mass and concentration determination.
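The abstract's forecasting logic can be made concrete with a toy Fisher-matrix calculation. In the sketch below the matrices are hypothetical numbers for the parameters $(\log M, c)$, not values from the paper; the point is only that independent probes add at the Fisher level, and the marginalised mass error is the square root of the corresponding diagonal element of the inverse.

```python
import numpy as np

# Hypothetical Fisher matrices for (log M, concentration) from each probe.
F_shear   = np.array([[ 40.0, -12.0],
                      [-12.0,   9.0]])
F_flexion = np.array([[ 90.0,  -5.0],
                      [ -5.0,   4.0]])

def sigma_logM(F):
    """Marginalised 1-sigma error on log M: sqrt of (F^-1)[0, 0]."""
    return np.sqrt(np.linalg.inv(F)[0, 0])

print(sigma_logM(F_shear))              # shear alone:   ~0.204
print(sigma_logM(F_shear + F_flexion))  # shear+flexion: ~0.096, roughly halved
```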
Current theories of structure formation predict specific density profiles of galaxy dark matter haloes, and with weak gravitational lensing we can probe these profiles on several scales. On small scales, higher-order shape distortions known as flexion add significant detail to the weak lensing measurements. We present the first detection of a galaxy-galaxy flexion signal in space-based data, obtained using a new Shapelets pipeline introduced in this work. We combine this higher-order lensing signal with shear to constrain the average density profile of the galaxy lenses in the Hubble Space Telescope COSMOS survey. We also show that light from nearby bright objects can significantly affect flexion measurements. After correcting for the influence of lens light, we show that the inclusion of flexion provides tighter constraints on density profiles than does shear alone. Finally, we find an average density profile consistent with an isothermal sphere.
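For context on why flexion sharpens the profile constraint, recall the standard weak-lensing quantities for a singular isothermal sphere of Einstein radius $\theta_E$ (a textbook flexion result quoted here for illustration, not a derivation from the paper): shear falls as $\theta^{-1}$ while both flexions fall as $\theta^{-2}$, so flexion weights the inner, small-scale profile much more strongly.

```latex
\kappa(\theta) = |\gamma(\theta)| = \frac{\theta_E}{2\theta}, \qquad
|\mathcal{F}(\theta)| = \frac{\theta_E}{2\theta^{2}}, \qquad
|\mathcal{G}(\theta)| = \frac{3\,\theta_E}{2\theta^{2}}
```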
The complete 10-year survey from the Large Synoptic Survey Telescope (LSST) will image $\sim$20,000 square degrees of sky in six filter bands every few nights, bringing the final survey depth to $r\sim27.5$, with over 4 billion well-measured galaxies. To take full advantage of this unprecedented statistical power, the systematic errors associated with weak lensing measurements need to be controlled to a level similar to the statistical errors. This work is the first attempt to quantitatively estimate the absolute level and statistical properties of the systematic errors on weak lensing shear measurements due to the most important physical effects in the LSST system via high-fidelity ray-tracing simulations. We identify and isolate the different sources of algorithm-independent, additive systematic errors on shear measurements for LSST and predict their impact on the final cosmic shear measurements using conventional weak lensing analysis techniques. We find that the main source of the errors comes from an inability to adequately characterise the atmospheric point spread function (PSF) due to its high-frequency spatial variation on angular scales smaller than $\sim10'$ in the single short exposures, which propagates into a spurious shear correlation function at the $10^{-4}$--$10^{-3}$ level on these scales. With the large multi-epoch dataset that will be acquired by LSST, the stochastic errors average out, bringing the final spurious shear correlation function to a level very close to the statistical errors. Our results imply that the cosmological constraints from LSST will not be severely limited by these algorithm-independent, additive systematic effects.
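The averaging argument at the end of this abstract is easy to verify numerically. The following sketch uses invented numbers (it is not the ray-tracing simulation): an additive shear error drawn independently per exposure has its zero-lag correlation, i.e. its variance, suppressed by the number of exposures.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gal, n_exp = 100_000, 200
sigma_sys = 3e-2          # hypothetical per-exposure spurious-shear rms

single  = rng.normal(0.0, sigma_sys, size=n_gal)
stacked = rng.normal(0.0, sigma_sys, size=(n_exp, n_gal)).mean(axis=0)

print(single.var())       # ~ sigma_sys**2          ~ 9.0e-4
print(stacked.var())      # ~ sigma_sys**2 / n_exp  ~ 4.5e-6
```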
3D data compression techniques can be used to determine the natural basis of radial eigenmodes that encode the maximum amount of information in a tomographic large-scale structure survey. We explore the potential of the Karhunen-Loève decomposition for reducing the dimensionality of the data vector for cosmic shear measurements, and apply it to the final data from the CFHTLenS survey. We find that practically all of the cosmological information can be encoded in a single radial eigenmode, from which we are able to reproduce constraints compatible with those found in the fiducial tomographic analysis (done with 7 redshift bins) with a factor of ~30 fewer data points. This simplifies the problem of computing the two-point function covariance matrix from mock catalogues by the same factor, or by a factor of ~800 for an analytical covariance. The resulting set of radial eigenfunctions is close to $\ell$-independent, and therefore they can be used as redshift-dependent galaxy weights. This simplifies the application of the Karhunen-Loève decomposition to real-space and Fourier-space data, and allows one to explore the effective radial window function of the principal eigenmodes, as well as the associated shear maps, in order to identify potential systematics. We also apply the method to extended parameter spaces and verify that additional information may be gained by including a second mode to break parameter degeneracies. The data and analysis code are publicly available at https://github.com/emiliobellini/kl_sample.
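The compression itself reduces to a small generalized eigenvalue problem. The sketch below shows the generic Karhunen-Loève construction with toy covariances (n_z, S, and N are invented for illustration; the paper builds them from the survey's signal and noise): modes are ranked by signal-to-noise, and the leading eigenvector supplies the redshift-dependent galaxy weights mentioned above.

```python
import numpy as np
from scipy.linalg import eigh

n_z = 7
z = np.linspace(0.3, 1.2, n_z)
S = 1e-4 * np.minimum.outer(z, z)    # toy signal covariance between z-bins
N = np.diag(np.full(n_z, 3e-5))      # toy (diagonal) shape-noise covariance

lam, V = eigh(S, N)                  # solves S v = lambda N v, ascending order
w = V[:, -1]                         # leading KL mode: per-bin galaxy weights

print(lam[::-1] / lam.sum())         # fraction of total S/N carried per mode
# Compressed shear map: gamma_kl = w @ gamma_bins, for gamma_bins of
# shape (n_z, npix) -- one weighted map instead of n_z tomographic maps.
```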
Metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without the need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that, for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread-function ellipticity at the five percent level. In each simulation we applied a small, few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
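The response calculation at the heart of the method is compact enough to sketch with a toy estimator (no image manipulation, which is where the real difficulty lives): remeasure after shearing by $\pm\Delta\gamma$ and divide the mean estimate by the mean finite-difference response. The biased measure() function and all numbers below are invented for illustration, and the selection-response correction (applying the same cuts to the sheared remeasurements) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
dg, g_true, n = 0.01, 0.02, 200_000   # shear step, hypothetical true shear

e_int = rng.normal(0.0, 0.26, n)      # intrinsic ellipticities

def measure(e, g):
    """Toy, deliberately biased shear estimator (multiplicative bias m = -0.2)."""
    return 0.8 * (e + g)

e_obs = measure(e_int, g_true)
# Finite-difference response to a small applied shear, per object:
R = (measure(e_int, g_true + dg) - measure(e_int, g_true - dg)) / (2 * dg)

print(e_obs.mean() / R.mean())        # ~0.02: the m = -0.2 bias is calibrated away
```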
