
Weak-lensing shear measurement with machine learning: teaching artificial neural networks about feature noise

Added by Malte Tewes
Publication date: 2018
Language: English





Cosmic shear is a primary cosmological probe for several present and upcoming surveys investigating dark matter and dark energy, such as Euclid or WFIRST. The probe requires an extremely accurate measurement of the shapes of millions of galaxies based on imaging data. Crucially, the shear measurement must address and compensate for a range of interwoven nuisance effects related to the instrument optics and detector, noise, unknown galaxy morphologies, colors, blending of sources, and selection effects. This paper explores the use of supervised machine learning (ML) as a tool to solve this inverse problem. We present a simple architecture that learns to regress shear point estimates and weights via shallow artificial neural networks. The networks are trained on simulations of the forward observing process, and take combinations of moments of the galaxy images as inputs. A challenging peculiarity of this ML application is the combination of the noisiness of the input features and the requirements on the accuracy of the inverse regression. To address this issue, the proposed training algorithm minimizes bias over multiple realizations of individual source galaxies, reducing the sensitivity to properties of the overall sample of source galaxies. Importantly, an observational selection function of these source galaxies can be straightforwardly taken into account via the weights. We first introduce key aspects of our approach using toy-model simulations, and then demonstrate its potential on images mimicking Euclid data. Finally, we analyze images from the GREAT3 challenge, obtaining competitively low shear biases despite the use of a simple training set. We conclude that the further development of ML approaches is of high interest to meet the stringent requirements on the shear measurement in current and future surveys. A demonstration implementation of our technique is publicly available.
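The training idea described above — minimizing bias over multiple noise realizations of each source galaxy rather than per-realization error — can be illustrated with a minimal toy sketch. This is not the paper's actual architecture or data: the single moment-based feature, network size, noise level, and scaled units are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the bias-aware training idea: each "case" is one source
# galaxy with a fixed target, observed in many noise realizations of a
# single moment-based feature. All names and scales here are invented.
n_cases, n_rea = 200, 200
y = rng.uniform(-1.0, 1.0, size=(n_cases, 1))              # targets (scaled units)
x = y[:, None, :] + 0.5 * rng.standard_normal((n_cases, n_rea, 1))

# Shallow network: 1 input -> 5 tanh units -> 1 output
W1 = 0.1 * rng.standard_normal((1, 5)); b1 = np.zeros(5)
W2 = 0.1 * rng.standard_normal((5, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(1500):
    h = np.tanh(x @ W1 + b1)                  # (n_cases, n_rea, 5)
    out = h @ W2 + b2                         # (n_cases, n_rea, 1)
    bias = out.mean(axis=1) - y               # per-case bias of the mean prediction
    # Mean-square-bias loss: penalise the error of the prediction *averaged
    # over realizations*, so the estimate stays unbiased under feature noise
    # instead of regressing each noisy realization toward the mean.
    dout = np.broadcast_to(2.0 * bias[:, None, :] / (n_cases * n_rea), out.shape)
    dW2 = np.tensordot(h, dout, axes=([0, 1], [0, 1])); db2 = dout.sum(axis=(0, 1))
    dz = (dout @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    dW1 = np.tensordot(x, dz, axes=([0, 1], [0, 1])); db1 = dz.sum(axis=(0, 1))
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

msb = float((bias ** 2).mean())
print(f"mean-square bias: {msb:.4f}")
```

An untrained network here has a mean-square bias of roughly the target variance (~0.33); training drives it far below that, which is the behaviour the paper's training algorithm is designed to achieve on real moment features.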




Related research

Metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without the need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that, for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five percent level. In each simulation we applied a small, few percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
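The core metacalibration step — re-shearing the data by a small known amount and finite-differencing to get the estimator response — can be sketched with a toy analytic stand-in. In real metacalibration the shearing acts on the pixel data; here a hypothetical linear estimator with an unknown multiplicative bias keeps the sketch self-contained, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw estimator with an unknown multiplicative bias m:
# e_raw = (1 + m) * e. A stand-in for measuring ellipticity on pixels.
m_true = -0.2
def measure(e):
    return (1.0 + m_true) * e

g_true = 0.02                                  # applied shear to recover
n_gal = 100_000
e_int = 0.25 * rng.standard_normal(n_gal)      # intrinsic ellipticities (toy linear model)

def observe(extra_shear=0.0):
    # raw measurement on each galaxy, optionally re-sheared by a known amount
    return measure(e_int + g_true + extra_shear)

dg = 0.01
e_obs = observe()
# Response of the estimator to the artificially applied shear, per galaxy:
R = (observe(+dg) - observe(-dg)) / (2 * dg)
g_hat = e_obs.mean() / R.mean()                # calibrated shear estimate
print(f"recovered shear: {g_hat:.4f}")
```

The finite-difference response R absorbs the multiplicative bias, so the calibrated estimate recovers the input shear to within the intrinsic shape noise of the sample.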
Highly precise weak-lensing shear measurement is required for statistical weak gravitational lensing analyses, such as cosmic shear measurement, to achieve tight constraints on the cosmological parameters. For this purpose, accurate shape measurement of background galaxies is essential, and any systematic error in the measurement must be carefully corrected. One of the main systematic errors comes from photon noise, the Poisson noise of the flux from the atmosphere (noise bias). We investigate how photon noise introduces a systematic error in shear measurement within the framework of the ERA method we developed in earlier papers, and we give a practical correction method. The method is tested by simulations with real galaxy images under various conditions, and it is confirmed that it can correct $80\sim90\%$ of the noise bias, except for galaxies with very low signal-to-noise ratio.
The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey is a Guaranteed Time program carried out with the ESO/VST telescope to provide deep optical imaging over two 4 deg$^2$ patches of the sky centred on the CDFS and ES1 pointings. We present the cosmic shear measurement over the 4 deg$^2$ covering the CDFS region in the $r$-band using LensFit. Each of the four tiles of 1 deg$^2$ has more than one hundred exposures, of which more than 50 exposures passed a series of image quality selection criteria for weak lensing study. The $5\sigma$ limiting magnitude in the $r$-band is 26.1 for point sources, which is $\sim$1 mag deeper than other weak-lensing surveys in the literature (e.g. the Kilo Degree Survey, KiDS, at VST). The photometric redshifts are estimated using the VOICE $u,g,r,i$ photometry together with near-infrared VIDEO $Y,J,H,K_s$ data. The mean redshift of the shear catalogue, weighted by the shear weight, is 0.87. The effective galaxy number density is 16.35 gal/arcmin$^2$, nearly twice that of KiDS. The performance of LensFit on such a deep dataset was calibrated using VOICE-like mock image simulations. Furthermore, we have analyzed the reliability of the shear catalogue by calculating the star-galaxy cross-correlations, the tomographic shear correlations of two redshift bins, and the contamination from blended galaxies. As a further sanity check, we have constrained cosmological parameters by exploring the parameter space with Population Monte Carlo sampling. For a flat $\Lambda$CDM model we have obtained $\Sigma_8 = \sigma_8(\Omega_m/0.3)^{0.5} = 0.68^{+0.11}_{-0.15}$.
The complete 10-year survey from the Large Synoptic Survey Telescope (LSST) will image $\sim$20,000 square degrees of sky in six filter bands every few nights, bringing the final survey depth to $r\sim27.5$, with over 4 billion well measured galaxies. To take full advantage of this unprecedented statistical power, the systematic errors associated with weak lensing measurements need to be controlled to a level similar to the statistical errors. This work is the first attempt to quantitatively estimate the absolute level and statistical properties of the systematic errors on weak lensing shear measurements due to the most important physical effects in the LSST system via high-fidelity ray-tracing simulations. We identify and isolate the different sources of algorithm-independent, \textit{additive} systematic errors on shear measurements for LSST and predict their impact on the final cosmic shear measurements using conventional weak lensing analysis techniques. We find that the main source of the errors comes from an inability to adequately characterise the atmospheric point spread function (PSF) due to its high frequency spatial variation on angular scales smaller than $\sim$10 in the single short exposures, which propagates into a spurious shear correlation function at the $10^{-4}$--$10^{-3}$ level on these scales. With the large multi-epoch dataset that will be acquired by LSST, the stochastic errors average out, bringing the final spurious shear correlation function to a level very close to the statistical errors. Our results imply that the cosmological constraints from LSST will not be severely limited by these algorithm-independent, additive systematic effects.
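The statement that stochastic per-exposure errors average out over a multi-epoch stack follows from the $1/N$ scaling of the variance of a mean. A minimal numerical sketch, with an invented single-exposure spurious-shear RMS:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: the per-exposure spurious shear from the atmospheric PSF is an
# independent random residual of RMS sigma_1 in each sky patch; stacking
# n_epochs exposures suppresses its variance (zero-lag correlation) as 1/n_epochs.
sigma_1 = 0.01            # hypothetical single-exposure spurious shear RMS
n_patch = 200_000
for n_epochs in (1, 10, 100):
    g_spur = rng.normal(0.0, sigma_1, (n_patch, n_epochs)).mean(axis=1)
    xi0 = (g_spur ** 2).mean()   # residual spurious shear variance after stacking
    print(n_epochs, xi0)
```

With hundreds of visits per LSST field, a $10^{-4}$-level single-exposure spurious correlation is suppressed by two to three orders of magnitude, consistent with the qualitative conclusion above.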
3D data compression techniques can be used to determine the natural basis of radial eigenmodes that encode the maximum amount of information in a tomographic large-scale structure survey. We explore the potential of the Karhunen-Loève decomposition in reducing the dimensionality of the data vector for cosmic shear measurements, and apply it to the final data from the cfh survey. We find that practically all of the cosmological information can be encoded in one single radial eigenmode, from which we are able to reproduce compatible constraints with those found in the fiducial tomographic analysis (done with 7 redshift bins) with a factor of ~30 fewer datapoints. This simplifies the problem of computing the two-point function covariance matrix from mock catalogues by the same factor, or by a factor of ~800 for an analytical covariance. The resulting set of radial eigenfunctions is close to $\ell$-independent, and therefore they can be used as redshift-dependent galaxy weights. This simplifies the application of the Karhunen-Loève decomposition to real-space and Fourier-space data, and allows one to explore the effective radial window function of the principal eigenmodes as well as the associated shear maps in order to identify potential systematics. We also apply the method to extended parameter spaces and verify that additional information may be gained by including a second mode to break parameter degeneracies. The data and analysis code are publicly available at https://github.com/emiliobellini/kl_sample.
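The Karhunen-Loève compression idea — ranking linear combinations of redshift bins by signal-to-noise and keeping only the leading mode — can be sketched with a toy covariance. The covariances below are invented; in a standard KL construction one diagonalizes the noise-whitened signal covariance $N^{-1/2} S N^{-1/2}$.

```python
import numpy as np

# Toy tomographic data vector: 7 strongly correlated "redshift bins".
# S is a hypothetical signal covariance with one dominant direction,
# N a diagonal noise covariance (both invented for illustration).
n_bins = 7
u = np.linspace(1.0, 2.0, n_bins)
S = np.outer(u, u) + 0.01 * np.eye(n_bins)     # near rank-one signal
N = 0.5 * np.eye(n_bins)

# Karhunen-Loeve modes: eigenvectors of N^{-1/2} S N^{-1/2},
# ranked by signal-to-noise per mode.
Ninv_sqrt = np.diag(1.0 / np.sqrt(np.diag(N)))
evals, evecs = np.linalg.eigh(Ninv_sqrt @ S @ Ninv_sqrt)
order = np.argsort(evals)[::-1]                # eigh returns ascending order
evals, evecs = evals[order], evecs[:, order]

frac = evals[0] / evals.sum()
print(f"fraction of S/N eigenvalue weight in the first mode: {frac:.3f}")
```

When the signal covariance is close to rank one, as the abstract reports for cosmic shear tomography, almost all of the signal-to-noise lands in the first KL mode, which is why a single radial eigenmode can carry practically all the cosmological information.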
