
Strong lens modelling: comparing and combining Bayesian neural networks and parametric profile fitting

Posted by James Pearson
Publication date: 2021
Research field: Physics
Paper language: English





The vast quantity of strong galaxy-galaxy gravitational lenses expected by future large-scale surveys necessitates the development of automated methods to efficiently model their mass profiles. For this purpose, we train an approximate Bayesian convolutional neural network (CNN) to predict mass profile parameters and associated uncertainties, and compare its accuracy to that of conventional parametric modelling for a range of increasingly complex lensing systems. These include standard smooth parametric density profiles, hydrodynamical EAGLE galaxies and the inclusion of foreground mass structures, combined with parametric sources and sources extracted from the Hubble Ultra Deep Field. In addition, we present a method for combining the CNN with traditional parametric density profile fitting in an automated fashion, where the CNN provides initial priors on the latter's parameters. On average, the CNN achieved errors 19 $\pm$ 22 per cent lower than the traditional method's blind modelling. The combination method instead achieved errors 27 $\pm$ 11 per cent lower than blind modelling, reduced further to 37 $\pm$ 11 per cent when the priors also incorporated the CNN-predicted uncertainties, with errors also 17 $\pm$ 21 per cent lower than those of the CNN by itself. While the CNN is undoubtedly the fastest modelling method, the combination of the two increases the speed of conventional fitting alone by factors of 1.73 and 1.19 with and without CNN-predicted uncertainties, respectively. This, combined with greatly improved accuracy, highlights the benefits one can obtain by combining neural networks with conventional techniques to achieve an efficient automated modelling approach.
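The combination scheme lends itself to a compact illustration: the CNN's predicted parameter means and uncertainties become Gaussian priors on a conventional parametric fit. Below is a minimal Python sketch of that idea, using a toy Gaussian-ring image in place of a real ray-traced lens model; the forward model, parameter values and noise level are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.optimize import minimize

def toy_ring_image(theta_e, amp, width, shape=(64, 64), pix=0.1):
    """Toy stand-in for a lens forward model: a Gaussian ring of radius
    theta_e (arcsec). A real pipeline would ray-trace an SIE + source."""
    y, x = np.indices(shape)
    r = np.hypot(x - shape[1] / 2, y - shape[0] / 2) * pix
    return amp * np.exp(-0.5 * ((r - theta_e) / width) ** 2)

def neg_log_posterior(params, data, sigma, cnn_mean, cnn_std):
    model = toy_ring_image(*params, shape=data.shape)
    log_like = -0.5 * np.sum(((data - model) / sigma) ** 2)
    # Gaussian priors centred on the CNN predictions, with widths taken
    # from the CNN-predicted uncertainties (the combination scheme in spirit).
    log_prior = -0.5 * np.sum(((params - cnn_mean) / cnn_std) ** 2)
    return -(log_like + log_prior)

rng = np.random.default_rng(0)
truth = np.array([1.2, 1.0, 0.2])              # theta_E, amplitude, ring width
data = toy_ring_image(*truth) + rng.normal(0, 0.05, (64, 64))

cnn_mean = np.array([1.15, 0.9, 0.25])         # hypothetical CNN-predicted means
cnn_std = np.array([0.08, 0.2, 0.05])          # hypothetical CNN uncertainties
fit = minimize(neg_log_posterior, x0=cnn_mean,
               args=(data, 0.05, cnn_mean, cnn_std), method="Nelder-Mead")
print(fit.x)  # refined parameters, typically closer to `truth`
```

Starting the optimiser at the CNN means is what buys the quoted speed-up: the fit begins near the posterior peak instead of searching blindly.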




Read also

James Pearson, Nan Li, Simon Dye (2019)
We explore the effectiveness of deep learning convolutional neural networks (CNNs) for estimating strong gravitational lens mass model parameters. We have investigated a number of practicalities faced when modelling real image data, such as how network performance depends on the inclusion of lens galaxy light, the addition of colour information and varying signal-to-noise. Our CNN was trained and tested with strong galaxy-galaxy lens images simulated to match the imaging characteristics of the Large Synoptic Survey Telescope (LSST) and Euclid. For images including lens galaxy light, the CNN can recover the lens model parameters with an acceptable accuracy, although a 34 per cent average improvement in accuracy is obtained when lens light is removed. However, the inclusion of colour information can largely compensate for the drop in accuracy resulting from the presence of lens light. While our findings show similar accuracies for single epoch Euclid VIS and LSST r-band datasets, we find a 24 per cent increase in accuracy by adding g- and i-band images to the LSST r-band without lens light and a 20 per cent increase with lens light. The best network performance is obtained when it is trained and tested on images where lens light exactly follows the mass, but when orientation and ellipticity of the light is allowed to differ from those of the mass, the network performs most consistently when trained with a moderate amount of scatter in the difference between the mass and light profiles.
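As a side note on how colour information enters such a network: additional bands are simply stacked as extra input channels. The following PyTorch sketch shows a deliberately small, hypothetical architecture (not the authors'), accepting either a single r-band image or a g+r+i stack.

```python
import torch
import torch.nn as nn

class LensCNN(nn.Module):
    """Toy regression CNN; layer sizes are illustrative only."""
    def __init__(self, n_bands=3, n_params=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_params)  # e.g. SIE parameters

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# r-band only vs an LSST-like g+r+i stack as input channels:
single = LensCNN(n_bands=1)(torch.randn(8, 1, 64, 64))
multi = LensCNN(n_bands=3)(torch.randn(8, 3, 64, 64))
print(single.shape, multi.shape)  # torch.Size([8, 5]) in both cases
```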
Future large-scale surveys with high-resolution imaging will provide us with a few $10^5$ new strong galaxy-scale lenses. These strong lensing systems, however, will be contained in large data volumes that are beyond the capacity of human experts to visually classify in an unbiased way. We present a new strong gravitational lens finder based on convolutional neural networks (CNNs). The method was applied to the Strong Lensing challenge organised by the Bologna Lens Factory, where it achieved first and third place on the space-based and ground-based data sets, respectively. The goal was to find a fully automated lens finder for ground-based and space-based surveys that minimizes human inspection. We compare the results of our CNN architecture and three new variations (invariant views and residual) on the simulated data of the challenge. Each method was trained separately 5 times on 17 000 simulated images, cross-validated using 3 000 images and then applied to a 100 000 image test set. We used two different metrics for evaluation: the area under the receiver operating characteristic curve (AUC) score and the recall with no false positive ($\mathrm{Recall}_{\mathrm{0FP}}$). For ground-based data our best method achieved an AUC score of $0.977$ and a $\mathrm{Recall}_{\mathrm{0FP}}$ of $0.50$. For space-based data our best method achieved an AUC score of $0.940$ and a $\mathrm{Recall}_{\mathrm{0FP}}$ of $0.32$. On space-based data, adding dihedral invariance to the CNN architecture diminished the overall score but achieved a higher no-contamination recall. We found that committees of 5 CNNs produce the best recall at zero contamination and consistently score better AUC than a single CNN. For every variation of our CNN lens finder, we achieve AUC scores close to $1$ within $6\%$.
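The two evaluation metrics and the committee averaging are straightforward to reproduce in miniature. The sketch below uses synthetic scores and labels (not the challenge data) to compute the AUC and the recall at zero false positives, for a single classifier and for a 5-member committee mean.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def recall_at_zero_fp(labels, scores):
    """Recall at the tightest threshold admitting no false positive."""
    threshold = scores[labels == 0].max()        # top-scoring non-lens
    return np.mean(scores[labels == 1] > threshold)

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 10_000)
# Five committee members: noisy scores scattered around the true label.
members = [labels + rng.normal(0, 0.8, labels.size) for _ in range(5)]
committee = np.mean(members, axis=0)             # average the 5 CNN scores

for name, s in [("single", members[0]), ("committee", committee)]:
    print(name, roc_auc_score(labels, s), recall_at_zero_fp(labels, s))
```

Averaging suppresses the per-member noise, which is why the committee improves both metrics here, mirroring the behaviour reported above.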
We present ProFit, a new code for Bayesian two-dimensional photometric galaxy profile modelling. ProFit consists of a low-level C++ library (libprofit), accessible via a command-line interface and documented API, along with high-level R (ProFit) and Python (PyProFit) interfaces (available at github.com/ICRAR/libprofit, github.com/ICRAR/ProFit, and github.com/ICRAR/pyprofit respectively). R ProFit is also available pre-built from CRAN; however, this version will be slightly behind the latest GitHub version. libprofit offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. We show detailed comparisons between libprofit and GALFIT. libprofit is both faster and more accurate than GALFIT at integrating the ubiquitous Sersic profile for the most common values of the Sersic index n (0.5 < n < 8). The high-level fitting code ProFit is tested on a sample of galaxies with both SDSS and deeper KiDS imaging. We find good agreement in the fit parameters, with larger scatter in best-fit parameters from fitting images from different sources (SDSS vs KiDS) than from using different codes (ProFit vs GALFIT). A large suite of Monte Carlo-simulated images is used to assess prospects for automated bulge-disc decomposition with ProFit on SDSS, KiDS and future LSST imaging. We find that the biggest increases in fit quality come from moving from SDSS- to KiDS-quality data, with less significant gains moving from KiDS to LSST.
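For reference, the Sersic profile that dominates this comparison can be written down directly. The sketch below is a naive pixel-centre evaluation in Python, not libprofit's API, and deliberately omits the sub-pixel integration that libprofit performs (which matters most near the centre at high n).

```python
import numpy as np
from scipy.special import gammaincinv

def sersic_image(n, r_e, i_e, shape=(128, 128), pix=0.2):
    """I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)), evaluated at
    pixel centres. b_n is set so that r_e encloses half the total light."""
    b_n = gammaincinv(2 * n, 0.5)    # standard half-light condition
    y, x = np.indices(shape)
    r = np.hypot(x - shape[1] / 2, y - shape[0] / 2) * pix
    return i_e * np.exp(-b_n * ((r / r_e) ** (1 / n) - 1))

img = sersic_image(n=4.0, r_e=3.0, i_e=1.0)   # de Vaucouleurs-like profile
print(img.max(), img.sum())
```

The steep central cusp at n = 4 is exactly where pixel-centre evaluation breaks down, which is why accurate sub-pixel integration is the selling point of libprofit.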
We investigate the use of approximate Bayesian neural networks (BNNs) in modeling hundreds of time-delay gravitational lenses for Hubble constant ($H_0$) determination. Our BNN was trained on synthetic HST-quality images of strongly lensed active galactic nuclei (AGN) with lens galaxy light included. The BNN can accurately characterize the posterior PDFs of model parameters governing the elliptical power-law mass profile in an external shear field. We then propagate the BNN-inferred posterior PDFs into ensemble $H_0$ inference, using simulated time delay measurements from a plausible dedicated monitoring campaign. Assuming well-measured time delays and a reasonable set of priors on the environment of the lens, we achieve a median precision of $9.3\%$ per lens in the inferred $H_0$. A simple combination of 200 test-set lenses results in a precision of $0.5~\textrm{km s}^{-1}\,\textrm{Mpc}^{-1}$ ($0.7\%$), with no detectable bias in this $H_0$ recovery test. The computation time for the entire pipeline -- including the training set generation, BNN training, and $H_0$ inference -- translates to 9 minutes per lens on average for 200 lenses and converges to 6 minutes per lens as the sample size is increased. Being fully automated and efficient, our pipeline is a promising tool for exploring ensemble-level systematics in lens modeling for $H_0$ inference.
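The ensemble step at the end of such a pipeline can be illustrated in a few lines: combining 200 per-lens $H_0$ estimates, each approximated here as a Gaussian with the quoted ~9.3 per cent per-lens precision, by inverse-variance weighting. The numbers are synthetic, chosen only to show that the combined error scales as $\sigma/\sqrt{N}$, consistent with the quoted 0.5 km/s/Mpc (0.7 per cent).

```python
import numpy as np

rng = np.random.default_rng(2)
h0_true = 70.0
n_lenses = 200
per_lens_sigma = 0.093 * h0_true               # ~9.3% per-lens precision
h0_est = rng.normal(h0_true, per_lens_sigma, n_lenses)

# Inverse-variance weighting; with identical sigmas this reduces to the
# plain mean with uncertainty sigma / sqrt(N).
w = np.full(n_lenses, per_lens_sigma ** -2)
h0_comb = np.sum(w * h0_est) / np.sum(w)
h0_err = np.sum(w) ** -0.5
print(f"H0 = {h0_comb:.1f} +/- {h0_err:.2f} km/s/Mpc")  # error ~0.46, i.e. ~0.7%
```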
The hyperfine transitions of the ground-rotational state of the hydroxyl radical (OH) have emerged as a versatile tracer of the diffuse molecular interstellar medium. We present a novel automated Gaussian decomposition algorithm designed specifically for the analysis of the paired on-source and off-source optical depth and emission spectra of these transitions. In contrast to existing automated Gaussian decomposition algorithms, AMOEBA (Automated MOlecular Excitation Bayesian line-fitting Algorithm) employs a Bayesian approach to model selection, fitting all 4 optical depth and 4 emission spectra simultaneously. AMOEBA assumes that a given spectral feature can be described by a single centroid velocity and full width at half-maximum, with peak values in the individual optical depth and emission spectra then described uniquely by the column density in each of the four levels of the ground-rotational state, thus naturally including the real physical constraints on these parameters. Additionally, the Bayesian approach includes informed priors on individual parameters which the user can modify to suit different data sets. Here we describe AMOEBA and evaluate its validity and reliability in identifying and fitting synthetic spectra with known parameters.
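AMOEBA's central constraint, that one spectral feature shares a single centroid velocity and FWHM across all spectra while only the per-spectrum peaks differ, is easy to sketch. The toy fit below ties a Gaussian's centroid and width across four synthetic spectra with free amplitudes; the real algorithm instead derives the peaks from the four level column densities and performs Bayesian model selection over the full set of eight spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def shared_gaussians(v, centroid, fwhm, *amps):
    """Stacked Gaussians with a common centroid/FWHM, one amplitude each."""
    sigma = fwhm * FWHM_TO_SIGMA
    g = np.exp(-0.5 * ((v - centroid) / sigma) ** 2)
    return np.concatenate([a * g for a in amps])

v = np.linspace(-20, 20, 200)                  # velocity axis in km/s
rng = np.random.default_rng(3)
true_amps = [0.8, 0.3, -0.2, 0.5]              # e.g. 4 optical-depth spectra
spectra = shared_gaussians(v, 2.0, 4.0, *true_amps)
spectra += rng.normal(0, 0.02, spectra.size)

popt, _ = curve_fit(shared_gaussians, v, spectra,
                    p0=[0.0, 3.0, 1.0, 1.0, 1.0, 1.0])
print(popt)  # recovered centroid, FWHM, and the four amplitudes
```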