
ProFit: Bayesian Profile Fitting of Galaxy Images

Published by: Aaron Robotham
Publication date: 2016
Research field: Physics
Paper language: English

We present ProFit, a new code for Bayesian two-dimensional photometric galaxy profile modelling. ProFit consists of a low-level C++ library (libprofit), accessible via a command-line interface and documented API, along with high-level R (ProFit) and Python (PyProFit) interfaces (available at github.com/ICRAR/libprofit, github.com/ICRAR/ProFit, and github.com/ICRAR/pyprofit respectively). R ProFit is also available pre-built from CRAN; however, this version will be slightly behind the latest GitHub version. libprofit offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. We show detailed comparisons between libprofit and GALFIT. libprofit is both faster and more accurate than GALFIT at integrating the ubiquitous Sersic profile for the most common values of the Sersic index n (0.5 < n < 8). The high-level fitting code ProFit is tested on a sample of galaxies with both SDSS and deeper KiDS imaging. We find good agreement in the fit parameters, with larger scatter in best-fit parameters from fitting images from different sources (SDSS vs KiDS) than from using different codes (ProFit vs GALFIT). A large suite of Monte Carlo-simulated images is used to assess prospects for automated bulge-disc decomposition with ProFit on SDSS, KiDS and future LSST imaging. We find that the biggest increases in fit quality come from moving from SDSS- to KiDS-quality data, with less significant gains moving from KiDS to LSST.
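As a rough illustration of the kind of calculation libprofit performs, the following sketch evaluates a circular Sersic profile on a pixel grid with naive sub-pixel oversampling. It is not the libprofit or ProFit API (the function name, oversampling scheme and parameter choices are placeholders); the library itself uses much more careful adaptive two-dimensional integration.

```python
# Illustrative sketch only: a circular Sersic profile crudely integrated over
# unit pixels by oversampling. Function name and parameters are placeholders,
# not the libprofit/ProFit API; libprofit uses adaptive 2D integration instead.
import numpy as np
from scipy.special import gammaincinv

def sersic_image(nx, ny, xcen, ycen, re, n, oversample=8):
    """Sersic profile with I_e = 1: I(r) = exp(-b_n * ((r/r_e)**(1/n) - 1))."""
    bn = gammaincinv(2.0 * n, 0.5)        # exact b_n: gamma(2n, b_n) = Gamma(2n)/2
    step = 1.0 / oversample
    offsets = (np.arange(oversample) + 0.5) * step - 0.5   # sub-pixel offsets
    ys, xs = np.meshgrid(np.arange(ny, dtype=float),
                         np.arange(nx, dtype=float), indexing="ij")
    img = np.zeros((ny, nx))
    for dy in offsets:                    # average the profile over each pixel
        for dx in offsets:
            r = np.hypot(xs + dx - xcen, ys + dy - ycen)
            img += np.exp(-bn * ((r / re) ** (1.0 / n) - 1.0))
    return img * step * step              # mean of sub-samples ~ per-pixel integral

model = sersic_image(nx=100, ny=100, xcen=50.0, ycen=50.0, re=10.0, n=4.0)
```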


Read also

The vast quantity of strong galaxy-galaxy gravitational lenses expected by future large-scale surveys necessitates the development of automated methods to efficiently model their mass profiles. For this purpose, we train an approximate Bayesian convolutional neural network (CNN) to predict mass profile parameters and associated uncertainties, and compare its accuracy to that of conventional parametric modelling for a range of increasingly complex lensing systems. These include standard smooth parametric density profiles, hydrodynamical EAGLE galaxies and the inclusion of foreground mass structures, combined with parametric sources and sources extracted from the Hubble Ultra Deep Field. In addition, we also present a method for combining the CNN with traditional parametric density profile fitting in an automated fashion, where the CNN provides initial priors on the latter's parameters. On average, the CNN achieved errors 19 $\pm$ 22 per cent lower than the traditional method's blind modelling. The combination method instead achieved 27 $\pm$ 11 per cent lower errors than the blind modelling, reduced further to 37 $\pm$ 11 per cent when the priors also incorporated the CNN-predicted uncertainties, with errors also 17 $\pm$ 21 per cent lower than the CNN by itself. While the CNN is undoubtedly the fastest modelling method, the combination of the two increases the speed of conventional fitting alone by factors of 1.73 and 1.19 with and without CNN-predicted uncertainties, respectively. This, combined with greatly improved accuracy, highlights the benefits one can obtain through combining neural networks with conventional techniques in order to achieve an efficient automated modelling approach.
The application of Bayesian techniques to astronomical data is generally non-trivial because the fitting parameters can be strongly degenerate and the formal uncertainties are themselves uncertain. An example is provided by the contradictory claims over the presence or absence of a universal acceleration scale ($g_\dagger$) in galaxies based on Bayesian fits to rotation curves. To illustrate the situation, we present an analysis in which the Newtonian gravitational constant $G_{\rm N}$ is allowed to vary from galaxy to galaxy when fitting rotation curves from the SPARC database, in analogy to $g_\dagger$ in the recently debated Bayesian analyses. When imposing flat priors on $G_{\rm N}$, we obtain a wide distribution of $G_{\rm N}$ which, taken at face value, would rule out $G_{\rm N}$ as a universal constant with high statistical confidence. However, imposing an empirically motivated log-normal prior returns a virtually constant $G_{\rm N}$ with no sacrifice in fit quality. This implies that the inference of a variable $G_{\rm N}$ (or $g_\dagger$) is the result of the combined effect of parameter degeneracies and unavoidable uncertainties in the error model. When these effects are taken into account, the SPARC data are consistent with a constant $G_{\rm N}$ (and a constant $g_\dagger$).
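To make the role of the prior concrete, here is a minimal, hypothetical sketch of how a flat versus a log-normal prior on a per-galaxy $G_{\rm N}$ would enter the log-posterior of a rotation-curve fit. The data, model and prior width are placeholders, not the paper's actual SPARC analysis.

```python
# Hypothetical sketch of how the prior choice on G_N enters a Bayesian
# rotation-curve fit; data, model and prior width below are placeholders.
import numpy as np

G_NEWTON = 4.301e-6  # Newtonian value in kpc (km/s)^2 / Msun

def log_prior_flat(G):
    """Flat prior: any positive G up to 10x the Newtonian value is equally plausible."""
    return 0.0 if 0.0 < G < 10.0 * G_NEWTON else -np.inf

def log_prior_lognormal(G, sigma_dex=0.1):
    """Log-normal prior centred on the Newtonian value, with width sigma_dex."""
    return -np.inf if G <= 0 else -0.5 * (np.log10(G / G_NEWTON) / sigma_dex) ** 2

def log_posterior(theta, r, v_obs, v_err, v_model, log_prior):
    """theta = (G, *model_params); v_model(r, G, *params) is a stand-in model."""
    G, *params = theta
    lp = log_prior(G)
    if not np.isfinite(lp):
        return -np.inf
    resid = (v_obs - v_model(r, G, *params)) / v_err
    return lp - 0.5 * np.sum(resid ** 2)

# toy demonstration with a point-mass model; real fits use the SPARC mass models
r = np.linspace(1.0, 20.0, 20)
v_obs, v_err = np.full_like(r, 150.0), np.full_like(r, 5.0)
toy_model = lambda r, G, M: np.sqrt(G * M / r)
print(log_posterior((G_NEWTON, 1.0e11), r, v_obs, v_err, toy_model,
                    log_prior_lognormal))
```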
Andrew S. Leung (2015)
We present a Bayesian approach to the redshift classification of emission-line galaxies when only a single emission line is detected spectroscopically. We consider the case of surveys for high-redshift Lyman-alpha-emitting galaxies (LAEs), which have traditionally been classified via an inferred rest-frame equivalent width (EW) greater than 20 angstrom. Our Bayesian method relies on known prior probabilities in measured emission-line luminosity functions and equivalent width distributions for the galaxy populations, and returns the probability that an object in question is an LAE given the characteristics observed. This approach will be directly relevant for the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX), which seeks to classify ~10^6 emission-line galaxies into LAEs and low-redshift [O II] emitters. For a simulated HETDEX catalog with realistic measurement noise, our Bayesian method recovers 86% of LAEs missed by the traditional EW > 20 angstrom cutoff over 2 < z < 3, outperforming the EW cut in both contamination and incompleteness. This is due to the method's ability to trade off between the two types of binary classification error by adjusting the stringency of the probability requirement for classifying an observed object as an LAE. In our simulations of HETDEX, this method reduces the uncertainty in cosmological distance measurements by 14% with respect to the EW cut, equivalent to recovering 29% more cosmological information. Rather than using binary object labels, this method enables the use of classification probabilities in large-scale structure analyses. It can be applied to narrowband emission-line surveys as well as upcoming large spectroscopic surveys including Euclid and WFIRST.
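The core of such a classifier is simply the two-hypothesis posterior probability. The sketch below is a hypothetical illustration (not the HETDEX pipeline), with the likelihood and prior values standing in for numbers drawn from the measured luminosity functions and equivalent-width distributions.

```python
# Hypothetical illustration of the two-class Bayesian posterior; the numbers
# stand in for likelihoods/priors from luminosity functions and EW distributions.
def p_lae(like_lae, prior_lae, like_oii, prior_oii):
    """Posterior probability that a detected line is Ly-alpha rather than [O II]."""
    num = like_lae * prior_lae
    return num / (num + like_oii * prior_oii)

# classify as an LAE only if the posterior exceeds a chosen threshold; raising
# the threshold trades incompleteness against [O II] contamination
print(p_lae(like_lae=0.8, prior_lae=0.3, like_oii=0.4, prior_oii=0.7))  # ~0.46
```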
Galaxy cluster analyses based on high-resolution observations of the Sunyaev-Zeldovich (SZ) effect have become common in the last decade. We present PreProFit, the first publicly available code designed to fit the pressure profile of galaxy clusters from SZ data. PreProFit is based on a Bayesian forward-modelling approach, allows the analysis of data coming from different sources, adopts a flexible parametrization for the pressure profile, and fits the model to the data accounting for the Abel integral, beam smearing, and transfer function filtering. PreProFit is computationally efficient, is extensively documented, has been released as an open source Python project, and was developed to be part of a joint analysis of X-ray and SZ data on galaxy clusters. PreProFit returns $\chi^2$, model parameters and uncertainties, marginal and joint probability contours, diagnostic plots, and surface brightness radial profiles. PreProFit also allows the use of analytic approximations for the beam and transfer functions useful for feasibility studies.
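To give a sense of the first step of such a forward model, the sketch below projects a generalised NFW pressure profile along the line of sight (an Abel-type integral). It is a hypothetical illustration with placeholder parameters and arbitrary units; PreProFit additionally applies beam smearing and transfer-function filtering, which are not shown here.

```python
# Hypothetical sketch: line-of-sight (Abel-type) projection of a gNFW pressure
# profile; parameters and units are placeholders, and the beam smearing /
# transfer-function filtering that PreProFit also applies are omitted.
import numpy as np
from scipy.integrate import quad

def p_gnfw(r, P0=1.0, rs=500.0, a=1.05, b=5.49, c=0.31):
    """Generalised NFW pressure profile (arbitrary normalisation, r in kpc)."""
    x = r / rs
    return P0 / (x ** c * (1.0 + x ** a) ** ((b - c) / a))

def projected_profile(R, lmax=5000.0):
    """Pressure integrated along the line of sight at projected radius R."""
    integrand = lambda l: p_gnfw(np.sqrt(R ** 2 + l ** 2))
    val, _ = quad(integrand, 0.0, lmax)
    return 2.0 * val  # the profile is symmetric about the cluster mid-plane

radii = np.linspace(50.0, 1500.0, 30)            # projected radii in kpc
signal = np.array([projected_profile(R) for R in radii])
```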
Knowing the redshift of galaxies is one of the first requirements of many cosmological experiments, and as it is impossible to perform spectroscopy for every galaxy being observed, photometric redshift (photo-z) estimations are still of particular interest. Here, we investigate different deep learning methods for obtaining photo-z estimates directly from images, comparing these with traditional machine learning algorithms which make use of magnitudes retrieved through photometry. As well as testing a convolutional neural network (CNN) and inception-module CNN, we introduce a novel mixed-input model which allows for both images and magnitude data to be used in the same model as a way of further improving the estimated redshifts. We also perform benchmarking as a way of demonstrating the performance and scalability of the different algorithms. The data used in the study comes entirely from the Sloan Digital Sky Survey (SDSS), from which 1 million galaxies were used, each having 5-filter (ugriz) images with complete photometry and a spectroscopic redshift which was taken as the ground truth. The mixed-input inception CNN achieved a mean squared error (MSE) of 0.009, which was a significant improvement (30%) over the traditional Random Forest (RF), and the model performed even better at lower redshifts, achieving an MSE of 0.0007 (a 50% improvement over the RF) in the range of z<0.3. This method could be hugely beneficial to upcoming surveys such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST), which will require vast numbers of photo-z estimates produced as quickly and accurately as possible.
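For readers unfamiliar with mixed-input networks, the sketch below shows the general pattern in the Keras functional API: a convolutional branch for the ugriz cut-outs and a dense branch for the five magnitudes, concatenated before the redshift regression head. Layer sizes, cut-out dimensions and training details are placeholders, not the architecture used in the paper.

```python
# Hypothetical mixed-input sketch (Keras functional API); layer widths and the
# 64x64x5 cut-out size are placeholders, not the paper's actual architecture.
from tensorflow.keras import layers, Model

# image branch: 5-band (ugriz) cut-outs
img_in = layers.Input(shape=(64, 64, 5))
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# magnitude branch: the five photometric magnitudes
mag_in = layers.Input(shape=(5,))
m = layers.Dense(32, activation="relu")(mag_in)

# merge both branches and regress a single photometric redshift
merged = layers.Concatenate()([x, m])
z = layers.Dense(64, activation="relu")(merged)
z_out = layers.Dense(1, activation="linear")(z)

model = Model(inputs=[img_in, mag_in], outputs=z_out)
model.compile(optimizer="adam", loss="mse")   # trained against spectroscopic z
```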