
Predicting star formation properties of galaxies using deep learning

Published by Shraddha Surana
Publication date: 2020
Research language: English





Understanding the star-formation properties of galaxies as a function of cosmic epoch is a critical exercise in studies of galaxy evolution. Traditionally, stellar population synthesis models have been used to obtain best fit parameters that characterise star formation in galaxies. As multiband flux measurements become available for thousands of galaxies, an alternative approach to characterising star formation using machine learning becomes feasible. In this work, we present the use of deep learning techniques to predict three important star formation properties -- stellar mass, star formation rate and dust luminosity. We characterise the performance of our deep learning models through comparisons with outputs from a standard stellar population synthesis code.
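As a rough illustration of the kind of model the abstract describes, the sketch below sets up a small fully connected network in Keras that regresses the three properties from multiband fluxes. The number of bands, layer sizes, and target scaling are assumptions for illustration, not the architecture used in the paper.

    # Minimal sketch (assumed architecture, not the authors' model): a dense
    # network mapping multiband fluxes to stellar mass, SFR and dust luminosity.
    import tensorflow as tf
    from tensorflow.keras import layers

    n_bands = 21  # hypothetical number of photometric bands per galaxy

    model = tf.keras.Sequential([
        layers.Input(shape=(n_bands,)),
        layers.Dense(128, activation='relu'),
        layers.Dense(64, activation='relu'),
        layers.Dense(3),  # outputs: log stellar mass, log SFR, log dust luminosity
    ])
    model.compile(optimizer='adam', loss='mse')
    # fluxes: (N, n_bands) array; targets: (N, 3) SPS-derived properties
    # model.fit(fluxes, targets, epochs=100, validation_split=0.2)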




Read also

We present a deep learning model to predict the r-band bulge-to-total light ratio (B/T) of nearby galaxies using their multi-band JPEG images alone. Our Convolutional Neural Network (CNN) based regression model is trained on a large sample of galaxies with reliable decomposition into the bulge and disk components. The existing approaches to estimate the B/T use galaxy light-profile modelling to find the best fit. This method is computationally expensive, prohibitively so for large samples of galaxies, and requires a significant amount of human intervention. Machine learning models have the potential to overcome these shortcomings. In our CNN model, for a test set of 20000 galaxies, 85.7 per cent of the predicted B/T values have absolute error (AE) less than 0.1. We see further improvement to 87.5 per cent if, while testing, we only consider brighter galaxies (with r-band apparent magnitude < 17) with no bright neighbours. Our model estimates B/T for the 20000 test galaxies in less than a minute. This is a significant improvement in inference time from the conventional fitting pipelines, which manage around 2-3 estimates per minute. Thus, the proposed machine learning approach could potentially save a tremendous amount of time, effort and computational resources while predicting B/T reliably, particularly in the era of next-generation sky surveys such as the Legacy Survey of Space and Time (LSST) and the Euclid sky survey which will produce extremely large samples of galaxies.
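A minimal sketch of a CNN regression of this kind follows, assuming small RGB cutouts as input; the image size, layers, and loss are illustrative choices, not the network described in the abstract.

    # Sketch of a CNN that regresses B/T from a galaxy JPEG cutout (assumed 64x64 RGB).
    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        layers.Input(shape=(64, 64, 3)),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(1, activation='sigmoid'),  # B/T is bounded between 0 and 1
    ])
    model.compile(optimizer='adam', loss='mae')  # absolute error is the quoted metric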
Using a sample of star-forming galaxies in the nearby Universe (0.02<z<0.10) selected from the SDSS (DR7) and GALEX all-sky survey (GR5), we present a new empirical calibration for predicting dust extinction of galaxies from the H-alpha-to-FUV flux ratio. We find that the H-alpha dust extinction (A(Ha)) derived with the H-alpha/H-beta ratio (Balmer decrement) increases with increasing H-alpha/UV ratio as expected, but there remains a considerable scatter around the relation, which is largely dependent on stellar mass and/or H-alpha equivalent width (EW(Ha)). At fixed H-alpha/UV ratio, galaxies with higher stellar mass (or galaxies with lower EW(Ha)) tend to be more highly obscured by dust. We quantify this trend and establish an empirical calibration for predicting A(Ha) with a combination of H-alpha/UV ratio, stellar mass and EW(Ha), with which we can successfully reduce the systematic uncertainties accompanying the simple H-alpha/UV approach by ~15-30%. The new recipes proposed in this study provide a convenient tool for predicting the dust extinction level of galaxies, particularly when the Balmer decrement is not available. By comparing A(Ha) (derived with the Balmer decrement) and A(UV) (derived with the IR/UV luminosity ratio) for a subsample of galaxies for which AKARI FIR photometry is available, we demonstrate that more massive galaxies tend to have higher extra extinction towards the nebular regions compared to the stellar continuum light. Considering recent studies reporting smaller extra extinction towards nebular regions for high-redshift galaxies, we argue that the dust geometry within high-redshift galaxies more closely resembles that of low-mass galaxies in the nearby Universe.
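The abstract does not give the functional form of the calibration, but the idea of fitting A(Ha) against the H-alpha/UV ratio, stellar mass and EW(Ha) can be sketched as a simple linear model on mock data; the functional form and all numbers below are placeholders, not the recipe derived in the paper.

    # Hypothetical calibration fit: A(Ha) as a linear function of log(Ha/FUV),
    # log stellar mass and log EW(Ha), using mock data purely for illustration.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 1000                                 # placeholder calibration sample
    log_ha_uv = rng.normal(0.5, 0.3, n)      # mock log(Ha/FUV) flux ratio
    log_mstar = rng.normal(10.0, 0.5, n)     # mock log stellar mass
    log_ew = rng.normal(1.5, 0.3, n)         # mock log EW(Ha)
    a_ha_balmer = rng.normal(1.0, 0.4, n)    # mock A(Ha) from the Balmer decrement

    X = np.column_stack([log_ha_uv, log_mstar, log_ew])
    calib = LinearRegression().fit(X, a_ha_balmer)
    a_ha_pred = calib.predict(X)             # extinction estimate without a Balmer decrement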
[ABRIDGED] We derive the dust properties for 753 local galaxies and examine how these relate to some of their physical properties. We model their global dust-SEDs, treated statistically as an ensemble within a hierarchical Bayesian dust-SED modeling approach. The model-derived properties are the dust masses (Mdust), the average interstellar radiation field intensities (Uav), the mass fraction of very small dust grains (QPAH fraction), as well as their standard deviations. In addition, we use mid-IR observations to derive SFR and Mstar, quantities independent of the modeling. We derive distribution functions of the properties for the galaxy ensemble and per galaxy type. The mean value of Mdust for the ETGs is lower than that for the LTGs and IRs, despite ETGs and LTGs having Mstar spanning the whole observed range. The Uav and QPAH fraction show no difference among different galaxy types. When fixing Uav to the Galactic value, the derived QPAH fraction varies across the Galactic value (0.071). The sSFR increases with galaxy type, while this is not the case for the dust-sSFR (=SFR/Mdust), showing an almost constant SFE per galaxy type. The galaxy sample is characterised by a tight relation between Mdust and Mstar for the LTGs and IRs, while ETGs scatter around this relation and tend towards smaller Mdust. While the relation indicates that Mdust may fundamentally be linked to Mstar, metallicity and Uav are second parameters driving the scatter, which we investigate in a forthcoming work. We use the extended KS law to estimate Mgas and the GDR. The Mgas derived from the extended KS law is on average ~20% higher than that derived from the KS law, and a large standard deviation indicates the importance of the average SF present in regulating star formation and gas supply. The average GDR for the LTGs and IRs is 370, while including the ETGs gives an average of 550. [ABRIDGED]
We develop a machine learning-based framework to predict the HI content of galaxies using more straightforwardly observable quantities such as optical photometry and environmental parameters. We train the algorithm on z=0-2 outputs from the Mufasa cosmological hydrodynamic simulation, which includes star formation, feedback, and a heuristic model to quench massive galaxies that yields a reasonable match to a range of survey data including HI. We employ a variety of machine learning methods (regressors), and quantify their performance using the root mean square error (RMSE) and the Pearson correlation coefficient (r). Considering SDSS photometry, 3rd nearest neighbor environment and line-of-sight peculiar velocities as features, we obtain an HI-richness prediction accuracy of r > 0.8, corresponding to RMSE < 0.3. Adding near-IR photometry to the features yields some improvement to the prediction. Compared to all the regressors, random forest shows the best performance, with r > 0.9 at z = 0, followed by a Deep Neural Network with r > 0.85. All regressors exhibit a declining performance with increasing redshift, which limits the utility of this approach to z ≲ 1, and they tend to somewhat over-predict the HI content of low-HI galaxies, which might be due to Eddington bias in the training sample. We test our approach on the RESOLVE survey data. Training on a subset of RESOLVE data, we find that our machine learning method can predict the HI-richness of the remaining RESOLVE data reasonably well, with RMSE ~ 0.28. When we train on mock data from Mufasa and test on RESOLVE, this increases to RMSE ~ 0.45. Our method will be useful for making galaxy-by-galaxy survey predictions and incompleteness corrections for upcoming HI 21cm surveys such as the LADUMA and MIGHTEE surveys on MeerKAT, over regions where photometry is already available.
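The regression-plus-metrics step described above can be sketched with scikit-learn as follows; the feature matrix is mock data and the hyperparameters are placeholders, not the configuration used in the study.

    # Sketch of a random-forest regressor for HI richness, scored with RMSE and Pearson r.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 9))   # mock features: photometry + environment + velocity
    y = rng.normal(size=5000)        # mock target: HI richness

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = rf.predict(X_te)

    rmse = np.sqrt(mean_squared_error(y_te, pred))  # the quoted RMSE metric
    r, _ = pearsonr(y_te, pred)                     # the quoted Pearson correlation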
The new generation of deep photometric surveys requires unprecedentedly precise shape and photometry measurements of billions of galaxies to achieve their main science goals. At such depths, one major limiting factor is the blending of galaxies due to line-of-sight projection, with an expected fraction of blended galaxies of up to 50%. Current deblending approaches are in most cases either too slow or not accurate enough to reach the level of requirements. This work explores the use of deep neural networks to estimate the photometry of blended pairs of galaxies in monochrome space images, similar to the ones that will be delivered by the Euclid space telescope. Using a clean sample of isolated galaxies from the CANDELS survey, we artificially blend them and train two different network models to recover the photometry of the two galaxies. We show that our approach can recover the original photometry of the galaxies before being blended with ~7% accuracy without any human intervention and without any assumption on the galaxy shape. This represents an improvement of at least a factor of 4 compared to the classical SExtractor approach. We also show that forcing the network to simultaneously estimate a binary segmentation map results in a slightly improved photometry. All data products and codes will be made public to ease the comparison with other approaches on a common data set.
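A sketch of a two-output network of this flavour follows: one head regresses the photometry of the two blended galaxies and a second head predicts a per-pixel segmentation map. The stamp size, layers, and losses are assumptions, not the models used in the work.

    # Sketch of a joint photometry + segmentation network for blended pairs
    # (assumed 64x64 single-band stamps).
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    inp = layers.Input(shape=(64, 64, 1))
    x = layers.Conv2D(32, 3, activation='relu', padding='same')(inp)
    x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)

    feat = layers.Flatten()(layers.MaxPooling2D()(x))
    flux = layers.Dense(2, name='photometry')(layers.Dense(128, activation='relu')(feat))
    seg = layers.Conv2D(1, 1, activation='sigmoid', name='segmentation')(x)

    model = Model(inp, [flux, seg])
    model.compile(optimizer='adam',
                  loss={'photometry': 'mse', 'segmentation': 'binary_crossentropy'})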
