
Star Formation Rates for photometric samples of galaxies using machine learning methods

Publication date: 2019
Field: Physics
Language: English





Star Formation Rates, or SFRs, are crucial to constrain theories of galaxy formation and evolution. SFRs are usually estimated via spectroscopic observations requiring large amounts of telescope time. We explore an alternative approach based on the photometric estimation of global SFRs for large samples of galaxies, using methods such as automatic parameter space optimisation and supervised Machine Learning models. We demonstrate that, with such an approach, accurate multi-band photometry allows us to estimate reliable SFRs. We also investigate how the use of photometric rather than spectroscopic redshifts affects the accuracy of the derived global SFRs. Finally, we provide a publicly available catalogue of SFRs for more than 27 million galaxies extracted from the Sloan Digital Sky Survey Data Release 7. The catalogue is available through the VizieR facility at the following link: ftp://cdsarc.u-strasbg.fr/pub/cats/J/MNRAS/486/1377.
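The approach described above can be illustrated with a minimal sketch: train a supervised regressor on multi-band magnitudes and colours to predict a global (log) SFR. Everything below is synthetic and hypothetical: the band set, the mock SFR relation, and the Random Forest choice are stand-ins for illustration, not the paper's actual pipeline or calibration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-band photometry (u, g, r, i, z magnitudes);
# the mock log-SFR relation below is invented purely for illustration.
n = 5000
mags = rng.normal(loc=20.0, scale=1.5, size=(n, 5))
colors = mags[:, :-1] - mags[:, 1:]            # u-g, g-r, r-i, i-z
X = np.hstack([mags, colors])                  # 9 photometric features
log_sfr = 0.8 * colors[:, 1] - 0.05 * (mags[:, 2] - 20.0) + rng.normal(0.0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, log_sfr, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
rmse = float(np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2)))
```

The same scaffolding also makes the paper's redshift question testable in miniature: one can append a (photometric or spectroscopic) redshift column to `X` and compare the resulting errors.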



Related research

Global Stellar Formation Rates, or SFRs, are crucial to constrain theories of galaxy formation and evolution. SFRs are usually estimated via spectroscopic observations, which require large amounts of telescope time and therefore cannot match the needs of modern precision cosmology. We therefore propose a novel method to estimate SFRs for large samples of galaxies using a variety of supervised ML models.
We estimated photometric redshifts (zphot) for more than 1.1 million galaxies of the ESO Public Kilo-Degree Survey (KiDS) Data Release 2. KiDS is an optical wide-field imaging survey carried out with the VLT Survey Telescope (VST) and the OmegaCAM camera, which aims to tackle open questions in cosmology and galaxy evolution, such as the origin of dark energy and the channel of galaxy mass growth. We present a catalogue of photometric redshifts obtained using the Multi Layer Perceptron with Quasi Newton Algorithm (MLPQNA) model, provided within the framework of the DAta Mining and Exploration Web Application REsource (DAMEWARE). These photometric redshifts are based on a spectroscopic knowledge base obtained by merging spectroscopic datasets from GAMA (Galaxy And Mass Assembly) data release 2 and SDSS-III data release 9. The overall 1 sigma uncertainty on Delta z = (zspec - zphot) / (1 + zspec) is ~ 0.03, with a very small average bias of ~ 0.001, an NMAD of ~ 0.02, and a fraction of catastrophic outliers (|Delta z| > 0.15) of ~ 0.4%.
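The quality statistics quoted above follow directly from the residual definition Delta z = (zspec - zphot) / (1 + zspec). A small sketch of how they can be computed; the demonstration redshifts are synthetic, with a 0.03 scatter chosen only to echo the figure quoted above, not real KiDS data:

```python
import numpy as np

def photoz_metrics(z_spec, z_phot):
    """Standard photo-z statistics on Delta z = (zspec - zphot) / (1 + zspec)."""
    dz = (z_spec - z_phot) / (1.0 + z_spec)
    bias = dz.mean()                                        # average offset
    sigma = dz.std()                                        # overall 1-sigma scatter
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))   # normalised MAD
    outliers = np.mean(np.abs(dz) > 0.15)                   # catastrophic fraction
    return bias, sigma, nmad, outliers

# Synthetic demonstration: Gaussian residuals with 0.03 scatter in (1+z) units.
rng = np.random.default_rng(1)
z_spec = rng.uniform(0.05, 0.9, 200_000)
z_phot = z_spec + 0.03 * (1.0 + z_spec) * rng.normal(size=z_spec.size)
bias, sigma, nmad, outliers = photoz_metrics(z_spec, z_phot)
```

For purely Gaussian residuals the NMAD and the standard deviation coincide; the paper's NMAD (~0.02) being smaller than its sigma (~0.03) is the usual signature of a narrow core with extended tails.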
Obtaining accurate photometric redshift estimates is an important aspect of cosmology, remaining a prerequisite of many analyses. In creating novel methods to produce redshift estimates, there has been a shift towards machine learning techniques. However, there has not been as much focus on how well different machine learning methods scale or perform with the ever-increasing amounts of data being produced. Here, we introduce a benchmark designed to analyse the performance and scalability of different supervised machine learning methods for photometric redshift estimation. Making use of the Sloan Digital Sky Survey (SDSS - DR12) dataset, we analysed a variety of the most commonly used machine learning algorithms. By scaling the number of galaxies used to train and test the algorithms up to one million, we obtained several metrics demonstrating the algorithms' performance and scalability for this task. Furthermore, by introducing a new optimisation method, time-considered optimisation, we were able to demonstrate how a small concession in error can allow for a great improvement in efficiency. Of the algorithms tested, we found that the Random Forest performed best in terms of error, with a mean squared error MSE = 0.0042; however, as other algorithms such as Boosted Decision Trees and k-Nearest Neighbours performed very similarly, we used our benchmarks to demonstrate how different algorithms could be superior in different scenarios. We believe benchmarks such as this will become even more vital with upcoming surveys, such as LSST, which will capture billions of galaxies requiring photometric redshifts.
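The benchmarking idea, comparing prediction error against training cost across algorithms, can be sketched as follows. The data, sample sizes, and hyperparameters here are toy placeholders, not the SDSS-DR12 setup or the paper's time-considered optimisation method:

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for a photometry -> redshift regression problem;
# real benchmarks would scale n up towards millions of galaxies.
n = 2000
X = rng.normal(size=(n, 5))
y = X @ np.array([0.1, 0.05, -0.08, 0.02, 0.04]) + 0.5 + rng.normal(0.0, 0.02, n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, model in [
    ("RandomForest", RandomForestRegressor(n_estimators=50, random_state=0)),
    ("kNN", KNeighborsRegressor(n_neighbors=10)),
]:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    mse = float(np.mean((model.predict(X_te) - y_te) ** 2))
    results[name] = (mse, time.perf_counter() - t0)   # (error, training time)
```

Repeating the loop over increasing `n` yields exactly the error-versus-cost curves a scalability benchmark needs; an algorithm with slightly worse MSE but far lower training time can then be the better choice for billion-object surveys.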
We review the numerical techniques for ideal and non-ideal magneto-hydrodynamics (MHD) used in the context of star formation simulations. We outline the specific challenges posed by modelling star-forming environments, which are dominated by supersonic and super-Alfvenic turbulence in a radiative, self-gravitating fluid. These conditions are rather unique in physics and engineering and impose particularly severe restrictions on the robustness and accuracy of numerical codes. One striking aspect is the formation of collapsing fluid elements leading to singularities that represent point-like objects, namely the proto-stars. Although a few studies have attempted to resolve the formation of the first and second Larson cores, resolution limitations force us to use sink particle techniques, with sub-grid models to compute the accretion rates of mass, momentum, and energy, as well as their ejection rates due to radiation and jets from the proto-stars. We discuss the most popular discretisation techniques used in the community, namely smoothed particle hydrodynamics, finite difference, and finite volume methods, stressing the importance of maintaining a divergence-free magnetic field. We discuss how to estimate the truncation error of a given numerical scheme, and its importance in setting the magnitude of the numerical diffusion. This can have a strong impact on the outcome of these MHD simulations, where both viscosity and resistivity are implemented at the grid scale. We then present various numerical techniques to model non-ideal MHD effects, such as Ohmic and ambipolar diffusion, as well as the Hall effect. These important physical ingredients pose strong challenges in terms of resolution and time stepping. For the latter, several strategies are discussed to overcome the limitations due to prohibitively small time steps (abridged).
The advancement of technology has resulted in a rapid increase in supernova (SN) discoveries. The Subaru/Hyper Suprime-Cam (HSC) transient survey, conducted from fall 2016 through spring 2017, yielded 1824 SN candidates. This gave rise to the need for fast type classification for spectroscopic follow-up and prompted us to develop a machine learning algorithm using a deep neural network (DNN) with highway layers. This machine is trained on actually observed cadences and filter combinations, so that we can directly input the observed data array into the machine without any interpretation. We tested our model with a dataset from the LSST classification challenge (Deep Drilling Field). Our classifier scores an area under the curve (AUC) of 0.996 for binary classification (SN Ia or non-SN Ia) and 95.3% accuracy for three-class classification (SN Ia, SN Ibc, or SN II). Applying our binary classification to HSC transient data yields an AUC score of 0.925. With two weeks of HSC data since first detection, this classifier achieves 78.1% accuracy for binary classification, and the accuracy increases to 84.2% with the full dataset. This paper discusses the potential use of machine learning for SN type classification purposes.
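The highway layer mentioned above mixes a nonlinear transform H(x) with the unchanged input through a learned gate T: y = T * H(x) + (1 - T) * x, which lets deep networks pass information through many layers unimpeded. A forward-pass sketch with made-up shapes and random weights (an illustration of the mechanism, not the paper's trained network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_forward(x, W_H, b_H, W_T, b_T):
    """One highway layer: y = T * H(x) + (1 - T) * x.
    The transform gate T decides, per unit, how much of the nonlinear path to use."""
    H = np.tanh(x @ W_H + b_H)     # candidate nonlinear transform
    T = sigmoid(x @ W_T + b_T)     # transform gate, values in (0, 1)
    return T * H + (1.0 - T) * x

rng = np.random.default_rng(0)
d = 8                              # hypothetical feature width
x = rng.normal(size=(4, d))        # a toy batch of 4 inputs
W_H = rng.normal(scale=0.1, size=(d, d))
W_T = rng.normal(scale=0.1, size=(d, d))
b_H = np.zeros(d)
b_T = np.full(d, -4.0)             # strongly negative gate bias: layer starts near identity
y = highway_forward(x, W_H, b_H, W_T, b_T)
```

With the gate bias initialised strongly negative, T is close to zero and the layer initially behaves like the identity, which is what makes very deep highway stacks trainable.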
