
A Novel Machine Learning Approach to Disentangle Multi-Temperature Regions in Galaxy Clusters

Added by Carter Rhea
Publication date: 2020
Fields: Physics
Language: English

The hot intra-cluster medium (ICM) surrounding the heart of galaxy clusters is a complex medium comprising various emitting components. Although previous studies of nearby galaxy clusters, such as the Perseus, Coma, and Virgo clusters, have demonstrated the need for multiple thermal components when spectroscopically fitting the ICM's X-ray emission, no systematic methodology for calculating the number of underlying components currently exists. In turn, underestimating or overestimating the number of components can introduce systematic errors into the estimated emission parameters. In this paper, we present a novel approach to determining the number of components using an amalgam of machine learning techniques. Synthetic spectra containing varying numbers of underlying thermal components were created using well-established tools available from the Chandra X-ray Observatory. The dimensionality of the training set was first reduced using Principal Component Analysis, and the spectra were then categorized by the number of underlying components using a Random Forest Classifier. The trained and tested algorithm was subsequently applied to Chandra X-ray observations of the Perseus cluster. Our results demonstrate that machine learning techniques can efficiently and reliably estimate the number of underlying thermal components in the spectra of galaxy clusters, regardless of the thermal model used (MEKAL versus APEC). We also confirm that the core of the Perseus cluster contains a mix of differing underlying thermal components. We emphasize that although this methodology was trained and applied on Chandra X-ray observations, it is readily portable to other current (e.g. XMM-Newton, eROSITA) and upcoming (e.g. Athena, Lynx, XRISM) X-ray telescopes. The code is publicly available at https://github.com/XtraAstronomy/Pumpkin.
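For a concrete picture of the pipeline, the sketch below shows a PCA + Random Forest classifier in scikit-learn applied to pre-binned synthetic spectra. The file names, array shapes, and hyperparameters are illustrative placeholders, not the settings used in the paper (the actual code lives in the Pumpkin repository linked above).

```python
# Sketch of a PCA + Random Forest pipeline for classifying the number of
# thermal components in synthetic X-ray spectra. Assumes each spectrum is
# already binned onto a common energy grid; file names, shapes, and
# hyperparameters are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X = np.load("synthetic_spectra.npy")      # (n_spectra, n_energy_bins), hypothetical
y = np.load("n_components_labels.npy")    # number of thermal components per spectrum

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    PCA(n_components=25),                                   # reduce spectral dimensionality
    RandomForestClassifier(n_estimators=300, random_state=0),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```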

Related research

F. Tarsitano (2021)
In this work we explore the possibility of applying machine learning methods designed for one-dimensional problems to the task of galaxy image classification. The algorithms typically used for image classification rely on multiple costly steps, such as Point Spread Function (PSF) deconvolution and the training and application of complex Convolutional Neural Networks (CNNs) with thousands or even millions of parameters. In our approach, we extract features from the galaxy images by analysing the elliptical isophotes in their light distribution and collect the information in a sequence. The sequences obtained with this method present definite features that allow a direct distinction between galaxy types, as opposed to smooth Sersic profiles. We then train and classify the sequences with machine learning algorithms designed through the Modulos AutoML platform and study how they optimize the classification task. As a demonstration of this method, we use the second public data release of the Dark Energy Survey (DES DR2). We show that by applying it to this sample we are able to successfully distinguish between early-type and late-type galaxies for images with signal-to-noise ratio greater than 300. This yields an accuracy of 86% for the early-type galaxies and 93% for the late-type galaxies, which is on par with most contemporary automated image classification approaches. Our novel method allows galaxy images to be accurately classified and is faster than other approaches. The reduction in data dimensionality also implies a significant lowering of the computational cost. In view of future data sets obtained with e.g. Euclid and the Vera Rubin Observatory (VRO), this work represents a path towards using a well-tested and widely used industry platform to efficiently tackle galaxy classification problems at the petabyte scale.
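As a rough illustration of the isophote-sequence idea, the sketch below reduces a galaxy image to a short 1-D feature vector with photutils and hands it to an ordinary classifier. The call pattern, feature choices, and classifier are assumptions made for illustration; the paper's actual feature extraction and the Modulos AutoML models are not reproduced here.

```python
# Illustrative reduction of a galaxy image to a 1-D isophote sequence,
# followed by a standard classifier in place of a CNN. The photutils call
# pattern and the classifier choice are assumptions, not the paper's code.
import numpy as np
from photutils.isophote import Ellipse, EllipseGeometry
from sklearn.ensemble import GradientBoostingClassifier

def isophote_sequence(image, n_points=30):
    """Intensity and ellipticity sampled along fitted elliptical isophotes."""
    geom = EllipseGeometry(x0=image.shape[1] / 2, y0=image.shape[0] / 2,
                           sma=10.0, eps=0.2, pa=0.0)
    isolist = Ellipse(image, geometry=geom).fit_image()
    sma = np.linspace(isolist.sma.min(), isolist.sma.max(), n_points)
    intens = np.interp(sma, isolist.sma, isolist.intens)
    eps = np.interp(sma, isolist.sma, isolist.eps)
    return np.concatenate([intens / intens.max(), eps])

# Hypothetical usage:
# features = np.array([isophote_sequence(img) for img in galaxy_images])
# clf = GradientBoostingClassifier().fit(features, labels)  # early vs late type
```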
The cosmic web plays a major role in the formation and evolution of galaxies and defines, to a large extent, their properties. However, the relation between galaxies and environment is still not well understood. Here we present a machine learning approach to study imprints of environmental effects on the mass assembly of haloes. We present a galaxy-LSS machine learning classifier based on galaxy properties sensitive to the environment. This is accomplished by extracting features from galaxy properties and merger trees, computing a score for each feature, and then applying a support vector machine to different feature sets. We then use the classifier to assess the relevance of each property. Correlations between galaxy properties and their cosmic environment can be used to predict galaxy membership to void/wall or filament/cluster with an accuracy of 93%. Our study unveils environmental information encoded in properties of haloes not normally considered directly dependent on the cosmic environment, such as merger history and complexity. In particular, we find that the shape and depth of the merger tree, the formation time, and the density of the galaxy are strongly associated with the cosmic environment. We also describe a significant improvement to the original classification algorithm, obtained by performing an LU decomposition of the distance matrix computed from the feature vectors and using the output of the decomposition as input vectors for the support vector machine. Understanding the physical mechanism by which the cosmic web is imprinted in a halo can lead to significant improvements in galaxy formation models.
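A minimal sketch of the workflow described above (feature scoring followed by a support vector machine) might look like the following; the scoring function, kernel, and file names are illustrative stand-ins rather than the authors' pipeline.

```python
# Hedged sketch: score features, keep the strongest, classify with an SVM.
# Mutual information is used here as a generic stand-in for the paper's
# feature-scoring step; file names and the number of kept features are
# hypothetical.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("halo_features.npy")   # e.g. merger-tree depth, formation time, density
y = np.load("web_labels.npy")      # 0 = void/wall, 1 = filament/cluster

scores = mutual_info_classif(X, y)      # one relevance score per feature
keep = np.argsort(scores)[-10:]         # retain the ten highest-scoring features

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(clf, X[:, keep], y, cv=5).mean())
```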
We present a star/galaxy classification for the Southern Photometric Local Universe Survey (S-PLUS) based on a machine learning approach: the Random Forest algorithm. We train the algorithm using S-PLUS optical photometry up to r = 21, matched to SDSS/DR13, together with morphological parameters. The importance metric is defined as the relative decrease in the initial accuracy when all correlations related to a given feature are removed. In general, the broad photometric bands show higher importance than the narrow ones. The influence of the morphological parameters was evaluated by training the RF with and without their inclusion, yielding accuracies of 95.0% and 88.1%, respectively. In particular, the morphological parameter FWHM/PSF has the highest importance of all features for distinguishing between stars and galaxies, indicating that it is crucial for classifying objects into stars and galaxies. We investigate the misclassification of stars and galaxies in the broad-band colour-colour diagram (g-r) versus (r-i). Morphology notably improves the classification of objects in regions of the diagram where the misclassification was relatively high, and consequently provides cleaner samples for statistical studies. The expected contamination rate of red galaxies as a function of redshift is estimated, providing corrections for red galaxy samples. The classification of QSOs as extragalactic objects is slightly better in the photometric-only case. An extragalactic point-source catalogue is provided using the classification without any morphology feature (only the SED information), with additional constraints on photometric redshifts and FWHM/PSF values.
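The importance metric described above corresponds closely to permutation importance (the drop in accuracy when a feature's values are shuffled); a minimal scikit-learn sketch is shown below, with placeholder feature names and data files.

```python
# Permutation-importance sketch: measure how much the held-out accuracy
# drops when one feature's values are shuffled. Feature names and data
# files are placeholders, not the S-PLUS catalogue itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X = np.load("splus_features.npy")         # photometric bands + morphology, hypothetical
y = np.load("star_galaxy_labels.npy")     # 0 = star, 1 = galaxy
names = ["g", "r", "i", "z", "FWHM/PSF"]  # illustrative feature names only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for name, drop in zip(names, imp.importances_mean):
    print(f"{name:10s} accuracy drop when permuted: {drop:.3f}")
```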
In the first paper of this series (Rhea et al. 2020), we demonstrated that neural networks can robustly and efficiently estimate kinematic parameters for optical emission-line spectra taken by SITELLE at the Canada-France-Hawaii Telescope. This paper expands upon that work by developing an artificial neural network to estimate the line ratios of strong emission lines present in the SN1, SN2, and SN3 filters of SITELLE. We construct a set of 50,000 synthetic spectra using line ratios taken from the Mexican Million Models database, replicating H II regions. Residual analysis of the network on the test set reveals its ability to apply tight constraints to the line ratios. We verified the network's efficacy by constructing an activation map, checking the fixed ratio of the [N II] doublet, and applying standard k-fold cross-validation. Additionally, we apply the network to SITELLE observations of M33; the residuals between the algorithm's estimates and values calculated using standard fitting methods show general agreement. Moreover, the neural network reduces the computational cost by two orders of magnitude. Although standard fitting routines perform consistently well depending on the signal-to-noise ratio of the spectral features, the neural network can also excel at predictions in the low signal-to-noise regime, both within the controlled environment of the training set and on observed data when the source spectral properties are well constrained by models. These results reinforce the power of machine learning in spectral analysis.
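As a hedged sketch of the regression step, the snippet below trains a small fully connected network to map a SITELLE-like spectrum to strong-line ratios; the architecture, training-set files, and target ratios are placeholders rather than the authors' network.

```python
# Toy regression network mapping a spectrum to strong-line ratios.
# The architecture, file names, and target ratios are placeholders; the
# authors' actual network and 50,000-spectrum training set are not
# reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X = np.load("synthetic_sitelle_spectra.npy")   # (n_spectra, n_channels), hypothetical
y = np.load("line_ratios.npy")                 # e.g. [N II]/Halpha, [S II]/Halpha

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)

resid = net.predict(X_te) - y_te               # residuals on the held-out set
print("median absolute residual per ratio:", np.median(np.abs(resid), axis=0))
```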
We present a modern machine learning approach for cluster dynamical mass measurements that is a factor of two improvement over using a conventional scaling relation. Different methods are tested against a mock cluster catalog constructed using haloes with mass >= 10^14 Msolar/h from MultiDark's publicly available N-body MDPL halo catalog. In the conventional method, we use a standard M(sigma_v) power-law scaling relation to infer the cluster mass, M, from the line-of-sight (LOS) galaxy velocity dispersion, sigma_v. The resulting fractional mass error distribution is broad, with a width of 0.87 (68% scatter), and has extended high-error tails. The standard scaling relation can be simply enhanced by including higher-order moments of the LOS velocity distribution: applying the kurtosis as a correction term to log(sigma_v) reduces the width of the error distribution to 0.74 (a 16% improvement). Machine learning can be used to take full advantage of all the information in the velocity distribution. We employ the Support Distribution Machines (SDMs) algorithm, which learns from distributions of data to predict single values. SDMs trained and tested on the distribution of LOS velocities yield a width of 0.46 (a 47% improvement). Furthermore, the problematic tails of the mass error distribution are effectively eliminated. Decreasing cluster mass errors will improve measurements of the growth of structure and lead to tighter constraints on cosmological parameters.
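For context, the conventional baseline described above can be written down in a few lines: a power-law M(sigma_v) relation, optionally corrected with the kurtosis of the LOS velocity distribution. The coefficients below are placeholders, not the fitted values from the paper.

```python
# Toy version of the conventional baseline: a power-law M(sigma_v) relation,
# optionally corrected with the kurtosis of the LOS velocity distribution.
# The coefficients a, b, c are placeholders, not the paper's fitted values.
import numpy as np
from scipy.stats import kurtosis

def log_mass_from_dispersion(v_los, a=3.0, b=9.0, c=0.0):
    """log10(M) ~ a*log10(sigma_v) + b + c*kurtosis(v_los)."""
    sigma_v = np.std(v_los)
    return a * np.log10(sigma_v) + b + c * kurtosis(v_los)

# Toy cluster: 60 member galaxies with Gaussian LOS velocities (km/s)
rng = np.random.default_rng(0)
v_los = rng.normal(0.0, 800.0, size=60)
print("log10 M, plain scaling relation:", log_mass_from_dispersion(v_los))
print("log10 M, with kurtosis term:    ", log_mass_from_dispersion(v_los, c=0.05))
```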