
A Machine Learning Approach for Dynamical Mass Measurements of Galaxy Clusters

Publication date: 2014
Field: Physics
Language: English





We present a modern machine learning approach for cluster dynamical mass measurements that is a factor of two improvement over using a conventional scaling relation. Different methods are tested against a mock cluster catalog constructed using halos with mass >= 10^14 Msolar/h from MultiDark's publicly available N-body MDPL halo catalog. In the conventional method, we use a standard M(sigma_v) power-law scaling relation to infer cluster mass, M, from the line-of-sight (LOS) galaxy velocity dispersion, sigma_v. The resulting fractional mass error distribution is broad, with width = 0.87 (68% scatter), and has extended high-error tails. The standard scaling relation can be simply enhanced by including higher-order moments of the LOS velocity distribution. Applying the kurtosis as a correction term to log(sigma_v) reduces the width of the error distribution to 0.74 (a 16% improvement). Machine learning can be used to take full advantage of all the information in the velocity distribution. We employ the Support Distribution Machines (SDMs) algorithm, which learns from distributions of data to predict single values. SDMs trained and tested on the distribution of LOS velocities yield width = 0.46 (a 47% improvement). Furthermore, the problematic tails of the mass error distribution are effectively eliminated. Decreasing cluster mass errors will improve measurements of the growth of structure and lead to tighter constraints on cosmological parameters.
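The two baseline estimators described above reduce to short least-squares fits. Below is a minimal sketch (not the paper's code) of the M(sigma_v) power law and the kurtosis-corrected variant, assuming arrays of per-cluster LOS velocity dispersions, kurtoses, and true halo masses from a mock catalog; the fractional-error definition in the last function is an assumption made for illustration.

```python
# Minimal sketch: M(sigma_v) power-law scaling relation and a kurtosis
# correction term, fit to mock clusters (assumed inputs, not the paper's code).
import numpy as np
from scipy.stats import kurtosis

def los_moments(v_los):
    """Velocity dispersion and excess kurtosis of one cluster's LOS velocities."""
    return np.std(v_los), kurtosis(v_los)

def fit_power_law(sigma_v, mass):
    """Fit log10(M) = a * log10(sigma_v) + b by least squares."""
    a, b = np.polyfit(np.log10(sigma_v), np.log10(mass), 1)
    return a, b

def fit_with_kurtosis(sigma_v, kurt, mass):
    """Add the LOS-velocity kurtosis as a linear correction term to log(sigma_v)."""
    X = np.column_stack([np.log10(sigma_v), kurt, np.ones_like(kurt)])
    coeffs, *_ = np.linalg.lstsq(X, np.log10(mass), rcond=None)
    return coeffs

def fractional_mass_error(m_pred, m_true):
    """epsilon = (M_pred - M_true) / M_true; its 68% width is the quoted scatter."""
    return (m_pred - m_true) / m_true
```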



Related research

We study dynamical mass measurements of galaxy clusters contaminated by interlopers and show that a modern machine learning (ML) algorithm can predict masses better, by more than a factor of two, than a standard scaling relation approach. We create two mock catalogs from MultiDark's publicly available $N$-body MDPL1 simulation, one with perfect galaxy cluster membership information and the other where a simple cylindrical cut around the cluster center allows interlopers to contaminate the clusters. In the standard approach, we use a power-law scaling relation to infer cluster mass from the galaxy line-of-sight (LOS) velocity dispersion. Assuming perfect membership knowledge, this unrealistic case produces a wide fractional mass error distribution, with a width of $\Delta\epsilon \approx 0.87$. Interlopers introduce additional scatter, significantly widening the error distribution further ($\Delta\epsilon \approx 2.13$). We employ the support distribution machine (SDM) class of algorithms, which learn from distributions of data to predict single values. Applied to distributions of galaxy observables such as LOS velocity and projected distance from the cluster center, SDM yields better than a factor-of-two improvement ($\Delta\epsilon \approx 0.67$) for the contaminated case. Remarkably, SDM applied to contaminated clusters recovers masses better than even the scaling relation approach applied to uncontaminated clusters. We show that the SDM method more accurately reproduces the cluster mass function, making it a valuable tool for employing cluster observations to evaluate cosmological models.
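The core idea of distribution-to-scalar regression can be illustrated with a simple stand-in. The sketch below is not the authors' SDM implementation: it represents each cluster's LOS velocity distribution as a normalized histogram and regresses log-mass with support vector regression; the histogram featurization, velocity range, and SVR hyperparameters are assumptions for illustration (SDM proper defines kernels directly between distributions).

```python
# Minimal sketch of distribution-based mass regression (stand-in for SDM).
import numpy as np
from sklearn.svm import SVR

def velocity_histogram(v_los, v_max=3000.0, n_bins=32):
    """Bin LOS velocities in [-v_max, v_max] km/s into a normalized histogram."""
    hist, _ = np.histogram(v_los, bins=n_bins, range=(-v_max, v_max), density=True)
    return hist

def train_mass_regressor(cluster_velocities, log_masses):
    """cluster_velocities: list of per-cluster LOS velocity arrays (may include interlopers)."""
    X = np.array([velocity_histogram(v) for v in cluster_velocities])
    model = SVR(kernel="rbf", C=10.0)
    model.fit(X, log_masses)
    return model
```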
We demonstrate the ability of convolutional neural networks (CNNs) to mitigate systematics in the virial scaling relation and produce dynamical mass estimates of galaxy clusters with remarkably low bias and scatter. We present two models, CNN$_\mathrm{1D}$ and CNN$_\mathrm{2D}$, which leverage this deep learning tool to infer cluster masses from distributions of member galaxy dynamics. Our first model, CNN$_\mathrm{1D}$, infers cluster mass directly from the distribution of member galaxy line-of-sight velocities. Our second model, CNN$_\mathrm{2D}$, extends the input space of CNN$_\mathrm{1D}$ to learn on the joint distribution of galaxy line-of-sight velocities and projected radial distances. We train each model as a regression over cluster mass using a labeled catalog of realistic mock cluster observations generated from the MultiDark simulation and UniverseMachine catalog. We then evaluate the performance of each model on an independent set of mock observations selected from the same simulated catalog. The CNN models produce cluster mass predictions with lognormal residual scatter as low as 0.132 dex, more than a factor of 2 improvement over the classical $M$-$\sigma$ power-law estimator. Furthermore, the CNN model reduces prediction scatter relative to similar machine learning approaches by up to 17% while executing in drastically shorter training and evaluation times (by a factor of 30) and producing considerably more robust mass predictions (improving prediction stability under variations in galaxy sampling rate by 30%).
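As a concrete illustration of the CNN$_\mathrm{1D}$ idea, here is a minimal PyTorch sketch of a 1D convolutional regressor over a binned LOS velocity distribution. The layer widths, kernel sizes, and bin count are assumptions made for illustration, not the published architecture.

```python
# Minimal sketch of a 1D CNN regressing log-mass from a velocity histogram.
import torch
import torch.nn as nn

class CNN1D(nn.Module):
    def __init__(self, n_bins=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_bins // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),                  # predicted log10(M)
        )

    def forward(self, x):                      # x: (batch, 1, n_bins)
        return self.head(self.features(x)).squeeze(-1)

# Training would regress against log-mass labels, e.g. with nn.MSELoss().
```

A 2D analogue replaces Conv1d with Conv2d layers acting on the joint velocity-radius histogram.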
We study methods for reconstructing Bayesian uncertainties on dynamical mass estimates of galaxy clusters using convolutional neural networks (CNNs). We discuss the statistical background of approximate Bayesian neural networks and demonstrate how variational inference techniques can be used to perform computationally tractable posterior estimation for a variety of deep neural architectures. We explore how various model designs and statistical assumptions impact prediction accuracy and uncertainty reconstruction in the context of cluster mass estimation. We measure the quality of our model posterior recovery using a mock cluster observation catalog derived from the MultiDark simulation and UniverseMachine catalog. We show that approximate Bayesian CNNs produce highly accurate dynamical cluster mass posteriors. These model posteriors are log-normal in cluster mass and recover 68% and 90% confidence intervals to within 1% of their measured value. We note that this rigorous modeling of dynamical mass posteriors is necessary for using cluster abundance measurements to constrain cosmological parameters.
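To make the approximate-Bayesian idea concrete, the sketch below uses Monte Carlo dropout, one common variational approximation, as a stand-in for the Bayesian CNNs described above; the architecture, dropout rate, and sample count are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: MC-dropout posterior over log-mass (stand-in, not the paper's model).
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    """Small regressor whose dropout layers stay active at inference time."""
    def __init__(self, n_bins=48, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 1),
        )

    def forward(self, x):                      # x: (batch, n_bins)
        return self.net(x).squeeze(-1)         # predicted log10(M)

def mc_dropout_posterior(model, x, n_samples=200):
    """Repeated stochastic forward passes approximate a posterior over log-mass."""
    model.train()                              # keep dropout active while sampling
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(0), draws.std(0)         # posterior mean and width
```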
We present an algorithm for inferring the dynamical mass of galaxy clusters directly from their respective phase-space distributions, i.e. the observed line-of-sight velocities and projected distances of galaxies from the cluster centre. Our method employs normalizing flows, a class of deep neural networks capable of learning arbitrary high-dimensional probability distributions, and inherently accounts, to an adequate extent, for the presence of interloper galaxies which are not bound to a given cluster, the primary contaminant of dynamical mass measurements. We validate and showcase the performance of our neural flow approach to robustly infer the dynamical mass of clusters from a realistic mock cluster catalogue. A key aspect of our novel algorithm is that it yields the probability density function of the mass of a particular cluster, thereby providing a principled way of quantifying uncertainties, in contrast to conventional machine learning approaches. The neural network mass predictions, when applied to a contaminated catalogue with interlopers, have a mean overall logarithmic residual scatter of 0.028 dex, with a log-normal scatter of 0.126 dex, which goes down to 0.089 dex for clusters in the intermediate to high mass range. This is an improvement of nearly a factor of four relative to the classical cluster mass scaling relation with velocity dispersion, and outperforms recently proposed machine learning approaches. We also apply our neural flow mass estimator to a compilation of galaxy observations of some well-studied clusters with robust dynamical mass estimates, further substantiating the efficacy of our algorithm.
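The distinguishing output here is a full per-cluster mass PDF rather than a point estimate. The sketch below shows how such a density might be evaluated on a grid of masses; `flow` is a hypothetical trained conditional flow exposing a `log_prob(value, context)` method, and the phase-space feature vector is likewise a placeholder assumption, not the paper's architecture.

```python
# Minimal sketch: turning a conditional flow's density into a per-cluster mass PDF.
import torch

def cluster_mass_pdf(flow, phase_space_features, log_m_grid):
    """Evaluate p(log M | cluster) on a grid and normalize it numerically.

    flow: hypothetical trained model with log_prob(value, context).
    phase_space_features: 1-D tensor summarizing the cluster's galaxies.
    log_m_grid: 1-D tensor of candidate log10(M) values.
    """
    context = phase_space_features.expand(log_m_grid.shape[0], -1)
    log_p = flow.log_prob(log_m_grid.unsqueeze(-1), context=context)
    pdf = torch.exp(log_p - log_p.max())              # stable exponentiation
    return pdf / torch.trapz(pdf, log_m_grid)         # unit-area posterior density

# A point estimate and its uncertainty follow from the PDF's mean and spread.
```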
The hot intra-cluster medium (ICM) surrounding the heart of galaxy clusters is a complex medium composed of various emitting components. Although previous studies of nearby galaxy clusters, such as the Perseus, Coma, or Virgo clusters, have demonstrated the need for multiple thermal components when spectroscopically fitting the ICM's X-ray emission, no systematic methodology for calculating the number of underlying components currently exists. In turn, underestimating or overestimating the number of components can cause systematic errors in the emission parameter estimates. In this paper, we present a novel approach to determining the number of components using an amalgam of machine learning techniques. Synthetic spectra containing a varying number of underlying thermal components were created using well-established tools available from the Chandra X-ray Observatory. The dimensionality of the training set was initially reduced using Principal Component Analysis, and the spectra were then categorized by the number of underlying components using a Random Forest classifier. Our trained and tested algorithm was subsequently applied to Chandra X-ray observations of the Perseus cluster. Our results demonstrate that machine learning techniques can efficiently and reliably estimate the number of underlying thermal components in the spectra of galaxy clusters, regardless of the thermal model used (MEKAL versus APEC). We also confirm that the core of the Perseus cluster contains a mix of differing underlying thermal components. We emphasize that although this methodology was trained on and applied to Chandra X-ray observations, it is readily portable to other current (e.g. XMM-Newton, eROSITA) and upcoming (e.g. Athena, Lynx, XRISM) X-ray telescopes. The code is publicly available at https://github.com/XtraAstronomy/Pumpkin.
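The PCA-plus-Random-Forest pipeline described above maps naturally onto standard scikit-learn components. The following is a minimal sketch under the assumption of a feature matrix of binned synthetic spectra and integer labels giving each spectrum's number of thermal components; it is not the released Pumpkin code, and the number of PCA components, tree count, and split fraction are illustrative choices.

```python
# Minimal sketch: PCA dimensionality reduction + Random Forest classification
# of the number of thermal components in synthetic X-ray spectra.
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_component_classifier(spectra, labels):
    """spectra: (n_spectra, n_channels) binned counts; labels: component counts."""
    X_train, X_test, y_train, y_test = train_test_split(
        spectra, labels, test_size=0.2, random_state=0)
    clf = make_pipeline(
        PCA(n_components=20),                              # reduce spectral dimensionality
        RandomForestClassifier(n_estimators=300, random_state=0),
    )
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf
```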
