
Identifying Cosmological Information in a Deep Neural Network

Published by: Koya Murakami
Publication date: 2020
Research field: Physics
Paper language: English





A novel method to estimate cosmological parameters from images is presented. In this paper, we demonstrate the use of a convolutional neural network (CNN) for constraining the mass of the dark matter particle. For this purpose, we perform a suite of N-body simulations with different dark matter particle masses to train the CNN and estimate the dark matter mass from a density-contrast map. The proposed method is complementary to methods based on summary statistics, such as the two-point correlation function. We compare our CNN classification results with those obtained from the two-point correlation of the distribution of dark matter particles, and find that the CNN offers better performance. In addition, we use images made from random Gaussian simulations to train a CNN, which is then compared with the CNN trained on N-body simulations and with the two-point correlation. The Gaussian-trained CNN performs comparably to the two-point correlation.
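As a rough illustration of this approach, here is a minimal sketch in Python/PyTorch: a small classifier over single-channel density-contrast maps. The network shape, map size, and number of mass classes are hypothetical, since the abstract does not specify the authors' architecture.

import torch
import torch.nn as nn

# Hypothetical toy CNN: classifies a density-contrast map by the dark matter
# particle mass class of the simulation that produced it (not the authors'
# actual architecture, which the abstract does not give).
class DensityMapCNN(nn.Module):
    def __init__(self, n_mass_classes: int, map_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # -> map_size/2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # -> map_size/4
        )
        self.classifier = nn.Linear(32 * (map_size // 4) ** 2, n_mass_classes)

    def forward(self, x):                 # x: (batch, 1, H, W) contrast maps
        return self.classifier(self.features(x).flatten(1))  # class logits

model = DensityMapCNN(n_mass_classes=3)
logits = model(torch.randn(8, 1, 64, 64))  # batch of 8 toy 64x64 maps
print(logits.shape)                        # torch.Size([8, 3])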


Read also

Textual network embeddings aim to learn a low-dimensional representation for every node in the network so that both the structural and textual information from the networks can be well preserved in the representations. Traditionally, the structural and textual embeddings were learned by models that rarely take the mutual influences between them into account. In this paper, a deep neural architecture is proposed to effectively fuse the two kinds of information into one representation. The novelties of the proposed architecture are manifested in the aspects of a newly defined objective function, the complementary information fusion method for structural and textual features, and the mutual gate mechanism for textual feature extraction. Experimental results show that the proposed model outperforms the compared methods on all three datasets.
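The "mutual gate" is not spelled out in the abstract; the sketch below shows one plausible reading in PyTorch, where each modality's embedding produces a sigmoid gate that modulates the other before the two are concatenated. All names and dimensions are illustrative, not the paper's.

import torch
import torch.nn as nn

# Hypothetical mutual-gate fusion: structure gates text, text gates structure,
# then the two gated embeddings are concatenated into one node representation.
class MutualGateFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate_t = nn.Linear(dim, dim)  # structural embedding -> gate over text
        self.gate_s = nn.Linear(dim, dim)  # textual embedding -> gate over structure

    def forward(self, text_emb, struct_emb):
        gated_text = text_emb * torch.sigmoid(self.gate_t(struct_emb))
        gated_struct = struct_emb * torch.sigmoid(self.gate_s(text_emb))
        return torch.cat([gated_text, gated_struct], dim=-1)  # fused representation

fused = MutualGateFusion(128)(torch.randn(5, 128), torch.randn(5, 128))
print(fused.shape)  # torch.Size([5, 256])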
Xiuyuan Yang (2011)
Recent studies have shown that the number counts of convergence peaks $N(\kappa)$ in weak lensing (WL) maps, expected from large forthcoming surveys, can be a useful probe of cosmology. We follow up on this finding, and use a suite of WL convergence maps, obtained from ray-tracing N-body simulations, to study (i) the physical origin of WL peaks with different heights, and (ii) whether the peaks contain information beyond the convergence power spectrum $P_\ell$. In agreement with earlier work, we find that high peaks (with amplitudes $\gtrsim 3.5\sigma$, where $\sigma$ is the r.m.s. of the convergence $\kappa$) are typically dominated by a single massive halo. In contrast, medium-height peaks ($\sim 0.5$–$1.5\sigma$) cannot be attributed to a single collapsed dark matter halo, and are instead created by the projection of multiple (typically, 4–8) halos along the line of sight, and by random galaxy shape noise. Nevertheless, these peaks dominate the sensitivity to the cosmological parameters $w$, $\sigma_8$, and $\Omega_m$. We find that the peak height distribution and its dependence on cosmology differ significantly from predictions in a Gaussian random field. We directly compute the marginalized errors on $w$, $\sigma_8$, and $\Omega_m$ from the $N(\kappa) + P_\ell$ combination, including redshift tomography with source galaxies at $z_s=1$ and $z_s=2$. We find that the $N(\kappa) + P_\ell$ combination has approximately twice the cosmological sensitivity compared to $P_\ell$ alone. These results demonstrate that $N(\kappa)$ contains non-Gaussian information complementary to the power spectrum.
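A minimal sketch of the peak statistic itself, assuming a convergence map given as a 2-D array: find local maxima and histogram their heights in units of the map r.m.s. The neighbourhood size and binning are arbitrary choices for illustration, not those of the paper.

import numpy as np
from scipy.ndimage import maximum_filter

def peak_counts(kappa: np.ndarray, bins: np.ndarray) -> np.ndarray:
    """Histogram of local-maximum heights, in units of sigma = rms(kappa)."""
    sigma = kappa.std()
    # A pixel is a peak if it equals the maximum of its 3x3 neighbourhood
    # (plateau pixels are also flagged; good enough for a sketch).
    is_peak = kappa == maximum_filter(kappa, size=3)
    heights = kappa[is_peak] / sigma
    return np.histogram(heights, bins=bins)[0]

# Toy example on a Gaussian random map; real maps come from ray tracing.
rng = np.random.default_rng(0)
kappa = rng.normal(size=(256, 256))
bins = np.arange(-1.0, 5.0, 0.5)   # peak height in units of sigma
print(peak_counts(kappa, bins))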
We present a method to reconstruct the initial conditions of the universe using observed galaxy positions and luminosities, under the assumption that the luminosities can be calibrated with weak lensing to give the mean halo mass. Our method relies on following the gradients of a forward model, and since the standard way to identify halos is non-differentiable and results in a discrete sample of objects, we propose a framework that models the halo position and mass field starting from the non-linear matter field using neural networks. We evaluate the performance of our model with multiple metrics. Our model is more than $95\%$ correlated with the halo-mass fields up to $k \sim 0.7\,{\rm h/Mpc}$ and significantly reduces the stochasticity over the Poisson shot noise. We develop a data likelihood model that takes our modeling error and the intrinsic scatter in the halo mass-light relation into account, and show that a displaced log-normal model is a good approximation to it. We optimize over the corresponding loss function to reconstruct the initial density field, and develop an annealing procedure to speed up and improve the convergence. We apply the method to halo number densities of $\bar{n} = 2.5\times 10^{-4}$–$10^{-3}\,({\rm h/Mpc})^3$, typical of current and future redshift surveys, and recover a Gaussian initial density field, mapping all the higher-order information in the data into the power spectrum. We show that our reconstruction improves over the standard reconstruction. For baryon acoustic oscillations (BAO) the gains are relatively modest, because BAO is dominated by large scales where standard reconstruction suffices. We improve upon it by $\sim 15$–$20\%$ in terms of the error on the BAO peak, as estimated by a Fisher analysis at $z=0$. We expect larger gains when applying this method to broadband linear power spectrum reconstruction on smaller scales.
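The quoted $95\%$ agreement is a cross-correlation coefficient between two fields, $r(k) = P_{xy}(k)/\sqrt{P_{xx}(k)\,P_{yy}(k)}$. Below is a minimal NumPy sketch of that metric, estimated from FFTs in spherical $k$-bins; the grid size and binning are illustrative assumptions.

import numpy as np

# Sketch of the cross-correlation coefficient r(k) between two fields,
# estimated from FFTs and averaged in spherical k-bins (grid units).
def cross_correlation(field_a, field_b, n_bins=16):
    fa, fb = np.fft.fftn(field_a), np.fft.fftn(field_b)
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in field_a.shape], indexing="ij")
    k = np.sqrt(sum(f ** 2 for f in freqs))
    edges = np.linspace(0.0, k.max(), n_bins + 1)
    idx = np.clip(np.digitize(k.ravel(), edges), 1, n_bins)

    def binned(power):
        return np.array([power.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])

    p_xy = binned((fa * np.conj(fb)).real)
    p_xx, p_yy = binned(np.abs(fa) ** 2), binned(np.abs(fb) ** 2)
    return p_xy / np.sqrt(p_xx * p_yy)

# Sanity check: any field is perfectly correlated with itself in every bin.
delta = np.random.default_rng(1).normal(size=(32, 32, 32))
print(np.round(cross_correlation(delta, delta), 3))  # all 1.0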
S. Grandis (2015)
In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated from information theory, we use the relative entropy to compute the information gained by Bayesian updates, in units of bits. This measure quantifies both the improvement in precision and the surprise, i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior, which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing, as well as the 2015 Planck release. We consider the parameters of the flat $\Lambda$CDM concordance model and some of its extensions, which include curvature and the dark energy equation-of-state parameter $w$. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and SNe experiments (3.1 bits). The other cosmological probes, including weak lensing (1.7 bits) and $\rm H_0$ measurements (1.7 bits), have contributed information, but at a lower level. Furthermore, we do not find any significant surprise when updating the constraints of WMAP9 with any of the other experiments, meaning that they are consistent with WMAP9. However, when we choose Planck15 as the prior, we find that, accounting for the full multi-dimensionality of the parameter space, the weak lensing measurements of CFHTLenS produce a large surprise of 4.4 bits, which is statistically significant at the $8\sigma$ level. We discuss how the relative entropy provides a versatile and robust framework to compare cosmological probes in the context of current and future surveys.
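For concreteness, a small worked example of this measure, assuming one-dimensional Gaussian prior and posterior (the paper works with full multi-dimensional posteriors): the Kullback–Leibler divergence in bits grows both with a tightening error bar and with a shift in the central value.

import numpy as np

def gaussian_kl_bits(mu_prior, sig_prior, mu_post, sig_post):
    """D_KL(posterior || prior) for 1-D Gaussians, converted from nats to bits."""
    nats = (np.log(sig_prior / sig_post)
            + (sig_post ** 2 + (mu_post - mu_prior) ** 2) / (2 * sig_prior ** 2)
            - 0.5)
    return nats / np.log(2)

# Hypothetical update: halving the error bar with no shift in the mean ...
print(gaussian_kl_bits(0.0, 1.0, 0.0, 0.5))   # ~0.46 bits, pure precision gain
# ... versus the same precision gain plus a 2-sigma shift ("surprise").
print(gaussian_kl_bits(0.0, 1.0, 2.0, 0.5))   # ~3.3 bits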
Fully-automatic execution is the ultimate goal for many Computer Vision applications. However, this objective is not always realistic in tasks associated with high failure costs, such as medical applications. For these tasks, semi-automatic methods allowing minimal effort from users to guide computer algorithms are often preferred due to desirable accuracy and performance. Inspired by the practicality and applicability of the semi-automatic approach, this paper proposes a novel deep neural network architecture, namely SideInfNet, which effectively integrates features learnt from images with side information extracted from user annotations. To evaluate our method, we applied the proposed network to three semantic segmentation tasks and conducted extensive experiments on benchmark datasets. Experimental results and comparison with prior work have verified the superiority of our model, suggesting the generality and effectiveness of the model in semi-automatic semantic segmentation.
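A hedged sketch of the general pattern (not SideInfNet itself, whose design the abstract only outlines): user annotations rasterised into a mask channel, encoded separately from the image, and fused with image features before a per-pixel classifier. All layer shapes are illustrative.

import torch
import torch.nn as nn

# Toy fusion of an RGB image branch with a single-channel annotation branch.
class SideInfoSegNet(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.image_enc = nn.Conv2d(3, 16, 3, padding=1)   # RGB image branch
        self.side_enc = nn.Conv2d(1, 16, 3, padding=1)    # annotation-mask branch
        self.head = nn.Conv2d(32, n_classes, 1)           # fuse, predict per pixel

    def forward(self, image, side_mask):
        fused = torch.cat([self.image_enc(image).relu(),
                           self.side_enc(side_mask).relu()], dim=1)
        return self.head(fused)                           # (batch, n_classes, H, W)

logits = SideInfoSegNet(4)(torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 4, 128, 128])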