
Optimal observables in galaxy surveys

Posted by: Julien Carron
Publication date: 2014
Research field: Physics
Paper language: English





The sufficient statistics of the one-point probability density function of the dark matter density field are worked out using cosmological perturbation theory and tested against the Millennium simulation density field. The logarithmic transformation is recovered, for spectral index close to $-1$, as a special case of the family of power transformations. We then discuss how these transforms should be modified in the case of noisy tracers of the field and focus on the case of Poisson sampling. This gives us optimal local transformations to apply to galaxy survey data prior to the extraction of the spectrum, in order to capture most efficiently the information encoded in large-scale structures.
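A minimal sketch of the kind of local transform described above, assuming a toy lognormal overdensity field on a grid: apply ln(1 + delta), the special case recovered for spectral index near -1, and only then measure a crudely binned power spectrum. The grid size, box length and toy field are illustrative assumptions, not the paper's estimator.

import numpy as np

def local_transform(delta):
    """ln(1 + delta): the member of the power-transform family that is
    near-optimal for a spectral index close to -1."""
    return np.log1p(delta)

def binned_power(field, box_size, nbins=16):
    """Crude shell-averaged power spectrum of a periodic 3D field."""
    n = field.shape[0]
    fk = np.fft.fftn(field)
    pk3d = np.abs(fk) ** 2 * box_size ** 3 / n ** 6          # P(k), (length)^3 units
    freqs = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), nbins + 1)
    idx = np.digitize(kmag, edges)
    pk = np.array([pk3d.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, nbins + 1)])
    return 0.5 * (edges[:-1] + edges[1:]), pk

# Toy usage: a lognormal field stands in for the non-linear overdensity.
rng = np.random.default_rng(0)
gauss = rng.normal(0.0, 0.3, size=(64, 64, 64))
delta = np.expm1(gauss - 0.5 * 0.3 ** 2)                     # delta > -1 by construction
transformed = local_transform(delta)
k, pk = binned_power(transformed - transformed.mean(), box_size=500.0)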


Read also

Calibrating the photometric redshifts of >10^9 galaxies for upcoming weak lensing cosmology experiments is a major challenge for the astrophysics community. The path to obtaining the required spectroscopic redshifts for training and calibration is daunting, given the anticipated depths of the surveys and the difficulty in obtaining secure redshifts for some faint galaxy populations. Here we present an analysis of the problem based on the self-organizing map, a method of mapping the distribution of data in a high-dimensional space and projecting it onto a lower-dimensional representation. We apply this method to existing photometric data from the COSMOS survey selected to approximate the anticipated Euclid weak lensing sample, enabling us to robustly map the empirical distribution of galaxies in the multidimensional color space defined by the expected Euclid filters. Mapping this multicolor distribution lets us determine where - in galaxy color space - redshifts from current spectroscopic surveys exist and where they are systematically missing. Crucially, the method lets us determine whether a spectroscopic training sample is representative of the full photometric space occupied by the galaxies in a survey. We explore optimal sampling techniques and estimate the additional spectroscopy needed to map out the color-redshift relation, finding that sampling the galaxy distribution in color space in a systematic way can efficiently meet the calibration requirements. While the analysis presented here focuses on the Euclid survey, similar analysis can be applied to other surveys facing the same calibration challenge, such as DES, LSST, and WFIRST.
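A minimal numpy-only self-organizing map conveys the idea, assuming toy "colors" in place of real photometry: train a small 2D grid of weight vectors on the photometric sample, then flag cells that contain photometric objects but no spectroscopic ones. Grid size, learning schedule and sample sizes below are illustrative, not the survey configuration.

import numpy as np

def train_som(data, grid=(20, 20), n_iter=5000, lr0=0.5, sigma0=5.0, seed=0):
    """Train a small rectangular self-organizing map on feature vectors
    (here: galaxy colors) with linearly decaying learning rate and
    neighbourhood width."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        d2 = ((weights - x) ** 2).sum(axis=2)               # distance to every cell
        by, bx = np.unravel_index(np.argmin(d2), d2.shape)  # best-matching unit
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 1.0
        hood = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2.0 * sigma ** 2))
        weights += lr * hood[..., None] * (x - weights)     # pull neighbourhood toward x
    return weights

def cell_of(data, weights):
    """Flattened index of the best-matching cell for each object."""
    d2 = ((weights[None] - data[:, None, None, :]) ** 2).sum(axis=-1)
    return d2.reshape(len(data), -1).argmin(axis=1)

# Toy usage: random vectors stand in for multi-band colors; cells occupied by
# the photometric sample but empty of spectroscopy flag missing training data.
rng = np.random.default_rng(1)
colors_phot = rng.normal(size=(2000, 7))
colors_spec = colors_phot[rng.choice(2000, size=300, replace=False)]
som = train_som(colors_phot)
uncovered_cells = set(cell_of(colors_phot, som)) - set(cell_of(colors_spec, som))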
A heuristic greedy algorithm is developed for efficiently tiling spatially dense redshift surveys. In its first application to the Galaxy and Mass Assembly (GAMA) redshift survey we find it rapidly improves the spatial uniformity of our data, and naturally corrects for any spatial bias introduced by the 2dF multi-object spectrograph. We make conservative predictions for the final state of the GAMA redshift survey after our final allocation of time, and can be confident that even if worse than typical weather affects our observations, all of our main survey requirements will be met.
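One simple way to sketch such a greedy tiling heuristic (not the GAMA tiling code itself) is to centre each new circular tile on whichever target would cover the most still-untiled targets; tile radius, target positions and the flat-sky geometry below are illustrative assumptions.

import numpy as np

def greedy_tiles(ra, dec, n_tiles, radius=1.0):
    """Greedy tiling sketch: each new circular tile is centred on the target
    whose surrounding circle covers the most still-unassigned targets.
    Flat-sky separations in degrees; fine for a small-area illustration."""
    remaining = np.ones(len(ra), dtype=bool)
    centres = []
    for _ in range(n_tiles):
        best_i, best_n = -1, -1
        for i in np.flatnonzero(remaining):
            sep = np.hypot((ra - ra[i]) * np.cos(np.radians(dec[i])), dec - dec[i])
            covered = np.count_nonzero(remaining & (sep < radius))
            if covered > best_n:
                best_i, best_n = i, covered
        sep = np.hypot((ra - ra[best_i]) * np.cos(np.radians(dec[best_i])), dec - dec[best_i])
        remaining &= sep >= radius          # mark newly covered targets as done
        centres.append((ra[best_i], dec[best_i]))
        if not remaining.any():
            break
    return centres

# Toy usage with random target positions in a 10 x 4 degree strip:
rng = np.random.default_rng(3)
tiles = greedy_tiles(rng.uniform(0, 10, 500), rng.uniform(-2, 2, 500), n_tiles=12)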
(Abridged) Estimating the uncertainty on the matter power spectrum internally (i.e. directly from the data) is made challenging by the simple fact that galaxy surveys offer at most a few independent samples. In addition, surveys have non-trivial geometries, which make the interpretation of the observations even trickier, but the uncertainty can nevertheless be worked out within the Gaussian approximation. With the recent realization that Gaussian treatments of the power spectrum lead to biased error bars about the dilation of the baryonic acoustic oscillation scale, efforts are being directed towards developing non-Gaussian analyses, mainly from N-body simulations so far. Unfortunately, there is currently no way to tell how the non-Gaussian features observed in the simulations compare to those of the real Universe, and it is generally hard to tell at what level of accuracy the N-body simulations can model complicated non-linear effects such as mode coupling and galaxy bias. We propose in this paper a novel method that aims at measuring non-Gaussian error bars on the matter power spectrum directly from galaxy survey data. We utilize known symmetries of the 4-point function, Wiener filtering and principal component analysis to estimate the full covariance matrix from only four independent fields with minimal prior assumptions. With the noise filtering techniques and only four fields, we are able to recover the Fisher information obtained from a large N=200 sample to within 20 per cent, for k < 1.0 h/Mpc. Finally, we provide a prescription to extract a noise-filtered, non-Gaussian, covariance matrix from a handful of fields in the presence of a survey selection function.
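A rough flavor of the noise-filtering step, under strong simplifications (no Wiener filter or 4-point-function symmetries, which the paper relies on): build the sample covariance of binned band powers from a handful of fields, then keep only the leading principal components of the cross-correlation matrix. All inputs below are toy numbers.

import numpy as np

def filtered_covariance(pk_samples, n_keep=2):
    """From a handful of band-power measurements, estimate the covariance and
    retain only the leading eigenmodes of the cross-correlation matrix."""
    pk = np.asarray(pk_samples)                 # shape (n_fields, n_bins)
    cov = np.cov(pk, rowvar=False)              # noisy sample covariance
    sig = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sig, sig)             # cross-correlation coefficients
    vals, vecs = np.linalg.eigh(corr)
    keep = np.argsort(vals)[::-1][:n_keep]      # dominant principal components
    corr_f = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T
    np.fill_diagonal(corr_f, 1.0)               # keep the measured variances as-is
    return corr_f * np.outer(sig, sig)

# Toy usage: four hypothetical band-power vectors over 20 k-bins.
rng = np.random.default_rng(2)
fake_pk = rng.normal(1.0, 0.1, size=(4, 20)).cumsum(axis=1)
cov_filtered = filtered_covariance(fake_pk, n_keep=2)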
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey is roughly 10 Gpc^3) to be covered, and thus tends to be expensive. A sparse sampling method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
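The precision argument can be made concrete with the standard Gaussian mode-counting estimate; every number below is an illustrative placeholder rather than the HETDEX configuration.

import numpy as np

# All numbers below are hypothetical placeholders.
V_survey = 1.0e10        # Mpc^3, full footprint volume (~10 Gpc^3)
fill_fraction = 0.25     # fraction of the footprint actually observed
n_observed = 3.0e-4      # galaxy density inside the observed regions, Mpc^-3

N_gal = n_observed * fill_fraction * V_survey
n_eff = N_gal / V_survey           # effective density entering the shot noise
shot_noise = 1.0 / n_eff           # Mpc^3

# Fractional error on P(k) in a band of width dk, counting modes in the FULL
# V_survey (the point of the sparse-sampling argument), with the usual
# Gaussian estimate sigma_P / P = sqrt(2 / N_modes) * (1 + 1 / (n P)).
k, dk, P = 0.1, 0.01, 2.0e4        # Mpc^-1, Mpc^-1, Mpc^3
N_modes = V_survey * 4.0 * np.pi * k ** 2 * dk / (2.0 * np.pi) ** 3
frac_err = np.sqrt(2.0 / N_modes) * (1.0 + shot_noise / P)
print(f"N_modes ~ {N_modes:.0f}, sigma_P/P ~ {frac_err:.3f}")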
Host galaxy identification is a crucial step for modern supernova (SN) surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), which will discover SNe by the thousands. Spectroscopic resources are limited, so in the absence of real-time SN spectra these surveys must rely on host galaxy spectra to obtain accurate redshifts for the Hubble diagram and to improve photometric classification of SNe. In addition, SN luminosities are known to correlate with host-galaxy properties. Therefore, reliable identification of host galaxies is essential for cosmology and SN science. We simulate SN events and their locations within their host galaxies to develop and test methods for matching SNe to their hosts. We use both real and simulated galaxy catalog data from the Advanced Camera for Surveys General Catalog and MICECATv2.0, respectively. We also incorporate hostless SNe residing in undetected faint hosts into our analysis, with an assumed hostless rate of 5%. Our fully automated algorithm is run on catalog data and matches SNe to their hosts with 91% accuracy. We find that including a machine learning component, run after the initial matching algorithm, improves the accuracy (purity) of the matching to 97% with a 2% cost in efficiency (true positive rate). Although the exact results are dependent on the details of the survey and the galaxy catalogs used, the method of identifying host galaxies we outline here can be applied to any transient survey.
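A stripped-down matching rule gives the flavor: assign each SN to the galaxy with the smallest separation measured in units of that galaxy's light radius, and label it hostless beyond a cut. The radius column, the threshold and the flat-sky geometry are simplifying assumptions, not the paper's algorithm.

import numpy as np

def match_hosts(sn_ra, sn_dec, gal_ra, gal_dec, gal_radius_arcsec, max_norm_sep=4.0):
    """Match each SN to the galaxy minimizing separation / light radius;
    return -1 (hostless) when even the best candidate lies beyond the cut."""
    matches = []
    for ra, dec in zip(sn_ra, sn_dec):
        dra = (gal_ra - ra) * np.cos(np.radians(dec)) * 3600.0   # arcsec, small-angle
        ddec = (gal_dec - dec) * 3600.0
        norm_sep = np.hypot(dra, ddec) / gal_radius_arcsec       # separation in host radii
        best = int(np.argmin(norm_sep))
        matches.append(best if norm_sep[best] < max_norm_sep else -1)
    return np.array(matches)

# Toy usage with a handful of hypothetical positions (degrees) and radii (arcsec):
gal_ra = np.array([150.10, 150.12, 150.15])
gal_dec = np.array([2.20, 2.21, 2.23])
gal_rad = np.array([1.5, 0.8, 2.0])
sn_ra, sn_dec = np.array([150.1201, 150.30]), np.array([2.2102, 2.40])
print(match_hosts(sn_ra, sn_dec, gal_ra, gal_dec, gal_rad))   # second SN comes out hostless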