
Recovering a redshift-extended VSL signal from galaxy surveys

Publication date: 2016
Field: Physics
Language: English





We investigate a new method to recover a possible varying speed of light (VSL) signal, if any, from cosmological data. It comes as an upgrade of [1,2], where it was argued that such a signal could be detected at a single redshift location only. Here, we show how it is possible to extract information on a VSL signal over an extended redshift range. We use mock cosmological data from future galaxy surveys (BOSS, DESI, \emph{WFirst-2.4} and SKA): the sound horizon at decoupling imprinted in the clustering of galaxies (BAO) as an angular diameter distance, and the expansion rate derived from those galaxies recognized as cosmic chronometers. We find that, given the forecast sensitivities of such surveys, a $\sim 1\%$ VSL signal can be detected at $3\sigma$ confidence level in the redshift interval $z \in [0., 1.55]$. Smaller signals ($\sim 0.1\%$) will hardly be detected (even if a $1\sigma$ detection may still be possible). Finally, we discuss the degeneracy between a VSL signal and a non-null spatial curvature; we show that, given present bounds on curvature, any signal, if detected, can be attributed to a VSL signal with very high confidence. On the other hand, our method turns out to be useful even in the classical scenario of a constant speed of light: in this case, the signal we reconstruct can be entirely ascribed to spatial curvature and, thus, we may have a method to detect a curvature of order $0.01$ in the same redshift range with very high confidence.
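The core of the reconstruction can be illustrated with a short numerical sketch. Assuming spatial flatness, the comoving distance $D_C = (1+z)D_A$ satisfies $dD_C/dz = c(z)/H(z)$, so combining the BAO angular diameter distance with cosmic-chronometer measurements of $H(z)$ yields $c(z)$ pointwise. The toy below is a hedged illustration, not the paper's pipeline; the fiducial $H_0$ and $\Omega_m$ are assumed values, and the mock data are built with a constant speed of light as a null test:

```python
# Sketch: recover c(z) from mock D_A(z) and H(z), assuming flat space.
# In a flat FRW model the comoving distance D_C(z) = (1+z) D_A(z)
# satisfies dD_C/dz = c(z)/H(z), so c(z) = H(z) * d[(1+z) D_A(z)]/dz.
# All numbers below are illustrative (H0 and Om are assumed, not fitted).

C_TRUE = 299792.458  # km/s; the mock uses a constant speed of light
H0, OM = 70.0, 0.3   # assumed fiducial cosmology

def H(z):
    # expansion rate of a flat LCDM mock, in km/s/Mpc
    return H0 * (OM * (1 + z) ** 3 + 1 - OM) ** 0.5

def comoving_distance(z, n=2000):
    # trapezoidal integration of c/H from 0 to z, in Mpc
    dz = z / n
    s = 0.5 * (C_TRUE / H(0.0) + C_TRUE / H(z))
    for i in range(1, n):
        s += C_TRUE / H(i * dz)
    return s * dz

def recovered_c(z, eps=1e-4):
    # finite-difference derivative of D_C = (1+z) D_A, times H(z)
    dDc = (comoving_distance(z + eps) - comoving_distance(z - eps)) / (2 * eps)
    return H(z) * dDc

for z in (0.5, 1.0, 1.55):
    print(z, recovered_c(z))  # each value is close to 299792.458
```

A genuine VSL signal would appear as a redshift dependence of `recovered_c`; the constant-$c$ mock reproduces $c \approx 299792$ km/s at every redshift in the quoted interval.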




Related research

The total mass of neutrinos can be constrained in a number of ways using galaxy redshift surveys. Massive neutrinos modify the expansion rate of the Universe, which can be measured using baryon acoustic oscillations (BAOs) or the Alcock-Paczynski (AP) test. Massive neutrinos also change the structure growth rate and the amplitude of the matter power spectrum, which can be measured using redshift-space distortions (RSD). We use the Fisher matrix formalism to disentangle these information sources, to provide projected neutrino mass constraints from each of these probes alone and to determine how sensitive each is to the assumed cosmological model. We isolate the distinctive effect of neutrino free-streaming on the matter power spectrum and structure growth rate as a signal unique to massive neutrinos that can provide the most robust constraints, which are relatively insensitive to extensions to the cosmological model beyond $\Lambda$CDM. We also provide forecasted constraints using all of the information contained in the observed galaxy power spectrum combined, and show that these maximally optimistic constraints are primarily limited by the accuracy to which the optical depth of the cosmic microwave background, $\tau$, is known.
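The Fisher-matrix bookkeeping behind such forecasts is compact enough to sketch. For observables $O_k$ with Gaussian errors $\sigma_k$, the Fisher matrix is $F_{ij} = \sum_k (\partial O_k/\partial\theta_i)(\partial O_k/\partial\theta_j)/\sigma_k^2$, and the marginalized $1\sigma$ forecast on $\theta_i$ is $\sqrt{(F^{-1})_{ii}}$. The two-parameter toy below uses invented derivatives and error bars, not the paper's survey specifications:

```python
# Sketch: Fisher-matrix forecast for two parameters, a toy stand-in for
# the survey forecasts described above. The observable derivatives and
# error bars are illustrative assumptions, not the paper's setup.

def fisher_2x2(derivs, sigmas):
    # F_ij = sum_k dO_k/dtheta_i * dO_k/dtheta_j / sigma_k^2
    F = [[0.0, 0.0], [0.0, 0.0]]
    for (d0, d1), s in zip(derivs, sigmas):
        w = 1.0 / s ** 2
        F[0][0] += d0 * d0 * w
        F[0][1] += d0 * d1 * w
        F[1][0] += d1 * d0 * w
        F[1][1] += d1 * d1 * w
    return F

def marginalized_errors(F):
    # invert the 2x2 Fisher matrix; the square roots of the diagonal of
    # F^-1 are the marginalized 1-sigma forecasts
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return (F[1][1] / det) ** 0.5, (F[0][0] / det) ** 0.5

# toy data: derivatives of each observable w.r.t. (theta_1, theta_2)
derivs = [(1.0, 0.2), (0.5, 1.0), (0.8, 0.6)]
sigmas = [0.1, 0.1, 0.2]
e1, e2 = marginalized_errors(fisher_2x2(derivs, sigmas))
```

Because the off-diagonal term $F_{12}$ is nonzero, the marginalized errors are larger than the naive unmarginalized ones, which is exactly how the paper quantifies degeneracies between probes.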
We present a deep machine learning (ML)-based technique for accurately determining $\sigma_8$ and $\Omega_m$ from mock 3D galaxy surveys. The mock surveys are built from the AbacusCosmos suite of $N$-body simulations, which comprises 40 cosmological volume simulations spanning a range of cosmological models, and we account for uncertainties in galaxy formation scenarios through the use of generalized halo occupation distributions (HODs). We explore a trio of ML models: a 3D convolutional neural network (CNN), a power-spectrum-based fully connected network, and a hybrid approach that merges the two to combine physically motivated summary statistics with flexible CNNs. We describe best practices for training a deep model on a suite of matched-phase simulations and we test our model on a completely independent sample that uses previously unseen initial conditions, cosmological parameters, and HOD parameters. Despite the fact that the mock observations are quite small ($\sim 0.07\,h^{-3}\,\mathrm{Gpc}^3$) and the training data span a large parameter space (6 cosmological and 6 HOD parameters), the CNN and hybrid CNN can constrain $\sigma_8$ and $\Omega_m$ to $\sim 3\%$ and $\sim 4\%$, respectively.
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., $V_\mathrm{survey}$ is roughly $10\,\mathrm{Gpc}^3$) to be covered, and thus tends to be expensive. A sparse sampling method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, $V_\mathrm{survey}$, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of $V_\mathrm{survey}$ (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by $V_\mathrm{survey}$ (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
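The Nyquist-style condition at the heart of sparse sampling, that the spacing of observed regions must be smaller than the scale $\sim \pi/k$ of the target mode, can be demonstrated in one dimension. The sketch below (illustrative numbers throughout, not the survey's geometry) recovers the amplitude of a cosine mode from a regular sparse grid, and shows the aliasing bias that appears when the spacing condition is violated:

```python
# Sketch: 1D analogue of sparse regular sampling. A mode of wavenumber k
# is recovered without bias as long as the spacing d between observed
# regions satisfies d < pi/k; all numbers here are illustrative.
import math

def sampled_amplitude(k, d, L=100.0, amp=2.0):
    # sample the field amp*cos(k x) at regular spacing d over length L,
    # then estimate the amplitude of mode k from the sampled points
    n = int(L / d)
    xs = [i * d for i in range(n)]
    re = sum(amp * math.cos(k * x) * math.cos(k * x) for x in xs)
    im = sum(amp * math.cos(k * x) * math.sin(k * x) for x in xs)
    return 2.0 * math.hypot(re, im) / n

k = 0.5
print(sampled_amplitude(k, d=1.0))            # d < pi/k: amplitude ~2 recovered
print(sampled_amplitude(k, d=4 * math.pi))    # d = 2*pi/k: aliased, biased high
```

With $d = 2\pi/k$ every sample lands on a crest, so the estimator returns twice the true amplitude: a one-dimensional caricature of the window-function effects the paper computes for irregular sampling.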
Accurately characterizing the redshift distributions of galaxies is essential for analysing deep photometric surveys and testing cosmological models. We present a technique to simultaneously infer redshift distributions and individual redshifts from photometric galaxy catalogues. Our model constructs a piecewise constant representation (effectively a histogram) of the distribution of galaxy types and redshifts, the parameters of which are efficiently inferred from noisy photometric flux measurements. This approach can be seen as a generalization of template-fitting photometric redshift methods and relies on a library of spectral templates to relate the photometric fluxes of individual galaxies to their redshifts. We illustrate this technique on simulated galaxy survey data, and demonstrate that it delivers correct posterior distributions on the underlying type and redshift distributions, as well as on the individual types and redshifts of galaxies. We show that even with uninformative priors, large photometric errors and parameter degeneracies, the redshift and type distributions can be recovered robustly thanks to the hierarchical nature of the model, which is not possible with common photometric redshift estimation techniques. As a result, redshift uncertainties can be fully propagated in cosmological analyses for the first time, fulfilling an essential requirement for the current and future generations of surveys.
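A stripped-down version of the template-fitting step can make the idea concrete. Given fluxes in several bands and a template spectrum, the posterior over a redshift grid follows from a Gaussian flux likelihood; the hierarchical model in the paper then pools such per-galaxy posteriors into a population-level distribution. The toy below uses a single invented template and made-up bands, purely to illustrate the mechanics:

```python
# Sketch: toy template-fitting photo-z. One hypothetical template SED
# whose band fluxes shift smoothly with redshift; the posterior over a
# redshift grid follows from a Gaussian flux likelihood with a flat
# prior. All numbers (bands, template, errors) are made up.
import math

def template_flux(z, band):
    # hypothetical template: flux in each band peaks where band ~ 2z
    return math.exp(-((band - 2.0 * z) ** 2))

def redshift_posterior(obs, sigma, zgrid, bands):
    # flat prior; likelihood is a product of Gaussians, one per band
    logp = []
    for z in zgrid:
        chi2 = sum((obs[i] - template_flux(z, b)) ** 2 / sigma ** 2
                   for i, b in enumerate(bands))
        logp.append(-0.5 * chi2)
    m = max(logp)
    p = [math.exp(l - m) for l in logp]
    s = sum(p)
    return [x / s for x in p]

bands = [0.5, 1.0, 1.5, 2.0]
z_true = 0.8
obs = [template_flux(z_true, b) for b in bands]  # noiseless mock fluxes
zgrid = [i * 0.01 for i in range(201)]
post = redshift_posterior(obs, 0.05, zgrid, bands)
z_map = zgrid[post.index(max(post))]  # posterior peaks near z = 0.8
```

In the hierarchical setting, `post` for each galaxy would feed a shared histogram of types and redshifts, which is what lets the method stay robust under large photometric errors.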
Measurements of clustering in large-scale imaging surveys that make use of photometric redshifts depend on the uncertainties in the redshift determination. We have used light-cone simulations to show how the deprojection method successfully recovers the real-space correlation function when applied to mock photometric redshift surveys. We study how the errors in the redshift determination affect the quality of the recovered two-point correlation function. Considering the expected errors associated with the planned photometric redshift surveys, we conclude that this method provides information on the clustering of matter useful for the estimation of cosmological parameters that depend on the large-scale distribution of galaxies.