In the near future, the overlap of the Rubin Observatory Legacy Survey of Space and Time (LSST) and the Simons Observatory (SO) will present an ideal opportunity for joint cosmological dataset analyses. In this paper we simulate the joint likelihood analysis of these two experiments using six two-point functions derived from galaxy position, galaxy shear, and CMB lensing convergence fields. Our analysis focuses on realistic noise and systematics models, and we find that the dark energy Figure-of-Merit (FoM) increases by 53% (92%) from LSST-only to LSST+SO in Year 1 (Year 6). We also investigate the benefits of using the same galaxy sample for both clustering and lensing analyses, and find that this choice increases the overall signal-to-noise by ~30-40%, which significantly improves the photo-z calibration and mildly tightens the cosmological constraints. Finally, we explore the effects of catastrophic photo-z outliers, finding that they cause significant parameter biases when ignored. We develop a new mitigation approach, termed the island model, which corrects a large fraction of the biases with only a few parameters while preserving the constraining power.
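For context, the FoM quoted here and in the abstracts below is the standard Dark Energy Task Force statistic: the inverse area of the marginalized confidence ellipse in the dark energy equation-of-state plane (normalization conventions can differ between papers by a constant factor),

$$ \mathrm{FoM} = \left[\det \mathrm{Cov}(w_0, w_a)\right]^{-1/2}, $$

where $\mathrm{Cov}(w_0, w_a)$ is the 2$\times$2 covariance of $(w_0, w_a)$ after marginalizing over all other cosmological and nuisance parameters.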
The tightest and most robust cosmological results of the next decade will be achieved by bringing together multiple surveys of the Universe. This endeavor has to happen across multiple layers of the data processing and analysis: e.g., enhancements are expected from combining Euclid, Rubin, and Roman (as well as other surveys) not only at the level of joint processing and catalog combination, but also during the post-catalog parts of the analysis, such as the cosmological inference process. While every experiment builds its own analysis and inference framework and creates its own set of simulations, cross-survey work that homogenizes these efforts, exchanges information from numerical simulations, and coordinates details in the modeling of astrophysical and observational systematics of the corresponding datasets is crucial.
Measurements of large-scale structure are interpreted using theoretical predictions for the matter distribution, including potential impacts of baryonic physics. We constrain the feedback strength of baryons jointly with cosmology using weak lensing and galaxy clustering observables (3$\times$2pt) of Dark Energy Survey (DES) Year 1 data in combination with external information from baryon acoustic oscillations (BAO) and Planck cosmic microwave background polarization. Our baryon modeling is informed by a set of hydrodynamical simulations that span a variety of baryon scenarios; we span this space via a Principal Component (PC) analysis of the summary statistics extracted from these simulations. We show that at the level of DES Y1 constraining power, one PC is sufficient to describe the variation of baryonic effects in the observables, and the first PC amplitude ($Q_1$) generally reflects the strength of baryon feedback. With the upper limit of the $Q_1$ prior bounded by the Illustris feedback scenarios, we reach a $\sim 20\%$ improvement in the constraint of $S_8=\sigma_8(\Omega_{\rm m}/0.3)^{0.5}=0.788^{+0.018}_{-0.021}$ compared to the original DES 3$\times$2pt analysis. This gain is driven by the inclusion of small-scale cosmic shear information down to 2.5 arcmin, which was excluded in previous DES analyses that did not model baryonic physics. We obtain $S_8=0.781^{+0.014}_{-0.015}$ for the combined DES Y1+Planck EE+BAO analysis with a non-informative $Q_1$ prior. In terms of the baryon constraints, we measure $Q_1=1.14^{+2.20}_{-2.80}$ for DES Y1 only and $Q_1=1.42^{+1.63}_{-1.48}$ for DES Y1+Planck EE+BAO, allowing us to exclude one of the most extreme AGN feedback hydrodynamical scenarios at more than $2\sigma$.
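The PC construction described above can be illustrated with a minimal sketch (hypothetical inputs and a simplified diagonal noise weighting; the published analysis weights differences by the full data covariance):

```python
import numpy as np

def baryon_pcs(d_dmo, d_hydro, sigma):
    """Build baryon principal components from hydro simulations.

    d_dmo   : (n_data,) dark-matter-only model data vector
    d_hydro : (n_sims, n_data) data vectors for each baryon scenario
    sigma   : (n_data,) measurement errors used as a noise weight
    """
    # noise-weighted differences between baryon scenarios and the DMO baseline
    diff = (d_hydro - d_dmo) / sigma
    # rows of vt are the PCs, ordered by singular value (impact on the data)
    _, s, vt = np.linalg.svd(diff, full_matrices=False)
    return vt, s

def add_baryons(d_dmo, sigma, pcs, Q):
    """Contaminate a model vector with PC amplitudes Q, e.g. Q = [Q1]."""
    Q = np.asarray(Q)
    return d_dmo + sigma * (Q @ pcs[: len(Q)])
```

With a single amplitude $Q_1$ marginalized in the likelihood, this captures the spirit of the one-PC baryon model used above.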
The Tri-Agency Cosmological Simulations (TACS) Task Force was formed when Program Managers from the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF) expressed an interest in receiving input into the cosmological simulations landscape related to the upcoming DOE/NSF Vera Rubin Observatory (Rubin), NASA/ESA's Euclid, and NASA's Wide Field Infrared Survey Telescope (WFIRST). The Co-Chairs of TACS, Katrin Heitmann and Alina Kiessling, invited community scientists from the USA and Europe who are each subject matter experts and are also members of one or more of the surveys to contribute. The following report represents the input from TACS that was delivered to the Agencies in December 2018.
We simulate the scientific performance of the Wide-Field Infrared Survey Telescope (WFIRST) High Latitude Survey (HLS) on dark energy and modified gravity. The 1.6 year HLS Reference survey is currently envisioned to image 2000 deg$^2$ in multiple bands to a depth of $\sim$26.5 in Y, J, H and to cover the same area with slit-less spectroscopy beyond z=3. The combination of deep, multi-band photometry and deep spectroscopy will allow scientists to measure the growth and geometry of the Universe through a variety of cosmological probes (e.g., weak lensing, galaxy clusters, galaxy clustering, BAO, Type Ia supernovae) and, equally, it will allow exquisite control of observational and astrophysical systematic effects. In this paper we explore multi-probe strategies that can be implemented given WFIRST's instrument capabilities. We model cosmological probes individually and jointly and account for correlated systematics and statistical uncertainties due to the higher order moments of the density field. We explore different levels of observational systematics for the WFIRST survey (photo-z and shear calibration) and ultimately run a joint likelihood analysis in an N-dimensional parameter space. We find that the WFIRST reference survey alone (no external data sets) can achieve a standard dark energy FoM of >300 when including all probes. This assumes no information from external data sets and realistic assumptions for systematics. Our study of the HLS reference survey should be seen as part of a future community-driven effort to simulate and optimize the science return of WFIRST.
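The multi-probe forecasting logic can be sketched with generic Fisher-matrix bookkeeping (a standard illustration, not the paper's actual simulated-likelihood pipeline; names are ours):

```python
import numpy as np

def detf_fom(fisher, i_w0, i_wa):
    """DETF-style figure of merit: invert the Fisher matrix to get the
    parameter covariance, extract the marginalized (w0, wa) 2x2 block,
    and return the inverse square root of its determinant."""
    cov = np.linalg.inv(fisher)
    block = cov[np.ix_([i_w0, i_wa], [i_w0, i_wa])]
    return 1.0 / np.sqrt(np.linalg.det(block))

# Independent probes and priors combine additively (same parameter order),
# e.g. fisher_joint = f_wl + f_clusters + f_clustering + f_bao + f_sn
```

Correlated systematics, as emphasized above, break this simple additivity, which is why a joint likelihood rather than a sum of independent Fisher matrices is needed.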
Accurate covariance matrices for two-point functions are critical for inferring cosmological parameters in likelihood analyses of large-scale structure surveys. Among various approaches to obtaining the covariance, analytic computation is much faster and less noisy than estimation from data or simulations. However, the transform of covariances from Fourier space to real space involves integrals over products of two Bessel functions, which are numerically slow and easily affected by numerical uncertainties. Inaccurate covariances may lead to significant errors in the inference of the cosmological parameters. In this paper, we introduce a 2D-FFTLog algorithm for efficient, accurate and numerically stable computation of non-Gaussian real space covariances for both 3D and projected statistics. The 2D-FFTLog algorithm is easily extended to perform real space bin-averaging. We apply the algorithm to the covariances for galaxy clustering and weak lensing for a Dark Energy Survey Year 3-like and a Rubin Observatory Legacy Survey of Space and Time Year 1-like survey, and demonstrate that for both surveys, our algorithm can produce numerically stable angular bin-averaged covariances with the flat sky approximation, which are sufficiently accurate for inferring cosmological parameters. The code CosmoCov for computing the real space covariances with or without the flat sky approximation is released along with this paper.
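Schematically, the transform at issue maps the harmonic-space covariance to real space through a double-Bessel integral (indices and prefactors depend on the statistic; this form is illustrative):

$$ \mathrm{Cov}\left[\xi(\theta_1), \xi(\theta_2)\right] = \int \frac{\ell_1 \, d\ell_1}{2\pi} \int \frac{\ell_2 \, d\ell_2}{2\pi} \, J_n(\ell_1\theta_1)\, J_n(\ell_2\theta_2)\, \mathrm{Cov}\left[C(\ell_1), C(\ell_2)\right]. $$

The Gaussian part of $\mathrm{Cov}[C(\ell_1), C(\ell_2)]$ is diagonal in $\ell$ and collapses to a single Bessel integral, while the non-Gaussian part is a genuinely two-dimensional kernel; it is this 2D double-Bessel structure that the 2D-FFTLog algorithm evaluates efficiently and stably.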
We explore synergies between the space-based Wide-Field Infrared Survey Telescope (WFIRST) and the ground-based Rubin Observatory Legacy Survey of Space and Time (LSST). In particular, we consider a scenario where the currently envisioned survey strategy for WFIRST's High Latitude Survey (HLS), i.e., 2000 square degrees in four narrow photometric bands, is altered in favor of a strategy that rapidly covers the LSST area (to full LSST depth) in one band. We find that a 5-month WFIRST survey in the W-band can cover the full LSST survey area, providing high-resolution imaging for >95% of the LSST Year 10 gold galaxy sample. We explore a second, more ambitious scenario where WFIRST spends 1.5 years covering the LSST area. For this second scenario we quantify the constraining power on dark energy equation of state parameters from a joint weak lensing and galaxy clustering analysis, and compare it to an LSST-only survey and to the Reference WFIRST HLS survey. Our survey simulations are based on the WFIRST exposure time calculator and redshift distributions from the CANDELS catalog. Our statistical uncertainties account for higher-order correlations of the density field, and we include a wide range of systematic effects, such as uncertainties in shape and redshift measurements, and modeling uncertainties of astrophysical systematics, such as galaxy bias, intrinsic galaxy alignment, and baryonic physics. Assuming the 5-month WFIRST wide scenario, we find a significant increase in constraining power for the joint LSST+WFIRST wide survey compared to LSST Y10 (FoM(Wwide) = 2.4 FoM(LSST)) and compared to LSST+WFIRST HLS (FoM(Wwide) = 5.5 FoM(HLS)).
Angular two-point statistics of large-scale structure observables are important cosmological probes. To reach the high accuracy required by the statistical precision of future surveys, some of these statistics may need to be computed without the commonly employed Limber approximation; the exact computation, however, requires integration over Bessel functions, and a brute-force evaluation is slow to converge. We present a new method based on our generalized FFTLog algorithm for the efficient computation of angular power spectra beyond the Limber approximation. The new method significantly simplifies the calculation and improves the numerical speed and stability. It is easily extended to handle integrals involving derivatives of Bessel functions, making it equally applicable to numerically more challenging cases such as contributions from redshift-space distortions and Doppler effects. We implement our method for galaxy clustering and galaxy-galaxy lensing power spectra. We find that using the Limber approximation for galaxy clustering in future analyses like LSST Year 1 and DES Year 6 may cause significant biases in cosmological parameters, indicating that going beyond the Limber approximation is necessary for these analyses.
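The core FFTLog idea, decomposing a function on a logarithmic grid into power laws whose Bessel transforms are analytic, can be illustrated in 1D (a minimal sketch under our own conventions and function names; the paper's generalized algorithm extends this to derivative kernels and the non-Limber projection integrals):

```python
import numpy as np
from scipy.special import loggamma

def fftlog_hankel(x, fx, mu=0.0, q=0.25):
    """Evaluate F(y) = int_0^inf f(x) J_mu(x*y) dx on a log-spaced grid.

    Decompose f(x) x^{-q} into log-periodic power laws with an FFT, then use
    the analytic result  int_0^inf t^z J_mu(t) dt
      = 2^z Gamma((mu+1+z)/2) / Gamma((mu+1-z)/2),  -mu-1 < Re(z) < 1/2.
    No endpoint smoothing is applied, so expect mild ringing at the edges.
    """
    N = len(x)
    dlnx = np.log(x[1] / x[0])                      # requires log-spaced x
    c_m = np.fft.rfft(fx * x ** (-q))               # power-law coefficients
    eta = 2.0 * np.pi * np.arange(len(c_m)) / (N * dlnx)
    z = q + 1j * eta
    g = np.exp(z * np.log(2.0) + loggamma((mu + 1 + z) / 2)
               - loggamma((mu + 1 - z) / 2))        # analytic Bessel kernel
    y = 1.0 / x[::-1]                               # reciprocal output grid
    a = c_m * g * (x[0] * y[0]) ** (-1j * eta)      # phase for grid offsets
    a[-1] = a[-1].real                              # force Nyquist mode real
    return y, y ** (-q - 1.0) * np.fft.irfft(np.conj(a), n=N)

# self-check against a known pair: f(x) = x exp(-x^2/2) -> F(y) = exp(-y^2/2)
x = np.logspace(-4, 4, 1024)
y, F = fftlog_hankel(x, x * np.exp(-x ** 2 / 2.0))
```

The whole transform costs a pair of FFTs plus Gamma-function evaluations, which is what makes beyond-Limber computations tractable at survey-analysis precision.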
We study the significance of non-Gaussianity in the likelihood of weak lensing shear two-point correlation functions, detecting significantly non-zero skewness and kurtosis in one-dimensional marginal distributions of shear two-point correlation functions in simulated weak lensing data. We examine the implications in the context of future surveys, in particular LSST, with derivations of how the non-Gaussianity scales with survey area. We show that there is no significant bias in one-dimensional posteriors of $\Omega_{\rm m}$ and $\sigma_8$ due to the non-Gaussian likelihood distributions of shear correlation functions using the mock data ($100$ deg$^{2}$). We also present a systematic approach to constructing approximate multivariate likelihoods with one-dimensional parametric functions by assuming independence or more flexible non-parametric multivariate methods after decorrelating the data points using principal component analysis (PCA). While the use of PCA does not modify the non-Gaussianity of the multivariate likelihood, we find empirically that the one-dimensional marginal sampling distributions of the PCA components exhibit less skewness and kurtosis than the original shear correlation functions. Modeling the likelihood with marginal parametric functions based on the assumption of independence between PCA components thus gives a lower limit for the biases. We further demonstrate that the difference in cosmological parameter constraints between the multivariate Gaussian likelihood model and more complex non-Gaussian likelihood models would be even smaller for an LSST-like survey. In addition, the PCA approach automatically serves as a data compression method, enabling the retention of the majority of the cosmological information while reducing the dimensionality of the data vector by a factor of $\sim$5.
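The decorrelation-plus-marginals construction can be sketched as follows (a minimal illustration with our own function names; the paper additionally fits parametric and non-parametric models to the 1D marginals):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def pca_marginals(realizations, n_keep=None):
    """Project mock data vectors (n_realizations x n_data) onto principal
    components of their sample covariance, optionally keeping the leading
    n_keep components (PCA doubles as data compression), and return the
    skewness and excess kurtosis of each 1D marginal (both ~0 if Gaussian)."""
    mean = realizations.mean(axis=0)
    cov = np.cov(realizations, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(evals)[::-1]             # sort by variance, descending
    evals, evecs = evals[order], evecs[:, order]
    if n_keep is not None:
        evals, evecs = evals[:n_keep], evecs[:, :n_keep]
    proj = (realizations - mean) @ evecs / np.sqrt(evals)   # unit-variance PCs
    return skew(proj, axis=0), kurtosis(proj, axis=0)
```

Treating the components as independent, the approximate multivariate likelihood is then the product of the fitted 1D marginal densities.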
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multi-probe) analyses of the large scale structure of the universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best fit values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the true precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, $\mathbf{C} = \mathbf{A} + \mathbf{B}$, where $\mathbf{A}$ is well understood analytically and can be turned off in simulations (e.g. shape-noise for cosmic shear) to yield a direct estimate of $\mathbf{B}$. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that $400$ N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with $2400$ simulations. The standard covariance estimator would require $>10^5$ simulations to reach a similar precision. We extend our analysis to a DES multi-probe case, finding a similar performance.
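The expansion itself reduces to a Neumann series around the model covariance (a generic sketch of the idea, not the paper's full estimator, which additionally debiases the noisy simulation-based terms; the $\mathbf{C}=\mathbf{A}+\mathbf{B}$ split lets $\mathbf{B}$ be estimated directly by switching $\mathbf{A}$ off in the simulations):

```python
import numpy as np

def expanded_precision(model_cov, sample_cov, order=1):
    """Approximate the true precision matrix by expanding around a model M:
        (M + D)^{-1} = M^{-1} - M^{-1} D M^{-1} + M^{-1} D M^{-1} D M^{-1} - ...
    with D = C_hat - M estimated from simulations. Truncating at low order
    propagates far less simulation noise than inverting C_hat directly."""
    Minv = np.linalg.inv(model_cov)
    D = sample_cov - model_cov          # noisy correction from N-body runs
    psi, term = Minv.copy(), Minv
    for _ in range(order):
        term = -term @ D @ Minv         # next term of the Neumann series
        psi = psi + term
    return psi
```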