In the near future, the overlap of the Rubin Observatory Legacy Survey of Space and Time (LSST) and the Simons Observatory (SO) will present an ideal opportunity for joint cosmological dataset analyses. In this paper we simulate the joint likelihood analysis of these two experiments using six two-point functions derived from the galaxy position, galaxy shear, and CMB lensing convergence fields. Our analysis focuses on realistic noise and systematics models, and we find that the dark energy Figure-of-Merit (FoM) increases by 53% (92%) from LSST-only to LSST+SO in Year 1 (Year 6). We also investigate the benefits of using the same galaxy sample for both clustering and lensing analyses, and find that this choice improves the overall signal-to-noise by ~30-40%, which significantly improves the photo-z calibration and mildly improves the cosmological constraints. Finally, we explore the effects of catastrophic photo-z outliers, finding that they cause significant parameter biases when ignored. We develop a new mitigation approach, termed the island model, which corrects a large fraction of the biases with only a few parameters while preserving the constraining power.
One of the primary sources of uncertainty in modeling the cosmic-shear power spectrum on small scales is the effect of baryonic physics. Accurate cosmology for Stage-IV surveys requires knowledge of the matter power spectrum deep in the nonlinear regime at the percent level. Therefore, it is important to develop reliable mitigation techniques to take into account baryonic uncertainties if information from small scales is to be considered in the cosmological analysis. In this work, we develop a new mitigation method for dealing with baryonic physics for the case of the shear angular power spectrum. The method is based on an extended covariance matrix that incorporates baryonic uncertainties informed by hydrodynamical simulations. We use the results from 13 hydrodynamical simulations and the residual errors arising from a fit to a $\Lambda$CDM model using the extended halo model code {\tt HMCode} to account for baryonic physics. These residual errors are used to model a so-called theoretical error covariance matrix that is added to the original covariance matrix. In order to assess the performance of the method, we use the 2D tomographic shear from four hydrodynamical simulations that have different extremes of baryonic parameters as mock data and run a likelihood analysis comparing the residual bias on $\Omega_m$ and $\sigma_8$ of our method and of HMCode for an LSST-like survey. We use different modelings of the theoretical error covariance matrix to test the robustness of the method. We show that it is possible to reduce the bias in the determination of the tested cosmological parameters at the price of a modest decrease in precision.
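As a rough illustration of the idea (a toy sketch with synthetic inputs, not the paper's actual pipeline; all array sizes and names here are hypothetical), the theoretical error covariance can be estimated from the scatter of model residuals across hydrodynamical simulations and added to the statistical covariance:

```python
import numpy as np

def extended_covariance(cov_stat, residuals):
    """Augment a statistical covariance with a theoretical-error term.

    cov_stat  : (n, n) statistical covariance of the data vector.
    residuals : (n_sims, n) model residuals, one row per hydrodynamical
                simulation, in the same units as the data vector.
    Returns cov_stat + C_theory, where C_theory is the empirical
    covariance of the residuals across simulations.
    """
    cov_theory = np.cov(residuals, rowvar=False)
    return cov_stat + cov_theory

# Toy example: 5 data points, 13 mock "simulation" residual vectors.
rng = np.random.default_rng(0)
cov_stat = np.diag(np.full(5, 1e-4))
residuals = rng.normal(scale=1e-2, size=(13, 5))
cov_ext = extended_covariance(cov_stat, residuals)
# The extended covariance inflates the error bars, trading a modest
# loss of precision for reduced bias from unmodeled baryonic physics.
```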
We implement a linear model for mitigating the effect of observing conditions and other sources of contamination in galaxy clustering analyses. Our treatment improves upon the fiducial systematics treatment of the Dark Energy Survey (DES) Year 1 (Y1) cosmology analysis in four crucial ways. Specifically, our treatment: 1) does not require decisions as to which observable systematics are significant and which are not, allowing for the possibility of multiple maps adding coherently to give rise to significant bias even if no single map leads to a significant bias by itself; 2) characterizes both the statistical and systematic uncertainty in our mitigation procedure, allowing us to propagate said uncertainties into the reported cosmological constraints; 3) explicitly exploits the full spatial structure of the galaxy density field to differentiate between cosmology-sourced and systematics-sourced fluctuations within the galaxy density field; 4) is fully automated, and can therefore be trivially applied to any data set. The updated correlation function for the DES Y1 redMaGiC catalog minimally impacts the cosmological posteriors from that analysis. Encouragingly, our analysis does improve the goodness of fit statistic of the DES Y1 3$\times$2pt data set ($\Delta\chi^2 = -6.5$ with no additional parameters). This improvement is due in nearly equal parts to both the change in the correlation function and the added statistical and systematic uncertainties associated with our method. We expect the difference in mitigation techniques to become more important in future work as the size of cosmological data sets grows.
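The core of such a linear treatment can be sketched as an ordinary least-squares regression of the observed galaxy overdensity on the survey-property template maps, with the best-fit contamination subtracted. This toy version (all inputs synthetic, names hypothetical) omits the uncertainty propagation described above:

```python
import numpy as np

def linear_systematics_fit(delta_obs, templates):
    """Fit delta_obs ~ sum_i a_i * t_i over pixels by least squares
    and return the cleaned density field and the coefficients a_i."""
    A = templates.T                      # (n_pix, n_maps) design matrix
    coeffs, *_ = np.linalg.lstsq(A, delta_obs, rcond=None)
    delta_clean = delta_obs - A @ coeffs
    return delta_clean, coeffs

# Toy example: inject a known contamination and recover it.
rng = np.random.default_rng(1)
n_pix, n_maps = 10_000, 3
templates = rng.normal(size=(n_maps, n_pix))   # mock condition maps
true_coeffs = np.array([0.05, -0.02, 0.0])
signal = rng.normal(scale=0.1, size=n_pix)     # "cosmological" field
delta_obs = signal + true_coeffs @ templates
delta_clean, coeffs = linear_systematics_fit(delta_obs, templates)
# The fit recovers the injected coefficients and reduces the variance
# of the density field back toward that of the uncontaminated signal.
```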
84 - Xiao Fang, Yuta Koike (2020)
We prove the large-dimensional Gaussian approximation of a sum of $n$ independent random vectors in $\mathbb{R}^d$ together with fourth-moment error bounds on convex sets and Euclidean balls. We show that compared with classical third-moment bounds, our bounds have near-optimal dependence on $n$ and can achieve improved dependence on the dimension $d$. For centered balls, we obtain an additional error bound that has a sub-optimal dependence on $n$, but recovers the known result that the Gaussian approximation is valid if and only if $d=o(n)$. We discuss an application to the bootstrap. We prove our main results using Stein's method.
130 - Xiao Fang, David Siegmund (2020)
We study the maximum score statistic to detect and estimate local signals in the form of change-points in the level, slope, or other property of a sequence of observations, and to segment the sequence when there appear to be multiple changes. We find that when observations are serially dependent, the change-points can lead to upwardly biased estimates of autocorrelations, resulting in a sometimes serious loss of power. Examples involving temperature variations, the level of atmospheric greenhouse gases, suicide rates, and the daily incidence of COVID-19 illustrate the general theory.
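A minimal version of a maximum score scan for a single mean shift, under an assumed known-variance Gaussian model, can look as follows (a toy sketch for independent observations, not the paper's estimator for serially dependent data; all names are hypothetical):

```python
import numpy as np

def max_score_changepoint(x):
    """Scan all interior points for a change in mean and return the
    location maximizing the squared, variance-normalized CUSUM score."""
    n = len(x)
    xbar, sigma2 = x.mean(), x.var()
    best_tau, best_stat = None, -np.inf
    for tau in range(1, n):
        # Partial-sum score for a mean shift at tau, normalized by
        # its null variance sigma2 * tau * (n - tau) / n.
        s = x[:tau].sum() - tau * xbar
        stat = s * s / (sigma2 * tau * (n - tau) / n)
        if stat > best_stat:
            best_tau, best_stat = tau, stat
    return best_tau, best_stat

# Toy example: a level shift of two standard deviations at t = 50.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 1, 50)])
tau_hat, stat = max_score_changepoint(x)
```

The scan is quadratic in the worst case but linear per candidate point; segmentation for multiple changes would apply it recursively.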
Given a graph sequence $\{G_n\}_{n \geq 1}$, denote by $T_3(G_n)$ the number of monochromatic triangles in a uniformly random coloring of the vertices of $G_n$ with $c \geq 2$ colors. This arises as a generalization of the birthday paradox, where $G_n$ corresponds to a friendship network and $T_3(G_n)$ counts the number of triples of friends with matching birthdays. In this paper we prove a central limit theorem (CLT) for $T_3(G_n)$ with explicit error rates. The proof involves constructing a martingale difference sequence by carefully ordering the vertices of $G_n$, based on a certain combinatorial score function, and using a quantitative version of the martingale CLT. We then relate this error term to the well-known fourth moment phenomenon, which, interestingly, holds only when the number of colors $c \geq 5$. We also show that the convergence of the fourth moment is necessary to obtain a Gaussian limit for any $c \geq 2$, which, together with the above result, implies that the fourth-moment condition characterizes the limiting normal distribution of $T_3(G_n)$ whenever $c \geq 5$. Finally, to illustrate the promise of our approach, we include an alternative proof of the CLT for the number of monochromatic edges, which provides quantitative rates for the results obtained in Bhattacharya et al. (2017).
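The statistic $T_3(G_n)$ itself is straightforward to compute; a short sketch (helper names are hypothetical) that counts monochromatic triangles in a vertex-colored graph given as an edge list:

```python
from itertools import combinations

def monochromatic_triangles(edges, coloring):
    """Count triangles whose three vertices share a color.

    edges    : iterable of (u, v) pairs.
    coloring : dict mapping each vertex to its color.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v in edges:
        for w in adj[u] & adj[v]:
            # Each triangle is found once per edge, i.e. three times.
            if coloring[u] == coloring[v] == coloring[w]:
                count += 1
    return count // 3

# Birthday-paradox analogy: vertices are people, colors are birthdays.
# In K4 with all vertices the same color, all 4 triangles match; under
# a uniform c-coloring each triangle is monochromatic with prob. 1/c^2.
edges_k4 = list(combinations(range(4), 2))
coloring = {i: 0 for i in range(4)}
```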
Accurate covariance matrices for two-point functions are critical for inferring cosmological parameters in likelihood analyses of large-scale structure surveys. Among various approaches to obtaining the covariance, analytic computation is much faster and less noisy than estimation from data or simulations. However, the transform of covariances from Fourier space to real space involves integrals containing products of two Bessel functions, which are numerically slow to evaluate and easily affected by numerical uncertainties. Inaccurate covariances may lead to significant errors in the inference of the cosmological parameters. In this paper, we introduce a 2D-FFTLog algorithm for efficient, accurate and numerically stable computation of non-Gaussian real space covariances for both 3D and projected statistics. The 2D-FFTLog algorithm is easily extended to perform real space bin-averaging. We apply the algorithm to the covariances for galaxy clustering and weak lensing for a Dark Energy Survey Year 3-like and a Rubin Observatory Legacy Survey of Space and Time Year 1-like survey, and demonstrate that for both surveys, our algorithm can produce numerically stable angular bin-averaged covariances with the flat sky approximation, which are sufficiently accurate for inferring cosmological parameters. The code CosmoCov for computing the real space covariances with or without the flat sky approximation is released along with this paper.
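The key step behind any FFTLog-type method is decomposing a log-uniformly sampled function into complex power laws, whose Bessel-type transforms can then be evaluated analytically. A one-dimensional sketch of that decomposition (an illustration of the principle only, not the 2D-FFTLog algorithm or the CosmoCov implementation; the bias parameter `nu` is a free choice):

```python
import numpy as np

def fftlog_coeffs(fk, k, nu=1.5):
    """Decompose f(k), sampled log-uniformly in k, into power laws:
    f(k_j) = k_j^nu * sum_m c_m (k_j / k_0)^(i * eta_m)."""
    n = len(k)
    dlnk = np.log(k[1] / k[0])
    # Bias by k^(-nu) so the FFT acts on a well-behaved sequence.
    c_m = np.fft.fft(fk * k**(-nu)) / n
    eta_m = 2 * np.pi * np.fft.fftfreq(n, d=dlnk)
    return c_m, eta_m

def fftlog_reconstruct(c_m, eta_m, nu, k):
    """Resum the power-law expansion on the original grid."""
    ratio = (k[None, :] / k[0]) ** (1j * eta_m[:, None])
    return np.real(k**nu * np.sum(c_m[:, None] * ratio, axis=0))

# Toy check: the expansion reproduces the input exactly at the nodes.
k = np.logspace(-3, 1, 128)
fk = k**2 * np.exp(-k**2)
c_m, eta_m = fftlog_coeffs(fk, k, nu=1.5)
recon = fftlog_reconstruct(c_m, eta_m, 1.5, k)
```

In a covariance computation, the resummation step is replaced by analytic Hankel transforms of each power-law term, which is what makes the double-Bessel integrals fast and stable.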
48 - Xiao Fang, Yuta Koike (2020)
We extend Stein's celebrated Wasserstein bound for normal approximation via exchangeable pairs to the multi-dimensional setting. As an intermediate step, we exploit the symmetry of exchangeable pairs to obtain an error bound for smooth test functions. We also obtain a continuous version of the multi-dimensional Wasserstein bound in terms of fourth moments. We apply the main results to multivariate normal approximations to Wishart matrices of size $n$ and degree $d$, where we obtain the optimal convergence rate $\sqrt{n^3/d}$ under only moment assumptions, and to quadratic forms and Poisson functionals, where we strengthen a few of the fourth moment bounds in the literature on the Wasserstein distance.
127 - Xiao Fang, Yuta Koike (2020)
We obtain explicit error bounds for the $d$-dimensional normal approximation on hyperrectangles for a random vector that has a Stein kernel, or admits an exchangeable pair coupling, or is a non-linear statistic of independent random variables or a sum of $n$ locally dependent random vectors. We assume the approximating normal distribution has a non-singular covariance matrix. The error bounds vanish even when the dimension $d$ is much larger than the sample size $n$. We prove our main results using the approach of Götze (1991) in Stein's method, together with modifications of an estimate of Anderson, Hall and Titterington (1998) and a smoothing inequality of Bhattacharya and Rao (1976). For sums of $n$ independent and identically distributed isotropic random vectors having a log-concave density, we obtain an error bound that is optimal up to a $\log n$ factor. We also discuss an application to multiple Wiener-Itô integrals.
A classical result for the simple symmetric random walk with $2n$ steps is that the number of steps above the origin, the time of the last visit to the origin, and the time of the maximum height all have exactly the same distribution and converge when scaled to the arcsine law. Motivated by applications in genomics, we study the distributions of these statistics for the non-Markovian random walk generated from the ascents and descents of a uniform random permutation and a Mallows($q$) permutation and show that they have the same asymptotic distributions as for the simple random walk. We also give an unexpected conjecture, along with numerical evidence and a partial proof in special cases, for the result that the number of steps above the origin by step $2n$ for the uniform permutation generated walk has exactly the same discrete arcsine distribution as for the simple random walk, even though the other statistics for these walks have very different laws. We also give explicit error bounds for the limit theorems using Stein's method for the arcsine distribution, as well as functional central limit theorems and a strong embedding of the Mallows($q$) permutation, which is of independent interest.
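The discrete arcsine law referenced in the conjecture is easy to tabulate from the classical formula $P(L_{2n}=2k)=\binom{2k}{k}\binom{2n-2k}{n-k}4^{-n}$, where $L_{2n}$ is the number of steps above the origin for the simple symmetric walk (a short sketch of the formula, not of the permutation-generated walks studied in the paper):

```python
from math import comb

def discrete_arcsine_pmf(n):
    """P(L_{2n} = 2k), k = 0..n, for the time above the origin of a
    simple symmetric random walk of length 2n:
    P(L_{2n} = 2k) = C(2k, k) * C(2n-2k, n-k) / 4**n."""
    return [comb(2 * k, k) * comb(2 * (n - k), n - k) / 4**n
            for k in range(n + 1)]

pmf = discrete_arcsine_pmf(10)
# The distribution is symmetric and U-shaped: spending almost all or
# almost none of the time above the origin is most likely, exactly as
# the continuous arcsine limit suggests.
```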