
An Unbiased Estimator of Peculiar Velocity with Gaussian Distributed Errors for Precision Cosmology

Posted by: Hume A. Feldman
Publication date: 2014
Research field: Physics
Paper language: English
Author: Richard Watkins





We introduce a new estimator of the peculiar velocity of a galaxy or group of galaxies from redshift and distance estimates. This estimator yields peculiar velocity estimates that are statistically unbiased and have Gaussian-distributed errors, thus meeting the assumptions of analyses that rely on individual peculiar velocities. We apply this estimator to the SFI++ and Cosmicflows-2 catalogs of galaxy distances and, using the fact that peculiar velocity estimates of distant galaxies are error dominated, examine their error distributions. The adoption of the new estimator significantly improves the accuracy and validity of studies of the large-scale peculiar velocity field and eliminates potential systematic biases, thus helping to bring peculiar velocity analysis into the era of precision cosmology. In addition, our method of examining the distribution of velocity errors should provide a useful check of the statistics of large peculiar velocity catalogs, particularly those that are compiled out of data from multiple sources.
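The abstract describes the estimator's properties without reproducing it. Below is a minimal numerical sketch, assuming the logarithmic form v = cz ln(cz / (H0 d)), which is the natural way to turn log-normally distributed distance errors into Gaussian velocity errors; the H0 value, function names, and toy numbers are illustrative, not the paper's code. The toy check propagates log-normal distance scatter through both estimators to show why the traditional one is biased:

```python
import numpy as np

C_KMS = 299792.458   # speed of light [km/s]
H0 = 70.0            # assumed Hubble constant [km/s/Mpc]

def v_traditional(z, d):
    """Traditional estimator v = cz - H0*d.

    Because survey distance errors are roughly log-normal, this
    estimator inherits skewed, biased errors at large distances.
    """
    return C_KMS * z - H0 * np.asarray(d)

def v_log(z, d):
    """Logarithmic estimator v = cz * ln(cz / (H0*d)).

    Log-normal distance errors enter linearly through ln(d), so the
    resulting velocity errors are nearly Gaussian with zero mean.
    """
    cz = C_KMS * np.asarray(z)
    return cz * np.log(cz / (H0 * np.asarray(d)))

# Toy check: propagate 20% log-normal distance scatter at cz ~ 10000 km/s
# for an object with zero true peculiar velocity.
rng = np.random.default_rng(0)
z_true = 0.0333
d_true = C_KMS * z_true / H0
d_obs = d_true * rng.lognormal(sigma=0.2, size=100_000)
print("traditional mean bias:", v_traditional(z_true, d_obs).mean())  # ~ -200 km/s
print("logarithmic mean bias:", v_log(z_true, d_obs).mean())          # ~ 0 km/s
```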




Read also

Type Ia Supernovae have yet again the opportunity to revolutionize the field of cosmology, as the new generation of surveys is acquiring thousands of nearby SNeIa, opening a new era in cosmology: the direct measurement of the growth of structure, parametrized by $fD$. This method is based on the SNeIa peculiar velocities, derived from the residuals to the Hubble law, as direct tracers of the full gravitational potential caused by large-scale structure. With this technique, we could probe not only the properties of dark energy, but also the laws of gravity. In this paper we present the analytical framework and forecasts. We show that ZTF and LSST will be able to reach 5% precision on $fD$ by 2027. Our analysis is not significantly sensitive to photo-typing, but known selection functions and spectroscopic redshifts are mandatory. We finally introduce the idea of a dedicated spectrograph that would gather all the required information for each SNeIa while boosting efficiency, so that we could reach the 5% precision within the first two years of LSST operation and the few-percent level by the end of the survey.
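The abstract only names the Hubble-residual technique. A minimal sketch of the conversion, assuming the standard low-redshift linearization v ≈ -(ln 10 / 5) c z Δμ (sign conventions vary between papers; the function name and numbers are illustrative):

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def peculiar_velocity_from_residual(z_obs, delta_mu):
    """Low-z linearized conversion of a Hubble residual to velocity.

    delta_mu = mu_observed - mu_expected(z_obs) in magnitudes.
    A SN moving away from us appears too bright for its redshift
    (delta_mu < 0), giving a positive peculiar velocity.
    """
    return -(np.log(10.0) / 5.0) * C_KMS * np.asarray(z_obs) * np.asarray(delta_mu)

# Example: a -0.05 mag residual at z = 0.03 maps to roughly +200 km/s.
print(peculiar_velocity_from_residual(0.03, -0.05))
```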
We present the Democratic Samples of Supernovae (DSS), a compilation of 775 low-redshift Type Ia and II supernovae (SNe Ia & II), of which 137 SN Ia distances are derived via the newly developed snapshot distance method. Using the objects in the DSS as tracers of the peculiar-velocity field, we compare against the corresponding reconstruction from the 2M++ galaxy redshift survey. Our analysis -- which takes special care to properly weight each DSS subcatalogue and cross-calibrate the relative distance scales between them -- results in a measurement of the cosmological parameter combination $f\sigma_8 = 0.390_{-0.022}^{+0.022}$ as well as an external bulk flow velocity of $195_{-23}^{+22}$ km s$^{-1}$ in the direction $(\ell, b) = (292_{-7}^{+7}, -6_{-4}^{+5})$ deg, which originates from beyond the 2M++ reconstruction. Similarly, we find a bulk flow of $245_{-31}^{+32}$ km s$^{-1}$ toward $(\ell, b) = (294_{-7}^{+7}, 3_{-5}^{+6})$ deg on a scale of $\sim 30\,h^{-1}$ Mpc if we ignore the reconstructed peculiar-velocity field altogether. Our constraint on $f\sigma_8$ -- the tightest derived from SNe to date (considering only statistical error bars), and the only one to utilise SNe II -- is broadly consistent with other results from the literature. We intend for our data accumulation and treatment techniques to become the prototype for future studies that will exploit the unprecedented data volume from upcoming wide-field surveys.
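For context on the bulk-flow numbers quoted above, here is a minimal sketch of how a bulk-flow vector can be fit to radial peculiar velocities by weighted least squares; this is the generic estimator, not the DSS pipeline itself, and the data are synthetic:

```python
import numpy as np

def fit_bulk_flow(nhat, v_radial, sigma):
    """Weighted least-squares bulk flow from radial velocities.

    Model: v_radial_i = B . nhat_i + noise. Solving the normal
    equations yields the 3-vector B describing the coherent
    streaming motion of the sample.
    """
    nhat = np.asarray(nhat, dtype=float)            # (N, 3) unit vectors
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2   # inverse-variance weights
    A = (nhat * w[:, None]).T @ nhat                # 3x3 normal matrix
    b = (nhat * w[:, None]).T @ np.asarray(v_radial, dtype=float)
    return np.linalg.solve(A, b)

# Toy example: recover an injected 250 km/s flow along +x.
rng = np.random.default_rng(1)
n = rng.normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
v = n @ np.array([250.0, 0.0, 0.0]) + rng.normal(scale=300.0, size=500)
print(fit_bulk_flow(n, v, np.full(500, 300.0)))
```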
This paper studies the problem of learning with augmented classes (LAC), where augmented classes unobserved in the training data might emerge in the testing phase. Previous studies generally attempt to discover augmented classes by exploiting geometric properties, achieving inspiring empirical performance yet lacking theoretical understanding, particularly of the generalization ability. In this paper we show that, by using unlabeled training data to approximate the potential distribution of augmented classes, an unbiased risk estimator of the testing distribution can be established for the LAC problem under mild assumptions, which paves the way to developing a sound approach with theoretical guarantees. Moreover, the proposed approach can adapt to complex changing environments where augmented classes may appear and the prior of known classes may change simultaneously. Extensive experiments confirm the effectiveness of our proposed approach.
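The abstract leaves the estimator's form implicit. One standard construction, sketched here under the mixture assumption p_test = θ·p_known + (1-θ)·p_augmented with unlabeled data drawn from p_test (not necessarily the paper's exact formulation), rewrites the expectation over the unseen augmented classes in terms of quantities estimable from labeled known-class data and unlabeled data alone:

```python
import numpy as np

def unbiased_risk(loss_known, loss_known_as_ac, loss_unlab_as_ac, theta):
    """Risk over the test distribution via the mixture identity.

    From E_aug[l(f(x), ac)] = (E_unlab[l(f(x), ac)]
                               - theta * E_known[l(f(x), ac)]) / (1 - theta),
    the test risk  R = theta * E_known[l(f(x), y)]
                       + (1 - theta) * E_aug[l(f(x), ac)]
    simplifies to the three estimable terms below.

    loss_known       : losses of known-class samples w.r.t. their true labels
    loss_known_as_ac : losses of the same samples w.r.t. the augmented label ac
    loss_unlab_as_ac : losses of unlabeled samples w.r.t. the augmented label ac
    theta            : assumed proportion of known classes at test time
    """
    return (theta * np.mean(loss_known)
            + np.mean(loss_unlab_as_ac)
            - theta * np.mean(loss_known_as_ac))
```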
Darren S. Reed (2012)
Cosmological surveys aim to use the evolution of the abundance of galaxy clusters to accurately constrain the cosmological model. In the context of LCDM, we show that it is possible to achieve the required percent-level accuracy in the halo mass function with gravity-only cosmological simulations, and we provide simulation start and run parameter guidelines for doing so. Some previous works have had sufficient statistical precision, but lacked robust verification of absolute accuracy. Convergence tests of the mass function with, for example, simulation start redshift can exhibit false convergence due to counteracting errors, potentially misleading one to infer overly optimistic estimates of simulation accuracy. Percent-level accuracy is possible if initial condition particle mapping uses second-order Lagrangian Perturbation Theory, and if the start epoch is between 10 and 50 expansion factors before the epoch of halo formation of interest. The mass function for halos with fewer than ~1000 particles is highly sensitive to simulation parameters and start redshift, implying a practical minimum mass resolution limit due to mass discreteness. The narrow range in converged start redshift suggests that it is not presently possible for a single simulation to capture accurately the cluster mass function while also starting early enough to model accurately the numbers of reionisation-era galaxies, whose baryon feedback processes may affect later cluster properties. Ultimately, fully exploiting current and future cosmological surveys will require accurate modeling of baryon physics and observable properties, a formidable challenge for which accurate gravity-only simulations are just an initial step.
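The quoted guideline of starting 10 to 50 expansion factors before halo formation translates into a simple start-redshift window; a worked example of that arithmetic (the function name is ours):

```python
# Start-redshift window implied by the "10-50 expansion factors before
# halo formation" guideline: a(start) = a(form) / N for N in [10, 50].
def start_redshift_window(z_form):
    a_form = 1.0 / (1.0 + z_form)
    return tuple(1.0 / (a_form / n) - 1.0 for n in (10, 50))

# Example: for clusters forming around z = 0.5, start between z = 14 and z = 74.
print(start_redshift_window(0.5))
```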
The Wiener Filter (WF) technique enables the reconstruction of density and velocity fields from observed radial peculiar velocities. This paper aims at identifying the optimal design of peculiar velocity surveys within the WF framework. The prime goal is to test the dependence of the quality of the reconstruction on the distribution and nature of data points. Mock datasets, extending to 250 Mpc/h, are drawn from a constrained simulation that mimics the local Universe to produce realistic mock catalogs. Reconstructed fields obtained with these mocks are compared to the reference simulation. Comparisons, including residual distributions, cell-to-cell comparisons, and bulk velocities, imply that the presence of field data points is essential to properly measure the flows. The fields reconstructed from mocks that consist only of galaxy cluster data points exhibit poor-quality bulk velocities. In addition, the quality of the reconstruction depends strongly on the grouping of individual data points into single points to suppress virial motions in high-density regions. Conversely, the presence of a Zone of Avoidance hardly affects the reconstruction. For a given number of data points, a uniform sample does not score any better than a sample whose number of data points decreases with distance. The best reconstructions are obtained with a grouped survey containing field galaxies: assuming no error, they differ from the simulated field by less than 100 km/s up to the extreme edge of the catalogs, or up to a distance of three times the mean distance of the data points for non-uniform catalogs. The overall conclusions hold when errors are added.
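For a Gaussian field the WF estimate has the standard closed form s_WF = S (S + N)^{-1} d, where S is the prior signal covariance and N the noise covariance. A minimal one-dimensional sketch of that expression with a generic covariance prior (all parameters illustrative, not the paper's constrained-realization machinery):

```python
import numpy as np

def wiener_filter(S, N, d):
    """Gaussian Wiener Filter reconstruction: s_WF = S (S + N)^{-1} d.

    S : prior signal covariance evaluated at the data points
    N : noise covariance of the data
    d : observed (noisy) data, e.g. radial peculiar velocities
    """
    return S @ np.linalg.solve(S + N, d)

# Toy 1-D example: smooth signal prior, white noise.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
S = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1**2)) + 1e-8 * np.eye(50)
N = 0.5**2 * np.eye(50)
signal = rng.multivariate_normal(np.zeros(50), S)
d = signal + rng.normal(scale=0.5, size=50)
s_wf = wiener_filter(S, N, d)
print("rms error raw:", np.sqrt(np.mean((d - signal) ** 2)))
print("rms error WF :", np.sqrt(np.mean((s_wf - signal) ** 2)))
```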