
Toward an Optimal Sampling of Peculiar Velocity Surveys For Wiener Filter Reconstructions

Posted by: Dr. Jenny Sorce
Publication date: 2017
Research field: Physics
Paper language: English

The Wiener Filter (WF) technique enables the reconstruction of density and velocity fields from observed radial peculiar velocities. This paper aims to identify the optimal design of peculiar velocity surveys within the WF framework. The prime goal is to test how the quality of the reconstruction depends on the distribution and nature of the data points. Mock datasets, extending to 250 Mpc/h, are drawn from a constrained simulation that mimics the local Universe to produce realistic mock catalogs. Reconstructed fields obtained with these mocks are compared to the reference simulation. Comparisons of residual distributions, cell-to-cell velocities and bulk velocities imply that the presence of field data points is essential to properly measure the flows. The fields reconstructed from mocks consisting only of galaxy cluster data points exhibit poor-quality bulk velocities. In addition, the quality of the reconstruction depends strongly on the grouping of individual data points into single points to suppress virial motions in high-density regions. Conversely, the presence of a Zone of Avoidance hardly affects the reconstruction. For a given number of data points, a uniform sample does not score any better than a sample whose number of data points decreases with distance. The best reconstructions are obtained with a grouped survey containing field galaxies: assuming no error, they differ from the simulated field by less than 100 km/s up to the extreme edge of the catalogs, or up to a distance of three times the mean distance of data points for non-uniform catalogs. The overall conclusions hold when errors are added.
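
For readers unfamiliar with the technique, the WF estimator commonly used in this reconstruction literature can be written as the minimum-variance linear combination of the data sketched below; the notation is illustrative and is not necessarily the exact convention of this paper.

```latex
% Minimum-variance (Wiener Filter) estimate of a field F at position r,
% given N radial peculiar velocity constraints d_i = u_i + e_i
% (underlying radial velocity plus observational error):
\begin{align}
  F^{\mathrm{WF}}(\mathbf{r})
     &= \sum_{i,j} \langle F(\mathbf{r})\, d_i \rangle
        \left[\langle d_i d_j \rangle\right]^{-1}_{ij} d_j ,\\
  \langle d_i d_j \rangle
     &= \langle u_i u_j \rangle + \epsilon_i^{2}\,\delta_{ij} ,
\end{align}
% where the correlation functions are computed in linear theory from an
% assumed prior power spectrum (e.g. LambdaCDM).
```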


Read also

We present a high performance solution to the Wiener filtering problem via a formulation that is dual to the recently developed messenger technique. This new dual messenger algorithm, like its predecessor, efficiently calculates the Wiener filter solution of large and complex data sets without preconditioning and can account for inhomogeneous noise distributions and arbitrary mask geometries. We demonstrate the capabilities of this scheme in signal reconstruction by applying it on a simulated cosmic microwave background (CMB) temperature data set. The performance of this new method is compared to that of the standard messenger algorithm and the preconditioned conjugate gradient (PCG) approach, using a series of well-known convergence diagnostics and their processing times, for the particular problem under consideration. This variant of the messenger algorithm matches the performance of the PCG method in terms of the effectiveness of reconstruction of the input angular power spectrum and converges smoothly to the final solution. The dual messenger algorithm outperforms the standard messenger and PCG methods in terms of execution time, as it runs to completion around 2 and 3-4 times faster than the respective methods, for the specific problem considered.
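
As a point of reference for what all three solvers compute, the target of the calculation is the Wiener filter solution s_WF = S (S + N)^{-1} d. The toy sketch below is a hypothetical illustration assuming signal and noise covariances that are diagonal in the same basis, in which case the solution is available in closed form; the messenger and PCG schemes discussed above are needed precisely when that is not the case (S diagonal in harmonic space, N in pixel space).

```python
import numpy as np

# Toy Wiener filter: s_WF = S (S + N)^{-1} d, assuming both the signal
# covariance S and the noise covariance N are diagonal in the same basis
# (per-mode variances).  All numbers are invented for the illustration.
rng = np.random.default_rng(0)

n_modes = 10_000
signal_var = 1.0 / (1.0 + np.arange(n_modes))   # assumed red signal spectrum
noise_var = 0.5 * np.ones(n_modes)              # assumed white noise level

signal = rng.normal(0.0, np.sqrt(signal_var))   # draw a signal realization
data = signal + rng.normal(0.0, np.sqrt(noise_var))

# Exact Wiener filter solution for diagonal covariances.
wiener = signal_var / (signal_var + noise_var) * data

resid = np.std(wiener - signal)
print(f"rms residual of Wiener-filtered modes: {resid:.3f}")
```
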
Richard Watkins (2014)
We introduce a new estimator of the peculiar velocity of a galaxy or group of galaxies from redshift and distance estimates. This estimator yields peculiar velocity estimates that are statistically unbiased and have Gaussian-distributed errors, thus meeting the assumptions of analyses that rely on individual peculiar velocities. We apply this estimator to the SFI++ and Cosmicflows-2 catalogs of galaxy distances and, using the fact that peculiar velocity estimates of distant galaxies are error dominated, examine their error distributions. The adoption of the new estimator significantly improves the accuracy and validity of studies of the large-scale peculiar velocity field and eliminates potential systematic biases, thus helping to bring peculiar velocity analysis into the era of precision cosmology. In addition, our method of examining the distribution of velocity errors should provide a useful check on the statistics of large peculiar velocity catalogs, particularly those compiled from data from multiple sources.
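
To illustrate the problem this abstract describes, the sketch below uses the conventional estimator v = cz - H0*d (not the new estimator introduced in the paper) together with an assumed log-normal distance error, and shows numerically that the resulting velocity errors are biased and skewed; all numbers are invented for the demonstration.

```python
import numpy as np

# Illustration (not the paper's estimator): with the conventional relation
#   v_pec = c*z - H0*d,
# a log-normally distributed fractional distance error produces a biased,
# skewed (non-Gaussian) velocity error, which is the problem an unbiased,
# Gaussian-error estimator is designed to avoid.
rng = np.random.default_rng(1)

H0 = 75.0              # assumed Hubble constant [km/s/Mpc]
d_true = 100.0         # assumed true distance of the galaxy [Mpc]
v_true = 300.0         # assumed true peculiar velocity [km/s]
frac_err = 0.15        # assumed 15% log-normal distance error

cz = H0 * d_true + v_true                               # observed cz [km/s]
d_est = d_true * rng.lognormal(0.0, frac_err, 100_000)  # noisy distances

v_est = cz - H0 * d_est                                 # conventional estimator
errors = v_est - v_true

skew = ((errors - errors.mean())**3).mean() / errors.std()**3
print(f"mean error  : {errors.mean():8.1f} km/s (bias)")
print(f"median error: {np.median(errors):8.1f} km/s")
print(f"skewness    : {skew:6.2f}")
```
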
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measuring the distribution of matter in the universe, which can be used to learn about the nature of dark energy, the physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., Vsurvey of roughly 10 Gpc^3) to be covered, and thus tends to be expensive. A sparse sampling method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, Vsurvey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. One can then recover the power spectrum of galaxies with the precision expected for a survey covering a volume of Vsurvey (rather than the volume of the sum of observed regions), with the number density of galaxies given by the total number of observed galaxies divided by Vsurvey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, while deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
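
A minimal sketch of the bookkeeping described above, with invented numbers rather than the actual HETDEX configuration: the observed patches are laid out on a regular grid whose spacing stays below the length scale corresponding to the smallest wavenumber of interest, and the effective number density entering the power spectrum error budget is the total observed galaxy count divided by the full survey volume.

```python
import numpy as np

# Sparse-sampling bookkeeping sketch with made-up numbers (not the actual
# HETDEX design).  Observed patches sit on a regular grid whose spacing
# stays below the length scale 2*pi/k_min of the smallest wavenumber one
# wants to recover.
L_survey = 2154.0                    # box side [Mpc/h], ~10 (Gpc/h)^3 volume
k_min = 0.05                         # smallest wavenumber of interest [h/Mpc]
max_spacing = 2.0 * np.pi / k_min    # ~126 Mpc/h between patch centres

n_side = int(np.ceil(L_survey / max_spacing))   # patches per dimension
spacing = L_survey / n_side                     # actual regular spacing
n_patches = n_side**3

n_gal_per_patch = 1.0e4              # assumed galaxies observed per patch
N_gal = n_patches * n_gal_per_patch
V_survey = L_survey**3               # full footprint volume [(Mpc/h)^3]
n_eff = N_gal / V_survey             # effective density entering forecasts

print(f"{n_patches} patches on a regular grid, spacing {spacing:.0f} Mpc/h")
print(f"effective number density n_eff = {n_eff:.2e} (Mpc/h)^-3")
```
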
High-quality reconstructions of the three-dimensional velocity and density fields of the local Universe are essential to study the local Large Scale Structure. In this paper, the Wiener Filter reconstruction technique is applied to galaxy radial peculiar velocity catalogs to understand how the Hubble constant (H0) value and the grouping scheme affect the reconstructions. While H0 is used to derive radial peculiar velocities from galaxy distance measurements and total velocities, the grouping scheme serves the purpose of removing non-linear motions. Two different grouping schemes (based on the literature and on a systematic algorithm) as well as five H0 values ranging from 72 to 76 km/s/Mpc are selected. The Wiener Filter is applied to the resulting catalogs. Whatever grouping scheme is used, the larger H0 is, the larger the infall onto the Local Volume is. However, this conclusion has to be strongly mitigated: a bias minimization scheme applied to the catalogs after grouping suppresses this effect. At fixed H0, reconstructions obtained with catalogs grouped with the different schemes exhibit structures at the proper locations in both cases, but the structures are more contrasted with the less aggressive scheme: having more constraints permits an infall from both sides onto the structures, reinforcing their overdensity. Such findings highlight the importance of a balance between grouping to suppress non-linear motions and preserving constraints to produce an infall onto structures expected to be large overdensities. Such an observation is promising for performing constrained simulations of the local Universe including its massive clusters.
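
As a rough illustration of what grouping does (a naive average, not either of the two schemes compared in the paper), the sketch below collapses the members of a hypothetical cluster into a single distance-velocity constraint so that virial motions cancel before the linear-theory reconstruction is applied.

```python
import numpy as np

# Naive grouping illustration (not the paper's specific schemes): member
# galaxies of a cluster are replaced by a single constraint whose distance
# and radial peculiar velocity are the means over the members, so that
# virial (non-linear) motions largely average out before the Wiener Filter
# is applied.
def group_cluster(distances_mpc, radial_velocities_kms):
    """Collapse member galaxies into one (distance, velocity) data point."""
    return float(np.mean(distances_mpc)), float(np.mean(radial_velocities_kms))

# Hypothetical cluster members: a common bulk flow of ~200 km/s plus
# virial motions of several hundred km/s that should not be fed to a
# linear-theory reconstruction.
rng = np.random.default_rng(2)
d_members = 17.0 + rng.normal(0.0, 1.0, 30)       # distances [Mpc]
v_members = 200.0 + rng.normal(0.0, 600.0, 30)    # radial velocities [km/s]

d_grp, v_grp = group_cluster(d_members, v_members)
print(f"grouped constraint: d = {d_grp:.1f} Mpc, v = {v_grp:+.0f} km/s")
```
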
Type Ia Supernovae once again have the opportunity to revolutionize the field of cosmology, as the new generation of surveys is acquiring thousands of nearby SNeIa, opening a new era in cosmology: the direct measurement of the growth of structure parametrized by $fD$. This method is based on the SNeIa peculiar velocities derived from the residuals to the Hubble law as direct tracers of the full gravitational potential caused by large scale structure. With this technique, we could probe not only the properties of dark energy, but also the laws of gravity. In this paper we present the analytical framework and forecasts. We show that ZTF and LSST will be able to reach 5% precision on $fD$ by 2027. Our analysis is not significantly sensitive to photo-typing, but known selection functions and spectroscopic redshifts are mandatory. We finally introduce the idea of a dedicated spectrograph that would obtain all the required information for each SNeIa while also boosting the efficiency, so that we could reach the 5% precision within the first two years of LSST operation and the few percent level by the end of the survey.
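
For orientation, a commonly quoted low-redshift approximation relating a Hubble-diagram residual to the peculiar velocity it traces is sketched below; the sign and the exact redshift factors depend on the convention adopted, so this should be read as an order-of-magnitude guide rather than the working relation of the paper.

```latex
% Low-redshift approximation linking a SN Ia Hubble-diagram residual
% \Delta\mu (in magnitudes) to the peculiar velocity it traces (sign and
% exact redshift factors depend on the convention adopted):
\begin{equation}
  v_{\mathrm{pec}} \;\simeq\; \frac{\ln 10}{5}\, c\, z\, \Delta\mu
  \qquad (z \ll 1),
\end{equation}
% so a 0.1 mag residual at z = 0.05 corresponds to roughly 700 km/s.
```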