
Wiener filter reloaded: fast signal reconstruction without preconditioning

Posted by Doogesh Kodi Ramanah
Publication date: 2017
Research field: Physics
Paper language: English





We present a high performance solution to the Wiener filtering problem via a formulation that is dual to the recently developed messenger technique. This new dual messenger algorithm, like its predecessor, efficiently calculates the Wiener filter solution of large and complex data sets without preconditioning and can account for inhomogeneous noise distributions and arbitrary mask geometries. We demonstrate the capabilities of this scheme in signal reconstruction by applying it on a simulated cosmic microwave background (CMB) temperature data set. The performance of this new method is compared to that of the standard messenger algorithm and the preconditioned conjugate gradient (PCG) approach, using a series of well-known convergence diagnostics and their processing times, for the particular problem under consideration. This variant of the messenger algorithm matches the performance of the PCG method in terms of the effectiveness of reconstruction of the input angular power spectrum and converges smoothly to the final solution. The dual messenger algorithm outperforms the standard messenger and PCG methods in terms of execution time, as it runs to completion around 2 and 3-4 times faster than the respective methods, for the specific problem considered.
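
As a rough illustration of the covariance splitting that both messenger variants rely on, the following Python sketch runs the standard messenger iteration (the predecessor compared against in the paper) on a toy one-dimensional data set; the dual messenger reorganisation itself is not reproduced here, and the array sizes, spectra, mask and iteration count are illustrative assumptions only.

    import numpy as np

    # Toy 1D messenger-field Wiener filter (standard variant, no cooling).
    # Signal covariance S is diagonal in Fourier space, noise covariance N is
    # diagonal in pixel space; the messenger covariance T = tau*I is diagonal in both.
    rng = np.random.default_rng(0)
    n = 1024
    k = np.fft.rfftfreq(n)
    S_k = 1.0 / (1e-3 + k**2)                        # assumed signal power per Fourier mode
    N_pix = 0.1 * (1.0 + rng.random(n))              # inhomogeneous noise variance per pixel
    mask = np.ones(n); mask[400:500] = 0.0           # arbitrary mask (masked pixels carry no data)

    # Approximate Gaussian realisation of signal + noise (orthonormal FFT convention)
    s_true = np.fft.irfft(np.sqrt(S_k / 2) * (rng.standard_normal(k.size)
                          + 1j * rng.standard_normal(k.size)), n, norm="ortho")
    d = mask * (s_true + np.sqrt(N_pix) * rng.standard_normal(n))

    tau = 0.9 * N_pix[mask > 0].min()                # T = tau*I, kept strictly below min(N)
    Nbar = np.where(mask > 0, N_pix - tau, np.inf)   # N = Nbar + T; masked pixels -> infinite noise

    s = np.zeros(n)
    for _ in range(300):
        # pixel-space step: the messenger field t mediates between data and signal
        t = (d / Nbar + s / tau) / (1.0 / Nbar + 1.0 / tau)
        # Fourier-space step: Wiener filter of t against the homogeneous covariance T
        s = np.fft.irfft(S_k / (S_k + tau) * np.fft.rfft(t, norm="ortho"), n, norm="ortho")
    # s now approximates the Wiener filter solution S (S + N)^{-1} d
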



Read also

High-quality reconstructions of the three-dimensional velocity and density fields of the local Universe are essential to study the local Large Scale Structure. In this paper, the Wiener Filter reconstruction technique is applied to galaxy radial peculiar velocity catalogs to understand how the Hubble constant (H0) value and the grouping scheme affect the reconstructions. While H0 is used to derive radial peculiar velocities from galaxy distance measurements and total velocities, the grouping scheme serves the purpose of removing non-linear motions. Two different grouping schemes (based on the literature and a systematic algorithm) as well as five H0 values ranging from 72 to 76 km/s/Mpc are selected. The Wiener Filter is applied to the resulting catalogs. Whatever grouping scheme is used, the larger H0 is, the larger the infall onto the local Volume is. However, this conclusion has to be strongly mitigated: a bias minimization scheme applied to the catalogs after grouping suppresses this effect. At fixed H0, reconstructions obtained with catalogs grouped with the different schemes exhibit structures at the proper location in both cases, but the structures are more contrasted with the less aggressive scheme: having more constraints permits an infall from both sides onto the structures to reinforce their overdensity. Such findings highlight the importance of a balance between grouping to suppress non-linear motions and preserving constraints to produce an infall onto structures expected to be large overdensities. Such an observation is promising for performing constrained simulations of the local Universe including its massive clusters.
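
To make the role of H0 concrete, here is a minimal Python sketch (not the paper's pipeline) of how radial peculiar velocities are derived from distance and total-velocity measurements before being handed to the Wiener Filter; the catalogue values and the simple linear estimator are assumptions chosen only for illustration.

    import numpy as np

    # Hypothetical catalogue: total radial velocities (km/s) and distances (Mpc)
    cz_tot = np.array([1500.0, 3200.0, 5400.0, 7100.0])
    dist   = np.array([  20.0,   45.0,   70.0,  100.0])

    for H0 in (72.0, 74.0, 76.0):          # km/s/Mpc, the range explored above
        v_pec = cz_tot - H0 * dist         # simple linear estimate of the radial peculiar velocity
        print(H0, v_pec.mean())            # larger H0 -> more negative mean v_pec, i.e. a larger
                                           # apparent infall before any bias minimization is applied
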
The Wiener Filter (WF) technique enables the reconstruction of density and velocity fields from observed radial peculiar velocities. This paper aims at identifying the optimal design of peculiar velocity surveys within the WF framework. The prime goal is to test the dependence of the quality of the reconstruction on the distribution and nature of data points. Mock datasets, extending to 250 Mpc/h, are drawn from a constrained simulation that mimics the local Universe to produce realistic mock catalogs. Reconstructed fields obtained with these mocks are compared to the reference simulation. Comparisons, including residual distributions, cell-to-cell and bulk velocities, imply that the presence of field data points is essential to properly measure the flows. The fields reconstructed from mocks that consist only of galaxy cluster data points exhibit poor-quality bulk velocities. In addition, the quality of the reconstruction depends strongly on the grouping of individual data points into single points to suppress virial motions in high-density regions. Conversely, the presence of a Zone of Avoidance hardly affects the reconstruction. For a given number of data points, a uniform sample does not score any better than a sample whose number of data points decreases with distance. The best reconstructions are obtained with a grouped survey containing field galaxies: assuming no error, they differ from the simulated field by less than 100 km/s up to the extreme edge of the catalogs, or up to a distance of three times the mean distance of data points for non-uniform catalogs. The overall conclusions hold when errors are added.
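
Two of the comparison metrics mentioned above, cell-to-cell residuals and bulk velocities, can be sketched as follows for hypothetical reconstructed and reference velocity grids; the grid shape, box size and velocity amplitudes are made up for illustration.

    import numpy as np

    def bulk_velocity(v, box_size, radius):
        """Mean 3-vector of all cells inside a sphere of the given radius (box units)."""
        n = v.shape[0]
        x = (np.arange(n) + 0.5) * box_size / n - box_size / 2
        X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
        return v[X**2 + Y**2 + Z**2 < radius**2].mean(axis=0)

    rng = np.random.default_rng(1)
    v_ref = 300.0 * rng.standard_normal((64, 64, 64, 3))          # stand-in for the reference simulation
    v_rec = v_ref + 80.0 * rng.standard_normal((64, 64, 64, 3))   # stand-in for a WF reconstruction

    residual = v_rec - v_ref                                      # cell-to-cell residual distribution
    print(residual.std(), bulk_velocity(v_rec, 500.0, 200.0), bulk_velocity(v_ref, 500.0, 200.0))
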
We describe the main features of a new and updated version of the program PArthENoPE, which computes the abundances of light elements produced during Big Bang Nucleosynthesis. As with the first release in 2008, the new one, PArthENoPE 2.0, will soon be publicly available and distributed from the code site, http://parthenope.na.infn.it. Apart from minor changes, which will also be detailed, the main improvements are as follows. The powerful, but not freely accessible, NAG routines have been replaced by ODEPACK libraries, without any significant loss in precision. Moreover, we have developed a Graphical User Interface (GUI) which allows friendlier use of the code and simpler runs over grids of input parameters. Finally, we report the results of PArthENoPE 2.0 for a minimal BBN scenario with free radiation energy density.
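
Since the NAG solvers were swapped for ODEPACK, the same solver family can be called from SciPy via method="LSODA"; the toy two-species network below is only an assumption used to show the call pattern, not the actual BBN reaction network solved by PArthENoPE.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, rate=0.1):
        # toy network: species A decays into species B at a constant rate
        a, b = y
        return [-rate * a, rate * a]

    # LSODA (from ODEPACK) switches automatically between stiff and non-stiff methods
    sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=[1.0, 0.0],
                    method="LSODA", rtol=1e-8, atol=1e-12)
    print(sol.y[:, -1])          # final "abundances" of the two toy species
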
We review and compare two different CMB dipole estimators discussed in the literature, and assess their performances through Monte Carlo simulations. The first method amounts to simple template regression with partial sky data, while the second method is an optimal Wiener filter (or Gibbs sampling) implementation. The main difference between the two methods is that the latter approach takes into account correlations with higher-order CMB temperature fluctuations that arise from non-orthogonal spherical harmonics on an incomplete sky, which for recent CMB data sets (such as Planck) is the dominant source of uncertainty. For an accepted sky fraction of 81% and an angular CMB power spectrum corresponding to the best-fit Planck 2018 $\Lambda$CDM model, we find that the uncertainty on the recovered dipole amplitude is about six times smaller for the Wiener filter approach than for the template approach, corresponding to 0.5 and 3 $\mu$K, respectively. Similar relative differences are found for the corresponding directional parameters and other sky fractions. We note that the Wiener filter algorithm is generally applicable to any dipole estimation problem on an incomplete sky, as long as a statistical and computationally tractable model is available for the unmasked higher-order fluctuations. The methodology described in this paper forms the numerical basis for the most recent determination of the CMB solar dipole from Planck, as summarized by arXiv:2007.04997.
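
A stripped-down version of the first (template-regression) estimator looks like the following; healpy, the NSIDE, the toy Galactic cut and the injected 3 microkelvin dipole are all assumptions for illustration, and the Wiener filter / Gibbs sampling approach is not reproduced here.

    import numpy as np
    import healpy as hp

    rng = np.random.default_rng(2)
    nside = 64
    npix = hp.nside2npix(nside)
    x, y, z = hp.pix2vec(nside, np.arange(npix))     # unit vectors of the pixel centres

    # hypothetical sky: 3 uK dipole along +z plus Gaussian "CMB + noise" scatter
    m = 3.0 * z + 40.0 * rng.standard_normal(npix)
    keep = np.abs(z) > 0.2                           # toy Galactic cut retaining ~80% of the sky

    # regress a monopole and three dipole templates on the unmasked pixels only;
    # this ignores the coupling to higher-order modes that the Wiener filter handles
    A = np.column_stack([np.ones(npix), x, y, z])[keep]
    coeffs, *_ = np.linalg.lstsq(A, m[keep], rcond=None)
    print(np.linalg.norm(coeffs[1:]))                # recovered dipole amplitude (uK)
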
The 21-cm intensity mapping (IM) of neutral hydrogen (HI) is a promising tool to probe the large-scale structure. Sky maps of 21-cm intensities can be highly contaminated by different foregrounds, such as Galactic synchrotron radiation, free-free emission, extragalactic point sources, and atmospheric noise. We here present a model of foreground components and a method of removal, especially to quantify the potential of the Five-hundred-meter Aperture Spherical radio Telescope (FAST) for measuring HI IM. We consider one year of observational time with a survey area of $20,000\,{\rm deg}^{2}$ to capture significant variations of the foregrounds across both sky position and angular scale relative to the HI signal. We first simulate the observational sky and then employ the Principal Component Analysis (PCA) foreground separation technique. We show that, including the different foregrounds as well as thermal and $1/f$ noise, the standard deviation between the reconstructed 21-cm IM map and the input pure 21-cm signal is $\Delta T = 0.034\,{\rm mK}$, which is well under control. The eigenmode-based analysis shows that the underlying HI eigenmode is at just less than the $1$ per cent level of the total sky components. By subtracting the PCA-cleaned foreground+noise map from the total map, we show that the PCA method can recover HI power spectra for FAST with high accuracy.
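
The PCA cleaning step described above can be sketched on a hypothetical (n_freq, n_pix) brightness-temperature cube as follows; the frequency range, foreground spectral index and amplitudes are assumptions chosen only so that the dominant eigenmodes are foreground-like.

    import numpy as np

    rng = np.random.default_rng(3)
    n_freq, n_pix = 64, 3072
    freqs = np.linspace(1100.0, 1400.0, n_freq)      # MHz, roughly the HI band probed by FAST

    # frequency-coherent synchrotron-like foreground, faint HI-like signal, thermal noise
    fg = 1e3 * (freqs[:, None] / 1250.0) ** -2.7 * (1.0 + 0.1 * rng.standard_normal((1, n_pix)))
    hi = 0.1 * rng.standard_normal((n_freq, n_pix))
    data = fg + hi + 0.05 * rng.standard_normal((n_freq, n_pix))

    # PCA: diagonalise the frequency-frequency covariance and project out the leading modes
    centred = data - data.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred))
    modes = eigvecs[:, -3:]                          # 3 dominant (foreground) eigenmodes
    cleaned = centred - modes @ (modes.T @ centred)
    print(cleaned.std(), hi.std())                   # residual amplitude vs input HI amplitude
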