
Fast Iterative Tomographic Wave-front Estimation with Recursive Toeplitz Reconstructor Structure for Large Scale Systems

Posted by Yoshito Ono
Published: 2018
Research field: Physics
Language: English





Tomographic wave-front reconstruction is the main computational bottleneck to realizing real-time correction of turbulence-induced wave-front aberrations in future laser-assisted tomographic adaptive-optics (AO) systems for ground-based Giant Segmented Mirror Telescopes (GSMT), because of its unprecedented number of degrees of freedom, $N$, i.e. the number of measurements from wave-front sensors (WFS). In this paper, we provide an efficient implementation of minimum-mean-square-error (MMSE) tomographic wave-front reconstruction, mainly useful for the classes of AO systems that do not require multi-conjugation, such as laser-tomography AO (LTAO), multi-object AO (MOAO) and ground-layer AO (GLAO) systems, but also applicable to multi-conjugate AO (MCAO) systems. This work expands on that of R. Conan [Proc. SPIE, 9148, 91480R (2014)] to the multi-wave-front, tomographic case using natural and laser guide stars. The new implementation exploits the Toeplitz structure of the covariance matrices used in the MMSE reconstructor, which leads to an overall $O(N\log N)$ real-time complexity, compared to the $O(N^2)$ of the original implementation using straight vector-matrix multiplication. We show that the Toeplitz-based algorithm yields a 60 nm rms wave-front error improvement for the European Extremely Large Telescope Laser-Tomography AO system over a well-known sparse-based tomographic reconstructor, but the number of iterations required for suitable performance is still beyond what a real-time system can accommodate in order to keep up with time-varying turbulence.
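
The core speed-up is the classical circulant-embedding trick: a Toeplitz covariance block can be applied to a vector with FFTs instead of a dense matrix-vector product. Below is a minimal, hedged sketch of that building block in NumPy/SciPy (function and variable names are illustrative, not taken from the paper's code); the full reconstructor additionally handles the multi-WFS block structure and the iterative solver.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(first_col, first_row, x):
    """Apply the Toeplitz matrix T (given by its first column and first
    row) to x in O(N log N) via circulant embedding and the FFT."""
    n = len(x)
    # First column of a 2N x 2N circulant matrix whose top-left N x N
    # block equals T.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    # A circulant matrix is diagonalized by the DFT, so applying it is
    # an element-wise product in Fourier space (x is zero-padded to 2N).
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
    return y[:n].real

# Sanity check against the dense O(N^2) product.
rng = np.random.default_rng(0)
n = 512
col = rng.standard_normal(n)
row = np.concatenate([col[:1], rng.standard_normal(n - 1)])
x = rng.standard_normal(n)
assert np.allclose(toeplitz_matvec(col, row, x), toeplitz(col, row) @ x)
```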




Read also

In tomographic adaptive-optics (AO) systems, errors due to tomographic wave-front reconstruction limit the performance and the angular size of the scientific field of view (FoV) over which AO correction is effective. We propose a multi time-step tomographic wave-front reconstruction method that reduces the tomographic error by using the measurements from both the current and previous time steps simultaneously. We further outline a method to feed the reconstructor with the wind speed and direction of each turbulence layer. An end-to-end numerical simulation, assuming a multi-object AO (MOAO) system on a 30 m aperture telescope, shows that the multi time-step reconstruction increases the Strehl ratio (SR) over a scientific FoV of 10 arcminutes in diameter by a factor of 1.5--1.8 compared to the classical tomographic reconstructor, depending on the guide star asterism and assuming perfect knowledge of wind speeds and directions. We also evaluate the multi time-step reconstruction method and the wind estimation method on the RAVEN demonstrator under laboratory conditions. The wind speeds and directions at multiple atmospheric layers are measured successfully in the laboratory experiment by our wind estimation method, with errors below 2 m/s. With these wind estimates, the multi time-step reconstructor increases the SR value by a factor of 1.2--1.5, which is consistent with the prediction from the end-to-end numerical simulation.
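
To illustrate the idea, here is a hedged toy sketch (not the RAVEN implementation; all names and the 1-D geometry are assumptions) of how frozen-flow turbulence lets slopes from a previous time step constrain the current layer: the old measurement sees the same phase screen translated by the wind, so current and previous slopes can be stacked into one enlarged linear system.

```python
import numpy as np

def shift_operator(n, shift_pix):
    """Periodic translation of a 1-D phase vector by an integer number
    of pixels, standing in for the frozen-flow shift v * dt."""
    S = np.zeros((n, n))
    for i in range(n):
        S[i, (i + shift_pix) % n] = 1.0
    return S

n = 64
G = np.eye(n, k=1) - np.eye(n)        # crude finite-difference "slopes"
S = shift_operator(n, shift_pix=3)    # layer moved 3 pixels since t - dt

# Current slopes see phi; slopes from the previous step see the
# wind-shifted phase S @ phi. Stack both into one system.
A = np.vstack([G, G @ S])
rng = np.random.default_rng(1)
phi = rng.standard_normal(n)
s = A @ phi + 0.01 * rng.standard_normal(2 * n)

# Regularized least-squares solve using both time steps at once.
phi_hat = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n), A.T @ s)
```
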
The covariance matrix $\boldsymbol{\Sigma}$ of non-linear clustering statistics that are measured in current and upcoming surveys is of fundamental interest for comparing cosmological theory and data and a crucial ingredient for the likelihood approximations underlying widely used parameter inference and forecasting methods. The extreme number of simulations needed to estimate $\boldsymbol{\Sigma}$ to sufficient accuracy poses a severe challenge. Approximating $\boldsymbol{\Sigma}$ using inexpensive but biased surrogates introduces model error with respect to full simulations, especially in the non-linear regime of structure growth. To address this problem we develop a matrix generalization of Convergence Acceleration by Regression and Pooling (CARPool) to combine a small number of simulations with fast surrogates and obtain low-noise estimates of $\boldsymbol{\Sigma}$ that are unbiased by construction. Our numerical examples use CARPool to combine GADGET-III $N$-body simulations with fast surrogates computed using COmoving Lagrangian Acceleration (COLA). Even at the challenging redshift $z=0.5$, we find variance reductions of at least $\mathcal{O}(10^1)$ and up to $\mathcal{O}(10^4)$ for the elements of the matter power spectrum covariance matrix on scales $8.9\times 10^{-3} < k_\mathrm{max} < 1.0\,h\,\mathrm{Mpc}^{-1}$. We demonstrate comparable performance for the covariance of the matter bispectrum, the matter correlation function and the probability density function of the matter density field. We compare eigenvalues, likelihoods, and Fisher matrices computed using the CARPool covariance estimate with the standard sample covariance estimators and generally find considerable improvement, except in cases where $\boldsymbol{\Sigma}$ is severely ill-conditioned.
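
The underlying principle is that of control variates. A hedged, generic sketch follows (element-wise debiasing with a unit control coefficient; this is not the paper's exact matrix estimator, and all model choices below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_cheap, n_pairs = 4, 20000, 15
A = rng.standard_normal((d, d))

def surrogate(n):
    # Cheap correlated model with covariance A @ A.T (COLA stand-in).
    return rng.standard_normal((n, d)) @ A.T

def simulation(z):
    # "Expensive" run sharing its surrogate's phases, plus a small
    # extra component (stand-in for a full N-body simulation).
    return z + 0.2 * rng.standard_normal(z.shape)

# The surrogate covariance is known to high precision from many runs.
Sigma_c = np.cov(surrogate(n_cheap), rowvar=False)

# Only a handful of paired simulation/surrogate runs are available.
z = surrogate(n_pairs)
x = simulation(z)
S_x = np.cov(x, rowvar=False)
S_z = np.cov(z, rowvar=False)

# Control-variates estimate: unbiased by construction, since the noisy
# S_z and the precise Sigma_c estimate the same matrix, and low-noise
# because S_x and S_z are strongly correlated.
Sigma_hat = S_x - (S_z - Sigma_c)
```
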
Atmospheric tomography, i.e. the reconstruction of the turbulence profile in the atmosphere, is a challenging task for the adaptive optics (AO) systems of the next generation of extremely large telescopes. Within the AO community, the first-choice solver is the so-called Matrix Vector Multiplication (MVM), which directly applies the (regularized) generalized inverse of the system operator to the data. For small telescopes this approach is feasible; however, for larger systems such as the European Extremely Large Telescope (ELT), the atmospheric tomography problem is considerably more complex and computational efficiency becomes an issue. Iterative methods, such as the Finite Element Wavelet Hybrid Algorithm (FEWHA), are a promising alternative. FEWHA is a wavelet-based reconstructor that uses the well-known iterative preconditioned conjugate gradient (PCG) method as a solver. The number of floating-point operations and the memory usage are decreased significantly by using a matrix-free representation of the forward operator. A crucial indicator of real-time performance is the number of PCG iterations. In this paper, we propose an augmented version of FEWHA in which the number of iterations is decreased by 50% using a Krylov subspace recycling technique. We demonstrate that a parallel implementation of augmented FEWHA allows the fulfilment of the real-time requirements of the ELT.
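
For reference, here is a textbook preconditioned conjugate gradient loop of the kind FEWHA builds on. This generic sketch omits the wavelet discretization, the matrix-free operators and the Krylov subspace recycling that the paper adds; the Jacobi preconditioner and all names are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Solve A x = b for symmetric positive-definite A; M_inv applies
    the preconditioner. Returns the solution and the iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Example: random SPD system with a Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(3)
n = 200
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
b = rng.standard_normal(n)
x, iters = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```
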
The next generation of galaxy surveys, like the Dark Energy Spectroscopic Instrument (DESI) and Euclid, will provide datasets orders of magnitude larger than anything available to date. Our ability to model nonlinear effects in late-time matter perturbations will be key to unlocking the full potential of these datasets, and the area of initial condition reconstruction is attracting growing attention. Iterative reconstruction, developed in Ref. [1], is a technique designed to reconstruct the displacement field from the observed galaxy distribution. The nonlinear displacement field and the initial linear density field are highly correlated. Therefore, reconstructing the nonlinear displacement field enables us to extract the primordial cosmological information better than from the late-time density field at the level of the two-point statistics. This paper tests to what extent iterative reconstruction can recover the true displacement field and constructs a perturbation theory model for the post-reconstructed field. We model the iterative reconstruction process with Lagrangian perturbation theory (LPT) up to third order for dark matter in real space and compare it with $N$-body simulations. We find that the simulated iterative reconstruction does not converge to the nonlinear displacement field, and the discrepancy mainly appears in the shift term, i.e., the term correlated directly with the linear density field. On the contrary, our 3LPT model predicts that the iterative reconstruction should converge to the nonlinear displacement field. We discuss the sources of this discrepancy, including numerical noise/artifacts on small scales, and present an ad hoc phenomenological model that improves the agreement.
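
The elementary building block of such reconstruction schemes is the first-order (Zel'dovich) estimate of the displacement from a density grid, $\psi_{\mathbf{k}} = i\mathbf{k}\,\delta_{\mathbf{k}}/k^2$, applied repeatedly to the updated field. Below is a hedged sketch of that single step; the grid setup is an assumption, and the paper's full iterative pipeline involves considerably more (smoothing, particle moves, higher-order terms).

```python
import numpy as np

def zeldovich_displacement(delta, box_size):
    """First-order (Zel'dovich) displacement from a 3-D density
    contrast grid: psi_k = i k delta_k / k^2, evaluated with FFTs."""
    n = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                  # avoid 0/0 for the mean mode
    delta_k = np.fft.fftn(delta)
    psi = [np.fft.ifftn(1j * ki / k2 * delta_k).real
           for ki in (kx, ky, kz)]
    return np.stack(psi)               # shape (3, n, n, n)

# One displacement estimate on a toy density field.
rng = np.random.default_rng(4)
delta = rng.standard_normal((32, 32, 32))
delta -= delta.mean()
psi = zeldovich_displacement(delta, box_size=100.0)
```
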
We use a theoretical framework to analytically assess temporal prediction error functions for von Karman turbulence when a zonal representation of wave-fronts is assumed. The linear prediction models analysed include auto-regressive models of order up to three, bilinear interpolation functions and a minimum-mean-square-error predictor. This is an extension of the authors' previously published work (see Ref. 2), in which the efficacy of various temporal prediction models was established. Here we examine the tolerance of these algorithms to specific forms of model error, thus defining the expected change in behaviour of the previous results under less ideal conditions. Results show that ±100% wind-speed error and ±50° wind-direction error are tolerable before the best linear predictor delivers poorer performance than the no-prediction case.
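
As a concrete example of the simplest model class, here is a zonal AR(2) one-step-ahead predictor, applied independently per phase point. This is a hedged illustration, not the paper's analytic machinery; the coefficients below are chosen to be exact for a noise-free sinusoid rather than optimized for von Karman statistics.

```python
import numpy as np

def ar2_predict(phi_t, phi_tm1, a1, a2):
    """One-step-ahead AR(2) prediction, applied per zonal phase point."""
    return a1 * phi_t + a2 * phi_tm1

# Toy check on a sinusoid (a stand-in for a wind-blown phase screen):
# a noise-free sinusoid obeys x[t+1] = 2 cos(w dt) x[t] - x[t-1]
# exactly, so this AR(2) predictor recovers it to machine precision.
dt, f = 1e-3, 5.0
t = np.arange(1000) * dt
phi = np.sin(2 * np.pi * f * t)
a1 = 2 * np.cos(2 * np.pi * f * dt)
pred = ar2_predict(phi[1:-1], phi[:-2], a1, a2=-1.0)
assert np.allclose(pred, phi[2:])
```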