
Non-uniform sampled scalar diffraction calculation using non-uniform fast Fourier transform

Published by Dr. Tomoyoshi Shimobaba
Publication date: 2013
Research field: Physics
Paper language: English





Scalar diffraction calculations, such as the angular spectrum method (ASM) and Fresnel diffraction, are widely used in optics, X-ray, electron-beam, and ultrasonics research. These calculations can be accelerated using the fast Fourier transform (FFT); unfortunately, acceleration is limited for non-uniformly sampled planes because the FFT requires uniform sampling. Uniform sampling also wastes sample points when a plane contains regions of both low and high local spatial frequency. In this paper, we develop non-uniform sampled ASM and Fresnel diffraction based on the non-uniform FFT to address these problems.
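
For reference, the sketch below shows the uniform-sampling baseline the abstract starts from: angular spectrum propagation on a regular grid with an ordinary FFT, written in NumPy. The grid size, wavelength, and propagation distance are illustrative values only; the paper's non-uniform formulation would replace the forward or inverse FFT with a NUFFT so that sample points can be concentrated where the field varies rapidly, and that formulation is not reproduced here.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a uniformly sampled field u0 by distance z with the ASM.

    u0         : 2-D complex array, source field on an N x N grid
    wavelength : wavelength (same length unit as dx and z)
    dx         : uniform sampling pitch of the grid
    z          : propagation distance
    """
    n = u0.shape[0]
    k = 2.0 * np.pi / wavelength
    # Spatial-frequency grid matching the FFT ordering.
    fx = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(fx, fx, indexing="ij")
    # Free-space transfer function; evanescent components are suppressed.
    kz_sq = k**2 - (2 * np.pi * fx)**2 - (2 * np.pi * fy)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    h = np.exp(1j * kz * z) * (kz_sq > 0)
    # Uniform sampling lets a standard FFT perform the convolution.
    return np.fft.ifft2(np.fft.fft2(u0) * h)

# Illustrative usage: a 512 x 512 circular aperture propagated 5 cm at 633 nm.
n, dx, wl, z = 512, 10e-6, 633e-9, 0.05
y, x = np.mgrid[-n//2:n//2, -n//2:n//2] * dx
u0 = (np.hypot(x, y) < 0.5e-3).astype(complex)
u1 = angular_spectrum_propagate(u0, wl, dx, z)
```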




Read also

The bottleneck of micromagnetic simulations is the computation of the long-ranged magnetostatic fields. This can be tackled on regular N-node grids with Fast Fourier Transforms in time N log N, whereas the geometrically more versatile finite element methods (FEM) are bounded to N^{4/3} in the best case. We report the implementation of a Non-uniform Fast Fourier Transform algorithm which brings N log N scaling to FEM, with no loss of accuracy in the results.
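
The primitive that this and the other NUFFT-based works below rely on is the evaluation of Fourier sums at non-equispaced points. A brute-force type-1 transform (non-uniform points to uniform modes) is sketched in plain NumPy below; libraries such as FINUFFT or NFFT evaluate the same sums to a prescribed tolerance in roughly N log N time instead of the O(N*M) cost of the direct product. The sizes and the clustered sample points are illustrative.

```python
import numpy as np

def nudft_type1(x, c, n_modes):
    """Direct type-1 non-uniform DFT: f_k = sum_j c_j * exp(i k x_j).

    x       : sample locations in [-pi, pi), shape (M,)
    c       : complex strengths at those locations, shape (M,)
    n_modes : number of uniform output modes (assumed odd for symmetry)
    """
    k = np.arange(-(n_modes // 2), n_modes // 2 + 1)   # uniform output modes
    return np.exp(1j * np.outer(k, x)) @ c             # O(n_modes * M) cost

# Illustrative usage with clustered, non-uniform sample points.
rng = np.random.default_rng(0)
x = np.sort(np.pi * rng.random(200) ** 2 - np.pi / 2)   # denser near -pi/2
c = rng.standard_normal(200) + 1j * rng.standard_normal(200)
f = nudft_type1(x, c, 65)
```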
We implement and test kernel-averaging Non-Uniform Fast Fourier Transform (NUFFT) methods to enhance the performance of correlation and covariance estimation on asynchronously sampled event data using the Malliavin-Mancino Fourier estimator. The methods are benchmarked for Dirichlet and Fejér Fourier basis kernels. We consider test cases formed from Geometric Brownian motions to replicate synchronous and asynchronous data for benchmarking purposes. We consider three standard averaging kernels to convolve the event data for synchronisation via over-sampling for use with the Fast Fourier Transform (FFT): the Gaussian kernel, the Kaiser-Bessel kernel, and the exponential of semi-circle kernel. First, this allows us to demonstrate the performance of the estimator with different combinations of basis kernels and averaging kernels. Second, we investigate and compare the impact of the averaging scale explicit in each averaging kernel and its relationship to the time-scale averaging implicit in the Malliavin-Mancino estimator. Third, we relate time-scale averaging based on the number of Fourier coefficients used in the estimator to a theoretical model of the Epps effect. We briefly demonstrate the methods on Trade-and-Quote (TAQ) data from the Johannesburg Stock Exchange to provide an initial visualisation of the correlation dynamics at various time-scales under market microstructure.
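
As a rough illustration of the Fourier sums that the NUFFT accelerates in this setting, here is a minimal Dirichlet-kernel sketch of a Malliavin-Mancino style correlation estimate for two asynchronously observed price paths, using direct coefficient sums and with normalisation constants cancelling in the correlation ratio. The synthetic data and variable names are illustrative assumptions; this is not the benchmarked implementation.

```python
import numpy as np

def fourier_coeffs(times, prices, n_freq, horizon):
    """Fourier coefficients of log-price increments, c_k = sum_j dp_j e^{-i k t_j},
    with observation times rescaled to [0, 2*pi) over a common horizon."""
    t = 2 * np.pi * np.asarray(times, dtype=float) / horizon
    dp = np.diff(np.log(prices))
    k = np.arange(-n_freq, n_freq + 1)
    return np.exp(-1j * np.outer(k, t[:-1])) @ dp     # direct non-uniform sums

def mm_correlation(t1, p1, t2, p2, n_freq, horizon):
    """Correlation estimate from asynchronous observations (Dirichlet basis)."""
    c1 = fourier_coeffs(t1, p1, n_freq, horizon)
    c2 = fourier_coeffs(t2, p2, n_freq, horizon)
    cov = np.real(np.sum(c1 * np.conj(c2)))
    return cov / np.sqrt(np.sum(np.abs(c1)**2) * np.sum(np.abs(c2)**2))

# Illustrative synthetic example: two correlated GBM paths observed asynchronously.
rng = np.random.default_rng(1)
n = 2000
dz = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n) * 0.01
paths = 100 * np.exp(np.cumsum(dz, axis=0))
t1 = np.sort(rng.choice(n, 900, replace=False))   # asset 1 observation times
t2 = np.sort(rng.choice(n, 600, replace=False))   # asset 2 observation times
rho = mm_correlation(t1, paths[t1, 0], t2, paths[t2, 1], n_freq=100, horizon=n)
```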
Computer simulations of model systems are widely used to explore striking phenomena in promising applications spanning physics, chemistry, biology, materials science, and engineering. The long-range electrostatic interactions between charged particles constitute a prominent factor in determining the structures and states of model systems. How to efficiently calculate electrostatic interactions in model systems subject to partial or full periodic boundary conditions has been a grand challenge. In the past decades, a large variety of computational schemes have been proposed, among which the Ewald summation method is the most reliable route to accurately deal with electrostatic interactions in model systems. In addition, extensive effort has been devoted to improving the computational efficiency of Ewald-summation-based methods. Representative examples are approaches based on cutoffs, reaction fields, multipoles, multigrids, and particle-mesh schemes. We sketch the ENUF method, an abbreviation for the Ewald summation method based on the Non-Uniform fast Fourier transform technique, and have implemented it in particle-based simulation packages to calculate electrostatic energies and forces at micro- and mesoscopic levels. Extensive computational studies of the conformational properties of polyelectrolytes, dendrimer-membrane complexes, and ionic fluids demonstrate that the ENUF method and its derivatives conserve both energy and momentum to floating-point accuracy, and exhibit a computational complexity of $\mathcal{O}(N\log N)$ with optimal physical parameters. These ENUF-based methods are attractive alternatives in molecular simulations where high accuracy and efficiency are needed to accelerate calculations of electrostatic interactions at extended spatiotemporal scales.
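
For context on what ENUF accelerates, the sketch below computes a textbook Ewald energy for a small cubic periodic box of point charges with a direct reciprocal-space sum over the structure factor S(k); an ENUF-style implementation would instead evaluate S(k) at the required wave vectors with a NUFFT. Gaussian units (Coulomb constant of 1) and all parameter values are illustrative assumptions, not taken from the cited work.

```python
import numpy as np
from scipy.special import erfc

def ewald_energy(pos, q, box, alpha=1.0, r_cut=None, k_max=6):
    """Total electrostatic energy of point charges in a cubic periodic box
    (Gaussian units), split into real-space, reciprocal-space and self terms."""
    n = len(q)
    r_cut = r_cut if r_cut is not None else box / 2

    # Real-space sum with the minimum-image convention.
    e_real = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)
            r = np.linalg.norm(d)
            if r < r_cut:
                e_real += q[i] * q[j] * erfc(alpha * r) / r

    # Reciprocal-space sum over k = 2*pi*m/box; ENUF evaluates S(k) with a NUFFT.
    e_recip = 0.0
    m = np.arange(-k_max, k_max + 1)
    for mx in m:
        for my in m:
            for mz in m:
                if mx == my == mz == 0:
                    continue
                k_vec = 2 * np.pi * np.array([mx, my, mz]) / box
                k2 = k_vec @ k_vec
                s_k = np.sum(q * np.exp(1j * pos @ k_vec))   # structure factor
                e_recip += np.exp(-k2 / (4 * alpha**2)) / k2 * abs(s_k) ** 2
    e_recip *= 2 * np.pi / box**3

    e_self = -alpha / np.sqrt(np.pi) * np.sum(q**2)
    return e_real + e_recip + e_self

# Illustrative usage: a neutral pair of charges in a box of side 10.
pos = np.array([[1.0, 1.0, 1.0], [3.5, 1.0, 1.0]])
q = np.array([1.0, -1.0])
print(ewald_energy(pos, q, box=10.0, alpha=0.8))
```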
We consider the problem of finding the minimizer of a convex function $F: \mathbb{R}^d \rightarrow \mathbb{R}$ of the form $F(w) := \sum_{i=1}^n f_i(w) + R(w)$ where a low-rank factorization of $\nabla^2 f_i(w)$ is readily available. We consider the regime where $n \gg d$. As second-order methods prove to be effective in finding the minimizer to high precision, in this work we propose randomized Newton-type algorithms that exploit \textit{non-uniform} sub-sampling of $\{\nabla^2 f_i(w)\}_{i=1}^{n}$, as well as inexact updates, as means to reduce the computational complexity. Two non-uniform sampling distributions based on {\it block norm squares} and {\it block partial leverage scores} are considered in order to capture important terms among $\{\nabla^2 f_i(w)\}_{i=1}^{n}$. We show that at each iteration non-uniformly sampling at most $\mathcal{O}(d \log d)$ terms from $\{\nabla^2 f_i(w)\}_{i=1}^{n}$ is sufficient to achieve a linear-quadratic convergence rate in $w$ when a suitable initial point is provided. In addition, we show that our algorithms achieve a lower computational complexity and exhibit more robustness and better dependence on problem-specific quantities, such as the condition number, compared to similar existing methods, especially the ones based on uniform sampling. Finally, we empirically demonstrate that our methods are at least twice as fast as Newton's method with ridge logistic regression on several real datasets.
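
A minimal sketch of the general recipe described above, under the assumption of ridge-regularized logistic regression and a simple row-norm-squared importance score standing in for the block norm squares distribution, is given below; the names are hypothetical, and the inexact-update controls and leverage-score variant are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subsampled_newton_step(w, X, y, lam, s, rng):
    """One Newton-type step for ridge logistic regression, with the Hessian
    built from s non-uniformly sampled terms instead of all n of them."""
    n, d = X.shape
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n + lam * w               # full gradient

    # Importance scores: squared norm of each rank-1 Hessian factor,
    # i.e. p_i (1 - p_i) ||x_i||^2, a block-norm-squares style weighting.
    scores = p * (1 - p) * np.sum(X**2, axis=1)
    probs = scores / scores.sum()
    idx = rng.choice(n, size=s, replace=True, p=probs)

    # Reweighted sub-sampled Hessian: each term is scaled by 1 / (n * s * prob).
    weights = p[idx] * (1 - p[idx]) / (n * s * probs[idx])
    H = (X[idx].T * weights) @ X[idx] + lam * np.eye(d)
    return w - np.linalg.solve(H, grad)

# Illustrative usage on synthetic data with n >> d.
rng = np.random.default_rng(0)
n, d = 20000, 20
X = rng.standard_normal((n, d))
y = (sigmoid(X @ rng.standard_normal(d)) > rng.random(n)).astype(float)
w = np.zeros(d)
for _ in range(10):
    w = subsampled_newton_step(w, X, y, lam=1e-3, s=int(5 * d * np.log(d)), rng=rng)
```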
The Gridding algorithm has shown great utility for reconstructing images from non-uniformly spaced samples in the Fourier domain in several imaging modalities. Due to the non-uniform spacing, some correction for the variable density of the samples must be made. Existing methods for generating density compensation values are either sub-optimal or only consider a finite set of points (a set of measure 0) in the optimization. This manuscript presents the first density compensation algorithm for a general trajectory that takes into account the point spread function over a set of non-zero measure. We show that the images reconstructed with Gridding using the density compensation values of this method are of superior quality when compared to density compensation weights determined in other ways. Results are shown with a numerical phantom and with magnetic resonance images of the abdomen and the knee.
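
For contrast with the optimization-based weights proposed here, the sketch below implements a widely used iterative density-compensation scheme in the spirit of Pipe and Menon, where the weights are repeatedly divided by their convolution with the gridding kernel at the sample locations. It is written as a dense O(M^2) computation for clarity; the Gaussian kernel width and the radial trajectory are illustrative, and this is not the manuscript's algorithm.

```python
import numpy as np

def iterative_density_compensation(kx, ky, sigma=0.02, n_iter=15):
    """Iteratively estimate density-compensation weights for non-uniform
    k-space samples: w <- w / (w convolved with the kernel at each sample)."""
    # Pairwise squared distances between samples and a Gaussian gridding kernel.
    d2 = (kx[:, None] - kx[None, :])**2 + (ky[:, None] - ky[None, :])**2
    kernel = np.exp(-d2 / (2 * sigma**2))
    w = np.ones(len(kx))
    for _ in range(n_iter):
        w = w / (kernel @ w)          # densely sampled regions get smaller weights
    return w / w.max()

# Illustrative usage: a radial trajectory, densely sampled near the k-space centre.
spokes, samples = 32, 64
theta = np.linspace(0, np.pi, spokes, endpoint=False)
r = np.linspace(-0.5, 0.5, samples)
kx = (r[None, :] * np.cos(theta)[:, None]).ravel()
ky = (r[None, :] * np.sin(theta)[:, None]).ravel()
w = iterative_density_compensation(kx, ky)
# w is small near the oversampled centre and larger toward the edge of k-space.
```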