
A kernel-independent sum-of-Gaussians method by de la Vallee-Poussin sums

Posted by: Zhenli Xu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Approximation of interacting kernels by a sum of Gaussians (SOG) is frequently required in many applications of scientific and engineering computing in order to construct efficient algorithms for kernel summation or convolution problems. In this paper, we propose a kernel-independent SOG method by introducing the de la Vallee-Poussin sum and Chebyshev polynomials. The SOG method works for general interacting kernels, and the lower bound of the Gaussian bandwidths is tunable, so the Gaussians can be summed efficiently by fast Gaussian algorithms. The number of Gaussians can be further reduced via model reduction based on balanced truncation with the square root method. Numerical results on accuracy and model-reduction efficiency show the attractive performance of the proposed method.
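To make the SOG idea concrete, here is a minimal sketch of fitting a kernel by a sum of Gaussians with fixed, geometrically spaced bandwidths and least-squares weights. This is an illustrative stand-in, not the de la Vallee-Poussin/Chebyshev construction of the paper; the test kernel, bandwidth grid, and sample points are arbitrary choices.

```python
# Illustrative least-squares sum-of-Gaussians fit (not the paper's construction).
import numpy as np

def sog_fit(kernel, r, bandwidths):
    """Fit weights w_j so that kernel(r) ~ sum_j w_j * exp(-(r / s_j)**2)."""
    A = np.exp(-(r[:, None] / bandwidths[None, :]) ** 2)   # Gaussian design matrix
    w, *_ = np.linalg.lstsq(A, kernel(r), rcond=None)
    return w

# Example: approximate the screened Coulomb kernel exp(-r)/r on [0.1, 10].
r = np.linspace(0.1, 10.0, 400)
s = np.geomspace(0.05, 20.0, 24)        # geometrically spaced, tunable bandwidths
w = sog_fit(lambda x: np.exp(-x) / x, r, s)
sog = np.exp(-(r[:, None] / s[None, :]) ** 2) @ w
print("max abs error:", np.max(np.abs(sog - np.exp(-r) / r)))
```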




Read also

We propose an accurate algorithm for a novel sum-of-exponentials (SOE) approximation of kernel functions, and develop a fast algorithm for convolution quadrature based on the SOE, which allows an order-$N$ calculation for $N$ time steps when approximating a continuous temporal convolution integral. The SOE method is constructed by combining the de la Vallee-Poussin sums, for a semi-analytical exponential expansion of a general kernel, with a model reduction technique that minimizes the number of exponentials under a given error tolerance. We employ the SOE expansion for the finite part of the splitting convolution kernel, so that the convolution integral can be solved as a system of ordinary differential equations thanks to the exponential kernels. The remaining part is approximated explicitly by the generalized Taylor expansion. The significant features of our algorithm are that the SOE method is efficient and accurate, and works for general kernels with a controllable upper bound on the positive exponents. We provide numerical analysis for the SOE-based convolution quadrature. Numerical results on different kernels, the convolution integral, and integral equations demonstrate the attractive accuracy and efficiency of the proposed method.
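The key point of the SOE representation is that each exponential mode obeys a scalar recurrence in time, so the convolution integral can be updated step by step in order-$N$ work. The sketch below illustrates this with a plain rectangle-rule update; it is not the paper's convolution quadrature, and the test kernel, weights, and step size are arbitrary.

```python
# Minimal recursive convolution with a sum-of-exponentials kernel (sketch only).
import numpy as np

def soe_convolution(f_vals, h, weights, exponents):
    """Approximate y(t_n) = int_0^{t_n} k(t_n - s) f(s) ds for
    k(t) = sum_j weights[j] * exp(-exponents[j] * t), in one pass over time."""
    modes = np.zeros_like(weights, dtype=float)
    y = np.empty_like(f_vals, dtype=float)
    for n, fn in enumerate(f_vals):
        # each mode decays by exp(-lambda_j * h) per step and absorbs the new input
        modes = modes * np.exp(-exponents * h) + h * fn
        y[n] = np.dot(weights, modes)
    return y

# Test: k(t) = exp(-t), f(t) = 1, exact result y(t) = 1 - exp(-t).
h, N = 0.01, 500
t = h * np.arange(1, N + 1)
y = soe_convolution(np.ones(N), h, np.array([1.0]), np.array([1.0]))
print("max abs error:", np.max(np.abs(y - (1.0 - np.exp(-t)))))
```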
There are plenty of applications and analyses of time-independent elliptic partial differential equations in the literature hinting at the benefits of overtesting, i.e., using more collocation conditions than basis functions. Overtesting not only reduces the problem size, but is also known to be necessary for the stability and convergence of widely used unsymmetric Kansa-type strong-form collocation methods. We consider kernel-based meshfree methods, namely a method of lines with spatial collocation and overtesting, for solving parabolic partial differential equations on surfaces without parametrization. In this paper, we extend the time-independent convergence theories for overtesting techniques to parabolic equations on smooth, closed surfaces.
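For readers unfamiliar with overtesting, the toy example below applies the idea in a plain function-recovery setting rather than the surface-PDE method of lines of the paper: collocation points outnumber the kernel centres, and the overdetermined system is solved in the least-squares sense. The kernel, its width, and the point counts are arbitrary choices.

```python
# Toy overtesting: 4x more collocation conditions than kernel centres, least squares.
import numpy as np

centres = np.linspace(0.0, 1.0, 12)        # 12 trial (basis) centres
colloc = np.linspace(0.0, 1.0, 48)         # 48 test (collocation) points
kernel = lambda x, z: np.exp(-((x[:, None] - z[None, :]) / 0.15) ** 2)

target = lambda x: np.sin(2.0 * np.pi * x)
A = kernel(colloc, centres)                # 48 x 12 overdetermined collocation matrix
coef, *_ = np.linalg.lstsq(A, target(colloc), rcond=None)

x_eval = np.linspace(0.0, 1.0, 400)
err = kernel(x_eval, centres) @ coef - target(x_eval)
print("max recovery error with overtesting:", np.max(np.abs(err)))
```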
We develop a new type of orthogonal polynomial, the modified discrete Laguerre (MDL) polynomials, designed to accelerate the computation of bosonic Matsubara sums in statistical physics. The MDL polynomials lead to a rapidly convergent Gaussian quadrature scheme for Matsubara sums, and more generally for any sum $F(0)/2 + F(h) + F(2h) + \cdots$ of exponentially decaying summands $F(nh) = f(nh)e^{-nhs}$ where $hs>0$. We demonstrate this technique for computation of finite-temperature Casimir forces arising from quantum field theory, where evaluation of the summand $F$ requires expensive electromagnetic simulations. A key advantage of our scheme, compared to previous methods, is that the convergence rate is nearly independent of the spacing $h$ (proportional to the thermodynamic temperature). We also prove convergence for any polynomially decaying $F$.
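As a rough illustration of what a Gaussian quadrature for such a sum means, the sketch below builds an 8-node rule for the discrete measure with weights $1/2, e^{-hs}, e^{-2hs}, \ldots$ at the points $0, h, 2h, \ldots$, using the generic Stieltjes procedure and Golub-Welsch eigenvalue method rather than the MDL polynomials of the paper; all parameters are arbitrary test values.

```python
# Generic discrete Gaussian quadrature (Stieltjes + Golub-Welsch), sketch only.
import numpy as np

def discrete_gauss_rule(x, w, num_nodes):
    """Nodes and weights of the Gauss rule for the measure sum_n w[n]*delta(x - x[n])."""
    alpha, beta = np.zeros(num_nodes), np.zeros(num_nodes)
    p_prev, p_curr = np.zeros_like(x), np.ones_like(x)     # monic orthogonal polynomials
    beta[0] = w.sum()
    for k in range(num_nodes):
        norm2 = np.sum(w * p_curr**2)
        alpha[k] = np.sum(w * x * p_curr**2) / norm2
        if k + 1 < num_nodes:
            p_next = (x - alpha[k]) * p_curr - (beta[k] if k else 0.0) * p_prev
            beta[k + 1] = np.sum(w * p_next**2) / norm2
            p_prev, p_curr = p_curr, p_next
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, beta[0] * vecs[0, :] ** 2

h, s, N = 0.1, 1.0, 400                                    # arbitrary test parameters
x = h * np.arange(N + 1)
w = np.exp(-s * x); w[0] *= 0.5                            # F(0)/2 + F(h) + F(2h) + ...
f = lambda t: 1.0 / (1.0 + t) ** 2                         # polynomially decaying part
nodes, weights = discrete_gauss_rule(x, w, 8)
print("direct 401-term sum:", np.sum(w * f(x)))
print("8-node quadrature  :", np.dot(weights, f(nodes)))
```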
We present a wavelet-based adaptive method for computing 3D multiscale flows in complex, time-dependent geometries, implemented on massively parallel computers. While our focus is on simulations of flapping insects, it can be used for other flow problems, including turbulence, as well. The incompressible fluid is modeled with an artificial compressibility approach in order to avoid solving elliptic problems. No-slip and in/outflow boundary conditions are imposed using volume penalization. The governing equations are discretized on a locally uniform Cartesian grid with centered finite differences and integrated in time with a Runge--Kutta scheme, both of 4th order. The domain is partitioned into cubic blocks with equidistant grids of different resolution and, for each block, biorthogonal interpolating wavelets are used as refinement indicators and prediction operators. Thresholding the wavelet coefficients generates dynamically evolving grids, and an adaptation strategy tracks the solution in both space and scale. Blocks are distributed among MPI processes, and the global topology of the grid is encoded using a tree-like data structure. Analyzing the different physical and numerical parameters allows their individual error contributions to be balanced, which ensures optimal convergence while minimizing computational effort. Various validation tests assess the accuracy and performance of our new open-source code, WABBIT (Wavelet Adaptive Block-Based solver for Interactions with Turbulence), on massively parallel computers using fully adaptive grids. Flow simulations of flapping insects demonstrate its applicability to complex, bio-inspired problems.
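A minimal 1D sketch of the thresholding step mentioned above, using second-order midpoint prediction as a stand-in for the biorthogonal interpolating wavelets in WABBIT: detail coefficients are the differences between fine-grid samples and their prediction from the coarse grid, and a block whose largest detail exceeds a threshold would be refined. The block size, threshold, and test fields are arbitrary.

```python
# 1D interpolating-wavelet details as a refinement indicator (toy example).
import numpy as np

def detail_coefficients(block):
    """Details at odd points: sample minus the average of its two even neighbours."""
    coarse = block[::2]
    return block[1::2] - 0.5 * (coarse[:-1] + coarse[1:])

def needs_refinement(block, eps=0.1):
    return np.max(np.abs(detail_coefficients(block))) > eps

x = np.linspace(0.0, 1.0, 17)                 # one block of 17 samples
smooth = np.sin(2.0 * np.pi * x)              # well-resolved field: small details
front = np.tanh(50.0 * (x - 0.5))             # sharp front: large details
print("smooth block:", np.max(np.abs(detail_coefficients(smooth))), needs_refinement(smooth))
print("front block :", np.max(np.abs(detail_coefficients(front))), needs_refinement(front))
```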
The moment-of-fluid (MOF) method is an extension of the volume-of-fluid method with piecewise linear interface construction (VOF-PLIC). By minimizing the least-squares error of the centroid of the cutting polyhedron, the MOF method reconstructs the linear interface without using any neighboring information. Traditional MOF involves iteration to find the optimized linear reconstruction. Here, we propose an alternative approach based on a machine learning algorithm, the Decision Tree. A training data set is generated from a list of random cuts of a unit cube by planes. The Decision Tree algorithm extracts the input-output relationship from the training data, so that the resulting function determines the normal vector of the reconstruction plane directly, without any iteration. The present method is tested on a range of popular interface advection test problems. Numerical results show that our approach is much faster than the iteration-based MOF method while providing accuracy comparable to the conventional MOF method.
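The sketch below conveys the flavour of that learning step, not the paper's implementation: training pairs mapping (volume fraction, centroid) to the plane normal are generated here by Monte Carlo sampling of random plane cuts of the unit cube instead of exact geometric cutting, and a multi-output decision tree then predicts the normal without iteration. Sample counts, tree depth, the seed, and the probe cut are arbitrary.

```python
# Hypothetical sketch: learn (volume fraction, centroid) -> interface normal.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
pts = rng.random((50_000, 3))                 # Monte Carlo points in the unit cube

def cut_features(normal, offset):
    """Volume fraction and centroid of the half-space {x : normal . x <= offset}."""
    inside = pts @ normal <= offset
    frac = inside.mean()
    if frac < 0.01 or frac > 0.99:            # skip degenerate cuts
        return None
    return np.concatenate(([frac], pts[inside].mean(axis=0)))

features, normals = [], []
for _ in range(3000):                         # random plane cuts through the cube
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    feat = cut_features(n, n @ rng.random(3)) # plane through a random interior point
    if feat is not None:
        features.append(feat)
        normals.append(n)

tree = DecisionTreeRegressor(max_depth=12).fit(np.array(features), np.array(normals))
probe = cut_features(np.array([1.0, 0.0, 0.0]), 0.3)   # cut by the plane x = 0.3
print("predicted normal for the x = 0.3 cut:", tree.predict([probe])[0])
```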