
Reduction of multivariate mixtures and its applications

Posted by: Gregory Beylkin
Publication date: 2018
Language: English





We consider fast deterministic algorithms to identify the best linearly independent terms in multivariate mixtures and use them to compute, up to a user-selected accuracy, an equivalent representation with fewer terms. One algorithm employs a pivoted Cholesky decomposition of the Gram matrix constructed from the terms of the mixture to select what we call skeleton terms, and the other uses orthogonalization for the same purpose. Importantly, the multivariate mixtures do not have to be a separated representation of a function. Both algorithms require $O(r^2 N + p(d) r N)$ operations, where $N$ is the initial number of terms in the multivariate mixture, $r$ is the number of selected linearly independent terms, and $p(d)$ is the cost of computing the inner product between two terms of a mixture in $d$ variables. For general Gaussian mixtures $p(d) \sim d^3$, since we need to diagonalize a $d \times d$ matrix, whereas for separated representations $p(d) \sim d$. Due to conditioning issues, the resulting accuracy is limited to about one half of the available significant digits for both algorithms. We also describe an alternative algorithm that is capable of achieving higher accuracy but is only applicable in low dimensions or to multivariate mixtures in separated form. We describe a number of initial applications of these algorithms to solve partial differential and integral equations and to address several problems in data science. For data science applications in high dimensions, we consider the kernel density estimation (KDE) approach for constructing a probability density function (PDF) of a cloud of points, a far-field kernel summation method, and the construction of equivalent sources for non-oscillatory kernels (used in both computational physics and data science) and, finally, show how to use the new algorithm to produce seeds for subdividing a cloud of points into groups.
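
As an illustration of the pivoted-Cholesky selection step, here is a minimal sketch (ours, not the authors' code) for a 1-D Gaussian mixture, where the Gram matrix has a closed form; the function names gram_gaussians_1d and skeleton_terms, the toy parameters, and the tolerance eps standing in for the user-selected accuracy are all assumptions made for this example.

```python
import numpy as np

def gram_gaussians_1d(a, b):
    """Gram matrix of 1-D Gaussian terms g_i(x) = exp(-a_i (x - b_i)^2).
    Closed form: <g_i, g_j> = sqrt(pi/(a_i+a_j)) * exp(-a_i a_j (b_i-b_j)^2/(a_i+a_j))."""
    s = a[:, None] + a[None, :]
    d2 = (b[:, None] - b[None, :]) ** 2
    return np.sqrt(np.pi / s) * np.exp(-(a[:, None] * a[None, :]) * d2 / s)

def skeleton_terms(G, eps):
    """Pivoted Cholesky of the Gram matrix G; returns the indices of the selected
    (skeleton) terms, stopping once the residual diagonal drops below eps**2
    relative to the largest diagonal entry."""
    G = G.copy()
    n = G.shape[0]
    perm = np.arange(n)
    d = np.diag(G).copy()
    L = np.zeros_like(G)
    tol = eps ** 2 * d.max()
    r = 0
    for k in range(n):
        p = k + np.argmax(d[k:])
        if d[p] <= tol:
            break
        # symmetric permutation: swap rows/columns k and p
        perm[[k, p]] = perm[[p, k]]
        d[[k, p]] = d[[p, k]]
        G[[k, p], :] = G[[p, k], :]
        G[:, [k, p]] = G[:, [p, k]]
        L[[k, p], :k] = L[[p, k], :k]
        # left-looking Cholesky step
        L[k, k] = np.sqrt(d[k])
        L[k + 1:, k] = (G[k + 1:, k] - L[k + 1:, :k] @ L[k, :k]) / L[k, k]
        d[k + 1:] -= L[k + 1:, k] ** 2
        r += 1
    return perm[:r]

# toy mixture: 40 Gaussians with clustered centers, hence much redundancy
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, 40)
b = np.round(rng.uniform(-1.0, 1.0, 40), 1)
G = gram_gaussians_1d(a, b)
idx = skeleton_terms(G, eps=1e-5)
print(f"kept {idx.size} of {G.shape[0]} terms")
```

In the full algorithm the coefficients of the discarded terms are then redistributed onto the skeleton terms (e.g., via a least-squares solve against the Gram matrix); that step is omitted in this sketch.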


Read also

We present a new adaptive method for electronic structure calculations based on novel fast algorithms for reduction of multivariate mixtures. In our calculations, spatial orbitals are maintained as Gaussian mixtures whose terms are selected in the process of solving equations. Using a fixed basis leads to the so-called basis error since orbitals may not lie entirely within the linear span of the basis. To avoid such an error, multiresolution bases are used in adaptive algorithms so that basis functions are selected from a fixed collection of functions, large enough to approximate solutions within any user-selected accuracy. Our new method achieves adaptivity without using a multiresolution basis. Instead, as a part of an iteration to solve nonlinear equations, our algorithm selects the best subset of linearly independent terms of a Gaussian mixture from a collection that is much larger than any possible basis since the locations and shapes of the Gaussian terms are not fixed in advance. Approximating an orbital within a given accuracy, our algorithm yields significantly fewer terms than methods using multiresolution bases. We demonstrate our approach by solving the Hartree-Fock equations for two diatomic molecules, HeH+ and LiH, matching the accuracy previously obtained using multiwavelet bases.
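
To see why reduction is needed inside such an iteration, the sketch below (ours, 1-D, with made-up parameters; the helper multiply_mixtures is hypothetical) multiplies two Gaussian mixtures term by term: the product of two Gaussian terms is again a Gaussian, so each nonlinear operation multiplies the number of terms, which a reduction step must then prune.

```python
import numpy as np

def multiply_mixtures(mix1, mix2):
    """Pointwise product of two 1-D Gaussian mixtures, each a list of tuples
    (coef, a, b) representing coef * exp(-a (x - b)^2).
    The product of two Gaussian terms is a single Gaussian term."""
    out = []
    for c1, a1, b1 in mix1:
        for c2, a2, b2 in mix2:
            a = a1 + a2
            b = (a1 * b1 + a2 * b2) / a
            c = c1 * c2 * np.exp(-a1 * a2 * (b1 - b2) ** 2 / a)
            out.append((c, a, b))
    return out

rng = np.random.default_rng(0)
mix = [(1.0, e, m) for e, m in zip(rng.uniform(0.5, 2, 20), rng.uniform(-1, 1, 20))]
sq = multiply_mixtures(mix, mix)   # e.g., squaring an orbital-like mixture
print(len(mix), "terms squared ->", len(sq), "terms before reduction")
```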
In this paper, we discuss multiscale methods for nonlinear problems. The main idea of these approaches is to use local constraints and solve problems in oversampled regions for constructing macroscopic equations. These techniques are intended for problems without scale separation and high contrast, which often occur in applications. For linear problems, the local solutions with constraints are used as basis functions. This technique is called Constraint Energy Minimizing Generalized Multiscale Finite Element Method (CEM-GMsFEM). GMsFEM identifies macroscopic quantities based on rigorous analysis. In corresponding upscaling methods, the multiscale basis functions are selected such that the degrees of freedom have physical meanings, such as averages of the solution on each continuum. This paper extends the linear concepts to nonlinear problems, where the local problems are nonlinear. The main concept consists of: (1) identifying macroscopic quantities; (2) constructing appropriate oversampled local problems with coarse-grid constraints; (3) formulating macroscopic equations. We consider two types of approaches. In the first approach, the solutions of local problems are used as basis functions (in a linear fashion) to solve nonlinear problems. This approach is simple to implement; however, it lacks the nonlinear interpolation, which we present in our second approach. In this approach, the local solutions are used as a nonlinear forward map from local averages (constraints) of the solution in the oversampling region. This local fine-grid solution is further used to formulate the coarse-grid problem. Both approaches are discussed on several examples and applied to single-phase and two-phase flow problems, which are challenging because of the convection-dominated nature of the concentration equation.
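
For orientation only, here is a heavily simplified 1-D caricature (ours, not CEM-GMsFEM) of the local-to-macroscopic pipeline: for -(kappa u')' = f the local cell problems reduce, in 1-D, to harmonic averaging of kappa, and the resulting effective coefficients define the coarse (macroscopic) equation. The oversampling, energy-minimizing constraints, and nonlinear interpolation of the paper are not reproduced here; all names and parameters below are assumptions for the example.

```python
import numpy as np

def solve_fd(kappa_mid, f, h):
    """Finite-difference solve of -(kappa u')' = f on (0, 1) with u = 0 at both ends.
    kappa_mid holds kappa at the n cell midpoints, f at the n-1 interior nodes."""
    n = len(kappa_mid)
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = (kappa_mid[i] + kappa_mid[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -kappa_mid[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -kappa_mid[i + 1] / h**2
    return np.linalg.solve(A, f)

Nf, Nc = 1024, 16                              # fine and coarse resolutions
hf, hc = 1.0 / Nf, 1.0 / Nc
xm = (np.arange(Nf) + 0.5) * hf                # fine cell midpoints
kappa = 1.0 + 0.9 * np.sin(2 * np.pi * xm / 0.013) ** 2   # rapidly oscillating coefficient

u_fine = solve_fd(kappa, np.ones(Nf - 1), hf)

# local problems on each coarse cell -> effective (upscaled) coefficients
blk = Nf // Nc
kappa_eff = np.array([1.0 / np.mean(1.0 / kappa[i * blk:(i + 1) * blk]) for i in range(Nc)])
u_coarse = solve_fd(kappa_eff, np.ones(Nc - 1), hc)

# compare fine and macroscopic solutions at the coarse nodes
print("max nodal difference:", np.abs(u_coarse - u_fine[blk - 1::blk][:Nc - 1]).max())
```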
In this article, we present an $O(N log N)$ rapidly convergent algorithm for the numerical approximation of the convolution integral with radially symmetric weakly singular kernels and compactly supported densities. To achieve the reduced computational complexity, we utilize the Fast Fourier Transform (FFT) on a uniform grid of size $N$ for approximating the convolution. To facilitate this and maintain the accuracy, we primarily rely on a periodic Fourier extension of the density with a suitably large period depending on the support of the density. The rate of convergence of the method increases with increasing smoothness of the periodic extension and, in fact, approximations exhibit super-algebraic convergence when the extension is infinitely differentiable. Furthermore, when the density has jump discontinuities, we utilize a certain Fourier smoothing technique to accelerate the convergence to achieve the quadratic rate in the overall approximation. Finally, we apply the integration scheme for the numerical solution of certain partial differential equations. Moreover, we apply the quadrature to obtain a fast and high-order Nyström solver for the solution of the Lippmann-Schwinger integral equation. We validate the performance of the proposed scheme in terms of accuracy as well as computational efficiency through a variety of numerical experiments.
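
A minimal sketch (ours) of the FFT-on-an-enlarged-period idea in 1-D, with a smooth Gaussian standing in for the radial kernel so the result can be checked against a closed form; the paper's handling of the weak singularity and the Fourier smoothing of jump discontinuities is not reproduced, and the period L and grid size N below are arbitrary choices for the example.

```python
import numpy as np

L, N = 16.0, 512                       # period chosen well beyond both supports
h = L / N
x = -L / 2 + h * np.arange(N)

a, b = 0.5, 0.7
f = np.exp(-x**2 / (2 * a**2))         # "density" (negligible outside the box)
K = np.exp(-x**2 / (2 * b**2))         # smooth stand-in for the radial kernel

# circular convolution on the enlarged period approximates the continuous convolution
conv = h * np.fft.ifft(np.fft.fft(np.fft.ifftshift(K)) * np.fft.fft(f)).real

# closed form: convolution of two Gaussians is again a Gaussian
exact = a * b * np.sqrt(2 * np.pi / (a**2 + b**2)) * np.exp(-x**2 / (2 * (a**2 + b**2)))
print("max error:", np.abs(conv - exact).max())   # spectral accuracy in this smooth case
```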
In this paper, we present a dimension reduction method to reduce the dimension of parameter space and state space and efficiently solve inverse problems. To this end, proper orthogonal decomposition (POD) and radial basis functions (RBF) are combined to represent the solution of the forward model in a variable-separated form. This POD-RBF method can be used to efficiently evaluate the model's output. A gradient regularization method is presented to solve the inverse problem with fast convergence. A generalized cross validation method is suggested to select the regularization parameter and the differential step size for the gradient computation. Because the regularization method requires many model evaluations, the efficiency of the POD-RBF method is particularly desirable. Thus, the POD-RBF method is integrated with the gradient regularization method to provide an efficient approach to solve inverse problems. We focus on the coefficient inversion of diffusion equations using the proposed approach. Based on different types of measurement data and different basis functions for coefficients, we present a few numerical examples for the coefficient inversion. The numerical results show that accurate reconstructions of the coefficient can be achieved efficiently.
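
The variable-separated surrogate can be sketched as follows (our toy example with a made-up one-parameter forward model, not the paper's diffusion setting): POD extracts a spatial basis from snapshots, and an RBF interpolant maps parameters to the reduced coefficients, so the surrogate has the form Phi @ c(mu).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def forward(mu, x):                     # hypothetical toy forward model, one parameter
    return np.exp(-mu * x) + mu * x**2

x = np.linspace(0.0, 1.0, 200)
mu_train = np.linspace(0.5, 2.0, 15)[:, None]
S = np.stack([forward(m[0], x) for m in mu_train], axis=1)   # 200 x 15 snapshot matrix

# POD: truncated SVD of the snapshot matrix gives the spatial basis Phi
U, s, Vt = np.linalg.svd(S, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1 - 1e-10) + 1
Phi = U[:, :r]

# RBF: interpolate the reduced coefficients c(mu) = Phi^T u(mu) over parameter space
C = S.T @ Phi                                                # 15 x r coefficient table
rbf = RBFInterpolator(mu_train, C)

# surrogate evaluation at a new parameter value
mu_new = np.array([[1.23]])
u_rom = Phi @ rbf(mu_new)[0]
print("surrogate error:", np.abs(u_rom - forward(1.23, x)).max())
```

In an inversion loop, the gradient with respect to mu would be formed by differencing such cheap surrogate evaluations instead of the full forward model.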
Low-rank approximations of original samples are playing an increasingly important role in many recently proposed mathematical models in data science. A natural and initial requirement is that these representations inherit original structures or properties. With this aim, we propose a new multi-symplectic method based on the Lanczos bidiagonalization to compute the partial singular triplets of JRS-symmetric matrices. These singular triplets can be used to reconstruct optimal low-rank approximations while preserving the intrinsic multi-symmetry. The augmented Ritz and harmonic Ritz vectors are used to perform implicit restarting to obtain a satisfactory bidiagonal matrix for calculating the $k$ largest or smallest singular triplets, respectively. We also apply the new multi-symplectic Lanczos algorithms to color face recognition and color video compression and reconstruction. Numerical experiments indicate their superiority over the state-of-the-art algorithms.
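
For orientation, a plain Lanczos-type partial SVD (sketched below with scipy.sparse.linalg.svds and a made-up test matrix) already yields the optimal rank-$k$ reconstruction; what the paper adds, and this sketch does not attempt, is preserving the JRS-symmetric (multi-symplectic) structure during the computation.

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(1)
# 400 x 300 test matrix with rapidly decaying singular values
A = (rng.standard_normal((400, 60)) * 2.0 ** -np.arange(60)) @ rng.standard_normal((60, 300))

k = 20
U, s, Vt = svds(A, k=k)                    # k largest singular triplets, Lanczos-type solver
order = np.argsort(s)[::-1]                # svds may return ascending order; sort descending
U, s, Vt = U[:, order], s[order], Vt[order]

A_k = U @ np.diag(s) @ Vt                  # best rank-k approximation in the Frobenius norm
print("relative error:", np.linalg.norm(A - A_k) / np.linalg.norm(A))
```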