In this paper, we study how to efficiently compute the $<$-minimal monomial interpolating basis for a multivariate polynomial interpolation problem. We introduce the notion of a reverse reduced basis of a set of linearly independent polynomials and design an algorithm to compute it. Based on this notion, for any monomial ordering $<$ we present a new method to read off the $<$-minimal monomial interpolating basis from the monomials appearing in the polynomials that represent the interpolation conditions.
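To make the notion of a $<$-minimal monomial interpolating basis concrete, the sketch below greedily scans monomials in increasing graded lexicographic order and keeps those whose evaluation vectors at the nodes are linearly independent of the previously kept ones. This is a standard rank-based illustration for point-evaluation conditions, not the reverse-reduced-basis method of the paper; the node set and all function names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): greedy rank-based selection of the
# <-minimal monomial basis for Lagrange-type (point evaluation) conditions.
import itertools
import numpy as np

def grlex_monomials(num_vars, max_deg):
    """Yield exponent vectors in increasing graded lexicographic order."""
    for d in range(max_deg + 1):
        for expo in sorted(e for e in itertools.product(range(d + 1), repeat=num_vars)
                           if sum(e) == d):
            yield expo

def minimal_monomial_basis(points, tol=1e-10):
    """Keep a monomial iff its evaluation vector is independent of smaller monomials'."""
    points = np.asarray(points, dtype=float)
    n_pts = points.shape[0]
    basis, columns = [], np.zeros((n_pts, 0))
    for expo in grlex_monomials(points.shape[1], max_deg=n_pts):
        col = np.prod(points ** np.array(expo), axis=1, keepdims=True)
        candidate = np.hstack([columns, col])
        if np.linalg.matrix_rank(candidate, tol=tol) > columns.shape[1]:
            basis.append(expo)
            columns = candidate
        if len(basis) == n_pts:       # one basis monomial per interpolation condition
            break
    return basis

# Three nodes in the plane yield the basis {1, x2, x1}, printed as exponent vectors.
print(minimal_monomial_basis([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]))
```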
For $m,n \in \mathbb{N}$, $m \geq 1$, and a given function $f : \mathbb{R}^m \longrightarrow \mathbb{R}$, the polynomial interpolation problem (PIP) is to determine a \emph{generic node set} $P \subseteq \mathbb{R}^m$ and the coefficients of the uniquely defined polynomial $Q \in \mathbb{R}[x_1,\dots,x_m]$ in $m$ variables of degree $\deg(Q) \leq n$ that fits $f$ on $P$, i.e., $Q(p) = f(p)$, $\forall\, p \in P$. Here we show that in general, i.e., for arbitrary $m,n \in \mathbb{N}$, $m \geq 1$, there exists an algorithm that determines $P$ and computes the $N(m,n) = \#P$ coefficients of $Q$ in $\mathcal{O}\big(N(m,n)^2\big)$ time using $\mathcal{O}\big(m\,N(m,n)\big)$ storage, without inverting the occurring Vandermonde matrix. We provide such an algorithm, termed PIP-SOLVER, based on a recursive decomposition of the problem, and prove its correctness. Since the present approach solves the PIP without matrix inversion, it is computationally more efficient and numerically more robust than previous approaches. We demonstrate this in numerical experiments and compare with previous approaches based on matrix inversion and linear system solving.
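As a point of reference, the sketch below assembles the multivariate Vandermonde system of size $N(m,n) = \binom{m+n}{n}$ at random (generically unisolvent) nodes and solves it densely, i.e., the $\mathcal{O}\big(N(m,n)^3\big)$ baseline that the PIP-SOLVER is designed to avoid. It is not the PIP-SOLVER itself; the choice of nodes and all names are illustrative assumptions.

```python
# Illustrative baseline (not the PIP-SOLVER): dense multivariate Vandermonde solve.
import itertools
from math import comb
import numpy as np

def total_degree_monomials(m, n):
    """All exponent vectors in m variables of total degree at most n."""
    return [e for e in itertools.product(range(n + 1), repeat=m) if sum(e) <= n]

def naive_pip(f, m, n, rng=np.random.default_rng(0)):
    """Fit Q with deg(Q) <= n to f on N(m,n) random nodes via a dense solve."""
    exponents = total_degree_monomials(m, n)
    N = comb(m + n, n)                      # N(m,n) = number of coefficients/nodes
    assert len(exponents) == N
    P = rng.standard_normal((N, m))         # random nodes are generically unisolvent
    V = np.stack([np.prod(P ** np.array(e), axis=1) for e in exponents], axis=1)
    coeffs = np.linalg.solve(V, f(P))       # O(N^3); the PIP-SOLVER targets O(N^2)
    return P, exponents, coeffs

# Example: recover the coefficients of a quadratic in two variables.
P, expos, c = naive_pip(lambda X: 1 + 2 * X[:, 0] + 3 * X[:, 0] * X[:, 1], m=2, n=2)
print(dict(zip(expos, np.round(c, 6))))
```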
The algebraic characterization of dual univariate interpolating subdivision schemes is investigated. Specifically, we provide a constructive approach for finding dual univariate interpolating subdivision schemes based on the solutions of certain associated polynomial equations. The proposed approach also makes it possible to identify conditions for the existence of the sought schemes.
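For readers unfamiliar with subdivision notation, the sketch below applies one generic refinement step $f_{k+1}[i] = \sum_j a[i-2j]\, f_k[j]$ using the classical Chaikin corner-cutting mask, a well-known dual (though not interpolating) scheme. It is included only to illustrate how a mask acts on data and is not one of the schemes constructed in the paper.

```python
# Illustrative sketch: one generic subdivision step with the Chaikin (dual) mask.
import numpy as np

def subdivide(f, mask):
    """One refinement step f_out[2j + k] += mask[k] * f[j], zero-padded at the ends."""
    f, mask = np.asarray(f, float), np.asarray(mask, float)
    out = np.zeros(2 * len(f) + len(mask))
    for j, fj in enumerate(f):             # scatter each coarse value through the mask
        out[2 * j : 2 * j + len(mask)] += mask * fj
    return out

coarse = [0.0, 0.0, 1.0, 0.0, 0.0]         # a single unit impulse
print(subdivide(coarse, mask=[0.25, 0.75, 0.75, 0.25]))
```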
Polynomial preconditioning with the GMRES minimal residual polynomial has the potential to greatly reduce orthogonalization costs, making it useful for communication reduction. We implement polynomial preconditioning in the Belos package from Trilinos and show that it can be effective in both serial and parallel settings. We further show that it is a communication-avoiding technique and a viable alternative to CA-GMRES for large-scale parallel computing.
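The following sketch, which is not the Belos/Trilinos implementation, illustrates the idea with SciPy: a low-degree minimum residual polynomial $p$ is fitted by least squares in the Krylov power basis and $p(A)$ is then supplied to GMRES as a preconditioner. The power basis is numerically fragile for larger degrees (production codes use better-conditioned formulations), and the test matrix, degree, and names are illustrative assumptions.

```python
# Illustrative sketch (not the Belos implementation): polynomial preconditioning of
# GMRES with a least-squares minimum residual polynomial in the power basis.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def minres_poly_coeffs(A, b, degree):
    """Coefficients c of p(x) = c[0] + c[1] x + ... minimizing ||v - A p(A) v||."""
    v = b / np.linalg.norm(b)
    powers = [v]
    for _ in range(degree):
        powers.append(A @ powers[-1])      # v, A v, ..., A^degree v
    K = np.column_stack([A @ w for w in powers])
    c, *_ = np.linalg.lstsq(K, v, rcond=None)
    return c

n = 500
A = sp.diags([np.linspace(1.0, 100.0, n)], [0], format="csr")  # simple SPD test matrix
b = np.ones(n)
c = minres_poly_coeffs(A, b, degree=5)

def apply_p(v):                            # evaluate p(A) v by Horner's rule
    y = c[-1] * v
    for ci in reversed(c[:-1]):
        y = A @ y + ci * v
    return y

M = spla.LinearOperator(A.shape, matvec=apply_p)
x, info = spla.gmres(A, b, M=M)            # p(A) acts as the preconditioner
print("info =", info, " residual =", np.linalg.norm(b - A @ x))
```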
We introduce intrinsic interpolatory bases for data structured on graphs and derive properties of these bases. Polyharmonic Lagrange functions are shown to decay exponentially away from their centers. The decay rate depends on the density of the zeros of the Lagrange function, showing that these functions scale with the density of the data. These results indicate that Lagrange-type bases are ideal building blocks for analyzing data on graphs, and we illustrate their use in kernel-based machine learning applications.
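As a toy illustration of such a basis function (and not the paper's construction), the sketch below builds a polyharmonic kernel from the Laplacian of a path graph, forms the Lagrange function centered at one data vertex, and prints its magnitude at increasing distances from the center. The graph, the polyharmonic order $s$, and the data-site spacing are illustrative assumptions.

```python
# Illustrative sketch (not the paper's construction): a polyharmonic Lagrange function
# on a path graph, evaluated at increasing distance from its center.
import numpy as np

n = 200                                    # path graph on n vertices, Laplacian L = D - A
A = np.zeros((n, n))
i = np.arange(n - 1)
A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

s = 2                                      # assumed polyharmonic order
K = np.linalg.pinv(np.linalg.matrix_power(L, s))   # discrete polyharmonic kernel

centers = np.arange(0, n, 10)              # assumed data sites X (every 10th vertex)
m, j = len(centers), len(centers) // 2     # build the Lagrange function for center j

# chi = K[:, X] c + d with chi(X) = e_j and sum(c) = 0; a constant term is appended
# because the kernel is only conditionally positive definite.
B = np.block([[K[np.ix_(centers, centers)], np.ones((m, 1))],
              [np.ones((1, m)),             np.zeros((1, 1))]])
rhs = np.zeros(m + 1)
rhs[j] = 1.0
sol = np.linalg.solve(B, rhs)
chi = K[:, centers] @ sol[:m] + sol[m]     # Lagrange function on every vertex

for dist in (0, 5, 15, 35, 75):            # magnitude decays away from the center
    print(f"distance {dist:2d}: |chi| = {abs(chi[centers[j] + dist]):.2e}")
```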
We present an optimized GPU algorithm for computing the determinant of a multivariate polynomial matrix. The algorithm returns the exact determinant of the input matrix in predictable time. Our approach is based on modular methods and splits the computation into a Fast Fourier Transform stage, a condensation stage, and a Chinese Remainder Theorem stage, each of which is parallelized on the GPU. Experimental results show that our parallel method achieves substantial speedups over Maple, with memory overhead and running time growing steadily with problem size. We are also able to handle large matrices that exceed Maple's limits and are impractical on a CPU. In addition, the computation can be resumed at any point after an interruption without loss of accuracy. Furthermore, we propose a running-time prediction for the polynomial determinant computation based on basic matrix attributes, and we solve an open problem concerning harmonic elimination equations using our GPU implementation.
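The sketch below shows only the modular/CRT skeleton on the CPU for an integer matrix: determinants are computed modulo several word-size primes by Gaussian elimination over $\mathbb{Z}_p$ and recombined via the Chinese Remainder Theorem using a Hadamard bound. The paper's method additionally uses FFT-based evaluation/interpolation to reduce multivariate polynomial entries to numeric determinants and parallelizes every stage on the GPU; all names here are illustrative.

```python
# Illustrative CPU sketch of the modular/CRT skeleton only (no FFT stage, no GPU).
import math
import random
from sympy import nextprime

def det_mod_p(M, p):
    """Determinant of an integer matrix modulo a prime p (Gaussian elimination)."""
    M = [[x % p for x in row] for row in M]
    n, det = len(M), 1
    for k in range(n):
        pivot = next((i for i in range(k, n) if M[i][k]), None)
        if pivot is None:
            return 0
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            det = -det
        det = det * M[k][k] % p
        inv = pow(M[k][k], -1, p)
        for i in range(k + 1, n):
            f = M[i][k] * inv % p
            for j in range(k, n):
                M[i][j] = (M[i][j] - f * M[k][j]) % p
    return det % p

def det_via_crt(M):
    """Exact determinant: residues modulo enough primes, then CRT reconstruction."""
    hadamard = math.prod(math.isqrt(sum(x * x for x in row)) + 1 for row in M)
    bound = 2 * hadamard + 1               # need modulus > 2 * |det| (Hadamard bound)
    primes, prod, p = [], 1, 2**31
    while prod < bound:
        p = int(nextprime(p))
        primes.append(p)
        prod *= p
    residue, modulus = 0, 1
    for p in primes:
        r = det_mod_p(M, p)
        # incremental CRT: x = residue (mod modulus) and x = r (mod p)
        t = (r - residue) * pow(modulus, -1, p) % p
        residue, modulus = residue + modulus * t, modulus * p
    return residue if residue <= modulus // 2 else residue - modulus   # symmetric lift

random.seed(0)
M = [[random.randint(-9, 9) for _ in range(6)] for _ in range(6)]
print(det_via_crt(M))
```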