Circulant preconditioners for functions of matrices have recently been of interest. In particular, several authors have proposed the use of optimal circulant preconditioners as well as superoptimal circulant preconditioners in this context and have numerically illustrated that such preconditioners are effective for certain functions of Toeplitz matrices. Motivated by their results, we propose in this work the absolute value superoptimal circulant preconditioners and provide several theorems that analytically show the effectiveness of such circulant preconditioners for systems defined by functions of Toeplitz matrices. Namely, we show that the eigenvalues of the preconditioned matrices are clustered around $\pm 1$, so rapid convergence of Krylov subspace methods can be expected. Moreover, we show that our results extend to functions of block Toeplitz matrices with Toeplitz blocks, provided that optimal block circulant matrices with circulant blocks are used as preconditioners. Numerical examples are given to support our theoretical results.
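As a rough, self-contained illustration of the kind of construction involved (not the superoptimal variant analyzed above), the Python sketch below builds T. Chan's optimal circulant approximation $c(A_n)$ of a symmetric Toeplitz matrix $A_n$, forms the absolute value circulant $|g(c(A_n))|$ via the FFT, and reports how the spectrum of the preconditioned matrix clusters. The symbol, the size $n$, and the choice $g=\cos$ with the spectrum of $A_n$ kept away from the zeros of $g$ are ad hoc assumptions, so in this example the cluster sits at $-1$.

    import numpy as np
    from scipy.linalg import toeplitz, circulant, eigh, eigvalsh

    # Symmetric Toeplitz matrix with exponentially decaying first column, chosen
    # (arbitrarily) so that the spectrum lies in (pi - 0.6, pi + 0.6), where cos < 0.
    n = 256
    t = np.r_[np.pi, 0.3 * 0.5 ** np.arange(1, n)]
    A = toeplitz(t)
    g = np.cos                                  # g(A_n) is then negative definite

    # First column of T. Chan's optimal circulant approximation c(A_n).
    k = np.arange(n)
    c = ((n - k) * t + k * np.r_[t[0], t[:0:-1]]) / n
    lam = np.real(np.fft.fft(c))                # eigenvalues of c(A_n)

    # Absolute value circulant |g(c(A_n))|: same Fourier eigenvectors as c(A_n),
    # eigenvalues |g(lam_k)|, hence symmetric positive definite.
    P = circulant(np.real(np.fft.ifft(np.abs(g(lam)))))

    # Dense reference for g(A_n) and the spectrum of the preconditioned matrix.
    w, Q = eigh(A)
    gA = (Q * g(w)) @ Q.T
    ev = eigvalsh(gA, P)                        # eigenvalues of |g(c(A_n))|^{-1} g(A_n)
    print("fraction within 0.1 of -1:", np.mean(np.abs(ev + 1) < 0.1))
    print("extreme preconditioned eigenvalues:", ev.min(), ev.max())

In practice such a preconditioner is of course never assembled densely; it is applied in $\mathcal{O}(n\log n)$ operations through FFTs, as in the next sketch.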
We propose several circulant preconditioners for systems defined by functions $g$ of Toeplitz matrices $A_n$. In this paper we are interested in solving $g(A_n)\mathbf{x}=\mathbf{b}$ by the preconditioned conjugate gradient method or the preconditioned minimal residual method, namely in the cases where $g(z)$ is one of the functions $e^{z}$, $\sin{z}$, and $\cos{z}$. Numerical results are given to show the effectiveness of the proposed preconditioners.
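A minimal sketch of how such a preconditioner might be used, assuming $g(z)=e^{z}$ and taking the circulant $g(c(A_n))$, with $c(A_n)$ the optimal (T. Chan) circulant, as a stand-in for the preconditioners proposed above: since $e^{A_n}$ is symmetric positive definite, the preconditioned conjugate gradient method applies, and the preconditioner is inverted in $\mathcal{O}(n\log n)$ operations via the FFT. The matrix $g(A_n)$ is assembled densely here only to have a reference system to solve.

    import numpy as np
    from scipy.linalg import toeplitz, eigh
    from scipy.sparse.linalg import LinearOperator, cg

    # Hypothetical symmetric positive definite Toeplitz matrix A_n.
    n = 512
    t = np.r_[2.0, 1.0 / (1.0 + np.arange(1, n)) ** 2]
    A = toeplitz(t)

    # g(A_n) for g(z) = exp(z), assembled densely only as a reference system;
    # in practice g(A_n) would be applied to vectors without being formed.
    w, Q = eigh(A)
    gA = (Q * np.exp(w)) @ Q.T

    # Optimal (T. Chan) circulant approximation c(A_n) and its eigenvalues.
    k = np.arange(n)
    c = ((n - k) * t + k * np.r_[t[0], t[:0:-1]]) / n
    lam = np.real(np.fft.fft(c))

    def apply_Pinv(v):
        # Apply g(c(A_n))^{-1} in O(n log n) via the FFT diagonalization of circulants.
        return np.real(np.fft.ifft(np.fft.fft(np.ravel(v)) / np.exp(lam)))

    M = LinearOperator((n, n), matvec=apply_Pinv)
    b = np.ones(n)
    x, info = cg(gA, b, M=M)
    print(info, np.linalg.norm(gA @ x - b) / np.linalg.norm(b))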
We investigate the problem of approximating the matrix function $f(A)$ by $r(A)$, with $f$ a Markov function, $r$ a rational interpolant of $f$, and $A$ a symmetric Toeplitz matrix. In a first step, we obtain a new upper bound for the relative interpolation error $1-r/f$ on the spectral interval of $A$. By minimizing this upper bound over all interpolation points, we obtain a new, simple, and sharp a priori bound for the relative interpolation error. We then consider three different approaches for representing and computing the rational interpolant $r$. Theoretical and numerical evidence is given that each of these methods achieves high precision for a scalar argument, even in finite precision arithmetic. We finally investigate the problem of efficiently evaluating $r(A)$, where it turns out that the relative error for a matrix argument is only small if we use a partial fraction decomposition for $r$, following Antoulas and Mayo. An important role is played by a new stopping criterion which ensures that the degree of $r$ leading to a small error is found automatically, even in finite precision arithmetic.
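The following sketch illustrates only the last step, evaluating a rational approximant in partial fraction form at a matrix argument by solving shifted linear systems, $r(A)b=\sum_j w_j (A-p_j I)^{-1}b$. The poles and weights come from a simple quadrature of the Stieltjes integral for the Markov function $f(z)=z^{-1/2}$, not from the optimized interpolation points or the Antoulas-Mayo representation studied above; the matrix, the degree, and the function are ad hoc choices.

    import numpy as np
    from scipy.linalg import toeplitz, eigh

    # Hypothetical symmetric positive definite Toeplitz test matrix and vector.
    n = 200
    A = toeplitz(np.r_[2.0, 1.0 / (1.0 + np.arange(1, n)) ** 2])
    b = np.random.default_rng(0).standard_normal(n)

    # Markov function f(z) = z^{-1/2} = (2/pi) * int_0^inf dt / (t^2 + z).
    # Gauss-Legendre quadrature after t = tan(theta) yields a rational surrogate
    # in partial fraction form, r(z) = sum_j w_j / (z - p_j).
    m = 40
    theta, om = np.polynomial.legendre.leggauss(m)
    theta = (theta + 1.0) * (np.pi / 4.0)       # map [-1, 1] -> [0, pi/2]
    om = om * (np.pi / 4.0)
    poles = -np.tan(theta) ** 2
    weights = (2.0 / np.pi) * om / np.cos(theta) ** 2

    # Evaluate r(A) b by solving m shifted linear systems.
    rAb = np.zeros(n)
    for w_j, p_j in zip(weights, poles):
        rAb += w_j * np.linalg.solve(A - p_j * np.eye(n), b)

    # Reference value f(A) b from a dense eigendecomposition.
    w, Q = eigh(A)
    fAb = Q @ ((Q.T @ b) / np.sqrt(w))
    print(np.linalg.norm(rAb - fAb) / np.linalg.norm(fAb))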
We study means of geometric type of quasi-Toeplitz matrices, that is, semi-infinite matrices $A=(a_{i,j})_{i,j=1,2,\ldots}$ of the form $A=T(a)+E$, where $E$ represents a compact operator and $T(a)$ is a semi-infinite Toeplitz matrix associated with the function $a$, with Fourier series $\sum_{\ell=-\infty}^{\infty} a_\ell e^{\mathfrak{i}\ell t}$, in the sense that $(T(a))_{i,j}=a_{j-i}$. If $a$ is real valued and essentially bounded, then these matrices represent bounded self-adjoint operators on $\ell^2$. We consider the case where $a$ is a continuous function, in which quasi-Toeplitz matrices coincide with a classical Toeplitz algebra, and the case where $a$ is in the Wiener algebra, that is, has an absolutely convergent Fourier series. We prove that if $a_1,\ldots,a_p$ are continuous and positive functions, or are in the Wiener algebra with some further conditions, then means of geometric type, such as the ALM, the NBMP, and the Karcher mean of quasi-Toeplitz positive definite matrices associated with $a_1,\ldots,a_p$, are quasi-Toeplitz matrices associated with the geometric mean $(a_1\cdots a_p)^{1/p}$ of the symbols, that is, they differ from $T((a_1\cdots a_p)^{1/p})$ only by a compact correction. We show by numerical tests that these operator means can be practically approximated.
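A small numerical experiment, on finite sections rather than the semi-infinite operators, can hint at this structure. In the hypothetical sketch below, the geometric mean of two symmetric positive definite Toeplitz sections with positive symbols $a_1$ and $a_2$ is compared with the Toeplitz matrix of the symbol geometric mean $\sqrt{a_1 a_2}$: rows far from the corner should nearly match, while the discrepancy (a finite-section shadow of the compact correction) concentrates near the corner. The symbols, the truncation size, and the two-matrix mean $A\# B=A^{1/2}(A^{-1/2}BA^{-1/2})^{1/2}A^{1/2}$ used here are illustrative choices only.

    import numpy as np
    from scipy.linalg import toeplitz, sqrtm

    # Two positive, even trigonometric polynomial symbols, so that the associated
    # finite Toeplitz sections are symmetric positive definite.
    n = 200
    a1 = lambda t: 3.0 + 2.0 * np.cos(t)         # T(a1): first column [3, 1, 0, ...]
    a2 = lambda t: 2.0 + np.cos(2.0 * t)         # T(a2): first column [2, 0, 0.5, 0, ...]
    T1 = toeplitz(np.r_[3.0, 1.0, np.zeros(n - 2)])
    T2 = toeplitz(np.r_[2.0, 0.0, 0.5, np.zeros(n - 3)])

    # Two-matrix geometric mean T1 # T2 = T1^{1/2} (T1^{-1/2} T2 T1^{-1/2})^{1/2} T1^{1/2}.
    R = sqrtm(T1)
    Rinv = np.linalg.inv(R)
    S = Rinv @ T2 @ Rinv
    G = R @ sqrtm((S + S.T) / 2) @ R
    G = np.real(G + G.T) / 2                     # symmetrize away rounding noise

    # Toeplitz matrix of the symbol geometric mean sqrt(a1 * a2); its Fourier
    # coefficients are computed by the FFT on a fine grid (the symbol is even).
    mgrid = 4096
    th = 2.0 * np.pi * np.arange(mgrid) / mgrid
    g_hat = np.real(np.fft.ifft(np.sqrt(a1(th) * a2(th))))
    Tg = toeplitz(g_hat[:n])

    # Rows far from the corner look Toeplitz with symbol sqrt(a1*a2), while the
    # discrepancy (finite-section analogue of the compact correction) sits near it.
    mid = n // 2
    print("middle-row difference:",
          np.max(np.abs(G[mid, mid - 5:mid + 6] - Tg[mid, mid - 5:mid + 6])))
    print("corner-block difference:", np.linalg.norm(G[:10, :10] - Tg[:10, :10]))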
We prove localization with high probability on sets of size of order $N/\log N$ for the eigenvectors of non-Hermitian finitely banded $N\times N$ Toeplitz matrices $P_N$ subject to small random perturbations, in a very general setting. As perturbation we consider $N\times N$ random matrices with independent entries of zero mean, finite moments, and which satisfy an appropriate anti-concentration bound. We show via a Grushin problem that an eigenvector for a given eigenvalue $z$ is well approximated by a random linear combination of the singular vectors of $P_N-z$ corresponding to its small singular values. We prove precise probabilistic bounds on the local distribution of the eigenvalues of the perturbed matrix and provide a detailed analysis of the singular vectors to conclude the localization result.
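A quick numerical illustration of the mechanism behind this result (not of the proof itself): for a finitely banded non-Hermitian Toeplitz matrix plus a tiny random perturbation, an eigenvector of the perturbed matrix is projected onto the span of the right singular vectors of $P_N-z$ associated with its smallest singular values. The band structure, the perturbation size $\delta$, and the number $k$ of retained singular vectors in the sketch are arbitrary assumptions.

    import numpy as np
    from scipy.linalg import toeplitz, svd, eig

    rng = np.random.default_rng(1)

    # Finitely banded non-Hermitian Toeplitz matrix P_N (one nonzero subdiagonal,
    # one nonzero second superdiagonal) plus a tiny random Gaussian perturbation.
    N = 300
    col = np.zeros(N)
    col[1] = 1.0
    row = np.zeros(N)
    row[2] = 0.5
    P = toeplitz(col, row)
    delta = 1e-10
    Q = P + delta * rng.standard_normal((N, N))

    # Pick one eigenpair (z, v) of the perturbed matrix.
    evals, evecs = eig(Q)
    j = int(np.argmax(np.abs(evals)))
    z, v = evals[j], evecs[:, j]

    # Right singular vectors of P_N - z for its k smallest singular values: the
    # eigenvector should be close to a linear combination of a few of them.
    U, s, Vh = svd(P - z * np.eye(N))            # s is returned in descending order
    k = 10
    V_small = Vh[-k:, :].conj().T
    proj = V_small @ (V_small.conj().T @ v)
    print("three smallest singular values:", s[-3:])
    print("relative distance of v from their span:",
          np.linalg.norm(v - proj) / np.linalg.norm(v))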
Many standard conversion matrices between coefficients in classical orthogonal polynomial expansions can be decomposed into diagonally scaled Hadamard products involving Toeplitz and Hankel matrices. This allows us to derive $\mathcal{O}(N(\log N)^2)$ algorithms, based on the fast Fourier transform, for converting the coefficients of a degree-$N$ polynomial in one polynomial basis to coefficients in another. Numerical results show that this approach is competitive with state-of-the-art techniques, requires no precomputation, can be implemented in a handful of lines of code, and is easily adapted to extended precision arithmetic.
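One FFT ingredient behind such fast conversions is the $\mathcal{O}(N\log N)$ Toeplitz matrix-vector product by circulant embedding, sketched below; the full Toeplitz-Hankel Hadamard-product decomposition and the resulting $\mathcal{O}(N(\log N)^2)$ conversion algorithm are not reproduced here.

    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_matvec(col, row, x):
        # Multiply the Toeplitz matrix with first column `col` and first row `row`
        # by `x` in O(N log N): embed it in a 2N-by-2N circulant and use the FFT.
        n = len(x)
        c = np.r_[col, 0.0, row[:0:-1]]          # first column of the embedding circulant
        y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.r_[x, np.zeros(n)]))
        return y[:n].real                        # real data assumed

    # Quick check against a dense Toeplitz matrix-vector product.
    rng = np.random.default_rng(0)
    N = 1000
    col = rng.standard_normal(N)
    row = rng.standard_normal(N)
    row[0] = col[0]                              # row and column share the diagonal entry
    x = rng.standard_normal(N)
    print(np.linalg.norm(toeplitz_matvec(col, row, x) - toeplitz(col, row) @ x))

A Hankel matrix-vector product reduces to the same primitive by reversing the input vector, which is why both factors in such Toeplitz-Hankel decompositions are amenable to FFT-based application.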