
Determinants and Inverses of Circulant Matrices with Jacobsthal and Jacobsthal-Lucas Numbers

Added by Durmuş Bozkurt
Publication date: 2012
Language: English





Let $n\geq 3$, and let $J_{n}:=\mathrm{circ}(J_{1},J_{2},\ldots,J_{n})$ and $j_{n}:=\mathrm{circ}(j_{0},j_{1},\ldots,j_{n-1})$ be the $n\times n$ circulant matrices associated with the $n$th Jacobsthal number $J_{n}$ and the $n$th Jacobsthal-Lucas number $j_{n}$, respectively. The determinants of $J_{n}$ and $j_{n}$ are obtained in terms of the Jacobsthal and Jacobsthal-Lucas numbers. These imply that $J_{n}$ and $j_{n}$ are invertible. We also derive the inverses of $J_{n}$ and $j_{n}$.
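
As a quick numerical illustration (ours, not part of the paper), the sketch below builds $\mathrm{circ}(J_{1},\ldots,J_{n})$ from the Jacobsthal recurrence $J_{0}=0$, $J_{1}=1$, $J_{k}=J_{k-1}+2J_{k-2}$ and checks that its determinant is nonzero for small $n$; the helper names `jacobsthal` and `circ` are ours.

```python
import numpy as np

def jacobsthal(n):
    """First n Jacobsthal numbers J_1..J_n, from J_0 = 0, J_1 = 1, J_k = J_{k-1} + 2*J_{k-2}."""
    seq = [0, 1]
    while len(seq) <= n:
        seq.append(seq[-1] + 2 * seq[-2])
    return seq[1:n + 1]

def circ(c):
    """Circulant matrix whose first row is c: entry (i, j) equals c[(j - i) mod n]."""
    n = len(c)
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)], dtype=float)

for n in range(3, 9):
    Jn = circ(jacobsthal(n))
    print(n, np.linalg.det(Jn))   # nonzero in each case, consistent with invertibility
```

The same check applies to $j_{n}$ once the seed values are replaced by the Jacobsthal-Lucas ones ($j_{0}=2$, $j_{1}=1$).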



Related research

Let $p\equiv 1\pmod 4$ be a prime. In this paper, with the help of Jacobsthal sums, we study some permutation problems involving biquadratic residues modulo $p$.
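
For readers who want to experiment, here is a tiny sketch (ours) that lists the biquadratic, i.e. fourth-power, residues modulo a prime $p\equiv 1\pmod 4$; for such $p$ they form a subgroup of index $4$ in $(\mathbb{Z}/p\mathbb{Z})^{\times}$.

```python
def biquadratic_residues(p):
    """Nonzero fourth-power residues modulo a prime p."""
    return sorted({pow(x, 4, p) for x in range(1, p)})

# p = 13 satisfies 13 ≡ 1 (mod 4); the (p - 1) / 4 = 3 residues are printed.
print(biquadratic_residues(13))   # [1, 3, 9]
```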
Sean Hon, 2018
Circulant preconditioners for functions of matrices have recently been of interest. In particular, several authors proposed the use of the optimal circulant preconditioners as well as the superoptimal circulant preconditioners in this context and numerically illustrated that such preconditioners are effective for certain functions of Toeplitz matrices. Motivated by their results, we propose in this work the absolute value superoptimal circulant preconditioners and provide several theorems that analytically show the effectiveness of such circulant preconditioners for systems defined by functions of Toeplitz matrices. Namely, we show that the eigenvalues of the preconditioned matrices are clustered around $\pm 1$ and rapid convergence of Krylov subspace methods can therefore be expected. Moreover, we show that our results can be extended to functions of block Toeplitz matrices with Toeplitz blocks, provided that the optimal block circulant matrices with circulant blocks are used as preconditioners. Numerical examples are given to support our theoretical results.
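
The basic ingredient here, the optimal (Frobenius-norm-closest) circulant approximation of a matrix, is easy to form: circulants are exactly the matrices diagonalized by the Fourier basis, so $c(A)=F^{*}\,\mathrm{diag}(FAF^{*})\,F$ for the unitary DFT matrix $F$. The sketch below (ours, not the authors' code for the absolute value superoptimal preconditioner) implements this and compares conditioning on a simple Toeplitz test matrix.

```python
import numpy as np

def dft(n):
    """Unitary DFT matrix F with F[j, k] = exp(-2*pi*i*j*k/n) / sqrt(n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

def optimal_circulant(A):
    """Frobenius-norm-closest circulant to A: c(A) = F* diag(F A F*) F."""
    F = dft(A.shape[0])
    lam = np.diag(F @ A @ F.conj().T)        # eigenvalues of the optimal circulant
    return F.conj().T @ np.diag(lam) @ F

n = 64
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # a diagonally dominant Toeplitz test matrix
C = optimal_circulant(A)
# Compare the conditioning of A with that of the preconditioned matrix C^{-1} A.
print(np.linalg.cond(A), np.linalg.cond(np.linalg.solve(C, A)))
```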
We give upper and lower bounds on the determinant of a perturbation of the identity matrix or, more generally, a perturbation of a nonsingular diagonal matrix. The matrices considered are, in general, diagonally dominant. The lower bounds are best possible, and in several cases they are stronger than well-known bounds due to Ostrowski and other authors. If $A = I-E$ is an $n \times n$ matrix and the elements of $E$ are bounded in absolute value by $\varepsilon \le 1/n$, then a lower bound of Ostrowski (1938) is $\det(A) \ge 1-n\varepsilon$. We show that if, in addition, the diagonal elements of $E$ are zero, then a best-possible lower bound is \[\det(A) \ge (1-(n-1)\varepsilon)\,(1+\varepsilon)^{n-1}.\] Corresponding upper bounds are, respectively, \[\det(A) \le (1 + 2\varepsilon + n\varepsilon^2)^{n/2}\] and \[\det(A) \le (1 + (n-1)\varepsilon^2)^{n/2}.\] The first upper bound is stronger than Ostrowski's bound (for $\varepsilon < 1/n$) $\det(A) \le (1 - n\varepsilon)^{-1}$. The second upper bound generalises Hadamard's inequality, which is the case $\varepsilon = 1$. A necessary and sufficient condition for our upper bounds to be best possible for matrices of order $n$ and all positive $\varepsilon$ is the existence of a skew-Hadamard matrix of order $n$.
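
A quick Monte-Carlo sanity check (ours) of the zero-diagonal bounds quoted above: sample matrices $E$ with $|E_{ij}|\le\varepsilon\le 1/n$ and zero diagonal, and compare $\det(I-E)$ with the stated lower and upper bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
eps = 1.0 / (2 * n)                      # the quoted bounds assume |E_ij| <= eps <= 1/n

dets = []
for _ in range(5000):
    E = rng.uniform(-eps, eps, size=(n, n))
    np.fill_diagonal(E, 0.0)             # zero diagonal, as required for the sharper bounds
    dets.append(np.linalg.det(np.eye(n) - E))

lower = (1 - (n - 1) * eps) * (1 + eps) ** (n - 1)
upper = (1 + (n - 1) * eps ** 2) ** (n / 2)
print(min(dets) >= lower, max(dets) <= upper)   # expected: True True
```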
We present a circulant and skew-circulant splitting (CSCS) iterative method for solving large sparse continuous Sylvester equations $AX + XB = C$, where the coefficient matrices $A$ and $B$ are Toeplitz matrices. A theoretical study shows that if the circulant and skew-circulant splitting factors of $A$ and $B$ are positive semi-definite and at least one is positive definite (not necessarily Hermitian), then the CSCS method converges to the unique solution of the Sylvester equation. In addition, we obtain an upper bound for the convergence factor of the CSCS iteration. This convergence factor depends only on the eigenvalues of the circulant and skew-circulant splitting matrices. A computational comparison with alternative methods reveals the efficiency and reliability of the proposed method.
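
To make the splitting concrete, the sketch below (ours, not the authors' implementation of the full CSCS iteration) splits a Toeplitz matrix $T$ into a circulant part $C$ and a skew-circulant part $S$, with off-diagonal first-column entries $c_{k}=(t_{k}+t_{k-n})/2$ and $s_{k}=(t_{k}-t_{k-n})/2$; sharing the main diagonal equally between the two parts is one common convention, and the helper names are ours.

```python
import numpy as np

def toeplitz_from_diags(t):
    """Toeplitz matrix with T[i, j] = t_(i-j); t lists the diagonals t_{-(n-1)}..t_{n-1}."""
    n = (len(t) + 1) // 2
    off = n - 1                                   # t[off + k] stores diagonal k
    return np.array([[t[off + i - j] for j in range(n)] for i in range(n)])

def cscs_split(t):
    """Circulant + skew-circulant splitting T = C + S of the Toeplitz matrix above."""
    n = (len(t) + 1) // 2
    off = n - 1
    c, s = np.zeros(n), np.zeros(n)               # first columns of C and S
    c[0] = s[0] = t[off] / 2.0                    # share the main diagonal equally
    for k in range(1, n):
        c[k] = (t[off + k] + t[off + k - n]) / 2.0
        s[k] = (t[off + k] - t[off + k - n]) / 2.0
    C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
    S = np.array([[s[i - j] if i >= j else -s[i - j + n] for j in range(n)]
                  for i in range(n)])             # wrap-around entries change sign
    return C, S

t = [0.1, 0.5, -1.0, 4.0, -2.0, 0.3, 0.2]         # diagonals t_{-3}..t_3 of a 4x4 Toeplitz
T = toeplitz_from_diags(t)
C, S = cscs_split(t)
print(np.allclose(C + S, T))                      # True: the splitting reproduces T
```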
In this paper, we introduce two new generalized inverses of matrices, namely, the $\bra{i}{m}$-core inverse and the $\pare{j}{m}$-core inverse. The $\bra{i}{m}$-core inverse of a complex matrix extends the notions of the core inverse defined by Baksalary and Trenkler \cite{BT} and the core-EP inverse defined by Manjunatha Prasad and Mohana \cite{MM}. The $\pare{j}{m}$-core inverse of a complex matrix extends the notions of the core inverse and the ${\rm DMP}$-inverse defined by Malik and Thome \cite{MT}. Moreover, the formulae and properties of these two new concepts are investigated by using matrix decompositions and matrix powers.
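
For orientation only (not this paper's new inverses): the DMP inverse mentioned above can be written, as we recall it, as $A^{d,\dagger}=A^{D}AA^{\dagger}$, and the Drazin inverse admits the representation $A^{D}=A^{k}(A^{2k+1})^{\dagger}A^{k}$ for any $k\ge\mathrm{ind}(A)$. A small sketch under those assumptions:

```python
import numpy as np

def drazin(A, k):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, assuming k >= ind(A)."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

def dmp_inverse(A, k):
    """DMP inverse A^{d,+} = A^D A A^+ (Malik-Thome), assuming k >= ind(A)."""
    return drazin(A, k) @ A @ np.linalg.pinv(A)

A = np.array([[1.0, 1.0, 0.0],     # a 3x3 example of index 2
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
X = dmp_inverse(A, k=2)
print(np.allclose(X @ A @ X, X))   # True: X is an outer inverse of A
```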