
Computing with quasiseparable matrices

Posted by: Clement Pernet
Publication date: 2016
Research field: Informatics Engineering
Language: English
Author: Clement Pernet





The class of quasiseparable matrices is defined by a pair of bounds, called the quasiseparable orders, on the ranks of the maximal submatrices entirely located in their strictly lower and upper triangular parts. These matrices arise naturally in applications, e.g. as inverses of band matrices, and are widely used because they admit structured representations that allow computing with them in time linear in the dimension and quadratic in the quasiseparable order. In this paper, we show the connection between the notion of quasiseparability and the rank profile matrix invariant presented in [Dumas et al., ISSAC'15]. This allows us to propose an algorithm computing the quasiseparable orders (r_L, r_U) in time O(n^2 s^(ω-2)), where s = max(r_L, r_U) and ω is the exponent of matrix multiplication. We then present two new structured representations, a binary tree of PLUQ decompositions and the Bruhat generator, using respectively O(ns log(n/s)) and O(ns) field elements instead of the O(ns^2) of previously known generators. We present algorithms computing these representations in time O(n^2 s^(ω-2)). These representations allow a matrix-vector product in time linear in the size of the representation. Lastly, we show how to multiply two such structured matrices in time O(n^2 s^(ω-2)).
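As a point of reference for the definition above, here is a minimal brute-force sketch that computes the quasiseparable orders directly from their definition and checks them on the inverse of a band matrix. It is not the paper's O(n^2 s^(ω-2)) algorithm; the function name and the floating-point setting are illustrative choices, whereas the paper works over an exact field.

```python
# Minimal sketch: quasiseparable orders (r_L, r_U) computed straight from the
# definition, as the maximal ranks of the submatrices entirely located in the
# strictly lower / upper triangular parts.  This is the naive O(n^4) check,
# not the paper's algorithm; all names are illustrative.
import numpy as np

def quasiseparable_orders(A):
    """Return (r_L, r_U) for a square matrix A, by brute force."""
    n = A.shape[0]
    r_lower = r_upper = 0
    for k in range(1, n):
        # maximal submatrix strictly below the diagonal: rows k..n-1, columns 0..k-1
        r_lower = max(r_lower, np.linalg.matrix_rank(A[k:, :k]))
        # maximal submatrix strictly above the diagonal: rows 0..k-1, columns k..n-1
        r_upper = max(r_upper, np.linalg.matrix_rank(A[:k, k:]))
    return r_lower, r_upper

# The inverse of a tridiagonal (band) matrix is quasiseparable of order (1, 1).
n = 8
T = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
print(quasiseparable_orders(np.linalg.inv(T)))   # expected: (1, 1)
```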




Read also

The class of quasiseparable matrices is defined by the property that any submatrix entirely below or above the main diagonal has small rank, namely below a bound called the order of quasiseparability. These matrices arise naturally in solving PDEs for particle interaction with the Fast Multipole Method (FMM), or in computing generalized eigenvalues. From these application fields, structured representations and algorithms have been designed in numerical linear algebra to compute with these matrices in time linear in the matrix dimension and either quadratic or cubic in the quasiseparability order. Motivated by the design of the general-purpose exact linear algebra library LinBox, and by algorithmic applications in algebraic computing, we adapt existing techniques and introduce novel ones to use quasiseparable matrices in exact linear algebra, where sub-cubic matrix arithmetic is available. In particular, we will show the connection between the notion of quasiseparability and the rank profile matrix invariant that we introduced in 2015. This results in two new structured representations, one being a simpler variation on the hierarchically semiseparable storage, and the second one exploiting the generalized Bruhat decomposition. As a consequence, most basic operations, such as computing the quasiseparability orders, applying the matrix to a vector or a block of vectors, multiplying two quasiseparable matrices together, and inverting a quasiseparable matrix, can be performed at least as fast and often faster than with previously existing algorithms.
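To make the "linear in the matrix dimension" claim concrete, the sketch below applies an order-1 quasiseparable matrix, stored in the classical diagonal-plus-generator form (not one of the two representations introduced in the paper), to a vector in O(n) operations; the names u, v, d, p, q are illustrative.

```python
# Order-1 quasiseparable matrix stored by generators:
#   A[i, j] = u[i]*v[j] for i > j,   d[i] for i = j,   p[i]*q[j] for i < j,
# with an O(n) matrix-vector product using running prefix/suffix sums.
import numpy as np

def qs1_matvec(u, v, d, p, q, x):
    n = len(x)
    y = d * x                        # diagonal contribution
    s = 0.0                          # s accumulates sum_{j < i} v[j] * x[j]
    for i in range(n):
        y[i] += u[i] * s
        s += v[i] * x[i]
    t = 0.0                          # t accumulates sum_{j > i} q[j] * x[j]
    for i in range(n - 1, -1, -1):
        y[i] += p[i] * t
        t += q[i] * x[i]
    return y

# Consistency check against the dense matrix represented by the generators.
rng = np.random.default_rng(0)
n = 6
u, v, d, p, q, x = (rng.standard_normal(n) for _ in range(6))
A = np.diag(d) + np.tril(np.outer(u, v), -1) + np.triu(np.outer(p, q), 1)
assert np.allclose(A @ x, qs1_matvec(u, v, d, p, q, x))
```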
Patrick Bahr, 2021
Graphs, and graph transformation systems, are used in many areas within Computer Science: to represent data structures and algorithms, to define computation models, as a general modelling tool to study complex systems, etc. Research in term and graph rewriting ranges from theoretical questions to practical implementation issues. Relevant research areas include: the modelling of first- and higher-order term rewriting by graph rewriting, graphical frameworks such as interaction nets and sharing graphs (optimal reduction), rewrite calculi for the analysis of functional programs, graph reduction implementations of programming languages, graphical calculi modelling concurrent and mobile computations, object-oriented systems, graphs as a model of biological or chemical systems, and automated reasoning and symbolic computation systems working on shared structures. The aim of the TERMGRAPH workshop is to bring together researchers working in these different domains and to foster their interaction, to provide a forum for presenting new ideas and work in progress, and to enable newcomers to learn about current activities in this area.
The row (resp. column) rank profile of a matrix describes the staircase shape of its row (resp. column) echelon form. In an ISSAC'13 paper, we proposed a recursive Gaussian elimination that can compute simultaneously the row and column rank profiles of a matrix, as well as those of all of its leading submatrices, in the same time as state-of-the-art Gaussian elimination algorithms. Here we first study the conditions making a Gaussian elimination algorithm reveal this information. We therefore propose the definition of a new matrix invariant, the rank profile matrix, summarizing all information on the row and column rank profiles of all the leading submatrices. We also explore the conditions for a Gaussian elimination algorithm to compute all or part of this invariant, through the corresponding PLUQ decomposition. As a consequence, we show that the classical iterative CUP decomposition algorithm can actually be adapted to compute the rank profile matrix. Used, in a Crout variant, as a base case for our ISSAC'13 implementation, it delivers a significant improvement in efficiency. Second, the row and column echelon forms of a matrix are usually computed via different dedicated triangular decompositions. We show here that, from some PLUQ decompositions, it is possible to recover the row and column echelon forms of a matrix, and of any of its leading submatrices, thanks to an elementary post-processing algorithm.
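For readers unfamiliar with rank profiles, the following small sketch computes the row and column rank profiles directly from their definition (the lexicographically smallest sets of row, resp. column, indices of maximal rank). It is a naive illustration over floating-point numbers, not the PLUQ-based algorithms discussed above, and the function names are illustrative.

```python
# Naive computation of rank profiles: keep the first rows (resp. columns)
# that strictly increase the rank of the set collected so far.
import numpy as np

def row_rank_profile(A):
    """Indices of the lexicographically first linearly independent rows of A."""
    profile, basis = [], np.zeros((0, A.shape[1]))
    for i, row in enumerate(A):
        candidate = np.vstack([basis, row])
        if np.linalg.matrix_rank(candidate) > basis.shape[0]:
            profile.append(i)
            basis = candidate
    return profile

def column_rank_profile(A):
    return row_rank_profile(A.T)

A = np.array([[0., 2., 3.],
              [0., 4., 6.],
              [1., 0., 0.]])
print(row_rank_profile(A))     # [0, 2]: row 1 is a multiple of row 0
print(column_rank_profile(A))  # [0, 1]: column 2 is a multiple of column 1
```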
Mahler equations relate evaluations of the same function f at iterated b-th powers of the variable. They arise in particular in the study of automatic sequences and in the complexity analysis of divide-and-conquer algorithms. Recently, the problem of solving Mahler equations in closed form has occurred in connection with number-theoretic questions. A difficulty in the manipulation of Mahler equations is the exponential blow-up of degrees when applying a Mahler operator to a polynomial. In this work, we present algorithms for solving linear Mahler equations for series, polynomials, and rational functions, and get polynomial-time complexity under a mild assumption. Incidentally, we develop an algorithm for computing the gcrd of a family of linear Mahler operators.
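The degree blow-up mentioned above is easy to see on a toy example: one application of the Mahler operator f(x) -> f(x^b) multiplies the degree of a polynomial by b, so iterating it makes degrees grow geometrically. The sketch below (using SymPy; not one of the algorithms of the abstract) only illustrates this effect.

```python
# Iterating the Mahler operator x -> x^b on a small polynomial and watching
# the degree grow by a factor b at each step.
import sympy as sp

x = sp.symbols('x')
b = 3
q = 1 + x + 2 * x**2                 # degree 2

for k in range(1, 4):
    q = sp.expand(q.subs(x, x**b))   # one application of the Mahler operator
    print(k, sp.degree(q, x))        # prints degrees 6, 18, 54
```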
Monique Laurent, 2008
In this note we prove a generalization of the flat extension theorem of Curto and Fialkow for truncated moment matrices. It applies to moment matrices indexed by an arbitrary set of monomials and its border, assuming that this set is connected to 1. When formulated in a basis-free setting, this gives an equivalent result for truncated Hankel operators.