
Bounds on complexity of matrix multiplication away from CW tensors

Added by Joachim Jelisiejew
Publication date: 2021
Language: English





We present three families of minimal border rank tensors: they come from highest weight vectors, smoothable algebras, or monomial algebras. We analyse them using Strassen's laser method and obtain an upper bound $2.431$ on $\omega$. We also explain how, in certain monomial cases, applying the laser method directly is less profitable than first degenerating. Our results form possible paths in the search for valuable tensors for the laser method away from Coppersmith-Winograd tensors.
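For background (standard definitions, not part of the abstract itself): $\omega$ denotes the exponent of matrix multiplication, and an upper bound on it asserts that $n\times n$ matrices can be multiplied in $O(n^{\omega+\epsilon})$ arithmetic operations for every $\epsilon>0$. In the usual notation,
\[
  \omega \;=\; \inf\bigl\{\tau \in \mathbb{R} \;:\; \mathrm{R}(\langle n,n,n\rangle) = O(n^{\tau})\bigr\},
\]
so the result above reads as $2 \le \omega \le 2.431$, the trivial lower bound $2$ coming from the need to write down the $n^2$ output entries.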

Related research

The image of the principal minor map for n x n-matrices is shown to be closed. In the 19th century, Nanson and Muir studied the implicitization problem of finding all relations among principal minors when n=4. We complete their partial results by constructing explicit polynomials of degree 12 that scheme-theoretically define this affine variety and also its projective closure in $\mathbb{P}^{15}$. The latter is the main component in the singular locus of the 2 x 2 x 2 x 2-hyperdeterminant.
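A counting remark (standard background, not from the abstract) explains the ambient space: the principal minors of a $4\times 4$ matrix are indexed by the subsets $S \subseteq \{1,2,3,4\}$, with the empty minor set to $1$ by convention, so the map is
\[
  A \;\longmapsto\; \bigl(\det A_{S,S}\bigr)_{S \subseteq \{1,2,3,4\}} \in \mathbb{C}^{2^4} = \mathbb{C}^{16},
\]
and its projective closure therefore lives in $\mathbb{P}^{16-1} = \mathbb{P}^{15}$.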
Whereas matrix rank is additive under direct sum, in 1981 Schönhage showed that one of its generalizations to the tensor setting, tensor border rank, can be strictly subadditive for tensors of order three. Whether border rank is additive for higher order tensors has remained open. In this work, we settle this problem by providing analogs of Schönhage's construction for tensors of order four and higher. Schönhage's work was motivated by the study of the computational complexity of matrix multiplication; we discuss implications of our results for the asymptotic rank of higher order generalizations of the matrix multiplication tensor.
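To fix notation (a standard definition, not taken from the abstract), the border rank of a tensor $T$ is
\[
  \underline{\mathrm{R}}(T) \;=\; \min\bigl\{\, r \;:\; T \text{ is a limit of tensors of rank at most } r \,\bigr\},
\]
and strict subadditivity means $\underline{\mathrm{R}}(T_1 \oplus T_2) < \underline{\mathrm{R}}(T_1) + \underline{\mathrm{R}}(T_2)$ for some pair of tensors, in contrast to matrix rank, for which equality always holds under direct sum.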
Joel Friedman (2017)
We develop a notion of {\em inner rank} as a tool for obtaining lower bounds on the rank of matrix multiplication tensors. We use it to give a short proof that the border rank (and therefore rank) of the tensor associated with $n\times n$ matrix multiplication over an arbitrary field is at least $2n^2-n+1$. While inner rank does not provide improvements to currently known lower bounds, we argue that this notion merits further study.
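Written out, with the matrix multiplication tensor denoted $\langle n,n,n\rangle$ as above, the bound is
\[
  \underline{\mathrm{R}}(\langle n,n,n\rangle) \;\ge\; 2n^2 - n + 1,
\]
which for $n=2$ gives $2\cdot 4 - 2 + 1 = 7$ and thus matches the known border rank $7$ of the $2\times 2$ matrix multiplication tensor.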
We propose several constructions for the original multiplication algorithm of D.V. and G.V. Chudnovsky in order to improve its scalar complexity. We highlight the set of generic strategies that underlie the optimization of the scalar complexity, according to parameterizable criteria. As an example, we apply this analysis to the construction of Chudnovsky$^2$ multiplication algorithms of elliptic type for small extensions. As a case study, we significantly improve the Baum-Shokrollahi construction for multiplication in $\mathbb{F}_{256}/\mathbb{F}_4$.
We study quantum algorithms that learn properties of a matrix using queries that return its action on an input vector. We show that for various problems, including computing the trace, determinant, or rank of a matrix or solving a linear system that it specifies, quantum computers do not provide an asymptotic speedup over classical computation. On the other hand, we show that for some problems, such as computing the parities of rows or columns or deciding if there are two identical rows or columns, quantum computers provide exponential speedup. We demonstrate this by showing equivalence between models that provide matrix-vector products, vector-matrix products, and vector-matrix-vector products, whereas the power of these models can vary significantly for classical computation.
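As a concrete illustration of the query model in the last abstract (illustrative only; the function names and the use of Hutchinson's randomized estimator are our own choices, not constructions from the paper), the sketch below wraps a matrix as an oracle that returns its action on an input vector and uses random $\pm 1$ probes to estimate the trace, one of the problems the abstract mentions on the classical side of the model.

import numpy as np

# Illustrative sketch only: a classical matrix-vector query oracle and
# Hutchinson's randomized trace estimator.  It shows the access model
# (queries returning A @ v) referred to in the abstract; it is not the
# paper's algorithm.

def make_oracle(A):
    """Wrap a matrix as a query oracle that returns its action on a vector."""
    return lambda v: A @ v

def estimate_trace(oracle, n, num_queries=2000, seed=None):
    """Estimate tr(A) from matrix-vector queries with Rademacher probes:
    E[v^T (A v)] = tr(A) when the entries of v are independent +/-1."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_queries):
        v = rng.choice([-1.0, 1.0], size=n)
        total += v @ oracle(v)
    return total / num_queries

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 50))
    print("true trace:     ", np.trace(A))
    print("estimated trace:", estimate_trace(make_oracle(A), n=50, seed=1))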