Two equalities expressing the determinant of a matrix in terms of expectations over matrix-vector products

Posted by Jascha Sohl-Dickstein
Publication date: 2020
Research field: Mathematical statistics
Paper language: English





We introduce two equations expressing the inverse determinant of a full rank matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ in terms of expectations over matrix-vector products. The first relationship is $|\mathrm{det}(\mathbf{A})|^{-1} = \mathbb{E}_{\mathbf{s} \sim \mathcal{S}^{n-1}}\bigl[\, \Vert \mathbf{A}\mathbf{s} \Vert^{-n} \bigr]$, where expectations are over vectors drawn uniformly on the surface of an $n$-dimensional radius one hypersphere. The second relationship is $|\mathrm{det}(\mathbf{A})|^{-1} = \mathbb{E}_{\mathbf{x} \sim q}[\, p(\mathbf{A}\mathbf{x}) / q(\mathbf{x}) ]$, where $p$ and $q$ are smooth distributions, and $q$ has full support.
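Both identities suggest simple Monte Carlo estimators that touch $\mathbf{A}$ only through matrix-vector products. Below is a minimal NumPy sketch of the two estimators; the function names, the sample counts, and the choice $p = q = \mathcal{N}(0, \mathbf{I})$ for the second identity are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def inv_abs_det_sphere(A, num_samples=100_000, rng=None):
    """Estimate |det(A)|^{-1} via the first identity:
    |det(A)|^{-1} = E_{s ~ S^{n-1}}[ ||A s||^{-n} ],
    with s drawn uniformly from the unit hypersphere."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    # Uniform points on the sphere: normalize standard Gaussian draws.
    z = rng.standard_normal((num_samples, n))
    s = z / np.linalg.norm(z, axis=1, keepdims=True)
    As = s @ A.T                       # only matrix-vector products with A
    return np.mean(np.linalg.norm(As, axis=1) ** (-n))

def inv_abs_det_importance(A, num_samples=100_000, rng=None):
    """Estimate |det(A)|^{-1} via the second identity:
    |det(A)|^{-1} = E_{x ~ q}[ p(A x) / q(x) ].
    Here p = q = N(0, I) is an arbitrary illustrative choice."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    x = rng.standard_normal((num_samples, n))    # x ~ q = N(0, I)
    Ax = x @ A.T
    log_p = -0.5 * np.sum(Ax ** 2, axis=1)       # log N(Ax; 0, I), constants cancel
    log_q = -0.5 * np.sum(x ** 2, axis=1)        # log N(x; 0, I)
    return np.mean(np.exp(log_p - log_q))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    print(1.0 / abs(np.linalg.det(A)),
          inv_abs_det_sphere(A, rng=rng),
          inv_abs_det_importance(A, rng=rng))
```

Both estimators are unbiased for $|\mathrm{det}(\mathbf{A})|^{-1}$, but their variance can be large when $\mathbf{A}$ is poorly conditioned or when $q$ is poorly matched to $p$ composed with $\mathbf{A}$.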




Read also

Algorithms involving Gaussian processes or determinantal point processes typically require computing the determinant of a kernel matrix. Frequently, the latter is computed from the Cholesky decomposition, an algorithm of cubic complexity in the size of the matrix. We show that, under mild assumptions, it is possible to estimate the determinant from only a sub-matrix, with probabilistic guarantee on the relative error. We present an augmentation of the Cholesky decomposition that stops under certain conditions before processing the whole matrix. Experiments demonstrate that this can save a considerable amount of time while having an overhead of less than $5\%$ when not stopping early. More generally, we present a probabilistic stopping strategy for the approximation of a sum of known length where addends are revealed sequentially. We do not assume independence between addends, only that they are bounded from below and decrease in conditional expectation.
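As a rough illustration of the general idea (not the paper's probabilistic stopping rule), the sketch below runs a right-looking Cholesky factorization column by column, accumulates the log-determinant addends $2\log L_{kk}$, and stops early using a crude heuristic extrapolation of the remaining sum; the tolerance and the extrapolation are illustrative assumptions.

```python
import numpy as np

def partial_cholesky_logdet(K, rel_tol=1e-3):
    """Illustrative early-stopping estimate of log det(K) for a symmetric
    positive definite kernel matrix K. The exact value is 2 * sum(log L[k, k])
    over the full Cholesky factor; for kernel matrices these addends tend to
    decrease, so the tail is crudely extrapolated from the last addend.
    This heuristic stands in for the paper's probabilistic stopping criterion."""
    L = np.array(K, dtype=float)                  # in-place working copy
    n = L.shape[0]
    logdet = 0.0
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        addend = 2.0 * np.log(L[k, k])
        logdet += addend
        remaining = addend * (n - k - 1)          # crude extrapolation of the tail
        if k > 0 and abs(remaining) <= rel_tol * abs(logdet):
            return logdet + remaining, k + 1      # stopped after k + 1 columns
        L[k + 1:, k] /= L[k, k]                   # scale the pivot column
        L[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])  # Schur update
    return logdet, n
```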
We study quantum algorithms that learn properties of a matrix using queries that return its action on an input vector. We show that for various problems, including computing the trace, determinant, or rank of a matrix or solving a linear system that it specifies, quantum computers do not provide an asymptotic speedup over classical computation. On the other hand, we show that for some problems, such as computing the parities of rows or columns or deciding if there are two identical rows or columns, quantum computers provide exponential speedup. We demonstrate this by showing equivalence between models that provide matrix-vector products, vector-matrix products, and vector-matrix-vector products, whereas the power of these models can vary significantly for classical computation.
We evaluate the determinant of a matrix whose entries are elliptic hypergeometric terms and whose form is reminiscent of Sylvester matrices. A hypergeometric determinant evaluation of a matrix of this type has appeared in the context of approximation theory, in the work of Feng, Krattenthaler and Xu. Our determinant evaluation is an elliptic extension of their evaluation, which has two additional parameters (in addition to the base $q$ and nome $p$ found in elliptic hypergeometric terms). We also extend the evaluation to a formula transforming an elliptic determinant into a multiple of another elliptic determinant. This transformation has two further parameters. The proofs of the determinant evaluation and the transformation formula require an elliptic determinant lemma due to Warnaar, and the application of two $C_n$ elliptic formulas that extend Frenkel and Turaev's $_{10}V_9$ summation formula and $_{12}V_{11}$ transformation formula, results due to Warnaar, Rosengren, Rains, and Coskun and Gustafson.
We gather several results on the eigenvalues of the spatial sign covariance matrix of an elliptical distribution. It is shown that the eigenvalues are a one-to-one function of the eigenvalues of the shape matrix and that they are closer together than the latter. We further provide a one-dimensional integral representation of the eigenvalues, which facilitates their numerical computation.
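These properties are easy to check numerically. The sketch below (an illustration, not taken from the paper) estimates the spatial sign covariance matrix $\mathbb{E}[\mathbf{x}\mathbf{x}^{\top}/\Vert\mathbf{x}\Vert^{2}]$ by simulation from a multivariate normal with a chosen shape matrix, and compares its eigenvalues (which sum to one) with the normalized eigenvalues of the shape matrix; the shape matrix and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shape matrix with well-separated eigenvalues (illustrative choice).
shape_eigs = np.array([9.0, 4.0, 1.0])
V = np.diag(shape_eigs)

# Draw from an elliptical distribution (here multivariate normal) with shape V.
x = rng.multivariate_normal(np.zeros(3), V, size=200_000)

# Spatial sign covariance matrix: E[ x x^T / ||x||^2 ].
s = x / np.linalg.norm(x, axis=1, keepdims=True)
sscm = s.T @ s / s.shape[0]

sscm_eigs = np.sort(np.linalg.eigvalsh(sscm))[::-1]
print("normalized shape eigenvalues:", shape_eigs / shape_eigs.sum())
print("SSCM eigenvalues:            ", sscm_eigs)  # same ordering, closer together
```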
We present a quantum algorithm that verifies a product of two n*n matrices over any field with bounded error in worst-case time n^{5/3} and expected time n^{5/3} / min(w, sqrt(n))^{1/3}, where w is the number of wrong entries. This improves the previous best algorithm that runs in time n^{7/4}. We also present a quantum matrix multiplication algorithm that is efficient when the result has few nonzero entries.
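For comparison, the standard classical randomized baseline for this verification problem is Freivalds' algorithm, which also uses only matrix-vector products and takes O(n^2) time per trial. A minimal sketch follows; this is the classical check, not the quantum algorithm of the abstract.

```python
import numpy as np

def freivalds_verify(A, B, C, trials=20, rng=None):
    """Classical Freivalds check of whether A @ B == C using only
    matrix-vector products. If the product is correct the test always passes;
    if it is wrong, each trial with a random 0/1 vector catches the error with
    probability at least 1/2, so 'trials' repetitions err with probability at
    most 2**(-trials). Exact for integer matrices; floating-point inputs would
    need a tolerance-based comparison instead."""
    rng = np.random.default_rng() if rng is None else rng
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)
        if not np.array_equal(A @ (B @ r), C @ r):
            return False   # a discrepancy was found: A @ B != C
    return True            # no discrepancy found: A @ B == C with high probability
```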