In this paper, we show that if a lower-order Hankel tensor is positive semi-definite (or positive definite, negative semi-definite, negative definite, or SOS), then the higher-order Hankel tensor with the same generating vector, where the higher order is a multiple of the lower order, is also positive semi-definite (or positive definite, negative semi-definite, negative definite, or SOS, respectively). Furthermore, in this case the extremal H-eigenvalues of the higher-order tensor are bounded by the extremal H-eigenvalues of the lower-order tensor, multiplied by certain constants. Based on this inheritance property, we give a concrete sum-of-squares decomposition for each strong Hankel tensor. We then prove a second inheritance property of Hankel tensors: a Hankel tensor has no negative (or no non-positive, no positive, or no nonnegative) H-eigenvalues if its associated Hankel matrix has no negative (or no non-positive, no positive, or no nonnegative, respectively) eigenvalues. In this case, the extremal H-eigenvalues of the Hankel tensor are again bounded by the extremal eigenvalues of the associated Hankel matrix, multiplied by certain constants. A third inheritance property of Hankel tensors is raised as a conjecture.
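For reference, the objects involved here can be written down explicitly (standard conventions from the Hankel-tensor literature; the notation below is a paraphrase, not quoted from the paper): an $m$-th order $n$-dimensional Hankel tensor $\mathcal{A} = (a_{i_1 \cdots i_m})$ is generated by a vector $v = (v_0, v_1, \ldots, v_{(n-1)m})^\top$ via
\[
  a_{i_1 i_2 \cdots i_m} = v_{i_1 + i_2 + \cdots + i_m}, \qquad 0 \le i_1, \ldots, i_m \le n-1,
\]
and its associated Hankel matrix is the square Hankel matrix generated by the same vector $v$; a strong Hankel tensor is one whose associated Hankel matrix is positive semi-definite.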
We consider two problems that arise in machine learning applications: the problem of recovering a planted sparse vector in a random linear subspace and the problem of decomposing a random low-rank overcomplete 3-tensor. For both problems, the best known […]
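To make the first problem concrete, here is a minimal sketch of how a planted-sparse-vector instance can be generated; the dimensions, sparsity level, and variable names are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 10, 20  # ambient dim, subspace dim, sparsity (illustrative)

# Plant a k-sparse unit vector v0 in R^n.
v0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
v0[support] = rng.standard_normal(k)
v0 /= np.linalg.norm(v0)

# Observe an arbitrary basis of span{v0, g_1, ..., g_{d-1}} with Gaussian g_i;
# a random change of basis hides the planted vector.
B = np.column_stack([v0, rng.standard_normal((n, d - 1))])
Q, _ = np.linalg.qr(B)                   # orthonormal basis of the subspace
basis = Q @ rng.standard_normal((d, d))  # columns still span the same subspace

# Task: given only `basis`, recover v0 up to sign and scaling.
# (The recovery algorithm itself is the subject of the paper and is omitted.)
```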
In order to obtain the best-known guarantees, algorithms are traditionally tailored to the particular problem we want to solve. Two recent developments, the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method, surprisingly suggest that […]
We introduce a new framework for unifying and systematizing the performance analysis of first-order black-box optimization algorithms for unconstrained convex minimization. The low-cost iteration complexity enjoyed by first-order algorithms renders t[…]
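For context, a minimal sketch of the kind of method such frameworks analyze: plain gradient descent, which accesses the objective only through gradient evaluations (a first-order black-box oracle). The quadratic objective, step count, and seed below are illustrative assumptions, not anything specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)

# f(x) = 0.5 * ||Ax - b||^2 is convex and L-smooth with L = ||A^T A||_2.
L = np.linalg.norm(A.T @ A, 2)

def grad(x):
    return A.T @ (A @ x - b)

# First-order black-box method: only gradient evaluations are used.
x = np.zeros(20)
for _ in range(200):
    x = x - (1.0 / L) * grad(x)  # classical 1/L step size

print(0.5 * np.linalg.norm(A @ x - b) ** 2)  # final objective value
```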
We give a new approach to the dictionary learning (also known as sparse coding) problem of recovering an unknown $n \times m$ matrix $A$ (for $m \geq n$) from examples of the form
\[
  y = Ax + e,
\]
where $x$ is a random vector in $\mathbb{R}^m$ with at most […]
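A minimal sketch of sampling from the generative model $y = Ax + e$ follows; the sizes, sparsity level, and noise scale are illustrative assumptions, and the paper's distributional requirements on $x$ are more specific than this.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 64, 128, 8             # A is n x m with m >= n; x has at most k nonzeros

A = rng.standard_normal((n, m))  # the unknown dictionary (columns are atoms)

def sample_example(noise=0.01):
    """Draw one example y = A @ x + e with a k-sparse random x."""
    x = np.zeros(m)
    support = rng.choice(m, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    e = noise * rng.standard_normal(n)
    return A @ x + e

Y = np.column_stack([sample_example() for _ in range(500)])
# Dictionary learning: recover A (up to column permutation and sign) from Y alone.
```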
In this paper, we mainly focus on how to generalize some conclusions from nonnegative irreducible tensors to nonnegative weakly irreducible tensors. To do so, a basic and important lemma is proven using new tools. First, we give the definition of sto[…]
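For reference, the eigenvalue notion underlying this line of work (standard in the nonnegative-tensor literature; our paraphrase, not quoted from the paper): for an $m$-th order $n$-dimensional tensor $\mathcal{A} = (a_{i_1 \cdots i_m})$, $\lambda$ is an eigenvalue with eigenvector $x \neq 0$ if
\[
  (\mathcal{A} x^{m-1})_i = \sum_{i_2, \ldots, i_m = 1}^{n} a_{i\, i_2 \cdots i_m}\, x_{i_2} \cdots x_{i_m} = \lambda\, x_i^{m-1}, \qquad i = 1, \ldots, n,
\]
and the Perron-Frobenius theory in question extends statements about the spectral radius from nonnegative irreducible tensors to the weakly irreducible case.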