
Nonnegative Tensor Factorization, Completely Positive Tensors and a Hierarchical Elimination Algorithm

Published by Liqun Qi
Publication date: 2013
Research field
Paper language: English





Nonnegative tensor factorization has applications in statistics, computer vision, exploratory multiway data analysis and blind source separation. A symmetric nonnegative tensor, which has a symmetric nonnegative factorization, is called a completely positive (CP) tensor. The H-eigenvalues of a CP tensor are always nonnegative. When the order is even, the Z-eigenvalues of a CP tensor are all nonnegative. When the order is odd, a Z-eigenvector associated with a positive (negative) Z-eigenvalue of a CP tensor is always nonnegative (nonpositive). The entries of a CP tensor obey some dominance properties. The CP tensor cone and the copositive tensor cone of the same order are dual to each other. We introduce strongly symmetric tensors and show that a symmetric tensor has a symmetric binary decomposition if and only if it is strongly symmetric. Then we show that a strongly symmetric, hierarchically dominated nonnegative tensor is a CP tensor, and present a hierarchical elimination algorithm for checking this. Numerical examples are also given.
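As a hedged illustration of the definition (not code from the paper), an order-3 CP tensor can be built numerically as a sum of symmetric rank-one terms u⊗u⊗u with entrywise nonnegative vectors u; the sketch below, assuming NumPy, constructs one and checks the symmetry and entrywise nonnegativity the abstract describes.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def symmetric_rank_one(u):
    # u ⊗ u ⊗ u : a symmetric rank-one term of an order-3 tensor
    return np.einsum("i,j,k->ijk", u, u, u)

# CP tensor: a sum of symmetric rank-one terms with nonnegative factors
n, r = 4, 3
us = rng.random((r, n))                 # entrywise nonnegative vectors u_r
T = sum(symmetric_rank_one(u) for u in us)

# entries of a CP tensor are nonnegative ...
assert (T >= 0).all()

# ... and the tensor is symmetric under every permutation of its indices
for p in permutations(range(3)):
    assert np.allclose(T, T.transpose(p))
```

The dominance properties and the hierarchical elimination test in the paper go beyond this sketch; the snippet only verifies the two elementary facts stated above.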




Read also

This paper is concerned with improving the empirical convergence speed of block-coordinate descent algorithms for approximate nonnegative tensor factorization (NTF). We propose an extrapolation strategy in-between block updates, referred to as heuristic extrapolation with restarts (HER). HER significantly accelerates the empirical convergence speed of most existing block-coordinate algorithms for dense NTF, in particular for challenging computational scenarios, while requiring a negligible additional computational budget.
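The extrapolate-then-restart idea can be sketched generically. The snippet below is a minimal illustration, not the paper's HER algorithm: it uses matrix NMF with multiplicative updates as a stand-in for block-coordinate NTF, extrapolates each factor in-between block updates, and restarts (drops the extrapolation) when the error increases; `beta` is a hypothetical extrapolation weight.

```python
import numpy as np

rng = np.random.default_rng(1)

def mu_step(X, W, H):
    # one multiplicative (Lee-Seung) update of H in X ≈ W H
    return H * (W.T @ X) / np.maximum(W.T @ W @ H, 1e-12)

def her_nmf(X, r, iters=300, beta=0.5):
    # extrapolation with restarts, sketched for matrix NMF
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    We, He = W.copy(), H.copy()                 # extrapolated iterates
    err_prev = np.inf
    for _ in range(iters):
        H_new = mu_step(X, We, He)              # block update of H
        W_new = mu_step(X.T, H_new.T, We.T).T   # block update of W
        err = np.linalg.norm(X - W_new @ H_new)
        if err <= err_prev:
            # extrapolate in-between block updates, projected back to >= 0
            He = np.maximum(H_new + beta * (H_new - H), 0)
            We = np.maximum(W_new + beta * (W_new - W), 0)
        else:
            # restart: discard the extrapolation and continue plainly
            He, We = H_new, W_new
        W, H, err_prev = W_new, H_new, err
    return W, H

X = rng.random((20, 3)) @ rng.random((3, 15))   # exact nonnegative rank 3
W, H = her_nmf(X, 3)
rel = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The paper's contribution lies in the acceleration heuristics (how the extrapolation weight is grown and reset), which this sketch only caricatures with a fixed `beta`.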
Existing tensor factorization methods assume that the input tensor follows some specific distribution (i.e., Poisson, Bernoulli, or Gaussian) and solve the factorization by minimizing an empirical loss function defined from the corresponding distribution. However, this approach suffers from several drawbacks: 1) in reality, the underlying distributions are complicated and unknown, making it infeasible to approximate them with a simple distribution; 2) the correlation across dimensions of the input tensor is not well utilized, leading to sub-optimal performance. Although heuristics were proposed to incorporate such correlation as side information under a Gaussian distribution, they cannot easily be generalized to other distributions. Thus, a more principled way of utilizing the correlation in tensor factorization models is still an open challenge. Without assuming any explicit distribution, we formulate tensor factorization as an optimal transport problem with Wasserstein distance, which can handle nonnegative inputs. We introduce SWIFT, which minimizes the Wasserstein distance between the input tensor and its reconstruction. In particular, we define the N-th order tensor Wasserstein loss for the widely used tensor CP factorization and derive the optimization algorithm that minimizes it. By leveraging the sparsity structure and different equivalent formulations for computational efficiency, SWIFT is as scalable as other well-known CP algorithms. Using the factor matrices as features, SWIFT achieves up to 9.65% and 11.31% relative improvement over baselines for downstream prediction tasks. Under noisy conditions, SWIFT achieves up to 15% and 17% relative improvements over the best competitors for the prediction tasks.
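For intuition about the Wasserstein loss itself (this is a generic illustration, not SWIFT's algorithm), the sketch below computes an entropic-regularized optimal transport cost between two histograms with the standard Sinkhorn iteration; `eps` and `iters` are illustrative parameters.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, iters=500):
    # entropic-regularized optimal transport between histograms a and b
    # under ground cost C (Sinkhorn's matrix-scaling iteration)
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # transport plan
    return float((P * C).sum())           # approximate Wasserstein cost

# ground cost: distance between bin indices
n = 5
C = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
a = np.full(n, 1 / n)                     # uniform histogram
b = np.array([0.6, 0.1, 0.1, 0.1, 0.1])  # mass concentrated at bin 0
```

Unlike a pointwise loss, this cost grows with how far mass has to move between the two histograms, which is the property SWIFT exploits for tensor reconstruction.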
A third order real tensor is mapped to a special f-diagonal tensor by going through Discrete Fourier Transform (DFT), standard matrix SVD and inverse DFT. We call such an f-diagonal tensor an s-diagonal tensor. An f-diagonal tensor is an s-diagonal tensor if and only if it is mapped to itself in the above process. The third order tensor space is partitioned to orthogonal equivalence classes. Each orthogonal equivalence class has a unique s-diagonal tensor. Two s-diagonal tensors are equal if they are orthogonally equivalent. Third order tensors in an orthogonal equivalence class have the same tensor tubal rank and T-singular values. Four meaningful necessary conditions for s-diagonal tensors are presented. Then we present a set of sufficient and necessary conditions for s-diagonal tensors. Such conditions involve a special complex number. In the cases that the dimension of the third mode of the considered tensor is $2, 3$ and $4$, we present direct sufficient and necessary conditions which do not involve such a complex number.
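The DFT → matrix SVD → inverse DFT pipeline described above can be sketched directly with NumPy (a minimal illustration, assuming square frontal slices; it does not implement the paper's characterization conditions). The fixed-point property stated in the abstract is checked at the end.

```python
import numpy as np

rng = np.random.default_rng(2)

def s_diagonal(T):
    # DFT along the third mode, a matrix SVD of every frontal slice,
    # then the inverse DFT of the resulting diagonal slices
    n1, n2, n3 = T.shape
    That = np.fft.fft(T, axis=2)          # DFT along the tubes
    Shat = np.zeros((n1, n2, n3), dtype=complex)
    for k in range(n3):
        s = np.linalg.svd(That[:, :, k], compute_uv=False)
        np.fill_diagonal(Shat[:, :, k], s)
    # the imaginary part vanishes by conjugate symmetry of the slices
    return np.real(np.fft.ifft(Shat, axis=2))

T = rng.random((3, 3, 4))
S = s_diagonal(T)
# an f-diagonal tensor is s-diagonal iff the process maps it to itself
assert np.allclose(s_diagonal(S), S)
```

Because the singular values of a slice and of its complex conjugate coincide, the diagonal slices inherit the conjugate symmetry of the DFT, so the inverse DFT is real, as the definition requires.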
Spatial symmetries and invariances play an important role in the description of materials. When modelling material properties, it is important to be able to respect such invariances. Here we discuss how to model and generate random ensembles of tensors where one wants to be able to prescribe certain classes of spatial symmetries and invariances for the whole ensemble, while at the same time demanding that the mean or expected value of the ensemble be subject to a possibly higher spatial invariance class. Our special interest is in the class of physically symmetric and positive definite tensors, as they appear often in the description of materials. As the set of positive definite tensors is not a linear space, but rather an open convex cone in the linear vector space of physically symmetric tensors, it may be advantageous to widen the notion of mean to the so-called Frechet mean, which is based on distance measures between positive definite tensors other than the usual Euclidean one. For the sake of simplicity, as well as to expose the main idea as clearly as possible, we limit ourselves here to second order tensors. It is shown how the random ensemble can be modelled and generated, with fine control of the spatial symmetry or invariance of the whole ensemble, as well as its Frechet mean, independently in its scaling and directional aspects. As an example, a 2D and a 3D model of steady-state heat conduction in a human proximal femur, a bone with high material anisotropy, is explored. It is modelled with a random thermal conductivity tensor, and the numerical results show the distinct impact of incorporating into the constitutive model different material uncertainties (scaling, orientation, and prescribed material symmetry) on the desired quantities of interest, such as temperature distribution and heat flux.
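One simple way to generate such an ensemble with a prescribed Frechet mean is to sample in the logarithm (tangent) space of the positive definite cone. The sketch below is an illustration under the log-Euclidean metric with made-up second-order 2x2 values, not the paper's construction, which controls symmetry classes and scaling/directional aspects separately.

```python
import numpy as np

rng = np.random.default_rng(3)

def expm_sym(S):
    # matrix exponential of a symmetric matrix via eigendecomposition
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def logm_spd(A):
    # matrix logarithm of a symmetric positive definite matrix
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

# prescribed Frechet mean (log-Euclidean metric): an anisotropic 2x2
# conductivity tensor -- illustrative values, not from the paper
M = np.array([[2.0, 0.5], [0.5, 1.0]])
L = logm_spd(M)

def sample(sigma=0.3):
    # a zero-mean symmetric fluctuation in log space keeps every sample
    # positive definite and the log-Euclidean Frechet mean at M
    G = rng.normal(0.0, sigma, (2, 2))
    return expm_sym(L + (G + G.T) / 2)

ensemble = [sample() for _ in range(2000)]
frechet = expm_sym(sum(logm_spd(A) for A in ensemble) / len(ensemble))
```

Exponentiating guarantees positive definiteness of every sample, and the Frechet mean under this metric is the exponential of the averaged logarithms, so `frechet` approaches `M` as the ensemble grows.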
Abdul Ahad, Zhen Long, Ce Zhu (2020)
Tensor completion can estimate the missing values of high-order data from its partially observed entries. Recent works show that low-rank tensor ring approximation is one of the most powerful tools for the tensor completion problem. However, existing algorithms need a predefined tensor ring rank, which may be hard to determine in practice. To address this issue, we propose a hierarchical tensor ring decomposition for a more compact representation. We use the standard tensor ring to decompose a tensor into several 3-order sub-tensors in the first layer, and each sub-tensor is further factorized by tensor singular value decomposition (t-SVD) in the second layer. In low-rank tensor completion based on the proposed decomposition, the zero elements in the 3-order core tensor are pruned in the second layer, which helps to automatically determine the tensor ring rank. To further enhance recovery performance, we use total variation to exploit the locally piecewise-smooth structure of the data. The alternating direction method of multipliers divides the optimization model into several subproblems, each of which can be solved efficiently. Numerical experiments on color images and hyperspectral images demonstrate that the proposed algorithm outperforms state-of-the-art ones in terms of recovery accuracy.
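The standard tensor ring used in the first layer represents a tensor by a cyclic chain of 3-order cores. As a minimal sketch (not the paper's hierarchical algorithm), the snippet below contracts a list of cores G_k of shape (r_{k-1}, n_k, r_k), with the last rank wrapping around to the first, into the full tensor.

```python
import numpy as np

rng = np.random.default_rng(4)

def ring_to_full(cores):
    # contract tensor-ring cores G_k in R^{r_{k-1} x n_k x r_k},
    # with r_d = r_0, into the full tensor of shape (n_1, ..., n_d)
    r0 = cores[0].shape[0]
    dims = [G.shape[1] for G in cores]
    out = cores[0].reshape(-1, cores[0].shape[-1])   # (r0*n1, r1)
    for G in cores[1:]:
        r_prev, n, r_next = G.shape
        out = out @ G.reshape(r_prev, n * r_next)    # chain the cores
        out = out.reshape(-1, r_next)
    out = out.reshape([r0] + dims + [r0])
    return np.trace(out, axis1=0, axis2=-1)          # close the ring

# a small ring with ranks (2, 4, 2) and mode sizes (3, 5)
G1 = rng.random((2, 3, 4))
G2 = rng.random((4, 5, 2))
T = ring_to_full([G1, G2])
```

Entrywise, T[i, j] equals trace(G1[:, i, :] @ G2[:, j, :]); the trace at the end is what closes the chain into a ring, and the total core storage grows only linearly in the number of modes.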
