
SWIFT: Scalable Wasserstein Factorization for Sparse Nonnegative Tensors

Posted by Kejing Yin
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Existing tensor factorization methods assume that the input tensor follows some specific distribution (e.g. Poisson, Bernoulli, or Gaussian) and solve the factorization by minimizing an empirical loss function defined for that distribution. This approach suffers from several drawbacks: 1) In reality, the underlying distributions are complicated and unknown, so they cannot be well approximated by a simple parametric distribution. 2) The correlation across dimensions of the input tensor is not well utilized, leading to sub-optimal performance. Although heuristics have been proposed to incorporate such correlation as side information under a Gaussian assumption, they cannot easily be generalized to other distributions. Thus, a more principled way of utilizing correlation in tensor factorization models remains an open challenge. Without assuming any explicit distribution, we formulate tensor factorization as an optimal transport problem with a Wasserstein distance, which can handle non-negative inputs. We introduce SWIFT, which minimizes the Wasserstein distance between the input tensor and its reconstruction. In particular, we define the N-th order tensor Wasserstein loss for the widely used CP tensor factorization and derive an optimization algorithm that minimizes it. By leveraging the sparsity structure and different equivalent formulations to improve computational efficiency, SWIFT is as scalable as other well-known CP algorithms. Using the factor matrices as features, SWIFT achieves up to 9.65% and 11.31% relative improvement over baselines on downstream prediction tasks. Under noisy conditions, SWIFT achieves up to 15% and 17% relative improvement over the best competitors on the same prediction tasks.
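The abstract states the objective but not the machinery; as one concrete point of reference, Wasserstein distances between a nonnegative input and its reconstruction are commonly approximated with entropic regularization and Sinkhorn iterations. The sketch below is a minimal illustration under that assumption — the cost matrix, regularization strength eps, and iteration count are all illustrative choices, not SWIFT's actual algorithm or settings.

```python
import numpy as np

def sinkhorn_wasserstein(p, q, C, eps=0.05, n_iter=200):
    """Entropic-regularized Wasserstein distance between nonnegative
    histograms p and q under ground cost C (standard Sinkhorn scaling;
    a stand-in for, not a reproduction of, SWIFT's tensor loss)."""
    p = p / p.sum()                       # normalize inputs to histograms
    q = q / q.sum()
    K = np.exp(-C / eps)                  # Gibbs kernel of the cost
    u = np.ones_like(p)
    for _ in range(n_iter):               # alternating marginal scalings
        v = q / (K.T @ u)
        u = p / (K @ v)
    T = u[:, None] * K * v[None, :]       # transport plan with marginals p, q
    return float((T * C).sum())           # transport cost <T, C>

# Toy use: distance between a vectorized tensor slice and its reconstruction.
rng = np.random.default_rng(0)
x, x_hat = rng.random(10), rng.random(10)
idx = np.arange(10, dtype=float)
C = (idx[:, None] - idx[None, :]) ** 2
C /= C.max()                              # keep exp(-C/eps) well-scaled
print(sinkhorn_wasserstein(x, x_hat, C))
```

In a full factorization loop, x_hat would be produced by the current CP factors, and the gradient of this loss would drive their updates.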




Read also

Zhihui Zhu, Xiao Li, Kai Liu (2018)
Symmetric nonnegative matrix factorization (NMF), a special but important class of general NMF, has been demonstrated to be useful for data analysis, and in particular for various clustering tasks. Unfortunately, designing fast algorithms for symmetric NMF is not as easy as for the nonsymmetric counterpart, since the latter admits the splitting property that allows efficient alternating-type algorithms. To overcome this issue, we transfer symmetric NMF to a nonsymmetric one, which lets us adopt ideas from state-of-the-art algorithms for nonsymmetric NMF to design fast algorithms for the symmetric problem. We rigorously establish that solving the nonsymmetric reformulation returns a solution to the symmetric NMF, and then apply fast alternating-based algorithms to the reformulated problem. Furthermore, we show that these fast algorithms admit a strong convergence guarantee, in the sense that the generated sequence converges at least at a sublinear rate and converges globally to a critical point of the symmetric NMF. We conduct experiments on both synthetic data and image clustering to support our results.
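As a rough illustration of the reformulation idea (not the authors' algorithm), one can penalize the gap between the two factors of a nonsymmetric problem. The plain projected-gradient updates and the penalty weight lam below are assumptions made for the sketch:

```python
import numpy as np

def symnmf_via_nonsym(X, r, lam=1.0, lr=1e-3, n_iter=2000, seed=0):
    """Sketch: solve min_{U,V>=0} ||X - U V^T||_F^2 + lam*||U - V||_F^2
    so that U ~ V recovers a symmetric NMF of X. Projected gradient
    descent here; the paper uses faster alternating-type solvers."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U, V = rng.random((n, r)), rng.random((n, r))
    for _ in range(n_iter):
        R = U @ V.T - X                       # reconstruction residual
        gU = 2 * R @ V + 2 * lam * (U - V)    # d/dU of both terms
        gV = 2 * R.T @ U - 2 * lam * (U - V)  # d/dV of both terms
        U = np.maximum(U - lr * gU, 0.0)      # projection keeps factors >= 0
        V = np.maximum(V - lr * gV, 0.0)
    return U, V

A = np.random.default_rng(1).random((6, 3))
X = A @ A.T                                   # symmetric nonnegative input
U, V = symnmf_via_nonsym(X, r=3)
print(np.linalg.norm(X - U @ V.T), np.linalg.norm(U - V))
```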
Liqun Qi, Changqing Xu, Yi Xu (2013)
Nonnegative tensor factorization has applications in statistics, computer vision, exploratory multiway data analysis and blind source separation. A symmetric nonnegative tensor, which has a symmetric nonnegative factorization, is called a completely positive (CP) tensor. The H-eigenvalues of a CP tensor are always nonnegative. When the order is even, the Z-eigenvalues of a CP tensor are all nonnegative. When the order is odd, a Z-eigenvector associated with a positive (negative) Z-eigenvalue of a CP tensor is always nonnegative (nonpositive). The entries of a CP tensor obey some dominance properties. The CP tensor cone and the copositive tensor cone of the same order are dual to each other. We introduce strongly symmetric tensors and show that a symmetric tensor has a symmetric binary decomposition if and only if it is strongly symmetric. Then we show that a strongly symmetric, hierarchically dominated nonnegative tensor is a CP tensor, and present a hierarchical elimination algorithm for checking this. Numerical examples are also given.
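For reference, the completely positive (CP) property used throughout this abstract has the following standard statement (restated here for readability, not quoted from the paper):

```latex
% A symmetric tensor \mathcal{A} of order m and dimension n is
% completely positive if it admits a nonnegative rank-one decomposition:
\mathcal{A} \;=\; \sum_{k=1}^{r} u_k^{\otimes m},
\qquad u_k \in \mathbb{R}^{n}_{+},
\quad\text{i.e.}\quad
\mathcal{A}_{i_1 \cdots i_m} \;=\; \sum_{k=1}^{r} (u_k)_{i_1} \cdots (u_k)_{i_m}.
```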
Fully unsupervised topic models have found fantastic success in document clustering and classification. However, these models often suffer from the tendency to learn less-than-meaningful or even redundant topics when the data is biased towards a set of features. For this reason, we propose an approach based upon the nonnegative matrix factorization (NMF) model, deemed Guided NMF, that incorporates user-designed seed word supervision. Our experimental results demonstrate the promise of this model and illustrate that it is competitive with other methods of this ilk with only very little supervision information.
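The abstract does not specify how the seed words enter the objective; one plausible form — a hypothetical sketch, not Guided NMF's actual formulation — is a masked penalty pulling seeded entries of the word-topic factor toward an indicator matrix. The function name, penalty shape, and weight lam are all assumptions:

```python
import numpy as np

def guided_nmf_sketch(X, r, seeds, lam=0.5, lr=1e-3, n_iter=500, seed=0):
    """Hypothetical seed-guided NMF:
    min_{W,H>=0} ||X - W H||_F^2 + lam*||M * (W - S)||_F^2,
    where S[i, k] = 1 if word i is a seed for topic k and the mask M
    penalizes only seeded entries. X is (n_words, n_docs)."""
    rng = np.random.default_rng(seed)
    n_words, n_docs = X.shape
    S = np.zeros((n_words, r))
    for k, words in seeds.items():
        S[words, k] = 1.0                 # seed indicator per topic
    M = (S > 0).astype(float)             # supervise seeded entries only
    W, H = rng.random((n_words, r)), rng.random((r, n_docs))
    for _ in range(n_iter):
        R = W @ H - X
        gW = 2 * R @ H.T + 2 * lam * M * (W - S)
        gH = 2 * W.T @ R
        W = np.maximum(W - lr * gW, 0.0)  # projected gradient steps
        H = np.maximum(H - lr * gH, 0.0)
    return W, H

X = np.random.default_rng(1).random((20, 8))
W, H = guided_nmf_sketch(X, r=2, seeds={0: [0, 1], 1: [5, 6]})
print(W[[0, 1, 5, 6]].round(2))           # seeded words lean toward their topics
```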
We present a general-purpose data compression algorithm, Regularized L21 Semi-Nonnegative Matrix Factorization (L21 SNF). L21 SNF provides robust, parts-based compression applicable to mixed-sign data for which high-fidelity, individual-data-point reconstruction is paramount. We derive a rigorous proof of convergence of our algorithm. Through experiments, we show the use-case advantages presented by L21 SNF, including application to the compression of highly overdetermined systems encountered broadly across many general machine learning processes.
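The abstract leaves the L21 objective undefined; for orientation, the L2,1 norm such robust factorizations typically minimize sums the Euclidean norms of the residual's columns, so a single corrupted data point contributes linearly rather than quadratically. A short illustration:

```python
import numpy as np

def l21_norm(R):
    """L2,1 norm: sum of the Euclidean norms of R's columns. Replacing
    the squared Frobenius norm with this bounds each data point's pull
    on the factors, which is the usual source of L21 robustness."""
    return np.linalg.norm(R, axis=0).sum()

R = np.array([[3.0, 0.0],
              [4.0, 1.0]])
print(l21_norm(R))  # ||(3,4)|| + ||(0,1)|| = 5.0 + 1.0 = 6.0
```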
Jim Jing-Yan Wang, Xin Gao (2014)
In this chapter we discuss how to learn an optimal manifold presentation to regularize nonnegative matrix factorization (NMF) for data representation problems. NMF, which tries to represent a nonnegative data matrix as a product of two low-rank nonnegative matrices, has been a popular method for data representation due to its ability to explore the latent part-based structure of data. Recent studies show that many data distributions have manifold structures, and we should respect the manifold structure when the data are represented. Recently, manifold regularized NMF used a nearest neighbor graph to regulate the learning of the factorization parameter matrices and has shown its advantage over traditional NMF methods for data representation problems. However, how to construct an optimal graph to present the manifold properly remains a difficult problem due to graph model selection, noisy features, and nonlinearly distributed data. In this chapter, we introduce three effective methods to solve these problems of graph construction for manifold regularized NMF: multiple graph learning solves the problem of graph model selection, adaptive graph learning via feature selection solves the problem of constructing a graph from noisy features, and multi-kernel learning-based graph construction solves the problem of learning a graph from nonlinearly distributed data.
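All three proposed constructions feed the same standard regularizer, the graph-Laplacian smoothness penalty Tr(V^T L V). The sketch below builds L from a fixed binary k-nearest-neighbor graph — exactly the kind of fixed choice (one k, one metric, binary weights) that the chapter's multiple-graph, adaptive, and multi-kernel methods are designed to avoid:

```python
import numpy as np

def knn_laplacian(X, k=3):
    """Unnormalized Laplacian L = D - A of a symmetrized k-nearest-
    neighbor graph over the columns (data points) of X."""
    n = X.shape[1]
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # pairwise sq. dists
    A = np.zeros((n, n))
    for i in range(n):
        A[i, np.argsort(D2[i])[1:k + 1]] = 1.0  # skip index 0: the point itself
    A = np.maximum(A, A.T)                       # symmetrize the adjacency
    return np.diag(A.sum(axis=1)) - A

def manifold_penalty(V, L):
    """Smoothness term Tr(V^T L V): nearby points get similar rows of V."""
    return float(np.trace(V.T @ L @ V))

X = np.random.default_rng(2).random((4, 12))     # 12 points in R^4
L = knn_laplacian(X)
V = np.random.default_rng(3).random((12, 2))     # a candidate representation
print(manifold_penalty(V, L))
```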
