
Randomized Online CP Decomposition

Published by: Hu Wang
Publication date: 2020
Paper language: English





CANDECOMP/PARAFAC (CP) decomposition has been widely used to deal with multi-way data. For real-time or large-scale tensors, this paper proposes a novel CP decomposition algorithm, called randomized online CP decomposition (ROCP), which combines the ideas of randomized-sampling CP decomposition and online CP decomposition. The proposed algorithm avoids forming the full Khatri-Rao product, which greatly increases speed and reduces memory usage. Experimental results on synthetic and real-world data show that ROCP can handle CP decomposition of large-scale tensors with an arbitrary number of dimensions, and that it reduces computing time and memory usage dramatically, especially for large-scale tensors.
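The main computational saving described above comes from never materializing the full Khatri-Rao product in the alternating least-squares updates. The sketch below is only a rough illustration of that idea for a 3-way tensor in NumPy, assuming uniform sampling of Khatri-Rao rows; the function name and sampling scheme are illustrative, not the authors' implementation.

```python
import numpy as np

def sampled_mode1_update(X, B, C, n_samples, rng):
    """Randomized mode-1 least-squares update for a 3-way CP model.

    Instead of forming the full Khatri-Rao product C (kr) B (size JK x R),
    sample n_samples (j, k) index pairs and solve a reduced least-squares
    problem; this is the trick that keeps time and memory low.
    """
    I, J, K = X.shape
    j_idx = rng.integers(0, J, size=n_samples)
    k_idx = rng.integers(0, K, size=n_samples)
    Z = B[j_idx, :] * C[k_idx, :]          # sampled Khatri-Rao rows, n_samples x R
    Xs = X[:, j_idx, k_idx]                # matching mode-1 fibers, I x n_samples
    # Solve min_A || Xs - A Z^T ||_F without touching the other fibers.
    A, *_ = np.linalg.lstsq(Z, Xs.T, rcond=None)
    return A.T                             # updated factor, I x R
```

In an online setting, updates of this form would be applied as new slices of the tensor arrive, warm-started from the previous factors rather than recomputed from scratch.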




Read also

We consider the problem of factorizing a structured 3-way tensor into its constituent Canonical Polyadic (CP) factors. This decomposition, which can be viewed as a generalization of singular value decomposition (SVD) for tensors, reveals how the tensor dimensions (features) interact with each other. However, since the factors are a priori unknown, the corresponding optimization problems are inherently non-convex. The existing guaranteed algorithms which handle this non-convexity incur an irreducible error (bias), and only apply to cases where all factors have the same structure. To this end, we develop a provable algorithm for online structured tensor factorization, wherein one of the factors obeys some incoherence conditions, and the others are sparse. Specifically we show that, under some relatively mild conditions on initialization, rank, and sparsity, our algorithm recovers the factors exactly (up to scaling and permutation) at a linear rate. Complementary to our theoretical results, our synthetic and real-world data evaluations showcase superior performance compared to related techniques. Moreover, its scalability and ability to learn on-the-fly makes it suitable for real-world tasks.
The CANDECOMP/PARAFAC (CP) decomposition is a leading method for the analysis of multiway data. The standard alternating least squares algorithm for the CP decomposition (CP-ALS) involves a series of highly overdetermined linear least squares problems. We extend randomized least squares methods to tensors and show the workload of CP-ALS can be drastically reduced without a sacrifice in quality. We introduce techniques for efficiently preprocessing, sampling, and computing randomized least squares on a dense tensor of arbitrary order, as well as an efficient sampling-based technique for checking the stopping condition. We also show more generally that the Khatri-Rao product (used within the CP-ALS iteration) produces conditions favorable for direct sampling. In numerical results, we see improvements in speed, reductions in memory requirements, and robustness with respect to initialization.
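One ingredient mentioned above is a sampling-based check of the stopping condition. A minimal sketch of that general idea, assuming simple uniform entry sampling (the estimator actually used in the paper may differ):

```python
import numpy as np

def sampled_relative_error(X, A, B, C, n_samples, norm_X, rng):
    """Estimate ||X - [[A, B, C]]||_F / ||X||_F from a few random entries.

    norm_X = ||X||_F is computed once up front; the squared-error estimate is
    unbiased, so the fit can be monitored without reconstructing the tensor.
    """
    I, J, K = X.shape
    i = rng.integers(0, I, size=n_samples)
    j = rng.integers(0, J, size=n_samples)
    k = rng.integers(0, K, size=n_samples)
    x_true = X[i, j, k]
    x_model = np.sum(A[i] * B[j] * C[k], axis=1)   # sampled entries of the CP model
    scale = (I * J * K) / n_samples                # scale the sample up to the full tensor
    return np.sqrt(scale * np.sum((x_true - x_model) ** 2)) / norm_X
```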
There is currently an unprecedented demand for large-scale temporal data analysis due to the explosive growth of data. Dynamic topic modeling has been widely used in social and data sciences with the goal of learning latent topics that emerge, evolve, and fade over time. Previous work on dynamic topic modeling primarily employs the method of nonnegative matrix factorization (NMF), where slices of the data tensor are each factorized into the product of lower-dimensional nonnegative matrices. With this approach, however, information contained in the temporal dimension of the data is often neglected or underutilized. To overcome this issue, we propose instead adopting the method of nonnegative CANDECOMP/PARAFAC (CP) tensor decomposition (NNCPD), where the data tensor is directly decomposed into a minimal sum of outer products of nonnegative vectors, thereby preserving the temporal information. The viability of NNCPD is demonstrated through application to both synthetic and real data, where significantly improved results are obtained compared to those of typical NMF-based methods. The advantages of NNCPD over such approaches are studied and discussed. To the best of our knowledge, this is the first time that NNCPD has been utilized for the purpose of dynamic topic modeling, and our findings will be transformative for both applications and further developments.
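For readers who want to try the NNCPD approach, a minimal sketch using the tensorly library's non_negative_parafac is shown below. The tensor shape and mode names (terms x documents x time slices) are illustrative assumptions, not the paper's data, and older tensorly versions return only the factor list instead of a (weights, factors) pair.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
# Hypothetical term x document x time-slice count tensor.
X = tl.tensor(rng.random((500, 200, 12)))

# Nonnegative CP: X ~ sum_r term_r o doc_r o time_r, so the temporal mode is
# modeled directly instead of being factorized slice by slice as in NMF.
weights, (terms, docs, times) = non_negative_parafac(X, rank=10, n_iter_max=200)
```

Each column of the third factor then traces how the corresponding topic rises and fades across the time slices.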
The successes of deep learning, variational inference, and many other fields have been aided by specialized implementations of reverse-mode automatic differentiation (AD) to compute gradients of mega-dimensional objectives. The AD techniques underlying these tools were designed to compute exact gradients to numerical precision, but modern machine learning models are almost always trained with stochastic gradient descent. Why spend computation and memory on exact (minibatch) gradients only to use them for stochastic optimization? We develop a general framework and approach for randomized automatic differentiation (RAD), which can allow unbiased gradient estimates to be computed with reduced memory in return for variance. We examine limitations of the general approach, and argue that we must leverage problem specific structure to realize benefits. We develop RAD techniques for a variety of simple neural network architectures, and show that for a fixed memory budget, RAD converges in fewer iterations than using a small batch size for feedforward networks, and in a similar number for recurrent networks. We also show that RAD can be applied to scientific computing, and use it to develop a low-memory stochastic gradient method for optimizing the control parameters of a linear reaction-diffusion PDE representing a fission reactor.
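The memory-for-variance trade at the heart of RAD can be illustrated on a single linear layer: store a randomly sparsified copy of the input for the backward pass and rescale it so the gradient estimate stays unbiased. The sketch below is only a toy version of this general idea, not one of the paper's concrete RAD schemes; the function name is illustrative.

```python
import numpy as np

def rad_linear_layer_grad(delta, x, keep_prob, rng):
    """Stochastic gradient of a loss w.r.t. W for the layer y = W @ x.

    The exact gradient is the outer product delta x^T, which requires storing
    all of x. Keeping each entry of x with probability keep_prob and rescaling
    by 1/keep_prob gives a sparse surrogate x_hat with E[x_hat] = x, so
    E[delta x_hat^T] equals the exact gradient while less is stored in memory.
    """
    mask = rng.random(x.shape) < keep_prob
    x_hat = np.where(mask, x / keep_prob, 0.0)   # unbiased sparse surrogate of x
    return np.outer(delta, x_hat)                # stochastic estimate of dL/dW
```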
Hanyu Li, Yajie Yu, 2021
This work considers the problem of computing the CANDECOMP/PARAFAC (CP) decomposition of large tensors. One popular way is to translate the problem into a sequence of overdetermined least squares subproblems with Khatri-Rao product (KRP) structure. In this work, for tensors with different levels of importance in each fiber, combining stochastic optimization with randomized sampling, we present a mini-batch stochastic gradient descent algorithm with importance sampling for those special least squares subproblems. Four different sampling strategies are provided. They can avoid forming the full KRP or corresponding probabilities and sample the desired fibers from the original tensor directly. Moreover, a more practical algorithm with adaptive step size is also given. For the proposed algorithms, we present their convergence properties and numerical performance. The results on synthetic data show that our algorithms outperform the existing algorithms in terms of accuracy or the number of iterations.
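A rough sketch of one such step for a 3-way tensor is given below, using squared fiber norms as the importance measure; this is one plausible choice for illustration, whereas the paper develops four distinct sampling strategies, and the function name and details here are assumptions.

```python
import numpy as np

def importance_sampled_sgd_step(X, A, B, C, step, batch, rng):
    """Mini-batch SGD step with importance sampling on the mode-1 subproblem
    min_A || X_(1) - A (C kr B)^T ||_F^2, without forming the full KRP."""
    I, J, K = X.shape
    # Importance of each fiber X[:, j, k]; in practice computed once up front.
    fiber_sq = np.einsum('ijk,ijk->jk', X, X)
    probs = (fiber_sq / fiber_sq.sum()).ravel()      # C order: flat index = j*K + k
    idx = rng.choice(J * K, size=batch, p=probs)
    j_idx, k_idx = idx // K, idx % K
    Z = B[j_idx] * C[k_idx]                          # sampled Khatri-Rao rows, batch x R
    resid = A @ Z.T - X[:, j_idx, k_idx]             # I x batch residuals
    w = 1.0 / (batch * probs[idx])                   # inverse-probability weights keep
    grad = (resid * w) @ Z                           # the gradient estimate unbiased
    return A - step * grad
```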
