Low-rank tensor completion has been widely used in computer vision and machine learning. This paper develops a multi-modal core tensor factorization (MCTF) method together with a tensor low-rankness measure and a nonconvex relaxation of that measure (NonMCTF). The proposed models encode the low-rank insights for general tensors provided by Tucker and T-SVD, and are thus expected to simultaneously model spectral low-rankness in multiple orientations and to accurately restore data with an intrinsic low-rank structure from few observed entries. Furthermore, we study the MCTF and NonMCTF regularized minimization problems and design an effective block successive upper-bound minimization (BSUM) algorithm to solve them. This efficient solver allows MCTF to be extended to various tasks, such as tensor completion. A series of experiments on hyperspectral image (HSI), video, and MRI completion confirms the superior performance of the proposed method.
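For intuition, the following is a minimal sketch of the general problem this abstract addresses: completing a partially observed tensor by encouraging low-rankness along every mode unfolding via singular value thresholding. It is a generic heuristic in the spirit of sum-of-nuclear-norm methods, not the authors' MCTF/NonMCTF model or their BSUM solver; the threshold tau and iteration count are illustrative choices.

    import numpy as np

    def unfold(T, mode):
        # mode-n unfolding: move the chosen axis to the front, flatten the rest
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def fold(M, mode, shape):
        # inverse of unfold
        rest = [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

    def svt(M, tau):
        # singular value thresholding: shrink singular values toward zero
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def complete(X, mask, tau=1.0, iters=200):
        # X: data tensor (values at unobserved entries are ignored)
        # mask: boolean tensor, True where an entry is observed
        T = np.where(mask, X, 0.0)
        for _ in range(iters):
            # average the low-rank estimates from every mode unfolding,
            # i.e. low-rankness in multiple orientations
            est = sum(fold(svt(unfold(T, m), tau), m, T.shape)
                      for m in range(T.ndim)) / T.ndim
            T = np.where(mask, X, est)  # keep observed entries fixed
        return T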
In this paper, we introduce a new neural network (NN) structure, multi-mode reservoir computing (Multi-Mode RC). It inherits the dynamic mechanism of RC and processes the forward path and loss optimization of the NN using tensors as the underlying data …
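This abstract builds on classical reservoir computing. As background, here is a minimal sketch of a standard echo state network, the single-mode baseline that Multi-Mode RC generalizes; the reservoir size, spectral radius, and ridge parameter are illustrative, and the tensor-structured states that define Multi-Mode RC are not shown.

    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in = 100, 1
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))  # fixed input weights
    W = rng.normal(size=(n_res, n_res))                # fixed recurrent weights
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1

    def run_reservoir(u_seq):
        # drive the reservoir with the input sequence and collect its states
        x, states = np.zeros(n_res), []
        for u in u_seq:
            x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
            states.append(x.copy())
        return np.array(states)

    # one-step-ahead prediction: only the linear readout is trained (ridge regression)
    u = np.sin(np.linspace(0.0, 20.0, 500))
    S, y = run_reservoir(u[:-1]), u[1:]
    W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
    pred = S @ W_out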
Higher-order low-rank tensors arise in many data processing applications and have attracted great interest. Inspired by low-rank approximation theory, researchers have proposed a series of effective tensor completion methods. However, most of these methods …
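The low-rank approximation theory mentioned here is easiest to see in the matrix case: by the Eckart-Young theorem, the truncated SVD gives the best rank-r approximation in the Frobenius norm. A small numerical check, with arbitrary sizes:

    import numpy as np

    def low_rank_approx(M, r):
        # best rank-r approximation of M (Eckart-Young, via truncated SVD)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 6)) @ rng.normal(size=(6, 40))  # exactly rank 6
    print(np.linalg.norm(A - low_rank_approx(A, 6)))  # ~0 up to round-off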
This work studies the problem of completing high-dimensional data (referred to as tensors) from partially observed samples. We consider a tensor that is a superposition of multiple low-rank components. In particular, each component can be represented …
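A minimal sketch of the data model described here, using a matrix for brevity: the target is a sum of several low-rank components observed on a random mask, and each component's factors are fitted by plain masked gradient descent. The ranks, step size, and iteration count are illustrative, and this is a generic fitting loop rather than the authors' algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, ranks = 30, 30, (2, 3)
    # ground truth: a superposition of low-rank components
    X = sum(rng.normal(size=(m, r)) @ rng.normal(size=(r, n)) for r in ranks)
    mask = rng.random((m, n)) < 0.4  # observe 40% of the entries

    # one (U, V) factor pair per component, fitted on observed entries only
    factors = [(0.1 * rng.normal(size=(m, r)), 0.1 * rng.normal(size=(r, n)))
               for r in ranks]
    lr = 0.01
    for _ in range(3000):
        resid = mask * (sum(U @ V for U, V in factors) - X)
        factors = [(U - lr * resid @ V.T, V - lr * U.T @ resid)
                   for U, V in factors]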
Tensor decomposition is a popular technique for tensor completion. However, most of the existing methods are based on linear or shallow models; when the data tensor becomes large and the observed data are very scarce, they are prone to overfitting and …
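As context for the "linear or shallow model" this abstract criticizes, below is a minimal sketch of CP decomposition fitted by alternating least squares, a classical multilinear model; the rank and iteration count are arbitrary, and no completion mask or deep component is included.

    import numpy as np

    def khatri_rao(B, C):
        # column-wise Kronecker product of B (I x r) and C (J x r) -> (I*J x r)
        return np.einsum('ir,jr->ijr', B, C).reshape(-1, B.shape[1])

    def cp_als(T, r, iters=100):
        # rank-r CP decomposition of a 3-way tensor by alternating least squares
        rng = np.random.default_rng(0)
        A = [rng.normal(size=(d, r)) for d in T.shape]
        for _ in range(iters):
            for n in range(3):
                B, C = (A[i] for i in range(3) if i != n)
                Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
                gram = (B.T @ B) * (C.T @ C)  # Hadamard product of Grams
                A[n] = Tn @ khatri_rao(B, C) @ np.linalg.pinv(gram)
        return A

    # quick check on an exactly rank-3 tensor
    rng = np.random.default_rng(1)
    G = [rng.normal(size=(8, 3)) for _ in range(3)]
    T = np.einsum('ir,jr,kr->ijk', *G)
    A = cp_als(T, 3)
    print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', *A)))  # shrinks as ALS converges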
In recent years, there has been an increasing number of applications of tensor completion based on the tensor train (TT) format because of its efficiency and effectiveness in dealing with higher-order tensor data. However, existing tensor completion …
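As background on the TT format, here is a minimal sketch of the classical TT-SVD procedure, which splits a dense tensor into a chain of three-way cores via sequential truncated SVDs; the rank cap is an illustrative parameter, and no completion logic is included.

    import numpy as np

    def tt_svd(T, max_rank):
        # decompose T into TT cores G_k of shape (r_{k-1}, dims[k], r_k)
        dims, cores, r_prev = T.shape, [], 1
        M = T.reshape(dims[0], -1)
        for k in range(len(dims) - 1):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            r = min(max_rank, len(s))
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
            r_prev = r
        cores.append(M.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        # contract the core chain back into a dense tensor
        T = cores[0]
        for G in cores[1:]:
            T = np.tensordot(T, G, axes=([-1], [0]))
        return T.reshape([G.shape[1] for G in cores])

    rng = np.random.default_rng(0)
    T = rng.normal(size=(4, 5, 6))
    cores = tt_svd(T, max_rank=30)
    print(np.allclose(tt_reconstruct(cores), T))  # exact when no rank is truncated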