
An Efficient Tensor Completion Method via New Latent Nuclear Norm

Published by: Jinshi Yu
Publication date: 2019
Research field: Information Engineering
Paper language: English





In tensor completion, the latent nuclear norm is commonly used to induce low-rank structure, but it largely fails to capture global information because it relies on an unbalanced unfolding scheme. To overcome this drawback, a new latent nuclear norm equipped with a more balanced unfolding scheme is defined as the low-rank regularizer. Moreover, the new latent nuclear norm is combined with the Frank-Wolfe (FW) algorithm to obtain an efficient completion method that exploits the sparsity structure of the observed tensor. Specifically, both the FW linear subproblem and the line search only need to access the observed entries, so during the iterations we can maintain just the sparse tensors and a set of small basis matrices. Most operations are performed on sparse tensors, and the closed-form solution of the FW linear subproblem is obtained from a rank-one SVD. We theoretically analyze the space and time complexity of the proposed method and show that it is much more efficient than other norm-based completion methods for higher-order tensors. Extensive experimental results on visual-data inpainting demonstrate that the proposed method achieves state-of-the-art performance at lower time and space costs, which is valuable for memory-limited devices in practical applications.
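
The core computational pattern in this abstract, a Frank-Wolfe iteration over a nuclear-norm ball in which the linear subproblem reduces to a rank-one SVD of a sparse gradient and the line search touches only observed entries, can be illustrated in the matrix case. The sketch below is a minimal illustration of that pattern, not the authors' implementation: the function name, the radius `tau`, and the use of `scipy.sparse.linalg.svds` are assumptions, and the paper itself applies the idea to tensor unfoldings under its new latent nuclear norm.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import svds

def fw_nuclear_completion(rows, cols, vals, shape, tau=1e3, n_iter=50):
    """Sketch of Frank-Wolfe for min_{||X||_* <= tau} 0.5*||P_Omega(X) - vals||^2.

    The iterate is stored as a growing set of rank-one factors plus its
    values on the observed entries only, so both the linear subproblem
    (a rank-one SVD of the sparse gradient) and the line search stay sparse.
    """
    m, n = shape
    U_cols, V_cols = [], []                    # small basis of rank-one atoms
    x_obs = np.zeros_like(vals, dtype=float)   # iterate restricted to observed entries
    for _ in range(n_iter):
        # Gradient of the loss is supported on the observed entries only.
        grad = coo_matrix((x_obs - vals, (rows, cols)), shape=shape)
        # FW linear subproblem: rank-one SVD of the sparse gradient.
        u, s, vt = svds(grad.tocsc(), k=1)
        a, b = -tau * u[:, 0], vt[0, :]        # atom S = a b^T, ||S||_* = tau
        s_obs = a[rows] * b[cols]
        # Exact line search on gamma in [0, 1] for the quadratic loss.
        d = s_obs - x_obs
        denom = d @ d
        if denom == 0:
            gamma = 1.0
        else:
            gamma = float(np.clip(-((x_obs - vals) @ d) / denom, 0.0, 1.0))
        # Update the observed values and the small factor matrices.
        x_obs += gamma * d
        U_cols = [np.sqrt(1 - gamma) * c for c in U_cols] + [np.sqrt(gamma) * a]
        V_cols = [np.sqrt(1 - gamma) * c for c in V_cols] + [np.sqrt(gamma) * b]
    U = np.stack(U_cols, axis=1) if U_cols else np.zeros((m, 0))
    V = np.stack(V_cols, axis=1) if V_cols else np.zeros((n, 0))
    return U, V                                # completed matrix is U @ V.T
```

Keeping the iterate as a set of rank-one factors plus its values on the observed support is what keeps both memory and per-iteration cost proportional to the number of observed entries, which is the efficiency argument the abstract makes.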


Read also

Jinshi Yu, Chao Li, Qibin Zhao (2019)
Tensor ring (TR) decomposition has been successfully used to obtain state-of-the-art performance in the visual data completion problem. However, the existing TR-based completion methods are severely non-convex and computationally demanding. In addition, determining the optimal TR rank is difficult in practice. To overcome these drawbacks, we first introduce a class of new tensor nuclear norms based on tensor circular unfolding. We then theoretically establish a connection between the ranks of the circularly unfolded matrices and the TR ranks. We also develop an efficient tensor completion algorithm by minimizing the proposed tensor nuclear norm. Extensive experimental results demonstrate that our proposed tensor completion method outperforms conventional tensor completion methods on image/video inpainting problems with striped missing values.
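
This construction hinges on tensor circular unfolding, which wraps a window of consecutive modes (cyclically) into the row index of a matricization. Below is a plausible sketch of such an unfolding; the exact index convention and the function name are assumptions, since the abstract does not fix them.

```python
import numpy as np

def circular_unfold(tensor, k, d_rows):
    """Tensor circular unfolding: starting at mode k, group d_rows consecutive
    modes (wrapping around cyclically) into the row index and the remaining
    modes into the column index. A sketch, not the authors' reference code.
    """
    n = tensor.ndim
    order = [(k + i) % n for i in range(n)]   # cyclic permutation starting at mode k
    permuted = np.transpose(tensor, order)
    row_dim = int(np.prod(permuted.shape[:d_rows]))
    return permuted.reshape(row_dim, -1)

# Example: a 4th-order tensor unfolded with modes {3, 0} as rows.
X = np.random.rand(2, 3, 4, 5)
print(circular_unfold(X, k=3, d_rows=2).shape)   # (10, 12)
```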
Yongming Zheng, An-Bao Xu (2020)
In this paper, we consider the tensor completion problem, which is of particular concern to many researchers in machine learning. Our fast and precise method is built on extending the $L_{2,1}$-norm minimization and Qatar Riyal decomposition (LNM-QR) method for matrix completion to tensor completion, and it differs from the popular tensor completion methods that use the tensor singular value decomposition (t-SVD). To shorten the computing time, the t-SVD is replaced with a method that computes an approximate t-SVD based on Qatar Riyal decomposition (CTSVD-QR), which iteratively computes the largest $r$ $(r>0)$ singular values (tubes) and their associated singular vectors (of tubes). In addition, we use the tensor $L_{2,1}$-norm instead of the tensor nuclear norm in our model because it is easier to optimize. To improve accuracy, ADMM plays a crucial part in our method. Numerical experimental results show that our method is faster than state-of-the-art algorithms and has excellent accuracy.
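
For context, the baseline that CTSVD-QR approximates is the standard t-SVD, which reduces to independent matrix SVDs of the frontal slices in the FFT domain along the third mode. The sketch below shows only this exact baseline under a hypothetical function name; the paper's contribution, replacing the full SVDs with cheaper QR-based iterations for the top-$r$ singular tubes, is not reproduced here.

```python
import numpy as np

def tsvd(A):
    """Baseline t-SVD of a 3-way tensor A (n1 x n2 x n3): FFT along the third
    mode, a matrix SVD per frequency slice, then inverse FFT. Shown as context
    only, not as the paper's accelerated CTSVD-QR routine.
    """
    n1, n2, n3 = A.shape
    k = min(n1, n2)
    Af = np.fft.fft(A, axis=2)
    Uf = np.zeros((n1, k, n3), dtype=complex)
    Sf = np.zeros((k, k, n3), dtype=complex)
    Vf = np.zeros((n2, k, n3), dtype=complex)
    # SVD the first half of the frequency slices and mirror the rest so the
    # factors stay conjugate-symmetric (hence real after the inverse FFT).
    for i in range(n3 // 2 + 1):
        u, s, vh = np.linalg.svd(Af[:, :, i], full_matrices=False)
        Uf[:, :, i], Sf[:, :, i], Vf[:, :, i] = u, np.diag(s), vh.conj().T
        j = (n3 - i) % n3
        if j != i:
            Uf[:, :, j], Sf[:, :, j], Vf[:, :, j] = u.conj(), np.diag(s), vh.T
    back = lambda T: np.real(np.fft.ifft(T, axis=2))
    return back(Uf), back(Sf), back(Vf)   # A equals U * S * V^T under the t-product
```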
Yang Zhang, Yao Wang, Zhi Han (2021)
In recent years, there has been an increasing number of applications of tensor completion based on the tensor train (TT) format because of its efficiency and effectiveness in dealing with higher-order tensor data. However, existing tensor completion methods using TT decomposition have two obvious drawbacks. One is that they set mode weights only according to the degree of mode balance, even though some elements are recovered better in an unbalanced mode. The other is that serious blocking artifacts appear when the missing-element rate is relatively large. To remedy these two issues, in this work we propose a novel tensor completion approach via an element-wise weighting technique. Accordingly, we propose a new formulation for tensor completion and an efficient optimization algorithm, called tensor completion by parallel weighted matrix factorization via tensor train (TWMac-TT). In addition, we specifically consider the recovery quality of edge elements from adjacent blocks. Different from traditional reshaping and ket augmentation, we utilize a new tensor augmentation technique called overlapping ket augmentation, which can further avoid blocking artifacts. We then conduct extensive performance evaluations on synthetic data and several real image data sets. Our experimental results demonstrate that the proposed algorithm TWMac-TT outperforms several other competing tensor completion methods.
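
The blocking-artifact issue is tied to how an image is cast into a higher-order tensor before TT completion. The sketch below shows plain, non-overlapping block grouping in the spirit of ket augmentation (KA); it is an assumed illustration only, and the paper's overlapping ket augmentation additionally shares edge pixels between adjacent blocks, which is what suppresses the artifacts.

```python
import numpy as np

def block_augment(image, block=2, levels=3):
    """Cast a 2-D image into a higher-order tensor by repeatedly grouping
    block x block neighbourhoods into new modes, in the spirit of KA.
    Non-overlapping sketch only; not the paper's overlapping variant.
    """
    t = image
    for _ in range(levels):
        h, w = t.shape[0], t.shape[1]
        rest = t.shape[2:]
        # Split each spatial axis into (coarse, block) and push the fine
        # block x block index to the back as a new mode of size block**2.
        t = t.reshape(h // block, block, w // block, block, *rest)
        t = np.moveaxis(t, [1, 3], [-2, -1])
        t = t.reshape(h // block, w // block, *rest, block * block)
    return t

img = np.random.rand(16, 16)
print(block_augment(img, block=2, levels=3).shape)   # (2, 2, 4, 4, 4)
```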
Tensor nuclear norm (TNN) induced by the tensor singular value decomposition plays an important role in hyperspectral image (HSI) restoration tasks. In this letter, we first consider three inconspicuous but crucial phenomena in TNN. In the Fourier transform domain of HSIs, different frequency components contain different information, and different singular values of each frequency component also represent different information. These two physical phenomena lie not only in the spectral dimension but also in the spatial dimensions. Then, to improve the capability and flexibility of TNN for HSI restoration, we propose a multi-mode and double-weighted TNN based on the above three crucial phenomena. It can adaptively shrink the frequency components and singular values according to their physical meanings in all modes of HSIs. In the framework of the alternating direction method of multipliers, we design an effective alternating iterative strategy to optimize our proposed model. Restoration experiments on both synthetic and real HSI datasets demonstrate its superiority over related methods.
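
The kind of adaptive shrinkage a double-weighted TNN induces can be illustrated with a frequency- and singular-value-weighted soft-thresholding step along the spectral mode. The following sketch is an assumption-laden illustration of that single step, not the paper's multi-mode model or its ADMM solver; the weight vectors, their meaning, and the restriction to one mode are all simplifications.

```python
import numpy as np

def weighted_fourier_svt(X, freq_w, sv_w, tau):
    """Frequency- and singular-value-weighted shrinkage for a 3-way tensor
    X (height x width x bands), illustrating adaptive thresholding in the
    FFT domain along the spectral mode.

    freq_w: per-frequency weights (length = bands).
    sv_w:   per-singular-value weights (length = min(height, width)).
    Larger weights shrink more; both weightings are simplifying assumptions.
    """
    Xf = np.fft.fft(X, axis=2)
    out = np.zeros_like(Xf)
    for i in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, i], full_matrices=False)
        # Soft-threshold each singular value with a frequency-dependent weight.
        s_shrunk = np.maximum(s - tau * freq_w[i] * sv_w, 0.0)
        out[:, :, i] = (U * s_shrunk) @ Vh
    return np.real(np.fft.ifft(out, axis=2))
```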
Tensor data often suffer from missing values due to their complex high-dimensional structure and the way they are acquired. To complete the missing information, many Low-Rank Tensor Completion (LRTC) methods have been proposed, most of which rely on the low-rank property of tensor data. In this way, the low-rank component of the original data can be recovered roughly. However, the shortcoming is that detail information cannot be fully restored, whether by Sum of the Nuclear Norms (SNN) or Tensor Nuclear Norm (TNN) based methods. In contrast, in the field of signal processing, Convolutional Sparse Coding (CSC) provides a good representation of the high-frequency component of an image, which is generally associated with the detail component of the data. Nevertheless, CSC cannot handle the low-frequency component well. To this end, we propose two novel methods, LRTC-CSC-I and LRTC-CSC-II, which adopt CSC as a supplementary regularization for LRTC to capture the high-frequency components. Therefore, the LRTC-CSC methods can not only solve the missing value problem but also recover the details. Moreover, the CSC regularizer can be trained with small samples due to its sparsity characteristic. Extensive experiments show the effectiveness of the LRTC-CSC methods, and quantitative evaluation indicates that the performance of our models is superior to state-of-the-art methods.
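
The CSC regularizer models the detail (high-frequency) layer as a sum of small filters convolved with sparse coefficient maps. The sketch below shows only this synthesis step under assumed inputs; dictionary learning and the coupled LRTC-CSC solver are omitted, and the function and variable names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def csc_synthesis(filters, coef_maps):
    """Convolutional sparse coding synthesis: model the detail layer as
    detail = sum_k d_k * z_k, where d_k are small filters and z_k are sparse
    coefficient maps the same size as the image.
    """
    detail = np.zeros(coef_maps[0].shape)
    for d_k, z_k in zip(filters, coef_maps):
        detail += fftconvolve(z_k, d_k, mode="same")
    return detail

# Hypothetical usage: completed = low_rank_part + csc_synthesis(filters, maps)
```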