
A Generalized Kernel Risk Sensitive Loss for Robust Two-Dimensional Singular Value Decomposition

Added by Miaohua Zhang
Publication date: 2020
Language: English





Two-dimensional singular value decomposition (2DSVD) has been widely used for image processing tasks such as image reconstruction, classification, and clustering. However, the traditional 2DSVD algorithm is based on the mean square error (MSE) loss, which is sensitive to outliers. To overcome this problem, we propose a robust 2DSVD framework based on a generalized kernel risk sensitive loss (GKRSL-2DSVD), which is more robust to noise and outliers. Since the proposed objective function is non-convex, a majorization-minimization algorithm is developed to solve it efficiently with guaranteed convergence. The proposed framework has the inherent properties of processing non-centered data, rotational invariance, and easy extension to higher-order spaces. Experimental results on public databases demonstrate that the proposed method significantly outperforms all benchmarks across different applications.
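The exact objective and update rules are given in the paper; purely as an illustration of the iteratively reweighted structure that a majorization-minimization scheme for a kernel-based loss typically takes, here is a minimal NumPy sketch. The per-sample weight formula and the parameter names (`sigma` for the kernel bandwidth, `lam` for the risk-sensitive parameter) are assumptions drawn from the kernel risk-sensitive loss literature, not the authors' exact algorithm.

```python
import numpy as np

def robust_2dsvd(images, k1, k2, sigma=1.0, lam=1.0, n_iter=20):
    """Iteratively reweighted 2DSVD sketch (assumed MM-style update).

    images : array of shape (n, h, w); k1, k2 : numbers of left/right
    projection directions. Samples with large reconstruction error are
    down-weighted through a Gaussian kernel, which is what makes the
    procedure robust to outliers.
    """
    X = np.asarray(images, dtype=float)
    n, h, w = X.shape
    weights = np.ones(n)
    for _ in range(n_iter):
        # weighted mean: the framework handles non-centered data
        mean = np.einsum('i,ihw->hw', weights, X) / weights.sum()
        Xc = X - mean
        # weighted row- and column-covariance matrices
        C_row = np.einsum('i,ihw,ikw->hk', weights, Xc, Xc)
        C_col = np.einsum('i,ihw,ihk->wk', weights, Xc, Xc)
        U = np.linalg.eigh(C_row)[1][:, -k1:]   # top-k1 eigenvectors
        V = np.linalg.eigh(C_col)[1][:, -k2:]   # top-k2 eigenvectors
        # per-sample reconstruction error drives the next weights
        M = np.einsum('hk,ihw,wl->ikl', U, Xc, V)
        recon = np.einsum('hk,ikl,wl->ihw', U, M, V)
        err_sq = ((Xc - recon) ** 2).sum(axis=(1, 2))
        kappa = np.exp(-err_sq / (2.0 * sigma ** 2))   # Gaussian kernel
        weights = np.exp(lam * (1.0 - kappa)) * kappa  # assumed MM weight
    return U, V, mean
```

Note that errors enter only through Frobenius norms and covariance matrices, which is consistent with the rotational invariance claimed in the abstract.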

Related research

Quaternion singular value decomposition (QSVD) is a robust digital watermarking technique that can extract high-quality watermarks from watermarked images with low distortion. In this paper, the QSVD technique is further investigated and an efficient robust watermarking scheme is proposed. An improved algebraic structure-preserving method is proposed to handle the complexity explosion that occurs in the conventional QSVD design. Secret information is transmitted blindly by incorporating two new strategies into QSVD, namely, coefficient pair selection and adaptive embedding. Unlike conventional QSVD, which embeds watermarks in a single imaginary unit, we propose to adaptively embed the watermark into the optimal hiding position using the normalized cross-correlation (NC) method. This avoids selecting coefficient pairs with low correlation, and thus it reduces embedding impact by decreasing the maximum modification of coefficient values. In this way, compared with conventional QSVD, the proposed watermarking strategy avoids heavy modifications to a single color image layer, and a better visual quality of the watermarked image is observed. Meanwhile, adaptive QSVD resists some common geometric attacks, improving the robustness of conventional QSVD. With these improvements, our method outperforms conventional QSVD. Its superiority over other state-of-the-art methods is also demonstrated experimentally.
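Quaternion algebra aside, the backbone of SVD-based watermarking is easy to sketch. Below is a minimal, hypothetical example in plain real-valued linear algebra that embeds one bit per image block by quantizing its largest singular value; the paper's scheme additionally works in the quaternion domain with coefficient pair selection and adaptive embedding, which this sketch does not attempt.

```python
import numpy as np

def embed_bit(block, bit, step=12.0):
    """Embed one watermark bit by quantizing the block's largest
    singular value (quantization index modulation). Illustrative
    real-valued stand-in, not the paper's quaternion scheme."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    q = int(s[0] // step)
    if q % 2 != bit:          # even multiples encode 0, odd encode 1
        q += 1
    s[0] = (q + 0.5) * step   # snap to the center of the chosen bin
    return U @ np.diag(s) @ Vt

def extract_bit(block, step=12.0):
    s = np.linalg.svd(block, compute_uv=False)
    return int(s[0] // step) % 2
```

Quantizing singular values rather than raw pixels is what gives SVD-based schemes their robustness: common distortions tend to perturb the singular values far less than individual pixel values.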
Anru Zhang, Rungang Han (2018)
In this article, we consider the sparse tensor singular value decomposition, which aims at dimension reduction for high-dimensional, high-order data with certain sparsity structure. A method named Sparse Tensor Alternating Thresholding for Singular Value Decomposition (STAT-SVD) is proposed. The proposed procedure features a novel double projection & thresholding scheme, which provides a sharp criterion for thresholding in each iteration. Compared with the regular tensor SVD model, STAT-SVD permits more robust estimation under weaker assumptions. Both upper and lower bounds for estimation accuracy are developed. The proposed procedure is shown to be minimax rate-optimal in a general class of situations. Simulation studies show that STAT-SVD performs well under a variety of configurations. We also illustrate the merits of the proposed procedure on a longitudinal tensor dataset of European country mortality rates.
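STAT-SVD itself operates on tensors with a double projection & thresholding scheme; as a much simpler point of reference, here is a hypothetical matrix analogue: rank-1 alternating power iterations with hard thresholding (a truncated power method). It conveys only the core idea that sparsity is enforced on the singular-vector estimates inside each iteration.

```python
import numpy as np

def sparse_rank1_svd(A, keep=0.2, n_iter=50, seed=0):
    """Alternating power iterations with hard thresholding: after each
    multiplication, only the largest-magnitude `keep` fraction of the
    entries of the singular-vector estimate is retained. Matrix-level
    illustration only; STAT-SVD is a tensor procedure with a sharper
    thresholding criterion."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = A @ v
        u[np.argsort(np.abs(u))[:-max(1, int(keep * m))]] = 0.0
        u /= np.linalg.norm(u) + 1e-12
        v = A.T @ u
        v[np.argsort(np.abs(v))[:-max(1, int(keep * n))]] = 0.0
        v /= np.linalg.norm(v) + 1e-12
    return u, float(u @ A @ v), v   # sparse u, sigma, sparse v
```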
The hierarchical SVD provides a quasi-best low-rank approximation of high-dimensional data in the hierarchical Tucker framework. Similar to the SVD for matrices, it provides a fundamental but expensive tool for tensor computations. In the present work we examine generalizations of randomized matrix decomposition methods to higher-order tensors in the framework of the hierarchical tensor representation. In particular, we present and analyze a randomized algorithm for the calculation of the hierarchical SVD (HSVD) for the tensor train (TT) format.
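The matrix-level building block being generalized here is the randomized range finder of Halko, Martinsson, and Tropp; a standard sketch is shown below. How it is threaded through the unfoldings of the TT/hierarchical format, and the resulting error analysis, is the subject of the paper.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_power=1, seed=0):
    """Basic randomized SVD: sample the range of A with a Gaussian
    test matrix, orthonormalize, then decompose the small projected
    matrix. Matrix-level sketch of the building block behind the
    randomized HSVD; the tensor-format bookkeeping is omitted."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Y = A @ rng.standard_normal((n, rank + oversample))
    for _ in range(n_power):          # optional power iterations
        Y = A @ (A.T @ Y)             # sharpen the range estimate
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for range(A)
    B = Q.T @ A                       # small (rank + oversample) x n
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```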
We propose a generalized formulation of the Huber loss. We show that with a suitable choice of function, specifically the log-exp transform, we can achieve a loss function that combines the desirable properties of both the absolute and the quadratic loss. We provide an algorithm to find the minimizer of such loss functions and show that finding a centralizing metric is not much harder than computing the traditional mean or median.
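One concrete instance of the log-exp idea (used here purely for illustration; the paper's generalized form may differ) is the log-cosh loss, which is quadratic near zero and absolute in the tails while staying smooth everywhere:

```python
import numpy as np

def log_cosh_loss(e, delta=1.0):
    """delta**2 * log(cosh(e/delta)): behaves like e**2 / 2 for small
    residuals (quadratic regime) and like delta * |e| for large ones
    (absolute regime), the Huber-style blend the abstract describes."""
    # numerically stable: log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2)
    x = np.abs(np.asarray(e, dtype=float) / delta)
    return delta ** 2 * (x + np.log1p(np.exp(-2.0 * x)) - np.log(2.0))

# the minimizing "center" of a sample sits between mean and median:
samples = np.array([0.0, 0.1, 0.2, 10.0])   # one outlier
grid = np.linspace(-1.0, 11.0, 2401)
center = grid[np.argmin([log_cosh_loss(samples - c).sum() for c in grid])]
```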
Since being analyzed by Rokhlin, Szlam, and Tygert and popularized by Halko, Martinsson, and Tropp, randomized Simultaneous Power Iteration has become the method of choice for approximate singular value decomposition. It is more accurate than simpler sketching algorithms, yet still converges quickly for any matrix, independently of singular value gaps. After $\tilde{O}(1/\epsilon)$ iterations, it gives a low-rank approximation within $(1+\epsilon)$ of optimal for spectral norm error. We give the first provable runtime improvement on Simultaneous Iteration: a simple randomized block Krylov method, closely related to the classic Block Lanczos algorithm, gives the same guarantees in just $\tilde{O}(1/\sqrt{\epsilon})$ iterations and performs substantially better experimentally. Despite their long history, our analysis is the first of a Krylov subspace method that does not depend on singular value gaps, which are unreliable in practice. Furthermore, while it is a simple accuracy benchmark, even $(1+\epsilon)$ error for spectral norm low-rank approximation does not imply that an algorithm returns high quality principal components, a major issue for data applications. We address this problem for the first time by showing that both Block Krylov Iteration and a minor modification of Simultaneous Iteration give nearly optimal PCA for any matrix. This result further justifies their strength over non-iterative sketching methods. Finally, we give insight beyond the worst case, justifying why both algorithms can run much faster in practice than predicted. We clarify how simple techniques can take advantage of common matrix properties to significantly improve runtime.
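A minimal version of the block Krylov method described above is easy to write down; this sketch follows the construction named in the abstract (Krylov blocks $A\Omega, (AA^\top)A\Omega, \dots$), with function and variable names chosen for illustration:

```python
import numpy as np

def block_krylov_svd(A, rank, iters, seed=0):
    """Randomized Block Krylov Iteration: stack the blocks
    A@Om, (A@A.T)@A@Om, ..., orthonormalize the whole Krylov
    subspace, and take the SVD of the projected matrix. Using the
    entire subspace is what allows ~1/sqrt(eps) iterations, where
    Simultaneous Iteration (which keeps only the last block)
    needs ~1/eps."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Y = A @ rng.standard_normal((n, rank))
    blocks = [Y]
    for _ in range(iters):
        Y = A @ (A.T @ Y)
        blocks.append(Y)
    K, _ = np.linalg.qr(np.hstack(blocks))   # basis for the Krylov space
    B = K.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (K @ Ub)[:, :rank], s[:rank], Vt[:rank]
```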
