
A Dual Symmetric Gauss-Seidel Alternating Direction Method of Multipliers for Hyperspectral Sparse Unmixing

Posted by Longfei Ren
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Since sparse unmixing has emerged as a promising approach to hyperspectral unmixing, spatial-contextual information in hyperspectral images has recently been exploited to improve unmixing performance. The total variation (TV) regularizer has been widely used to promote spatial homogeneity and smoothness between adjacent pixels. However, hyperspectral sparse unmixing with a TV regularization term is computationally demanding, and the convergence of the primal alternating direction method of multipliers (ADMM) for this problem has not been analyzed in detail. In this paper, we design an efficient and convergent dual symmetric Gauss-Seidel ADMM (sGS-ADMM) for hyperspectral sparse unmixing with a TV regularization term. We also establish the global convergence and the local linear convergence rate of the algorithm. As demonstrated in numerical experiments, our algorithm clearly improves unmixing efficiency compared with the state-of-the-art algorithm and, more importantly, yields images of higher quality.
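To make the optimization target concrete, the sketch below evaluates a standard TV-regularized sparse-unmixing objective of the kind this line of work addresses: a data-fidelity term, an L1 sparsity term, and anisotropic TV on each abundance map. The symbols A (spectral library), Y (observed image), X (abundances) and the weights lam_1, lam_tv are illustrative assumptions; this is not the paper's sGS-ADMM solver.

```python
import numpy as np

def unmixing_objective(A, Y, X, lam_1, lam_tv, rows, cols):
    """0.5*||A X - Y||_F^2 + lam_1*||X||_{1,1} + lam_tv*TV(X) (anisotropic TV)."""
    fidelity = 0.5 * np.linalg.norm(A @ X - Y, "fro") ** 2
    sparsity = lam_1 * np.abs(X).sum()
    # Reshape each endmember's abundances onto the image grid and sum the
    # absolute horizontal and vertical first differences.
    maps = X.reshape(X.shape[0], rows, cols)
    tv = np.abs(np.diff(maps, axis=1)).sum() + np.abs(np.diff(maps, axis=2)).sum()
    return fidelity + sparsity + lam_tv * tv

# Toy usage: 50 bands, a 20-atom library, a 10x10 image.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
X_true = np.maximum(rng.standard_normal((20, 100)), 0)
Y = A @ X_true
print(unmixing_objective(A, Y, X_true, 1e-3, 1e-3, 10, 10))
```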




Read also

In this paper, we develop a dual alternating direction method of multipliers (ADMM) for an image decomposition model. In this model, an image is divided into two meaningful components, i.e., a cartoon part and a texture part. The optimization algorithm that we develop gives not only the cartoon part and the texture part of an image but also the restored image (cartoon part + texture part). We also present the global convergence and the local linear convergence rate for the algorithm under some mild conditions. Numerical experiments demonstrate the efficiency and robustness of the dual ADMM (dADMM). Furthermore, we obtain a relatively higher signal-to-noise ratio (SNR) compared to other algorithms, which shows that the choice of algorithm matters even for the same model.
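As a rough illustration of the decomposition idea only (this is a toy alternating-minimization scheme on a 1-D signal, not the paper's model or its dual ADMM), the sketch below splits a signal f into a smooth "cartoon" part u and a sparse "texture" part v, and restores the signal as u + v; the operator D and the weights alpha, beta are assumptions.

```python
import numpy as np

# Toy objective: 0.5*||f - u - v||^2 + 0.5*alpha*||D u||^2 + beta*||v||_1,
# minimized by alternating over u (linear solve) and v (soft threshold).
n = 200
t = np.linspace(0, 1, n)
f = np.piecewise(t, [t < 0.5, t >= 0.5], [0.0, 1.0]) + 0.2 * np.sin(60 * np.pi * t)

D = np.diff(np.eye(n), axis=0)          # first-difference operator
alpha, beta = 5.0, 0.1
u = np.zeros(n); v = np.zeros(n)
lhs = np.eye(n) + alpha * D.T @ D       # normal equations for the u-update
for _ in range(50):
    u = np.linalg.solve(lhs, f - v)                      # smooth "cartoon" part
    r = f - u
    v = np.sign(r) * np.maximum(np.abs(r) - beta, 0.0)   # sparse "texture" part
print("residual of restored signal:", np.linalg.norm(f - (u + v)))
```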
Quantization of the parameters of machine learning models, such as deep neural networks, requires solving constrained optimization problems, where the constraint set is formed by the Cartesian product of many simple discrete sets. For such optimization problems, we study the performance of the Alternating Direction Method of Multipliers for Quantization ($\texttt{ADMM-Q}$) algorithm, which is a variant of the widely used ADMM method applied to our discrete optimization problem. We establish the convergence of the iterates of $\texttt{ADMM-Q}$ to certain $\textit{stationary points}$. To the best of our knowledge, this is the first analysis of an ADMM-type method for problems with discrete variables/constraints. Based on our theoretical insights, we develop a few variants of $\texttt{ADMM-Q}$ that can handle inexact update rules and have improved performance via the use of soft projection and injecting randomness into the algorithm. We empirically evaluate the efficacy of our proposed approaches.
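A minimal sketch of the general idea, not the authors' exact $\texttt{ADMM-Q}$ updates: ADMM applied to a toy least-squares problem whose variable must lie entrywise in a small discrete alphabet, with the z-update being a nearest-point projection onto that alphabet (the step where soft projection or injected randomness would plug in). The alphabet, penalty rho, and problem sizes are assumptions.

```python
import numpy as np

# min 0.5*||A x - b||^2  s.t.  x_i in {-1, 0, +1} for all i
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
alphabet = np.array([-1.0, 0.0, 1.0])   # hypothetical quantization levels
rho = 1.0

x = np.zeros(10); z = np.zeros(10); u = np.zeros(10)
lhs = A.T @ A + rho * np.eye(10)        # the x-update is a quadratic solve
for _ in range(100):
    x = np.linalg.solve(lhs, A.T @ b + rho * (z - u))
    # z-update: hard projection of x + u onto the discrete alphabet
    z = alphabet[np.argmin(np.abs((x + u)[:, None] - alphabet[None, :]), axis=1)]
    u = u + x - z                       # dual (multiplier) update
print("quantized solution:", z)
```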
Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework for DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively: one can be solved using stochastic gradient descent, and the other can be solved analytically. Our method also achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2 times weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21 times weight reduction without accuracy loss. When we focus on convolutional layer pruning for computation reduction, we can reduce the total computation by five times compared with prior work (achieving a total of 13.4 times weight reduction in convolutional layers). Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning
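The sketch below instantiates the two-subproblem split described above on a toy least-squares "network" with a single weight vector (an assumption, not the authors' DNN setup): the W-update minimizes loss plus a quadratic penalty by gradient steps, and the Z-update is the analytic projection onto the k-sparse set (keep the k largest-magnitude weights).

```python
import numpy as np

def project_k_sparse(v, k):
    """Analytic solution of the sparsity-constrained subproblem: keep top-k magnitudes."""
    z = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    z[idx] = v[idx]
    return z

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X @ (rng.standard_normal(50) * (rng.random(50) < 0.2))   # sparse ground truth
W = np.zeros(50); Z = np.zeros(50); U = np.zeros(50)
rho, lr, k = 1.0, 0.1, 10
for _ in range(50):                       # outer ADMM iterations
    for _ in range(100):                  # gradient descent on loss + (rho/2)||W - Z + U||^2
        grad = X.T @ (X @ W - y) / len(y) + rho * (W - Z + U)
        W -= lr * grad
    Z = project_k_sparse(W + U, k)        # analytic projection onto the sparsity constraint
    U = U + W - Z                         # dual update
print("nonzeros kept:", np.count_nonzero(Z))
```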
We present a systematic weight pruning framework for deep neural networks (DNNs) using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a constrained nonconvex optimization problem, and then adopt the ADMM framework for systematic weight pruning. We show that ADMM is highly suitable for weight pruning due to the computational efficiency it offers. We achieve a much higher compression ratio compared with prior work while maintaining the same test accuracy, together with a faster convergence rate. Our models are released at https://github.com/KaiqiZhang/admm-pruning
Hongpeng Sun, Jing Yuan (2020)
This work introduces a preconditioned dual optimization framework with the alternating direction method of multipliers (ADMM) for optical flow estimation. By introducing efficient preconditioners with a multiscale pyramid, our preconditioned algorithms give competitive optical flow estimates under appropriate variational functional frameworks. We propose a novel preconditioned ADMM with a convergence guarantee for the total variation regularized optical flow problem by optimizing the dual problems. Numerical tests show that the proposed preconditioned ADMM algorithms are very efficient for total variation regularized optical flow estimation.
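For context, the sketch below evaluates a plain TV-L1 optical-flow energy of the kind such variational frameworks regularize (an assumption; it is not the paper's preconditioned dual ADMM): a linearized brightness-constancy data term plus TV of each flow component, given image gradients Ix, Iy, It and a candidate flow (u, v).

```python
import numpy as np

def tv(field):
    # Anisotropic total variation: sum of absolute first differences.
    return np.abs(np.diff(field, axis=0)).sum() + np.abs(np.diff(field, axis=1)).sum()

def flow_energy(Ix, Iy, It, u, v, lam):
    data = np.abs(Ix * u + Iy * v + It).sum()     # linearized brightness constancy
    return data + lam * (tv(u) + tv(v))

# Toy usage on random gradients and a constant flow guess.
rng = np.random.default_rng(0)
Ix, Iy, It = (rng.standard_normal((32, 32)) for _ in range(3))
u = np.ones((32, 32)); v = np.zeros((32, 32))
print(flow_energy(Ix, Iy, It, u, v, lam=0.1))
```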