
Stochastic Dual Coordinate Ascent with Alternating Direction Multiplier Method

Posted by: Taiji Suzuki
Publication date: 2013
Research field: Mathematical Statistics
Paper language: English
Authors: Taiji Suzuki





We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on the Alternating Direction Multiplier Method (ADMM) to handle complex regularization functions such as structured regularizations. Although the original ADMM is a batch method, the proposed method offers a stochastic update rule in which each iteration requires only one or a few sample observations. Moreover, our method naturally accommodates mini-batch updates, which speed up convergence. We show that, under mild assumptions, our method converges exponentially. Numerical experiments show that our method performs efficiently in practice.
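As a rough illustration of the kind of stochastic, one-sample-per-iteration ADMM update discussed in the abstract, the following Python sketch applies a generic stochastic ADMM scheme (with a linearized loss term) to an l1-regularized least-squares problem. It is not the paper's SDCA-ADMM; the step-size schedule and parameter names are assumptions made for this sketch.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_admm_lasso(A, y, lam=0.1, rho=1.0, eta0=1.0, n_iter=5000, seed=0):
    """Sketch: minimize (1/(2n)) * sum_i (a_i^T w - y_i)^2 + lam * ||z||_1
    subject to w = z, touching one randomly drawn sample per iteration.
    Step sizes and parameter names are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)   # primal variable on the loss side
    z = np.zeros(d)   # primal variable on the regularizer side
    u = np.zeros(d)   # scaled dual variable for the constraint w - z = 0
    for t in range(1, n_iter + 1):
        i = rng.integers(n)                      # one sample observation per iteration
        g = (A[i] @ w - y[i]) * A[i]             # stochastic gradient of the smooth loss
        eta = eta0 / np.sqrt(t)                  # decaying proximal step size
        # w-update: linearized loss plus the augmented-Lagrangian quadratic
        w = (w / eta + rho * (z - u) - g) / (1.0 / eta + rho)
        # z-update: proximal (soft-thresholding) step on the l1 regularizer
        z = soft_threshold(w + u, lam / rho)
        # dual update for the constraint w = z
        u = u + w - z
    return z
```

A mini-batch variant of this sketch would simply average the stochastic gradients of a few sampled rows before the w-update, which is the kind of update the abstract says accelerates convergence.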


Read also

Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, CoCoA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that as compared to state-of-the-art mini-batch …
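To make the local-computation idea concrete, here is a single-process Python simulation of a CoCoA-style round for ridge regression: each simulated machine runs a few dual coordinate (SDCA) updates on its own data partition against a frozen copy of the shared primal vector, and the resulting updates are averaged. This is an illustrative sketch under simplifying assumptions, not the authors' Spark implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def cocoa_ridge(X, y, lam=0.1, K=4, local_iters=50, outer_rounds=20, seed=0):
    """Simulated CoCoA-style rounds for ridge regression with squared loss:
        min_w (1/(2n)) ||X w - y||^2 + (lam/2) ||w||^2.
    K simulated machines each run local SDCA steps, then their primal/dual
    updates are averaged (the conservative aggregation choice)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)                              # dual variables, one per sample
    w = np.zeros(d)                                  # shared primal vector, w = X^T alpha / (lam * n)
    parts = np.array_split(rng.permutation(n), K)    # random data partition across machines
    for _ in range(outer_rounds):
        delta_w = np.zeros(d)
        delta_alpha = np.zeros(n)
        for idx in parts:                            # "communication-free" local phase
            w_local = w.copy()
            d_alpha = np.zeros(n)
            for _ in range(local_iters):
                i = rng.choice(idx)                  # closed-form SDCA step for squared loss
                resid = y[i] - X[i] @ w_local - (alpha[i] + d_alpha[i])
                da = resid / (1.0 + X[i] @ X[i] / (lam * n))
                d_alpha[i] += da
                w_local += da * X[i] / (lam * n)
            delta_w += w_local - w
            delta_alpha += d_alpha
        w += delta_w / K                             # one communication round: average the updates
        alpha += delta_alpha / K
    return w
```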
Tianyi Chen, Yuejiao Sun, 2021
Stochastic nested optimization, including stochastic compositional, min-max and bilevel optimization, is gaining popularity in many machine learning applications. While the three problems share the nested structure, existing works often treat them separately, and thus develop problem-specific algorithms and analyses. Among various exciting developments, simple SGD-type updates (potentially on multiple variables) are still prevalent in solving this class of nested problems, but they are believed to have a slower convergence rate than that of non-nested problems. This paper unifies several SGD-type updates for stochastic nested problems into a single SGD approach that we term the ALternating Stochastic gradient dEscenT (ALSET) method. By leveraging the hidden smoothness of the problem, this paper presents a tighter analysis of ALSET for stochastic nested problems. Under the new analysis, achieving an $\epsilon$-stationary point of the nested problem requires ${\cal O}(\epsilon^{-2})$ samples. Under certain regularity conditions, applying our results to stochastic compositional, min-max and reinforcement learning problems either improves or matches the best-known sample complexity in the respective cases. Our results explain why simple SGD-type algorithms for stochastic nested problems all work very well in practice without the need for further modifications.
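The alternating SGD-type pattern described above can be illustrated on a toy stochastic compositional problem. The sketch below (hypothetical function and parameter names) keeps a running estimate of the inner expectation and takes chain-rule stochastic gradient steps in the outer variable; it only shows the alternating update pattern, not the paper's exact ALSET algorithm or its analysis.

```python
import numpy as np

def alternating_sgd_compositional(sample_Ab, d_in, d_out, alpha=0.01, beta=0.1,
                                  n_iter=20000, seed=0):
    """Sketch of alternating SGD-type updates for the toy compositional problem
        min_x f(E[g(x; xi)])  with  f(u) = 0.5 * ||u||^2,  g(x; xi) = A_xi x + b_xi.
    The auxiliary variable y tracks the inner expectation; x then takes a
    chain-rule stochastic gradient step through y.  Names are hypothetical."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d_in)
    y = np.zeros(d_out)
    for _ in range(n_iter):
        A1, b1 = sample_Ab(rng)                      # sample for the inner tracking update
        A2, _ = sample_Ab(rng)                       # independent sample for the outer gradient
        y = (1 - beta) * y + beta * (A1 @ x + b1)    # running estimate of E[g(x; xi)]
        x = x - alpha * (A2.T @ y)                   # grad f(u) = u, so the step direction is A^T y
    return x

# Tiny usage example: in expectation the objective is 0.5 * ||A0 x + b0||^2,
# so x should approach -inv(A0) @ b0 = [-0.5, 1.0].
if __name__ == "__main__":
    A0 = np.array([[2.0, 0.0], [0.0, 1.0]])
    b0 = np.array([1.0, -1.0])

    def sample_Ab(rng):
        return (A0 + 0.1 * rng.standard_normal((2, 2)),
                b0 + 0.1 * rng.standard_normal(2))

    print(alternating_sgd_compositional(sample_Ab, d_in=2, d_out=2))
```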
In this paper, we develop a dual alternating direction method of multipliers (ADMM) for an image decomposition model. In this model, an image is divided into two meaningful components, i.e., a cartoon part and a texture part. The optimization algorithm that we develop not only gives the cartoon part and the texture part of an image but also gives the restored image (cartoon part + texture part). We also establish global convergence and a local linear convergence rate for the algorithm under some mild conditions. Numerical experiments demonstrate the efficiency and robustness of the dual ADMM (dADMM). Furthermore, we obtain a relatively higher signal-to-noise ratio (SNR) compared to other algorithms, which shows that the choice of algorithm matters even for the same model.
Quantization of the parameters of machine learning models, such as deep neural networks, requires solving constrained optimization problems, where the constraint set is formed by the Cartesian product of many simple discrete sets. For such optimization problems, we study the performance of the Alternating Direction Method of Multipliers for Quantization ($\texttt{ADMM-Q}$) algorithm, which is a variant of the widely used ADMM method applied to our discrete optimization problem. We establish the convergence of the iterates of $\texttt{ADMM-Q}$ to certain $\textit{stationary points}$. To the best of our knowledge, this is the first analysis of an ADMM-type method for problems with discrete variables/constraints. Based on our theoretical insights, we develop a few variants of $\texttt{ADMM-Q}$ that can handle inexact update rules and achieve improved performance through soft projection and injected randomness. We empirically evaluate the efficacy of our proposed approaches.
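For intuition, the following sketch runs an ADMM-type loop in the spirit of ADMM-Q on a least-squares objective with coordinates constrained to a small discrete set: the x-update solves a regularized linear system, the z-update projects onto the discrete levels, and the dual variable accumulates the constraint violation. This is an assumption-laden illustration, not the authors' algorithm or its inexact/soft-projection variants.

```python
import numpy as np

def project_to_levels(v, levels):
    """Project each coordinate of v to the nearest value in the discrete set `levels`."""
    levels = np.asarray(levels)
    idx = np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)
    return levels[idx]

def admm_quantize(A, b, levels=(-1.0, 0.0, 1.0), rho=1.0, n_iter=200):
    """Sketch of an ADMM-type loop for  min_x 0.5 * ||A x - b||^2  s.t.  x_j in levels,
    using the splitting x = z with z constrained to the discrete set.
    Parameter choices are assumptions made for this illustration."""
    d = A.shape[1]
    x = np.zeros(d)
    z = np.zeros(d)
    u = np.zeros(d)                       # scaled dual variable for the constraint x = z
    AtA = A.T @ A
    Atb = A.T @ b
    M = AtA + rho * np.eye(d)             # x-update solves (A^T A + rho I) x = A^T b + rho (z - u)
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))
        z = project_to_levels(x + u, levels)   # z-update: snap to the nearest discrete level
        u = u + x - z                          # dual update
    return z
```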
Since sparse unmixing has emerged as a promising approach to hyperspectral unmixing, some spatial-contextual information in hyperspectral images has recently been exploited to improve unmixing performance. Total variation (TV) has been widely used to promote spatial homogeneity as well as smoothness between adjacent pixels. However, the computational cost of hyperspectral sparse unmixing with a TV regularization term is high. Moreover, the convergence of the primal alternating direction method of multipliers (ADMM) for hyperspectral sparse unmixing with a TV regularization term has not been analyzed in detail. In this paper, we design an efficient and convergent dual symmetric Gauss-Seidel ADMM (sGS-ADMM) for hyperspectral sparse unmixing with a TV regularization term. We also present global convergence and local linear convergence rate analysis for this algorithm. As demonstrated in numerical experiments, our algorithm substantially improves the efficiency of the unmixing compared with the state-of-the-art algorithm. More importantly, we obtain images of higher quality.