
Convergence of Constrained Anderson Acceleration

Published by: Mathieu Barré
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We prove non-asymptotic linear convergence rates for the constrained Anderson acceleration extrapolation scheme. These guarantees come from new upper bounds on the constrained Chebyshev problem, which consists of minimizing the maximum absolute value of a polynomial on a bounded real interval subject to $\ell_1$ constraints on its coefficient vector. Constrained Anderson acceleration has a numerical cost comparable to that of the original scheme.
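The scheme accelerates a fixed-point iteration $x_{k+1} = g(x_k)$ by extrapolating with weights that sum to one, and the constrained variant additionally bounds the $\ell_1$ norm of the weight vector. The following is a minimal numpy sketch under assumed interfaces and parameter names: it solves the usual equality-constrained least-squares subproblem exactly, then enforces the $\ell_1$ bound by shrinking toward the plain fixed-point weights, which is a readable simplification of the paper's actual constrained subproblem.

```python
import numpy as np

def caa(g, x0, m=5, C=2.0, iters=50, tol=1e-10):
    """Sketch of constrained Anderson acceleration (assumed interface).
    The weights alpha solve  min ||R alpha||  s.t.  sum(alpha) = 1,
    and ||alpha||_1 <= C (C >= 1 assumed) is enforced by shrinking
    toward the plain fixed-point weights -- a simplification of the
    paper's constrained subproblem."""
    xs, gxs = [np.asarray(x0, float)], []
    for _ in range(iters):
        gxs.append(g(xs[-1]))
        R = np.column_stack([gx - x for x, gx in zip(xs, gxs)])  # residuals
        k = R.shape[1]
        # Equality-constrained least squares via its KKT system.
        K = np.block([[R.T @ R, np.ones((k, 1))],
                      [np.ones((1, k)), np.zeros((1, 1))]])
        rhs = np.zeros(k + 1); rhs[-1] = 1.0
        alpha = np.linalg.lstsq(K, rhs, rcond=None)[0][:k]
        # Shrink toward the last unit vector e (||e||_1 = 1), which
        # corresponds to the plain fixed-point step.
        e = np.zeros(k); e[-1] = 1.0
        t = 1.0
        while np.abs(e + t * (alpha - e)).sum() > C:
            t *= 0.5
        alpha = e + t * (alpha - e)
        x_new = np.column_stack(gxs) @ alpha
        if np.linalg.norm(x_new - xs[-1]) < tol:
            return x_new
        xs.append(x_new)
        if len(xs) > m + 1:            # sliding memory window of m+1 iterates
            xs.pop(0); gxs.pop(0)
    return xs[-1]

# Example: accelerate the contractive linear map g(x) = Ax + b.
A = np.diag([0.9, 0.5, 0.1]); b = np.ones(3)
sol = caa(lambda x: A @ x + b, np.zeros(3))
```

With C = 1 the shrinkage always returns the plain fixed-point step, so the bound interpolates between the vanilla iteration and unconstrained Anderson acceleration.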




Read also

Let $A$ be a real $n\times n$ matrix and $z, b \in \mathbb{R}^n$. The piecewise linear equation system $z - A\vert z\vert = b$ is called an \textit{absolute value equation}. We consider two solvers for this problem, one direct, one semi-iterative, and extend their previously known ranges of convergence.
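For intuition, the semi-iterative route can be illustrated by the classical Picard iteration $z_{k+1} = A\vert z_k\vert + b$, which converges when $\Vert A\Vert < 1$; the abstract's point is precisely that such known convergence ranges can be extended. The snippet below is an illustrative sketch of that classical iteration, not the paper's exact solver.

```python
import numpy as np

def ave_picard(A, b, z0=None, iters=500, tol=1e-12):
    """Picard iteration z_{k+1} = A|z_k| + b for the absolute value
    equation z - A|z| = b. A classical sufficient condition for
    convergence is ||A|| < 1 (the map is then a contraction, since
    | |u| - |v| | <= |u - v| componentwise)."""
    z = np.zeros_like(b) if z0 is None else z0.copy()
    for _ in range(iters):
        z_new = A @ np.abs(z) + b
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Example with spectral norm ||A||_2 = 0.9 < 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A *= 0.9 / np.linalg.norm(A, 2)
b = rng.standard_normal(5)
z = ave_picard(A, b)
print(np.linalg.norm(z - A @ np.abs(z) - b))   # residual ~ 0
```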
The randomized sparse Kaczmarz method was recently proposed to recover sparse solutions of linear systems. In this work, we introduce a greedy variant of the randomized sparse Kaczmarz method by employing the sampling Kaczmarz-Motzkin method, and prove its linear convergence in expectation with respect to the Bregman distance in both the noiseless and noisy cases. This greedy variant can be viewed as a unification of the sampling Kaczmarz-Motzkin method and the randomized sparse Kaczmarz method, and hence inherits the merits of both. Numerically, we report several experimental results that demonstrate its superiority.
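A sketch of how the two ingredients combine, with assumed parameter names: each step samples a small block of rows, greedily picks the row with the largest absolute residual (the Motzkin rule), performs the sparse Kaczmarz update on a dual variable, and soft-thresholds to produce the sparse iterate.

```python
import numpy as np

def soft(z, lam):                        # soft-thresholding (shrinkage)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def greedy_sparse_kaczmarz(A, b, lam=0.1, beta=10, iters=2000, seed=0):
    """Sketch of a sampling-Kaczmarz-Motzkin variant of the sparse
    Kaczmarz method (interface and parameters are assumptions).
    z is the dual variable; x = soft(z, lam) is the sparse iterate."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    z = np.zeros(n)
    x = soft(z, lam)
    row_norms = (A * A).sum(axis=1)
    for _ in range(iters):
        S = rng.choice(m, size=min(beta, m), replace=False)  # sampled block
        r = A[S] @ x - b[S]
        i = S[np.argmax(np.abs(r))]                          # Motzkin rule
        z -= (A[i] @ x - b[i]) / row_norms[i] * A[i]         # Kaczmarz step
        x = soft(z, lam)                                     # sparsify
    return x

# Example: recover a sparse x* from consistent measurements b = A x*.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 100))
x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
x_hat = greedy_sparse_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```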
We study an optimization problem that aims to determine the shape of an obstacle submerged in a fluid governed by the Stokes equations. The flow takes place in a channel, which motivates the imposition of a Poiseuille-like input function on one end and a do-nothing boundary condition on the other. The maximization of the vorticity is addressed through the $L^2$-norm of the curl and the \textit{det-grad} measure of the fluid. We impose a Tikhonov regularization in the form of a perimeter functional, together with a volume constraint, to address the possibility of topological change. Having established the existence of an optimal shape, we formulate the first-order necessary condition using the so-called rearrangement method. Finally, numerical examples are presented, using a finite element method for the governing states and a gradient descent method for the deformation of the domain. Within the gradient descent method, we use two approaches to address the volume constraint: one utilizes the augmented Lagrangian method, and the other a class of divergence-free deformation fields.
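The augmented Lagrangian treatment of the volume constraint can be isolated from the PDE machinery. The toy sketch below shows the generic loop for one scalar equality constraint $c(x) = 0$; all names are illustrative placeholders, and the finite element states and shape derivatives of the paper are abstracted into plain gradients.

```python
import numpy as np

def aug_lagrangian(f_grad, c, c_grad, x0, mu=10.0, lr=1e-2,
                   inner=200, outer=15):
    """Generic augmented-Lagrangian loop for one equality constraint
    c(x) = 0 (a placeholder sketch, not the paper's FEM code).
    Inner loop: gradient descent on f + lam*c + (mu/2)*c^2."""
    x, lam = np.asarray(x0, float), 0.0
    for _ in range(outer):
        for _ in range(inner):
            x -= lr * (f_grad(x) + (lam + mu * c(x)) * c_grad(x))
        lam += mu * c(x)                # first-order multiplier update
    return x

# Toy example: minimize ||x||^2 subject to sum(x) = 1.
f_grad = lambda x: 2 * x
c      = lambda x: x.sum() - 1.0        # stand-in "volume" constraint
c_grad = lambda x: np.ones_like(x)
print(aug_lagrangian(f_grad, c, c_grad, np.zeros(4)))  # -> ~[0.25]*4
```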
We propose a novel method to accelerate Lloyd's algorithm for K-Means clustering. Unlike previous acceleration approaches that reduce the computational cost per iteration or improve initialization, our approach focuses on reducing the number of iterations required for convergence. This is achieved by treating the assignment step and the update step of Lloyd's algorithm as a fixed-point iteration, and applying Anderson acceleration, a well-established technique for accelerating fixed-point solvers. Classical Anderson acceleration utilizes $m$ previous iterates to find an accelerated iterate, and its performance on K-Means clustering can be sensitive to the choice of $m$ and the distribution of samples. We propose a new strategy to dynamically adjust the value of $m$, which achieves robust and consistent speedups across different problem instances. Our method complements existing acceleration techniques and can be combined with them to achieve state-of-the-art performance. We perform extensive experiments to evaluate the proposed method; it outperforms other algorithms in 106 out of 120 test cases, and the mean reduction in computational time exceeds 33%.
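The key reformulation is that one Lloyd iteration is a fixed-point map on the centroid matrix, so standard vector-valued Anderson acceleration applies once the matrix is flattened. Below is a minimal sketch of that map (the paper's adaptive choice of $m$ is omitted); any AA routine, such as the one sketched after the first abstract above, can then be wrapped around it.

```python
import numpy as np

def lloyd_step(C, X):
    """One Lloyd iteration as a fixed-point map on the k x d centroid
    matrix C: K-Means has converged exactly when lloyd_step(C, X) == C,
    so an Anderson-type accelerator can be applied to this map."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
    labels = d2.argmin(1)                                # assignment step
    C_new = C.copy()
    for j in range(len(C)):                              # update step;
        pts = X[labels == j]                             # empty clusters
        if len(pts):                                     # keep their old
            C_new[j] = pts.mean(0)                       # centroid
    return C_new

# Flattening makes the map vector-valued, e.g. for an AA routine:
# g = lambda c: lloyd_step(c.reshape(k, d), X).ravel()
```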
Anderson acceleration (AA) is a popular method for accelerating fixed-point iterations, but it may suffer from instability and stagnation. We propose a globalization method for AA that improves stability and achieves unified global and local convergence. Unlike existing AA globalization approaches that rely on safeguarding operations and might hinder fast local convergence, we adopt a nonmonotone trust-region framework and introduce an adaptive quadratic regularization together with a tailored acceptance mechanism. We prove global convergence and show that our algorithm attains the same local convergence as AA under appropriate assumptions. The effectiveness of our method is demonstrated in several numerical experiments.
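The accept-or-fall-back structure that such a globalization imposes can be sketched generically. The skeleton below uses a simplified nonmonotone residual test in place of the paper's trust-region ratio and adaptive quadratic regularization; `accel_step` stands for any accelerated candidate generator, e.g. an Anderson extrapolation, and all names here are assumptions.

```python
import numpy as np

def globalized_fp(g, x0, accel_step, iters=100, window=5, eta=1e-8):
    """Skeleton of a globalized acceleration loop (generic sketch, not
    the paper's exact mechanism). The accelerated candidate is accepted
    only if its residual beats the worst of the last few accepted
    residuals (a nonmonotone test); otherwise the plain step is taken."""
    x = np.asarray(x0, float)
    merits = [np.linalg.norm(g(x) - x)]
    hist = [x]
    for _ in range(iters):
        cand = accel_step(hist)
        m_cand = np.linalg.norm(g(cand) - cand)
        if m_cand <= max(merits[-window:]) - eta:
            x = cand                     # accept the accelerated step
        else:
            x = g(x)                     # fall back to the plain step
        merits.append(np.linalg.norm(g(x) - x))
        hist.append(x)
    return x

# e.g. a stand-in accelerator (over-relaxation) for quick testing:
# accel = lambda hist: hist[-1] + 1.5 * (g(hist[-1]) - hist[-1])
```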