
On the rate of convergence of iterated Bregman projections and of the alternating algorithm

 Added by Christian Bargetz
 Publication date 2019
 Research language: English





We study the alternating algorithm for the computation of the metric projection onto the closed sum of two closed subspaces in uniformly convex and uniformly smooth Banach spaces. For Banach spaces which are uniformly convex and uniformly smooth of power type, we exhibit a condition which implies linear convergence of this method. We first establish these convergence results for iterates of Bregman projections onto closed linear subspaces. Using an intimate connection between the metric projection onto a closed linear subspace and the Bregman projection onto its annihilator, we then deduce the convergence rate results for the alternating algorithm from the corresponding results for the iterated Bregman projection method.
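The abstract above concerns general uniformly convex and uniformly smooth Banach spaces. As a hedged illustration only, the sketch below runs the classical von Neumann alternating algorithm in the Euclidean special case, where the metric projection onto a subspace reduces to the orthogonal projection; the example subspaces and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def subspace_projector(A):
    """Orthogonal projector onto the column space of A (Euclidean case)."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def alternating_projections(P1, P2, x0, iters=200):
    """von Neumann's alternating algorithm: x_{k+1} = P1 P2 x_k.
    In Hilbert space the iterates converge to the projection of x0
    onto the intersection of the two subspaces."""
    x = x0.copy()
    for _ in range(iters):
        x = P1 @ (P2 @ x)
    return x

# Two planes in R^3 whose intersection is span{e1}; the nonzero
# principal angle between them is 45 degrees, so the error is
# halved on each sweep (linear convergence).
P1 = subspace_projector(np.array([[1., 0.], [0., 1.], [0., 0.]]))  # span{e1, e2}
P2 = subspace_projector(np.array([[1., 0.], [0., 1.], [0., 1.]]))  # span{e1, e2+e3}
x0 = np.array([2., 3., 4.])
x = alternating_projections(P1, P2, x0)
# The limit is the orthogonal projection of x0 onto span{e1}, i.e. (2, 0, 0).
```

The geometric decay factor here is the squared cosine of the nonzero principal angle between the planes, which is the Hilbert-space mechanism behind the linear-convergence conditions studied in the paper.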

Related research


The Gaver-Stehfest algorithm is widely used for numerical inversion of the Laplace transform. In this paper we provide the first rigorous study of the rate of convergence of the Gaver-Stehfest algorithm. We prove that Gaver-Stehfest approximations converge exponentially fast if the target function is analytic in a neighbourhood of a point, and that they converge at a rate $o(n^{-k})$ if the target function is $(2k+3)$-times differentiable at a point.
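For concreteness, here is a minimal sketch of the classical Stehfest summation form of the algorithm, checked against the transform pair $F(s) = 1/(s+1) \leftrightarrow f(t) = e^{-t}$. The function names and the choice $N = 14$ are illustrative assumptions; in double precision, larger $N$ eventually loses accuracy to rounding because the weights grow rapidly.

```python
import math

def stehfest_coefficients(N):
    """Stehfest weights V_1..V_N (N even) for the Gaver-Stehfest formula."""
    M = N // 2
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, M) + 1):
            s += (k ** M * math.factorial(2 * k)
                  / (math.factorial(M - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        V.append((-1) ** (M + i) * s)
    return V

def gaver_stehfest(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_i V_i * F(i * ln 2 / t)."""
    V = stehfest_coefficients(N)
    a = math.log(2.0) / t
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))

# Example: F(s) = 1/(s+1) is the Laplace transform of f(t) = exp(-t).
approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Since $e^{-t}$ extends analytically, the paper's exponential-convergence regime applies, and the approximation is accurate to many digits already at moderate $N$.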
The approximation of functions in Orlicz spaces by multivariate operators on the simplex is considered. The convergence rate is estimated in terms of the modulus of smoothness.
Recently, a new distance has been introduced for the graphs of two point-to-set operators, one of which is maximally monotone. When both operators are the subdifferential of a proper lower semicontinuous convex function, this distance specializes under modest assumptions to the classical Bregman distance. We name this new distance the generalized Bregman distance, and we shed light on it with examples that utilize the other two most natural representative functions: the Fitzpatrick function and its conjugate. We provide sufficient conditions for convexity, coercivity, and supercoercivity: properties that are essential for implementation in proximal point type algorithms. We establish these results for both the left and right variants of this new distance. We construct examples closely related to the Kullback--Leibler divergence, which was previously considered in the context of Bregman distances, and whose importance in information theory is well known. In so doing, we demonstrate how to compute a difficult Fitzpatrick conjugate function, and we discover natural occurrences of the Lambert $W$ function, whose importance in optimization is of growing interest.
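The classical Bregman distance that this generalized distance specializes to is $D_f(x, y) = f(x) - f(y) - \langle \nabla f(y), x - y \rangle$ for a differentiable convex $f$. A small sketch (the helper names are illustrative) shows the connection to the Kullback--Leibler divergence mentioned above, taking $f$ to be the negative entropy:

```python
import numpy as np

def bregman_distance(f, grad_f, x, y):
    """Classical Bregman distance D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>."""
    return f(x) - f(y) - np.dot(grad_f(y), x - y)

# Negative entropy f(x) = sum_i x_i log x_i on the positive orthant.
neg_entropy = lambda x: float(np.sum(x * np.log(x)))
neg_entropy_grad = lambda x: np.log(x) + 1.0

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.25, 0.25, 0.5])
d = bregman_distance(neg_entropy, neg_entropy_grad, x, y)
# For probability vectors this equals the KL divergence sum_i x_i log(x_i / y_i).
```

Expanding the definition confirms the identity: the $\sum y_i \log y_i$ terms cancel and the remaining terms combine into $\sum_i x_i \log(x_i/y_i) + (\sum_i y_i - \sum_i x_i)$, whose second part vanishes on the probability simplex.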
We consider three mathematically equivalent variants of the conjugate gradient (CG) algorithm and how they perform in finite precision arithmetic. It was shown in ["Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences", Linear Algebra Appl., 113 (1989), pp. 7-63] that under certain conditions the convergence of a slightly perturbed CG computation is like that of exact CG for a matrix with many eigenvalues distributed throughout tiny intervals about the eigenvalues of the given matrix, the size of the intervals being determined by how closely these conditions are satisfied. We determine to what extent each of these variants satisfies the desired conditions, using a set of test problems, and show that there is significant correlation between how well these conditions are satisfied and how well the finite precision computation converges before reaching its ultimately attainable accuracy. We show that for problems where the width of the intervals containing the eigenvalues of the associated exact CG matrix makes a significant difference in the behavior of exact CG, the different CG variants behave differently in finite precision arithmetic. For problems where the interval width makes little difference, or where the convergence of exact CG is essentially governed by the upper bound based on the square root of the condition number of the matrix, the different CG variants converge similarly in finite precision arithmetic until the ultimate level of accuracy is achieved, although this ultimate level of accuracy may differ between variants. This points to the need for testing new CG variants on problems that are especially sensitive to rounding errors.
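As background for the abstract above, here is the textbook (Hestenes-Stiefel) arrangement of CG for a symmetric positive definite system; the mathematically equivalent variants studied in the paper rearrange these recurrences, which matters only in finite precision. The test matrix below is an illustrative choice, not one of the paper's test problems.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, maxiter=1000):
    """Textbook CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)       # step length
        x += alpha * p
        r -= alpha * Ap             # recursively updated residual
        rs_new = r @ r
        p = r + (rs_new / rs) * p   # new A-conjugate direction
        rs = rs_new
    return x

# SPD test matrix with eigenvalues spread over [1, 10].
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(1.0, 10.0, 50)) @ Q.T
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
```

In exact arithmetic all rearrangements produce identical iterates; the paper's point is that rounding errors break this equivalence, so the recursively updated residual `r` can drift from the true residual `b - A @ x` at different rates in different variants.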
We investigate connections between the geometry of linear subspaces and the convergence of the alternating projection method for linear projections. The aim of this article is twofold: in the first part, we show that even in Euclidean spaces the convergence of the alternating method is not determined by the principal angles between the subspaces involved. In the second part, we investigate the properties of the Oppenheim angle between two linear projections. We discuss, in particular, the question of existence and uniqueness of consistency projections in this context.
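The principal angles that the first part of this article discusses can be computed numerically: their cosines are the singular values of $Q_1^T Q_2$, where $Q_1, Q_2$ are orthonormal bases of the two subspaces. A short sketch (example subspaces are illustrative):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (in radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Planes span{e1, e2} and span{e1, e2 + e3} in R^3: they share the
# direction e1 (angle 0) and meet at 45 degrees in the remaining direction.
A = np.array([[1., 0.], [0., 1.], [0., 0.]])
B = np.array([[1., 0.], [0., 1.], [0., 1.]])
angles = principal_angles(A, B)
```

The article's first part shows that, beyond the Euclidean setting of this computation, such angles alone do not determine the convergence of the alternating method, which motivates the Oppenheim angle studied in the second part.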
