
Error Bounds for the Method of Simultaneous Projections with Infinitely Many Subspaces

Posted by: Rafał Zalas
Publication date: 2020
Paper language: English





We investigate the properties of the simultaneous projection method as applied to countably infinitely many closed linear subspaces of a real Hilbert space. We establish the optimal error bound for linear convergence of this method, which we express in terms of the cosine of the Friedrichs angle computed in an infinite product space. In addition, we provide estimates and alternative expressions for the above-mentioned number. Furthermore, we relate this number to the dichotomy theorem and to super-polynomially fast convergence. We also discuss polynomial convergence of the simultaneous projection method, which takes place for particularly chosen starting points.
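As a small, self-contained illustration (not taken from the paper, which treats countably infinitely many subspaces), the sketch below runs the simultaneous projection iteration $x_{k+1}=\sum_i w_i P_i x_k$ on three lines through the origin of $\mathbb{R}^3$; the subspaces, weights and iteration count are illustrative assumptions.

```python
# Minimal sketch of the simultaneous projection method x_{k+1} = sum_i w_i * P_i(x_k),
# shown here for finitely many one-dimensional subspaces (lines) of R^3.
# The subspaces, weights and iteration count are illustrative assumptions only.
import numpy as np

def projector(basis):
    """Orthogonal projector onto the span of the columns of `basis`."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

subspaces = [np.array([[1.0], [0.0], [0.0]]),   # span(e1)
             np.array([[1.0], [1.0], [0.0]]),   # span(e1 + e2)
             np.array([[0.0], [1.0], [1.0]])]   # span(e2 + e3)
projectors = [projector(b) for b in subspaces]
weights = np.full(len(projectors), 1.0 / len(projectors))  # positive, summing to 1

x = np.array([1.0, -2.0, 3.0])                  # starting point
for _ in range(200):
    x = sum(w * (P @ x) for w, P in zip(weights, projectors))

print(np.linalg.norm(x))                        # distance to the common point 0
```

For this configuration the intersection of the subspaces is {0}, so the iterates converge linearly to the projection of the starting point onto that intersection; the linear rate is governed by an angle between the subspaces, which is the kind of quantity the paper's Friedrichs-angle bound makes precise.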




Read also

The paper deals with an eigenvalue problem possessing infinitely many positive and negative eigenvalues. Inequalities for the smallest positive and the largest negative eigenvalues, which have the same properties as the fundamental frequency, are derived. The main question is whether or not the classical isoperimetric inequalities for the fundamental frequency of membranes hold in this case. The arguments are based on harmonic transplantation for the global results and on shape derivatives (domain variations) for nearly circular domains.
Warren Adams, Akshay Gupte, 2017
Convex hulls of monomials have been widely studied in the literature, and monomial convexifications are implemented in global optimization software for relaxing polynomials. However, there has been no study of the error in the global optimum from such approaches. We give bounds on the worst-case error for convexifying a monomial over subsets of $[0,1]^n$. This implies additive error bounds for relaxing a polynomial optimization problem by convexifying each monomial separately. Our main error bounds depend primarily on the degree of the monomial, making them easy to compute. Since monomial convexification studies depend on the bounds on the associated variables, in the second part, we conduct an error analysis for a multilinear monomial over two different types of box constraints. As part of this analysis, we also derive the convex hull of a multilinear monomial over $[-1,1]^n$.
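For intuition on what such convexification errors measure, here is a toy numerical check (not from the paper): over $[0,1]^n$ the convex envelope of the multilinear monomial $x_1\cdots x_n$ is commonly given as $\max(0,\sum_i x_i - n + 1)$, and the sketch below samples the gap between the monomial and this underestimator. Both the envelope formula used and the sampling-based estimate are illustrative stand-ins for the paper's analytic bounds.

```python
# Toy numerical check (not from the paper): gap between the multilinear monomial
# x_1*...*x_n and its convex underestimator max(0, sum(x) - (n-1)) over [0,1]^n.
# The dimension n, sample size and random seed are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 4
samples = rng.random((200_000, n))                    # uniform points in [0,1]^n

monomial = samples.prod(axis=1)                       # x_1 * x_2 * ... * x_n
convex_env = np.maximum(0.0, samples.sum(axis=1) - (n - 1))

gap = monomial - convex_env                           # nonnegative for an underestimator
print(f"largest sampled gap for n={n}: {gap.max():.4f}")
```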
This paper investigates optimal error bounds and convergence rates for general Mann iterations for computing fixed points of non-expansive maps in normed spaces. We look for iterations that achieve the smallest fixed-point residual after $n$ steps, by minimizing a worst-case bound $\|x^n-Tx^n\| \le R_n$ derived from a nested family of optimal transport problems. We prove that this bound is tight, so that minimizing $R_n$ yields optimal iterations. Inspired by numerical results, we identify iterations that attain the rate $R_n=O(1/n)$, which we also show to be the best possible. In particular, we prove that the classical Halpern iteration achieves this optimal rate for several alternative stepsizes, and we determine analytically the optimal stepsizes that attain the smallest worst-case residuals at every step $n$, with a tight bound $R_n \approx \frac{4}{n+4}$. We also determine the optimal Halpern stepsizes for affine nonexpansive maps, for which we get exactly $R_n=\frac{1}{n+1}$. Finally, we show that the best rate for the classical Krasnosel'skiĭ-Mann iteration is $\Omega(1/\sqrt{n})$, and we present numerical evidence suggesting that even after introducing inertial terms one cannot reach the faster rate $O(1/n)$.
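The Halpern scheme mentioned above is short enough to state in code. The sketch below runs it with the anchoring stepsize $b_k = 1/(k+2)$ on a plane rotation, which is nonexpansive with the origin as its only fixed point; both the map and the stepsize are illustrative assumptions rather than the optimized stepsizes derived in the paper.

```python
# Sketch of the classical Halpern iteration x_{k+1} = b_k*x0 + (1 - b_k)*T(x_k)
# for a nonexpansive map T. Here T is a plane rotation (only fixed point: 0)
# and b_k = 1/(k+2); both choices are illustrative assumptions.
import numpy as np

theta = 2.0                                    # rotation angle in radians
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda v: R @ v

x0 = np.array([1.0, 0.0])
x = x0.copy()
for k in range(1000):
    b = 1.0 / (k + 2)                          # anchoring stepsize
    x = b * x0 + (1.0 - b) * T(x)

print(np.linalg.norm(x - T(x)))                # fixed-point residual, roughly O(1/k)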
We prove that a reduced and irreducible algebraic surface in $\mathbb{CP}^{3}$ containing infinitely many twistor lines cannot have odd degree. Then, exploiting the theory of quaternionic slice regularity and the normalization map of a surface, we give constructive existence results for even degrees.
Zhiyan Ding, Yiding Chen, Qin Li, 2019
Stochastic Gradient Descent (SGD) plays a central role in modern machine learning. While there is extensive work on providing error upper bounds for SGD, not much is known about SGD error lower bounds. In this paper, we study the convergence of constant step-size SGD. We provide an error lower bound for SGD for potentially non-convex objective functions with Lipschitz gradients. To our knowledge, this is the first analysis of an SGD error lower bound without the strong convexity assumption. We use experiments to illustrate our theoretical results.
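As a toy illustration of the behaviour such lower bounds quantify (the objective, noise model, step sizes and iteration count below are illustrative assumptions, not the paper's setting), constant step-size SGD on a simple quadratic with noisy gradients stalls at an error floor that shrinks with the step size but never vanishes:

```python
# Toy constant step-size SGD on f(x) = 0.5*||x||^2 with additive gradient noise.
# The objective, noise level, step sizes and iteration count are illustrative
# assumptions; the point is the error floor that remains however long SGD runs.
import numpy as np

rng = np.random.default_rng(1)

def run_sgd(step, iters=50_000, dim=10, noise=1.0):
    x = np.ones(dim)
    for _ in range(iters):
        grad = x + noise * rng.standard_normal(dim)   # noisy gradient of 0.5*||x||^2
        x -= step * grad
    return np.linalg.norm(x)                          # distance to the minimizer 0

for step in (0.1, 0.01, 0.001):
    print(f"step = {step:<6} final error ~ {run_sgd(step):.3f}")
```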