
Computing the smallest fixed point of order-preserving nonexpansive mappings arising in positive stochastic games and static analysis of programs

Published by: Assale Adje
Publication date: 2013
Paper language: English





The problem of computing the smallest fixed point of an order-preserving map arises in the study of zero-sum positive stochastic games. It also arises in static analysis of programs by abstract interpretation. In this context, the discount rate may be negative. We characterize the minimality of a fixed point in terms of the nonlinear spectral radius of a certain semidifferential. We apply this characterization to design a policy iteration algorithm, which applies to the case of finite state and action spaces. The algorithm returns a locally minimal fixed point, which turns out to be globally minimal when the discount rate is nonnegative.
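
As a rough illustration of the objects in this abstract (not the paper's policy iteration algorithm), the Python sketch below runs plain value iteration from below on a hypothetical order-preserving Bellman-type operator with a nonnegative discount rate; under these assumptions the iterates, started at a lower bound such as 0, increase monotonically toward the smallest fixed point. All states, costs, and the discount factor are made up.

    import numpy as np

    # Hypothetical 2-state, 2-action operator
    # f(x)_i = min_a ( cost[i, a] + gamma * sum_j P[i, a, j] * x[j] ).
    cost = np.array([[1.0, 2.0],
                     [0.5, 1.5]])
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.7, 0.3]]])
    gamma = 0.9  # nonnegative discount rate; f is order-preserving

    def f(x):
        return (cost + gamma * (P @ x)).min(axis=1)

    # Iterate x <- f(x) from the zero vector, a lower bound of the
    # smallest fixed point: the iterates increase monotonically to it.
    x = np.zeros(2)
    for _ in range(10000):
        x_next = f(x)
        if np.max(np.abs(x_next - x)) < 1e-12:
            break
        x = x_next
    print(x, np.max(np.abs(f(x) - x)))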




Read also

Yizun Lin, Yuesheng Xu (2021)
We estimate convergence rates for fixed-point iterations of a class of nonlinear operators which are partially motivated from solving convex optimization problems. We introduce the notion of the generalized averaged nonexpansive (GAN) operator with a positive exponent, and provide a convergence rate analysis of the fixed-point iteration of the GAN operator. The proposed generalized averaged nonexpansiveness is weaker than the averaged nonexpansiveness while stronger than nonexpansiveness. We show that the fixed-point iteration of a GAN operator with a positive exponent converges to its fixed point and estimate the local convergence rate (the convergence rate in terms of the distance between consecutive iterates) according to the range of the exponent. We prove that the fixed-point iteration of a GAN operator with a positive exponent strictly smaller than 1 can achieve an exponential global convergence rate (the convergence rate in terms of the distance between an iterate and the solution). Furthermore, we establish the global convergence rate of the fixed-point iteration of a GAN operator, depending on both the exponent of generalized averaged nonexpansiveness and the exponent of the Hölder regularity, if the GAN operator is also Hölder regular. We then apply the established theory to three types of convex optimization problems that appear often in data science to design fixed-point iterative algorithms for solving these optimization problems and to analyze their convergence properties.
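
As a hedged illustration of the kind of fixed-point iteration and local rate discussed above (not the paper's own examples), the sketch below runs the forward-backward (proximal gradient) operator for a LASSO problem, a standard averaged nonexpansive operator, and prints the distance between consecutive iterates. All problem data and parameters are made up.

    import numpy as np

    def soft_threshold(x, t):
        # proximity operator of t*||.||_1, a standard nonexpansive map
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 10))
    b = rng.normal(size=20)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const.

    def T(x):
        # forward-backward operator for 0.5*||Ax-b||^2 + lam*||x||_1;
        # with step <= 1/L it is an averaged nonexpansive operator
        lam = 0.1
        return soft_threshold(x - step * A.T @ (A @ x - b), step * lam)

    x = np.zeros(10)
    for n in range(1, 201):
        x_next = T(x)
        if n % 50 == 0:
            print(n, np.linalg.norm(x_next - x))  # local rate measure
        x = x_next
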
Safeer Hussain Khan (2018)
In this paper, we first introduce an iterative process in modular function spaces and then extend the idea of a $\lambda$-firmly nonexpansive mapping from Banach spaces to modular function spaces. We call such mappings $(\lambda,\rho)$-firmly nonexpansive mappings. We combine the two ideas to approximate fixed points of $(\lambda,\rho)$-firmly nonexpansive mappings using the above-mentioned iterative process in modular function spaces. We give an example to validate our results.
We show that the typical nonexpansive mapping on a small enough subset of a CAT($\kappa$)-space is a contraction in the sense of Rakotch. By typical we mean that the set of nonexpansive mappings without this property is a $\sigma$-porous set and therefore also of the first Baire category. Moreover, we exhibit metric spaces where strict contractions are not dense in the space of nonexpansive mappings. In some of these cases we show that all continuous self-mappings nevertheless have a fixed point.
This paper investigates optimal error bounds and convergence rates for general Mann iterations for computing fixed points of nonexpansive maps in normed spaces. We look for iterations that achieve the smallest fixed-point residual after $n$ steps, by minimizing a worst-case bound $\|x^n - Tx^n\| \le R_n$ derived from a nested family of optimal transport problems. We prove that this bound is tight, so that minimizing $R_n$ yields optimal iterations. Inspired by numerical results, we identify iterations that attain the rate $R_n = O(1/n)$, which we also show to be the best possible. In particular, we prove that the classical Halpern iteration achieves this optimal rate for several alternative stepsizes, and we determine analytically the optimal stepsizes that attain the smallest worst-case residuals at every step $n$, with a tight bound $R_n \approx \frac{4}{n+4}$. We also determine the optimal Halpern stepsizes for affine nonexpansive maps, for which we get exactly $R_n = \frac{1}{n+1}$. Finally, we show that the best rate for the classical Krasnosel'skiĭ-Mann iteration is $\Omega(1/\sqrt{n})$, and we present numerical evidence suggesting that even after introducing inertial terms one cannot reach the faster rate $O(1/n)$.
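
For concreteness, here is a minimal Python sketch of the classical Halpern iteration x_{n+1} = beta_n x_0 + (1 - beta_n) T x_n on a toy nonexpansive map (a plane rotation, with fixed point at the origin), printing the fixed-point residual ||x_n - T x_n||. The stepsize beta_n = 1/(n+2) is one common choice; the optimal stepsizes the abstract refers to are derived analytically and need not coincide with it.

    import numpy as np

    theta = 0.1
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # rotation: nonexpansive

    def T(x):
        return R @ x  # unique fixed point at the origin

    x0 = np.array([1.0, 0.0])
    x = x0.copy()
    for n in range(200):
        beta = 1.0 / (n + 2)
        x = beta * x0 + (1 - beta) * T(x)  # Halpern anchoring step
        if (n + 1) % 50 == 0:
            print(n + 1, np.linalg.norm(x - T(x)))  # fixed-point residual
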
We consider equation systems of the form X_1 = f_1(X_1, ..., X_n), ..., X_n = f_n(X_1, ..., X_n) where f_1, ..., f_n are polynomials with positive real coefficients. In vector form we denote such an equation system by X = f(X) and call f a system of positive polynomials, or SPP for short. Equation systems of this kind appear naturally in the analysis of stochastic models such as stochastic context-free grammars (with numerous applications to natural language processing and computational biology), probabilistic programs with procedures, web-surfing models with back buttons, and branching processes. The least nonnegative solution mu f of an SPP equation X = f(X) is of central interest for these models. Etessami and Yannakakis have suggested a particular version of Newton's method to approximate mu f. We extend a result of Etessami and Yannakakis and show that Newton's method starting at 0 always converges to mu f. We obtain lower bounds on the convergence speed of the method. For so-called strongly connected SPPs we prove the existence of a threshold k_f such that for every i >= 0 the (k_f + i)-th iteration of Newton's method has at least i valid bits of mu f. The proof yields an explicit bound for k_f depending only on syntactic parameters of f. We further show that for arbitrary SPP equations Newton's method still converges linearly: there are k_f >= 0 and alpha_f > 0 such that for every i >= 0 the (k_f + alpha_f i)-th iteration of Newton's method has at least i valid bits of mu f. The proof yields an explicit bound for alpha_f; the bound is exponential in the number of equations, but we also show that it is essentially optimal. Constructing a bound for k_f is still an open problem. Finally, we also provide a geometric interpretation of Newton's method for SPPs.
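
To make the iteration concrete, the following sketch applies Newton's method, started at 0 as in the abstract, to a hypothetical one-variable SPP f(X) = 0.5 X^2 + 0.5 (a simple branching-process-style equation, not taken from the paper, with least nonnegative solution mu f = 1). Here the error exactly halves per step, i.e. one valid bit per iteration, consistent with the linear-convergence statement above.

    # Newton's method for X = f(X) with f(X) = 0.5*X^2 + 0.5
    # (hypothetical example); least nonnegative solution mu f = 1.
    def f(x):
        return 0.5 * x**2 + 0.5

    def f_prime(x):
        return x

    x = 0.0  # start at 0, as in the abstract
    for k in range(10):
        # Newton step: x <- x + (1 - f'(x))^{-1} * (f(x) - x)
        x += (f(x) - x) / (1.0 - f_prime(x))
        print(k + 1, x, 1.0 - x)  # error halves: one valid bit per step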