
Inexact Newton Method for M-Tensor Equations

Published by: Hongbo Guan
Publication date: 2020
Paper language: English





We first investigate properties of M-tensor equations. In particular, we show that if the constant term of the equation is nonnegative, then finding a nonnegative solution of the equation reduces to finding a positive solution of a lower-dimensional M-tensor equation. We then propose an inexact Newton method to find a positive solution to the lower-dimensional equation and establish its global convergence. We also show that the method converges quadratically. Finally, we conduct numerical experiments to test the proposed Newton method; the results show that it performs very well numerically.
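Since the abstract only outlines the scheme, the following is a minimal sketch of an inexact Newton iteration for a third-order M-tensor equation $\mathcal{A}x^2 = b$. The GMRES forcing rule, the positivity safeguard, and the test-problem construction are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch (not the paper's exact algorithm) of an inexact Newton
# iteration for a 3rd-order M-tensor equation A x^2 = b, where
# (A x^2)_i = sum_{j,k} A[i,j,k] x_j x_k. The inner linear system is solved
# only approximately with GMRES, which is what makes the method "inexact".
import numpy as np
from scipy.sparse.linalg import gmres  # keyword is `tol` in SciPy < 1.12

def tensor_apply(A, x):
    """Compute A x^2 for a 3rd-order tensor A."""
    return np.einsum('ijk,j,k->i', A, x, x)

def jacobian(A, x):
    """Exact Jacobian of F(x) = A x^2 - b for a general 3rd-order tensor."""
    return np.einsum('ijk,k->ij', A, x) + np.einsum('ikj,k->ij', A, x)

def inexact_newton(A, b, x0, tol=1e-10, max_iter=50):
    x = x0.copy()
    for _ in range(max_iter):
        F = tensor_apply(A, x) - b
        res = np.linalg.norm(F)
        if res < tol:
            break
        # Inexact step: loose forcing tolerance tied to the current residual.
        d, _ = gmres(jacobian(A, x), -F, rtol=min(0.5, res))
        x = np.maximum(x + d, 1e-12)  # crude positivity safeguard (assumption)
    return x

# Illustrative test problem: A = s*I - B with B >= 0 and s >= rho(B),
# so A is an M-tensor and a positive b admits a positive solution.
rng = np.random.default_rng(0)
n = 10
B = rng.random((n, n, n))
A = -B
A[np.arange(n), np.arange(n), np.arange(n)] += B.sum(axis=(1, 2)).max()
b = rng.random(n) + 0.1
x = inexact_newton(A, b, np.ones(n))
print(np.linalg.norm(tensor_apply(A, x) - b))
```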


Read also

We are concerned with tensor equations whose coefficient tensor is an M-tensor. We first propose a Newton method for solving the equation with a positive constant term and establish its global and quadratic convergence. We then extend the method to solve the equation with a nonnegative constant term and establish its convergence. Finally, we conduct numerical experiments to test the proposed methods; the results show that the proposed method is quite efficient.
The last two decades witnessed increasing interest in the absolute value equation (AVE) of finding $x\in\mathbb{R}^n$ such that $Ax-|x|-b=0$, where $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^n$. In this paper, we focus on designing efficient algorithms. To this end, we reformulate the AVE as a generalized linear complementarity problem (GLCP), which, among the equivalent forms, is the most economical one in the sense that it does not increase the dimension of the variables. For solving the GLCP, we propose an inexact Douglas-Rachford splitting method which can adopt a relative error tolerance. As a consequence, in the inner iteration processes, we can employ the LSQR method [C.C. Paige and M.A. Saunders, ACM Trans. Math. Softw. (TOMS), 8 (1982), pp. 43--71] to find a qualified approximate solution for each subproblem, which makes the cost per iteration very low. We prove the convergence of the algorithm and establish its global linear rate of convergence. Comparison results with popular algorithms such as the exact generalized Newton method [O.L. Mangasarian, Optim. Lett., 1 (2007), pp. 3--8], the inexact semi-smooth Newton method [J.Y.B. Cruz, O.P. Ferreira and L.F. Prudente, Comput. Optim. Appl., 65 (2016), pp. 93--108] and the exact SOR-like method [Y.-F. Ke and C.-F. Ma, Appl. Math. Comput., 311 (2017), pp. 195--202] are reported, indicating that the proposed algorithm is very promising. Moreover, our method also extends the range of AVEs that can be solved numerically; that is, it can handle not only the case $\|A^{-1}\|<1$ commonly assumed in the existing literature, but also the case $\|A^{-1}\|=1$.
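For context, the exact generalized Newton baseline cited above (Mangasarian, 2007) is simple enough to state in a few lines. The following is a hedged sketch of that comparison method, not of the paper's inexact Douglas-Rachford scheme.

```python
# Minimal sketch of the exact generalized Newton method for the AVE
# Ax - |x| - b = 0: each step solves (A - D(x_k)) x_{k+1} = b with
# D(x) = diag(sign(x)). This is the baseline cited in the abstract.
import numpy as np

def generalized_newton_ave(A, b, x0, tol=1e-10, max_iter=100):
    x = x0.copy()
    for _ in range(max_iter):
        if np.linalg.norm(A @ x - np.abs(x) - b) < tol:
            break
        x = np.linalg.solve(A - np.diag(np.sign(x)), b)
    return x
```

Each iteration solves one linear system; convergence is guaranteed, for example, when all singular values of $A$ exceed 1, so that $\|A^{-1}\|<1$.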
For solving large-scale non-convex problems, we propose inexact variants of trust region and adaptive cubic regularization methods, which, to increase efficiency, incorporate various approximations. In particular, in addition to approximate sub-problem solves, both the Hessian and the gradient are suitably approximated. Using rather mild conditions on such approximations, we show that our proposed inexact methods achieve the same optimal worst-case iteration complexities as their exact counterparts. Our proposed algorithms, and their respective theoretical analyses, do not require knowledge of any unknowable problem-related quantities, and hence are easily implementable in practice. In the context of finite-sum problems, we then explore randomized sub-sampling methods as ways to construct the gradient and Hessian approximations and examine the empirical performance of our algorithms on some real datasets.
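To make the sub-sampling idea concrete, here is a hedged sketch of one inexact trust-region step for a finite sum $f(w)=\frac{1}{n}\sum_i f_i(w)$: gradient and Hessian are averaged over random batches, and the sub-problem is solved only approximately via the Cauchy point. The names `grad_i`, `hess_i` and the batch size are illustrative assumptions.

```python
# Hedged sketch of one inexact trust-region iteration in the spirit of the
# abstract: sub-sampled gradient/Hessian plus an approximate sub-problem
# solve (Cauchy point), rather than the paper's full algorithm.
import numpy as np

def subsample(fun_i, w, n, batch, rng):
    """Average fun_i(w, i) over a random batch of component indices."""
    idx = rng.choice(n, size=batch, replace=False)
    return sum(fun_i(w, i) for i in idx) / batch

def cauchy_point(g, H, delta):
    """Approximate minimizer of g^T s + 0.5 s^T H s over ||s|| <= delta."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    gHg = g @ H @ g
    tau = 1.0 if gHg <= 0 else min(gnorm**3 / (delta * gHg), 1.0)
    return -tau * (delta / gnorm) * g

def inexact_tr_step(grad_i, hess_i, w, n, delta, rng, batch=64):
    g = subsample(grad_i, w, n, batch, rng)   # sub-sampled gradient
    H = subsample(hess_i, w, n, batch, rng)   # sub-sampled Hessian
    return w + cauchy_point(g, H, delta)
```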
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex. Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method, thereby providing a systematic way for deriving several well-known decentralized algorithms including EXTRA (arXiv:1404.6264) and SSDA (arXiv:1702.08704). When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds. We provide experimental results that demonstrate the effectiveness of the proposed algorithm on highly ill-conditioned problems.
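As a concrete instance of a method the framework recovers, below is a minimal sketch of the EXTRA update (arXiv:1404.6264). The mixing matrix `W`, the stacked-iterate layout, and the step size are illustrative assumptions.

```python
# Hedged sketch of EXTRA for minimizing sum_i f_i(x) over a network.
# X stacks the local iterates row-wise (one row per node); W is a symmetric
# doubly stochastic mixing matrix; grad(X) returns the stacked local
# gradients, row i holding grad f_i(X[i]).
import numpy as np

def extra(W, grad, X0, alpha, num_iter=500):
    I = np.eye(W.shape[0])
    W_tilde = 0.5 * (I + W)
    X_prev, G_prev = X0, grad(X0)
    X = W @ X0 - alpha * G_prev                    # first step
    for _ in range(num_iter - 1):
        G = grad(X)
        X_next = (I + W) @ X - W_tilde @ X_prev - alpha * (G - G_prev)
        X_prev, G_prev, X = X, G, X_next
    return X
```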
Lei Yang, Kim-Chuan Toh (2021)
In this paper, we develop an inexact Bregman proximal gradient (iBPG) method based on a novel two-point inexact stopping condition, and establish the iteration complexity of $\mathcal{O}(1/k)$ as well as the convergence of the sequence under some proper conditions. To improve the convergence speed, we further develop an inertial variant of our iBPG (denoted by v-iBPG) and show that it has the iteration complexity of $\mathcal{O}(1/k^{\gamma})$, where $\gamma\geq 1$ is a restricted relative smoothness exponent. Thus, when $\gamma>1$, the v-iBPG readily improves the $\mathcal{O}(1/k)$ convergence rate of the iBPG. In addition, for the case of using the squared Euclidean distance as the kernel function, we further develop a new inexact accelerated proximal gradient (iAPG) method, which can circumvent the underlying feasibility difficulty often appearing in existing inexact conditions and inherit all desirable convergence properties of the exact APG under proper summable-error conditions. Finally, we conduct some preliminary numerical experiments for solving a relaxation of the quadratic assignment problem to demonstrate the convergence behaviors of the iBPG, v-iBPG and iAPG under different inexactness settings.
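For the squared-Euclidean-kernel case mentioned above, the exact step that the inexact methods approximate is the familiar proximal gradient step. The following hedged sketch specializes it to $g=\lambda\|x\|_1$, whose prox is soft-thresholding, and omits the paper's two-point inexact stopping rule.

```python
# Hedged illustration of the exact proximal gradient step that iBPG/iAPG
# approximate when the Bregman kernel is the squared Euclidean distance,
# specialized to the l1-regularized problem min f(x) + lam * ||x||_1.
import numpy as np

def soft_threshold(z, t):
    """Prox of t * ||.||_1 at z."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(grad_f, lam, x0, step, num_iter=200):
    x = x0.copy()
    for _ in range(num_iter):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x
```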