We first investigate properties of M-tensor equations. In particular, we show that if the constant term of the equation is nonnegative, then a nonnegative solution of the equation can be obtained by finding a positive solution of a lower-dimensional M-tensor equation. We then propose an inexact Newton method to find a positive solution of the lower-dimensional equation and establish its global convergence. We also show that the convergence rate of the method is quadratic. Finally, we report numerical experiments that test the proposed Newton method; the results show that it performs very well in practice.
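To fix notation, here is a minimal sketch of a plain (exact) Newton iteration for an order-3 M-tensor equation $\mathcal{A}x^{2} = b$; the dense NumPy layout, the symmetry assumption on the last two tensor indices, and the function name `mtensor_newton` are illustrative assumptions, and the globalized inexact variant described above is not reproduced.

```python
import numpy as np

def mtensor_newton(A, b, x0, tol=1e-10, max_iter=100):
    """Plain Newton iteration for the order-3 equation A x^2 = b.

    Assumes A is symmetric in its last two indices and x0 > 0.
    Generic sketch only, not the paper's inexact Newton method.
    """
    x = x0.copy()
    for _ in range(max_iter):
        F = np.einsum('ijk,j,k->i', A, x, x) - b   # residual F(x) = A x^2 - b
        if np.linalg.norm(F) <= tol:
            break
        J = 2.0 * np.einsum('ijk,k->ij', A, x)     # Jacobian (m-1) A x^{m-2}
        x = x - np.linalg.solve(J, F)              # exact Newton step
    return x
```

The method summarized above computes the Newton direction only inexactly and adds safeguards to obtain global convergence; those refinements are omitted from this sketch.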
We are concerned with tensor equations whose coefficient tensor is an M-tensor. We first propose a Newton method for solving the equation with a positive constant term and establish its global and quadratic convergence. Then we extend the method
The last two decades have witnessed increasing interest in the absolute value equation (AVE) of finding $x \in \mathbb{R}^n$ such that $Ax - |x| - b = 0$, where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$. In this paper, we focus our attention on de
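For concreteness, the sketch below implements a standard generalized Newton scheme for the AVE, in which each step solves $(A - D(x^k))x^{k+1} = b$ with $D(x) = \mathrm{diag}(\mathrm{sign}(x))$; it is shown only to fix notation, is not necessarily the method developed in this paper, and the function name `ave_generalized_newton` is an illustrative choice.

```python
import numpy as np

def ave_generalized_newton(A, b, x0, tol=1e-10, max_iter=100):
    """Generalized Newton iteration for Ax - |x| - b = 0.

    Each step solves (A - D(x)) x_new = b with D(x) = diag(sign(x)).
    Standard scheme, shown for illustration only.
    """
    x = x0.copy()
    for _ in range(max_iter):
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol:
            break
        x = np.linalg.solve(A - np.diag(np.sign(x)), b)
    return x
```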
For solving large-scale non-convex problems, we propose inexact variants of trust region and adaptive cubic regularization methods, which, to increase efficiency, incorporate various approximations. In particular, in addition to approximate sub-probl
We introduce a framework for designing primal methods in the decentralized optimization setting where local functions are smooth and strongly convex. Our approach consists of approximately solving a sequence of sub-problems induced by the accelera
In this paper, we develop an inexact Bregman proximal gradient (iBPG) method based on a novel two-point inexact stopping condition, and establish the iteration complexity of $\mathcal{O}(1/k)$ as well as the convergence of the sequence under some prop
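As a point of reference, an exact Bregman proximal gradient step has a closed form when the kernel is the entropy $h(x) = \sum_i x_i \log x_i$ on the positive orthant and the nonsmooth term is zero; the sketch below shows that single step, does not model the two-point inexact stopping condition, and uses the illustrative name `bpg_entropy_step`.

```python
import numpy as np

def bpg_entropy_step(grad_f, x, step):
    """One exact Bregman proximal gradient step with the entropy kernel.

    The update argmin_u <grad_f(x), u> + (1/step) * D_h(u, x) over u > 0
    has the closed form u = x * exp(-step * grad_f(x)).
    """
    return x * np.exp(-step * grad_f(x))

# Example: one step on f(x) = 0.5 * ||x||^2 starting from the all-ones vector.
x_new = bpg_entropy_step(lambda x: x, np.ones(3), step=0.1)
```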