We study the equilibrium problem on general Riemannian manifolds. We establish results on the existence of solutions and on the convex structure of the solution set. Our approach relates the equilibrium problem to a suitable variational inequality problem on Riemannian manifolds and differs completely from the previous approaches to this topic in the literature. As applications, we obtain the corresponding results for the mixed variational inequality and the Nash equilibrium. Moreover, we formulate the proximal point algorithm for the equilibrium problem and analyze its convergence. In particular, we provide correct proofs for results claimed in J. Math. Anal. Appl. 388, 61-77, 2012 (namely, Theorems 3.5 and 4.9 there) regarding the existence of solutions to the mixed variational inequality and the domain of the resolvent for the equilibrium problem on Hadamard manifolds.
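To fix ideas, the following is a minimal Python sketch of the proximal point iteration for the equilibrium problem EP(F, K) (find x* in K with F(x*, y) >= 0 for all y in K), using the resolvent form standard on Hadamard manifolds; it is an illustration, not the paper's algorithm, and the manifold interface M (log = exp^{-1}, inner, dist) and the subproblem solver solve_regularized_ep are hypothetical placeholders.

def proximal_point_ep(F, M, solve_regularized_ep, x0, lambdas, tol=1e-8):
    """Proximal point iteration x_{k+1} = J_{lam F}(x_k): the resolvent point z
    satisfies F(z, y) - (1/lam) <exp_z^{-1} x_k, exp_z^{-1} y> >= 0 for all y in K.
    M.log(x, y) = exp_x^{-1} y, M.inner(x, u, v) is the metric at x."""
    x = x0
    for lam in lambdas:
        # Resolvent subproblem; default args freeze the current x and lam.
        def F_reg(z, y, x=x, lam=lam):
            return F(z, y) - M.inner(z, M.log(z, x), M.log(z, y)) / lam
        x_next = solve_regularized_ep(F_reg)
        if M.dist(x, x_next) < tol:  # a fixed point of the resolvent solves EP(F, K)
            return x_next
        x = x_next
    return x

The inverse-exponential term plays the role of the quadratic regularizer in the Euclidean proximal point method; in Euclidean space, where exp_z^{-1} x = x - z, the subproblem reduces to the classical regularized equilibrium problem.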
We consider optimization problems on Riemannian manifolds with equality and inequality constraints, which we call Riemannian nonlinear optimization (RNLO) problems. Although they have numerous applications, existing studies of them are limited, especially in terms of algorithms. In this paper, we propose Riemannian sequential quadratic optimization (RSQO), which uses a line-search technique with an ℓ1 penalty function, as an extension of the standard SQO algorithm for constrained nonlinear optimization in Euclidean spaces to Riemannian manifolds. We prove its global convergence to a Karush-Kuhn-Tucker point of the RNLO problem by means of parallel transport and the exponential mapping. Furthermore, we establish its local quadratic convergence by analyzing the relationship between sequences generated by RSQO and by the Riemannian Newton method. Ours is the first algorithm with both global and local convergence guarantees for constrained nonlinear optimization on Riemannian manifolds. Empirical results show that RSQO finds solutions more stably and with higher accuracy than the existing Riemannian penalty and augmented Lagrangian methods.
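As a rough illustration only (not the authors' exact RSQO), one iteration of an SQO-style step with an ℓ1 penalty line search might look as follows in Python; solve_tangent_qp, the manifold interface M (exp, inner), and the constraint lists are assumed placeholders.

def rsqo_step(M, f, grad_f, eq_cons, ineq_cons, x, B, rho, solve_tangent_qp,
              alpha0=1.0, beta=0.5, sigma=1e-4):
    """One SQO-style iteration: x is the current point, B a Hessian
    approximation on T_xM, rho the l1 penalty parameter."""
    # l1 penalty merit function: objective plus rho times total violation.
    def merit(y):
        viol = sum(abs(h(y)) for h in eq_cons)
        viol += sum(max(0.0, g(y)) for g in ineq_cons)
        return f(y) + rho * viol

    # Tangent-space QP: min_d <grad f(x), d> + 0.5 <d, B d>
    # subject to the constraints linearized at x.
    d = solve_tangent_qp(M, x, grad_f(x), B, eq_cons, ineq_cons)

    # Backtracking (Armijo-type) line search on the merit function.
    alpha, phi0 = alpha0, merit(x)
    slope = M.inner(x, grad_f(x), d)  # crude surrogate for the merit's directional derivative
    for _ in range(30):  # capped backtracking
        if merit(M.exp(x, alpha * d)) <= phi0 + sigma * alpha * slope:
            break
        alpha *= beta
    return M.exp(x, alpha * d)  # retract the accepted step onto the manifold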
In this paper, we give explicit descriptions
We develop a new Riemannian descent algorithm that relies on momentum to improve over existing first-order methods for geodesically convex optimization; by contrast, the accelerated convergence rates proved in prior work have only been shown to hold for geodesically strongly convex objective functions. We further extend our algorithm to geodesically weakly-quasi-convex objectives. Our proofs of convergence rely on a novel estimate sequence that illustrates the dependence of the convergence rate on the curvature of the manifold. We validate our theoretical results empirically on several optimization problems defined on the sphere and on the manifold of positive definite matrices.
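For concreteness, here is a hedged Python sketch of Riemannian gradient descent with momentum, in which parallel transport carries the momentum vector between tangent spaces; the manifold interface M (exp, transport) is assumed, and the fixed hyperparameters eta and gamma stand in for the paper's estimate-sequence-based schedule.

def riemannian_momentum_descent(M, grad_f, x0, eta=0.1, gamma=0.9, iters=100):
    x = x0
    v = None  # momentum vector, living in the tangent space at the current iterate
    for _ in range(iters):
        g = grad_f(x)                      # Riemannian gradient in T_xM
        v = -eta * g if v is None else gamma * v - eta * g
        x_next = M.exp(x, v)               # follow the geodesic from x along v
        v = M.transport(x, x_next, v)      # parallel-transport momentum to T_{x_next}M
        x = x_next
    return x

The transport step is what distinguishes this from the Euclidean heavy-ball update: tangent vectors at different points cannot be added directly, so the momentum must be moved along with the iterate.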
We establish necessary conditions for the existence of solutions to a class of semilinear hyperbolic problems on complete noncompact Riemannian manifolds, extending some nonexistence results for the wave operator with power nonlinearity on the whole Euclidean space. A general weight function, depending on both time and space, is allowed in front of the power nonlinearity.
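As a hedged reading of this abstract (the paper's precise hypotheses on the manifold, the weight, and the exponent are not reproduced here), a prototypical problem in this class is
\[
\partial_t^2 u - \Delta_g u = w(t,x)\,|u|^p \quad \text{on } (0,\infty)\times M,
\qquad u(0,\cdot)=u_0,\quad \partial_t u(0,\cdot)=u_1,
\]
where \(\Delta_g\) is the Laplace-Beltrami operator of the complete noncompact Riemannian manifold \((M,g)\), \(p>1\), and \(w\) is the spacetime weight; the necessary conditions rule out global solutions whenever they fail.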
We study stochastic projection-free methods for constrained optimization of smooth functions on Riemannian manifolds, i.e., with additional constraints beyond the parameter domain being a manifold. Specifically, we introduce stochastic Riemannian Frank-Wolfe (FW) methods for nonconvex and geodesically convex problems. We present algorithms for both purely stochastic optimization and finite-sum problems. For the latter, we develop variance-reduced methods, including a Riemannian adaptation of the recently proposed Spider technique. For all settings, we recover convergence rates that are comparable to the best-known rates for their Euclidean counterparts. Finally, we discuss applications to two classic tasks: the computation of the Karcher mean of positive definite matrices and of Wasserstein barycenters for multivariate normal distributions. For both tasks, stochastic FW methods yield state-of-the-art empirical performance.
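As an illustration only (not the paper's exact method), a stochastic Riemannian Frank-Wolfe loop can be sketched in Python as follows; stoch_grad (a minibatch Riemannian gradient), linear_oracle (minimizing <g, exp_x^{-1} z> over the constraint set), and the manifold interface M are assumed placeholders, and the classic 2/(k+2) step-size schedule is used for definiteness.

def stochastic_riemannian_fw(M, stoch_grad, linear_oracle, x0, iters=100):
    x = x0
    for k in range(iters):
        g = stoch_grad(x)                 # noisy Riemannian gradient at x
        z = linear_oracle(x, g)           # "FW vertex" inside the constraint set
        eta = 2.0 / (k + 2.0)
        x = M.exp(x, eta * M.log(x, z))   # move along the geodesic toward z;
                                          # feasibility holds without projection
    return x

Because each update is a geodesic convex combination of the iterate and a feasible point, the method stays inside the constraint set, which is what makes it projection-free.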