We address the problem of computing the smallest symplectic eigenvalues and the corresponding eigenvectors of symmetric positive-definite matrices in the sense of Williamson's theorem. The problem is formulated as the minimization of a trace cost function over the symplectic Stiefel manifold. We first investigate several theoretical aspects of this optimization problem, such as characterizing the sets of critical points, saddle points, and global minimizers, and proving that non-global local minimizers do not exist. Based on our recent results on constructing Riemannian structures on the symplectic Stiefel manifold and the associated optimization algorithms, we then propose solving the symplectic eigenvalue problem in the framework of Riemannian optimization. Moreover, a connection between the sought solution and the eigenvalues of a special class of Hamiltonian matrices is discussed. Numerical examples are presented.
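To make the connection with Hamiltonian matrices concrete, the sketch below (an illustration, not the paper's algorithm; the helper name `symplectic_eigenvalues` is hypothetical) computes the symplectic eigenvalues $d_1 \le \dots \le d_n$ of a symmetric positive-definite $M$ from the purely imaginary spectrum $\pm i d_j$ of the Hamiltonian matrix $JM$, assuming the standard symplectic form $J$ with blocks $[0, I_n; -I_n, 0]$. The minimum of the trace cost over $\mathrm{Sp}(2p,2n)$ then equals $2(d_1+\dots+d_p)$.

```python
# Minimal numerical sketch (not the paper's algorithm): the symplectic
# eigenvalues d_1 <= ... <= d_n of an SPD matrix M are the moduli of the
# purely imaginary eigenvalues +/- i d_j of the Hamiltonian matrix J M,
# with J the standard symplectic form.
import numpy as np

def symplectic_eigenvalues(M):
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    ev = np.linalg.eigvals(J @ M)        # purely imaginary pairs +/- i d_j
    d = np.sort(np.abs(ev.imag))         # each d_j appears twice
    return d[::2]                        # keep one representative per pair

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6.0 * np.eye(6)            # a random SPD matrix, n = 3
d = symplectic_eigenvalues(M)
p = 2
# The trace minimum over Sp(2p, 2n) equals twice the sum of the p smallest
# symplectic eigenvalues: 2 * (d_1 + ... + d_p).
print(d, 2.0 * d[:p].sum())
```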
The symplectic Stiefel manifold, denoted by $\mathrm{Sp}(2p,2n)$, is the set of linear symplectic maps between the standard symplectic spaces $\mathbb{R}^{2p}$ and $\mathbb{R}^{2n}$. When $p=n$, it reduces to the well-known set of $2n\times 2n$ symplectic matrices. Optimization problems on $\mathrm{Sp}(2p,2n)$ find applications in various areas, such as optics, quantum physics, numerical linear algebra, and model order reduction of dynamical systems. The purpose of this paper is to propose and analyze gradient-descent methods on $\mathrm{Sp}(2p,2n)$, where the notion of gradient stems from a Riemannian metric. We consider a novel Riemannian metric on $\mathrm{Sp}(2p,2n)$ akin to the canonical metric of the (standard) Stiefel manifold. In order to perform a feasible step along the antigradient, we develop two types of search strategies: one is based on quasi-geodesic curves, and the other on the symplectic Cayley transform. The resulting optimization algorithms are proved to converge globally to critical points of the objective function. Numerical experiments illustrate the efficiency of the proposed methods.
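The toy sketch below (square case $p=n$ only; the helper names `standard_J` and `cayley_step` are illustrative and do not reproduce the paper's $\mathrm{Sp}(2p,2n)$ retraction) shows why a Cayley-type update preserves feasibility: for a Hamiltonian matrix $W$, i.e. $JW$ symmetric, the Cayley transform $(I-tW)^{-1}(I+tW)$ is symplectic, so multiplying a symplectic $X$ by it keeps $X^\top J X = J$.

```python
# Toy feasibility check (square case p = n, not the paper's Sp(2p,2n)
# retraction): for a Hamiltonian W, i.e. J W symmetric, the Cayley
# transform cay(t W) = (I - t W)^{-1} (I + t W) is symplectic, so the
# update X <- cay(t W) X stays symplectic.
import numpy as np

def standard_J(n):
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-np.eye(n), np.zeros((n, n))]])

def cayley_step(X, W, t):
    I = np.eye(X.shape[0])
    return np.linalg.solve(I - t * W, (I + t * W) @ X)

n = 3
J = standard_J(n)
rng = np.random.default_rng(1)
S = rng.standard_normal((2 * n, 2 * n))
W = J @ (S + S.T)                    # J times a symmetric matrix is Hamiltonian
X = np.eye(2 * n)                    # the identity is symplectic
X_new = cayley_step(X, W, 0.1)
print(np.linalg.norm(X_new.T @ J @ X_new - J))   # ~ machine precision
```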
We study projection-free methods for constrained Riemannian optimization. In particular, we propose the Riemannian Frank-Wolfe (RFW) method. We analyze non-asymptotic convergence rates of RFW to an optimum for (geodesically) convex problems, and to a critical point for nonconvex objectives. We also present a practical setting under which RFW can attain a linear convergence rate. As a concrete example, we specialize RFW to the manifold of positive definite matrices and apply it to two tasks: (i) computing the matrix geometric mean (Riemannian centroid); and (ii) computing the Bures-Wasserstein barycenter. Both tasks involve geodesically convex interval constraints, for which we show that the Riemannian linear oracle required by RFW admits a closed-form solution; this result may be of independent interest. We further specialize RFW to the special orthogonal group and show that here too, the Riemannian linear oracle can be solved in closed form. Here, we describe an application to the synchronization of data matrices (Procrustes problem). We complement our theoretical results with an empirical comparison of RFW against state-of-the-art Riemannian optimization methods and observe that RFW performs competitively on the task of computing Riemannian centroids.
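As a one-dimensional illustration of the RFW template, the sketch below runs the method on the manifold of positive reals (the $1\times 1$ SPD case) to compute a constrained geometric mean; the helper name `rfw_geometric_mean` and the step size $2/(k+2)$ are illustrative choices, not the paper's implementation. In this toy case the linear oracle over the interval $[L,U]$ is indeed solved in closed form by an endpoint.

```python
# One-dimensional RFW sketch on the manifold of positive reals (the 1x1
# SPD case): compute the geometric mean (Riemannian centroid) of a_i
# subject to the interval constraint [L, U].  Names and the step size
# 2/(k+2) are illustrative choices, not the paper's implementation.
import numpy as np

def rfw_geometric_mean(a, L, U, iters=1000):
    x = float(np.clip(a[0], L, U))           # feasible starting point
    for k in range(iters):
        # Riemannian gradient of f(x) = sum_i dist(x, a_i)^2 on (0, inf),
        # with dist(x, y) = |log(y/x)|:  grad f(x) = 2 x sum_i log(x / a_i)
        g = 2.0 * x * np.sum(np.log(x / a))
        # Linear oracle: minimize <grad f(x), Log_x(z)>_x over z in [L, U].
        # The objective is linear in log z, so the solution is an endpoint.
        z = L if g > 0 else U
        # Frank-Wolfe step along the geodesic from x toward z:
        eta = 2.0 / (k + 2.0)
        x = x ** (1.0 - eta) * z ** eta
    return x

a = np.array([1.0, 4.0, 16.0])
print(rfw_geometric_mean(a, L=0.5, U=10.0))   # approaches the geometric mean 4.0
```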
We revisit classical eigenvalue inequalities due to Buser, Cheng, and Gromov on closed Riemannian manifolds, and prove t
We introduce in this paper a manifold optimization framework that utilizes semi-Riemannian structures on the underlying smooth manifolds. Unlike in Riemannian geometry, where each tangent space is equipped with a positive definite inner product, a semi-Riemannian manifold allows the metric tensor to be indefinite on each tangent space, i.e., to possess both positive and negative definite subspaces. Differential geometric objects such as geodesics and parallel transport can be defined on non-degenerate semi-Riemannian manifolds as well, and they can be carefully leveraged to adapt Riemannian optimization algorithms to the semi-Riemannian setting. In particular, we discuss the metric independence of manifold optimization algorithms, and we illustrate that the weaker but more general semi-Riemannian geometry often suffices for the purpose of optimizing smooth functions on smooth manifolds in practice.
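To make the indefinite-metric idea concrete, the toy snippet below (an illustration, not the paper's framework) raises a differential with the inverse of the Minkowski metric $G=\mathrm{diag}(1,-1)$ on $\mathbb{R}^2$, which plays the role of the semi-Riemannian gradient; unlike in the positive definite case, its negative need not be a descent direction, which is one reason the adaptation requires care.

```python
# Toy illustration (not the paper's framework): on R^2 with the indefinite
# Minkowski metric G = diag(1, -1), the semi-Riemannian gradient is obtained
# by raising the differential with G^{-1}, exactly as the Riemannian gradient
# uses a positive definite metric.  Note the sign flip in the second
# component: -grad need not be a descent direction here.
import numpy as np

G = np.diag([1.0, -1.0])                 # indefinite metric tensor
Ginv = np.linalg.inv(G)

def f(x):                                # a smooth test function
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def differential(x):                     # Euclidean gradient, i.e. df as a vector
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])

x = np.array([3.0, 3.0])
df = differential(x)
semi_grad = Ginv @ df                    # = (df_1, -df_2)
print(df, semi_grad)
```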
In this paper, we formulate the Load Flow (LF) problem in radial electricity distribution networks as an unconstrained Riemannian optimization problem, consisting of two manifolds, and we consider alternative retractions and initialization options. Our contribution is a novel LF solution method, which we show belongs to the family of Riemannian approximate Newton methods, guaranteeing monotonic descent and a local superlinear convergence rate. To the best of our knowledge, this is the first exact LF solution method employing Riemannian optimization. Extensive numerical comparisons on several test networks illustrate that the proposed method outperforms other Riemannian optimization methods (Gradient Descent, Newton's), and achieves performance comparable to the traditional Newton-Raphson method, while additionally offering a convergence guarantee. We also consider an approximate LF solution obtained by the first iteration of the proposed method, and we show that it significantly outperforms other approximate solutions in the LF literature. Lastly, we derive an interesting comparison with the well-known Backward-Forward Sweep method.
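For context on the Newton-Raphson baseline mentioned above, here is a minimal load-flow sketch on a hypothetical two-bus network (one slack bus and one PQ bus); the line impedance, the specified injections, and the finite-difference Jacobian are illustrative choices, not taken from the paper, and this is the classical method rather than the proposed Riemannian one.

```python
# Minimal Newton-Raphson load flow on a hypothetical two-bus network (slack
# bus + one PQ bus), shown only as the classical baseline the abstract
# compares against; impedance, injections, and the finite-difference
# Jacobian are illustrative choices.
import numpy as np

y = 1.0 / (0.02 + 0.08j)                 # line admittance (p.u.)
Y = np.array([[y, -y], [-y, y]])         # bus admittance matrix
V1 = 1.0 + 0.0j                          # slack bus voltage
P2, Q2 = -0.8, -0.4                      # specified injections at bus 2 (a load)

def mismatch(x):
    theta2, v2 = x
    V2 = v2 * np.exp(1j * theta2)
    S2 = V2 * np.conj(Y[1, 0] * V1 + Y[1, 1] * V2)    # complex power injected at bus 2
    return np.array([P2 - S2.real, Q2 - S2.imag])

x = np.array([0.0, 1.0])                 # flat start: angle 0, magnitude 1
h = 1e-6
for _ in range(10):
    F = mismatch(x)
    Jac = np.column_stack([(mismatch(x + h * e) - F) / h for e in np.eye(2)])
    x = x - np.linalg.solve(Jac, F)      # Newton-Raphson update
print(x, mismatch(x))                    # converged angle/magnitude and residuals
```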