
Dimensionality reduction of SDPs through sketching

Posted by Andreas Bluhm
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





We show how to sketch semidefinite programs (SDPs) using positive maps in order to reduce their dimension. More precisely, we use Johnson-Lindenstrauss transforms to produce a smaller SDP whose solution preserves feasibility or approximates the value of the original problem with high probability. These techniques make it possible to improve both the complexity and the storage requirements. They apply to problems in which the Schatten 1-norm of the matrices specifying the SDP, and of a solution to the problem, is constant in the problem size. Furthermore, we provide some results which clarify the limitations of positive, linear sketches in this setting.
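As a rough illustration of the sketching step, the snippet below compresses the data of an SDP in standard form, minimize $\langle C, X\rangle$ subject to $\langle A_i, X\rangle = b_i$ and $X \succeq 0$, with the positive map $X \mapsto S X S^\top$ induced by a Gaussian Johnson-Lindenstrauss matrix $S$. The function name and the Gaussian choice of $S$ are our own illustrative assumptions; the paper works with general positive, linear sketches and gives the precise conditions (bounded Schatten 1-norms) under which the smaller problem preserves feasibility or approximates the value.

import numpy as np

def sketch_sdp_data(C, A_list, b, k, rng=None):
    # Compress the n x n data matrices of an SDP to k x k matrices using
    # the positive map X -> S X S^T, with S a Gaussian JL transform.
    n = C.shape[0]
    rng = np.random.default_rng() if rng is None else rng
    S = rng.standard_normal((k, n)) / np.sqrt(k)   # Johnson-Lindenstrauss matrix
    C_sk = S @ C @ S.T                             # sketched objective
    A_sk = [S @ A @ S.T for A in A_list]           # sketched constraint matrices
    return C_sk, A_sk, b, S

The sketched k x k problem can then be handed to any standard SDP solver; the paper's guarantees hold with high probability when the Schatten 1-norms of the data and of a solution stay bounded as the problem grows.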


Read also

This paper introduces a new interior point method algorithm that solves semidefinite programming (SDP) with variable size $n \times n$ and $m$ constraints in the (current) matrix multiplication time $m^{\omega}$ when $m \geq \Omega(n^2)$. Our algorithm is optimal because even finding a feasible matrix that satisfies all the constraints requires solving a linear system in $m^{\omega}$ time. Our work improves the state-of-the-art SDP solver [Jiang, Kathuria, Lee, Padmanabhan and Song, FOCS 2020], and it is the first result showing that SDP can be solved in the optimal running time. Our algorithm is based on two novel techniques: $\bullet$ maintaining the inverse of a Kronecker product using lazy updates; $\bullet$ a general amortization scheme for positive semidefinite matrices.
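The first of these techniques rests on the identity $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$, which lets a solver keep only the inverses of the small factors and assemble pieces of the large inverse lazily, on demand. A minimal numerical check of that identity (not the paper's amortized data structure):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)    # well-conditioned random factors
B = rng.standard_normal((4, 4)) + 4 * np.eye(4)

lhs = np.linalg.inv(np.kron(A, B))                 # inverse of the Kronecker product
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))  # Kronecker product of the inverses
print(np.allclose(lhs, rhs))                       # prints True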
Sampling-based motion planning algorithms such as RRT* are well-known for their ability to quickly find an initial solution and then converge to the optimal solution asymptotically. However, the convergence rate can be slow for high-dimensional planning problems, particularly for dynamical systems where the sampling space is not just the configuration space but the full state space. In this paper, we introduce the idea of using a partial-final-state-free (PFF) optimal controller in kinodynamic RRT* [1] to reduce the dimensionality of the sampling space. Instead of sampling the full state space, the proposed accelerated kinodynamic RRT*, called Kino-RRT*, only samples part of the state space, while the rest of the states are selected by the PFF optimal controller. We also propose a delayed and intermittent update of the optimal arrival time of all the edges in the RRT* tree to decrease the computational complexity of the algorithm. We tested the proposed algorithm using 4-D and 10-D state-space linear systems and showed that Kino-RRT* converges much faster than the kinodynamic RRT* algorithm.
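To make the PFF idea concrete, the toy sketch below treats a 1-D double integrator (state = position and velocity, minimum-energy control, fixed arrival time $T$). A planner in this spirit would sample only the final position; the final velocity is then the one chosen by the partial-final-state-free optimal controller, which for this simple system has a closed form. The function names and the double-integrator model are our own illustrative assumptions, not the paper's general formulation.

import numpy as np

def pff_final_velocity(x0, v0, xf, T):
    # Double integrator x' = v, v' = u with cost 0.5 * integral(u^2).
    # Final position fixed at xf, final velocity free, horizon T.
    # The transversality condition p_v(T) = 0 yields the optimal final velocity:
    return v0 - 1.5 * (x0 + v0 * T - xf) / T

def sample_node(rng, x_lo, x_hi, x0, v0, T):
    # Sample only the position coordinate; the PFF controller fills in velocity.
    xf = rng.uniform(x_lo, x_hi)
    vf = pff_final_velocity(x0, v0, xf, T)
    return np.array([xf, vf])

rng = np.random.default_rng(0)
print(sample_node(rng, -5.0, 5.0, x0=0.0, v0=1.0, T=2.0))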
Dimensionality reduction is a classical technique widely used for data analysis. One foundational instantiation is Principal Component Analysis (PCA), which minimizes the average reconstruction error. In this paper, we introduce the multi-criteria dimensionality reduction problem where we are given multiple objectives that need to be optimized simultaneously. As an application, our model captures several fairness criteria for dimensionality reduction such as our novel Fair-PCA problem and the Nash Social Welfare (NSW) problem. In Fair-PCA, the input data is divided into $k$ groups, and the goal is to find a single $d$-dimensional representation for all groups for which the minimum variance of any one group is maximized. In NSW, the goal is to maximize the product of the individual variances of the groups achieved by the common low-dimensional space. Our main result is an exact polynomial-time algorithm for the two-criterion dimensionality reduction problem when the two criteria are increasing concave functions. As an application of this result, we obtain a polynomial-time algorithm for Fair-PCA for $k=2$ groups and a polynomial-time algorithm for the NSW objective for $k=2$ groups. We also give approximation algorithms for $k>2$. Our technical contribution in the above results is to prove new low-rank properties of extreme point solutions to semidefinite programs. We conclude with experiments indicating the effectiveness of algorithms based on extreme point solutions of semidefinite programs on several real-world data sets.
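For $k=2$ the Fair-PCA objective can be written directly as an SDP: maximize $t$ subject to $\langle B_i, P\rangle \geq t$ for each group's second-moment matrix $B_i$, with $0 \preceq P \preceq I$ and $\mathrm{tr}(P) = d$; the low-rank property of extreme points proved in the paper is what makes this relaxation exact. A minimal sketch of that relaxation, assuming cvxpy is available (the helper and variable names are ours):

import cvxpy as cp
import numpy as np

def fair_pca_sdp(B1, B2, d):
    # Max-min variance SDP relaxation of Fair-PCA for two groups.
    # B1, B2: n x n second-moment (covariance) matrices of the two groups.
    n = B1.shape[0]
    P = cp.Variable((n, n), symmetric=True)   # relaxed projection matrix
    t = cp.Variable()                         # worst-group variance
    constraints = [
        P >> 0,                               # 0 <= P
        np.eye(n) - P >> 0,                   # P <= I
        cp.trace(P) == d,                     # target dimension
        cp.trace(B1 @ P) >= t,
        cp.trace(B2 @ P) >= t,
    ]
    cp.Problem(cp.Maximize(t), constraints).solve()
    return P.value, t.value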
We present a randomized primal-dual algorithm that solves the problem $\min_{x} \max_{y} y^\top A x$ to additive error $\epsilon$ in time $\mathrm{nnz}(A) + \sqrt{\mathrm{nnz}(A)\,n}/\epsilon$, for matrix $A$ with larger dimension $n$ and $\mathrm{nnz}(A)$ nonzero entries. This improves the best known exact gradient methods by a factor of $\sqrt{\mathrm{nnz}(A)/n}$ and is faster than fully stochastic gradient methods in the accurate and/or sparse regime $\epsilon \le \sqrt{n/\mathrm{nnz}(A)}$. Our results hold for $x,y$ in the simplex (matrix games, linear programming) and for $x$ in an $\ell_2$ ball and $y$ in the simplex (perceptron / SVM, minimum enclosing ball). Our algorithm combines Nemirovski's conceptual prox-method and a novel reduced-variance gradient estimator based on sampling from the difference between the current iterate and a reference point.
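For orientation, a plain deterministic instance of Nemirovski's prox-method for the simplex-simplex case (matrix games) is sketched below with entropic prox steps; the paper's speed-up comes from replacing the exact gradients $A^\top y$ and $A x$ with a reduced-variance sampled estimator, which this sketch deliberately omits.

import numpy as np

def mirror_prox_matrix_game(A, iters=2000, eta=None):
    # Deterministic mirror prox for min_x max_y y^T A x, with x and y in simplices.
    n, m = A.shape                                 # y lives in R^n, x in R^m
    eta = 1.0 / (2 * np.abs(A).max()) if eta is None else eta
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(iters):
        # extrapolation step with gradients at (x, y)
        u = x * np.exp(-eta * (A.T @ y)); u /= u.sum()
        v = y * np.exp(eta * (A @ x));    v /= v.sum()
        # update step with gradients taken at the extrapolated point (u, v)
        x = x * np.exp(-eta * (A.T @ v)); x /= x.sum()
        y = y * np.exp(eta * (A @ u));    y /= y.sum()
        x_avg += u; y_avg += v                     # guarantee holds for averaged extrapolated points
    return x_avg / iters, y_avg / iters

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))
x_bar, y_bar = mirror_prox_matrix_game(A)
print(y_bar @ A @ x_bar)                           # approximate value of the game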
In this article we introduce the use of recently developed min/max-plus techniques in order to solve the optimal attitude estimation problem in filtering for nonlinear systems on the special orthogonal (SO(3)) group. This work helps obtain computationally efficient methods for the synthesis of deterministic filters for nonlinear systems -- i.e., optimal filters which estimate the state using a related optimal control problem. The technique indicated herein is validated using a set of optimal attitude estimation example problems on SO(3).