
An Oblivious Ellipsoid Algorithm for Solving a System of (In)Feasible Linear Inequalities

Published by: Jourdain Lamperski
Publication date: 2019
Language: English





The ellipsoid algorithm is a fundamental algorithm for computing a solution to a system of $m$ linear inequalities in $n$ variables, $(P): A^{\top}x \le u$, when its set of solutions has positive volume. However, when $(P)$ is infeasible, the ellipsoid algorithm has no mechanism for proving that $(P)$ is infeasible. This is in contrast to the other two fundamental algorithms for tackling $(P)$, namely the simplex method and interior-point methods, each of which can easily be implemented in a way that either produces a solution of $(P)$ or proves that $(P)$ is infeasible by producing a solution to the alternative system $(\mathit{Alt}): A\lambda = 0$, $u^{\top}\lambda < 0$, $\lambda \ge 0$. This paper develops an Oblivious Ellipsoid Algorithm (OEA) that either produces a solution of $(P)$ or produces a solution of $(\mathit{Alt})$. Depending on the dimensions and on other natural condition measures, the computational complexity of the basic OEA may be worse than, the same as, or better than that of the standard ellipsoid algorithm. We also present two modifi…
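To make the $(P)$/$(\mathit{Alt})$ duality concrete, the sketch below (a hypothetical helper, not part of the OEA) checks whether a candidate $\lambda$ solves the alternative system and hence certifies infeasibility of $A^{\top}x \le u$. Here the columns of $A$ are the inequality normals $a_i$:

```python
import numpy as np

def certifies_infeasibility(A, u, lam, tol=1e-9):
    """Check whether lam solves the alternative system
    A @ lam == 0, u @ lam < 0, lam >= 0 (up to tol), which by
    Farkas-type duality proves A^T x <= u has no solution."""
    lam = np.asarray(lam, dtype=float)
    return (np.all(np.abs(A @ lam) <= tol)   # A lambda = 0
            and u @ lam < -tol               # u^T lambda < 0
            and np.all(lam >= -tol))         # lambda >= 0

# x <= 0 together with -x <= -1 (i.e. x >= 1) is infeasible;
# lam = (1, 1) is a certificate: A lam = 0 and u^T lam = -1 < 0.
A = np.array([[1.0, -1.0]])   # columns are the inequality normals
u = np.array([0.0, -1.0])
print(certifies_infeasibility(A, u, [1.0, 1.0]))  # True
```

A $\lambda$ that fails any of the three conditions, e.g. $(1, 0)$ here, is rejected, so the check is sound against bad certificates as well.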


Read also

We propose an iterative improvement method for the Harrow-Hassidim-Lloyd (HHL) algorithm for solving a linear system of equations; it is a quantum-classical hybrid algorithm. Accuracy is essential when solving a linear system of equations, but the accuracy of the HHL algorithm is limited by the number of qubits used to express the eigenvalues of the matrix. Our iterative method improves the accuracy of the HHL solutions, surpassing the limit imposed by the number of qubits. In a practical HHL algorithm, a huge number of measurements is required to obtain good accuracy even when a sufficient number of qubits is provided for the eigenvalue expression, since the solution is statistically estimated from the measurements. Our improved iterative method can reduce the number of measurements. Moreover, the sign information for each eigenstate of the solution is lost once a measurement is made, although the sign is significant. Therefore, a naive iterative method for the HHL algorithm may slow down, especially when the solution includes wrong signs. In this paper, we propose and evaluate an improved iterative method for the HHL algorithm that is robust against this loss of sign information, in terms of the number of iterations and the computational accuracy.
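The classical skeleton behind such iterative improvement is iterative refinement: repeatedly solve for the residual and correct the current iterate. The sketch below shows that standard numerical recipe (a classical analogue, not the HHL-specific quantum procedure; in the hybrid setting the inner solve would be the noisy HHL subroutine):

```python
import numpy as np

def iterative_refinement(A, b, x0, steps=3):
    """Improve an approximate solution x0 of A x = b by solving
    for a residual correction at each step. In the hybrid setting
    described above, the inner solve is the low-accuracy subroutine."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        r = b - A @ x                   # residual of current iterate
        x = x + np.linalg.solve(A, r)   # correct by the residual solve
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = iterative_refinement(A, b, x0=[0.0, 0.0])
print(np.allclose(A @ x, b))  # True
```

With an exact inner solve one step already converges; the scheme earns its keep when each inner solve is cheap but inexact, which is exactly the HHL regime the abstract describes.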
In this paper, we develop a new algorithm combining the idea of "boosting" with first-order algorithms to approximately solve a class of (integer) linear programs (LPs) arising in general resource allocation problems. Not only can this algorithm solve LPs directly, but it can also be applied to accelerate the column generation method. As a direct solver, our algorithm achieves a provable $O(\sqrt{n/K})$ optimality gap, where $n$ is the number of variables and $K$ is the number of data duplications, bearing the same intuition as the boosting algorithm. We use numerical experiments to demonstrate the effectiveness of our algorithm and several variants.
Based on the geometric Triangle Algorithm for testing membership of a point in a convex set, we present a novel iterative algorithm for testing the solvability of a real linear system $Ax = b$, where $A$ is an $m \times n$ matrix of arbitrary rank. Let $C_{A,r}$ be the ellipsoid determined as the image of the Euclidean ball of radius $r$ under the linear map $A$. The basic procedure in our algorithm computes a point in $C_{A,r}$ that is either within $\varepsilon$ distance of $b$, or acts as a certificate proving $b \notin C_{A,r}$. Each iteration takes $O(mn)$ operations and, when $b$ is well-situated in $C_{A,r}$, the number of iterations is proportional to $\log(1/\varepsilon)$. If $Ax = b$ is solvable, the algorithm computes an approximate solution or the minimum-norm solution. Otherwise, it computes a certificate of unsolvability, or the minimum-norm least-squares solution. It is also applicable to complex input. In a computational comparison with the state-of-the-art algorithm BiCGSTAB (bi-conjugate gradient method, stabilized), the Triangle Algorithm is very competitive. In fact, when the iterates of BiCGSTAB do not converge, our algorithm can verify that $Ax = b$ is unsolvable and approximate the minimum-norm least-squares solution. The Triangle Algorithm is robust, simple to implement, and requires no preconditioner, making it attractive to practitioners as well as researchers and educators.
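The decision the basic procedure settles can be stated directly: $b \in C_{A,r}$ exactly when $Ax = b$ has a solution whose minimum-norm representative satisfies $\|x\| \le r$. The sketch below checks that condition with a dense least-squares solve (a brute-force shortcut for small instances, not the Triangle Algorithm itself, which avoids forming the full solution):

```python
import numpy as np

def in_ellipsoid(A, b, r, eps=1e-8):
    """Test b in C_{A,r} = {Ax : ||x|| <= r} by computing the
    minimum-norm least-squares solution and checking (1) that b
    is in the range of A and (2) that the solution norm is <= r."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # min-norm LS solution
    in_range = np.linalg.norm(A @ x - b) <= eps
    return in_range and np.linalg.norm(x) <= r, x

A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
inside, x = in_ellipsoid(A, b, r=2.0)
print(inside)  # x = (1, 1), ||x|| = sqrt(2) <= 2, so True
```

Shrinking $r$ below $\sqrt{2}$ flips the answer, which is the certificate case the abstract describes.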
Column generation is often used to solve multi-commodity flow problems, and a column generation program always includes a module that solves a system of linear equations. In this paper, we address three major issues in solving linear systems during the column generation procedure: (1) how to exploit the sparsity of the coefficient matrix; (2) how to reduce the size of the coefficient matrix; and (3) how to reuse the solution to a similar equation. To this end, we first analyze the sparsity of the coefficient matrices of the linear equations and find that the matrices occurring during the iterations are very sparse. Then, we present an algorithm, locSolver (localized system solver), for linear equations with sparse coefficient matrices and right-hand sides; this algorithm can reduce the number of variables. After that, we present the algorithm incSolver (incremental system solver), which exploits the similarity between the linear systems arising in successive iterations of the program. All three techniques can be used in column generation for multi-commodity problems. Preliminary numerical experiments show that incSolver is significantly faster than existing algorithms. For example, random test cases show that incSolver is at least 37 times and up to 341 times faster than the popular solver LAPACK.
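The localization idea, solving only the part of the system that the nonzero right-hand side actually touches, can be illustrated on a block-diagonal matrix (a toy stand-in for the sparse matrices the paper analyzes; locSolver itself handles general sparsity patterns):

```python
import numpy as np

# Two decoupled 2x2 blocks on the diagonal.
A1 = np.array([[2.0, 1.0], [1.0, 3.0]])
A2 = np.array([[4.0, 0.5], [0.5, 2.0]])
A = np.block([[A1, np.zeros((2, 2))],
              [np.zeros((2, 2)), A2]])

b = np.array([1.0, 2.0, 0.0, 0.0])  # RHS touches only the first block

# Localized solve: restrict to the block coupled to the nonzero
# RHS entries; the remaining variables stay zero.
x_local = np.zeros(4)
x_local[:2] = np.linalg.solve(A1, b[:2])

# Matches the full 4x4 solve at a fraction of the work.
print(np.allclose(x_local, np.linalg.solve(A, b)))  # True
```

On the very sparse matrices the paper reports, the subsystem coupled to the right-hand side can be much smaller than the full system, which is where the speedup comes from.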
With the development of robotics, there are growing needs for real-time motion planning. However, due to obstacles in the environment, the planning problem is highly non-convex, which makes real-time computation difficult with existing non-convex optimization algorithms. This paper introduces the convex feasible set algorithm (CFS), a fast algorithm for non-convex optimization problems that have convex costs and non-convex constraints. The idea is to find a convex feasible set for the original problem and iteratively solve a sequence of subproblems using the convex constraints. The feasibility and convergence of the proposed algorithm are proved in the paper, and its application to motion planning for mobile robots is discussed. Simulations demonstrate the effectiveness of the proposed algorithm.
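A minimal one-dimensional illustration of the CFS idea (a toy example, not the paper's implementation): the cost stays convex, the non-convex "avoid the obstacle interval" constraint is replaced by the convex half-line on the reference point's side of the obstacle, and the resulting convex subproblem is solved in closed form:

```python
def cfs_step_1d(goal, obstacle, x_ref):
    """One CFS-style step in 1D: cost (x - goal)^2, non-convex
    constraint x not in the open interval `obstacle`. The convex
    feasible set is the half-line on x_ref's side of the obstacle,
    and minimizing the quadratic over it reduces to a clamp."""
    lo, hi = obstacle
    if x_ref <= lo:              # convex feasible set: x <= lo
        return min(goal, lo)
    else:                        # convex feasible set: x >= hi
        return max(goal, hi)

# Robot at x=0, goal at x=5, obstacle occupying (1, 3): the convex
# subproblem keeps the robot on the near side, yielding x = 1.
print(cfs_step_1d(goal=5.0, obstacle=(1.0, 3.0), x_ref=0.0))  # 1.0
```

Iterating from an updated reference point reproduces the paper's scheme of solving a sequence of convex subproblems; the solution returned is the best point within the chosen convex subset, not necessarily the global optimum of the non-convex problem.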