Ellipsoid method
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions over convex sets. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method finds an optimal solution in a number of steps that is polynomial in the input size.
The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function.
History
The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin). As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time solvability of linear programs. This was a notable step from a theoretical perspective: the standard algorithm for solving linear problems at the time was the simplex algorithm, whose running time is typically modest on practical instances, but for which families of examples exist on which it takes time exponential in the size of the problem. As such, an algorithm that is guaranteed to be polynomial in all cases seemed like a theoretical breakthrough.
Khachiyan's work showed, for the first time, that linear programs can be solved in provably polynomial time. In practice, however, the algorithm is fairly slow and of little practical interest, though it provided inspiration for later work that turned out to be of much greater practical use. Specifically, Karmarkar's algorithm, an interior-point method, is much faster than the ellipsoid method in practice; it is also faster in the worst case.
The ellipsoidal algorithm allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remained important in combinatorial optimization theory for many years.[1][2][3][4] Only in the 21st century have interior-point algorithms with similar complexity properties appeared.
Description
A convex minimization problem consists of the following ingredients.
- A convex function $f_0(x): \mathbb{R}^n \to \mathbb{R}$ to be minimized over the vector $x$ (containing $n$ variables);
- Convex inequality constraints of the form $f_i(x) \leq 0$, where the functions $f_i$ are convex; these constraints define a convex set $Q$;
- Linear equality constraints of the form $h_i(x) = 0$.
We are also given an initial ellipsoid defined as

$$\mathcal{E}^{(0)} := \left\{ z \in \mathbb{R}^n \;:\; (z - x_0)^T P_{(0)}^{-1} (z - x_0) \leq 1 \right\}$$

containing a minimizer $x^*$, where $P_{(0)} \succ 0$ and $x_0$ is the center of $\mathcal{E}^{(0)}$.
Finally, we require the existence of a separation oracle for the convex set $Q$. Given a point $x \in \mathbb{R}^n$, the oracle should return one of two answers:[5]
- "The point is in ", or -
- "The point is not in , and moreover, here is a hyperplane that separates from ", that is, a vector such that for all .
The output of the ellipsoid method is either:
- Any point in the polytope $Q$ (i.e., any feasible point), or
- A proof that $Q$ is empty.
Inequality-constrained minimization of a function that is zero everywhere corresponds to the problem of simply identifying any feasible point. It turns out that any linear programming problem can be reduced to a linear feasibility problem (e.g. minimize the zero function subject to some linear inequality and equality constraints). One way to do this is by combining the primal and dual linear programs together into one program, and adding the additional (linear) constraint that the value of the primal solution is no worse than the value of the dual solution. Another way is to treat the objective of the linear program as an additional constraint, and use binary search to find the optimum value.
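For concreteness, given the standard primal-dual pair (the notation here is illustrative, not from the source)

$$\max\{c^T x : Ax \leq b,\ x \geq 0\} \qquad \text{and} \qquad \min\{b^T y : A^T y \geq c,\ y \geq 0\},$$

the combined feasibility system reads

$$Ax \leq b, \qquad A^T y \geq c, \qquad c^T x \geq b^T y, \qquad x \geq 0, \qquad y \geq 0;$$

by weak duality, any pair $(x, y)$ satisfying it consists of an optimal primal solution $x$ and an optimal dual solution $y$.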
Unconstrained minimization
At the $k$-th iteration of the algorithm, we have a point $x^{(k)}$ at the center of an ellipsoid

$$\mathcal{E}^{(k)} = \left\{ x \in \mathbb{R}^n \;:\; \left(x - x^{(k)}\right)^T P_{(k)}^{-1} \left(x - x^{(k)}\right) \leq 1 \right\}.$$

We query the cutting-plane oracle to obtain a vector $g^{(k+1)} \in \mathbb{R}^n$ such that

$$g^{(k+1)T} \left(x^* - x^{(k)}\right) \leq 0.$$

We therefore conclude that

$$x^* \in \mathcal{E}^{(k)} \cap \left\{ z \;:\; g^{(k+1)T}\left(z - x^{(k)}\right) \leq 0 \right\}.$$

We set $\mathcal{E}^{(k+1)}$ to be the ellipsoid of minimal volume containing the half-ellipsoid described above and compute $x^{(k+1)}$. The update is given by

$$x^{(k+1)} = x^{(k)} - \frac{1}{n+1} P_{(k)} \tilde{g}^{(k+1)}, \qquad P_{(k+1)} = \frac{n^2}{n^2 - 1} \left( P_{(k)} - \frac{2}{n+1} P_{(k)} \tilde{g}^{(k+1)} \tilde{g}^{(k+1)T} P_{(k)} \right),$$

where

$$\tilde{g}^{(k+1)} = \left( \frac{1}{g^{(k+1)T} P_{(k)} g^{(k+1)}} \right)^{1/2} g^{(k+1)}.$$

The stopping criterion is given by the property that

$$\sqrt{g^{(k)T} P_{(k)} g^{(k)}} \leq \epsilon \quad \Longrightarrow \quad f(x^{(k)}) - \min_z f(z) \leq \epsilon.$$
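The update above translates directly into a few lines of NumPy. The following is a minimal sketch rather than a production implementation: the names (`subgrad`, `x0`, `P0`) are illustrative, it assumes dimension $n \geq 2$ (the factor $n^2/(n^2-1)$ is undefined for $n = 1$), and it omits the numerical safeguards a robust implementation would need.

```python
import numpy as np

def ellipsoid_minimize(subgrad, x0, P0, eps=1e-6, max_iter=10_000):
    """Minimize a convex function by the ellipsoid method.

    subgrad(x) returns a subgradient of f at x; (x0, P0) define the
    initial ellipsoid {z : (z - x0)^T inv(P0) (z - x0) <= 1}, which is
    assumed to contain a minimizer.  Requires dimension n >= 2.
    """
    x = np.asarray(x0, dtype=float)
    P = np.asarray(P0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        g = np.asarray(subgrad(x), dtype=float)
        gPg = g @ P @ g
        if np.sqrt(gPg) <= eps:          # stopping criterion from above
            break
        g_tilde = g / np.sqrt(gPg)       # normalized subgradient
        Pg = P @ g_tilde
        x = x - Pg / (n + 1)             # shift the center away from the cut
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        P = 0.5 * (P + P.T)              # re-symmetrize against round-off
    return x
```

For example, `ellipsoid_minimize(lambda x: 2 * (x - a), np.zeros(2), 100.0 * np.eye(2))` converges to the minimizer `a` of $f(x) = \lVert x - a \rVert^2$, provided `a` lies in the initial ball of radius 10.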
Inequality-constrained minimization
At the $k$-th iteration of the algorithm for constrained minimization, we have a point $x^{(k)}$ at the center of an ellipsoid $\mathcal{E}^{(k)}$ as before. We must also maintain a list of values $f_{\text{best}}^{(k)}$ recording the smallest objective value of feasible iterates so far. Depending on whether or not the point $x^{(k)}$ is feasible, we perform one of two tasks:
- If $x^{(k)}$ is feasible, perform essentially the same update as in the unconstrained case, by choosing a subgradient $g_0 \in \partial f_0(x^{(k)})$ that satisfies $g_0^T \left(x^* - x^{(k)}\right) + f_0(x^{(k)}) - f_{\text{best}}^{(k)} \leq 0.$
- If $x^{(k)}$ is infeasible and violates the $j$-th constraint, update the ellipsoid with a feasibility cut. Our feasibility cut may be a subgradient $g_j \in \partial f_j(x^{(k)})$, which must satisfy $g_j^T \left(z - x^{(k)}\right) + f_j(x^{(k)}) \leq 0$ for all feasible $z$.
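Combining the two cut types, a single constrained iteration can be sketched as follows; this assumes subgradients of $f_0$ and of each $f_j$ are available as callables (the names are illustrative, not from the source), and it reuses the same minimal-volume update as in the unconstrained case.

```python
import numpy as np

def ellipsoid_step(x, P, subgrad_f0, constraints):
    """One ellipsoid iteration with objective and feasibility cuts.

    constraints is a list of (f_j, subgrad_j) pairs encoding f_j(x) <= 0.
    A violated constraint yields a feasibility cut; otherwise a
    subgradient of the objective yields an objective cut.
    """
    g = None
    for f_j, subgrad_j in constraints:
        if f_j(x) > 0:                            # j-th constraint violated
            g = np.asarray(subgrad_j(x), float)   # feasibility cut
            break
    if g is None:                                 # x is feasible
        g = np.asarray(subgrad_f0(x), float)      # objective cut
    n = x.size
    g_tilde = g / np.sqrt(g @ P @ g)              # assumes g != 0
    Pg = P @ g_tilde
    x_new = x - Pg / (n + 1)
    P_new = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return x_new, P_new
```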
Performance
The ellipsoid method is used on low-dimensional problems, such as planar location problems, where it is numerically stable. Even on "small"-sized problems, however, it suffers from numerical instability and poor performance in practice.
Nevertheless, the ellipsoid method is an important theoretical technique in combinatorial optimization. In computational complexity theory, the ellipsoid algorithm is attractive because its complexity depends on the number of columns and the digital size of the coefficients, but not on the number of rows. Only in the 21st century have interior-point algorithms with similar properties appeared.
Notes
1. Grötschel, Martin; Lovász, László; Schrijver, Alexander (1988). Geometric Algorithms and Combinatorial Optimization. Berlin, Heidelberg: Springer. ISBN 978-3-642-97881-4. OCLC 851371833.
2. L. Lovász: An Algorithmic Theory of Numbers, Graphs, and Convexity, CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986.
3. V. Chandru and M. R. Rao, Linear Programming, Chapter 31 in Algorithms and Theory of Computation Handbook, edited by M. J. Atallah, CRC Press 1999, 31-1 to 31-37.
4. V. Chandru and M. R. Rao, Integer Programming, Chapter 32 in Algorithms and Theory of Computation Handbook, edited by M. J. Atallah, CRC Press 1999, 32-1 to 32-45.
5. "MIT 6.854 Spring 2016 Lecture 12: From Separation to Optimization and Back; Ellipsoid Method". YouTube. Archived from the original on 2021-12-22. Retrieved 2021-01-03.
Further reading
- Dimitris Alevras and Manfred W. Padberg, Linear Optimization and Extensions: Problems and Solutions, Universitext, Springer-Verlag, 2001. (Problems from Padberg with solutions.)
- V. Chandru and M. R. Rao, Linear Programming, Chapter 31 in Algorithms and Theory of Computation Handbook, edited by M. J. Atallah, CRC Press 1999, 31-1 to 31-37.
- V. Chandru and M. R. Rao, Integer Programming, Chapter 32 in Algorithms and Theory of Computation Handbook, edited by M. J. Atallah, CRC Press 1999, 32-1 to 32-45.
- George B. Dantzig and Mukund N. Thapa. 1997. Linear programming 1: Introduction. Springer-Verlag.
- George B. Dantzig and Mukund N. Thapa. 2003. Linear Programming 2: Theory and Extensions. Springer-Verlag.
- L. Lovász: An Algorithmic Theory of Numbers, Graphs, and Convexity, CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986.
- Katta G. Murty, Linear Programming, Wiley, 1983.
- M. Padberg, Linear Optimization and Extensions, Second Edition, Springer-Verlag, 1999.
- Christos H. Papadimitriou and Kenneth Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Corrected republication with a new preface, Dover.
- Alexander Schrijver, Theory of Linear and Integer Programming. John Wiley & Sons, 1998. ISBN 0-471-98232-6.
External links
- EE364b, a Stanford course homepage