INTERIOR-POINT METHODS FOR LINEAR PROGRAMMING: A GUIDED TOUR

The purpose of mathematical programming, a branch of optimization, is to minimize (or maximize) a function of several variables under a set of constraints. This is an important problem arising in many real-world situations (e.g. cost or duration minimization). When the function to optimize and its associated constraints are all linear, we speak of linear programming.

The simplex algorithm, first developed by Dantzig in 1947, is a very efficient method for solving this class of problems. It has been thoroughly studied and improved since its first appearance, and is now widely used in commercial software to solve a great variety of problems (production planning, transportation, scheduling, etc.).

However, in 1984 an article by Karmarkar introduced a new class of methods: the so-called interior-point methods. Most of the ideas underlying these methods originate in nonlinear optimization. They offer the following advantages:

* Theoretical efficiency. These methods require a number of iterations that is polynomially bounded by the size of the problem, whereas the number of iterations needed by the simplex method may grow exponentially with that size.

* Practical efficiency. Their computing times compare very well with those of the simplex method, and even beat them on some large-scale problems.

* Large-scale optimization. These methods make it possible to solve some very large problems that were previously intractable in a reasonable amount of time.

* Nonlinear optimization. These methods can be adapted to nonlinear programming, in particular to convex programming.

The purpose of this report is to give an overview of this rather new domain, providing a clear and understandable description of these methods from both a theoretical and a practical point of view.
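As a small illustration of the linear-programming setting described above, the following sketch solves a tiny LP. It assumes SciPy is available; `linprog` and its `"highs-ipm"` method (which selects the HiGHS interior-point solver) belong to SciPy, not to this report.

```python
# Minimal LP sketch (assumes SciPy is installed; not part of the report itself):
#   minimize    x0 + 2*x1
#   subject to  x0 + x1 >= 1          (rewritten as -x0 - x1 <= -1 for linprog)
#               x0, x1 >= 0           (linprog's default variable bounds)
from scipy.optimize import linprog

c = [1, 2]           # objective coefficients
A_ub = [[-1, -1]]    # inequality constraint matrix (A_ub @ x <= b_ub form)
b_ub = [-1]

# "highs-ipm" asks HiGHS to use its interior-point method rather than simplex.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")
print(res.x, res.fun)  # optimum at x ≈ [1, 0] with objective ≈ 1
```

Here the optimum lies at the vertex (1, 0): all the required weight goes on x0, which is cheaper than x1 in the objective.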