TOPICS IN CONVEX OPTIMIZATION: INTERIOR-POINT METHODS, CONIC DUALITY AND APPROXIMATIONS

Optimization is a scientific discipline that lies at the boundary between pure and applied mathematics. Indeed, while some of its developments involve rather theoretical concepts, its most successful algorithms are heavily used by numerous companies to solve scheduling and design problems on a daily basis.

Our research started with the study of the conic formulation of convex optimization problems. This approach was already studied in the seventies but has recently gained renewed interest due to the development of a new class of algorithms called interior-point methods. This setting is able to exploit the two most important characteristics of convexity:

- a very rich duality theory (the existence of a dual problem that is strongly related to the primal problem, with a very symmetric formulation),
- the ability to solve these problems efficiently, from both the theoretical (polynomial algorithmic complexity) and the practical (implementations allowing the resolution of large-scale problems) points of view.

Most of the research in this area has involved so-called self-dual cones, for which the dual problem has exactly the same structure as the primal: the most famous classes of convex optimization problems (linear optimization, convex quadratic optimization and semidefinite optimization) belong to this category. We made several contributions in this field:

- a survey of interior-point methods for linear optimization, with an emphasis on the fundamental principles that lie behind the design of these algorithms,
- a computational study of a linear approximation method for convex quadratic optimization (more precisely, the second-order cone that can be used in the formulation of quadratic problems is replaced by a polyhedral approximation whose accuracy can be guaranteed a priori; a toy illustration of this idea is sketched below),
- an application of semidefinite optimization to classification, whose principle consists in separating different classes of patterns using ellipsoids defined in the feature space (this approach was successfully applied to the prediction of student grades).

However, our research focussed on a much less studied category of convex problems, one that does not rely on self-dual cones, i.e. structured problems whose dual is formulated very differently from the primal. We studied in particular

- geometric optimization, developed in the late sixties, which possesses numerous applications in the field of engineering (entropy optimization, used in information theory, also belongs to this class of problems),
- l_p-norm optimization, a generalization of linear and convex quadratic optimization, which allows the formulation of constraints built around expressions of the form |ax+b|^p (where p is a fixed exponent strictly greater than 1).

For each of these classes of problems, we introduced a new type of convex cone that made their formulation as standard conic problems possible. This allowed us to derive greatly simplified proofs of the classical duality results pertaining to these problems, notably weak duality (a mere consequence of convexity) and the absence of a duality gap (a strong duality property that holds without any constraint qualification, which is not true in the general convex case). We also uncovered a very surprising result stating that geometric optimization can be viewed as a limit case of l_p-norm optimization.
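For reference, the conic formulations discussed above all fit the standard primal-dual conic pair recalled below; the notation ($c$, $A$, $b$, the cone $\mathcal{C}$ and its dual $\mathcal{C}^*$) is generic textbook notation, not taken from this work, and the short derivation is the classical weak duality argument rather than one of our results:

\[
\text{(P)}\quad \inf_{x}\; c^T x \ \ \text{s.t.}\ \ Ax = b,\; x \in \mathcal{C},
\qquad\qquad
\text{(D)}\quad \sup_{y,\,s}\; b^T y \ \ \text{s.t.}\ \ A^T y + s = c,\; s \in \mathcal{C}^*,
\]
where $\mathcal{C}^* = \{\, s : s^T x \ge 0 \ \text{for all } x \in \mathcal{C} \,\}$ is the dual cone. Weak duality is then immediate: for any pair of feasible solutions,
\[
c^T x - b^T y \;=\; (A^T y + s)^T x - (Ax)^T y \;=\; s^T x \;\ge\; 0 .
\]
The absence of a duality gap, by contrast, is the part that fails in general and that the conic formulations introduced above establish for geometric and l_p-norm optimization.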
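As a toy illustration of the polyhedral approximation mentioned in the computational study above, the following Python sketch replaces the two-dimensional second-order cone by K half-spaces. This is a deliberately naive construction chosen only to convey the idea of an a priori accuracy guarantee; the approximation actually studied is more elaborate and more economical, and the function names, sampling scheme and Monte Carlo check below are ours.

    # Naive sketch: replace the 2-D second-order cone
    #     K2 = { (x1, x2, t) : sqrt(x1^2 + x2^2) <= t }
    # by K tangent half-spaces
    #     x1*cos(a_k) + x2*sin(a_k) <= t,   a_k = 2*pi*k/K.
    # Every point of K2 satisfies these inequalities, and conversely any point
    # satisfying all of them has sqrt(x1^2 + x2^2) <= t / cos(pi/K), so the
    # relative accuracy 1/cos(pi/K) - 1 is known a priori from K alone.
    import numpy as np

    def soc_outer_halfspaces(K):
        """Rows (cos a_k, sin a_k, -1) of a matrix G such that
        G @ (x1, x2, t) <= 0 describes the polyhedral outer approximation."""
        angles = 2.0 * np.pi * np.arange(K) / K
        return np.column_stack([np.cos(angles), np.sin(angles), -np.ones(K)])

    def empirical_accuracy(K, trials=200_000, seed=0):
        """Largest norm of a random point accepted by the approximation with
        t = 1, compared with the a priori bound 1 / cos(pi / K)."""
        G = soc_outer_halfspaces(K)
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.2, 1.2, size=(trials, 2))      # candidate (x1, x2)
        pts = np.column_stack([x, np.ones(trials)])        # append t = 1
        accepted = np.all(G @ pts.T <= 1e-12, axis=0)
        return np.linalg.norm(x[accepted], axis=1).max(), 1.0 / np.cos(np.pi / K)

    if __name__ == "__main__":
        for K in (8, 32, 128):
            observed, bound = empirical_accuracy(K)
            print(f"K = {K:3d}:  max accepted norm {observed:.6f}  <=  bound {bound:.6f}")

Increasing K tightens the guarantee at the cost of more linear constraints, which is the trade-off a linear approximation of convex quadratic optimization has to manage.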
Encouraged by the similarities we observed, we developed a general framework that encompasses these two classes of problems and unifies all the previously obtained conic formulations.

We also turned our attention to the design of interior-point methods to solve these problems. The theory of polynomial algorithms for convex optimization developed by Nesterov and Nemirovsky asserts that the main ingredient for these methods is a computable self-concordant barrier function for the corresponding cones. We were able to define such a barrier function in the case of l_p-norm optimization (whose parameter, the main factor determining the algorithmic complexity of the method, is proportional to the number of variables in the formulation and independent of p) as well as in the case of the general framework mentioned above.

Finally, we contributed a survey of the self-concordancy property, improving some useful results about the value of the complexity parameter for certain categories of barrier functions and providing some insight into why the most commonly adopted definition of self-concordant functions is the best possible.
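For context, we recall the standard definitions from the Nesterov and Nemirovsky theory referred to above; this is a textbook reminder with generic notation ($F$, $\nu$, $h$), not one of the results of this work. A convex function $F$, three times differentiable on the interior of the cone, is self-concordant when
\[
\bigl|D^3F(x)[h,h,h]\bigr| \;\le\; 2\,\bigl(D^2F(x)[h,h]\bigr)^{3/2}
\qquad \text{for all admissible } x \text{ and directions } h,
\]
and it is a $\nu$-self-concordant barrier when, in addition,
\[
\bigl(DF(x)[h]\bigr)^2 \;\le\; \nu\, D^2F(x)[h,h].
\]
A path-following interior-point method based on such a barrier drives the duality gap below $\varepsilon$ within $O\bigl(\sqrt{\nu}\,\log(1/\varepsilon)\bigr)$ Newton iterations, which is why the parameter $\nu$ is the main factor in the algorithmic complexity mentioned above; the inequality with constant 2 is the commonly adopted definition alluded to in the last paragraph.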