
Preprints
 Spectrum optimization in multiuser multicarrier systems with iterative convex and nonconvex approximation methods, Paschalis Tsiaflakis, François Glineur, 2012. Link to arXiv.
 Intermediate gradient methods for smooth convex problems with inexact oracle, Olivier Devolder, François Glineur, Yurii Nesterov, 2013. Link to Optimization Online. Link to CORE Discussion Paper.
 First-order methods with inexact oracle: the strongly convex case, Olivier Devolder, François Glineur, Yurii Nesterov, 2013. Link to Optimization Online. Link to CORE Discussion Paper.
Journal articles
 Two Algorithms for Orthogonal Nonnegative Matrix Factorization with Application to Clustering, Filippo Pompili, Nicolas Gillis, P.-A. Absil, François Glineur, Neurocomputing, to appear, Elsevier. DIAL handle ; abstract.
Approximate matrix factorization techniques with both nonnegativity and orthogonality constraints, referred to as orthogonal nonnegative matrix factorization (ONMF), have been recently introduced and shown to work remarkably well for clustering tasks such as document classification. In this paper, we introduce two new methods to solve ONMF. First, we show mathematical equivalence between ONMF and a weighted variant of spherical k-means, from which we derive our first method, a simple EM-like algorithm. This also allows us to determine when ONMF should be preferred to k-means and spherical k-means. Our second method is based on an augmented Lagrangian approach. Standard ONMF algorithms typically enforce nonnegativity for their iterates while trying to achieve orthogonality at the limit (e.g., using a proper penalization term or a suitably chosen search direction). Our method works the opposite way: orthogonality is strictly imposed at each step while nonnegativity is asymptotically obtained, using a quadratic penalty. Finally, we show that the two proposed approaches compare favorably with standard ONMF algorithms on synthetic, text and image data sets.
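The EM-like alternation behind the first method can be illustrated with a minimal spherical k-means sketch: assign each data column to its closest unit-norm centroid by cosine similarity, then recompute each centroid as the normalized sum of its members. This is a plain, unweighted sketch (the paper works with a weighted variant); all names and the initialization are illustrative choices, not the authors' code.

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=50):
    """Cluster the columns of X by cosine similarity, EM-style."""
    # Normalize columns so assignment reduces to maximizing dot products.
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    # Deterministic farthest-point initialization of the centroids.
    C = Xn[:, [0]]
    for _ in range(1, k):
        far = np.argmin(np.max(C.T @ Xn, axis=0))
        C = np.hstack([C, Xn[:, [far]]])
    for _ in range(n_iter):
        labels = np.argmax(C.T @ Xn, axis=0)   # E-step: closest centroid
        for j in range(k):                     # M-step: normalized mean
            members = Xn[:, labels == j]
            if members.size:
                cj = members.sum(axis=1)
                C[:, j] = cj / (np.linalg.norm(cj) + 1e-12)
    return labels, C
```

Since each column receives exactly one label, the corresponding nonnegative assignment matrix has orthogonal rows, which is the structural link to ONMF exploited in the paper.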
 A convex formulation for informed source separation in the single channel setting, Augustin Lefèvre, François Glineur, P.-A. Absil, Neurocomputing, to appear, Elsevier. DIAL handle ; abstract.
Blind audio source separation is well-suited for the application of unsupervised techniques such as Nonnegative Matrix Factorization (NMF). It has been shown that on simple examples, it retrieves sensible solutions even in the single-channel setting, which is highly ill-posed. However, it is now widely accepted that NMF alone cannot solve single-channel source separation for real-world audio signals. Several proposals have appeared recently for systems that allow the user to control the output of NMF, by specifying additional equality constraints on the coefficients of the sources in the time-frequency domain. In this article, we show that matrix factorization problems involving these constraints can be formulated as convex problems, using the nuclear norm as a low-rank inducing penalty. We propose to solve the resulting nonsmooth convex formulation using a simple subgradient algorithm. Numerical experiments confirm that the nuclear norm penalty allows the recovery of (approximately) low-rank solutions that satisfy the additional user-imposed constraints. Moreover, for a given computational budget, we show that this algorithm matches or even outperforms state-of-the-art NMF methods in terms of the quality of the estimated sources.
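The projected-subgradient idea can be sketched on a stripped-down model: minimize a pure nuclear-norm penalty subject to equality constraints on some entries. A subgradient of the nuclear norm at X is U Vᵀ from a thin SVD; after each step the user-imposed constraints are restored by projection. This is a toy illustration under simplified assumptions (the paper's actual objective also involves the separation model), and all names are illustrative.

```python
import numpy as np

def nuclear_norm_subgrad_step(X, mask, target, lam=1.0, step=0.1):
    """One projected subgradient step for min lam*||X||_* s.t. X[mask] = target."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = X - step * lam * (U @ Vt)   # subgradient step on the nuclear norm
    X = X.copy()
    X[mask] = target                # project back onto the equality constraints
    return X
```

Iterating this step shrinks the singular values of the unconstrained part, driving the iterate toward a low-rank matrix that still satisfies the constraints.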
 Iterative Convex Approximation Based Real-Time Dynamic Spectrum Management in Multi-User Multi-Carrier Communication Systems, Paschalis Tsiaflakis, François Glineur and Marc Moonen, IEEE Signal Processing Letters, Volume 21, Issue 5 (May 2014), 535–539, IEEE Signal Processing Society. 10.1109/LSP.2014.2308978 ; DIAL handle ; abstract.
Iterative power difference balancing (IPDB) has recently been proposed as a first real-time dynamic spectrum management (RT-DSM) algorithm. It consists of a primal coordinate ascent approach where each coordinate step is performed using an exhaustive discrete grid line search. In this paper we present an iterative convex approximation based approach to perform the coordinate ascent search so as to reduce its computational complexity. By exploiting the problem structure, a closed-form solution is derived for the convex approximations. The resulting RT-DSM algorithm is referred to as fast IPDB (F-IPDB). Compared to IPDB, F-IPDB exhibits similar data rate performance with significantly reduced computational complexity, while also providing smoother final transmit spectra.
 Real-time dynamic spectrum management for multi-user multi-carrier communication systems, Paschalis Tsiaflakis, François Glineur and Marc Moonen, IEEE Transactions on Communications, to appear, IEEE Communications Society. 10.1109/TCOMM.2014.012614.130580 ; DIAL handle ; abstract.
Dynamic spectrum management is recognized as a key technique to tackle interference in multi-user multi-carrier communication systems and networks. However, existing dynamic spectrum management algorithms may not be suitable when the available computation time and compute power are limited, i.e., when a very fast responsiveness is required. In this paper, we present a new paradigm, theory and algorithm for real-time dynamic spectrum management (RT-DSM). Specifically, a RT-DSM algorithm is real-time in the sense that it can be stopped at any point in time while guaranteeing a feasible and improved solution. This is enabled by the introduction of a novel difference-of-variables (DoV) transformation and problem reformulation, for which a primal coordinate ascent approach is proposed with exact line search via a logarithmically scaled grid search. The proposed algorithm is referred to as iterative power difference balancing (IPDB). Simulations for different realistic wireline and wireless interference-limited systems demonstrate its good performance, low complexity and wide applicability under different configurations.
 A Global-Local Synthesis Approach for Large Non-Regular Arrays, Thibault Clavier, Nima Razavi-Ghods, François Glineur, David González Ovejero, Eloy de Lera Acedo, Christophe Craeye, Paul Alexander, IEEE Transactions on Antennas and Propagation, to appear, IEEE Antennas and Propagation Society. 10.1109/TAP.2013.2284816 ; DIAL handle ; abstract.
A method is proposed for the synthesis of large planar non-regular arrays assuming constant magnitude for the excitation of the elements. The approach, which combines global and local optimization, is based on distinguishing aperture-type and non-coherent parts of the array factor. The following constraints are considered: minimal separation of the antennas, maximum size of the array and fixed main-beam width. The level of the sidelobes is reduced via the minimization of a new type of flexible averaging cost function based on an Lp-norm. Possible applications of this model lie in the field of radio astronomy, satellite communications and radar systems. The proposed optimization strategy consists of three successive steps designed to be independent of each other. Starting with an equivalent continuous aperture, the first two steps act as global transformations of the radial and azimuthal positions of the elements in the initial array, while the third step performs a local optimization of individual elements of the array. This last step heavily relies on computations of the gradient of the cost function, which can be done quickly using an FFT-based procedure. The method is illustrated for several large-scale examples by considering as inputs four different types of non-regular arrays.
 First-order methods of smooth convex optimization with inexact oracle, Olivier Devolder, François Glineur, Yurii Nesterov, Mathematical Programming, to appear, Springer. 10.1007/s10107-013-0677-5 ; DIAL handle ; abstract.
We introduce the notion of inexact first-order oracle and analyze the behavior of several first-order methods of smooth convex optimization used with such an oracle. This notion of inexact oracle naturally appears in the context of smoothing techniques, Moreau–Yosida regularization, Augmented Lagrangians and many other situations. We derive complexity estimates for primal, dual and fast gradient methods, and study in particular their dependence on the accuracy of the oracle and the desired accuracy of the objective function. We observe that the superiority of fast gradient methods over the classical ones is no longer absolute when an inexact oracle is used. We prove that, contrary to simple gradient schemes, fast gradient methods must necessarily suffer from error accumulation. Finally, we show that the notion of inexact oracle allows the application of first-order methods of smooth convex optimization to solve nonsmooth or weakly smooth convex problems.
 A continuous characterization of the maximum-edge biclique problem, Nicolas Gillis, François Glineur, Journal of Global Optimization, Volume 58, Issue 3 (March 2014), 439–464, Springer. 10.1007/s10898-013-0053-2 ; DIAL handle ; abstract.
The problem of finding large complete subgraphs in bipartite graphs (that is, bicliques) is a well-known combinatorial optimization problem referred to as the maximum-edge biclique problem (MBP), and has many applications, e.g., in web community discovery, biological data analysis and text mining. In this paper, we present a new continuous characterization for MBP. Given a bipartite graph $G$, we are able to formulate a continuous optimization problem (namely, an approximate rank-one matrix factorization problem with nonnegativity constraints, R1N for short), and show that there is a one-to-one correspondence between (i) the maximum (i.e., the largest) bicliques of $G$ and the global minima of R1N, and (ii) the maximal bicliques of $G$ (i.e., bicliques not contained in any larger biclique) and the local minima of R1N. We also show that any stationary point of R1N must be close to a biclique of $G$. This allows us to design a new type of biclique finding algorithm based on the application of a block-coordinate descent scheme to R1N. We show that this algorithm, whose algorithmic complexity per iteration is proportional to the number of edges in the graph, is guaranteed to converge to a biclique and that it performs competitively with existing methods on random graphs and text mining datasets. Finally, we show how R1N is closely related to the Motzkin–Straus formalism for cliques.
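The block-coordinate descent scheme can be illustrated on plain rank-one nonnegative factorization: alternately minimize ||M − u vᵀ||² over u and over v, each block update having a closed-form nonnegative least-squares solution. This sketch omits the paper's penalty on non-edges, so it is only the structural skeleton of R1N, with illustrative names.

```python
import numpy as np

def rank_one_nmf(M, n_iter=100):
    """Block-coordinate descent for min ||M - u v^T||_F^2 with u, v >= 0.

    Applied to the biadjacency matrix of a bipartite graph, the supports
    of u and v indicate a dense bipartite block (a biclique candidate).
    """
    m, n = M.shape
    u = np.ones(m)
    v = np.ones(n)
    for _ in range(n_iter):
        # Closed-form nonnegative update of each block, the other fixed.
        u = np.maximum(0.0, M @ v) / (v @ v + 1e-12)
        v = np.maximum(0.0, M.T @ u) / (u @ u + 1e-12)
    return u, v
```

On a matrix containing an all-ones block, the iterates concentrate on that block: entries of u and v outside it go to zero.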
 Weighted Sum Rate Maximization for Downlink OFDMA with Subcarrier-pair based Opportunistic DF Relaying, Tao Wang, François Glineur, Jérôme Louvaux, Luc Vandendorpe, IEEE Transactions on Signal Processing, Volume 61, Issue 10 (15 May 2013), 2512–2524, IEEE Signal Processing Society. 10.1109/TSP.2013.2245326 ; DIAL handle ; abstract.
This paper addresses the weighted sum rate (WSR) maximization problem for downlink orthogonal frequency division multiple access (OFDMA) transmission aided by a decode-and-forward (DF) relay under a total power constraint. A novel subcarrier-pair based opportunistic DF relaying protocol is proposed. Specifically, user message bits are transmitted in two consecutive equal-duration time slots. A subcarrier in the first slot can be paired with a subcarrier in the second slot for the DF relay-aided transmission to a user. Moreover, the source and the relay can transmit simultaneously to implement beamforming at the subcarrier in the second slot. Each unpaired subcarrier in either the first or second slot is used by the source for direct transmission to a user without the relay’s assistance. A benchmark protocol, which is the same as the novel one except that the source keeps silent at the subcarrier in the second slot for the relay-aided transmission, is also considered. For each protocol, a polynomial-complexity algorithm is developed to find at least an approximately optimum resource allocation (RA) with a WSR very close to the maximum WSR. Instrumental to the algorithm design is an elegant definition of optimization variables to represent source/relay power allocation, transmission-mode selection, subcarrier pairing and assignment to users jointly, motivated by the idea of regarding the unpaired subcarriers as virtual subcarrier pairs in direct mode. With those variables, the RA problem is formulated and then solved efficiently with continuous relaxation, the dual method, and the Hungarian algorithm. The effectiveness of the RA algorithm and the protocols’ performance are illustrated by numerical experiments. It is shown that the novel protocol leads to a much greater maximum WSR than the benchmark one, especially when the relay is in close proximity to the source.
 An Efficient Sampling Method for Regression-Based Polynomial Chaos Expansion, Samih Zein, Benoît Colson, François Glineur, Communications in Computational Physics, Volume 13, Number 4 (2013), 1173–1188, Global Science Press. 10.4208/cicp.020911.200412a ; DIAL handle ; abstract.
The polynomial chaos expansion (PCE) is an efficient numerical method for performing a reliability analysis. It relates the output of a nonlinear system with the uncertainty in its input parameters using a multidimensional polynomial approximation (the so-called PCE). Numerically, such an approximation can be obtained by using a regression method with a suitable design of experiments. The cost of this approximation depends on the size of the design of experiments. If the design of experiments is large and the system is modeled with a computationally expensive FEA (Finite Element Analysis) model, the PCE approximation becomes infeasible. The aim of this work is to propose an algorithm that efficiently generates a design of experiments of a size defined by the user, in order to make the PCE approximation computationally feasible. It is an optimization algorithm that seeks to find the best design of experiments in the D-optimal sense for the PCE. This algorithm is a coupling between genetic algorithms and the Fedorov exchange algorithm. The efficiency of our approach in terms of accuracy and computational time reduction is compared with other existing methods in the case of analytical functions and finite element based functions.
 Double Smoothing Technique for Large-Scale Linearly Constrained Convex Optimization, Olivier Devolder, François Glineur, Yurii Nesterov, SIAM Journal on Optimization, Volume 22, Issue 2 (2012), 702–727, SIAM. 10.1137/110826102 ; DIAL handle ; abstract.
In this paper, we propose an efficient approach for solving a class of large-scale convex optimization problems. The problem we consider is the minimization of a convex function over a simple (possibly infinite-dimensional) convex set, under the additional constraint Au in T, where A is a linear operator and T is a convex set whose dimension is small compared to the dimension of the feasible region. In our approach, we dualize the linear constraints, solve the resulting dual problem with a purely dual gradient-type method and show how to reconstruct an approximate primal solution. Because the linear constraints have been dualized, the dual objective function typically becomes separable, and therefore easy to compute. In order to accelerate our scheme, we introduce a novel double smoothing technique that involves regularization of the dual problem to allow the use of a fast gradient method. As a result, we obtain a method with complexity O(1/epsilon log(1/epsilon)) gradient iterations, where epsilon is the desired accuracy for the primal-dual solution. Our approach covers, in particular, optimal control problems with a trajectory governed by a system of linear differential equations, where the additional constraints can, for example, force the trajectory to visit some convex sets at certain moments in time.
 On the Geometric Interpretation of the Nonnegative Rank, Nicolas Gillis, François Glineur, Linear Algebra and its Applications, Volume 437, Issue 11 (1 December 2012), 2685–2712, Elsevier. 10.1016/j.laa.2012.06.038 ; DIAL handle ; abstract.
The nonnegative rank of a nonnegative matrix is the minimum number of nonnegative rank-one factors needed to reconstruct it exactly. The problem of determining this rank and computing the corresponding nonnegative factors is difficult; however, it has many potential applications, e.g., in data mining and graph theory. In particular, it can be used to characterize the minimal size of any extended reformulation of a given polytope. In this paper, we introduce and study a related quantity, called the restricted nonnegative rank. We show that computing this quantity is equivalent to a problem in computational geometry, and fully characterize its computational complexity. This in turn sheds new light on the nonnegative rank problem, and in particular allows us to provide new improved lower bounds based on its geometric interpretation. We apply these results to slack matrices and linear Euclidean distance matrices and obtain counterexamples to two conjectures of Beasley and Laffey, namely we show that the nonnegative rank of linear Euclidean distance matrices is not necessarily equal to their dimension, and that the rank of a matrix is not always greater than the nonnegative rank of its square.
 Accelerated Multiplicative Updates and Hierarchical ALS Algorithms for Nonnegative Matrix Factorization, Nicolas Gillis, François Glineur, Neural Computation, Volume 24, Number 4 (April 2012), 1085–1105, MIT Press Journals. 10.1162/NECO_a_00256 ; DIAL handle ; abstract.
Nonnegative matrix factorization (NMF) is a data analysis technique used in a great variety of applications such as text mining, image processing, hyperspectral data analysis, computational biology, and clustering. In this letter, we consider two well-known algorithms designed to solve NMF problems: the multiplicative updates of Lee and Seung and the hierarchical alternating least squares of Cichocki et al. We propose a simple way to significantly accelerate these schemes, based on a careful analysis of the computational cost needed at each iteration, while preserving their convergence properties. This acceleration technique can also be applied to other algorithms, which we illustrate on the projected gradient method of Lin. The efficiency of the accelerated algorithms is empirically demonstrated on image and text data sets and compares favorably with a state-of-the-art alternating nonnegative least squares algorithm.
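The acceleration idea can be sketched on the Lee–Seung multiplicative updates: the expensive products involving the data matrix V are computed once per outer iteration and reused across several cheap inner updates of the same factor. A minimal sketch, with illustrative parameter choices rather than the tuned inner-iteration counts analyzed in the paper:

```python
import numpy as np

def accelerated_mu(V, r, n_outer=30, n_inner=5, eps=1e-9, seed=0):
    """Multiplicative updates for V ~ W H with inner-loop reuse of Gram matrices."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_outer):
        WtV, WtW = W.T @ V, W.T @ W          # expensive: computed once
        for _ in range(n_inner):             # cheap: only r-sized products
            H *= WtV / (WtW @ H + eps)
        VHt, HHt = V @ H.T, H @ H.T          # expensive: computed once
        for _ in range(n_inner):
            W *= VHt / (W @ HHt + eps)
    return W, H
```

The inner products WtW @ H and W @ HHt cost O(r²·n) and O(m·r²) per repeat, versus O(m·n·r) for the V-dependent products, which is where the savings come from when r is small.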
 A multilevel approach for nonnegative matrix factorization, Nicolas Gillis, François Glineur, Journal of Computational and Applied Mathematics, Volume 236, Issue 7 (January 2012), 1708–1723, Elsevier. 10.1016/j.cam.2011.10.002 ; DIAL handle ; abstract.
Nonnegative matrix factorization (NMF), the problem of approximating a nonnegative matrix with the product of two low-rank nonnegative matrices, has been shown to be useful in many applications, such as text mining, image processing, and computational biology. In this paper, we explain how algorithms for NMF can be embedded into the framework of multilevel methods in order to accelerate their initial convergence. This technique can be applied in situations where data admit a good approximate representation in a lower dimensional space through linear transformations preserving nonnegativity. Several simple multilevel strategies are described and are experimentally shown to significantly speed up three popular NMF algorithms (alternating nonnegative least squares, multiplicative updates and hierarchical alternating least squares) on several standard image datasets.
 Low-Rank Matrix Approximation with Weights or Missing Data Is NP-Hard, Nicolas Gillis, François Glineur, SIAM Journal on Matrix Analysis and Applications, Volume 32, Issue 4 (2011), 1149–1165, SIAM. 10.1137/110820361 ; DIAL handle ; abstract.
Weighted low-rank approximation (WLRA), a dimensionality reduction technique for data analysis, has been successfully used in several applications, such as in collaborative filtering to design recommender systems or in computer vision to recover structure from motion. In this paper, we prove that computing an optimal WLRA is NP-hard, even when only a rank-one approximation is sought. In fact, we show that it is hard to compute approximate solutions to the WLRA problem with some prescribed accuracy. Our proofs are based on reductions from the maximum-edge biclique problem and apply to strictly positive weights as well as to binary weights (the latter corresponding to low-rank matrix approximation with missing data).
 ACUTA: A novel method for eliciting additive value functions on the basis of holistic preference statements, Géraldine Bous, Philippe Fortemps, François Glineur, Marc Pirlot, European Journal of Operational Research, Volume 206, Issue 2 (16 October 2010), 435–444, Elsevier. 10.1016/j.ejor.2010.03.009 ; DIAL handle ; abstract.
In multiple criteria decision aiding, it is common to use methods that are capable of automatically extracting a decision or evaluation model from partial information provided by the decision maker about a preference structure. In general, there is more than one possible model, leading to an indetermination which existing methods sometimes resolve arbitrarily. This paper aims at filling this theoretical gap: we present a novel method, based on the computation of the analytic center of a polyhedron, for the selection of additive value functions that are compatible with holistic assessments of preferences. We demonstrate the most important characteristics of this technique with an experimental and comparative study of several existing methods belonging to the UTA family.
 Using underapproximations for sparse nonnegative matrix factorization, Nicolas Gillis, François Glineur, Pattern Recognition, Volume 43, Issue 4 (April 2010), 1676–1687, Elsevier. 10.1016/j.patcog.2009.11.013 ; DIAL handle ; abstract.
Nonnegative matrix factorization consists in (approximately) factorizing a nonnegative data matrix by the product of two low-rank nonnegative matrices. It has been successfully applied as a data analysis technique in numerous domains, e.g., text mining, image processing, microarray data analysis, collaborative filtering, etc. We introduce a novel approach to solve NMF problems, based on the use of an underapproximation technique, and show its effectiveness to obtain sparse solutions. This approach, based on Lagrangian relaxation, allows the resolution of NMF problems in a recursive fashion. We also prove that the underapproximation problem is NP-hard for any fixed factorization rank, using a reduction of the maximum-edge biclique problem in bipartite graphs. We test two variants of our underapproximation approach on several standard image datasets and show that they provide sparse part-based representations with low reconstruction error. Our results are comparable and sometimes superior to those obtained by two standard sparse nonnegative matrix factorization techniques.
 An interior-point method for the single-facility location problem with mixed norms using a conic formulation, Robert Chares, François Glineur, Mathematical Methods of Operations Research, Volume 68, Number 3 (2008), 383–405, Springer. 10.1007/s00186-008-0225-x ; DIAL handle ; abstract.
We consider the single-facility location problem with mixed norms, i.e. the problem of minimizing the sum of the distances from a point to a set of fixed points in Rn, where each distance can be measured according to a different p-norm. We show how this problem can be expressed in a structured conic format by decomposing the nonlinear components of the objective into a series of constraints involving three-dimensional cones. Using the availability of a self-concordant barrier for these cones, we present a polynomial-time algorithm (a long-step path-following interior-point scheme) to solve the problem up to any given accuracy. Finally, we report computational results for this algorithm and compare with standard nonlinear optimization solvers applied to this problem.
 de Finetti’s retention problem for proportional reinsurance revisited, François Glineur, Jean-François Walhin, Blätter der DGVFM, Volume 27, Number 3 (2006), 451–462, Springer. 10.1007/BF02809170 ; DIAL handle ; abstract.
We use convex optimization to provide a rigorous proof of de Finetti’s retention result for proportional reinsurance. We then extend this result to variable quota share reinsurance and surplus reinsurance with table of lines. We demonstrate by a numerical example that in general neither variable quota share reinsurance nor surplus reinsurance with table of lines may be considered as optimal reinsurance structures.
 Conic Formulation for lp-Norm Optimization, François Glineur, Tamás Terlaky, Journal of Optimization Theory and Applications, Volume 122, Number 2 (August 2004), 285–307, Springer. 10.1023/B:JOTA.0000042522.65261.51 ; DIAL handle ; abstract.
In this paper, we formulate the lp-norm optimization problem as a conic optimization problem, derive its duality properties (weak duality, zero duality gap, and primal attainment) using standard conic duality, and show how it can be solved in polynomial time by applying the framework of interior-point algorithms based on self-concordant barriers.
 Improving complexity of structured convex optimization problems using self-concordant barriers, François Glineur, European Journal of Operational Research, Volume 143, Issue 2 (1 December 2002), 291–310, Elsevier. 10.1016/S0377-2217(02)00297-7 ; DIAL handle ; abstract.
The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems using the theory of self-concordant functions developed by Nesterov and Nemirovski in SIAM Studies in Applied Mathematics, SIAM Publications, Philadelphia, 1994. We describe the classical short-step interior-point method and optimize its parameters in order to provide the best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordancy and which one is the best to fix. A lemma due to den Hertog et al. in Mathematical Programming Series B 69 (1) (1995) is improved, which allows us to review several classes of structured convex optimization problems and improve the corresponding complexity results.
 Proving Strong Duality for Geometric Optimization Using a Conic Formulation, François Glineur, Annals of Operations Research, Volume 105, Numbers 1–4 (2001), 155–184, Springer. 10.1023/A:1013357600036 ; DIAL handle ; abstract.
Geometric optimization is an important class of problems that has many applications, especially in engineering design. In this article, we provide new simplified proofs for the wellknown associated duality theory, using conic optimization. After introducing suitable convex cones and studying their properties, we model geometric optimization problems with a conic formulation, which allows us to apply the powerful duality theory of conic optimization and derive the duality results valid for geometric optimization.
 Conic optimization: an elegant framework for convex optimization, François Glineur, Belgian Journal of Operations Research, Statistics and Computer Science, Volume 41, Numbers 1–2 (2001), 5–28, Sogesci-B.V.W.B.. DIAL handle ; abstract.
The purpose of this survey article is to introduce the reader to a very elegant formulation of convex optimization problems called conic optimization and outline its many advantages. After a brief introduction to convex optimization, the notion of convex cone is introduced, which leads to the conic formulation of convex optimization problems. This formulation features a very symmetric dual problem, and several useful duality theorems pertaining to this conic primal-dual pair are presented. The usefulness of this approach is then demonstrated with its application to a well-known class of convex problems called lp-norm optimization. A suitably defined convex cone leads to a conic formulation for this problem, which allows us to derive its dual and the associated weak and strong duality properties in a seamless manner.
 An extended conic formulation for geometric optimization, François Glineur, Foundations of Computing and Decision Sciences, Volume 25, Number 3 (2000), 161–174, Poznań University of Technology. DIAL handle ; abstract.
The author has recently proposed a new way of formulating two classical classes of structured convex problems, geometric and lp-norm optimization, using dedicated convex cones. This approach has some advantages over the traditional formulation: it simplifies the proofs of the well-known associated duality properties (i.e. weak and strong duality) and the design of a polynomial algorithm becomes straightforward. In this article, we make a step towards the description of a common framework that includes these two classes of problems. Indeed, we present an extended variant of the cone for geometric optimization previously introduced by the author and show it is equally suitable to formulate this class of problems. This new cone has the additional advantage of being very similar to the cone used for lp-norm optimization, which opens the way to a common generalization.
 Interior-point methods for linear programming: a guided tour, François Glineur, Belgian Journal of Operations Research, Statistics and Computer Science, Volume 38, Number 1 (1998), 3–30, Sogesci-B.V.W.B.. DIAL handle ; abstract.
The purpose of mathematical programming, a branch of optimization, is to minimize (or maximize) a function of several variables under a set of constraints. This is a very important problem arising in many real-world situations (e.g. cost or duration minimization). When the function to optimize and its associated set of constraints are linear, we talk about linear programming. The simplex algorithm, first developed by Dantzig in 1947, is a very efficient method to solve this class of problems. It has been thoroughly studied and improved since its first appearance, and is now widely used in commercial software to solve a great variety of problems (production planning, transportation, scheduling, etc.). However, an article by Karmarkar introduced in 1984 a new class of methods: the so-called interior-point methods. Most of the ideas underlying these new methods originate from the nonlinear optimization domain. These methods are both theoretically and practically efficient, can be used to solve large-scale problems and can be generalized to other types of convex optimization problems. The purpose of this article is to give an overview of this rather new domain, providing a clear and understandable description of these methods, both from a theoretical and a practical point of view.
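The central idea surveyed here, following the central path by minimizing a log-barrier objective for increasing penalty t, can be shown in a didactic sketch for an inequality-form LP min cᵀx s.t. Ax ≤ b. This is a toy illustration of the barrier mechanism, not a practical solver, and the parameter values are illustrative assumptions.

```python
import numpy as np

def lp_barrier(c, A, b, x, t=1.0, mu=10.0, n_outer=8, n_newton=20):
    """Toy log-barrier interior-point method for min c^T x s.t. A x <= b.

    x must be strictly feasible; each outer iteration approximately minimizes
    t*c^T x - sum(log(b - A x)) by Newton steps, then multiplies t by mu.
    """
    for _ in range(n_outer):
        for _ in range(n_newton):
            s = b - A @ x                      # slacks, must stay positive
            g = t * c + A.T @ (1.0 / s)        # gradient of barrier objective
            H = A.T @ np.diag(1.0 / s**2) @ A  # Hessian of barrier objective
            dx = np.linalg.solve(H, -g)        # Newton direction
            step = 1.0
            while np.min(b - A @ (x + step * dx)) <= 0:
                step *= 0.5                    # backtrack to stay interior
            x = x + step * dx
        t *= mu
    return x
```

After the last outer iteration the duality gap is roughly (number of constraints)/t, so increasing t drives the iterate toward an optimal vertex while always remaining strictly inside the feasible region.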
Chapters in books
 Solving Infinite-dimensional Optimization Problems by Polynomial Approximation, Olivier Devolder, François Glineur, Yurii Nesterov, in Recent Advances in Optimization and its Applications in Engineering, edited by Moritz Diehl, François Glineur, Elias Jarlebring, Wim Michiels (2010), 31–40, Springer (ISBN 978-3-642-12597-3). 10.1007/978-3-642-12598-0_3 ; DIAL handle ; abstract, final PDF.
We solve a class of convex infinite-dimensional optimization problems using a numerical approximation method that does not rely on discretization. Instead, we restrict the decision variable to a sequence of finite-dimensional linear subspaces of the original infinite-dimensional space and solve the corresponding finite-dimensional problems in an efficient way using structured convex optimization techniques. We prove that, under some reasonable assumptions, the sequence of these optimal values converges to the optimal value of the original infinite-dimensional problem and give an explicit description of the corresponding rate of convergence.
 Impact of the Material Distribution Formalism on the Efficiency of Evolutionary Methods for Topology Optimization, Jonathan Denies, Bruno Dehez, François Glineur, Hamid Ben Ahmed, in Recent Advances in Optimization and its Applications in Engineering, edited by Moritz Diehl, François Glineur, Elias Jarlebring, Wim Michiels (2010), 461–470, Springer (ISBN 978-3-642-12597-3). 10.1007/978-3-642-12598-0_40 ; DIAL handle ; abstract, final PDF.
We consider an evolutionary method applied to a topology optimization problem. We compare two material distribution formalisms (static vs. Voronoi-based dynamic), and two sets of reproduction mechanisms (standard vs. topology-adapted). We test those four variants on both theoretical and practical test cases, to show that the Voronoi-based formalism combined with adapted reproduction mechanisms performs better and is less sensitive to its parameters.
 Les méthodes de point intérieur , François Glineur, in Programmation linéaire, Jacques Teghem (2003), 263–285, Editions Ellipses (ISBN 9782800413174). DIAL handle ; abstract, final PDF.
Refereed proceedings
 Genetic algorithm-based topology optimization: Performance improvement through dynamic evolution of the population size , Jonathan Denies, Bruno Dehez, François Glineur and Hamid Ben Ahmed, in Proceedings of the 2012 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM 2012), Sorrento, Italy (June 20–22, 2012), 1033–1038 (ISBN 9781467312998). 10.1109/SPEEDAM.2012.6264469 ; DIAL handle ; abstract, final PDF.
Topology optimization tools using a genetic algorithm as their optimization engine are known to be very expensive in computation time. In this paper, we study an approach that improves the performance of such a tool by introducing a dynamic variation of the number of children created during the optimization process. The method improves the performance of each generation by adapting the number of children created and by introducing a reproduction coefficient for each individual in the parent population; this coefficient determines the number of children assigned to each parent. The number of evaluations thus changes at each generation, and the tool can save evaluations in order to increase the number of iterations.
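The abstract does not spell out the exact form of the reproduction coefficient; as a rough illustration of the general idea only (the function name and the proportional scheme below are our own, not the paper's), one can allocate a fixed budget of children across parents in proportion to a fitness-based coefficient:

```python
import numpy as np

def allocate_children(fitness, total_children):
    """Assign a number of children to each parent in proportion to a
    fitness-based reproduction coefficient (higher fitness -> more children),
    while keeping the total number of evaluations fixed."""
    fitness = np.asarray(fitness, dtype=float)
    # Reproduction coefficient: normalized fitness share of each parent.
    coeff = fitness / fitness.sum()
    # Integer allocation: floor the proportional share, then hand out the
    # remaining children to the parents with the largest fractional parts.
    raw = coeff * total_children
    counts = np.floor(raw).astype(int)
    remainder = total_children - counts.sum()
    order = np.argsort(raw - counts)[::-1]
    counts[order[:remainder]] += 1
    return counts

# Example: a budget of 10 children split over four parents.
counts = allocate_children([1, 2, 3, 4], 10)
```

Varying `total_children` from one generation to the next then changes the number of evaluations per generation, which is the mechanism the abstract describes.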
 A novel class of iterative approximation methods for DSL spectrum optimization , Paschalis Tsiaflakis and François Glineur, in Proceedings of the IEEE International Conference on Communications (ICC 2012), Ottawa (June 10–15, 2012), 3193–3198. DIAL handle ; abstract, final PDF.
Spectrum optimization is a promising means to tackle the crosstalk problem in DSL systems, and corresponds to a challenging nonconvex optimization problem. Iterative convex approximation (ICA) methods have been proposed in the literature to deal with this optimization problem. These methods consist in solving a series of improving convex approximations and are typically implemented in a per-user iterative approach. In this paper we develop a novel class of iterative methods that focus explicitly on per-user iterative implementations, and which consist of improved per-user approximations that are tighter and simpler to solve (in closed form) than state-of-the-art ICA methods. As a result, the proposed methods improve the convergence speed as fewer approximations are required to converge, and display a significantly lower computational cost. Furthermore, three of the proposed methods can tackle the issue of getting stuck in bad locally optimal solutions, and hence improve solution quality with respect to existing ICA methods.
 Extended Lanczos Bidiagonalization for Dimension Reduction in Information Retrieval , François Glineur, Linzhang Lu, Paul Van Dooren and Xuansheng Wang, in Proceedings of the 8th International Conference on Natural Computation (ICNC'12), Chongqing, China (May 29–31, 2012), 138–144 (ISBN 9781457721335). DIAL handle ; abstract, final PDF.
We describe an extended bidiagonalization scheme designed to compute low-rank approximations of very large data matrices. Its goal is identical to that of the truncated singular value decomposition, but it is significantly cheaper. It consists in an extension of the standard Lanczos bidiagonalization that improves its approximation capabilities, while keeping the computational cost reasonable. This low-rank approximation yields much cheaper computations of the matrix-vector products that are central in many information retrieval tasks. We demonstrate the effectiveness of this approach on applications in face recognition and latent semantic indexing.
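The extended Lanczos scheme itself is specific to the paper, but the payoff it targets is easy to illustrate with a plain truncated SVD (a more expensive route to the same rank-k approximation): once the factors are available, each matrix-vector product costs O((m+n)k) instead of O(mn). A minimal NumPy sketch, using an exactly rank-k matrix so the truncated product matches the full one:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 20
# Stand-in for a large data matrix; built with exact rank k for the demo.
A = rng.standard_normal((500, k)) @ rng.standard_normal((k, 300))

# Rank-k truncated SVD (the paper's extended Lanczos bidiagonalization
# targets the same approximation at a lower computational cost).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

x = rng.standard_normal(300)
# Matrix-vector product through the factors, right to left.
y_approx = Uk @ (sk * (Vtk @ x))
y_exact = A @ x
```

For a genuinely full-rank matrix the factored product is only an approximation, with error governed by the discarded singular values.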
 Study of a topology optimization tool associating a hybrid algorithm with a Voronoï-based formalism for the design of electromagnetic devices , Jonathan Denies, Hamid Ben Ahmed, François Glineur and Bruno Dehez, in Proceedings of the XIX International Conference on Electrical Machines (ICEM 2010), Rome (September 6–8, 2010), 1–6 (ISBN 9781424441747). 10.1109/ICELMACH.2010.5607889 ; DIAL handle ; abstract, final PDF.
This paper presents and compares three topology optimization tools differing in the optimization algorithm they implement. The first two are respectively based on a local optimization algorithm (a Nelder-Mead method) and a global one (a genetic algorithm). The third uses a hybrid algorithm combining the local and global algorithms used in the first two. Each of them implements the same material distribution formalism, i.e. a dynamic formalism based on Voronoi cells. The results obtained on a practical study case, the design of an electromagnet, show the better performance of the hybrid algorithm, both in terms of convergence speed and in terms of quality of the final solution, and therefore the interest of coupling the exploration ability of the global search with the intensification ability of the local search.
 Topology optimization method applied to the design of electromagnetic devices: focus on convexity issues , Thibaut Labbé, François Glineur and Bruno Dehez, in Proceedings of the 8th International Symposium on Advanced Electromechanical Motion Systems & Electric Drives Joint Symposium (ELECTROMOTION 2009), Lille (July 1–3, 2009), 1–6 (ISBN 9781424451500). 10.1109/ELECTROMOTION.2009.5259087 ; DIAL handle ; abstract, final PDF.
To perform parameter and shape optimization, an initial topology is required, which affects the final solution. Topology optimization methods have the advantage of relaxing this constraint. They are based on a splitting of the design space into cells, in which they attempt to distribute some given materials optimally. In this paper, the optimization is based on a discrete formulation of the Maxwell equations obtained thanks to a finite element model. The gradient of the objective function is computed directly from this discrete form, which is more convenient than the adjoint variable method. A line-search method (steepest descent direction) is then applied. The step size is computed by a simple and quick algorithm, to achieve good and fast convergence. Several convexity issues have already been highlighted in topology optimization. We explain how some of them can be addressed by performing the optimization on a variable linked to the permeability by a given mapping. This mapping actually affects the value of intermediate materials and must be selected carefully to prevent the algorithm from being trapped in local minimizers. In order to illustrate the concepts presented along the paper, the method is applied to the topology optimization of a linear reluctance actuator.
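The paper's finite-element objective cannot be reproduced here, but the optimization loop it describes (steepest descent with a simple and quick step-size rule) can be sketched generically. This is a sketch on a toy quadratic objective, with a standard Armijo backtracking rule standing in for the paper's step-size algorithm:

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Steepest descent with a simple backtracking (Armijo) step-size rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # Halve the step until the sufficient-decrease condition holds.
        while f(x - t * g) > f(x) - 1e-4 * t * (g @ g):
            t *= 0.5
        x = x - t * g
    return x

# Smooth convex test problem: f(x) = 1/2 x'Qx, minimizer at the origin.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
x_star = steepest_descent(f, grad, np.array([5.0, -3.0]))
```

In the paper's setting, `f` and `grad` would instead come from the discrete finite-element formulation of the Maxwell equations.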
 Document classification using nonnegative matrix factorization and underapproximation , Michael W. Berry, Nicolas Gillis and François Glineur, in Proceedings of the IEEE International Symposium on Circuits and Systems, 2009 (ISCAS 2009), Taipei (May 24–27, 2009), 2782–2785 (ISBN 9781424438273). 10.1109/ISCAS.2009.5118379 ; DIAL handle ; abstract, final PDF.
In this study, we use nonnegative matrix factorization (NMF) and nonnegative matrix underapproximation (NMU) approaches to generate feature vectors that can be used to cluster Aviation Safety Reporting System (ASRS) documents obtained from the Distributed National ASAP Archive (DNAA). By preserving nonnegativity, both NMF and NMU facilitate a sum-of-parts representation of the underlying term usage patterns in the ASRS document collection. Both the training and test sets of ASRS documents are parsed and then factored by both algorithms to produce reduced-rank representations of the entire document space. The resulting feature coefficient matrix factors are used to cluster ASRS documents so that the (known) associated anomalies of training documents are directly mapped to the feature vectors. Dominant features of test documents are then used to generate anomaly relevance scores for those documents. We demonstrate that the approximate solution obtained by NMU using Lagrangian duality can lead to a better sum-of-parts representation and document classification accuracy.
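As a rough illustration of the sum-of-parts idea (not the NMU algorithm of the paper), a basic NMF computed with Lee-Seung multiplicative updates can cluster documents by their dominant feature coefficient; the toy term-document matrix below is our own:

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    """Nonnegative matrix factorization V ~ W H via Lee-Seung
    multiplicative updates, minimizing the Frobenius error."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    eps = 1e-10  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy term-document matrix: 6 terms, 4 documents, two latent topics.
V = np.array([[3, 2, 0, 0],
              [2, 3, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 2, 3],
              [0, 0, 3, 2],
              [0, 0, 1, 1]], dtype=float)
W, H = nmf(V, r=2)
# Cluster each document by its dominant feature coefficient.
labels = H.argmax(axis=0)
```

The columns of W play the role of the feature vectors mentioned in the abstract; in the paper these come from factoring the parsed ASRS collection rather than a toy matrix.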
Theses
 Topics in convex optimization: interior-point methods, conic duality and approximations , François Glineur, co-supervised by Jacques Teghem and Tamás Terlaky, PhD thesis, Faculté Polytechnique de Mons, Belgium (January 2001), 218 pages. abstract, PDF.
Optimization is a scientific discipline that lies at the boundary between pure and applied mathematics. Indeed, while some of its developments involve rather theoretical concepts, its most successful algorithms are heavily used by numerous companies to solve scheduling and design problems on a daily basis. Our research started with the study of the conic formulation for convex optimization problems. This approach was already studied in the seventies but has recently gained a lot of interest due to the development of a new class of algorithms called interior-point methods. This setting is able to exploit the two most important characteristics of convexity: a very rich duality theory (existence of a dual problem that is strongly related to the primal problem, with a very symmetric formulation), and the ability to solve these problems efficiently, both from the theoretical (polynomial algorithmic complexity) and practical (implementations allowing the resolution of large-scale problems) points of view. Most of the research in this area involved so-called self-dual cones, where the dual problem has exactly the same structure as the primal: the most famous classes of convex optimization problems (linear optimization, convex quadratic optimization and semidefinite optimization) belong to this category.
We brought some contributions to this field: a survey of interior-point methods for linear optimization, with an emphasis on the fundamental principles that lie behind the design of these algorithms; a computational study of a method of linear approximation of convex quadratic optimization (more precisely, the second-order cone that can be used in the formulation of quadratic problems is replaced by a polyhedral approximation whose accuracy can be guaranteed a priori), a technique that was also successfully applied to the resolution of limit analysis problems in mechanical engineering; and an application of semidefinite optimization to classification, whose principle consists in separating different classes of patterns using ellipsoids defined in the feature space (this approach was successfully applied to the prediction of student grades). However, our research focussed on a much less studied category of convex problems which does not rely on self-dual cones, i.e. structured problems whose dual is formulated very differently from the primal. We studied in particular: geometric optimization, developed in the late sixties, which possesses numerous applications in the field of engineering (entropy optimization, used in information theory, also belongs to this class of problems); and lp-norm optimization, a generalization of linear and convex quadratic optimization, which allows the formulation of constraints built around expressions of the form |ax+b|^p (where p is a fixed exponent strictly greater than 1). For each of these classes of problems, we introduced a new type of convex cone that made their formulation as standard conic problems possible. This allowed us to derive very simplified proofs of the classical duality results pertaining to these problems, notably weak duality and the absence of a duality gap (the strong duality property without any constraint qualification, which does not hold in the general convex case).
We also uncovered a very surprising result stipulating that geometric optimization can be viewed as a limit case of lp-norm optimization. Encouraged by the similarities we observed, we developed a general framework that encompasses these two classes of problems and unifies all the previously obtained conic formulations. This framework, called separable optimization, allows the modelling of many more classes of problems while keeping the zero duality gap property. We also turned our attention to the design of interior-point methods to solve these problems. The theory of polynomial algorithms for convex optimization developed by Nesterov and Nemirovsky asserts that the main ingredient for these methods is a computable self-concordant barrier function for the corresponding cones. We were able to define such a barrier function in the case of lp-norm optimization (whose parameter, which is the main determining factor in the algorithmic complexity of the method, is proportional to the number of variables in the formulation and independent of p) as well as in the case of the general framework mentioned above. Our objective is now to exploit the inherent symmetry of this formulation in order to solve these problems efficiently using primal-dual methods. Finally, we contributed a survey of the self-concordancy property, improving some useful results about the value of the complexity parameter for certain categories of barrier functions and providing some insight into why the most commonly adopted definition of self-concordant functions is the best possible.
 Pattern separation via ellipsoids and conic programming , François Glineur, supervised by Jacques Teghem, Master thesis (D.E.A.), Faculté Polytechnique de Mons, Belgium (September 1998), 58 pages. abstract, PDF.
The first chapter is about mathematical programming. We will start by describing how and why researchers were led to study special types of mathematical programs, namely convex programs and conic programs. We will also provide a detailed discussion about conic duality and give a classification of conic programs. We will then describe what self-scaled cones are and why they are so useful in conic programming. Finally, we will give an overview of what can be modelled using a SQL conic program, keeping in mind our pattern separation problem. Since most of the material in the chapter is standard, many of the proofs are omitted. The second chapter will concentrate on pattern separation. After a short description of the problem, we will successively describe four different separation methods using SQL conic programming. For each method, various properties are investigated. Each algorithm has in fact been successively designed with the objective of eliminating the drawbacks of the previous one, while keeping its good properties. We conclude this chapter with a small section describing the state of the art in pattern separation with ellipsoids. The third chapter reports some computational experiments with our four methods, and provides a comparison with other separation procedures. Finally, we conclude this work by providing a short summary, highlighting the author's personal contribution and giving some interesting perspectives for further research.
Please feel free to send me an email if you would like a copy of one of these publications.