The seminars will take place at the Katholieke Universiteit Leuven, Auditorium 200N, Celestijnenlaan 200, 3001 Leuven (Heverlee), and at CESAME, Auditoire Euler, av. Georges Lemaître 4, 1348 Louvain-la-Neuve. All seminars will be given on Friday afternoons from 14h00 till 17h30, with a 30-minute break around 15h30.
Students registering for this course may
request an evaluation, which can be arranged in the form of a short report or
additional presentation on one of the topics addressed in these seminars.
LIST OF SEMINARS, DATES AND LOCATIONS

April 15, 2005, KUL, Auditorium 200N
14:00-17:30 Krylov methods: an introduction slides
Paul Van Dooren (Université Catholique de Louvain, Belgium)
Abstract:
This
introduction derives the general ideas of Krylov based methods starting from
two matrix decompositions: one based on orthogonal similarity transformations
and one based on arbitrary similarity transformations. From these matrix
decompositions we obtain the basic recurrence relations of the Arnoldi and
Lanczos algorithms and discuss possible breakdowns. We then show how these lead
to different classes of solvers for linear systems of equations and to
projection methods for the computation of the spectrum of a matrix. This talk
develops the basic background for the more specialized lectures in this series.
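For readers who want to experiment before the lecture, the Arnoldi recurrence mentioned above can be sketched in a few lines. The following Python/NumPy code is an illustration added to this page, not part of the seminar material; the breakdown test uses an arbitrary fixed tolerance.

```python
import numpy as np

def arnoldi(A, v, k):
    """Arnoldi iteration: build an orthonormal Krylov basis Q and a
    Hessenberg matrix H satisfying A @ Q[:, :k] = Q @ H."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # breakdown: invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# small random demonstration
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
Q, H = arnoldi(A, rng.standard_normal(8), 5)
```

The returned pair satisfies the basic recurrence A Q_k = Q_{k+1} H exactly (up to rounding), which is the starting point for the solvers and projection methods discussed in the later lectures.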
April 22, 2005, UCL, Auditoire BARB06, Hall Ste Barbe
14:00-17:30 Krylov subspace methods: practical variants slides
Henk van der Vorst (Utrecht University, the Netherlands)
Abstract:
We will present an overview of a number of related iterative methods for the solution of linear systems of equations. These are so-called Krylov projection methods, and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, QMR, LSQR and GMRES. We will discuss some useful variants of these methods, in particular Bi-CGSTAB and GMRESR. Rounding errors may have a serious impact on the results of iterative methods; we will discuss these effects and see how they can be repaired.
Iterative methods are often used in combination with so-called preconditioning operators (approximations of the inverse of the operator of the system to be solved) in order to speed up convergence. We will give a short overview of these techniques; details will be given in other lectures in this series.
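To make the flavour of these solvers concrete, here is a bare-bones Conjugate Gradient iteration. This is an illustrative Python/NumPy sketch added to this page, not lecture material; it assumes a symmetric positive definite system, and the tolerance and test problem are arbitrary choices.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=200):
    """Plain CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new A-conjugate search direction
        rs = rs_new
    return x

# SPD test problem: the 1-D discrete Laplacian
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
```

The more robust variants discussed in the lecture (Bi-CGSTAB, GMRES, GMRESR) follow the same pattern of building approximations from a Krylov subspace, but drop the symmetry requirement.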
April 29, 2005, KUL, Auditorium 200N
14:00-17:30 Iterative methods for large linear eigenvalue problems slides
Henk van der Vorst (Utrecht University, the Netherlands)
Abstract:
Suppose we
want to compute one or more eigenvalues and their corresponding eigenvectors of
the n x n matrix A. Several iterative methods are available: Jacobi's method,
the power method, the method of Lanczos, Arnoldi's method, and Davidson's
method. We will discuss how these methods work and how they may be used
successfully. Most attention will be given to the methods of Lanczos and
Arnoldi. It is quite easy to use these methods, but it is not so easy to select
the right information from the produced output. A variant of the Arnoldi method
is known as Davidson's method and this method has been reported as being quite
successful, most notably in connection with certain symmetric problems in
computational chemistry. The success of the method seems to depend quite
heavily on (strong) diagonal dominance of A.
However, as we will show, Davidson's method has an interesting connection with an old and almost forgotten method of Jacobi. This leads to another view on the method of Davidson, one that may help to explain its behaviour and that leads to new, powerful algorithms for non-diagonally dominant and unsymmetric matrices as well: the so-called Jacobi-Davidson methods. From these algorithms we can obtain essentially different approximations for eigenvalues, and these can be used to our advantage.
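To illustrate how the Lanczos method extracts eigenvalue information, the following Python/NumPy sketch computes Ritz values (the eigenvalues of the small tridiagonal matrix produced by the recurrence). It is an illustration added to this page, not lecture material; the diagonal test matrix and the full reorthogonalization are simplifying choices made here.

```python
import numpy as np

def lanczos_ritz(A, v, k):
    """k steps of Lanczos on a symmetric A; returns the Ritz values,
    i.e. the eigenvalues of the k x k tridiagonal matrix T."""
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization (a safeguard against rounding effects)
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# diagonal test matrix with eigenvalues 1..99 and a well-separated 200
A = np.diag(np.r_[np.arange(1.0, 100.0), 200.0])
rng = np.random.default_rng(1)
ritz = lanczos_ritz(A, rng.standard_normal(100), 30)
```

After only 30 steps on this 100 x 100 matrix, the largest Ritz value already matches the well-separated extreme eigenvalue to high accuracy, which is exactly the behaviour the lecture will discuss: extreme, well-separated eigenvalues converge first, and selecting the right information from the output is the delicate part.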
May 13, 2005, UCL, Auditoire SUD01, Croix-SUD (new location!)
14:00-17:30 Preconditioning of iterative solvers for PDEs slides
Andy Wathen (Oxford University, U.K.)
Abstract:
This seminar will describe preconditioning approaches for linear systems arising from finite difference and finite element approximation of partial differential equations. We will cover self-adjoint problems, which give rise to symmetric positive definite matrices; saddle-point systems, which yield symmetric indefinite matrices; and non-self-adjoint problems, where non-symmetric matrices result. Most of the examples relate to problems of fluid flow, though the coverage in the lectures should be such that familiarity with PDEs in general, and with the incompressible Navier-Stokes equations in particular, will not be a prerequisite. The approach will be rather algebraic, and relevant PDE theory will be introduced only as and when required.
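A tiny example of the principle: even a crude approximate inverse can cut the iteration count substantially. The Python/NumPy sketch below was written for this page and is not seminar material; the badly scaled test matrix and the simple diagonal (Jacobi) preconditioner are arbitrary choices for illustration.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxiter=500):
    """Preconditioned CG; M_inv(r) applies an approximate inverse of A.
    Returns the solution and the number of iterations used."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# badly scaled SPD test system: diagonal entries span four orders of magnitude
n = 200
d = np.logspace(0, 4, n)
A = np.diag(d) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.ones(n)
x_plain, its_plain = pcg(A, b, lambda r: r)       # no preconditioning
x_jac, its_jacobi = pcg(A, b, lambda r: r / d)    # Jacobi preconditioning
```

Here the preconditioned system is close to the identity, so CG converges in a handful of iterations, while the unpreconditioned iteration suffers from the large condition number; the seminar discusses far more powerful preconditioners than this diagonal scaling.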
May 20, 2005, KUL, Auditorium 200N
14:00-17:30 Algebraic multigrid and multilevel methods: a general introduction slides
Yvan Notay (Université Libre de Bruxelles, Belgium)
Abstract:
Multigrid and multilevel methods, alone or as preconditioners for (Krylov-based) iterative methods, are increasingly popular as applications grow in size. They indeed make it possible to solve discrete PDEs with memory and CPU requirements that remain proportional to the number of unknowns. To face increasingly complex situations, there is particular interest in "algebraic" methods, which can be used robustly in a "black box" fashion, independently of the underlying geometry or discretization.
In this lecture, we first introduce the different types of multigrid and multilevel methods, illustrating their basic principles and ideas with simple "geometric" examples. We next show how each of these methods is easily described algebraically using a few key ingredients. We then browse the available theoretical results to identify necessary and sufficient conditions for fast convergence; in passing, this allows us to display some potential weaknesses of "geometric" algorithms. On this basis, we discuss how the different key ingredients may be defined by algebraic manipulations, reviewing several approaches that have been proposed in the literature. We conclude with a few illustrations.
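The two-grid principle underlying all of these methods, smoothing combined with a coarse-grid correction, can be illustrated on the 1-D Poisson problem. The Python/NumPy toy below was written for this page and is not part of the lecture; weighted-Jacobi smoothing, full-weighting restriction, linear interpolation and a Galerkin coarse operator are the classical "geometric" choices made here.

```python
import numpy as np

def poisson(n):
    """1-D Poisson matrix tridiag(-1, 2, -1) on n interior grid points."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def two_grid_cycle(A, b, x, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle: weighted-Jacobi smoothing, full-weighting
    restriction, linear-interpolation prolongation, exact coarse solve."""
    n = A.shape[0]
    nc = (n - 1) // 2
    P = np.zeros((n, nc))             # prolongation (linear interpolation)
    for j in range(nc):
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    R = 0.5 * P.T                     # full-weighting restriction
    D = np.diag(A)
    for _ in range(nu):               # pre-smoothing
        x = x + omega * (b - A @ x) / D
    Ac = R @ A @ P                    # Galerkin coarse-grid operator
    x = x + P @ np.linalg.solve(Ac, R @ (b - A @ x))  # coarse correction
    for _ in range(nu):               # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

n = 63                                # 63 fine points -> 31 coarse points
A = poisson(n)
b = np.ones(n)
x = np.zeros(n)
for _ in range(15):                   # each cycle strongly reduces the error
    x = two_grid_cycle(A, b, x)
```

The Galerkin construction Ac = R A P is precisely the ingredient that algebraic multigrid reuses when no geometric grid hierarchy is available: the lecture shows how the prolongation P itself can then be built from the matrix entries alone.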