Contact information:
Mathematical Institute, University of Oxford
Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford
OX2 6GG
Here is a short summary of my research.
Many applications involve positive-semidefinite matrices. However, the set of positive-semidefinite matrices is not a manifold. In some cases (e.g., when the data points are low-rank approximations of large positive-semidefinite matrices), the rank of the matrices can be assumed to be fixed to some common value. The data points are then represented as points on the manifold of fixed-rank positive-semidefinite matrices.
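To make this representation concrete, here is a minimal NumPy sketch (the variable names and dimensions are illustrative, not taken from any paper): a rank-p positive-semidefinite matrix is stored through a full-rank factor Y with A = Y Y^T, and the factor is only determined up to an orthogonal matrix, which is what motivates the quotient viewpoint.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2  # illustrative sizes: 6 x 6 matrices of rank 2

# A rank-p positive-semidefinite matrix A (n x n) is represented by a
# full-rank factor Y (n x p) through A = Y @ Y.T.
Y = rng.standard_normal((n, p))
A = Y @ Y.T

# The factor is determined only up to an orthogonal matrix Q in O(p):
# (Y Q)(Y Q)^T = Y Y^T. This is why the set of rank-p positive-semidefinite
# matrices is treated as a quotient of the full-rank n x p matrices by O(p).
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
A_same = (Y @ Q) @ (Y @ Q).T
```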
This recent preprint contains a detailed description of the manifold of fixed-rank positive-semidefinite matrices seen as a quotient of the set of full-rank rectangular matrices by the orthogonal group. In particular, we obtain expressions for the Riemannian logarithm and the injectivity radius. The resulting Riemannian distance coincides with the Wasserstein distance between centered degenerate Gaussians with corresponding low-rank covariance matrices.
In collaboration with Pierre-Yves Gousenbourger (UCLouvain), Antoni Musolas (MIT) and Thanh Son Nguyen (UCLouvain), we have applied curve-fitting algorithms on manifolds (see this paper) to wind field estimation (here) and to parametric model order reduction (here).
We have proposed two accelerated algorithms for computing the Riemannian barycenter on the manifold of positive-definite matrices (endowed with its classical affine-invariant metric).
The first algorithm is an accelerated incremental gradient descent (the deterministic variant of the classical stochastic gradient descent). It is endowed with a deterministic shuffling procedure, resulting on average in faster convergence than the well-known stochastic gradient algorithm. The algorithm is described in this paper and has been applied in this paper to EEG classification. The main motivation for using an incremental algorithm to compute the Riemannian barycenter in this application is that it allows the classifier to be adapted as new data points are encountered.
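The linked papers define the precise shuffling and acceleration; the following is only a plain cyclic-order sketch of the incremental update on the manifold of positive-definite matrices with the affine-invariant metric (the step-size schedule and the fixed cyclic order are my own simplifications, not the accelerated scheme):

```python
import numpy as np

def _sym_funcm(S, f):
    # Apply a scalar function to a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def incremental_karcher_mean(mats, passes=50):
    """Incremental (cyclic) gradient descent for the Riemannian barycenter
    of SPD matrices under the affine-invariant metric.

    Each step moves the iterate along the geodesic toward one data point:
        X <- X^{1/2} (X^{-1/2} A X^{-1/2})^{eta} X^{1/2},
    with diminishing step size eta = 1/(k+1).
    """
    X = mats[0].copy()
    k = 1
    for _ in range(passes):
        for A in mats:
            Xh = _sym_funcm(X, np.sqrt)
            Xih = _sym_funcm(X, lambda w: 1.0 / np.sqrt(w))
            M = Xih @ A @ Xih
            M = 0.5 * (M + M.T)                      # re-symmetrize (round-off)
            eta = 1.0 / (k + 1)
            step = _sym_funcm(M, lambda w: w ** eta)  # geodesic step toward A
            X = Xh @ step @ Xh
            X = 0.5 * (X + X.T)
            k += 1
    return X

# Small demo: for commuting (here diagonal) SPD matrices, the barycenter
# reduces to the matrix geometric mean exp(mean(log A_i)).
mats = [np.diag([1.0, 2.0]), np.diag([2.0, 3.0]), np.diag([4.0, 1.0])]
X = incremental_karcher_mean(mats, passes=50)
```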
The second contribution is an accelerated decentralized algorithm for averaging positive-definite matrices. The algorithm, presented in this work, builds on ideas from consensus theory.
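The accelerated scheme in the linked work is more elaborate; a basic, non-accelerated Riemannian consensus step, in which every agent moves along the geodesic toward a weighted combination of its neighbors' matrices, can be sketched as follows (the weight matrix and step size are illustrative choices, not those of the paper):

```python
import numpy as np

def _funcm(S, f):
    # Scalar function of a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def _log(X, A):
    # Riemannian logarithm at X under the affine-invariant metric.
    Xh = _funcm(X, np.sqrt)
    Xih = _funcm(X, lambda w: 1.0 / np.sqrt(w))
    M = Xih @ A @ Xih
    return Xh @ _funcm(0.5 * (M + M.T), np.log) @ Xh

def _exp(X, H):
    # Riemannian exponential at X under the affine-invariant metric.
    Xh = _funcm(X, np.sqrt)
    Xih = _funcm(X, lambda w: 1.0 / np.sqrt(w))
    M = Xih @ H @ Xih
    return Xh @ _funcm(0.5 * (M + M.T), np.exp) @ Xh

def consensus_step(mats, W, eps=1.0):
    """One round of Riemannian consensus: agent i steps along the geodesic
    in the direction sum_j W[i, j] * Log_{X_i}(X_j), scaled by eps."""
    new = []
    for i, X in enumerate(mats):
        H = sum(W[i, j] * _log(X, A) for j, A in enumerate(mats) if j != i)
        new.append(_exp(X, eps * H))
    return new

# Demo on diagonal (commuting) matrices with a complete graph and uniform
# weights: one full step drives every agent to the common geometric mean.
mats = [np.diag([1.0, 4.0]), np.diag([2.0, 2.0]), np.diag([4.0, 1.0])]
W = np.full((3, 3), 1.0 / 3.0)
out = consensus_step(mats, W, eps=1.0)
```

In a genuinely decentralized setting, W would be the sparse weight matrix of the communication graph, so each agent only touches its neighbors' matrices.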