Research

My research is about the analysis of the complexity of the Policy Iteration algorithm for solving Markov Decision Processes (MDPs) and Two-Player Turn-Based Stochastic Games (2TBSGs). A few years ago, shortly after I started looking at Policy Iteration, several breakthrough results appeared in the literature, including one showing a family of examples on which Policy Iteration requires an exponential number of steps to converge, hence providing the first exponential lower bound on its complexity. Since then, I have been interested in identifying the problem structures that may cause Policy Iteration to behave well or poorly in the worst case. In particular, we have shown that the exponential complexity results could be extended to discounted MDPs. Lately, I have also been interested in the combinatorial structure of Policy Iteration and, in particular, in its connections with the Simplex algorithm for solving Linear Programs and with Acyclic Unique Sink Orientations. Our main goal was to improve the existing upper bound on the number of iterations of Policy Iteration in general. Our latest results include such an improvement, but also the disproof of a conjecture claiming that the upper bound should follow the Fibonacci numbers, and an improvement of an existing lower bound regarding the same question.

On December 7, 2015, I defended my Ph.D. thesis on this topic. It is entitled "Decision making in large stochastic and adversarial environments: A complexity analysis of Policy Iteration" and can be found here. The slides of the public defense are available here. My Ph.D. advisors are Jean-Charles Delvenne and Raphaël Jungers. My Ph.D. jury was composed of Thomas D. Hansen, Bruno Scherrer, Philippe Chevalier and Philippe Lefèvre. My Ph.D. committee also included Philippe Toint. So many Philippes :-)

My master's thesis can be found here. I used to work at UCL in the applied mathematics department (INMA). I am now leaving academia.

I have a strong taste for efficient communication, both written and oral. I owe this taste to Jean-Luc Doumont, who convinced me of the central importance of communication in both my professional and personal life.
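For readers who have not met the algorithm before, here is a minimal, self-contained Python sketch of the standard greedy variant of Policy Iteration on a toy discounted MDP. The two-state example, its transition probabilities and rewards are made up purely for illustration and are not taken from my papers; each pass through the improvement loop is one "step" in the complexity bounds discussed above.

```python
import numpy as np

# Toy discounted MDP (made-up numbers): P[a][s, s'] is the probability of
# moving from state s to s' under action a; r[a][s] is the expected reward
# collected in state s when playing action a.
P = [np.array([[0.8, 0.2],
               [0.3, 0.7]]),
     np.array([[0.1, 0.9],
               [0.6, 0.4]])]
r = [np.array([1.0, 0.0]),
     np.array([0.0, 2.0])]
gamma = 0.9          # discount factor
n_states, n_actions = 2, 2

def evaluate(policy):
    """Policy evaluation: solve the linear system (I - gamma * P_pi) v = r_pi."""
    P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
    r_pi = np.array([r[policy[s]][s] for s in range(n_states)])
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)

def policy_iteration(policy):
    steps = 0
    while True:
        v = evaluate(policy)
        # Greedy improvement: one-step lookahead value of every action.
        q = np.array([[r[a][s] + gamma * P[a][s] @ v
                       for a in range(n_actions)]
                      for s in range(n_states)])
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):  # no improving switch: optimal
            return policy, v, steps
        policy, steps = new_policy, steps + 1

policy, value, steps = policy_iteration(np.zeros(n_states, dtype=int))
print(policy, value, steps)
```

Each iteration alternates an evaluation step (solving a linear system) with a greedy improvement step that switches to better actions; the algorithm stops when no switch improves the value. The complexity question studied above is how many such iterations may be needed in the worst case.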
Teaching

Resume

I hold a Ph.D. degree in mathematical engineering from UCL, which I completed under the supervision of Raphaël Jungers and Jean-Charles Delvenne. I obtained my Master's degree from the Ecole polytechnique de Louvain (ingénieur civil en mathématiques appliquées) (2008-2010). I did my master's thesis under the supervision of Vincent Blondel and Raphaël Jungers, for the most part at MIT, in the Laboratory for Information and Decision Systems (LIDS). Here is a CV (last update: August 2012).

Publications and talks

Publications:

B. Gerencsér, R. Hollanders, J-C. Delvenne, R. Jungers, A complexity analysis of Policy Iteration through combinatorial matrices arising from Unique Sink Orientations. Coming soon. [Preprint] [Go code]

R. Hollanders, B. Gerencsér, J-C. Delvenne, R. Jungers, About upper bounds on the complexity of Policy Iteration. Accepted for publication in Operations Research Letters, 2015. [Preprint]

R. Hollanders, D. Bernardes, B. Mitra, R. Jungers, J-C. Delvenne, F. Tarissan, Data-driven traffic and diffusion modeling in peer-to-peer networks: a real case study. Network Science, 2014. [Preprint]

R. Hollanders, G. Como, J-C. Delvenne, R. Jungers, Tight bounds on sparse perturbations of Markov chains. In Proceedings of the 21st International Symposium on Mathematical Theory of Networks and Systems (MTNS 2014). [Preprint]

R. Hollanders, J-C. Delvenne, R. Jungers, The complexity of Policy Iteration is exponential for discounted Markov Decision Processes. In Proceedings of the 51st IEEE Conference on Decision and Control (CDC 2012). [Preprint]

R. Hollanders, J-C. Delvenne, R. Jungers, Policy Iteration is well suited to optimize PageRank. Available on arXiv. [Preprint]

Talks:

About the latest complexity bounds for Policy Iteration, Benelux Meeting 2015, Lommel, March 2015. [Slides]

Solving Markov Decision Processes using Policy Iteration: a complexity analysis, Verification Seminar, ULB, Brussels, Belgium, December 2014, and seminar at Loria-Inria, Nancy, France, February 2015. [Slides]

Tight bounds on sparse perturbations of Markov Chains, MTNS 2014, Groningen, Netherlands, July 2014. Presented by Jean-Charles Delvenne. [Slides]

Policy Iteration has exponential complexity for solving Markov Decision Processes. But there is more to the story..., Cesame Seminar in Systems and Control, Louvain-la-Neuve, April 2013. [Slides with Prezi: Part 1, Part 2; pdf version]

A combinatorial open problem for the complexity of Policy Iteration, Benelux Meeting 2013, Houffalize, Belgium, March 2013. Nominated for the Best Junior Presentation Award. [Slides with Prezi; pdf version]

Make the right choice using Markov Decision Processes, ICTEAM Tutorial Seminar, Louvain-la-Neuve, Belgium, March 2013. [Slides]

The complexity of Policy Iteration is exponential for discounted Markov Decision Processes, CDC 2012, Hawaii, December 2012. [Slides with Prezi; pdf version]

On the complexity of Optimizing PageRank, AlgoTel 2012, Montpellier, May 2012; SIAM Linear Algebra 2012, Valencia, June 2012; and MTNS 2012, Melbourne, July 2012. [Slides, abstract]

On the complexity of Policy Iteration for PageRank Optimization, Benelux Meeting 2011, Lommel, March 2011, and NetSci 2011 conference, Budapest, May 2011. [Slides]

On the Policy Iteration algorithm for PageRank Optimization, JGA 2010 meeting, Marseille, Nov. 2010, and UCL-INMA Large Graphs and Networks seminar, Louvain-la-Neuve, Nov. 2010. [Slides]

Posters:

A new upper bound for Policy Iteration, AofA 2014, Paris, June 2014. [Poster, abstract]
This looks like Fibonacci... But is it really? A combinatorial open problem, IAP meeting, Brussels, Nov. 2013. [Poster]

Miscellaneous

Nicolas, Karim and I won second place in the "SIAM Math Matters: Apply It!" worldwide contest (2011) with our poster introducing, in layman's terms, the importance of math in monitoring fetal ECGs.
My Erdős number is 3. I can speak French, English, MATLAB, LaTeX, PHP, Java and C++, among others.

Last update: December 14, 2015