Title: "Algorithms with Learnable Predictions for Combinatorial Optimization"
The advance of machine learning opens promising research avenues, especially in the design of algorithms for combinatorial optimization. Algorithms with machine-learned predictions have recently been introduced to circumvent worst-case analysis. The goal is to design algorithms that achieve near-optimal performance when the predictions are good, yet never fall below the worst-case guarantee even when the predictions are mediocre. At a high level, these methods incorporate machine-learned advice to adapt their behavior to the properties of the input distribution and consequently improve their performance, be it runtime, space, or solution quality. The field has blossomed with many applications (Mitzenmacher and Vassilvitskii, 2020). In this direction, we aim to develop a model that allows for imperfect predictions and to study the tradeoff between the quality of the algorithms and that of the predictive information.
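To make the consistency/robustness tradeoff concrete, here is a minimal Python sketch of the classic ski-rental problem with a prediction, in the style of Purohit, Svitkina, and Kumar (NeurIPS 2018). It is only an illustration of the paradigm, not part of the internship topic itself; the function names and the trust parameter lam are ours.

```python
import math

def ski_rental_with_prediction(x, y, b, lam):
    """Cost paid when the true number of skiing days is x, the
    prediction is y, renting costs 1 per day, and buying costs b.
    lam in (0, 1] trades off trust in the prediction:
    small lam = more trust (better consistency, worse robustness)."""
    if y >= b:
        # Prediction says "long season": buy early, on day ceil(lam * b).
        buy_day = math.ceil(lam * b)
    else:
        # Prediction says "short season": rent longer before buying.
        buy_day = math.ceil(b / lam)
    if x < buy_day:
        return x               # rented every day, never bought
    return (buy_day - 1) + b   # rented until buy_day, then bought

def opt(x, b):
    """Offline optimum: rent throughout, or buy on day one."""
    return min(x, b)
```

With b = 100 and lam = 0.5, an accurate prediction (x = y = 200) costs 149 against an optimum of 100, a ratio of 1.49 <= 1 + lam (consistency), while a badly wrong prediction (x = 200, y = 1) costs 299, a ratio of 2.99 <= 1 + 1/lam (robustness): good advice helps, and bad advice cannot hurt beyond a bounded factor.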
Recently, it has been shown that useful information lies in the dual setting (a different way of looking at a given problem), and that this information is learnable and can be used to provide predictions for matching (Agrawal et al., 2018; Lavastida et al., 2020), scheduling (Lattanzi et al., 2020), and flows (Lavastida et al., 2020). Moreover, in a different approach (Bamas et al., 2020), the primal-dual method can be extended to design algorithms with predictions. In this internship, we plan to:
- characterize the strengths and limits of information from the dual setting: to what extent is such information useful and related to learnable predictions;
- design algorithms with performance guarantees for problems with non-linear objectives. A killer application would be the energy minimization problem, for which data are available and where any improvement, even a tiny one, in the performance guarantee would have a real impact on energy management.
Location: IBISC, Univ-Evry, University Paris-Saclay. Address: 23 boulevard de France, 91037 Evry.
Duration: from February 2022 to July 2022.
Expectation: ideally, a paper is completed by the end of the internship. For further information, please feel free to send Thang NGUYEN an email.
Kim Thang NGUYEN (Associate Prof. Univ. Evry, IBISC AROB@S team), kimthangDOTnguyenATuniv-evryDOTfr
Shipra Agrawal, Morteza Zadimoghaddam, and Vahab Mirrokni. Proportional allocation: Simple, distributed, and diverse matching with high entropy. In International Conference on Machine Learning, pages 99–108, 2018.
Etienne Bamas, Andreas Maggiori, and Ola Svensson. The primal-dual method for learning augmented algorithms. In 34th Conference on Neural Information Processing Systems, 2020.
Silvio Lattanzi, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii. Online scheduling via learned weights. In 31st ACM-SIAM Symposium on Discrete Algorithms, pages 1859–1877, 2020.
Thomas Lavastida, Benjamin Moseley, R. Ravi, and Chenyang Xu. Learnable and instance-robust predictions for online matching, flows and load balancing. arXiv:2011.11743, 2020.
Michael Mitzenmacher and Sergei Vassilvitskii. Algorithms with predictions. In Beyond the Worst-Case Analysis of Algorithms. Cambridge University Press, 2020.