Title: Counterfactual interpretation of deep models for biomarker discovery
Keywords: neural networks, counterfactual interpretation, XAI, gene expression.
Deep learning is expected to play a pivotal role in diagnostic and therapeutic decision-making. Models trained on genomic data can predict diverse patient phenotypes with remarkable accuracy. Understanding the key factors underpinning the decisions of such models therefore becomes crucial [1]: the most influential variables could potentially serve as biomarkers or therapeutic targets for the disease.
The goal of this internship is to use counterfactual interpretation [2-5] of a deep neural network to identify relevant biomarkers.
The first step will be to construct a highly accurate predictive model for a specified phenotype (such as cancer type or prognosis) from genomic data. We will then explore counterfactual explanations of the model's predictions. This will involve solving a constrained optimization problem that integrates the distinct characteristics of genomic data. By comparing real and counterfactual patients, we intend to identify potential biomarkers.
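As a rough illustration of the counterfactual step, the sketch below searches for a counterfactual patient under a Wachter-style objective: a prediction loss pushing the output toward the desired phenotype, plus an L1 penalty keeping the counterfactual close (and sparsely different) to the real patient. The logistic model, its weights, and the input are toy placeholders standing in for the actual genomic predictor, not part of the original proposal.

```python
import numpy as np

# Toy stand-in for the trained phenotype predictor: a logistic model.
# In practice f would be the deep network trained on genomic data;
# the weights below are illustrative placeholders.
w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])

def f(x):
    """Predicted probability of the phenotype for expression profile x."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def counterfactual(x, target=1.0, lam=0.1, lr=0.5, steps=500):
    """Gradient search for x' minimizing (f(x') - target)**2 + lam * ||x' - x||_1.

    The L1 term encourages sparse changes, so only a few genes are
    perturbed -- those genes are the biomarker candidates.
    """
    xp = x.copy()
    for _ in range(steps):
        p = f(xp)
        # chain rule through the squared loss and the logistic
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * w
        # subgradient of the L1 distance penalty
        grad_dist = lam * np.sign(xp - x)
        xp -= lr * (grad_pred + grad_dist)
    return xp

x = np.zeros(5)            # a "real patient", predicted at probability 0.5
x_cf = counterfactual(x)   # counterfactual pushed toward the phenotype
delta = x_cf - x           # genes with large |delta| are biomarker candidates
```

In the actual project, the optimization would additionally encode genomic constraints (e.g., non-negative or biologically plausible expression values), for instance by projecting the iterate back onto the feasible set after each gradient step.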
[1] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi. A survey of methods for explaining black box models. ACM Comput. Surv., 51(5):93:1-93:42, 2018.
[2] S. Wachter, B. Mittelstadt, and C. Russell. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. SSRN Journal, 2017.
[3] S. Dandl, C. Molnar, M. Binder, and B. Bischl. Multi-objective counterfactual explanations. 2020.
[4] S. Verma, J. Dickerson, and K. Hines. Counterfactual explanations for machine learning: A review. arXiv, 2020.
[5] A. Van Looveren and J. Klaise. Interpretable counterfactual explanations guided by prototypes. arXiv, 2020.