Logic-Enabled Verification and Explanation of ML Models
Alexey Ignatiev, Joao Marques-Silva, Kuldeep Meel, Nina Narodytska
(IJCAI 2020 Tutorial)
slides
description
The tutorial overviews recent trends in verification and explainability of machine learning (ML) models from the standpoint of formal logic. It will illustrate how powerful logic-based methods can be employed in a variety of practical applications, including certifying the robustness and safety of ML models, generating explanations of ML model decisions, and more. The primary objective of this tutorial is to introduce and explain a topic of emerging importance for AI researchers and practitioners.
The tutorial is designed around three emerging research areas:
- Property verification of ML models: (1) logic-based methods for checking existential properties of ML models (e.g. is there an input that violates a given property? see the first sketch after this list); (2) quantitative estimation of probabilistic properties (e.g. what is the probability that a random valid input violates a property?)
- Two orthogonal approaches to interpretability of ML models: (1) building interpretable (transparent) models, such as decision trees or decision sets, and (2) extracting post-hoc explanations from non-interpretable models, such as neural networks.
- Explaining predictions of ML models. We will cover recent and popular heuristic approaches, e.g. LIME, Anchor, and SHAP, and then delve into rigorous logic-based approaches for computing explanations (a minimal example of such an explanation is sketched after this list).
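To make the first question concrete, here is a minimal sketch, assuming the Z3 SMT solver and a toy two-unit ReLU network, of how an existential property query can be encoded and checked. The network, input bounds, and property are illustrative assumptions, not material from the tutorial.

```python
# Minimal sketch (not tutorial code) of the existential query
# "is there an input that violates a given property?" for a toy
# ReLU network, encoded with the Z3 SMT solver (pip install z3-solver).
from z3 import Real, If, Solver, sat

def relu(t):
    return If(t >= 0, t, 0)

x1, x2 = Real('x1'), Real('x2')

# Toy network with two ReLU units (an illustrative assumption):
# y = relu(2*x1 - x2 + 1) - relu(x1 + x2).
y = relu(2*x1 - x2 + 1) - relu(x1 + x2)

s = Solver()
s.add(x1 >= 0, x1 <= 1, x2 >= 0, x2 <= 1)   # valid input domain
s.add(y > 1)                                 # negation of the property "y <= 1"

if s.check() == sat:
    print("property violated, counterexample:", s.model())
else:
    print("property holds on the whole input domain")
```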
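In the same spirit, the following is a minimal sketch of a logic-based (abductive) explanation, again assuming Z3 and a toy linear classifier of our own making: starting from all feature values of a concrete instance, we drop every value that is not needed to entail the prediction, checking entailment with the solver.

```python
# Minimal sketch (not tutorial code) of an abductive explanation
# computed with the Z3 SMT solver; the classifier, domain, and
# instance below are illustrative assumptions.
from z3 import Real, Solver, unsat

x1, x2, x3 = Real('x1'), Real('x2'), Real('x3')

# Toy classifier: predict class 1 iff 3*x1 + 2*x2 - x3 >= 4,
# with all inputs ranging over [0, 2].
score = 3*x1 + 2*x2 - x3
domain = [c for v in (x1, x2, x3) for c in (v >= 0, v <= 2)]

# Concrete instance to explain; the model predicts class 1 (score = 5).
instance = [(x1, 2), (x2, 0), (x3, 1)]

def entails_prediction(fixed):
    """True iff fixing the given (feature, value) pairs forces the
    prediction to remain class 1, i.e. no counterexample exists."""
    s = Solver()
    s.add(domain)
    s.add([v == val for (v, val) in fixed])
    s.add(score < 4)                 # try to flip the prediction
    return s.check() == unsat

# Drop feature values one by one, keeping only those needed for entailment.
explanation = list(instance)
for pair in list(instance):
    candidate = [p for p in explanation if p is not pair]
    if entails_prediction(candidate):
        explanation = candidate

print("explanation:", [(str(v), val) for (v, val) in explanation])
# -> only x1 = 2 remains: it alone guarantees score >= 4 on the domain.
```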
prerequisites
The audience is assumed to have basic understanding of the concepts arising in formal logic and automated reasoning.
presenters
- Alexey Ignatiev (Monash University, Australia)
Alexey Ignatiev is a Senior Lecturer at Monash University, Australia. Alexey's recent work is mainly focused on reasoning with SAT and SMT oracles, analysis of overconstrained systems, knowledge compilation, and a multitude of practical applications in AI: from graph optimization problems to model-based diagnosis and explainable AI.
- Joao Marques-Silva (ANITI, Univ. Toulouse, France)
Joao Marques-Silva is affiliated with ANITI, Univ. Toulouse, France. Joao has made seminal contributions to automated reasoning (AR), including the development of Conflict-Driven Clause Learning (CDCL). Joao has taught at SAT/SMT summer schools and has presented tutorials at leading venues, including recent tutorials at IJCAI 2019 and AAAI 2020.
- Kuldeep Meel (National University of Singapore)
Kuldeep Meel is the Sung Kah Kay Assistant Professor of Computer Science at the National University of Singapore. His research interests lie at the intersection of AI and Formal Methods. He is a recipient of the 2019 NRF Fellowship for AI and has presented tutorials at AAAI, UAI, and IJCAI.
- Nina Narodytska (VMware Research)
Nina Narodytska is a senior researcher at VMware Research. Nina works on developing efficient search algorithms for decision and optimization problems, and on verification and explainability of machine learning models. She has presented invited talks at SAT 2017 and CP 2019, and a tutorial at FMCAD 2018.