[Proof Complexity] Seminar Friday Oct 15 at 10:00 CEST with Alexey Ignatiev: Logic-based explainable AI

Jakob Nordström jakob.nordstrom at cs.lth.se
Thu Sep 30 11:35:54 CEST 2021


Dear all,

On Friday October 15 at 10:00 CEST we are having a MIAO video seminar with Alexey Ignatiev from Monash University titled "Logic-based explainable AI". See below for the abstract.

We will meet virtually at https://lu-se.zoom.us/j/61925271827 . Please feel free to share this information with colleagues who you think might be interested. We are also hoping to record the seminar and post it on the MIAO Research YouTube channel https://www.youtube.com/channel/UCN0G2Wfl9-sAKrVLVza7z4A for people who would like to hear the talk but cannot attend.

Most of our seminars consist of two parts: first a 50-55-minute regular talk, and then, after a break, a ca 1-hour in-depth technical presentation with (hopefully) a lot of interaction. The intention is that the first part of the seminar will give all listeners an overview of some exciting research results, and after the break people who have the time and interest will get an opportunity to really get into the technical details. However, for those who feel that the first part was enough, it is perfectly fine to just discreetly drop out during the break. No questions asked; no excuses needed.

More information about upcoming video seminars can be found at http://www.jakobnordstrom.se/videoseminars/ . If you do not wish to receive these announcements, or receive several copies of them, please send a message to jakob.nordstrom at cs.lth.se.

Best regards,
Jakob Nordström

**********

Friday Oct 15 at 10:00 CEST
Logic-based explainable AI
(Alexey Ignatiev, Monash University)

Explainable artificial intelligence (XAI) is arguably one of the most crucial challenges facing AI today. Although the majority of approaches to XAI are heuristic in nature, recent work has proposed the use of abductive reasoning to compute provably correct explanations for machine learning (ML) predictions. This rigorous approach has been shown to be useful not only for computing trustworthy explanations but also for reasoning about explanations computed heuristically. It has also been applied to uncover a close relationship between XAI and the verification of ML models. This talk will give an overview of advances in the rigorous logic-based approach to XAI, as well as the use of reasoning in devising interpretable rule-based ML models, including decision trees, decision sets, and decision lists.
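To give a concrete flavour of what such provably correct explanations look like, below is a minimal sketch of the standard deletion-based procedure for computing one abductive explanation, i.e. a subset-minimal set of feature assignments that provably entails the model's prediction. The toy classifier, the feature names x1, x2, x3, and the choice of the z3 SMT solver are illustrative assumptions only, not material from the talk.

from z3 import Bools, And, Or, Not, Solver, unsat

x1, x2, x3 = Bools("x1 x2 x3")
features = [x1, x2, x3]

# Hypothetical hand-written classifier: predicts True iff (x1 and x2) or x3.
model_predicts_true = Or(And(x1, x2), x3)

# Concrete instance to explain; the model predicts True on it.
instance = {x1: True, x2: True, x3: False}

def entails(assumed):
    # The assumed feature values entail the prediction exactly when
    # "assumed values hold and the prediction is False" is unsatisfiable.
    s = Solver()
    s.add([f if v else Not(f) for f, v in assumed.items()])
    s.add(Not(model_predicts_true))
    return s.check() == unsat

# Deletion-based minimisation: try to drop each feature in turn and keep it
# only if the remaining values no longer entail the prediction.
explanation = dict(instance)
for f in features:
    trial = {g: v for g, v in explanation.items() if g is not f}
    if entails(trial):
        explanation = trial

print({str(f): v for f, v in explanation.items()})  # {'x1': True, 'x2': True}

The resulting set {x1 = True, x2 = True} is sufficient for the prediction regardless of x3, and no proper subset of it is, which is the kind of formal guarantee that heuristic explanation methods do not provide.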


Jakob Nordström, Professor
University of Copenhagen and Lund University
Phone: +46 70 742 21 98
http://www.jakobnordstrom.se


