

Tutorial

Foundations of Interpretable AI

106 A
Wed 11 Jun 6 a.m. PDT — 10 a.m. PDT

Abstract:

In recent years, the lack of interpretability has emerged as a significant barrier to the widespread adoption of deep learning techniques, particularly in domains where AI decisions can have consequential impacts on human lives, such as healthcare and finance. Recent attempts at interpreting the decisions made by a deep network can be broadly classified into two categories: (i) methods that seek to explain existing models (post-hoc explainability), and (ii) methods that seek to build models that are explainable by design. This tutorial aims to provide a comprehensive overview of both approaches, along with a discussion of their limitations.
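The two categories above can be illustrated with a toy sketch. This is a hypothetical example, not material from the tutorial: a fixed linear scorer stands in for a "black-box" model, a finite-difference gradient stands in for a post-hoc attribution method, and the per-feature weight-times-input decomposition stands in for an explainable-by-design model, where each weight is directly readable as a feature's contribution.

```python
import numpy as np

# Hypothetical "black-box" model: a fixed linear scorer f(x) = w . x
w = np.array([2.0, -1.0, 0.0, 0.5])
x = np.array([1.0, 3.0, -2.0, 4.0])

def f(x):
    return float(w @ x)

# (i) Post-hoc explanation: attribute the prediction to input features
# without opening the model. For a differentiable model, the gradient of
# f w.r.t. x is one simple attribution; for a linear model it recovers w.
eps = 1e-6
grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                 for e in np.eye(len(x))])

# (ii) Explainable by design: a (sparse) linear model is its own
# explanation -- the score decomposes additively into per-feature terms.
contributions = w * x

print(grad)           # numerical gradient, approximately equal to w
print(contributions)  # additive decomposition of f(x)
```

For deep networks the gap between the two categories is what the tutorial examines: post-hoc attributions only approximate the model's reasoning, while by-design models constrain the architecture so that the decomposition is exact.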
