

Poster

IDGI: A Framework To Eliminate Explanation Noise From Integrated Gradients

Ruo Yang · Binghui Wang · Mustafa Bilgic

West Building Exhibit Halls ABC 298

Abstract:

Integrated Gradients (IG) and its variants are well-known techniques for interpreting the decisions of deep neural networks. While IG-based approaches attain state-of-the-art performance, they often integrate noise into their explanation saliency maps, which reduces their interpretability. To minimize this noise, we analytically examine its source and propose a new approach to reduce it based on our findings. We propose the Important Direction Gradient Integration (IDGI) framework, which can easily be incorporated into any IG-based method that uses Riemann integration to compute the integrated gradients. Extensive experiments with three IG-based methods show that IDGI improves them drastically on numerous interpretability metrics.

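The abstract describes the setting in two parts: IG-style methods approximate a path integral of gradients with a Riemann sum, and IDGI filters out the portion of each Riemann step that does not move the model's output. The sketch below illustrates both, assuming a scalar class-score function `f` and a gradient oracle `grad_f` (both hypothetical callables introduced for illustration); the `idgi_like` function is a reading of the "important direction" idea in the abstract, not the authors' reference implementation.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    # Left Riemann sum of gradients along the straight-line path from
    # `baseline` to `x`; `grad_f(p)` returns df/dx at point p.
    # `x` and `baseline` are assumed to be float arrays of the same shape.
    attribution = np.zeros_like(x)
    for j in range(steps):
        x_j = baseline + (j / steps) * (x - baseline)
        x_next = baseline + ((j + 1) / steps) * (x - baseline)
        attribution += grad_f(x_j) * (x_next - x_j)  # per-step IG contribution
    return attribution

def idgi_like(f, grad_f, x, baseline, steps=50):
    # Sketch of the "important direction" idea: at each Riemann step, keep
    # only the component aligned with the gradient (the direction that
    # changes f at first order), apportioned element-wise by the squared
    # gradient so the step's attributions sum to the observed change in f.
    attribution = np.zeros_like(x)
    for j in range(steps):
        x_j = baseline + (j / steps) * (x - baseline)
        x_next = baseline + ((j + 1) / steps) * (x - baseline)
        g = grad_f(x_j)
        d = f(x_next) - f(x_j)   # change in the class score over this step
        w = g * g                # element-wise weight along the gradient
        attribution += w * d / (w.sum() + 1e-12)
    return attribution
```

One consequence of this construction: each `idgi_like` step attributes exactly the observed change in `f`, so the summed attributions match `f(x) - f(baseline)` by telescoping, while the component of each step orthogonal to the gradient, which contributes only noise at first order, is discarded.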