Model-agnostic visual explanations via approximate bilinear models
Boris Joukovsky, Fawaz Sammani, Nikos Deligiannis
This paper proposes InteractionLIME: a model-agnostic attribution technique that explains the predictions of deep models in terms of feature interactions. Specifically, we regress a bilinear form to approximate the output of any two-input model by sampling perturbations of both inputs simultaneously. After training, we retrieve a global explanation and a set of feature partitioning maps via the singular value decomposition of the learned interaction matrix of the bilinear model. We demonstrate InteractionLIME on vision and text-vision contrastive models, using visual examples and quantitative evaluation metrics. Our results show that the bilinear model successfully retrieves important interacting features from both inputs, while strongly reducing the occurrence of incomplete or asymmetric explanations produced by a linear model.
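The following is a minimal sketch of how such a bilinear surrogate could be fit, assuming the black-box two-input model is exposed as a function of two binary feature-presence masks (e.g., over image superpixels and text tokens). The function and variable names, the joint uniform mask sampling, and the plain ridge-regularized least-squares solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_bilinear_surrogate(model_fn, n_feats_a, n_feats_b,
                           n_samples=2000, lam=1e-2, seed=0):
    """Fit y ~ z_a^T W z_b from joint perturbations of both inputs.

    model_fn(mask_a, mask_b) -> scalar output of the black-box two-input
    model, with features kept or removed according to the binary masks.
    (Hypothetical interface for illustration.)
    """
    rng = np.random.default_rng(seed)
    # Sample binary perturbation masks for both inputs simultaneously
    Za = rng.integers(0, 2, size=(n_samples, n_feats_a)).astype(float)
    Zb = rng.integers(0, 2, size=(n_samples, n_feats_b)).astype(float)
    y = np.array([model_fn(a, b) for a, b in zip(Za, Zb)])

    # Design matrix of pairwise products: y ~ sum_ij W_ij * za_i * zb_j
    X = (Za[:, :, None] * Zb[:, None, :]).reshape(n_samples, -1)

    # Ridge-regularized least squares for the interaction matrix W
    A = X.T @ X + lam * np.eye(X.shape[1])
    w = np.linalg.solve(A, X.T @ y)
    W = w.reshape(n_feats_a, n_feats_b)

    # SVD of W: paired singular vectors act as interacting feature maps
    # for the two inputs; singular values rank the interaction components.
    U, S, Vt = np.linalg.svd(W)
    return W, U, S, Vt
```

In a text-vision contrastive setting, model_fn could, for instance, wrap the similarity score of an image-text model evaluated on the masked image regions and masked tokens; the leading pair U[:, 0], Vt[0] would then be read as the dominant interacting feature maps of the two inputs.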