We interpret the predictions of any black-box structured-input, structured-output model around a specific input-output pair. Our method returns an “explanation” consisting of groups of input-output tokens that are causally related. It infers these dependencies by querying the model with perturbed inputs, generating a graph over tokens from the responses, and solving a partitioning problem to select the most relevant components. We focus on sequence-to-sequence problems, adopting a variational autoencoder to yield meaningful input perturbations. We test our method across several NLP sequence generation tasks.
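The perturb-query-partition pipeline described above can be sketched in simplified form. The snippet below is a minimal illustration, not the paper's method: it replaces the VAE perturbations with plain token dropout, uses a toy stand-in model (`black_box`), scores each input-output token pair by how often dropping the input token changes the output token, and applies a crude threshold in place of the paper's graph-partitioning step. All names and parameters here are hypothetical.

```python
import random
from collections import defaultdict

def black_box(tokens):
    # Toy stand-in for a seq2seq model: each output token depends
    # only on the input token at the same position (hypothetical).
    return [t.upper() for t in tokens]

def explain(tokens, model, n_samples=200, drop_p=0.3, seed=0):
    """Estimate input-output dependencies by perturbing the input.

    Perturbation here is simple token dropout, a stand-in for the
    paper's VAE-based perturbations. For each (input i, output j)
    pair we score how often dropping input token i perturbs output
    position j, yielding a weighted bipartite dependency graph.
    """
    rng = random.Random(seed)
    base_out = model(tokens)
    weights = defaultdict(float)  # (i, j) -> sum of change indicators
    counts = defaultdict(int)     # (i, j) -> number of drops of i
    for _ in range(n_samples):
        keep = [rng.random() > drop_p for _ in tokens]
        if not any(keep):
            continue
        out = model([t for t, k in zip(tokens, keep) if k])
        for j, o in enumerate(base_out):
            # Crude alignment: an output token "changed" if it no
            # longer appears anywhere in the perturbed output.
            changed = o not in out
            for i, k in enumerate(keep):
                if not k:
                    counts[(i, j)] += 1
                    weights[(i, j)] += float(changed)
    return {k: weights[k] / counts[k] for k in counts}

def partition(graph, threshold=0.5):
    # Crude stand-in for the partitioning step: keep only edges
    # whose dependency score clears the threshold.
    return {edge for edge, w in graph.items() if w >= threshold}
```

With the toy model, dropping token i deterministically removes output token i, so the surviving edges are exactly the diagonal pairs `(i, i)`; a real black-box model would produce a richer graph whose partition groups causally related token sets.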

Comments: EMNLP 2017
Subjects: Learning (cs.LG)
Cite as: arXiv:1707.01943 [cs.LG]
(or arXiv:1707.01943v1 [cs.LG] for this version)