We interpret the predictions of any black-box structured input-structured output model around a specific input-output pair. Our method returns an “explanation” consisting of groups of input-output tokens that are causally related. It infers these dependencies by querying the model with perturbed inputs, generating a graph over tokens from the responses, and solving a partitioning problem to select the most relevant components. We instantiate the general approach on sequence-to-sequence problems, adopting a variational autoencoder to yield meaningful input perturbations. We evaluate our method across several NLP sequence generation tasks.
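The perturb-query-partition pipeline above can be sketched in a toy form. This is a hypothetical illustration, not the paper's implementation: it replaces the variational-autoencoder perturbations with random token substitutions from a small placeholder vocabulary, scores each (input position, output token) edge by the difference in the output token's appearance frequency when the input token is kept versus replaced, and thresholds the resulting graph instead of solving a full partitioning problem. All names (`perturb`, `dependency_graph`, `explanation`) are invented for this sketch.

```python
import random
from collections import defaultdict

def perturb(tokens, rng, vocab=("foo", "bar", "baz")):
    # Crude stand-in for VAE-based perturbations: independently
    # replace each token with a random placeholder, prob. 0.5.
    return [t if rng.random() < 0.5 else rng.choice(vocab) for t in tokens]

def dependency_graph(model, x, num_samples=400, seed=0):
    # Estimate edge weights between input positions and output tokens:
    # weight(i, y) = P(y in output | x[i] kept) - P(y in output | x[i] replaced).
    rng = random.Random(seed)
    kept = defaultdict(lambda: defaultdict(int))   # counts when x[i] survives
    repl = defaultdict(lambda: defaultdict(int))   # counts when x[i] is replaced
    kept_n, repl_n = defaultdict(int), defaultdict(int)
    for _ in range(num_samples):
        xp = perturb(x, rng)
        y = set(model(xp))                         # query the black box
        for i, (orig, new) in enumerate(zip(x, xp)):
            if new == orig:
                kept_n[i] += 1
                for tok in y:
                    kept[i][tok] += 1
            else:
                repl_n[i] += 1
                for tok in y:
                    repl[i][tok] += 1
    graph = {}
    for i in range(len(x)):
        for tok in set(kept[i]) | set(repl[i]):
            p_kept = kept[i][tok] / max(kept_n[i], 1)
            p_repl = repl[i][tok] / max(repl_n[i], 1)
            graph[(i, tok)] = p_kept - p_repl
    return graph

def explanation(graph, threshold=0.5):
    # Simplified selection step: keep only strong causal edges
    # (the paper solves a graph-partitioning problem here instead).
    return {edge for edge, w in graph.items() if w >= threshold}
```

For a toy "model" that uppercases each token (so each output token depends on exactly one input token), the recovered explanation links each input position to its uppercased counterpart, e.g. `dependency_graph(lambda ts: [t.upper() for t in ts], ["the", "cat", "sat"])` assigns weight near 1 to the edge `(1, "CAT")`.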