FIND: A Function Description Benchmark for Evaluating Interpretability Methods

* indicates equal contribution.
1MIT CSAIL 2Northeastern University
NeurIPS 2023

Abstract

Labeling neural network submodules with human-legible descriptions is useful for many downstream tasks: such descriptions can surface failures, guide interventions, and perhaps even explain important model behaviors. To date, most mechanistic descriptions of trained networks have involved small models, narrowly delimited phenomena, and large amounts of human labor. Labeling all human-interpretable sub-computations in models of increasing size and complexity will almost certainly require tools that can generate and validate descriptions automatically. Recently, techniques that use learned models in the loop for labeling have begun to gain traction, but methods for evaluating their efficacy are limited and ad hoc. How should we validate and compare open-ended labeling tools? This paper introduces FIND (Function INterpretation and Description), a benchmark suite for evaluating the building blocks of automated interpretability methods. FIND contains functions that resemble components of trained neural networks, and accompanying descriptions of the kind we seek to generate. The functions are procedurally constructed across textual and numeric domains, and involve a range of real-world complexities, including noise, composition, approximation, and bias. We evaluate methods that use pretrained language models (LMs) to produce code-based and natural language descriptions of function behavior. Additionally, we introduce a new interactive method in which an Automated Interpretability Agent (AIA) generates function descriptions. We find that an AIA, built with an off-the-shelf LM augmented with black-box access to functions, can sometimes infer function structure, acting as a scientist by forming hypotheses, proposing experiments, and updating descriptions in light of new data. However, FIND also reveals that LM-based descriptions capture global function behavior while missing local details.
These results suggest that FIND will be useful for characterizing the performance of more sophisticated interpretability methods before they are applied to real-world models.
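The abstract describes the AIA as an agent with only black-box access to a function, which forms a hypothesis, tests it with new experiments, and revises it when falsified. A minimal sketch of that loop is below; it is illustrative only, not the paper's implementation, and replaces the LM with a scripted hypothesis tester (affine maps) so the example runs without any model API:

```python
# Illustrative AIA-style interpretation loop (a sketch, not the paper's code).
# The agent can only query `black_box` on inputs of its choosing.
import random

def black_box(x):
    # A hidden FIND-style function the agent must describe.
    return 2 * x + 3

def run_experiment(f, inputs):
    # The agent's only access to f: query it on chosen inputs.
    return [(x, f(x)) for x in inputs]

def propose_and_test(f, trials=5):
    # Hypothesis space here: affine maps y = a*x + b, standing in for the
    # open-ended natural language hypotheses an LM would generate.
    (x0, y0), (x1, y1) = run_experiment(f, [0, 1])
    a = (y1 - y0) / (x1 - x0)
    b = y0 - a * x0
    # Follow-up experiments: probe the hypothesis on fresh inputs.
    for x in random.sample(range(-100, 100), trials):
        if f(x) != a * x + b:
            return None  # hypothesis falsified; a real AIA would revise it
    return f"lambda x: {a} * x + {b}"

description = propose_and_test(black_box)
```

In FIND itself the hypothesis and experiments are produced by an LM in natural language and code, but the scientist-style loop (observe, hypothesize, test, revise) is the same.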


Figure: FIND dataset and the Automated Interpretability Agent. FIND is constructed procedurally: atomic functions are defined across domains including elementary numeric operations (purple), string operations (green), and synthetic neural modules that compute semantic similarity to reference entities (yellow) and implement real-world factual associations (blue). Complexity is introduced through composition, bias, approximation, and noise. We provide an LM-based interpretation baseline that compares text and code interpretations to ground-truth function implementations.
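The caption outlines how FIND builds complex functions from atomic ones via composition, bias, and noise. The sketch below shows one way such wrappers might look; it is a hedged illustration under assumed helper names (`compose`, `with_noise`, `with_domain_bias`), not the dataset's actual generation code:

```python
# Sketch of FIND-style procedural construction (assumed names, not the
# dataset's real generation code).
import random

def atomic_double(x):
    return 2 * x

def atomic_square(x):
    return x * x

def compose(f, g):
    # Complexity via composition: x -> f(g(x)).
    return lambda x: f(g(x))

def with_noise(f, sigma=0.1, seed=0):
    # Complexity via additive Gaussian noise on the output.
    rng = random.Random(seed)
    return lambda x: f(x) + rng.gauss(0, sigma)

def with_domain_bias(f, corrupted, value=0.0):
    # Complexity via bias: corrupt behavior on a chosen subdomain.
    return lambda x: value if x in corrupted else f(x)

square_then_double = compose(atomic_double, atomic_square)
biased = with_domain_bias(square_then_double, corrupted={0})
```

An interpreter that only reports the global trend (`2 * x ** 2`) would miss the corrupted subdomain, which is exactly the global-versus-local failure mode the abstract highlights.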


AIA interpretations (interactive examples: numeric functions, string functions, synthetic neurons)

BibTeX

@inproceedings{schwettmann2023find,
  title={FIND: A Function Description Benchmark for Evaluating Interpretability Methods},
  author={Schwettmann, Sarah and Rott Shaham, Tamar and Materzynska, Joanna and Chowdhury, Neil and Li, Shuang and Andreas, Jacob and Bau, David and Torralba, Antonio},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2023}
}