Patching open-vocabulary models by interpolating weights

Paper | Code | Data


Open-vocabulary models like CLIP achieve high accuracy across many image classification tasks. However, there are still settings where their zero-shot performance is far from optimal. We study model patching, where the goal is to improve accuracy on specific tasks without degrading accuracy on tasks where performance is already adequate. Towards this goal, we introduce PAINT, a patching method that uses interpolations between the weights of a model before fine-tuning and the weights after fine-tuning on a task to be patched. On nine tasks where zero-shot CLIP performs poorly, PAINT increases accuracy by 15 to 60 percentage points while preserving accuracy on ImageNet within one percentage point of the zero-shot model. PAINT also allows a single model to be patched on multiple tasks and improves with model scale. Furthermore, we identify cases of broad transfer, where patching on one task increases accuracy on other tasks even when the tasks have disjoint classes. Finally, we investigate applications beyond common benchmarks, such as counting and reducing the impact of typographic attacks on CLIP. Our findings demonstrate that it is possible to expand the set of tasks on which open-vocabulary models achieve high accuracy without re-training them from scratch.


PAINT interactive demo

Explore predictions for both a supported task (CIFAR-10) and a patching task (MNIST) by setting an interpolation factor α. When α = 0, we have the model θ0, the original open-vocabulary model (here a CLIP ViT-B/32). When α = 1, we have the model θ1, which is fully fine-tuned on MNIST. Notice that for α around 0.25, we recover a patched model whose CIFAR-10 accuracy drops only negligibly while gaining around 50 percentage points on MNIST relative to θ0.

θpatch = (1 - α) · θ0 + α · θ1, where

[Interactive widget: a slider sets α; the page reports AccCIFAR-10(θpatch) and AccMNIST(θpatch), and shows example inputs with θpatch predictions for the supported task (CIFAR-10) and the patching task (MNIST).]

Note: for purposes of this demo, predictions are cached.
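The interpolation used by the demo can be sketched in a few lines of code. Below is a minimal, illustrative Python version in which plain dictionaries of floats stand in for framework state dicts; the function name and toy parameters are our own, not from the paper's codebase.

```python
# Minimal sketch of PAINT-style weight interpolation.
# theta_0 (zero-shot) and theta_1 (fine-tuned) are assumed to be
# dictionaries mapping parameter names to lists of floats with identical
# keys and shapes; a real implementation would operate on model state
# dicts (e.g. PyTorch tensors) instead.

def interpolate_weights(theta_0, theta_1, alpha):
    """Return theta_patch = (1 - alpha) * theta_0 + alpha * theta_1."""
    assert theta_0.keys() == theta_1.keys(), "models must share an architecture"
    return {
        name: [(1 - alpha) * w0 + alpha * w1
               for w0, w1 in zip(theta_0[name], theta_1[name])]
        for name in theta_0
    }

# Toy example: alpha = 0 recovers theta_0 and alpha = 1 recovers theta_1;
# intermediate alpha mixes the two sets of weights elementwise.
theta_0 = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0]}
theta_1 = {"layer.weight": [3.0, 4.0], "layer.bias": [1.0]}

theta_patch = interpolate_weights(theta_0, theta_1, 0.25)
print(theta_patch["layer.weight"])  # [1.5, 2.5]
```

Because the two endpoints share an architecture, the mix is purely elementwise; no extra parameters are introduced, and α can be tuned per deployment to trade off supported-task and patching-task accuracy.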


Team

1 University of Washington · 2 Columbia University · 3 Google Research, Brain Team

Bibtex

@article{ilharco2022patching,
	title={Patching open-vocabulary models by interpolating weights},
	author={Ilharco, Gabriel and Wortsman, Mitchell and Gadre, Samir Yitzhak and Song, Shuran and Hajishirzi, Hannaneh and Kornblith, Simon and Farhadi, Ali and Schmidt, Ludwig},
	journal={arXiv},
	year={2022}
}
							

Acknowledgements

We thank Akari Asai, Alex Fang, Huy Ha, Ari Holtzman, Pieter-Jan Kindermans, Marco Tulio Ribeiro, Ofir Press, Sarah Pratt, Sewon Min, Thao Nguyen and Tim Dettmers for helpful discussions and feedback, and Hyak at UW for computing support.


Contact

If you have any questions, please contact Gabriel, Mitchell, or Samir.