Is Oracle Pruning the True Oracle?

Sicheng Feng1,2  Keda Tao1  Huan Wang1,* 

2024

1Westlake University, Hangzhou, China 2Nankai University, Tianjin, China
*Corresponding author: wanghuan@westlake.edu.cn

Analysis framework of this work. We study the validity of oracle pruning by examining the correlation between the pruned train loss and the final test performance (test accuracy or test loss). We apply this analysis framework to a wide range of networks and datasets (from toy networks such as LeNet5-Mini to very large ones such as ViT-B/16 and TinyLLaVA-3.1B) for a comprehensive evaluation. The key finding is that, surprisingly, oracle pruning is invalid on modern networks and datasets (from the CIFAR level upward). This finding sharply challenges a belief the network pruning field has held for the past 35 years.
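As an illustration of this correlation analysis, the following sketch computes Pearson and Kendall correlations between the pruned train loss and the final test accuracy of a set of candidate subnetworks. The arrays are hypothetical placeholders, not numbers from the paper.

# Sketch of the correlation analysis between pruned train loss (before
# retraining) and final test accuracy (after retraining). Each entry
# corresponds to one candidate pruned subnetwork; values are placeholders.
import numpy as np
from scipy import stats

pruned_train_loss = np.array([0.12, 0.35, 0.08, 0.50, 0.21])  # before retraining
final_test_acc    = np.array([98.9, 99.1, 98.7, 98.8, 99.0])  # after retraining (%)

pearson_r, pearson_p = stats.pearsonr(pruned_train_loss, final_test_acc)
tau, tau_p = stats.kendalltau(pruned_train_loss, final_test_acc)

print(f"Pearson r = {pearson_r:.3f} (p = {pearson_p:.3f})")
print(f"Kendall tau = {tau:.3f} (p = {tau_p:.3f})")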

Abstract

Oracle pruning, which selects unimportant weights by minimizing the pruned train loss, has served as the foundation of most neural network pruning methods for over 35 years, yet few (if any) have questioned how well this foundation actually holds. This paper, for the first time, examines its validity on modern deep models through empirical correlation analyses and offers reflections on the field of neural network pruning. Specifically, for a typical pruning algorithm with three stages (pretraining, pruning, and retraining), we analyze the correlation between the model performance before and after retraining. Extensive experiments (37K models trained) are conducted across a wide spectrum of models (LeNet5, VGG, ResNets, ViT, MLLM) and datasets (MNIST and its variants, CIFAR10/CIFAR100, ImageNet-1K, MLLM data). The results lead to a surprising conclusion: on modern deep learning models, the performance before retraining is barely correlated with the performance after retraining. In other words, the weights selected by oracle pruning can hardly guarantee good performance after retraining, which further implies that existing works using oracle pruning to derive pruning criteria may be groundless from the start. Further studies suggest that rising task complexity is one factor that renders oracle pruning invalid today. Given this evidence, we argue that the retraining stage of a pruning algorithm should be accounted for when developing any pruning criterion.
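To make the pipeline concrete, below is a minimal sketch of the oracle pruning step for a single convolutional layer, written in PyTorch under our own assumptions: the model, the train loader, the layer name, and the subset size are hypothetical placeholders, and pruning is emulated by zeroing filters rather than physically removing them. In the full three-stage pipeline, the selected subnetwork would then be retrained.

# Oracle pruning sketch for one conv layer: enumerate candidate filter
# subsets, score each by the train loss of the pruned model (before
# retraining), and keep the subset with the smallest loss.
import copy
import itertools
import torch
import torch.nn.functional as F

@torch.no_grad()
def pruned_train_loss(model, train_loader, device="cuda"):
    # Average cross-entropy of the pruned model on the train set (no retraining).
    model.eval()
    total, n = 0.0, 0
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        total += F.cross_entropy(model(x), y, reduction="sum").item()
        n += y.numel()
    return total / n

def oracle_prune_layer(model, train_loader, layer_name="conv1", num_to_prune=5):
    # Exhaustive search over filter subsets -- only feasible for tiny layers,
    # which is exactly the regime where oracle pruning was originally studied.
    conv = dict(model.named_modules())[layer_name]
    num_filters = conv.weight.shape[0]
    best_loss, best_subset = float("inf"), None
    for subset in itertools.combinations(range(num_filters), num_to_prune):
        candidate = copy.deepcopy(model)
        cand_conv = dict(candidate.named_modules())[layer_name]
        cand_conv.weight.data[list(subset)] = 0.0  # emulate pruning by zeroing filters
        loss = pruned_train_loss(candidate, train_loader)
        if loss < best_loss:
            best_loss, best_subset = loss, subset
    return best_subset, best_loss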

Correlation Results on MNIST

Pruned train loss vs. final test accuracy on MNIST with LeNet5-Mini. Each subcaption indicates the pruning rate of the corresponding plot. The blue star marks the oracle pruning result (the candidate with the smallest pruned train loss). Points whose final test accuracy is higher than that of oracle pruning are marked in red (anomaly points); those lower are marked in green.
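For reference, a scatter plot with this marking convention can be sketched as follows; the data are randomly generated placeholders, not results from the paper.

# Scatter plot sketch: pruned train loss (x) vs. final test accuracy (y).
# The oracle point (smallest pruned train loss) is drawn as a blue star;
# points beating the oracle's final accuracy are red, the rest green.
import numpy as np
import matplotlib.pyplot as plt

pruned_train_loss = np.random.uniform(0.05, 0.60, size=100)  # placeholder data
final_test_acc = 98.5 + 0.3 * np.random.randn(100)           # placeholder data (%)

oracle_idx = int(np.argmin(pruned_train_loss))
oracle_acc = final_test_acc[oracle_idx]

colors = np.where(final_test_acc > oracle_acc, "red", "green")
plt.scatter(pruned_train_loss, final_test_acc, c=colors, s=12)
plt.scatter(pruned_train_loss[oracle_idx], oracle_acc,
            marker="*", c="blue", s=200, label="oracle pruning")
plt.xlabel("Pruned train loss")
plt.ylabel("Final test accuracy (%)")
plt.legend()
plt.show()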





Correlation Results with VGG19/ResNet56/ResNet18

Pruned train loss vs. final test accuracy with ResNet56 (on CIFAR10), VGG19 (on CIFAR100), and ResNet18 (on ImageNet-1K).

Correlation Results with MNIST variants

Pruned train loss vs. final test accuracy on variants of the MNIST dataset, with the LeNet5-Mini network (pruning ratio 0.5, Conv1 layer). FMNIST and KMNIST are two more complex drop-in replacements for MNIST. As shown, the correlation becomes weaker on more challenging datasets.

Correlation Results with LeNet5-Mini variants

Pruned train loss vs. final test accuracy on MNIST with different variants of LeNet5-Mini (pruning ratio 0.5, Conv1 layer). The original LeNet5-Mini (Base) has 5 layers (D5), each with 10 neurons (W10). We vary the model width and depth to obtain the variants. As shown, the correlation becomes weaker when pruning more complex networks.


Correlation Results: The Lesson

Test accuracy after 10% of the retraining epochs vs. test accuracy after 100% of the retraining epochs.
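A minimal sketch of this comparison, assuming one has recorded, for each candidate subnetwork, its test accuracy after 10% of the retraining epochs and its final test accuracy; the arrays are hypothetical placeholders.

# Correlate test accuracy after a short retraining budget (10% of epochs)
# with the final test accuracy (100% of epochs) across candidates.
import numpy as np
from scipy import stats

acc_at_10pct  = np.array([97.8, 98.1, 97.5, 98.3, 97.9])  # placeholder (%)
acc_at_100pct = np.array([98.6, 98.9, 98.4, 99.0, 98.7])  # placeholder (%)

tau, p = stats.kendalltau(acc_at_10pct, acc_at_100pct)
print(f"Kendall tau = {tau:.3f} (p = {p:.3f})")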



BibTeX

@article{feng2024oracle,
  title={Is Oracle Pruning the True Oracle?},
  author={Feng, Sicheng and Tao, Keda and Wang, Huan},
  journal={arXiv preprint arXiv:2412.00143},
  year={2024},
}