

Poster

InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning

Ziheng Qin · Kai Wang · Zangwei Zheng · Jianyang Gu · Xiangyu Peng · Zhaopan Xu · Zhou Daquan · Lei Shang · Baigui Sun · Xuansong Xie · Yang You

Halle B #118
Fri 10 May 7:30 a.m. PDT — 9:30 a.m. PDT
 
Oral presentation: Oral 8B
Fri 10 May 6:45 a.m. PDT — 7:30 a.m. PDT

Abstract:

Data pruning aims to achieve lossless performance at a lower overall cost. A common approach is to filter out samples that contribute less to training, but this can bias the gradient expectation relative to the original data. To solve this problem, we propose InfoBatch, a novel framework that achieves lossless training acceleration through unbiased dynamic data pruning. Specifically, InfoBatch randomly prunes a portion of less informative samples based on the loss distribution and rescales the gradients of the remaining samples to approximate the original gradient. As a plug-and-play and architecture-agnostic framework, InfoBatch consistently obtains lossless training results on classification, semantic segmentation, vision pretraining, and instruction fine-tuning tasks. On CIFAR10/100, ImageNet-1K, and ADE20K, InfoBatch losslessly saves 40% of the overall cost. For pretraining MAE and a diffusion model, InfoBatch saves 24.8% and 27% of the cost, respectively. For LLaMA instruction fine-tuning, combining InfoBatch with the recent coreset selection method DQ achieves a 10x acceleration. Our results encourage further exploration of the data efficiency of large model training. Code is publicly available at NUS-HPC-AI-Lab/InfoBatch.
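To make the pruning-and-rescaling idea concrete, below is a minimal sketch (not the authors' implementation) of unbiased dynamic pruning in the spirit described by the abstract, assuming a PyTorch loop with per-sample losses; names such as `prune_ratio` and `infobatch_style_mask` are hypothetical.

```python
import torch

def infobatch_style_mask(scores: torch.Tensor, prune_ratio: float = 0.5):
    """Given per-sample loss scores, randomly drop a fraction of the
    low-loss ("less informative") samples and return (keep_mask, weights)
    so that the expected weighted gradient matches the full dataset."""
    mean_score = scores.mean()
    low = scores < mean_score                       # candidates for pruning
    drop = low & (torch.rand_like(scores) < prune_ratio)
    keep_mask = ~drop
    # Rescale surviving low-loss samples by 1/(1 - prune_ratio) so the
    # gradient expectation over kept samples is unbiased w.r.t. the full set.
    weights = torch.ones_like(scores)
    weights[low] = 1.0 / (1.0 - prune_ratio)
    return keep_mask, weights[keep_mask]

# Illustrative usage inside a training step (stand-in per-sample losses):
losses = torch.rand(8)
keep, w = infobatch_style_mask(losses, prune_ratio=0.5)
weighted_loss = (losses[keep] * w).mean()           # backpropagate this instead
```

The rescaling factor is what keeps the pruned training signal unbiased: the dropped low-loss samples are compensated by up-weighting the surviving ones from the same pool, so on average the gradient matches full-data training.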
