

Poster
in
Workshop: Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions

Beyond ID Bias: PCA-Guided Dropout for Robust Fine-tuning

Bo Fei · Xiaocheng Li · ZhangZhiqi · Youchen Qing · YANCONG DENG

Keywords: [ Vision-Language Model ] [ Out-of-Distribution Generalization ] [ Robust Fine-tuning ]


Abstract:

Fine-tuning large-scale pre-trained models often improves in-distribution (ID) performance at the cost of out-of-distribution (OOD) generalization due to overfitting to ID-specific features. To mitigate this, we propose PCA Dropout, a novel fine-tuning strategy that suppresses ID-specific feature dependencies by leveraging Principal Component Analysis (PCA). Our method identifies dominant feature components that contribute the most to ID variance and applies structured dropout to reduce their influence, encouraging the model to learn more generalizable representations. We evaluate PCA Dropout on DomainNet and iWildCam using CLIP-based models, demonstrating consistent improvements in OOD robustness over state-of-the-art fine-tuning methods while maintaining strong ID accuracy. Ablation studies further confirm that structured dropout at the feature level outperforms unstructured feature suppression and random dropout strategies.
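The core idea sketched below is a plausible reading of the abstract, not the authors' released code: fit PCA on in-distribution features, take the top-k directions that explain the most ID variance, and during training randomly remove entire directions' contributions from each feature vector (structured dropout over components rather than individual units). All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def fit_pca_directions(feats):
    """Return principal directions of ID features, sorted by explained
    variance (rows of Vt from the SVD of the centered feature matrix)."""
    centered = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt  # shape (d, d): row i is the i-th principal direction

def pca_dropout(x, components, k=5, p=0.5, rng=None, training=True):
    """Illustrative PCA-guided dropout (assumed formulation):
    with probability p, subtract a top-k ID direction's contribution
    from every feature vector in the batch."""
    if not training:
        return x
    rng = np.random.default_rng() if rng is None else rng
    top = components[:k]                         # (k, d) dominant ID directions
    coeffs = x @ top.T                           # (n, k) projections onto them
    keep = (rng.random(k) > p).astype(x.dtype)   # drop whole directions at once
    dropped = (coeffs * (1.0 - keep)) @ top      # contribution to remove
    return x - dropped
```

In use, one would extract CLIP features for the ID training set, call `fit_pca_directions` once, and apply `pca_dropout` to features during fine-tuning only; at evaluation time (`training=False`) features pass through unchanged. Dropping whole principal directions, rather than independent coordinates, is what makes the dropout "structured" in the sense the abstract's ablations compare against.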
