

Poster

OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free Class-Incremental Learning

Wei-Cheng Huang · Chun-Fu Chen · Hsiang Hsu

Halle B #220
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Recent works have shown that by using large pre-trained models along with learnable prompts, rehearsal-free methods for class-incremental learning (CIL) settings can achieve performance superior to prominent rehearsal-based ones. Rehearsal-free CIL methods struggle to distinguish classes from different tasks, as those classes are never trained together. In this work we propose a regularization method based on virtual outliers to tighten the decision boundaries of the classifier, so that confusion of classes among different tasks is mitigated. Recent prompt-based methods often require a pool of task-specific prompts to prevent knowledge of previous tasks from being overwritten by that of the new task, which incurs extra computation for querying and composing an appropriate prompt from the pool. As we reveal in the paper, this additional cost can be eliminated without sacrificing accuracy. We illustrate that a simplified prompt-based method can achieve results comparable to previous state-of-the-art (SOTA) methods equipped with a prompt pool, while using far fewer learnable parameters and incurring lower inference cost. Our regularization method has demonstrated its compatibility with different prompt-based methods, boosting the accuracy of previous SOTA rehearsal-free CIL methods on the ImageNet-R and CIFAR-100 benchmarks. Our source code is available at https://github.com/jpmorganchase/ovor.
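The abstract describes the virtual-outlier regularization only at a high level. The PyTorch sketch below illustrates one plausible way such a scheme could work, assuming outliers are drawn from low-likelihood regions of class-conditional Gaussians fitted in the feature space and that the classifier is penalized for being confident on them. Every function name, hyper-parameter, and the uniform-confidence penalty here is an illustrative assumption, not the paper's exact formulation; consult the linked repository for the actual method.

```python
import torch
import torch.nn.functional as F

def sample_virtual_outliers(feats, labels, num_classes, k=5, n_cand=200, eps=1e-4):
    """Hypothetical sketch: fit a Gaussian per class in feature space and
    keep the lowest-density candidate samples as virtual outliers."""
    outliers = []
    for c in range(num_classes):
        fc = feats[labels == c]
        if fc.shape[0] < 2:
            continue  # need at least two samples to estimate a covariance
        mu = fc.mean(dim=0)
        cov = torch.cov(fc.T) + eps * torch.eye(fc.shape[1])  # regularized covariance
        dist = torch.distributions.MultivariateNormal(mu, cov)
        cand = dist.sample((n_cand,))                 # candidate points
        logp = dist.log_prob(cand)
        idx = logp.topk(k, largest=False).indices     # k least-likely candidates
        outliers.append(cand[idx])
    if not outliers:
        return feats.new_zeros(0, feats.shape[1])
    return torch.cat(outliers)

def loss_with_outlier_regularization(classifier, feats, labels, virtual_outliers, weight=0.1):
    """Cross-entropy on real features, plus a term pushing the classifier
    toward uniform (low-confidence) predictions on virtual outliers,
    which tightens decision boundaries. The weight is illustrative."""
    ce = F.cross_entropy(classifier(feats), labels)
    if virtual_outliers.shape[0] == 0:
        return ce
    logits_out = classifier(virtual_outliers)
    uniform = torch.full_like(logits_out, 1.0 / logits_out.shape[1])
    reg = F.kl_div(F.log_softmax(logits_out, dim=1), uniform, reduction="batchmean")
    return ce + weight * reg
```

In this reading, the regularizer is agnostic to how prompts are handled, which is consistent with the abstract's claim that it composes with different prompt-based methods, including a single shared prompt in place of a per-task prompt pool.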
