

In-Person Poster presentation / poster accept

Optimizing SPCA-based Continual Learning: A Theoretical Approach

Chunchun Yang · Malik Tiomoko · Zengfu Wang

MH1-2-3-4 #163

Keywords: [ Theory ] [ machine learning theory ] [ high dimensional statistics ] [ continual learning ]


Abstract:

Catastrophic forgetting and the stability-plasticity dilemma are two major obstacles to continual learning. In this paper, we first propose a theoretical analysis of an SPCA-based continual learning algorithm using high-dimensional statistics. Second, we design OSCL (Optimized SPCA-based Continual Learning), which builds a flexible task-weighting optimization on top of this theory. When optimizing for a single task, catastrophic forgetting can be provably prevented. When optimizing over multiple tasks, the trade-off between integrating knowledge from the new task and retaining knowledge of previous tasks can be controlled by assigning appropriate weights to the corresponding tasks according to the objectives. Experimental results confirm that the theoretical conclusions are robust across a wide range of data distributions. Moreover, several applications on synthetic and real data show that the proposed method, while computationally efficient, achieves results comparable to the state of the art.
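The abstract's key mechanism is per-task weighting: larger weight on an old task favors stability (retaining its knowledge), larger weight on a new task favors plasticity. The page gives no implementation details, so the sketch below is only a hypothetical illustration of that weighting idea, assuming a supervised-PCA-style projection built from the label cross-covariance X^T y of each task; the function names, the weight values, and the synthetic data are all assumptions, not the paper's actual OSCL algorithm.

```python
import numpy as np

def weighted_spca_projection(tasks, weights, k=1):
    """Shared k-dim projection from per-task (X, y) pairs.

    Supervised-PCA-style sketch: each task contributes the rank-one
    matrix (X^T y)(X^T y)^T scaled by its weight, and the projection
    is formed from the top-k eigenvectors of the weighted sum, so a
    larger weight pulls the subspace toward that task's
    discriminative direction.
    """
    d = tasks[0][0].shape[1]
    M = np.zeros((d, d))
    for (X, y), w in zip(tasks, weights):
        c = X.T @ y                      # cross-covariance with labels, shape (d,)
        M += w * np.outer(c, c)
    _, eigvecs = np.linalg.eigh(M)       # eigenvalues come back in ascending order
    return eigvecs[:, -k:]               # keep the top-k directions

rng = np.random.default_rng(0)
d, n = 50, 200

def make_task(coords):
    """Binary Gaussian task whose class means differ on the given coordinates."""
    mu = np.zeros(d)
    mu[coords] = 1.5
    y = rng.choice([-1.0, 1.0], size=n)
    X = rng.normal(size=(n, d)) + np.outer(y, mu)
    return X, y

# Two tasks with orthogonal discriminative directions, so a 1-dim
# subspace can only serve one of them well at a time.
old_task, new_task = make_task(slice(0, 5)), make_task(slice(5, 10))

for weights in ([0.9, 0.1], [0.1, 0.9]):
    V = weighted_spca_projection([old_task, new_task], weights)
    accs = []
    for X, y in (old_task, new_task):
        Z = X @ V                        # project into the shared subspace
        mu_pos, mu_neg = Z[y > 0].mean(axis=0), Z[y < 0].mean(axis=0)
        pred = np.where(np.linalg.norm(Z - mu_pos, axis=1)
                        < np.linalg.norm(Z - mu_neg, axis=1), 1.0, -1.0)
        accs.append((pred == y).mean())
    print(f"weights={weights}: old-task acc={accs[0]:.2f}, new-task acc={accs[1]:.2f}")
```

Running this shows the trade-off the abstract describes: with most weight on the old task its accuracy stays high while the new task suffers, and vice versa. In OSCL the weights are presumably chosen via the paper's high-dimensional analysis rather than hand-picked as in this toy example.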
