

Poster

SemiReward: A General Reward Model for Semi-supervised Learning

Siyuan Li · Weiyang Jin · Zedong Wang · Fang Wu · Zicheng Liu · Cheng Tan · Stan Z. Li

Halle B #183
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Semi-supervised learning (SSL) has witnessed great progress through various improvements to the self-training framework with pseudo labeling. The main challenge lies in distinguishing high-quality pseudo labels against confirmation bias. However, existing pseudo-label selection strategies are limited to pre-defined schemes or complex hand-crafted policies specially designed for classification, and fail to achieve high-quality labels, fast convergence, and task versatility simultaneously. To these ends, we propose a Semi-supervised Reward framework (SemiReward) that predicts reward scores to evaluate and select high-quality pseudo labels, and that can be plugged into mainstream SSL methods across a wide range of task types and scenarios. To mitigate confirmation bias, SemiReward is trained online in two stages with a generator model and a subsampling strategy. Extensive experiments on classification and regression tasks across 13 standard SSL benchmarks spanning three modalities verify that SemiReward achieves significant performance gains and faster convergence over Pseudo Label, FlexMatch, and Free/SoftMatch. Code and models are available at https://github.com/Westlake-AI/SemiReward.
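As a rough illustration of the core idea described above (reward-score filtering of pseudo labels), the Python sketch below thresholds the scores of a small reward model over (feature, pseudo-label) pairs from an unlabeled batch. The RewardModel architecture, the names filter_pseudo_labels and tau, and all dimensions are illustrative assumptions, not the authors' implementation or training procedure; see the linked repository for the actual code.

    # Minimal sketch of reward-based pseudo-label filtering (assumptions:
    # RewardModel, filter_pseudo_labels, and tau are hypothetical names; the
    # two-stage online training of SemiReward is not shown here).
    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        """Toy rewarder: scores a (feature, pseudo-label) pair in [0, 1]."""

        def __init__(self, feat_dim: int, num_classes: int, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim + num_classes, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
                nn.Sigmoid(),
            )

        def forward(self, feats: torch.Tensor, pseudo_onehot: torch.Tensor) -> torch.Tensor:
            # Concatenate features with the one-hot pseudo label and score the pair.
            return self.net(torch.cat([feats, pseudo_onehot], dim=-1)).squeeze(-1)

    @torch.no_grad()
    def filter_pseudo_labels(rewarder, feats, logits, tau=0.9):
        """Keep unlabeled samples whose pseudo label scores at least tau."""
        pseudo = logits.argmax(dim=-1)
        onehot = nn.functional.one_hot(pseudo, logits.size(-1)).float()
        scores = rewarder(feats, onehot)
        keep = scores >= tau
        return pseudo[keep], keep

    if __name__ == "__main__":
        rewarder = RewardModel(feat_dim=32, num_classes=10)
        feats = torch.randn(8, 32)   # features of an unlabeled batch
        logits = torch.randn(8, 10)  # student-model predictions
        labels, mask = filter_pseudo_labels(rewarder, feats, logits, tau=0.5)
        print(mask.sum().item(), "of 8 pseudo labels kept")

Unlike fixed confidence thresholds on softmax probabilities, a learned scorer of this kind can, in principle, be applied to regression outputs as well, which is what makes the approach pluggable across task types.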
