

In-Person Poster Presentation / Poster Accept

Revocable Deep Reinforcement Learning with Affinity Regularization for Outlier-Robust Graph Matching

Chang Liu · Zetian Jiang · Runzhong Wang · Lingxiao Huang · Pinyan Lu · Junchi Yan

MH1-2-3-4 #25

Keywords: [ Applications ] [ Reinforcement Learning ] [ Graph Matching ] [ Affinity Regularization ] [ Combinatorial Optimization ] [ Quadratic Assignment ]


Abstract:

Graph matching (GM) has been a building block in various areas including computer vision and pattern recognition. Despite recent impressive progress, existing deep GM methods often have obvious difficulty in handling outliers, which are ubiquitous in practice. We propose RGM, a deep reinforcement learning-based approach whose sequential node-matching scheme naturally fits the strategy of selectively matching inliers while rejecting outliers. A revocable action framework is devised to improve the agent's flexibility on the complex, constrained GM problem. Moreover, we propose a quadratic approximation technique to regularize the affinity score in the presence of outliers. As such, the agent can terminate inlier matching in time once the affinity score stops growing; otherwise, an additional parameter, i.e. the number of inliers, would be needed to avoid matching outliers. In this paper, we focus on learning the back-end solver under the most general form of GM, Lawler's QAP, whose input is the affinity matrix; notably, our approach can also boost existing GM methods that use such input. Experiments on multiple real-world datasets demonstrate its performance in terms of both accuracy and robustness.
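To make the objective concrete: under Lawler's QAP, a matching is scored as vec(X)ᵀ K vec(X), where X is the binary assignment matrix and K the affinity matrix. Below is a minimal NumPy sketch of this scoring, plus a greedy sequential matcher that stops once the score stops growing, echoing the stopping criterion described in the abstract. This is an illustrative baseline only, not the paper's RL agent; the function names, the random affinity matrix, and the row-major vectorization convention are all assumptions for the sake of the example.

```python
import numpy as np

def lawler_qap_score(X: np.ndarray, K: np.ndarray) -> float:
    """Lawler's QAP objective: vec(X)^T K vec(X).

    X : (n1, n2) binary assignment matrix (X[i, j] = 1 iff node i of
        graph 1 is matched to node j of graph 2).
    K : (n1*n2, n1*n2) affinity matrix; diagonal entries encode node
        affinities, off-diagonal entries encode edge affinities.
    NOTE: this flattens X row-major; K must use the same layout.
    """
    x = X.reshape(-1)
    return float(x @ K @ x)

def greedy_sequential_match(K: np.ndarray, n1: int, n2: int):
    """Illustrative greedy stand-in for sequential node matching
    (NOT the paper's RL agent): repeatedly add the node pair with the
    largest marginal affinity gain, and stop as soon as no pair
    increases the score -- mimicking "stop when the affinity score
    stops growing" without knowing the number of inliers."""
    X = np.zeros((n1, n2))
    used_rows, used_cols = set(), set()
    score = 0.0
    while True:
        best = None
        for i in range(n1):
            if i in used_rows:
                continue
            for j in range(n2):
                if j in used_cols:
                    continue
                X[i, j] = 1.0
                s = lawler_qap_score(X, K)
                X[i, j] = 0.0
                if s > score and (best is None or s > best[0]):
                    best = (s, i, j)
        if best is None:          # no pair improves the score: stop
            return X, score
        score, i, j = best
        X[i, j] = 1.0
        used_rows.add(i)
        used_cols.add(j)

# Toy usage: two 3-node graphs with a random symmetric affinity matrix.
n1 = n2 = 3
rng = np.random.default_rng(0)
K = rng.random((n1 * n2, n1 * n2))
K = (K + K.T) / 2                 # affinity matrices are typically symmetric
X, score = greedy_sequential_match(K, n1, n2)
print(score, "\n", X)
```

The paper's contribution replaces this myopic greedy policy with a learned agent whose actions are revocable, i.e. a previously committed node pair can be undone, and regularizes the affinity score so that the stop-when-flat criterion remains reliable under outliers.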
