Revocable Deep Reinforcement Learning with Affinity Regularization for Outlier-Robust Graph Matching


Abstract

Graph matching (GM) has been a building block in many areas, including computer vision and pattern recognition. Despite recent impressive progress, existing deep GM methods often have difficulty handling outliers in both graphs, which are ubiquitous in practice. We propose RGM, a deep reinforcement learning (RL) based approach to weighted graph matching, whose sequential node-matching scheme naturally fits the strategy of selective inlier matching against outliers and also supports seeded graph matching. A revocable action scheme is devised to improve the agent's flexibility on this complex constrained matching task. Moreover, we propose a quadratic approximation technique to regularize the affinity matrix in the presence of outliers. As such, the RL agent can terminate inlier matching in a timely manner once the objective score stops growing; otherwise, an additional hyperparameter, i.e., the number of common inliers, would be needed to avoid matching outliers. In this paper, we focus on learning the back-end solver for the most general form of GM: Lawler's QAP, whose input is the affinity matrix. Our approach can also boost other solvers that take the affinity matrix as input. Experimental results on both synthetic and real-world datasets showcase the superior performance of our method in terms of both matching accuracy and robustness.
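For reference, Lawler's QAP mentioned above is conventionally posed as a quadratic assignment over the affinity matrix. The following is the standard formulation from the GM literature (the symbols and constraint form are the common convention, not taken from this paper's text):

```latex
% Lawler's QAP in its common graph-matching form: X is an
% n1-by-n2 binary (partial) permutation matrix encoding node
% correspondences, and K is the (n1 n2)-by-(n1 n2) affinity
% matrix whose diagonal entries carry node-to-node affinities
% and whose off-diagonal entries carry edge-to-edge affinities.
\begin{equation}
  \max_{\mathbf{X}} \;
    \operatorname{vec}(\mathbf{X})^{\top} \mathbf{K}\,
    \operatorname{vec}(\mathbf{X})
  \quad \text{s.t.} \quad
  \mathbf{X} \in \{0,1\}^{n_1 \times n_2},\;
  \mathbf{X}\mathbf{1} \le \mathbf{1},\;
  \mathbf{X}^{\top}\mathbf{1} \le \mathbf{1}.
\end{equation}
```

The inequality (rather than equality) constraints allow partial matchings, which is what permits leaving outlier nodes unmatched.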
