Asymptotic Properties of $\mathcal{S}$-$\mathcal{AB}$ Method with Diminishing Stepsize


Abstract

The popular $\mathcal{AB}$/Push-Pull method for distributed optimization unifies many of the existing decentralized first-order methods based on the gradient tracking technique. More recently, a stochastic gradient variant of the $\mathcal{AB}$/Push-Pull method ($\mathcal{S}$-$\mathcal{AB}$) has been proposed, which converges linearly to a neighborhood of the global minimizer when the step-size is constant. This paper is devoted to the asymptotic properties of $\mathcal{S}$-$\mathcal{AB}$ with diminishing stepsizes. Specifically, under the condition that each local objective is smooth and the global objective is strongly convex, we first establish the boundedness of the iterates of $\mathcal{S}$-$\mathcal{AB}$ and then show that the iterates converge to the global minimizer at the rate $\mathcal{O}\left(1/\sqrt{k}\right)$. Furthermore, the asymptotic normality of the Polyak-Ruppert averaged $\mathcal{S}$-$\mathcal{AB}$ iterates is obtained, and applications to statistical inference are discussed. Finally, numerical tests are conducted to demonstrate the theoretical results.
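To make the setting concrete, the following is a minimal sketch of the $\mathcal{S}$-$\mathcal{AB}$ update $x^{k+1} = A(x^k - \alpha_k y^k)$, $y^{k+1} = B y^k + g^{k+1} - g^k$ with a diminishing step-size $\alpha_k$, where $A$ is row-stochastic, $B$ is column-stochastic, and $g^k$ collects stochastic gradients. The quadratic local objectives, the directed-ring mixing matrices, the noise level, and the step-size schedule below are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                               # number of agents, dimension

# Illustrative local objectives f_i(x) = 0.5 * ||H_i x - b_i||^2,
# whose sum is strongly convex (assumption for this sketch).
H = rng.standard_normal((n, d, d)) + 2.0 * np.eye(d)
b = rng.standard_normal((n, d))

def stoch_grad(i, x):
    """Stochastic gradient of f_i at x: exact gradient plus noise."""
    return H[i].T @ (H[i] @ x - b[i]) + 0.1 * rng.standard_normal(d)

# Row-stochastic A and column-stochastic B on a directed ring.
A = np.zeros((n, n)); B = np.zeros((n, n))
for i in range(n):
    A[i, i] = A[i, (i - 1) % n] = 0.5     # each row of A sums to 1
    B[i, i] = B[(i + 1) % n, i] = 0.5     # each column of B sums to 1

x = rng.standard_normal((n, d))           # stacked local iterates x_i^k
g = np.array([stoch_grad(i, x[i]) for i in range(n)])
y = g.copy()                              # gradient trackers, y_i^0 = g_i^0

for k in range(5000):
    alpha = 1.0 / (k + 10)                # diminishing step-size (illustrative)
    x_new = A @ (x - alpha * y)           # "pull": consensus on decision vars
    g_new = np.array([stoch_grad(i, x_new[i]) for i in range(n)])
    y = B @ y + g_new - g                 # "push": track the average gradient
    x, g = x_new, g_new

# Compare against the exact minimizer of the aggregate quadratic.
x_star = np.linalg.solve(sum(H[i].T @ H[i] for i in range(n)),
                         sum(H[i].T @ b[i] for i in range(n)))
print("max distance to minimizer:", np.max(np.linalg.norm(x - x_star, axis=1)))
```

With a constant $\alpha$ the iterates in this sketch would stall in a noise-dominated neighborhood of $x^\star$; letting $\alpha_k \to 0$ at the $\mathcal{O}(1/k)$ rate is what allows convergence to the minimizer itself, consistent with the $\mathcal{O}\left(1/\sqrt{k}\right)$ rate stated above.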
