
The popular $\mathcal{AB}$/Push-Pull method for distributed optimization unifies much of the existing family of decentralized first-order methods based on the gradient-tracking technique. More recently, a stochastic gradient variant of the $\mathcal{AB}$/Push-Pull method ($\mathcal{S}$-$\mathcal{AB}$) has been proposed, which achieves a linear rate of convergence to a neighborhood of the global minimizer when the step-size is constant. This paper is devoted to the asymptotic properties of $\mathcal{S}$-$\mathcal{AB}$ with diminishing step-size. Specifically, under the conditions that each local objective is smooth and the global objective is strongly convex, we first establish the boundedness of the iterates of $\mathcal{S}$-$\mathcal{AB}$ and then show that the iterates converge to the global minimizer at the rate $\mathcal{O}\left(1/\sqrt{k}\right)$. Furthermore, the asymptotic normality of the Polyak-Ruppert averaged $\mathcal{S}$-$\mathcal{AB}$ is obtained, and applications to statistical inference are discussed. Finally, numerical tests are conducted to demonstrate the theoretical results.
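The update described in this abstract can be illustrated with a minimal numerical sketch of a stochastic gradient-tracking (AB/Push-Pull-style) iteration with diminishing step-size and Polyak-Ruppert averaging. The ring network, the weight matrices, the quadratic local objectives $f_i(x) = \frac{1}{2}(x - b_i)^2$, the noise level, and the step-size schedule below are all illustrative assumptions, not specifics taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                               # number of agents (illustrative)
b = np.array([1.0, 2.0, 3.0, 4.0])  # f_i(x) = 0.5*(x - b_i)^2; global minimizer = b.mean() = 2.5

# Ring-network weights: A is row-stochastic ("pull"), B is column-stochastic ("push").
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.25
    A[i, (i + 1) % n] = 0.25
B = A.T.copy()                      # the ring weights are symmetric, so A.T is column-stochastic

def stoch_grad(x):
    """Noisy local gradients of the quadratic objectives (noise level is an assumption)."""
    return (x - b) + 0.01 * rng.standard_normal(n)

x = np.zeros(n)
g = stoch_grad(x)
y = g.copy()                        # gradient tracker, initialized at the local gradients
avg = np.zeros(n)                   # Polyak-Ruppert running average of the iterates

for k in range(20000):
    alpha = 1.0 / (k + 10)          # diminishing step-size
    x_next = A @ (x - alpha * y)    # consensus ("pull") plus gradient step
    g_next = stoch_grad(x_next)
    y = B @ y + g_next - g          # gradient-tracking update of the tracker ("push")
    x, g = x_next, g_next
    avg += (x - avg) / (k + 1)      # Polyak-Ruppert running average

print(avg)                          # each entry close to the global minimizer 2.5
```

On this toy problem the averaged iterates of all agents settle near the global minimizer 2.5, consistent with the $\mathcal{O}(1/\sqrt{k})$ convergence the abstract states for the averaged sequence.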
Considering the constrained stochastic optimization problem over a time-varying random network, where the agents collectively minimize a sum of objective functions subject to a common constraint set, we investigate the asymptotic properties of a distributed algorithm based on dual averaging of gradients. Different from most existing works on distributed dual averaging algorithms, which mainly concentrate on non-asymptotic properties, we prove not only almost sure convergence and the rate of almost sure convergence, but also asymptotic normality and asymptotic efficiency of the algorithm. First, for general constrained convex optimization problems distributed over a random network, we prove that almost sure consensus can be achieved and that the estimates of the agents converge to the same optimal point. For the case of linearly constrained convex optimization, we show that the mirror map of the averaged dual sequence identifies the active constraints of the optimal solution with probability 1, which helps us prove the almost sure convergence rate and then establish the asymptotic normality of the algorithm. Furthermore, we verify that the algorithm is asymptotically optimal. To the best of our knowledge, this is the first asymptotic normality result for constrained distributed optimization algorithms. Finally, a numerical example is provided to justify the theoretical analysis.
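The distributed dual-averaging scheme summarized above can be sketched as follows. This is a minimal illustration, not the paper's algorithm as specified: it uses a static doubly stochastic ring network (the paper treats time-varying random networks), quadratic local objectives, a Euclidean mirror map (so the mirror step is a plain projection), and a toy constraint set chosen so that the optimum sits on the boundary, i.e., a constraint is active at the solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])  # f_i(x) = 0.5*(x - b_i)^2; unconstrained minimizer 2.5
lo, hi = 0.0, 2.0                   # constraint set X = [0, 2]; constrained optimum x* = 2

# Doubly stochastic ring mixing matrix (static here; an illustrative assumption).
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

z = np.zeros(n)                     # dual variables: mixed running sums of gradients
x = np.zeros(n)                     # primal iterates of the agents

for k in range(1, 20001):
    g = (x - b) + 0.01 * rng.standard_normal(n)  # noisy local gradients
    z = P @ z + g                                # consensus on duals, then accumulate gradient
    step = 1.0 / np.sqrt(k)                      # diminishing dual-averaging step
    x = np.clip(-step * z, lo, hi)               # Euclidean mirror map: projection onto X

print(x)                            # all agents at the boundary point x* = 2.0
```

Because the unconstrained minimizer (2.5) lies outside X = [0, 2], the upper-bound constraint is active at the solution, and the projected iterates of every agent pin to the boundary point 2.0, mirroring the active-constraint identification the abstract describes.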
