
Solving specified-time distributed optimization problem via sampled-data-based algorithm

Published by Jialing Zhou
Publication date: 2021
Paper language: English





Despite significant advances in distributed continuous-time optimization of multi-agent networks, there is still a lack of an efficient algorithm for achieving distributed optimization at a pre-specified time. Herein, we design a specified-time distributed optimization algorithm for connected agents with directed topologies to collectively minimize the sum of individual objective functions subject to an equality constraint. With the designed algorithm, the settling time of distributed optimization can be exactly predefined. The selection of such a settling time is independent not only of the agents' initial conditions but also of the algorithm parameters and the communication topologies. Furthermore, the proposed algorithm realizes specified-time optimization by exchanging information among neighbours only at discrete sampling instants and thus reduces the communication burden. In addition, the equality constraint is always satisfied throughout the whole process, which makes the proposed algorithm applicable to solving distributed optimization problems online, such as economic dispatch. For the special case of undirected communication topologies, a reduced-order algorithm is also designed. Finally, the theoretical analysis is validated by numerical simulations.
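The abstract does not state the update law, but the constraint-preserving, sampled-data flavour can be illustrated on a toy economic-dispatch problem. The sketch below is a hedged illustration, not the paper's algorithm: the quadratic costs `a`, `b`, the demand `D`, the ring weights `W`, and the capped time-varying gain (a stand-in for a specified-time scaling) are all assumptions made for the example.

```python
import numpy as np

# Toy problem: min sum_i 0.5*a_i*(x_i - b_i)^2  subject to  sum_i x_i = D.
rng = np.random.default_rng(0)
n = 5
a = rng.uniform(1.0, 3.0, n)        # local quadratic coefficients (illustrative)
b = rng.uniform(-1.0, 1.0, n)       # local minimizers (illustrative)
D = 2.0                             # equality-constraint level (illustrative)

# Feasible initialization: split D equally, so sum(x) == D from the start.
x = np.full(n, D / n)

# Symmetric ring-graph weights (undirected special case of the abstract).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

T_s, T_final = 0.05, 2.0            # sampling period and desired settling time
steps = int(T_final / T_s)

for k in range(steps):
    g = a * (x - b)                              # local gradients
    # Gradients are exchanged with neighbours only at sampling instants;
    # this Laplacian-style difference leaves sum(x) unchanged.
    disagreement = W @ g - W.sum(axis=1) * g
    # Growing, capped gain loosely mimics a time-base generator used in
    # specified-time designs; purely illustrative, not the paper's gain.
    gain = min(0.3, T_s / (1.0 - k / steps + 1e-2))
    x = x + gain * disagreement

print("sum(x) =", round(x.sum(), 6), " (constraint level D =", D, ")")
print("gradients:", np.round(a * (x - b), 3), " (equalize at the optimum)")
```

At the constrained optimum the local gradients (marginal costs, in the economic-dispatch reading) coincide, while the equality constraint holds at every iterate because the update lies in the constraint's null space.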




Read also

We propose a novel direct transcription and solution method for solving nonlinear, continuous-time dynamic optimization problems. Instead of forcing the dynamic constraints to be satisfied only at a selected number of points as in direct collocation, the new approach alternates between minimizing and constraining the squared norm of the dynamic-constraint residuals integrated along the whole solution trajectory. As a result, the method can 1) obtain solutions of higher accuracy for the same mesh than direct collocation methods, 2) enable a flexible trade-off between solution accuracy and optimality, and 3) provide reliable solutions for challenging problems, including those with singular arcs and high-index differential-algebraic equations.
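A minimal sketch of the integrated-residual idea, under the simplifying assumption of a pure penalty formulation rather than the paper's alternating minimize-and-constrain scheme. The toy dynamics, polynomial degree, and weight `rho` are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (illustrative): minimize  int_0^1 (x^2 + u^2) dt
# subject to dynamics  x_dot = -x + u  and  x(0) = 1.
deg = 6                                # polynomial degree for x(t) and u(t)
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]

def unpack(z):
    cx, cu = z[:deg + 1], z[deg + 1:]
    x = np.polyval(cx, t)
    u = np.polyval(cu, t)
    xdot = np.polyval(np.polyder(cx), t)
    return x, u, xdot

def objective(z, rho=1e3):
    x, u, xdot = unpack(z)
    residual = xdot - (-x + u)                   # dynamic-constraint residual
    cost = np.sum(x**2 + u**2) * dt              # running cost
    penalty = rho * np.sum(residual**2) * dt     # integrated squared residual
    ic = 1e3 * (x[0] - 1.0)**2                   # enforce x(0) = 1 softly
    return cost + penalty + ic

sol = minimize(objective, np.zeros(2 * (deg + 1)), method="BFGS")
x, u, xdot = unpack(sol.x)
print("cost ~", round(np.sum(x**2 + u**2) * dt, 4),
      " max |residual| ~", round(np.abs(xdot + x - u).max(), 4))
```

Raising `rho` tightens the dynamics (higher accuracy) at the expense of optimality of the running cost, which is the trade-off the abstract refers to.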
We consider minimizing a sum of non-smooth objective functions with set constraints in a distributed manner. For this problem, we propose, for the first time, a distributed algorithm with an exponential convergence rate. Using the exact penalty method, we reformulate the problem equivalently as a standard distributed one without consensus constraints. We then design a distributed projected subgradient algorithm with the help of differential inclusions. Furthermore, we show that the algorithm converges to the optimal solution exponentially for strongly convex objective functions.
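The abstract describes a continuous-time design built on differential inclusions; as a hedged discrete-time stand-in, the classical distributed projected-subgradient iteration below shows the same ingredients (consensus mixing, a local subgradient step, projection onto the constraint set). All problem data and weights are illustrative.

```python
import numpy as np

# Toy problem: min over x of sum_i |x - b_i|  subject to  x in [lo, hi].
rng = np.random.default_rng(1)
n = 6
b = rng.uniform(-2.0, 2.0, n)            # local nonsmooth "targets"
lo, hi = -1.0, 1.0                       # common set constraint

# Doubly stochastic mixing matrix on a ring with self-loops.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = rng.uniform(-3.0, 3.0, n)            # local estimates
for k in range(1, 501):
    step = 1.0 / k                       # diminishing step size
    g = np.sign(x - b)                   # subgradient of |x - b_i|
    x = W @ x - step * g                 # consensus + local subgradient step
    x = np.clip(x, lo, hi)               # projection onto [lo, hi]

print("local estimates :", np.round(x, 3))
print("projected median:", round(float(np.clip(np.median(b), lo, hi)), 3))
```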
Chuanye Gu, Zhiyou Wu, Jueyou Li (2018)
We investigate a distributed optimization problem over a cooperative multi-agent time-varying network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global coupling constraints. Based on the push-sum protocol and dual decomposition, we design a distributed regularized dual gradient algorithm to solve this problem; the algorithm is implemented over time-varying directed graphs and requires only column stochasticity of the communication matrices. By augmenting the corresponding Lagrangian function with a quadratic regularization term, we first obtain a bound on the Lagrange multipliers which, unlike most primal-dual based methods, does not require constructing a compact set containing the dual optimal set. We then show that the convergence rate of the proposed method achieves the order of $\mathcal{O}(\ln T/T)$ for strongly convex objective functions, where $T$ is the number of iterations. Moreover, an explicit bound on the constraint violations is also given. Finally, numerical results on the network utility maximization problem demonstrate the efficiency of the proposed algorithm.
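The key ingredient that allows directed graphs with only column-stochastic weights is the push-sum (ratio-consensus) protocol. A minimal sketch of that building block follows, with a fixed directed cycle assumed for brevity (the paper allows time-varying graphs); the weights and data are illustrative.

```python
import numpy as np

# Each agent tracks a numerator y and a weight w; the ratio y/w converges
# to the network-wide average of the initial values, even though the
# weight matrix A is only column stochastic (not doubly stochastic).
rng = np.random.default_rng(2)
n = 5
values = rng.uniform(0.0, 10.0, n)         # local initial values

A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5                          # keep half of your mass
    A[(j + 1) % n, j] = 0.5                # send half to your out-neighbour
assert np.allclose(A.sum(axis=0), 1.0)     # column stochastic

y, w = values.copy(), np.ones(n)
for _ in range(100):
    y = A @ y
    w = A @ w

print("push-sum ratios:", np.round(y / w, 4))
print("true average   :", round(values.mean(), 4))
```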
A collection of optimization problems central to power system operation requires distributed solution architectures to avoid the need for aggregating all information at a central location. In this paper, we study distributed dual subgradient methods to solve three such optimization problems: tie-line scheduling in multi-area power systems, coordination of distributed energy resources in radial distribution networks, and joint dispatch of transmission and distribution assets. With suitable relaxations or approximations of the power flow equations, all three problems can be reduced to a multi-agent constrained convex optimization problem. We apply a constant step-size dual subgradient method with averaging to these problems and provide a convergence guarantee that is shown to be order-optimal. We illustrate its application on the three grid optimization problems.
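A hedged sketch of a constant step-size dual subgradient method with primal averaging, applied to a toy coupled-dispatch problem (not one of the paper's three power-system formulations; all data below are illustrative).

```python
import numpy as np

# Toy problem: min sum_i 0.5*c_i*x_i^2  s.t.  sum_i x_i = d,  0 <= x_i <= xmax.
c = np.array([1.0, 2.0, 4.0])     # local cost coefficients (illustrative)
d, xmax = 3.0, 2.0                # coupling level and box bound (illustrative)

lam = 0.0                         # dual variable for the coupling constraint
step = 0.05                       # constant step size
x_avg = np.zeros_like(c)

for k in range(1, 2001):
    # Each "agent" minimizes its own Lagrangian term independently.
    x = np.clip(lam / c, 0.0, xmax)
    # Dual subgradient = violation of the coupling constraint.
    lam = lam + step * (d - x.sum())
    # Running average of primal iterates (the averaging step).
    x_avg += (x - x_avg) / k

print("averaged primal:", np.round(x_avg, 3), " sum =", round(x_avg.sum(), 3))
```

With a constant step size the raw primal iterates oscillate around feasibility, while their running average settles near the optimal dispatch, which is why the averaging step matters.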
There has been work on exploiting polynomial approximation to solve distributed nonconvex optimization problems involving univariate objectives. This idea facilitates arbitrarily precise global optimization without requiring local evaluations of gradients at every iteration. Nonetheless, there remains a gap between existing theoretical guarantees and diverse practical requirements for dependability, notably privacy preservation and robustness to network imperfections (e.g., time-varying directed communication and asynchrony). To fill this gap while keeping the above strengths, we propose a Dependable Chebyshev-Proxy-based distributed Optimization Algorithm (D-CPOA). Specifically, to ensure both accuracy of solutions and privacy of local objectives, a new privacy-preserving mechanism is designed. This mechanism leverages the randomness in blockwise insertions of perturbed vector states and hence provides an improved privacy guarantee, compared to the literature, in terms of $(\alpha,\beta)$-data-privacy. Furthermore, to gain robustness to various network imperfections, we use the push-sum consensus protocol as a backbone, discuss its specific enhancements, and evaluate the performance of the proposed algorithm accordingly. Thanks to the linear consensus-based structure of the iterations, we avoid the privacy-accuracy trade-off and the difficulty of selecting appropriate step-sizes in different settings. We provide a rigorous analysis of the accuracy, dependability, and complexity, and show that the advantages brought by polynomial approximation are maintained when all of the above requirements are present. Simulations demonstrate the effectiveness of the developed algorithm.
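A minimal sketch of the Chebyshev-proxy idea underlying such methods: fit a Chebyshev interpolant to a univariate nonconvex objective and read off its global minimizer from the roots of the interpolant's derivative. The distributed, privacy-preserving machinery of D-CPOA is omitted, and the objective, interval, and degree are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.sin(3 * x) + 0.5 * x**2      # illustrative nonconvex objective
a, b = -3.0, 3.0                              # search interval

# Fit a Chebyshev interpolant on [a, b] from samples at Chebyshev points.
nodes = C.chebpts1(200)
xs = 0.5 * (b - a) * nodes + 0.5 * (a + b)
cheb = C.Chebyshev.fit(xs, f(xs), deg=40, domain=[a, b])

# Candidate minimizers: real roots of the derivative, plus the endpoints.
crit = cheb.deriv().roots()
crit = crit[np.isreal(crit)].real
crit = crit[(crit >= a) & (crit <= b)]
cands = np.concatenate([crit, [a, b]])
x_star = cands[np.argmin(f(cands))]

print("approximate global minimizer:", round(float(x_star), 4),
      " value:", round(float(f(x_star)), 4))
```

Because the proxy is a polynomial, its global minimum over the interval can be certified from finitely many critical points, which is what makes arbitrarily precise global optimization possible without per-iteration gradient evaluations.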