The load pick-up (LPP) problem seeks the optimal configuration of the electrical distribution system (EDS), aiming to minimize power loss or deliver maximum power to the load ends. The piecewise linearization (PWL) approximation method can be used to tackle the nonlinearity and nonconvexity in the network power flow (PF) constraints and transform the LPP model into a mixed-integer linear programming model (LPP-MILP model). However, for the PWL approximation based PF constraints, large linear approximation errors can affect the accuracy and feasibility of the LPP-MILP model's solution results. Moreover, the long modeling and solving time of the direct solution procedure for the LPP-MILP model may limit the applicability of the LPP optimization scheme. This paper proposes a multi-step PWL approximation based solution method for the LPP problem in the EDS. In the proposed multi-step solution procedure, the variable upper bounds in the PWL approximation functions are dynamically renewed to reduce the approximation errors effectively. The multi-step solution procedure also significantly decreases the modeling and solving time of the LPP-MILP model, which ensures the applicability of the LPP optimization scheme. For the two main application schemes of the LPP problem (i.e., network optimization reconfiguration and service restoration), the effectiveness of the proposed method is demonstrated via case studies on a real 13-bus EDS and a real 1066-bus EDS.
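As a rough illustration of why renewing (tightening) a variable's upper bound shrinks PWL approximation errors, the sketch below interpolates a quadratic loss-style term with equally spaced breakpoints; the function, bounds, and piece count are illustrative assumptions, not the paper's actual PF formulation.

```python
import numpy as np

def pwl_approx(f, lo, hi, n_pieces):
    """Build a piecewise linear interpolant of f on [lo, hi]
    using n_pieces segments with equally spaced breakpoints."""
    knots = np.linspace(lo, hi, n_pieces + 1)
    vals = f(knots)
    return lambda x: np.interp(x, knots, vals)

f = lambda p: p ** 2                    # quadratic term, as in loss expressions
g_wide = pwl_approx(f, 0.0, 1.0, 4)     # hypothetical initial upper bound 1.0
g_tight = pwl_approx(f, 0.0, 0.5, 4)    # renewed (tightened) upper bound 0.5

xs_wide = np.linspace(0.0, 1.0, 1001)
xs_tight = np.linspace(0.0, 0.5, 1001)
err_wide = np.max(np.abs(g_wide(xs_wide) - f(xs_wide)))
err_tight = np.max(np.abs(g_tight(xs_tight) - f(xs_tight)))
# For x**2 the chord error on a segment of width h is h**2 / 4, so
# halving the bound (with the same piece count) cuts the error 4x.
```

The same effect is what makes a multi-step procedure attractive: each step's solution suggests tighter variable bounds, and re-approximating over the tighter range reduces the error without adding pieces.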
As a representative mathematical expression of the power flow (PF) constraints in the electrical distribution system (EDS), piecewise linearization (PWL) based PF constraints have been widely used in different EDS optimization scenarios. However, the linearized approximation errors originating from the currently used PWL approximation function can be very large and thus affect the applicability of the PWL-based PF constraints. This letter analyzes the approximation error self-optimal (ESO) condition of the PWL approximation function, refines the PWL function formulas, and proposes self-optimal (SO)-PWL based PF constraints for EDS optimization that ensure minimum approximation errors. Numerical results demonstrate the effectiveness of the proposed method.
Many separable nonlinear optimization problems can be approximated by replacing their nonlinear objective functions with piecewise linear functions. A natural question arising from this approach is how to break the interval of interest into subintervals (pieces) to achieve a good approximation. We present formulations that optimize the locations of the knots. We apply a sequential quadratic programming method and a spectral projected gradient method to solve the resulting problem, and we report numerical experiments showing the effectiveness of the proposed approaches.
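A toy version of the knot-placement question can be sketched in a few lines: place the single interior knot of a two-piece linear interpolant of exp on [0, 2] so as to minimize a sampled squared error. The function, interval, sampling grid, and one-dimensional solver here are illustrative stand-ins, not the paper's SQP or spectral projected gradient formulations over many knots.

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = np.exp
xs = np.linspace(0.0, 2.0, 401)  # sample grid for measuring the error

def sq_error(t):
    """Sampled squared error of the 2-piece linear interpolant
    of f with breakpoints {0, t, 2}."""
    knots = np.array([0.0, t, 2.0])
    return float(np.sum((np.interp(xs, knots, f(knots)) - f(xs)) ** 2))

# One interior knot makes this a bounded 1-D search; many knots would
# need the ordering constraints the paper's formulations handle.
res = minimize_scalar(sq_error, bounds=(0.05, 1.95), method="bounded")
t_opt = res.x
# exp curves more steeply to the right, so the optimal knot sits to the
# right of the naive midpoint t = 1.0, shortening the high-curvature piece.
```

The comment captures the general heuristic the optimization formalizes: pieces should be shorter where the curvature of the approximated function is larger.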
In this paper, we formulate the Load Flow (LF) problem in radial electricity distribution networks as an unconstrained Riemannian optimization problem, consisting of two manifolds, and we consider alternative retractions and initialization options. Our contribution is a novel LF solution method, which we show belongs to the family of Riemannian approximate Newton methods, guaranteeing monotonic descent and a local superlinear convergence rate. To the best of our knowledge, this is the first exact LF solution method employing Riemannian optimization. Extensive numerical comparisons on several test networks illustrate that the proposed method outperforms other Riemannian optimization methods (Gradient Descent, Newton's), and achieves performance comparable to the traditional Newton-Raphson method, besting it with a guarantee of convergence. We also consider an approximate LF solution obtained from the first iteration of the proposed method, and we show that it significantly outperforms other approximants in the LF literature. Lastly, we derive an interesting comparison with the well-known Backward-Forward Sweep method.
Motivated by a growing list of nontraditional statistical estimation problems of the piecewise kind, this paper provides a survey of known results, supplemented with new results, for the class of piecewise linear-quadratic programs. These are linearly constrained optimization problems with piecewise linear-quadratic (PLQ) objective functions. Starting from a study of the representation of such a function in terms of a family of elementary functions consisting of squared affine functions, squared plus-composite-affine functions, and affine functions themselves, we summarize some local properties of a PLQ function in terms of its first- and second-order directional derivatives. We extend some well-known necessary and sufficient second-order conditions for local optimality of a quadratic program to a PLQ program and provide a dozen such equivalent conditions for strong, strict, and isolated local optimality, showing in particular that a PLQ program has the same characterizations for local minimality as a standard quadratic program. As a consequence of one such condition, we show that the number of strong, strict, or isolated local minima of a PLQ program is finite; this result supplements a recent result about the finite number of directional stationary objective values. Interestingly, these finiteness results can be uncovered by invoking a very powerful property of subanalytic functions; our proof is fairly elementary, however. We discuss applications of PLQ programs in some modern statistical estimation problems. These problems lead to a special class of unconstrained composite programs involving the non-differentiable $\ell_1$-function, for which we show that the task of verifying the second-order stationarity condition can be converted into the problem of checking the copositivity of a certain Schur complement on the nonnegative orthant.
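For a concrete, familiar one-dimensional instance of the elementary-function representation (chosen here as an illustration, not taken from the paper): the Huber loss is piecewise linear-quadratic, with a squared affine piece near the origin and affine pieces in the tails.

```python
def huber(x, delta=1.0):
    """Huber loss: 0.5 * x**2 on [-delta, delta] (a squared affine piece),
    delta * (|x| - 0.5 * delta) outside (affine pieces), joined so that
    values and slopes match at x = +/- delta."""
    if abs(x) <= delta:
        return 0.5 * x * x
    return delta * (abs(x) - 0.5 * delta)
```

At x = ±delta the pieces meet with matching value and slope, so the function is C^1 but not C^2; this limited smoothness is why first- and second-order directional derivatives, rather than classical Hessians, are the natural local tools for PLQ programs.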
Multi-step temporal-difference (TD) learning, where the update targets contain information from multiple time steps ahead, is one of the most popular forms of TD learning for linear function approximation. The reason is that multi-step methods often yield substantially better performance than their single-step counterparts, due to a lower bias of the update targets. For non-linear function approximation, however, single-step methods appear to be the norm. Part of the reason could be that on many domains the popular multi-step methods TD($\lambda$) and Sarsa($\lambda$) do not perform well when combined with non-linear function approximation. In particular, they are very susceptible to divergence of value estimates. In this paper, we identify the reason behind this. Furthermore, based on our analysis, we propose a new multi-step TD method for non-linear function approximation that addresses this issue. We confirm the effectiveness of our method using two benchmark tasks with neural networks as function approximators.
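As background for what "multi-step update targets" means, the sketch below computes a generic n-step TD target, $G_t^{(n)} = r_{t+1} + \gamma r_{t+2} + \dots + \gamma^{n-1} r_{t+n} + \gamma^n V(s_{t+n})$; this is the standard construction, not the new method the paper proposes, and the reward values below are made up for illustration.

```python
def n_step_target(rewards, bootstrap_value, gamma):
    """n-step TD target for state-value estimation.

    rewards: the n rewards observed after state s_t;
    bootstrap_value: V(s_{t+n}) from the function approximator.
    Accumulates backwards so each reward is discounted correctly.
    """
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Example: a 3-step target with gamma = 0.9 and a bootstrap value of 5.0.
target = n_step_target([1.0, 0.0, 2.0], bootstrap_value=5.0, gamma=0.9)
```

Larger n pushes the target toward the (low-bias, high-variance) Monte Carlo return, while n = 1 recovers the ordinary single-step TD target; the bias-variance trade-off in between is what makes multi-step targets attractive.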