
Distributionally Robust Optimal Power Flow with Contextual Information

Added by: Dr. Juan M. Morales
Publication date: 2021
Language: English





In this paper, we develop a distributionally robust chance-constrained formulation of the Optimal Power Flow (OPF) problem whereby the system operator can leverage contextual information. For this purpose, we exploit an ambiguity set based on probability trimmings and optimal transport, through which the dispatch solution is protected against incomplete knowledge of the relationship between the OPF uncertainties and the context, a relationship conveyed only by a sample of their joint probability distribution. We provide an exact reformulation of the proposed distributionally robust chance-constrained OPF problem under the popular conditional value-at-risk (CVaR) approximation. By way of numerical experiments run on a modified IEEE 118-bus network with wind uncertainty, we show how the power system can substantially benefit from accounting for the well-known statistical dependence between the point forecast of wind power output and its associated prediction error. Furthermore, the experiments also reveal that the distributional robustness conferred on the OPF solution by our probability-trimmings-based approach is superior, in terms of expected cost and system reliability, to that bestowed by alternative approaches.
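To make the role of the CVaR approximation concrete, below is a minimal sketch (not the paper's reformulation) of a stylized DC-OPF with one wind injection, where a single line-flow chance constraint is replaced by its sample-based CVaR surrogate via the Rockafellar-Uryasev reformulation. All data and names in it (cost, g_max, ptdf_gen, w_samples, the affine redispatch through alpha, and so on) are illustrative assumptions; the formulation proposed in the paper would additionally take the worst case of this constraint over the trimmings-based, context-conditioned ambiguity set rather than over the plain empirical sample.

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_gen, n_samples, eps = 3, 200, 0.05            # generators, wind scenarios, allowed violation level
cost = np.array([10.0, 15.0, 30.0])             # linear generation costs (illustrative)
g_max = np.array([2.0, 1.5, 1.0])               # generator capacities
demand, f_max = 3.0, 1.2                        # total load and one monitored line limit
ptdf_gen = np.array([0.3, -0.2, 0.1])           # line sensitivities to generator injections
ptdf_wind = 0.4                                 # line sensitivity to the wind injection
w_samples = rng.normal(0.0, 0.3, n_samples)     # empirical sample of wind forecast errors

g = cp.Variable(n_gen)      # scheduled generation
alpha = cp.Variable(n_gen)  # participation factors absorbing the wind error
tau = cp.Variable()         # CVaR auxiliary variable

def line_flow(w):
    # Line flow when the wind error is w and generators redispatch by -alpha * w.
    return ptdf_gen @ (g - alpha * w) + ptdf_wind * w

flows = cp.hstack([line_flow(w) for w in w_samples])
constraints = [
    cp.sum(g) == demand,       # power balance at the forecast
    cp.sum(alpha) == 1,        # wind error fully absorbed by the redispatch
    g >= 0, g <= g_max, alpha >= 0,
    # Sample-based CVaR_{1-eps}(flow - f_max) <= 0, which conservatively
    # approximates P[flow <= f_max] >= 1 - eps.
    tau + cp.sum(cp.pos(flows - f_max - tau)) / (eps * n_samples) <= 0,
]
prob = cp.Problem(cp.Minimize(cost @ g), constraints)
prob.solve()
print("dispatch:", g.value, "participation factors:", alpha.value)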



Related research

Chao Shang, Hao Ye, Dexian Huang (2021)
Probabilistic methods have attracted much interest in fault detection design, but their need for complete distributional knowledge is seldom fulfilled. This has spurred work on distributionally robust fault detection (DRFD) design, which secures robustness against inexact distributions by using moment-based ambiguity sets as the primary modelling tool. However, because the worst-case distribution is implausibly discrete, the resulting design suffers from over-pessimism and can mask the true fault. This paper develops a new DRFD design scheme with reduced conservatism by assuming unimodality of the true distribution, a property commonly encountered in practice. To tackle the chance constraint on false alarms, we first derive a new generalized Gauss bound on the probability outside an ellipsoid, which is less conservative than known Chebyshev bounds. As a result, analytical solutions to DRFD design problems are obtained that are less conservative than known ones that disregard unimodality. We further encode bounded-support information into the ambiguity sets, derive a tightened multivariate Gauss bound, and develop approximate reformulations of the design problems as convex programs. Moreover, the derived generalized Gauss bounds are broadly applicable to a variety of change detection tasks for setting alarm thresholds. Results on a laboratory system show that incorporating unimodality information reduces the conservatism of distributionally robust design and leads to a better tradeoff between robustness and sensitivity.
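For intuition on the conservatism gap this abstract refers to, it helps to recall the classical one-dimensional bounds (the paper's contribution is their generalization to probabilities outside an ellipsoid, which is not reproduced here). Chebyshev's inequality gives $P(|X-\mu| \ge k) \le \sigma^2/k^2$ for any distribution with mean $\mu$ and variance $\sigma^2$, whereas the Gauss inequality gives $P(|X-\nu| \ge k) \le \frac{4\tau^2}{9k^2}$ for $k \ge 2\tau/\sqrt{3}$ whenever $X$ is in addition unimodal with mode $\nu$ and $\tau^2 = \mathbb{E}[(X-\nu)^2]$. Unimodality thus tightens the tail bound by a factor of $4/9$, which is the kind of reduction in conservatism that the multivariate bounds above exploit when setting alarm thresholds.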
We consider stochastic programs conditional on some covariate information, where the only knowledge of the possible relationship between the uncertain parameters and the covariates is a finite data sample of their joint distribution. By exploiting the close link between the notion of trimmings of a probability measure and the partial mass transportation problem, we construct a data-driven Distributionally Robust Optimization (DRO) framework to hedge the decision against the intrinsic error in the process of inferring conditional information from limited joint data. We show that our approach is computationally as tractable as the standard Wasserstein-metric-based DRO (without side information) and enjoys performance guarantees. Furthermore, our DRO framework can be conveniently used to address data-driven decision-making problems under contaminated samples and naturally produces distributionally robust…
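For reference, a common definition from the partial mass transportation literature that underlies this construction (stated here for intuition only; the exact ambiguity set in the paper is built on top of it): given a trimming level $\alpha \in [0,1)$, an $\alpha$-trimming of a probability measure $P$ is any probability measure $Q$ that is absolutely continuous with respect to $P$ with density bounded as $\frac{dQ}{dP} \le \frac{1}{1-\alpha}$ $P$-almost surely. Intuitively, $Q$ is what remains of $P$ after discarding at most a fraction $\alpha$ of its mass and renormalizing.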
Feiran Zhao, Keyou You (2020)
Optimal control of a stochastic dynamical system usually requires a good dynamical model with probability distributions, which is difficult to obtain due to limited measurements and/or complicated dynamics. To address this, this work proposes a data-driven distributionally robust control framework with the Wasserstein metric via a constrained two-player zero-sum Markov game, in which the adversarial player selects the probability distribution from a Wasserstein ball centered at an empirical distribution. The game is then approached through its penalized version, whose optimal stabilizing solution is derived explicitly in a linear structure via Riccati-type iterations. Moreover, we design a model-free Q-learning algorithm with global convergence to learn the optimal controller. Finally, we verify the effectiveness of the proposed learning algorithm and demonstrate its robustness to probability distribution errors via numerical examples.
This paper studies distributionally robust optimization (DRO) in which the ambiguity set is specified by moment conditions on the distributions and the objective and constraints are polynomials in the decision variables. We reformulate the DRO with equivalent moment conic constraints. Under some general assumptions, we prove that the DRO is equivalent to a linear optimization problem over moment and positive semidefinite (PSD) polynomial cones. A moment-SOS relaxation method is proposed to solve it, and its asymptotic and finite convergence are shown under certain assumptions. Numerical examples are presented to show how to solve DRO problems.
In prescriptive analytics, the decision-maker observes historical samples of $(X, Y)$, where $Y$ is the uncertain problem parameter and $X$ is the concurrent covariate, without knowing their joint distribution. Given an additional covariate observation $x$, the goal is to choose a decision $z$ conditional on this observation so as to minimize the cost $\mathbb{E}[c(z,Y) \mid X=x]$. This paper proposes a new distributionally robust approach under Wasserstein ambiguity sets, in which the nominal distribution of $Y \mid X=x$ is constructed from the Nadaraya-Watson kernel estimator applied to the historical data. We show that the nominal distribution converges to the true conditional distribution under the Wasserstein distance. We establish out-of-sample guarantees and the computational tractability of the framework. Through synthetic and empirical experiments on the newsvendor problem and portfolio optimization, we demonstrate the strong performance and practical value of the proposed framework.
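To make the construction of the nominal conditional distribution concrete, the following minimal sketch computes Gaussian-kernel Nadaraya-Watson weights and the resulting kernel-weighted conditional mean. The kernel choice, bandwidth, and all variable names are illustrative assumptions rather than the paper's specification, and in the DRO framework the weighted empirical distribution over the historical outcomes would serve as the center of the Wasserstein ambiguity set rather than being used directly.

import numpy as np

def nadaraya_watson_weights(X_hist, x, bandwidth=0.5):
    # Gaussian-kernel Nadaraya-Watson weights: each historical pair (X_i, Y_i)
    # receives weight proportional to exp(-||X_i - x||^2 / (2 h^2)),
    # normalized to sum to one.
    sq_dist = np.sum((X_hist - x) ** 2, axis=1)
    kernel = np.exp(-0.5 * sq_dist / bandwidth ** 2)
    return kernel / kernel.sum()

rng = np.random.default_rng(1)
X_hist = rng.normal(size=(100, 2))                                    # historical covariates
Y_hist = X_hist @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=100)  # historical outcomes
w = nadaraya_watson_weights(X_hist, x=np.array([0.3, -0.2]))
# The weighted empirical distribution sum_i w_i * delta_{Y_i} is the nominal
# estimate of Y | X = x; here we just report its mean.
print("kernel-weighted conditional mean:", w @ Y_hist)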