
A Distributionally Robust Self-Scheduling Under Price Uncertainty Based on CVaR

 Added by Linfeng Yang
Publication date: 2021
Language: English





To ensure a successful bid while maximizing profits, generation companies (GENCOs) need a self-scheduling strategy that can cope with a variety of scenarios. Distributionally robust optimization (DRO) is a good choice because it provides an adjustable self-scheduling strategy for GENCOs in an uncertain environment, one that balances robustness and economy better than strategies derived from robust optimization (RO) or stochastic programming (SP). In this paper, a novel moment-based DRO model with conditional value-at-risk (CVaR) is proposed to solve the self-scheduling problem under electricity price uncertainty. Such DRO models are usually translated into semi-definite programs (SDPs) for solution; however, solving large-scale SDPs requires substantial computational time and resources. To address this shortcoming, two effective approximate models are proposed: one based on vector splitting and the other based on the alternating direction method of multipliers (ADMM). Both greatly reduce the computation time and resources required, and the ADMM-based model needs only the information of the current area at each step of the solution, so information privacy is guaranteed. Simulations on three IEEE test systems demonstrate the correctness and effectiveness of the proposed DRO model and the two approximate models.
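As a rough illustration of the risk measure involved, the sketch below solves a scenario-based CVaR self-scheduling problem for a single price-taking generator with cvxpy. It uses the Rockafellar-Uryasev CVaR construction over sampled price scenarios rather than the paper's moment-based ambiguity set, SDP reformulation, or ADMM decomposition; all data and names (prices, p_max, cost, alpha, beta) are illustrative assumptions.

```python
# Minimal, scenario-based sketch of CVaR-aware self-scheduling for one generator.
# Not the paper's moment-based DRO/SDP model; all numbers are assumed.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
T, S = 24, 100                                   # hours, price scenarios (assumed)
prices = 30 + 10 * rng.standard_normal((S, T))   # $/MWh price scenarios (assumed)

p = cp.Variable(T, nonneg=True)                  # hourly dispatch (MW)
p_max, cost = 100.0, 20.0                        # capacity, marginal cost (assumed)

profit = prices @ p - cost * cp.sum(p)           # per-scenario profit, shape (S,)
alpha, beta = 0.95, 0.5                          # CVaR level, risk weight (assumed)

# Rockafellar-Uryasev CVaR of the loss (-profit):
eta = cp.Variable()                              # VaR auxiliary variable
cvar = eta + cp.sum(cp.pos(-profit - eta)) / ((1 - alpha) * S)

objective = cp.Maximize(cp.sum(profit) / S - beta * cvar)
prob = cp.Problem(objective, [p <= p_max])
prob.solve()
print("dispatch:", np.round(p.value, 1))
```

In the paper's setting, the expectation and CVaR are instead evaluated against the worst-case distribution in a moment-based ambiguity set, which is what leads to the SDP reformulation and, in turn, motivates the vector-splitting and ADMM approximations.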



Related research

Henry Lam, Fengpei Li (2019)
We consider optimization problems with uncertain constraints that need to be satisfied probabilistically. When data are available, a common method to obtain feasible solutions for such problems is to impose sampled constraints, following the so-called scenario optimization approach. However, when the data size is small, the sampled constraints may not statistically support a feasibility guarantee on the obtained solution. This paper studies how to leverage parametric information and the power of Monte Carlo simulation to obtain feasible solutions for small-data situations. Our approach makes use of a distributionally robust optimization (DRO) formulation that translates the data size requirement into a Monte Carlo sample size requirement drawn from what we call a generating distribution. We show that, while the optimal choice of this generating distribution is the one eliciting the data or the baseline distribution in a nonparametric divergence-based DRO, it is not necessarily so in the parametric case. Correspondingly, we develop procedures to obtain generating distributions that improve upon these basic choices. We support our findings with several numerical examples.
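To make the sampled-constraint idea concrete, here is a minimal scenario-optimization sketch (not the paper's parametric Monte Carlo procedure): a chance constraint P(a^T x <= b) >= 1 - epsilon is replaced by the same constraint imposed at N drawn samples of a. The Gaussian sampling model, dimensions, and cost vector are assumptions for illustration.

```python
# Scenario (sampled-constraint) approximation of a chance-constrained LP.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
d, N = 5, 200                          # dimension, number of sampled constraints
A = rng.normal(1.0, 0.3, size=(N, d))  # sampled uncertain rows a_i (assumed Gaussian)
b = 10.0
c = -np.ones(d)                        # maximize sum(x)  <=>  minimize -sum(x)

x = cp.Variable(d, nonneg=True)
prob = cp.Problem(cp.Minimize(c @ x), [A @ x <= b])  # impose every sampled constraint
prob.solve()
print("scenario solution:", np.round(x.value, 3))
```

The paper's point is that when N is small, such a solution may lack a feasibility guarantee, which motivates drawing additional Monte Carlo samples from a well-chosen generating distribution.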
In this paper, we solve the multiple product price optimization problem under interval uncertainties of the price sensitivity parameters in the demand function. The objective of the price optimization problem is to maximize the overall revenue of the firm where the decision variables are the prices of the products supplied by the firm. We propose an approach that yields optimal solutions under different variations of the estimated price sensitivity parameters. We adopt a robust optimization approach by building a data-driven uncertainty set for the parameters, and then construct a deterministic counterpart for the robust optimization model. The numerical results show that two objectives are fulfilled: the method reflects the uncertainty embedded in parameter estimations, and also an interval is obtained for optimal prices. We also conducted a simulation study to which we compared the results of our approach. The comparisons show that although robust optimization is deemed to be conservative, the results of the proposed approach show little loss compared to those from the simulation.
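A toy version of the robust counterpart idea is sketched below for a linear demand model d_i(p_i) = a_i - b_i p_i in which each price sensitivity b_i is only known to lie in an interval. Because revenue p_i (a_i - b_i p_i) is decreasing in b_i for nonnegative prices, the worst case fixes b_i at its upper bound, so the robust problem reduces to the nominal one with the pessimistic sensitivities. The demand form and all numbers are assumptions; the paper's data-driven uncertainty set is not reproduced.

```python
# Robust pricing under interval uncertainty in price sensitivities (toy example).
import numpy as np
import cvxpy as cp

a = np.array([100.0, 80.0, 60.0])   # demand intercepts (assumed)
b_hi = np.array([2.0, 1.5, 1.2])    # upper bounds on price sensitivities (assumed)

p = cp.Variable(3, nonneg=True)
# Worst-case revenue over b_i in [b_lo_i, b_hi_i] is attained at b_hi for p >= 0.
worst_revenue = a @ p - cp.sum(cp.multiply(b_hi, cp.square(p)))
prob = cp.Problem(cp.Maximize(worst_revenue))
prob.solve()
print("robust prices:", np.round(p.value, 2))   # analytically a_i / (2 * b_hi_i)
```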
We consider stochastic programs conditional on some covariate information, where the only knowledge of the possible relationship between the uncertain parameters and the covariates is reduced to a finite data sample of their joint distribution. By exploiting the close link between the notion of trimmings of a probability measure and the partial mass transportation problem, we construct a data-driven Distributionally Robust Optimization (DRO) framework to hedge the decision against the intrinsic error in the process of inferring conditional information from limited joint data. We show that our approach is computationally as tractable as the standard (without side information) Wasserstein-metric-based DRO and enjoys performance guarantees. Furthermore, our DRO framework can be conveniently used to address data-driven decision-making problems under contaminated samples and naturally produces distributionally robust …
We propose kernel distributionally robust optimization (Kernel DRO) using insights from the robust optimization theory and functional analysis. Our method uses reproducing kernel Hilbert spaces (RKHS) to construct a wide range of convex ambiguity sets, which can be generalized to sets based on integral probability metrics and finite-order moment bounds. This perspective unifies multiple existing robust and stochastic optimization methods. We prove a theorem that generalizes the classical duality in the mathematical problem of moments. Enabled by this theorem, we reformulate the maximization with respect to measures in DRO into the dual program that searches for RKHS functions. Using universal RKHSs, the theorem applies to a broad class of loss functions, lifting common limitations such as polynomial losses and knowledge of the Lipschitz constant. We then establish a connection between DRO and stochastic optimization with expectation constraints. Finally, we propose practical algorithms based on both batch convex solvers and stochastic functional gradient, which apply to general optimization and machine learning tasks.
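The snippet below illustrates only the RKHS ingredient referred to above: the maximum mean discrepancy (MMD), i.e. the RKHS distance between the kernel mean embeddings of two samples, around which kernel-based ambiguity sets can be built. The dual reformulation and the solvers described in the abstract are not reproduced; the Gaussian kernel, bandwidth, and data are assumptions.

```python
# Biased-estimator MMD between two samples under a Gaussian kernel.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd(X, Y, bandwidth=1.0):
    Kxx = gaussian_kernel(X, X, bandwidth).mean()
    Kyy = gaussian_kernel(Y, Y, bandwidth).mean()
    Kxy = gaussian_kernel(X, Y, bandwidth).mean()
    return np.sqrt(max(Kxx + Kyy - 2 * Kxy, 0.0))          # RKHS distance of embeddings

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(200, 2))   # empirical sample (assumed)
Y = rng.normal(0.5, 1.0, size=(200, 2))   # shifted sample (assumed)
print("MMD(X, Y) =", round(mmd(X, Y), 4))
```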
This paper addresses the problem of utility maximization under uncertain parameters. In contrast with the classical approach, where the parameters of the model evolve freely within a given range, we constrain them via a penalty function. We show that this robust optimization process can be interpreted as a two-player zero-sum stochastic differential game. We prove that the value function satisfies the Dynamic Programming Principle and that it is the unique viscosity solution of an associated Hamilton-Jacobi-Bellman-Isaacs equation. We test this robust algorithm on real market data. The results show that robust portfolios generally have higher expected utilities and are more stable under strong market downturns. To solve for the value function, we derive an analytical solution in the logarithmic utility case and obtain accurate numerical approximations in the general case by three methods: finite difference method, Monte Carlo simulation, and Generative Adversarial Networks.
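As a back-of-the-envelope companion to the abstract above, the toy max-min below treats only the log-utility case with a drift that is merely known to lie in an interval (an interval constraint rather than the paper's penalty formulation or its HJBI equation). For a constant risky fraction pi, the expected log-utility growth rate is r + pi(mu - r) - sigma^2 pi^2 / 2, so the robust game reduces to a grid search; all parameter values are assumptions.

```python
# Toy max-min portfolio choice: log utility, drift uncertain within an interval.
import numpy as np

r, sigma = 0.01, 0.2
mu_grid = np.linspace(0.03, 0.09, 61)     # uncertain drift interval (assumed)
pi_grid = np.linspace(0.0, 3.0, 301)      # candidate risky-asset fractions

def growth(pi, mu):
    # Expected log-utility growth rate for a constant-fraction strategy.
    return r + pi * (mu - r) - 0.5 * sigma ** 2 * pi ** 2

worst = np.array([growth(pi, mu_grid).min() for pi in pi_grid])  # adversary picks mu
best = pi_grid[worst.argmax()]                                    # investor picks pi
print(f"robust fraction pi* = {best:.2f}, "
      f"nominal Merton pi = {(mu_grid.mean() - r) / sigma**2:.2f}")
```

The robust fraction is driven by the worst drift in the interval, which matches the abstract's observation that robust portfolios are more conservative and more stable under market downturns.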