We revisit the so-called sampling-and-discarding approach used to quantify the probability of violation of a scenario solution when some of the original samples are allowed to be discarded. We propose a scheme that consists of a cascade of optimization problems, where at each step we remove a superset of the active constraints. By relying on results from compression learning theory, we obtain a tighter bound on the probability of violation of the resulting solution than the existing state-of-the-art one. Moreover, we show that the proposed bound is tight by exhibiting a class of optimization problems that achieves it. The improvement of the proposed methodology over a scenario discarding scheme based on a greedy removal strategy is demonstrated by means of an analytic example and a resource sharing linear program.
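To make the cascade idea concrete, here is a minimal Python sketch on a toy one-dimensional scenario program, min x subject to x >= delta_i for sampled delta_i: at each step the binding (active) samples are discarded and the problem is re-solved on the remainder. The toy problem, the number of removal steps, and the helper scenario_solve are illustrative assumptions, not the paper's construction or its bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def scenario_solve(samples):
    """Toy scenario program: min x s.t. x >= delta_i, so the optimum is max(samples)."""
    return samples.max()

# Illustrative cascade: at each step, discard the constraints that are
# active at the current solution, then re-solve on the remaining samples.
samples = rng.normal(size=1000)
n_removal_steps = 10
remaining = samples.copy()
for _ in range(n_removal_steps):
    x_star = scenario_solve(remaining)
    remaining = remaining[remaining < x_star]  # drop the binding (active) samples

x_star = scenario_solve(remaining)
# Empirical violation probability of the relaxed solution on fresh samples.
fresh = rng.normal(size=100_000)
print("solution:", x_star, "empirical violation:", (fresh > x_star).mean())
```

Discarding samples lowers the cost (here, the solution moves left) at the price of a larger violation probability, which is exactly the trade-off the bound quantifies.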
Scenario programs have established themselves as efficient tools towards decision-making under uncertainty. To assess the quality of scenario-based solutions a posteriori, validation tests based on Bernoulli trials have been widely adopted in practice. …
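As an illustration of such an a posteriori test, the sketch below runs N independent Bernoulli trials on fresh samples and turns the observed violation count into a Clopper-Pearson upper confidence bound on the violation probability. The candidate solution x_star, the violation oracle, and the sample distribution are placeholders, not the paper's setup.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

# Candidate scenario solution to validate (toy: violated when the sample exceeds x_star).
x_star = 2.0
violates = lambda delta: delta > x_star  # one Bernoulli trial per fresh sample

# A posteriori test: run N independent Bernoulli trials on fresh samples.
N, confidence = 10_000, 1e-3
k = sum(violates(d) for d in rng.normal(size=N))

# Clopper-Pearson upper bound on the violation probability, valid w.p. >= 1 - confidence.
upper = 1.0 if k == N else beta.ppf(1 - confidence, k + 1, N - k)
print(f"observed {k}/{N} violations; P(violation) <= {upper:.4f} w.p. >= {1 - confidence}")
```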
In this paper, we propose two algorithms for solving convex optimization problems with linear ascending constraints. When the objective function is separable, we propose a dual method which terminates in a finite number of iterations. In particular, …
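For concreteness, a problem in this class can be stated and solved generically, e.g. with cvxpy: a separable convex objective minimized subject to nondecreasing cumulative-sum (ascending) constraints. The sketch below uses made-up data and a generic solver; it does not implement the paper's finite-termination dual method.

```python
import cvxpy as cp
import numpy as np

n = 5
w = np.array([1.0, 2.0, 0.5, 1.5, 3.0])        # weights of a separable objective
alpha = np.array([0.1, 0.25, 0.45, 0.7, 1.0])  # nondecreasing thresholds

x = cp.Variable(n, nonneg=True)
objective = cp.Minimize(cp.sum(cp.multiply(w, cp.square(x))))  # sum_i w_i * x_i^2
constraints = [
    cp.cumsum(x) >= alpha,  # ascending constraints: x_1 + ... + x_l >= alpha_l
    cp.sum(x) == 1.0,       # total budget
]
cp.Problem(objective, constraints).solve()
print("x* =", np.round(x.value, 4))
```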
We show how to efficiently compute the derivative (when it exists) of the solution map of log-log convex programs (LLCPs). These are nonconvex, nonsmooth optimization problems with positive variables that become convex when the variables, objective function, and constraint functions are replaced with their logs. …
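A simple way to see the solution map and its derivative in action is to solve a toy LLCP with cvxpy's gp=True mode (which applies the log-log transformation) and approximate the derivative by central finite differences. The problem below, minimize x + y subject to x*y >= a, is an illustrative choice with known solution x* = sqrt(a); the finite-difference check merely stands in for the paper's analytic derivative.

```python
import cvxpy as cp
import numpy as np

def solve_llcp(a):
    """Toy LLCP: minimize x + y s.t. x*y >= a with x, y > 0 (optimum x = y = sqrt(a))."""
    x, y = cp.Variable(pos=True), cp.Variable(pos=True)
    prob = cp.Problem(cp.Minimize(x + y), [x * y >= a])
    prob.solve(gp=True)  # solved through the log-log transformation
    return x.value

# Central finite-difference approximation of the solution map's derivative a -> x*(a).
a, eps = 4.0, 1e-4
dx_da = (solve_llcp(a + eps) - solve_llcp(a - eps)) / (2 * eps)
print("numerical d x*/d a:", dx_da, " analytic:", 1 / (2 * np.sqrt(a)))
```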
First-order methods (FOMs) have been widely used for solving large-scale problems. A majority of existing works focus on problems without constraints or with simple constraints. Several recent works have studied FOMs for problems with complicated functional constraints. …
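As a baseline instance of such a method, the sketch below runs a plain primal-dual gradient scheme on the Lagrangian of a smooth objective with a single functional constraint: gradient descent in the primal variable, projected gradient ascent in the multiplier. The problem data and step size are arbitrary assumptions, and the update is a generic first-order scheme rather than any specific algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
A, b, r = rng.normal(size=(30, 10)), rng.normal(size=30), 1.0

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)  # smooth objective
grad_f = lambda x: A.T @ (A @ x - b)
g = lambda x: np.sum(x ** 2) - r              # functional constraint g(x) <= 0
grad_g = lambda x: 2 * x

# Primal-dual gradient method on L(x, lam) = f(x) + lam * g(x):
# descend in x, ascend in the multiplier, and keep lam >= 0.
x, lam, eta = np.zeros(10), 0.0, 1e-3
for _ in range(20_000):
    x -= eta * (grad_f(x) + lam * grad_g(x))
    lam = max(0.0, lam + eta * g(x))

print(f"f(x) = {f(x):.4f}, g(x) = {g(x):.2e}, lam = {lam:.4f}")
```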
We study constrained stochastic programs where the decision vector at each time slot cannot be chosen freely but is tied to the realization of an underlying random state vector. The goal is to minimize a general objective function subject to linear constraints. …
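One standard way to handle time-average constraints with state-dependent decision sets is a drift-plus-penalty (virtual queue) update: at each slot, solve a small per-slot subproblem over the realized admissible set, then update a queue that penalizes constraint violation. The sketch below applies this generic recipe to a made-up toy instance (reward omega_t * x_t, admissible set [0, omega_t], budget constraint on the average of x_t); it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
V, budget, T = 20.0, 0.3, 100_000

# Drift-plus-penalty sketch: maximize E[omega_t * x_t] subject to E[x_t] <= budget,
# where the admissible set x_t in [0, omega_t] depends on the realized state omega_t.
Q, total_reward, total_x = 0.0, 0.0, 0.0
for t in range(T):
    omega = rng.uniform()            # realized random state
    # Per-slot subproblem: minimize (Q - V*omega) * x over x in [0, omega] (linear in x).
    x = omega if V * omega > Q else 0.0
    Q = max(0.0, Q + x - budget)     # virtual queue enforcing the average-budget constraint
    total_reward += omega * x
    total_x += x

print(f"avg reward {total_reward/T:.4f}, avg x {total_x/T:.4f} (budget {budget})")
```

The running average of x_t hovers near the budget, which is the virtual queue doing its job of converting the time-average constraint into per-slot decisions.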