
Discrete Approximation of Two-Stage Stochastic and Distributionally Robust Linear Complementarity Problems

Published by: Hailin Sun Dr
Publication date: 2017
Paper language: English





In this paper, we propose a discretization scheme for the two-stage stochastic linear complementarity problem (LCP) where the underlying random data are continuously distributed. Under some moderate conditions, we derive qualitative and quantitative convergence for the solutions obtained from solving the discretized two-stage stochastic LCP (SLCP). We explain how the discretized two-stage SLCP may be solved by the well-known progressive hedging method (PHM). Moreover, we extend the discussion by considering a two-stage distributionally robust LCP (DRLCP) with moment constraints and proposing a discretization scheme for the DRLCP. As an application, we show how the SLCP and DRLCP models can be used to study equilibria arising from a two-stage duopoly game where each player plans to set up its optimal capacity at present with anticipated competition for production in the future.
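For orientation, the two-stage SLCP studied in this line of work is usually written in the following generic form (the notation below is an illustrative sketch and may differ from the paper's): find a first-stage vector $x$ and, for almost every realization $\xi$ of the random data, a second-stage vector $y(\xi)$ such that
$$0 \le x \perp A x + \mathbb{E}\big[B(\xi)\, y(\xi)\big] + q_1 \ge 0, \qquad 0 \le y(\xi) \perp N(\xi)\, x + M(\xi)\, y(\xi) + q_2(\xi) \ge 0 \ \text{a.e.}$$
A discretization scheme replaces the continuous distribution of $\xi$ by finitely many points $\xi^1,\dots,\xi^K$ with probabilities $p_1,\dots,p_K$, which turns the problem into the finite-dimensional LCP
$$0 \le x \perp A x + \sum_{k=1}^{K} p_k\, B(\xi^k)\, y^k + q_1 \ge 0, \qquad 0 \le y^k \perp N(\xi^k)\, x + M(\xi^k)\, y^k + q_2(\xi^k) \ge 0, \quad k = 1,\dots,K,$$
whose scenario-separable structure is what makes decomposition methods such as the progressive hedging method applicable.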


Read also

Adaptive robust optimization problems are usually solved approximately by restricting the adaptive decisions to simple parametric decision rules. However, the corresponding approximation error can be substantial. In this paper we show that two-stage robust and distributionally robust linear programs can often be reformulated exactly as conic programs that scale polynomially with the problem dimensions. Specifically, when the ambiguity set constitutes a 2-Wasserstein ball centered at a discrete distribution, then the distributionally robust linear program is equivalent to a copositive program (if the problem has complete recourse) or can be approximated arbitrarily closely by a sequence of copositive programs (if the problem has sufficiently expensive recourse). These results directly extend to the classical robust setting and motivate strong tractable approximations of two-stage problems based on semidefinite approximations of the copositive cone. We also demonstrate that the two-stage distributionally robust optimization problem is equivalent to a tractable linear program when the ambiguity set constitutes a 1-Wasserstein ball centered at a discrete distribution and there are no support constraints.
In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic approximation (DSA) algorithm, for solving these types of stochastic optimization problems. We show that DSA can achieve an optimal $\mathcal{O}(1/\epsilon^4)$ rate of convergence in terms of the total number of required scenarios when applied to a three-stage stochastic optimization problem. We further show that this rate of convergence can be improved to $\mathcal{O}(1/\epsilon^2)$ when the objective function is strongly convex. We also discuss variants of DSA for solving more general multi-stage stochastic optimization problems with the number of stages $T > 3$. The developed DSA algorithms only need to go through the scenario tree once in order to compute an $\epsilon$-solution of the multi-stage stochastic optimization problem. As a result, the memory required by DSA only grows linearly with respect to the number of stages. To the best of our knowledge, this is the first time that stochastic approximation type methods are generalized for multi-stage stochastic optimization with $T \ge 3$.
Square-root (loss) regularized models have recently become popular in linear regression due to their nice statistical properties. Moreover, some of these models can be interpreted as the distributionally robust optimization counterparts of the traditional least-squares regularized models. In this paper, we give a unified proof to show that any square-root regularized model whose penalty function is the sum of a simple norm and a seminorm can be interpreted as the distributionally robust optimization (DRO) formulation of the corresponding least-squares problem. In particular, the optimal transport cost in the DRO formulation is given by a certain dual form of the penalty. To solve the resulting square-root regularized model whose loss function and penalty function are both nonsmooth, we design a proximal point dual semismooth Newton algorithm and demonstrate its efficiency when the penalty is the sparse group Lasso penalty or the fused Lasso penalty. Extensive experiments demonstrate that our algorithm is highly efficient for solving the square-root sparse group Lasso problems and the square-root fused Lasso problems.
The main focus of this paper is radius-based (supplier) clustering in the two-stage stochastic setting with recourse, where the inherent stochasticity of the model comes in the form of a budget constraint. We also explore a number of variants where additional constraints are imposed on the first-stage decisions, specifically matroid and multi-knapsack constraints. Our eventual goal is to provide results for supplier problems in the most general distributional setting, where there is only black-box access to the underlying distribution. To that end, we follow a two-step approach. First, we develop algorithms for a restricted version of each problem, in which all possible scenarios are explicitly provided; second, we employ a novel scenario-discarding variant of the standard Sample Average Approximation (SAA) method, in which we crucially exploit properties of the restricted-case algorithms. We finally note that the scenario-discarding modification to the SAA method is necessary in order to optimize over the radius.
We study safe, data-driven control of (Markov) jump linear systems with unknown transition probabilities, where both the discrete mode and the continuous state are to be inferred from output measurements. To this end, we develop a receding horizon estimator which uniquely identifies a sub-sequence of past mode transitions and the corresponding continuous state, allowing for arbitrary switching behavior. Unlike traditional approaches to mode estimation, we do not require an offline exhaustive search over mode sequences to determine the size of the observation window, but rather select it online. If the system is weakly mode observable, the window size will be upper bounded, leading to a finite-memory observer. We integrate the estimation procedure with a simple distributionally robust controller, which hedges against misestimations of the transition probabilities due to finite sample sizes. As additional mode transitions are observed, the used ambiguity sets are updated, resulting in continual improvements of the control performance. The practical applicability of the approach is illustrated on small numerical examples.