
Risk-averse risk-constrained optimal control

Posted by Mathijs Schuurmans
Publication date: 2019
Paper language: English





Multistage risk-averse optimal control problems with nested conditional risk mappings are gaining popularity in various application domains. Risk-averse formulations interpolate between the classical expectation-based stochastic and minimax optimal control. This way, risk-averse problems aim at hedging against extreme low-probability events without being overly conservative. At the same time, risk-based constraints may be employed either as surrogates for chance (probabilistic) constraints or as a robustification of expectation-based constraints. Such multistage problems, however, have been identified as particularly hard to solve. We propose a decomposition method for such nested problems that allows us to solve them via efficient numerical optimization methods. In addition, we propose a new form of risk constraints which accounts for the propagation of uncertainty in time.
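As a concrete illustration of the nested structure, the sketch below evaluates a two-stage nested risk measure on a finite scenario tree, using CVaR as the conditional risk mapping at every stage. Both choices (CVaR and the tiny tree), as well as all numbers, are assumptions made for illustration; the paper addresses general conditional risk mappings and the harder problem of optimizing, not merely evaluating, such objectives.

```python
import numpy as np

def cvar(costs, probs, alpha):
    # CVaR_alpha(Z) = min_t { t + E[(Z - t)_+] / alpha }  (Rockafellar-Uryasev);
    # for a discrete distribution the minimum is attained at one of the atoms.
    return min(t + np.dot(probs, np.maximum(costs - t, 0.0)) / alpha
               for t in costs)

# Two-stage scenario tree (all values illustrative).
stage1_costs = np.array([1.0, 4.0])           # stage-1 cost at each node
stage1_probs = np.array([0.7, 0.3])           # stage-1 branching probabilities
stage2 = [                                    # (costs, probs) of each node's children
    (np.array([0.0, 10.0]), np.array([0.9, 0.1])),
    (np.array([2.0, 3.0]),  np.array([0.5, 0.5])),
]

alpha = 0.2
# Nested evaluation: apply the conditional risk mapping at the leaves first,
# then compose with the stage-1 mapping applied to the risk-adjusted cost-to-go.
cost_to_go = np.array([c + cvar(z, p, alpha)
                       for c, (z, p) in zip(stage1_costs, stage2)])
print(cvar(cost_to_go, stage1_probs, alpha))  # nested risk of the whole tree
```

Replacing cvar with a plain expectation (np.dot(probs, costs)) recovers the expectation-based problem, while letting alpha tend to 0 approaches the minimax evaluation; this is the interpolation the abstract refers to.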



Read also

Consider a multi-agent network composed of risk-averse social sensors and a controller that jointly seek to estimate an unknown state of nature, given noisy measurements. The network of social sensors performs Bayesian social learning - each sensor fuses the information revealed by previous social sensors along with its private valuation using Bayes rule - to optimize a local cost function. The controller sequentially modifies the cost function of the sensors by discriminatory pricing (control inputs) to realize long-term global objectives. We formulate the stochastic control problem faced by the controller as a Partially Observed Markov Decision Process (POMDP) and derive structural results for the optimal control policy as a function of the risk-aversion factor in the Conditional Value-at-Risk (CVaR) cost function of the sensors. We show that the optimal price sequence when the sensors are risk-averse is a super-martingale; i.e., it decreases on average over time.
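For readers unfamiliar with the social-learning step mentioned above, here is a minimal sketch of the Bayesian fusion each sensor performs, on a two-state example; the signal accuracies and prior are made up for illustration, and the paper's sensors additionally filter this update through a CVaR cost and the controller's prices.

```python
import numpy as np

def bayes_update(belief, likelihoods, obs):
    # Fuse the public belief (prior over states) with one private
    # observation via Bayes rule, then renormalize.
    posterior = belief * likelihoods[:, obs]
    return posterior / posterior.sum()

# Two states of nature; binary private signals with 80% accuracy
# (all numbers illustrative).
likelihoods = np.array([[0.8, 0.2],   # P(obs | state 0)
                        [0.2, 0.8]])  # P(obs | state 1)
belief = np.array([0.5, 0.5])         # public prior
for obs in [0, 0, 1, 0]:              # signals revealed by successive sensors
    belief = bayes_update(belief, likelihoods, obs)
print(belief)                         # belief concentrates on state 0
```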
The multi-armed bandit (MAB) is a classical online optimization model for the trade-off between exploration and exploitation. The traditional MAB is concerned with finding the arm that minimizes the mean cost. However, minimizing the mean does not take the risk of the problem into account. We now want to accommodate risk-averse decision makers. In this work, we introduce a coherent risk measure as the criterion to form a risk-averse MAB. In particular, we derive an index-based online sampling framework for the risk-averse MAB. We develop this framework in detail for three specific risk measures, i.e., the conditional value-at-risk, the mean-deviation and the shortfall risk measures. Under each risk measure, the convergence rate for the upper bound on the pseudo regret, defined as the difference between the expectation of the empirical risk based on the observation sequence and the true risk of the optimal arm, is established.
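A minimal sketch of what an index-based, risk-averse sampling rule can look like under the CVaR criterion is given below. The empirical-CVaR index with a UCB-style exploration bonus is an illustrative stand-in, not the paper's exact index or its regret-optimal constants.

```python
import numpy as np

def empirical_cvar(samples, alpha):
    # Empirical CVaR_alpha of observed costs: mean of the worst alpha-fraction.
    s = np.sort(samples)[::-1]              # largest costs first
    k = max(1, int(np.ceil(alpha * len(s))))
    return s[:k].mean()

def risk_averse_bandit(arms, horizon, alpha=0.1):
    # Pull the arm minimizing empirical CVaR minus an exploration bonus
    # (a lower confidence bound on the arm's risk).
    history = [[arm()] for arm in arms]     # one forced pull per arm
    for t in range(len(arms), horizon):
        index = [empirical_cvar(np.array(h), alpha)
                 - np.sqrt(2.0 * np.log(t + 1) / len(h)) for h in history]
        a = int(np.argmin(index))
        history[a].append(arms[a]())
    return [len(h) for h in history]

rng = np.random.default_rng(1)
arms = [lambda: rng.normal(1.0, 0.1),       # same mean cost, light tail
        lambda: rng.normal(1.0, 2.0)]       # same mean cost, heavy tail
print(risk_averse_bandit(arms, horizon=2000))   # pull counts per arm
```

With equal means, a mean-minimizing bandit would treat both arms alike; the CVaR index steers almost all pulls toward the light-tailed arm.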
We study a risk-averse optimal control problem with a finite-horizon Borel model, where the cost is assessed via exponential utility. The setting permits non-linear dynamics, non-quadratic costs, and continuous spaces but is less general than the problem of optimizing an expected utility. Our contribution is to show the existence of an optimal risk-averse controller through the use of measure-theoretic first principles.
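The exponential-utility assessment mentioned here is the entropic risk measure; a short numerical sketch (with made-up sample costs) shows how it penalizes cost variability relative to the plain expectation. The paper's contribution is the measure-theoretic existence argument, not this computation.

```python
import numpy as np

def entropic_risk(costs, theta):
    # Exponential-utility (entropic) risk: (1/theta) * log E[exp(theta * Z)].
    # theta > 0 encodes risk aversion; theta -> 0 recovers the mean.
    c = np.asarray(costs, dtype=float)
    m = c.max()                          # shift for numerical stability
    return m + np.log(np.mean(np.exp(theta * (c - m)))) / theta

rng = np.random.default_rng(0)
costs = rng.normal(1.0, 1.0, size=100_000)   # sampled costs (illustrative)
print(costs.mean())                  # ~1.0  (risk-neutral assessment)
print(entropic_risk(costs, 0.5))     # ~1.25 (= mean + theta*var/2 for Gaussians)
```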
We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives. Synthesizing risk-averse optimal policies for POMDPs requires infinite memory and is thus undecidable. To overcome this difficulty, we propose a method based on bounded policy iteration for designing stochastic but finite-state (memory) controllers, which takes advantage of standard convex optimization methods. Given a memory budget and optimality criterion, the proposed method modifies the stochastic finite-state controller, leading to sub-optimal solutions with lower coherent risk.
The term rational has become synonymous with maximizing expected payoff in the definition of the best response in the Nash setting. In this work, we consider stochastic games in which players engage only once, or at most a limited number of times. In such games, it may not be rational for players to maximize their expected payoff as they cannot wait for the Law of Large Numbers to take effect. We instead define a new notion of a risk-averse best response, which results in a risk-averse equilibrium (RAE) in which players choose to play the strategy that maximizes the probability of being rewarded the most in a single round of the game, rather than maximizing the expected received reward, subject to the actions of other players. We prove that the risk-averse equilibrium exists in all finite games and numerically compare its performance to the Nash equilibrium in finite-time stochastic games.
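To make the notion concrete, the sketch below estimates by Monte Carlo each action's probability of paying the most in a single round and picks the maximizer. This is one plausible reading of the risk-averse best response, with a made-up payoff interface (draw) and lottery; it is not the paper's formal definition.

```python
import numpy as np

def risk_averse_best_response(draw, n_own, opponent_mix, n_mc=20_000, seed=0):
    # Estimate, per own action, the probability of yielding the round's
    # highest payoff against a fixed mixed opponent; return the maximizer.
    rng = np.random.default_rng(seed)
    wins = np.zeros(n_own)
    for _ in range(n_mc):
        j = rng.choice(len(opponent_mix), p=opponent_mix)  # opponent's move
        payoffs = np.array([draw(i, j, rng) for i in range(n_own)])
        wins[np.isclose(payoffs, payoffs.max())] += 1
    return int(np.argmax(wins)), wins / n_mc

def draw(i, j, rng):                  # payoffs illustrative; j unused here
    if i == 0:                        # lottery: mean 2.0, but usually pays 0
        return 10.0 if rng.random() < 0.2 else 0.0
    return 1.0                        # safe action: always pays 1.0

best, probs = risk_averse_best_response(draw, n_own=2, opponent_mix=[0.5, 0.5])
print(best, probs)                    # picks the safe action (~0.2 vs ~0.8 wins)
```

An expectation-maximizing player would choose the lottery (mean 2.0); the one-shot, risk-averse player prefers the action that most often comes out on top.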