
Risk-Averse Stochastic Shortest Path Planning

Posted by Mohamadreza Ahmadi
Publication date: 2021
Paper language: English





We consider the stochastic shortest path planning problem in Markov decision processes (MDPs), i.e., the problem of designing policies that ensure reaching a goal state from a given initial state with minimum accrued cost. In order to account for rare but important realizations of the system, we consider a nested dynamic coherent risk total cost functional rather than the conventional risk-neutral total expected cost. Under some assumptions, we show that optimal, stationary, Markovian policies exist and can be found via a special Bellman's equation. We propose a computational technique based on difference convex programs (DCPs) to find the associated value functions and therefore the risk-averse policies. A rover navigation MDP is used to illustrate the proposed methodology with conditional-value-at-risk (CVaR) and entropic-value-at-risk (EVaR) coherent risk measures.
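As a numerical illustration of the two coherent risk measures named in the abstract, the short Python sketch below estimates CVaR and EVaR from sampled accrued costs. The function names, the sampled cost distribution, and the tail levels are illustrative assumptions, not material from the paper. CVaR at tail level alpha is taken as the mean of the alpha*100% worst costs, and EVaR is computed from its dual form inf_{z>0} (1/z) log(E[exp(zX)]/alpha), which upper-bounds CVaR at the same tail level.

import numpy as np
from scipy.optimize import minimize_scalar

def cvar(costs, alpha):
    # CVaR at tail level alpha: mean of the alpha*100% worst (largest) costs.
    costs = np.sort(np.asarray(costs, dtype=float))
    k = max(1, int(np.ceil(alpha * len(costs))))
    return costs[-k:].mean()

def evar(costs, alpha):
    # EVaR at tail level alpha: inf_{z>0} (1/z) * log(E[exp(z*X)] / alpha).
    costs = np.asarray(costs, dtype=float)
    cmax = costs.max()

    def objective(log_z):
        z = np.exp(log_z)  # optimise over log z to keep z > 0
        # log E[exp(z*X)] computed stably via a shifted log-sum-exp
        log_mgf = np.log(np.mean(np.exp(z * (costs - cmax)))) + z * cmax
        return (log_mgf - np.log(alpha)) / z

    res = minimize_scalar(objective, bounds=(-10.0, 10.0), method="bounded")
    return res.fun

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample_costs = rng.gamma(shape=2.0, scale=5.0, size=10_000)  # heavy-ish tailed costs
    for a in (0.05, 0.2):
        print(f"alpha={a}: mean={sample_costs.mean():.2f}, "
              f"CVaR={cvar(sample_costs, a):.2f}, EVaR={evar(sample_costs, a):.2f}")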




Read also

We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives. Synthesizing risk-averse optimal policies for POMDPs requires infinite memory and is thus undecidable. To overcome this difficulty, we propose a method based on bounded policy iteration for designing stochastic but finite state (memory) controllers, which takes advantage of standard convex optimization methods. Given a memory budget and optimality criterion, the proposed method modifies the stochastic finite state controller leading to sub-optimal solutions with lower coherent risk.
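To make the notion of a stochastic finite-state (memory) controller concrete, here is a minimal, hypothetical Python sketch of how such a controller can be represented and rolled out on a POMDP. The table layout and all names are assumptions for illustration; the bounded-policy-iteration update itself (a convex program over these tables) is not shown.

import numpy as np

rng = np.random.default_rng(1)

# A stochastic finite-state controller (FSC) with memory nodes n:
#   psi[n, a]        -- probability of taking action a in memory node n
#   eta[n, a, o, n2] -- probability of moving to node n2 after action a and observation o
def rollout_fsc(psi, eta, step_fn, horizon, n0=0):
    # Simulate one episode; step_fn(action) returns (observation, cost).
    n, total = n0, 0.0
    for _ in range(horizon):
        a = rng.choice(psi.shape[1], p=psi[n])
        o, cost = step_fn(a)
        total += cost
        n = rng.choice(eta.shape[3], p=eta[n, a, o])
    return total

# Toy POMDP step: action 0 is "safe" (constant cost), action 1 is "risky".
def toy_step(action):
    cost = 1.0 if action == 0 else rng.choice([0.0, 10.0], p=[0.9, 0.1])
    obs = int(cost > 1.0)  # only observe whether a high cost occurred
    return obs, cost

num_nodes, num_actions, num_obs = 2, 2, 2
psi = np.full((num_nodes, num_actions), 0.5)
eta = np.full((num_nodes, num_actions, num_obs, num_nodes), 0.5)
print(np.mean([rollout_fsc(psi, eta, toy_step, horizon=20) for _ in range(1000)]))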
Although ground robotic autonomy has gained widespread usage in structured and controlled environments, autonomy in unknown and off-road terrain remains a difficult problem. Extreme, off-road, and unstructured environments such as undeveloped wilderness, caves, and rubble pose unique and challenging problems for autonomous navigation. To tackle these problems we propose an approach for assessing traversability and planning a safe, feasible, and fast trajectory in real-time. Our approach, which we name STEP (Stochastic Traversability Evaluation and Planning), relies on: 1) rapid uncertainty-aware mapping and traversability evaluation, 2) tail risk assessment using the Conditional Value-at-Risk (CVaR), and 3) efficient risk and constraint-aware kinodynamic motion planning using sequential quadratic programming-based (SQP) model predictive control (MPC). We analyze our method in simulation and validate its efficacy on wheeled and legged robotic platforms exploring extreme terrains including an abandoned subway and an underground lava tube.
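A minimal sketch of the CVaR-based traversability assessment idea, assuming each map cell carries a Gaussian cost estimate (mean and standard deviation) from an uncertainty-aware mapping stage. The closed-form Gaussian CVaR used here is standard, but the names and the grid are illustrative assumptions, not STEP's implementation.

import numpy as np
from scipy.stats import norm

def cell_cvar_cost(mean_cost, std_cost, alpha=0.1):
    # CVaR (mean of the alpha*100% worst cases) of a Gaussian cost:
    #   mu + sigma * pdf(z_{1-alpha}) / alpha
    z = norm.ppf(1.0 - alpha)
    return mean_cost + std_cost * norm.pdf(z) / alpha

# Hypothetical grid of per-cell cost means/std-devs from the mapping stage.
means = np.array([[0.2, 0.5], [1.0, 0.3]])
stds  = np.array([[0.05, 0.4], [0.2, 0.1]])
risk_map = cell_cvar_cost(means, stds, alpha=0.1)  # inflate each cell's cost by its tail risk
print(risk_map)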
We propose a learning-based, distributionally robust model predictive control approach towards the design of adaptive cruise control (ACC) systems. We model the preceding vehicle as an autonomous stochastic system, using a hybrid model with continuous dynamics and discrete, Markovian inputs. We estimate the (unknown) transition probabilities of this model empirically using observed mode transitions and simultaneously determine sets of probability vectors (ambiguity sets) around these estimates, that contain the true transition probabilities with high confidence. We then solve a risk-averse optimal control problem that assumes the worst-case distributions in these sets. We furthermore derive a robust terminal constraint set and use it to establish recursive feasibility of the resulting MPC scheme. We validate the theoretical results and demonstrate desirable properties of the scheme through closed-loop simulations.
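The ambiguity-set construction can be sketched as follows, assuming an L1 ball of Weissman-type radius around the empirical transition estimate and a greedy computation of the worst-case expected cost over that ball. Both choices, and all names below, are illustrative assumptions and may differ from the paper's exact construction.

import numpy as np

def l1_ambiguity_radius(count, num_modes, confidence=0.95):
    # Radius of an L1 ball around the empirical distribution that contains the
    # true distribution with the given confidence (Weissman-style concentration bound).
    return np.sqrt(2.0 / max(count, 1) * np.log((2 ** num_modes - 2) / (1 - confidence)))

def worst_case_expectation(p_hat, cost, radius):
    # max_p  cost @ p   s.t.  ||p - p_hat||_1 <= radius,  p in the simplex.
    # Greedy solution: move up to radius/2 of probability mass onto the
    # highest-cost outcome, taking it from the lowest-cost outcomes first.
    p = p_hat.astype(float).copy()
    order = np.argsort(cost)                       # cheapest outcomes first
    worst = order[-1]
    budget = min(radius / 2.0, 1.0 - p[worst])
    p[worst] += budget
    for i in order:
        take = min(budget, p[i]) if i != worst else 0.0
        p[i] -= take
        budget -= take
        if budget <= 1e-12:
            break
    return float(cost @ p)

p_hat = np.array([0.70, 0.25, 0.05])   # empirical mode-transition estimate
cost  = np.array([0.0, 1.0, 8.0])      # e.g. cost associated with each discrete mode
r = l1_ambiguity_radius(count=50, num_modes=3)
print(p_hat @ cost, worst_case_expectation(p_hat, cost, r))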
Imitation learning algorithms learn viable policies by imitating an expert's behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories. We evaluate in terms of the expert's cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
The standard approach to risk-averse control is to use the Exponential Utility (EU) functional, which has been studied for several decades. Like other risk-averse utility functionals, EU encodes risk aversion through an increasing convex mapping $\varphi$ of objective costs to subjective costs. An objective cost is a realization $y$ of a random variable $Y$. In contrast, a subjective cost is a realization $\varphi(y)$ of a random variable $\varphi(Y)$ that has been transformed to measure preferences about the outcomes. For EU, the transformation is $\varphi(y) = \exp(-\frac{\theta}{2}y)$, and under certain conditions, the quantity $\varphi^{-1}(E(\varphi(Y)))$ can be approximated by a linear combination of the mean and variance of $Y$. More recently, there has been growing interest in risk-averse control using the Conditional Value-at-Risk (CVaR) functional. In contrast to the EU functional, the CVaR of a random variable $Y$ concerns a fraction of its possible realizations. If $Y$ is a continuous random variable with finite $E(|Y|)$, then the CVaR of $Y$ at level $\alpha$ is the expectation of $Y$ in the $\alpha \cdot 100\%$ worst cases. Here, we study the applications of risk-averse functionals to controller synthesis and safety analysis through the development of numerical examples, with emphasis on EU and CVaR. Our contribution is to examine the decision-theoretic, mathematical, and computational trade-offs that arise when using EU and CVaR for optimal control and safety analysis. We are hopeful that this work will advance the interpretability and elucidate the potential benefits of risk-averse control technology.
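The mean-variance approximation of the EU certainty equivalent mentioned above can be written out explicitly; the following second-order expansion in $\theta$ is a standard derivation sketched here for illustration, not text from the paper:

$\varphi^{-1}\bigl(E[\varphi(Y)]\bigr) = -\frac{2}{\theta}\,\log E\Bigl[\exp\Bigl(-\frac{\theta}{2}\,Y\Bigr)\Bigr] \approx -\frac{2}{\theta}\Bigl(-\frac{\theta}{2}\,E[Y] + \frac{\theta^{2}}{8}\,\mathrm{Var}(Y)\Bigr) = E[Y] - \frac{\theta}{4}\,\mathrm{Var}(Y).$

So the certainty-equivalent cost is the mean plus a variance term of weight $-\frac{\theta}{4}$; under the parameterization above, a negative $\theta$ makes $\varphi$ increasing and convex, and the variance is then penalized.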
