
Counterexample-guided Abstraction Refinement for POMDPs

Added by Xiaobin Zhang
Publication date: 2017
Language: English





Partially observable Markov decision processes (POMDPs) are widely used to model probabilistic behavior in complex systems. Compared with MDPs, a POMDP models a system more accurately, but solving a POMDP generally takes time exponential in the size of its state space. This makes formal verification and synthesis much more challenging for POMDPs, especially when multiple system components are involved. As a promising technique for reducing verification complexity, abstraction seeks an abstract system with a smaller state space that still preserves enough properties for the verification purpose. While abstraction-based verification has been explored extensively for MDPs, in this paper we present the first abstraction and refinement techniques for POMDPs. The main idea follows the counterexample-guided abstraction refinement (CEGAR) framework: starting with a coarse guess for the POMDP abstraction, we iteratively use counterexamples from formal verification to refine the abstraction until the abstract system can be used to infer the verification result for the original POMDP. Our contributions are twofold: 1) we propose a novel abstract system model for POMDPs and a new simulation relation that captures partial observability, and prove preservation of a fragment of Probabilistic Computation Tree Logic (PCTL); 2) to find an abstract system that proves or disproves the satisfaction relation on the concrete POMDP, we develop a novel refinement algorithm. Our work yields a sound and complete CEGAR framework for POMDPs.
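
At a high level, the CEGAR loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: build_initial_abstraction, model_check, is_spurious, and refine are hypothetical placeholders for the paper's abstraction, verification, and refinement steps.

```python
def cegar_pomdp(pomdp, spec, max_iters=100):
    """Counterexample-guided abstraction refinement loop for a POMDP.

    A minimal sketch of the loop described in the abstract; every helper
    called below is a hypothetical placeholder, not the paper's code.
    """
    abstraction = build_initial_abstraction(pomdp)  # coarse first guess
    for _ in range(max_iters):
        holds, counterexample = model_check(abstraction, spec)
        if holds:
            # The abstraction simulates the POMDP, so satisfaction of the
            # preserved PCTL fragment carries over to the concrete model.
            return "satisfied"
        if not is_spurious(counterexample, pomdp):
            # A real counterexample refutes the property on the POMDP itself.
            return "violated"
        # Spurious counterexample: refine the abstraction so that the
        # refined abstract system excludes it, then verify again.
        abstraction = refine(abstraction, counterexample)
    raise RuntimeError("no conclusive abstraction within the iteration budget")
```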



Related research

We study strategy synthesis for partially observable Markov decision processes (POMDPs). The particular problem is to determine strategies that provably adhere to (probabilistic) temporal logic constraints. This problem is computationally intractable and theoretically hard. We propose a novel method that combines techniques from machine learning and formal verification. First, we train a recurrent neural network (RNN) to encode POMDP strategies; the RNN accounts for memory-based decisions without the need to expand the full belief space of the POMDP. Second, we restrict the RNN-based strategy to a finite-memory strategy and implement it on the specific POMDP. For the resulting finite Markov chain, efficient formal verification techniques provide provable guarantees against temporal logic specifications. If the specification is not satisfied, counterexamples supply diagnostic information, which we use to improve the strategy by iteratively retraining the RNN. Numerical experiments show that the proposed method improves the state of the art in POMDP solving by up to three orders of magnitude in terms of solving times and model sizes.
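
The learn-verify-retrain loop in this abstract can be summarised in pseudocode. A minimal sketch, not the authors' implementation; initial_training_data, train_rnn, extract_finite_memory_strategy, induce_markov_chain, verify, and diagnose are hypothetical placeholders for the steps named in the text.

```python
def rnn_strategy_synthesis(pomdp, spec, max_rounds=50):
    """Iterative RNN-based strategy synthesis as described in the abstract.

    All helpers are hypothetical placeholders for the steps in the text.
    """
    data = initial_training_data(pomdp)
    for _ in range(max_rounds):
        rnn = train_rnn(data)                       # encode a POMDP strategy
        fsc = extract_finite_memory_strategy(rnn)   # restrict to finite memory
        chain = induce_markov_chain(pomdp, fsc)     # strategy + POMDP = finite MC
        ok, counterexample = verify(chain, spec)    # standard model checking
        if ok:
            return fsc                              # provably correct strategy
        # Counterexamples supply diagnostic information that is turned
        # into fresh training data for the next round.
        data += diagnose(counterexample, pomdp)
    raise RuntimeError("no satisfying strategy found within the round budget")
```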
We study finite-state controllers (FSCs) for partially observable Markov decision processes (POMDPs) that are provably correct with respect to given specifications. The key insight is that computing (randomised) FSCs on POMDPs is equivalent to, and computationally as hard as, synthesis for parametric Markov chains (pMCs). This correspondence allows us to use parameter-synthesis tools for pMCs to compute correct-by-construction FSCs on POMDPs for a variety of specifications. Our experimental evaluation shows performance comparable to well-known POMDP solvers.
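
The correspondence can be made concrete: a randomised FSC running on a POMDP induces a Markov chain over pairs (POMDP state, memory node), and leaving the FSC's probabilities symbolic turns that chain into a pMC. Below is a minimal sketch using sympy symbols; the dictionary-based model encoding, and the FSC variant whose memory update reads the current observation, are assumptions of this illustration, not the paper's construction.

```python
import itertools
import sympy

def induced_pmc(pomdp_trans, observe, states, actions, memory_nodes):
    """Build the parametric Markov chain induced by an unknown memory-based FSC.

    pomdp_trans maps (s, a, t) to a transition probability and observe maps
    each state to its observation (an encoding assumed for this example).
    The FSC's action choice p[n, z, a] and memory update q[n, z, m] stay
    symbolic, so every pMC transition probability is a polynomial over these
    parameters, ready for off-the-shelf pMC parameter synthesis.
    """
    observations = set(observe.values())
    p = {(n, z, a): sympy.Symbol(f"p_{n}_{z}_{a}")
         for n in memory_nodes for z in observations for a in actions}
    q = {(n, z, m): sympy.Symbol(f"q_{n}_{z}_{m}")
         for n in memory_nodes for z in observations for m in memory_nodes}
    product_states = list(itertools.product(states, memory_nodes))
    pmc = {}
    for s, n in product_states:
        z = observe[s]
        for t, m in product_states:
            pmc[(s, n), (t, m)] = sum(
                p[n, z, a] * pomdp_trans.get((s, a, t), 0) * q[n, z, m]
                for a in actions)
    return pmc
```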
Numerical tools for constraint solving are a cornerstone of control verification, as evidenced by the plethora of research that uses tools like linear and convex programming for the design of control systems. Nevertheless, the capability of linear and convex programming is limited and is not adequate to reason about the general nonlinear polynomial constraints that arise naturally in the design of nonlinear systems. This limitation calls for new solvers that can harness the power of linear and convex programming to reason about general multivariate polynomials. In this paper, we propose PolyAR, a highly parallelizable solver for polynomial inequality constraints. PolyAR provides several key contributions. First, it uses convex relaxations of the problem to accelerate the process of finding a solution to the set of non-convex multivariate polynomials. Second, it uses an iterative convex abstraction refinement process that prunes the search space and identifies regions for which the convex relaxation fails to solve the problem. Third, it allows highly parallelizable use of off-the-shelf solvers to analyze the regions in which the convex relaxation failed to provide solutions. We compare the scalability of PolyAR against Z3 8.9 and Yices 2.6 on control design problems, and demonstrate its performance on designing switching signals for continuous-time linear switching systems.
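
The abstraction-refinement idea can be sketched with interval arithmetic standing in for PolyAR's convex relaxation: bound each polynomial over a box, prune boxes the relaxation already decides, and split the undecided ones for finer (and naturally parallelizable) analysis. This is a minimal illustration under that substitution, not PolyAR's actual relaxation; inconclusive leaf boxes are where an external solver such as Z3 or Yices would be invoked.

```python
def monomial_bounds(exps, box):
    """Interval bounds of prod_i x_i**e_i over a box given as (lo, hi) pairs."""
    lo, hi = 1.0, 1.0
    for (a, b), e in zip(box, exps):
        cands = [a ** e, b ** e]
        if a < 0.0 < b:
            cands.append(0.0)            # even powers dip to 0 inside the box
        vlo, vhi = min(cands), max(cands)
        prods = [lo * vlo, lo * vhi, hi * vlo, hi * vhi]
        lo, hi = min(prods), max(prods)  # interval multiplication
    return lo, hi

def poly_bounds(poly, box):
    """poly is a list of (coefficient, exponent-tuple) monomials."""
    lo = hi = 0.0
    for c, exps in poly:
        mlo, mhi = monomial_bounds(exps, box)
        lo += min(c * mlo, c * mhi)
        hi += max(c * mlo, c * mhi)
    return lo, hi

def solve(constraints, box, depth=20):
    """Search for a point satisfying every polynomial constraint p(x) <= 0."""
    bounds = [poly_bounds(p, box) for p in constraints]
    if any(lo > 0.0 for lo, _ in bounds):
        return None                          # relaxation refutes this region
    if all(hi <= 0.0 for _, hi in bounds):
        return [(a + b) / 2.0 for a, b in box]   # relaxation solves it
    if depth == 0:
        return None            # inconclusive: hand the box to Z3/Yices here
    i = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])
    a, b = box[i]
    mid = (a + b) / 2.0        # split the widest dimension and recurse;
    for half in ([*box[:i], (a, mid), *box[i + 1:]],     # each half could be
                 [*box[:i], (mid, b), *box[i + 1:]]):    # analyzed in parallel
        point = solve(constraints, half, depth - 1)
        if point is not None:
            return point
    return None
```

For example, solve([[(1.0, (2,)), (-1.0, (0,))]], [(-2.0, 2.0)]) searches for an x with x**2 - 1 <= 0 and returns a point such as [-0.5].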
Thomas Ehrhard, 2015
We present a probabilistic version of PCF, a well-known simply typed universal functional language. The type hierarchy is based on a single ground type of natural numbers. Although the language is globally call-by-name, we allow call-by-value evaluation for ground-type arguments in order to provide the language with suitable algorithmic expressiveness. We describe a denotational semantics based on probabilistic coherence spaces, a model of classical Linear Logic developed in previous works. We prove an adequacy and an equational full abstraction theorem showing that equality in the model coincides with a natural notion of observational equivalence.
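
The call-by-value option at ground type matters in a probabilistic language because it fixes when a probabilistic argument is sampled. A small Python illustration of the phenomenon (not the paper's semantics): an argument passed by name is re-sampled at every use, while one passed by value is sampled once and reused.

```python
import random

def coin():
    """A fair probabilistic ground-type value: 0 or 1."""
    return random.randint(0, 1)

def double_by_name(thunk):
    # Call-by-name: the argument is re-evaluated at every use, so the
    # two occurrences may see different samples; results are 0, 1, or 2.
    return thunk() + thunk()

def double_by_value(n):
    # Call-by-value: the argument was sampled once before the call,
    # so the result is always even: 0 or 2.
    return n + n

random.seed(0)
print(double_by_name(coin))     # distributed 1/4, 1/2, 1/4 over {0, 1, 2}
print(double_by_value(coin()))  # distributed 1/2, 1/2 over {0, 2}
```

Sampling once and reusing the result is exactly the behavior pure call-by-name cannot express directly, which is the "algorithmic expressiveness" the abstract refers to.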
Markov decision processes (MDPs) are a popular class of models suitable for solving control decision problems in probabilistic reactive systems. We consider parametric MDPs (pMDPs), which include parameters in some of the transition probabilities to account for stochastic uncertainties of the environment such as noise or input disturbances. We study pMDPs with reachability objectives where the parameter values are unknown and impossible to measure directly during execution, but a probability distribution over the parameter values is known. We study, for the first time, the computation of parameter-independent strategies that are expectation optimal, i.e., that optimize the expected reachability probability under the probability distribution over the parameters. We present an encoding of our problem into partially observable MDPs (POMDPs), i.e., a reduction of our problem to computing optimal strategies in POMDPs. We evaluate our method experimentally on several benchmarks: a motivating (repeated) learner model, a series of benchmarks with varying configurations of a robot moving on a grid, and a consensus protocol.
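
The encoding described here can be sketched directly: each POMDP state pairs an MDP state with a parameter valuation, the valuation is drawn once from the known prior in the initial distribution and never changes, and observations reveal only the MDP state, so any POMDP strategy is necessarily parameter-independent. A minimal sketch assuming finitely many parameter valuations; the dictionary encoding is illustrative, not the paper's.

```python
def pmdp_to_pomdp(states, actions, trans, prior, init):
    """Encode a pMDP with an unobservable parameter into a POMDP.

    trans(s, a, t, v): transition probability under parameter valuation v.
    prior: dict mapping each valuation v to its prior probability.
    Returns (pomdp_states, initial_dist, pomdp_trans, observe).
    """
    pomdp_states = [(s, v) for s in states for v in prior]
    # The parameter is sampled once, according to the prior, at the start.
    initial_dist = {(init, v): p for v, p in prior.items()}
    # The valuation component never changes during execution.
    pomdp_trans = {((s, v), a, (t, v)): trans(s, a, t, v)
                   for (s, v) in pomdp_states
                   for a in actions for t in states}
    # Observations expose only the MDP state and hide the parameter,
    # forcing every strategy to be parameter-independent.
    observe = {(s, v): s for (s, v) in pomdp_states}
    return pomdp_states, initial_dist, pomdp_trans, observe
```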
