
Probabilistic Model Checking of Robots Deployed in Extreme Environments

Added by Xingyu Zhao
Publication date: 2018
Language: English





Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans. This requires a high degree of operational autonomy under uncertain conditions, and poses new challenges for assuring the robots' safety and reliability. In this paper, we develop a framework for probabilistic model checking on a layered Markov model to verify the safety and reliability requirements of such robots, both at the pre-mission stage and during runtime. Two novel estimators, based on conservative Bayesian inference and an imprecise probability model with sets of priors, are introduced to learn the unknown transition parameters from operational data. We demonstrate our approach using data from a real-world deployment of unmanned underwater vehicles in extreme environments.
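To make the parameter-learning step concrete, here is a minimal Python sketch of the general idea, not the authors' implementation: an unknown transition (failure) probability is estimated from operational data under a set of Beta priors, and the most pessimistic posterior estimate is propagated through a toy per-step mission model. All numbers, the prior set, and the model structure are illustrative assumptions.

def beta_posterior_mean(alpha, beta, failures, trials):
    # Posterior mean of a Bernoulli failure probability under a Beta(alpha, beta) prior.
    return (alpha + failures) / (alpha + beta + trials)

def conservative_failure_estimate(failures, trials, prior_set):
    # Worst-case (largest) posterior failure probability over the admissible set of priors.
    return max(beta_posterior_mean(a, b, failures, trials) for a, b in prior_set)

# Illustrative operational data: 2 failed transitions observed in 500 attempts.
failures, trials = 2, 500

# Imprecise prior knowledge: any Beta(a, b) with these hyperparameters is admissible.
prior_set = [(a, b) for a in (0.5, 1.0, 2.0) for b in (10.0, 50.0, 100.0)]

p_fail = conservative_failure_estimate(failures, trials, prior_set)

# Propagate the conservative per-step estimate through a toy n-step mission:
# the robot must survive every step to complete the mission.
n_steps = 20
p_mission_success = (1.0 - p_fail) ** n_steps
print(f"conservative per-step failure estimate: {p_fail:.4f}")
print(f"resulting lower bound on {n_steps}-step mission success: {p_mission_success:.4f}")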

Related research

Building a humanlike integrative artificial cognitive system, that is, an artificial general intelligence, is one of the goals in artificial intelligence and developmental robotics. Furthermore, a computational model that enables an artificial cognitive system to achieve cognitive development will be an excellent reference for brain and cognitive science. This paper describes the development of a cognitive architecture using probabilistic generative models (PGMs) to fully mirror the human cognitive system. The integrative model is called a whole-brain PGM (WB-PGM). It is both brain-inspired and PGM-based. In this paper, the process of building the WB-PGM and learning from the human brain to build cognitive architectures is described.
In this paper, we investigate the applicability of standard model checking approaches to verifying properties in probabilistic programming. As the operational model for a standard probabilistic program is a potentially infinite parametric Markov decision process, no direct adaptation of existing techniques is possible. Therefore, we propose an on-the-fly approach where the operational model is successively created and verified via a step-wise execution of the program. This approach makes it possible to take key features of many probabilistic programs into account: nondeterminism and conditioning. We discuss the restrictions and demonstrate the scalability on several benchmarks.
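As an illustration only (an assumption about the general idea, not the paper's tool), the Python sketch below builds the operational model on the fly: states are created only as the step-wise execution reaches them, and a step-bounded reachability probability is accumulated as the frontier is expanded. The toy random-walk program and its step function are invented for the example; conditioning and nondeterminism are omitted.

def step(state):
    # Successor distribution of a toy random-walk program: list of (probability, next_state).
    if state in ("goal", "fail"):
        return [(1.0, state)]                     # absorbing outcomes
    return [(0.5, state + 1 if state < 3 else "goal"),
            (0.5, state - 1 if state > -3 else "fail")]

def bounded_reach_prob(init, target, horizon):
    # Probability of reaching `target` within `horizon` steps, creating states as needed.
    frontier = {init: 1.0}                        # explicit states built so far
    reached = 0.0
    for _ in range(horizon):
        nxt = {}
        for s, p in frontier.items():
            if s == target:
                reached += p                      # stop expanding once the target is hit
                continue
            for q, t in step(s):
                nxt[t] = nxt.get(t, 0.0) + p * q
        frontier = nxt
    return reached + frontier.get(target, 0.0)

print(bounded_reach_prob(0, "goal", 12))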
Binary decision diagrams can compactly represent vast sets of states, mitigating the state space explosion problem in model checking. Probabilistic systems, however, require multi-terminal diagrams storing rational numbers. They are inefficient for models with many distinct probabilities and for iterative numeric algorithms like value iteration. In this paper, we present a new symblicit approach to checking Markov chains and related probabilistic models: We first generate a decision diagram that symbolically collects all reachable states and their predecessors. We then concretise states one-by-one into an explicit partial state space representation. Whenever all predecessors of a state have been concretised, we eliminate it from the explicit state space in a way that preserves all relevant probabilities and rewards. We thus keep few explicit states in memory at any time. Experiments show that very large models can be model-checked in this way with very low memory consumption.
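The elimination step can be illustrated with a small explicit Markov chain; the sketch below is an assumption about the general technique, not the paper's implementation. A state whose predecessors have all been concretised is removed by rerouting its incoming probability mass to its successors (rescaling away any self-loop), which preserves reachability probabilities while keeping few explicit states in memory.

def eliminate(chain, s):
    # Remove state s from `chain` (a dict: state -> {successor: probability}),
    # redirecting every pred -> s -> succ path so reachability probabilities are preserved.
    succs = dict(chain[s])
    self_p = succs.pop(s, 0.0)
    scale = 1.0 / (1.0 - self_p)                  # condition on eventually leaving s
    for pred, trans in chain.items():
        if pred == s or s not in trans:
            continue
        p_to_s = trans.pop(s)
        for t, q in succs.items():
            trans[t] = trans.get(t, 0.0) + p_to_s * scale * q
    del chain[s]

# Illustrative 4-state chain: a -> b, and b loops or moves to the absorbing goal/fail states.
chain = {
    "a":    {"b": 1.0},
    "b":    {"b": 0.2, "goal": 0.5, "fail": 0.3},
    "goal": {"goal": 1.0},
    "fail": {"fail": 1.0},
}
eliminate(chain, "b")
print(chain["a"])   # {'goal': 0.625, 'fail': 0.375} -- same reachability from 'a' as before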
We revisit the symbolic verification of Markov chains with respect to finite horizon reachability properties. The prevalent approach iteratively computes step-bounded state reachability probabilities. By contrast, recent advances in probabilistic inference suggest symbolically representing all horizon-length paths through the Markov chain. We ask whether this perspective advances the state of the art in probabilistic model checking. First, we formally describe both approaches in order to highlight their key differences. Then, using these insights, we develop Rubicon, a tool that transpiles Prism models to the probabilistic inference tool Dice. Finally, we demonstrate better scalability compared to probabilistic model checkers on selected benchmarks. Altogether, our results suggest that probabilistic inference is a valuable addition to the probabilistic model checking portfolio -- with Rubicon as a first step towards integrating both perspectives.
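The difference between the two views can be seen on a toy example; the Python sketch below is illustrative only and is not Rubicon or Dice. For a biased coin flipped until heads appears, the prevalent step-bounded iteration propagates a distribution for a fixed horizon, while the inference view sums the probabilities of all horizon-length paths that reach the target; both yield the same finite-horizon reachability probability.

P_HEADS = 0.3
HORIZON = 5

# View 1: step-bounded iteration over the distribution on {still flipping, done}.
flipping, done = 1.0, 0.0
for _ in range(HORIZON):
    flipping, done = flipping * (1 - P_HEADS), done + flipping * P_HEADS

# View 2: sum over the paths on which heads first appears at step k.
path_sum = sum((1 - P_HEADS) ** (k - 1) * P_HEADS for k in range(1, HORIZON + 1))

print(done, path_sum)   # both equal 1 - 0.7**5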
Probabilistic timed automata are an extension of timed automata with discrete probability distributions. We consider model-checking algorithms for the subclasses of probabilistic timed automata which have one or two clocks. Firstly, we show that PCTL probabilistic model-checking problems (such as determining whether a set of target states can be reached with probability at least 0.99 regardless of how nondeterminism is resolved) are PTIME-complete for one-clock probabilistic timed automata, and are EXPTIME-complete for probabilistic timed automata with two clocks. Secondly, we show that, for one-clock probabilistic timed automata, the model-checking problem for the probabilistic timed temporal logic PCTL is EXPTIME-complete. However, the model-checking problem for the subclass of PCTL which does not permit both punctual timing bounds, which require the occurrence of an event at an exact time point, and comparisons with probability bounds other than 0 or 1, is PTIME-complete for one-clock probabilistic timed automata.
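For orientation, the property shapes involved look roughly as follows (the notation is illustrative and not taken from the paper): a reachability property with a probability threshold, a property with a punctual timing bound but only a qualitative (0/1) threshold, and one combining a punctual bound with a non-trivial threshold, which is the combination excluded from the PTIME-complete fragment for one-clock probabilistic timed automata.

\[
  \mathrm{P}_{\geq 0.99}[\, \Diamond\, \mathit{target} \,],
  \qquad
  \mathrm{P}_{> 0}[\, \Diamond^{= 5}\, \mathit{target} \,],
  \qquad
  \mathrm{P}_{\geq 0.99}[\, \Diamond^{= 5}\, \mathit{target} \,].
\]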
