
Robust Inference and Verification of Temporal Logic Classifier-in-the-loop Systems

Added by: Zhe Xu
Publication date: 2020
Language: English
Authors: Zhe Xu





Autonomous systems embedded with machine learning modules often rely on deep neural networks to classify objects of interest in the environment or to select actions or strategies for the system. Due to the non-linearity and high dimensionality of deep neural networks, the interpretability of such autonomous systems is compromised. Moreover, the machine learning methods used in autonomous systems are mostly data-intensive and lack the commonsense knowledge and reasoning that are natural to humans. In this paper, we propose the framework of temporal logic classifier-in-the-loop systems. The temporal logic classifiers output actions for an autonomous system based on the environment, such that the behavior of the autonomous system satisfies a given temporal logic specification. Our approach is robust and provably correct, as we can prove that the behavior of the autonomous system satisfies the given temporal logic specification in the presence of (bounded) disturbances.
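
One way to make the "robust satisfaction under bounded disturbances" claim concrete is to reason about robustness margins of temporal logic formulas. The Python sketch below is not the paper's implementation; it assumes a simple specification (always stay above a lower bound and eventually exceed a target) and declares robust satisfaction when the nominal robustness margin exceeds the disturbance bound. All names and thresholds are illustrative.

# Minimal sketch, not the paper's method: robust satisfaction of a simple
# temporal logic specification under a bounded disturbance.
import numpy as np

def rho_always(signal, low):
    # Robustness of "always (signal >= low)": worst-case margin over the horizon.
    return np.min(signal - low)

def rho_eventually(signal, high):
    # Robustness of "eventually (signal >= high)": best-case margin over the horizon.
    return np.max(signal - high)

def robustly_satisfies(signal, low, high, disturbance_bound):
    # The specification keeps holding under any disturbance of magnitude up to
    # disturbance_bound if the nominal robustness margin exceeds that bound.
    rho = min(rho_always(signal, low), rho_eventually(signal, high))
    return rho > disturbance_bound

# Example trajectory: stays above 0.5 and eventually exceeds 2.0.
traj = np.array([0.8, 1.1, 1.6, 2.3, 2.1])
print(robustly_satisfies(traj, low=0.5, high=2.0, disturbance_bound=0.2))  # True
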




Related research

Zhe Xu, Yi Deng, Agung Julius (2020)
In this paper, we present a mechanism for building hybrid system observers to differentiate between specific positions of the hybrid system. The mechanism is designed by inferring metric temporal logic (MTL) formulae from simulated trajectories of the hybrid system. We first approximate the system behavior by simulating finitely many trajectories with time-robust tube segments around them. These time-robust tube segments account for both spatial and temporal uncertainties that exist in the hybrid system with initial state variations. The inferred MTL formulae classify different time-robust tube segments and thus can be used for classifying the hybrid system behaviors in a provably correct fashion. We implement our approach on a model of a smart building testbed to distinguish two cases of room occupancy.
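
As a rough illustration of how an inferred MTL formula can act as a classifier of simulated behaviors, the sketch below evaluates a hand-picked formula F_[2,5](x >= 1) on two sampled trajectories; the formula, time window, threshold, and "occupied"/"vacant" data are assumptions, not outputs of the authors' inference procedure.

import numpy as np

def satisfies_eventually(times, values, t_window, threshold):
    # MTL formula F_[a, b](x >= threshold): x exceeds the threshold at some
    # sampled time instant inside the window [a, b].
    a, b = t_window
    mask = (times >= a) & (times <= b)
    return bool(np.any(values[mask] >= threshold))

def classify(trajectory, t_window=(2.0, 5.0), threshold=1.0):
    times, values = trajectory
    return 1 if satisfies_eventually(times, values, t_window, threshold) else 0

# Two simulated behaviors sampled at 1 Hz: one crosses the threshold inside
# the window, the other does not (hypothetical occupancy data).
t = np.arange(0.0, 6.0, 1.0)
occupied = (t, np.array([0.1, 0.4, 0.9, 1.3, 1.5, 1.4]))
vacant = (t, np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.4]))
print(classify(occupied), classify(vacant))  # 1 0
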
In the current control design of safety-critical autonomous systems, formal verification techniques are typically applied after the controller is designed to evaluate whether the required properties (e.g., safety) are satisfied. However, due to the increasing system complexity and the fundamental hardness of designing a controller with formal guarantees, such an open-loop process of design-then-verify often results in many iterations and fails to provide the necessary guarantees. In this paper, we propose a correct-by-construction control learning framework that integrates the verification into the control design process in a closed-loop manner, i.e., design-while-verify. Specifically, we leverage the verification results (computed reachable set of the system state) to construct feedback metrics for control learning, which measure how likely the current design of control parameters can meet the required reach-avoid property for safety and goal-reaching. We formulate an optimization problem based on such metrics for tuning the controller parameters, and develop an approximated gradient descent algorithm with a difference method to solve the optimization problem and learn the controller. The learned controller is formally guaranteed to meet the required reach-avoid property. By treating verifiability as a first-class objective and effectively leveraging the verification results during the control learning process, our approach can significantly improve the chance of finding a control design with formal property guarantees. This is demonstrated via a set of experiments on both linear and non-linear systems that use model-based or neural network based controllers.
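
The following sketch illustrates the design-while-verify idea in a drastically simplified setting: a one-dimensional linear system, a sampled approximation of the reachable set in place of formal reachability analysis, and a single controller gain tuned by finite-difference gradient descent on a reach-avoid metric. The dynamics, sets, gains, and step sizes are illustrative assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
N, H, DT = 200, 30, 0.1
X0 = rng.uniform(0.8, 1.2, size=N)        # sampled initial states
D = rng.uniform(-0.1, 0.1, size=(H, N))   # sampled bounded disturbances

def reachable_samples(k):
    # Terminal states of x_{t+1} = x_t + dt * (-k * x_t + d_t).
    x = X0.copy()
    for t in range(H):
        x = x + DT * (-k * x + D[t])
    return x

def reach_avoid_metric(k):
    # Smaller is better: worst distance to the goal set |x| <= 0.2,
    # plus a penalty if any sample enters the avoid set x <= -0.5.
    xf = reachable_samples(k)
    reach = np.max(np.maximum(np.abs(xf) - 0.2, 0.0))
    avoid = 10.0 * float(np.any(xf <= -0.5))
    return reach + avoid

# Approximated gradient descent on the controller gain via a central difference.
k, step, eps = 0.5, 0.5, 1e-2
for _ in range(50):
    grad = (reach_avoid_metric(k + eps) - reach_avoid_metric(k - eps)) / (2 * eps)
    k -= step * grad
print(f"learned gain k = {k:.3f}, metric = {reach_avoid_metric(k):.4f}")
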
The deployment of autonomous systems in uncertain and dynamic environments has raised fundamental questions. Addressing these is pivotal to building fully autonomous systems and requires a systematic integration of planning and control. We first propose reactive risk signal interval temporal logic (ReRiSITL) as an extension of signal temporal logic (STL) to formulate complex spatiotemporal specifications. Unlike STL, ReRiSITL allows considering uncontrollable propositions that may model humans as well as random environmental events such as sensor failures. Additionally, ReRiSITL allows incorporating risk measures, such as (but not limited to) the Conditional Value-at-Risk, to measure the risk of violating certain spatial specifications. Second, we propose an algorithm to check whether a ReRiSITL specification is satisfiable. For this purpose, we abstract the ReRiSITL specification into a timed signal transducer and devise a game-based approach. Third, we propose a reactive planning and control framework for dynamical control systems under ReRiSITL specifications.
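
The risk-measure ingredient can be sketched in isolation: estimate the Conditional Value-at-Risk of a spatial-violation cost from samples and compare it to a risk budget. The sample distribution, cost function, and budget below are assumptions for illustration only, not part of the ReRiSITL machinery.

import numpy as np

def cvar(samples, alpha=0.9):
    # CVaR_alpha: mean cost over the worst (1 - alpha) tail of the samples.
    samples = np.sort(np.asarray(samples))
    var = np.quantile(samples, alpha)        # Value-at-Risk at level alpha
    return samples[samples >= var].mean()

# Cost = how far the agent penetrates an unsafe region (0 if it stays outside).
rng = np.random.default_rng(1)
positions = rng.normal(loc=2.0, scale=0.7, size=10_000)   # sampled closest approach
violation_cost = np.maximum(1.0 - positions, 0.0)          # unsafe region: x < 1
risk = cvar(violation_cost, alpha=0.95)
print(f"CVaR_0.95 of violation = {risk:.3f}, within budget: {risk <= 0.2}")
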
Motivated by the recent interest in cyber-physical and autonomous robotic systems, we study the problem of dynamically coupled multi-agent systems under a set of signal temporal logic tasks. In particular, the satisfaction of each of these signal temporal logic tasks depends on the behavior of a distinct set of agents. Instead of abstracting the agent dynamics and the temporal logic tasks into a discrete domain and solving the problem therein or using optimization-based methods, we derive collaborative feedback control laws. These control laws are based on a decentralized control barrier function condition that results in discontinuous control laws, as opposed to a centralized condition resembling the single-agent case. The benefits of our approach are inherent robustness properties typically present in feedback control as well as satisfaction guarantees for continuous-time multi-agent systems. More specifically, time-varying control barrier functions are used that account for the semantics of the signal temporal logic tasks at hand. For a certain fragment of signal temporal logic tasks, we further propose a systematic way to construct such control barrier functions. Finally, we show the efficacy and robustness of our framework in an experiment including a group of three omnidirectional robots.
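
A single-agent, single-integrator sketch of a time-varying control barrier function encoding an STL-style "reach the goal region by time T" task is given below. The funnel gamma(t), gains, and controller are illustrative assumptions; the paper's contribution concerns the decentralized multi-agent construction, which this sketch does not capture.

import numpy as np

GOAL, T, U_MAX = np.array([4.0, 3.0]), 10.0, 1.0

def gamma(t):
    # Shrinking funnel: by time T the allowed distance to the goal is 0.5.
    return max(6.0 - 0.55 * t, 0.5)

def barrier(x, t):
    # Time-varying CBF: nonnegative iff the state is inside the funnel at time t.
    return gamma(t) - np.linalg.norm(x - GOAL)

def controller(x):
    # Move toward the goal at maximum speed; for these parameters this keeps
    # the barrier nonnegative along the trajectory.
    d = GOAL - x
    return U_MAX * d / max(np.linalg.norm(d), 1e-9)

x, dt = np.array([0.0, 0.0]), 0.01
for k in range(int(T / dt)):
    t = k * dt
    assert barrier(x, t) >= -1e-6, "barrier condition violated"
    x = x + dt * controller(x)       # forward-Euler rollout of x' = u
print("distance to goal at T:", round(np.linalg.norm(x - GOAL), 3))
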
We develop a probabilistic control algorithm, GTLProCo, for swarms of agents with heterogeneous dynamics and objectives, subject to high-level task specifications. The resulting algorithm not only achieves decentralized control of the swarm but also significantly improves scalability over state-of-the-art existing algorithms. Specifically, we study a setting in which the agents move along the nodes of a graph, and the high-level task specifications for the swarm are expressed in a recently-proposed language called graph temporal logic (GTL). By constraining the distribution of the swarm over the nodes of the graph, GTL can specify a wide range of properties, including safety, progress, and response. GTLProCo, agnostic to the number of agents comprising the swarm, controls the density distribution of the swarm in a decentralized and probabilistic manner. To this end, it synthesizes a time-varying Markov chain modeling the time evolution of the density distribution under the GTL constraints. We first identify a subset of GTL, namely reach-avoid specifications, for which we can reduce the synthesis of such a Markov chain to either linear or semi-definite programs. Then, in the general case, we formulate the synthesis of the Markov chain as a mixed-integer nonlinear program (MINLP). We exploit the structure of the problem to provide an efficient sequential mixed-integer linear programming scheme with trust regions to solve the MINLP. We empirically demonstrate that our sequential scheme is at least three orders of magnitude faster than off-the-shelf MINLP solvers and illustrate the effectiveness of GTLProCo in several swarm scenarios.
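
To illustrate the density-control viewpoint only (not the synthesis itself), the sketch below evolves a swarm density over a small line graph with a hand-written column-stochastic matrix that respects the edge set, and checks two GTL-style constraints on the node densities. The graph, matrix, and thresholds are assumptions, not a matrix produced by GTLProCo.

import numpy as np

# 4-node line graph 0 - 1 - 2 - 3 (self-loops allowed). M[i, j] is the
# probability of moving from node j to node i; each column sums to 1 and
# M[i, j] > 0 only where an edge or self-loop exists.
M = np.array([
    [0.5, 0.1, 0.0, 0.0],
    [0.5, 0.4, 0.1, 0.0],
    [0.0, 0.5, 0.4, 0.1],
    [0.0, 0.0, 0.5, 0.9],
])
assert np.allclose(M.sum(axis=0), 1.0)

x = np.array([1.0, 0.0, 0.0, 0.0])     # all agents start at node 0
safety_ok, reach_step = True, None
for k in range(1, 40):
    x = M @ x                           # density evolution x_{k+1} = M x_k
    safety_ok &= bool(x[1] <= 0.6)      # "always": node 1 never holds > 60%
    if reach_step is None and x[3] >= 0.4:
        reach_step = k                  # "eventually": 40% of the swarm at node 3
print("safety satisfied:", safety_ok, "| reach satisfied at step:", reach_step)
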
