
On modularity in reactive control architectures, with an application to formal verification

Published by Oliver Biggar
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Modularity is a central principle throughout the design process for cyber-physical systems. Modularity reduces complexity and increases reuse of behavior. In this paper we pose and answer the following question: how can we identify independent 'modules' within the structure of reactive control architectures? To this end, we propose a graph-structured control architecture we call a decision structure, and show how it generalises some reactive control architectures which are popular in Artificial Intelligence (AI) and robotics, specifically Teleo-Reactive programs (TRs), Decision Trees (DTs), Behavior Trees (BTs) and Generalised Behavior Trees ($k$-BTs). Inspired by the definition of a module in graph theory, we define modules in decision structures and show how each decision structure possesses a canonical decomposition into its modules. We can naturally characterise each of the BTs, $k$-BTs, DTs and TRs by properties of their module decomposition. This allows us to recognise which decision structures are equivalent to each of these architectures in quadratic time. Our proposed concept of modules extends to formal verification, under any verification scheme capable of verifying a decision structure. Namely, we prove that a modification to a module within a decision structure has no greater flow-on effects than a modification to an individual action within that structure. This enables verification on modules to be done locally and hierarchically, where structures can be verified and then repeatedly locally modified, with modules replaced by modules while preserving correctness. To illustrate the findings, we present an example of a solar-powered drone controlled by a decision structure. We use a Linear Temporal Logic-based verification scheme to verify the correctness of this structure, and then show how one can modify modules while preserving its correctness.
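The paper's formal definitions are not reproduced here, but the graph-theoretic notion of a module it builds on can be sketched in a few lines of Python: treat a decision structure as a labelled directed graph, and call a set of nodes a module (roughly) when every node outside the set relates to all of its members in the same way. The node names, edge labels and the naive check below are illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch only: a decision structure as a labelled directed
# graph, with a naive test of the graph-theoretic module property that
# every external node must see all members of the candidate set
# identically. All node names and edge labels are invented here.

class DecisionStructure:
    def __init__(self):
        self.nodes = set()
        self.edges = {}   # (u, v) -> edge label (e.g. a condition outcome)

    def add_edge(self, u, v, label):
        self.nodes.update((u, v))
        self.edges[(u, v)] = label

    def is_module(self, subset):
        """True if every node outside `subset` relates identically to all its members."""
        subset = set(subset)
        for w in self.nodes - subset:
            into = {self.edges.get((w, m)) for m in subset}    # edges w -> members
            out_of = {self.edges.get((m, w)) for m in subset}  # edges members -> w
            if len(into) > 1 or len(out_of) > 1:
                return False
        return True

ds = DecisionStructure()
ds.add_edge("check_battery", "survey_area", "charged")
ds.add_edge("check_battery", "return_home", "low")
ds.add_edge("survey_area", "log_data", "done")

print(ds.is_module({"survey_area", "log_data"}))  # False: check_battery reaches only one of them
print(ds.is_module({"survey_area"}))              # True: singletons are trivially modules
```

Under a definition along these lines, the subgraph inside a module can be swapped for another module without the rest of the structure being able to tell the difference, which is the intuition behind the local, hierarchical verification result described in the abstract.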




Read also

The battery is a key component of autonomous robots. Its performance limits the robot's safety and reliability. Unlike liquid fuel, a battery, as a chemical device, exhibits complicated features, including (i) capacity fade over successive recharges and (ii) an increasing discharge rate as the state of charge (SOC) goes down for a given power demand. Existing formal verification studies of autonomous robots, when considering energy constraints, formalise the energy component in a generic manner such that these battery features are overlooked. In this paper, we model an unmanned aerial vehicle (UAV) inspection mission on a wind farm and, via probabilistic model checking in PRISM, show (i) how the battery features may affect the verification results significantly in practical cases; and (ii) how the battery features, together with dynamic environments and battery safety strategies, jointly affect the verification results. Potential solutions to explicitly integrate battery prognostics and health management (PHM) with formal verification of autonomous robots are also discussed to motivate future work.
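As a rough intuition for why these two battery features matter, the toy Python simulation below (not the paper's PRISM model; all constants are invented for illustration) shows flight endurance shrinking both because capacity fades with recharge cycles and because, for a fixed power demand, the current draw rises as the terminal voltage sags at low state of charge.

```python
# Illustrative sketch only, not the paper's model: capacity fades per
# recharge cycle, and a fixed power demand draws more current as SOC
# (and hence terminal voltage) drops. All numbers are made up.

def terminal_voltage(soc, v_full=12.6, v_empty=10.5):
    """Crude linear voltage-vs-SOC curve (an assumption, not a fitted model)."""
    return v_empty + (v_full - v_empty) * soc

def simulate_flight(capacity_ah, power_w, dt_h=0.01, soc_min=0.2):
    """Discharge from full until the safety threshold; return flight hours."""
    soc, t = 1.0, 0.0
    while soc > soc_min:
        current_a = power_w / terminal_voltage(soc)   # same power -> more current at low SOC
        soc -= (current_a * dt_h) / capacity_ah
        t += dt_h
    return t

capacity = 10.0          # Ah when new (illustrative)
fade_per_cycle = 0.002   # 0.2% capacity lost per recharge (illustrative)
for cycle in (1, 100, 300):
    cap = capacity * (1 - fade_per_cycle) ** cycle
    print(f"cycle {cycle:3d}: usable flight time ~ {simulate_flight(cap, power_w=120):.2f} h")
```

A generic "energy budget" abstraction would treat all three cycles identically; the point of the abstract above is that a verification result obtained that way can be optimistic for an aged battery.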
Formal Verification (FV) and Machine Learning (ML) can seem incompatible due to their opposite mathematical foundations and their use in real-life problems: FV mostly relies on discrete mathematics and aims at ensuring correctness; ML often relies on probabilistic models and consists of learning patterns from training data. In this paper, we postulate that they are complementary in practice, and explore how ML helps FV in its classical approaches: static analysis, model-checking, theorem-proving, and SAT solving. We draw a landscape of the current practice and catalog some of the most prominent uses of ML inside FV tools, thus offering a new perspective on FV techniques that can help researchers and practitioners to better locate the possible synergies. We discuss lessons learned from our work, point to possible improvements and offer visions for the future of the domain in the light of the science of software and systems modeling.
Ran Tian, Masayoshi Tomizuka, 2021
A reward function, as an incentive representation that recognizes humans' agency and rationalizes humans' actions, is particularly appealing for modeling human behavior in human-robot interaction. Inverse Reinforcement Learning is an effective way to retrieve reward functions from demonstrations. However, it has always been challenging when applying it to multi-agent settings, since the mutual influence between agents has to be appropriately modeled. To tackle this challenge, previous work either exploits equilibrium solution concepts by assuming humans are perfectly rational optimizers with unbounded intelligence, or pre-assigns humans' interaction strategies a priori. In this work, we advocate that humans are boundedly rational and have different intelligence levels when reasoning about others' decision-making processes, and that such an inherent and latent characteristic should be accounted for in reward learning algorithms. Hence, we exploit such insights from Theory-of-Mind and propose a new multi-agent Inverse Reinforcement Learning framework that reasons about humans' latent intelligence levels during learning. We validate our approach in both zero-sum and general-sum games with synthetic agents and illustrate a practical application to learning human drivers' reward functions from real driving data. We compare our approach with two baseline algorithms. The results show that by reasoning about humans' latent intelligence levels, the proposed approach has more flexibility and capability to retrieve reward functions that better explain humans' driving behaviors.
High-capacity end-to-end approaches for human motion (behavior) prediction have the ability to represent subtle nuances in human behavior, but struggle with robustness to out-of-distribution inputs and tail events. Planning-based prediction, on the other hand, can reliably output decent-but-not-great predictions: it is much more stable in the face of distribution shift (as we verify in this work), but it has high inductive bias, missing important aspects that drive human decisions, and ignoring cognitive biases that make human behavior suboptimal. In this work, we analyze one family of approaches that strive to get the best of both worlds: use the end-to-end predictor on common cases, but do not rely on it for tail events / out-of-distribution inputs -- switch to the planning-based predictor there. We contribute an analysis of different approaches for detecting when to make this switch, using an autonomous driving domain. We find that promising approaches based on ensembling or generative modeling of the training distribution might not be reliable, but that there are very simple methods which can perform surprisingly well -- including training a classifier to pick up on tell-tale issues in predicted trajectories.
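The switching mechanism described above can be sketched as follows (the interfaces, feature set and thresholds here are assumptions for illustration, not the paper's implementation): run the learned predictor by default, extract a few red-flag features from its predicted trajectory, and hand over to the planning-based predictor when a simple monitor judges the prediction unreliable.

```python
# Illustrative sketch of predictor switching; predictors, features and
# the monitor threshold are all invented stand-ins.
import numpy as np

def trajectory_features(traj):
    """A few hand-crafted red-flag features of a predicted (T, 2) trajectory."""
    speeds = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    accels = np.diff(speeds)
    return np.array([speeds.max(), np.abs(accels).max(), speeds.std()])

def predict_with_fallback(obs, learned_predictor, planner_predictor, is_unreliable):
    """Use the end-to-end predictor unless the monitor flags its output."""
    traj = learned_predictor(obs)
    if is_unreliable(trajectory_features(traj)):
        return planner_predictor(obs)   # stabler, planning-based fallback
    return traj

# Tiny demo with stand-in predictors and a thresholded "classifier".
learned = lambda obs: np.cumsum(np.random.randn(20, 2) * 2.0, axis=0)          # jittery output
planner = lambda obs: np.stack([np.linspace(0, 10, 20), np.zeros(20)], axis=1)  # smooth fallback
monitor = lambda feats: feats[1] > 1.0   # flag implausibly large acceleration jumps
print(predict_with_fallback(None, learned, planner, monitor).shape)
```

In practice the monitor would be a trained classifier rather than a hand-set threshold, which is the "very simple method" the abstract reports as performing surprisingly well.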
Bruno Scherrer, 2011
We consider the discrete-time infinite-horizon optimal control problem formalized by Markov Decision Processes. We revisit the work of Bertsekas and Ioffe, which introduced $\lambda$ Policy Iteration, a family of algorithms parameterized by $\lambda$ that generalizes the standard algorithms Value Iteration and Policy Iteration, and has some deep connections with the Temporal Differences algorithm TD($\lambda$) described by Sutton and Barto. We deepen the original theory developed by the authors by providing convergence rate bounds which generalize standard bounds for Value Iteration described, for instance, by Puterman. Then, the main contribution of this paper is to develop the theory of this algorithm when it is used in an approximate form, and to show that this is sound. Doing so, we extend and unify the separate analyses developed by Munos for Approximate Value Iteration and Approximate Policy Iteration. Finally, we revisit the use of this algorithm in the training of a Tetris-playing controller, as originally done by Bertsekas and Ioffe. We provide an original performance bound that can be applied to such an undiscounted control problem. Our empirical results differ from those of Bertsekas and Ioffe (which were originally qualified as paradoxical and intriguing), and conform much more to what one would expect from a learning experiment. We discuss the possible reason for such a difference.
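For orientation, here is a compact tabular sketch of $\lambda$ Policy Iteration in the form usually given in the literature (the toy MDP and all constants are invented for illustration; this is not the paper's code). Setting lam=0 makes each update a Value Iteration step, while lam=1 performs full policy evaluation as in Policy Iteration.

```python
# Sketch of lambda-Policy Iteration on a small tabular MDP (illustrative).
import numpy as np

def lambda_policy_iteration(P, R, gamma=0.9, lam=0.5, iters=50):
    """P: (A, S, S) transition tensor, R: (A, S) expected rewards."""
    n_actions, n_states, _ = P.shape
    v = np.zeros(n_states)
    for _ in range(iters):
        # greedy policy with respect to the current value estimate
        q = R + gamma * P @ v                  # shape (A, S)
        pi = q.argmax(axis=0)                  # shape (S,)
        P_pi = P[pi, np.arange(n_states)]      # (S, S)
        r_pi = R[pi, np.arange(n_states)]      # (S,)
        # lambda-evaluation: solve v' = (1-lam)(r_pi + gamma P_pi v)
        #                             +   lam (r_pi + gamma P_pi v')
        A = np.eye(n_states) - lam * gamma * P_pi
        b = r_pi + (1 - lam) * gamma * P_pi @ v
        v = np.linalg.solve(A, b)
    return v, pi

# Two-state, two-action toy MDP (made up).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # action 0
              [[0.1, 0.9], [0.7, 0.3]]])     # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
print(lambda_policy_iteration(P, R, lam=0.0))  # behaves like Value Iteration
print(lambda_policy_iteration(P, R, lam=1.0))  # behaves like Policy Iteration
```

Intermediate values of lam interpolate between the two extremes, which is the family of algorithms whose approximate form the abstract analyses.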


