A kernel function for Signal Temporal Logic formulae

Added by Laura Nenzi
Publication date: 2020
Language: English

We discuss how to define a kernel for Signal Temporal Logic (STL) formulae. Such a kernel allows us to embed the space of formulae into a Hilbert space, and opens up the use of kernel-based machine learning algorithms in the context of STL. We show an application of this idea to a regression problem in formula space for probabilistic models.
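
The abstract leaves the construction abstract, but a natural reading is a Monte Carlo kernel that correlates the robustness of two formulae over trajectories drawn from a base measure on signals. The following sketch illustrates that reading only; the example formulae `phi` and `psi`, the random-walk base measure, and the normalisation are our assumptions, not the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy robustness semantics for two fixed STL formulae over a 1-D signal:
#   phi = F (x > 0.5):  robustness is max_t (x_t - 0.5)
#   psi = G (x > -0.2): robustness is min_t (x_t + 0.2)
phi = lambda x: np.max(x - 0.5)
psi = lambda x: np.min(x + 0.2)

# Base measure over signals: random walks (an assumption for this sketch).
signals = np.cumsum(rng.normal(scale=0.3, size=(5000, 50)), axis=1)

def robustness_vector(f):
    """Evaluate a formula's robustness on every sampled signal."""
    return np.array([f(x) for x in signals])

def kernel(f, g):
    """Monte Carlo estimate of k(f, g) = E_x[rho(f, x) * rho(g, x)]."""
    return float(np.mean(robustness_vector(f) * robustness_vector(g)))

# Normalised value in [-1, 1], a cosine-style similarity between formulae.
k = kernel(phi, psi) / np.sqrt(kernel(phi, phi) * kernel(psi, psi))
print(f"normalised kernel(phi, psi) = {k:.3f}")
```

A kernel matrix built this way over a set of formulae can then be handed to any kernel method, for example kernel ridge regression on formula space, as in the regression application the abstract mentions.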

Related research

Ruixuan Yan, Agung Julius (2021)
In this paper, we propose a neuro-symbolic framework called weighted Signal Temporal Logic Neural Network (wSTL-NN) that combines the characteristics of neural networks and temporal logics. Weighted Signal Temporal Logic (wSTL) formulas are recursively composed of subformulas that are combined using logical and temporal operators. The quantitative semantics of wSTL is defined such that the quantitative satisfaction of subformulas with higher weights has more influence on the quantitative satisfaction of the overall wSTL formula. In the wSTL-NN, each neuron corresponds to a wSTL subformula, and its output corresponds to the quantitative satisfaction of the formula. We use the wSTL-NN to represent wSTL formulas as features for classifying time-series data; STL features are more explainable than those used in classical methods. The wSTL-NN is end-to-end differentiable, so wSTL formulas can be learned via back-propagation. To reduce the number of weights, we introduce two techniques to sparsify the wSTL-NN. We apply our framework to an occupancy-detection time-series dataset to learn a classifier that predicts the occupancy status of an office room.
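
As one concrete reading of the weighted semantics, the sketch below implements a single differentiable "conjunction neuron" as a weighted soft minimum over subformula robustness values and trains its weights by back-propagation. The softmin smoothing, hinge loss, and toy data are our assumptions, not the authors' implementation.

```python
import torch

def weighted_and(rob, w, beta=10.0):
    """Smooth weighted minimum over k subformula robustness values.
    rob: (..., k) robustness tensor; w: (k,) raw weight logits.
    Subformulas with larger weights dominate the overall satisfaction."""
    w = torch.softmax(w, dim=-1)  # keep weights positive and normalised
    return -torch.logsumexp(-beta * rob + torch.log(w), dim=-1) / beta

# Two toy traces, each described by the robustness of two subformulas,
# labelled with the desired sign of the overall satisfaction.
rob = torch.tensor([[0.8, -0.3], [0.6, 0.7]])
labels = torch.tensor([-1.0, 1.0])

w = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = torch.relu(1.0 - labels * weighted_and(rob, w)).mean()  # hinge loss
    loss.backward()
    opt.step()
print(w.detach())  # training shifts weight onto the second subformula
```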
Temporal epistemic logic is a well-established framework for expressing agents' knowledge and how it evolves over time. Within language-based security these are central issues, for instance in the context of declassification. We propose to bring these two areas together. The paper presents a computational model and an epistemic temporal logic used to reason about the knowledge an observer acquires by watching program outputs. This approach is shown to elegantly capture standard notions of noninterference and declassification from the literature, as well as information-flow properties in which sensitive and public data intermingle in delicate ways.
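
In the simplest setting, the knowledge-based definitions this line of work captures reduce to an indistinguishability check over observable outputs. The sketch below illustrates that baseline notion of noninterference with invented toy programs; it is not the paper's computational model.

```python
from itertools import product

def outputs(program, low, high):
    """The output trace an observer sees for given public/secret inputs."""
    return tuple(program(low, high))

def noninterfering(program, lows, highs):
    """Runs agreeing on the public input must look identical to the
    observer, whatever the secret input is."""
    return all(
        outputs(program, lo, h1) == outputs(program, lo, h2)
        for lo in lows for h1, h2 in product(highs, repeat=2)
    )

leaky = lambda lo, hi: [lo, hi % 2]    # leaks the parity of the secret
secure = lambda lo, hi: [lo, lo + 1]   # depends only on the public input

print(noninterfering(leaky, lows=[0, 1], highs=[0, 1, 2]))   # False
print(noninterfering(secure, lows=[0, 1], highs=[0, 1, 2]))  # True
```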
In this note we consider the problem of introducing variables into temporal logic programs under the formalism of Temporal Equilibrium Logic (TEL), an extension of Answer Set Programming (ASP) for dealing with linear-time modal operators. To this end, we provide a definition of a first-order version of TEL that shares the syntax of first-order Linear-time Temporal Logic (LTL) but has a different semantics, selecting some LTL models that we call temporal stable models. We then consider a subclass of theories, called splittable temporal logic programs, that are close to usual logic programs while allowing a restricted use of temporal operators. In this setting, we provide a syntactic definition of safe variables that suffices to show domain independence: adding arbitrary elements to the universe does not change the set of temporal stable models. Finally, we present a method for computing the derivable facts by constructing a non-temporal logic program with variables that is fed to a standard ASP grounder. The information provided by the grounder is then used to generate a subset of ground temporal rules that is equivalent to (and generally smaller than) the full program instantiation.
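
The final grounding step can be pictured with a toy example: instantiate a temporal rule only for the substitutions that the non-temporal part of the program derives, instead of over the whole universe. The rule, facts, and universe below are invented for illustration; this is not the paper's algorithm.

```python
# Derivable non-temporal facts, as a (here simulated) ASP grounder reports.
facts = {("q", "a"), ("q", "b")}
universe = ["a", "b", "c", "d"]

# Splittable temporal rule with a safe variable X:  next p(X) :- q(X).
full_grounding = [f"next p({x}) :- q({x})" for x in universe]
pruned_grounding = [f"next p({x}) :- q({x})" for x in universe
                    if ("q", x) in facts]

print(len(full_grounding), "ground rules without pruning")  # 4
print(pruned_grounding)  # only the instances for X in {a, b}
```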
Effectively combining logic reasoning and probabilistic inference has been a long-standing goal of machine learning: the former can generalize from small training data, while the latter provides a principled framework for dealing with noisy data. However, existing methods for combining the best of both worlds are typically computationally intensive. In this paper, we focus on Markov Logic Networks and explore the use of graph neural networks (GNNs) for representing probabilistic logic inference. Our analysis reveals that the representational power of GNNs alone is not enough for this task. We therefore propose a more expressive variant, called ExpressGNN, which can perform effective probabilistic logic inference while scaling to a large number of entities. We demonstrate on several benchmark datasets that ExpressGNN has the potential to advance probabilistic logic reasoning to the next stage.
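
From the abstract, the key architectural move in ExpressGNN is to pair shared GNN embeddings with a small tunable embedding per entity, so that entities the GNN cannot distinguish can still be told apart. The sketch below shows that combination only; the dimensions, the frozen stand-in for the GNN output, and the scoring head are our assumptions.

```python
import torch

n_entities, gnn_dim, tune_dim = 100, 16, 4
gnn_emb = torch.randn(n_entities, gnn_dim)           # stand-in for GNN output
tune_emb = torch.nn.Embedding(n_entities, tune_dim)  # trainable per entity
score_head = torch.nn.Linear(2 * (gnn_dim + tune_dim), 1)

def relation_score(i, j):
    """Score a candidate relation between entities i and j from their
    concatenated (structural + tunable) embeddings."""
    ei = torch.cat([gnn_emb[i], tune_emb(torch.tensor(i))])
    ej = torch.cat([gnn_emb[j], tune_emb(torch.tensor(j))])
    return score_head(torch.cat([ei, ej]))

print(relation_score(0, 1))  # gradients flow into tune_emb and score_head
```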
Julien Brunel (2019)
Most model checkers provide a useful simulation mode that allows users to explore the set of possible behaviours by interactively picking, at each state, which event to execute next. Traditionally, this simulation mode cannot take additional temporal logic constraints into account, such as arbitrary fairness restrictions, which substantially reduces its usefulness for debugging the modelled system behaviour. Similarly, when a specification is false, even though its counter-examples together also form a set of behaviours, most model checkers present only one of them to the user, providing little or no mechanism to explore alternatives. In this paper, we present a simple on-the-fly verification technique that lets the user explore the behaviours satisfying an arbitrary temporal logic specification, through an interactive process akin to simulation. This technique enables a unified interface for simulating the modelled system and exploring its counter-examples. The technique is formalised in the framework of state/event linear temporal logic, and a proof of concept was implemented in an event-based variant of the Electrum framework.
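
The exploration the paper describes can be pictured as a simulation loop that offers only the events consistent with the temporal specification. The transition system and the trivial safety constraint below ("never reach the state err") are our toy illustration, not the Electrum implementation.

```python
# Hand-written transition system: state -> {event: successor}.
transitions = {
    "s0": {"a": "s1", "b": "err"},
    "s1": {"a": "s0", "c": "err"},
    "err": {},
}

def allowed_events(state):
    """Offer only events whose successor keeps the safety constraint."""
    return [e for e, nxt in transitions[state].items() if nxt != "err"]

state, trace = "s0", []
for _ in range(4):                # a short automated "simulation" run
    events = allowed_events(state)
    if not events:
        break
    event = events[0]             # an interactive tool would ask the user here
    trace.append(event)
    state = transitions[state][event]
print(trace)  # ['a', 'a', 'a', 'a']: only constraint-satisfying behaviour
```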
