
A Fixed-point Theorem for Horn Formula Equations

Published by: EPTCS
Publication date: 2021
Research field: Informatics Engineering
Paper language: English
Author: Stefan Hetzl





We consider constrained Horn clause solving from the more general point of view of solving formula equations. Constrained Horn clauses correspond to the subclass of Horn formula equations. We state and prove a fixed-point theorem for Horn formula equations which is based on expressing the fixed-point computation of a minimal model of a set of Horn clauses on the object level as a formula in first-order logic with a least fixed point operator. We describe several corollaries of this fixed-point theorem, in particular concerning the logical foundations of program verification, and sketch how to generalise it to incorporate abstract interpretations.
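As a rough illustration of the object-level encoding mentioned above (a schematic sketch in standard FO(LFP) notation, not the paper's exact formulation), consider a single unknown predicate P defined by two Horn clauses; the fixed-point computation of its minimal model can then be written as a first-order formula with a least fixed point operator:

  % schematic Horn clauses for an unknown predicate P
  \forall \bar{x}\, \big( \varphi(\bar{x}) \rightarrow P(t(\bar{x})) \big), \qquad
  \forall \bar{x}\, \big( \psi(\bar{x}) \wedge P(s(\bar{x})) \rightarrow P(u(\bar{x})) \big)

  % minimal model of P, expressed on the object level via a least fixed point
  P(\bar{y}) \;\equiv\; \Big[ \mathrm{LFP}_{P,\bar{y}}\;
      \exists \bar{x}\, \big( \varphi(\bar{x}) \wedge \bar{y} = t(\bar{x}) \big) \;\vee\;
      \exists \bar{x}\, \big( \psi(\bar{x}) \wedge P(s(\bar{x})) \wedge \bar{y} = u(\bar{x}) \big)
  \Big](\bar{y})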



Read also

Peizheng Yu, Zhihong Xia (2021)
Poincaré's last geometric theorem (the Poincaré-Birkhoff theorem) states that any area-preserving twist map of the annulus has at least two fixed points. We replace the area-preserving condition with a weaker intersection property, which states that any essential simple closed curve intersects its image under $f$ in at least one point. The conclusion is that any such map has at least one fixed point. Besides providing a new proof of Poincaré's geometric theorem, our result also has some applications to reversible systems.
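In symbols, the statement summarised above reads roughly as follows (notation chosen here only for illustration): if $f : A \to A$ is a twist map of the annulus $A = S^1 \times [0,1]$ satisfying the intersection property

  f(\gamma) \cap \gamma \neq \emptyset \quad \text{for every essential simple closed curve } \gamma \subset A,

then $f$ has at least one fixed point, i.e. there exists $x \in A$ with $f(x) = x$.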
We obtain an extended Reich fixed point theorem in the setting of generalized cone rectangular metric spaces without assuming normality of the underlying cone. Our work is a generalization of the main results in [AAB] and [JS].
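For orientation, the classical Reich condition in a complete metric space $(X,d)$ is shown below; the paper's setting replaces $d$ by a cone-valued distance on a generalized cone rectangular metric space and drops the normality assumption, so this is only the familiar special case:

  d(Tx, Ty) \;\le\; \alpha\, d(x,y) + \beta\, d(x,Tx) + \gamma\, d(y,Ty)
  \qquad \text{for all } x, y \in X, \quad \alpha + \beta + \gamma < 1,

under which $T : X \to X$ has a unique fixed point.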
Implementations of artificial neural networks (ANNs) might lead to failures which are hard to predict in the design phase, since ANNs are highly parallel and their parameters are barely interpretable. Here, we develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference to obtain adversarial cases and validate coverage methods in a multi-layer perceptron (MLP). We exploit incremental BMC based on interval analysis to compute boundaries from a neuron's input. These bounds are then propagated to effectively find a neuron's output, since it is the input of the next one. This paper describes the first bit-precise symbolic verification framework to reason over actual implementations of ANNs in CUDA, based on invariant inference, therefore providing further guarantees about finite-precision arithmetic and its rounding errors, which are routinely ignored in the existing literature. We have implemented the proposed approach on top of the efficient SMT-based bounded model checker (ESBMC), and our experimental results show that it can successfully verify safety properties in actual implementations of ANNs and generate real adversarial cases in MLPs. Our approach was able to verify and produce adversarial examples for 85.8% of 21 test cases considering different input images, and 100% of the properties related to covering methods. Although our verification time is higher than that of existing approaches, our methodology can consider fixed-point implementation aspects that are disregarded by state-of-the-art verification methodologies.
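A minimal sketch of the interval-analysis step described above, for a fully connected MLP with ReLU activations; the function name, shapes, and weights here are illustrative assumptions and are not taken from the ESBMC/CUDA implementation:

import numpy as np

def propagate_bounds(layers, lo, hi):
    # Propagate an input box [lo, hi] through affine + ReLU layers.
    # layers: list of (W, b) pairs; returns per-neuron output bounds that
    # could then be handed to an SMT/BMC back end as constraints.
    for W, b in layers:
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b   # worst case of W @ x + b over the box
        new_hi = W_pos @ hi + W_neg @ lo + b   # best case of W @ x + b over the box
        lo, hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)  # ReLU clips at 0
    return lo, hi

# Toy usage: bounds for a 2-layer MLP on inputs perturbed within +/- 0.1.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
x = np.array([0.5, -0.2, 0.1])
print(propagate_bounds(layers, x - 0.1, x + 0.1))

The output bounds of each layer become the input bounds of the next, which is the propagation idea sketched in the abstract.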
Linear fixed point equations in Hilbert spaces arise in a variety of settings, including reinforcement learning, and computational methods for solving differential and integral equations. We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space. First, we prove an instance-dependent upper bound on the mean-squared error for a linear stochastic approximation scheme that exploits Polyak--Ruppert averaging. This bound consists of two terms: an approximation error term with an instance-dependent approximation factor, and a statistical error term that captures the instance-specific complexity of the noise when projected onto the low-dimensional subspace. Using information theoretic methods, we also establish lower bounds showing that both of these terms cannot be improved, again in an instance-dependent sense. A concrete consequence of our characterization is that the optimal approximation factor in this problem can be much larger than a universal constant. We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation, establishing their optimality.
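A toy sketch of the kind of scheme analysed above: noisy linear stochastic approximation for a fixed point v* = A v* + b with Polyak--Ruppert averaging of the iterates. The problem instance, step size, and variable names are invented for illustration, and the subspace projection and instance-dependent bounds of the paper are omitted:

import numpy as np

rng = np.random.default_rng(1)
d = 5
M = rng.standard_normal((d, d))
A = 0.5 * M / np.linalg.norm(M, 2)            # contractive operator (spectral norm 0.5)
b = rng.standard_normal(d)
v_star = np.linalg.solve(np.eye(d) - A, b)    # exact fixed point of v = A v + b

eta, T = 0.05, 20000
v, v_bar = np.zeros(d), np.zeros(d)
for t in range(1, T + 1):
    noise = 0.1 * rng.standard_normal(d)      # stochastic observation error
    v = v + eta * (A @ v + b - v + noise)     # one stochastic approximation step
    v_bar += (v - v_bar) / t                  # Polyak--Ruppert running average

print(np.linalg.norm(v_bar - v_star))         # averaged iterate lands close to v*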
Klaus Bering (2009)
We give an elementary proof of Noether's first theorem while stressing the magical fact that the global quasi-symmetry only needs to hold for one fixed integration region. We provide sufficient conditions for gauging a global quasi-symmetry.
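For orientation, the point-mechanics form of the statement in standard textbook notation (the paper's setting, with a fixed integration region, is more general): if $\delta q$ is a global quasi-symmetry, i.e. the Lagrangian changes only by a total time derivative, then a charge is conserved on solutions of the equations of motion:

  \delta L \;=\; \frac{dK}{dt}
  \qquad \Longrightarrow \qquad
  \frac{d}{dt}\Big( \frac{\partial L}{\partial \dot{q}^i}\, \delta q^i - K \Big) \;=\; 0
  \quad \text{on-shell.}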