
A Decidable Confluence Test for Cognitive Models in ACT-R

Published by Daniel Gall
Publication date: 2017
Research field: Informatics engineering
Language: English

Computational cognitive modeling investigates human cognition by building detailed computational models of cognitive processes. Adaptive Control of Thought - Rational (ACT-R) is a rule-based cognitive architecture that offers a widely employed framework to build such models. There is a sound and complete embedding of ACT-R in Constraint Handling Rules (CHR). Therefore, analysis techniques from CHR can be used to reason about computational properties of ACT-R models. For example, confluence is the property that a program yields the same result for the same input regardless of the order in which its rules are applied. In ACT-R models, there are often cognitive processes that should always yield the same result, while others, e.g. those implementing strategies to solve a problem, could yield different results. In this paper, a decidable confluence criterion for ACT-R is presented. It allows the identification of ACT-R rules that are not confluent, so the modeler can check whether the model has the desired behavior. The sound and complete translation of ACT-R to CHR from prior work is used to derive a suitable invariant-based confluence criterion from the CHR literature. Proper invariants for translated ACT-R models are identified and proven to be decidable. The presented criterion coincides with confluence of the original ACT-R models.
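To make the notion of non-confluence concrete, here is a minimal sketch in Python rather than CHR; the rule names, the state encoding, and the arithmetic-fact example are purely illustrative and not taken from the paper. Two rules match the same state but produce different results, which is exactly the kind of rule pair the presented criterion is designed to flag.

def rule_count(state):
    # Strategy A: compute the sum by counting.
    if state.get("goal") == "add(2,3)":
        return {"goal": "done", "answer": 2 + 3, "strategy": "counting"}
    return None  # rule not applicable

def rule_retrieve(state):
    # Strategy B: answer from declarative memory (here, a mistaken fact).
    if state.get("goal") == "add(2,3)":
        return {"goal": "done", "answer": 6, "strategy": "retrieval"}
    return None  # rule not applicable

initial = {"goal": "add(2,3)"}
answers = {rule(initial)["answer"] for rule in (rule_count, rule_retrieve)}
print(answers)  # {5, 6}: two applicable rules, two different final answers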


Read also

Computational psychology aims to explain human cognition by computational models of cognitive processes. The cognitive architecture ACT-R is popular for developing such models. Although ACT-R has a well-defined psychological theory and has been used to explain many cognitive processes, there are two problems that make it hard to reason formally about its cognitive models: first, ACT-R lacks a formalization of its underlying production rule system, and secondly, there are many different implementations and extensions of ACT-R whose technical artifacts complicate formal reasoning even more. This paper describes a formal operational semantics - the very abstract semantics - that abstracts from as many technical details as possible, keeping it open to extensions and different implementations of the ACT-R theory. In a second step, this semantics is refined to the abstract semantics, which defines some abstract features found in many implementations of ACT-R. It concentrates on the procedural core of ACT-R and is suitable for analysis of the transition system, since it still abstracts from details like timing, the sub-symbolic layer, or conflict resolution. Furthermore, a translation of ACT-R models to the programming language Constraint Handling Rules (CHR) is defined. This makes the abstract semantics an executable specification of ACT-R. CHR has been used successfully to embed other rule-based formalisms like graph transformation systems or functional programming. There are many results and tools that support formal reasoning about and analysis of CHR programs. The translation of ACT-R models to CHR is proven sound and complete w.r.t. the abstract operational semantics of ACT-R. This paves the way for the analysis of ACT-R models through CHR. Therefore, to the best of our knowledge, our abstract semantics is the first formulation of ACT-R suitable for both analysis and execution.
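As a rough intuition for what such a translation does, the following hypothetical Python sketch encodes ACT-R-style buffer contents as chunks in a store and a production as a rule that matches and rewrites them; the Chunk type, the slot encoding, and the production are invented for illustration and do not reflect the paper's actual translation scheme.

from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    buffer: str   # e.g. "goal" or "retrieval"
    slots: tuple  # e.g. (("state", "start"),)

def production_start_counting(store):
    # If the goal buffer holds state=start, rewrite it to state=counting.
    for chunk in store:
        if chunk.buffer == "goal" and dict(chunk.slots).get("state") == "start":
            return (store - {chunk}) | {Chunk("goal", (("state", "counting"),))}
    return None  # production not applicable to this store

store = frozenset({Chunk("goal", (("state", "start"),))})
print(production_start_counting(store))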
In language learning in the limit, the most common type of hypothesis is to give an enumerator for a language. This so-called $W$-index allows for naming arbitrary computably enumerable languages, with the drawback that even the membership problem is undecidable. In this paper we use a different system which allows for naming arbitrary decidable languages, namely programs for characteristic functions (called $C$-indices). These indices have the drawback that it is now not decidable whether a given hypothesis is even a legal $C$-index. In this first analysis of learning with $C$-indices, we give a structured account of the learning power of various restrictions employing $C$-indices, also when compared with $W$-indices. We establish a hierarchy of learning power depending on whether $C$-indices are required (a) on all outputs; (b) only on outputs relevant for the class to be learned and (c) only in the limit as final, correct hypotheses. Furthermore, all these settings are weaker than learning with $W$-indices (even when restricted to classes of computable languages). We analyze all these questions also in relation to the mode of data presentation. Finally, we also ask about the relation of semantic versus syntactic convergence and derive the map of pairwise relations for these two kinds of convergence coupled with various forms of data presentation.
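The distinction between the two naming systems can be illustrated with a small, hypothetical Python sketch for the language of even numbers (the example language and function names are not from the paper): an enumerator in the spirit of a $W$-index only semi-decides membership, whereas a characteristic function in the spirit of a $C$-index decides it.

def enumerator():
    # Enumerates the language of even numbers; a membership query answered by
    # searching this enumeration never terminates for non-members.
    n = 0
    while True:
        yield n
        n += 2

def characteristic(x):
    # Decides membership directly and always terminates.
    return x % 2 == 0

gen = enumerator()
print(next(gen), next(gen))  # 0 2
print(characteristic(7))     # False, immediately
# Searching the enumeration for 7 would run forever: membership via an
# enumerator is only semi-decidable in general.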
Confluence denotes the property of a state transition system that states can be rewritten in more than one way yielding the same result. Although it is a desirable property, confluence is often too strict in practical applications because it also considers states that can never be reached in practice. Additionally, states that have the same semantics in the practical context are sometimes considered different states due to different syntactic representations. By introducing suitable invariants and equivalence relations on the states, programs may be confluent modulo the equivalence relation w.r.t. the invariant, which is often desirable in practice. In this paper, a sufficient and necessary criterion for confluence modulo equivalence w.r.t. an invariant for Constraint Handling Rules (CHR) is presented. It is the first approach that covers invariant-based confluence modulo equivalence for the de facto standard semantics of CHR. There is a trade-off between practical applicability and the simplicity of proving a confluence property. Therefore, a more manageable subset of equivalence relations has been identified that allows for the proposed confluence criterion and simplifies the confluence proofs by using well-established CHR analysis methods.
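The following small Python sketch, with invented data, illustrates the idea of confluence modulo an equivalence relation: two rule-application orders produce syntactically different states that a coarser equivalence (here, multiset equality) identifies.

from collections import Counter

def insert_front(xs, x):
    return [x] + xs

def insert_back(xs, x):
    return xs + [x]

empty = []
run_a = insert_front(insert_front(empty, 1), 2)  # [2, 1]
run_b = insert_back(insert_back(empty, 1), 2)    # [1, 2]

print(run_a == run_b)                    # False: syntactically different states
print(Counter(run_a) == Counter(run_b))  # True: equal modulo multiset equivalence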
Orthogonality is a discipline of programming that in a syntactic manner guarantees determinism of functional specifications. Essentially, orthogonality avoids, on the one side, the inherent ambiguity of non-determinism, prohibiting the existence of different rules that specify the same function and that may apply simultaneously (non-ambiguity), and, on the other side, it eliminates the possibility of repeated variables in the left-hand side of these rules (left linearity). In the theory of term rewriting systems (TRSs), determinism is captured by the well-known property of confluence, which basically states that whenever different computations or simplifications from a term are possible, the computed answers should coincide. Although the proofs are technically elaborate, confluence is well known to be a consequence of orthogonality. Thus, orthogonality is an important mathematical discipline intrinsic to the specification of recursive functions that is naturally applied in functional programming and specification. Starting from a formalization of the theory of TRSs in the proof assistant PVS, this work describes how confluence of orthogonal TRSs has been formalized in this proof assistant, based on axiomatizations of properties of rules, positions, and substitutions involved in parallel steps of reduction. Proofs for some similar but restricted properties, such as confluence of non-ambiguous and (left and right) linear TRSs, have been fully formalized.
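A tiny, hypothetical example (in Python, not in PVS, and not taken from the formalization) of what non-ambiguity rules out: two rewrite rules whose left-hand sides overlap on the same term, yielding a critical pair and thus a potential loss of confluence.

def rule_collapse(term):
    # f(x) -> "a": applies to any argument of f.
    if term[0] == "f":
        return "a"
    return None

def rule_special(term):
    # f(b) -> "c": applies only when the argument is b.
    if term[0] == "f" and term[1] == "b":
        return "c"
    return None

term = ("f", "b")
print(rule_collapse(term), rule_special(term))  # a c -- a critical pair
# Orthogonality forbids such overlapping left-hand sides (non-ambiguity) and
# repeated variables in a left-hand side (left linearity).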
We present initial limit Datalog, a new extensible class of constrained Horn clauses for which the satisfiability problem is decidable. The class may be viewed as a generalisation to higher-order logic (with a simple restriction on types) of the first-order language limit Datalog$_Z$ (a fragment of Datalog modulo linear integer arithmetic), but can be instantiated with any suitable background theory. For example, the fragment is decidable over any countable well-quasi-order with a decidable first-order theory, such as natural number vectors under componentwise linear arithmetic, and words of a bounded, context-free language ordered by the subword relation. Formulas of initial limit Datalog have the property that, under some assumptions on the background theory, their satisfiability can be witnessed by a new kind of term model which we call entwined structures. Whilst the set of all models is typically uncountable, the set of all entwined structures is recursively enumerable, and model checking is decidable.
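As a rough, first-order intuition for the limit semantics underlying limit Datalog$_Z$ (the paper's class generalises this to higher order), the following Python sketch with an invented toy graph computes a fixpoint in which each numeric fact is kept only at its tightest bound.

graph = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 5}  # invented edge weights
dist = {"a": 0}  # read dist[v] = n as "the distance to v is at most n"

changed = True
while changed:
    changed = False
    for (u, v), w in graph.items():
        if u in dist and dist.get(v, float("inf")) > dist[u] + w:
            dist[v] = dist[u] + w  # keep only the tightest bound
            changed = True

print(dist)  # {'a': 0, 'b': 1, 'c': 3}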