
Yet Another Deep Embedding of B: Extending de Bruijn Notations

Added by: Eric Jaeger
Publication date: 2009
Language: English
Authors: Eric Jaeger





We present Bicoq3, a deep embedding of the B system in Coq, focusing on the technical aspects of the development. The main subjects discussed are the representation of sets and maps, the use of induction principles, and the introduction of a new de Bruijn notation that addresses various problems arising in the mechanisation of languages and logics.
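To make the starting point concrete, the following is a minimal sketch of classic de Bruijn notation for the untyped lambda calculus, written here in Python for illustration only. It shows the nameless term representation and the index-lifting operation that any mechanisation must handle; the extended notation introduced by Bicoq3 is not reproduced, and the class and function names are our own.

```python
# Classic de Bruijn terms: variables carry an index, binders carry no name.
from dataclasses import dataclass

class Term: pass

@dataclass
class Var(Term):
    index: int          # de Bruijn index: 0 refers to the nearest binder

@dataclass
class Lam(Term):
    body: Term          # abstraction: no variable name, only a body

@dataclass
class App(Term):
    fn: Term
    arg: Term

def lift(t: Term, amount: int, cutoff: int = 0) -> Term:
    """Shift every free index (>= cutoff) by `amount`, leaving bound ones alone."""
    if isinstance(t, Var):
        return Var(t.index + amount) if t.index >= cutoff else t
    if isinstance(t, Lam):
        return Lam(lift(t.body, amount, cutoff + 1))
    return App(lift(t.fn, amount, cutoff), lift(t.arg, amount, cutoff))

# \x. \y. x y  is written without names as:
example = Lam(Lam(App(Var(1), Var(0))))
```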




Read More

Neng-Fa Zhou, 2020
The at-most-k constraint is ubiquitous in combinatorial problems, and numerous SAT encodings are available for it. Prior experiments have shown the competitiveness of the sequential-counter encoding for k > 1, and have excluded the parallel-counter encoding, which is more compact than the binary-adder encoding, from consideration because it cannot enforce arc consistency through unit propagation. This paper presents an experiment that shows astounding performance of the binary-adder encoding for the at-most-k constraint.
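For reference, the sketch below generates the naive binomial CNF encoding of at-most-k: it simply forbids every set of k+1 of the given literals from being simultaneously true. This is not one of the compact encodings compared in the paper (sequential counter, parallel counter, binary adder); the DIMACS-style integer variable numbering and the function name are assumptions made for the example.

```python
# Naive binomial encoding of "at most k of these variables are true":
# one clause per (k+1)-subset, negating all of its literals.
from itertools import combinations

def at_most_k_binomial(variables, k):
    """Return CNF clauses (lists of DIMACS-style literals)."""
    return [[-v for v in subset] for subset in combinations(variables, k + 1)]

# Example: at most 2 of the variables 1..4 may be true.
clauses = at_most_k_binomial([1, 2, 3, 4], 2)
# -> [[-1, -2, -3], [-1, -2, -4], [-1, -3, -4], [-2, -3, -4]]
```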
We present a space- and time-efficient, fully dynamic implementation of de Bruijn graphs, which can also support fixed-length jumbled pattern matching.
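As background, the following is only the textbook, static construction of a de Bruijn graph over the k-mers of a string: nodes are (k-1)-mers and each k-mer contributes an edge from its prefix to its suffix. The succinct, fully dynamic implementation described above is far more involved and is not reproduced; the function name is an assumption.

```python
# Naive static de Bruijn graph: adjacency lists keyed by (k-1)-mers.
from collections import defaultdict

def de_bruijn_graph(text: str, k: int) -> dict:
    graph = defaultdict(list)
    for i in range(len(text) - k + 1):
        kmer = text[i:i + k]
        graph[kmer[:-1]].append(kmer[1:])   # edge: prefix -> suffix
    return dict(graph)

# Example with k = 3:
print(de_bruijn_graph("ACGTACG", 3))
# -> {'AC': ['CG', 'CG'], 'CG': ['GT'], 'GT': ['TA'], 'TA': ['AC']}
```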
The lambda-calculus with de Bruijn indices assembles each alpha-equivalence class of lambda-terms into a unique term, using indices instead of variable names. Intersection types provide finitary type polymorphism and can characterise normalisable lambda-terms through the property that a term is normalisable if and only if it is typeable. To be closer to computations and to simplify the formalisation of the atomic operations involved in beta-contractions, several calculi of explicit substitution were developed, mostly with de Bruijn indices. Versions of explicit-substitution calculi without types and with simple type systems are well investigated, in contrast to versions with more expressive systems such as intersection types.
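To make the "atomic operations involved in beta-contractions" concrete, here is a self-contained sketch of beta-contraction with de Bruijn indices, again in Python: a shift of free indices, a substitution that re-shifts the substituted term under each binder, and the contraction itself. Intersection types and explicit-substitution calculi are not modelled; the tuple encoding of terms is an assumption made for the example.

```python
# Terms are tuples: ('var', i), ('lam', body), ('app', fn, arg).

def shift(t, d, cutoff=0):
    """Add d to every free index (those >= cutoff)."""
    kind = t[0]
    if kind == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if kind == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Replace index j in t by s, shifting s under each binder."""
    kind = t[0]
    if kind == 'var':
        return s if t[1] == j else t
    if kind == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

def beta(redex):
    """Contract (\\. body) arg  ->  body[0 := arg], adjusting indices."""
    _, (_, body), arg = redex
    return shift(subst(body, 0, shift(arg, 1)), -1)

# (\x. x) y, where y is index 0 in the enclosing context:
print(beta(('app', ('lam', ('var', 0)), ('var', 0))))   # -> ('var', 0)
```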
Scaling issues are mundane yet irritating for practitioners of reinforcement learning. Error scales vary across domains, tasks, and stages of learning, sometimes by many orders of magnitude. This can be detrimental to learning speed and stability, create interference between learning tasks, and necessitate substantial tuning. We revisit this topic for agents based on temporal-difference learning, sketch out some desiderata and investigate scenarios where simple fixes fall short. The mechanism we propose requires neither tuning, clipping, nor adaptation. We validate its effectiveness and robustness on the suite of Atari games. Our scaling method turns out to be particularly helpful at mitigating interference, when training a shared neural network on multiple targets that differ in reward scale or discounting.
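The scaling problem itself is easy to see from the baseline TD(0) update, sketched below: the magnitude of the TD error tracks the reward scale and the discount, so a fixed step size behaves very differently across tasks. This is only the standard update used to illustrate the issue, not the mechanism proposed in the paper; the function name and constants are assumptions.

```python
# Generic TD(0) update; the TD error delta inherits the reward's scale.
def td0_update(value, state, next_state, reward, alpha=0.1, gamma=0.99):
    delta = reward + gamma * value[next_state] - value[state]   # TD error
    value[state] += alpha * delta
    return delta

value = {0: 0.0, 1: 0.0}
print(td0_update(value, 0, 1, reward=1.0))     # error on a unit-scale reward
print(td0_update(value, 0, 1, reward=1000.0))  # same transition, ~1000x the error
```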
In recent years, several advancements have been made in symbolic-numerical eigenvalue techniques for solving polynomial systems. In this article, we add to this list by reducing the task to an eigenvalue problem in a considerably faster and simpler way than in previous methods. This results in an algorithm which solves systems with isolated solutions in a reliable and efficient way, outperforming homotopy methods in overdetermined cases. We provide an implementation in the proof-of-concept Julia package EigenvalueSolver.jl.
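The classical univariate instance of this eigenvalue idea is that the roots of a monic polynomial are exactly the eigenvalues of its companion matrix. The sketch below shows only that textbook special case with NumPy; the multivariate algorithm and the EigenvalueSolver.jl package are not reproduced, and the function name is an assumption.

```python
# Roots of x^n + c[n-1] x^(n-1) + ... + c[0] as eigenvalues of the companion matrix.
import numpy as np

def roots_via_companion(coeffs):
    """coeffs = [c0, c1, ..., c_{n-1}] of a monic polynomial of degree n."""
    n = len(coeffs)
    companion = np.zeros((n, n))
    companion[1:, :-1] = np.eye(n - 1)      # sub-diagonal of ones
    companion[:, -1] = -np.asarray(coeffs)  # last column holds -c[0..n-1]
    return np.linalg.eigvals(companion)

# x^2 - 3x + 2 = (x - 1)(x - 2): coefficients [c0, c1] = [2, -3].
print(roots_via_companion([2.0, -3.0]))     # approximately [1., 2.]
```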