
SIMPL: A DSL for Automatic Specialization of Inference Algorithms

Posted by Rohin Shah
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Inference algorithms in probabilistic programming languages (PPLs) can be thought of as interpreters, since an inference algorithm traverses a model given evidence to answer a query. As with interpreters, we can improve the efficiency of inference algorithms by compiling them once the model, evidence, and query are known. We present SIMPL, a domain-specific language for inference algorithms that uses this idea to automatically specialize annotated inference algorithms. Because SIMPL specializes algorithms rather than acting as a traditional compiler, new inference algorithms can be added easily and still be optimized using domain-specific information. We evaluate SIMPL and show that partial evaluation gives a 2-6x speedup, caching provides an additional 1-1.5x speedup, and generating C code yields an additional 13-20x speedup, for an overall speedup of 30-150x across several inference algorithms and models.
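A minimal Python sketch of the underlying idea (not SIMPL itself, whose input is an annotated algorithm and whose output is optimized C): a generic inference routine interprets an arbitrary model description on every sample, whereas the specialized version that partial evaluation would produce hard-codes the traversal for one fixed model, evidence, and query. The model encoding and the rain/wet-grass example below are illustrative assumptions.

```python
import random

# Generic likelihood weighting: an "interpreter" over a Bayes-net description
# given as a topologically ordered list of (variable, conditional-probability
# function) pairs.
def likelihood_weighting(model, evidence, query, n=10_000):
    total = weight_true = 0.0
    for _ in range(n):
        sample, w = {}, 1.0
        for var, prob in model:              # traverses the model every sample
            p = prob(sample)
            if var in evidence:
                w *= p if evidence[var] else 1.0 - p
                sample[var] = evidence[var]
            else:
                sample[var] = random.random() < p
        total += w
        weight_true += w * sample[query]
    return weight_true / total

# What specialization would produce once model, evidence, and query are fixed
# (rain ~ Bernoulli(0.2), wet_grass observed True, query = rain): the model
# traversal, dictionary lookups, and branches are folded away.
def specialized_query(n=10_000):
    total = weight_true = 0.0
    for _ in range(n):
        rain = random.random() < 0.2
        w = 0.9 if rain else 0.1             # P(wet_grass = True | rain)
        total += w
        weight_true += w * rain
    return weight_true / total
```

Both functions estimate the same posterior P(rain | wet_grass = True); the second is what SIMPL-style specialization aims to generate automatically before lowering further to C.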




Read also

Blockchains host smart contracts for crowdfunding, tokens, and many other purposes. Vulnerabilities in contracts are often discovered, leading to the loss of large quantities of money. Psamathe is a new language we are designing around a new flow abstraction, reducing asset bugs and making contracts more concise than in existing languages. We present an overview of Psamathe, including a partial formalization. We also discuss several example contracts in Psamathe, and compare the Psamathe examples to the same contracts written in Solidity.
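A conceptual Python sketch (not Psamathe syntax, which the paper defines) of what a flow abstraction buys: the transfer of an asset is expressed as a single flow operation whose balance check and bookkeeping cannot be forgotten or duplicated, which is the class of asset bug the language aims to rule out. The class and method names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    balances: dict = field(default_factory=dict)

    def flow(self, source: str, dest: str, amount: int) -> None:
        """Move `amount` of the asset from `source` to `dest` as one atomic flow."""
        if amount < 0 or self.balances.get(source, 0) < amount:
            raise ValueError("insufficient assets to flow")
        self.balances[source] -= amount
        self.balances[dest] = self.balances.get(dest, 0) + amount

token = Token({"alice": 10})
token.flow("alice", "bob", 4)   # the whole transfer is one flow, not two hand-written updates
```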
TensorFlow Eager is a multi-stage, Python-embedded domain-specific language for hardware-accelerated machine learning, suitable for both interactive research and production. TensorFlow, which TensorFlow Eager extends, requires users to represent computations as dataflow graphs; this permits compiler optimizations and simplifies deployment but hinders rapid prototyping and run-time dynamism. TensorFlow Eager eliminates these usability costs without sacrificing the benefits furnished by graphs: It provides an imperative front-end to TensorFlow that executes operations immediately and a JIT tracer that translates Python functions composed of TensorFlow operations into executable dataflow graphs. TensorFlow Eager thus offers a multi-stage programming model that makes it easy to interpolate between imperative and staged execution in a single package.
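The same idea in a few lines of TensorFlow 2.x (a minimal sketch; the layer shapes are arbitrary): the function runs eagerly as ordinary Python, and wrapping it in tf.function invokes the JIT tracer, which stages the identical code into a dataflow graph.

```python
import tensorflow as tf

def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([4, 8])
w = tf.random.normal([8, 2])
b = tf.zeros([2])

eager_out = dense_relu(x, w, b)      # imperative: ops execute immediately
staged = tf.function(dense_relu)     # JIT tracer: Python -> dataflow graph
graph_out = staged(x, w, b)          # runs the staged graph

tf.debugging.assert_near(eager_out, graph_out)
```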
Recent improvements in programming languages, programming models, and frameworks have focused on abstracting users away from many programming issues. Among other features, recent programming frameworks include simpler syntax, automatic memory management and garbage collection, code reuse through library packages, and easily configurable deployment tools. For instance, Python has risen to the top of the list of programming languages thanks to the simplicity of its syntax, while still achieving good performance even though it is an interpreted language. Moreover, the community has developed a large number of libraries and modules, tuning them for performance. However, there is still room for improvement in shielding users from distributed and parallel computing issues. This paper proposes and evaluates AutoParallel, a Python module that automatically finds a task-based parallelization of affine loop nests and executes them in parallel on a distributed computing infrastructure. This parallelization can also build data blocks to increase task granularity and achieve good execution performance. Moreover, AutoParallel is based on sequential programming and requires only a small annotation in the form of a Python decorator, so that anyone with basic programming skills can scale an application up to hundreds of cores.
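A hypothetical usage sketch of the annotation style the paper describes, assuming the decorator is exposed as `parallel` under PyCOMPSs (the import path below is an assumption): the loop nest stays sequential Python, and the decorator triggers the automatic task-based parallelization of the affine loops.

```python
from pycompss.api.parallel import parallel  # assumed import path

@parallel()  # the single annotation the approach requires
def matmul(a, b, c, n):
    # Affine loop nest: bounds and array accesses are linear in i, j, k,
    # so the nest can be analyzed and split into distributed tasks
    # (optionally over data blocks to increase task granularity).
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
```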
Garcia and Cimini study a type inference problem for the ITGL, an implicitly and gradually typed language with let-polymorphism, and develop a sound and complete inference algorithm for it. Soundness and completeness mean that, if the algorithm succeeds, the input term can be translated to a well-typed term of an explicitly typed blame calculus by cast insertion, and vice versa. However, in general, there are many possible translations depending on how type variables that were left undecided by static type inference are instantiated with concrete static types. Worse, the translated terms may behave differently: some evaluate to values while others raise blame. In this paper, we propose and formalize a new blame calculus $\lambda^{\textsf{DTI}}_{\textsf{B}}$ that avoids such divergence as an intermediate language for the ITGL. The main idea is to allow a term to contain type variables (that have not been instantiated during static type inference) and to defer instantiation of these type variables to run time. We introduce dynamic type inference (DTI) into the semantics of $\lambda^{\textsf{DTI}}_{\textsf{B}}$ so that type variables are instantiated along reduction. The DTI-based semantics not only avoids the divergence described above but is also sound and complete with respect to the semantics of fully instantiated terms, in the following sense: if the evaluation of a term succeeds (i.e., terminates with a value) in the DTI-based semantics, then there is a fully instantiated version of the term that also succeeds in the explicitly typed blame calculus, and vice versa. Finally, we prove the gradual guarantee, an important correctness criterion for a gradually typed language, for the ITGL.
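A small Python model of the run-time behaviour (a conceptual illustration only, not the calculus): a type variable left undecided by static inference is carried as an uninstantiated cell, and the first cast that reaches it binds it to the run-time type of the value; later casts against the same variable must then be consistent or raise blame.

```python
class TypeVar:
    """A static type variable whose instantiation is deferred to run time."""
    def __init__(self):
        self.instance = None

def cast(value, ty):
    if isinstance(ty, TypeVar):
        if ty.instance is None:
            ty.instance = type(value)     # DTI: instantiate X at first use
            return value
        ty = ty.instance                  # X already instantiated: check against it
    if not isinstance(value, ty):
        raise TypeError(f"blame: expected {ty.__name__}, got {type(value).__name__}")
    return value

X = TypeVar()
cast(3, X)       # instantiates X := int and succeeds
cast(5, X)       # consistent with X, succeeds
# cast("hi", X)  # would raise blame: X was already instantiated to int
```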
We decompose reverse-mode automatic differentiation into (forward-mode) linearization followed by transposition. Doing so isolates the essential difference between forward- and reverse-mode AD, and simplifies their joint implementation. In particular, once forward-mode AD rules are defined for every primitive operation in a source language, only linear primitives require an additional transposition rule in order to arrive at a complete reverse-mode AD implementation. This is how reverse-mode AD is written in JAX and Dex.
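A small JAX sketch of the decomposition (f is an arbitrary example function): jax.linearize performs the forward-mode linearization, jax.linear_transpose transposes the resulting linear map, and composing the two recovers the reverse-mode gradient.

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x

x = jnp.asarray(2.0)
y, f_lin = jax.linearize(f, x)             # linear map: v -> J(x) @ v
f_lin_t = jax.linear_transpose(f_lin, x)   # transposed map: w -> J(x)^T @ w
(grad_x,) = f_lin_t(jnp.ones_like(y))      # VJP with cotangent 1.0
assert jnp.allclose(grad_x, jax.grad(f)(x))
```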