
On the Principles of Differentiable Quantum Programming Languages

Posted by Xiaodi Wu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Variational Quantum Circuits (VQCs), or so-called quantum neural networks, are predicted to be one of the most important near-term quantum applications, not only because of promises similar to those of classical neural networks, but also because of their feasibility on near-term noisy intermediate-scale quantum (NISQ) machines. The need for gradient information in the training of VQC applications has stimulated the development of auto-differentiation techniques for quantum circuits. We propose the first formalization of this technique, not only in the context of quantum circuits but also for imperative quantum programs (e.g., with controls), inspired by the success of differentiable programming languages in classical machine learning. In particular, we overcome a few unique difficulties caused by exotic quantum features (such as quantum no-cloning) and provide a rigorous formulation of differentiation applied to bounded-loop imperative quantum programs, its code-transformation rules, and a sound logic to reason about their correctness. Moreover, we have implemented our code transformation in OCaml and demonstrated the resource efficiency of our scheme both analytically and empirically. We also conduct a case study of training a VQC instance with controls, which shows the advantage of our scheme over existing auto-differentiation for quantum circuits without controls.
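
To make the differentiation step concrete, here is a minimal sketch in plain numpy. It is not the paper's OCaml code transformation (which handles imperative programs with controls); it only illustrates the standard parameter-shift rule that existing auto-differentiation of quantum circuits relies on, for a single-qubit circuit. The circuit, the observable, and the function names (ry, expectation, parameter_shift_grad) are choices made for this example.

```python
# Minimal sketch: parameter-shift differentiation of a one-parameter circuit.
# Not the paper's code; a toy illustration of circuit-level auto-differentiation.
import numpy as np

def ry(theta):
    """Single-qubit rotation RY(theta) = exp(-i * theta * Y / 2) (real-valued matrix)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s,  c]])

Z = np.diag([1.0, -1.0])          # observable: Pauli-Z
ket0 = np.array([1.0, 0.0])       # initial state |0>

def expectation(theta):
    """<0| RY(theta)^dag Z RY(theta) |0>, which equals cos(theta)."""
    psi = ry(theta) @ ket0
    return float(psi.conj() @ (Z @ psi))

def parameter_shift_grad(theta, shift=np.pi / 2):
    """Exact derivative from two shifted evaluations of the same circuit."""
    return 0.5 * (expectation(theta + shift) - expectation(theta - shift))

theta = 0.7
print(expectation(theta))           # ~  0.7648 (= cos 0.7)
print(parameter_shift_grad(theta))  # ~ -0.6442 (= -sin 0.7, the true derivative)
```

The gradient comes from two extra executions of the same circuit at shifted parameter values rather than from copying or inspecting intermediate quantum states, which is why this style of differentiation is compatible with the no-cloning constraint mentioned above.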




Read also

This volume contains the proceedings of the Eighth Workshop on Quantitative Aspects of Programming Languages (QAPL 2010), held in Paphos, Cyprus, on March 27-28, 2010. QAPL 2010 is a satellite event of the European Joint Conferences on Theory and Practice of Software (ETAPS 2010). The workshop theme is quantitative aspects of computation. These aspects are related to the use of physical quantities (storage space, time, bandwidth, etc.) as well as mathematical quantities (e.g. probability and measures for reliability, security and trust), and play an important (sometimes essential) role in characterising the behaviour and determining the properties of systems. Such quantities are central to the definition of both the model of systems (architecture, language design, semantics) and the methodologies and tools for the analysis and verification of system properties. The aim of this workshop is to discuss the explicit use of quantitative information such as time and probabilities, either directly in the model or as a tool for the analysis of systems.
Peter D. Mosses, 2021
Specifying the semantics of a programming language formally can have many benefits. However, it can also require a huge effort. The effort can be significantly reduced by translating language syntax to so-called fundamental constructs (funcons). A translation to funcons is easy to update when the language evolves, and it exposes relationships between individual language constructs. The PLanCompS project has developed an initial collection of funcons (primarily for translation of functional and imperative languages). The behaviour of each funcon is defined, once and for all, using a modular variant of structural operational semantics. The definitions are available online. This paper introduces and motivates funcons. It illustrates translation of language constructs to funcons, and how funcons are defined. It also relates funcons to notation used in previous frameworks, including monadic semantics and action semantics.
This EPTCS volume contains the proceedings of the 16th Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL 2019) held in Prague, Czech Republic, on Sunday 7 April 2019. QAPL 2019 was a satellite event of the European Joint Conferences on Theory and Practice of Software (ETAPS 2019). QAPL focuses on quantitative aspects of computations, which may refer to the use of physical quantities (time, bandwidth, etc.) as well as mathematical quantities (e.g., probabilities) for the characterisation of the behaviour and for determining the properties of systems. Such quantities play a central role in defining both the model of systems (architecture, language design, semantics) and the methodologies and tools for the analysis and verification of system properties. The aim of the QAPL workshop series is to discuss the explicit use of time, probability, and general quantities, either directly in the model or as a tool for the analysis or synthesis of systems. The 16th edition of QAPL also focuses on discussing the developments, challenges and results in this area covered by the workshop over its nearly 20-year history.
Herbert Wiklicky, 2017
This volume of the EPTCS contains the proceedings of the 15th international workshop on Quantitative Aspects of Programming Languages and Systems, QAPL 2017, held on April 23, 2017 in Uppsala, Sweden as a satellite event of ETAPS 2017, the 20th European Joint Conferences on Theory and Practice of Software. The volume contains two invited contributions, by Erika Abraham and by Andrea Vandin, as well as six technical papers selected by the QAPL 2017 program committee.
In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this lazy training phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that lazy training is behind the many successes of neural networks in difficult high dimensional tasks.
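
The scaling effect described in this abstract is easy to reproduce on a toy problem. The sketch below is an illustration written for this listing, not code from the paper: it trains a tiny tanh network by gradient descent with the output scaled by a factor alpha and the step size scaled by 1/alpha^2 (the usual lazy-training setup, with the model centered at its initialization), and reports how far the parameters move. The architecture, data, and hyper-parameters are arbitrary choices for the demo.

```python
# Toy demonstration of lazy training: larger output scaling alpha => the trained
# parameters stay closer to their initialization. Not code from the paper.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=30)                       # toy 1-D inputs
y = np.sin(2 * x)                             # toy targets
p0 = rng.normal(size=20) / np.sqrt(10)        # shared initial parameters

def net(params):
    """Tiny one-hidden-layer tanh network with 20 parameters."""
    a, b = params[:10], params[10:]
    return np.tanh(np.outer(x, b)) @ a

f0 = net(p0)                                  # network output at initialization

def loss(params, alpha):
    # scaled, init-centered model alpha * (f(w) - f(w0)), squared-error loss
    return 0.5 * np.mean((alpha * (net(params) - f0) - y) ** 2)

def num_grad(params, alpha, eps=1e-5):
    """Central finite differences; adequate for a 20-parameter toy model."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params)
        e[i] = eps
        g[i] = (loss(params + e, alpha) - loss(params - e, alpha)) / (2 * eps)
    return g

for alpha in (1.0, 100.0):
    p = p0.copy()
    for _ in range(500):
        p -= (0.1 / alpha ** 2) * num_grad(p, alpha)   # 1/alpha^2 step-size scaling
    print(f"alpha={alpha:6.1f}  final loss={loss(p, alpha):.4f}  "
          f"parameter movement={np.linalg.norm(p - p0):.4f}")
```

With the 1/alpha^2 step size, the function-space dynamics of the two runs are comparable, but the parameters move far less for the larger alpha, which is the "parameters hardly varying" behaviour the abstract refers to.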
