
Program-to-Circuit: Exploiting GNNs for Program Representation and Circuit Translation

Posted by Nan Wu
Publication date: 2021
Research field: Computer Engineering (Informatics)
Paper language: English





Circuit design is complicated and requires extensive domain-specific expertise. One major obstacle on the way to agile hardware development is the considerably time-consuming process of accurate circuit quality evaluation. To significantly expedite circuit evaluation during the translation from behavioral languages to circuit designs, we formulate it as a Program-to-Circuit problem, aiming to exploit the representation power of graph neural networks (GNNs) by representing C/C++ programs as graphs. The goal of this work is four-fold. First, we build a standard benchmark containing 40k C/C++ programs, each of which is translated to a circuit design with actual hardware quality metrics, aiming to facilitate the development of effective GNNs targeting this high-demand circuit design area. Second, 14 state-of-the-art GNN models are analyzed on the Program-to-Circuit problem. We identify key design challenges of this problem, which must be carefully handled but are not yet solved by existing GNNs. The goal is to provide domain-specific knowledge for designing GNNs with suitable inductive biases. Third, we discuss three sets of real-world benchmarks for evaluating GNN generalization and analyze the performance gap between standard programs and real-world ones. The goal is to enable transfer learning from limited training data to real-world large-scale circuit design problems. Fourth, the Program-to-Circuit problem is a representative within the Program-to-X framework, a set of program-based analysis problems with various downstream tasks. An in-depth understanding of the strengths and weaknesses of applying GNNs to Program-to-Circuit could largely benefit the entire Program-to-X family. Pioneering in this direction, we expect more GNN endeavors to revolutionize this high-demand Program-to-Circuit problem and to enrich the expressiveness of GNNs on programs.
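To make the setup concrete, here is a minimal, illustrative sketch (not the paper's benchmark or its models) of how a Program-to-Circuit pipeline could look with an off-the-shelf GNN library: a program is encoded as a graph whose nodes carry operation features and whose edges capture control/data flow, and a small graph regressor predicts a single hardware quality metric. All names, feature dimensions, and the choice of GCN layers below are hypothetical.

    # Illustrative sketch of GNN-based circuit-quality regression (hypothetical setup).
    import torch
    import torch.nn as nn
    from torch_geometric.data import Data
    from torch_geometric.nn import GCNConv, global_mean_pool

    class CircuitQualityGNN(nn.Module):
        def __init__(self, num_node_features, hidden_dim=64):
            super().__init__()
            self.conv1 = GCNConv(num_node_features, hidden_dim)
            self.conv2 = GCNConv(hidden_dim, hidden_dim)
            self.readout = nn.Linear(hidden_dim, 1)  # one scalar quality metric

        def forward(self, data):
            x = torch.relu(self.conv1(data.x, data.edge_index))
            x = torch.relu(self.conv2(x, data.edge_index))
            # single-graph batch vector (all zeros); use data.batch for mini-batches
            batch = torch.zeros(x.size(0), dtype=torch.long)
            graph_emb = global_mean_pool(x, batch)  # graph-level embedding
            return self.readout(graph_emb).squeeze(-1)

    # Toy "program graph": 4 operation nodes with 8-dim features, 3 flow edges.
    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]], dtype=torch.long)
    graph = Data(x=x, edge_index=edge_index)

    model = CircuitQualityGNN(num_node_features=8)
    predicted_quality = model(graph)  # regression target, e.g. an estimated latency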


Read also

We present QSystem, an open-source platform for the simulation of quantum circuits focused on bitwise operations on a hashmap data structure storing quantum states and gates. QSystem is implemented in C++ and delivered as a Python module, taking advantage of C++ performance and Python dynamism. The simulator's API is designed to be simple and intuitive, thus streamlining the simulation of a quantum circuit in Python. The current release has three distinct ways to represent the quantum state: vector, matrix, and the proposed bitwise. The latter constitutes our main result and is a new way to store and manipulate both states and operations, which shows an exponential advantage with the amount of superposition in the system's state. We benchmark the bitwise representation against other simulators, namely Qiskit, Forest SDK QVM, and Cirq.
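As a rough illustration of the hashmap idea (this is not QSystem's actual API or bitwise encoding), a sparse quantum state can be stored as a map from basis-state integers to amplitudes, so memory grows with the amount of superposition rather than with 2**n:

    # Sketch of a sparse, hashmap-backed state vector (illustrative only).
    from math import sqrt

    def apply_x(state, qubit):
        """Pauli-X: flip the given bit of every stored basis state."""
        return {basis ^ (1 << qubit): amp for basis, amp in state.items()}

    def apply_h(state, qubit):
        """Hadamard: split each basis state in two, merging amplitudes."""
        new_state = {}
        for basis, amp in state.items():
            bit = (basis >> qubit) & 1
            for new_bit, sign in ((0, 1), (1, -1 if bit else 1)):
                target = (basis & ~(1 << qubit)) | (new_bit << qubit)
                new_state[target] = new_state.get(target, 0) + sign * amp / sqrt(2)
        # drop entries that cancelled out, keeping the map sparse
        return {b: a for b, a in new_state.items() if abs(a) > 1e-12}

    state = {0: 1.0}           # |00>
    state = apply_h(state, 0)  # (|00> + |01>) / sqrt(2)
    state = apply_x(state, 1)  # (|10> + |11>) / sqrt(2)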
Probabilistic graphical models such as Bayesian networks are widely used to model stochastic systems to perform various types of analysis such as probabilistic prediction, risk analysis, and system health monitoring, which can become computationally expensive in large-scale systems. While demonstrations of true quantum supremacy remain rare, quantum computing applications managing to exploit the advantages of amplitude amplification have shown significant computational benefits when compared against their classical counterparts. We develop a systematic method for designing a quantum circuit to represent a generic discrete Bayesian network with nodes that may have two or more states, where nodes with more than two states are mapped to multiple qubits. The marginal probabilities associated with root nodes (nodes without any parent nodes) are represented using rotation gates, and the conditional probability tables associated with non-root nodes are represented using controlled rotation gates. The controlled rotation gates with more than one control qubit are represented using ancilla qubits. The proposed approach is demonstrated for three examples: a 4-node oil company stock prediction, a 10-node network for liquidity risk assessment, and a 9-node naive Bayes classifier for bankruptcy prediction. The circuits were designed and simulated using Qiskit, a quantum computing platform that enables simulations and also has the capability to run on real quantum hardware. The results were validated against those obtained from classical Bayesian network implementations.
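Below is a hedged Qiskit sketch of the rotation-gate encoding described in the abstract above, for a hypothetical two-node network A -> B (the probabilities and variable names are made up for illustration, and multi-control/ancilla handling is omitted): the root's marginal is set by a single RY rotation, and the child's conditional probability table by controlled RY rotations on each parent branch.

    # Sketch: encode P(A=1)=0.3, P(B=1|A=0)=0.2, P(B=1|A=1)=0.8 (hypothetical values).
    from math import asin, sqrt
    from qiskit import QuantumCircuit

    def prob_to_angle(p):
        # RY(theta)|0> places amplitude sqrt(p) on |1> when theta = 2*arcsin(sqrt(p))
        return 2 * asin(sqrt(p))

    qc = QuantumCircuit(2)            # qubit 0 holds A, qubit 1 holds B
    qc.ry(prob_to_angle(0.3), 0)      # root node A: P(A=1) = 0.3

    qc.x(0)                           # flip so the A=0 branch acts as control
    qc.cry(prob_to_angle(0.2), 0, 1)  # P(B=1 | A=0) = 0.2
    qc.x(0)                           # undo the flip

    qc.cry(prob_to_angle(0.8), 0, 1)  # P(B=1 | A=1) = 0.8

    qc.measure_all()                  # sampling approximates the joint distribution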
Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems. Effective program synthesis depends on two key ingredients: a strong library of functions from which to build programs, and an efficient search strategy for finding programs that solve a given task. We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis. When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization on three domains -- string editing, image composition, and abstract reasoning about scenes -- even when no natural language hints are available at test time.
The goal of program synthesis is to automatically generate programs in a particular language from corresponding specifications, e.g. input-output behavior. Many current approaches achieve impressive results after training on randomly generated I/O examples in limited domain-specific languages (DSLs), as with string transformations in RobustFill. However, we empirically discover that applying test input generation techniques for languages with control flow and rich input space causes deep networks to generalize poorly to certain data distributions; to correct this, we propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications. We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
The goal of program synthesis from examples is to find a computer program that is consistent with a given set of input-output examples. Most learning-based approaches try to find a program that satisfies all examples at once. Our work, by contrast, considers an approach that breaks the problem into two stages: (a) find programs that satisfy only one example, and (b) leverage these per-example solutions to yield a program that satisfies all examples. We introduce the Cross Aggregator neural network module based on a multi-head attention mechanism that learns to combine the cues present in these per-example solutions to synthesize a global solution. Evaluation across programs of different lengths and under two different experimental settings reveals that, given the same time budget, our technique significantly improves the success rate over PCCoder arXiv:1809.04682v2 [cs.LG] and other ablation baselines. The code, data, and trained models for our work can be found at https://github.com/shrivastavadisha/N-PEPS.
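As a loose illustration of attention-based aggregation over per-example solutions (this is not the authors' Cross Aggregator; the module, query design, and dimensions below are hypothetical), a learned query can attend over embeddings of the per-example programs to pool them into a single cue vector:

    # Sketch: pool per-example solution embeddings with multi-head attention.
    import torch
    import torch.nn as nn

    class PerExampleAggregator(nn.Module):
        def __init__(self, embed_dim=128, num_heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            self.query = nn.Parameter(torch.randn(1, 1, embed_dim))  # learned query

        def forward(self, per_example_embeddings):
            # per_example_embeddings: (batch, num_examples, embed_dim),
            # one embedding per program that satisfies a single I/O example.
            batch_size = per_example_embeddings.size(0)
            q = self.query.expand(batch_size, -1, -1)
            pooled, _ = self.attn(q, per_example_embeddings, per_example_embeddings)
            return pooled.squeeze(1)  # (batch, embed_dim) global cue vector

    agg = PerExampleAggregator()
    cues = agg(torch.randn(8, 5, 128))  # 8 tasks, 5 per-example solutions each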
