
Quantum mean value approximator for hard integer value problems

Added by David Joseph
Publication date: 2021
Field: Physics
Language: English





Evaluating the expectation value of a quantum circuit is a classically hard problem known as the quantum mean value problem (QMV). Such evaluations are used to optimize the quantum approximate optimization algorithm (QAOA) and other variational quantum eigensolvers. We show that this optimization can be improved substantially by using an approximation rather than the exact expectation. Together with efficient classical sampling algorithms, a quantum algorithm with minimal gate count can thus improve the efficiency of solving general integer-valued problems, such as the shortest vector problem (SVP) investigated in this work.
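For intuition, the estimator below replaces the exact expectation with a finite-shot Monte Carlo approximation, the usual way a diagonal cost observable is estimated from circuit samples. This is a minimal sketch, not the paper's algorithm: the sampler interface, the uniform toy distribution, and the bit-count cost are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_mean_value(sampler, cost, n_shots=1000):
    """Monte Carlo estimate of <C> = sum_z p(z) C(z) from finitely
    many shots, approximating the (classically hard) exact expectation."""
    shots = sampler(n_shots)                 # bitstrings from the circuit
    return float(np.mean([cost(z) for z in shots]))

# Illustrative stand-in for a quantum sampler: uniform 3-bit strings.
def uniform_sampler(n_shots):
    return rng.integers(0, 2, size=(n_shots, 3))

# Illustrative diagonal cost: the number of ones in the bitstring.
cost = lambda z: int(z.sum())

print(approx_mean_value(uniform_sampler, cost))  # close to 1.5
```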

Related research

The quantizer-dequantizer formalism is developed for the mean-value and probability representations of qubits and qutrits. We derive the star-product kernels, which make it possible to obtain explicit expressions for the associative product of the symbols of density operators and quantum observables for qubits. We discuss an extension of the quantizer-dequantizer formalism associated with the probability and observable mean-value descriptions of quantum states for qudits.
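For orientation, the general quantizer-dequantizer relations as they appear in the standard literature (not quoted from this abstract) are sketched below: the dequantizer $\hat U(x)$ maps an operator to its symbol, the quantizer $\hat D(x)$ reconstructs it, and the star-product kernel is a trace of their products.

```latex
% Symbol of an operator A, its reconstruction, and the star-product kernel
f_A(x) = \operatorname{Tr}\bigl[\hat{A}\,\hat{U}(x)\bigr], \qquad
\hat{A} = \int f_A(x)\,\hat{D}(x)\,dx,
\\
(f_A \star f_B)(x) = \int f_A(x_1)\, f_B(x_2)\, K(x_1, x_2, x)\, dx_1\, dx_2,
\qquad
K(x_1, x_2, x) = \operatorname{Tr}\bigl[\hat{D}(x_1)\,\hat{D}(x_2)\,\hat{U}(x)\bigr].
```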
We introduce a general technique to create an extended formulation of a mixed-integer program. We classify the integer variables into blocks, each of which generates a finite set of vector values. The extended formulation is constructed by creating a new binary variable for each generated value. Initial experiments show that the extended formulation can have a more compact complete description than the original formulation. We prove that, using this reformulation technique, the facet description decomposes into one "linking polyhedron" per block and the "aggregated polyhedron". Each of these polyhedra can be analyzed separately. For the case of identical coefficients in a block, we provide a complete description of the linking polyhedron and a polynomial-time separation algorithm. Applied to the knapsack with a fixed number of distinct coefficients, this theorem provides a complete description in an extended space with a polynomial number of variables.
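The core enumeration step behind such a reformulation can be sketched in a few lines: list the finite set of values a bounded block can generate, then attach one binary variable per value. The function name and the small identical-coefficient block below are illustrative, not taken from the paper.

```python
from itertools import product

def block_values(coeffs, bounds):
    """Enumerate { sum_i c_i * x_i : l_i <= x_i <= u_i }, the finite
    set of values one block of bounded integer variables generates."""
    ranges = [range(lo, hi + 1) for lo, hi in bounds]
    return sorted({sum(c * x for c, x in zip(coeffs, point))
                   for point in product(*ranges)})

# Block with identical coefficients (c = 3) and two variables in {0, 1, 2}:
print(block_values([3, 3], [(0, 2), (0, 2)]))  # [0, 3, 6, 9, 12]
# The extended formulation adds one binary y_v per value v, linked by
# sum_v y_v = 1 and sum_v v * y_v = 3*x1 + 3*x2.
```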
A bosonic Laplacian is a conformally invariant second order differential operator acting on smooth functions defined on domains in Euclidean space and taking values in higher order irreducible representations of the special orthogonal group. In this paper, we study boundary value problems involving bosonic Laplacians in the upper-half space and the unit ball. Poisson kernels in the upper-half space and the unit ball are constructed, which give us solutions to the Dirichlet problems with $L^p$ boundary data, $1 \leq p \leq \infty$. We also prove the uniqueness for solutions to the Dirichlet problems with continuous data for bosonic Laplacians and provide analogs of some properties of harmonic functions for null solutions of bosonic Laplacians, for instance, Cauchy's estimates, the mean-value property, Liouville's theorem, etc.
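For comparison, the scalar prototype these higher-spin results generalize is the classical Dirichlet problem for the Laplacian in the upper half-space $\mathbb{R}^{n+1}_+$, whose Poisson kernel and solution formula read:

```latex
% Classical Poisson kernel for the upper half-space (scalar harmonic case)
P(x, t) = c_n\, \frac{t}{\bigl(|x|^2 + t^2\bigr)^{(n+1)/2}}, \qquad
c_n = \frac{\Gamma\!\bigl(\tfrac{n+1}{2}\bigr)}{\pi^{(n+1)/2}},
\\
u(x, t) = \int_{\mathbb{R}^n} P(x - \xi,\, t)\, f(\xi)\, d\xi, \qquad
\Delta u = 0 \ \text{in } \mathbb{R}^{n+1}_+, \quad u(\cdot, 0) = f .
```

The bosonic Poisson kernels constructed in the paper play the same role for null solutions of bosonic Laplacians.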
We study the Policy-extended Value Function Approximator (PeVFA) in Reinforcement Learning (RL), which extends the conventional value function approximator (VFA) to take as input not only the state (and action) but also an explicit policy representation. Such an extension enables PeVFA to preserve the values of multiple policies at the same time and brings an appealing characteristic, i.e., value generalization among policies. We formally analyze the value generalization under Generalized Policy Iteration (GPI). From both theoretical and empirical lenses, we show that the generalized value estimates offered by PeVFA may have lower initial approximation error with respect to the true values of successive policies, which is expected to improve consecutive value approximation during GPI. Based on these observations, we introduce a new form of GPI with PeVFA which leverages value generalization along the policy improvement path. Moreover, we propose a representation learning framework for RL policies, providing several approaches to learn effective policy embeddings from policy network parameters or state-action pairs. In our experiments, we evaluate the efficacy of the value generalization offered by PeVFA and of policy representation learning in several OpenAI Gym continuous control tasks. For a representative instance of algorithm implementation, Proximal Policy Optimization (PPO) re-implemented under the paradigm of GPI with PeVFA achieves about 40% performance improvement over its vanilla counterpart in most environments.
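A minimal sketch of the network shape follows, assuming PyTorch; the dimensions and the policy-embedding input are illustrative, and how the embedding is learned from policy parameters or state-action pairs is left out.

```python
import torch
import torch.nn as nn

class PeVFA(nn.Module):
    """Policy-extended value network V(s, chi(pi)): besides the state it
    takes an explicit policy embedding, so a single network can hold
    value estimates for many policies at once (illustrative sketch)."""
    def __init__(self, state_dim, policy_embed_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + policy_embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, policy_embedding):
        return self.net(torch.cat([state, policy_embedding], dim=-1))

# One forward pass with a random state and a random policy embedding:
v = PeVFA(state_dim=8, policy_embed_dim=16)
print(v(torch.randn(1, 8), torch.randn(1, 16)).shape)  # torch.Size([1, 1])
```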
We obtain an asymptotic representation formula for harmonic functions with respect to a linear anisotropic nonlocal operator. Furthermore, we derive a Bourgain-Brezis-Mironescu type limit formula for a related class of anisotropic nonlocal norms.
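For reference, the classical isotropic Bourgain-Brezis-Mironescu limit, of which the paper proves an anisotropic nonlocal analogue, reads:

```latex
% Classical Bourgain-Brezis-Mironescu limit (isotropic case)
\lim_{s \to 1^-} (1 - s) \int_{\mathbb{R}^n}\!\int_{\mathbb{R}^n}
\frac{|u(x) - u(y)|^p}{|x - y|^{\,n + sp}}\, dx\, dy
= K_{n,p} \int_{\mathbb{R}^n} |\nabla u(x)|^p\, dx,
```

with a constant $K_{n,p}$ depending only on $n$ and $p$.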
