Measurements on current quantum hardware are subject to hardware imperfections that lead to readout errors. These errors manifest themselves as a bias in quantum expectation values. Here, we propose a method to remove this bias from the expectation values of Pauli observables. No specific form of the noise is assumed, other than requiring that it is weak. We apply a method that forces the bias in the expectation value to appear as a multiplicative factor irrespective of the actual noise process. This factor can be measured directly and removed, at the cost of an increase in the sampling complexity for the observable. A bound relating the error in the expectation value to the sample complexity is derived.
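To illustrate the mechanism, here is a minimal numerical sketch with synthetic data and hypothetical helper names (not the paper's implementation): when the readout bias acts as a multiplicative factor, that factor can be estimated from a calibration state with known expectation value and divided out.

```python
import numpy as np

def mitigated_expectation(noisy_exp, noisy_exp_calib, ideal_exp_calib=1.0):
    """Divide out a multiplicative readout bias from a Pauli expectation value.

    noisy_exp        -- noisy estimate of <P> for the state of interest
    noisy_exp_calib  -- noisy estimate of <P> for a calibration state whose
                        ideal expectation value is ideal_exp_calib
    """
    f = noisy_exp_calib / ideal_exp_calib    # estimated attenuation factor
    return noisy_exp / f

# Toy illustration with synthetic shot data: true <P> = 0.6, bias factor 0.9.
rng = np.random.default_rng(0)
f_true, true_exp, shots = 0.9, 0.6, 100_000

def sample_expectation(mean):
    outcomes = rng.choice([+1, -1], size=shots, p=[(1 + mean) / 2, (1 - mean) / 2])
    return outcomes.mean()

noisy = sample_expectation(f_true * true_exp)   # biased estimate of <P>
calib = sample_expectation(f_true * 1.0)        # calibration state with ideal <P> = 1
print(mitigated_expectation(noisy, calib))      # close to 0.6, at extra sampling cost
```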
Ewout van den Berg, 2020
We describe a simple algorithm for sampling $n$-qubit Clifford operators uniformly at random. The algorithm outputs the Clifford operators in the form of quantum circuits with at most $5n + 2n^2$ elementary gates and a maximum depth of $\mathcal{O}(n \log n)$ on fully connected topologies. The circuit can be output in a streaming fashion as the algorithm proceeds, and different parts of the circuit can be generated in parallel. The algorithm has an $\mathcal{O}(n^2)$ time complexity, which matches the current state of the art. The main advantage of the proposed algorithm, however, lies in its simplicity and elementary derivation.
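As a usage sketch (not the paper's own code), Qiskit's quantum_info module provides an off-the-shelf uniform Clifford sampler, which makes it easy to inspect the gate count and depth of the synthesized circuits:

```python
# Hedged usage example: draw a uniformly random Clifford and synthesize a circuit.
from qiskit.quantum_info import random_clifford

n = 5
cliff = random_clifford(n, seed=42)   # uniformly random n-qubit Clifford operator
circ = cliff.to_circuit()             # synthesize an elementary-gate circuit
print(circ.count_ops(), "depth:", circ.depth())
```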
Ewout van den Berg, 2020
We describe an efficient implementation of Bayesian quantum phase estimation in the presence of noise and multiple eigenstates. The main contribution of this work is the dynamic switching between different representations of the phase distributions, namely truncated Fourier series and normal distributions. The Fourier-series representation has the advantage of being exact in many cases, but suffers from increasing complexity with each update of the prior. This necessitates truncation of the series, which eventually causes the distribution to become unstable. We derive bounds on the error in representing normal distributions with a truncated Fourier series, and use these to decide when to switch to the normal-distribution representation. This representation is much simpler, and was proposed in conjunction with rejection filtering for approximate Bayesian updates. We show that, in many cases, the update can be done exactly using analytic expressions, thereby greatly reducing the time complexity of the updates. Finally, when dealing with a superposition of several eigenstates, we need to estimate the relative weights. This can be formulated as a convex optimization problem, which we solve using a gradient-projection algorithm. By updating the weights at exponentially scaled iterations we greatly reduce the computational complexity without affecting the overall accuracy.
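The Fourier-series update can be written down explicitly. The sketch below assumes the common iterative-phase-estimation likelihood $P(\text{outcome}\mid\varphi) = \tfrac{1}{2}(1 + s\cos(t\varphi + \beta))$ with $s = \pm 1$; multiplying a truncated Fourier series by this likelihood only couples coefficients $k$ and $k \pm t$, which is why the update is exact apart from the growth (and eventual truncation) of the series. This is a minimal illustration, not the full method with normal-distribution switching and eigenstate weights.

```python
import numpy as np

def fourier_update(c, beta, outcome, t=1):
    """One Bayesian update of a phase distribution stored as a truncated
    Fourier series p(phi) = sum_k c[K + k] exp(i k phi), with k = -K..K.

    Assumes the likelihood P(outcome | phi) = (1 + s cos(t phi + beta)) / 2,
    where s = +1 for outcome 0 and s = -1 for outcome 1."""
    s = 1.0 if outcome == 0 else -1.0
    new = np.zeros(len(c) + 2 * t, dtype=complex)      # series grows by t per side
    new[t:-t]    += 0.5 * c                            # (1/2) * p(phi)
    new[2 * t:]  += 0.25 * s * np.exp(1j * beta) * c   # shift k -> k + t
    new[:-2 * t] += 0.25 * s * np.exp(-1j * beta) * c  # shift k -> k - t
    mid = len(new) // 2                                # index of k = 0
    return new / (2 * np.pi * new[mid].real)           # normalize: c_0 = 1/(2 pi)

# Start from the uniform prior p(phi) = 1/(2 pi) and apply two updates.
c = np.array([1 / (2 * np.pi)], dtype=complex)
c = fourier_update(c, beta=0.3, outcome=0)
c = fourier_update(c, beta=1.1, outcome=1, t=2)
```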
Many applications of practical interest rely on time evolution of Hamiltonians that are given by a sum of Pauli operators. Quantum circuits for exact time evolution of single Pauli operators are well known, and can be extended trivially to sums of commuting Paulis by concatenating the circuits of individual terms. In this paper we reduce the circuit complexity of Hamiltonian simulation by partitioning the Pauli operators into mutually commuting clusters and exponentiating the elements within each cluster after applying simultaneous diagonalization. We provide a practical algorithm for partitioning sets of Paulis into commuting subsets, and show that the proposed approach can help to significantly reduce both the number of CNOT operations and circuit depth for Hamiltonians arising in quantum chemistry. The algorithms for simultaneous diagonalization are also applicable in the context of stabilizer states; in particular we provide novel four- and five-stage representations, each containing only a single stage of conditional gates.
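A simple greedy baseline for the partitioning step (the paper develops a more refined algorithm) can be sketched as follows, using the fact that two Pauli strings commute iff they differ in an even number of positions where neither factor is the identity:

```python
def commutes(p, q):
    """Two Pauli strings commute iff the number of positions where both
    factors are non-identity and different is even."""
    clashes = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 0

def greedy_commuting_partition(paulis):
    """First-fit greedy partition: place each Pauli string in the first
    cluster whose members all commute with it (a simple baseline only)."""
    clusters = []
    for p in paulis:
        for cluster in clusters:
            if all(commutes(p, q) for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

terms = ['XXI', 'YYI', 'ZZI', 'IXX', 'IYY', 'IZZ', 'ZIZ']
print(greedy_commuting_partition(terms))
```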
In this work we study the structure and cardinality of maximal sets of commuting and anticommuting Paulis in the setting of the abelian Pauli group. We provide necessary and sufficient conditions for anticommuting sets to be maximal, and present an efficient algorithm for generating anticommuting sets of maximum size. As a theoretical tool, we introduce commutativity maps, and study properties of maps associated with elements in the cosets with respect to anticommuting minimal generating sets. We also derive expressions for the number of distinct sets of commuting and anticommuting abelian Paulis of a given size.
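For reference, a pairwise-anticommuting set of the maximum size $2n + 1$ on $n$ qubits can be written down with a standard construction; the sketch below is an illustration of such a set, not the generation algorithm described above:

```python
def anticommutes(p, q):
    """Pauli strings anticommute iff they differ in an odd number of
    positions where neither factor is the identity."""
    clashes = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 1

def anticommuting_set(n):
    """Standard construction of a pairwise-anticommuting set of size 2n + 1:
    Z...Z X_k and Z...Z Y_k for each qubit k, plus the all-Z string."""
    paulis = []
    for k in range(n):
        prefix, suffix = 'Z' * k, 'I' * (n - k - 1)
        paulis.append(prefix + 'X' + suffix)
        paulis.append(prefix + 'Y' + suffix)
    paulis.append('Z' * n)
    return paulis

S = anticommuting_set(3)   # 7 strings on 3 qubits
assert all(anticommutes(p, q) for i, p in enumerate(S) for q in S[i + 1:])
```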
Ewout van den Berg, 2019
In this work we consider practical implementations of Kitaev's algorithm for quantum phase estimation. We analyze the use of phase shifts that simplify the estimation of successive bits in the estimation of the unknown phase $\varphi$. By using increasingly accurate shifts we reduce the number of measurements to the point where only a single measurement is needed for each additional bit. This results in an algorithm that can estimate $\varphi$ to an accuracy of $2^{-(m+2)}$ with probability at least $1-\epsilon$ using $N_{\epsilon} + m$ measurements, where $N_{\epsilon}$ is a constant that depends only on $\epsilon$ and the particular sampling algorithm. We present different sampling algorithms and study the exact number of measurements needed through careful numerical evaluation, and provide theoretical bounds and numerical values for $N_{\epsilon}$.
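A toy classical simulation of the iterative scheme (textbook iterative phase estimation with feedback shifts, not the exact sampling strategies analyzed in the paper) illustrates how each feedback-corrected measurement resolves one additional bit of $\varphi$:

```python
import numpy as np

def iterative_phase_estimation(phi, m, shots_per_bit=1, rng=None):
    """Toy simulation: estimate the bits of phi = 0.b1 b2 ... bm (binary)
    from the least significant bit upward.  With perfect feedback shifts each
    new bit needs only a majority vote over `shots_per_bit` measurements."""
    rng = rng or np.random.default_rng()
    bits = []                                          # bits b_{k+1} ... b_m so far
    for k in range(m, 0, -1):
        # phase from U^(2^(k-1)), minus the feedback built from known bits
        feedback = sum(b / 2 ** (j + 2) for j, b in enumerate(bits))
        theta = 2 * np.pi * ((2 ** (k - 1)) * phi - feedback) % (2 * np.pi)
        p1 = np.sin(theta / 2) ** 2                    # probability of outcome 1
        ones = rng.binomial(shots_per_bit, p1)
        bits.insert(0, int(2 * ones >= shots_per_bit)) # majority vote
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

print(iterative_phase_estimation(phi=0.8125, m=4))     # 0.1101 in binary -> 0.8125
```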
Ewout van den Berg, 2018
Matrix and tensor operations form the basis of a wide range of fields and applications, and in many cases constitute a substantial part of the overall computational complexity. The ability of general-purpose GPUs to speed up many of these operations and enable others has resulted in a widespread adoption of these devices. In order for tensor operations to take full advantage of the computational power, specialized software is required, and currently there exist several packages (predominantly in the area of deep learning) that incorporate tensor operations on both CPU and GPU. Nevertheless, a stand-alone framework that supports general tensor operations is still missing. In this paper we fill this gap and propose the Ocean Tensor Library: a modular tensor-support package that is designed to serve as a foundational layer for applications that require dense tensor operations on a variety of device types. The API is carefully designed to be powerful, extensible, and at the same time easy to use. The package is available as open source.
We study the flow of information and the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory. The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information $I(X;T)$ between the input $X$ and internal representations $T$ decreases. Several papers observe compression of estimated mutual information on different DNN models, but the true $I(X;T)$ over these networks is provably either constant (discrete $X$) or infinite (continuous $X$). This work explains the discrepancy between theory and experiments, and clarifies what was actually measured by these past works. To this end, we introduce an auxiliary (noisy) DNN framework for which $I(X;T)$ is a meaningful quantity that depends on the network's parameters. This noisy framework is shown to be a good proxy for the original (deterministic) DNN both in terms of performance and the learned representations. We then develop a rigorous estimator for $I(X;T)$ in noisy DNNs and observe compression in various models. By relating $I(X;T)$ in the noisy DNN to an information-theoretic communication problem, we show that compression is driven by the progressive clustering of hidden representations of inputs from the same class. Several methods to directly monitor clustering of hidden representations, both in noisy and deterministic DNNs, are used to show that meaningful clusters form in the $T$ space. Finally, we return to the estimator of $I(X;T)$ employed in past works, and demonstrate that while it fails to capture the true (vacuous) mutual information, it does serve as a measure for clustering. This clarifies the past observations of compression and isolates the geometric clustering of hidden representations as the true phenomenon of interest.
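A minimal sketch of the noisy-network idea (hypothetical layer sizes and activation): injecting isotropic Gaussian noise into a hidden representation $T$ makes $I(X;T)$ finite and dependent on the weights, unlike in the deterministic network.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_layer(x, W, b, sigma=0.1):
    """Toy 'noisy DNN' layer: the internal representation is
    T = tanh(W x + b) + Z with isotropic Gaussian noise Z of scale sigma,
    so the channel X -> T has finite, parameter-dependent mutual information."""
    t = np.tanh(x @ W.T + b)
    return t + sigma * rng.standard_normal(t.shape)

x = rng.standard_normal((32, 8))            # a batch of 32 inputs with 8 features
W = rng.standard_normal((4, 8)); b = np.zeros(4)
T = noisy_layer(x, W, b)                    # noisy hidden representation, shape (32, 4)
```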
Ewout van den Berg, 2016
We propose a new algorithm for the optimization of convex functions over a polyhedral set in $\mathbb{R}^n$. The algorithm extends the spectral projected-gradient method with limited-memory BFGS iterates restricted to the present face whenever possible. We prove convergence of the algorithm under suitable conditions and apply the algorithm to solve the Lasso problem, and consequently, the basis-pursuit denoise problem through the root-finding framework proposed by van den Berg and Friedlander [SIAM Journal on Scientific Computing, 31(2), 2008]. The algorithm is especially well suited to simple domains and could also be used to solve bound-constrained problems as well as problems restricted to the simplex.
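For illustration, a bare-bones spectral projected-gradient loop for the Lasso formulation $\min \|Ax - b\|_2^2$ subject to $\|x\|_1 \le \tau$ might look as follows (Barzilai-Borwein step lengths, no line search, and none of the limited-memory BFGS refinements from the paper):

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection of v onto the l1 ball of radius tau
    (standard sort-based algorithm)."""
    u = np.abs(v)
    if u.sum() <= tau:
        return v
    s = np.sort(u)[::-1]
    cssv = np.cumsum(s)
    rho = np.nonzero(s * np.arange(1, len(s) + 1) > (cssv - tau))[0][-1]
    theta = (cssv[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(u - theta, 0.0)

def spg_lasso(A, b, tau, iters=200):
    """Bare-bones spectral projected gradient for min ||Ax-b||^2, ||x||_1 <= tau."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    alpha = 1.0
    for _ in range(iters):
        x_new = project_l1(x - alpha * g, tau)
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0   # BB step length
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = spg_lasso(A, b, tau=np.abs(x_true).sum())
```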
In this work we study variance in the results of neural network training on a wide variety of configurations in automatic speech recognition. Although this variance itself is well known, this is, to the best of our knowledge, the first paper that performs an extensive empirical study on its effects in speech recognition. We view training as sampling from a distribution and show that these distributions can have a substantial variance. These results show the urgent need to rethink the way in which results in the literature are reported and interpreted.