This paper presents a systematic method to decompose uncertain linear quantum input-output networks into uncertain and nominal subnetworks when the uncertainties are defined in the SLH representation. To this end, two decomposition theorems are stated, which show how an uncertain quantum network can be decomposed into nominal and uncertain subnetworks in a cascaded connection, and how uncertainties can be translated from SLH parameters into state-space parameters. As a potential application of the proposed decomposition scheme, robust stability analysis of uncertain quantum networks is briefly introduced. The proposed uncertainty decomposition theorems account for uncertainties in all three parameters of a quantum network and bridge the gap between SLH modeling and state-space robust analysis theory for linear quantum networks.
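As background for the cascaded connections referred to above (a minimal sketch, not reproduced from the paper itself): in the SLH formalism a linear quantum network is described by a triple $G = (S, L, H)$, and the series (cascade) composition of two components is given by the standard Gough-James series product,
$$
G_2 \triangleleft G_1 \;=\; \Bigl( S_2 S_1,\;\; L_2 + S_2 L_1,\;\; H_1 + H_2 + \tfrac{1}{2i}\bigl(L_2^\dagger S_2 L_1 - L_1^\dagger S_2^\dagger L_2\bigr) \Bigr),
$$
which is the operation with respect to which nominal and uncertain subnetworks are composed in a cascaded decomposition of this kind.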
We introduce two measures, Bell discord (BD) and Mermin discord (MD), to characterize bipartite quantum correlations in the context of nonsignaling (NS) polytopes. These measures divide the full NS polytope into four regions depending on whether BD and/or MD is zero. This division of the NS polytope yields a 3-decomposition: any bipartite box with two binary inputs and two binary outputs can be decomposed into a Popescu-Rohrlich (PR) box, a maximally local box, and a local box with BD and MD equal to zero. BD and MD quantify two types of nonclassicality of the correlations arising from all quantum correlated states that are neither classical-quantum nor quantum-classical states. BD and MD serve as semi-device-independent witnesses of the nonclassicality of local boxes, in that a nonzero value of either measure implies incompatible measurements and nonzero quantum discord only when the dimension of the measured states is fixed. The 3-decomposition allows us to isolate the origins of the two types of nonclassicality into a PR box and a maximally local box related to EPR steering, respectively. We consider a quantum polytope that overlaps all four regions of the full NS polytope in order to identify the constraints on quantum correlations.
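As a schematic illustration of the 3-decomposition described above (the weights and box labels here are chosen for illustration; only the PR-box definition is standard), any such box $P(ab|xy)$ with $a,b,x,y \in \{0,1\}$ is written as a convex mixture
$$
P(ab|xy) \;=\; p_1\, P_{PR}(ab|xy) \;+\; p_2\, P_{L}^{\max}(ab|xy) \;+\; (1 - p_1 - p_2)\, P_{L}(ab|xy),
\qquad
P_{PR}(ab|xy) \;=\; \tfrac{1}{2}\,\delta_{a \oplus b,\, xy},
$$
where $P_{L}^{\max}$ denotes a maximally local box and $P_{L}$ a local box with vanishing BD and MD.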
Recently, quantum neural networks and quantum-classical neural networks (QCNNs) have been actively studied as possible alternatives to the conventional classical neural network (CNN), but their practical and theoretically guaranteed performance remains to be investigated. CNNs, and especially deep CNNs, on the other hand, rest on several solid theoretical foundations; one significant foundation is the neural tangent kernel (NTK) theory, which successfully explains the mechanism behind various desirable properties of CNNs, e.g., global convergence and good generalization. In this paper, we study a class of QCNN to which NTK theory can be directly applied. The output of the proposed QCNN is a function of the projected quantum kernel in the limit of a large number of nodes in the CNN part; hence this scheme may have a potential quantum advantage. Also, because the parameters are tuned only around initial random values drawn from a unitary 2-design and Gaussian distributions, the proposed QCNN can be cast as a scheme that realizes the quantum kernel method with lower computational complexity. Moreover, the NTK is identical to the covariance matrix of a Gaussian process, which allows us to study the learning process analytically and, as a consequence, to obtain a condition on the dataset under which the QCNN may perform better than its classical counterpart. All of these properties are observed in thorough numerical experiments.
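For reference, the standard NTK quantities that this analysis relies on (stated here in their generic form, under the usual squared-loss, gradient-flow assumptions, and up to the contribution of the network output at initialization) are
$$
\Theta(x, x') \;=\; \nabla_\theta f_\theta(x)^{\top}\, \nabla_\theta f_\theta(x'),
\qquad
\bar{f}(x_*) \;=\; \Theta(x_*, X)\,\Theta(X, X)^{-1}\, y,
$$
where $f_\theta$ is the network output, $(X, y)$ the training data, and $x_*$ a test input; in the infinite-width limit the kernel $\Theta$ becomes deterministic and stays fixed during training, so learning reduces to this kernel regression.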
Despite the pursuit of quantum advantages in various applications, the power of quantum computers in neural network computations has mostly remained unknown, primarily due to a missing link that effectively designs a neural network model suitable for quantum circuit implementation. In this article, we present a co-design framework, namely QuantumFlow, to provide such a missing link. QuantumFlow consists of novel quantum-friendly neural networks (QF-Nets), a mapping tool (QF-Map) that generates the quantum circuit (QF-Circ) for QF-Nets, and an execution engine (QF-FB). We discover that, in order to make full use of the strength of quantum representation, it is best to represent data in a neural network as either random variables or numbers in unitary matrices, such that they can be directly operated on by the basic quantum logic gates. Based on these data representations, we propose two quantum-friendly neural networks in QuantumFlow, QF-pNet and QF-hNet. QF-pNet, using random variables, has better flexibility and can seamlessly connect two layers without measurement, albeit with more qubits and logic gates than QF-hNet. QF-hNet, on the other hand, uses unitary matrices to encode 2^k data items into k qubits, and a novel algorithm guarantees a cost complexity of O(k^2). Compared with the O(2^k) cost in classical computing, QF-hNet demonstrates a quantum advantage. Evaluation results show that QF-pNet and QF-hNet achieve 97.10% and 98.27% accuracy, respectively. Results further show that as the input size of the neural computation grows from 16 to 2,048, the cost reduction of QuantumFlow increases from 2.4x to 64x. Furthermore, on the MNIST dataset, QF-hNet achieves an accuracy of 94.09%, while the cost reduction against the classical computer reaches 10.85x. To the best of our knowledge, QuantumFlow is the first work to demonstrate a potential quantum advantage in neural network computation.
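The "2^k data items into k qubits" claim is consistent with an amplitude-style encoding; as a generic sketch of that idea (stated as an assumption here, since QF-hNet's concrete construction is defined in the paper itself), a length-$2^k$ input vector $x$ is stored as
$$
|\psi_x\rangle \;=\; \frac{1}{\|x\|_2} \sum_{i=0}^{2^k - 1} x_i\, |i\rangle ,
$$
so the data occupy the amplitudes of only $k$ qubits, and subsequent neural operations act on this state through unitary gates.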
The polar decomposition for a matrix $A$ is $A=UB$, where $B$ is a positive Hermitian matrix and $U$ is unitary (or, if $A$ is not square, an isometry). This paper shows that the ability to apply a Hamiltonian $\begin{pmatrix} 0 & A^\dagger \\ A & 0 \end{pmatrix}$ translates into the ability to perform the transformations $e^{-iBt}$ and $U$ in a deterministic fashion. We show how to use the quantum polar decomposition algorithm to solve the quantum Procrustes problem, to perform pretty good measurements, to find the positive Hamiltonian closest to any Hamiltonian, and to perform a Hamiltonian version of the quantum singular value transformation.
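As a classical point of reference (not the quantum algorithm itself), the polar decomposition of a square $A$ follows directly from its singular value decomposition:
$$
A = W \Sigma V^\dagger \;\;\Longrightarrow\;\; A = UB \quad\text{with}\quad U = W V^\dagger, \qquad B = V \Sigma V^\dagger = \sqrt{A^\dagger A},
$$
and the block Hamiltonian $\begin{pmatrix} 0 & A^\dagger \\ A & 0 \end{pmatrix}$ used above has eigenvalues $\pm\sigma_i$ given by the singular values of $A$, which is what ties Hamiltonian simulation of this block matrix to the polar factors.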
We discuss some applications of vario