Quantum state tomography is both a crucial component of quantum information and computation and a formidable task, requiring a prohibitively large number of measurement configurations as the system dimension grows. We propose and experimentally carry out an intuitive adaptive compressive tomography scheme, inspired by the traditional compressed-sensing protocol in signal recovery, that drastically reduces the number of configurations needed to uniquely reconstruct any given quantum state without any additional a priori assumption whatsoever (such as rank, purity, etc.) about the state, apart from its dimension.
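The counting behind "number of measurement configurations" can be made concrete with a small numerical check. The sketch below is plain NumPy, not the authors' code; the dimension d = 4 and the Haar-random von Neumann bases are illustrative assumptions. It stacks the Born-rule functionals of successive bases and tracks the rank of the resulting linear map: once the rank reaches d², the data single out the state among all density matrices, whereas a rank-deficient state can already be the unique consistent state earlier, which is the regime a compressive scheme exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # Hilbert-space dimension (illustrative two-qubit example)

def haar_basis(d):
    """Haar-random orthonormal basis (as columns) via QR of a Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def basis_rows(b):
    """One row vec(|b_k><b_k|)* per outcome: the map rho -> Born probabilities."""
    return np.stack([np.outer(b[:, k], b[:, k].conj()).conj().ravel()
                     for k in range(d)])

# Accumulate bases and watch the rank of the measurement map grow;
# each new basis adds d - 1 independent rows (its outcomes sum to one),
# so d + 1 = 5 generic bases reach the full rank d**2 = 16.
A = np.empty((0, d * d), dtype=complex)
for m in range(1, d + 2):
    A = np.vstack([A, basis_rows(haar_basis(d))])
    print(m, np.linalg.matrix_rank(A))
```

The adaptive scheme in the abstract differs from this baseline precisely in how the next basis is chosen, but the rank bookkeeping above is what "informationally complete" means in either case.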
We perform several numerical studies for our recently published adaptive compressive tomography scheme [D. Ahn et al., Phys. Rev. Lett. 122, 100404 (2019)], which significantly reduces the number of measurement settings needed to unambiguously reconstruct any rank-deficient state without any a priori knowledge besides its dimension. We show that both entangled and product bases chosen by our adaptive scheme perform comparably to recently known compressed-sensing element-probing measurements, and also beat random measurement bases for low-rank quantum states. We also numerically conjecture asymptotic scaling behaviors for this number of settings as a function of the state rank for our adaptive schemes. These scaling formulas appear to be independent of the Hilbert-space dimension. As a natural development, we establish a faster hybrid compressive scheme that first chooses random bases and later adaptive bases as the scheme progresses. As an epilogue, we reiterate important elements of informational completeness for our adaptive scheme.
This review serves as a concise introductory survey of modern compressive tomography developed since 2019. These are schemes meant for characterizing arbitrary low-rank quantum objects, be it an unknown state, a process, or a detector, using minimal measuring resources (hence compressive) without any a priori assumptions (rank, sparsity, eigenbasis, etc.) about the quantum object. This article contains enough technical detail for the quantum-information community to start applying the methods discussed here. To facilitate understanding of the formulation logic and physics of compressive tomography, the theoretical concepts and important numerical results (both new and cross-referenced) are presented in a pedagogical manner.
A method for including a priori information in the 2-D D-bar algorithm is presented. Two methods of assigning conductivity values to the prior are presented, each corresponding to a different application scenario. The method is tested on several numerical examples with and without noise and is demonstrated to be highly effective in improving the spatial resolution of the D-bar method.
We present a compressive quantum process tomography scheme that fully characterizes any rank-deficient completely positive process with no a priori information about the process apart from the dimension of the system on which it acts. It uses randomly chosen input states and adaptive output von Neumann measurements. Both entangled and tensor-product configurations are flexibly employable in our scheme, the latter of which naturally makes it especially compatible with many-body quantum computing. Two main features of this scheme are a certification protocol that verifies whether the accumulated data uniquely characterize the quantum process, and a compressive reconstruction method for the output states. We emulate multipartite scenarios with high-order electromagnetic transverse modes and optical fibers to demonstrate that, in terms of measurement resources, our assumption-free compressive strategy can reconstruct quantum processes almost equally efficiently using all types of input states and basis measurement operations, independent of whether or not they are factorizable into tensor products.
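The linear-algebraic core of probing a process with input states and output basis measurements can be illustrated in a few lines. The sketch below is plain NumPy, not the experimental pipeline of the paper; the qubit dimension and the probe counts are illustrative assumptions. Each pair of input state ρ_in and output projector P contributes one linear functional Tr[(ρ_inᵀ ⊗ P) J] on the Choi matrix J of the process; once the stacked functionals reach rank d⁴, the data fix the process uniquely among all linear maps, and a rank-deficient process can be pinned down with fewer, which is where the compressive savings come from.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2  # single-qubit channel (illustrative)

def haar_ket(d):
    """Haar-random pure input state."""
    z = rng.normal(size=d) + 1j * rng.normal(size=d)
    return z / np.linalg.norm(z)

def haar_basis(d):
    """Haar-random output measurement basis (columns)."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return np.linalg.qr(z)[0]

# Each (input state, output projector) pair yields one linear functional on the
# Choi matrix J of the process:  p = Tr[(rho_in^T kron |b_k><b_k|) J].
rows = []
for _ in range(d * d):            # d**2 linearly independent random inputs
    psi = haar_ket(d)
    rho_T = np.outer(psi, psi.conj()).T
    for _ in range(d + 1):        # d + 1 output bases: IC on the output alone
        b = haar_basis(d)
        for k in range(d):
            P = np.outer(b[:, k], b[:, k].conj())
            rows.append(np.kron(rho_T, P).conj().ravel())
A = np.stack(rows)
print(np.linalg.matrix_rank(A))   # reaches d**4 = 16: the process is fixed
```

The adaptive output measurements of the scheme replace the random bases here; the certification step amounts to asking whether the accumulated functionals already determine J uniquely over the positive cone, rather than over all matrices as in this unconstrained rank count.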
Quantum state tomography is the task of determining an unknown quantum state by making measurements on identical copies of the state. Current algorithms are costly both on the experimental front -- requiring vast numbers of measurements -- and in terms of the computational time needed to analyze those measurements. In this paper, we address the problem of analysis speed and flexibility, introducing Neural Adaptive Quantum State Tomography (NA-QST), a machine-learning-based algorithm for quantum state tomography that adapts measurements and provides orders-of-magnitude faster processing while retaining state-of-the-art reconstruction accuracy. Our algorithm is inspired by particle swarm optimization and Bayesian particle-filter-based adaptive methods, which we extend and enhance using neural networks. The resampling step, in which a bank of candidate solutions -- particles -- is refined, is in our case learned directly from data, removing the computational bottleneck of standard methods. We successfully replace the Bayesian calculation, which requires computational time of $O(\mathrm{poly}(n))$, with a learned heuristic whose time complexity empirically scales as $O(\log n)$ with the number of copies measured $n$, while retaining the same reconstruction accuracy. This corresponds to a factor-of-a-million speedup for $10^7$ copies measured. We demonstrate that our algorithm learns to work with basis, symmetric informationally complete (SIC), and other types of POVMs. We discuss the value of measurement adaptivity for each POVM type, demonstrating that its effect is significant only for basis POVMs. Our algorithm can be retrained within hours on a single laptop for a two-qubit situation, which suggests a feasible time cost when extended to larger systems. It can also adapt to a subset of possible states, a choice of the type of measurement, and other experimental details.
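For readers unfamiliar with the particle-filter machinery whose resampling step NA-QST learns from data, the following is a minimal classical baseline, not the paper's algorithm: plain NumPy, no neural network, and the true state, measurement schedule, particle count, and jitter scale are all illustrative assumptions. Candidate Bloch vectors (particles) are weighted by the Born-rule likelihood of each simulated outcome and then resampled with jitter; this resampling is exactly the bottleneck the paper replaces with a learned heuristic.

```python
import numpy as np

rng = np.random.default_rng(7)
r_true = np.array([0.6, -0.3, 0.5])  # hypothetical true qubit Bloch vector

# Particle bank: candidate Bloch vectors drawn uniformly from the Bloch ball
n = 1000
particles = rng.normal(size=(n, 3))
particles *= (rng.uniform(size=n) ** (1 / 3)
              / np.linalg.norm(particles, axis=1))[:, None]

axes = np.eye(3)  # measure along the x, y, z Pauli axes (non-adaptive baseline)
for _ in range(150):
    axis = axes[rng.integers(3)]
    # simulate one projective shot on the true state (Born rule)
    click = rng.uniform() < 0.5 * (1.0 + r_true @ axis)
    # weight each particle by the likelihood of the observed outcome
    p_up = 0.5 * (1.0 + particles @ axis)
    w = np.clip(p_up if click else 1.0 - p_up, 1e-12, None)
    w /= w.sum()
    # resampling step (the part NA-QST learns): duplicate likely particles,
    # then jitter and project any escapee back into the Bloch ball
    particles = particles[rng.choice(n, size=n, p=w)]
    particles += rng.normal(scale=0.03, size=(n, 3))
    norms = np.linalg.norm(particles, axis=1)
    over = norms > 1.0
    particles[over] /= norms[over][:, None]

r_est = particles.mean(axis=0)  # posterior-mean estimate of the Bloch vector
print(np.linalg.norm(r_est - r_true))
```

The per-step cost here is dominated by the likelihood update and resampling over the whole particle bank, which is what makes the classical approach scale poorly and motivates replacing it with a learned, constant-size update.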