Data-driven inference was recently introduced as a protocol that, upon the input of a set of data, outputs a mathematical description of a physical device able to explain the data. The device so inferred is automatically self-consistent, that is, capable of generating all given data, and least committal, that is, consistent with a minimal superset of the given dataset. When applied to the inference of an unknown device, data-driven inference has been shown to always output the true device whenever the dataset has been produced by means of an observationally complete setup, which plays the same role here that informationally complete setups play in conventional quantum tomography. In this paper we develop a unified formalism for the data-driven inference of states and measurements. In the case of qubits, in particular, we provide an explicit implementation of the inference protocol as a convex programming algorithm for the machine learning of states and measurements. We also derive a complete characterization of observational completeness for general systems, from which it follows that only spherical 2-designs achieve observational completeness for qubit systems. This result provides symmetric informationally complete sets and mutually unbiased bases with a new theoretical and operational justification.
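The connection between observational completeness and 2-designs stated above can be illustrated numerically through the standard frame-potential criterion: a weighted set of pure states is a (complex projective) 2-design exactly when its second frame potential attains the minimum 2/(d(d+1)), which equals 1/3 for qubits. The sketch below is illustrative only and is not the convex programming algorithm of the paper; it assumes numpy, and the function names and the particular SIC and MUB sets are chosen for the example.

import numpy as np

def frame_potential(states, weights, t=2):
    # Weighted t-th frame potential: sum_{i,j} w_i w_j |<psi_i|psi_j>|^(2t).
    overlaps = np.abs(np.conj(states) @ states.T) ** (2 * t)
    return float(weights @ overlaps @ weights)

def is_projective_2design(states, weights, tol=1e-9):
    # A weighted set of pure states is a 2-design iff the frame potential
    # attains its minimum value 2 / (d * (d + 1)).
    d = states.shape[1]
    return abs(frame_potential(states, weights) - 2 / (d * (d + 1))) < tol

# Qubit SIC set: four states whose Bloch vectors form a regular tetrahedron.
w = np.exp(2j * np.pi / 3)
sic = np.array([[1, 0],
                [np.sqrt(1/3), np.sqrt(2/3)],
                [np.sqrt(1/3), np.sqrt(2/3) * w],
                [np.sqrt(1/3), np.sqrt(2/3) * w**2]], dtype=complex)

# Qubit MUBs: the six eigenstates of the three Pauli operators.
s = 1 / np.sqrt(2)
mub = np.array([[1, 0], [0, 1],             # Z basis
                [s, s], [s, -s],            # X basis
                [s, 1j * s], [s, -1j * s]], # Y basis
               dtype=complex)

print(is_projective_2design(sic, np.full(4, 1/4)))      # True
print(is_projective_2design(mub, np.full(6, 1/6)))      # True
print(is_projective_2design(mub[:2], np.full(2, 1/2)))  # False: one basis alone

Both the SIC set and the full MUB set reach the qubit 2-design bound of 1/3, while a single basis does not, consistent with the characterization quoted above.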
The range of a quantum measurement is the set of outcome probability distributions that can be produced by varying the input state. We introduce data-driven inference as a protocol that, given a set of experimental data as a collection of outcome distributions [...]
Given a physical device as a black box, one can in principle fully reconstruct its input-output transfer function by repeatedly feeding different input probes through the device and performing different measurements on the corresponding outputs. However, [...]
Knowledge about data completeness is essential in data-supported decision making. In this thesis we present a framework for metadata-based assessment of database completeness. We discuss how to express information about data completeness and how to [...]
We train convolutional neural networks to predict whether or not a set of measurements is informationally complete to uniquely reconstruct any given quantum state with no prior information. In addition, we perform fidelity benchmarking based on this [...]
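Whether a set of measurement operators is informationally complete can also be decided exactly rather than predicted: the operators must span the full d^2-dimensional space of Hermitian matrices. The rank check below is a minimal sketch assuming numpy; it is not the neural-network classifier of the work summarized above, and the helper name and example POVMs are illustrative.

import numpy as np

def is_informationally_complete(effects, tol=1e-9):
    # Informational completeness <=> the vectorized measurement operators
    # have rank d**2, i.e. they span the space of Hermitian d x d matrices.
    d = effects[0].shape[0]
    M = np.array([E.reshape(-1) for E in effects])
    return np.linalg.matrix_rank(M, tol=tol) == d * d

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Qubit SIC-POVM: effects (I + n_k . sigma / sqrt(3)) / 4 for tetrahedron n_k.
tetra = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
sic_povm = [(np.eye(2) + (nx * X + ny * Y + nz * Z) / np.sqrt(3)) / 4
            for nx, ny, nz in tetra]
print(is_informationally_complete(sic_povm))   # True

# A projective Z measurement alone is not informationally complete.
z_meas = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]
print(is_informationally_complete(z_meas))     # False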
Maximum-likelihood estimation is applied to the identification of an unknown quantum mechanical process represented by a "black box". In contrast to linear reconstruction schemes, the proposed approach always yields physically sensible results. Its feasibility [...]
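For concreteness, here is a minimal sketch of the analogous maximum-likelihood reconstruction of a qubit state rather than a process (the process case works on a Choi matrix with an additional trace-preservation constraint). It uses the standard iterative R-rho-R update, which keeps the iterate a valid density matrix; the six-outcome Pauli POVM, the sample size, and the function name are illustrative assumptions, not details of the work summarized above. Assumes numpy.

import numpy as np

def rrr_mle(effects, freqs, iters=1000):
    # Iterative R*rho*R maximum-likelihood reconstruction.  Unlike linear
    # inversion, the iterate remains a positive, unit-trace state throughout.
    d = effects[0].shape[0]
    rho = np.eye(d, dtype=complex) / d          # maximally mixed starting point
    for _ in range(iters):
        probs = [np.trace(rho @ E).real for E in effects]
        R = sum((f / p) * E for f, p, E in zip(freqs, probs, effects))
        rho = R @ rho @ R
        rho /= np.trace(rho).real
    return rho

# Six-outcome Pauli POVM: eigenprojectors of X, Y, Z, each weighted by 1/3.
kets = [np.array(v, dtype=complex) / np.linalg.norm(v)
        for v in ([1, 0], [0, 1], [1, 1], [1, -1], [1, 1j], [1, -1j])]
povm = [np.outer(k, k.conj()) / 3 for k in kets]

# Simulate finite data from a known qubit state, then reconstruct it.
rng = np.random.default_rng(0)
psi = np.array([np.cos(0.3), np.exp(0.4j) * np.sin(0.3)])
true_rho = np.outer(psi, psi.conj())
p = np.array([np.trace(true_rho @ E).real for E in povm])
counts = rng.multinomial(10_000, p / p.sum())
freqs = counts / counts.sum()

rho_ml = rrr_mle(povm, freqs)
print(np.round(rho_ml, 3))
print("trace distance:", 0.5 * np.abs(np.linalg.eigvalsh(rho_ml - true_rho)).sum())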