
Data-Driven Inference, Reconstruction, and Observational Completeness of Quantum Devices

Published by: Michele Dall'Arno
Publication date: 2018
Research field: Physics
Language: English





The range of a quantum measurement is the set of outcome probability distributions that can be produced by varying the input state. We introduce data-driven inference as a protocol that, given a set of experimental data as a collection of outcome distributions, infers the quantum measurement that is (i) consistent with the data, in the sense that its range contains all the observed distributions, and (ii) maximally noncommittal, in the sense that its range has minimum volume in the space of outcome distributions. We show that data-driven inference returns a unique measurement for any data set if and only if the inference adopts a (hyper)-spherical state space (for example, the classical or the quantum bit). In analogy to informational completeness for quantum tomography, we define observational completeness as the property of any set of states that, when fed into any given measurement, produces a set of outcome distributions allowing for the correct reconstruction of the measurement via data-driven inference. We show that observational completeness is strictly stronger than informational completeness, in the sense that not all informationally complete sets are also observationally complete. Moreover, we show that for systems with a (hyper)-spherical state space, the only observationally complete simplex is the regular one, namely, the symmetric informationally complete set.
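
To make the minimum-volume step concrete: for a qubit, where the abstract's uniqueness result applies, the range of a measurement in the space of outcome distributions is an ellipsoid-shaped set, so the maximally noncommittal candidate is the minimum-volume ellipsoid enclosing the observed distributions. The sketch below uses Khachiyan's minimum-volume-enclosing-ellipsoid algorithm as a stand-in for the paper's inference step; the trine measurement, the function names, and the projection onto two coordinates of the probability simplex are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def min_volume_ellipsoid(points, tol=1e-7):
    """Khachiyan's algorithm: minimum-volume ellipsoid enclosing `points`.
    Returns (c, A) with (x - c)^T A (x - c) <= 1 for every point x.
    Stand-in for the 'minimally committal range' step of the protocol."""
    P = np.asarray(points, dtype=float)
    n, d = P.shape
    Q = np.vstack([P.T, np.ones(n)])   # lift points to homogeneous coordinates
    u = np.full(n, 1.0 / n)            # uniform initial weights
    err = tol + 1.0
    while err > tol:
        Xmat = Q @ np.diag(u) @ Q.T
        M = np.einsum('ij,ji->i', Q.T, np.linalg.solve(Xmat, Q))
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        u_new = (1.0 - step) * u
        u_new[j] += step
        err = np.linalg.norm(u_new - u)
        u = u_new
    c = P.T @ u                        # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return c, A

# Hypothetical experiment: outcome distributions of a 'trine' qubit POVM
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
angles = 2 * np.pi * np.arange(3) / 3
povm = [(np.eye(2) + np.cos(a) * X + np.sin(a) * Z) / 3 for a in angles]

rng = np.random.default_rng(0)
dists = []
for _ in range(200):                   # random pure input states
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    rho = np.outer(psi, psi.conj()) / np.real(psi.conj() @ psi)
    dists.append([np.real(np.trace(E @ rho)) for E in povm])

# distributions lie in the 2-simplex; drop the last (dependent) coordinate
c, A = min_volume_ellipsoid(np.array(dists)[:, :2])
print("inferred range center:", c)
```

As more input states are probed, the enclosing ellipsoid shrinks onto the measurement's true range, which is the intuition behind observational completeness.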




Read also

Data-driven inference was recently introduced as a protocol that, upon the input of a set of data, outputs a mathematical description for a physical device able to explain the data. The device so inferred is automatically self-consistent, that is, capable of generating all given data, and least committal, that is, consistent with a minimal superset of the given dataset. When applied to the inference of an unknown device, data-driven inference has been shown to always output the true device whenever the dataset has been produced by means of an observationally complete setup, which plays the same role here that informationally complete setups play in conventional quantum tomography. In this paper we develop a unified formalism for the data-driven inference of states and measurements. In the case of qubits, in particular, we provide an explicit implementation of the inference protocol as a convex programming algorithm for the machine learning of states and measurements. We also derive a complete characterization of observational completeness for general systems, from which it follows that only spherical 2-designs achieve observational completeness for qubit systems. This result provides symmetric informationally complete sets and mutually unbiased bases with a new theoretical and operational justification.
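
The closing claim, that only spherical 2-designs achieve observational completeness for qubits, can be spot-checked numerically: a set of N pure states {rho_i} is a spherical 2-design when (1/N) sum_i rho_i (x) rho_i equals the Haar average, which for a qubit is (I(x)I + SWAP)/6. The sketch below verifies this identity for the tetrahedral SIC set; it illustrates the 2-design property only, not the paper's convex programming algorithm.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# tetrahedral (SIC) Bloch vectors: vertices of a regular tetrahedron
vecs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
states = [(I2 + v[0] * X + v[1] * Y + v[2] * Z) / 2 for v in vecs]

# average of rho (x) rho over the four SIC states
avg = sum(np.kron(r, r) for r in states) / 4

# Haar average of |psi><psi|^(x)2 for a qubit: (I(x)I + SWAP) / 6
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
haar = (np.eye(4) + SWAP) / 6

print(np.allclose(avg, haar))  # True: the SIC set is a spherical 2-design
```

By the paper's characterization, this is exactly the property that makes the SIC set observationally complete for qubit systems.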
Given a physical device as a black box, one can in principle fully reconstruct its input-output transfer function by repeatedly feeding different input probes through the device and performing different measurements on the corresponding outputs. However, for such a complete tomographic reconstruction to work, full knowledge of both input probes and output measurements is required. Such an assumption is not only experimentally demanding, but also logically questionable, as it produces a circular argument in which the characterization of unknown devices appears to require other devices to have been already characterized beforehand. Here, we introduce a method to overcome such limitations present in usual tomographic techniques. We show that, even without any knowledge about the tomographic apparatus, it is still possible to infer the unknown device to a high degree of precision, solely relying on the observed data. This is achieved by employing a criterion that singles out the minimal explanation compatible with the observed data. Our method, which can be seen as a data-driven analogue of tomography, is solved analytically and implemented as an algorithm for the learning of qubit channels.
Simon Razniewski, 2014
Knowledge about data completeness is essential in data-supported decision making. In this thesis we present a framework for metadata-based assessment of database completeness. We discuss how to express information about data completeness and how to use such information to draw conclusions about the completeness of query answers. In particular, we introduce formalisms for stating completeness for parts of relational databases. We then present techniques for drawing inferences between such statements and statements about the completeness of query answers, and show how the techniques can be extended to databases that contain null values. We show that the framework for relational databases can be transferred to RDF data, and that a similar framework can also be applied to spatial data. We also discuss how completeness information can be verified over processes, and introduce a data-aware process model that allows this verification.
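
For context, the core inference pattern can be illustrated in a few lines (a deliberate simplification of the thesis's formalisms, with invented helper names): a completeness statement asserts that all real-world tuples satisfying some condition are present in the database, and a selection query is then guaranteed a complete answer whenever its condition is at least as restrictive as some statement's condition.

```python
def entails(query_cond, stmt_cond):
    """Conditions are dicts of attribute -> required value. The query
    condition entails the statement condition when it is at least as
    restrictive on every attribute the statement fixes."""
    return all(query_cond.get(a) == v for a, v in stmt_cond.items())

def query_is_complete(query_cond, statements):
    """A selection query's answer is complete if every tuple it could
    return is covered by some completeness statement."""
    return any(entails(query_cond, s) for s in statements)

# statements: the student table is complete for classes '4A' and '4B'
statements = [{"class": "4A"}, {"class": "4B"}]

print(query_is_complete({"class": "4A"}, statements))                     # True
print(query_is_complete({"class": "4A", "town": "Bolzano"}, statements))  # True (more restrictive)
print(query_is_complete({"town": "Bolzano"}, statements))                 # False (not covered)
```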
We train convolutional neural networks to predict whether or not a set of measurements is informationally complete to uniquely reconstruct any given quantum state with no prior information. In addition, we perform fidelity benchmarking based on this measurement set without explicitly carrying out state tomography. The networks are trained to recognize the fidelity and a reliable measure for informational completeness. By gradually accumulating measurements and data, these trained convolutional networks can efficiently establish a compressive quantum-state characterization scheme by accelerating runtime computation and greatly reducing systematic drifts in experiments. We confirm the potential of this machine-learning approach by presenting experimental results for both spatial-mode and multiphoton systems of large dimensions. These predictions are further shown to improve when the networks are trained with additional bootstrapped training sets from real experimental data. Using a realistic beam-profile displacement error model for Hermite-Gaussian sources, we further demonstrate numerically that the orders-of-magnitude reduction in certification time with trained networks greatly increases the computation yield of a large-scale quantum processor using these sources, before state fidelity deteriorates significantly.
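
For context, the property the networks learn to recognize also has a direct linear-algebra characterization on small systems: a measurement set is informationally complete exactly when its effects span the full d^2-dimensional operator space, so that outcome probabilities determine any state uniquely. A minimal sketch of that test (the helper name is an assumption, not from the paper):

```python
import numpy as np

def is_informationally_complete(effects, d):
    """True iff the effects span the d*d-dimensional operator space,
    i.e. the outcome probabilities determine any state uniquely."""
    M = np.array([np.asarray(E).reshape(-1) for E in effects])
    return np.linalg.matrix_rank(M) == d * d

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# tetrahedral SIC POVM: informationally complete for a qubit
vs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
sic = [(np.eye(2) + v[0] * X + v[1] * Y + v[2] * Z) / 4 for v in vs]
print(is_informationally_complete(sic, 2))   # True

# a single projective Z measurement is not IC
proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
print(is_informationally_complete(proj, 2))  # False
```

The trained networks sidestep such explicit computations as measurements accumulate, which is where the runtime savings reported above come from.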
Maximum-likelihood estimation is applied to the identification of an unknown quantum mechanical process represented by a "black box." In contrast to linear reconstruction schemes, the proposed approach always yields physically sensible results. Its feasibility is demonstrated using Monte Carlo simulations for the two-level system (single qubit).
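
The claim that maximum-likelihood estimation always yields physically sensible results can be made concrete in the simpler state-estimation setting via the iterative R·rho·R algorithm: each update conjugates a density matrix by a Hermitian operator and renormalizes, so positivity and unit trace hold at every step, unlike linear inversion. The sketch below is that state-tomography analogue (the POVM choice and function name are assumptions for illustration, not the paper's process-identification procedure).

```python
import numpy as np

def mle_state(povm, freqs, iters=500):
    """Iterative maximum-likelihood (R rho R) state reconstruction.
    Conjugation by the Hermitian operator R preserves positivity, so
    the estimate is always a physical density matrix."""
    d = povm[0].shape[0]
    rho = np.eye(d, dtype=complex) / d   # start from the maximally mixed state
    for _ in range(iters):
        probs = [max(np.real(np.trace(E @ rho)), 1e-12) for E in povm]
        R = sum(f / p * E for E, f, p in zip(povm, freqs, probs))
        rho = R @ rho @ R
        rho /= np.real(np.trace(rho))
    return rho

# the six Pauli eigenstates, each weighted 1/3, form a valid qubit POVM
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
povm = [(np.eye(2) + s * P) / 6 for P in (X, Y, Z) for s in (+1, -1)]

true = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)  # an arbitrary valid state
freqs = [np.real(np.trace(E @ true)) for E in povm]       # ideal (noise-free) data
est = mle_state(povm, freqs)
print(np.linalg.norm(est - true))  # ~0: the true state is recovered
```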