
Visibility-based hypothesis testing using higher-order optical interference

Added by Marcin Jarzyna
Publication date: 2017
Field: Physics
Language: English





Many quantum information protocols rely on optical interference to compare datasets with efficiency or security unattainable by classical means. Standard implementations exploit first-order coherence between signals whose preparation requires a shared phase reference. Here, we analyze and experimentally demonstrate binary discrimination of visibility hypotheses based on higher-order interference for optical signals with a random relative phase. This provides a robust protocol implementation primitive when a phase lock is unavailable or impractical. With the primitive cost quantified by the total detected optical energy, optimal operation is typically reached in the few-photon regime.
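The abstract's core idea, that higher-order (intensity-correlation) interference survives a random relative phase that washes out ordinary first-order fringes, can be illustrated with a toy classical-field simulation. This is a sketch only: the function name `second_order_visibility` and the decision threshold 0.75 are illustrative assumptions, not the paper's quantum protocol or its few-photon optimal operating point.

```python
import math
import random

def second_order_visibility(interfering, n=20000, rng=random.Random(1)):
    """Toy classical-field model: two unit-intensity beams with a uniformly
    random relative phase mix on a 50:50 beam splitter.  The first-order
    fringe averages away over the random phase, but the normalized intensity
    cross-correlation g = <Ia*Ib> / (<Ia><Ib>) still discriminates mutually
    coherent beams (g -> 1/2) from non-interfering ones (g -> 1)."""
    sa = sb = sab = 0.0
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if interfering:
            ia = 1.0 + math.cos(phi)   # beam-splitter output port intensities
            ib = 1.0 - math.cos(phi)
        else:
            ia = ib = 1.0              # no interference term survives
        sa += ia
        sb += ib
        sab += ia * ib
    return (sab / n) / ((sa / n) * (sb / n))

# Binary discrimination of the two visibility hypotheses by thresholding
# the correlation (threshold 0.75 chosen midway between the two limits):
g1 = second_order_visibility(True)    # close to 0.5
g0 = second_order_visibility(False)   # close to 1.0
decide = lambda g: "interfering" if g < 0.75 else "non-interfering"
```

Note that no per-shot phase information is used by the decision rule, which is what makes this style of test robust when a phase lock is unavailable.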

Related research

Tabish Qureshi, 2019
The interference observed for a quanton, traversing more than one path, is believed to characterize its wave nature. Conventionally, the sharpness of interference has been quantified by its visibility or contrast, as defined in optics. Based on this visibility, wave-particle duality relations have been formulated for two-path interference. However, as one generalizes the situation to multi-path interference, it is found that conventional interference visibility is not a good quantifier. A recently introduced measure of quantum coherence has been shown to be a good quantifier of the wave nature. The subject of quantum coherence, in relation to the wave nature of quantons and to interference visibility, is reviewed here. It is argued that coherence can be construed as a more general form of interference visibility, if the visibility is measured in a different manner, and not as contrast.
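The contrast between conventional fringe visibility and a coherence measure can be made concrete with a small numerical sketch. Assuming the "recently introduced measure" is the l1-norm of coherence (the sum of the moduli of the off-diagonal density-matrix elements), the following compares it with the optical contrast $(I_{max}-I_{min})/(I_{max}+I_{min})$ for a three-path pure state; the function names are illustrative.

```python
import math

def intensity(c, theta):
    """Multi-path interference pattern |sum_j c_j exp(i*j*theta)|^2
    for real path amplitudes c_j."""
    re = sum(a * math.cos(j * theta) for j, a in enumerate(c))
    im = sum(a * math.sin(j * theta) for j, a in enumerate(c))
    return re * re + im * im

def visibility(c, steps=3600):
    """Conventional contrast (Imax - Imin) / (Imax + Imin)."""
    vals = [intensity(c, 2 * math.pi * k / steps) for k in range(steps)]
    return (max(vals) - min(vals)) / (max(vals) + min(vals))

def l1_coherence(c):
    """l1-norm coherence: sum_{i != j} |rho_ij| of the pure state sum_j c_j|j>."""
    return sum(abs(ci * cj) for i, ci in enumerate(c)
                            for j, cj in enumerate(c) if i != j)

# Equal three-path amplitudes: contrast is 1 and coherence is maximal (n-1 = 2).
c_eq = [1 / math.sqrt(3)] * 3
# Unequal amplitudes: both quantifiers drop, but they are no longer simply
# related -- the review's point is that the contrast-based visibility stops
# being a faithful quantifier of the wave nature in the multi-path case.
c_uneq = [math.sqrt(0.8), math.sqrt(0.1), math.sqrt(0.1)]
```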
The problem of discriminating between many quantum channels with certainty is analyzed under the assumption of prior knowledge of algebraic relations among possible channels. It is shown, by explicit construction of a novel family of quantum algorithms, that when the set of possible channels faithfully represents a finite subgroup of SU(2) (e.g., $C_n, D_{2n}, A_4, S_4, A_5$) the recently-developed techniques of quantum signal processing can be modified to constitute subroutines for quantum hypothesis testing. These algorithms, for group quantum hypothesis testing (G-QHT), intuitively encode discrete properties of the channel set in SU(2) and improve query complexity at least quadratically in $n$, the size of the channel set and group, compared to naive repetition of binary hypothesis testing. Intriguingly, performance is completely defined by explicit group homomorphisms; these in turn inform simple constraints on polynomials embedded in unitary matrices. These constructions demonstrate a flexible technique for mapping questions in quantum inference to the well-understood subfields of functional approximation and discrete algebra. Extensions to larger groups and noisy settings are discussed, as well as paths by which improved protocols for quantum hypothesis testing against structured channel sets have application in the transmission of reference frames, proofs of security in quantum cryptography, and algorithms for property testing.
Zixin Huang, Cosmo Lupo, 2021
Detecting the faint emission of a secondary source in the proximity of a much brighter source has been the most severe obstacle to using direct imaging in the search for exoplanets. Using quantum state discrimination and quantum imaging techniques, we show that one can significantly reduce the probability of error in detecting the presence of a weak secondary source, even when the two sources have a small angular separation. If the weak source has relative intensity $\epsilon \ll 1$ with respect to the bright source, we find that the error exponent can be improved by a factor of $1/\epsilon$. We also find the linear-optical measurements that are optimal in this regime. Our result serves as a complementary method in the toolbox of optical imaging, from astronomy to microscopy.
One of the key tasks in physics is to perform measurements in order to determine the state of a system. Often, measurements are aimed at determining the values of physical parameters, but one can also ask simpler questions, such as: is the system in state A or in state B? In quantum mechanics, the latter type of measurement can be studied and optimized using the framework of quantum hypothesis testing. In many cases one can explicitly find the optimal measurement in the limit where one has simultaneous access to a large number $n$ of identical copies of the system, and estimate the expected error as $n$ becomes large. Interestingly, error estimates turn out to involve various quantum information theoretic quantities such as relative entropy, thereby giving these quantities operational meaning. In this paper we consider the application of quantum hypothesis testing to quantum many-body systems and quantum field theory. We review some of the necessary background material, and study in some detail the situation where the two states one wants to distinguish are parametrically close. The relevant error estimates involve quantities such as the variance of relative entropy, for which we prove a new inequality. We explore the optimal measurement strategy for spin chains and two-dimensional conformal field theory, focusing on the task of distinguishing reduced density matrices of subsystems. The optimal strategy turns out to be somewhat cumbersome to implement in practice, and we discuss a possible alternative strategy and the corresponding errors.
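The quantities driving these error estimates are easy to compute in the special case where the two states commute, where the quantum relative entropy reduces to its classical counterpart. The sketch below (function names are illustrative) computes the relative entropy and the relative entropy variance for two commuting qubit states written in their common eigenbasis; by quantum Stein's lemma, the optimal type-II error with $n$ copies decays as $\exp(-n D)$, with the variance controlling the finite-$n$ correction.

```python
import math

def rel_entropy(p, q):
    """D(p||q) = sum_i p_i log(p_i/q_i).  For commuting density matrices,
    written by their eigenvalues, this equals the quantum relative entropy."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def rel_entropy_variance(p, q):
    """Variance of the log-likelihood ratio under p:
    V(p||q) = sum_i p_i * (log(p_i/q_i) - D(p||q))**2."""
    d = rel_entropy(p, q)
    return sum(pi * (math.log(pi / qi) - d) ** 2
               for pi, qi in zip(p, q) if pi > 0)

# Two commuting qubit states, specified by their eigenvalue spectra:
p = [0.9, 0.1]
q = [0.6, 0.4]
# D(p||q) > 0 and D(p||q) != D(q||p): relative entropy is not symmetric,
# which is why the roles of null and alternative hypothesis matter.
```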
Lin Zhou, Yun Wei, Alfred Hero, 2020
We revisit universal outlier hypothesis testing (Li \emph{et al.}, TIT 2014) and derive fundamental limits for the optimal test. In outlier hypothesis testing, one is given multiple observed sequences, where most sequences are generated i.i.d. from a nominal distribution. The task is to discern the set of outlying sequences that are generated according to anomalous distributions. The nominal and anomalous distributions are \emph{unknown}. We study the tradeoff among the probabilities of misclassification error, false alarm and false reject for tests that satisfy weak conditions on the rate of decrease of these error probabilities as a function of sequence length. Specifically, we propose a threshold-based universal test that ensures exponential decay of the misclassification error and false alarm probabilities. We study two constraints on the false reject probability: one is that it be a non-vanishing constant, and the other is that it have an exponential decay rate. For both cases, we characterize bounds on the false reject probability, as a function of the threshold, for each pair of nominal and anomalous distributions, and demonstrate the optimality of our test in the generalized Neyman-Pearson sense. We first consider the case of at most one outlier and then generalize our results to the case of multiple outliers, where the number of outliers is unknown and each outlier can follow a different anomalous distribution.
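The universal flavor of such a test, that no distribution is known in advance, can be sketched with a simple empirical-distribution score. The following is an illustrative toy version, not the paper's optimal test: each sequence is scored by the KL divergence between its empirical distribution and that of all other sequences pooled together, and a threshold on the score trades misclassification/false alarm against false reject.

```python
import math
from collections import Counter

def empirical(seq, alphabet):
    """Empirical distribution of a sequence over a fixed alphabet."""
    n = len(seq)
    cnt = Counter(seq)
    return [cnt[a] / n for a in alphabet]

def kl(p, q, eps=1e-9):
    """Classical KL divergence with a small floor to avoid division by zero."""
    return sum(pi * math.log(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

def outlier_scores(seqs, alphabet):
    """Score each sequence against the pooled empirical distribution of the
    remaining sequences (a stand-in for the unknown nominal distribution).
    Outliers receive large scores; a threshold turns this into a test."""
    scores = []
    for i, _ in enumerate(seqs):
        rest = [x for j, s in enumerate(seqs) if j != i for x in s]
        scores.append(kl(empirical(seqs[i], alphabet), empirical(rest, alphabet)))
    return scores

# Three nominal sequences (balanced 0/1) and one anomalous sequence (mostly 1s):
seqs = [[0, 1] * 50, [0, 1] * 50, [0, 1] * 50, [1] * 90 + [0] * 10]
scores = outlier_scores(seqs, [0, 1])
```

Because both the nominal and anomalous distributions are estimated from the data itself, the score needs no prior knowledge, which mirrors the universality of the setting.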
