
Multiple-copy state discrimination: Thinking globally, acting locally

Published by Brendon Higgins
Publication date: 2010
Research field: Physics
Paper language: English





We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies $N$, and in the asymptotic limit as $N \rightarrow \infty$. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here, adaptive measurements are those for which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite $N$) and scaling of this error in the asymptotic limit. In the asymptotic limit, adaptive schemes have no advantage over the optimal fixed local scheme, and except for states with less than 2% mixture, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme. For finite $N$, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme, for any degree of mixture.
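To make the collective benchmark concrete, here is a minimal Python sketch (not the paper's code) that evaluates the optimal-collective-measurement error probability via the Helstrom bound, $P_{\mathrm{err}}(N) = \frac{1}{2}\left(1 - \left\|\left(\rho_0^{\otimes N} - \rho_1^{\otimes N}\right)/2\right\|_1\right)$, assuming equal priors; the particular Bloch vectors are illustrative choices.

```python
# A minimal sketch (not the paper's code): the optimal-collective-measurement
# error probability for discriminating two equiprobable mixed qubit states
# given N copies, via the Helstrom bound
#   P_err(N) = (1 - || (rho0^{(x)N} - rho1^{(x)N}) / 2 ||_1) / 2.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def qubit(rx, rz):
    """Density matrix with Bloch vector (rx, 0, rz)."""
    return (I2 + rx * X + rz * Z) / 2

def kron_power(rho, n):
    """N-fold tensor power of a density matrix."""
    out = np.array([[1.0 + 0j]])
    for _ in range(n):
        out = np.kron(out, rho)
    return out

def helstrom_error(rho0, rho1, n):
    """Minimum error probability with equal priors and N copies."""
    delta = (kron_power(rho0, n) - kron_power(rho1, n)) / 2
    trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()
    return (1 - trace_norm) / 2

# Two nonorthogonal mixed states: Bloch vectors at +/-30 degrees, 5% mixture.
r = 0.95
rho0 = qubit(r * np.sin(np.pi / 6), r * np.cos(np.pi / 6))
rho1 = qubit(-r * np.sin(np.pi / 6), r * np.cos(np.pi / 6))

for n in range(1, 7):
    print(f"N = {n}: P_err = {helstrom_error(rho0, rho1, n):.6f}")
```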


Read also

Background Based Conversations (BBCs) have been introduced to help conversational systems avoid generating overly generic responses. In a BBC, the conversation is grounded in a knowledge source. A key challenge in BBCs is Knowledge Selection (KS): given a conversational context, try to find the appropriate background knowledge (a text fragment containing related facts or comments, etc.) based on which to generate the next response. Previous work addresses KS by employing attention and/or pointer mechanisms. These mechanisms use a local perspective, i.e., they select a token at a time based solely on the current decoding state. We argue for the adoption of a global perspective, i.e., pre-selecting some text fragments from the background knowledge that could help determine the topic of the next response. We enhance KS in BBCs by introducing a Global-to-Local Knowledge Selection (GLKS) mechanism. Given a conversational context and background knowledge, we first learn a topic transition vector to encode the most likely text fragments to be used in the next response, which is then used to guide the local KS at each decoding timestamp. In order to effectively learn the topic transition vector, we propose a distantly supervised learning schema. Experimental results show that the GLKS model significantly outperforms state-of-the-art methods in terms of both automatic and human evaluation. More importantly, GLKS achieves this without requiring any extra annotations, which demonstrates its high degree of scalability.
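The following is a hedged Python sketch of the global-to-local idea only, not the authors' GLKS implementation: a context-conditioned attention over the background yields one topic-transition vector, which then biases the per-step local selection scores. The dimensions, the random stand-in encoders, and the mixing weight `gamma` are all illustrative assumptions.

```python
# A minimal sketch of global-to-local knowledge selection (an assumption-level
# illustration, not the authors' GLKS code): summarize the background with a
# context-conditioned "topic transition" vector, then use it to bias
# token-level knowledge selection at every decoding step.
import numpy as np

rng = np.random.default_rng(0)
d, n_bg, n_steps = 64, 50, 5

background = rng.normal(size=(n_bg, d))   # stand-in encoded background tokens
context = rng.normal(size=d)              # stand-in encoded conversation

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Global stage: attention over the background, queried by the context,
# yields one topic-transition vector for the whole response.
global_attn = softmax(background @ context / np.sqrt(d))
topic_vector = global_attn @ background

# Local stage: at each decoding step, token scores combine the usual
# state-token match with a match against the global topic vector.
gamma = 0.5
for t in range(n_steps):
    decoder_state = rng.normal(size=d)    # stand-in for a real decoder state
    local_scores = background @ decoder_state
    global_scores = background @ topic_vector
    selection = softmax((local_scores + gamma * global_scores) / np.sqrt(d))
    print(f"step {t}: top background token = {selection.argmax()}")
```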
The celebrated sparse representation model has led to remarkable results in various signal processing tasks in the last decade. However, despite its initial purpose of serving as a global prior for entire signals, it has been commonly used for modeling low dimensional patches due to the computational constraints it entails when deployed with learned dictionaries. A way around this problem has been recently proposed, adopting a convolutional sparse representation model. This approach assumes that the global dictionary is a concatenation of banded Circulant matrices. While several works have presented algorithmic solutions to the global pursuit problem under this new model, very few truly-effective guarantees are known for the success of such methods. In this work, we address the theoretical aspects of the convolutional sparse model providing the first meaningful answers to questions of uniqueness of solutions and success of pursuit algorithms, both greedy and convex relaxations, in ideal and noisy regimes. To this end, we generalize mathematical quantities, such as the $\ell_0$ norm, mutual coherence, Spark and RIP to their counterparts in the convolutional setting, intrinsically capturing local measures of the global model. On the algorithmic side, we demonstrate how to solve the global pursuit problem by using simple local processing, thus offering a first of its kind bridge between global modeling of signals and their patch-based local treatment.
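As a small worked example of this construction (my own illustration, not the paper's code), the sketch below assembles a global dictionary by concatenating banded circulant matrices generated from a random local filter bank and computes its mutual coherence, one of the quantities the work generalizes.

```python
# A minimal sketch: build the global convolutional dictionary as a
# concatenation of banded circulant matrices generated by a small local
# filter bank, then compute its mutual coherence
#   mu(D) = max_{i != j} |<d_i, d_j>| over unit-norm columns.
import numpy as np

rng = np.random.default_rng(1)
N, n, m = 32, 5, 3          # signal length, filter length, number of filters
filters = rng.normal(size=(m, n))

def circulant_bank(filters, N):
    """One banded circulant matrix per filter, concatenated column-wise."""
    cols = []
    for f in filters:
        padded = np.zeros(N)
        padded[:len(f)] = f
        for shift in range(N):
            cols.append(np.roll(padded, shift))
    D = np.stack(cols, axis=1)
    return D / np.linalg.norm(D, axis=0)   # normalize columns

D = circulant_bank(filters, N)
G = np.abs(D.T @ D)
np.fill_diagonal(G, 0)
print(f"global dictionary: {D.shape}, mutual coherence mu = {G.max():.4f}")
```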
The celebrated sparse representation model has led to remarkable results in various signal processing tasks in the last decade. However, despite its initial purpose of serving as a global prior for entire signals, it has been commonly used for modeling low dimensional patches due to the computational constraints it entails when deployed with learned dictionaries. A way around this problem has been proposed recently, adopting a convolutional sparse representation model. This approach assumes that the global dictionary is a concatenation of banded Circulant matrices. Although several works have presented algorithmic solutions to the global pursuit problem under this new model, very few truly-effective guarantees are known for the success of such methods. In the first of this two-part work, we address the theoretical aspects of the sparse convolutional model, providing the first meaningful answers to corresponding questions of uniqueness of solutions and success of pursuit algorithms. To this end, we generalize mathematical quantities, such as the $\ell_0$ norm, the mutual coherence and the Spark, to their counterparts in the convolutional setting, which intrinsically capture local measures of the global model. In a companion paper, we extend the analysis to a noisy regime, addressing the stability of the sparsest solutions and pursuit algorithms, and demonstrate practical approaches for solving the global pursuit problem via simple local processing.
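Since both parts revolve around local measures of the global model, here is a hedged sketch of one such measure from the convolutional sparse-coding literature, the $\ell_{0,\infty}$ stripe norm: the largest number of nonzeros in any stripe of $(2n-1)m$ consecutive coefficients. The group layout of the global code below is my assumption for illustration.

```python
# A hedged sketch of the stripe-sparsity measure (the l_{0,inf} norm): the
# global code has one group of m coefficients per cyclic shift, and the i-th
# stripe collects the (2n-1)m coefficients of atoms that can overlap sample i.
import numpy as np

def l0_inf(gamma, N, m, n):
    """Max over samples i of the nonzero count in the i-th stripe."""
    groups = gamma.reshape(N, m)              # one group per cyclic shift
    worst = 0
    for i in range(N):
        idx = [(i + s) % N for s in range(-(n - 1), n)]
        worst = max(worst, np.count_nonzero(groups[idx]))
    return worst

rng = np.random.default_rng(2)
N, m, n, k = 32, 3, 5, 8                      # shifts, filters, filter len, nnz
gamma = np.zeros(N * m)
gamma[rng.choice(N * m, size=k, replace=False)] = rng.normal(size=k)
print(f"||gamma||_0 = {k}, ||gamma||_{{0,inf}} = {l0_inf(gamma, N, m, n)}")
```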
The convolutional sparse model has recently gained increasing attention in the signal and image processing communities, and several methods have been proposed for solving the pursuit problem emerging from it -- in particular its convex relaxation, Basis Pursuit. In the first of this two-part work, we have provided a theoretical backbone for this model, providing guarantees for the uniqueness of the sparsest solution and for the success of pursuit algorithms by introducing the notion of stripe sparsity and other related measures. Herein, we extend the analysis to a noisy regime, thereby considering signal perturbations and model deviations. We address questions of stability of the sparsest solutions and the success of pursuit algorithms, both greedy and convex. Classical definitions such as the RIP are generalized to the convolutional model, and existing notions such as the ERC are connected to our setting. On the algorithmic side, we demonstrate how to solve the global pursuit problem by using simple local processing, thus offering a first of its kind bridge between global modeling of signals and their patch-based local treatment.
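To illustrate the "global pursuit via simple local processing" bridge (an assumption-level sketch, not the authors' algorithm), the code below sparse-codes every overlapping patch of a signal with a small local dictionary using plain orthogonal matching pursuit and reconstructs the signal by averaging the overlapping estimates.

```python
# A minimal illustration of solving a global pursuit by local processing:
# code each overlapping patch with the local dictionary via OMP, then
# average the overlapping patch reconstructions.
import numpy as np

def omp(D, y, k):
    """Plain orthogonal matching pursuit: pick k atoms of D to represent y."""
    support, residual, coef = [], y.copy(), np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
N, n, m, k = 64, 8, 16, 2        # signal length, patch size, atoms, sparsity
D_local = rng.normal(size=(n, m))
D_local /= np.linalg.norm(D_local, axis=0)
signal = rng.normal(size=N)

recon, counts = np.zeros(N), np.zeros(N)
for i in range(N - n + 1):                 # all overlapping patches
    patch = signal[i:i + n]
    recon[i:i + n] += D_local @ omp(D_local, patch, k)
    counts[i:i + n] += 1
recon /= counts
err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error = {err:.3f}")
```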
We consider a scenario of remote state preparation of qubits where a single copy of an entangled state is shared between Alice and several Bobs who sequentially perform unsharp single-particle measurements. We show that a substantial number of Bobs can optimally and reliably prepare the qubit in Alice's lab exceeding the classical realm. There can be at most 16 Bobs in a sequence when the state is chosen from the equatorial circle of the Bloch sphere. In general, depending on the choice of circle on the Bloch sphere, the optimal number of Bobs ranges from 12 for the worst choice to remarkably large values for circles in the polar regions, in the case of an initially shared maximally entangled state. We further show that the bound on the number of observers successful in implementing remote state preparation is higher for maximally entangled initial states than that for non-maximally entangled initial states.
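The following is a minimal numpy sketch of the sequential-measurement setup (my own toy model, not the paper's optimized protocol): each Bob applies the same unsharp $\sigma_z$ measurement of fixed sharpness $\lambda$ to his half of $|\Phi^+\rangle$, and the unrecorded outcomes are averaged over, so in this toy model the entanglement left for later Bobs, tracked by the negativity, decays by $\sqrt{1-\lambda^2}$ per round. The fixed sharpness and the averaging are simplifying assumptions.

```python
# Toy model of sequential unsharp measurements on one half of |Phi+>:
# every Bob applies the unsharp sigma_z POVM with sharpness lam, outcomes
# averaged; we track the remaining entanglement via the negativity of the
# partial transpose.
import numpy as np

lam = 0.3                                    # sharpness, 0 < lam <= 1
# Kraus operators K_pm = sqrt((I +/- lam * sigma_z) / 2); the matrices are
# diagonal, so the square root is taken entrywise.
K_plus = np.diag(np.sqrt([(1 + lam) / 2, (1 - lam) / 2]))
K_minus = np.diag(np.sqrt([(1 - lam) / 2, (1 + lam) / 2]))
I2 = np.eye(2)

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |Phi+>
rho = np.outer(phi, phi)

def negativity(rho):
    """Sum of |negative eigenvalues| of the partial transpose on qubit B."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    evals = np.linalg.eigvalsh(pt)
    return float(-evals[evals < 0].sum())

print(f"initial negativity = {negativity(rho):.4f}")
for bob in range(1, 6):
    rho = sum(np.kron(I2, K) @ rho @ np.kron(I2, K).T
              for K in (K_plus, K_minus))
    print(f"after Bob {bob}: negativity = {negativity(rho):.4f}")
```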