
Quantum Shannon Theory

Added by John Preskill
Publication date: 2016
Field: Physics
Language: English
Authors: John Preskill





This is the 10th and final chapter of my book on Quantum Information, based on the course I have been teaching at Caltech since 1997. An earlier version of this chapter (originally Chapter 5) has been available on the course website since 1998, but this version is substantially revised and expanded. Topics covered include classical Shannon theory, quantum compression, quantifying entanglement, accessible information, and using the decoupling principle to derive achievable rates for quantum protocols. This is a draft, pre-publication copy of Chapter 10, which I will continue to update. See the URL on the title page for further updates and drafts of other chapters, and please send me an email if you notice errors.
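Two of the topics listed above, classical Shannon theory and quantum compression, can be illustrated with a short numerical sketch. This is not from the chapter itself, just a minimal illustration of the standard facts that the Shannon entropy gives the optimal classical compression rate and the von Neumann entropy gives the optimal quantum (Schumacher) compression rate; the example source is my own choice.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) in bits: the optimal classical compression rate."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) in qubits: the optimal quantum compression
    rate for a memoryless source with density operator rho."""
    evals = np.linalg.eigvalsh(rho)
    return shannon_entropy(evals)

# Example source: a qubit emitting |0> and |+> with equal probability.
ket0 = np.array([1.0, 0.0])
ketplus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketplus, ketplus)

# Because the two signal states are nonorthogonal, the quantum source
# compresses below 1 qubit per signal, while the classical label of
# which state was sent still costs a full bit.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(von_neumann_entropy(rho))      # ~0.60
```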



Related research

Dual to the usual noisy channel coding problem, where a noisy (classical or quantum) channel is used to simulate a noiseless one, reverse Shannon theorems concern the use of noiseless channels to simulate noisy ones, and more generally the use of one noisy channel to simulate another. For channels of nonzero capacity, this simulation is always possible, but for it to be efficient, auxiliary resources of the proper kind and amount are generally required. In the classical case, shared randomness between sender and receiver is a sufficient auxiliary resource, regardless of the nature of the source, but in the quantum case the requisite auxiliary resources for efficient simulation depend on both the channel being simulated, and the source from which the channel inputs are coming. For tensor power sources (the quantum generalization of classical IID sources), entanglement in the form of standard ebits (maximally entangled pairs of qubits) is sufficient, but for general sources, which may be arbitrarily correlated or entangled across channel inputs, additional resources, such as entanglement-embezzling states or backward communication, are generally needed. Combining existing and new results, we establish the amounts of communication and auxiliary resources needed in both the classical and quantum cases, the tradeoffs among them, and the loss of simulation efficiency when auxiliary resources are absent or insufficient. In particular we find a new single-letter expression for the excess forward communication cost of coherent feedback simulations of quantum channels (i.e. simulations in which the sender retains what would escape into the environment in an ordinary simulation), on non-tensor-power sources in the presence of unlimited ebits but no other auxiliary resource. Our results on tensor power sources establish a strong converse to the entanglement-assisted capacity theorem.
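In the classical case described above, the reverse Shannon theorem says that with free shared randomness, simulating a channel on an IID source costs exactly the mutual information I(X;Y) in forward communication per channel use. A small sketch (my own, not from the paper) makes this rate concrete for a binary symmetric channel:

```python
import numpy as np

def H2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def bsc_mutual_info(px1, e):
    """I(X;Y) for a binary symmetric channel with crossover probability e
    and input distribution P(X=1) = px1. By the classical reverse Shannon
    theorem, this is the forward communication rate needed to simulate the
    channel on this IID source, given free shared randomness."""
    py1 = px1 * (1 - e) + (1 - px1) * e
    return H2(py1) - H2(e)

# Simulating a BSC(0.1) on a uniform IID input costs ~0.53 bits per use,
# cheaper than the 1 bit per use needed to send the input itself.
print(bsc_mutual_info(0.5, 0.1))
```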
Pouria Pedram (2016)
In the framework of the generalized uncertainty principle, the position and momentum operators obey the modified commutation relation $[X,P]=i\hbar\left(1+\beta P^2\right)$, where $\beta$ is the deformation parameter. Since the validity of the uncertainty relation for the Shannon entropies proposed by Beckner, Bialynicki-Birula, and Mycielski (BBM) depends on both the algebra and the representation used, we show that using the formally self-adjoint representation, i.e., $X=x$ and $P=\tan\left(\sqrt{\beta}\,p\right)/\sqrt{\beta}$ where $[x,p]=i\hbar$, the BBM inequality remains valid in the form $S_x+S_p\geq 1+\ln\pi$, just as in ordinary quantum mechanics. We demonstrate this result explicitly for the harmonic oscillator in the presence of a minimal length.
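The BBM bound quoted in the abstract can be checked numerically in the ordinary quantum-mechanical limit (deformation parameter set to zero). The sketch below, my own illustration rather than the paper's calculation, verifies that the harmonic oscillator ground state saturates $S_x+S_p\geq 1+\ln\pi$, since its position and momentum densities are both Gaussian:

```python
import numpy as np

# Harmonic oscillator ground state (ordinary QM, hbar = m = omega = 1):
# |psi(x)|^2 = exp(-x^2)/sqrt(pi), and |phi(p)|^2 has the same form.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
prob_x = np.exp(-x**2) / np.sqrt(np.pi)
prob_p = np.exp(-x**2) / np.sqrt(np.pi)  # same Gaussian profile in p

def diff_entropy(rho, dz):
    """Differential Shannon entropy -integral of rho*ln(rho), in nats."""
    mask = rho > 0
    return float(-np.sum(rho[mask] * np.log(rho[mask])) * dz)

S_x = diff_entropy(prob_x, dx)
S_p = diff_entropy(prob_p, dx)
print(S_x + S_p, 1 + np.log(np.pi))  # the Gaussian state saturates the bound
```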
Quantum chromodynamics (QCD) describes the structure of hadrons such as the proton at a fundamental level. The precision of calculations in QCD limits the precision of the values of many physical parameters extracted from collider data. For example, uncertainty in the parton distribution function (PDF) is the dominant source of error in the $W$ mass measurement at the LHC. Improving the precision of such measurements is essential in the search for new physics. Quantum simulation offers an efficient way of studying quantum field theories (QFTs) such as QCD non-perturbatively. Previous quantum algorithms for simulating QFTs have qubit requirements that are well beyond the most ambitious experimental proposals for large-scale quantum computers. Can the qubit requirements for such algorithms be brought into range of quantum computation with several thousand logical qubits? We show how this can be achieved by using the light-front formulation of quantum field theory. This work was inspired by the similarity of the light-front formulation to quantum chemistry, first noted by Kenneth Wilson.
Any realist interpretation of quantum theory must grapple with the measurement problem and the status of state-vector collapse. In a no-collapse approach, measurement is typically modeled as a dynamical process involving decoherence. We describe how the minimal modal interpretation closes a gap in this dynamical description, leading to a complete and consistent resolution to the measurement problem and an effective form of state collapse. Our interpretation also provides insight into the indivisible nature of measurement--the fact that you can't stop a measurement part-way through and uncover the underlying `ontic' dynamics of the system in question. Having discussed the hidden dynamics of a system's ontic state during measurement, we turn to more general forms of open-system dynamics and explore the extent to which the details of the underlying ontic behavior of a system can be described. We construct a space of ontic trajectories and describe obstructions to defining a probability measure on this space.
Information reconciliation (IR) corrects the errors in sifted keys and ensures the correctness of quantum key distribution (QKD) systems. Polar codes-based IR schemes can achieve high reconciliation efficiency; however, the attendant high frame error rate decreases the secure key rate of QKD systems. In this article, we propose a Shannon-limit approached (SLA) IR scheme, which mainly contains two phases: the forward reconciliation phase and the acknowledgment reconciliation phase. In the forward reconciliation phase, the sifted key is divided into sub-blocks, each processed by the improved block checked successive cancellation list (BC-SCL) decoder of polar codes. Afterwards, only the sub-blocks that fail correction undergo the additional acknowledgment reconciliation phase, which decreases the frame error rate of the SLA IR scheme. The experimental results show that the overall failure probability of the SLA IR scheme is decreased to $10^{-8}$ and the efficiency is improved to 1.091 with an IR block length of 128 Mb. Furthermore, the efficiency of the proposed SLA IR scheme is 1.055, approaching the Shannon limit, when the quantum bit error rate is 0.02 and the input scale is 1 Gb, which is a hundred times larger than state-of-the-art implemented polar codes-based IR schemes.
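The efficiency figures above can be unpacked with the common QKD convention that reconciliation efficiency f is the ratio of information disclosed to the Shannon minimum n*h(QBER), where h is the binary entropy; f = 1 is the Shannon limit. This sketch (my own, with illustrative function names) shows what an efficiency of 1.055 at QBER 0.02 implies per key bit:

```python
import numpy as np

def binary_entropy(e):
    """Binary Shannon entropy h(e) in bits."""
    if e in (0.0, 1.0):
        return 0.0
    return float(-e * np.log2(e) - (1 - e) * np.log2(1 - e))

def syndrome_leakage(n, qber, efficiency):
    """Bits disclosed while reconciling an n-bit sifted key, assuming the
    convention efficiency f = leakage / (n * h(QBER)); f = 1 is the
    Shannon limit, so f >= 1 for any real scheme."""
    return efficiency * n * binary_entropy(qber)

n = 10**9  # 1 Gb block, as in the reported experiment
print(binary_entropy(0.02))                   # ~0.1414 bits per key bit
print(syndrome_leakage(n, 0.02, 1.055) / n)   # ~0.149 bits leaked per key bit
```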