
Scalable and Fault Tolerant Computation with the Sparse Grid Combination Technique

Posted by Brendan Harding
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





This paper continues to develop a fault-tolerant extension of the sparse grid combination technique recently proposed in [B. Harding and M. Hegland, ANZIAM J., 54 (CTAC2012), pp. C394-C411]. The approach is novel for two reasons: first, it provides several levels at which parallelism can be exploited, leading towards massively parallel implementations; and second, it provides algorithm-based fault tolerance, so that solutions can still be recovered if failures occur during computation. We present a generalisation of the combination technique from which the fault-tolerant algorithm follows as a consequence. Using a model for the time between faults on each node of a high-performance computer, we derive bounds on the expected interpolation error of this algorithm. Numerical experiments on the scalar advection PDE demonstrate that the algorithm is resilient to faults in a real application. The observed trade-off between recovery time and reduced solution accuracy is suitably small. A comparison with traditional checkpoint-restart methods applied to the combination technique shows that our approach is highly scalable with respect to the number of faults.
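The combination idea behind this result can be illustrated in a few lines. The Python sketch below is illustrative only, not the authors' code: all function names and the 2D setup are assumptions. It interpolates a test function with the classical level-n combination, then simulates a fault by dropping one component grid and recomputing combination coefficients for the surviving downward-closed index set via inclusion-exclusion, which is one way to realise a generalised combination technique of the kind the abstract refers to.

```python
# Illustrative sketch (not the paper's code) of fault-tolerant recovery in the
# 2D combination technique: recompute coefficients after dropping a failed grid.
import itertools
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def f(x, y):                              # test function to interpolate
    return np.sin(np.pi * x) * np.cos(np.pi * y)

def combination_coeffs(index_set):
    """c_l = sum over z in {0,1}^2 of (-1)^|z| [l + z in I], I downward-closed."""
    I = set(index_set)
    coeffs = {}
    for l in I:
        c = sum((-1) ** (z[0] + z[1])
                for z in itertools.product((0, 1), repeat=2)
                if (l[0] + z[0], l[1] + z[1]) in I)
        if c != 0:
            coeffs[l] = c
    return coeffs

def combine(index_set, pts):
    """Evaluate the combined bilinear interpolant at pts (shape (N, 2))."""
    approx = np.zeros(len(pts))
    for l, c in combination_coeffs(index_set).items():
        gx = np.linspace(0.0, 1.0, 2 ** l[0] + 1)   # level-l anisotropic grid
        gy = np.linspace(0.0, 1.0, 2 ** l[1] + 1)
        vals = f(*np.meshgrid(gx, gy, indexing="ij"))
        approx += c * RegularGridInterpolator((gx, gy), vals)(pts)
    return approx

n = 5
full = {(i, j) for i in range(1, n) for j in range(1, n) if i + j <= n}
failed = {(2, 3)}                         # pretend this grid's node crashed
# Drop failed grids and anything above them, keeping the set downward-closed.
survivors = {l for l in full
             if not any(l[0] >= g[0] and l[1] >= g[1] for g in failed)}

pts = np.random.default_rng(0).random((1000, 2))
exact = f(pts[:, 0], pts[:, 1])
for label, I in (("no faults ", full), ("with fault", survivors)):
    print(label, "max interpolation error:", np.abs(combine(I, pts) - exact).max())
```

The recovered combination loses some accuracy but requires no recomputation of the lost component solution, mirroring the recovery-time versus accuracy trade-off reported in the abstract.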




Read also

Rui Chao, Ben W. Reichardt (2017)
Reliable qubits are difficult to engineer, but standard fault-tolerance schemes use seven or more physical qubits to encode each logical qubit, with still more qubits required for error correction. The large overhead makes it hard to experiment with fault-tolerance schemes with multiple encoded qubits. The 15-qubit Hamming code protects seven encoded qubits to distance three. We give fault-tolerant procedures for applying arbitrary Clifford operations on these encoded qubits, using only two extra qubits, 17 total. In particular, individual encoded qubits within the code block can be targeted. Fault-tolerant universal computation is possible with four extra qubits, 19 total. The procedures could enable testing more sophisticated protected circuits in small-scale quantum devices. Our main technique is to use gadgets to protect gates against correlated faults. We also take advantage of special code symmetries, and use pieceable fault tolerance.
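To make the parameters above concrete, here is a short, hedged Python check (not from the paper) of the CSS construction behind the 15-qubit Hamming code: the classical [15,11,3] Hamming parity-check matrix is self-orthogonal over GF(2), so reusing it for both X- and Z-type stabilizers yields 15 - 2*4 = 7 encoded qubits at distance three.

```python
import numpy as np

n, r = 15, 4                  # classical [15, 11, 3] Hamming code parameters
# Column j of H is the 4-bit binary expansion of j, for j = 1..15.
H = np.array([[(j >> i) & 1 for j in range(1, n + 1)] for i in range(r)])

# CSS condition (dual code contained in the code): rows of H are pairwise
# orthogonal and have even weight over GF(2).
assert np.all(H @ H.T % 2 == 0)

# Using H for both X-type and Z-type stabilizers gives k = n - 2r logical
# qubits; distance 3 follows since all columns are nonzero and distinct.
print("logical qubits:", n - 2 * r)       # -> 7
```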
We explain how to combine holonomic quantum computation (HQC) with fault-tolerant quantum error correction. This establishes the scalability of HQC, putting it on an equal footing with other models of computation, while retaining the inherent robustness the method derives from its geometric nature.
We study how dynamical decoupling (DD) pulse sequences can improve the reliability of quantum computers. We prove upper bounds on the accuracy of DD-protected quantum gates and derive sufficient conditions for DD-protected gates to outperform unprotected gates. Under suitable conditions, fault-tolerant quantum circuits constructed from DD-protected gates can tolerate stronger noise, and have a lower overhead cost, than fault-tolerant circuits constructed from unprotected gates. Our accuracy estimates depend on the dynamics of the bath that couples to the quantum computer, and can be expressed either in terms of the operator norm of the bath's Hamiltonian or in terms of the power spectrum of bath correlations; we explain in particular how the performance of recursively generated concatenated pulse sequences can be analyzed from either viewpoint. Our results apply to Hamiltonian noise models with limited spatial correlations.
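As a hedged illustration of the recursively generated concatenated pulse sequences mentioned here (a textbook CDD recursion under the Khodjasteh-Lidar convention, not code from the paper), the sketch below builds the level-n sequence by nesting the universal decoupling cycle Z-f-X-f-Z-f-X-f, where "f" denotes a free-evolution period:

```python
def cdd(level):
    """Pulse sequence for concatenated DD at the given level; 'f' = free evolution."""
    if level == 0:
        return ["f"]                          # bare (undecoupled) evolution
    inner = cdd(level - 1)
    # Nest level-(n-1) blocks inside the universal decoupling cycle.
    return (["Z"] + inner + ["X"] + inner) * 2

print("CDD_1 =", " ".join(cdd(1)))            # Z f X f Z f X f
for lvl in (1, 2, 3):
    seq = cdd(lvl)
    print(f"CDD_{lvl}: {len(seq)} slots, {seq.count('f')} free periods")
```

In practice adjacent pulses compose as Paulis, so the printed sequences can be simplified further before being applied to hardware.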
The celebrated result of Fischer, Lynch and Paterson is the fundamental lower bound for asynchronous fault-tolerant computation: any 1-crash resilient asynchronous agreement protocol must have some (possibly measure zero) probability of not terminating. In 1994, Ben-Or, Kelmer and Rabin published a proof-sketch of a lesser known lower bound for asynchronous fault-tolerant computation with optimal resilience against a Byzantine adversary: if $n \le 4t$ then any $t$-resilient asynchronous verifiable secret sharing protocol must have some non-zero probability of not terminating. Our main contribution is to revisit this lower bound and provide a rigorous and more general proof. Our second contribution is to show how to avoid this lower bound. We provide a protocol with optimal resilience that is almost surely terminating for a strong common coin functionality. Using this new primitive, we provide an almost surely terminating protocol with optimal resilience for asynchronous Byzantine agreement that has a new fair validity property. To the best of our knowledge, this is the first asynchronous Byzantine agreement protocol with fair validity in the information-theoretic setting.
The scalability of photonic implementations of fault-tolerant quantum computing based on Gottesman-Kitaev-Preskill (GKP) qubits is hampered by the requirements of inline squeezing and reconfigurability of the linear optical network. In this work we propose a topologically error-corrected architecture that does away with these elements at no cost (in fact, at an advantage) to state-preparation overheads. Our computer consists of three modules: a 2D array of probabilistic sources of GKP states; a depth-four circuit of static beamsplitters, phase shifters, and single-time-step delay lines; and a 2D array of homodyne detectors. The symmetry of our proposed circuit allows us to combine the effects of finite squeezing and uniform photon loss within the noise model, resulting in more comprehensive threshold estimates. These jumps over both architectural and analytical hurdles considerably expedite the construction of a photonic quantum computer.