A promising approach to defending against side-channel attacks is to build programs that are leakage resilient in a formal sense. One such formal notion of leakage resilience is the n-threshold-probing model proposed in the seminal work by Ishai et al. In a recent work, Eldib and Wang proposed a method for automatically synthesizing programs that are leakage resilient according to this model for the case n = 1. In this paper, we show that the n-threshold-probing model of leakage resilience enjoys a compositionality property that can be exploited for synthesis. We use this property to design a synthesis method that efficiently synthesizes leakage-resilient programs in a compositional manner, for the general case of n > 1. We have implemented a prototype of the synthesis algorithm, and we demonstrate its effectiveness by synthesizing leakage-resilient programs.
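As background for the n-threshold-probing model, the following minimal Python sketch (not from the paper; the helper names `mask`, `unmask`, and `masked_and` and the use of Boolean masking over single bits are illustrative assumptions) shows the classic ISW-style gadget in which a secret bit is split into n+1 random shares, so that any n intermediate values observed by the probing adversary are statistically independent of the secret.

```python
import secrets
from functools import reduce
from operator import xor

def mask(bit: int, n: int) -> list[int]:
    """Split a secret bit into n+1 Boolean shares whose XOR equals the bit."""
    shares = [secrets.randbelow(2) for _ in range(n)]
    shares.append(reduce(xor, shares, bit))
    return shares

def unmask(shares: list[int]) -> int:
    """Recombine shares; any n of them alone reveal nothing about the secret."""
    return reduce(xor, shares)

def masked_and(a: list[int], b: list[int]) -> list[int]:
    """ISW-style secure AND on (n+1)-share inputs, returning (n+1) output shares."""
    k = len(a)
    c = [a[i] & b[i] for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            r = secrets.randbelow(2)                     # fresh randomness per share pair
            c[i] ^= r
            c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])
    return c

# Example for n = 2 (three shares per value): AND of two masked secret bits.
n = 2
a, b = mask(1, n), mask(1, n)
assert unmask(masked_and(a, b)) == 1
```

Compositional synthesis in the sense of the abstract works at the level of such gadgets: each gadget is verified to be n-probing secure on its own, and the compositionality property guarantees security of the combined program.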
Non-malleable secret sharing was recently proposed by Goyal and Kumar in the independent-tampering and joint-tampering models, for threshold secret sharing (STOC 2018) and for secret sharing with general access structures (CRYPTO 2018). The idea of making secret sharing non-malleable has received great attention and has by now generated many papers exploring new frontiers in this topic, such as multiple-time tampering and adding leakage resilience to the one-shot tampering model. The non-compartmentalized tampering model was first studied by Agrawal et al. (CRYPTO 2015) for non-malleability against permutations composed with bit-wise independent tampering, and was shown useful in constructing non-malleable string commitments. We initiate the study of leakage-resilient secret sharing in the non-compartmentalized model. The leakage adversary can corrupt several players and obtain their shares, as in normal secret sharing. The adversary can additionally apply arbitrary affine functions with bounded total output length to the full share vector and obtain the outputs as leakage. These two processes can either both be non-adaptive and independent of each other, or both be adaptive and depend on each other in an arbitrary order. We construct such leakage-resilient secret sharing schemes and achieve constant information ratio (the scheme for the non-adaptive adversary is near optimal). We then explore making this non-compartmentalized leakage-resilient secret sharing also non-malleable against tampering. We consider a tampering model in which the adversary can use the shares obtained from the corrupted players and the outputs of the global leakage functions to choose a tampering function from a tampering family F. We give two constructions of such leakage-resilient non-malleable secret sharing: one for the case where F is the family of bit-wise independent tampering functions and, respectively, one for the case where F consists of affine tampering functions.
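To make the non-compartmentalized leakage model concrete, here is a minimal Python sketch (illustrative only; the prime `P`, the threshold parameters, and the helper names are assumptions, not the paper's construction) of Shamir secret sharing together with an affine leakage function applied to the full share vector rather than to individual shares.

```python
import secrets

P = 2**61 - 1  # illustrative prime field

def shamir_share(secret: int, n: int, t: int) -> list[int]:
    """Split `secret` into n shares; any t shares reconstruct, fewer reveal nothing."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def affine_leakage(shares: list[int], A: list[list[int]], b: list[int]) -> list[int]:
    """Global leakage as an affine map of the *full* share vector: A @ shares + b (mod P).
    The leakage bound corresponds to the number of rows of A."""
    return [(sum(a_ij * s for a_ij, s in zip(row, shares)) + b_i) % P
            for row, b_i in zip(A, b)]

# Example: 5 shares with threshold 3, plus one affine leakage output on all shares.
shares = shamir_share(secret=42, n=5, t=3)
leak = affine_leakage(shares, A=[[1, 2, 3, 4, 5]], b=[7])
```

Note that plain Shamir sharing is not claimed to be secure against such global leakage; the sketch only illustrates the adversary's interface that the paper's schemes are designed to withstand.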
A recent trend in cryptography is to protect data and computation against various side-channel attacks. Dziembowski and Faust (TCC 2012) have proposed a general way to protect arbitrary circuits against any continual leakage assuming that: (i) the memory is divided into parts, which leak independently; (ii) the leakage in each observation is bounded; and (iii) the circuit has access to a leak-free component, which samples random orthogonal vectors. The pivotal element of their construction is a protocol for refreshing the so-called Leakage-Resilient Storage (LRS). In this note, we present a more efficient and simpler protocol for refreshing LRS under the same assumptions. Our solution needs O(n) operations to fully refresh the secret (in comparison to Ω(n^2) for the protocol of Dziembowski and Faust), where n is a security parameter that describes the maximal amount of leakage in each invocation of the refreshing procedure.
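The following minimal Python sketch assumes the inner-product instantiation of LRS from Dziembowski and Faust, where a secret s is stored as two vectors L and R held in independently leaking parts with <L, R> = s; the field size, function names, and the orthogonal-pair sampler are illustrative, and this is not the refreshing protocol from the note, only the storage format and the assumed leak-free component.

```python
import secrets

P = 2**31 - 1  # illustrative prime field

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) % P

def lrs_encode(secret: int, n: int) -> tuple[list[int], list[int]]:
    """Encode `secret` as vectors L, R (kept in independently leaking memory parts)
    such that <L, R> = secret."""
    L = [secrets.randbelow(P - 1) + 1 for _ in range(n)]       # nonzero entries
    R = [secrets.randbelow(P) for _ in range(n - 1)]
    partial = sum(l * r for l, r in zip(L, R)) % P
    last = ((secret - partial) * pow(L[-1], -1, P)) % P        # fix inner product
    return L, R + [last]

def lrs_decode(L, R):
    return inner(L, R)

def leak_free_orthogonal_pair(n: int) -> tuple[list[int], list[int]]:
    """Models the leak-free component: a fresh random pair (A, B) with <A, B> = 0."""
    A = [secrets.randbelow(P - 1) + 1 for _ in range(n)]
    B = [secrets.randbelow(P) for _ in range(n - 1)]
    last = (-sum(a * b for a, b in zip(A, B)) * pow(A[-1], -1, P)) % P
    return A, B + [last]

L, R = lrs_encode(42, n=4)
assert lrs_decode(L, R) == 42
```

A refreshing protocol then uses such orthogonal pairs to re-randomize L and R into fresh vectors encoding the same secret, so that leakage accumulated across invocations never reveals it.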
Federated learning (FL) is an emerging distributed learning paradigm with default client privacy, because clients can keep sensitive data on their devices and share only local training parameter updates with the federated server. However, recent studies reveal that gradient leakage in FL may compromise the privacy of client training data. This paper presents a gradient-leakage-resilient approach to privacy-preserving federated learning with per-training-example client differential privacy, coined Fed-CDP. It makes three original contributions. First, we identify three types of client gradient leakage threats in federated learning, even with encrypted client-server communications, and articulate when and why the conventional server-coordinated differential privacy approach, coined Fed-SDP, is insufficient to protect the privacy of the training data. Second, we introduce Fed-CDP, the per-example client differential privacy algorithm, provide a formal analysis of Fed-CDP with the $(\epsilon, \delta)$ differential privacy guarantee, and formally compare Fed-CDP and Fed-SDP in terms of privacy accounting. Third, we formally analyze the privacy-utility trade-off of providing the differential privacy guarantee with Fed-CDP and present a dynamic decay noise-injection policy to further improve the accuracy and resilience of Fed-CDP. We evaluate and compare Fed-CDP and Fed-CDP(decay) with Fed-SDP in terms of differential privacy guarantee and gradient leakage resilience over five benchmark datasets. The results show that Fed-CDP outperforms conventional Fed-SDP in terms of resilience to client gradient leakage while offering competitive accuracy in federated learning.
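As an illustration of per-example client-side differential privacy with a decaying noise schedule (the clipping bound `C`, noise multiplier `sigma0`, and decay rate below are hypothetical parameters, not the paper's settings), one local client update in the DP-SGD style might look like the following numpy sketch.

```python
import numpy as np

def clip_per_example(grads: np.ndarray, C: float) -> np.ndarray:
    """Clip each per-example gradient to L2 norm at most C (grads: [batch, dims])."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    return grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))

def dp_client_update(grads, C=1.0, sigma0=1.0, decay=0.98, round_t=0, rng=None):
    """Per-example clipping plus Gaussian noise; the noise scale decays across rounds."""
    rng = rng or np.random.default_rng()
    sigma_t = sigma0 * (decay ** round_t)            # dynamic decay policy (illustrative)
    clipped = clip_per_example(grads, C)
    noise = rng.normal(0.0, sigma_t * C, size=clipped.shape[1])
    return clipped.mean(axis=0) + noise / clipped.shape[0]

# Example: 32 per-example gradients of dimension 10, at communication round 5.
g = np.random.default_rng(0).normal(size=(32, 10))
update = dp_client_update(g, round_t=5)
```

The key contrast with a server-coordinated scheme is that clipping and noise are applied per training example on the client, before the update ever leaves the device.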
Spin qubits in silicon quantum dots are one of the most promising building blocks for large-scale quantum computers thanks to their high qubit density and compatibility with existing semiconductor technologies. High-fidelity single-qubit gates exceeding the threshold of error correction codes such as the surface code have been demonstrated, while two-qubit gates have reached 98% fidelity and are improving rapidly. However, there are other types of error, such as charge leakage and propagation, that may occur in quantum dot arrays and that cannot be corrected by quantum error correction codes, making them potentially damaging even when their probability is small. We propose a surface code architecture for silicon quantum dot spin qubits that is robust against leakage errors by incorporating multi-electron mediator dots. Charge leakage in the qubit dots is transferred to the mediator dots via charge relaxation processes and then removed using charge reservoirs attached to the mediators. A stabiliser-check cycle, optimised for our hardware, then removes the correlations between the residual physical errors. Through simulations we obtain the surface code threshold for charge leakage errors and show that, in our architecture, the damage due to charge leakage is reduced to a level similar to that of the usual depolarising gate noise. Spin leakage errors in our architecture are confined to the ancilla qubits and can be removed during quantum error correction via reinitialisation of the ancillae, which ensures the robustness of our architecture against spin leakage as well. Our use of elongated mediator dots creates space throughout the quantum dot array for charge reservoirs, measuring devices and control gates, providing scalability in the design.
How can we animate 3D characters from a movie script, or move robots by simply telling them what we would like them to do? How unstructured and complex can we make a sentence and still generate plausible movements from it? These are questions that need to be answered in the long run, as the field is still in its infancy. Inspired by these problems, we present a new technique for generating compositional actions that handles complex input sentences. Our output is a 3D pose sequence depicting the actions in the input sentence. We propose a hierarchical two-stream sequential model to explore a finer, joint-level mapping between natural language sentences and the 3D pose sequences corresponding to the given motion. We learn two manifold representations of the motion, one each for the upper-body and the lower-body movements. Our model can generate plausible pose sequences for short sentences describing single actions as well as long compositional sentences describing multiple sequential and superimposed actions. We evaluate our proposed model on the publicly available KIT Motion-Language Dataset containing 3D pose data with human-annotated sentences. Experimental results show that our model advances the state of the art in text-based motion synthesis by a margin of 50% in objective evaluations. Qualitative evaluations based on a user study indicate that our synthesized motions are perceived to be the closest to the ground-truth motion captures for both short and compositional sentences.
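To illustrate the two-stream idea (a sketch only; the layer sizes, the use of GRUs, and the class and parameter names are assumptions, not the paper's architecture), a sentence embedding could be decoded by two separate recurrent streams, one per body-half manifold, whose outputs are concatenated into full-body poses.

```python
import torch
import torch.nn as nn

class TwoStreamPoseDecoder(nn.Module):
    """Decode a sentence embedding into a pose sequence via two streams:
    one for upper-body joints and one for lower-body joints."""

    def __init__(self, sent_dim=512, hidden=256, upper_joints=36, lower_joints=27):
        super().__init__()
        self.upper = nn.GRU(sent_dim, hidden, batch_first=True)
        self.lower = nn.GRU(sent_dim, hidden, batch_first=True)
        self.to_upper = nn.Linear(hidden, upper_joints)   # upper-body manifold -> joints
        self.to_lower = nn.Linear(hidden, lower_joints)   # lower-body manifold -> joints

    def forward(self, sent_emb: torch.Tensor, seq_len: int) -> torch.Tensor:
        # Repeat the sentence embedding at every time step of the output sequence.
        x = sent_emb.unsqueeze(1).repeat(1, seq_len, 1)
        up, _ = self.upper(x)
        lo, _ = self.lower(x)
        # Concatenate the two streams into full-body poses: [batch, seq_len, joints].
        return torch.cat([self.to_upper(up), self.to_lower(lo)], dim=-1)

# Example: a batch of 2 sentence embeddings decoded into 64-frame pose sequences.
model = TwoStreamPoseDecoder()
poses = model(torch.randn(2, 512), seq_len=64)   # -> shape [2, 64, 63]
```

Splitting the motion into upper- and lower-body streams lets each manifold specialize, which is one way to realize the finer joint-level mapping the abstract describes.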