
Practical and Private (Deep) Learning without Sampling or Shuffling

Added by Shuang Song
Publication date: 2021
Language: English





We consider training models with differential privacy (DP) using mini-batch gradients. The existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements on exact sampling and shuffling can be hard to obtain in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) to amplified DP-SGD, while allowing for much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification. The code is available at https://github.com/google-research/federated/tree/master/dp_ftrl and https://github.com/google-research/DP-FTRL .
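DP-FTRL replaces privacy amplification with the tree-aggregation (binary-tree) mechanism for releasing noisy prefix sums of clipped gradients, so the noise in any prefix sum comes from only O(log T) tree nodes rather than growing with T. The sketch below is an illustrative NumPy version of that idea under our own assumptions; names such as `TreeAggregator`, `noise_stddev`, and `clip_norm` are ours, and the repositories linked above contain the actual implementation.

```python
import numpy as np

class TreeAggregator:
    """Streaming binary-tree mechanism for differentially private prefix sums.

    Each completed subtree node receives independent Gaussian noise once, and the
    prefix sum over the first t steps is read from the <= log2(t)+1 noisy nodes
    given by the binary representation of t.
    """

    def __init__(self, dim, noise_stddev, clip_norm=1.0, seed=0):
        self.dim = dim
        self.noise_stddev = noise_stddev
        self.clip_norm = clip_norm
        self.rng = np.random.default_rng(seed)
        self.exact = {}   # level -> exact sum of the pending subtree at that level
        self.noisy = {}   # level -> that subtree's sum plus fresh Gaussian noise

    def update(self, grad):
        # Clip each per-step gradient so its L2 contribution is bounded.
        grad = grad * min(1.0, self.clip_norm / (np.linalg.norm(grad) + 1e-12))
        level, subtree_sum = 0, grad
        # Two completed subtrees of the same level merge into their parent node.
        while level in self.exact:
            subtree_sum = subtree_sum + self.exact.pop(level)
            self.noisy.pop(level)
            level += 1
        self.exact[level] = subtree_sum
        self.noisy[level] = subtree_sum + self.rng.normal(0.0, self.noise_stddev, self.dim)

    def noisy_prefix_sum(self):
        # Sum of the noisy roots of the current dyadic decomposition.
        return sum(self.noisy.values(), np.zeros(self.dim))

# Illustrative DP-FTRL-style update: theta_t = theta_0 - lr * (noisy prefix sum).
dim, lr = 10, 0.1
theta0 = np.zeros(dim)
tree = TreeAggregator(dim, noise_stddev=0.5)
for step in range(100):
    grad = np.random.standard_normal(dim)   # stand-in for a real mini-batch gradient
    tree.update(grad)
    theta = theta0 - lr * tree.noisy_prefix_sum()
```

Because each gradient contributes to at most log T + 1 released nodes, the per-step sensitivity (and hence the noise needed for a given privacy level) grows only polylogarithmically in the number of steps, without requiring sampled or shuffled data access.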




Related research

107 - Lichao Sun, Yingbo Zhou, Ji Wang 2019
Privacy-preserving deep learning is crucial for deploying deep neural network based solutions, especially when the model works on data that contains sensitive information. Most privacy-preserving methods lead to undesirable performance degradation. Ensemble learning is an effective way to improve model performance. In this work, we propose a new method for teacher ensembles that uses more informative network outputs under differentially private stochastic gradient descent and provides provable privacy guarantees. Our method employs knowledge distillation and hint learning on intermediate representations to facilitate the training of the student model. Additionally, we propose a simple weighted ensemble scheme that works more robustly across different teaching settings. Experimental results on three common image benchmark datasets (i.e., CIFAR-10, MNIST, and SVHN) demonstrate that our approach outperforms previous state-of-the-art methods in both performance and privacy budget.
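As a rough illustration of the distillation side of this approach, the sketch below aggregates temperature-softened teacher logits with per-teacher weights and scores a student against the aggregated soft targets. It is a generic sketch under our own assumptions (the function names, temperature, and weighting rule are illustrative); the paper's DP-SGD training, hint learning on intermediate representations, and privacy accounting are not reproduced.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def weighted_ensemble_targets(teacher_logits, weights, temperature=2.0):
    """teacher_logits: (num_teachers, batch, classes); weights: (num_teachers,).

    Each teacher's logits are softened with a temperature and the resulting
    distributions are combined using normalized per-teacher weights.
    """
    probs = softmax(np.asarray(teacher_logits), temperature)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return np.tensordot(weights, probs, axes=1)        # (batch, classes)

def distillation_loss(student_logits, soft_targets, temperature=2.0):
    """Cross-entropy of the student's softened predictions against the ensemble targets."""
    log_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return -np.mean(np.sum(soft_targets * log_student, axis=-1))

# Toy usage: 3 teachers, batch of 4, 10 classes.
rng = np.random.default_rng(0)
targets = weighted_ensemble_targets(rng.normal(size=(3, 4, 10)), weights=[0.5, 0.3, 0.2])
loss = distillation_loss(rng.normal(size=(4, 10)), targets)
```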
336 - Lei Yu, Ling Liu, Calton Pu 2019
Deep learning techniques based on neural networks have shown significant success in a wide range of AI tasks. Large-scale training datasets are one of the critical factors for their success. However, when the training datasets are crowdsourced from individuals and contain sensitive information, the model parameters may encode private information and bear the risks of privacy leakage. The recent growing trend of sharing and publishing pre-trained models further aggravates such privacy risks. To tackle this problem, we propose a differentially private approach for training neural networks. Our approach includes several new techniques for optimizing both privacy loss and model accuracy. We employ a generalization of differential privacy called concentrated differential privacy (CDP), with both a formal and refined privacy loss analysis on two different data batching methods. We implement a dynamic privacy budget allocator over the course of training to improve model accuracy. Extensive experiments demonstrate that our approach effectively improves privacy loss accounting, training efficiency, and model quality under a given privacy budget.
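The sketch below illustrates one simple way a dynamic privacy budget allocator can be realized: the Gaussian noise multiplier decays as training progresses, so later epochs perturb the clipped gradients less. The exponential schedule and the parameter values are our own assumptions for illustration; the paper's concrete allocator and its CDP accounting are not reproduced here.

```python
import numpy as np

def noise_multiplier(epoch, initial_sigma=8.0, decay_rate=0.05):
    """Exponentially decaying noise multiplier (illustrative schedule only)."""
    return initial_sigma * np.exp(-decay_rate * epoch)

def privatize_gradient(grad, epoch, clip_norm=1.0, rng=None):
    """Clip the gradient, then add Gaussian noise scaled by the current epoch's sigma."""
    rng = rng or np.random.default_rng(0)
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    sigma = noise_multiplier(epoch) * clip_norm
    return grad + rng.normal(0.0, sigma, grad.shape)

# Later epochs receive less noise, i.e. more of the remaining budget is spent there.
for epoch in [0, 10, 50]:
    print(epoch, round(noise_multiplier(epoch), 3))
```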
In this paper we present Percival, a browser-embedded, lightweight, deep learning-powered ad blocker. Percival embeds itself within the browser's image rendering pipeline, which makes it possible to intercept every image obtained during page execution and to perform blocking by applying machine-learning image classification to flag potential ads. Our implementation inside both the Chromium and Brave browsers shows only a minor rendering performance overhead of 4.55%, demonstrating the feasibility of deploying traditionally heavy models (i.e., deep neural networks) inside the critical path of a browser's rendering engine. We show that our image-based ad blocker can replicate EasyList rules with an accuracy of 96.76%. To show the versatility of Percival's approach, we present case studies demonstrating that Percival 1) performs surprisingly well on ads in languages other than English, and 2) is also effective at blocking first-party Facebook ads, which have presented issues for other ad blockers. Percival proves that image-based perceptual ad blocking is an attractive complement to today's dominant approach of block lists.
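For intuition, the following sketch shows only the shape of the blocking decision: a decoded image is scored by a classifier and dropped when its predicted ad probability crosses a threshold. Percival itself runs natively inside the browser's rendering pipeline with a trained CNN; here the model is a random-weight placeholder and every name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32 * 32 * 3, 2))   # placeholder weights; a real deployment
b = np.zeros(2)                          # would load a trained image classifier

def ad_probability(image):
    """image: (32, 32, 3) array in [0, 1]; returns P(ad) from a linear stand-in model."""
    logits = image.reshape(-1) @ W + b
    exp = np.exp(logits - logits.max())
    return (exp / exp.sum())[1]

def should_block(image, threshold=0.5):
    """Drop the image from the rendering pipeline if it is classified as an ad."""
    return ad_probability(image) >= threshold

# Toy usage on a random "decoded image".
print(should_block(rng.random((32, 32, 3))))
```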
As deep learning systems are widely adopted in safety- and security-critical applications, such as autonomous vehicles and banking systems, malicious faults and attacks become a tremendous concern, which could potentially lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), using a laser injection technique on embedded systems. In particular, our exploratory study targets four widely used activation functions in DNN development, the main building blocks of DNNs that create non-linear behavior -- ReLU, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to achieve a misclassification by injecting faults into the hidden layer of the network. Such a result can have practical implications for real-world applications, where faults can be introduced by simpler means (such as altering the supply voltage).
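The following toy simulation (in software, with placeholder random weights; it has no relation to the paper's laser setup) shows why this attack surface matters: forcing a single hidden-layer activation to a faulty value can change the predicted class of a small network.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # placeholder weights for a tiny
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)    # 8 -> 16 -> 3 classifier

def predict(x, faulty_unit=None, fault_value=50.0):
    hidden = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    if faulty_unit is not None:
        hidden[faulty_unit] = fault_value          # simulated fault on one activation
    return int(np.argmax(hidden @ W2 + b2))

x = rng.normal(size=8)
print("clean prediction:  ", predict(x))
print("faulted prediction:", predict(x, faulty_unit=3))
```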
For asynchronous binary agreement (ABA) with optimal resilience, prior private-setup free protocols (Cachin et al., CCS 2002; Kokoris-Kogias et al., CCS 2020) incur $O(\lambda n^4)$ bits and $O(n^3)$ messages; for asynchronous multi-valued agreement with external validity (VBA), Abraham et al. [2] very recently gave the first elegant construction with $O(n^3)$ messages, relying on public key infrastructure (PKI), but still costs $O(\lambda n^3 \log n)$ bits. We for the first time close the remaining efficiency gap, i.e., reducing their communication to $O(\lambda n^3)$ bits on average. At the core of our design, we give a systematic treatment of reasonably fair common randomness:
- We construct a reasonably fair common coin (Canetti and Rabin, STOC 1993) in the asynchronous setting with PKI instead of private setup, using only $O(\lambda n^3)$ bits and constant asynchronous rounds. The common coin protocol ensures that with at least 1/3 probability, all honest parties can output a common bit that is as if uniformly sampled, rendering a more efficient private-setup free ABA with expected $O(\lambda n^3)$ bit communication and constant running time.
- More interestingly, we lift our reasonably fair common coin protocol to attain perfect agreement without incurring any extra factor in the asymptotic complexities, resulting in an efficient reasonably fair leader election primitive pluggable in all existing VBA protocols, thus reducing the communication of private-setup free VBA to expected $O(\lambda n^3)$ bits while preserving expected constant running time.
- Along the way, we improve an important building block, asynchronous verifiable secret sharing, by presenting a private-setup free implementation costing only $O(\lambda n^2)$ bits in the PKI setting. By contrast, prior art with the same complexity (Backes et al., CT-RSA 2013) has to rely on a private setup.
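For context, ABA/VBA protocols consume a common coin through a very simple interface: every party that queries the same round identifier should receive the same (ideally uniform) bit. The sketch below implements only that idealized functionality as a prototyping stand-in; the paper's actual contribution, realizing a reasonably fair coin from PKI without private setup, is a cryptographic protocol that is not reproduced here.

```python
import hashlib
import secrets

class IdealCommonCoin:
    """Trusted-functionality stand-in: one hidden seed per protocol instance."""

    def __init__(self):
        # In the ideal model this seed is unknown to the adversary; realizing
        # something comparable from PKI alone is exactly the hard part.
        self.seed = secrets.token_bytes(32)

    def flip(self, round_id: int) -> int:
        digest = hashlib.sha256(self.seed + round_id.to_bytes(8, "big")).digest()
        return digest[0] & 1   # every party querying this round sees the same bit

coin = IdealCommonCoin()
assert coin.flip(7) == coin.flip(7)   # all honest parties agree on round 7's coin
```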
