
Decentralized and Secure Generation Maintenance with Differential Privacy

Published by: Paritosh Ramanan
Publication date: 2020
Research field: Informatics Engineering
Language: English





Decentralized methods are gaining popularity for data-driven models in power systems, as they offer significant computational scalability while guaranteeing full data ownership by utility stakeholders. However, decentralized methods still require sharing information about network flow estimates over public-facing communication channels, which raises privacy concerns. In this paper we propose a differential-privacy-driven approach, geared towards decentralized formulations of mixed-integer operations and maintenance optimization problems, that protects network flow estimates. We prove strong privacy guarantees by leveraging the linear relationship between the phase angles and the flows. To address the challenges associated with the mixed-integer and dynamic nature of the problem, we introduce an exponential moving average based consensus mechanism to enhance convergence, coupled with a control chart based convergence criterion to improve stability. Our experimental results on the IEEE 118-bus case demonstrate that our privacy-preserving approach yields solution qualities on par with benchmark methods without differential privacy. To demonstrate the computational robustness of our method, we conduct experiments over a wide range of noise levels and operational scenarios.
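
To make the consensus and stopping machinery concrete, here is a minimal Python sketch of an exponential-moving-average consensus update, a Shewhart-style control-chart stopping rule, and a Laplace-perturbed flow release. All function names, the smoothing weight `alpha`, the control limits, and the noise calibration are illustrative assumptions; the paper's exact mechanism and sensitivity analysis are not reproduced here.

```python
import numpy as np

def ema_consensus(prev_consensus, new_estimates, alpha=0.3):
    """Smooth the shared network-flow consensus with an exponential moving
    average to damp oscillations caused by the mixed-integer subproblems.
    `alpha` is an assumed smoothing weight, not a value from the paper."""
    neighborhood_mean = np.mean(new_estimates, axis=0)
    return alpha * neighborhood_mean + (1.0 - alpha) * prev_consensus

def control_chart_converged(residual_history, window=20, k_sigma=3.0):
    """Declare convergence when the recent residuals all stay inside
    +/- k_sigma control limits around the window mean, i.e., the
    iteration is statistically 'in control' (Shewhart-style chart)."""
    if len(residual_history) < window:
        return False
    recent = np.asarray(residual_history[-window:])
    mu, sigma = recent.mean(), recent.std()
    return bool(np.all(np.abs(recent - mu) <= k_sigma * sigma))

def privatize_flows(theta, B_line, epsilon, sensitivity):
    """Release flows f = B_line @ theta (the linear phase-angle/flow map)
    with Laplace noise calibrated to sensitivity/epsilon. Illustrative
    Laplace mechanism; the paper's calibration may differ."""
    flows = B_line @ theta
    noise = np.random.laplace(scale=sensitivity / epsilon, size=flows.shape)
    return flows + noise
```
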


Read also

Privacy concerns with sensitive data are receiving increasing attention. In this paper, we study local differential privacy (LDP) in interactive decentralized optimization. By constructing random local aggregators, we propose a framework to amplify LDP by a constant. We take Alternating Direction Method of Multipliers (ADMM), and decentralized gradient descent as two concrete examples, where experiments support our theory. In an asymptotic view, we address the following question: Under LDP, is it possible to design a distributed private minimizer for arbitrary closed convex constraints with utility loss not explicitly dependent on dimensionality? As an affiliated result, we also show that with merely linear secret sharing, information theoretic privacy is achievable for bounded colluding agents.
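
As a rough illustration of where local noise enters such a scheme, the sketch below perturbs each agent's outgoing state before neighbor averaging in decentralized gradient descent. The random local aggregators that yield the paper's constant amplification are not reproduced; the clipping bound, noise calibration, and step size are assumptions.

```python
import numpy as np

def ldp_decentralized_gd_step(x, grads, W, epsilon, clip=1.0, step=0.1,
                              rng=None):
    """One synchronous step of decentralized gradient descent where each
    agent clips and Laplace-perturbs the state it broadcasts (local DP).
    W is a doubly stochastic mixing matrix over the communication graph;
    x is (n_agents, dim). Calibration here is illustrative only."""
    rng = rng or np.random.default_rng()
    n, d = x.shape
    # Clip each agent's state so the broadcast has bounded sensitivity.
    norms = np.maximum(1.0, np.linalg.norm(x, axis=1, keepdims=True) / clip)
    noisy = x / norms + rng.laplace(scale=2.0 * clip / epsilon, size=(n, d))
    # Neighbors average the noisy states, then take a local gradient step.
    return W @ noisy - step * grads
```
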
Unit Commitment (UC) is a fundamental problem in power system operations. When coupled with generation maintenance, the joint optimization problem poses significant computational challenges due to coupling constraints linking maintenance and UC decisions. Obviously, these challenges grow with the size of the network. With the introduction of sensors for monitoring generator health and condition-based maintenance (CBM), these challenges have been magnified. ADMM-based decentralized methods have shown promise in solving large-scale UC problems, especially in vertically integrated power systems. However, in their current form, these methods fail to deliver similar computational performance and scalability when considering the joint UC and CBM problem. This paper provides a novel decentralized optimization framework for solving large-scale, joint UC and CBM problems. Our approach relies on the novel use of the subgradient method to temporally decouple various subproblems of the ADMM-based formulation of the joint problem along the maintenance horizon. By effectively utilizing multithreading, our decentralized subgradient approach delivers superior computational performance and eliminates the need to move sensor data, thereby alleviating privacy and security concerns. Using experiments on large-scale test cases, we show that our framework can provide a speedup of up to 50x compared to various state-of-the-art benchmarks without compromising on solution quality.
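
A minimal sketch of the decoupling idea, under assumptions: the constraints linking maintenance decisions across the horizon are dualized, the per-period subproblems run in parallel threads, and a diminishing-step subgradient update adjusts the multipliers. `solve_uc_period` is a hypothetical stub standing in for a real MIP solve.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def solve_uc_period(t, lam_t):
    """Stub for the period-t UC subproblem: in the real method this would
    call a MIP solver and return the observed violation of the dualized
    coupling constraint. Here it returns a toy residual for illustration."""
    return np.sin(t) - lam_t  # placeholder, not real UC output

def subgradient_decoupling(lam, periods, step=1.0, iters=50):
    """Temporally decouple the horizon: solve all per-period subproblems
    in parallel threads, then take a diminishing-step subgradient ascent
    on the multipliers, projected onto the nonnegative orthant (for
    dualized inequality constraints)."""
    for k in range(1, iters + 1):
        with ThreadPoolExecutor() as pool:
            g = list(pool.map(lambda t: solve_uc_period(t, lam[t]), periods))
        lam = np.maximum(0.0, lam + (step / k) * np.asarray(g))
    return lam

# Usage: a 24-period maintenance horizon with zero-initialized multipliers.
lam = subgradient_decoupling(np.zeros(24), range(24))
```
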
We propose and experimentally evaluate a novel secure aggregation algorithm targeted at cross-organizational federated learning applications with a fixed set of participating learners. Our solution organizes learners in a chain and encrypts all traffic to reduce the controller of the aggregation to a mere message broker. We show that our algorithm scales better and is less resource demanding than existing solutions, while being easy to implement on constrained platforms. With 36 nodes, our method outperforms state-of-the-art secure aggregation by 70x and 56x, with and without failover, respectively.
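
The chain topology can be illustrated in a few lines of Python: each learner decrypts the running sum with the key it shares with its predecessor, adds its own update, and re-encrypts for its successor, so the broker only relays opaque ciphertexts. The use of Fernet from the `cryptography` package and the pre-shared per-hop keys are assumptions; the paper's cipher, key setup, and failover handling may differ.

```python
import pickle
import numpy as np
from cryptography.fernet import Fernet

def chain_aggregate(updates, hop_keys):
    """Aggregate model updates along a chain of learners. Learner i
    decrypts the running sum with the key shared with learner i-1, adds
    its update, and re-encrypts for learner i+1; a broker relaying these
    blobs never sees a plaintext update."""
    blob = Fernet(hop_keys[0]).encrypt(pickle.dumps(np.zeros_like(updates[0])))
    for i, u in enumerate(updates):
        partial = pickle.loads(Fernet(hop_keys[i]).decrypt(blob))
        out_key = hop_keys[i + 1] if i + 1 < len(hop_keys) else hop_keys[i]
        blob = Fernet(out_key).encrypt(pickle.dumps(partial + u))
    return pickle.loads(Fernet(hop_keys[-1]).decrypt(blob))

# Usage: one symmetric key per hop (pairwise pre-shared in practice).
keys = [Fernet.generate_key() for _ in range(4)]
updates = [np.ones(3) for _ in range(4)]
print(chain_aggregate(updates, keys))  # -> [4. 4. 4.]
```
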
We consider the problem of reinforcing federated learning with formal privacy guarantees. We propose to employ Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, to provide sharper privacy loss bounds. We adapt the Bayesian privacy accounting method to the federated setting and suggest multiple improvements for more efficient privacy budgeting at different levels. Our experiments show significant advantage over the state-of-the-art differential privacy bounds for federated learning on image classification tasks, including a medical application, bringing the privacy budget below 1 at the client level, and below 0.1 at the instance level. Lower amounts of noise also benefit the model accuracy and reduce the number of communication rounds.
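
The Bayesian accountant itself requires samples from the data distribution and is not reproduced here; the sketch below shows only the standard client-level Gaussian aggregation step whose noise such an accountant would budget. Clipping norm and noise multiplier are illustrative assumptions.

```python
import numpy as np

def dp_fedavg_round(client_updates, clip=1.0, noise_mult=1.1, rng=None):
    """One federated-averaging round with client-level DP: clip each
    client's update to L2 norm `clip`, average, and add Gaussian noise
    with scale noise_mult * clip / n. This is the plain Gaussian
    mechanism; the paper's Bayesian accountant changes how this noise is
    budgeted, not the mechanism itself."""
    rng = rng or np.random.default_rng()
    n = len(client_updates)
    clipped = [u / max(1.0, np.linalg.norm(u) / clip) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    return mean + rng.normal(scale=noise_mult * clip / n, size=mean.shape)
```
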
Zhiqi Bu, Jinshuo Dong, Qi Long (2019)
Deep learning models are often trained on datasets that contain sensitive information such as individuals' shopping transactions, personal contacts, and medical records. An increasingly important line of work therefore has sought to train neural networks subject to privacy constraints that are specified by differential privacy or its divergence-based relaxations. These privacy definitions, however, have weaknesses in handling certain important primitives (composition and subsampling), thereby giving loose or complicated privacy analyses of training neural networks. In this paper, we consider a recently proposed privacy definition termed $f$-differential privacy [18] for a refined privacy analysis of training neural networks. Leveraging the appealing properties of $f$-differential privacy in handling composition and subsampling, this paper derives analytically tractable expressions for the privacy guarantees of both stochastic gradient descent and Adam used in training deep neural networks, without the need to develop sophisticated techniques as [3] did. Our results demonstrate that the $f$-differential privacy framework allows for a new privacy analysis that improves on the prior analysis [3], which in turn suggests tuning certain parameters of neural networks for a better prediction accuracy without violating the privacy budget. These theoretically derived improvements are confirmed by our experiments in a range of tasks in image classification, text classification, and recommender systems. Python code to calculate the privacy cost for these experiments is publicly available in the TensorFlow Privacy library.
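
A hedged sketch of the kind of closed-form accounting this enables: in the Gaussian differential privacy (GDP) line of work, T steps of DP-SGD with sampling rate p and noise multiplier sigma are approximately mu-GDP with mu = p * sqrt(T * (exp(1/sigma^2) - 1)) (a central-limit-theorem approximation; treat the exact expression as an assumption), and a mu-GDP guarantee converts to (epsilon, delta)-DP via the trade-off duality below. This is a self-contained computation, not the TensorFlow Privacy API.

```python
import math

def mu_gdp_dpsgd(p, sigma, T):
    """CLT approximation of the GDP parameter for T steps of DP-SGD with
    sampling rate p and noise multiplier sigma (asymptotic, not exact)."""
    return p * math.sqrt(T * (math.exp(1.0 / sigma**2) - 1.0))

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def delta_from_mu(eps, mu):
    """Tight (eps, delta) point on the mu-GDP trade-off curve:
    delta(eps) = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2)."""
    return Phi(-eps / mu + mu / 2.0) - math.exp(eps) * Phi(-eps / mu - mu / 2.0)

# Example with assumed MNIST-like values: batch 256 of 60000, 15 epochs.
mu = mu_gdp_dpsgd(p=256 / 60000, sigma=1.1, T=15 * (60000 // 256))
print(mu, delta_from_mu(eps=2.0, mu=mu))
```
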