Decentralized methods are gaining popularity for data-driven models in power systems, as they offer significant computational scalability while guaranteeing full data ownership by utility stakeholders. However, decentralized methods still require sharing information about network flow estimates over public-facing communication channels, which raises privacy concerns. In this paper, we propose a differential-privacy-driven approach, geared towards decentralized formulations of mixed-integer operations and maintenance optimization problems, that protects network flow estimates. We prove strong privacy guarantees by leveraging the linear relationship between the phase angles and the flows. To address the challenges posed by the mixed-integer and dynamic nature of the problem, we introduce an exponential-moving-average-based consensus mechanism to enhance convergence, coupled with a control-chart-based convergence criterion to improve stability. Our experimental results on the IEEE 118-bus case demonstrate that our privacy-preserving approach yields solution quality on par with benchmark methods that do not use differential privacy. To demonstrate the computational robustness of our method, we conduct experiments over a wide range of noise levels and operational scenarios.
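As a concrete illustration only (a minimal sketch, not the paper's actual implementation), the Python snippet below combines the three ingredients named above: Laplace noise calibrated to the sensitivity of the linear phase-angle-to-flow map, an exponential-moving-average consensus update on the shared flow estimates, and a Shewhart-style control-chart convergence test. The branch susceptance matrix `B_branch`, the sensitivity bound, and every parameter value here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def privatize_flows(theta, B_branch, sensitivity, epsilon):
    """Release DC flows f = B_branch @ theta under epsilon-DP.

    Because the flow is a linear function of the phase angles, a
    bound on how much one protected input can move the flows gives
    the L1 sensitivity, and Laplace noise of scale sensitivity/epsilon
    suffices (illustrative calibration, not the paper's proof).
    """
    flows = B_branch @ theta
    noise = rng.laplace(scale=sensitivity / epsilon, size=flows.shape)
    return flows + noise


def ema_consensus(prev_estimate, noisy_flows, alpha=0.2):
    """Exponential moving average: smooth the noisy shared flow
    estimates instead of trusting the latest noisy sample."""
    return alpha * noisy_flows + (1.0 - alpha) * prev_estimate


def control_chart_converged(residuals, window=20, k=3.0):
    """Declare convergence when the latest residual stays within
    k standard deviations of the rolling mean of recent residuals."""
    if len(residuals) < window:
        return False
    recent = np.asarray(residuals[-window:])
    center, spread = recent.mean(), recent.std()
    return abs(residuals[-1] - center) <= k * spread + 1e-12


# Illustrative use on a made-up 3-bus, 2-branch system:
B = np.array([[10.0, -10.0, 0.0], [0.0, 5.0, -5.0]])
theta = np.array([0.0, -0.05, -0.10])
f_noisy = privatize_flows(theta, B, sensitivity=1.0, epsilon=0.5)
```

The EMA step matters because each shared estimate carries fresh Laplace noise; averaging damps that noise across iterations, which is what makes a statistical control-chart test on the residuals meaningful in the first place.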