Building the conditional probability distribution of wind power forecast errors benefits both wind farms (WFs) and independent system operators (ISOs). Establishing the joint probability distribution of the wind power and corresponding forecast data of spatially correlated WFs is the foundation for deriving the conditional distribution. Traditional parameter estimation methods for probability distributions require collecting the historical data of all WFs. However, in multi-regional interconnected grids, neither regional ISOs nor WFs can collect the raw data of WFs in other regions due to privacy or competition concerns. Therefore, based on the Gaussian mixture model, this paper first proposes a privacy-preserving distributed expectation-maximization algorithm to estimate the parameters of the joint probability distribution. The algorithm consists of two original methods: (1) a privacy-preserving distributed summation algorithm and (2) a privacy-preserving distributed inner product algorithm. We then derive each WF's conditional probability distribution of forecast error from the joint one. With the proposed algorithms, WFs need only local computation and privacy-preserving communication with neighbors to complete the whole parameter estimation. The algorithms are verified on the Wind Integration Data Set published by NREL.
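The abstract does not specify how the privacy-preserving distributed summation works; a common building block for such protocols is additive secret sharing, in which each WF splits its local value into random shares so that no single party ever sees another's raw data, yet the shares sum to the true total. A minimal sketch under that assumption (the modulus, function names, and simulated-party structure are illustrative, not the paper's protocol):

```python
import random

# Large prime modulus; assumes local statistics are encoded as
# non-negative integers well below PRIME (e.g. fixed-point encoded).
PRIME = 2**61 - 1

def share(value, n_parties):
    """Split `value` into n additive shares that sum to `value` mod PRIME.

    Any subset of fewer than n shares is uniformly random and
    reveals nothing about `value`.
    """
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def private_sum(local_values):
    """Simulate n parties jointly computing the sum of their private values.

    Each party distributes one share to every party; each party then
    publishes only the sum of the shares it holds, so individual
    inputs stay hidden while the total is recovered exactly.
    """
    n = len(local_values)
    # all_shares[i][j] = share j of party i's private value
    all_shares = [share(v, n) for v in local_values]
    # party j aggregates the j-th share received from every party
    partials = [sum(all_shares[i][j] for i in range(n)) % PRIME
                for j in range(n)]
    return sum(partials) % PRIME

# e.g. three WFs holding local sufficient statistics 10, 20, 30
print(private_sum([10, 20, 30]))  # -> 60
```

In an EM setting, such a primitive would let WFs aggregate local sufficient statistics (responsibilities, weighted sums) in the M-step without disclosing per-farm data; a production protocol would also need secure channels and a cryptographic random source rather than `random`.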
Due to the uncertainty of distributed wind generations (DWGs), a better understanding of the probability distributions (PDs) of their wind power forecast errors (WPFEs) can help market participants (MPs) who own DWGs perform better during trading. …
We consider the critical problem of distributed learning over data while keeping it private from the computational servers. The state-of-the-art approaches to this problem rely on quantizing the data into a finite field, so that cryptographic …
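Quantizing real-valued data into a finite field, as this line of work describes, is typically done with a fixed-point encoding in which negative values wrap into the upper half of the field. A minimal sketch of that mapping (the field size and scaling factor are illustrative assumptions, not parameters from the paper):

```python
import numpy as np

PRIME = 2**31 - 1   # assumed field size (a Mersenne prime)
SCALE = 2**16       # assumed fixed-point scaling factor

def quantize(x):
    """Map real values into GF(PRIME) via fixed-point rounding.

    Negative values are represented by wrapping modulo PRIME,
    i.e. they land in the upper half of the field.
    """
    q = np.round(np.asarray(x, dtype=float) * SCALE).astype(np.int64)
    return q % PRIME

def dequantize(q):
    """Invert the mapping: field elements above PRIME // 2 are
    interpreted as negative, then rescaled back to reals."""
    q = np.asarray(q, dtype=np.int64)
    signed = np.where(q > PRIME // 2, q - PRIME, q)
    return signed / SCALE

x = [1.5, -2.25]
print(dequantize(quantize(x)))  # recovers [1.5, -2.25] up to rounding
```

The trade-off this encoding introduces is between precision (larger `SCALE`) and headroom for overflow-free arithmetic inside the field (larger `PRIME` relative to intermediate values), which is why field size is a central design parameter in these schemes.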
Artificial neural networks have achieved unprecedented success in the medical domain. This success depends on the availability of massive and representative datasets. However, data collection is often prevented by privacy concerns, and people want to …
This paper investigates the capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks. First, we propose to quantitatively measure the trade-off between model accuracy and privacy losses incurred by …
How to train a machine learning model while keeping the data private and secure? We present CodedPrivateML, a fast and scalable approach to this critical problem. CodedPrivateML keeps both the data and the model information-theoretically private, …