Non-intrusive load monitoring (NILM) is essential for understanding customers' power consumption patterns and can support applications such as carbon emission reduction and energy conservation. Training NILM models requires massive load data covering different types of appliances. However, local data owners may face inadequate load data and the risk of breaching power consumers' privacy during NILM model training. To prevent such potential risks, this paper proposes a novel NILM method based on federated learning (FL), named Fed-NILM. In Fed-NILM, local model parameters instead of local load data are shared among multiple data owners, and the global model is obtained by a weighted average of these parameters. Experiments based on two measured load datasets are conducted to explore the generalization ability of Fed-NILM. In addition, Fed-NILM is compared with locally trained NILM models and a centrally trained NILM model. The experimental results show that Fed-NILM has superior scalability and convergence: it outperforms the locally trained NILM models operated by individual data owners and approximates the centrally trained NILM model, which is trained on the entire load dataset without privacy protection. The proposed Fed-NILM significantly improves the co-modeling capability of local data owners while protecting power consumers' privacy.
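The parameter-averaging step described above can be illustrated with a short sketch. The following is a minimal, illustrative FedAvg-style aggregation, assuming each data owner uploads its local NILM model parameters as a dictionary of NumPy arrays together with its local sample count; the function and variable names are hypothetical, not taken from the Fed-NILM implementation.

```python
# Illustrative sketch of weighted parameter averaging (FedAvg-style aggregation).
import numpy as np

def aggregate_fed_nilm(local_params, local_sizes):
    """Weighted average of local model parameters.

    local_params: list of dicts {layer_name: np.ndarray}, one per data owner.
    local_sizes:  list of ints, number of local training samples per owner.
    Returns the global parameter dict shared back to all owners.
    """
    total = float(sum(local_sizes))
    global_params = {}
    for name in local_params[0]:
        # Each owner's contribution is weighted by its share of the total data.
        global_params[name] = sum(
            (n / total) * params[name]
            for params, n in zip(local_params, local_sizes)
        )
    return global_params

# Toy example: two owners with identical layer shapes but different data volumes.
owner_a = {"w": np.ones((2, 2)), "b": np.zeros(2)}
owner_b = {"w": np.full((2, 2), 3.0), "b": np.ones(2)}
print(aggregate_fed_nilm([owner_a, owner_b], [100, 300]))  # w -> 2.5, b -> 0.75
```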
Non-intrusive load monitoring (NILM), which usually utilizes machine learning methods and is effective in disaggregating household-level smart meter readings into appliance-level consumption, can help analyze the electricity consumption behaviour of users and enable practical smart energy and smart grid applications. However, smart meters are privately owned and distributed, which makes real-world applications of NILM challenging. To this end, this paper develops a distributed and privacy-preserving federated deep learning framework for NILM (FederatedNILM), which combines federated learning with a state-of-the-art deep learning architecture to conduct NILM for the classification of typical states of household appliances. Through extensive comparative experiments, the effectiveness of the proposed FederatedNILM framework is demonstrated.
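As a rough illustration of what one client-side round in such a framework could look like, the sketch below assumes PyTorch and a toy appliance-state classifier; the architecture, window length, and three-state labels are illustrative assumptions rather than the paper's actual FederatedNILM configuration.

```python
# Hypothetical client-side update for appliance-state classification in a federated round.
import torch
import torch.nn as nn

class ApplianceStateNet(nn.Module):
    def __init__(self, window_len=99, n_states=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * window_len, n_states),
        )

    def forward(self, x):          # x: (batch, 1, window_len) mains window
        return self.net(x)         # logits over appliance states (e.g., off / standby / on)

def local_update(model, loader, epochs=1, lr=1e-3):
    """One federated round on a client's private smart-meter data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:        # y: integer state label per window
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Only parameters leave the client; raw load data never does.
    return {k: v.detach().clone() for k, v in model.state_dict().items()}
```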
Energy disaggregation, also known as non-intrusive load monitoring (NILM), addresses the problem of separating whole-home electricity usage into appliance-specific individual consumption, and is a typical application of data analysis. NILM aims to help households understand how energy is used and, consequently, how to manage it effectively, thereby promoting energy efficiency, which is considered one of the twin pillars of sustainable energy policy (i.e., energy efficiency and renewable energy). Although NILM is unidentifiable, it is widely believed that the problem can be addressed by data science. Most existing approaches tackle energy disaggregation with conventional techniques such as sparse coding, non-negative matrix factorization, and hidden Markov models. Recent advances reveal that deep neural networks (DNNs) can achieve favorable performance for NILM since they can inherently learn the discriminative signatures of different appliances. In this paper, we propose a novel DNN-based method named adversarial energy disaggregation (AED). We introduce the idea of adversarial learning into NILM, which is new for the energy disaggregation task: our method trains a generator and multiple discriminators in an adversarial fashion. The proposed method not only learns shared representations across different appliances but also captures the specific multimode structures of each appliance. Extensive experiments on real-world datasets verify that our method achieves new state-of-the-art performance.
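The adversarial training scheme sketched in the abstract (one shared generator, multiple appliance-specific discriminators) could look roughly like the following PyTorch fragment; the loss weighting, tensor shapes, and function names are illustrative assumptions, not the paper's exact AED procedure.

```python
# Hypothetical adversarial training step: a generator maps the aggregate (mains)
# window to per-appliance estimates, and one discriminator per appliance scores
# real versus generated traces.
import torch
import torch.nn as nn

def adversarial_step(gen, discs, mains, targets, g_opt, d_opts, lam=0.1):
    """One update with a reconstruction loss for the generator plus an
    adversarial term from each appliance-specific discriminator."""
    bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
    fakes = gen(mains)                           # (batch, n_appliances, window)

    # 1) Update each discriminator on real vs. generated appliance traces.
    for i, (disc, d_opt) in enumerate(zip(discs, d_opts)):
        real_logits = disc(targets[:, i])
        fake_logits = disc(fakes[:, i].detach())
        d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
                 bce(fake_logits, torch.zeros_like(fake_logits))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Update the generator: match the ground truth and fool the discriminators.
    g_loss = mse(fakes, targets)
    for i, disc in enumerate(discs):
        score = disc(fakes[:, i])
        g_loss = g_loss + lam * bce(score, torch.ones_like(score))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return g_loss.item()
```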
The federated learning setting has a central server coordinating the training of a model on a network of devices. One of the challenges is variable training performance when the dataset has a class imbalance. In this paper, we address this by introducing a new loss function called Fed-Focal Loss. We propose to address the class imbalance by reshaping the cross-entropy loss so that it down-weights the loss assigned to well-classified examples, along the lines of focal loss. Additionally, by leveraging a tunable sampling framework, we take into account selective client model contributions on the central server to further focus the detector during training and hence improve its robustness. Using a detailed experimental analysis with the VIRTUAL (Variational Federated Multi-Task Learning) approach, we demonstrate consistently superior performance in both the balanced and unbalanced scenarios on the MNIST, FEMNIST, VSN, and HAR benchmarks. We obtain a more than 9% (absolute percentage) improvement on the unbalanced MNIST benchmark. We further show that our technique can be adopted across multiple federated learning algorithms to obtain improvements.
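The reshaped cross-entropy described above follows the focal-loss idea; a minimal sketch in PyTorch is shown below, where the gamma and alpha defaults are illustrative rather than the values tuned in the paper.

```python
# Illustrative focal-style reshaping of cross-entropy: easy, well-classified
# examples are down-weighted so training focuses on hard examples.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=1.0):
    """Cross-entropy down-weighted for well-classified examples.

    logits:  (batch, n_classes) raw scores.
    targets: (batch,) integer class labels.
    """
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t
    p_t = torch.exp(-ce)                                      # model probability of the true class
    # (1 - p_t)^gamma is near 0 for confident, correct predictions,
    # so easy examples contribute little to the total loss.
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()
```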
Non-Intrusive Load Monitoring (NILM), or Energy Disaggregation (ED), seeks to save energy by decomposing the power readings of individual appliances from the aggregate power reading of the whole house. It is a single-channel blind source separation (SCBSS) problem and a difficult prediction problem because it is unidentifiable. Recent research shows that deep learning has become increasingly popular for the NILM problem. The ability of neural networks to extract load features is closely related to their depth. However, deep neural networks are difficult to train because of exploding gradients, vanishing gradients, and network degradation. To solve these problems, we propose a sequence-to-point learning framework based on bidirectional (non-causal) dilated convolution for NILM. To be more convincing, we compare our method with the state-of-the-art method, Seq2point (Zhang), directly, and compare with existing algorithms indirectly via the same two datasets and metrics. Experiments based on the REDD and UK-DALE datasets show that our proposed approach is far superior to existing approaches for all appliances.
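A sequence-to-point network built from non-causal (bidirectional) dilated 1-D convolutions can be sketched as follows, assuming PyTorch; the number of layers, channel width, window length, and dilation schedule are illustrative assumptions, not the exact configuration evaluated in the paper.

```python
# Illustrative sequence-to-point model: a stack of non-causal dilated
# convolutions maps a mains window to the target appliance's power at the
# window midpoint.
import torch
import torch.nn as nn

class DilatedSeq2Point(nn.Module):
    def __init__(self, window_len=599, channels=32, dilations=(1, 2, 4, 8, 16)):
        super().__init__()
        layers, in_ch = [], 1
        for d in dilations:
            # Symmetric padding makes the convolution non-causal: each output
            # sees past *and* future samples of the mains window.
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 dilation=d, padding=d), nn.ReLU()]
            in_ch = channels
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(channels * window_len, 1)

    def forward(self, x):                  # x: (batch, 1, window_len) mains window
        h = self.features(x)
        return self.head(h.flatten(1))     # appliance power at the window midpoint

model = DilatedSeq2Point()
print(model(torch.randn(4, 1, 599)).shape)  # -> torch.Size([4, 1])
```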
In this paper, we propose the first secure federated $\chi^2$-test protocol, Fed-$\chi^2$. To minimize both the privacy leakage and the communication cost, we recast the $\chi^2$-test as a second moment estimation problem and can thus take advantage of stable projection to encode the local information in a short vector. As such encodings can be aggregated with only summation, secure aggregation can be naturally applied to hide the individual updates. We formally prove the security guarantee of Fed-$\chi^2$: the joint distribution is hidden in a subspace with exponentially many possible distributions. Our evaluation results show that Fed-$\chi^2$ achieves negligible accuracy drops with small client-side computation overhead. In several real-world case studies, the performance of Fed-$\chi^2$ is comparable to the centralized $\chi^2$-test.
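The encoding-and-summation idea can be illustrated with an AMS-style random-sign projection standing in for the stable projection: each client projects its flattened local contingency counts with a shared matrix, only the summed encodings reach the server, and the mean of their squared entries estimates the second moment of the aggregate counts. The sketch below is a toy illustration under these assumptions and omits the secure-aggregation layer itself; all names and sizes are hypothetical.

```python
# Toy illustration: second moment estimation from summed random-sign encodings.
import numpy as np

def make_projection(n_cells, k, seed=0):
    rng = np.random.default_rng(seed)          # projection matrix shared by all clients
    return rng.choice([-1.0, 1.0], size=(k, n_cells))

def encode(local_counts, proj):
    """Client side: short encoding of the local contingency-table counts."""
    return proj @ local_counts                  # length-k vector

def estimate_second_moment(summed_encodings):
    """Server side: unbiased estimate of sum_i c_i^2 for the aggregate counts."""
    return float(np.mean(summed_encodings ** 2))

# Three clients, 6-cell flattened contingency table.
proj = make_projection(n_cells=6, k=256)
clients = [np.array([3, 0, 1, 2, 0, 1], dtype=float),
           np.array([1, 2, 0, 0, 4, 0], dtype=float),
           np.array([0, 1, 1, 3, 0, 2], dtype=float)]
summed = sum(encode(c, proj) for c in clients)  # only summed encodings reach the server
exact = float(np.sum(sum(clients) ** 2))
print(exact, estimate_second_moment(summed))    # estimate is close to the exact value
```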