
Tri-Compress: A Cascaded Data Compression Framework for Smart Electricity Distribution Systems

 Added by Syed Muhammad Atif
 Publication date 2018
Language: English





Modern smart distribution systems require the storage, transmission and processing of big data generated by sensors installed in electric meters. On the one hand, this data is essential for intelligent decision making by the smart grid; on the other hand, storing, transmitting and processing such a huge volume of data is itself a challenge. Existing approaches to compressing this information have relied only on traditional matrix decomposition techniques, exploiting the small number of principal components needed to represent the entire data set. This paper proposes a cascaded data compression technique that blends three different methods to achieve a high compression rate for efficient storage and transmission. The first and second stages use two lossy compression techniques, namely Singular Value Decomposition (SVD) and normalization; the third stage achieves further compression through Sparsity Encoding (SE), a lossless technique whose benefits are appreciable only for sparse data sets. Our simulation results show that the combined use of the three techniques achieves a data compression ratio 15% higher than state-of-the-art SVD for small, sparse datasets and up to 28% higher for large, non-sparse datasets, with acceptable Mean Absolute Error (MAE).
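The three-stage cascade described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a rank-k truncated SVD for stage one, 8-bit min-max quantization as the "normalization" stage, and (index, value) pairs as the sparsity encoding; the paper's exact parameters and encodings are not reproduced here.

```python
import numpy as np

def tri_compress(X, k):
    # Stage 1 (lossy): rank-k truncated SVD of the meter-data matrix X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uk = U[:, :k] * s[:k]      # fold singular values into the left factor
    Vtk = Vt[:k, :]

    # Stage 2 (lossy): min-max normalize each factor to [0, 1] and store it
    # in 8 bits per entry instead of 64 (an assumed quantization scheme).
    def quantize(M):
        lo, hi = M.min(), M.max()
        q = np.round((M - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
        return q, (lo, hi)

    # Stage 3 (lossless): sparsity encoding -- store only nonzero codes as
    # (flat index, value) pairs; this pays off when many entries quantize
    # to zero, i.e. for sparse, non-negative data.
    def sparse_encode(q):
        idx = np.flatnonzero(q)
        return idx, q.flat[idx], q.shape

    qU, rangeU = quantize(Uk)
    qV, rangeV = quantize(Vtk)
    return sparse_encode(qU), rangeU, sparse_encode(qV), rangeV

def tri_decompress(encU, rangeU, encV, rangeV):
    def sparse_decode(idx, vals, shape):
        q = np.zeros(int(np.prod(shape)), dtype=np.uint8)
        q[idx] = vals
        return q.reshape(shape)

    def dequantize(q, lohi):
        lo, hi = lohi
        return q.astype(float) / 255.0 * (hi - lo) + lo

    Uk = dequantize(sparse_decode(*encU), rangeU)
    Vtk = dequantize(sparse_decode(*encV), rangeV)
    return Uk @ Vtk
```

Note the ordering matters: SVD shrinks the matrix to two thin factors, quantization shrinks each stored entry, and sparsity encoding then drops whatever quantized to zero, so each stage compounds the previous one.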



Related research

The conventional approach to pre-processing data for compression is to apply transforms such as the Fourier, the Karhunen-Loève, or wavelet transforms. One drawback of such an approach is that it is independent of the use of the compressed data, which may induce significant optimality losses when measured in terms of final utility (instead of distortion). We therefore revisit this paradigm by tailoring the data pre-processing operation to the utility function of the decision-making entity using the compressed (and therefore noisy) data. More specifically, the utility function consists of an Lp-norm, which is very relevant in the area of smart grids. Both linear and non-linear use-oriented transforms are designed and compared with conventional data pre-processing techniques, showing that the impact of compression noise can be significantly reduced.
Detecting inaccurate smart meters and targeting them for replacement can save significant resources. For this purpose, a novel deep-learning method was developed, based on long short-term memory (LSTM) and a modified convolutional neural network (CNN), to predict electricity usage trajectories from historical data. Meters that cannot measure electricity accurately are located by the significant difference between the predicted trajectory and the observed one. A case study demonstrated a proof of principle: inaccurate meters are detected with accuracy high enough for practical use, preventing unnecessary replacement and increasing the service life span of smart meters.
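Leaving the LSTM/CNN predictor itself aside, the final deviation test can be sketched as below. The z-score criterion on per-meter MAE is an assumed flagging rule for illustration, not necessarily the paper's exact threshold.

```python
import numpy as np

def flag_inaccurate_meters(predicted, observed, z=3.0):
    """Flag meters whose observed usage deviates far from the model's
    prediction. `predicted` and `observed` have shape (n_meters, n_steps).
    The flagging rule (per-meter MAE more than `z` standard deviations
    above the fleet mean) is an assumption for this sketch."""
    dev = np.mean(np.abs(predicted - observed), axis=1)   # per-meter MAE
    mu, sigma = dev.mean(), dev.std()
    return np.flatnonzero(dev > mu + z * sigma)           # outlier meter ids
```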
We introduce a conceptually simple and scalable framework for continual learning domains where tasks are learned sequentially. Our method is constant in the number of parameters and is designed to preserve performance on previously encountered tasks while accelerating learning progress on subsequent problems. This is achieved by training a network with two components: A knowledge base, capable of solving previously encountered problems, which is connected to an active column that is employed to efficiently learn the current task. After learning a new task, the active column is distilled into the knowledge base, taking care to protect any previously acquired skills. This cycle of active learning (progression) followed by consolidation (compression) requires no architecture growth, no access to or storing of previous data or tasks, and no task-specific parameters. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
Yang Sun, Fajie Yuan, Min Yang (2020)
Sequential recommender systems (SRS) have become the key technology for capturing users' dynamic interests and generating high-quality recommendations. Current state-of-the-art sequential recommender models are typically based on a sandwich-structured deep neural network, where one or more middle (hidden) layers are placed between the input embedding layer and the output softmax layer. In general, these models require a large number of parameters (such as a large embedding dimension or a deep network architecture) to reach optimal performance. Despite their effectiveness, at some point further increases in model size make deployment on resource-constrained devices harder, resulting in longer response times and a larger memory footprint. To resolve these issues, we propose a compressed sequential recommendation framework, termed CpRec, in which two generic model-shrinking techniques are employed. Specifically, we first propose a block-wise adaptive decomposition to approximate the input and softmax matrices, exploiting the fact that items in SRS obey a long-tailed distribution. To reduce the parameters of the middle layers, we introduce three layer-wise parameter sharing schemes. We instantiate CpRec using a deep convolutional neural network with dilated kernels, considering both recommendation accuracy and efficiency. Through extensive ablation studies, we demonstrate that CpRec can achieve compression rates of 4 to 8 times on real-world SRS datasets. Meanwhile, CpRec is faster during training and inference, and in most cases outperforms its uncompressed counterpart.
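The block-wise adaptive decomposition idea, exploiting the long-tailed item distribution, can be sketched as follows: frequent (head) items get wider embedding vectors, tail items get narrow ones projected up to the model dimension. The block sizes and dimensions below are illustrative choices, not the paper's settings.

```python
import numpy as np

class BlockwiseEmbedding:
    """Sketch of a block-wise adaptive embedding table in the spirit of
    CpRec. Items are assumed pre-sorted by frequency, so block 0 holds
    the most frequent items and gets the largest per-item dimension."""
    def __init__(self, block_sizes, block_dims, d, rng=None):
        rng = rng or np.random.default_rng(0)
        self.offsets = np.cumsum([0] + list(block_sizes))
        self.tables = [rng.standard_normal((n, db)) * 0.01
                       for n, db in zip(block_sizes, block_dims)]
        # projection from each block's small dim up to the model dim d
        self.projs = [rng.standard_normal((db, d)) * 0.01
                      for db in block_dims]

    def lookup(self, item):
        b = np.searchsorted(self.offsets, item, side="right") - 1
        row = item - self.offsets[b]
        return self.tables[b][row] @ self.projs[b]   # shape (d,)

    def n_params(self):
        return (sum(t.size for t in self.tables)
                + sum(p.size for p in self.projs))
```

With 100,000 items, full-dimension head items and 4-dimensional tail items, this table needs roughly a tenth of the parameters of a dense 100,000 x 64 embedding matrix, which is where the long tail pays off.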
The recent advent of smart meters has led to large micro-level datasets. For the first time, the electricity consumption at individual sites is available on a near real-time basis. Efficient management of energy resources, electric utilities, and transmission grids, can be greatly facilitated by harnessing the potential of this data. The aim of this study is to generate probability density estimates for consumption recorded by individual smart meters. Such estimates can assist decision making by helping consumers identify and minimize their excess electricity usage, especially during peak times. For suppliers, these estimates can be used to devise innovative time-of-use pricing strategies aimed at their target consumers. We consider methods based on conditional kernel density (CKD) estimation with the incorporation of a decay parameter. The methods capture the seasonality in consumption, and enable a nonparametric estimation of its conditional density. Using eight months of half-hourly data for one thousand meters, we evaluate point and density forecasts, for lead times ranging from one half-hour up to a week ahead. We find that the kernel-based methods outperform a simple benchmark method that does not account for seasonality, and compare well with an exponential smoothing method that we use as a sophisticated benchmark. To gauge the financial impact, we use density estimates of consumption to derive prediction intervals of electricity cost for different time-of-use tariffs. We show that a simple strategy of switching between different tariffs, based on a comparison of cost densities, delivers significant cost savings for the great majority of consumers.
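The core of the approach above, a kernel density estimate conditioned on the period of the week with older observations down-weighted by a decay parameter, can be sketched as follows. This is an illustrative simplification: the paper's CKD methods also condition via kernels on the explanatory variables, which is omitted here.

```python
import numpy as np

def ckd_estimate(history, period_ids, target_period, grid,
                 bandwidth=0.1, decay=0.98):
    """Density of consumption for one half-hour of the week, built from
    past observations of that same period, with exponential decay on age.
    `history` is ordered oldest-first; `grid` is where the density is
    evaluated. Bandwidth and decay values are illustrative."""
    y = history[period_ids == target_period]
    ages = np.arange(len(y))[::-1]            # 0 = most recent observation
    w = decay ** ages
    w = w / w.sum()                           # normalized decay weights
    # weighted Gaussian kernel density estimate on the grid
    diffs = (grid[:, None] - y[None, :]) / bandwidth
    dens = (w[None, :] * np.exp(-0.5 * diffs**2)).sum(axis=1)
    return dens / (bandwidth * np.sqrt(2 * np.pi))
```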
