
Enhancing Observability in Distribution Grids using Smart Meter Data

Published by: Siddharth Bhela
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Due to limited metering infrastructure, distribution grids are currently challenged by observability issues. On the other hand, smart meter data, including local voltage magnitudes and power injections, are communicated to the utility operator from grid buses with renewable generation and demand-response programs. This work employs grid data from metered buses towards inferring the underlying grid state. To this end, a coupled formulation of the power flow problem (CPF) is put forth. Exploiting the high variability of injections at metered buses, the controllability of solar inverters, and the relative time-invariance of conventional loads, the idea is to solve the non-linear power flow equations jointly over consecutive time instants. An intuitive and easily verifiable rule pertaining to the locations of metered and non-metered buses on the physical grid is shown to be a necessary and sufficient criterion for local observability in radial networks. To account for noisy smart meter readings, a coupled power system state estimation (CPSSE) problem is further developed. Both CPF and CPSSE tasks are tackled via augmented semi-definite program relaxations. The observability criterion along with the CPF and CPSSE solvers are numerically corroborated using synthetic and actual solar generation and load data on the IEEE 34-bus benchmark feeder.
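The coupled power flow idea can be illustrated on a toy feeder. The sketch below is a hypothetical, simplified rendering only: a 3-bus radial feeder (substation, a non-metered load bus, and a metered bus with solar) is solved jointly over two instants using a plain semidefinite relaxation in cvxpy, with the instants coupled by the assumption that the non-metered bus keeps the same unknown injection. The network data, simulated measurements, and the trace-minimization objective are illustrative assumptions, not the paper's augmented CPF/CPSSE formulation, and tightness of the relaxation is not guaranteed here.

```python
import numpy as np
import cvxpy as cp   # pip install cvxpy

# Toy 3-bus radial feeder: 0 = substation, 1 = non-metered load, 2 = metered PV bus.
z01, z12 = 0.01 + 0.03j, 0.02 + 0.05j            # made-up line impedances (p.u.)
y01, y12 = 1 / z01, 1 / z12
Y = np.array([[ y01,       -y01,   0.0],
              [-y01,  y01 + y12,  -y12],
              [ 0.0,       -y12,   y12]])        # bus admittance matrix
n = 3

def inj_mats(k):
    """Phi_k, Psi_k such that p_k = Tr(Phi_k W) and q_k = Tr(Psi_k W) for W = V V^H."""
    e = np.zeros((n, 1)); e[k] = 1.0
    A = Y.conj().T @ e @ e.T
    return (A + A.conj().T) / 2, (A - A.conj().T) / 2j

def sweep_pf(s1, s2, v0=1.0 + 0j, iters=100):
    """Backward/forward sweep; generates consistent 'measurements' for net injections s1, s2."""
    V1 = V2 = v0
    for _ in range(iters):
        J12 = -np.conj(s2 / V2)                  # branch current 1 -> 2
        J01 = J12 - np.conj(s1 / V1)             # branch current 0 -> 1
        V1 = v0 - z01 * J01
        V2 = V1 - z12 * J12
    return np.array([v0, V1, V2])

s1_true = -0.30 - 0.10j                          # non-metered load, identical at both instants
s2_list = [0.05 - 0.02j, 0.25 - 0.02j]           # metered bus: low vs. high solar output
V_true = [sweep_pf(s1_true, s2) for s2 in s2_list]

Phi1, Psi1 = inj_mats(1)
Phi2, Psi2 = inj_mats(2)
Ws = [cp.Variable((n, n), hermitian=True) for _ in range(2)]
cons = []
for W, V in zip(Ws, V_true):
    Wt = np.outer(V, V.conj())                   # true rank-1 matrix, used only to emulate meter data
    cons += [W >> 0,
             cp.real(W[0, 0]) == 1.0,                                      # known substation voltage
             cp.real(W[2, 2]) == abs(V[2]) ** 2,                           # smart-meter |V| reading
             cp.real(cp.trace(Phi2 @ W)) == np.real(np.trace(Phi2 @ Wt)),  # measured p_2
             cp.real(cp.trace(Psi2 @ W)) == np.real(np.trace(Psi2 @ Wt))]  # measured q_2
# Coupling across instants: the unknown injection at non-metered bus 1 stays the same.
cons += [cp.real(cp.trace(M @ Ws[0])) == cp.real(cp.trace(M @ Ws[1])) for M in (Phi1, Psi1)]
# Trace minimization is only a generic rank-promoting heuristic, not the paper's objective.
prob = cp.Problem(cp.Minimize(sum(cp.real(cp.trace(W)) for W in Ws)), cons)
prob.solve(solver=cp.SCS)

for t, (W, V) in enumerate(zip(Ws, V_true)):
    est = np.sqrt(np.real(np.diag(W.value)))
    print(f"instant {t}: |V| estimate {np.round(est, 4)} vs. true {np.round(np.abs(V), 4)}")
```

The coupling constraints are what restore observability in this toy case: with a single instant, the non-metered bus contributes no measurements and the state is underdetermined.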


Read also

The recent advent of smart meters has led to large micro-level datasets. For the first time, the electricity consumption at individual sites is available on a near real-time basis. Efficient management of energy resources, electric utilities, and transmission grids can be greatly facilitated by harnessing the potential of this data. The aim of this study is to generate probability density estimates for consumption recorded by individual smart meters. Such estimates can assist decision making by helping consumers identify and minimize their excess electricity usage, especially during peak times. For suppliers, these estimates can be used to devise innovative time-of-use pricing strategies aimed at their target consumers. We consider methods based on conditional kernel density (CKD) estimation with the incorporation of a decay parameter. The methods capture the seasonality in consumption, and enable a nonparametric estimation of its conditional density. Using eight months of half-hourly data for one thousand meters, we evaluate point and density forecasts, for lead times ranging from one half-hour up to a week ahead. We find that the kernel-based methods outperform a simple benchmark method that does not account for seasonality, and compare well with an exponential smoothing method that we use as a sophisticated benchmark. To gauge the financial impact, we use density estimates of consumption to derive prediction intervals of electricity cost for different time-of-use tariffs. We show that a simple strategy of switching between different tariffs, based on a comparison of cost densities, delivers significant cost savings for the great majority of consumers.
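As a rough illustration of the kind of estimator described above, the sketch below builds a conditional kernel density for half-hourly consumption, conditioning on the half-hour of day and down-weighting older observations with a decay parameter. The kernel choices, bandwidths, decay schedule, and synthetic data are assumptions for the example, not the paper's exact CKD specification.

```python
import numpy as np

def ckd_density(history_y, history_period, target_period, grid,
                bandwidth=0.05, period_bw=1.0, decay=0.995):
    """Decay-weighted conditional KDE of consumption at a given half-hour of day.

    history_y      : past half-hourly consumption (kWh), oldest first
    history_period : half-hour-of-day index (0..47) of each past observation
    decay          : an observation of age a (in half-hours) gets weight decay**a
    """
    ages = np.arange(len(history_y))[::-1]                       # 0 = most recent
    w_time = decay ** ages
    # kernel in the conditioning variable: circular closeness of period-of-day
    diff = np.abs(history_period - target_period)
    dist = np.minimum(diff, 48 - diff)
    w_cond = np.exp(-0.5 * (dist / period_bw) ** 2)
    w = w_time * w_cond
    w /= w.sum()
    # weighted Gaussian kernel density over the consumption grid
    K = np.exp(-0.5 * ((grid[:, None] - history_y[None, :]) / bandwidth) ** 2)
    return (K * w[None, :]).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

# toy usage on a synthetic meter with a daily pattern plus noise
rng = np.random.default_rng(0)
periods = np.tile(np.arange(48), 60)                             # 60 days of half-hours
y = 0.2 + 0.3 * np.sin(2 * np.pi * periods / 48) ** 2 + 0.05 * rng.standard_normal(periods.size)
grid = np.linspace(0.0, 1.0, 200)
dens = ckd_density(y, periods, target_period=36, grid=grid)
print("mode of the forecast density (kWh):", round(grid[np.argmax(dens)], 3))
```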
Marie Maros, Joakim Jalden (2018)
Electric power distribution systems will encounter fluctuations in supply due to the introduction of renewable sources with high variability in generation capacity. It is therefore necessary to provide algorithms that are capable of dynamically finding approximate solutions. We propose two semi-distributed algorithms based on ADMM and discuss their advantages and disadvantages. One of the algorithms computes a feasible approximate of the optimal power allocation at each instance. We require coordination between the nodes to guarantee feasibility of each of the iterates. We bound the distance from the approximate solutions to the optimal solution as a function of the variation in optimal power allocation. Finally, we verify our results via experiments.
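A minimal sketch of the general idea, assuming a toy allocation problem (it is not the paper's two algorithms): sharing-form ADMM in which a coordinator only needs the average of the local variables, plus a simple redistribution step so that every reported iterate satisfies the power-balance constraint. The cost curvatures, demand, penalty parameter, and iteration count are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
N, rho, d = 5, 1.0, 10.0                  # nodes, ADMM penalty, total demand to allocate
a = rng.uniform(0.5, 2.0, N)              # hypothetical local cost curvatures
c = rng.uniform(0.0, 3.0, N)              # hypothetical preferred operating points

# min sum_i a_i (x_i - c_i)^2   s.t.   sum_i x_i = d   (sharing-form ADMM)
x = np.zeros(N)
zbar = d / N                              # average implied by the coupling constraint
u = 0.0                                   # scaled dual variable (scalar, shared by all nodes)
for _ in range(200):
    v = x - x.mean() + zbar - u           # local x-update: argmin a_i(x-c_i)^2 + (rho/2)(x-v_i)^2
    x = (2 * a * c + rho * v) / (2 * a + rho)
    zbar = d / N                          # z-update: projection onto {sum x = d}, in averaged form
    u = u + x.mean() - zbar               # dual ascent on the coupling constraint
    x_feas = x + (d - x.sum()) / N        # redistribute the mismatch so the iterate is feasible

# closed-form optimum of the equality-constrained QP, for comparison
lam = (d - c.sum()) / (1.0 / (2 * a)).sum()
x_star = c + lam / (2 * a)
print("feasible ADMM iterate:", np.round(x_feas, 4))
print("closed-form optimum  :", np.round(x_star, 4))
```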
Many real-world analytics problems involve two significant challenges: prediction and optimization. Due to the typically complex nature of each challenge, the standard paradigm is predict-then-optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in the downstream optimization problem. In contrast, we propose a new and very general framework, called Smart Predict, then Optimize (SPO), which directly leverages the optimization problem structure, i.e., its objective and constraints, for designing better prediction models. A key component of our framework is the SPO loss function which measures the decision error induced by a prediction. Training a prediction model with respect to the SPO loss is computationally challenging, and thus we derive, using duality theory, a convex surrogate loss function which we call the SPO+ loss. Most importantly, we prove that the SPO+ loss is statistically consistent with respect to the SPO loss under mild conditions. Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective. Numerical experiments on shortest path and portfolio optimization problems show that the SPO framework can lead to significant improvement under the predict-then-optimize paradigm, in particular when the prediction model being trained is misspecified. We find that linear models trained using SPO+ loss tend to dominate random forest algorithms, even when the ground truth is highly nonlinear.
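The SPO+ construction can be written out concretely for a small decision problem. The sketch below assumes the feasible region is the unit simplex (pick the cheapest of K options), computes the SPO+ loss and a subgradient, and trains a linear model by stochastic subgradient descent; the data, problem sizes, and step size are made up for illustration and the decision oracle is trivial here rather than a shortest-path or portfolio solver.

```python
import numpy as np

rng = np.random.default_rng(2)
K, p, n = 4, 3, 500                                 # options, features, samples (made up)
B_true = rng.normal(size=(K, p))
X = rng.normal(size=(n, p))
C = X @ B_true.T + 0.5 * rng.normal(size=(n, K))    # realized cost vectors

def w_star(c):
    """Decision oracle: argmin_w c^T w over the unit simplex = cheapest vertex."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

def spo_plus(c_hat, c):
    """SPO+ loss  max_{w in S} (c - 2 c_hat)^T w + 2 c_hat^T w*(c) - c^T w*(c)  and a subgradient."""
    ws_c = w_star(c)
    ws_shift = w_star(2 * c_hat - c)                # maximizer of (c - 2 c_hat)^T w over the simplex
    loss = (c - 2 * c_hat) @ ws_shift + 2 * c_hat @ ws_c - c @ ws_c
    grad = 2 * (ws_c - ws_shift)                    # subgradient with respect to c_hat
    return loss, grad

# train a linear predictor c_hat = B x by stochastic subgradient descent on the SPO+ loss
B = np.zeros((K, p))
for epoch in range(30):
    for i in rng.permutation(n):
        _, g = spo_plus(B @ X[i], C[i])
        B -= 0.01 * np.outer(g, X[i])               # chain rule: d loss / dB = grad x^T

# average decision (SPO) regret of the trained model
regret = np.mean([C[i] @ w_star(B @ X[i]) - C[i].min() for i in range(n)])
print("average decision regret:", round(float(regret), 4))
```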
Data analytics and data science play a significant role in today's society. In the context of Smart Grids (SG), the collection of vast amounts of data has seen the emergence of a plethora of data analysis approaches. In this paper, we conduct a Systematic Mapping Study (SMS) aimed at getting insights about different facets of SG data analysis: application sub-domains (e.g., power load control), aspects covered (e.g., forecasting), used techniques (e.g., clustering), tool support, research methods (e.g., experiments/simulations), and replicability/reproducibility of research. The final goal is to provide a view of the current status of research. Overall, we found that each sub-domain has its peculiarities in terms of techniques, approaches and research methodologies applied. Simulations and experiments play a crucial role in many areas. The replicability of studies is limited with respect to the availability of implemented algorithms and, to a lesser extent, due to the use of private datasets.
Smart meters are increasingly used worldwide. They are advanced meters capable of measuring energy consumption at a fine-grained time interval, e.g., every 15 minutes. Smart meter data are typically bundled with socio-economic data in analytics, such as meter geographic locations, weather conditions and user information, which makes the data sets very sizable and the analytics complex. Data mining and emerging cloud computing technologies make collecting, processing, and analyzing the so-called big data possible. This paper proposes an innovative ICT solution to streamline smart meter data analytics. The proposed solution offers an information integration pipeline for ingesting data from smart meters, a scalable platform for processing and mining big data sets, and a web portal for visualizing analytics results. The implemented system has a hybrid architecture, using Spark or Hive for big data processing and the machine learning toolkit MADlib for in-database analytics in a PostgreSQL database. This paper evaluates the key technologies of the proposed ICT solution, and the results show the effectiveness and efficiency of using the system for both batch and online analytics.
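As a hedged illustration of the batch-processing stage only (the input file name and schema are assumptions, and the MADlib/PostgreSQL and web-portal components of the described system are omitted), a PySpark job aggregating raw half-hourly readings into daily consumption might look like this:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("smart-meter-batch").getOrCreate()

# Assumed input schema: meter_id, ts (timestamp string), kwh (half-hourly reading).
readings = (spark.read.csv("meter_readings.csv", header=True, inferSchema=True)
                 .withColumn("ts", F.to_timestamp("ts")))

# Daily consumption and peak half-hour per meter, written out for downstream analytics.
daily = (readings
         .groupBy("meter_id", F.to_date("ts").alias("day"))
         .agg(F.sum("kwh").alias("daily_kwh"),
              F.max("kwh").alias("peak_halfhour_kwh")))
daily.write.mode("overwrite").parquet("daily_consumption.parquet")

spark.stop()
```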


