
A Genetic Algorithm Approach for Modelling Low Voltage Network Demands

Publication date: 2016. Language: English.





Distribution network operators (DNOs) are increasingly concerned about the impact of low carbon technologies on the low voltage (LV) networks. More advanced metering infrastructures provide numerous opportunities for more accurate load flow analysis of the LV networks. However, such data may not be readily available to DNOs and in any case is likely to be expensive. Modelling tools are required which can provide realistic, yet accurate, load profiles as input for a network modelling tool, without needing access to large amounts of monitored customer data. In this paper we outline some simple methods for accurately modelling a large number of unmonitored residential customers at the LV level. We do this by a process we call buddying, which models unmonitored customers by assigning them load profiles from a limited sample of monitored customers who have smart meters. Hence the presented method requires access to only a relatively small amount of domestic customer data. The method is efficiently optimised using a genetic algorithm to minimise a weighted cost function between matching the substation data and the individual mean daily demands. This also allows us to demonstrate the effectiveness of substation monitoring in LV network modelling. Using models of real LV networks, we show that our methods perform significantly better than a comparative Monte Carlo approach, and we provide a description of the peak demand behaviour.
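The buddying optimisation described above can be sketched as a genetic algorithm over buddy assignments: each chromosome maps every unmonitored customer to one monitored profile, and fitness is the weighted cost combining substation-profile error and individual mean-daily-demand error. Everything below (data sizes, gamma-distributed demand profiles, the 0.5 weight, operator choices) is a hypothetical toy setup, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 48 half-hourly readings per day (kW).
n_monitored, n_unmonitored, T = 20, 100, 48
monitored = rng.gamma(2.0, 0.25, size=(n_monitored, T))    # smart-meter profiles
substation = monitored[rng.integers(n_monitored, size=n_unmonitored)].sum(axis=0)
mean_daily = rng.gamma(2.0, 0.25 * T, size=n_unmonitored)  # mean daily demands

def cost(assign, w=0.5):
    """Weighted cost: substation aggregate error + individual daily-demand error."""
    agg = monitored[assign].sum(axis=0)
    sub_err = np.abs(agg - substation).sum()
    ind_err = np.abs(monitored[assign].sum(axis=1) - mean_daily).sum()
    return w * sub_err + (1 - w) * ind_err

def genetic_algorithm(pop_size=40, generations=200, mut_rate=0.05):
    # Each chromosome assigns a monitored buddy to every unmonitored customer.
    pop = rng.integers(n_monitored, size=(pop_size, n_unmonitored))
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]  # keep the best half
        # Uniform crossover between random pairs of elite parents.
        parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, n_unmonitored)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        # Mutation: reassign a random buddy with small probability.
        mut = rng.random(children.shape) < mut_rate
        children[mut] = rng.integers(n_monitored, size=mut.sum())
        pop = children
    fitness = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fitness)], fitness.min()

best, best_cost = genetic_algorithm()
```

In this toy setup, the returned assignment should score well below a single random assignment, since the GA keeps the minimum over thousands of evaluated candidates.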




The increasing use and spread of low carbon technologies are expected to cause new patterns in electric demand and set novel challenges for a distribution network operator (DNO). In this study, we build upon a recently introduced method, called buddying, which simulates low voltage (LV) networks of both residential and non-domestic (e.g. shops, offices, schools, hospitals, etc.) customers through optimization (via a genetic algorithm) of demands based on limited monitored and customer data. The algorithm assigns a limited but diverse number of monitored households (the buddies) to the unmonitored customers on a network. We study and compare two algorithms, one where substation monitoring data is available and a second where no substation information is used. Despite the roll-out of monitoring equipment at domestic properties and/or substations, less data is available for commercial customers. This study focuses on substations with commercial customers, most of which have no monitored 'buddy', in which case a profile must be created. Due to the volatile nature of the low voltage networks, uncertainty bounds are crucial for operational purposes. We introduce and demonstrate two techniques for modelling the confidence bounds on the modelled LV networks. The first method uses probabilistic forecast methods based on substation monitoring; the second only uses a simple bootstrap of the sample of monitored customers, but has the advantage of not requiring monitoring at the substation. These modelling tools, buddying and uncertainty bounds, can give further insight to a DNO to better plan and manage the network when limited information is available.
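The second, substation-free uncertainty technique amounts to bootstrapping the sample of monitored customers: resample customers with replacement to form synthetic aggregate feeder profiles, then take empirical quantiles per period. A minimal sketch, with hypothetical profile data and an assumed network size:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample of monitored daily profiles (customers x half-hours, kW).
profiles = rng.gamma(2.0, 0.25, size=(30, 48))
n_customers = 120  # assumed size of the unmonitored network being modelled

def bootstrap_bounds(profiles, n_customers, n_boot=1000, level=0.9):
    """Resample monitored customers with replacement to build synthetic
    aggregate profiles, then take empirical quantiles per time period."""
    n, T = profiles.shape
    aggregates = np.empty((n_boot, T))
    for b in range(n_boot):
        idx = rng.integers(n, size=n_customers)      # resample with replacement
        aggregates[b] = profiles[idx].sum(axis=0)    # synthetic feeder aggregate
    lo, hi = np.quantile(aggregates, [(1 - level) / 2, (1 + level) / 2], axis=0)
    return lo, hi

lo, hi = bootstrap_bounds(profiles, n_customers)
```

The appeal of this approach, as the abstract notes, is that it needs no substation monitoring at all: the spread of the bounds comes entirely from the diversity of the monitored sample.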
We present a model for generating probabilistic forecasts by combining kernel density estimation (KDE) and quantile regression techniques, as part of the probabilistic load forecasting track of the Global Energy Forecasting Competition 2014. The KDE method is initially implemented with a time-decay parameter. We later improve this method by conditioning on the temperature or the period of the week variables to provide more accurate forecasts. We also develop a simple but effective quantile regression forecast. The novel aspects of our methodology are two-fold. First, we introduce symmetry into the time-decay parameter of the kernel density estimation based forecast. Second, we combine three probabilistic forecasts with different weights for different periods of the month.
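A time-decayed KDE quantile forecast of the kind described can be sketched as a weighted Gaussian mixture whose CDF is inverted at the target quantiles, with each past day's weight decaying with its age. All data and parameter values below (decay rate, bandwidth, grid) are hypothetical illustrations, not the competition entry:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical history: 100 days of 24 hourly loads (MW).
history = rng.gamma(5.0, 10.0, size=(100, 24))

def kde_quantile_forecast(history, quantiles, decay=0.98, bandwidth=2.0):
    """Time-decayed Gaussian-KDE forecast: each past day contributes a kernel
    weighted by decay**age; quantiles are read off the weighted mixture CDF."""
    n_days, T = history.shape
    ages = np.arange(n_days)[::-1]          # most recent day has age 0
    w = decay ** ages
    w = w / w.sum()
    grid = np.linspace(history.min() - 3 * bandwidth,
                       history.max() + 3 * bandwidth, 500)
    forecasts = np.empty((len(quantiles), T))
    for t in range(T):
        # Weighted mixture CDF of Gaussian kernels centred on past loads.
        cdf = (w[:, None] * norm.cdf(grid[None, :],
                                     loc=history[:, t, None],
                                     scale=bandwidth)).sum(axis=0)
        forecasts[:, t] = np.interp(quantiles, cdf, grid)  # invert the CDF
    return forecasts

q = np.array([0.1, 0.5, 0.9])
fc = kde_quantile_forecast(history, q)
```

Because the mixture CDF is monotone in the grid, the inverted quantile curves cannot cross, which is one practical reason to invert a single estimated CDF rather than fit each quantile independently.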
In several recent publications, Bettencourt, West and collaborators claim that properties of cities such as gross economic production, personal income, numbers of patents filed, number of crimes committed, etc., show super-linear power-law scaling with total population, while measures of resource use show sub-linear power-law scaling. Re-analysis of the gross economic production and personal income for cities in the United States, however, shows that the data cannot distinguish between power laws and other functional forms, including logarithmic growth, and that size predicts relatively little of the variation between cities. The striking appearance of scaling in previous work is largely an artifact of using extensive quantities (city-wide totals) rather than intensive ones (per-capita rates). The remaining dependence of productivity on city size is explained by the concentration of specialist service industries, with high value-added per worker, in larger cities, in accordance with the long-standing economic notion of the hierarchy of central places.
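The claim that power laws and logarithmic growth can be hard to distinguish on such data can be illustrated directly: fitting both functional forms to synthetic data generated by logarithmic growth yields near-identical goodness of fit. The data below are simulated, not the cities dataset analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic cities: per-capita output grows logarithmically with population.
pop = rng.lognormal(11, 1.2, size=300)
y = 20 + 3 * np.log(pop) + rng.normal(0, 2, size=300)  # intensive quantity

# Fit a power law (linear in log-log space) and a logarithmic model.
X = np.log(pop)
coef_pl = np.polyfit(X, np.log(y), 1)   # log y = beta * log pop + c
coef_log = np.polyfit(X, y, 1)          # y = b * log pop + a

def r2(y, yhat):
    """Coefficient of determination on the original scale."""
    ss_res = ((y - yhat) ** 2).sum()
    return 1 - ss_res / ((y - y.mean()) ** 2).sum()

yhat_pl = np.exp(np.polyval(coef_pl, X))   # power-law prediction
yhat_log = np.polyval(coef_log, X)         # logarithmic prediction
r2_pl, r2_log = r2(y, yhat_pl), r2(y, yhat_log)
```

Over the narrow relative range of an intensive quantity, log y is almost affine in y, so the two fits explain nearly the same variance; only the extensive totals make the power law look compelling.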
In the context of dynamic emission tomography, the conventional processing pipeline consists of independent image reconstruction of single time frames, followed by the application of a suitable kinetic model to time activity curves (TACs) at the voxel or region-of-interest level. The relatively new field of 4D PET direct reconstruction, by contrast, seeks to move beyond this scheme and incorporate information from multiple time frames within the reconstruction task. Existing 4D direct models are based on a deterministic description of voxels' TACs, captured by the chosen kinetic model, considering the photon counting process the only source of uncertainty. In this work, we introduce a new probabilistic modeling strategy based on the key assumption that the activity time course would be subject to uncertainty even if the parameters of the underlying dynamic process were known. This leads to a hierarchical Bayesian model, which we formulate using the formalism of Probabilistic Graphical Modeling (PGM). The inference of the joint probability density function arising from the PGM is addressed using a new gradient-based iterative algorithm, which presents several advantages compared to existing direct methods: it is flexible to an arbitrary choice of linear and nonlinear kinetic model; it enables the inclusion of arbitrary (sub)differentiable priors for parametric maps; and it is simpler to implement and suitable for integration into computing frameworks for machine learning. Computer simulations and an application to a real patient scan showed how the proposed approach allows us to weight the importance of the kinetic model, providing a bridge between indirect and deterministic direct methods.
Giona Casiraghi, 2021
The complexity underlying real-world systems implies that standard statistical hypothesis testing methods may not be adequate for these peculiar applications. Specifically, we show that the likelihood-ratio test's null distribution needs to be modified to accommodate the complexity found in multi-edge network data. When working with independent observations, the p-values of likelihood-ratio tests are approximated using a $\chi^2$ distribution. However, such an approximation should not be used when dealing with multi-edge network data. This type of data is characterized by multiple correlations and competitions that make the standard approximation unsuitable. We provide a solution to the problem by providing a better approximation of the likelihood-ratio test null distribution through a Beta distribution. Finally, we empirically show that even for a small multi-edge network, the standard $\chi^2$ approximation provides erroneous results, while the proposed Beta approximation yields the correct p-value estimation.
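The contrast between the $\chi^2$ and Beta approximations can be illustrated by simulating the null distribution of the likelihood ratio $\Lambda$ for a toy multinomial model and fitting a Beta by the method of moments. This is an illustrative sketch of the general idea, not the paper's hypergeometric-ensemble procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy null: m multi-edges distributed over k node pairs (multinomial model).
k, m, n_sim = 6, 40, 5000
p0 = np.full(k, 1 / k)

def log_lik(counts, p):
    """Multinomial log-likelihood kernel (zero counts contribute nothing)."""
    mask = counts > 0
    return (counts[mask] * np.log(p[mask])).sum()

lam = np.empty(n_sim)  # likelihood ratios Lambda in (0, 1]
for i in range(n_sim):
    x = rng.multinomial(m, p0)
    p_hat = x / m                              # unrestricted MLE
    lam[i] = np.exp(log_lik(x, p0) - log_lik(x, p_hat))

# Method-of-moments Beta fit to the simulated null distribution of Lambda.
mu, var = lam.mean(), lam.var()
common = mu * (1 - mu) / var - 1
a, b = mu * common, (1 - mu) * common

lam_obs = 0.01                                 # a hypothetical observed ratio
p_beta = stats.beta.cdf(lam_obs, a, b)         # small Lambda = evidence vs H0
p_chi2 = stats.chi2.sf(-2 * np.log(lam_obs), df=k - 1)
```

When the $\chi^2$ asymptotics hold (as in this independent-observation toy), the two p-values broadly agree; the paper's point is that for multi-edge networks the correlations break the $\chi^2$ limit while a fitted Beta still tracks the true null.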