
Physics-Guided Deep Neural Networks for Power Flow Analysis

 Added by Xinyue Hu
Publication date: 2020
Research language: English





Solving power flow (PF) equations is the basis of power flow analysis, which is important for determining the best operation of existing systems, performing security analysis, etc. However, PF equations can be out-of-date or even unavailable due to system dynamics and uncertainties, making traditional numerical approaches infeasible. To address these concerns, researchers have proposed data-driven approaches that solve the PF problem by learning the mapping rules from historical system operation data. Nevertheless, prior data-driven approaches suffer from poor performance and generalizability, either because they rely on overly simplified assumptions about the PF problem or because they ignore the physical laws governing power systems. In this paper, we propose a physics-guided neural network to solve the PF problem, with an auxiliary task of rebuilding the PF model. By encoding different granularities of Kirchhoff's laws and system topology into the rebuilt PF model, our neural-network-based PF solver is regularized by the auxiliary task and constrained by the physical laws. Simulation results show that our physics-guided neural network achieves better performance and generalizability than existing unconstrained data-driven approaches. Furthermore, we demonstrate that the weight matrices of our physics-guided neural networks embody power system physics by showing their similarity to the bus admittance matrices.
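The auxiliary task can be pictured as a decoder that maps the predicted voltages back to the power injections through the AC power flow equations. Below is a minimal PyTorch sketch of that idea. The class and function names, the MLP architecture, the learnable G/B matrices standing in for the rebuilt PF model, and the simple MSE physics penalty are all illustrative assumptions, not the authors' implementation; only the AC power flow equations themselves come from standard power system analysis.

import torch
import torch.nn as nn

class PFNet(nn.Module):
    """Maps bus power injections (P, Q) to voltage magnitudes and angles."""
    def __init__(self, n_bus, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_bus, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n_bus),  # outputs [|V|; theta]
        )
        # Learnable conductance/susceptance matrices; training is expected to
        # drive them toward the true bus admittance matrix (an assumption of
        # this sketch, mirroring the paper's claim about weight matrices).
        self.G = nn.Parameter(torch.zeros(n_bus, n_bus))
        self.B = nn.Parameter(torch.zeros(n_bus, n_bus))

    def forward(self, pq):
        vm, va = self.mlp(pq).chunk(2, dim=-1)
        return vm, va

def pf_reconstruction(vm, va, G, B):
    """AC power flow equations: rebuild P, Q from predicted voltages."""
    dtheta = va.unsqueeze(-1) - va.unsqueeze(-2)   # theta_i - theta_j
    vv = vm.unsqueeze(-1) * vm.unsqueeze(-2)       # |V_i| * |V_j|
    P = (vv * (G * torch.cos(dtheta) + B * torch.sin(dtheta))).sum(-1)
    Q = (vv * (G * torch.sin(dtheta) - B * torch.cos(dtheta))).sum(-1)
    return torch.cat([P, Q], dim=-1)

def loss_fn(model, pq, v_true, lam=0.5):
    vm, va = model(pq)
    supervised = nn.functional.mse_loss(torch.cat([vm, va], dim=-1), v_true)
    # Auxiliary task: the rebuilt PF model must reproduce the input injections,
    # regularizing the solver with Kirchhoff's laws.
    physics = nn.functional.mse_loss(pf_reconstruction(vm, va, model.G, model.B), pq)
    return supervised + lam * physics

Training such a model on historical (P, Q, V) pairs would push the learnable G and B toward the true bus admittance matrix, which is the sense in which the weight matrices of a physics-guided network can embody power system physics.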



Related research


We explore machine learning methods for AC Optimal Power Flow (ACOPF) - the task of optimizing power generation in a transmission network while respecting physical and engineering constraints. We present two formulations of ACOPF as a machine learning problem: 1) an end-to-end prediction task, where we directly predict the optimal generator settings, and 2) a constraint prediction task, where we predict the set of active constraints in the optimal solution. We validate these approaches on two benchmark grids.
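A rough sketch of the two formulations, under assumed dimensions and plain MLPs (our choices, not the paper's), treats the first as regression from the load profile to the optimal generator setpoints and the second as multi-label classification over the candidate constraints.

import torch
import torch.nn as nn

# Assumed grid dimensions; purely illustrative.
n_load, n_gen, n_constr = 30, 10, 50

def mlp(n_in, n_out, hidden=64):
    return nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(), nn.Linear(hidden, n_out))

# 1) End-to-end task: regress from loads to optimal generator settings.
end_to_end = mlp(n_load, n_gen)

# 2) Constraint prediction: per-constraint probability of being active at the
#    optimum; the reduced problem can then be solved with the predicted set.
active_set = nn.Sequential(mlp(n_load, n_constr), nn.Sigmoid())

loads = torch.randn(8, n_load)        # a batch of assumed load profiles
gen_setpoints = end_to_end(loads)     # shape (8, n_gen)
active_probs = active_set(loads)      # shape (8, n_constr)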
The quest for biologically plausible deep learning is driven not just by the desire to explain experimentally observed properties of biological neural networks, but also by the hope of discovering more efficient methods for training artificial networks. In this paper, we propose a new algorithm named Variational Probability Flow (VPF), an extension of minimum probability flow for training binary Deep Boltzmann Machines (DBMs). We show that weight updates in VPF are local, depending only on the states and firing rates of the adjacent neurons. Unlike contrastive divergence, there is no need for Gibbs confabulations; and unlike backpropagation, alternating feedforward and feedback phases are not required. Moreover, the learning algorithm is effective for training DBMs with intra-layer connections between the hidden nodes. Experiments with MNIST and Fashion MNIST demonstrate that VPF learns reasonable features quickly, reconstructs corrupted images more accurately, and generates samples with a high estimated log-likelihood. Lastly, we note that, interestingly, the weight updates of an asymmetric version of VPF would directly explain experimental results on Spike-Timing-Dependent Plasticity (STDP).
A deep neural network is applied to classify signatures obtained from physical sensor measurements of running gasoline- and diesel-powered vehicles and other devices. The classification provides information on the target's identity, down to the vehicle type and even the vehicle model. The physical measurements include acoustic, acceleration (vibration), geophonic, and magnetic signals.
Verifying correctness of deep neural networks (DNNs) is challenging. We study a generic reachability problem for feed-forward DNNs which, for a given set of inputs to the network and a Lipschitz-continuous function over its outputs, computes the lower and upper bound on the function values. Because the network and the function are Lipschitz continuous, all values in the interval between the lower and upper bound are reachable. We show how to obtain the safety verification problem, the output range analysis problem and a robustness measure by instantiating the reachability problem. We present a novel algorithm based on adaptive nested optimisation to solve the reachability problem. The technique has been implemented and evaluated on a range of DNNs, demonstrating its efficiency, scalability and ability to handle a broader class of networks than state-of-the-art verification approaches.
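In symbols, the reachability problem described above can be stated as follows; the notation is ours and only restates the verbal description. Given a feed-forward network $N$, an input set $\mathcal{X}$, and a Lipschitz-continuous function $o$ over its outputs, compute
\[
  \ell = \inf_{x \in \mathcal{X}} o(N(x)), \qquad u = \sup_{x \in \mathcal{X}} o(N(x)),
\]
so that, by continuity, every value in $[\ell, u]$ is attained for a connected input set. Safety verification then asks whether $[\ell, u]$ lies within a prescribed safe range, output range analysis takes $o$ to be a projection onto a single output, and a robustness measure takes $\mathcal{X}$ to be a small neighbourhood of a given input.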
We study the flow of information and the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory. The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information $I(X;T)$ between the input $X$ and internal representations $T$ decreases. Several papers observe compression of estimated mutual information on different DNN models, but the true $I(X;T)$ over these networks is provably either constant (discrete $X$) or infinite (continuous $X$). This work explains the discrepancy between theory and experiments, and clarifies what was actually measured by these past works. To this end, we introduce an auxiliary (noisy) DNN framework for which $I(X;T)$ is a meaningful quantity that depends on the network's parameters. This noisy framework is shown to be a good proxy for the original (deterministic) DNN both in terms of performance and the learned representations. We then develop a rigorous estimator for $I(X;T)$ in noisy DNNs and observe compression in various models. By relating $I(X;T)$ in the noisy DNN to an information-theoretic communication problem, we show that compression is driven by the progressive clustering of hidden representations of inputs from the same class. Several methods to directly monitor clustering of hidden representations, both in noisy and deterministic DNNs, are used to show that meaningful clusters form in the $T$ space. Finally, we return to the estimator of $I(X;T)$ employed in past works, and demonstrate that while it fails to capture the true (vacuous) mutual information, it does serve as a measure for clustering. This clarifies the past observations of compression and isolates the geometric clustering of hidden representations as the true phenomenon of interest.
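For reference, and in our own notation inferred from the description above, the quantity at issue is
\[
  I(X;T) = H(T) - H(T \mid X),
\]
which for a deterministic representation $T = f_\theta(X)$ collapses to $H(T)$; as stated above, this is provably constant for discrete $X$ (equal to $H(X)$ whenever $f_\theta$ is injective on the support of $X$) and infinite for continuous $X$. The auxiliary noisy framework instead studies internal representations perturbed by noise, for instance
\[
  T = f_\theta(X) + Z, \qquad Z \sim \mathcal{N}(0, \sigma^2 I)
\]
(the Gaussian form is our assumption, not a detail given in the abstract), which makes $I(X;T)$ finite and dependent on the network's parameters, so that compression can be meaningfully estimated.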
