
High-Fidelity Model Order Reduction for Microgrids Stability Assessment

 Added by Konstantin Turitsyn
Publication date: 2016
Language: English





Proper modeling of inverter-based microgrids is crucial for accurate assessment of stability boundaries. It has recently been realized that the stability conditions for such microgrids differ significantly from those known for large-scale power systems. While detailed models are available, they are computationally expensive and provide little insight into the instability mechanisms and factors. In this paper, a computationally efficient and accurate reduced-order model is proposed for inverter-based microgrids. The main factors affecting microgrid stability are analyzed using the developed reduced-order model and are shown to be unique to microgrid-based networks, with no direct analogy in large-scale power systems. In particular, it is shown that the stability limits of the conventional droop-based system (ω-P / V-Q) are determined by the ratio of inverter rating to network capacity, leading to a smaller stability region for microgrids with shorter lines. A theoretical derivation verifies this finding for both simplified and generalized network configurations. More importantly, the proposed reduced-order model not only maintains modeling accuracy but also improves computational efficiency. Finally, the results are verified against the detailed model via both frequency- and time-domain analyses.
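
To make the droop setting concrete, the sketch below builds a heavily simplified two-inverter ω-P droop loop (lossless line, low-pass-filtered power measurement, no V-Q loop and no line electromagnetic dynamics, all of which the paper's reduced-order model treats more carefully) and sweeps the line stiffness while inspecting the eigenvalues of the linearized state matrix. The model and every parameter value are illustrative assumptions, not the paper's model.

```python
# Minimal sketch (not the paper's model): small-signal eigenvalue check for a
# two-inverter omega-P droop loop over a single line, using the common
# low-pass-filtered power measurement. V-Q dynamics and the line's
# electromagnetic dynamics -- which the paper shows can matter -- are omitted.
# All parameter values below are illustrative assumptions.
import numpy as np

omega_c = 2 * np.pi * 5.0   # power-measurement filter cutoff [rad/s] (assumed)
m_p     = 1e-4              # omega-P droop gain [rad/s per W]; ~1/inverter rating
B       = 5e3               # line "stiffness" [W/rad]; grows as lines get shorter

def state_matrix(m_p, B, omega_c):
    """Linearized states: [delta, P1_filt, P2_filt] around delta = 0."""
    return np.array([
        [0.0,          -m_p,      m_p     ],   # d(delta)/dt = omega1 - omega2
        [omega_c * B,  -omega_c,  0.0     ],   # filtered power, inverter 1
        [-omega_c * B,  0.0,     -omega_c ],   # filtered power, inverter 2
    ])

# Sweep the line stiffness (inversely related to line length) and inspect
# the dominant eigenvalue of the reduced model.
for scale in (0.5, 1.0, 2.0, 5.0):
    eigs = np.linalg.eigvals(state_matrix(m_p, scale * B, omega_c))
    dominant = eigs[np.argmax(eigs.real)]
    print(f"B x{scale:>3}: dominant eigenvalue = {dominant:.2f}")
```
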

Related Research

This paper proposes a novel large-signal order reduction (LSOR) approach for microgrids (MG) by embedding a stability and accuracy assessment theorem. Unlike existing order reduction methods, the proposed approach offers two main advantages. First, the dynamic stability of the full-order MG model can be assessed using only its derived reduced-order model and boundary-layer model. Specifically, when the reduced-order system is input-to-state stable and the boundary-layer system is uniformly globally asymptotically stable, the original MG system can be proven stable under several common growth conditions. Second, a set of accuracy assessment criteria is developed and embedded into a tailored feedback mechanism to guarantee the accuracy of the derived reduced model. It is proved that the errors between the solutions of the reduced and original models are bounded and convergent under these conditions. A rigorous mathematical proof of the proposed stability and accuracy assessment theorem is provided. The proposed LSOR method is generic and can be applied to arbitrary dynamic systems. Multiple case studies on MG systems demonstrate the effectiveness of the proposed approach.
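
The reduced-order/boundary-layer split described above follows the singular perturbation pattern. As a rough illustration of that pattern only (not the authors' LSOR algorithm), the toy example below reduces a two-timescale system by replacing the fast state with its quasi-steady-state solution and compares the slow trajectory of the full and reduced models; the functions f, g, and h are assumptions chosen for illustration.

```python
# Illustrative sketch only (not the paper's LSOR algorithm): singular-perturbation
# style reduction of a toy two-timescale system
#   x' = f(x, z),   eps * z' = g(x, z),
# where the reduced model replaces the fast state z by its quasi-steady-state
# solution z = h(x) of g(x, z) = 0.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01  # timescale separation (assumed small)

def f(x, z):            # slow dynamics (assumed)
    return -x + z

def g(x, z):            # fast dynamics (assumed); g = 0  =>  z = h(x)
    return -z + np.tanh(x)

def h(x):               # quasi-steady-state manifold
    return np.tanh(x)

def full(t, s):         # full-order model: states [x, z]
    x, z = s
    return [f(x, z), g(x, z) / eps]

def reduced(t, s):      # reduced-order model: state [x] with z = h(x)
    x = s[0]
    return [f(x, h(x))]

t_span, t_eval = (0.0, 5.0), np.linspace(0.0, 5.0, 200)
sol_full = solve_ivp(full, t_span, [1.0, 0.0], t_eval=t_eval)
sol_red  = solve_ivp(reduced, t_span, [1.0], t_eval=t_eval)

err = np.max(np.abs(sol_full.y[0] - sol_red.y[0]))
print(f"max |x_full - x_reduced| over the horizon: {err:.3e}")
```
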
Transient stability assessment is a critical tool for power system design and operation. With emerging advanced synchrophasor measurement techniques, machine learning methods are playing an increasingly important role in power system stability assessment. However, most existing research makes the strong assumption that the measurement data transmission delay is negligible. In this paper, we investigate the influence of communication delay on synchrophasor-based transient stability assessment. In particular, we develop a delay-aware intelligent system to address this issue. By utilizing an ensemble of multiple long short-term memory networks, the proposed system can make early assessments from incomplete system variable measurements, achieving a much shorter response time. Compared with existing work, our system makes accurate assessments with significantly improved efficiency. We perform numerous case studies to demonstrate the superiority of the proposed intelligent system, in which accurate assessments are obtained in one-third less time than state-of-the-art methodologies. Moreover, the simulations indicate that measurement noise has a negligible impact on the assessment performance, demonstrating the robustness of the proposed system.
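
As a rough illustration of the kind of model such a system could build on (the authors' exact architecture and ensemble combination rule are not reproduced here), the sketch below defines a small LSTM classifier in PyTorch that scores stability from however many synchrophasor samples have arrived, so an assessment can be issued before the full, delay-affected window is received; layer sizes and the two-class output are assumptions.

```python
# Hypothetical sketch (not the authors' architecture): an LSTM classifier that
# scores stability from the samples received so far, enabling early assessment
# under communication delay.
import torch
import torch.nn as nn

class EarlyTSAClassifier(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # stable vs. unstable (assumed labels)

    def forward(self, x):                  # x: (batch, received_steps, features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])          # logits from the last hidden state

model = EarlyTSAClassifier(n_features=8)

# Toy example: only 10 of, say, 60 post-fault samples have arrived due to delay.
partial_window = torch.randn(1, 10, 8)
probs = torch.softmax(model(partial_window), dim=-1)
print("P(stable), P(unstable):", probs.detach().numpy().round(3))

# An ensemble (as in the abstract) would train one such model per window length
# and combine their outputs; that combination rule is not shown here.
```
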
Online identification of post-contingency transient stability is essential in power system control, as it helps the grid operator decide and coordinate corrective control actions after a system failure. Utilizing machine learning methods with synchrophasor measurements for transient stability assessment has received much attention recently with the gradual deployment of wide-area protection and control systems. In this paper, we develop a transient stability assessment system based on the long short-term memory network. By proposing a temporal self-adaptive scheme, our system aims to balance the trade-off between assessment accuracy and response time, both of which may be crucial in real-world scenarios. Compared with previous work, the most significant enhancement is that our system learns the temporal dependencies of the input data, which contributes to better assessment accuracy. In addition, the model structure of our system is relatively simple, which speeds up the model training process. Case studies on three power systems demonstrate the efficacy of the proposed transient stability assessment system.
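
One way to picture a temporal self-adaptive trade-off between accuracy and response time is an early-exit decision rule: re-score the trajectory as each sample arrives and commit only once the confidence clears a threshold. The sketch below shows that rule with a stand-in scoring function; it is not the authors' scheme, and the confidence model is purely illustrative.

```python
# Minimal sketch of a time-adaptive decision rule in the spirit of the abstract:
# at each new synchrophasor sample, re-score the trajectory and commit to an
# assessment only once the classifier's confidence clears a threshold.
import numpy as np

rng = np.random.default_rng(0)

def score_stability(window: np.ndarray) -> float:
    """Stand-in for a trained sequence classifier; returns P(unstable)."""
    # Assumption for illustration: confidence grows as more samples arrive.
    return float(np.clip(0.5 + 0.04 * len(window) + 0.02 * rng.standard_normal(), 0, 1))

def assess(measurements: np.ndarray, threshold: float = 0.9):
    for t in range(1, len(measurements) + 1):
        p_unstable = score_stability(measurements[:t])
        confidence = max(p_unstable, 1.0 - p_unstable)
        if confidence >= threshold:
            label = "unstable" if p_unstable >= 0.5 else "stable"
            return label, t                      # early, confident decision
    return "undecided", len(measurements)        # fall back to the full window

trajectory = rng.standard_normal((60, 8))        # 60 samples x 8 channels (toy)
label, steps_used = assess(trajectory)
print(f"assessment: {label} after {steps_used} of {len(trajectory)} samples")
```
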
This paper provides a new avenue for exploiting deep neural networks to improve physics-based simulation. Specifically, we integrate classic Lagrangian mechanics with a deep autoencoder to accelerate elastic simulation of deformable solids. Due to the inertia effect, the dynamic equilibrium cannot be established without evaluating the second-order derivatives of the deep autoencoder network. This is beyond the capability of off-the-shelf automatic differentiation packages and algorithms, which mainly focus on gradient evaluation. Solving the nonlinear force equilibrium is even more challenging if the standard Newton's method is used, because a third-order derivative of the network is needed to obtain the variational Hessian. We address these difficulties by exploiting complex-step finite differences coupled with reverse automatic differentiation. This strategy allows us to enjoy the convenience and accuracy of the complex-step finite difference while deploying complex-valued perturbations as collectively as possible to save excessive network passes. With a GPU-based implementation, we are able to run deep autoencoders (e.g., 10+ layers) with a relatively high-dimensional latent space in real time. Along this pipeline, we also design a sampling network and a weighting network to enable weight-varying cubature integration, in order to incorporate nonlinearity into the model reduction. We believe this work will inspire and benefit future research efforts in nonlinearly reduced physical simulation problems.
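
The complex-step finite difference that the abstract leans on is easy to demonstrate in isolation: perturb the input along the imaginary axis and read the derivative off the imaginary part, avoiding the subtractive cancellation of ordinary finite differences. The toy function below is an assumption for illustration; the paper applies the idea to network derivatives together with reverse-mode automatic differentiation, which is not shown here.

```python
# Complex-step differentiation: f'(x) ~ Im(f(x + i*h)) / h for analytic f,
# which avoids the subtractive cancellation of ordinary finite differences.
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x)             # toy analytic function (assumed)

def complex_step_derivative(f, x, h=1e-20):
    return np.imag(f(x + 1j * h)) / h

x = 0.7
exact = np.exp(x) * (np.sin(x) + np.cos(x))   # analytic derivative of the toy f
print("complex-step:", complex_step_derivative(f, x))
print("exact       :", exact)
# The error stays at machine precision even for h = 1e-20, unlike central
# differences, because no subtraction of nearly equal numbers occurs.
```
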
Model instability and poor prediction of long-term behavior are common problems when modeling dynamical systems using nonlinear black-box techniques. Direct optimization of the long-term predictions, often called simulation error minimization, leads to optimization problems that are generally non-convex in the model parameters and suffer from multiple local minima. In this work, we present methods that address these problems through convex optimization, based on Lagrangian relaxation, dissipation inequalities, contraction theory, and semidefinite programming. We demonstrate the proposed methods on a model order reduction task for electronic circuit design and on the identification of a pneumatic actuator from experimental data.
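
As a much-simplified convex surrogate for this idea (not the paper's Lagrangian-relaxation or simulation-error formulation), the sketch below fits a continuous-time linear model to toy data by equation-error least squares while enforcing contraction in the identity metric through the LMI A + A^T <= -2*eps*I, a convex sufficient condition for stability. It uses cvxpy, and all data and parameter values are assumptions.

```python
# Simplified convex surrogate: least-squares fit of x' = A x with a
# semidefinite (LMI) constraint guaranteeing the identified model is stable.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, T, eps = 3, 200, 0.05

A_true = np.array([[-1.0,  2.0,  0.0],        # toy "true" system (assumed)
                   [-2.0, -0.5,  1.0],
                   [ 0.0, -1.0, -0.8]])
X    = rng.standard_normal((n, T))                        # sampled states (toy)
Xdot = A_true @ X + 0.01 * rng.standard_normal((n, T))    # noisy derivatives

A = cp.Variable((n, n))
fit    = cp.Minimize(cp.sum_squares(Xdot - A @ X))        # equation-error fit
stable = [A + A.T << -2 * eps * np.eye(n)]                # contraction LMI
cp.Problem(fit, stable).solve()

eigs = np.linalg.eigvals(A.value)
print("identified model is Hurwitz:", bool(np.all(eigs.real < 0)))
```
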
