
A Graph Computation based Sequential Power Flow Calculation for Large-Scale AC/DC Systems

Published by: Wei Feng
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





This paper proposes a graph computation based sequential power flow calculation method for Line Commutated Converter (LCC) based large-scale AC/DC systems to achieve high computing performance. Based on graph theory, the complex AC/DC system is first converted to a graph model and stored in a graph database. Then, the hybrid system is divided into several isolated areas with a graph partition algorithm by decoupling the AC and DC networks. Thus, the power flow analysis can be executed in parallel for each independent area with newly selected slack buses. Furthermore, for each area, the node-based parallel computing (NPC) and hierarchical parallel computing (HPC) used in graph computation are employed to speed up fast decoupled power flow (FDPF). Comprehensive case studies on the IEEE 300-bus system, the synthetic South Carolina 12,000-bus system and a China 11,119-bus system are performed to demonstrate the accuracy and efficiency of the proposed method.
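As a rough illustration of the decoupling step described above, the sketch below (an assumption, not the authors' implementation) removes the DC converter branches from a networkx graph, treats each remaining connected AC component as an independent area with its own locally chosen slack bus, and dispatches a placeholder FDPF solve per area in parallel; the names split_acdc and solve_area are illustrative only.

# Hedged sketch: decouple an AC/DC graph at converter branches and solve each
# AC island's power flow in parallel. Names are illustrative, not the paper's code.
import networkx as nx
from concurrent.futures import ProcessPoolExecutor

def split_acdc(graph):
    """Remove DC branches so each remaining connected component is an AC area."""
    ac = graph.copy()
    dc_edges = [(u, v) for u, v, d in ac.edges(data=True) if d.get("kind") == "dc"]
    ac.remove_edges_from(dc_edges)
    areas = [ac.subgraph(c).copy() for c in nx.connected_components(ac)]
    return areas, dc_edges

def solve_area(area):
    """Placeholder for graph-based fast decoupled power flow on one area.
    A new slack bus is chosen locally, e.g. the highest-degree bus."""
    slack = max(area.nodes, key=area.degree)
    # ... FDPF iterations on this area's B' / B'' matrices would go here ...
    return {"slack": slack, "n_buses": area.number_of_nodes()}

def sequential_acdc_power_flow(graph):
    areas, dc_links = split_acdc(graph)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(solve_area, areas))
    # In a sequential AC/DC method, DC link set points would then be updated and
    # the AC solves repeated until the AC/DC boundary quantities converge.
    return results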




Read also

Large-scale power systems are comprised of regional utilities with IIoT-enabled assets that stream sensor readings in real time. In order to detect cyberattacks, the globally acquired, real-time sensor data needs to be analyzed in a centralized fashion. However, owing to operational constraints, such a centralized sharing mechanism turns out to be a major obstacle. In this paper, we propose a blockchain based decentralized framework for detecting coordinated replay attacks with full privacy of sensor data. We develop a Bayesian inference mechanism employing locally reported attack probabilities that is tailor-made for a blockchain framework. We compare our framework to a traditional decentralized algorithm based on the broadcast gossip framework, both theoretically and empirically. With the help of experiments on a private Ethereum blockchain, we show that our approach achieves good detection quality and significantly outperforms gossip-driven approaches in terms of accuracy, timeliness and scalability.
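A minimal sketch of the kind of Bayesian fusion of locally reported attack probabilities described above, assuming conditionally independent local evidence and a shared prior; the update rule and function name are illustrative assumptions, not the paper's exact mechanism.

# Hedged sketch: combine per-utility posteriors P(attack | local data) into a
# global posterior under an independence assumption (not the paper's exact rule).
import math

def fuse_attack_probabilities(local_probs, prior=0.01):
    """local_probs: list of P(attack | local sensor data) reported on-chain."""
    log_odds = math.log(prior / (1.0 - prior))
    for p in local_probs:
        p = min(max(p, 1e-9), 1 - 1e-9)  # guard against log(0)
        # local likelihood ratio implied by the reported posterior and the shared prior
        log_odds += math.log(p * (1 - prior) / ((1 - p) * prior))
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

# Example: three utilities report elevated local attack probabilities
print(fuse_attack_probabilities([0.7, 0.8, 0.6]))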
283 - Chen Yuan, Yi Lu, Wei Feng, 2019
Power flow analysis plays a fundamental and critical role in the energy management system (EMS). It is required to accommodate large and complex power systems well. To achieve high-performance and accurate power flow analysis, a graph computing based distributed power flow analysis approach is proposed in this paper. Firstly, the power system network is divided into multiple areas. Slack buses are selected for each area and, at each SCADA sampling period, the inter-area transmission line power flows are equivalently allocated as extra load injections to the corresponding buses. Then, the system network is converted into multiple independent areas. In this way, the power flow analysis can be conducted in parallel for each area, and the solved system states are obtained without compromising accuracy. Besides, for each area, graph computing based fast decoupled power flow (FDPF) is employed to quickly analyze system states. The IEEE 118-bus system and an MP 10790-bus system are employed to verify the accuracy of the results and demonstrate the promising computational performance of the proposed approach.
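The area-decoupling step lends itself to a short illustration: measured inter-area tie-line flows are folded into their terminal buses as extra load injections so each area can then be solved on its own. The data structures and names below are assumptions for illustration only.

# Hedged sketch of converting tie-line flows into equivalent bus load injections.
def decouple_areas(bus_loads, tie_lines):
    """
    bus_loads: dict bus_id -> (P_load, Q_load)
    tie_lines: list of dicts with keys 'from_bus', 'to_bus',
               'p_from', 'q_from', 'p_to', 'q_to'
               (measured flows leaving each terminal bus into the line).
    Returns adjusted loads with the tie-line flows folded in as injections.
    """
    adjusted = dict(bus_loads)
    for line in tie_lines:
        pf, qf = adjusted.get(line["from_bus"], (0.0, 0.0))
        pt, qt = adjusted.get(line["to_bus"], (0.0, 0.0))
        # power leaving a bus over the (now removed) tie line looks like extra load
        adjusted[line["from_bus"]] = (pf + line["p_from"], qf + line["q_from"])
        adjusted[line["to_bus"]] = (pt + line["p_to"], qt + line["q_to"])
    return adjusted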
95 - Junyao Guo, Gabriela Hug, 2016
Distributed optimization for solving non-convex Optimal Power Flow (OPF) problems in power systems has attracted tremendous attention in the last decade. Most studies are based on the geographical decomposition of IEEE test systems for verifying the feasibility of the proposed approaches. However, it is not clear if one can extrapolate from these studies that those approaches can be applied to very large-scale real-world systems. In this paper, we show, for the first time, that distributed optimization can be effectively applied to a large-scale real transmission network, namely, the Polish 2383-bus system for which no pre-defined partitions exist, by using a recently developed partitioning technique. More specifically, the problem solved is the AC OPF problem with geographical decomposition of the network using the Alternating Direction Method of Multipliers (ADMM) method in conjunction with the partitioning technique. Through extensive experimental results and analytical studies, we show that with the presented partitioning technique the convergence performance of ADMM can be improved substantially, which enables the application of distributed approaches on very large-scale systems.
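For readers unfamiliar with the method, a bare-bones consensus ADMM skeleton over partitioned regions looks roughly as follows; the regional OPF subproblem is abstracted behind a solve_local call, and all names, parameters and the interface are illustrative rather than taken from the paper.

# Hedged sketch: consensus ADMM over geographically partitioned regions.
# Each region solves a local subproblem with copies of the shared boundary
# variables; the copies are averaged and the duals updated.
import numpy as np

def consensus_admm(regions, n_shared, rho=1.0, iters=200):
    """
    regions: list of objects with .solve_local(z, lam, rho) -> np.ndarray,
             returning that region's estimate of the shared boundary variables.
    n_shared: number of shared boundary variables.
    """
    z = np.zeros(n_shared)                          # consensus variable
    lam = [np.zeros(n_shared) for _ in regions]     # per-region dual variables
    for _ in range(iters):
        xs = [r.solve_local(z, lam[i], rho) for i, r in enumerate(regions)]
        z = np.mean(xs, axis=0)                     # consensus (averaging) step
        for i, x in enumerate(xs):
            lam[i] += rho * (x - z)                 # dual update step
    return z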
As online service systems continue to grow in complexity and volume, how service incidents are managed significantly impacts company revenue and user trust. Due to cascading effects, cloud failures often come with an overwhelming number of incidents from dependent services and devices. To pursue efficient incident management, related incidents should be quickly aggregated to narrow down the problem scope. To this end, in this paper, we propose GRLIA, an incident aggregation framework based on graph representation learning over the cascading graph of cloud failures. A representation vector is learned for each unique type of incident in an unsupervised and unified manner, simultaneously encoding the topological and temporal correlations among incidents. Thus, it can be easily employed for online incident aggregation. In particular, to learn the correlations more accurately, we try to recover the complete scope of the failures' cascading impact by leveraging fine-grained system monitoring data, i.e., Key Performance Indicators (KPIs). The proposed framework is evaluated with real-world incident data collected from a large-scale online service system of Huawei Cloud. The experimental results demonstrate that GRLIA is effective and outperforms existing methods. Furthermore, our framework has been successfully deployed in industrial practice.
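To make the online aggregation step concrete, here is a hedged sketch in which each new incident joins the existing group whose centroid embedding is most similar, or starts a new group otherwise; the similarity threshold and data structures are assumptions, and the representation learning itself is not shown.

# Hedged sketch: online aggregation of incidents by embedding similarity.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class OnlineAggregator:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.clusters = []              # list of (centroid_embedding, [incident_ids])

    def add(self, incident_id, embedding):
        best, best_sim = None, -1.0
        for cluster in self.clusters:
            sim = cosine(embedding, cluster[0])
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best is not None and best_sim >= self.threshold:
            best[1].append(incident_id)                       # join existing group
        else:
            self.clusters.append((embedding, [incident_id]))  # start a new group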
The ability to compute similarity scores between graphs based on metrics such as Graph Edit Distance (GED) is important in many real-world applications, such as 3D action recognition and biological molecular identification. Computing exact GED values is typically an NP-hard problem, and traditional algorithms usually achieve an unsatisfactory trade-off between accuracy and efficiency. Recently, Graph Neural Networks (GNNs) have provided a data-driven solution for this task that is more efficient while maintaining prediction accuracy for similarity computation on small graphs (around 10 nodes per graph). Existing GNN-based methods, which either embed the two graphs separately (lacking low-level cross-graph interactions) or deploy cross-graph interactions over whole graph pairs (redundant and time-consuming), are still not able to achieve competitive results as the number of nodes in the graphs increases. In this paper, we focus on similarity computation for large-scale graphs and propose the embedding-coarsening-matching framework, which first embeds and coarsens large graphs into coarsened graphs with denser local topology and then deploys fine-grained interactions on the coarsened graphs to obtain the final similarity scores.
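A highly simplified sketch of an embedding-coarsening-matching style pipeline is given below: node embeddings are pooled into super-node embeddings by clustering, and a stand-in matching function scores the coarsened pair. The clustering choice (k-means via scikit-learn) and the matching stub are assumptions, not the authors' architecture.

# Hedged sketch: coarsen node embeddings into super-nodes, then score the pair.
import numpy as np

def coarsen(node_embeddings, n_clusters):
    """Group nodes with k-means and average-pool each cluster's embeddings
    into one super-node embedding (stand-in for a learned coarsening module)."""
    from sklearn.cluster import KMeans
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(node_embeddings)
    return np.stack([node_embeddings[labels == k].mean(axis=0)
                     for k in range(n_clusters)])

def match_score(coarse_a, coarse_b):
    """Cross-graph interaction on the coarsened graphs: here just the mean of
    the pairwise similarity matrix, standing in for a learned matching module."""
    sim = coarse_a @ coarse_b.T
    return float(sim.mean())

def graph_similarity(emb_a, emb_b, n_clusters=8):
    return match_score(coarsen(emb_a, n_clusters), coarsen(emb_b, n_clusters))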