
Mitigating Blackout Risk via Maintenance: Inference from Simulation Data

Published by Jinpeng Guo
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English





While maintenance has long been recognized as an important and effective means of risk management in power systems, maintenance planning becomes intractable once cascading blackout risk is considered, owing to the extremely high computational complexity. In this paper, based on inference from blackout simulation data, we propose a methodology to efficiently identify the most influential component(s) for mitigating cascading blackout risk in a large power system. To this end, we first establish an analytic relationship between maintenance strategies and blackout risk estimation by inferring from the data of cascading outage simulations. Then we formulate the component maintenance decision-making problem as a nonlinear 0-1 program. Afterwards, we quantify the credibility of the blackout risk estimate, leading to an adaptive method for determining the least required number of simulations, which serves as a crucial parameter of the optimization model. Finally, we devise two heuristic algorithms that find approximate optimal solutions to the model with very high efficiency. Numerical experiments demonstrate the efficacy and efficiency of our methodology.
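The abstract does not spell out the optimization model or the heuristics; the following is a minimal Python sketch of one plausible greedy heuristic for the 0-1 selection problem. The sample format (each simulated cascade summarized by its blackout cost and the set of components implicated in it) and the assumption that maintaining a component pre-empts the cascades it initiates are illustrative assumptions, not the paper's model.

def estimated_risk(samples, maintained):
    """Average blackout cost over simulated cascades, crediting zero cost
    to cascades pre-empted by the chosen maintenance set."""
    total = 0.0
    for cost, components in samples:
        # Illustrative simplification: a cascade is pre-empted if any of
        # its initiating components has been maintained.
        if not components & maintained:
            total += cost
    return total / len(samples)

def greedy_maintenance(samples, candidates, budget):
    """Pick up to `budget` components, each step taking the component with
    the largest marginal reduction in estimated blackout risk."""
    maintained = set()
    for _ in range(budget):
        base = estimated_risk(samples, maintained)
        best, best_gain = None, 0.0
        for c in candidates - maintained:
            gain = base - estimated_risk(samples, maintained | {c})
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:  # no remaining component reduces risk
            break
        maintained.add(best)
    return maintained

# Toy usage: (cost, {initiating components}) pairs from a cascade simulator.
samples = [(120.0, {"L1", "L7"}), (40.0, {"L3"}), (300.0, {"L7", "L9"})]
print(greedy_maintenance(samples, {"L1", "L3", "L7", "L9"}, budget=2))

A greedy rule like this is a natural baseline for nonlinear 0-1 problems because each step needs only re-evaluations of the risk estimate on the stored samples, not new simulations.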




Read also

We study the logistical investment flexibility provided by modular processing technologies for mitigating risk. Specifically, we propose a multi-stage stochastic programming formulation that determines optimal capacity expansion plans that mitigate demand uncertainty. The formulation accounts for multi-product dependencies between small/large units and for trade-offs between expected profit and risk. The formulation uses a cumulative risk measure to avoid time-consistency issues of traditional, per-stage risk-minimization formulations, and we argue that this approach is more compatible with typical investment metrics such as the net present value. Case studies of different complexity are presented to illustrate the developments. Our studies reveal that the Pareto frontier of a flexible setting (allowing for deployment of small units) dominates the Pareto frontier of an inflexible setting (allowing only for deployment of large units). Notably, this dominance persists despite the economies of scale enjoyed by large processing units.
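As a point of reference for the profit-risk trade-off described above, here is a small illustrative sketch (not the paper's cumulative risk measure) of a scenario-based conditional value-at-risk (CVaR) computation, a standard way to put a risk axis on such a Pareto frontier; the scenario profits are made-up numbers.

import numpy as np

def cvar_loss(profits, alpha=0.9):
    """Conditional value-at-risk of the loss (-profit): the average of the
    worst (1 - alpha) fraction of scenario outcomes."""
    losses = np.sort(-np.asarray(profits))   # losses in ascending order
    k = int(np.ceil((1 - alpha) * len(losses)))
    return losses[-k:].mean()                # mean of the k worst losses

# Hypothetical scenario profits for one capacity expansion plan.
profits = np.array([5.0, 8.0, -2.0, 12.0, 3.0, -6.0, 9.0, 7.0, 1.0, 10.0])
print("expected profit:", profits.mean())
print("CVaR_0.9 loss:  ", cvar_loss(profits, alpha=0.9))

Sweeping the weight between expected profit and such a risk term, and re-solving the expansion problem at each weight, traces out the Pareto frontier the abstract compares across flexible and inflexible settings.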
The risk of cascading blackouts depends heavily on the failure probabilities of individual components in power grids. To quantify how component failure probabilities (CFP) influence blackout risk (BR), this paper proposes a sample-induced semi-analytic approach to characterize the relationship between CFP and BR. To this end, we first give a generic component failure probability function (CoFPF) to describe CFP with varying parameters or forms. Then the exact relationship between BR and CoFPFs is built on the abstract Markov-sequence model of cascading outages. Leveraging a set of samples generated by blackout simulations, we further establish a sample-induced semi-analytic mapping between the unbiased estimation of BR and the CoFPFs. Finally, we derive an efficient algorithm that can directly calculate the unbiased estimation of BR when the CoFPFs change. Since no additional simulations are required, the algorithm is computationally scalable and efficient. Numerical experiments confirm the theory and the algorithm.
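The paper's exact mapping is not reproduced here; the following is a minimal hedged sketch of the underlying reuse-and-reweight idea: samples generated under old component failure probabilities are reweighted by likelihood ratios to estimate blackout risk under new probabilities, with no new simulations. The single-step sample format and the probability values below are assumptions for illustration.

def reweighted_risk(samples, p_old, p_new):
    """Each sample: (blackout_cost, failed_ids, survived_ids).
    Sample weight = prod over failed components of p_new/p_old
                  * prod over survived components of (1-p_new)/(1-p_old)."""
    total = 0.0
    for cost, failed, survived in samples:
        w = 1.0
        for c in failed:
            w *= p_new[c] / p_old[c]
        for c in survived:
            w *= (1.0 - p_new[c]) / (1.0 - p_old[c])
        total += w * cost
    return total / len(samples)

# Toy usage: two components, made-up samples drawn under the old probabilities.
p_old = {"a": 0.10, "b": 0.20}
p_new = {"a": 0.05, "b": 0.20}   # e.g. component "a" after maintenance
samples = [(100.0, ["a"], ["b"]), (0.0, [], ["a", "b"]), (50.0, ["b"], ["a"])]
print(reweighted_risk(samples, p_old, p_new))

In this simplified form the estimator remains unbiased under the new probabilities because each weight is exactly the likelihood ratio of the sample under the new versus old CoFPF values.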
Jacob Abernethy, 2016
Recovery from the Flint Water Crisis has been hindered by uncertainty in both the water testing process and the causes of contamination. In this work, we develop an ensemble of predictive models to assess the risk of lead contamination in individual homes and neighborhoods. To train these models, we utilize a wide range of data sources, including voluntary residential water tests, historical records, and city infrastructure data. Additionally, we use our models to identify the most prominent factors that contribute to a high risk of lead contamination. In this analysis, we find that lead service lines are not the only factor predictive of the risk of lead contamination in water. These results could be used to guide long-term recovery efforts in Flint, minimize the immediate damage, and improve resource-allocation decisions for similar water infrastructure crises.
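Purely to illustrate the modeling setup (not the authors' ensemble or features), here is a sketch of a single gradient-boosted classifier scoring home-level lead-contamination risk; the features, labels, and data are synthetic.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1900, 2015, n),   # construction year
    rng.integers(0, 2, n),         # has lead service line (0/1)
    rng.uniform(0, 5, n),          # distance to treatment plant (km)
])
# Synthetic ground truth: older homes with lead lines are riskier.
score = 0.5 * (1980 - X[:, 0]) / 80 + 0.5 * X[:, 1] + rng.normal(0, 0.2, n)
y = (score > 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
print("feature importances:", model.feature_importances_)

The feature-importance readout plays the role of the factor analysis in the abstract, flagging which inputs drive the predicted risk beyond the service-line indicator alone.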
The COVID-19 pandemic has highlighted the importance of in-silico epidemiological modelling in predicting the dynamics of infectious diseases to inform health policy and decision makers about suitable prevention and containment strategies. Work in this setting involves solving challenging inference and control problems in individual-based models of ever-increasing complexity. Here we discuss recent breakthroughs in machine learning, specifically in simulation-based inference, and explore its potential as a novel venue for model calibration to support the design and evaluation of public health interventions. To further stimulate research, we are developing software interfaces that turn two cornerstone epidemiology models, COVID-sim (https://github.com/mrc-ide/covid-sim/) for COVID-19 and OpenMalaria (https://github.com/SwissTPH/openmalaria) for malaria, into probabilistic programs, enabling efficient, interpretable Bayesian inference within those simulators.
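For intuition about simulation-based inference, here is a minimal sketch of approximate Bayesian computation (ABC) by rejection, with a toy branching-process outbreak standing in for the full simulators above; everything in it is illustrative, not the probabilistic-programming interfaces the abstract describes.

import numpy as np

rng = np.random.default_rng(1)

def simulate_outbreak(r, n_days=30, i0=10):
    """Toy branching process: each day's new cases ~ Poisson(r * previous)."""
    cases = [i0]
    for _ in range(n_days - 1):
        cases.append(rng.poisson(r * cases[-1]))
    return np.array(cases)

observed = simulate_outbreak(r=1.15)   # pretend this is the real case curve

def abc_posterior(observed, n_draws=5000, tol=0.3):
    """Keep prior draws whose simulated outbreak lands close to the data."""
    accepted = []
    for _ in range(n_draws):
        r = rng.uniform(0.5, 2.0)      # prior over the daily growth rate
        sim = simulate_outbreak(r)
        # Compare on a log scale to tame the exponential growth.
        dist = np.mean(np.abs(np.log1p(sim) - np.log1p(observed)))
        if dist < tol:
            accepted.append(r)
    return np.array(accepted)

post = abc_posterior(observed)
print(f"accepted {len(post)} draws, posterior mean r = {post.mean():.3f}")

Modern simulation-based inference replaces this brute-force rejection loop with learned surrogates of the likelihood or posterior, which is what makes calibrating simulators as heavy as COVID-sim or OpenMalaria feasible.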
Warfarin, a commonly prescribed drug to prevent blood clots, has a highly variable individual response. Determining a maintenance warfarin dose that achieves a therapeutic blood clotting time, as measured by the international normalized ratio (INR), is crucial for preventing complications. Machine learning algorithms are increasingly being used for warfarin dosing; usually, an initial dose is predicted with clinical and genotype factors, and this dose is revised after a few days based on previous doses and the current INR. Since a sequence of prior doses and INR values better captures the variability in individual warfarin response, we hypothesized that longitudinal dose-response data would improve maintenance dose predictions. To test this hypothesis, we analyzed a dataset from the COAG warfarin dosing study, which includes clinical data, warfarin doses and INR measurements over the study period, and the maintenance dose when therapeutic INR was achieved. Various machine learning regression models to predict the maintenance warfarin dose were trained with clinical factors, dosing history, and INR data as features. Overall, dose revision algorithms with a single dose and INR achieved performance comparable to the baseline dose revision algorithm. In contrast, dose revision algorithms with longitudinal dose and INR data provided maintenance dose predictions that were statistically significantly closer to the true maintenance dose. Focusing on the best-performing model, gradient boosting (GB), the proportion of ideal estimated doses, defined as within $\pm$20% of the true dose, increased from the baseline (54.92%) to the GB model with single (63.11%) and longitudinal (75.41%) INR. More accurate maintenance dose predictions with longitudinal dose-response data can potentially achieve therapeutic INR faster, reduce drug-related complications, and improve patient outcomes with warfarin therapy.
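The COAG dataset is not available here; the following sketch only illustrates the evaluation idea, training a gradient-boosting regressor on clinical plus longitudinal dose/INR features over synthetic patients and scoring the fraction of predictions within ±20% of the true maintenance dose. All features and the dose-response model are hypothetical.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_visits = 800, 4
age = rng.uniform(20, 85, n)                       # stand-in clinical factor
sens = rng.uniform(0.15, 0.45, n)                  # hidden dose sensitivity
doses = rng.uniform(2.0, 10.0, (n, n_visits))      # prior daily doses (mg)
inrs = 1.0 + sens[:, None] * doses + rng.normal(0, 0.2, (n, n_visits))
y = 1.5 / sens                                     # dose that yields INR 2.5

X = np.column_stack([age, doses, inrs])            # clinical + longitudinal
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = GradientBoostingRegressor().fit(X_tr, y_tr).predict(X_te)

within20 = np.mean(np.abs(pred - y_te) <= 0.2 * y_te)
print(f"within +/-20% of true maintenance dose: {within20:.1%}")

Dropping all but the last dose/INR pair from X reproduces the single-revision condition the abstract compares against the longitudinal one.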