
Fast Construction of Robustness Degradation Function

Added by Xinjia Chen
Publication date: 2008
Language: English





We develop a fast algorithm to construct the robustness degradation function, which describes quantitatively the relationship between the proportion of systems guaranteeing the robustness requirement and the radius of the uncertainty set. This function can be applied to predict whether a controller design based on an inexact mathematical model will perform satisfactorily when implemented on the true system.
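As a point of reference, the naive way to obtain such a curve is direct Monte Carlo sampling at each radius; the paper's contribution is an algorithm that is much faster than this baseline. Below is a minimal sketch of the baseline only, assuming hypothetical user-supplied callables `is_robust` (checks the robustness requirement for one sampled uncertainty instance) and `sample_uncertainty` (draws an instance from the uncertainty set of radius r); neither is part of the paper's algorithm.

```python
import numpy as np

def degradation_curve(is_robust, sample_uncertainty, radii, n=2000, seed=0):
    """Estimate the proportion of robust systems at each uncertainty
    radius by plain Monte Carlo (hypothetical callables, see above)."""
    rng = np.random.default_rng(seed)
    curve = []
    for r in radii:
        hits = sum(is_robust(sample_uncertainty(r, rng)) for _ in range(n))
        curve.append(hits / n)  # empirical probability of robustness at radius r
    return curve
```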



Related Research

Xinjia Chen, Kemin Zhou, 2008
In this paper, we develop efficient randomized algorithms for estimating the probabilistic robustness margin and constructing the robustness degradation curve for uncertain dynamic systems. One remarkable feature of these algorithms is their universal applicability to robustness analysis problems with arbitrary robustness requirements and uncertainty bounding sets. In contrast to existing probabilistic methods, our approach does not depend on the feasibility of computing the deterministic robustness margin. We have developed efficient methods such as probabilistic comparison, probabilistic bisection and backward iteration to facilitate the computation. In particular, confidence intervals for binomial random variables are frequently used in the estimation of the probabilistic robustness margin and in evaluating the accuracy of the estimated robustness degradation function. Motivated by the importance of fast computation of binomial confidence intervals in the context of probabilistic robustness analysis, we have derived an explicit formula for constructing a confidence interval of the binomial parameter with guaranteed coverage probability. The formula overcomes the limitation of the normal approximation, which is asymptotic in nature and thus inevitably introduces unknown errors in applications. Moreover, the formula is extremely simple and very tight in comparison with the classic Clopper-Pearson approach.
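The Clopper-Pearson interval mentioned above can be computed from Beta quantiles; a short sketch of that classic baseline follows (the paper's explicit formula itself is not reproduced here).

```python
from scipy.stats import beta

def clopper_pearson(k, n, confidence=0.99):
    """Exact (conservative) two-sided CI for a binomial proportion."""
    alpha = 1.0 - confidence
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# e.g. 970 robust samples out of 1000 trials at 99% confidence
print(clopper_pearson(970, 1000))
```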
In this paper, we formulate a cycling-cost-aware economic dispatch problem that co-optimizes generation and storage dispatch while taking into account cycle-based storage degradation cost. Our approach exploits the Rainflow cycle counting algorithm to quantify storage degradation for each charging and discharging half-cycle based on its depth. We show that the dispatch is optimal for individual participants in the sense that it maximizes the profit of generators and storage units under price-taking assumptions. We further provide a condition under which the optimal storage response is unique for given market clearing prices. Simulations using data from the New York Independent System Operator (NYISO) illustrate the optimization framework. In particular, they show that a generation-centric dispatch that does not account for storage degradation is insufficient to guarantee storage profitability.
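For intuition, here is a compact sketch of rainflow counting in the three-point (ASTM E1049) style applied to a state-of-charge profile, with a hypothetical power-law stress function standing in for the cycle-depth degradation cost; the coefficients and the stress model are illustrative assumptions, not the paper's calibrated cost.

```python
def reversals(series):
    """Reduce a state-of-charge profile to its turning points."""
    pts = [series[0]]
    for x in series[1:]:
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
            pts[-1] = x                      # still rising/falling: extend
        elif x != pts[-1]:
            pts.append(x)                    # direction change
    return pts

def rainflow(series):
    """Three-point rainflow counting; returns full- and half-cycle depths."""
    full, half, stack = [], [], []
    for r in reversals(series):
        stack.append(r)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            if len(stack) == 3:              # range touches the start point
                half.append(y)
                stack.pop(0)
            else:                            # interior range: one full cycle
                full.append(y)
                stack[-3:] = [stack[-1]]
    half.extend(abs(a - b) for a, b in zip(stack, stack[1:]))
    return full, half

def degradation_cost(soc_profile, a=5e-4, b=1.7):
    """Cycle-depth cost with a hypothetical stress function a * d**b;
    each half-cycle contributes half the full-cycle stress."""
    full, half = rainflow(soc_profile)
    return sum(a * d**b for d in full) + 0.5 * sum(a * d**b for d in half)

print(degradation_cost([0.5, 1.0, 0.2, 0.8, 0.0, 0.6]))
```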
This paper tackles the unconstrained minimization of a class of nonsmooth and nonconvex functions that can be written as finite max-functions. A gradient- and function-based sampling method is proposed which, under special circumstances, either moves superlinearly to a minimizer of the problem of interest or superlinearly improves the optimality certificate. Global and local convergence analyses are presented, as well as illustrative examples that corroborate and elucidate the obtained theoretical results.
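For intuition, one step of a basic gradient-sampling scheme for f(x) = max_i f_i(x) takes as descent direction minus the minimum-norm element of the convex hull of sampled active gradients. The sketch below shows that classical step only; the `pieces` interface is a hypothetical stand-in, and the paper's superlinear variants are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def sampling_direction(pieces, x, eps=1e-2, m=10, rng=None):
    """One gradient-sampling step for f(x) = max_i f_i(x): minus the
    minimum-norm element of the convex hull of sampled active gradients."""
    rng = rng or np.random.default_rng(0)
    points = [x] + [x + eps * rng.standard_normal(x.size) for _ in range(m)]
    grads = []
    for p in points:
        i = max(range(len(pieces)), key=lambda j: pieces[j][0](p))
        grads.append(pieces[i][1](p))        # gradient of the active piece
    G = np.asarray(grads)
    w0 = np.full(len(G), 1.0 / len(G))
    res = minimize(lambda w: np.dot(G.T @ w, G.T @ w), w0,
                   bounds=[(0.0, 1.0)] * len(G),
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return -(G.T @ res.x)

# e.g. f(x) = max(x1 + x2, x1**2 + x2**2 - 1): hypothetical test pieces
pieces = [(lambda p: p[0] + p[1],           lambda p: np.array([1.0, 1.0])),
          (lambda p: p[0]**2 + p[1]**2 - 1, lambda p: 2 * p)]
print(sampling_direction(pieces, np.array([1.0, 1.0])))
```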
This paper studies convergence of empirical risks in reproducing kernel Hilbert spaces (RKHS). A conventional assumption in the existing research is that empirical training data do not contain any noise, but this may not be satisfied in some practical circumstances. Consequently, the existing convergence results do not guarantee that empirical risks based on empirical data are reliable when the data contain some noise. In this paper, we fill this gap in a few steps. First, we derive moderate sufficient conditions under which the expected risk changes stably (continuously) against small perturbations of the probability distribution of the underlying random variables, and demonstrate how the cost function and kernel affect this stability. Second, we examine the difference between the laws of the statistical estimators of the expected optimal loss based on pure data and on contaminated data, using the Prokhorov and Kantorovich metrics, and derive some qualitative and quantitative statistical robustness results. Third, we identify appropriate metrics under which the statistical estimators are uniformly asymptotically consistent. These results provide theoretical grounding for analysing asymptotic convergence and examining the reliability of the statistical estimators in a number of well-known machine learning models.
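To make the object of study concrete, the sketch below compares the empirical risk of a kernel ridge estimator fitted on pure versus perturbed data; the Gaussian kernel, squared loss, and noise model are illustrative assumptions rather than the paper's specific setting.

```python
import numpy as np

def krr_fit(X, y, gamma=1.0, lam=1e-2):
    """Kernel ridge regression with a Gaussian kernel (1-D inputs)."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    return np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

def empirical_risk(alpha, X, y, gamma=1.0):
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    return np.mean((K @ alpha - y) ** 2)          # squared loss

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = np.sin(3 * X) + 0.1 * rng.standard_normal(200)
y_noisy = y + 0.05 * rng.standard_normal(200)     # small contamination

# a small data perturbation should move the empirical risk only slightly
print(empirical_risk(krr_fit(X, y), X, y),
      empirical_risk(krr_fit(X, y_noisy), X, y_noisy))
```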
Discovering dense subgraphs and understanding the relations among them is a fundamental problem in graph mining. We want not only to identify dense subgraphs, but also to build a hierarchy among them (e.g., larger but sparser subgraphs formed by two smaller dense subgraphs). Peeling algorithms (k-core, k-truss, and nucleus decomposition) have been effective at locating many dense subgraphs. However, constructing a hierarchical representation of the density structure, and even correctly computing the connected k-cores and k-trusses, has been mostly overlooked. Keeping track of connected components during peeling requires an additional traversal operation, which is as expensive as the peeling process itself. In this paper, we start with a thorough survey and point to nuances in problem formulations that lead to significant differences in runtimes. We then propose efficient and generic algorithms to construct the hierarchy of dense subgraphs for k-core, k-truss, or any nucleus decomposition. Our algorithms leverage the disjoint-set forest data structure to efficiently construct the hierarchy during traversal. Furthermore, we introduce a new idea to avoid traversal altogether: we construct the subgraphs while visiting neighborhoods in the peeling process, and build the relations to previously constructed subgraphs. We also consider an existing idea for finding the k-core hierarchy and adapt it efficiently to our objectives. Experiments on different types of large-scale real-world networks show significant speedups over naive algorithms and existing alternatives. Our algorithms also outperform the hypothetical limits of any possible traversal-based solution.
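A minimal sketch of the disjoint-set idea for the k-core case: compute core numbers by standard peeling, then sweep levels from densest to sparsest, merging each vertex with already-seen neighbors, so the connected k-cores at every level appear without a separate traversal. The adjacency-dict interface is a simplification of the paper's generic nucleus-decomposition setting.

```python
import heapq
from collections import defaultdict

def core_numbers(adj):
    """Standard peeling: repeatedly remove a minimum-degree vertex."""
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    core, seen, k = {}, set(), 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in seen or d != deg[v]:
            continue                         # stale heap entry
        seen.add(v)
        k = max(k, d)
        core[v] = k
        for u in adj[v]:
            if u not in seen:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return core

def kcore_hierarchy(adj, core):
    """Sweep core levels from densest down, merging with a disjoint-set
    forest; the components at each level are the connected k-cores."""
    parent = {v: v for v in adj}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]    # path halving
            v = parent[v]
        return v
    by_level = defaultdict(list)
    for v in adj:
        by_level[core[v]].append(v)
    done, hierarchy = set(), {}
    for k in sorted(by_level, reverse=True):
        for v in by_level[k]:
            done.add(v)
            for u in adj[v]:
                if u in done:
                    parent[find(u)] = find(v)
        comps = defaultdict(list)
        for v in done:
            comps[find(v)].append(v)
        hierarchy[k] = list(comps.values())  # connected k-cores at level k
    return hierarchy

adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(kcore_hierarchy(adj, core_numbers(adj)))
```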