In this paper, we consider robust control using randomized algorithms. We extend the existing distribution theory of order statistics to the general case in which the population distribution is not assumed to be continuous and the order statistics are subject to certain constraints. In particular, we derive a distributional inequality for the related order statistics. We also propose two different approaches for finding reliable solutions to robust analysis and optimal synthesis problems under constraints. Furthermore, the minimum computational effort is investigated and bounds on the sample size are derived.
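As a rough illustration of the kind of sample-size bound and Monte Carlo estimation involved here (not the bounds derived in the paper), the sketch below uses the classical additive Chernoff/Hoeffding bound, N >= ln(2/delta) / (2 * eps^2), to choose how many samples are needed to estimate a probability of robust performance to within eps at confidence 1 - delta; all function names and the toy uncertainty set are hypothetical.

    import math
    import numpy as np

    def chernoff_sample_size(eps, delta):
        # Additive Chernoff/Hoeffding bound: N >= ln(2/delta) / (2 * eps^2) samples
        # give an estimate within eps of the true probability with confidence 1 - delta.
        return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

    def estimate_robustness(robust_indicator, sample_uncertainty, eps=0.02, delta=0.01, rng=None):
        # Monte Carlo estimate of P{robust performance} over the uncertainty set.
        rng = np.random.default_rng() if rng is None else rng
        n = chernoff_sample_size(eps, delta)
        hits = sum(robust_indicator(sample_uncertainty(rng)) for _ in range(n))
        return hits / n, n

    # Hypothetical example: uncertainty q uniform on [-1, 1]^2, "robust" iff ||q|| <= 0.9.
    p_hat, n = estimate_robustness(
        robust_indicator=lambda q: np.linalg.norm(q) <= 0.9,
        sample_uncertainty=lambda rng: rng.uniform(-1.0, 1.0, size=2),
    )
    print(f"N = {n}, estimated probability of robust performance = {p_hat:.3f}")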
In this paper, we study randomized optimal stopping problems and consider corresponding forward and backward Monte Carlo based optimization algorithms. In particular, we prove the convergence of the proposed algorithms and derive the corresponding convergence rates.
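As context only (the paper's algorithms and setting are not reproduced here), the following minimal sketch shows a backward Monte Carlo approach to an optimal stopping problem in the Longstaff-Schwartz spirit, assuming a hypothetical geometric Brownian motion model and a Bermudan put payoff; continuation values are approximated by polynomial regression on the simulated paths.

    import numpy as np

    def backward_monte_carlo_stopping(s0=1.0, strike=1.0, r=0.05, sigma=0.2,
                                      T=1.0, steps=50, paths=20000, seed=0):
        # Simulate geometric Brownian motion paths at times dt, 2*dt, ..., T
        # (a hypothetical model; the paper's setting is more general).
        rng = np.random.default_rng(seed)
        dt = T / steps
        z = rng.standard_normal((paths, steps))
        s = s0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1))
        payoff = lambda x: np.maximum(strike - x, 0.0)   # Bermudan put payoff

        cashflow = payoff(s[:, -1])                      # exercise value at maturity
        # Backward induction: regress discounted cashflows on the current state
        # to approximate the continuation value, then compare with exercising now.
        for t in range(steps - 2, -1, -1):
            cashflow *= np.exp(-r * dt)
            itm = payoff(s[:, t]) > 0.0                  # regress on in-the-money paths only
            if itm.sum() > 10:
                coeffs = np.polyfit(s[itm, t], cashflow[itm], deg=3)
                continuation = np.polyval(coeffs, s[itm, t])
                exercise = payoff(s[itm, t])
                cashflow[itm] = np.where(exercise >= continuation, exercise, cashflow[itm])
        return float(np.exp(-r * dt) * cashflow.mean())  # discount back to time 0

    print(backward_monte_carlo_stopping())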
In this paper, we develop efficient randomized algorithms for estimating the probabilistic robustness margin and constructing the robustness degradation curve of uncertain dynamic systems. One remarkable feature of these algorithms is their universal applicability to robustness analysis problems with arbitrary robustness requirements and uncertainty bounding sets. In contrast to existing probabilistic methods, our approach does not depend on the feasibility of computing a deterministic robustness margin. We have developed efficient techniques such as probabilistic comparison, probabilistic bisection and backward iteration to facilitate the computation. In particular, confidence intervals for binomial random variables are used repeatedly, both in estimating the probabilistic robustness margin and in evaluating the accuracy of the estimated robustness degradation function. Motivated by the importance of fast computation of binomial confidence intervals in probabilistic robustness analysis, we have derived an explicit formula for constructing a confidence interval for the binomial parameter with guaranteed coverage probability. The formula overcomes the limitation of the normal approximation, which is asymptotic in nature and thus inevitably introduces unknown errors in applications. Moreover, the formula is extremely simple and very tight in comparison with the classical Clopper-Pearson approach.
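For comparison purposes only, here is a minimal sketch of the classical Clopper-Pearson interval (the baseline against which the paper's explicit formula is measured) together with a generic bisection on the uncertainty radius driven by the interval's lower bound; this illustrates the general idea rather than the paper's algorithms, and the toy uncertainty model is hypothetical.

    import numpy as np
    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        # Classical exact (Clopper-Pearson) confidence interval for a binomial parameter.
        lower = 0.0 if k == 0 else beta.ppf(alpha / 2.0, k, n - k + 1)
        upper = 1.0 if k == n else beta.ppf(1.0 - alpha / 2.0, k + 1, n - k)
        return lower, upper

    def bisect_margin(is_robust, sample_in_ball, p_req=0.99, n=2000, alpha=0.01,
                      lo=0.0, hi=2.0, iters=15, rng=None):
        # Bisection on the uncertainty radius: accept a radius while the lower
        # Clopper-Pearson bound on the empirical robustness probability clears p_req.
        rng = np.random.default_rng(0) if rng is None else rng
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            k = sum(is_robust(sample_in_ball(mid, rng)) for _ in range(n))
            if clopper_pearson(k, n, alpha)[0] >= p_req:
                lo = mid
            else:
                hi = mid
        return lo

    # Toy usage: scalar uncertainty uniform on [-r, r], "robust" iff |q| <= 1.
    print(clopper_pearson(970, 1000))           # an interval around 0.97
    print(bisect_margin(is_robust=lambda q: abs(q) <= 1.0,
                        sample_in_ball=lambda r, rng: rng.uniform(-r, r)))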
For optimal power flow problems with chance constraints, a particularly effective method is based on a fixed-point iteration applied to a sequence of deterministic power flow problems. However, the convergence of such an approach is not guaranteed a priori. This article analyses the convergence conditions for this fixed-point approach and reports numerical experiments, including on large IEEE networks.
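A minimal sketch of the general fixed-point scheme discussed here, assuming the deterministic problem and the margin (constraint-tightening) estimator are supplied as black boxes; the one-constraint toy example and all names are hypothetical and do not reproduce the article's formulation.

    import numpy as np
    from scipy.stats import norm

    def chance_constrained_fixed_point(solve_deterministic, estimate_margins,
                                       n_constraints, tol=1e-8, max_iter=50):
        # Generic fixed-point scheme: alternate between a deterministic problem with
        # tightened limits and a re-estimation of the tightenings (margins), stopping
        # when the margins no longer change.
        margins = np.zeros(n_constraints)
        x = None
        for _ in range(max_iter):
            x = solve_deterministic(margins)       # deterministic problem with tightened limits
            new_margins = estimate_margins(x)      # e.g. quantiles of the uncertain injections
            if np.max(np.abs(new_margins - margins)) < tol:
                return x, new_margins, True
            margins = new_margins
        return x, margins, False                   # did not converge within max_iter

    # Toy one-constraint example: dispatch x fills the headroom cap - margin, while the
    # required margin grows proportionally with x (z-quantile of a Gaussian fluctuation).
    cap, sigma, eps = 1.0, 0.1, 0.05
    z = norm.ppf(1.0 - eps)
    x_star, m_star, converged = chance_constrained_fixed_point(
        solve_deterministic=lambda m: np.array([cap]) - m,
        estimate_margins=lambda x: z * sigma * x,
        n_constraints=1,
    )
    print(x_star, converged)                       # converges to cap / (1 + z * sigma)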
The goal of this paper is to make Optimal Experimental Design (OED) computationally feasible for problems involving significant computational expense. We focus exclusively on the Mean Objective Cost of Uncertainty (MOCU), a specific methodology for OED, and we propose extensions to MOCU that leverage surrogates and adaptive sampling. We focus on reducing the computational expense associated with evaluating a large set of control policies across a large set of uncertain variables. We propose reducing the computational expense of MOCU by approximating the intermediate calculations associated with each parameter/control pair with a surrogate. This surrogate is constructed from sparse sampling and (possibly) refined adaptively through a combination of sensitivity estimation and probabilistic knowledge gained directly from the experimental measurements prescribed by MOCU. We demonstrate our methods on example problems and compare their performance against surrogate-approximated MOCU without adaptive sampling and against full MOCU. We find evidence that adaptive sampling does improve performance, but the decision of whether to use surrogate-approximated MOCU or full MOCU depends on the relative expense of computation versus experimentation: if computation is more expensive than experimentation, then our approach should be considered.
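A minimal sketch of the MOCU computation on a discrete parameter/control grid, with a naive linear-interpolation surrogate standing in for the paper's surrogate and without the adaptive-sampling refinement; the quadratic cost model and all names are hypothetical.

    import numpy as np

    def mocu(cost, weights):
        # cost[i, j]: cost of control j under parameter theta_i; weights: p(theta_i).
        # MOCU = expected excess cost of the single robust control over the control
        # that would be optimal if each theta were known exactly.
        robust_ctrl = int(np.argmin(weights @ cost))     # minimizes expected cost
        per_theta_optimal = cost.min(axis=1)             # best achievable cost per theta
        return float(weights @ (cost[:, robust_ctrl] - per_theta_optimal))

    def surrogate_cost(expensive_cost, thetas, controls, sample_idx):
        # Evaluate the expensive model only at a sparse set of parameter values and
        # fill in the remaining rows by 1-D linear interpolation (the surrogate).
        sparse = np.array([[expensive_cost(thetas[i], u) for u in controls]
                           for i in sample_idx])
        full = np.empty((len(thetas), len(controls)))
        for j in range(len(controls)):
            full[:, j] = np.interp(thetas, thetas[sample_idx], sparse[:, j])
        return full

    # Hypothetical toy problem: regulation cost (theta - u)^2 + 0.1 * u^2 on a grid.
    thetas = np.linspace(-1.0, 1.0, 41)
    controls = np.linspace(-1.0, 1.0, 21)
    weights = np.full(len(thetas), 1.0 / len(thetas))
    cost_fn = lambda th, u: (th - u) ** 2 + 0.1 * u ** 2

    full_cost = np.array([[cost_fn(th, u) for u in controls] for th in thetas])
    approx_cost = surrogate_cost(cost_fn, thetas, controls, sample_idx=np.arange(0, 41, 8))
    print(mocu(full_cost, weights), mocu(approx_cost, weights))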
Multistage risk-averse optimal control problems with nested conditional risk mappings are gaining popularity in various application domains. Risk-averse formulations interpolate between classical expectation-based stochastic optimal control and minimax optimal control; in this way, risk-averse problems aim at hedging against extreme low-probability events without being overly conservative. At the same time, risk-based constraints may be employed either as surrogates for chance (probabilistic) constraints or as a robustification of expectation-based constraints. Such multistage problems, however, have been identified as particularly hard to solve. We propose a decomposition method for these nested problems that allows them to be solved via efficient numerical optimization methods. Alongside the decomposition, we propose a new form of risk constraint that accounts for the propagation of uncertainty in time.
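A minimal sketch of how a nested conditional risk objective can be evaluated by backward recursion, assuming CVaR as the conditional risk mapping and a small hypothetical scenario tree; it does not reproduce the paper's decomposition method or its new risk constraints.

    import numpy as np

    def cvar(values, probs, alpha):
        # CVaR_alpha of a discrete cost: average of the worst alpha-fraction of
        # outcomes (equivalent to the Rockafellar-Uryasev formula in this case).
        order = np.argsort(values)[::-1]                  # worst outcomes first
        v, p = np.asarray(values, float)[order], np.asarray(probs, float)[order]
        cum, tail = 0.0, 0.0
        for vi, pi in zip(v, p):
            take = min(pi, alpha - cum)
            if take <= 0.0:
                break
            tail += take * vi
            cum += take
        return tail / alpha

    def nested_risk(node, tree, alpha):
        # Backward recursion on a scenario tree: a node's cost-to-go is its stage
        # cost plus the CVaR, over its children, of their costs-to-go.
        stage_cost, children = tree[node]
        if not children:
            return stage_cost
        child_costs = [nested_risk(child, tree, alpha) for child, _ in children]
        child_probs = [p for _, p in children]
        return stage_cost + cvar(child_costs, child_probs, alpha)

    # Hypothetical two-stage tree: each entry is (stage cost, [(child, probability), ...]).
    tree = {
        "root": (0.0, [("a", 0.5), ("b", 0.5)]),
        "a":    (1.0, [("a1", 0.9), ("a2", 0.1)]),
        "b":    (2.0, [("b1", 0.5), ("b2", 0.5)]),
        "a1":   (0.0, []), "a2": (10.0, []),
        "b1":   (1.0, []), "b2": (3.0, []),
    }
    print(nested_risk("root", tree, alpha=0.2))           # nested CVaR cost-to-go at the root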