
On the Binomial Confidence Interval and Probabilistic Robust Control

Published by Xinjia Chen
Publication date: 2008
Research language: English





The Clopper-Pearson confidence interval has long been documented as an exact approach in the statistics literature. More recently, this approach to interval estimation has been introduced into probabilistic control theory, where it has been referred to as non-conservative in the control community. In this note, we clarify the fact that the so-called exact approach is actually conservative. In particular, we derive analytic results demonstrating the extent of the conservatism in the context of probabilistic robustness analysis. This investigation encourages the search for better methods of confidence interval construction for robust control purposes.
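To make the conservatism concrete, the following minimal Python sketch (our illustration, not code from the paper) computes the exact coverage probability of the Clopper-Pearson interval at several true proportions p; the coverage sits strictly above the nominal level 1 - alpha, which is precisely the conservatism discussed above.

from scipy.stats import beta, binom

def clopper_pearson(k, n, alpha=0.05):
    # Clopper-Pearson interval for k successes in n Bernoulli trials,
    # via the standard beta-quantile representation.
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def coverage(p, n, alpha=0.05):
    # Exact coverage probability at true proportion p: total binomial
    # probability of all outcomes k whose interval contains p.
    total = 0.0
    for k in range(n + 1):
        lo, hi = clopper_pearson(k, n, alpha)
        if lo <= p <= hi:
            total += binom.pmf(k, n, p)
    return total

if __name__ == "__main__":
    n, alpha = 50, 0.05
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"p = {p}: coverage = {coverage(p, n, alpha):.4f} (nominal {1 - alpha})")

For n = 50 the printed coverage never dips below 0.95 and typically exceeds it by a visible margin; that guaranteed-but-excess coverage is what the note quantifies analytically.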




Read also

Xinjia Chen, Kemin Zhou (2008)
Order statistics theory is applied in this paper to probabilistic robust control theory to compute the minimum sample size needed to obtain a reliable estimate of an uncertain quantity under a continuity assumption on the related probability distribution. The concept of distribution-free tolerance intervals is also applied to estimate the range of an uncertain quantity and to extract information about its distribution. To overcome the limitations imposed by the continuity assumption in existing order statistics theory, we derive a cumulative distribution function of the order statistics without the continuity assumption and develop an inequality showing that this distribution has an upper bound which equals the corresponding distribution when the continuity assumption is satisfied. By applying this inequality, we investigate the minimum computational effort needed to obtain a reliable estimate of the upper bound (or lower bound) and the range of a quantity. We also give conditions, much weaker than absolute continuity, for the existence of such a minimum sample size. Furthermore, the issue of trading off performance level against risk is addressed and a guideline for making this tradeoff is established. This guideline applies in general, without the continuity assumption.
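Under the continuity assumption, one classical sample-size bound of this kind is easy to state: the maximum of N i.i.d. samples exceeds the (1 - epsilon)-quantile with probability at least 1 - delta once (1 - epsilon)^N <= delta. A minimal Python sketch of this standard bound (our illustration, not the paper's generalized results):

import math

def min_sample_size(eps: float, delta: float) -> int:
    # Smallest N with (1 - eps)**N <= delta, i.e. the sample maximum
    # exceeds the (1 - eps)-quantile with confidence at least 1 - delta.
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

# Example: estimate an upper bound valid for 99% of the distribution
# with 95% confidence.
print(min_sample_size(eps=0.01, delta=0.05))  # -> 299

The paper's contribution is to extend guarantees of this type beyond the continuity assumption, where the distribution of the order statistics can only be bounded rather than computed exactly.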
Xinjia Chen (2009)
In this paper, we develop an approach for optimizing the explicit binomial confidence interval recently derived by Chen et al. The optimization reduces conservativeness while guaranteeing prescribed coverage probability.
This paper offers a critical view of the worst-case approach that is the cornerstone of robust control design. It is our contention that a blind acceptance of worst-case scenarios may lead to designs that are actually more dangerous than designs based on probabilistic techniques with a built-in risk factor. The real issue is one of modeling. If one accepts that no mathematical model of uncertainties is perfect, then a probabilistic approach can lead to more reliable control even if it cannot guarantee stability for all possible cases. Our presentation is based on case analysis. We first establish that worst-case is not necessarily all-encompassing. In fact, we show that for some uncertain control problems to have a conventional robust control solution, it is necessary to make assumptions that leave out some feasible cases. Once we establish that point, we argue that it is not uncommon for the risk of unaccounted cases in worst-case design to be greater than the accepted risk in a probabilistic approach. With an example, we quantify the risks and show that worst-case can be significantly more risky. Finally, we join our analysis with existing results on computational complexity and probabilistic robustness to argue that deterministic worst-case analysis is not necessarily the better tool.
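The risk quantification alluded to above is typically done by Monte Carlo: sample the uncertainty, count failures, and choose the sample size from a Hoeffding/Chernoff bound so the estimate carries an explicit confidence statement. A minimal Python sketch with a hypothetical uncertainty model (our illustration, not the paper's example):

import math
import random

def chernoff_sample_size(eps: float, delta: float) -> int:
    # Hoeffding/Chernoff bound: N >= ln(2/delta) / (2 eps^2) guarantees
    # |estimate - true probability| <= eps with confidence >= 1 - delta.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_risk(is_failure, sample_uncertainty, eps=0.01, delta=0.01):
    n = chernoff_sample_size(eps, delta)
    failures = sum(is_failure(sample_uncertainty()) for _ in range(n))
    return failures / n

# Hypothetical plant 1/(s + a) with a drawn uniformly from [-0.1, 1.0];
# the loop is unstable exactly when a <= 0.
risk = estimate_risk(is_failure=lambda a: a <= 0,
                     sample_uncertainty=lambda: random.uniform(-0.1, 1.0))
print(f"estimated instability risk: {risk:.4f}")  # near 0.1/1.1, about 0.091

A worst-case design for this toy plant must either handle every a in [-0.1, 1.0] or exclude the negative values by assumption; the probabilistic design instead accepts and reports the roughly 9% risk explicitly.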
Henry Lam, Fengpei Li (2019)
We consider optimization problems with uncertain constraints that need to be satisfied probabilistically. When data are available, a common method to obtain feasible solutions for such problems is to impose sampled constraints, following the so-called scenario optimization approach. However, when the data size is small, the sampled constraints may not statistically support a feasibility guarantee on the obtained solution. This paper studies how to leverage parametric information and the power of Monte Carlo simulation to obtain feasible solutions for small-data situations. Our approach makes use of a distributionally robust optimization (DRO) formulation that translates the data size requirement into a Monte Carlo sample size requirement drawn from what we call a generating distribution. We show that, while the optimal choice of this generating distribution is the one eliciting the data or the baseline distribution in a nonparametric divergence-based DRO, it is not necessarily so in the parametric case. Correspondingly, we develop procedures to obtain generating distributions that improve upon these basic choices. We support our findings with several numerical examples.
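A bare-bones version of the scenario step described above, in Python (our sketch with made-up data; the paper's DRO and Monte Carlo machinery is not reproduced here):

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 200  # number of sampled (scenario) constraints

# Uncertain linear constraint a(w)^T x <= 1, with a(w) drawn i.i.d.
# around (1, 1); each draw becomes one hard constraint.
A = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(N, 2))
b = np.ones(N)

# Maximize x1 + x2 (linprog minimizes, hence the negated objective)
# subject to every sampled constraint.
res = linprog(c=[-1.0, -1.0], A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print("scenario solution:", res.x)

With few data points, N is forced to be small and the feasibility guarantee degrades; the paper's point is that a parametric model plus Monte Carlo sampling from a well-chosen generating distribution can restore the guarantee.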
We consider several families of binomial sum identities whose definition involves the absolute value function. In particular, we consider centered double sums of the form \[S_{\alpha,\beta}(n) := \sum_{k,\ell}\binom{2n}{n+k}\binom{2n}{n+\ell}\,|k^\alpha-\ell^\alpha|^\beta,\] obtaining new results in the cases $\alpha = 1, 2$. We show that there is a close connection between these double sums in the case $\alpha=1$ and the single centered binomial sums considered by Tuenter.
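For small n the double sum can be checked directly; a minimal brute-force Python sketch (our illustration) tabulates $S_{\alpha,\beta}(n)$ over the valid index range $-n \le k, \ell \le n$:

from math import comb

def S(alpha: int, beta: int, n: int) -> int:
    # Direct evaluation of the centered double sum
    # sum_{k,l} C(2n, n+k) * C(2n, n+l) * |k^alpha - l^alpha|^beta.
    return sum(comb(2 * n, n + k) * comb(2 * n, n + l)
               * abs(k ** alpha - l ** alpha) ** beta
               for k in range(-n, n + 1)
               for l in range(-n, n + 1))

for n in range(1, 6):
    print(n, S(1, 1, n), S(2, 1, n))

Tabulations like this are useful for conjecturing and verifying the closed forms in the cases $\alpha = 1, 2$ treated in the paper.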