
Ordinal Optimisation for the Gaussian Copula Model

Published by: Robert Chin
Publication date: 2019
Language: English





We present results on the estimation and evaluation of success probabilities for ordinal optimisation over uncountable sets (such as subsets of $\mathbb{R}^{d}$). Our formulation invokes an assumption of a Gaussian copula model, and we show that the success probability can be equivalently computed by assuming a special case of additive noise. We formally prove a lower bound on the success probability under the Gaussian copula model, and numerical experiments demonstrate that the lower bound yields a reasonable approximation to the actual success probability. Lastly, we showcase the utility of our results by guaranteeing high success probabilities with ordinal optimisation.
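As a minimal sketch of the additive-noise equivalence described above (the copula correlation rho, the sample size, and the top-alpha success criterion are illustrative assumptions, not the paper's exact setup), the following estimates the same success probability under the two constructions:

import numpy as np

rng = np.random.default_rng(0)

def success_prob(make_obs, n=50, alpha=0.1, trials=20000):
    # Success: the candidate with the largest *observed* value lies in the
    # top-alpha fraction of candidates by *true* value (goal softening).
    hits = 0
    for _ in range(trials):
        x = rng.standard_normal(n)          # true values
        y = make_obs(x)                     # noisy observations
        hits += x[np.argmax(y)] >= np.quantile(x, 1.0 - alpha)
    return hits / trials

rho = 0.7
# Gaussian copula with correlation rho, built directly.
copula = lambda x: rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(x.size)
# Additive-noise special case with matching correlation: y = x + w,
# w ~ N(0, sigma_w^2) with sigma_w^2 = 1/rho^2 - 1 gives corr(x, y) = rho.
sigma_w = np.sqrt(1.0 / rho**2 - 1.0)
additive = lambda x: x + sigma_w * rng.standard_normal(x.size)

print(success_prob(copula), success_prob(additive))   # approximately equal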


Read also

We study the success probability for a variant of the secretary problem, with noisy observations and multiple offline selection. Our formulation emulates, and is motivated by, problems involving noisy selection arising in the disciplines of stochastic simulation and simulation-based optimisation. In addition, we employ the philosophy of ordinal optimisation - involving an ordinal selection rule, and a percentile notion of goal softening for the success probability. As a result, it is shown that the success probability only depends on the underlying copula of the problem. Other general properties for the success probability are also presented. Specialising to the case of Gaussian copulas, we also derive an analytic lower bound for the success probability, which may then be inverted to find sufficiently large sample sizes that guarantee a high success probability arbitrarily close to one.
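A minimal sketch of this noisy, multiple-selection setting under a Gaussian copula (the choices of n, m, alpha, and rho below are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)

def softened_success(n=100, m=5, alpha=0.05, rho=0.6, trials=5000):
    # Ordinal selection rule: keep the m candidates with the largest noisy
    # observations.  Goal softening: success if at least one kept candidate
    # has a true value in the top-alpha fraction of the sample.
    hits = 0
    for _ in range(trials):
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)  # Gaussian copula
        kept = np.argsort(y)[-m:]
        hits += np.any(x[kept] >= np.quantile(x, 1.0 - alpha))
    return hits / trials

# Larger sample sizes should push the estimated success probability toward one,
# which is the behaviour an invertible lower bound can be used to guarantee.
for n in (50, 200, 800):
    print(n, softened_success(n=n))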
Monge-Kantorovich distances, otherwise known as Wasserstein distances, have received growing attention in statistics and machine learning as a powerful discrepancy measure for probability distributions. In this paper, we focus on forecasting a Gaussian process indexed by probability distributions. For this, we provide a family of positive definite kernels built using transportation-based distances. We provide a probabilistic understanding of these kernels and characterize the corresponding stochastic processes. We prove that the Gaussian processes indexed by distributions corresponding to these kernels can be efficiently forecast, opening new perspectives in Gaussian process modeling.
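As a minimal sketch of one transportation-based kernel on one-dimensional empirical distributions (the squared-exponential form and the quantile-matching formula for the Wasserstein-2 distance are illustrative choices; the paper's kernel family may differ):

import numpy as np

def w2_1d(a, b):
    # Wasserstein-2 distance between two 1-D samples of equal size:
    # sort both samples and match quantiles.
    return np.sqrt(np.mean((np.sort(a) - np.sort(b)) ** 2))

def gram(samples, lengthscale=1.0):
    # Gram matrix k(mu_i, mu_j) = exp(-W2(mu_i, mu_j)^2 / (2 * lengthscale^2)),
    # so the GP inputs are probability distributions rather than points.
    n = len(samples)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-w2_1d(samples[i], samples[j]) ** 2 / (2 * lengthscale ** 2))
    return K

rng = np.random.default_rng(2)
dists = [rng.normal(loc=mu, scale=1.0, size=200) for mu in (0.0, 0.5, 2.0)]
print(gram(dists))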
Tail dependence refers to clustering of extreme events. In the context of financial risk management, the clustering of high-severity risks has a devastating effect on the well-being of firms and is thus of pivotal importance in risk analysis. When it comes to quantifying the extent of tail dependence, it is generally agreed that measures of tail dependence must be independent of the marginal distributions of the risks and depend solely on the copula. Indeed, all classical measures of tail dependence are such, but they investigate the amount of tail dependence along the main diagonal of copulas, which often has little in common with the concentration of extremes in the copula's domain of definition. In this paper we urge that the classical measures of tail dependence may underestimate the level of tail dependence in copulas. For the Gaussian copula, however, we prove that the classical measures are maximal. The implication of the result is two-fold: on the one hand, it means that in the Gaussian case, the (weak) measures of tail dependence that have been reported and used are of utmost prudence, which must be reassuring news for practitioners. On the other hand, it further encourages substitution of the Gaussian copula with other copulas that are more tail dependent.
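For reference, the classical diagonal measures referred to above are, in standard notation, the tail-dependence coefficients $\lambda_L = \lim_{u \to 0^{+}} C(u,u)/u$ and $\lambda_U = \lim_{u \to 1^{-}} (1 - 2u + C(u,u))/(1 - u)$; for the Gaussian copula with correlation $\rho < 1$ both limits equal zero, which is why its tail dependence is commonly described as weak.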
Dan Pirjol, Lingjiong Zhu (2019)
We study the explosion of the solutions of the SDE in the quasi-Gaussian HJM model with a CEV-type volatility. The quasi-Gaussian HJM models are a popular approach for modeling the dynamics of the yield curve, due to their low-dimensional Markovian representation, which simplifies their numerical implementation and simulation. We show rigorously that the short rate in these models explodes in finite time with positive probability, under certain assumptions on the model parameters, and that the explosion occurs in finite time with probability one under some stronger assumptions. We discuss the implications of these results for the pricing of zero coupon bonds and Eurodollar futures under this model.
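A minimal sketch of the kind of simulation that makes such blow-up visible, using a one-factor quasi-Gaussian (Cheyette) Markovian representation with a CEV-type local volatility; the particular dynamics and parameter values below are illustrative assumptions and not the paper's exact specification:

import numpy as np

rng = np.random.default_rng(3)

# Assumed one-factor quasi-Gaussian (Cheyette) state dynamics with CEV volatility:
#   dx = (y - kappa * x) dt + eta(x) dW,   dy = (eta(x)^2 - 2 * kappa * y) dt,
#   eta(x) = sigma * max(r0 + x, 0)^beta,  short rate r = r0 + x.
kappa, sigma, beta, r0 = 0.1, 0.5, 1.5, 0.03
T, steps, paths, cap = 10.0, 5000, 2000, 1e6
dt = T / steps

x = np.zeros(paths)
y = np.zeros(paths)
exploded = np.zeros(paths, dtype=bool)
for _ in range(steps):
    eta = sigma * np.maximum(r0 + x, 0.0) ** beta
    x = x + (y - kappa * x) * dt + eta * np.sqrt(dt) * rng.standard_normal(paths)
    y = y + (eta ** 2 - 2.0 * kappa * y) * dt
    exploded |= np.abs(x) > cap            # crude proxy for finite-time explosion
    x = np.where(exploded, 0.0, x)         # freeze paths flagged as exploded
    y = np.where(exploded, 0.0, y)

print("fraction of paths flagged as exploding before T:", exploded.mean())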
Xiaofeng Liu, Site Li, Yubin Ge (2021)
Unsupervised domain adaptation (UDA) has been widely adopted to alleviate the data scalability issue, while existing works usually focus on classifying independent discrete labels. However, in many tasks (e.g., medical diagnosis), the labels are discrete and successively distributed. UDA for ordinal classification therefore requires inducing a non-trivial ordinal distribution prior on the latent space. To this end, a partially ordered set (poset) is defined to constrain the latent vectors. Instead of the typical i.i.d. Gaussian latent prior, in this work a recursively conditional Gaussian (RCG) set is adopted for ordered constraint modeling, which admits a tractable joint distribution prior. Furthermore, we are able to control the density of content vectors that violate the poset constraints with a simple three-sigma rule. We explicitly disentangle the cross-domain images into a shared, ordinal-prior-induced ordinal content space and two separate source/target ordinal-unrelated spaces, and self-training operates exclusively on the shared space for ordinal-aware domain alignment. Extensive experiments on UDA medical diagnosis and facial age estimation demonstrate its effectiveness.
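A minimal sketch of a recursively conditional Gaussian construction for ordered latent vectors (the recursion, the 3*sigma margin, and the dimensions below are illustrative assumptions, not the paper's exact parameterisation):

import numpy as np

rng = np.random.default_rng(4)

def rcg_sample(num_classes=5, sigma=0.3, dim=16):
    # Recursively conditional Gaussian: each class vector is conditioned on the
    # previous one plus a positive margin of 3*sigma, so a per-coordinate
    # ordering violation occurs with probability Phi(-3), about 0.13%
    # (a "three-sigma rule" controlling the density of poset violations).
    z = [rng.normal(0.0, sigma, size=dim)]
    for _ in range(num_classes - 1):
        z.append(z[-1] + 3.0 * sigma + rng.normal(0.0, sigma, size=dim))
    return np.stack(z)

z = rcg_sample()
print(z.shape)                              # (num_classes, dim), ordered latent vectors
print(np.mean(np.diff(z, axis=0) < 0.0))    # empirical rate of ordering violations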