
Bounds on Negative Binomial Approximation to Call Function

Posted by: Amit Kumar
Publication date: 2021
Language: English
Author: Amit N. Kumar





In this paper, we develop Stein's method for the negative binomial distribution using the call function defined by $f_z(k)=(k-z)^+=\max\{k-z,0\}$, for $k \ge 0$ and $z \ge 0$. We obtain error bounds between $\mathbb{E}[f_z(\text{N}_{r,p})]$ and $\mathbb{E}[f_z(V)]$, where $\text{N}_{r,p}$ follows the negative binomial distribution and $V$ is a sum of locally dependent random variables, under certain conditions on the moments. We demonstrate our results through an interesting application, namely collateralized debt obligations (CDOs), and compare the bounds with existing bounds.
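As a rough, hypothetical illustration of the two quantities compared in the abstract (and not of the paper's Stein-method bounds), the following Python sketch evaluates $\mathbb{E}[f_z(\text{N}_{r,p})]$ directly from the negative binomial pmf and estimates $\mathbb{E}[f_z(V)]$ by Monte Carlo for a toy portfolio of default indicators; the parameters are invented, and the indicators are taken to be independent rather than locally dependent.

    import numpy as np
    from scipy.stats import nbinom

    # Call function f_z(k) = max(k - z, 0).
    def call_fn(k, z):
        return np.maximum(k - z, 0.0)

    # E[f_z(N_{r,p})] for a negative binomial N_{r,p} (scipy convention:
    # number of failures before the r-th success), truncated at a large cutoff.
    def call_expectation_nbinom(r, p, z, cutoff=10_000):
        k = np.arange(cutoff)
        return np.sum(call_fn(k, z) * nbinom.pmf(k, r, p))

    # Monte Carlo estimate of E[f_z(V)] for V a sum of Bernoulli indicators,
    # e.g. the number of defaults in a toy CDO-style portfolio.
    def call_expectation_mc(indicator_probs, z, n_sims=200_000, seed=0):
        rng = np.random.default_rng(seed)
        defaults = rng.random((n_sims, len(indicator_probs))) < indicator_probs
        v = defaults.sum(axis=1)
        return call_fn(v, z).mean()

    if __name__ == "__main__":
        probs = np.full(50, 0.05)          # hypothetical default probabilities
        z = 2.0
        r, p = 3, 0.6                      # hypothetical NB parameters
        print("E[f_z(N_{r,p})] ~", call_expectation_nbinom(r, p, z))
        print("E[f_z(V)]       ~", call_expectation_mc(probs, z))

Roughly speaking, in the CDO application $V$ counts defaults in a portfolio and tranche payoffs are built from call functions of the portfolio loss, which is why $f_z$ is the natural test function in this setting.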



Read also

We explore asymptotically optimal bounds for deviations of Bernoulli convolutions from the Poisson limit in terms of the Shannon relative entropy and the Pearson $\chi^2$-distance. The results are based on proper non-uniform estimates for densities. They deal with models of non-homogeneous, non-degenerate Bernoulli distributions.
We explore asymptotically optimal bounds for deviations of distributions of independent Bernoulli random variables from the Poisson limit in terms of the Shannon relative entropy and Rényi/Tsallis relative distances (including Pearson's $\chi^2$). This part generalizes the results obtained in Part I and removes any constraints on the parameters of the Bernoulli distributions.
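As a small numerical companion to the two abstracts above (an illustration only, not their asymptotic bounds), the sketch below computes the exact distribution of a sum of independent Bernoulli variables by convolution and evaluates its Shannon relative entropy and Pearson $\chi^2$-distance to the Poisson law with the same mean; the success probabilities are invented.

    import numpy as np
    from scipy.stats import poisson

    # Exact pmf of S = X_1 + ... + X_n with independent Bernoulli(p_i) summands,
    # computed by repeated convolution.
    def bernoulli_convolution_pmf(ps):
        pmf = np.array([1.0])
        for p in ps:
            pmf = np.convolve(pmf, [1 - p, p])
        return pmf

    def kl_and_chi2_to_poisson(ps):
        pmf = bernoulli_convolution_pmf(ps)
        k = np.arange(len(pmf))
        q = poisson.pmf(k, mu=sum(ps))      # Poisson with matching mean
        mask = pmf > 0
        kl = np.sum(pmf[mask] * np.log(pmf[mask] / q[mask]))
        # Pearson chi^2, restricted to the support of S (the omitted Poisson
        # tail terms are negligible for these parameters).
        chi2 = np.sum((pmf - q) ** 2 / q)
        return kl, chi2

    if __name__ == "__main__":
        ps = np.full(100, 0.02)             # hypothetical small success probabilities
        kl, chi2 = kl_and_chi2_to_poisson(ps)
        print(f"KL divergence: {kl:.6f}, Pearson chi^2: {chi2:.6f}")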
Estimating the parameter of a Bernoulli process arises in many applications, including photon-efficient active imaging where each illumination period is regarded as a single Bernoulli trial. Motivated by acquisition efficiency when multiple Bernoulli processes are of interest, we formulate the allocation of trials under a constraint on the mean as an optimal resource allocation problem. An oracle-aided trial allocation demonstrates that there can be a significant advantage from varying the allocation for different processes and inspires a simple trial allocation gain quantity. Motivated by realizing this gain without an oracle, we present a trellis-based framework for representing and optimizing stopping rules. Considering the convenient case of Beta priors, three implementable stopping rules with similar performances are explored, and the simplest of these is shown to asymptotically achieve the oracle-aided trial allocation. These approaches are further extended to estimating functions of a Bernoulli parameter. In simulations inspired by realistic active imaging scenarios, we demonstrate significant mean-squared error improvements: up to 4.36 dB for the estimation of p and up to 1.80 dB for the estimation of log p.
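To make the allocation gain mentioned above concrete (a minimal, hypothetical sketch, not the paper's trellis-based stopping rules), the code below compares equal trial allocation with one plausible oracle allocation, proportional to sqrt(p(1-p)), which minimizes the summed variance of the MLE under a fixed total number of trials; the Bernoulli parameters and budget are invented and the mean-count constraint of the paper is not modeled.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical Bernoulli parameters for several processes and a total trial budget.
    p_true = np.array([0.02, 0.10, 0.30, 0.50])
    total_trials = 4000

    def mse_for_allocation(n_per_process, n_sims=5000):
        """Summed mean-squared error of the MLE p_hat = k / n under a given allocation."""
        errs = []
        for p, n in zip(p_true, n_per_process):
            k = rng.binomial(n, p, size=n_sims)
            errs.append(np.mean((k / n - p) ** 2))
        return float(np.sum(errs))

    # Equal allocation across processes.
    equal = np.full(len(p_true), total_trials // len(p_true))

    # One plausible oracle: allocate trials proportional to sqrt(p(1-p)),
    # which minimizes the summed MLE variance under a fixed total budget.
    w = np.sqrt(p_true * (1 - p_true))
    oracle = np.maximum(1, np.round(total_trials * w / w.sum())).astype(int)

    print("equal allocation MSE :", mse_for_allocation(equal))
    print("oracle allocation MSE:", mse_for_allocation(oracle))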
We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(\mathbf{A})\mathbf{b}$ when $\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive \emph{a priori} and \emph{a posteriori} error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $\mathbf{b}^{\mathsf{H}} f(\mathbf{A}) \mathbf{b}$, and demonstrate the effectiveness of our bounds with numerical experiments.
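For readers unfamiliar with the algorithm being analyzed, here is a minimal NumPy sketch of the basic Lanczos-FA iteration: run $k$ Lanczos steps on $\mathbf{A}$ with starting vector $\mathbf{b}$, then return $\|\mathbf{b}\|\, Q_k f(T_k) e_1$. This illustrates only the iteration itself, not the error bounds derived in the paper; the test matrix and the choice $f(x)=\sqrt{x}$ are arbitrary.

    import numpy as np

    def lanczos_fa(A, b, f, k):
        """Approximate f(A) @ b with k Lanczos iterations (A Hermitian).

        Returns ||b|| * Q_k @ f(T_k) @ e_1, where T_k is the Lanczos tridiagonal
        matrix and Q_k the orthonormal Krylov basis. Full reorthogonalization is
        used for numerical stability in this small sketch.
        """
        n = len(b)
        Q = np.zeros((n, k))
        alpha = np.zeros(k)
        beta = np.zeros(k)
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(k):
            w = A @ Q[:, j]
            alpha[j] = np.real(Q[:, j].conj() @ w)
            w -= alpha[j] * Q[:, j]
            if j > 0:
                w -= beta[j - 1] * Q[:, j - 1]
            w -= Q[:, : j + 1] @ (Q[:, : j + 1].conj().T @ w)   # reorthogonalize
            beta[j] = np.linalg.norm(w)
            if j + 1 < k:
                Q[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta[: k - 1], 1) + np.diag(beta[: k - 1], -1)
        evals, evecs = np.linalg.eigh(T)            # f(T_k) e_1 via eigendecomposition
        fT_e1 = evecs @ (f(evals) * evecs[0, :])
        return np.linalg.norm(b) * (Q @ fT_e1)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        M = rng.standard_normal((200, 200))
        A = M + M.T + 200 * np.eye(200)             # symmetric positive definite test matrix
        b = rng.standard_normal(200)
        approx = lanczos_fa(A, b, np.sqrt, k=30)
        evals, evecs = np.linalg.eigh(A)
        exact = evecs @ (np.sqrt(evals) * (evecs.T @ b))
        print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))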
Benford's law states that for many random variables X > 0, the leading digit D = D(X) approximately satisfies P(D = d) = log_{10}(1 + 1/d) for d = 1,2,...,9. This phenomenon follows from another, maybe more intuitive fact, applied to Y := log_{10}(X): for many real random variables Y, the remainder U := Y - floor(Y) is approximately uniformly distributed on [0,1). The present paper provides new explicit bounds for the latter approximation in terms of the total variation of the density of Y or some derivative of it. These bounds are an interesting alternative to traditional Fourier methods, which yield mostly qualitative results. As a by-product we obtain explicit bounds for the approximation error in Benford's law.
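A quick empirical check of the two statements above (an illustration with an arbitrarily chosen lognormal X, not the paper's variation-based bounds): when Y = log_{10}(X) is spread across many integers, the fractional part U = Y - floor(Y) is nearly uniform and the leading-digit frequencies match log_{10}(1 + 1/d).

    import numpy as np

    rng = np.random.default_rng(0)

    # X: a positive random variable with a wide spread on the log scale
    # (a hypothetical lognormal choice; the wider the spread of Y = log10(X),
    # the closer U is to uniform and the better Benford's law holds).
    x = rng.lognormal(mean=0.0, sigma=5.0, size=1_000_000)

    y = np.log10(x)
    u = y - np.floor(y)                       # fractional part of log10(X)
    leading_digit = np.floor(10 ** u).astype(int)

    # Leading-digit frequencies versus Benford's prediction.
    for d in range(1, 10):
        empirical = np.mean(leading_digit == d)
        benford = np.log10(1 + 1 / d)
        print(f"d={d}: empirical {empirical:.4f}  Benford {benford:.4f}")

    # Kolmogorov-Smirnov-style distance of U from the uniform distribution on [0,1).
    u_sorted = np.sort(u)
    ks = np.max(np.abs(u_sorted - np.arange(1, u.size + 1) / u.size))
    print(f"sup-distance of U from uniform: {ks:.4f}")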