
Explicit bounds for the approximation error in Benford's law

Added by Lutz Duembgen
Publication date: 2008
Language: English





Benford's law states that for many positive random variables X the leading digit D = D(X) approximately satisfies P(D = d) = log_{10}(1 + 1/d) for d = 1, 2, ..., 9. This phenomenon follows from another, perhaps more intuitive fact, applied to Y := log_{10}(X): for many real random variables Y, the remainder U := Y - floor(Y) is approximately uniformly distributed on [0, 1). The present paper provides new explicit bounds for the latter approximation in terms of the total variation of the density of Y or of some derivative thereof. These bounds are an interesting alternative to traditional Fourier methods, which yield mostly qualitative results. As a by-product, we obtain explicit bounds for the approximation error in Benford's law.
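For illustration (a minimal Python sketch of ours, not part of the paper): for a positive random variable spread over many decades (a lognormal with large sigma, an illustrative choice), log_{10}(X) mod 1 is close to uniform, and the leading-digit frequencies come out close to log_{10}(1 + 1/d):

import numpy as np

rng = np.random.default_rng(0)
# X > 0 spread over many decades, so log10(X) mod 1 is nearly uniform
x = rng.lognormal(mean=0.0, sigma=5.0, size=100_000)

mantissa = 10 ** (np.log10(x) % 1.0)   # decimal mantissa, in [1, 10)
digits = mantissa.astype(int)          # leading digit, 1..9

empirical = np.bincount(digits, minlength=10)[1:10] / len(x)
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
for d in range(1, 10):
    print(f"d={d}: empirical {empirical[d-1]:.4f}  Benford {benford[d-1]:.4f}")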



Related research

Let $q\geq 2$ be a positive integer, $B$ be a fractional Brownian motion with Hurst index $H\in(0,1)$, $Z$ be a Hermite random variable of index $q$, and $H_q$ denote the Hermite polynomial of degree $q$. For any $n\geq 1$, set $V_n=\sum_{k=0}^{n-1} H_q(B_{k+1}-B_k)$. The aim of the current paper is to derive, in the case when the Hurst index satisfies $H>1-1/(2q)$, an upper bound for the total variation distance between the laws $\mathscr{L}(Z_n)$ and $\mathscr{L}(Z)$, where $Z_n$ stands for the correct renormalization of $V_n$, which converges in distribution towards $Z$. Our results should be compared with those obtained recently by Nourdin and Peccati (2007) in the case $H<1-1/(2q)$, corresponding to the situation where one has normal approximation.
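As a rough illustration (our own sketch, not code from that paper), the increments $B_{k+1}-B_k$ are fractional Gaussian noise and can be sampled exactly via a Cholesky factorization of their covariance; $V_n$ is then evaluated with the probabilists' Hermite polynomials. The renormalization defining $Z_n$ is omitted here:

import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.linalg import toeplitz, cholesky

def fgn(n, H, rng):
    """Exact sample of n fractional-Gaussian-noise increments (unit variance)."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H)
                   - 2 * np.abs(k) ** (2 * H))
    L = cholesky(toeplitz(gamma), lower=True)
    return L @ rng.standard_normal(n)

def hermite_variation(increments, q):
    coeffs = np.zeros(q + 1)
    coeffs[q] = 1.0                        # select He_q, degree-q Hermite
    return hermeval(increments, coeffs).sum()

rng = np.random.default_rng(1)
n, H, q = 2000, 0.9, 2                     # H > 1 - 1/(2q) = 0.75: Hermite regime
V_n = hermite_variation(fgn(n, H, rng), q)
print(V_n)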
Long birth time series for Romania are investigated from the point of view of Benford's law, distinguishing between families with a religious (Orthodox and non-Orthodox) affiliation. The data extend from Jan. 01, 1905 till Dec. 31, 2001, i.e. over 97 years or 35 429 days. The results point to a drastic breakdown of Benford's law. An interpretation is proposed, based on statistical aspects due to population sizes rather than on the human-thought constraints to which a breakdown of the law is usually attributed. The breakdown of Benford's law here clearly points to natural causes.
We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(\mathbf{A})\mathbf{b}$ when $\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive a priori and a posteriori error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite-precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $\mathbf{b}^{\mathsf{H}} f(\mathbf{A}) \mathbf{b}$, and demonstrate the effectiveness of our bounds with numerical experiments.
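To make the setup concrete, here is a hypothetical numpy sketch of plain Lanczos-FA (not the authors' code, and without their error bounds): run $k$ steps of Lanczos on $\mathbf{A}$ with starting vector $\mathbf{b}$, then approximate $f(\mathbf{A})\mathbf{b} \approx \|\mathbf{b}\| \, \mathbf{Q}_k f(\mathbf{T}_k) \mathbf{e}_1$:

import numpy as np

def lanczos_fa(A, b, f, k):
    """k-step Lanczos approximation to f(A) @ b for a symmetric/Hermitian A."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, V = np.linalg.eigh(T)            # f(T) via eigendecomposition of T
    fT_e1 = V @ (f(evals) * V[0, :])        # = f(T) @ e_1
    return np.linalg.norm(b) * (Q @ fT_e1)

rng = np.random.default_rng(2)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 20.0                        # a random symmetric test matrix
b = rng.standard_normal(200)
approx = lanczos_fa(A, b, np.exp, k=30)     # approximates expm(A) @ b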
Tomoaki Okayama, 2016
The Sinc approximation is a function approximation formula that attains exponential convergence for rapidly decaying functions defined on the whole real axis. Even for other functions, the Sinc approximation works accurately when combined with a proper variable transformation. The convergence rate has been analyzed for typical cases including finite, semi-infinite, and infinite intervals. Recently, for verified numerical computations, a more explicit, computable error bound has been given in the case of a finite interval. In this paper, such explicit error bounds are derived for other cases.
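A minimal sketch (illustrative, not from the paper) of the basic Sinc approximation on the real axis, f(x) ≈ sum_{k=-N}^{N} f(kh) sinc((x - kh)/h), applied to the rapidly decaying f(x) = exp(-x^2); the mesh size h ~ pi/sqrt(N) is a standard choice for this type of decay:

import numpy as np

def sinc_approx(f, h, N, x):
    k = np.arange(-N, N + 1)
    # np.sinc is the normalized sinc: sinc(t) = sin(pi t) / (pi t)
    return np.sum(f(k * h) * np.sinc((x[:, None] - k * h) / h), axis=1)

f = lambda x: np.exp(-x ** 2)
x = np.linspace(-3, 3, 201)
for N in (4, 8, 16):
    h = np.pi / np.sqrt(N)                 # mesh choice for Gaussian decay
    err = np.max(np.abs(f(x) - sinc_approx(f, h, N, x)))
    print(N, err)                          # error decays roughly like exp(-c sqrt(N))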
This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights. These methods, in which only the last layer of weights and a few hyperparameters are optimized, have been successfully applied in a wide range of static and dynamic learning problems. Despite the popularity of this approach in empirical tasks, important theoretical questions regarding the relation between the unknown function, the weight distribution, and the approximation rate have remained open. In this work it is proved that, as long as the unknown function, functional, or dynamical system is sufficiently regular, it is possible to draw the internal weights of the random (recurrent) neural network from a generic distribution (not depending on the unknown object) and quantify the error in terms of the number of neurons and the hyperparameters. In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well and thus provides the first mathematical explanation for their empirically observed success at learning dynamical systems.
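The following hypothetical Python sketch (ours, not the paper's) shows the static version of this approach in the style of an extreme learning machine: the internal weights are drawn from a generic distribution and never trained, and only the linear readout is fit by least squares:

import numpy as np

rng = np.random.default_rng(3)
n_samples, n_neurons = 500, 200

# Unknown target function to learn (illustrative choice).
X = rng.uniform(-1, 1, size=(n_samples, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 0] ** 2

# Internal weights drawn at random and left untrained.
W = rng.standard_normal((1, n_neurons))
b = rng.uniform(-1, 1, n_neurons)
features = np.tanh(X @ W + b)              # hidden-layer outputs

# Only the last layer is optimized: linear least squares on the features.
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
y_hat = features @ coef
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))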