
First digit law from Laplace transform

Posted by Bo-Qiang Ma
Publication date: 2019
Research field: Mathematical Statistics
Paper language: English





The occurrence of the digits 1 through 9 as the leftmost nonzero digit of numbers from real-world sources is distributed unevenly according to an empirical law, known as Benford's law or the first digit law. It remains obscure why a variety of data sets generated by quite different dynamics obey this particular law. We study Benford's law through the application of the Laplace transform, and find that the logarithmic Laplace spectrum of the digital indicator function can be taken as approximately constant. This particular constant, being exactly the Benford term, explains the prevalence of Benford's law. Slight variation from the Benford term leads to deviations from Benford's law for distributions that oscillate violently in the inverse Laplace space. We prove that the whole family of completely monotonic distributions satisfies Benford's law within a small bound. Our study suggests that Benford's law originates from the way we write numbers, and thus should be taken as basic mathematical knowledge.
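Benford's prediction for the leading digit $d$ is $\log_{10}(1 + 1/d)$, the Benford term mentioned above. As a quick numerical illustration (a minimal sketch, not code from the paper), the following Python snippet draws samples from a broad log-normal distribution, which spreads over many orders of magnitude, and compares the observed leading-digit frequencies with the Benford prediction:

    import math
    import random

    def benford_expected(d):
        """Benford's law: P(first digit = d) = log10(1 + 1/d)."""
        return math.log10(1 + 1 / d)

    def first_digit(x):
        """Leftmost nonzero digit of a positive number."""
        while x < 1:
            x *= 10
        while x >= 10:
            x /= 10
        return int(x)

    # A log-normal with large sigma spans many orders of magnitude
    # and is known to follow Benford's law closely.
    random.seed(0)
    samples = [random.lognormvariate(0, 3) for _ in range(100_000)]

    counts = {d: 0 for d in range(1, 10)}
    for x in samples:
        counts[first_digit(x)] += 1

    for d in range(1, 10):
        observed = counts[d] / len(samples)
        print(f"d={d}: observed {observed:.4f}, Benford {benford_expected(d):.4f}")

Distributions concentrated on a narrow range (for example, a normal with small variance) will show visible deviations, consistent with the paper's remark about distributions that oscillate violently in the inverse Laplace space.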




Read also

We present an active-learning strategy for undergraduates that applies Bayesian analysis to candy-covered chocolate $\text{m\&ms}^{\circledR}$. The exercise is best suited for small class sizes and tutorial settings, after students have been introduced to the concepts of Bayesian statistics. The exercise takes advantage of the non-uniform distribution of $\text{m\&ms}^{\circledR}$ colours, and the difference in distributions made at two different factories. In this paper, we provide the intended learning outcomes, lesson plan and step-by-step guide for instruction, and open-source teaching materials. We also suggest an extension to the exercise for the graduate level, which incorporates hierarchical Bayesian analysis.
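To give a flavour of the exercise (a minimal sketch with illustrative colour proportions, not the actual factory data used in the lesson plan), one can infer which factory a bag came from by applying Bayes' theorem to the colours drawn:

    # Hypothetical colour proportions for two factories (illustrative
    # numbers only); a uniform prior over factories is assumed.
    colours = ["blue", "orange", "green", "yellow", "red", "brown"]
    factory_probs = {
        "factory_A": [0.207, 0.205, 0.198, 0.135, 0.131, 0.124],
        "factory_B": [0.250, 0.250, 0.125, 0.125, 0.125, 0.125],
    }

    def posterior(observed):
        """Posterior P(factory | observed colours), uniform prior."""
        prior = {f: 1 / len(factory_probs) for f in factory_probs}
        post = {}
        for factory, probs in factory_probs.items():
            likelihood = 1.0
            for c in observed:
                likelihood *= probs[colours.index(c)]
            post[factory] = prior[factory] * likelihood
        total = sum(post.values())
        return {f: p / total for f, p in post.items()}

    bag = ["blue", "blue", "orange", "green", "blue", "yellow"]
    print(posterior(bag))

Each additional candy sharpens the posterior, which is what makes the exercise well suited to a hands-on tutorial setting.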
A new approach to $L_2$-consistent estimation of a general density functional using $k$-nearest neighbor distances is proposed, where the functional under consideration is the expectation of some function $f$ of the densities at each point. The estimator is designed to be asymptotically unbiased, using the convergence of the normalized volume of a $k$-nearest neighbor ball to a Gamma distribution in the large-sample limit, and naturally involves the inverse Laplace transform of a scaled version of the function $f$. Some instantiations of the proposed estimator recover existing $k$-nearest neighbor based estimators of Shannon and Rényi entropies and Kullback–Leibler and Rényi divergences, and yield new consistent estimators for many other functionals, such as logarithmic entropies and divergences. The $L_2$-consistency of the proposed estimator is established for a broad class of densities for general functionals, and the convergence rate in mean squared error is established as a function of the sample size for smooth, bounded densities.
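One well-known special case recovered by this framework is the Kozachenko–Leonenko $k$-nearest neighbor estimator of Shannon entropy. A minimal sketch of that estimator in its standard form (not code from the paper):

    import numpy as np
    from scipy.special import digamma, gamma

    def knn_entropy(x, k=3):
        """Kozachenko-Leonenko kNN estimator of differential Shannon
        entropy (in nats) for samples x of shape (n, d)."""
        n, d = x.shape
        # Distance from each point to its k-th nearest neighbour,
        # excluding the point itself.
        dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)
        r_k = np.sort(dists, axis=1)[:, k - 1]
        # Volume of the d-dimensional unit ball.
        c_d = np.pi ** (d / 2) / gamma(d / 2 + 1)
        return digamma(n) - digamma(k) + np.log(c_d) + d * np.mean(np.log(r_k))

    # Sanity check: a 1D standard normal has entropy
    # 0.5 * log(2 * pi * e) ~ 1.419 nats.
    rng = np.random.default_rng(0)
    print(knn_entropy(rng.standard_normal((2000, 1))))

The proposed framework generalizes this construction: choosing other functions $f$ and applying the inverse Laplace transform yields estimators for the other functionals listed above.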
In this paper we prove that the Hankel multipliers of Laplace transform type on $(0,1)^n$ are of weak type $(1,1)$. We also analyze $L^p$-boundedness properties for the imaginary powers of the Bessel operator on $(0,1)^n$.
This paper studies the related problems of prediction, covariance estimation, and principal component analysis for the spiked covariance model with heteroscedastic noise. We consider an estimator of the principal components based on whitening the noise, and we derive optimal singular value and eigenvalue shrinkers for use with these estimated principal components. Underlying these methods are new asymptotic results for the high-dimensional spiked model with heteroscedastic noise, and consistent estimators for the relevant population parameters. We extend previous analysis of out-of-sample prediction to the setting of predictors with whitening. We demonstrate certain advantages of noise whitening. Specifically, we show that in a certain asymptotic regime, optimal singular value shrinkage with whitening converges to the best linear predictor, whereas without whitening it converges to a suboptimal linear predictor. We prove that for generic signals, whitening improves estimation of the principal components and increases a natural signal-to-noise ratio of the observations. We also show that for rank-one signals, our estimated principal components achieve the asymptotic minimax rate.
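The overall whiten–shrink–unwhiten pipeline can be sketched as follows (a schematic using simple hard thresholding at the noise bulk edge rather than the paper's optimal shrinkers, and assuming the per-coordinate noise variances are known):

    import numpy as np

    def whiten_shrink(Y, noise_std):
        """Schematic whiten -> SVD -> threshold -> unwhiten pipeline.
        Y: (n, p) data matrix; noise_std: (p,) per-coordinate noise
        standard deviations (assumed known here for simplicity)."""
        n, p = Y.shape
        W = 1.0 / noise_std                    # whitening weights
        Yw = Y * W                             # whitened data: unit-variance noise
        U, s, Vt = np.linalg.svd(Yw, full_matrices=False)
        edge = np.sqrt(n) + np.sqrt(p)         # noise bulk edge after whitening
        s_kept = np.where(s > edge, s, 0.0)    # keep only signal spikes
        Xw_hat = (U * s_kept) @ Vt
        return Xw_hat / W                      # unwhiten the estimate

    # Toy example: rank-one signal plus heteroscedastic noise.
    rng = np.random.default_rng(1)
    n, p = 400, 200
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    v = rng.standard_normal(p)
    v /= np.linalg.norm(v)
    noise_std = rng.uniform(0.5, 2.0, size=p)
    Y = 60.0 * np.outer(u, v) + rng.standard_normal((n, p)) * noise_std
    X_hat = whiten_shrink(Y, noise_std)

Whitening makes the noise bulk isotropic, so a single threshold separates signal spikes from noise singular values; the paper's optimal shrinkers refine the final step by shrinking, rather than merely keeping, the retained singular values.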
Yifan Yang, Xiaoyu Zhou (2021)
We introduce a stochastic version of Taylor's expansion and the Mean Value Theorem, originally proved by Aliprantis and Border (1999), and extend them to the multivariate case. In the univariate case, the theorem asserts that if a real-valued function $f$ has a continuous derivative $f'$ on a closed interval $I$ and $X$ is a random variable on a probability space $(\Omega, \mathcal{F}, P)$, then for fixed $a \in I$ there exists a \textit{random variable} $\xi$ such that $\xi(\omega) \in I$ for every $\omega \in \Omega$ and $f(X(\omega)) = f(a) + f'(\xi(\omega))(X(\omega) - a)$. The proof is not trivial. By applying these results in statistics, one may simplify some details in the proofs of the Delta method or of the asymptotic properties of a maximum likelihood estimator. In particular, when mentioning that there exists $\theta^*$ between $\hat{\theta}$ (a maximum likelihood estimator) and $\theta_0$ (the true value), the stochastic version of the Mean Value Theorem guarantees that $\theta^*$ is a random variable (or a random vector).
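A concrete univariate illustration (a worked example of ours, not taken from the paper): for $f(x) = x^2$ the random Mean Value Theorem point has a closed form, which makes its measurability evident:

    import random

    # For f(x) = x**2, f'(x) = 2x, so f(X) - f(a) = f'(xi) * (X - a)
    # solves to xi = (X + a) / 2, a measurable function of X and
    # hence itself a random variable.
    random.seed(0)
    a = 1.0
    X = random.gauss(0.0, 1.0)   # one realisation X(omega)
    xi = (X + a) / 2.0           # the corresponding MVT point xi(omega)
    lhs = X**2 - a**2
    rhs = 2.0 * xi * (X - a)
    print(xi, abs(lhs - rhs))    # difference is zero up to rounding

For general $f$ no such closed form exists, which is why the measurability of $\xi$ requires the non-trivial proof the abstract refers to.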
