
Objective Bayesian Analysis for the Lomax Distribution

Added by Ricardo Ehlers
Publication date: 2016
Language: English





In this paper we propose Bayesian inference for the parameters of the Lomax distribution using non-informative priors, namely the Jeffreys prior and the reference prior. We assess Bayesian estimation through a Monte Carlo study with 500 simulated data sets. To evaluate the possible impact of prior specification on estimation, two criteria are considered: the bias and the square root of the mean squared error. The developed procedures are illustrated on a real data set.
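The simulation design lends itself to a compact sketch. The Python code below is illustrative only, not the authors' implementation: a flat prior on the log-parameters stands in for the Jeffreys and reference priors, and a short random-walk Metropolis run approximates each posterior. All parameter values and tuning constants are assumptions chosen for the illustration.

```python
# Sketch of the Monte Carlo design described above: repeatedly simulate
# Lomax data, sample the posterior with random-walk Metropolis, and
# summarise bias and root-MSE of the posterior mean. A flat prior on
# (log alpha, log lambda) stands in for the Jeffreys/reference priors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha_true, lam_true, n, n_rep = 2.0, 3.0, 100, 500  # illustrative values

def log_post(theta, x):
    a, lam = np.exp(theta)                       # positivity via log scale
    return stats.lomax.logpdf(x, a, scale=lam).sum()

def metropolis(x, n_iter=5000, step=0.15):
    theta = np.zeros(2)
    lp = log_post(theta, x)
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop, x)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[i] = theta
    return np.exp(draws[n_iter // 2:])           # discard burn-in

est = np.array([metropolis(stats.lomax.rvs(alpha_true, scale=lam_true,
                                           size=n, random_state=rng)).mean(axis=0)
                for _ in range(n_rep)])
bias = est.mean(axis=0) - np.array([alpha_true, lam_true])
rmse = np.sqrt(((est - [alpha_true, lam_true]) ** 2).mean(axis=0))
print("bias:", bias, "root-MSE:", rmse)
```

The chain length here is deliberately short; a serious study would tune the step size and check convergence before summarising.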



Related Research

The use of entropy-related concepts ranges from physics, such as statistical mechanics, to evolutionary biology. The Shannon entropy is a measure used to quantify the amount of information in a system, and its estimation is usually made under the frequentist approach. In the present paper, we introduce a fully objective Bayesian analysis to obtain the posterior distribution of this measure. Notably, we consider the Gamma distribution, which describes many natural phenomena in physics, engineering, and biology. We reparametrize the model in terms of entropy, and different objective priors are derived, such as the Jeffreys prior, reference prior, and matching priors. Since the obtained priors are improper, we prove that the resulting posterior distributions are proper and their respective posterior means are finite. An intensive simulation study is conducted to select the prior that returns the best results in terms of bias, mean squared error, and coverage probabilities. The proposed approach is illustrated on two datasets: the first concerns the reign periods of the Achaemenid dynasty, and the second describes the time to failure of an electronic component in a sugarcane harvesting machine.
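For reference, the Shannon entropy of a Gamma distribution has a closed form involving the digamma function, and it is this quantity that the reparametrization above targets. A minimal check in Python (illustrative, not the authors' code; the parameter values are arbitrary):

```python
# Closed-form Shannon entropy of a Gamma(shape k, scale s) distribution,
#   H = k + ln s + ln Gamma(k) + (1 - k) * psi(k),
# verified against scipy's built-in entropy.
import numpy as np
from scipy import stats, special

def gamma_entropy(k, s):
    return k + np.log(s) + special.gammaln(k) + (1.0 - k) * special.digamma(k)

k, s = 2.5, 1.8                              # illustrative shape and scale
print(gamma_entropy(k, s))                   # closed form
print(stats.gamma(k, scale=s).entropy())     # scipy check (same value, in nats)
```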
Olha Bodnar, Taras Bodnar (2021)
Objective Bayesian inference procedures are derived for the parameters of the multivariate random effects model generalized to elliptically contoured distributions. The posterior for the overall mean vector and the between-study covariance matrix is deduced by assigning two noninformative priors to the model parameters, namely the Berger and Bernardo reference prior and the Jeffreys prior, whose analytical expressions are obtained under weak distributional assumptions. It is shown that the only condition needed for the posterior to be proper is that the sample size is larger than the dimension of the data-generating model, independently of the class of elliptically contoured distributions used in the definition of the generalized multivariate random effects model. The theoretical findings of the paper are applied to real data consisting of ten studies about the effectiveness of hypertension treatment for reducing blood pressure, where the treatment effects on both the systolic and diastolic blood pressure are investigated.
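A minimal sketch of the Gaussian special case of this model may help fix ideas; the paper itself works with the wider elliptically contoured family and derives the priors analytically. The names and simulated values below are assumptions made for the illustration.

```python
# Gaussian special case of the multivariate random-effects model:
# study estimates y_i ~ N_p(mu, Psi + S_i) with known within-study
# covariances S_i. Illustrative only.
import numpy as np
from scipy import stats

def log_lik(mu, Psi, y, S):
    # y: (n, p) study estimates; S: (n, p, p) within-study covariances
    return sum(stats.multivariate_normal.logpdf(yi, mean=mu, cov=Psi + Si)
               for yi, Si in zip(y, S))

rng = np.random.default_rng(1)
n, p = 10, 2                         # n > p: the paper's propriety condition
mu_true, Psi_true = np.zeros(p), 0.5 * np.eye(p)
S = np.array([np.diag(rng.uniform(0.1, 0.3, p)) for _ in range(n)])
y = np.array([rng.multivariate_normal(mu_true, Psi_true + Si) for Si in S])
print(log_lik(mu_true, Psi_true, y, S))
```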
Andrew Fowlie (2020)
We consider the Jeffreys-Lindley paradox from an objective Bayesian perspective by attempting to find priors representing complete indifference to sample size in the problem. This means that we ensure that the prior for the unknown mean and the prior predictive for the $t$-statistic are independent of the sample size. If successful, this would lead to Bayesian model comparison that was independent of sample size and ameliorate the paradox. Unfortunately, it leads to an improper scale-invariant prior for the unknown mean. We show, however, that a truncated scale-invariant prior delays the dependence on sample size, which could be practically significant. Lastly, we shed light on the paradox by relating it to the fact that the scale-invariant prior is improper.
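The mechanism behind the paradox is easy to reproduce numerically. The sketch below (illustrative values, not taken from the paper) holds the z-statistic fixed, so the p-value stays constant, while the sample size grows under a fixed N(0, τ²) prior on the mean; the Bayes factor for the point null then diverges.

```python
# Numeric illustration of the Jeffreys-Lindley paradox: with the
# z-statistic held fixed, the Bayes factor for H0: mu = 0 against
# H1: mu ~ N(0, tau2) grows without bound as n increases.
import numpy as np

def bf01(z, n, tau2=1.0, sigma2=1.0):
    # z = sqrt(n) * xbar / sigma; marginal of xbar is N(0, sigma2/n)
    # under H0 and N(0, sigma2/n + tau2) under H1.
    r = n * tau2 / sigma2
    return np.sqrt(1.0 + r) * np.exp(-0.5 * z**2 * r / (1.0 + r))

z = 2.5                                   # "significant" at roughly p = 0.012
for n in [10, 100, 10_000, 1_000_000]:
    print(f"n = {n:>9,d}   BF01 = {bf01(z, n):.3g}")
```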
A composite likelihood is a non-genuine likelihood function that allows one to make inference on limited aspects of a model, such as marginal or conditional distributions. Composite likelihoods are not proper likelihoods and therefore need calibration for their use in inference, from both a frequentist and a Bayesian perspective. The maximizer of the composite likelihood can serve as an estimator, and its variance is assessed by means of a suitably defined sandwich matrix. In the Bayesian setting, the composite likelihood can be adjusted by means of magnitude and curvature methods. Magnitude methods imply raising the likelihood to a constant power, while curvature methods imply evaluating the likelihood at a different point by translating, rescaling, and rotating the parameter vector. Some authors argue that curvature methods are more reliable in general, but others have proved that magnitude methods are sufficient to recover, for instance, the null distribution of a test statistic. We propose a simple calibration for the marginal posterior distribution of a scalar parameter of interest which is invariant to monotonic and smooth transformations. This can be enough, for instance, in medical statistics, where a single scalar effect measure is often the target.
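A toy example makes the magnitude adjustment concrete. In the sketch below (an illustration, not the authors' proposed calibration), n equicorrelated unit-variance normal observations are analysed with an independence composite likelihood for the mean; raising it to the power c = H/J, computed from the sandwich matrix, restores the correct posterior spread.

```python
# Magnitude adjustment for an independence composite likelihood:
# n equicorrelated N(mu, 1) observations with correlation rho. The
# independence likelihood overstates the information about mu; the
# tempering constant c = H / J corrects it under a flat prior.
import numpy as np

n, rho = 20, 0.5                          # illustrative values
# Unadjusted: mu | x ~ N(xbar, 1/n) under a flat prior.
sd_unadjusted = np.sqrt(1.0 / n)
# Sandwich ingredients for the composite score u = sum(x_i - mu):
H = n                                     # negative Hessian (sensitivity)
J = n + n * (n - 1) * rho                 # Var(u) under the true correlation
c = H / J                                 # magnitude (tempering) constant
sd_adjusted = np.sqrt(1.0 / (c * n))
sd_true = np.sqrt((1.0 + (n - 1) * rho) / n)    # true sd of xbar
print(sd_unadjusted, sd_adjusted, sd_true)      # adjusted matches the truth
```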
Differential networks (DNs) are important tools for modeling changes in conditional dependencies between multiple samples. A Bayesian approach for estimating DNs from the classical viewpoint is introduced, with a computationally efficient threshold selection for graphical model determination. The algorithm separately estimates the precision matrices of the DN using the Bayesian adaptive graphical lasso procedure. Synthetic experiments illustrate that the Bayesian DN performs exceptionally well in numerical accuracy and graphical structure determination in comparison with state-of-the-art methods. The proposed method is applied to South African COVID-19 data to investigate the change in DN structure between various phases of the pandemic.
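The basic differential-network computation can be sketched as follows. This is illustrative only: the frequentist graphical lasso from scikit-learn stands in for the paper's Bayesian adaptive graphical lasso, and the threshold is chosen by eye rather than by the paper's selection procedure.

```python
# Sketch of the differential-network idea: estimate a sparse precision
# matrix in each sample and inspect where their difference exceeds a
# threshold. GraphicalLasso is a frequentist stand-in here.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
X1 = rng.multivariate_normal(np.zeros(4), np.eye(4), size=200)
cov2 = np.eye(4); cov2[0, 1] = cov2[1, 0] = 0.6   # extra edge in sample 2
X2 = rng.multivariate_normal(np.zeros(4), cov2, size=200)

P1 = GraphicalLasso(alpha=0.05).fit(X1).precision_
P2 = GraphicalLasso(alpha=0.05).fit(X2).precision_
print(np.abs(P1 - P2) > 0.1)   # crude threshold flags the changed edge
```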
