
A New Wald Test for Hypothesis Testing Based on MCMC outputs

Added by Yong Li
Publication date: 2018
Language: English





In this paper, a new and convenient $\chi^2$ Wald test based on Markov chain Monte Carlo (MCMC) outputs is proposed for hypothesis testing. The new statistic can be interpreted as an MCMC version of the Wald test and has several important advantages that make it very convenient in practical applications. First, it is well defined under improper prior distributions and avoids the Jeffreys-Lindley paradox. Second, its asymptotic distribution is shown to be $\chi^2$, so that critical values can be easily obtained from this distribution. Third, its statistical error can be evaluated using the MCMC approach. Fourth, and most importantly, it is computed solely from random samples drawn from the posterior distribution. Hence, it is simply a by-product of the posterior output and very easy to compute. In addition, when prior information is available, a finite-sample theory is derived for the proposed test statistic. Finally, the usefulness of the test is illustrated with several applications to latent variable models widely used in economics and finance.
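Because the statistic described above is built entirely from posterior draws, a rough illustration is easy to give. The following is a minimal sketch, not the paper's exact formulation: it assumes a null hypothesis of the form g(theta) = 0, forms a Wald-type quadratic form from the posterior mean and posterior covariance of g(theta) computed over the MCMC draws, and compares it with a $\chi^2$ critical value. The function names and the toy draws are illustrative placeholders.

```python
# A minimal sketch of a Wald-type test computed from posterior MCMC draws.
# The exact statistic in the paper may differ in details (e.g., centering or
# degrees-of-freedom adjustments); `constraint` and the toy draws are placeholders.
import numpy as np
from scipy import stats

def mcmc_wald_test(draws, constraint, alpha=0.05):
    """Quadratic-form test of H0: g(theta) = 0 from posterior draws.

    draws      : (M, p) array of posterior MCMC samples of theta
    constraint : function g mapping a p-vector to a q-vector
    """
    g = np.apply_along_axis(constraint, 1, draws)   # (M, q) values of g(theta^(m))
    g_bar = g.mean(axis=0)                          # posterior mean of g(theta)
    S = np.atleast_2d(np.cov(g, rowvar=False))      # posterior covariance of g(theta)
    W = float(g_bar @ np.linalg.solve(S, g_bar))    # Wald-type quadratic form
    q = g.shape[1]
    crit = stats.chi2.ppf(1 - alpha, df=q)          # chi^2 critical value
    return W, crit, W > crit

# Hypothetical usage: test H0: theta_1 = 0 using draws of a 3-dimensional parameter.
rng = np.random.default_rng(0)
draws = rng.normal(loc=[0.2, 1.0, -0.5], scale=0.05, size=(5000, 3))
W, crit, reject = mcmc_wald_test(draws, lambda th: np.array([th[0]]))
print(W, crit, reject)
```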



Related research

Yeonwoo Rho, Yun Liu, 2020
This paper proposes a new linearized mixed data sampling (MIDAS) model and develops a framework to infer clusters in a panel regression with mixed frequency data. The linearized MIDAS estimation method is more flexible and substantially simpler to implement than competing approaches. We show that the proposed clustering algorithm successfully recovers true membership in the cross-section, both in theory and in simulations, without requiring prior knowledge of the number of clusters. This methodology is applied to a mixed-frequency Okun's law model for state-level data in the U.S. and uncovers four meaningful clusters based on the dynamic features of state-level labor markets.
This paper reexamines the seminal Lagrange multiplier test for cross-section independence in a large panel model where both the number of cross-sectional units n and the number of time series observations T can be large. The first contribution of the paper is to enlarge the test in two directions: first, a new asymptotic normality result is derived in a simultaneous limiting scheme where the two dimensions (n, T) tend to infinity at comparable rates; second, the result is valid for general error distributions (not necessarily normal). The second contribution of the paper is a new test statistic based on the sum of the fourth powers of the cross-section correlations of the OLS residuals, instead of their squares as used in the Lagrange multiplier statistic. This new test is generally more powerful, and the improvement is particularly visible against alternatives with weak or sparse cross-section dependence. Both a simulation study and a real data analysis are provided to demonstrate the advantages of the enlarged Lagrange multiplier test and the power-enhanced test over existing procedures.
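As a rough illustration of the ingredients of the two statistics above, the sketch below assumes a balanced panel, computes unit-by-unit OLS residuals and their pairwise cross-section correlations, and returns the sums of squared and fourth-power correlations. The setup is assumed for exposition only; the centering and scaling constants that turn these sums into asymptotically normal test statistics are derived in the paper and are not reproduced here.

```python
# A minimal sketch, assuming a balanced panel y[i, t] with regressors X[i, t, k].
import numpy as np

def residual_correlations(y, X):
    """y: (n, T), X: (n, T, k). Returns the (n, n) correlation matrix of OLS residuals."""
    n, T, _ = X.shape
    resid = np.empty((n, T))
    for i in range(n):
        beta, *_ = np.linalg.lstsq(X[i], y[i], rcond=None)   # unit-wise OLS fit
        resid[i] = y[i] - X[i] @ beta
    return np.corrcoef(resid)                                # pairwise rho_ij

def dependence_sums(y, X):
    """Raw ingredients: sum of rho_ij^2 (LM type) and sum of rho_ij^4 (power-enhanced)."""
    R = residual_correlations(y, X)
    iu = np.triu_indices_from(R, k=1)                        # pairs with i < j
    rho = R[iu]
    return (rho ** 2).sum(), (rho ** 4).sum()
```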
We consider a situation where the distribution of a random variable is estimated by the empirical distribution of noisy measurements of that variable. This is common practice in, for example, teacher value-added models and other fixed-effect models for panel data. We use an asymptotic embedding in which the noise shrinks with the sample size to calculate the leading bias in the empirical distribution arising from the presence of noise. The leading bias in the empirical quantile function is obtained in the same way. These calculations are new in the literature, where only results on smooth functionals such as the mean and variance have been derived. Given a closed-form expression for the bias, bias-corrected estimators of the distribution function and quantile function can be constructed. We provide both analytical and jackknife corrections that recenter the limit distribution and yield confidence intervals with correct coverage in large samples. These corrections are non-parametric and easy to implement. Our approach can be connected to corrections for selection bias and shrinkage estimation and should be contrasted with deconvolution. Simulation results confirm the much-improved sampling behavior of the corrected estimators.
We provide a method to determine whether a new recommendation system improves revenue per visit (RPV) compared to the status quo. We do so by splitting RPV into the conversion rate and the average order value (AOV). We use the two-part test suggested by Lachenbruch to determine whether the data-generating process in the new system is different. In cases where this test does not give a definitive answer about the change in RPV, we propose two alternative tests. Both rely on the assumption that non-zero purchase values follow a log-normal distribution, which we validate empirically using data collected at different points in time from Staples.com. On average, our method needs a smaller sample size than other methods. Furthermore, it does not require any subjective outlier removal. Finally, it characterizes the uncertainty around RPV by providing a confidence interval.
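The two-part construction can be sketched as follows, under the assumptions stated in the abstract. The code below is an illustrative Lachenbruch-style two-part test, not the authors' exact procedure: one part compares conversion rates with a two-proportion z statistic, the other compares log order values with a t statistic (using the log-normal assumption), and the two parts are combined into a $\chi^2$ statistic with two degrees of freedom. All function and variable names are placeholders.

```python
# A minimal sketch of a Lachenbruch-style two-part comparison of revenue per visit,
# assuming per-visit purchase amounts (zero for non-converting visits) for each system.
# The follow-up RPV tests and sample-size planning from the paper are not reproduced.
import numpy as np
from scipy import stats

def two_part_test(control, treatment):
    """control, treatment: 1-D arrays of per-visit revenue (0 = no purchase)."""
    c, t = np.asarray(control, float), np.asarray(treatment, float)

    # Part 1: conversion rate (share of non-zero visits), two-proportion z statistic.
    n1, n2 = len(c), len(t)
    x1, x2 = (c > 0).sum(), (t > 0).sum()
    p_pool = (x1 + x2) / (n1 + n2)
    z_conv = (x1 / n1 - x2 / n2) / np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

    # Part 2: average order value, t statistic on log non-zero values (log-normal assumption).
    t_aov, _ = stats.ttest_ind(np.log(c[c > 0]), np.log(t[t > 0]), equal_var=False)

    # Combine: under H0 the two parts are independent, so the sum of squares is ~ chi^2(2).
    chi2_stat = z_conv ** 2 + t_aov ** 2
    p_value = stats.chi2.sf(chi2_stat, df=2)
    return chi2_stat, p_value
```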
Many recent Markov chain Monte Carlo (MCMC) samplers leverage continuous dynamics to define a transition kernel that efficiently explores a target distribution. In tandem, a focus has been on devising scalable variants that subsample the data and use stochastic gradients in place of full-data gradients in the dynamic simulations. However, such stochastic gradient MCMC samplers have lagged behind their full-data counterparts in terms of the complexity of the dynamics considered, since proving convergence in the presence of stochastic gradient noise is non-trivial. Even with simple dynamics, significant physical intuition is often required to modify the dynamical system to account for the stochastic gradient noise. In this paper, we provide a general recipe for constructing MCMC samplers, including stochastic gradient ...
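For concreteness, the sketch below shows stochastic gradient Langevin dynamics (SGLD), one of the simplest members of the stochastic-gradient MCMC family referred to above. It only illustrates the idea of replacing full-data gradients with minibatch gradients; it is not the general recipe developed in the paper. The gradient functions, step size, and data array are user-supplied placeholders.

```python
# A minimal sketch of stochastic gradient Langevin dynamics (SGLD); `grad_log_prior`
# and `grad_log_lik` are placeholders, and `data` is assumed to be a NumPy array.
import numpy as np

def sgld(theta0, data, grad_log_prior, grad_log_lik, step=1e-4,
         n_iter=10_000, batch_size=64, rng=None):
    """Draw approximate posterior samples using minibatch (stochastic) gradients."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array(theta0, dtype=float)
    N = len(data)
    samples = []
    for _ in range(n_iter):
        batch = data[rng.choice(N, size=batch_size, replace=False)]
        # Unbiased estimate of the full-data gradient of the log posterior.
        grad = grad_log_prior(theta) + (N / batch_size) * sum(grad_log_lik(theta, x) for x in batch)
        # Langevin update: half gradient step plus injected Gaussian noise.
        theta = theta + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.array(samples)
```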
