
The likelihood-ratio test for multi-edge network models

Added by Giona Casiraghi
Publication date: 2021
Research language: English





The complexity underlying real-world systems implies that standard statistical hypothesis testing methods may not be adequate for these applications. Specifically, we show that the null distribution of the likelihood-ratio test needs to be modified to accommodate the complexity found in multi-edge network data. When working with independent observations, the p-values of likelihood-ratio tests are approximated using a $\chi^2$ distribution. However, such an approximation should not be used when dealing with multi-edge network data. This type of data is characterized by multiple correlations and competition that make the standard approximation unsuitable. We solve this problem by providing a better approximation of the likelihood-ratio test null distribution through a Beta distribution. Finally, we empirically show that even for a small multi-edge network, the standard $\chi^2$ approximation provides erroneous results, while the proposed Beta approximation yields the correct p-value estimation.
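As a rough illustration of the gap the abstract describes, the sketch below simulates likelihood-ratio statistics under a simple multinomial stand-in for a multi-edge network model and compares the $\chi^2$ p-value with one obtained from a Beta distribution fitted to the simulated null statistics. The multinomial stand-in, the rescaling by a crude upper bound, and the Beta fit are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: compare a chi^2 and a Beta approximation of the
# likelihood-ratio test null distribution. The "multi-edge network" is
# stood in for by a multinomial model over node pairs (an assumption);
# the Beta parametrization below is fitted to simulated null statistics
# and is NOT the paper's exact construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_pairs = 10          # dyads in a small multi-edge network
m_edges = 40          # total number of multi-edges to distribute
p_null = np.full(n_pairs, 1.0 / n_pairs)   # null: uniform edge placement

def lrt_statistic(counts, p0):
    """2 * log-likelihood ratio of the saturated vs. the null multinomial."""
    p_hat = counts / counts.sum()
    mask = counts > 0
    return 2.0 * np.sum(counts[mask] * np.log(p_hat[mask] / p0[mask]))

# Simulate the null distribution of the statistic.
null_stats = np.array([
    lrt_statistic(rng.multinomial(m_edges, p_null), p_null)
    for _ in range(20_000)
])

# Observed network: mildly concentrated edges.
obs_counts = rng.multinomial(m_edges, np.array([0.2, 0.2] + [0.6 / 8] * 8))
t_obs = lrt_statistic(obs_counts, p_null)

df = n_pairs - 1
p_chi2 = stats.chi2.sf(t_obs, df)

# Illustrative Beta approximation: rescale statistics to (0, 1) by a
# crude upper bound and fit Beta parameters to the simulated null sample.
upper = 2.0 * m_edges * np.log(n_pairs)        # statistic when all edges share one dyad
z = np.clip(null_stats / upper, 1e-9, 1 - 1e-9)
a, b, loc, scale = stats.beta.fit(z, floc=0, fscale=1)
p_beta = stats.beta.sf(t_obs / upper, a, b)

p_mc = np.mean(null_stats >= t_obs)            # Monte Carlo reference
print(f"chi2 p = {p_chi2:.4f}  beta p = {p_beta:.4f}  MC p = {p_mc:.4f}")
```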




Read More

Ryan Chen, Javier Cabrera (2020)
This study evaluates the power of the likelihood ratio test for changepoint detection via bootstrap sampling, and proposes a hypothesis test based on bootstrapped confidence interval lengths. Assuming i.i.d. normally distributed errors, the bootstrap method is used to estimate the sampling distribution of the changepoint. Furthermore, the study describes how to construct a data set with no changepoint in order to form the null sampling distribution. With the null sampling distribution and the distribution of the estimated changepoint, critical values and power calculations can be made over the lengths of confidence intervals.
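A minimal sketch of this kind of procedure, assuming a single mean shift with i.i.d. normal errors: the maximal likelihood-ratio statistic over candidate changepoints is calibrated by a parametric bootstrap under a no-changepoint fit, and the changepoint estimate is bootstrapped to obtain a confidence interval length. The specific resampling schemes are assumptions, not necessarily those used in the cited study.

```python
# Minimal sketch of a bootstrap-based likelihood-ratio changepoint test
# for a single shift in mean with i.i.d. normal errors.
import numpy as np

rng = np.random.default_rng(1)

def lrt_changepoint(x):
    """Return (max LRT statistic, argmax changepoint index)."""
    n = len(x)
    rss0 = np.sum((x - x.mean()) ** 2)
    best_stat, best_k = -np.inf, None
    for k in range(2, n - 1):                      # candidate split points
        rss1 = np.sum((x[:k] - x[:k].mean()) ** 2) + np.sum((x[k:] - x[k:].mean()) ** 2)
        stat = n * np.log(rss0 / rss1)             # -2 log LR up to constants
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k

# Data with a mean shift at t = 60.
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(0.8, 1.0, 40)])
t_obs, k_hat = lrt_changepoint(x)

# Null sampling distribution: parametric bootstrap from the
# no-changepoint fit (common mean, pooled variance).
mu0, sd0 = x.mean(), x.std(ddof=1)
null_stats = np.array([
    lrt_changepoint(rng.normal(mu0, sd0, len(x)))[0] for _ in range(500)
])
p_boot = np.mean(null_stats >= t_obs)

# Bootstrap distribution of the changepoint estimate (residual
# resampling around the fitted two-segment means).
fitted = np.where(np.arange(len(x)) < k_hat, x[:k_hat].mean(), x[k_hat:].mean())
resid = x - fitted
k_boot = np.array([
    lrt_changepoint(fitted + rng.choice(resid, len(x), replace=True))[1]
    for _ in range(500)
])
ci = np.percentile(k_boot, [2.5, 97.5])
print(f"LRT = {t_obs:.2f}, bootstrap p = {p_boot:.3f}, "
      f"changepoint = {k_hat}, 95% CI length = {ci[1] - ci[0]:.0f}")
```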
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared $t$-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not need the requirement that the covariance matrices follow a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and readily applicable in practice. Simulation studies and a real data analysis are also carried out to demonstrate the advantages of our likelihood ratio test methods.
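The sketch below illustrates the flavour of such a statistic for the one-sample case: each coordinate contributes $n\log(1 + t_j^2/(n-1))$, the log-transformed squared $t$-statistic, and these contributions are summed. The Monte Carlo calibration is an illustrative stand-in for the paper's asymptotic-normality result, whose centering constants are not reproduced here.

```python
# Minimal sketch of a one-sample test statistic built as a sum of
# log-transformed squared t-statistics.
import numpy as np

rng = np.random.default_rng(2)

def log_t2_statistic(X):
    """Sum over coordinates of n * log(1 + t_j^2 / (n - 1))."""
    n, p = X.shape
    t = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)   # coordinate t-stats
    return np.sum(n * np.log1p(t ** 2 / (n - 1)))

n, p = 50, 200
X = rng.normal(0.15, 1.0, size=(n, p))          # small common mean shift
t_obs = log_t2_statistic(X)

# Null calibration by simulation from a diagonal Gaussian null
# (illustrative; the paper derives an asymptotic normal approximation).
null_stats = np.array([
    log_t2_statistic(rng.normal(0.0, 1.0, size=(n, p))) for _ in range(2000)
])
p_value = np.mean(null_stats >= t_obs)
print(f"statistic = {t_obs:.1f}, Monte Carlo p-value = {p_value:.4f}")
```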
State-space models provide an important body of techniques for analyzing time series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
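A minimal sketch of a Laplace-Gaussian filtering step, assuming a one-dimensional autoregressive state observed through Poisson counts (a neural-decoding-style toy model chosen for illustration): at each time step the filtering posterior is replaced by a Gaussian centred at the mode of the log posterior, found with Newton's method, with variance given by the curvature at the mode. The higher-order terms of the asymptotic expansion discussed in the paper are not included.

```python
# Minimal sketch of a Laplace-Gaussian filter for a 1-D state-space model
# with Poisson observations y_t ~ Poisson(exp(x_t)).
import numpy as np

rng = np.random.default_rng(3)

a, q = 0.95, 0.05          # state transition coefficient and process-noise variance
T = 200

# Simulate states and Poisson counts.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = rng.poisson(np.exp(x))

def laplace_step(y_t, m_pred, v_pred, n_newton=10):
    """Gaussian approximation of p(x_t | y_{1:t}) via Laplace's method."""
    m = m_pred
    for _ in range(n_newton):
        # log posterior: y*x - exp(x) - (x - m_pred)^2 / (2 v_pred) + const
        grad = y_t - np.exp(m) - (m - m_pred) / v_pred
        hess = -np.exp(m) - 1.0 / v_pred
        m -= grad / hess                      # Newton update toward the mode
    return m, -1.0 / hess                     # mode and curvature-based variance

m, v = 0.0, 1.0                                # prior on x_0
means = np.zeros(T)
for t in range(T):
    m_pred, v_pred = a * m, a * a * v + q      # predict
    m, v = laplace_step(y[t], m_pred, v_pred)  # update
    means[t] = m

rmse = np.sqrt(np.mean((means - x) ** 2))
print(f"filtering RMSE of the Laplace-Gaussian posterior means: {rmse:.3f}")
```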
Network analysis needs tools to infer distributions over graphs of arbitrary size from a single graph. Assuming the distribution is generated by a continuous latent space model which obeys certain natural symmetry and smoothness properties, we establish three levels of consistency for non-parametric maximum likelihood inference as the number of nodes grows: (i) the estimated locations of all nodes converge in probability on their true locations; (ii) the distribution over locations in the latent space converges on the true distribution; and (iii) the distribution over graphs of arbitrary size converges.
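To make the objects in this abstract concrete, the sketch below fits a simple continuous latent space model by maximum likelihood: edges are independent Bernoulli variables whose probability decays with latent distance, and node locations are estimated by gradient ascent on the log-likelihood. The specific link function, latent dimension, and optimizer are illustrative choices, not the constructions analysed in the abstract.

```python
# Minimal sketch of maximum-likelihood estimation of node locations in a
# latent space model: P(edge i~j) = sigmoid(alpha - ||z_i - z_j||).
import numpy as np

rng = np.random.default_rng(4)

n, d, alpha = 60, 2, 1.0
Z_true = rng.normal(0.0, 1.0, size=(n, d))

def edge_probs(Z):
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    P = 1.0 / (1.0 + np.exp(-(alpha - D)))
    np.fill_diagonal(P, 0.0)
    return P, D

P_true, _ = edge_probs(Z_true)
A = (rng.random((n, n)) < P_true).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric simple graph

# Gradient ascent on the Bernoulli log-likelihood over latent positions.
Z = rng.normal(0.0, 0.1, size=(n, d))
lr = 0.5 / n
for _ in range(500):
    P, D = edge_probs(Z)
    R = A - P                                       # dL/d(logit) residuals
    np.fill_diagonal(D, np.inf)                     # avoid division by zero
    diff = Z[:, None, :] - Z[None, :, :]            # z_i - z_j
    # chain rule: d(alpha - D_ij)/dz_i = -(z_i - z_j) / D_ij
    grad = -np.sum((R / D)[:, :, None] * diff, axis=1)
    Z += lr * grad

P_hat, _ = edge_probs(Z)
print(f"mean |P_hat - P_true| = {np.abs(P_hat - P_true).mean():.3f}")
```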
The likelihood ratio test (LRT) based on the asymptotic chi-squared distribution of the log likelihood ratio is one of the fundamental tools of statistical inference. A recent universal LRT approach based on sample splitting provides valid hypothesis tests and confidence sets in any setting for which we can compute the split likelihood ratio statistic (or, more generally, an upper bound on the null maximum likelihood). The universal LRT is valid in finite samples and without regularity conditions. This test empowers statisticians to construct tests in settings for which no valid hypothesis test previously existed. For the simple but fundamental case of testing the population mean of d-dimensional Gaussian data, the usual LRT itself applies and thus serves as a perfect test bed to compare against the universal LRT. This work presents the first in-depth exploration of the size, power, and relationships between several universal LRT variants. We show that a repeated subsampling approach is the best choice in terms of size and power. We observe reasonable performance even in a high-dimensional setting, where the expected squared radius of the best universal LRT confidence set is approximately 3/2 times the squared radius of the standard LRT-based set. We illustrate the benefits of the universal LRT through testing a non-convex doughnut-shaped null hypothesis, where a universal inference procedure can have higher power than a standard approach.
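A minimal sketch of the split (universal) likelihood-ratio test for the Gaussian mean with identity covariance: the alternative mean is fitted on one half of the data, the likelihood ratio is evaluated on the other half, and the null is rejected when the ratio exceeds $1/\alpha$, which controls the size via Markov's inequality. A single random split is used here; the subsampling variants compared in the paper average over repeated splits.

```python
# Minimal sketch of the split (universal) LRT for H0: mean = mu0,
# d-dimensional Gaussian data with identity covariance.
import numpy as np

rng = np.random.default_rng(5)

def split_lrt_reject(X, mu0, alpha=0.05):
    """Reject H0 when the split likelihood ratio exceeds 1/alpha."""
    n = len(X)
    idx = rng.permutation(n)
    D0, D1 = X[idx[: n // 2]], X[idx[n // 2:]]
    mu_hat = D1.mean(axis=0)                         # alternative fit on D1
    # log split likelihood ratio evaluated on D0 (identity covariance)
    log_u = 0.5 * (np.sum((D0 - mu0) ** 2) - np.sum((D0 - mu_hat) ** 2))
    return log_u >= np.log(1.0 / alpha)

d, n = 10, 200
mu0 = np.zeros(d)

# Monte Carlo estimates of size (H0 true) and power (mean shifted by 0.3).
reject_null = np.mean([
    split_lrt_reject(rng.normal(0.0, 1.0, size=(n, d)), mu0) for _ in range(1000)
])
reject_alt = np.mean([
    split_lrt_reject(rng.normal(0.3, 1.0, size=(n, d)), mu0) for _ in range(1000)
])
print(f"rejection rate under H0: {reject_null:.3f}, under the shift: {reject_alt:.3f}")
```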