
Measuring the likelihood of models for network evolution

Published by Richard Clegg
Publication date: 2013
Language: English





Many researchers have hypothesised models which explain the evolution of the topology of a target network. The framework described in this paper gives the likelihood that the target network arose from the hypothesised model. This allows rival hypothesised models to be compared for their ability to explain the target network. A null model (of random evolution) is proposed as a baseline for comparison. The framework also considers models made from linear combinations of model components. A method is given for the automatic optimisation of component weights. The framework is tested on simulated networks with known parameters and also on real data.
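To make the framework concrete, here is a minimal sketch of the core computation: at each step the hypothesised model assigns a probability to every candidate edge, the log-probability of the edge that actually appeared is accumulated, and rival models (including the uniform-random null) are compared by total log-likelihood. The function names and the toy edge sequence below are illustrative assumptions, not the paper's code.

import math
import networkx as nx

def edge_log_likelihood(graph, new_edge, prob_fn):
    # Log-probability the model assigns to choosing `new_edge`
    # among all candidate (currently absent) edges at this step.
    candidates = [(u, v) for u in graph for v in graph
                  if u < v and not graph.has_edge(u, v)]
    weights = {e: prob_fn(graph, e) for e in candidates}
    total = sum(weights.values())
    return math.log(weights[new_edge] / total)

def preferential_attachment_prob(graph, edge):
    # Hypothetical model component: weight proportional to the product
    # of endpoint degrees (+1 so isolated nodes stay reachable).
    u, v = edge
    return (graph.degree(u) + 1) * (graph.degree(v) + 1)

def null_prob(graph, edge):
    # Null model: every candidate edge is equally likely.
    return 1.0

def sequence_log_likelihood(edges, prob_fn, n_nodes):
    g = nx.empty_graph(n_nodes)
    ll = 0.0
    for e in edges:
        ll += edge_log_likelihood(g, e, prob_fn)
        g.add_edge(*e)
    return ll

# Toy target evolution: compare rival models by log-likelihood.
arrivals = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(sequence_log_likelihood(arrivals, preferential_attachment_prob, 4))
print(sequence_log_likelihood(arrivals, null_prob, 4))

A linear combination of components in the paper's sense would replace prob_fn with a weighted sum of such component functions, with the weights tuned automatically to maximise this likelihood.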




Read also

Network analysis needs tools to infer distributions over graphs of arbitrary size from a single graph. Assuming the distribution is generated by a continuous latent space model which obeys certain natural symmetry and smoothness properties, we establish three levels of consistency for non-parametric maximum likelihood inference as the number of nodes grows: (i) the estimated locations of all nodes converge in probability on their true locations; (ii) the distribution over locations in the latent space converges on the true distribution; and (iii) the distribution over graphs of arbitrary size converges.
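As an illustration of the kind of model these consistency results cover, the sketch below evaluates the likelihood of an observed graph under latent positions with a logistic-distance link. The specific link function, offset, and toy data are assumptions for illustration, not the paper's model.

import numpy as np

def edge_prob(z_i, z_j):
    # Connection probability decays with Euclidean distance between
    # latent positions (an assumed logistic link with unit offset).
    d = np.linalg.norm(z_i - z_j)
    return 1.0 / (1.0 + np.exp(d - 1.0))

def graph_log_likelihood(adj, z):
    # Log-likelihood of an observed adjacency matrix given latent
    # positions z (an n x k array); edges are independent given z.
    n = adj.shape[0]
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            p = edge_prob(z[i], z[j])
            ll += np.log(p) if adj[i, j] else np.log(1.0 - p)
    return ll

# Toy example: two nearby nodes connected, one distant node isolated.
z = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 3.0]])
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]])
print(graph_log_likelihood(adj, z))

Non-parametric maximum likelihood in this setting means optimising the positions z themselves; the abstract's result (i) says such estimated positions converge on the truth as the graph grows.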
A primary goal of social science research is to understand how latent group memberships predict the dynamic process of network evolution. In the modeling of international conflicts, for example, scholars hypothesize that membership in geopolitical coalitions shapes the decision to engage in militarized conflict. Such theories explain the ways in which nodal and dyadic characteristics affect the evolution of relational ties over time via their effects on group memberships. To aid the empirical testing of these arguments, we develop a dynamic model of network data by combining a hidden Markov model with a mixed-membership stochastic blockmodel that identifies latent groups underlying the network structure. Unlike existing models, we incorporate covariates that predict node membership in latent groups as well as the direct formation of edges between dyads. While prior substantive research often assumes the decision to engage in militarized conflict is independent across states and static over time, we demonstrate that conflict patterns are driven by states evolving membership in geopolitical blocs. Changes in monadic covariates like democracy shift states between coalitions, generating heterogeneous effects on conflict over time and across states. The proposed methodology, which relies on a variational approximation to a collapsed posterior distribution as well as stochastic optimization for scalability, is implemented through an open-source software package.
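The blockmodel building block is easy to state: with membership vectors pi_i and pi_j and a block tie-probability matrix B, the marginal tie probability is pi_i' B pi_j; the hidden Markov layer then lets the membership vectors evolve over time. A minimal numeric sketch (all values are made up for illustration):

import numpy as np

def mmsb_edge_prob(pi_i, pi_j, B):
    # Mixed-membership stochastic blockmodel: marginal probability of
    # an i -> j tie when each node draws a group from its membership
    # vector and B[g, h] is the tie probability between groups g, h.
    return pi_i @ B @ pi_j

# Toy example with two latent blocs.
B = np.array([[0.8, 0.1],
              [0.1, 0.6]])
pi_i = np.array([0.9, 0.1])   # node i mostly in bloc 0
pi_j = np.array([0.2, 0.8])   # node j mostly in bloc 1
print(mmsb_edge_prob(pi_i, pi_j, B))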
The literature in social network analysis has largely focused on methods and models which require complete network data; however, many networks can only be studied via sampling methods, due to the scale or complexity of the network, access limitations, or a hard-to-reach population of interest. In such cases, the application of random walk-based Markov chain Monte Carlo (MCMC) methods to estimate multiple network features is common. However, the reliability of these estimates has been largely ignored. We consider and further develop multivariate MCMC output analysis methods in the context of network sampling to directly address the reliability of the multivariate estimation. This approach yields principled, computationally efficient, and broadly applicable methods for assessing the Monte Carlo estimation procedure. In particular, for two random-walk algorithms, a simple random walk and a Metropolis-Hastings random walk, we construct and compare network parameter estimates, effective sample sizes, coverage probabilities, and stopping rules, all of which speak to the estimation reliability.
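For intuition, the sketch below runs a simple random walk and forms a re-weighted estimate of mean degree (a simple random walk visits nodes proportionally to degree, so the harmonic mean of visited degrees is a consistent estimator of the mean degree); the multivariate output-analysis tools the abstract describes would then quantify how reliable such an estimate is. The graph model and walk length are illustrative choices.

import numpy as np
import networkx as nx

def simple_random_walk(graph, start, steps, rng):
    # Simple random walk: move to a uniformly chosen neighbour.
    node, path = start, [start]
    for _ in range(steps):
        node = rng.choice(list(graph.neighbors(node)))
        path.append(node)
    return path

def mean_degree_estimate(graph, walk):
    # The walk samples nodes proportionally to degree, so the harmonic
    # mean of visited degrees re-weights back to the plain mean degree.
    degs = np.array([graph.degree(v) for v in walk])
    return len(degs) / np.sum(1.0 / degs)

rng = np.random.default_rng(1)
g = nx.barabasi_albert_graph(200, 3, seed=1)   # connected by construction
walk = simple_random_walk(g, 0, 5000, rng)
print(mean_degree_estimate(g, walk))
print(np.mean([d for _, d in g.degree()]))     # ground truth to compare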
Giona Casiraghi, 2021
The complexity underlying real-world systems implies that standard statistical hypothesis testing methods may not be adequate for these applications. Specifically, we show that the likelihood-ratio test's null distribution needs to be modified to accommodate the complexity found in multi-edge network data. When working with independent observations, the p-values of likelihood-ratio tests are approximated using a $\chi^2$ distribution. However, such an approximation should not be used when dealing with multi-edge network data, which is characterized by multiple correlations and competitions that make the standard approximation unsuitable. We solve the problem by approximating the likelihood-ratio test's null distribution with a Beta distribution instead. Finally, we show empirically that even for a small multi-edge network the standard $\chi^2$ approximation gives erroneous results, while the proposed Beta approximation yields correct p-value estimates.
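The mechanical difference between the two approximations can be sketched as follows; the simulated null sample is a generic stand-in, not the multi-edge network ensemble studied in the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy null sample of likelihood-ratio statistics (illustrative only).
lr_stats = rng.chisquare(df=3, size=10_000) * 0.8

observed = 9.0

# Standard chi^2 approximation with df = 3.
p_chi2 = stats.chi2.sf(observed, df=3)

# Beta approximation: rescale the statistics into (0, 1), fit a Beta
# to the null sample, and read the p-value off its survival function.
scale = lr_stats.max() * 1.05
a, b, loc, s = stats.beta.fit(lr_stats / scale, floc=0, fscale=1)
p_beta = stats.beta.sf(observed / scale, a, b)

print(f"chi^2 p-value: {p_chi2:.4f}, Beta p-value: {p_beta:.4f}")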
Developing spatio-temporal crime prediction models, and to a lesser extent, developing measures of accuracy and operational efficiency for them, has been an active area of research for almost two decades. Despite calls for rigorous and independent evaluations of model performance, such studies have been few and far between. In this paper, we argue that studies should focus not on finding the one predictive model or the one measure that is the most appropriate at all times, but instead on careful consideration of several factors that affect the choice of the model and the choice of the measure, to find the best measure and the best model for the problem at hand. We argue that because each problem is unique, it is important to develop measures that empower the practitioner with the ability to input the choices and preferences that are most appropriate for the problem at hand. We develop a new measure called the penalized predictive accuracy index (PPAI) which imparts such flexibility. We also propose the use of the expected utility function to combine multiple measures in a way that is appropriate for a given problem in order to assess the models against multiple criteria. We further propose the use of the average logarithmic score (ALS) measure that is appropriate for many crime models and measures accuracy differently than existing measures. These measures can be used alongside existing measures to provide a more comprehensive means of assessing the accuracy and potential utility of spatio-temporal crime prediction models.
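A logarithmic score of this kind has a simple form: the mean log of the predicted probability assigned to the cells where crimes actually occurred. A minimal sketch, assuming a gridded forecast; the grid, counts, and probabilities are made up for illustration and this is a generic log-score, not necessarily the paper's exact ALS definition.

import numpy as np

def average_log_score(pred_probs, crime_counts):
    # Mean log predicted probability of the cells in which observed
    # crimes fell, weighted by crime counts. Higher (less negative)
    # is better; a zero-probability cell with an observed crime
    # yields -inf, a deliberate penalty for overconfident forecasts.
    pred_probs = np.asarray(pred_probs, dtype=float)
    crime_counts = np.asarray(crime_counts)
    mask = crime_counts > 0
    return (np.sum(crime_counts[mask] * np.log(pred_probs[mask]))
            / crime_counts.sum())

probs = np.array([0.5, 0.3, 0.15, 0.05])   # forecast over 4 grid cells
counts = np.array([3, 1, 0, 1])            # observed crimes per cell
print(average_log_score(probs, counts))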