Selection of Exponential-Family Random Graph Models via Held-Out Predictive Evaluation (HOPE)


Abstract

Statistical models for networks with complex dependencies pose particular challenges for model selection and evaluation. In particular, many well-established statistical tools for selecting between models assume conditional independence of observations and/or conventional asymptotics, and their theoretical foundations are not always applicable in a network modeling context. While simulation-based approaches to model adequacy assessment are now widely used, there remains a need for procedures that quantify a model's performance in a manner suitable for selecting among competing models. Here, we propose to address this issue by developing a predictive evaluation strategy for exponential-family random graph models that is analogous to cross-validation. Our approach builds on the held-out predictive evaluation (HOPE) scheme introduced by Wang et al. (2016) to assess imputation performance. We systematically hold out parts of the observed network to: evaluate how well the model predicts the held-out data; identify where the model performs poorly, depending on which data are held out, thereby indicating potential weaknesses; and calculate general summaries of predictive performance that can be used for model selection. As such, HOPE can assist researchers in improving models by indicating where a model performs poorly, and by quantitatively comparing predictive performance across competing models. The proposed method is applied to the model selection problem for two well-known data sets, and the results are compared to those obtained via nominal AIC and BIC scores.
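To make the general scheme concrete, the following is a minimal sketch of a HOPE-style evaluation loop, not the authors' implementation. It assumes a hypothetical, user-supplied fit_and_predict routine that refits an ERGM with the masked dyads treated as missing data and returns predicted tie probabilities for them, and it uses held-out log-loss as one of many possible summaries of predictive performance.

import numpy as np

def hope_score(adjacency, fit_and_predict, n_folds=10, rng_seed=0):
    """Cross-validation-style held-out predictive evaluation for a network model.

    adjacency       : square 0/1 numpy array for an undirected network without self-loops.
    fit_and_predict : hypothetical user-supplied callable; given the adjacency matrix
                      with a set of dyads masked out (set to NaN) and the list of
                      held-out dyads, it refits the model treating those dyads as
                      missing and returns their predicted tie probabilities.
    Returns the mean held-out log-loss across folds (lower is better).
    """
    rng = np.random.default_rng(rng_seed)
    n = adjacency.shape[0]
    # Enumerate the free dyads of an undirected network (upper triangle only).
    dyads = [(i, j) for i in range(n) for j in range(i + 1, n)]
    order = rng.permutation(len(dyads))
    folds = np.array_split(order, n_folds)

    fold_losses = []
    for fold in folds:
        held_out = [dyads[k] for k in fold]
        masked = adjacency.astype(float).copy()
        for i, j in held_out:
            masked[i, j] = masked[j, i] = np.nan  # mark these dyads as unobserved
        probs = fit_and_predict(masked, held_out)  # model-specific refit and prediction
        truth = np.array([adjacency[i, j] for i, j in held_out], dtype=float)
        probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1 - 1e-12)
        # Predictive log-loss on the held-out dyads for this fold.
        loss = -np.mean(truth * np.log(probs) + (1 - truth) * np.log(1 - probs))
        fold_losses.append(loss)
    return float(np.mean(fold_losses))

Under these assumptions, competing ERGM specifications would each be scored against the same partition of held-out dyads, and the specification with the lowest mean held-out loss would be preferred, in the same spirit as comparing nominal AIC or BIC values.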
