
Diagnostics and Visualization of Point Process Models for Event Times on a Social Network

Posted by Jing Wu
Publication date: 2020
Research field: Mathematical Statistics
Paper language: English





Point process models have been used to analyze interaction event times on a social network, in the hope of providing valuable insights for social science research. However, the diagnostics and visualization of the modeling results from such an analysis have received limited discussion in the literature. In this paper, we develop a systematic set of diagnostic tools and visualizations for point process models fitted to data from a network setting. We analyze the residual process and Pearson residuals on the network, inspecting their structure and clustering patterns. Equipped with these tools, we can validate whether a model adequately captures the temporal and/or network structures in the observed data. The utility of our approach is demonstrated using simulation studies and point process models applied to a study of animal social interactions.
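The abstract does not spell out the exact form of these residuals, but for temporal point processes the raw residual N(T) − ∫₀ᵀ λ̂(t) dt and the Pearson residual Σₖ λ̂(tₖ)^(−1/2) − ∫₀ᵀ λ̂(t)^(1/2) dt, computed per edge, are the standard quantities such diagnostics build on. A minimal sketch for a single edge follows; the intensity function and event times are hypothetical placeholders, not the fitted model from the paper.

```python
import numpy as np

def raw_residual(event_times, intensity, T, grid_size=1000):
    """Raw residual for one edge: R(T) = N(T) - int_0^T lambda_hat(t) dt."""
    grid = np.linspace(0.0, T, grid_size)
    compensator = np.trapz([intensity(t) for t in grid], grid)
    return len(event_times) - compensator

def pearson_residual(event_times, intensity, T, grid_size=1000):
    """Pearson residual: sum_k lambda_hat(t_k)**-0.5 - int_0^T lambda_hat(t)**0.5 dt."""
    grid = np.linspace(0.0, T, grid_size)
    integral = np.trapz([np.sqrt(intensity(t)) for t in grid], grid)
    return sum(1.0 / np.sqrt(intensity(t)) for t in event_times) - integral

# Hypothetical fitted intensity and event times for one directed edge (i -> j)
intensity_ij = lambda t: 0.4 + 0.3 * np.sin(2 * np.pi * t / 24.0) ** 2
events_ij = np.array([1.2, 3.7, 9.4, 15.0, 21.8])
print(raw_residual(events_ij, intensity_ij, T=24.0))
print(pearson_residual(events_ij, intensity_ij, T=24.0))
```

Mapping such per-edge residuals across the network, or aggregating them by node or cluster, gives the kind of structural diagnostic the paper describes.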




Read also

In this article, we study the activity patterns of modern social media users on platforms such as Twitter and Facebook. To characterize the complex patterns we observe in users' interactions with social media, we describe a new class of point process models. The components in the model have straightforward interpretations and can thus provide meaningful insights into user activity patterns. A composite likelihood approach and a composite EM estimation procedure are developed to overcome the challenges that arise in parameter estimation. Using the proposed method, we analyze Donald Trump's Twitter data and study if and how his tweeting behavior evolved before, during and after the presidential campaign. Additionally, we analyze a large-scale social media dataset from Sina Weibo and identify interesting groups of users with distinct behaviors; in this analysis, we also discuss the effect of social ties on a user's online content-generating behavior.
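As one illustration of the general machinery such models rest on (not the paper's composite likelihood or composite EM procedure, which the abstract does not specify), the log-likelihood of observed activity times under an intensity λ(t) on [0, T] is Σₖ log λ(tₖ) − ∫₀ᵀ λ(t) dt. A minimal sketch with a made-up two-component intensity, a constant baseline plus an evening burst:

```python
import numpy as np

def point_process_loglik(event_times, intensity, T, grid_size=2000):
    """log L = sum_k log lambda(t_k) - int_0^T lambda(t) dt (trapezoid rule)."""
    grid = np.linspace(0.0, T, grid_size)
    compensator = np.trapz([intensity(t) for t in grid], grid)
    return float(np.sum(np.log([intensity(t) for t in event_times]))) - compensator

# Made-up intensity: constant baseline plus an evening burst around hour 20
intensity = lambda t: 0.2 + 1.5 * np.exp(-0.5 * ((t - 20.0) / 1.5) ** 2)
activity_times = np.array([3.1, 8.4, 19.2, 19.8, 20.3, 20.9, 22.5])
print(point_process_loglik(activity_times, intensity, T=24.0))
```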
This paper introduces a statistical model for the arrival times of connection events in a computer network. Edges between nodes in a network can be interpreted and modelled as point processes where events in the process indicate information being sent along that edge. A model of normal behaviour can be constructed for each edge in the network by identifying key network user features such as seasonality and self-exciting behaviour, where events typically arise in bursts at particular times of day. When monitoring the network in real time, unusual patterns of activity could indicate the presence of a malicious actor. Four different models for self-exciting behaviour are introduced and compared using data collected from the Imperial College and Los Alamos National Laboratory computer networks.
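A minimal sketch of the kind of conditional intensity described here, combining a daily-seasonal baseline with exponential (Hawkes-type) self-excitation on a single edge; the parameter values and event times are illustrative, and the four specific self-exciting variants compared in the paper are not reproduced:

```python
import numpy as np

def edge_intensity(t, history, mu=0.1, a=0.05, alpha=0.5, beta=1.0, period=24.0):
    """Conditional intensity on one edge: daily-seasonal baseline plus
    exponential self-excitation from earlier events on the same edge:
    lambda(t) = mu + a*sin^2(pi*t/period) + alpha*sum_{t_k < t} beta*exp(-beta*(t - t_k))."""
    past = history[history < t]
    baseline = mu + a * np.sin(np.pi * t / period) ** 2
    excitation = alpha * np.sum(beta * np.exp(-beta * (t - past)))
    return baseline + excitation

# Illustrative connection times (hours) observed on a single edge
events = np.array([1.0, 1.2, 1.3, 9.5, 9.6, 20.0])
print(edge_intensity(10.0, events))   # well after the morning burst
print(edge_intensity(20.1, events))   # just after the evening event
```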
Romain Boulet (2008)
Large graphs are natural mathematical models for describing the structure of the data in a wide variety of fields, such as web mining, social networks, information retrieval, biological networks, etc. For all these applications, automatic tools are required to get a synthetic view of the graph and to reach a good understanding of the underlying problem. In particular, discovering groups of tightly connected vertices and understanding the relations between those groups is very important in practice. This paper shows how a kernel version of the batch Self Organizing Map can be used to achieve these goals via kernels derived from the Laplacian matrix of the graph, especially when it is used in conjunction with more classical methods based on the spectral analysis of the graph. The proposed method is used to explore the structure of a medieval social network modeled through a weighted graph that has been directly built from a large corpus of agrarian contracts.
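The abstract does not say which Laplacian-derived kernel is used; one common choice is the heat (diffusion) kernel K = exp(−βL). The sketch below builds such a kernel for a small hypothetical weighted graph and also computes a simple spectral embedding of the Laplacian; the kernel batch SOM step itself is not reproduced here:

```python
import networkx as nx
import numpy as np
from scipy.linalg import expm

# Small hypothetical weighted graph standing in for the agrarian-contract network
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.5), (1, 3, 0.5)])

# Weighted graph Laplacian L = D - W
L = nx.laplacian_matrix(G, weight="weight").toarray().astype(float)

# One common Laplacian-derived kernel: the heat (diffusion) kernel K = exp(-beta * L)
beta = 0.5
K = expm(-beta * L)

# Spectral view of the same Laplacian (eigenvectors 2-3 give a 2-D embedding)
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]

print(np.round(K, 3))
print(np.round(embedding, 3))
```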
This paper reviews, classifies and compares recent models for social networks that have mainly been published within the physics-oriented complex networks literature. The models fall into two categories: those in which the addition of new links is dependent on the (typically local) network structure (network evolution models, NEMs), and those in which links are generated based only on nodal attributes (nodal attribute models, NAMs). An exponential random graph model (ERGM) with structural dependencies is included for comparison. We fit models from each of these categories to two empirical acquaintance networks with respect to basic network properties. We compare higher order structures in the resulting networks with those in the data, with the aim of determining which models produce the most realistic network structure with respect to degree distributions, assortativity, clustering spectra, geodesic path distributions, and community structure (subgroups with dense internal connections). We find that the nodal attribute models successfully produce assortative networks and very clear community structure. However, they generate unrealistic clustering spectra and peaked degree distributions that do not match empirical data on large social networks. On the other hand, many of the network evolution models produce degree distributions and clustering spectra that agree more closely with data. They also generate assortative networks and community structure, although often not to the same extent as in the data. The ERG model turns out to produce the weakest community structure.
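The structural comparisons listed here (degree distributions, assortativity, clustering, geodesic paths) can be computed with standard graph tooling. A minimal sketch using networkx, with the karate-club graph as a hypothetical stand-in for an empirical acquaintance network:

```python
import networkx as nx

def structural_summaries(G):
    """Summaries used to compare model-generated networks with the data:
    degree distribution, assortativity, clustering, and geodesic paths."""
    return {
        "degree_hist": nx.degree_histogram(G),
        "assortativity": nx.degree_assortativity_coefficient(G),
        "avg_clustering": nx.average_clustering(G),
        "avg_geodesic": nx.average_shortest_path_length(G),
    }

# Hypothetical stand-in for an empirical acquaintance network
G = nx.karate_club_graph()
print(structural_summaries(G))
```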
J. Park, W. Chang, B. Choi (2021)
With rapid transmission, the coronavirus disease 2019 (COVID-19) has led to over 2 million deaths worldwide, posing significant societal challenges. Understanding the spatial patterns of patient visits and detecting the local spreading events are crucial to controlling disease outbreaks. We analyze highly detailed COVID-19 contact tracing data collected from Seoul, which provides a unique opportunity to understand the mechanism of patient visit occurrence. Analyzing contact tracing data is challenging because patient visits show strong clustering patterns while clusters of events may have complex interaction behavior. To account for such behaviors, we develop a novel interaction Neyman-Scott process that regards the observed patient visit events as offsprings generated from a parent spreading event. Inference for such models is complicated since the likelihood involves intractable normalizing functions. To address this issue, we embed an auxiliary variable algorithm into our Markov chain Monte Carlo. We fit our model to several simulated and real data examples under different outbreak scenarios and show that our method can describe spatial patterns of patient visits well. We also provide visualization tools that can inform public health interventions for infectious diseases such as social distancing.
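For orientation, a classic (non-interacting) Neyman-Scott process can be simulated by drawing Poisson parent events and Gaussian-displaced offspring around each parent; the sketch below does exactly that on a unit square with made-up parameters. The paper's interaction structure between clusters and its auxiliary-variable MCMC are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_neyman_scott(kappa, mu, sigma, region=(0.0, 1.0, 0.0, 1.0)):
    """Simulate a plain Neyman-Scott process on a rectangle: parent 'spreading
    events' ~ Poisson(kappa * area), each producing a Poisson(mu) number of
    offspring 'visits' displaced from the parent by N(0, sigma^2 I)."""
    x0, x1, y0, y1 = region
    n_parents = rng.poisson(kappa * (x1 - x0) * (y1 - y0))
    parents = rng.uniform([x0, y0], [x1, y1], size=(n_parents, 2))
    offspring = [p + rng.normal(0.0, sigma, size=(rng.poisson(mu), 2)) for p in parents]
    visits = np.vstack(offspring) if offspring else np.empty((0, 2))
    return parents, visits

parents, visits = simulate_neyman_scott(kappa=20.0, mu=5.0, sigma=0.02)
print(len(parents), "parent events,", len(visits), "offspring visits")
```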