In recent years, opinion dynamics has received increasing attention, and various models have been introduced and evaluated, mainly by simulation. In this study, we introduce and analyze a dynamical model inspired by the so-called `bounded confidence' approach, where voters engaged in an electoral decision with two options are influenced by individuals sharing an opinion similar to their own. This model captures salient features of the evolution of opinions and results in final clusters of voters. The model is nonlinear and discontinuous. We provide a detailed study of the model, including a complete classification of the fixed points of the resulting dynamical system and an analysis of their stability. It is shown that every trajectory tends to a fixed point. The model highlights that the final electoral outcome depends on the level of interaction in the society, in addition to the initial opinion of each individual, so that a strongly interconnected society can reverse the electoral outcome as compared to a society with looser exchange.
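For intuition only, the following is a minimal sketch of a bounded-confidence update of the kind described above, not the authors' exact system: each voter moves to the average opinion of voters within an assumed confidence radius `eps`, and the two-option outcome is read off from the sign of the final opinions. The opinion range [-1, 1], the synchronous update, and all parameter values are illustrative assumptions.

```python
import numpy as np

def bc_step(opinions, eps):
    """One synchronous bounded-confidence update: each voter adopts the
    mean opinion of all voters within distance eps of itself."""
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        neighbors = opinions[np.abs(opinions - x) <= eps]
        new[i] = neighbors.mean()
    return new

rng = np.random.default_rng(0)
opinions = rng.uniform(-1.0, 1.0, size=200)    # initial opinions on the two options
for _ in range(50):                            # iterate until opinions settle into clusters
    opinions = bc_step(opinions, eps=0.3)
votes_a = int((opinions > 0).sum())            # read the electoral outcome from the sign
print(f"option A: {votes_a}, option B: {len(opinions) - votes_a}")
```

A larger confidence radius `eps` (a more strongly interconnected society) can merge clusters and thereby flip which option wins, which is the effect highlighted in the abstract.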
People's opinions evolve over time as they interact with their friends, family, colleagues, and others. In the study of opinion dynamics on networks, one often encodes interactions between people in the form of dyadic relationships, but many social interactions in real life are polyadic (i.e., they involve three or more people). In this paper, we extend an asynchronous bounded-confidence model (BCM) on graphs, in which nodes are connected pairwise by edges, to an asynchronous BCM on hypergraphs, in which arbitrarily many nodes can be connected by a single hyperedge. We show that our hypergraph BCM converges to consensus under a wide range of initial conditions for the opinions of the nodes, including non-uniform and asymmetric initial opinion distributions. We also show that, under suitable conditions, echo chambers can form on hypergraphs with community structure. We demonstrate that the opinions of individuals can sometimes jump from one opinion cluster to another in a single time step, a phenomenon (which we call ``opinion jumping'') that is not possible in standard dyadic BCMs. Additionally, we observe a phase transition in the convergence time on a complete hypergraph when the variance $\sigma^2$ of the initial opinion distribution equals the confidence bound $c$. We prove that the convergence time grows at least exponentially fast with the number of nodes when $\sigma^2 > c$ and the initial opinions are normally distributed. Therefore, to determine the convergence properties of our hypergraph BCM when the variance and the number of hyperedges are both large, it is necessary to use analytical methods instead of relying only on Monte Carlo simulations.
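As a rough illustration of the asynchronous hyperedge-based update (a sketch under simplifying assumptions, not the paper's exact discordance function): a random hyperedge is chosen at each step and, if the opinion spread inside it is below the confidence bound `c`, all of its nodes adopt the hyperedge mean. A single such update can move a node across clusters, which is the ``opinion jumping'' phenomenon mentioned above.

```python
import random
import numpy as np

def async_hypergraph_bcm(opinions, hyperedges, c, steps, seed=0):
    """Asynchronous bounded-confidence updates on a hypergraph (illustrative rule)."""
    rng = random.Random(seed)
    x = np.array(opinions, dtype=float)
    for _ in range(steps):
        edge = list(rng.choice(hyperedges))     # asynchronous: one hyperedge per step
        vals = x[edge]
        if vals.max() - vals.min() <= c:        # hyperedge is sufficiently concordant
            x[edge] = vals.mean()               # every member adopts the hyperedge mean
    return x

# Toy example: three hyperedges on five nodes.
hyperedges = [(0, 1, 2), (2, 3, 4), (0, 3)]
print(async_hypergraph_bcm([0.0, 0.1, 0.2, 0.8, 0.9], hyperedges, c=0.3, steps=100))
```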
The flow of information reaching us via online media platforms is optimized not by information content or relevance but by popularity and proximity to the target. This is typically done to maximise platform usage. As a side effect, it introduces an algorithmic bias that is believed to enhance polarization of the societal debate. To study this phenomenon, we modify the well-known continuous opinion dynamics model of bounded confidence to account for the algorithmic bias and investigate its consequences. In the simplest version of the original model, pairs of discussion participants are chosen at random and their opinions move closer to each other if they are within a fixed tolerance level. We modify the selection rule for the discussion partners: there is an enhanced probability of choosing individuals whose opinions are already close to each other, thus mimicking the behavior of online media that suggest interaction with similar peers. As a result, we observe (a) an increased tendency towards polarization, which emerges also under conditions where the original model would predict convergence, and (b) a dramatic slowing down of the speed at which convergence to the asymptotic state is reached, which makes the system highly unstable. Polarization is augmented by a fragmented initial population.
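A minimal sketch of the biased partner selection, assuming (as one common choice) that the second participant j is drawn with probability proportional to |x_i - x_j|^(-gamma), so that gamma = 0 recovers the uniform random pairing of the original bounded-confidence model. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def biased_deffuant_step(x, eps, mu, gamma, rng):
    """One pairwise update with algorithmically biased partner selection."""
    n = len(x)
    i = rng.integers(n)
    w = (np.abs(x - x[i]) + 1e-9) ** (-gamma)   # closer opinions are more likely partners
    w[i] = 0.0                                  # exclude self-interaction
    j = rng.choice(n, p=w / w.sum())
    if abs(x[i] - x[j]) < eps:                  # bounded-confidence condition
        x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 500)                  # uniform initial opinions
for _ in range(50_000):
    x = biased_deffuant_step(x, eps=0.3, mu=0.5, gamma=1.5, rng=rng)
```

With gamma > 0, like-minded pairs interact more often, which is the mechanism behind the increased polarization and slower convergence described above.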
The problem of analyzing the performance of networked agents exchanging evidence in a dynamic network has recently grown in importance. This problem has relevance in signal and data fusion network applications and in studying opinion and consensus dynamics in social networks. Because of its capability of handling a wider variety of uncertainties and ambiguities associated with evidence, we use the framework of Dempster-Shafer (DS) theory to capture the opinion of an agent. We then examine the consensus among agents in dynamic networks in which an agent can utilize either a cautious or a receptive updating strategy. In particular, we examine the case of bounded confidence updating, where an agent exchanges its opinion only with neighboring nodes possessing similar evidence. In a fusion network, this captures the case in which nodes only update their state based on evidence consistent with the node's own evidence. In opinion dynamics, this captures the notions of Social Judgment Theory (SJT), in which agents update their opinions only with other agents possessing opinions closer to their own. Focusing on the two special DS-theoretic cases where an agent's state is modeled as a Dirichlet body of evidence or as a probability mass function (p.m.f.), we utilize results from matrix theory, graph theory, and networks to prove the existence of consensus agent states in several time-varying network cases of interest. For example, we show the existence of a consensus in which a subset of network nodes achieves agreement that is then adopted by follower network nodes. Of particular interest is the case of multiple opinion leaders, where we show that the agents do not reach a consensus in general, but rather converge to opinion clusters. Simulation results are provided to illustrate the main results.
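For the p.m.f. special case, here is a minimal sketch of bounded-confidence (receptive) updating, assuming total-variation distance as the similarity measure and a plain averaging update with like-minded neighbors; the paper's DS-theoretic treatment of Dirichlet bodies of evidence uses its own fusion rule, so this is only meant to convey the idea.

```python
import numpy as np

def tv_distance(p, q):
    """Total-variation distance between two p.m.f.s (assumed similarity measure)."""
    return 0.5 * np.abs(p - q).sum()

def bc_pmf_step(P, adjacency, tau):
    """P: (n_agents, n_outcomes) row-stochastic array; each agent averages its
    p.m.f. with those of neighbors whose p.m.f. is within distance tau."""
    new = P.copy()
    for i in range(len(P)):
        similar = [j for j in adjacency[i] if tv_distance(P[i], P[j]) <= tau]
        new[i] = P[[i] + similar].mean(axis=0)
    return new

# Three agents on a line graph, two-outcome p.m.f.s.
P = np.array([[0.9, 0.1], [0.6, 0.4], [0.1, 0.9]])
adjacency = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(20):
    P = bc_pmf_step(P, adjacency, tau=0.35)
print(P)   # agents 0 and 1 form one opinion cluster; agent 2 keeps its own
```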
The communication process in a situation of emergency is discussed within Scheff's theory of shame and pride. The communication involves messages from the media and from other persons. Three strategies are considered: selfish (to contact friends), collective (to join other people), and passive (to do nothing). We show that the pure selfish strategy cannot be evolutionarily stable. The main result is that the community structure is statistically meaningful only if interpersonal communication is weak.
We present a model for studying communities of epistemically interacting agents who update their belief states by averaging (in a specified way) the belief states of other agents in the community. The agents in our model have a rich belief state, involving multiple independent issues that are interrelated in such a way that they form a theory of the world. Our main goal is to calculate the probability that an agent ends up in an inconsistent belief state due to updating (in the given way). To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. It is shown that, under the assumptions of our model, an agent always has a probability of less than 2% of ending up in an inconsistent belief state. Moreover, this probability can be made arbitrarily small by increasing the number of independent issues the agents have to judge or by increasing the group size. A real-world situation to which this model applies is a group of experts participating in a Delphi study.
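As an illustration of the statistical-sampling approach, the following Monte Carlo sketch estimates an inconsistency probability under a deliberately simplified scheme: each agent holds 0/1 beliefs on m independent issues and on their conjunction (the `theory of the world'), an update averages the group's beliefs component-wise and rounds to 0/1, and the result counts as inconsistent when the rounded belief in the conjunction disagrees with the conjunction of the rounded issue beliefs. The rounding rule and the uniform random initial beliefs are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def inconsistency_probability(n_agents, m_issues, n_samples, seed=0):
    """Monte Carlo estimate of how often averaging-plus-rounding yields an
    inconsistent belief state (simplified illustrative scheme)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        issues = rng.integers(0, 2, size=(n_agents, m_issues))   # 0/1 beliefs per issue
        theory = issues.all(axis=1)                              # each agent starts consistent
        avg_issues = issues.mean(axis=0) >= 0.5                  # average and round per issue
        avg_theory = theory.mean() >= 0.5                        # average and round the theory
        if avg_theory != avg_issues.all():                       # rounded beliefs clash
            hits += 1
    return hits / n_samples

print(inconsistency_probability(n_agents=5, m_issues=3, n_samples=100_000))
```

Varying n_agents and m_issues in this sketch gives a feel for the kind of estimate that the statistical-sampling evaluation in the paper provides.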