Opinion Dynamics lacks a theoretical basis. In this article, I propose a decision-theoretic framework, based on the updating of subjective probabilities, as that basis. We will see that this provides a basic tool for better understanding the interaction between agents in Opinion Dynamics problems and for creating new models. I will review the few existing applications of Bayesian update rules to both discrete and continuous opinion problems and show that several traditional models can be obtained as special cases of, or approximations to, these Bayesian models. The empirical basis and useful properties of the framework will be discussed, and examples of how the framework can be used to describe different problems will be given.
Traditional opinion dynamics models are simple, yet sufficient to explore the consequences of basic scenarios. But to better describe problems such as polarization and extremism, we might need to include details about human biases and other cognitive characteristics. In this paper, I explain how we can describe and use the mental models and assumptions of the agents through Bayesian-inspired model building. The relationship between human rationality and Bayesian methods will be explored, and we will see that Bayesian ideas can indeed be used to explain how humans reason. We will see how to apply Bayesian-inspired rules using the simplest version of the Continuous Opinions and Discrete Actions (CODA) model. From that, we will explore how to obtain update rules that include human behavioral characteristics such as confirmation bias, motivated reasoning, or our tendency to change opinions much less than we should. Keywords: Opinion dynamics, Bayesian methods, Cognition, CODA, Agent-based models
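The simplest CODA-style update described above can be sketched as follows. This is an illustrative sketch, not the paper's exact notation: the likelihood `ALPHA` and the two-choice setup are assumed. Each agent holds a probability p that option A is the better choice, its visible action is A whenever p > 0.5, and observing a neighbor's action triggers a Bayesian update that, in log-odds space, reduces to a fixed additive step.

```python
import math

# Assumed likelihood that a neighbor picks A given that A is really better;
# any value above 0.5 gives the same qualitative dynamics.
ALPHA = 0.7

def log_odds(p):
    """Map a probability to its log-odds (logit)."""
    return math.log(p / (1.0 - p))

def coda_update(p, neighbor_chose_a):
    """Bayesian update of opinion p after observing one neighbor's action.

    In log-odds space the update is a constant step towards (or away from)
    choice A, which is what makes CODA-style extremism possible: repeated
    agreement pushes the log-odds without bound.
    """
    step = math.log(ALPHA / (1.0 - ALPHA))
    nu = log_odds(p) + (step if neighbor_chose_a else -step)
    return 1.0 / (1.0 + math.exp(-nu))
```

Starting from an undecided p = 0.5, one observation of a neighbor choosing A moves the opinion to exactly `ALPHA`; a subsequent observation of the opposite action undoes the step.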
It is known that individual opinions on different policy issues often align to a dominant ideological dimension (e.g. left vs. right) and become increasingly polarized. We provide an agent-based model that reproduces these two stylized facts as emergent properties of an opinion dynamics process in a multi-dimensional space of continuous opinions. The mechanisms for the change of agents' opinions in this multi-dimensional space are derived from cognitive dissonance theory and structural balance theory. We test assumptions from proximity voting and from directional voting regarding their ability to reproduce the expected emergent properties. We further study how the emotional involvement of agents, i.e. their individual resistance to change opinions, impacts the dynamics. We identify two regimes for the global and the individual alignment of opinions. If the emotional involvement is high and shows a large variance across agents, this fosters the emergence of a dominant ideological dimension. Agents align their opinions along this dimension in opposite directions, i.e. create a state of polarization.
We propose an agent-based model of collective opinion formation to study the wisdom of crowds under social influence. The opinion of an agent is a continuous positive value, denoting its subjective answer to a factual question. The wisdom of crowds states that the average of all opinions is close to the truth, i.e. the correct answer. But if agents have the chance to adjust their opinion in response to the opinions of others, this effect can be destroyed. Our model investigates this scenario by evaluating two competing effects: (i) agents tend to keep their own opinion (individual conviction $\beta$), (ii) they tend to adjust their opinion if they have information about the opinions of others (social influence $\alpha$). For the latter, two different regimes (full information vs. aggregated information) are compared. Our simulations show that social influence enhances the wisdom of crowds only in rare cases. More often, we find that agents converge to a collective opinion that is even farther from the true answer. So, under social influence, the wisdom of crowds can be systematically wrong.
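The competition between conviction and social influence can be sketched as follows. This is a minimal illustrative form, not the authors' exact equations: each agent's new opinion is assumed to be a convex combination of its own opinion (weight $\beta_i$, individual conviction) and the group average (weight $\alpha$, social influence), i.e. the "aggregated information" regime. With heterogeneous convictions, stubborn agents can drag the collective average away from where it started.

```python
def step(opinions, betas, alpha):
    """One round: each agent mixes its opinion with the group average,
    weighted by its individual conviction beta (assumed functional form)."""
    avg = sum(opinions) / len(opinions)
    return [(b * x + alpha * avg) / (b + alpha)
            for x, b in zip(opinions, betas)]

def simulate(opinions, betas, alpha, steps=200):
    """Iterate the social-influence update for a fixed number of rounds."""
    for _ in range(steps):
        opinions = step(opinions, betas, alpha)
    return opinions
```

For example, with a very stubborn agent at 1.0 and a very flexible one at 3.0, the group converges close to 1.0; if the (assumed) true answer were 2.0, the collective opinion ends up farther from the truth than the initial average was, illustrating how social influence can make the crowd systematically wrong.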
The flow of information reaching us via online media platforms is optimized not by information content or relevance but by popularity and proximity to the target. This is typically done to maximise platform usage. As a side effect, it introduces an algorithmic bias that is believed to enhance polarization of the societal debate. To study this phenomenon, we modify the well-known continuous opinion dynamics model of bounded confidence in order to account for the algorithmic bias and investigate its consequences. In the simplest version of the original model, pairs of discussion participants are chosen at random and their opinions get closer to each other if they are within a fixed tolerance level. We modify the selection rule for the discussion partners: there is an enhanced probability of choosing individuals whose opinions are already close to each other, thus mimicking the behavior of online media which suggest interaction with similar peers. As a result we observe: a) an increased tendency towards polarization, which emerges also in conditions where the original model would predict convergence, and b) a dramatic slowing down of the speed at which convergence to the asymptotic state is reached, which makes the system highly unstable. Polarization is further augmented by a fragmented initial population.
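A single interaction of the modified model can be sketched as follows. The pairwise bounded-confidence step is the standard one; the biased selection rule shown here (partner chosen with probability proportional to an inverse power of the opinion distance, with an assumed exponent `GAMMA` and a small `floor` to avoid division by zero) is one plausible reading of the mechanism described above, not necessarily the authors' exact rule.

```python
import random

EPS = 0.3    # tolerance level of the bounded-confidence model
MU = 0.5     # convergence rate: how far the pair moves toward each other
GAMMA = 1.5  # assumed strength of the algorithmic bias (0 = uniform choice)

def pick_partner(i, opinions, gamma=GAMMA, floor=1e-6):
    """Choose a partner for agent i, favoring similar opinions:
    weight ~ |x_i - x_j|**(-gamma) mimics the platform's suggestion bias."""
    weights = [0.0 if j == i
               else (abs(opinions[i] - opinions[j]) + floor) ** -gamma
               for j in range(len(opinions))]
    return random.choices(range(len(opinions)), weights=weights)[0]

def interact(opinions, i, j):
    """Standard bounded-confidence step: move closer only within tolerance."""
    xi, xj = opinions[i], opinions[j]
    if abs(xi - xj) < EPS:
        opinions[i] = xi + MU * (xj - xi)
        opinions[j] = xj + MU * (xi - xj)
```

With `GAMMA = 0` the selection is uniform and the original model is recovered; raising `GAMMA` makes like-minded encounters increasingly likely, which is the mechanism behind the slowed convergence and enhanced polarization reported above.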
In this paper, we discuss a class of distributed detection algorithms which can be viewed as implementations of Bayes' law in distributed settings. Some of these algorithms were proposed recently in the literature, while others are first developed in this paper. Their common feature is that they all combine (i) certain kinds of consensus protocols with (ii) Bayesian updates. They differ mainly in the type of consensus protocol and in the order of the two operations. After discussing their similarities and differences, we compare these distributed algorithms through numerical examples. We focus on the rate at which these algorithms detect the underlying true state of an object. We find that (a) algorithms that reach consensus via geometric averaging are more efficient than those using arithmetic averaging; (b) the order of consensus aggregation and Bayesian update does not noticeably influence the performance of the algorithms; (c) the existence of communication delay dramatically slows down the rate of convergence; and (d) more communication between agents with different signal structures improves the rate of convergence.
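The two building blocks combined by these algorithms can be sketched as follows. This is an illustrative sketch under assumed conditions (a finite set of hypotheses, a fully connected group, no communication delay), not any one of the paper's specific algorithms: each agent applies a local Bayesian update with its own signal likelihood, then pools beliefs with the others via a normalized geometric average (log-linear consensus).

```python
import math

def bayes_update(belief, likelihood):
    """Local Bayesian update: multiply prior belief by the likelihood of
    the observed signal under each hypothesis, then renormalize."""
    post = [b * l for b, l in zip(belief, likelihood)]
    z = sum(post)
    return [p / z for p in post]

def geometric_pool(beliefs):
    """Consensus via geometric averaging: the normalized geometric mean of
    the agents' belief vectors (equal weights assumed)."""
    n = len(beliefs)
    pooled = [math.exp(sum(math.log(b[k]) for b in beliefs) / n)
              for k in range(len(beliefs[0]))]
    z = sum(pooled)
    return [p / z for p in pooled]
```

One round of "update then aggregate" is `geometric_pool([bayes_update(b, l) for b, l in zip(beliefs, likelihoods)])`; swapping the two calls gives the "aggregate then update" variant, the ordering comparison discussed above.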