
Disinformation, Stochastic Harm, and Costly Filtering: A Principal-Agent Analysis of Regulating Social Media Platforms

 Added by Shehroze Khan
Publication date: 2021
Language: English





The spread of disinformation on social media platforms such as Facebook is harmful to society. This harm can take the form of a gradual degradation of public discourse; but it can also take the form of sudden dramatic events such as the recent insurrection on Capitol Hill. The platforms themselves are in the best position to prevent the spread of disinformation, as they have the best access to relevant data and the expertise to use it. However, filtering disinformation is costly, not only because of the expense of implementing filtering algorithms or employing manual filtering effort, but also because removing such highly viral content reduces user growth and thus potential advertising revenue. Since the costs of harmful content are borne by other entities, the platform has no incentive to filter at a socially optimal level. This problem is similar to the problem of environmental regulation, in which the costs of adverse events are not directly borne by a firm, the mitigation effort of a firm is not observable, and the causal link between a harmful consequence and a specific failure is difficult to prove. In the environmental regulation domain, one solution to this issue is to perform costly monitoring to ensure that the firm takes adequate precautions according to a specified rule. However, classifying disinformation is performative, and thus a fixed rule becomes less effective over time. Encoding our domain as a Markov decision process, we demonstrate that no penalty based on a static rule, no matter how large, can incentivize adequate filtering by the platform. Penalties based on an adaptive rule can incentivize optimal effort, but counterintuitively, only if the regulator sufficiently overreacts to harmful events by requiring a greater-than-optimal level of filtering.
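To make the setup concrete, here is a minimal toy sketch of the kind of principal-agent interaction the abstract describes, encoded as a small Markov decision process: the state is the regulator's currently required filtering level, the platform picks a filtering effort each period, harmful events occur with a probability that falls in effort, and a fine is levied only when harm occurs while effort was below the rule. Everything below (parameter values, functional forms, the way the adaptive rule tightens) is an illustrative assumption, not the paper's actual model.

```python
# Toy principal-agent MDP (my own illustration; not the paper's exact model).
# State: the regulator's currently required filtering level. Each period the
# platform picks a filtering effort; a harmful event then occurs with probability
# decreasing in effort, and a fine is levied only if harm occurs while effort was
# below the rule. An "adaptive" rule is tightened after every harmful event.
import numpy as np

EFFORTS = np.linspace(0.0, 1.0, 5)   # feasible filtering levels (assumed)
RULES = np.linspace(0.0, 1.0, 5)     # possible required levels = MDP states (assumed)
GAMMA = 0.95                         # platform's discount factor
FILTER_COST = 2.0                    # per-unit effort cost: algorithms, moderators, lost virality
PENALTY = 10.0                       # fine when harm occurs under insufficient effort

def harm_prob(effort):
    # Chance of a harmful event this period, decreasing in effort (assumed form).
    return 0.5 * (1.0 - effort)

def q_value(i, effort, V, adaptive):
    # Platform's one-period payoff plus discounted continuation value.
    p = harm_prob(effort)
    fine = PENALTY if effort < RULES[i] else 0.0
    j_harm = min(i + 1, len(RULES) - 1) if adaptive else i   # rule tightens after harm if adaptive
    return -(FILTER_COST * effort + p * fine) + GAMMA * (p * V[j_harm] + (1 - p) * V[i])

def platform_best_response(adaptive, iters=3000):
    """Value iteration for the platform's best response to a static or adaptive rule."""
    V = np.zeros(len(RULES))
    for _ in range(iters):
        V = np.array([max(q_value(i, e, V, adaptive) for e in EFFORTS)
                      for i in range(len(RULES))])
    # Effort the platform would choose at each required filtering level.
    return [max(EFFORTS, key=lambda e: q_value(i, e, V, adaptive)) for i in range(len(RULES))]

print("static rule  ->", platform_best_response(adaptive=False))
print("adaptive rule->", platform_best_response(adaptive=True))
```

This toy only shows the structure of the interaction; it does not reproduce the paper's results, which hinge on the performative drift of disinformation classifiers that a fixed rule cannot track, and on the regulator needing to overreact when the rule adapts.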



Related research

The ubiquitous nature of smartphones has significantly increased the use of social media platforms such as Facebook, Twitter, TikTok, and LinkedIn among the public, government, and businesses. Facebook generated approximately 70 billion USD in advertising revenue in 2019 alone, an increase of about 27% over the previous year. Social media has also played a strong role in outbreaks of social protest responsible for political change in several countries. As these examples show, social media plays a major role in business intelligence and international politics. In this paper, we present and discuss a high-level functional intelligence model (recipe) of Social Media Analysis (SMA). This model synthesizes the input data and uses operational intelligence to provide actionable recommendations. In addition, the synthesized function incorporates the experience and learning gained from the environment. The SMA model presented is independent of the application domain and can be applied to different domains, such as Education, Healthcare, and Government. Finally, we present some of the challenges faced by SMA and explain how the model presented in this paper addresses them.
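One rough way to read that recipe in code is as a domain-independent pipeline with an ingestion stage, a synthesis stage, an operational-intelligence stage, and a feedback store; the stage names and interfaces below are my own assumptions, not the authors' specification.

```python
# Hedged sketch of the SMA "recipe" as a domain-independent pipeline; the stage
# names are my reading of the abstract, not the authors' specification.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SMAPipeline:
    ingest: Callable[[str], List[dict]]        # collect raw social media data for a domain
    synthesize: Callable[[List[dict]], dict]   # fuse and summarize the input data
    recommend: Callable[[dict], List[str]]     # operational intelligence -> actionable advice
    experience: List[Dict] = field(default_factory=list)  # learning retained from the environment

    def run(self, domain: str) -> List[str]:
        synthesis = self.synthesize(self.ingest(domain))
        actions = self.recommend(synthesis)
        self.experience.append({"domain": domain, "synthesis": synthesis, "actions": actions})
        return actions
```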
Samir Wadhwa, Roy Dong (2020)
The advent of machine learning tools has led to the rise of data markets. These data markets are characterized by multiple data purchasers interacting with a set of data sources. Data sources have more information about the quality of data than the data purchasers; additionally, data itself is a non-rivalrous good that can be shared with multiple parties at negligible marginal cost. In this paper, we study the multiple-principal, multiple-agent problem with non-rivalrous goods. Under the assumption that the principals' payoffs are quasilinear in the payments given to agents, we show that there is a fundamental degeneracy in the market of non-rivalrous goods. Specifically, for a general class of payment contracts, there will be an infinite set of generalized Nash equilibria. This multiplicity of equilibria also affects common refinements of equilibrium definitions intended to uniquely select an equilibrium: both variational equilibria and normalized equilibria will be non-unique in general. This implies that most existing equilibrium concepts cannot provide predictions on the outcomes of data markets emerging today. The results support the idea that modifications to payment contracts themselves are unlikely to yield a unique equilibrium, and that either changes to the models of study or new equilibrium concepts will be required to determine unique equilibria in settings with multiple principals and a non-rivalrous good.
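A deliberately stripped-down numeric illustration of that degeneracy, under my own simplifying assumptions (one agent, one non-rivalrous dataset with a one-time production cost, payment profiles that exactly cover that cost): every split of the cost between the two principals passes the same equilibrium checks, so the split itself is indeterminate.

```python
# Stripped-down illustration of equilibrium multiplicity with a non-rivalrous good
# (my own toy example with assumed numbers, not the paper's general model). One
# agent can produce a dataset at a one-time cost C and share it with both
# principals at zero marginal cost. For payment profiles (p1, p2) that sum exactly
# to C, any unilateral cut stops production, and paying more only lowers a
# quasilinear payoff -- so every split of C is consistent with equilibrium as long
# as each principal still values the data at least as much as it pays.
C = 1.0              # agent's one-time cost of producing the data (assumed)
V = (2.0, 1.5)       # principals' values for accessing the data (assumed)

def supports_equilibrium(p, eps=1e-9):
    covers_cost = abs(sum(p) - C) <= eps                        # data produced, no slack to cut
    worth_paying = all(V[i] >= p[i] - eps for i in range(2))    # nobody prefers losing the data
    return covers_cost and worth_paying

splits = [(round(0.1 * j, 1), round(C - 0.1 * j, 1)) for j in range(11)]
print([p for p in splits if supports_equilibrium(p)])           # all eleven splits qualify
```

The paper's result is much stronger, showing that the multiplicity persists for a general class of payment contracts and survives refinements such as variational and normalized equilibria; the snippet above is only meant to convey why non-rivalry leaves the division of payments under-determined.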
A population of voters must elect representatives among themselves to decide on a sequence of possibly unforeseen binary issues. Voters care only about the final decision, not the elected representatives. The disutility of a voter is proportional to the fraction of issues on which his preferences disagree with the decision. While an issue-by-issue vote by all voters would maximize social welfare, we are interested in how well the preferences of the population can be approximated by a small committee. We show that a k-sortition (a random committee of k voters with the majority vote within the committee) leads to an outcome within a factor 1+O(1/k) of the optimal social cost for any number of voters n, any number of issues m, and any preference profile. For a small number of issues m, the social cost can be brought even closer to optimal by delegation procedures that weight committee members according to their number of followers. However, for large m, we demonstrate that the k-sortition is the worst-case optimal rule within a broad family of committee-based rules that take into account metric information about the preference profile of the whole population.
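A quick Monte-Carlo sketch of the k-sortition rule on random preference profiles (the preference model, population sizes, and committee size below are arbitrary choices for illustration, not taken from the paper):

```python
# Quick Monte-Carlo sketch of k-sortition (illustrative; the random preference
# model and the sizes below are arbitrary choices, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
n, m, k, trials = 101, 50, 7, 200   # voters, issues, committee size, repetitions

def social_cost(decisions, prefs):
    # Average over voters of the fraction of issues on which they disagree.
    return np.mean(decisions[None, :] != prefs)

ratios = []
for _ in range(trials):
    prefs = rng.integers(0, 2, size=(n, m))                  # binary preference profile
    majority = (prefs.sum(axis=0) * 2 > n).astype(int)       # issue-by-issue vote by everyone
    committee = prefs[rng.choice(n, size=k, replace=False)]  # random committee of k voters
    sortition = (committee.sum(axis=0) * 2 > k).astype(int)  # majority vote within the committee
    ratios.append(social_cost(sortition, prefs) / social_cost(majority, prefs))

print(f"average cost ratio, k-sortition vs issue-by-issue majority: {np.mean(ratios):.3f}")
```

On such random profiles the ratio comes out close to 1, consistent with the 1+O(1/k) bound; the paper's guarantee is worst-case over all preference profiles, which a simulation of this kind cannot establish.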
In this work we consider a stochastic linear-quadratic two-player game. The state measurements are observed through a switched, noiseless communication link. Each player incurs a finite cost every time the link is established to obtain measurements. Along with the usual control action, each player is equipped with a switching action to control the communication link. The measurements improve the state estimate and hence reduce the quadratic cost, but at the same time the cost increases due to switching. We study subgame-perfect equilibrium control and switching strategies for the players. We show that the problem can be solved in a two-step process by solving two dynamic programming problems: the first step solves a dynamic program for the control strategy, and the second step solves another dynamic program for the switching strategy.
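The two-step structure can be illustrated on a drastically simplified single-controller, scalar version of the problem (my own toy with made-up parameters): a backward Riccati recursion gives certainty-equivalent control gains, and the observation ("switching") schedule is then optimized separately, here by brute force over all schedules rather than by the second dynamic program the abstract refers to.

```python
# Drastically simplified single-controller, scalar stand-in for the problem above
# (my own toy; all parameter values are made up). Control uses finite-horizon LQR
# gains applied to the state estimate; the observation schedule is then chosen
# separately, here by brute force instead of the second dynamic program.
import itertools
import numpy as np

rng = np.random.default_rng(1)
A, Q, R, W, T = 1.2, 1.0, 0.5, 0.3, 6   # dynamics, state/control weights, noise var, horizon
SWITCH_COST = 0.8                        # cost paid each time the link is established

# Backward Riccati recursion for the certainty-equivalent LQR gains (B = 1).
P, K = [Q] * (T + 1), [0.0] * T
for t in reversed(range(T)):
    K[t] = A * P[t + 1] / (R + P[t + 1])
    P[t] = Q + A * P[t + 1] * A - A * P[t + 1] * K[t]

def expected_cost(schedule, runs=400):
    # Monte-Carlo estimate of the total cost under a fixed observation schedule.
    total = 0.0
    for _ in range(runs):
        x, xhat, cost = rng.normal(0.0, 1.0), 0.0, 0.0
        for t in range(T):
            if schedule[t]:                      # pay to observe the state exactly
                xhat, cost = x, cost + SWITCH_COST
            u = -K[t] * xhat                     # certainty-equivalent control on the estimate
            cost += Q * x**2 + R * u**2
            x = A * x + u + rng.normal(0.0, np.sqrt(W))
            xhat = A * xhat + u                  # open-loop estimate between observations
        total += cost + Q * x**2                 # add terminal state cost
    return total / runs

best = min(itertools.product((0, 1), repeat=T), key=expected_cost)
print("cheapest observation schedule found:", best)
```

In the paper both players choose switching actions strategically, so the second step is a dynamic program over the players' joint switching decisions rather than a one-player search like the one above.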
Agent-based modeling (ABM) is a powerful paradigm for gaining insight into social phenomena. One area to which ABM has rarely been applied is coalition formation. Traditionally, coalition formation is modeled using cooperative game theory. In this paper, a heuristic algorithm is developed that can be embedded into an ABM to allow the agents to find coalitions. The resulting coalition structures are comparable to those found by cooperative game theory solution approaches, specifically the core. A heuristic approach is required because the computational complexity of finding a cooperative game theory solution limits its application to only about a score of agents. The ABM paradigm provides a platform in which simple rules and interactions between agents can produce a macro-level effect without large computational requirements. As such, it can be an effective means of approximating cooperative game solutions for large numbers of agents. Our heuristic algorithm combines agent-based modeling and cooperative game theory to help find agent partitions that are members of a game's core solution. The accuracy of the heuristic can be determined by comparing its outcomes to the actual core solutions. This comparison is achieved with an experiment that uses a specific example of a cooperative game called the glove game, a type of exchange economy game. Finding the traditional cooperative game theory solutions is computationally intensive for large numbers of players, because each possible partition must be compared to each possible coalition to determine the core set; hence our experiment considers games of only up to nine players. The results indicate that our heuristic approach achieves a core solution over 90% of the time for the games considered in our experiment.
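For reference, checking core membership in the glove game can be done by direct enumeration of coalitions, which is exactly the exponential comparison described above; the snippet below is such a brute-force check on a three-player example (not the authors' ABM heuristic).

```python
# Brute-force core-membership check for the glove game (illustrative; this is the
# exhaustive comparison the abstract mentions, not the authors' ABM heuristic).
# value(S) = min(number of left-glove holders in S, number of right-glove holders in S).
from itertools import combinations

def value(coalition, gloves):
    left = sum(1 for p in coalition if gloves[p] == "L")
    return min(left, len(coalition) - left)

def in_core(payoff, gloves, eps=1e-9):
    """Efficient and unblocked: value(S) <= total payoff of S for every coalition S."""
    players = range(len(gloves))
    if abs(sum(payoff) - value(players, gloves)) > eps:
        return False                       # not efficient for the grand coalition
    for r in range(1, len(gloves) + 1):
        for S in combinations(players, r):
            if value(S, gloves) > sum(payoff[p] for p in S) + eps:
                return False               # coalition S can improve on the payoff
    return True

gloves = ["L", "L", "R"]                       # two left gloves, one right glove (example)
print(in_core([0.0, 0.0, 1.0], gloves))        # True: the scarce right glove captures all value
print(in_core([0.5, 0.0, 0.5], gloves))        # False: the pair {player 1, player 2} is worth 1
```

Enumerating all 2^n coalitions is what confines exact core computation to small games; the ABM heuristic trades this exhaustive check for simple local interaction rules among agents.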
