
A Pluralist Approach to Democratizing Online Discourse

Added by Jay Chen
Publication date: 2021
Language: English





Online discourse takes place in corporate-controlled spaces that users treat as public realms. These platforms nominally enable free speech but in practice implement varying degrees of censorship, whether by government edict or by uneven and unseen corporate policy. This kind of censorship has no countervailing accountability mechanism, and as such platform owners, moderators, and algorithms shape public discourse without recourse or transparency. Systems research has explored approaches to decentralizing or democratizing Internet infrastructure for decades. In parallel, the Internet censorship literature is replete with efforts to measure and overcome online censorship. However, in the course of designing specialized open-source platforms and tools, projects generally neglect the needs of supportive but uninvolved "average users." In this paper, we propose a pluralistic approach to democratizing online discourse that treats both the systems-related and the user-facing issues as first-order design goals.





A debate in the research community has buzzed in the background for years: should large-scale Internet services be centralized or decentralized? Now-common centralized cloud and web services have downsides -- user lock-in and loss of privacy and data control -- that are increasingly apparent. However, their decentralized counterparts have struggled to gain adoption, suffer from their own problems of scalability and trust, and eventually may result in the exact same lock-in they intended to prevent. In this paper, we explore the design of a pluralist cloud architecture, Stencil, one that can serve as a narrow waist for user-facing services such as social media. We aim to enable pluralism via a unifying set of abstractions that support migration from one service to a competing service. We find that migrating linked data introduces many challenges in both source and destination services as links are severed. We show how Stencil enables correct and efficient data migration between services, how it supports the deployment of new services, and how Stencil could be incrementally deployed.
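As a rough illustration of the migration problem described above, the sketch below shows linked data moving between two services through a shared abstraction, with links to items left behind being severed. The class and function names here are assumptions for illustration, not Stencil's actual API.

# Hypothetical sketch of migrating linked data between two services through a
# common "narrow waist" abstraction; names are illustrative, not Stencil's API.
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    body: str
    links: list = field(default_factory=list)  # ids of items this item references

class Service:
    def __init__(self, name):
        self.name = name
        self.items = {}

    def export_items(self):
        return list(self.items.values())

    def import_item(self, item):
        self.items[item.item_id] = item

def migrate(source: Service, dest: Service):
    """Copy every item to the destination, rewriting links; links to items
    that do not move are severed and must be handled explicitly."""
    moved = {i.item_id for i in source.export_items()}
    for item in source.export_items():
        kept = [l for l in item.links if l in moved]
        severed = set(item.links) - set(kept)
        if severed:
            # A real system must decide how severed links are represented
            # (placeholders, tombstones, or back-references to the source).
            print(f"{item.item_id}: severing links to {sorted(severed)}")
        dest.import_item(Item(item.item_id, item.body, kept))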
There is an extensive literature on online controlled experiments, covering the statistical methods available to analyze experiment results, the infrastructure built by several large-scale Internet companies, and the organizational challenges of embracing online experiments to inform product development. At Booking.com we have been conducting evidence-based product development using online experiments for more than ten years. Our methods and infrastructure were designed from their inception to reflect Booking.com culture, that is, with democratization and decentralization of experimentation and decision making in mind. In this paper we explain how an organization as large as Booking.com has truly and successfully democratized experimentation: by building a central repository of successes and failures to allow for knowledge sharing, by providing a generic and extensible code library that enforces a loose coupling between experimentation and business logic, by closely and transparently monitoring the quality and reliability of the data-gathering pipelines to build trust in the experimentation infrastructure, and by putting in place safeguards that enable anyone to have end-to-end ownership of their experiments.
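The abstract above mentions a code library that keeps experimentation loosely coupled from business logic. A minimal sketch of that idea, assuming deterministic hash-based variant assignment (the names and experiment are invented for illustration, not Booking.com's actual library):

# Business code asks only "which variant am I in?"; assignment, logging, and
# safeguards live in the experimentation layer. Illustrative names only.
import hashlib

class Experiment:
    def __init__(self, name, variants=("control", "treatment")):
        self.name = name
        self.variants = variants

    def variant_for(self, user_id: str) -> str:
        # Deterministic hashing so the same user always sees the same variant.
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode()).hexdigest()
        return self.variants[int(digest, 16) % len(self.variants)]

# Hypothetical business logic that only branches on the returned variant name.
ranking_experiment = Experiment("hotel-ranking-v2")

def rank_results(user_id, results):
    if ranking_experiment.variant_for(user_id) == "treatment":
        return sorted(results, key=lambda r: r["review_score"], reverse=True)
    return sorted(results, key=lambda r: r["price"])

Keeping the assignment logic behind a single call like variant_for is what lets experiments be added or removed without touching the surrounding business code.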
Anna Zeng, Will Crichton (2019)
Rust is a low-level programming language known for its unique approach to memory-safe systems programming and for its steep learning curve. To understand what makes Rust difficult to adopt, we surveyed the top Reddit and Hacker News posts and comments about Rust; from these online discussions, we identified three hypotheses about Rust's barriers to adoption. We found that certain key features, idioms, and integration patterns were not easily accessible to new users.
Songyang Zhang (2020)
Recently, much effort has been devoted by researchers from both academia and industry to developing novel congestion control methods. This letter presents LearningCC, in which the congestion control problem is solved with a reinforcement learning approach. Instead of adjusting the congestion window with a fixed policy, the endpoint has several options to choose from, and predicting the best option is a hard task. Each option is mapped to an arm of a bandit machine, and the endpoint learns the optimal choice through trial and error. Experiments are performed on the ns3 platform to verify the effectiveness of LearningCC against other benchmark algorithms. Results indicate that it achieves lower transmission delay than loss-based algorithms. In particular, LearningCC shows significant improvement on links suffering from random loss.
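To make the bandit framing concrete, the sketch below treats each congestion-window adjustment as an arm and learns by trial and error with an epsilon-greedy rule. The action set, reward, and learner here are assumptions for illustration, not LearningCC's actual algorithm.

# Each congestion-window action is an arm of a bandit; the sender learns which
# arm to pull from observed rewards. Illustrative epsilon-greedy learner only.
import random

ACTIONS = [0.5, 1.0, 1.25, 1.5]   # assumed multiplicative adjustments to cwnd

class BanditCC:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * len(ACTIONS)
        self.values = [0.0] * len(ACTIONS)   # running average reward per arm

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(len(ACTIONS))                          # explore
        return max(range(len(ACTIONS)), key=lambda i: self.values[i])      # exploit

    def update(self, arm, reward):
        # Incremental mean; the reward could be throughput minus a delay penalty.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Each RTT: pick an arm, scale the congestion window, feed back the observed reward.
cc, cwnd = BanditCC(), 10.0
arm = cc.choose()
cwnd *= ACTIONS[arm]
cc.update(arm, reward=0.8)   # placeholder reward from measured throughput/delay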
The enormous amount of discourse taking place online poses challenges to the functioning of a civil and informed public sphere. Efforts to standardize online discourse data, such as ClaimReview, are making available a wealth of new data about potentially inaccurate claims, reviewed by third-party fact-checkers. These data could help shed light on the nature of online discourse, the role of political elites in amplifying it, and its implications for the integrity of the online information ecosystem. Unfortunately, the semi-structured nature of much of this data presents significant challenges when it comes to modeling and reasoning about online discourse. A key challenge is relation extraction, the task of determining the semantic relationships between named entities in a claim. Here we develop a novel supervised learning method for relation extraction that combines graph embedding techniques with path traversal on semantic dependency graphs. Our approach is based on the intuitive observation that knowledge of the entities along the path between the subject and object of a triple (e.g. Washington,_D.C. and United_States_of_America) provides useful information that can be leveraged for extracting its semantic relation (i.e. capitalOf). As an example of a potential application of this technique for modeling online discourse, we show that our method can be integrated into a pipeline to reason about potential misinformation claims.
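A minimal sketch of the path-based intuition described above: walk the dependency graph between the subject and object entities and use the nodes and edge labels along that path as features for a relation classifier. The toy graph and function below are assumptions for illustration, not the paper's actual pipeline.

# Path traversal between two entities on a toy semantic dependency graph for
# "Washington, D.C. is the capital of the United States."
import networkx as nx

g = nx.Graph()
g.add_edge("Washington,_D.C.", "capital", label="nsubj")
g.add_edge("capital", "United_States_of_America", label="nmod:of")

def path_features(graph, subject, obj):
    """Return the node sequence and edge labels on the path between two entities."""
    nodes = nx.shortest_path(graph, subject, obj)
    edges = [graph[u][v]["label"] for u, v in zip(nodes, nodes[1:])]
    return nodes, edges

nodes, edges = path_features(g, "Washington,_D.C.", "United_States_of_America")
# nodes: ['Washington,_D.C.', 'capital', 'United_States_of_America']
# edges: ['nsubj', 'nmod:of']
# A classifier over embeddings of these path elements would map the path to the
# relation label capitalOf.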