This White Paper summarizes the authors' discussion of objectionable content with the University of Houston (UH) Research Team, and outlines a strategy for building an extensive repository of online videos to support research into automated multimodal approaches to detecting objectionable content. The workshop focused on defining what harmful content is, to whom it is harmful, and why it is harmful.
Algorithmic fairness research has traditionally been linked to the disciplines of philosophy, ethics, and economics, where notions of fairness are prescriptive and seek objectivity. Increasingly, however, scholars are turning to the study of what different people perceive to be fair, and how these perceptions can or should help to shape the design of machine learning, particularly in the policy realm. The present work experimentally explores five novel research questions at the intersection of the Who, What, and How of fairness perceptions. Specifically, we present the results of a multi-factor conjoint analysis study that quantifies the effects of the specific context in which a question is asked, the framing of the given question, and who is answering it. Our results broadly suggest that the Who and What, at least, matter in ways that 1) are not easily explained by any one theoretical perspective and 2) have critical implications for how perceptions of fairness should be measured and/or integrated into algorithmic decision-making systems.
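As a rough illustration of how a conjoint design like this can be analyzed, the sketch below fits a simple dummy-coded linear model to hypothetical responses. The attribute names (context, framing, respondent group), the 1-7 rating scale, and the data are assumptions for illustration only, not the study's actual instrument or analysis.

```python
# Illustrative sketch only: estimating main effects from a conjoint-style
# fairness-perception survey. Column names and the 1-7 rating scale are
# hypothetical stand-ins, not the instrument used in the study.
import pandas as pd
import statsmodels.formula.api as smf

# Each row is one respondent's rating of one randomized scenario profile.
df = pd.DataFrame({
    "context":    ["bail", "lending", "hiring", "bail", "lending", "hiring"] * 20,
    "framing":    ["individual", "group"] * 60,
    "respondent": ["affected", "unaffected"] * 60,
    "fairness":   pd.Series(range(120)).mod(7) + 1,  # placeholder 1-7 ratings
})

# Dummy-coded OLS estimates the average effect of each attribute level
# relative to a baseline level (the "What", "How", and "Who" factors).
model = smf.ols("fairness ~ C(context) + C(framing) + C(respondent)", data=df).fit()
print(model.params)
```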
The proliferation of harmful content on online social media platforms has necessitated empirical understandings of experiences of harm online and the development of practices for harm mitigation. Both understandings of harm and approaches to mitigating that harm, often through content moderation, implicitly embed frameworks of prioritization: what forms of harm should be researched, how policy on harmful content should be implemented, and how harmful content should be moderated. To aid efforts to better understand the variety of online harms, how they relate to one another, and how to prioritize harms relevant to research, policy, and practice, we present a theoretical framework of severity for harmful online content. Employing a grounded theory approach, we developed a framework of severity based on interviews and card-sorting activities conducted with 52 participants over the course of ten months. Through our analysis, we identified four Types of Harm (physical, emotional, relational, and financial) and eight Dimensions along which the severity of harm can be understood (perspectives, intent, agency, experience, scale, urgency, vulnerability, sphere). We describe how our framework can be applied in both research and policy settings toward deeper understandings of specific forms of harm (e.g., harassment) and prioritization frameworks when implementing policies encompassing many forms of harm.
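For readers who want to apply the framework in annotation or moderation tooling, one purely illustrative way to encode its categories is sketched below; the class and field names are hypothetical, and only the Type and Dimension labels themselves come from the framework.

```python
# Illustrative sketch: encoding the framework's Types of Harm and severity
# Dimensions as enumerations for annotation tooling. Class/field names are
# hypothetical; only the category labels come from the framework itself.
from dataclasses import dataclass
from enum import Enum

class HarmType(Enum):
    PHYSICAL = "physical"
    EMOTIONAL = "emotional"
    RELATIONAL = "relational"
    FINANCIAL = "financial"

class Dimension(Enum):
    PERSPECTIVES = "perspectives"
    INTENT = "intent"
    AGENCY = "agency"
    EXPERIENCE = "experience"
    SCALE = "scale"
    URGENCY = "urgency"
    VULNERABILITY = "vulnerability"
    SPHERE = "sphere"

@dataclass
class SeverityAssessment:
    """One annotator's severity judgment for a piece of content."""
    harm_types: set[HarmType]
    dimension_notes: dict[Dimension, str]

example = SeverityAssessment(
    harm_types={HarmType.EMOTIONAL, HarmType.RELATIONAL},
    dimension_notes={Dimension.URGENCY: "ongoing harassment campaign"},
)
```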
We present a method for accurately predicting the long-term popularity of online content from early measurements of user access. Using two content sharing portals, YouTube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of YouTube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while YouTube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content are prone to larger errors.
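The core idea, extrapolating long-term popularity from early counts, can be sketched as a log-linear fit. The synthetic data and the specific log-log regression form below are assumptions consistent with the abstract, not the paper's exact model.

```python
# Illustrative sketch: predicting long-term popularity from early measurements
# via a log-linear fit, in the spirit of the approach described above.
# The synthetic counts and the regression form are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Early counts (e.g., first 2 hours of Digg votes) and eventual counts
# (e.g., 30 days later); synthetic, correlated on a log scale.
early = rng.lognormal(mean=3.0, sigma=1.0, size=500)
final = early * rng.lognormal(mean=2.0, sigma=0.5, size=500)

# Fit log(final) ~ a * log(early) + b on a "training" half of the items.
a, b = np.polyfit(np.log(early[:250]), np.log(final[:250]), deg=1)

# Forecast the held-out items from their early counts alone.
predicted = np.exp(a * np.log(early[250:]) + b)
rel_error = np.abs(predicted - final[250:]) / final[250:]
print(f"median relative error: {np.median(rel_error):.2f}")
```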
The closest galaxy center, our own Central Molecular Zone (CMZ; the central 500 pc of the Milky Way), is a powerful laboratory for studying the secular processes that shape galaxies across cosmic time, from large-scale gas flows and star formation to stellar feedback and interaction with a central supermassive black hole. Research over the last decade has revealed that the process of converting gas into stars in galaxy centers differs from that in galaxy disks. The CMZ is the only galaxy center in which we can identify and weigh individual forming stars, so it is the key location to establish the physical laws governing star formation and feedback under the conditions that dominate star formation across cosmic history. Large-scale surveys of molecular and atomic gas within the inner kiloparsec of the Milky Way (~10 degrees) will require efficient mapping capabilities on single-dish radio telescopes. Characterizing the detailed star formation process will require large-scale, high-resolution surveys of the protostellar populations and small-scale gas structure with dedicated surveys on the Atacama Large Millimeter/submillimeter Array, and eventually with the James Webb Space Telescope, the Next Generation Very Large Array, and the Origins Space Telescope.
The axion has emerged in recent years as a leading particle candidate to provide the mysterious dark matter in the cosmos, as we review here for a general scientific audience. We describe first the historical roots of the axion in the Standard Model of particle physics and the problem of charge-parity invariance of the strong nuclear force. We then discuss how the axion emerges as a dark matter candidate, and how it is produced in the early Universe. The symmetry properties of the axion dictate the form of its interactions with ordinary matter. Astrophysical considerations restrict the particle mass and interaction strengths to a limited range, which facilitates the planning of experiments to detect the axion. A companion review discusses the exciting prospect that the axion could indeed be detected in the near term in the laboratory.