
Exploring Perceptions of Veganism

Publication date: 2019
Language: English

This project examined perceptions of the vegan lifestyle using surveys and social media to explore barriers to choosing veganism. A survey of 510 individuals indicated that non-vegans rated veganism as less healthy and less difficult than vegans did. In a second analysis, Instagram posts using #vegan suggested that content is aimed primarily at the female vegan community. Finally, sentiment analysis of roughly 5 million Twitter posts mentioning vegan found that veganism was portrayed more positively than other topics. The results suggest that non-vegans' lack of interest in veganism is driven by skepticism about the health benefits of the diet.
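As an illustration of the kind of tweet-level sentiment scoring the abstract describes, the sketch below scores a few example posts with NLTK's VADER analyzer and averages the compound scores; the library choice and the example tweets are assumptions for illustration, not the project's actual pipeline.

```python
# Illustrative sketch only: the project's actual sentiment pipeline is not
# specified. Assumes NLTK's VADER lexicon as a stand-in sentiment model.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

tweets = [
    "Trying a vegan recipe tonight, so excited!",
    "Vegan food is overpriced and bland.",
    "Went vegan last month and feel great.",
]

analyzer = SentimentIntensityAnalyzer()
scores = [analyzer.polarity_scores(t)["compound"] for t in tweets]

# Compound scores range from -1 (most negative) to +1 (most positive);
# averaging them gives a crude topic-level sentiment that can be compared
# across topics.
print(f"mean compound sentiment: {sum(scores) / len(scores):.3f}")
```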




Related research

Risk-limiting audits (RLAs) are expected to strengthen public confidence in the correctness of an election outcome. We hypothesize that this is not always the case, in part because for large margins between the winner and the runner-up, the number of ballots to be drawn can be so small that voters lose confidence. We conduct a user study with 105 participants residing in the US. Our findings confirm the hypothesis, showing that our study participants felt less confident when they were told the number of ballots audited in an RLA. We elaborate on our findings and propose recommendations for the future use of RLAs.
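To see why large margins can make the audited sample look surprisingly small, here is a rough back-of-the-envelope sketch for a two-candidate ballot-polling (BRAVO-style) audit; the approximation and the 5% risk limit are assumptions for illustration, not the audit design examined in the study.

```python
# Back-of-the-envelope sketch, not the study's protocol: approximates the
# expected sample size of a two-candidate ballot-polling (BRAVO-style) RLA
# from the average log-growth of the sequential likelihood ratio.
import math

def approx_bravo_sample_size(winner_share: float, risk_limit: float = 0.05) -> float:
    """Rough expected number of ballots to draw; requires winner_share > 0.5."""
    p = winner_share
    mean_log_step = p * math.log(2 * p) + (1 - p) * math.log(2 * (1 - p))
    return math.log(1 / risk_limit) / mean_log_step

# Larger margins shrink the expected sample dramatically, which is the
# counter-intuitive effect the study probes.
for share in (0.52, 0.55, 0.60, 0.70):
    print(f"winner share {share:.2f}: ~{approx_bravo_sample_size(share):.0f} ballots")
```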
Children are increasingly using the internet. While internet use exposes children to various privacy and security risks, few studies have examined how parents perceive and address their children's cybersecurity risks. To address this gap, we conducted a qualitative study with 25 parents living in Norway with children aged between 10 and 15. We conducted semi-structured interviews with the parents and performed a thematic analysis of the interview data. The results of this paper include a list of cybersecurity awareness needs for children from a parental perspective, a list of learning resources for children, and a list of challenges for parents in ensuring cybersecurity at home. Our results are useful for developers and educators developing cybersecurity solutions for children. Future research should focus on defining cybersecurity theories and practices that contribute to children's and parents' awareness of cybersecurity risks, needs, and solutions.
How popular a topic or an opinion appears to be in a network can be very different from its actual popularity. For example, in an online network of a social media platform, the number of people who mention a topic in their posts---i.e., its global popularity---can be dramatically different from how people see it in their social feeds---i.e., its perceived popularity---where the feeds aggregate their friends' posts. We trace the origin of this discrepancy to the friendship paradox in directed networks, which states that people are less popular than their friends (or followers) are, on average. We identify conditions on network structure that give rise to this perception bias and validate the findings empirically using data from Twitter. Within messages posted by Twitter users in our sample, we identify topics that appear more frequently within the users' social feeds than they do globally, i.e., among all posts. In addition, we present a polling algorithm that leverages the friendship paradox to obtain a statistically efficient estimate of a topic's global prevalence from the biased perceptions of individuals. We characterize the bias of the polling estimate, provide an upper bound for its variance, and validate the algorithm's efficiency through synthetic polling experiments on our Twitter data. Our paper elucidates the non-intuitive ways in which the structure of directed networks can distort social perceptions and the resulting behaviors.
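The toy sketch below illustrates the perception gap the abstract describes: on a made-up follower graph where heavily-followed accounts post about a topic more often, the topic's average prevalence in feeds far exceeds its global prevalence. The graph, follow weights, and posting rates are invented for illustration and are not the paper's data or its polling algorithm.

```python
# Toy illustration of perceived vs. global topic popularity under the
# friendship paradox; all numbers here are made up for demonstration.
import random

random.seed(0)
n = 1000
followees = list(range(n))
# Directed follows (follower -> followee): a few hub accounts are heavily followed.
weights = [50 if i < 20 else 1 for i in followees]
follows = {u: random.choices(followees, weights=weights, k=10) for u in range(n)}

# Suppose hubs post about the topic far more often than typical users.
posts_topic = {u: (random.random() < (0.8 if u < 20 else 0.05)) for u in range(n)}

global_share = sum(posts_topic.values()) / n
perceived = [
    sum(posts_topic[v] for v in follows[u]) / len(follows[u]) for u in range(n)
]
perceived_share = sum(perceived) / n

print(f"global prevalence:  {global_share:.3f}")
print(f"perceived in feeds: {perceived_share:.3f}")  # typically much larger
```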
Many policies allocate harms or benefits that are uncertain in nature: they produce distributions over the population in which individuals have different probabilities of incurring harm or benefit. Comparing different policies thus involves a comparison of their corresponding probability distributions, and we observe that in many instances the policies selected in practice are hard to explain by preferences based only on the expected value of the total harm or benefit they produce. In cases where the expected value analysis is not a sufficient explanatory framework, what would be a reasonable model for societal preferences over these distributions? Here we investigate explanations based on the framework of probability weighting from the behavioral sciences, which over several decades has identified systematic biases in how people perceive probabilities. We show that probability weighting can be used to make predictions about preferences over probabilistic distributions of harm and benefit that function quite differently from expected-value analysis, and in a number of cases provide potential explanations for policy preferences that appear hard to motivate by other means. In particular, we identify optimal policies for minimizing perceived total harm and maximizing perceived total benefit that take the distorting effects of probability weighting into account, and we discuss a number of real-world policies that resemble such allocational strategies. Our analysis does not provide specific recommendations for policy choices, but is instead fundamentally interpretive in nature, seeking to describe observed phenomena in policy choices.
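A small sketch of the probability-weighting idea follows: using the Tversky-Kahneman (1992) inverse-S weighting function (an assumed functional form and parameter, not necessarily the paper's model), two policies with identical expected harm can differ sharply in perceived harm.

```python
# Illustrative sketch of probability weighting; gamma and the functional form
# are assumptions borrowed from cumulative prospect theory, not the paper's
# exact model or parameters.
def weight(p: float, gamma: float = 0.69) -> float:
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

# Policy A: 100 people each face a 1% chance of harm.
# Policy B: 1 person faces harm with certainty.
expected_a = 100 * 0.01          # = 1.0
expected_b = 1 * 1.0             # = 1.0  (equal in expected value)

perceived_a = 100 * weight(0.01)
perceived_b = 1 * weight(1.0)

print(f"expected harm:  A={expected_a:.2f}  B={expected_b:.2f}")
print(f"perceived harm: A={perceived_a:.2f}  B={perceived_b:.2f}")
# Under weighting, the many-small-risks policy looks substantially worse,
# a preference that expected-value analysis alone cannot capture.
```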
Algorithmic fairness research has traditionally been linked to the disciplines of philosophy, ethics, and economics, where notions of fairness are prescriptive and seek objectivity. Increasingly, however, scholars are turning to the study of what different people perceive to be fair, and how these perceptions can or should help to shape the design of machine learning, particularly in the policy realm. The present work experimentally explores five novel research questions at the intersection of the Who, What, and How of fairness perceptions. Specifically, we present the results of a multi-factor conjoint analysis study that quantifies the effects of the specific context in which a question is asked, the framing of the given question, and who is answering it. Our results broadly suggest that the Who and What, at least, matter in ways that 1) are not easily explained by any one theoretical perspective and 2) have critical implications for how perceptions of fairness should be measured and/or integrated into algorithmic decision-making systems.
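As a rough sketch of how such factor effects might be quantified, the code below fits a main-effects regression on synthetic fairness ratings, analogous in spirit to estimating part-worths in a conjoint analysis; the factors, levels, and effect sizes are placeholders, not the study's design or results.

```python
# Minimal sketch of a main-effects model over invented factors; not the
# study's conjoint design, attributes, or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "context": rng.choice(["bail", "lending"], size=n),
    "framing": rng.choice(["neutral", "loaded"], size=n),
    "respondent": rng.choice(["group_a", "group_b"], size=n),
})
# Synthetic fairness ratings with made-up effects for context and respondent.
df["rating"] = (
    3.0
    + 0.6 * (df["context"] == "bail")
    + 0.4 * (df["respondent"] == "group_b")
    + rng.normal(0, 0.5, size=n)
)

# Each coefficient estimates how much a factor level shifts the average rating.
model = smf.ols("rating ~ C(context) + C(framing) + C(respondent)", data=df).fit()
print(model.params)
```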