
The Effect of Moderation on Online Mental Health Conversations

Added by David Wadden
Publication date: 2020
Language: English





Many people struggling with mental health issues are unable to access adequate care due to high costs and a shortage of mental health professionals, leading to a global mental health crisis. Online mental health communities can help mitigate this crisis by offering a scalable, easily accessible alternative to in-person sessions with therapists or support groups. However, people seeking emotional or psychological support online may be especially vulnerable to the kinds of antisocial behavior that sometimes occur in online discussions. Moderation can improve online discourse quality, but we lack an understanding of its effects on online mental health conversations. In this work, we leveraged a natural experiment, occurring across 200,000 messages from 7,000 online mental health conversations, to evaluate the effects of moderation on online mental health discussions. We found that participation in group mental health discussions led to improvements in psychological perspective, and that these improvements were larger in moderated conversations. The presence of a moderator increased user engagement, encouraged users to discuss negative emotions more candidly, and dramatically reduced bad behavior among chat participants. Moderation also encouraged stronger linguistic coordination, which is indicative of trust building. In addition, moderators who remained active in conversations were especially successful in keeping conversations on topic. Our findings suggest that moderation can serve as a valuable tool to improve the efficacy and safety of online mental health conversations. Based on these findings, we discuss implications and trade-offs involved in designing effective online spaces for mental health support.
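The abstract does not define its linguistic coordination measure, but a common operationalization in the computational sociolinguistics literature tracks whether a reply echoes function-word markers from the message it answers. A minimal Python sketch with a toy marker set follows; the function name and marker list are illustrative, not from the paper:

```python
# Sketch: function-word coordination of responder B toward speaker A.
# For each marker m: P(B's reply uses m | A's message used m) - P(B's reply uses m).
FUNCTION_WORDS = {"i", "you", "the", "a", "and", "but", "not", "very"}  # toy set

def coordination(pairs, markers=FUNCTION_WORDS):
    """pairs: list of (msg_a_tokens, reply_b_tokens), lowercase token lists."""
    scores = []
    for m in markers:
        triggered = [m in b for a, b in pairs if m in a]  # B echoes m after A used it
        baseline = [m in b for _, b in pairs]             # B uses m in general
        if triggered:
            scores.append(sum(triggered) / len(triggered)
                          - sum(baseline) / len(baseline))
    return sum(scores) / len(scores) if scores else 0.0   # mean over observed markers
```

Higher values indicate that the responder adapts their style toward the other speaker, the kind of coordination the abstract reports strengthening under moderation.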




Related Research

Online social media provides a channel for monitoring people's social behaviors and mental distress. Due to the restrictions imposed by COVID-19, people are increasingly using online social networks to express their feelings, producing a significant amount of diverse user-generated content. The COVID-19 pandemic has changed the way we live, study, socialize, and recreate, and this has affected our well-being and mental health. A growing body of research leverages online social media analysis to detect and assess users' mental status. In this paper, we survey the literature on social media analysis for mental disorder detection, with a special focus on studies conducted in the context of COVID-19 during 2020-2021. First, we classify the surveyed studies in terms of feature extraction types, ranging from language usage patterns to aesthetic preferences and online behaviors. Second, we explore the detection methods used, including machine learning and deep learning approaches. Finally, we discuss the challenges of mental disorder detection from social media data, including privacy and ethical concerns as well as the technical challenges of scaling and deploying such systems, and summarize the lessons learned over the last few years.
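As an illustration of the kind of supervised pipeline the surveyed detection studies typically use (language-usage features feeding a classifier), here is a minimal sketch assuming scikit-learn; the toy posts and the at-risk/control labeling scheme are hypothetical:

```python
# Minimal sketch of a text-based mental disorder detection pipeline:
# TF-IDF features over user posts feeding a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["i can't sleep and everything feels heavy",
         "great run this morning with friends"]   # toy training posts
labels = [1, 0]                                   # 1 = at-risk, 0 = control (hypothetical)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),          # language-usage-pattern features
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)
print(model.predict(["nothing seems to matter anymore"]))
```

Real systems in the survey replace the toy corpus with large annotated datasets and often swap the linear model for deep neural encoders.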
Online peer-to-peer support platforms enable conversations between millions of people who seek and provide mental health support. If successful, web-based mental health conversations could improve access to treatment and reduce the global disease burden. Psychologists have repeatedly demonstrated that empathy, the ability to understand and feel the emotions and experiences of others, is a key component leading to positive outcomes in supportive conversations. However, recent studies have shown that highly empathic conversations are rare in online mental health platforms. In this paper, we work towards improving empathy in online mental health support conversations. We introduce a new task of empathic rewriting which aims to transform low-empathy conversational posts to higher empathy. Learning such transformations is challenging and requires a deep understanding of empathy while maintaining conversation quality through text fluency and specificity to the conversational context. Here we propose PARTNER, a deep reinforcement learning agent that learns to make sentence-level edits to posts in order to increase the expressed level of empathy while maintaining conversation quality. Our RL agent leverages a policy network, based on a transformer language model adapted from GPT-2, which performs the dual task of generating candidate empathic sentences and adding those sentences at appropriate positions. During training, we reward transformations that increase empathy in posts while maintaining text fluency, context specificity and diversity. Through a combination of automatic and human evaluation, we demonstrate that PARTNER successfully generates more empathic, specific, and diverse responses and outperforms NLP methods from related tasks like style transfer and empathic dialogue generation. Our work has direct implications for facilitating empathic conversations on web-based platforms.
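The abstract describes a reward that trades off empathy gain against fluency and context specificity, but the exact formulation is not given; the following Python sketch uses hypothetical scorer functions as stand-ins for the learned models involved:

```python
# Schematic reward for sentence-level empathic rewriting, in the spirit of the
# abstract. empathy_score, fluency_score, and specificity_score are hypothetical
# stand-ins for learned models (e.g., an empathy classifier, a language-model
# fluency score, and a context-similarity model); the weights are illustrative.
def rewrite_reward(original, rewritten, context,
                   empathy_score, fluency_score, specificity_score,
                   w_emp=1.0, w_flu=0.5, w_spec=0.5):
    empathy_gain = empathy_score(rewritten) - empathy_score(original)
    return (w_emp * empathy_gain
            + w_flu * fluency_score(rewritten)
            + w_spec * specificity_score(rewritten, context))
```

During training, the agent's sentence-level edits would be reinforced in proportion to this kind of combined signal.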
Mental health challenges are thought to afflict around 10% of the global population each year, with many going untreated due to stigma and limited access to services. Here, we explore trends in words and phrases related to mental health through a collection of 1-, 2-, and 3-grams parsed from a data stream of roughly 10% of all English tweets since 2012. We examine temporal dynamics of mental health language, finding that the popularity of the phrase "mental health" increased by nearly two orders of magnitude between 2012 and 2018. We observe that mentions of mental health spike annually and reliably due to mental health awareness campaigns, as well as unpredictably in response to mass shootings, celebrities dying by suicide, and popular fictional stories portraying suicide. We find that the level of positivity of messages containing "mental health", while stable through the growth period, has declined recently. Finally, we use the ratio of original tweets to retweets to quantify the fraction of appearances of mental health language due to social amplification. Since 2015, mentions of mental health have become increasingly due to retweets, suggesting that stigma associated with discussion of mental health on Twitter has diminished with time.
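One plausible reading of the amplification measure is the fraction of all mentions that arrive as retweets rather than original tweets; a short Python sketch with hypothetical daily counts:

```python
# Sketch: fraction of "mental health" mentions attributable to social
# amplification, given counts of original tweets vs. retweets (hypothetical).
def amplification_fraction(n_original, n_retweets):
    total = n_original + n_retweets
    return n_retweets / total if total else 0.0

print(amplification_fraction(n_original=1200, n_retweets=1800))  # -> 0.6
```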
Citizen-generated counter speech is a promising way to fight hate speech and promote peaceful, non-polarized discourse. However, there is a lack of large-scale longitudinal studies of its effectiveness in reducing hate speech. To this end, we perform an exploratory analysis of the effectiveness of counter speech using several different macro- and micro-level measures to analyze 180,000 political conversations that took place on German Twitter over four years. We report on the dynamic interactions of hate and counter speech over time and provide insights into whether, as in "classic" bullying situations, organized efforts are more effective than independent individuals in steering online discourse. Taken together, our results build a multifaceted picture of the dynamics of hate and counter speech online. While we make no causal claims due to the complexity of discourse dynamics, our findings suggest that organized hate speech is associated with changes in public discourse and that counter speech, especially when organized, may help curb hateful rhetoric in online discourse.
Mental illness is a global health problem, but access to mental healthcare resources remains poor worldwide. Online peer-to-peer support platforms attempt to alleviate this fundamental gap by enabling those who struggle with mental illness to provide and receive social support from their peers. However, successful social support requires users to engage with each other, and failures may have serious consequences for users in need. Our understanding of engagement patterns on mental health platforms is limited but critical for informing the role, limitations, and design of these platforms. Here, we present a large-scale analysis of engagement patterns of 35 million posts on two popular online mental health platforms, TalkLife and Reddit. Leveraging communication models from human-computer interaction and communication theory, we operationalize a set of four engagement indicators based on attention and interaction. We then propose a generative model to jointly model these indicators of engagement, the output of which is synthesized into a novel set of eleven distinct, interpretable patterns. We demonstrate that this framework of engagement patterns enables informative evaluation and analysis of online support platforms. Specifically, we find that mutual back-and-forth interactions are associated with significantly higher user retention rates on TalkLife. Such back-and-forth interactions, in turn, are associated with early response times and the sentiment of posts.
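The abstract names four attention- and interaction-based indicators without defining them; the Python sketch below shows one hypothetical operationalization over a support thread, purely for illustration:

```python
# Hypothetical engagement indicators for a support thread; these definitions
# are illustrative stand-ins, not the paper's actual operationalization.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    seconds_since_thread_start: float

def engagement_indicators(thread):
    """thread: chronologically ordered list of Post; thread[0] is the seeker."""
    seeker, replies = thread[0].author, thread[1:]
    turns = sum(1 for a, b in zip(thread, thread[1:]) if a.author != b.author)
    return {
        "n_replies": len(replies),                                   # attention
        "n_supporters": len({p.author for p in replies} - {seeker}), # attention
        "speaker_turns": turns,                                      # back-and-forth
        "first_response_latency": (replies[0].seconds_since_thread_start
                                   if replies else None),            # responsiveness
    }
```

Indicators like these could then feed the kind of generative model the paper uses to cluster threads into interpretable engagement patterns.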