Background: Shared decision-making (SDM) aims to empower patients to take an active role in their treatment choices, supported by clinicians and patient decision aids (PDAs). The purpose of this study is to explore barriers and possible facilitators to SDM and PDA use in the prostate cancer trajectory. In the process, we identify possible actions that organizations and individuals can take to support implementation in practice. Methods: We used the Ottawa Model of Research Use as a framework to determine the barriers and facilitators to SDM and PDAs from the perspective of clinicians. Semi-structured interviews were conducted with urologists (n=4), radiation oncologists (n=3), and oncology nurses (n=2), focusing on the current decision-making process experienced by these stakeholders. Questions included their attitudes towards SDM and PDAs, barriers to implementation, and possible strategies to overcome them. Results: Time pressure and patient characteristics were cited as major barriers by 55% of the clinicians we interviewed. Structural factors such as external quotas for certain treatment procedures were also considered barriers by 44% of the clinicians. Facilitating factors involved organizational changes to embed PDAs in the treatment trajectory, training in using PDAs as a tool for SDM, and clinician motivation through dissemination of positive clinical outcomes. Our findings also suggest a role for external stakeholders such as healthcare insurers in creating economic incentives to facilitate implementation. Conclusion: Our findings highlight the importance of a multi-faceted implementation strategy to support SDM. While clinician motivation and patient activation are essential, structural and economic barriers may hamper implementation. Action must also be taken at the administrative and policy levels to foster a collaborative environment for SDM and, in the process, for PDAs.
The Inclusive Astronomy (IA) conference series aims to create a safe space where community members can listen to the experiences of marginalized individuals in astronomy, discuss actions being taken to address inequities, and give recommendations to the community for how to improve diversity, equity, and inclusion in astronomy. The first IA was held in Nashville, TN, USA, 17-19 June, 2015. The Inclusive Astronomy 2 (IA2) conference was held in Baltimore, MD, USA, 14-15 October, 2019. The IA2 Local Organizing Committee (LOC) has put together a comprehensive document of recommendations for planning future Inclusive Astronomy conferences based on feedback received and lessons learned. While these are specific to the IA series, many parts will be applicable to other conferences as well. Please find the recommendations and accompanying letter to the community here: https://outerspace.stsci.edu/display/IA2/LOC+Recommendations.
Hierarchical model fitting has become commonplace for case-control studies of cognition and behaviour in mental health. However, these techniques require us to formalise assumptions about the data-generating process at the group level, which may not be known. Specifically, researchers typically must choose whether to assume all subjects are drawn from a common population, or to model them as deriving from separate populations. These assumptions have profound implications for computational psychiatry, as they affect the resulting inference (latent parameter recovery) and may conflate or mask true group-level differences. To test these assumptions, we ran systematic simulations on synthetic multi-group behavioural data from a commonly used multi-armed bandit task (reinforcement learning task). We then examined recovery of group differences in latent parameter space under the two commonly used generative modelling assumptions: (1) modelling groups under a common shared group-level prior (assuming all participants are generated from a common distribution, and are likely to share common characteristics); (2) modelling separate groups based on symptomatology or diagnostic labels, resulting in separate group-level priors. We evaluated the robustness of these approaches to variations in data quality and prior specifications on a variety of metrics. We found that fitting groups separately (assumption 2) provided the most accurate and robust inference across all conditions. Our results suggest that when dealing with data from multiple clinical groups, researchers should analyse patient and control groups separately, as this provides the most accurate and robust recovery of the parameters of interest.
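To make the modelling setup concrete, the Python sketch below simulates two groups of Q-learning agents on a two-armed bandit and recovers each subject's learning rate group by group. It is a minimal illustration, not the authors' pipeline: for brevity it uses per-subject maximum likelihood rather than hierarchical group-level priors, and the group labels, parameter values, and softmax/Q-learning model are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): simulate two groups
# of Q-learning agents on a two-armed bandit, then recover each subject's
# learning rate by per-subject maximum likelihood, analysed group by group.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
N_TRIALS = 200
REWARD_P = np.array([0.7, 0.3])  # assumed arm reward probabilities

def simulate(alpha, beta=3.0):
    """Generate choices and rewards from a Q-learner with learning rate alpha."""
    q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(N_TRIALS):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
        c = rng.choice(2, p=p)
        r = float(rng.random() < REWARD_P[c])
        q[c] += alpha * (r - q[c])                     # delta-rule update
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

def neg_log_lik(alpha, choices, rewards, beta=3.0):
    """Negative log-likelihood of the observed choices under learning rate alpha."""
    q, nll = np.zeros(2), 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        nll -= np.log(p[c])
        q[c] += alpha * (r - q[c])
    return nll

# Two synthetic groups with different mean learning rates (assumed values).
groups = {"control": 0.40, "patient": 0.15}
for label, mean_alpha in groups.items():
    true_alphas = rng.normal(mean_alpha, 0.05, size=20).clip(0.01, 0.99)
    fits = []
    for a in true_alphas:
        ch, rw = simulate(a)
        res = minimize_scalar(neg_log_lik, bounds=(0.01, 0.99),
                              args=(ch, rw), method="bounded")
        fits.append(res.x)
    print(label, "true mean:", true_alphas.mean().round(2),
          "recovered mean:", np.round(np.mean(fits), 2))
```

In a full hierarchical treatment, the per-group analysis above would correspond to placing a separate group-level prior over each group's learning rates (assumption 2), while pooling all subjects before fitting would correspond to a single shared prior (assumption 1).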
The QAnon conspiracy theory claims that a cabal of (literally) blood-thirsty politicians and media personalities are engaged in a war to destroy society. By interpreting cryptic drops of information from an anonymous insider calling themself Q, adherents of the conspiracy theory believe that Donald Trump is leading them in an active fight against this cabal. QAnon has been covered extensively by the media, as its adherents have been involved in multiple violent acts, including the January 6th, 2021 seditious storming of the US Capitol building. Nevertheless, we still have relatively little understanding of how the theory evolved and spread on the Web, and of the role that multiple platforms played in that process. To address this gap, we study QAnon from the perspective of Q themself. We build a dataset of 4,949 canonical Q drops collected from six aggregation sites, which curate and archive them from their original posting to anonymous and ephemeral image boards. We show that these sites have relatively low (overall) agreement, and thus at least some Q drops should probably be considered apocryphal. We then analyze the contents of the Q drops to identify topics of discussion and find statistically significant indications that the drops were not authored by a single individual. Finally, we look at how posts on Reddit were used to disseminate Q drops to wider audiences. We find that dissemination was (initially) limited to a few sub-communities and that, while heavy-handed moderation decisions have reduced the overall issue, the gospel of Q persists on the Web.
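One way to quantify the kind of cross-site agreement described above is pairwise set overlap between the drop collections of different aggregation sites. The Python sketch below illustrates this with Jaccard similarity; the site names and drop sets are placeholders, not the paper's data or method.

```python
# Minimal sketch (hypothetical data, not the paper's dataset): measure
# pairwise agreement between drop aggregation sites as the Jaccard
# overlap of the (normalized) drops each site archives.
from itertools import combinations

# Placeholder collections: site name -> set of normalized drop texts.
sites = {
    "site_a": {"drop1", "drop2", "drop3", "drop4"},
    "site_b": {"drop1", "drop2", "drop5"},
    "site_c": {"drop2", "drop3", "drop6"},
}

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b)

for (n1, s1), (n2, s2) in combinations(sites.items(), 2):
    print(f"{n1} vs {n2}: {jaccard(s1, s2):.2f}")

# Drops archived by only a minority of sites could be flagged as
# candidates for being apocryphal.
```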
Using the concept of principal stratification from the causal inference literature, we introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making. The key idea is that one should not discriminate among individuals who would be similarly affected by the decision. Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision. We propose an axiomatic assumption that all groups are created equal. This assumption is motivated by a belief that protected attributes such as race and gender should have no direct causal effects on potential outcomes. Under this assumption, we show that principal fairness implies all three existing statistical fairness criteria once we account for relevant covariates. This result also highlights the essential role of conditioning covariates in resolving the previously recognized tradeoffs between the existing statistical fairness criteria. Finally, we discuss how to empirically choose conditioning covariates and then evaluate the principal fairness of a particular decision.
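To make the definition concrete, the sketch below audits a decision rule for principal fairness on synthetic data: within each principal stratum (a joint potential-outcome type), the decision rate should not differ across values of the protected attribute. The data, variable names, and decision rule are illustrative assumptions; in real applications only one potential outcome per unit is observed, so strata membership must itself be inferred, whereas this sketch assumes oracle access for clarity.

```python
# Minimal sketch of checking principal fairness on synthetic data.
# Principal strata are defined by the pair of potential outcomes
# (Y(1), Y(0)); principal fairness requires the decision D to be
# independent of the protected attribute A within each stratum.
# All data here are simulated assumptions, with oracle-known strata.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "A":  rng.integers(0, 2, n),   # protected attribute (e.g., group label)
    "y1": rng.integers(0, 2, n),   # potential outcome under D=1
    "y0": rng.integers(0, 2, n),   # potential outcome under D=0
})
df["D"] = rng.integers(0, 2, n)    # an (arbitrary) decision rule to audit

# Principal stratum = joint potential-outcome type, e.g. (1, 0) means
# the individual benefits from the decision.
df["stratum"] = list(zip(df["y1"], df["y0"]))

# Principal fairness holds if P(D=1 | stratum, A) is equal across A
# within every stratum; here the gaps should be near zero up to noise.
rates = df.groupby(["stratum", "A"])["D"].mean().unstack("A")
rates["gap"] = (rates[0] - rates[1]).abs()
print(rates)
```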
The main output of the FORCE11 Software Citation working group (https://www.force11.org/group/software-citation-working-group) was a paper on software citation principles (https://doi.org/10.7717/peerj-cs.86) published in September 2016. This paper laid out a set of six high-level principles for software citation (importance, credit and attribution, unique identification, persistence, accessibility, and specificity) and discussed how they could be used to implement software citation in the scholarly community. In a series of talks and other activities, we have promoted software citation using these increasingly accepted principles. At the time the initial paper was published, we also provided guidance and examples on how to make software citable, though we now realize there are unresolved problems with that guidance. The purpose of this document is to provide an explanation of current issues impacting scholarly attribution of research software, organize updated implementation guidance, and identify where best practices and solutions are still needed.