
Causal Support: Modeling Causal Inferences with Visualizations

Added by Alex Kale
Publication date: 2021
Language: English





Analysts often make visual causal inferences about possible data-generating models. However, visual analytics (VA) software tends to leave these models implicit in the mind of the analyst, which casts doubt on the statistical validity of informal visual insights. We formally evaluate the quality of causal inferences from visualizations by adopting causal support, a Bayesian cognition model that learns the probability of alternative causal explanations given some data, as a normative benchmark for causal inferences. We contribute two experiments assessing how well crowdworkers can detect (1) a treatment effect and (2) a confounding relationship. We find that chart users' causal inferences tend to be insensitive to sample size, such that they deviate from our normative benchmark. While interactively cross-filtering data in visualizations can improve sensitivity, on average users do not perform reliably better with common visualizations than they do with textual contingency tables. These experiments demonstrate the utility of causal support as an evaluation framework for inferences in VA and point to opportunities to make analysts' mental models more explicit in VA software.
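To make the normative benchmark concrete, the following is a minimal sketch of causal support for the treatment-effect task, assuming a 2x2 treatment/outcome table, binomial likelihoods, and uniform Beta(1, 1) priors; the function names are illustrative, not the paper's implementation.

```python
# Minimal sketch of causal support as Bayesian model comparison on a
# 2x2 treatment/outcome table. Assumes binomial likelihoods with
# uniform Beta(1, 1) priors; not the authors' implementation.
from scipy.special import betaln, gammaln

def log_binom(n, k):
    # log of the binomial coefficient C(n, k)
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def causal_support(k_treat, n_treat, k_ctrl, n_ctrl):
    """log P(data | effect) - log P(data | no effect)."""
    # M0 (no effect): one shared rate p ~ Beta(1, 1) for both groups.
    log_m0 = (log_binom(n_treat, k_treat) + log_binom(n_ctrl, k_ctrl)
              + betaln(k_treat + k_ctrl + 1,
                       (n_treat - k_treat) + (n_ctrl - k_ctrl) + 1))
    # M1 (effect): an independent rate per group, each ~ Beta(1, 1).
    log_m1 = (log_binom(n_treat, k_treat)
              + betaln(k_treat + 1, n_treat - k_treat + 1)
              + log_binom(n_ctrl, k_ctrl)
              + betaln(k_ctrl + 1, n_ctrl - k_ctrl + 1))
    return log_m1 - log_m0

# The same proportions carry stronger evidence at larger sample sizes:
# exactly the sensitivity the experiments test chart users against.
print(causal_support(8, 10, 4, 10))      # ~0.8: weak evidence
print(causal_support(80, 100, 40, 100))  # much larger: strong evidence
```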




Read More

One widely used approach to understanding the inner workings of deep convolutional neural networks is to visualize unit responses via activation maximization. Feature visualizations via activation maximization are thought to provide humans with precise information about the image features that cause a unit to be activated. If this is indeed true, these synthetic images should enable humans to predict the effect of an intervention, such as whether occluding a certain patch of the image (say, a dog's head) changes a unit's activation. Here, we test this hypothesis by asking humans to predict which of two square occlusions causes a larger change to a unit's activation. Both a large-scale crowdsourced experiment and measurements with experts show that on average, the extremely activating feature visualizations by Olah et al. (2017) indeed help humans on this task ($67 \pm 4\%$ accuracy; baseline performance without any visualizations is $60 \pm 3\%$). However, they do not provide any significant advantage over other visualizations (such as dataset samples), which yield similar performance ($66 \pm 3\%$ to $67 \pm 3\%$ accuracy). Taken together, we propose an objective psychophysical task to quantify the benefit of unit-level interpretability methods for humans, and find no evidence that feature visualizations provide humans with better causal understanding than simple alternative visualizations.
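As a concrete illustration of the occlusion task, the sketch below compares a unit's activation change under two candidate patches; the network, layer, unit, and image are placeholders, not the study's stimuli.

```python
# Hedged sketch of the two-occlusion comparison; model, layer, unit,
# and image are stand-ins rather than the paper's setup.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()  # placeholder CNN
activation = {}
model.inception4a.register_forward_hook(  # placeholder layer
    lambda mod, inp, out: activation.update(feat=out))

def unit_response(img, unit=0):
    # Mean activation of one channel for a single preprocessed image.
    with torch.no_grad():
        model(img.unsqueeze(0))
    return activation["feat"][0, unit].mean().item()

def occlude(img, y, x, size=56):
    # Gray out a square patch: the "intervention" participants judge.
    out = img.clone()
    out[:, y:y + size, x:x + size] = img.mean()
    return out

img = torch.rand(3, 224, 224)  # stand-in for a real photo
base = unit_response(img)
delta_a = abs(base - unit_response(occlude(img, 0, 0)))
delta_b = abs(base - unit_response(occlude(img, 120, 120)))
print("occlusion A" if delta_a > delta_b else "occlusion B",
      "changes the unit more")
```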
Modeling complex systems is a time-consuming, difficult, and fragmented task, often requiring the analyst to work with disparate data, a variety of models, and expert knowledge across a diverse set of domains. Applying a user-centered design process, we developed a mixed-initiative visual analytics approach, a subset of the Causemos platform, that allows analysts to rapidly assemble qualitative causal models of complex socio-natural systems. Our approach facilitates the construction, exploration, and curation of qualitative models, bringing together data across disparate domains. Referencing a recent user evaluation, we demonstrate our approach's ability to interactively enrich user mental models and accelerate qualitative model building.
New text-as-data techniques offer great promise: the ability to inductively discover measures that are useful for testing social science theories of interest from large collections of text. We introduce a conceptual framework for making causal inferences with discovered measures as a treatment or outcome. Our framework enables researchers to discover high-dimensional textual interventions and estimate the ways that observed treatments affect text-based outcomes. We argue that nearly all text-based causal inferences depend upon a latent representation of the text, and we provide a framework to learn the latent representation. But estimating this latent representation, we show, creates new risks: we may introduce an identification problem or overfit. To address these risks we describe a split-sample framework and apply it to estimate causal effects from an experiment on immigration attitudes and a study on bureaucratic response. Our work provides a rigorous foundation for text-based causal inferences.
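The split-sample idea can be sketched in a few lines, with a topic model standing in for the learned latent representation; the corpus, treatment indicator, and estimator below are invented for illustration.

```python
# Split-sample sketch: discover the latent representation on one half
# of the data, estimate effects on the other. Toy data; an LDA topic
# model stands in for whatever representation is actually learned.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = (["jobs economy wages growth opportunity"] * 20 +   # treated
        ["border security enforcement crime law"] * 20)    # control
treated = np.array([1] * 20 + [0] * 20)

docs_disc, docs_est, t_disc, t_est = train_test_split(
    docs, treated, test_size=0.5, random_state=0, stratify=treated)

# Step 1: learn the representation on the discovery split only.
vec = CountVectorizer()
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(vec.fit_transform(docs_disc))

# Step 2: freeze it and estimate effects on the held-out split, which
# avoids the overfitting/identification risks of reusing documents.
theta = lda.transform(vec.transform(docs_est))
effect = theta[t_est == 1].mean(axis=0) - theta[t_est == 0].mean(axis=0)
print(effect)  # per-topic prevalence difference, treated vs. control
```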
R. Rossi Jr (2017)
Causal discovery algorithms allow for the inference of causal structures from probabilistic relations of random variables. A natural field for the application of this tool is quantum mechanics, where a long-standing debate about the role of causality in the theory has flourished since its early days. In this paper, a causal discovery algorithm is applied in the search for causal models to describe a quantum version of Wheeler's delayed-choice experiment. The outputs explicitly show the restrictions on introducing classical concepts into this system. The exclusion of models with two hidden variables is one of them. A consequence of such a constraint is the impossibility of constructing a causal model that avoids superluminal causation and assumes an objective view of the wave and particle properties simultaneously.
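For intuition about how such algorithms work, here is a generic sketch of the constraint-based idea (prune edges between variables that test as conditionally independent) on toy linear data; it is not the paper's specific algorithm or its delayed-choice model, and the fixed threshold is a crude stand-in for a calibrated test.

```python
# Generic constraint-based skeleton search on a toy chain X -> Y -> Z:
# X and Z are dependent, but conditionally independent given Y.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=n)
Y = 0.8 * X + rng.normal(size=n)
Z = 0.8 * Y + rng.normal(size=n)
data = {"X": X, "Y": Y, "Z": Z}

def residual(v, conds):
    # Residual of v after linear regression on the conditioning set.
    if not conds:
        return v - v.mean()
    C = np.column_stack([np.ones(len(v))] + conds)
    beta, *_ = np.linalg.lstsq(C, v, rcond=None)
    return v - C @ beta

def indep(a, b, conds, thresh=0.05):
    # Crude partial-correlation check; a real algorithm would use a
    # calibrated statistical test rather than a fixed threshold.
    r = np.corrcoef(residual(a, conds), residual(b, conds))[0, 1]
    return abs(r) < thresh

names = list(data)
skeleton = set(combinations(names, 2))
for u, v in list(skeleton):
    rest = [data[w] for w in names if w not in (u, v)]
    if indep(data[u], data[v], []) or indep(data[u], data[v], rest):
        skeleton.discard((u, v))
print(skeleton)  # {('X', 'Y'), ('Y', 'Z')}: the X-Z edge is pruned
```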
In many application areas (lending, education, and online recommenders, for example), fairness and equity concerns emerge when a machine learning system interacts with a dynamically changing environment to produce both immediate and long-term effects for individuals and demographic groups. We discuss causal directed acyclic graphs (DAGs) as a unifying framework for the recent literature on fairness in such dynamical systems. We show that this formulation affords several new directions of inquiry to the modeler, where causal assumptions can be expressed and manipulated. We emphasize the importance of computing interventional quantities in the dynamical fairness setting, and show how causal assumptions enable simulation (when environment dynamics are known) and off-policy estimation (when dynamics are unknown) of intervention on short- and long-term outcomes, at both the group and individual levels.
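A toy simulation shows the kind of interventional question at stake; the lending dynamics, parameters, and group gap below are invented for illustration, not taken from the paper.

```python
# Toy dynamical-fairness simulation: intervene on lending thresholds
# and roll the environment forward. All dynamics here are invented.
import numpy as np

def simulate(t0, t1, steps=10, n=10_000):
    """Long-term mean scores per group under do(threshold = t_g)."""
    rng = np.random.default_rng(0)               # shared seed across runs
    group = rng.integers(0, 2, size=n)           # two demographic groups
    score = rng.normal(600 + 30 * group, 50, n)  # group 1 starts higher
    for _ in range(steps):
        approved = score > np.where(group == 0, t0, t1)
        repay_p = 1 / (1 + np.exp(-(score - 600) / 50))  # repayment prob.
        repaid = rng.random(n) < repay_p
        # Repaid loans raise scores; defaults cost more than repayments
        # gain, so short-term approvals can hurt long-term outcomes.
        score = score + approved * np.where(repaid, 10.0, -30.0)
    return score[group == 0].mean(), score[group == 1].mean()

print(simulate(620, 620))  # intervention: equal thresholds
print(simulate(580, 620))  # laxer threshold for the lower-scoring group
```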
