Analysts often make visual causal inferences about possible data-generating models. However, visual analytics (VA) software tends to leave these models implicit in the mind of the analyst, which casts doubt on the statistical validity of informal visual insights. We formally evaluate the quality of causal inferences from visualizations by adopting causal support -- a Bayesian cognition model that learns the probability of alternative causal explanations given some data -- as a normative benchmark for causal inferences. We contribute two experiments assessing how well crowdworkers can detect (1) a treatment effect and (2) a confounding relationship. We find that chart users' causal inferences tend to be insensitive to sample size, causing them to deviate from our normative benchmark. While interactively cross-filtering data in visualizations can improve sensitivity, on average users do not perform reliably better with common visualizations than they do with textual contingency tables. These experiments demonstrate the utility of causal support as an evaluation framework for inferences in VA and point to opportunities to make analysts' mental models more explicit in VA software.
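As a rough illustration of the kind of normative benchmark the abstract describes, the sketch below compares two causal explanations of treatment-effect data (the first of the two experimental tasks) by Bayesian model comparison. The function names, the Beta-Binomial marginal likelihoods, and the uniform priors are illustrative assumptions, not the authors' exact formulation of causal support.

```python
# Hypothetical sketch of a causal-support-style benchmark: compare a
# "no treatment effect" model (one shared outcome rate) against a
# "treatment effect" model (separate rates per group) given counts from
# a treatment and a control group. Marginal likelihoods use uniform
# Beta priors; shared binomial coefficients cancel in the log odds.
from scipy.special import betaln


def log_marginal_no_effect(k_treat, n_treat, k_ctrl, n_ctrl):
    # One shared outcome rate for both groups, integrated over a uniform prior.
    k, n = k_treat + k_ctrl, n_treat + n_ctrl
    return betaln(k + 1, n - k + 1)


def log_marginal_effect(k_treat, n_treat, k_ctrl, n_ctrl):
    # Separate outcome rates for treatment and control, each with a uniform prior.
    return (betaln(k_treat + 1, n_treat - k_treat + 1)
            + betaln(k_ctrl + 1, n_ctrl - k_ctrl + 1))


def causal_support(k_treat, n_treat, k_ctrl, n_ctrl):
    # Log posterior odds of "treatment effect" vs. "no effect" under equal priors.
    return (log_marginal_effect(k_treat, n_treat, k_ctrl, n_ctrl)
            - log_marginal_no_effect(k_treat, n_treat, k_ctrl, n_ctrl))


# Example: 30/50 positive outcomes with treatment vs. 20/50 in the control group.
print(causal_support(30, 50, 20, 50))
```

Because the marginal likelihoods integrate over the unknown outcome rates, this benchmark grows more decisive as sample size grows, which is exactly the sensitivity the experiments test chart users against.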