Little is known about how people learn from a brief glimpse of three-dimensional (3D) bivariate vector field visualizations and about how well visual features can guide behavior. Here we report empirical study results on the use of color, texture, and length to guide viewing of bivariate glyphs: one of these three visual features is mapped to the first variable, an integer (v1), and length is mapped to the second, quantitative variable (v2). Participants performed two tasks within 20 seconds: (1) MAX: find the largest v2 when v1 is fixed; (2) SEARCH: find a specific bivariate variable shown on the screen in a vector field. Our first study, with eighteen participants performing these tasks, showed that randomized vector positions, although they lessened viewers' ability to group vectors, did not reduce task accuracy compared to structured vector fields. This result suggests that color, texture, and length can, to a certain degree, guide viewers' attention to task-relevant regions. The second study measured eye movements to quantify viewers' behaviors using three error metrics (scanning, recognition, and decision errors) and one behavior metric (refixation). Our results showed two dominant search strategies: drilling and scanning. Color tended to restrict eye movements to the task-relevant regions of interest, enabling drilling. Length tended to support scanners, who quickly wandered around at different v1 levels. Drillers made significantly fewer errors than scanners, and the error rates for color and texture were also the lowest. Length had less discriminative power than color and texture as a 3D visual guide. Our results suggest that using a categorical visual feature could help viewers obtain the global structure of a vector field visualization. We provide the first benchmark of the attentional cost of perceiving bivariate vectors: on average, about 5 items per second.
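To make the encoding above concrete, here is a minimal illustrative sketch in Python (our own, not the study's stimulus code; the grid size, number of v1 levels, and colormap are assumptions) that renders one glyph condition, mapping the integer v1 to a categorical color and the quantitative v2 to arrow length:

```python
# Illustrative sketch of one bivariate glyph condition: color -> v1, length -> v2.
# All names and parameters are our own assumptions, not the study's materials.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 15
x, y = np.meshgrid(np.arange(n), np.arange(n))
v1 = rng.integers(0, 4, size=(n, n))        # integer variable: 4 categorical levels
v2 = rng.uniform(0.2, 1.0, size=(n, n))     # quantitative variable: magnitude
theta = rng.uniform(0, 2 * np.pi, size=(n, n))

# Encode v2 as arrow length and v1 as a categorical color.
u = v2 * np.cos(theta)
v = v2 * np.sin(theta)
q = plt.quiver(x, y, u, v, v1, cmap="tab10", scale=20)
plt.colorbar(q, label="v1 (categorical)")
plt.title("Bivariate glyphs: color -> v1, length -> v2")
plt.show()
```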
Aerial cinematography is significantly expanding the capabilities of film-makers. Recent progress in autonomous unmanned aerial vehicles (UAVs) has further increased the potential impact of aerial cameras, with systems that can safely track actors in unstructured cluttered environments. Professional productions, however, require the use of multiple cameras simultaneously to record different viewpoints of the same scene, which are edited into the final footage either in real time or in post-production. Such extreme motion coordination is particularly hard for unscripted action scenes, which are a common use case of aerial cameras. In this work we develop a real-time multi-UAV coordination system that is capable of recording dynamic targets while maximizing shot diversity and avoiding collisions and mutual visibility between cameras. We validate our approach in multiple cluttered environments of a photo-realistic simulator, and deploy the system using two UAVs in real-world experiments. We show that our coordination scheme has low computational cost and takes only 1.17 ms on average to plan for a team of 3 UAVs over a 10 s time horizon. Supplementary video: https://youtu.be/m2R3anv2ADE
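The abstract does not spell out the planner's objective; as a purely illustrative sketch (our own toy formulation in Python, with every function name, cost, and weight an assumption), the trade-off it describes can be pictured as scoring candidate trajectory sets by shot diversity against collision and mutual-visibility penalties:

```python
# Schematic sketch of the coordination trade-off described above; this is a
# toy formulation of our own, not the paper's planner.
import numpy as np

def collision_penalty(traj_a, traj_b, d_safe=2.0):
    """Soft penalty when two UAV trajectories (T x 3 positions) come closer
    than a safety distance at any timestep."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)
    return np.sum(np.maximum(0.0, d_safe - d) ** 2)

def mutual_visibility_penalty(traj_a, traj_b, look_a, half_fov=np.pi / 4):
    """Fraction of timesteps in which UAV b falls inside UAV a's field of
    view (look_a: T x 3 unit view directions), i.e. appears in its shot."""
    to_b = traj_b - traj_a
    to_b /= np.linalg.norm(to_b, axis=1, keepdims=True)
    return np.mean(np.sum(to_b * look_a, axis=1) > np.cos(half_fov))

def shot_diversity(trajs, actor):
    """Reward widely spread viewpoints: mean pairwise angle between the
    camera-to-actor directions of all UAVs over the horizon."""
    dirs = [(actor - t) / np.linalg.norm(actor - t, axis=1, keepdims=True)
            for t in trajs]
    angles = [np.arccos(np.clip(np.sum(a * b, axis=1), -1, 1)).mean()
              for i, a in enumerate(dirs) for b in dirs[i + 1:]]
    return float(np.mean(angles))
```

A planner in this spirit would select, from each UAV's candidate trajectories, the combination that maximizes diversity minus the weighted penalties, within the real-time budget the paper reports.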
We present results from two experiments that empirically validate that separable bivariate pairs are more efficient than integral pairs for univariate representations of large-magnitude-range vectors. The first experiment, with 20 participants, compared one integral pair, three separable pairs, and one redundant pair (a mix of integral and separable features). Participants performed three local tasks requiring reading numerical values, estimating ratios, and comparing two points. The second study, with 18 participants, compared the three separable pairs using three global tasks in which participants had to examine the entire field to answer: find a specific target within 20 seconds, find the maximum magnitude within 20 seconds, and estimate the total number of vector exponents within 2 seconds. Our results reveal that separable pairs led to the most accurate answers and the shortest task execution times, while the integral pair was among the least accurate; it achieved high performance only when a pop-out separable feature (here, color) was added. Reconciling this finding with the existing literature, our second experiment suggests that the higher the separability, the higher the accuracy; the reason is probably that the emergent global scene created by the separable pairs reduces the subsequent search space.
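As a minimal sketch of the encoding idea (our own reading, assuming the large-magnitude-range values are split logarithmically; all names are hypothetical), each value can be decomposed into an exponent and a mantissa, and the two parts mapped to two separable visual channels, e.g. color for the exponent and length for the mantissa:

```python
# Sketch of a separable bivariate pair for large-magnitude-range values:
# split each value into a digit exponent and a mantissa, then map the parts
# to two separable channels. Our own assumption about the decomposition.
import numpy as np

def split_magnitude(values):
    """Return (exponent, mantissa normalized to [0, 1)) for positive values."""
    exponent = np.floor(np.log10(values)).astype(int)
    mantissa = values / 10.0 ** exponent          # in [1, 10)
    return exponent, (mantissa - 1.0) / 9.0

mags = np.array([3.2e1, 4.7e3, 8.9e5])
exp, man = split_magnitude(mags)
# exp -> categorical channel (e.g., color); man -> continuous channel (length)
print(exp, man.round(2))
```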
In studies of the connection between active galactic nuclei (AGN) and their host galaxies, there is widespread disagreement on some key aspects, stemming largely from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection use both observations and simulations to correct for the missed population, but they are presently limited by intrinsic biases and complicated models. We take a simple simulation of galaxy evolution and add a new prescription for AGN activity, connecting galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an observed AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in many models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGN based on thresholds in luminosity or Eddington ratio on the observed AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation rate-stellar mass distribution, and we show that an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGN using these various thresholds yields samples with different AGN host galaxy properties.
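As a toy illustration of the two selection thresholds contrasted above (our own sketch, not the paper's model; the mass and Eddington-ratio distributions below are arbitrary assumptions), the effect of the cuts on a mock black hole sample looks like this:

```python
# Toy contrast of a luminosity cut versus an Eddington-ratio cut on a mock
# sample of growing black holes. Distributions and thresholds are assumed.
import numpy as np

rng = np.random.default_rng(1)
log_mbh = rng.uniform(6.0, 9.0, 10_000)        # black hole mass [log Msun]
log_lam = rng.normal(-2.0, 1.0, 10_000)        # Eddington ratio [log]
l_edd = 1.26e38 * 10.0 ** log_mbh              # Eddington luminosity [erg/s]
l_bol = 10.0 ** log_lam * l_edd                # bolometric luminosity

lum_cut = l_bol > 1e44                         # luminosity-limited sample
edd_cut = 10.0 ** log_lam > 0.01               # Eddington-ratio-limited sample

# The luminosity cut preferentially keeps massive black holes; the
# Eddington-ratio cut keeps the full mass range of growing black holes.
print(log_mbh[lum_cut].mean(), log_mbh[edd_cut].mean())
```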
We have recently developed an algorithm for vector field visualization with oriented streamlines, able to depict the flow directions everywhere in a dense vector field as well as the sense of the local orientations. The algorithm has useful applications in the visualization of the director field in nematic liquid crystals. Here we propose an improvement to the algorithm that enhances the visualization of the local magnitude of the field. This new version of the algorithm is compared with the same procedure applied to Line Integral Convolution (LIC) visualization.
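As a minimal sketch of the kind of magnitude enhancement described (our own assumption about the mechanism, given a precomputed streamline texture), the local field magnitude can modulate the texture's intensity:

```python
# Sketch: modulate a grayscale streamline (e.g., LIC-style) texture by the
# normalized local field magnitude. Function name and gamma are assumptions.
import numpy as np

def enhance_magnitude(lic_texture, vx, vy, gamma=0.5):
    """Scale a grayscale texture by the local vector magnitude."""
    mag = np.hypot(vx, vy)
    mag = (mag - mag.min()) / (np.ptp(mag) + 1e-12)  # normalize to [0, 1]
    return lic_texture * mag ** gamma                # brighter where field is strong
```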
EXplainable AI (XAI) methods have been proposed to interpret how a deep neural network predicts inputs through model saliency explanations that highlight the parts of the input deemed important for arriving at a decision for a specific target. However, it remains challenging to quantify the correctness of their interpretations, as current evaluation approaches either require subjective input from humans or incur high computation cost with automated evaluation. In this paper, we propose using backdoor trigger patterns (hidden malicious functionalities that cause misclassification) to automate the evaluation of saliency explanations. Our key observation is that triggers provide ground truth for inputs against which to evaluate whether the regions identified by an XAI method are truly relevant to its output. Since backdoor triggers are the most important features that cause deliberate misclassification, a robust XAI method should reveal their presence at inference time. We introduce three complementary metrics for the systematic evaluation of the explanations an XAI method generates, and we evaluate seven state-of-the-art model-free and model-specific post-hoc methods on 36 models trojaned with specifically crafted triggers varying in color, shape, texture, location, and size. We find that the six methods that use local explanations and feature relevance fail to completely highlight trigger regions, and that only a model-free approach can uncover the entire trigger region.
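As a minimal sketch of the core evaluation idea (our own illustration; the overlap score below is a generic metric of our choosing, not necessarily one of the paper's three), a saliency map can be scored against the known trigger mask:

```python
# Sketch: score a saliency explanation by how well its top-attributed pixels
# overlap the known backdoor trigger region. Names and the 5% threshold are
# our own assumptions.
import numpy as np

def trigger_iou(saliency, trigger_mask, keep=0.05):
    """IoU between the top `keep` fraction of saliency pixels and the trigger."""
    thresh = np.quantile(saliency, 1.0 - keep)
    pred = saliency >= thresh
    inter = np.logical_and(pred, trigger_mask).sum()
    union = np.logical_or(pred, trigger_mask).sum()
    return inter / max(union, 1)
```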