
Task-Based Effectiveness of Interactive Contiguous Area Cartograms

Published by: Michael Gastner
Publication date: 2020
Research field: Informatics Engineering
Language: English





Cartograms are map-based data visualizations in which the area of each map region is proportional to an associated numeric data value (e.g., population or gross domestic product). A cartogram is called contiguous if it conforms to this area principle while also keeping neighboring regions connected. Because of their distorted appearance, contiguous cartograms have been criticized as difficult to read. Some authors have suggested that cartograms may be more legible if they are accompanied by interactive features (e.g., animations, linked brushing, or infotips). We conducted an experiment to evaluate this claim. Participants had to perform visual analysis tasks with interactive and noninteractive contiguous cartograms. The task types covered various aspects of cartogram readability, ranging from elementary lookup tasks to synoptic tasks (i.e., tasks in which participants had to summarize high-level differences between two cartograms). Elementary tasks were carried out equally well with and without interactivity. Synoptic tasks, by contrast, were more difficult without interactive features. With access to interactivity, however, most participants answered even synoptic questions correctly. In a subsequent survey, participants rated the interactive features as easy to use and helpful. Our study suggests that interactivity has the potential to make contiguous cartograms accessible even for those readers who are unfamiliar with interactive computer graphics or do not have a prior affinity to working with maps. Among the interactive features, animations had the strongest positive effect, so we recommend them as a minimum of interactivity when contiguous cartograms are displayed on a computer screen.
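To make the area principle above concrete, here is a minimal Python sketch (not code from the study) showing how each region's target area in a contiguous cartogram would be derived from its data value; the region names, population figures, and the `target_areas` function are illustrative assumptions.

```python
# Minimal sketch of the cartogram "area principle": each region's drawn
# area is proportional to its data value. Region names and population
# figures are hypothetical, not data from the study.

def target_areas(values, total_map_area):
    """Return the area each region should occupy on the cartogram.

    values: dict mapping region name -> numeric value (e.g., population)
    total_map_area: total drawn area of the cartogram (e.g., in cm^2)
    """
    total_value = sum(values.values())
    return {region: total_map_area * value / total_value
            for region, value in values.items()}

# Three hypothetical regions sharing a 100 cm^2 canvas.
populations = {"A": 5_000_000, "B": 3_000_000, "C": 2_000_000}
print(target_areas(populations, total_map_area=100.0))
# -> {'A': 50.0, 'B': 30.0, 'C': 20.0}
```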




Read also

Cartograms are maps in which the areas of regions (e.g., countries or provinces) are proportional to a thematic mapping variable (e.g., population or gross domestic product). A cartogram is called contiguous if it keeps geographically adjacent regions connected. Over the past few years, several web tools have been developed for the creation of contiguous cartograms. However, most of these tools offer little advice on how to use cartograms correctly. To mitigate these shortcomings, we attempt to establish good practices through our recently developed web application go-cart.io: (1) use cartograms to show numeric data that add up to an interpretable total, (2) present a cartogram alongside a conventional map that uses the same color scheme, (3) indicate whether the data for a region are missing, (4) include a legend so that readers can infer the magnitude of the mapping variable, (5) if a cartogram is presented electronically, assist readers with interactive graphics.
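As a rough illustration of practices (1), (3), and (4) above, the following Python sketch checks that the regional values add up to a stated total (treating a missing region as `None`) and derives the data value that a legend square of a given area would represent. The names, figures, and helper functions are hypothetical and are not taken from go-cart.io.

```python
# Hypothetical helpers illustrating practices (1), (3), and (4):
# verify that mapped values sum to an interpretable total, tolerate
# missing regions, and derive a legend entry (data value per legend area).

def check_total(values, expected_total, tolerance=0.01):
    """Warn if the non-missing regional values do not add up to the stated total."""
    total = sum(v for v in values.values() if v is not None)
    if abs(total - expected_total) > tolerance * expected_total:
        print(f"Warning: regional values sum to {total:,}, "
              f"expected about {expected_total:,}.")

def legend_value(values, total_map_area, legend_area):
    """Data value represented by a legend square of the given area."""
    total_value = sum(v for v in values.values() if v is not None)
    return legend_area * total_value / total_map_area

populations = {"A": 5_000_000, "B": 3_000_000, "C": None}  # C has missing data
check_total(populations, expected_total=10_000_000)
print(legend_value(populations, total_map_area=100.0, legend_area=1.0))
# -> 80000.0 (one legend square of 1 cm^2 stands for 80,000 people)
```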
We argue that a key challenge in enabling usable and useful interactive task learning for intelligent agents is to facilitate effective Human-AI collaboration. We reflect on our past five years of effort in designing, developing, and studying the SUGILITE system, discuss the issues in incorporating recent advances in AI with HCI principles in mixed-initiative and multi-modal interactions, and summarize the lessons we learned. Lastly, we identify several challenges and opportunities, and describe our ongoing work.
Dynamic networks can be challenging to analyze visually, especially if they span a large time range during which new nodes and edges can appear and disappear. Although it is straightforward to provide interfaces for visualization that represent multiple states of the network (i.e., multiple timeslices) either simultaneously (e.g., through small multiples) or interactively (e.g., through interactive animation), these interfaces might not support tasks in which disjoint timeslices need to be compared. Since these tasks are key for understanding the dynamic aspects of the network, it is important to understand which interactive visualizations best support them. We present the results of a series of laboratory experiments comparing two traditional approaches (small multiples and interactive animation) with a more recent approach based on interactive timeslicing. The tasks were performed on a large display through a touch interface. Participants completed 24 trials of three tasks with all techniques. The results show that interactive timeslicing brings benefits when comparing distant points in time, but fewer benefits when analyzing contiguous intervals of time.
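For readers unfamiliar with the term, a timeslice of a dynamic network is simply the set of edges active within a given time interval; the small Python sketch below illustrates this, with a made-up edge stream and interval bounds that are not part of the study.

```python
# Illustrative definition of a "timeslice": the edges of a dynamic
# network whose timestamps fall inside a chosen interval. The edge
# stream below is made up for demonstration purposes only.

def timeslice(edges, start, end):
    """Return the edges active in the half-open interval [start, end).

    edges: iterable of (source, target, timestamp) tuples
    """
    return [(u, v, t) for (u, v, t) in edges if start <= t < end]

# Comparing two disjoint timeslices of a toy edge stream.
stream = [("a", "b", 1), ("b", "c", 3), ("a", "c", 7), ("c", "d", 8)]
print(timeslice(stream, 0, 4))  # [('a', 'b', 1), ('b', 'c', 3)]
print(timeslice(stream, 6, 9))  # [('a', 'c', 7), ('c', 'd', 8)]
```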
Natural language programming is a promising approach to enable end users to instruct new tasks for intelligent agents. However, our formative study found that end users often use unclear, ambiguous, or vague concepts when naturally instructing tasks in natural language, especially when specifying conditionals. Existing systems have limited support for letting the user teach agents new concepts or explain unclear concepts. In this paper, we describe a new multi-modal, domain-independent approach that combines natural language programming and programming-by-demonstration to allow users to first naturally describe tasks and associated conditions at a high level, and then collaborate with the agent to recursively resolve any ambiguities or vagueness through conversations and demonstrations. Users can also define new procedures and concepts by demonstrating and referring to contents within GUIs of existing mobile apps. We demonstrate this approach in PUMICE, an end-user programmable agent that implements it. A lab study with 10 users showed its usability.
Identity recognition plays an important role in ensuring security in our daily life. Biometric-based (especially activity-based) approaches are favored due to their fidelity, universality, and resilience. However, most existing machine-learning-based approaches rely on a traditional workflow in which models are trained once and for all, with limited involvement from end users and little regard for the dynamic nature of the learning process. This makes the models static: they cannot be updated in time, which usually leads to high false-positive or false-negative rates. In practice, an expert is therefore desired to assist with providing high-quality observations and interpreting model outputs. It is expedient to combine the advantages of human experts with the computational capability of computers to create a tightly coupled incremental learning process for better performance. In this study, we develop RLTIR, an interactive identity recognition approach based on reinforcement learning, which adjusts the identification model under human guidance. We first build a base tree-structured identity recognition model. An expert is then introduced to give feedback on model outputs, and the model is updated according to strategies that are automatically learned within a designated reinforcement learning framework. To the best of our knowledge, this is the first attempt to combine human expert knowledge with model learning in the area of identity recognition. The experimental results show that the reinforced interactive identity recognition framework outperforms baseline methods with regard to recognition accuracy and robustness.