
Communication Analysis through Visual Analytics: Current Practices, Challenges, and New Frontiers

Posted by Maximilian T. Fischer
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The automated analysis of digital human communication data often focuses on specific aspects like content or network structure in isolation, while classical communication research stresses the importance of a holistic analysis approach. This work aims to formalize digital communication analysis and investigate how classical results can be leveraged as part of visually interactive systems, which opens new analysis opportunities and allows for less biased, skewed, or incomplete results. For this, we construct a conceptual framework and design space based on the existing research landscape, technical considerations, and communication research that describes the properties, capabilities, and composition of such systems through 30 criteria in four analysis dimensions. We make the case that visual analytics principles are uniquely suited for a more holistic approach by tackling the automation complexity and leveraging domain knowledge, paving the way to generate design guidelines for building such approaches. Our framework provides a common language and description of communication analysis systems to support existing research, highlights relevant design areas, and promotes and supports the mutual exchange between researchers. Additionally, it identifies existing gaps and highlights opportunities in research areas that are worth investigating further. With this contribution, we pave the path for the formalization of digital communication analysis through visual analytics.
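
To make the idea of describing a system against such a design space concrete, here is a minimal, purely illustrative Python sketch. The dimension and criterion names below are invented placeholders and do not correspond to the paper's actual four analysis dimensions or 30 criteria; the sketch only shows how a common description of systems could support a gap analysis.

# Illustrative only: encoding a system description against a design space of
# dimensions and criteria. All names here are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    name: str
    # dimension -> {criterion: supported?}
    criteria: dict[str, dict[str, bool]] = field(default_factory=dict)

profile = SystemProfile(
    name="ExampleVA",
    criteria={
        "data":        {"temporal_metadata": True, "content_access": True},
        "analysis":    {"network_measures": True, "text_mining": False},
        "interaction": {"cross_checks": True},
        "provenance":  {"annotation_support": False},
    },
)

# A gap analysis then amounts to listing unsupported criteria per dimension.
gaps = {dim: [c for c, ok in crits.items() if not ok]
        for dim, crits in profile.criteria.items()}
print(gaps)
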




Read also

Communication consists of both meta-information and content. Currently, the automated analysis of such data often focuses either on the network aspects via social network analysis or on the content, utilizing methods from text mining. However, the first category of approaches does not leverage the rich content information, while the latter ignores the conversation environment and the temporal evolution, as evident in the meta-information. In contrast to communication research, which stresses the importance of a holistic approach, both aspects are rarely applied simultaneously, and consequently, their combination has not yet received enough attention in automated analysis systems. In this work, we aim to address this challenge by discussing the difficulties and design decisions of such a path, and we contribute CommAID, a blueprint for a holistic strategy to communication analysis. It features an integrated visual analytics design to analyze communication networks through dynamics modeling, semantic pattern retrieval, and a user-adaptable and problem-specific machine learning-based retrieval system. An interactive multi-level matrix-based visualization facilitates a focused analysis of both network and content using inline visuals, supporting cross-checks and reducing context switches. We evaluate our approach in a case study and through a formative evaluation with eight law enforcement experts using a real-world communication corpus. Results show that our solution surpasses existing techniques in terms of integration level and applicability. With this contribution, we aim to pave the path for a more holistic approach to communication analysis.
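
As a toy illustration of combining the two aspects this abstract contrasts, the following Python sketch keeps a network view (from communication meta-information, via networkx) and a content view (via a scikit-learn text representation) side by side. The messages are invented sample data; this is only a sketch of the combined perspective, not the CommAID system.

# Hypothetical example: hold both the network/meta-information view and the
# content view of a communication corpus for a joint (e.g., matrix-based) display.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

messages = [  # (sender, receiver, text) -- invented sample data
    ("alice", "bob",   "meet at the harbour tonight"),
    ("bob",   "carol", "bring the documents to the harbour"),
    ("carol", "alice", "the documents are ready"),
]

G = nx.DiGraph()
for sender, receiver, text in messages:
    G.add_edge(sender, receiver, text=text)

centrality = nx.degree_centrality(G)                                  # network view
tfidf = TfidfVectorizer().fit_transform(t for _, _, t in messages)    # content view

print(centrality)
print(tfidf.shape)
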
The reliance on vision for tasks related to cooking and eating healthy can present barriers to cooking for oneself and achieving proper nutrition. There has been little research exploring the cooking practices and challenges faced by people with visual impairments. We present a content analysis of 122 YouTube videos to highlight the cooking practices of visually impaired people, and we describe detailed practices for 12 different cooking activities (e.g., cutting and chopping, measuring, testing food for doneness). Based on these cooking practices, we also conducted semi-structured interviews with 12 visually impaired people who have cooking experience and show existing challenges, concerns, and risks in cooking (e.g., tracking the status of tasks in progress, verifying whether things are peeled or cleaned thoroughly). We further discuss opportunities to support current practices and improve the independence of people with visual impairments in cooking (e.g., zero-touch interactions for cooking). Overall, our findings provide guidance for future research exploring various assistive technologies to help people cook without relying on vision.
87 - Dylan Cashman 2018
Many visual analytics systems allow users to interact with machine learning models towards the goals of data exploration and insight generation on a given dataset. However, in some situations, insights may be less important than the production of an accurate predictive model for future use. In that case, users are more interested in generating diverse and robust predictive models, verifying their performance on holdout data, and selecting the most suitable model for their usage scenario. In this paper, we consider the concept of Exploratory Model Analysis (EMA), which is defined as the process of discovering and selecting relevant models that can be used to make predictions on a data source. We delineate the differences between EMA and the well-known term exploratory data analysis in terms of the desired outcome of the analytic process: insights into the data or a set of deployable models. The contributions of this work are a visual analytics system workflow for EMA, a user study, and two use cases validating the effectiveness of the workflow. We found that our system workflow enabled users to generate complex models, to assess them for various qualities, and to select the most relevant model for their task.
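
A minimal Python sketch of the EMA workflow described above, using scikit-learn as an assumed stand-in: several candidate models are trained, verified on holdout data, and the most suitable one is selected. This illustrates the workflow only and is not the authors' system.

# Sketch of an Exploratory Model Analysis loop: train candidate models,
# verify them on holdout data, and select the most suitable one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gboost": GradientBoostingClassifier(random_state=0),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)                                       # build a candidate model
    scores[name] = accuracy_score(y_hold, model.predict(X_hold))      # verify on holdout data

best = max(scores, key=scores.get)                                    # select the most suitable model
print(scores, "->", best)
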
As the use of machine learning (ML) models in product development and data-driven decision-making processes has become pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their model works. While scholarly interest in model interpretability has grown rapidly in research communities like HCI, ML, and beyond, little is known about how practitioners perceive and aim to provide interpretability in the context of their existing workflows. This lack of understanding of interpretability as practiced may prevent interpretability research from addressing important needs, or lead to unrealistic solutions. To bridge this gap, we conducted 22 semi-structured interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models. Based on a qualitative analysis of our results, we differentiate interpretability roles, processes, goals, and strategies as they exist within organizations making heavy use of ML models. The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles, often aimed at building trust not only between people and models but also between people within the organization. We present implications for design that discuss gaps between the interpretability challenges practitioners face in practice and the approaches proposed in the literature, highlighting possible research directions that can better address real-world needs.
104 - Hai Dang, Daniel Buschek 2021
This paper presents GestureMap, a visual analytics tool for gesture elicitation which directly visualises the space of gestures. Concretely, a Variational Autoencoder embeds gestures recorded as 3D skeletons on an interactive 2D map. GestureMap further integrates three computational capabilities to connect exploration to quantitative measures: leveraging DTW Barycenter Averaging (DBA), we compute average gestures to 1) represent gesture groups at a glance; 2) compute a new consensus measure (variance around the average gesture); and 3) cluster gestures with k-means. We evaluate GestureMap and its concepts with eight experts and an in-depth analysis of published data. Our findings show how GestureMap facilitates exploring large datasets and helps researchers gain a visual understanding of elicited gesture spaces. It further opens new directions, such as comparing elicitations across studies. We discuss implications for elicitation studies and research, and opportunities to extend our approach to additional tasks in gesture elicitation.
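
The computational ingredients named in this abstract can be sketched in Python, assuming the tslearn library and synthetic gesture sequences: a DBA average gesture, the variance around that average as a consensus measure, and k-means clustering with a DTW metric. This is an illustration of those measures, not the GestureMap implementation.

# Illustrative sketch (synthetic data): average gesture via DBA, variance
# around the average as a consensus measure, and k-means clustering.
import numpy as np
from tslearn.barycenters import dtw_barycenter_averaging
from tslearn.clustering import TimeSeriesKMeans
from tslearn.metrics import dtw

rng = np.random.default_rng(0)
# 20 toy "gestures": sequences of 50 frames with 3 coordinates per frame
gestures = rng.normal(size=(20, 50, 3)).cumsum(axis=1)

average_gesture = dtw_barycenter_averaging(gestures)                       # represent the group at a glance
consensus = np.mean([dtw(g, average_gesture) ** 2 for g in gestures])      # variance around the average

labels = TimeSeriesKMeans(n_clusters=3, metric="dtw", random_state=0).fit_predict(gestures)
print(average_gesture.shape, consensus, labels)
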