
AffectiveTDA: Using Topological Data Analysis to Improve Analysis and Explainability in Affective Computing

Added by Paul Rosen
Publication date: 2021
Language: English





We present an approach utilizing Topological Data Analysis to study the structure of face poses used in affective computing, i.e., the process of recognizing human emotion. The approach uses a conditional comparison of different emotions, both respective and irrespective of time, with multiple topological distance metrics, dimension reduction techniques, and face subsections (e.g., eyes, nose, mouth, etc.). The results confirm that our topology-based approach captures known patterns, distinctions between emotions, and distinctions between individuals, which is an important step towards more robust and explainable emotion recognition by machines.
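
To make the pipeline concrete, the sketch below computes persistence diagrams from a small set of 2D landmark coordinates for one face subsection and compares two emotions with the bottleneck distance. It is a minimal illustration of the general idea using the ripser and persim Python libraries and fabricated landmark points, not the authors' implementation.

```python
# Minimal sketch, assuming the ripser and persim packages: persistent homology
# on hypothetical facial-landmark point clouds for one face subsection (mouth),
# compared across two emotions with the bottleneck distance.
import numpy as np
from ripser import ripser
from persim import bottleneck

rng = np.random.default_rng(0)

# Hypothetical (x, y) landmarks sampled around the mouth under two emotions.
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
mouth_neutral = np.column_stack([np.cos(theta), 0.4 * np.sin(theta)])
mouth_happy = np.column_stack([np.cos(theta), 0.8 * np.sin(theta)])
mouth_happy += rng.normal(scale=0.05, size=mouth_happy.shape)

# Persistence diagrams (H0 and H1) of the Vietoris-Rips filtration.
dgms_neutral = ripser(mouth_neutral)["dgms"]
dgms_happy = ripser(mouth_happy)["dgms"]

# Topological distance between the H1 diagrams; a larger value suggests a
# larger structural change in this face subsection between the two emotions.
print(bottleneck(dgms_neutral[1], dgms_happy[1]))
```
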



Related research

Over the past few years, adversarial training has become an extremely active research topic and has been successfully applied to various Artificial Intelligence (AI) domains. As a potentially crucial technique for the development of the next generation of emotional AI systems, we herein provide a comprehensive overview of the application of adversarial training to affective computing and sentiment analysis. Various representative adversarial training algorithms are explained and discussed accordingly, aimed at tackling diverse challenges associated with emotional AI systems. Further, we highlight a range of potential future research directions. We expect that this overview will help facilitate the development of adversarial training for affective computing and sentiment analysis in both the academic and industrial communities.
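
As a concrete instance of the kind of algorithm such an overview covers, the sketch below shows one FGSM-style adversarial training step for a generic classifier, written with PyTorch; the model, data, and hyperparameters are placeholders, and the example is not tied to any specific emotional-AI system discussed in the paper.

```python
# Sketch of FGSM-based adversarial training, one standard algorithm in this
# family; `model`, `x`, `y`, and `optimizer` are placeholders supplied by the
# caller (e.g., an emotion classifier over acoustic or text features).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=0.01):
    # Craft adversarial inputs with the fast gradient sign method.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_for_grad = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss_for_grad, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).detach()

    # Optimize on a mix of clean and adversarial examples.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```
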
Data credibility is a crucial issue in mobile crowd sensing (MCS) and, more generally, people-centric Internet of Things (IoT). Prior work takes approaches such as incentive mechanism design and data mining to address this issue, while overlooking the power of crowds itself, which we exploit in this paper. In particular, we propose a cross validation approach which seeks a validating crowd to verify the data credibility of the original sensing crowd, and uses the verification result to reshape the original sensing dataset into a more credible posterior belief of the ground truth. Following this approach, we design a specific cross validation mechanism, which integrates four sampling techniques with a privacy-aware competency-adaptive push (PACAP) algorithm and is applicable to time-sensitive and quality-critical MCS applications. It does not require redesigning a new MCS system but rather functions as a lightweight plug-in, making it easier for practical adoption. Our results demonstrate that the proposed mechanism substantially improves data credibility in terms of both reinforcing obscure truths and scavenging hidden truths.
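
The following toy example illustrates only the high-level idea of reshaping a sensed claim into a posterior belief using a validating crowd's votes, here with a simple Beta-Bernoulli update; the paper's actual mechanism (the four sampling techniques and the PACAP algorithm) is not reproduced.

```python
# Toy Beta-Bernoulli illustration: a binary claim from the sensing crowd is
# reshaped into a posterior belief using confirm/reject votes from the
# validating crowd. Not the paper's mechanism; for intuition only.
def posterior_belief(confirm_votes, reject_votes, prior_alpha=1.0, prior_beta=1.0):
    alpha = prior_alpha + confirm_votes
    beta = prior_beta + reject_votes
    return alpha / (alpha + beta)  # posterior mean probability the claim is true

# A claim with a weak uniform prior, then verified by 7 confirmations and
# 1 rejection from the validating crowd.
print(posterior_belief(confirm_votes=7, reject_votes=1))  # -> 0.8
```
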
People naturally bring their prior beliefs to bear on how they interpret new information, yet few formal models account for the influence of users' prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs and show how this model provides a guide for improving visualization evaluation. In a first study, we show how applying a Bayesian cognition model to a simple visualization scenario indicates that people's judgments are consistent with the hypothesis that they are doing approximate Bayesian inference. In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people's subjective distributions, and to different datasets. We find that people do not behave consistently with Bayesian predictions for large-sample-size datasets, and this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including visualizations of uncertainty.
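
As a minimal illustration of the normative benchmark, the example below updates a hypothetical viewer's prior belief about a proportion (represented as a Beta distribution) with counts shown in a visualization, using SciPy; the numbers are invented and the elicitation details from the studies are omitted.

```python
# Hypothetical example of the normative Bayesian benchmark: a viewer's prior
# belief about a proportion, represented as a Beta distribution, updated with
# counts presented by a visualization.
from scipy import stats

prior = stats.beta(a=2, b=8)                # prior belief centered near 0.2
successes, n = 30, 100                      # data shown in the visualization
posterior = stats.beta(a=2 + successes, b=8 + (n - successes))

# A perfectly Bayesian viewer would shift from the prior mean toward the data.
print(prior.mean(), posterior.mean())       # 0.2 -> ~0.29
```
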
Natural language interaction with data visualization tools often involves the use of vague subjective modifiers in utterances such as "show me the sectors that are performing" and "where is a good neighborhood to buy a house?". Interpreting these modifiers is difficult for these tools because their meanings lack clear semantics and are defined in part by context and personal user preferences. This paper presents a system called Sentifiers that makes a first step toward better understanding these vague predicates. The algorithm employs word co-occurrence and sentiment analysis to determine which data attributes and filter ranges to associate with the vague predicates. The provenance results from the algorithm are exposed to the user as interactive text that can be repaired and refined. We conduct a qualitative evaluation of the Sentifiers system that indicates the usefulness of the interface as well as opportunities for better supporting subjective utterances in visual analysis tasks through natural language.
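
The sketch below conveys only the general idea of turning a vague modifier into a filter range, by choosing a quantile cutoff on a numeric attribute according to the modifier's sentiment polarity; the attribute-association step via word co-occurrence is assumed to have run already, and this is not the Sentifiers algorithm itself.

```python
# Hedged sketch: resolve a vague modifier to a filter range on a numeric
# attribute using its sentiment polarity and a quantile cutoff. Not the
# Sentifiers algorithm; the attribute choice is taken as given.
import pandas as pd

def vague_filter(df, attribute, polarity, quantile=0.75):
    """Positive modifiers ('performing', 'good') keep the top quantile of the
    attribute; negative ones keep the bottom quantile."""
    if polarity >= 0:
        cutoff = df[attribute].quantile(quantile)
        return df[df[attribute] >= cutoff], (cutoff, df[attribute].max())
    cutoff = df[attribute].quantile(1 - quantile)
    return df[df[attribute] <= cutoff], (df[attribute].min(), cutoff)

# "show me the sectors that are performing" -> positive polarity on 'growth'.
sectors = pd.DataFrame({"sector": list("ABCDE"),
                        "growth": [0.02, 0.10, -0.03, 0.07, 0.01]})
subset, filter_range = vague_filter(sectors, "growth", polarity=+1)
print(subset, filter_range)
```
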
Ke Xu, Yun Wang, Leni Yang (2019)
Detecting and analyzing potential anomalous performances in cloud computing systems is essential for avoiding losses to customers and ensuring the efficient operation of the systems. To this end, a variety of automated techniques have been developed to identify anomalies in cloud computing performance. These techniques are usually adopted to track the performance metrics of the system (e.g., CPU, memory, and disk I/O), represented by a multivariate time series. However, given the complex characteristics of cloud computing data, the effectiveness of these automated methods is affected. Thus, substantial human judgment on the automated analysis results is required for anomaly interpretation. In this paper, we present a unified visual analytics system named CloudDet to interactively detect, inspect, and diagnose anomalies in cloud computing systems. A novel unsupervised anomaly detection algorithm is developed to identify anomalies based on the specific temporal patterns of the given metrics data (e.g., the periodic pattern), the results of which are visualized in our system to indicate the occurrences of anomalies. Rich visualization and interaction designs are used to help understand the anomalies in the spatial and temporal context. We demonstrate the effectiveness of CloudDet through a quantitative evaluation, two case studies with real-world data, and interviews with domain experts.
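
As a rough illustration of anomaly detection that exploits periodic temporal patterns, the sketch below removes a per-phase periodic baseline from a single metric series and flags residuals with extreme z-scores; it is a simplification for intuition, not CloudDet's unsupervised algorithm.

```python
# Illustrative sketch (not CloudDet's algorithm): subtract a per-phase periodic
# baseline from a metric series and flag points with extreme residual z-scores.
import numpy as np

def periodic_anomalies(series, period, z_thresh=3.0):
    series = np.asarray(series, dtype=float)
    phases = np.arange(len(series)) % period
    # Baseline = mean of all samples sharing the same phase of the cycle.
    baseline = np.array([series[phases == p].mean() for p in phases])
    residual = series - baseline
    z = (residual - residual.mean()) / residual.std()
    return np.flatnonzero(np.abs(z) > z_thresh)

# Hypothetical CPU-utilization trace with a daily cycle and one injected spike.
t = np.arange(7 * 24)
cpu = 50 + 10 * np.sin(2 * np.pi * t / 24)
cpu[100] += 40
print(periodic_anomalies(cpu, period=24))  # -> [100]
```
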