AI systems have seen significant adoption in various domains. At the same time, further adoption in some domains is hindered by the inability to fully trust that an AI system will not harm a human. Besides concerns for fairness and privacy, transparency and explainability are key to developing trust in AI systems. As stated in describing trustworthy AI, trust comes through understanding: it is crucial to understand how AI-led decisions are made and which determining factors were included. The subarea of explaining AI systems has come to be known as XAI. Multiple aspects of an AI system can be explained; these include biases the data might have, lack of data points in a particular region of the example space, fairness of gathering the data, feature importances, etc. Beyond these, however, it is critical to have human-centered explanations that are directly related to decision-making, similar to how a domain expert makes decisions based on domain knowledge, including well-established, peer-validated explicit guidelines. To understand and validate an AI system's outcomes (such as classifications, recommendations, and predictions), and thereby develop trust in the AI system, it is necessary to involve explicit domain knowledge that humans understand and use.
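The abstract argues for this at a conceptual level; as one hedged illustration of what involving explicit domain knowledge could mean in practice, a classifier's output might be cross-checked against a peer-validated guideline rule, as in the toy sketch below. The rule, thresholds, and function names are illustrative assumptions, not the paper's method.

```python
# Toy sketch: validate a model's prediction against an explicit, human-readable
# guideline rule so a domain expert can see why the outcome is (not) trusted.
# The rule and thresholds are hypothetical, loosely styled after clinical guidelines.

def guideline_rule(features):
    """Explicit domain rule: flag 'high risk' if both measurements exceed thresholds."""
    return "high risk" if features["systolic_bp"] > 160 and features["age"] > 65 else "low risk"

def explain(prediction, features):
    """Compare a model's prediction with the explicit guideline and report agreement."""
    expected = guideline_rule(features)
    if prediction == expected:
        return f"Prediction '{prediction}' agrees with the guideline ({expected})."
    return f"Prediction '{prediction}' conflicts with the guideline ({expected}); expert review needed."

# A (hypothetical) model output checked against the explicit knowledge.
print(explain("high risk", {"systolic_bp": 172, "age": 70}))
print(explain("high risk", {"systolic_bp": 118, "age": 30}))
```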
As the role of online platforms has become increasingly prominent for communication, toxic behaviors, such as cyberbullying and harassment, have been rampant in the last decade. On the other hand, online toxicity is multi-dimensional and sensitive in nature, which makes its detection challenging. As the impact of exposure to online toxicity can lead to serious implications for individuals and communities, reliable models and algorithms are required for detecting and understanding such communications. In this paper, we define toxicity to provide a foundation drawing social theories. Then, we provide an approach that identifies multiple dimensions of toxicity and incorporates explicit knowledge in a statistical learning algorithm to resolve ambiguity across such dimensions.
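The abstract does not spell out an implementation, but a minimal sketch of the general pattern, fusing lexicon-derived knowledge features (grouped by toxicity dimension) with learned text features in one classifier, might look as follows. The lexicon entries, feature choices, and logistic-regression model are assumptions for illustration, not the paper's exact method.

```python
# Minimal sketch: combine explicit knowledge (a toy toxicity lexicon grouped by
# dimension) with statistical text features to disambiguate toxicity dimensions.
# Lexicon contents and the classifier choice are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy lexicon mapping each toxicity dimension to indicative cue words (assumed).
LEXICON = {
    "harassment": {"loser", "pathetic", "shut"},
    "threat": {"hurt", "kill", "find"},
}

def knowledge_features(text):
    """Count lexicon hits per dimension -- the explicit-knowledge signal."""
    tokens = set(text.lower().split())
    return [len(tokens & cues) for cues in LEXICON.values()]

texts = ["you are a pathetic loser", "i will find you and hurt you", "great game last night"]
labels = ["harassment", "threat", "none"]

vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(texts).toarray()          # data-driven features
X_know = np.array([knowledge_features(t) for t in texts])   # knowledge-driven features
X = np.hstack([X_text, X_know])                             # early fusion of both signals

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```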
Contextualized entity representations learned by state-of-the-art transformer-based language models (TLMs) such as BERT, GPT, and T5 leverage the attention mechanism to learn the data context from the training corpus. However, these models do not use the knowledge context: semantics about entities and their relationships with neighboring entities in knowledge graphs. We propose a novel and effective technique to infuse knowledge context from multiple knowledge graphs for conceptual and ambiguous entities into TLMs during fine-tuning. It projects knowledge graph embeddings into a homogeneous vector space, introduces new token types for entities, aligns entity position ids, and employs a selective attention mechanism. We take BERT as the baseline model and implement Knowledge-Infused BERT (KI-BERT) by infusing knowledge context from ConceptNet and WordNet; it significantly outperforms BERT and other recent knowledge-aware BERT variants such as ERNIE, SenseBERT, and BERT_CS on eight subtasks of the GLUE benchmark. The KI-BERT-base model even significantly outperforms BERT-large on domain-specific tasks such as SciTail and academic subsets of QQP, QNLI, and MNLI.
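As a rough illustration of the infusion mechanism described above (projection into the model's vector space, a distinct token type for entities, and position ids aligned to entity mentions), a minimal PyTorch sketch might look like the following. The dimensions, the single linear projection, and the way entity embeddings are appended are assumptions for illustration, not the exact KI-BERT implementation.

```python
# Sketch: project KG entity embeddings into the TLM's hidden space and append
# them to the token sequence with their own token-type id and aligned position ids.
# Dimensions and the single linear projection are illustrative assumptions.
import torch
import torch.nn as nn

hidden_dim, kg_dim = 768, 300           # TLM hidden size vs. KG embedding size (assumed)
seq_len, num_entities = 12, 2

token_embeds = torch.randn(1, seq_len, hidden_dim)     # contextual token embeddings
kg_embeds = torch.randn(1, num_entities, kg_dim)       # ConceptNet/WordNet entity vectors

project = nn.Linear(kg_dim, hidden_dim)                # homogeneous vector-space projection
entity_embeds = project(kg_embeds)

# Append entities after the sentence; give them a distinct token-type id and
# position ids aligned with the token positions where the entities are mentioned.
inputs_embeds = torch.cat([token_embeds, entity_embeds], dim=1)
token_type_ids = torch.cat([torch.zeros(1, seq_len, dtype=torch.long),
                            torch.full((1, num_entities), 2, dtype=torch.long)], dim=1)
entity_mention_positions = torch.tensor([[3, 7]])      # assumed mention offsets
position_ids = torch.cat([torch.arange(seq_len).unsqueeze(0),
                          entity_mention_positions], dim=1)

# A selective attention mask could then restrict each entity token to attend
# only to its mention span before feeding inputs_embeds into the transformer.
print(inputs_embeds.shape, token_type_ids.shape, position_ids.shape)
```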
We discuss how, over the last 30 to 50 years, Artificial Intelligence (AI) systems that focused only on data have been handicapped, and how knowledge has been critical in developing smarter, more intelligent, and more effective systems. In fact, the vast progress in AI can be viewed in terms of the three waves of AI identified by DARPA. During the first wave, handcrafted knowledge was the centerpiece, while during the second wave, data-driven approaches supplanted knowledge. Now we see a strong role and resurgence of knowledge, fueling major breakthroughs in the third wave of AI and underpinning future intelligent systems as they attempt human-like decision making and seek to become trusted assistants and companions for humans. We find wider availability of knowledge created from diverse sources, by manual to automated means, both through repurposing and through extraction. Using knowledge together with statistical learning is becoming increasingly indispensable in making AI systems more transparent and auditable. We draw a parallel with the role of knowledge and experience in human intelligence based on cognitive science, and discuss emerging neuro-symbolic or hybrid AI systems in which knowledge is the critical enabler for combining the capabilities of data-intensive statistical AI systems with those of symbolic AI systems, resulting in more capable AI systems that support more human-like intelligence.
The unprecedented growth of Internet users has resulted in an abundance of unstructured information on social media, including health forums where patients request health-related information or opinions from other users. Previous studies have shown that online peer support has limited effectiveness without expert intervention. Therefore, a system capable of assessing the severity of a health state from a patient's social media posts can help health professionals (HPs) prioritize users' posts. In this study, we inspect the efficacy of different aspects of Natural Language Understanding (NLU) in identifying the severity of a user's health state with respect to two perspectives (tasks): (a) Medical Condition (i.e., Recover, Exist, Deteriorate, Other) and (b) Medication (i.e., Effective, Ineffective, Serious Adverse Effect, Other) in online health communities. We propose a multiview learning framework that models both textual content and contextual information to assess the severity of the user's health state. Specifically, our model utilizes NLU views such as sentiment, emotions, personality, and use of figurative language to extract contextual information. The diverse NLU views demonstrate their effectiveness on both tasks as well as on individual diseases in assessing a user's health.
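The abstract describes the framework only at a high level; one common way to realize such multiview learning is late fusion, where each view gets its own lightweight model and their class probabilities are combined, as in the sketch below. The toy view extractors, the example posts, and the logistic-regression learners are placeholder assumptions, not the paper's architecture.

```python
# Sketch: late-fusion multiview learning -- one lightweight model per view
# (text, sentiment, emotion), with predictions combined by averaging class
# probabilities. View extractors are toy heuristics purely for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = ["feeling better after the new medication",
         "the pain is getting worse and i am scared",
         "not sure this drug does anything at all"]
labels = np.array(["Recover", "Deteriorate", "Other"])

def sentiment_view(p):   # toy sentiment score per post (assumed)
    words = p.split()
    return [sum(w == "better" for w in words) - sum(w in {"worse", "pain"} for w in words)]

def emotion_view(p):     # toy fear/anxiety indicator (assumed)
    return [sum(w in {"scared", "worried"} for w in p.split())]

views = {
    "text": TfidfVectorizer().fit_transform(posts).toarray(),
    "sentiment": np.array([sentiment_view(p) for p in posts]),
    "emotion": np.array([emotion_view(p) for p in posts]),
}

# Train one classifier per view, then average their class probabilities.
models = {name: LogisticRegression(max_iter=1000).fit(X, labels) for name, X in views.items()}
probs = np.mean([models[name].predict_proba(views[name]) for name in views], axis=0)
print(models["text"].classes_[np.argmax(probs, axis=1)])
```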
With the proliferation of social media over the last decade, determining people's attitudes with respect to a specific topic, document, interaction, or event has fueled research interest in natural language processing and introduced a new channel called sentiment and emotion analysis. For instance, businesses routinely look to develop systems that automatically understand their customer conversations by identifying the relevant content to enhance product marketing and reputation management. Previous efforts to assess people's sentiment on Twitter have suggested that Twitter may be a valuable resource for studying political sentiment and that it reflects the offline political landscape. According to a Pew Research Center report, in January 2016, 44 percent of US adults reported having learned about the presidential election through social media. Furthermore, 24 percent reported using the social media posts of the two candidates as a source of news and information, more than the 15 percent who used both candidates' websites or emails combined. The first presidential debate between Trump and Hillary Clinton was the most tweeted debate ever, with 17.1 million tweets.
The World Wide Web continues to evolve and serve as the infrastructure for carrying massive amounts of multimodal and multisensory observations. These observations capture various situations pertinent to people's needs and interests, along with all their idiosyncrasies. To support human-centered computing that empowers people to make better and timely decisions, we look toward computation inspired by human perception and cognition. Toward this goal, we discuss the computing paradigms of semantic computing and cognitive computing, and an emerging aspect of computing that we call perceptual computing. In our view, these offer a continuum for making the most of the vast, growing, and diverse data pertinent to human needs and interests. We propose details of perceptual computing characterized by interpretation and exploration operations comparable to the interleaving of bottom and top brain processing. This article consists of two parts. First, we describe semantic computing, cognitive computing, and perceptual computing to lay out their distinctions while acknowledging their complementary capabilities. We then provide a conceptual overview of the newest of these three paradigms, perceptual computing. For further insight, we focus on an application scenario of asthma management that converts massive, heterogeneous, and multimodal (big) data into actionable information, or smart data.