
Smart and Context-Aware System employing Emotions Recognition

Published by: Harsh Sharma
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





People can make sensible inferences about other people's emotional states through empathy, common-sense knowledge, and the ability to reason visually. Over the years, much research has been done on giving machines the ability to detect human emotions and on developing automated emotional-intelligence systems. A computer's ability to detect human emotions is gaining popularity for building affect-sensitive systems such as learning environments, health-care systems, and other real-world settings. Improving people's health has likewise been the subject of much research. This paper describes, as a proof of concept, the acquisition and handling of emotions in intelligent health settings. The authors aim for an unobtrusive, user-friendly approach in which the system derives emotional scenarios, establishing a functional, non-intrusive, and emotion-aware environment where users go about their normal activities naturally and notice the program only when mood-lifting services are delivered. The context-sensitive system interacts with users to detect and differentiate emotions through facial expressions or speech recognition, and then makes music recommendations and applies mood-color treatments via services installed on their IoT devices.
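For concreteness, here is a minimal Python sketch of the detect-and-respond loop the abstract describes. The detector stub, playlist names, and color mapping are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the detect -> respond loop described above.
# detect_emotion() and the playlist/color mappings are illustrative
# stand-ins, not the authors' actual classifiers or IoT API.
import random  # placeholder for real sensor-driven detectors

MOOD_COLORS = {            # mood -> ambient light color (illustrative)
    "sad":   (70, 130, 180),   # calming steel blue
    "angry": (144, 238, 144),  # soothing light green
    "happy": (255, 214, 90),   # warm yellow
}

MOOD_PLAYLISTS = {
    "sad":   "uplifting-acoustic",
    "angry": "ambient-calm",
    "happy": "feel-good-pop",
}

def detect_emotion(frame=None, audio=None) -> str:
    """Stand-in for the facial-expression / speech classifiers."""
    return random.choice(list(MOOD_COLORS))

def respond(emotion: str) -> None:
    """Trigger the two services the paper mentions: music and light color."""
    print(f"detected={emotion}: play '{MOOD_PLAYLISTS[emotion]}' playlist")
    print(f"detected={emotion}: set lights to RGB {MOOD_COLORS[emotion]}")

if __name__ == "__main__":
    respond(detect_emotion())
```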




Read also

Suyoun Kim, Florian Metze (2018)
Existing speech recognition systems are typically built at the sentence level, although it is known that dialog context, e.g. higher-level knowledge that spans sentences or speakers, can help the processing of long conversations. The recent progress in end-to-end speech recognition systems promises to integrate all available information (e.g. acoustic and language resources) into a single model, which is then jointly optimized. It seems natural that such dialog-context information should also be integrated into end-to-end models to further improve recognition accuracy. In this work, we present a dialog-context-aware speech recognition model, which explicitly uses context information beyond the sentence level, in an end-to-end fashion. Our dialog-context model captures a history of sentence-level context so that the whole system can be trained with dialog-context information in an end-to-end manner. We evaluate our proposed approach on the Switchboard conversational speech corpus and show that our system outperforms a comparable sentence-level end-to-end speech recognition system.
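The core idea, conditioning sentence-level decoding on a summary of the preceding dialog, can be sketched as follows. The layer sizes and the simple concatenation fusion are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: condition per-sentence decoding on a vector summarizing the
# dialog history. Dimensions and the concat fusion are assumptions.
import torch
import torch.nn as nn

class DialogContextASR(nn.Module):
    def __init__(self, feat_dim=80, hid=256, vocab=1000):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hid, batch_first=True)
        self.ctx_proj = nn.Linear(hid, hid)       # embeds dialog history
        self.decoder = nn.Linear(2 * hid, vocab)  # acoustic + context -> tokens

    def forward(self, feats, ctx):
        # feats: (B, T, feat_dim) acoustic frames; ctx: (B, hid) history summary
        enc, _ = self.encoder(feats)
        ctx = self.ctx_proj(ctx).unsqueeze(1).expand(-1, enc.size(1), -1)
        return self.decoder(torch.cat([enc, ctx], dim=-1))  # (B, T, vocab)

model = DialogContextASR()
logits = model(torch.randn(2, 50, 80), torch.randn(2, 256))
print(logits.shape)  # torch.Size([2, 50, 1000])
```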
Many successful games rely heavily on data analytics to understand players and inform design. Popular methodologies focus on machine learning and statistical analysis of aggregated data. While effective in extracting information regarding player action, much of the context regarding when and how those actions occurred is lost. Qualitative methods allow researchers to examine context and derive meaningful explanations about the goals and motivations behind player behavior, but are difficult to scale. In this paper, we build on previous work by combining two existing methodologies: Interactive Behavior Analytics (IBA) and sequence analysis (SA), in order to create a novel, mixed methods, human-in-the-loop data analysis methodology that uses behavioral labels and visualizations to allow analysts to examine player behavior in a way that is context sensitive, scalable, and generalizable. We present the methodology along with a case study demonstrating how it can be used to analyze behavioral patterns of teamwork in the popular multiplayer game Defense of the Ancients 2 (DotA 2).
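As a rough illustration of the sequence-analysis half of such a pipeline, the sketch below mines frequent n-grams of behavioral labels from labeled play sessions. The labels and sessions are invented, not data from the DotA 2 case study.

```python
# Toy sequence analysis: count frequent bigrams of behavioral labels.
# Sessions and labels are invented for illustration.
from collections import Counter

sessions = [
    ["farm", "group_up", "push", "retreat", "group_up", "push"],
    ["farm", "farm", "group_up", "push", "teamfight"],
]

def ngrams(seq, n=2):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

counts = Counter(g for s in sessions for g in ngrams(s))
for pattern, freq in counts.most_common(3):
    print(pattern, freq)  # e.g. ('group_up', 'push') 3
```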
We present EmotiCon, a learning-based algorithm for context-aware perceived human emotion recognition from videos and images. Motivated by Frege's Context Principle from psychology, our approach combines three interpretations of context for emotion recognition. Our first interpretation is based on using multiple modalities (e.g., faces and gaits) for emotion recognition. For the second interpretation, we gather semantic context from the input image and use a self-attention-based CNN to encode this information. Finally, we use depth maps to model the third interpretation, related to socio-dynamic interactions and proximity among agents. We demonstrate the efficiency of our network through experiments on EMOTIC, a benchmark dataset. We report an Average Precision (AP) score of 35.48 across 26 classes, an improvement of 7-8 over prior methods. We also introduce a new dataset, GroupWalk, a collection of videos of people walking captured in multiple real-world settings. We report an AP of 65.83 across 4 categories on GroupWalk, which is also an improvement over prior methods.
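A schematic sketch of the three-branch fusion idea follows: each interpretation of context is encoded separately and the features are combined before classification. The feature dimensions and plain concatenation are assumptions for illustration, not EmotiCon's actual network.

```python
# Sketch of three-branch context fusion: per-branch features are encoded
# and concatenated before classification. Sizes are assumptions.
import torch
import torch.nn as nn

class ThreeContextFusion(nn.Module):
    def __init__(self, d=128, classes=26):
        super().__init__()
        self.face_gait = nn.Linear(512, d)  # modality branch (faces, gaits)
        self.semantic = nn.Linear(512, d)   # semantic scene-context branch
        self.depth = nn.Linear(512, d)      # socio-dynamic / proximity branch
        self.head = nn.Linear(3 * d, classes)

    def forward(self, f, s, dp):
        fused = torch.cat(
            [self.face_gait(f), self.semantic(s), self.depth(dp)], dim=-1)
        return self.head(fused)  # logits over the 26 EMOTIC classes

net = ThreeContextFusion()
print(net(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 512)).shape)
```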
Social norms characterize collective and acceptable group conduct in human society. Furthermore, some social norms emerge from interactions of agents or humans. To achieve agent autonomy and make norm satisfaction explainable, we incorporate emotions into the normative reasoning process, which evaluates whether to comply with or violate a norm. Specifically, before selecting an action to execute, an agent observes the environment and infers the state and consequences, given its internal states, of satisfying or violating a social norm. Both norm satisfaction and violation provoke further emotions, and these subsequent emotions affect norm enforcement. This paper investigates how modeling emotions affects the emergence and robustness of social norms via social simulation experiments. We find that an ability in agents to consider emotional responses to the outcomes of norm satisfaction and violation (1) promotes norm compliance and (2) improves societal welfare.
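The feedback loop described here can be illustrated with a toy simulation in which emotional responses to compliance and violation shift each agent's disposition toward the norm. The update rule and parameters below are invented for illustration, not the paper's model.

```python
# Toy norm-emergence simulation: emotional feedback (pride after
# compliance, guilt/sanction after violation) nudges compliance upward.
# Update rule and magnitudes are illustrative assumptions.
import random

class Agent:
    def __init__(self):
        self.comply_prob = 0.5  # internal disposition toward the norm

    def act(self) -> bool:
        complied = random.random() < self.comply_prob
        delta = 0.02 if complied else 0.04  # emotional feedback strength
        self.comply_prob = min(1.0, self.comply_prob + delta)
        return complied

agents = [Agent() for _ in range(100)]
for step in range(200):
    compliance = sum(a.act() for a in agents) / len(agents)
print(f"final compliance rate: {compliance:.2f}")
```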
Today, children are increasingly connected to the Internet and consume content and services through various means. It has been a challenge for less tech-savvy parents to protect children from harmful content and services. The Internet of Things (IoT) has made the situation much worse, as IoT devices allow children to connect to the Internet in novel ways (e.g., connected refrigerators, TVs, and so on). In this paper, we propose mySafeHome, an approach which utilises family dynamics to provide a more natural and intuitive access control mechanism to protect children from harmful content and services in the context of IoT. In mySafeHome, access control dynamically adapts based on the physical distance between family members. For example, a particular type of content can only be consumed through the TV by children if the parents are in the same room (or within hearing distance). mySafeHome also allows parents to assess a given content by themselves. Our approach further aims to create granular levels of access control (e.g., block or limit certain content, features, and services on certain devices when the parents are not in the vicinity). We developed a prototype using OpenHAB and several smart home devices to demonstrate the proposed approach. We believe that our approach also facilitates better relationships between family members. A demo can be viewed here: http://safehome.technology/demo.
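A minimal sketch of the distance-based policy follows: content plays for a child only when a parent is within a threshold distance. The positions, hearing-distance threshold, and rating scheme are illustrative stand-ins, not the OpenHAB prototype.

```python
# Sketch of distance-adaptive access control. All names and values are
# hypothetical placeholders for illustration.
import math

HEARING_DISTANCE_M = 5.0

def within_range(child_pos, parent_pos, limit=HEARING_DISTANCE_M):
    return math.dist(child_pos, parent_pos) <= limit

def can_play(content_rating, child_pos, parent_positions):
    if content_rating == "all_ages":
        return True  # unrestricted content needs no supervision
    return any(within_range(child_pos, p) for p in parent_positions)

print(can_play("parental_guidance", (0, 0), [(3, 4)]))  # True, parent 5 m away
print(can_play("parental_guidance", (0, 0), [(6, 8)]))  # False, parent 10 m away
```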