
An implementation of an imitation game with ASD children to learn nursery rhymes

 Added by Sao Mai Nguyen
Publication date: 2020
Language: English





Previous studies have suggested that being imitated by an adult is an effective intervention for children with autism and developmental delay. The purpose of this study is to investigate whether an imitation game with a robot can arouse interest in children and constitute an effective tool for use in clinical activities. In this paper, we describe the design of our nursery rhyme imitation game, its implementation based on RGB image pose recognition, and the preliminary tests we performed.
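
The abstract does not detail the pose-matching step; a minimal sketch of how RGB-based pose recognition could drive such an imitation game, assuming 2D joint keypoints have already been extracted by a pose estimator, might look like the following (the function names and similarity threshold are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def normalize_pose(keypoints):
    """Center 2D keypoints on their mean and scale them to unit norm.

    keypoints: (num_joints, 2) array of (x, y) positions from a pose estimator.
    """
    centered = keypoints - keypoints.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / scale if scale > 0 else centered

def pose_similarity(reference, observed):
    """Cosine similarity between two normalized poses (1.0 means identical)."""
    a = normalize_pose(reference).ravel()
    b = normalize_pose(observed).ravel()
    return float(np.dot(a, b))

def is_imitated(reference, observed, threshold=0.9):
    """Decide whether the child's pose matches the rhyme's target pose."""
    return pose_similarity(reference, observed) >= threshold
```

In a session along these lines, the robot would demonstrate the target pose for the current line of the rhyme, capture the child's pose from the RGB stream, and give feedback based on this similarity score.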



With the aim of improving the communication and social abilities of people with ASD, we propose to extend the paradigm of robot-based imitation games to ASD teenagers. In this paper, we present an interaction scenario adapted to ASD teenagers, propose a computational architecture using the Openpose machine learning algorithm for human pose detection, and present the results of our basic testing of the scenario with human caregivers. These results are preliminary due to the limited number of sessions (1) and participants (4). They include a technical assessment of the performance of Openpose, as well as a preliminary user study to confirm that our game scenario could elicit the expected response from subjects.
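
The abstract does not specify how Openpose performance was assessed; one plausible way to quantify per-session detection quality, assuming Openpose's BODY_25 output of one (x, y, confidence) row per joint, is sketched below (the confidence threshold is an assumption, not a value from the paper):

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.3  # assumed cutoff for a usable joint detection

def usable_joints(keypoints, threshold=CONFIDENCE_THRESHOLD):
    """Indices of joints detected above the confidence threshold.

    keypoints: (25, 3) array of (x, y, confidence), one row per BODY_25 joint.
    """
    return np.flatnonzero(keypoints[:, 2] >= threshold)

def detection_rate(frames):
    """Average fraction of joints reliably detected over a session's frames."""
    rates = [len(usable_joints(kp)) / kp.shape[0] for kp in frames]
    return float(np.mean(rates)) if rates else 0.0
```
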
Automated driving system-dedicated vehicles (ADS-DVs), specially designed for people with various disabilities, can help improve their mobility. However, research on autonomous vehicles (AVs) for people with cognitive disabilities, especially Autism Spectrum Disorder (ASD), is limited. Thus, in this study, we focused on the challenge we framed as: how might we design an ADS-DV that benefits people with ASD and their caregivers? To address this design challenge, we followed the human-centered design process. First, we conducted user research with caregivers of people with ASD. Second, we identified their user needs, including safety, monitoring and updates, individual preferences, comfort, trust, and reliability. Third, we generated a large number of ideas through brainstorming and affinity diagrams, based on which we proposed an ADS-DV prototype with a mobile application and an interior design. Fourth, we tested both the low-fidelity and high-fidelity prototypes to identify and fix potential issues. Our preliminary results showed that such an ADS-DV could improve the mobility of people with ASD without causing worry.
Ziyu Yao, Yiqi Tang, Wen-tau Yih (2020)
Despite widely successful applications, bootstrapping and fine-tuning semantic parsers remains a tedious process, with challenges such as costly data annotation and privacy risks. In this paper, we suggest an alternative, human-in-the-loop methodology for learning semantic parsers directly from users. A semantic parser should be introspective about its uncertainties and prompt for a user demonstration when uncertain. In doing so, it also imitates the user's behavior and continues improving itself autonomously, with the hope that it may eventually become as good as the user at interpreting their questions. To combat the sparsity of demonstrations, we propose a novel annotation-efficient imitation learning algorithm, which iteratively collects new datasets by mixing demonstrated states and confident predictions and re-trains the semantic parser in a Dataset Aggregation fashion (Ross et al., 2011). We provide a theoretical analysis of its cost bound and also empirically demonstrate its promising performance on the text-to-SQL problem. Code will be available at https://github.com/sunlab-osu/MISP.
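
As a rough illustration of the annotation-efficient, Dataset Aggregation-style loop described above (all interfaces here are hypothetical for the sketch and are not the MISP codebase's API):

```python
def train_interactive_parser(parser, user, initial_data, questions,
                             rounds=5, confidence_threshold=0.8):
    """DAgger-style loop mixing user demonstrations with confident predictions.

    parser: assumed to expose fit(examples) and predict(q) -> (parse, confidence)
    user:   assumed oracle exposing demonstrate(q) -> correct parse
    """
    aggregated = list(initial_data)
    parser.fit(aggregated)
    for _ in range(rounds):
        new_examples = []
        for question in questions:
            parse, confidence = parser.predict(question)
            if confidence < confidence_threshold:
                # Uncertain: prompt the user for a demonstration.
                parse = user.demonstrate(question)
            new_examples.append((question, parse))
        # Dataset aggregation: retrain on everything collected so far.
        aggregated.extend(new_examples)
        parser.fit(aggregated)
    return parser
```
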
Active learning agents typically employ a query selection algorithm that considers only the agent's learning objectives. However, this may be insufficient in more realistic human domains. This work uses imitation learning to enable an agent in a constrained environment to concurrently reason about both its internal learning goals and externally imposed environmental constraints, all within its objective function. Experiments are conducted on a concept learning task to test generalization of the proposed algorithm to different environmental conditions and to analyze how time and resource constraints impact the efficacy of solving the learning problem. Our findings show that the environmentally-aware learning agent statistically outperforms all other active learners explored under most of the constrained conditions. A key implication is the adaptation of active learning agents to more realistic human environments, where constraints are often externally imposed on the learner.
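
A toy sketch of how such an environmentally-aware objective might combine learning value with externally imposed costs (the scoring functions, weight, and budget handling are illustrative assumptions, not the paper's formulation):

```python
def select_query(candidates, informativeness, cost, budget, weight=1.0):
    """Pick the query that maximizes learning gain minus weighted external cost.

    candidates:      iterable of candidate queries
    informativeness: function q -> expected learning gain (e.g., uncertainty)
    cost:            function q -> environmental cost (time, resources)
    budget:          remaining resource budget; infeasible queries are skipped
    """
    feasible = [q for q in candidates if cost(q) <= budget]
    if not feasible:
        return None
    return max(feasible, key=lambda q: informativeness(q) - weight * cost(q))
```
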
The work presented in this paper aims to explore how, and to what extent, an adaptive robotic coach can provide extra motivation to adhere to long-term rehabilitation and help fill the coaching gap that occurs during repetitive solo practice in high-performance sport. Adapting the behavior of a social robot to a specific user, using reinforcement learning (RL), could be a way of increasing adherence to an exercise routine in both domains. The requirements-gathering phase is underway and is presented in this paper, along with the rationale for using RL in this context.
