
A Robust and Accurate Deep Learning based Pattern Recognition Framework for Upper Limb Prosthesis using sEMG

Published by Sidharth Pancholi
Publication date: 2021
Research field: Electronic engineering
Paper language: English





In EMG-based pattern recognition (EMG-PR), deep learning techniques have become prominent for their ability to extract discriminant features from large data-sets without manual feature engineering. Moreover, the performance of traditional machine learning methods is limited beyond a certain number of classes and degrades over time. In this paper, an accurate, robust, and fast convolutional neural network (CNN) based framework for EMG pattern identification is presented. To assess the performance of the proposed system, five publicly available benchmark data-sets of upper limb activities were used. These data-sets contain 49 to 52 upper limb motions (NinaPro DB1, NinaPro DB2, and NinaPro DB3), data with force variation, and data with arm-position variation, for both intact and amputated subjects. Classification accuracies of 91.11% (53 classes), 89.45% (49 classes), 81.67% (49 classes, amputees), 95.67% (6 classes with force variation), and 99.11% (8 classes with arm-position variation) were observed during testing and validation. The performance of the proposed system is compared with state-of-the-art techniques in the literature; the findings demonstrate that both classification accuracy and time complexity improve significantly. Keras, TensorFlow's high-level API for constructing deep learning models, was used for signal pre-processing and the deep-learning-based algorithms. The proposed method was run on an Intel 3.5 GHz Core i7 7th Gen CPU with 8 GB DDR4 RAM.
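The abstract names Keras but not the exact architecture. As a hedged illustration of the kind of CNN-based EMG pattern classifier described, the following is a minimal sketch; the window length, channel count, and all layer sizes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative dimensions (NOT the paper's): 200-sample windows,
# 10 sEMG channels, 53 motion classes as in the NinaPro DB1 result.
WINDOW, CHANNELS, N_CLASSES = 200, 10, 53

def build_model():
    """A minimal hypothetical 1-D CNN for windowed sEMG classification."""
    model = keras.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# One random sEMG window in, one probability vector over motions out
probs = model.predict(np.random.randn(1, WINDOW, CHANNELS), verbose=0)
```

Training would then proceed with `model.fit` on windowed, per-subject NinaPro recordings; the specifics above are placeholders for whatever pre-processing the paper actually used.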




Read also

The upper limb is vital for a wide range of human activities, and the complete or partial loss of an upper limb has a significant impact on the daily activities of amputees. EMG carries important information about the human physique that helps decode the various functionalities of the human arm, and EMG-based bionics and prostheses have gained substantial research attention over the past decade. Conventional EMG-PR based prostheses struggle to deliver accurate performance because of their offline training and their inability to compensate for electrode-position shift and changes in arm position. This work proposes an online-training and incremental-learning based system for upper limb prosthetic applications. The system consists of an ADS1298 analog front end (AFE) and a 32-bit ARM Cortex-M4 processor for digital signal processing (DSP), and it has been tested on both intact and amputated subjects. Time-derivative moment based features have been implemented and utilized for effective pattern classification. Initially, the system was trained for four classes using the online training process; the number of classes was then incremented on user demand up to eleven, and system performance was evaluated. The system yielded a completion rate of 100% for healthy and amputated subjects when four motions were considered. When the number of classes increased to eleven, completion rates of 94.33% and 92% were achieved for healthy subjects and amputees, respectively. The motion-efficacy test was also evaluated for all subjects, with the highest efficacy rates of 91.23% and 88.64% observed for intact and amputated subjects, respectively.
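The abstract mentions time-derivative moment based features without giving their exact form. One common construction in this family takes moments of the raw window and of its time derivatives; the sketch below is an illustrative, hypothetical variant (the paper's actual feature set may differ):

```python
import numpy as np

def td_moment_features(window: np.ndarray) -> np.ndarray:
    """Illustrative time-derivative moment features for one sEMG channel.

    Sketch only: root power of the raw window and of its first and
    second time derivatives (approximated by finite differences),
    log-scaled for dynamic-range compression.
    """
    d1 = np.diff(window)       # first time derivative (finite difference)
    d2 = np.diff(window, n=2)  # second time derivative
    m0 = np.sqrt(np.sum(window ** 2))
    m2 = np.sqrt(np.sum(d1 ** 2))
    m4 = np.sqrt(np.sum(d2 ** 2))
    return np.log(np.array([m0, m2, m4]) + 1e-12)

rng = np.random.default_rng(0)
feats = td_moment_features(rng.standard_normal(200))  # one 200-sample window
```

Such low-cost features are attractive here because the classification runs on an embedded Cortex-M4 rather than a desktop CPU.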
Modulation classification, recognized as the intermediate step between signal detection and demodulation, is widely deployed in several modern wireless communication systems. Although many approaches have been studied over the last decades for identifying the modulation format of an incoming signal, most traditional machine learning algorithms struggle to learn radio characteristics. To overcome this drawback, we propose an accurate modulation classification method that exploits deep learning on constellation diagrams. In particular, a convolutional neural network is developed to proficiently learn the most relevant radio characteristics of gray-scale constellation images. The deep network comprises multiple processing blocks, where several grouped and asymmetric convolutional layers in each block are organized in a flow-in-flow structure for feature enrichment. These blocks are connected via skip connections to prevent the vanishing-gradient problem while effectively preserving identity information throughout the network. In intensive simulations on a constellation-image dataset of eight digital modulations, the proposed deep network achieves a remarkable classification accuracy of approximately 87% at 0 dB signal-to-noise ratio (SNR) under a multipath Rayleigh fading channel and outperforms several state-of-the-art deep models for constellation-based modulation classification.
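The preprocessing step this abstract relies on, turning IQ symbols into a gray-scale constellation image, can be sketched as a 2-D histogram; the bin count, axis limits, and normalization below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def constellation_image(iq: np.ndarray, size: int = 32, lim: float = 2.0) -> np.ndarray:
    """Rasterize complex IQ symbols into a gray-scale constellation image.

    Hypothetical sketch: bin the in-phase/quadrature plane into a
    size x size grid, then scale occupancy counts to 8-bit gray levels.
    """
    hist, _, _ = np.histogram2d(iq.real, iq.imag, bins=size,
                                range=[[-lim, lim], [-lim, lim]])
    if hist.max() > 0:
        hist = hist / hist.max()
    return (hist * 255).astype(np.uint8)

# Toy input: 1000 noisy QPSK symbols
rng = np.random.default_rng(1)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1000) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
img = constellation_image(qpsk + noise)
```

The resulting image can then be fed to an ordinary 2-D CNN, which is what makes this representation convenient.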
Activity recognition is the ability to identify and recognize the actions or goals of an agent. The agent can be any object or entity that performs actions with end goals, whether a single agent acting alone or a group of agents acting together or interacting. Human activity recognition has gained popularity due to its demand in many practical applications such as entertainment, healthcare, simulation, and surveillance systems. Vision-based activity recognition has the advantage of requiring no human intervention or physical contact with humans; moreover, sets of cameras can be networked with the intention of tracking and recognizing the activities of the agent. Traditional applications for tracking or recognizing human activities made use of wearable devices, but such applications require physical contact with the person. To overcome these challenges, a vision-based activity recognition system can be used, which employs a camera to record video and a processor that performs the recognition task. The work is implemented in two stages. In the first stage, an approach for activity recognition is proposed using background subtraction of images followed by 3D convolutional neural networks, and the impact of applying background subtraction prior to the 3D CNN is reported. In the second stage, the work is extended and implemented on a Raspberry Pi, which records a video stream and then recognizes the activity involved in the video. Thus, a proof of concept for activity recognition on a small IoT-based device is provided, which can enhance the system and extend its applications in various forms, such as increased portability, networking, and other device capabilities.
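The background-subtraction stage this abstract describes can be sketched with a simple per-pixel median background model; the median model and threshold value here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def subtract_background(frames: np.ndarray, threshold: float = 25.0) -> np.ndarray:
    """Median-background subtraction over a gray-scale frame stack.

    Illustrative sketch: the per-pixel median over time serves as the
    static background, and pixels deviating from it by more than
    `threshold` gray levels are marked as foreground (1) vs. background (0).
    """
    background = np.median(frames, axis=0)
    return (np.abs(frames - background) > threshold).astype(np.uint8)

# Toy clip: a static 64x64 scene with a bright patch sliding across frames
frames = np.full((10, 64, 64), 100.0)
for t in range(10):
    frames[t, 20:30, t * 5:t * 5 + 10] = 200.0
masks = subtract_background(frames)
```

The binary masks (or the masked frames) would then be stacked along time and passed to the 3D CNN, so the network sees motion rather than the static scene.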
Renjie Xie, Yanzhi Chen, Yan Wo (2019)
Deep neural networks (DNNs) have become the de facto standard for today's biometric recognition solutions. A serious but still overlooked problem in these DNN-based recognition systems is their vulnerability to adversarial attacks. Adversarial attacks can easily cause the output of a DNN system to distort greatly with only tiny changes to its input. Such distortions can potentially lead to an unexpected match between a valid biometric and a synthetic one constructed by a strategic attacker, raising a security issue. In this work, we show how this issue can be resolved by learning robust biometric features through a deep, information-theoretic framework, which builds upon the recent deep variational information bottleneck method but is carefully adapted to biometric recognition tasks. Empirical evaluation demonstrates that our method not only offers stronger robustness against adversarial attacks but also provides better recognition performance than state-of-the-art approaches.
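The variational information bottleneck objective this abstract builds on regularizes the learned features by the KL divergence between the encoder's posterior and a standard-normal prior. Assuming the usual diagonal-Gaussian encoder, that compression term has the closed form sketched below:

```python
import numpy as np

def vib_kl(mu: np.ndarray, log_var: np.ndarray) -> float:
    """KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder.

    This is the compression term of the variational information
    bottleneck objective; mu and log_var are the encoder's per-dimension
    posterior mean and log-variance for one input.
    """
    kl_per_dim = 0.5 * (mu ** 2 + np.exp(log_var) - log_var - 1.0)
    return float(np.sum(kl_per_dim))

# The KL vanishes exactly when the posterior equals the prior N(0, I)
zero_kl = vib_kl(np.zeros(8), np.zeros(8))
```

Minimizing this term alongside the recognition loss squeezes out input details irrelevant to identity, which is the mechanism the paper credits for adversarial robustness.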
Machine learning methods such as deep learning show promising results in the medical domain; however, the lack of interpretability of these algorithms may hinder their applicability to medical decision-support systems. This paper studies an interpretable deep learning technique called SincNet, a convolutional neural network that efficiently learns customized band-pass filters through trainable sinc functions. In this study, we use SincNet to analyze the neural activity of individuals with Autism Spectrum Disorder (ASD), who experience characteristic differences in neural oscillatory activity. In particular, we propose a novel SincNet-based neural network for detecting emotions in ASD patients using EEG signals. The learned filters can be easily inspected to determine which part of the EEG spectrum is used for predicting emotions. We found that our system automatically learns the high-alpha (9-13 Hz) and beta (13-30 Hz) band suppression often present in individuals with ASD. This result is consistent with recent neuroscience studies on emotion recognition, which found an association between these band suppressions and the behavioral deficits observed in individuals with ASD. The improved interpretability of SincNet is achieved without sacrificing performance in emotion recognition.
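The band-pass filters SincNet learns are windowed-sinc kernels parameterized only by two cutoff frequencies. A fixed-cutoff sketch of such a kernel, here tuned to the 9-13 Hz alpha band at an assumed 128 Hz sampling rate (SincNet would instead make the cutoffs trainable):

```python
import numpy as np

def sinc_bandpass(f_low: float, f_high: float, length: int = 129) -> np.ndarray:
    """Windowed-sinc band-pass kernel of the kind SincNet parameterizes.

    f_low and f_high are cutoffs in cycles/sample (i.e. Hz / fs);
    the kernel is the difference of two low-pass sinc filters,
    smoothed with a Hamming window to reduce spectral leakage.
    """
    n = np.arange(length) - (length - 1) / 2
    kernel = 2 * f_high * np.sinc(2 * f_high * n) - 2 * f_low * np.sinc(2 * f_low * n)
    return kernel * np.hamming(length)

fs = 128.0  # assumed EEG sampling rate, for illustration only
kernel = sinc_bandpass(9 / fs, 13 / fs)

# Inspect the frequency response: gain in the alpha band vs. far outside it
freqs = np.fft.rfftfreq(1024, d=1 / fs)
resp = np.abs(np.fft.rfft(kernel, n=1024))
gain_alpha = resp[np.argmin(np.abs(freqs - 11.0))]
gain_high = resp[np.argmin(np.abs(freqs - 40.0))]
```

Because each learned filter is fully described by its two cutoffs, inspecting which EEG bands the network uses reduces to reading off these values, which is exactly the interpretability the paper exploits.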