
Edge Deep Learning for Neural Implants

Published by: Xilin Liu
Publication date: 2020
Research field: Electronic engineering
Language: English





Implanted devices providing real-time neural activity classification and control are increasingly used to treat neurological disorders such as epilepsy and Parkinson's disease. Classification performance is critical to identifying brain states appropriate for therapeutic action. However, advanced algorithms that have shown promise in offline studies, in particular deep learning (DL) methods, have not been deployed on resource-constrained neural implants. Here, we designed and optimized three embedded DL models of commonly adopted architectures and evaluated their inference performance in a case study of seizure detection. A deep neural network (DNN), a convolutional neural network (CNN), and a long short-term memory (LSTM) network were designed to classify ictal, preictal, and interictal phases from the CHB-MIT scalp EEG database. After iterative model compression and quantization, the algorithms were deployed on a general-purpose, off-the-shelf microcontroller. Inference sensitivity, false positive rate (FPR), execution time, memory size, and power consumption were quantified. For seizure event detection, the sensitivity/FPR (h⁻¹) of the DNN, CNN, and LSTM models were 87.36%/0.169, 96.70%/0.102, and 97.61%/0.071, respectively. Predicting seizures for early warning was also feasible. The implemented compression and quantization achieved substantial power and memory savings with an accuracy degradation of less than 0.5%. The edge DL models achieved performance comparable to many prior implementations that had no time or computational resource limitations. Generic microcontrollers can provide the required memory and computational resources, while the model designs can be migrated to ASICs for further optimization. These results suggest that edge DL inference is a feasible option for future neural implants to improve classification performance and therapeutic outcomes.
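The compression-and-quantization workflow the abstract describes can be illustrated with a short sketch. The following is a minimal example assuming a TensorFlow/TFLite toolchain (the paper does not specify its tooling): a small 1-D CNN for three-class EEG classification is converted to an int8 model of the kind TFLite Micro can run on an off-the-shelf microcontroller. The input shape, layer sizes, and calibration data are hypothetical.

```python
# Hedged sketch of post-training int8 quantization for MCU deployment.
# Shapes and layers are illustrative, not the paper's actual models.
import numpy as np
import tensorflow as tf

N_SAMPLES, N_CHANNELS, N_CLASSES = 512, 18, 3  # hypothetical EEG window shape

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_SAMPLES, N_CHANNELS)),
    tf.keras.layers.Conv1D(16, 7, strides=2, activation="relu"),
    tf.keras.layers.Conv1D(32, 5, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
# (Training omitted; real code would first fit on CHB-MIT windows.)

def representative_data():
    # Calibration samples for quantization; real code would stream EEG windows.
    for _ in range(100):
        yield [np.random.randn(1, N_SAMPLES, N_CHANNELS).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()  # flatbuffer ready to embed in MCU firmware
print(f"Quantized model size: {len(tflite_model)} bytes")
```

Post-training int8 quantization typically shrinks a float32 model by roughly 4x, which is consistent with the power and memory savings reported at under 0.5% accuracy loss.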


Read also

Realizing deep learning with coherent optical fields has recently attracted remarkable attention, owing to the fact that optical matrix manipulation can be executed at the speed of light with inherent parallel computation and low latency. Photonic neural networks have significant potential for prediction-oriented tasks. Yet real-valued backpropagation is somewhat intractable for coherent photonic training. We develop a compatible learning protocol in complex space, in which the nonlinear activation can be selected efficiently according to the unveiled compatibility condition. Compatibility means that the matrix representation in complex space subsumes its real counterpart, enabling single-channel mixed training in real and complex space within one unified model. A phase-logic XOR gate built with Mach-Zehnder interferometers and a diffractive neural network with an optical modulation mechanism, both implementing weights learned through compatible learning, are presented to demonstrate feasibility. Compatible learning opens a promising avenue for deep photonic neural networks.
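To make the complex-space idea concrete, here is a minimal sketch, assuming a modReLU-style activation (the paper's exact compatible activation is not given here): a complex-valued dense layer whose nonlinearity acts on the modulus and preserves phase, so it reduces to real-valued behaviour when imaginary parts vanish.

```python
# Hedged sketch of a complex-valued dense layer, not the authors' protocol.
import numpy as np

rng = np.random.default_rng(0)

def complex_dense(x, W, b):
    # Complex matrix-vector product, analogous to optical field mixing.
    return x @ W + b

def mod_relu(z, bias=0.1):
    # modReLU: rescale the modulus, keep the phase. On the real axis this
    # degenerates to ReLU-like behaviour, one choice of "compatible" activation.
    mag = np.abs(z)
    return np.where(mag + bias > 0, (mag + bias) / np.maximum(mag, 1e-12), 0.0) * z

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)          # input field
W = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # weights
b = np.zeros(4, dtype=complex)

y = mod_relu(complex_dense(x, W, b))
print(np.abs(y), np.angle(y))  # intensity and phase of the output field
```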
The use of wireless implanted medical devices (IMDs) is growing because they facilitate continuous monitoring of patients during normal activities, simplify the medical procedures required for data retrieval, and reduce the likelihood of infection associated with trailing wires. However, most state-of-the-art IMDs are passive, offline devices. One of the key obstacles to an active, online IMD is the infeasibility of real-time, high-quality video broadcast from the IMD. Such broadcast would enable innovative devices such as a video-streaming capsule endoscopy (CE) pill with therapeutic intervention capabilities. State-of-the-art IMDs employ radio-frequency electromagnetic (RF-EM) waves for information transmission. However, the high attenuation of RF-EM waves in tissue and federal restrictions on transmit power and operable bandwidth impose fundamental performance constraints on IMDs employing RF links and prevent the high data rates that video broadcast would require. In this work, ultrasonic waves were used for video transmission and broadcast through biological tissues. The proposed proof-of-concept system was tested on a porcine intestine ex vivo and a rabbit in vivo. It was demonstrated that, using a millimeter-sized, implanted biocompatible transducer operating at 1.1-1.2 MHz, it was possible to transmit endoscopic video at high resolution (1280 by 720 pixels) through porcine intestine wrapped with bacon, and to broadcast standard-definition (640 by 480 pixels) video in near real-time through a rabbit abdomen in vivo. A media repository that includes experimental demonstrations and media files accompanies this paper and can be found at this link: https://bit.ly/3wuc7tk.
Owing to the complicated characteristics of 5G communication systems, designing RF components through mathematical modeling has become a challenging task. Moreover, such mathematical models require numerous manual adjustments for varying specification requirements. In this paper, we present a learning-based framework to model and compensate Power Amplifiers (PAs) in 5G communication. In the proposed framework, Deep Neural Networks (DNNs) are used to learn the characteristics of the PAs, while corresponding Digital Pre-Distortions (DPDs) are learned to compensate for the nonlinear and memory effects of the PAs. On top of this framework, we further propose two frequency-domain losses to guide the learning process to better optimize the target, compared to a naive time-domain Mean Square Error (MSE). The proposed framework serves as a drop-in replacement for the conventional approach. It achieves an average 56.7% reduction of nonlinear and memory effects, which translates to an average 16.3% improvement over a carefully designed mathematical model, and reaches a 34% enhancement in severe distortion scenarios.
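The frequency-domain loss idea lends itself to a short sketch. The paper's two exact losses are not reproduced here; the following shows a generic spectral MSE on the complex baseband spectrum combined with the time-domain term, in PyTorch. The function names and the weighting are illustrative.

```python
# Hedged sketch of a frequency-domain training loss for PA modeling / DPD.
import torch

def spectral_mse(pred_iq: torch.Tensor, target_iq: torch.Tensor) -> torch.Tensor:
    """pred_iq, target_iq: (batch, n, 2) real tensors holding I/Q samples."""
    pred = torch.view_as_complex(pred_iq.contiguous())
    target = torch.view_as_complex(target_iq.contiguous())
    # Compare complex baseband spectra; this directly penalizes out-of-band
    # distortion that a time-domain MSE can under-weight.
    pred_f = torch.fft.fft(pred, dim=-1)
    target_f = torch.fft.fft(target, dim=-1)
    return torch.mean(torch.abs(pred_f - target_f) ** 2)

def pa_loss(pred_iq, target_iq, spectral_weight=0.5):
    # Combined objective: time-domain MSE plus a weighted spectral term.
    time_mse = torch.mean((pred_iq - target_iq) ** 2)
    return time_mse + spectral_weight * spectral_mse(pred_iq, target_iq)
```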
In this paper, we are interested in building a domain-knowledge-based deep learning framework to solve chiller-plant energy optimization problems. Compared to the mainstream applications of deep learning (e.g., image classification and NLP), it is difficult to collect the enormous data needed for deep network training in real-world physical systems. Most existing methods reduce such complex systems to linear models to facilitate training on small samples. To tackle the small-sample-size problem, this paper incorporates domain knowledge into the structure and loss design of a deep network to build a nonlinear model with a lower-redundancy function space. Specifically, the energy consumption estimation of most chillers can be physically viewed as an input-output monotonic problem. Thus, we can design a neural network with monotonic constraints to mimic the physical behavior of the system. We verify the proposed method in the cooling system of a data center; experimental results show the superiority of our framework in energy optimization compared to existing approaches.
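One standard way to enforce such a monotonic constraint is sketched below, assuming PyTorch (this is not the authors' exact architecture): squash every weight through softplus so it stays positive and use monotone activations, so the network output is non-decreasing in each input, mirroring the physical prior that energy consumption does not fall as load rises.

```python
# Hedged sketch of a monotonically constrained MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # softplus keeps every effective weight positive => monotone layer.
        return F.linear(x, F.softplus(self.raw_weight), self.bias)

class MonotonicMLP(nn.Module):
    def __init__(self, in_features, hidden=32):
        super().__init__()
        self.l1 = MonotonicLinear(in_features, hidden)
        self.l2 = MonotonicLinear(hidden, 1)

    def forward(self, x):
        # tanh is monotone, so the composition stays monotone in every input.
        return self.l2(torch.tanh(self.l1(x)))

model = MonotonicMLP(in_features=4)   # e.g., load, flow, temperatures
print(model(torch.rand(8, 4)).shape)  # torch.Size([8, 1]) predicted energy
```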
Jed Mills, Jia Hu, Geyong Min (2020)
Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous works have shown that non-Independent and Identically Distributed (non-IID) user data harms the convergence speed of FL algorithms. Furthermore, most existing work on FL measures global-model accuracy, but in many cases, such as user content recommendation, improving individual User model Accuracy (UA) is the real objective. To address these issues, we propose a Multi-Task FL (MTFL) algorithm that introduces non-federated Batch-Normalization (BN) layers into the federated DNN. MTFL benefits UA and convergence speed by allowing users to train models personalised to their own data. MTFL is compatible with popular iterative FL optimisation algorithms such as Federated Averaging (FedAvg), and we show empirically that a distributed form of Adam optimisation (FedAvg-Adam) benefits convergence speed even further when used as the optimisation strategy within MTFL. Experiments using MNIST and CIFAR10 demonstrate that MTFL is able to significantly reduce the number of rounds required to reach a target UA, by up to $5\times$ when using existing FL optimisation strategies, and with a further $3\times$ improvement when using FedAvg-Adam. We compare MTFL to competing personalised FL algorithms, showing that it achieves the best UA for MNIST and CIFAR10 in all considered scenarios. Finally, we evaluate MTFL with FedAvg-Adam on an edge-computing testbed, showing that its convergence and UA benefits outweigh its overhead.
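The core MTFL mechanism, keeping BN layers out of federation, can be sketched in a few lines. The following is a minimal illustration assuming PyTorch state dicts and uniform client weighting; helpers such as is_batchnorm_param are hypothetical, not the authors' code.

```python
# Hedged sketch of FedAvg-style aggregation that leaves BN layers local.
from typing import Dict, List
import torch

def is_batchnorm_param(name: str) -> bool:
    # Convention assumed here: BN tensors are identifiable from state-dict keys.
    return "bn" in name or "running_mean" in name or "running_var" in name

def mtfl_aggregate(client_states: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Average the non-BN parameters across clients (uniform weighting)."""
    global_state = {}
    for name in client_states[0]:
        if is_batchnorm_param(name):
            continue  # BN stays private to each client (non-federated)
        global_state[name] = torch.stack(
            [s[name].float() for s in client_states]).mean(dim=0)
    return global_state

def apply_global(client_state, global_state):
    # Overwrite only the federated tensors; local BN statistics are preserved,
    # which is what personalises the model to each user's non-IID data.
    client_state.update({k: v.clone() for k, v in global_state.items()})
    return client_state
```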