
Video-Streaming Biomedical Implants using Ultrasonic Waves for Communication

Published by: Gizem Tabak
Publication date: 2021
Research field: Electronic Engineering
Paper language: English





The use of wireless implanted medical devices (IMDs) is growing because they facilitate continuous monitoring of patients during normal activities, simplify medical procedures required for data retrieval, and reduce the likelihood of infection associated with trailing wires. However, most state-of-the-art IMDs are passive and offline devices. One of the key obstacles to an active and online IMD is the infeasibility of real-time, high-quality video broadcast from the IMD. Such broadcast would help develop innovative devices such as a video-streaming capsule endoscopy (CE) pill with therapeutic intervention capabilities. State-of-the-art IMDs employ radio-frequency electromagnetic (RF-EM) waves for information transmission. However, high attenuation of RF-EM waves in tissues and federal restrictions on the transmit power and operable bandwidth lead to fundamental performance constraints for IMDs employing RF links, and prevent achieving high data rates that could accommodate video broadcast. In this work, ultrasonic waves were used for video transmission and broadcast through biological tissues. The proposed proof-of-concept system was tested on a porcine intestine ex vivo and a rabbit in vivo. It was demonstrated that, using a millimeter-sized, implanted biocompatible transducer operating at 1.1-1.2 MHz, it was possible to transmit endoscopic video with high resolution (1280 pixels by 720 pixels) through a porcine intestine wrapped with bacon, and to broadcast standard definition (640 pixels by 480 pixels) video in near real-time through the rabbit abdomen in vivo. A media repository that includes experimental demonstrations and media files accompanies this paper and can be found at this link: https://bit.ly/3wuc7tk.
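To make the bandwidth argument concrete, the sketch below compares an assumed compressed-video bit rate against the Shannon capacity of a hypothetical through-tissue ultrasonic channel. All numbers (2 Mbps for compressed 720p video, 500 kHz of usable bandwidth, 20 dB SNR) are illustrative placeholders, not values reported in the paper.

```python
# Illustrative back-of-the-envelope link check (not from the paper):
# compares an assumed compressed-video bit rate against the Shannon
# capacity of a hypothetical in-body ultrasonic channel.
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR) in bits per second."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed values for illustration only.
video_bitrate_bps = 2.0e6      # ~2 Mbps for H.264-compressed 1280x720 video (assumed)
channel_bandwidth_hz = 500e3   # hypothetical usable bandwidth around 1.1-1.2 MHz
channel_snr_db = 20.0          # hypothetical post-equalization SNR

capacity = shannon_capacity_bps(channel_bandwidth_hz, channel_snr_db)
print(f"Channel capacity: {capacity / 1e6:.2f} Mbps")
print(f"Video fits in channel: {video_bitrate_bps < capacity}")
```

With these placeholder numbers the capacity (about 3.3 Mbps) comfortably exceeds the video bit rate, which is the kind of margin that an RF link under tissue attenuation and regulatory power limits struggles to provide.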




Read also

Quantitative MRI methods that estimate multiple physical parameters simultaneously often require the fitting of a computationally complex signal model defined through the Bloch equations. Repeated Bloch simulations can be avoided by matching the measured signal with a precomputed signal dictionary on a discrete parameter grid, as used in MR Fingerprinting. However, accurate estimation requires discretizing each parameter with a high resolution and consequently high computational and memory costs for dictionary generation, storage, and matching. Here, we reduce the required parameter resolution by approximating the signal between grid points through B-spline interpolation. The interpolant and its gradient are evaluated efficiently, which enables a least-squares fitting method for parameter mapping. The resolution of each parameter was minimized while obtaining a user-specified interpolation accuracy. The method was evaluated by phantom and in-vivo experiments using fully-sampled and undersampled unbalanced (FISP) MR fingerprinting acquisitions. Bloch simulations incorporated relaxation effects ($T_1, T_2$), proton density ($PD$), receiver phase ($\phi_0$), transmit field inhomogeneity ($B_1^+$), and slice profile. Parameter maps were compared with those obtained from dictionary matching, where the parameter resolution was chosen to obtain similar signal (interpolation) accuracy. For both the phantom and the in-vivo acquisitions, the proposed method approximated the parameter maps obtained through dictionary matching while reducing the parameter resolution in each dimension ($T_1, T_2, B_1^+$) by, on average, an order of magnitude. In effect, the applied dictionary was reduced from 1.47 GB to 464 KB. Dictionary fitting with B-spline interpolation reduces the computational and memory costs of dictionary-based methods and is therefore a promising method for multi-parametric mapping.
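To illustrate the interpolation-plus-least-squares idea, here is a toy Python sketch. The signal model, grid sizes, and noise level are invented placeholders rather than the authors' Bloch/FISP simulation, and scipy's RectBivariateSpline stands in for the paper's B-spline interpolant.

```python
# Toy sketch (not the authors' code): approximate a coarse signal
# dictionary between grid points with B-splines, then fit (T1, T2)
# by least squares instead of exhaustive dictionary matching.
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import least_squares

t = np.linspace(0.01, 1.0, 20)          # readout times (s), illustrative

def toy_signal(T1, T2):
    """Simplified relaxation signal standing in for a Bloch simulation."""
    return (1 - np.exp(-0.5 / T1)) * np.exp(-t / T2)

# Coarse parameter grid (far coarser than a matching dictionary would need).
T1_grid = np.linspace(0.2, 2.0, 8)
T2_grid = np.linspace(0.02, 0.3, 8)
dictionary = np.array([[toy_signal(T1, T2) for T2 in T2_grid] for T1 in T1_grid])

# One bicubic B-spline per time point, interpolating the coarse dictionary.
splines = [RectBivariateSpline(T1_grid, T2_grid, dictionary[:, :, k])
           for k in range(len(t))]

def interpolated_signal(params):
    T1, T2 = params
    return np.array([s(T1, T2)[0, 0] for s in splines])

# Simulated noisy measurement with "true" parameters lying off the grid.
rng = np.random.default_rng(0)
measured = toy_signal(1.13, 0.087) + 0.005 * rng.standard_normal(len(t))

fit = least_squares(lambda p: interpolated_signal(p) - measured,
                    x0=[1.0, 0.1],
                    bounds=([0.2, 0.02], [2.0, 0.3]))
print("Estimated T1, T2:", fit.x)
```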
Implanted devices providing real-time neural activity classification and control are increasingly used to treat neurological disorders, such as epilepsy and Parkinson's disease. Classification performance is critical to identifying brain states appropriate for the therapeutic action. However, advanced algorithms that have shown promise in offline studies, in particular deep learning (DL) methods, have not been deployed on resource-restrained neural implants. Here, we designed and optimized three embedded DL models of commonly adopted architectures and evaluated their inference performance in a case study of seizure detection. A deep neural network (DNN), a convolutional neural network (CNN), and a long short-term memory (LSTM) network were designed to classify ictal, preictal, and interictal phases from the CHB-MIT scalp EEG database. After iterative model compression and quantization, the algorithms were deployed on a general-purpose, off-the-shelf microcontroller. Inference sensitivity, false positive rate (FPR), execution time, memory size, and power consumption were quantified. For seizure event detection, the sensitivity and FPR (h^-1) for the DNN, CNN, and LSTM models were 87.36%/0.169, 96.70%/0.102, and 97.61%/0.071, respectively. Predicting seizures for early warnings was also feasible. The implemented compression and quantization achieved a significant saving of power and memory with an accuracy degradation of less than 0.5%. Edge DL models achieved performance comparable to many prior implementations that had no time or computational resource limitations. Generic microcontrollers can provide the required memory and computational resources, while model designs can be migrated to ASICs for further optimization. The results suggest that edge DL inference is a feasible option for future neural implants to improve classification performance and therapeutic outcomes.
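For intuition, the sketch below shows one plausible shape of such a pipeline: a small 1-D CNN over fixed-length EEG windows followed by post-training quantization with TensorFlow Lite for a microcontroller target. The layer sizes, window length, and channel count are assumptions for illustration, not the architectures or settings evaluated in the study.

```python
# Minimal sketch (assumptions, not the study's exact architecture):
# a small 1-D CNN that classifies fixed-length EEG windows into
# ictal / preictal / interictal, then post-training quantization
# via TensorFlow Lite for microcontroller deployment.
import tensorflow as tf

N_CHANNELS, WINDOW_SAMPLES, N_CLASSES = 23, 1024, 3  # assumed CHB-MIT-like shapes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW_SAMPLES, N_CHANNELS)),
    tf.keras.layers.Conv1D(16, 7, strides=2, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_windows, train_labels, ...)  # training data not shown here

# Post-training quantization shrinks the model for an embedded target.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("seizure_cnn.tflite", "wb").write(tflite_model)
```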
In this paper, we study the server-side rate adaptation problem for streaming tile-based adaptive 360-degree videos to multiple users who are competing for transmission resources at the network bottleneck. Specifically, we develop a convolutional neural network (CNN)-based viewpoint prediction model to capture the nonlinear relationship between the future and historical viewpoints. A Laplace distribution model is utilized to characterize the probability distribution of the prediction error. Given the predicted viewpoint, we then map the viewport in the spherical space into its corresponding planar projection in the 2-D plane, and further derive the visibility probability of each tile based on the planar projection and the prediction error probability. According to the visibility probability, tiles are classified as viewport, marginal, and invisible tiles. The server-side tile rate allocation problem for multiple users is then formulated as a non-linear discrete optimization problem to minimize the overall received video distortion of all users and the quality difference between the viewport and marginal tiles of each user, subject to the transmission capacity constraints and users' specific viewport requirements. We develop a steepest descent algorithm to solve this non-linear discrete optimization problem, by initializing the feasible starting point in accordance with the optimal solution of its continuous relaxation. Extensive experimental results show that the proposed algorithm can achieve a near-optimal solution, and outperforms the existing rate adaptation schemes for tile-based adaptive 360-video streaming.
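The tile-classification step can be illustrated with a simplified one-dimensional (yaw-only) sketch: the prediction error is modeled as Laplace-distributed, and each tile's visibility probability is the probability mass of errors that keep the tile inside the viewport. The viewport width, tile count, Laplace scale, and classification thresholds below are illustrative assumptions, not the paper's values.

```python
# Simplified 1-D (yaw-only) sketch of tile classification:
# model viewpoint-prediction error as Laplace-distributed and compute,
# per tile, the probability that the tile overlaps the viewport.
import numpy as np
from scipy.stats import laplace

N_TILES = 12                      # tiles spanning 360 deg of yaw (assumed)
TILE_WIDTH = 360 / N_TILES
VIEWPORT_HALF = 50.0              # assumed half-width of the viewport (deg)
pred_yaw = 95.0                   # predicted viewpoint yaw (deg)
err_scale = 8.0                   # assumed Laplace scale of the prediction error

def visibility_probability(tile_center):
    # Angular distance (wrapped to [-180, 180]) from the predicted viewpoint.
    d = (tile_center - pred_yaw + 180) % 360 - 180
    # Tile overlaps the viewport if the error lies within +/- margin of d.
    margin = VIEWPORT_HALF + TILE_WIDTH / 2
    return laplace.cdf(d + margin, scale=err_scale) - laplace.cdf(d - margin, scale=err_scale)

centers = np.arange(N_TILES) * TILE_WIDTH + TILE_WIDTH / 2
probs = np.array([visibility_probability(c) for c in centers])
classes = np.where(probs > 0.9, "viewport",
                   np.where(probs > 0.1, "marginal", "invisible"))
for c, p, cl in zip(centers, probs, classes):
    print(f"tile @ {c:5.1f} deg  P(visible)={p:.2f}  -> {cl}")
```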
A challenge for rescue teams when fighting against wildfire in remote areas is the lack of information, such as the size and images of fire areas. As such, live streaming from Unmanned Aerial Vehicles (UAVs), capturing videos of dynamic fire areas, is crucial for firefighter commanders in any location to monitor the fire situation with quick response. The 5G network is a promising wireless technology to support such scenarios. In this paper, we consider a UAV-to-UAV (U2U) communication scenario, where a UAV at a high altitude acts as a mobile base station (UAV-BS) to stream videos from other flying UAV-users (UAV-UEs) through the uplink. Due to the mobility of the UAV-BS and UAV-UEs, it is important to determine the optimal movements and transmission powers for the UAV-BS and UAV-UEs in real time, so as to maximize the data rate of video transmission with smoothness and low latency, while mitigating the interference according to the dynamics in fire areas and wireless channel conditions. In this paper, we co-design the video resolution, the movement, and the power control of the UAV-BS and UAV-UEs to maximize the Quality of Experience (QoE) of real-time video streaming. We apply Deep Q-Network (DQN) and Actor-Critic (AC) learning to maximize the QoE of video transmission from all UAV-UEs to a single UAV-BS. Simulation results show the effectiveness of our proposed algorithm in terms of the QoE, delay, and video smoothness as compared to the Greedy algorithm.
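As a hedged illustration of the DQN side of such a controller (not the paper's implementation), the sketch below defines a small Q-network over a joint discrete action space of movement direction and transmit-power level, with epsilon-greedy action selection. The state features, action set, and network sizes are assumptions.

```python
# Hedged sketch of a DQN-style controller for joint movement / power
# selection; dimensions and features are illustrative assumptions.
import numpy as np
import tensorflow as tf

N_MOVES, N_POWER_LEVELS = 5, 4          # e.g. {stay, N, S, E, W} x 4 power levels
N_ACTIONS = N_MOVES * N_POWER_LEVELS
STATE_DIM = 8                           # assumed: positions, channel gains, queue, etc.

q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_ACTIONS),   # one Q-value per joint (move, power) action
])

def select_action(state, epsilon=0.1):
    """Epsilon-greedy selection over the joint movement/power action space."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    q_values = q_net(state[None, :], training=False).numpy()[0]
    return int(np.argmax(q_values))

def decode_action(a):
    return a // N_POWER_LEVELS, a % N_POWER_LEVELS   # (move index, power index)

state = np.zeros(STATE_DIM, dtype=np.float32)        # placeholder observation
move, power = decode_action(select_action(state))
print("chosen move index:", move, "power level index:", power)
```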
Holographic communication is intended as a holistic way to manipulate, with unprecedented flexibility, the electromagnetic field generated or sensed by an antenna. This is of particular interest when using large antennas at high frequency (e.g., the millimeter wave or terahertz bands), whose operating condition may easily fall in the Fresnel propagation region (radiating near-field), where the classical plane-wave propagation assumption is no longer valid. This paper analyzes the optimal communication involving large intelligent surfaces, realized for example with metamaterials, as a possible enabling technology for holographic communication. It is shown that traditional propagation models must be revised and that, when exploiting spherical wave propagation in the Fresnel region with large surfaces, new opportunities are opened, for example, in terms of the number of orthogonal communication channels.
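A quick back-of-the-envelope check of the near-field claim: the sketch below computes the Fraunhofer distance d_F = 2D^2 / lambda for a few assumed aperture sizes at an assumed 60 GHz carrier, showing that typical link distances fall well inside the Fresnel region for meter-scale surfaces.

```python
# Fraunhofer distance d_F = 2 * D^2 / lambda for assumed apertures at 60 GHz;
# distances shorter than d_F lie in the Fresnel (radiating near-field) region.
c = 3e8                      # speed of light (m/s)

def fraunhofer_distance(aperture_m: float, freq_hz: float) -> float:
    wavelength = c / freq_hz
    return 2 * aperture_m ** 2 / wavelength

for D in (0.1, 0.5, 1.0):                     # aperture size in meters (assumed)
    d_f = fraunhofer_distance(D, 60e9)        # 60 GHz carrier (assumed)
    print(f"D = {D:.1f} m -> far field starts at ~{d_f:.0f} m")
```

For a 1 m aperture at 60 GHz, the far field only begins at roughly 400 m, so practical link distances with such surfaces do indeed fall in the radiating near-field, where spherical-wave models apply.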