
Graphene-based Distributed 3D Sensing Electrodes for Mapping Spatiotemporal Auricular Physiological Signals

Published by Qingyun Huang
Publication date: 2021
Research field: Electronic Engineering
Paper language: English





Underneath the ear skin there are richly branching vascular and neural networks that ultimately connect to our heart and brain. Hence, three-dimensional (3D) mapping of auricular electrophysiological signals could provide a new perspective for biomedical studies such as the diagnosis of cardiovascular diseases and neurological disorders. However, it is still extremely challenging for current sensing techniques to cover the entire, ultra-curved auricle. Here, we report a graphene-based ear-conformable sensing device with embedded, distributed 3D electrodes that enables full-auricle physiological monitoring. The device, which incorporates a programmable 3D electrode thread array and a personalized auricular mold, forms 3D-conformable sensing interfaces with the curved auricular skin and was fabricated in a one-step multi-material 3D-printing process. As a proof of concept, spatiotemporal mapping of auricular electrical skin resistance (AESR) was demonstrated. For the first time, 3D AESR contours were generated, and subject-specific AESR distributions across a population were observed. In data from 17 volunteers, auricular region-specific AESR changes after cycling exercise were observed in 98% of the tests and were validated with machine learning techniques. Correlations of AESR with heart rate and blood pressure were also studied using statistical analysis. This 3D electronic platform and the new AESR-based biometric findings point to promising biomedical applications.
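A minimal, hypothetical sketch of the kind of machine-learning validation the abstract mentions: per-electrode AESR values before and after exercise feed a simple classifier evaluated with leave-one-subject-out cross-validation. The electrode count, the simulated resistance values, and the scikit-learn pipeline are illustrative assumptions, not the authors' actual method.

    # Hypothetical sketch: summarizing auricular electrical skin resistance (AESR)
    # readings per electrode before/after exercise and checking whether a simple
    # classifier can separate the two states. Shapes and values are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    n_subjects, n_electrodes = 17, 12        # assumed 12 distributed 3D electrodes

    # Simulated AESR maps (kOhm): one pre-exercise and one post-exercise map per subject.
    pre = rng.normal(300.0, 40.0, size=(n_subjects, n_electrodes))
    post = pre * rng.normal(0.85, 0.05, size=(n_subjects, n_electrodes))  # assumed drop after cycling

    X = np.vstack([pre, post])                                   # feature = per-electrode AESR
    y = np.r_[np.zeros(n_subjects), np.ones(n_subjects)]         # 0 = pre, 1 = post
    groups = np.r_[np.arange(n_subjects), np.arange(n_subjects)] # keep each subject's data together

    # Leave-one-subject-out evaluation of pre/post separability.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             groups=groups, cv=LeaveOneGroupOut())
    print(f"mean leave-one-subject-out accuracy: {scores.mean():.2f}")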




Read also

Freezing of gait is a mysterious symptom of Parkinson's disease, defined as a sudden loss of the ability to move forward. Common treatments of freezing episodes are currently of moderate efficacy and can likely be improved through reliable freezing evaluation. Basic-science studies characterizing freezing episodes and a 24/7 evidence-supported freezing detection system can contribute to the reliability of that evaluation in daily life. In this study, we analyzed multi-modal features from brain, eye, heart, motion, and gait activity from 15 participants with idiopathic Parkinson's disease and 551 freezing episodes induced by turning in place. Statistical analysis was first applied to 248 of the 551 episodes to determine which multi-modal features were associated with freezing episodes. Features significantly associated with freezing episodes were ranked and used for freezing detection. We found that eye-stabilization speed during turning and a lower-body trembling measure were significantly associated with freezing episodes, and we used them for freezing detection. Using leave-one-subject-out cross-validation, we obtained a sensitivity of 97%+/-3%, a specificity of 96%+/-7%, a precision of 73%+/-21%, a Matthews correlation coefficient of 0.82+/-0.15, and an area under the Precision-Recall curve of 0.94+/-0.05. According to the Precision-Recall curves, the proposed freezing detection method using multi-modal features performed better than using single-modal features.
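A minimal sketch of the leave-one-subject-out evaluation scheme described above, assuming simulated window-level features in place of the study's eye-stabilization and lower-body trembling measures; the classifier choice and metric computations are illustrative only.

    # Hypothetical sketch: leave-one-subject-out cross-validation of a freezing-episode
    # detector, reporting sensitivity, specificity, precision, and the Matthews
    # correlation coefficient. Features and labels are simulated stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.metrics import recall_score, precision_score, matthews_corrcoef

    rng = np.random.default_rng(1)
    n_windows, n_features = 600, 2
    X = rng.normal(size=(n_windows, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n_windows) > 0.8).astype(int)  # 1 = freezing
    subjects = rng.integers(0, 15, size=n_windows)    # assumed 15 participants

    sens, spec, prec, mcc = [], [], [], []
    for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = RandomForestClassifier(random_state=0).fit(X[train], y[train])
        y_hat = clf.predict(X[test])
        if len(np.unique(y[test])) < 2:
            continue                                  # skip folds lacking both classes
        sens.append(recall_score(y[test], y_hat))                  # sensitivity
        spec.append(recall_score(y[test], y_hat, pos_label=0))     # specificity
        prec.append(precision_score(y[test], y_hat, zero_division=0))
        mcc.append(matthews_corrcoef(y[test], y_hat))

    print(f"sensitivity {np.mean(sens):.2f}, specificity {np.mean(spec):.2f}, "
          f"precision {np.mean(prec):.2f}, MCC {np.mean(mcc):.2f}")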
Voice interfaces have become an integral part of our lives with the proliferation of smart devices. Today, IoT devices mainly rely on microphones to sense sound. Microphones, however, have fundamental limitations, such as weak source separation, limited range in the presence of acoustic insulation, and being prone to multiple side-channel attacks. In this paper, we propose RadioMic, a radio-based sound sensing system to mitigate these issues and enrich sound applications. RadioMic constructs sound from tiny vibrations on active sources (e.g., a speaker or human throat) or object surfaces (e.g., a paper bag), and can work through walls, even soundproof ones. To convert the extremely weak sound vibration in the radio signals into sound signals, RadioMic introduces radio acoustics and presents training-free approaches for robust sound detection and high-fidelity sound recovery. It then exploits a neural network to further enhance the recovered sound by expanding the recoverable frequencies and reducing noise. RadioMic translates massive online audio into synthesized training data for the network, and thus minimizes the need for RF data. We thoroughly evaluate RadioMic under different scenarios using a commodity mmWave radar. The results show that RadioMic outperforms state-of-the-art systems significantly. We believe RadioMic provides new horizons for sound sensing and inspires attractive sensing capabilities for mmWave sensing devices.
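A toy sketch of the general principle behind radio-based sound sensing (a vibrating surface modulates the phase of the reflected signal), not RadioMic's actual pipeline; the carrier frequency, frame rate, and vibration amplitude are assumed values.

    # Hypothetical sketch: a 0.5-micron surface vibration at 440 Hz shows up as a
    # tiny phase modulation of the reflected mmWave signal, so the unwrapped phase
    # of the target range bin approximates the sound waveform. Toy illustration only.
    import numpy as np
    from scipy.signal import butter, filtfilt

    rng = np.random.default_rng(2)
    fs = 8000.0                                  # assumed slow-time (frame) rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    wavelength = 3e8 / 77e9                      # assumed 77 GHz mmWave carrier

    displacement = 0.5e-6 * np.sin(2 * np.pi * 440 * t)          # simulated vibration
    phase = 4 * np.pi * (0.5 + displacement) / wavelength        # target ~0.5 m away
    iq = np.exp(1j * phase) + 0.01 * (rng.standard_normal(t.size)
                                      + 1j * rng.standard_normal(t.size))

    # Recover the sound: unwrap the phase, remove the offset, band-limit to audio.
    recovered = np.unwrap(np.angle(iq))
    recovered -= recovered.mean()
    b, a = butter(4, [100, 2000], btype="bandpass", fs=fs)
    audio = filtfilt(b, a, recovered)

    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    print("dominant recovered frequency:", freqs[np.argmax(np.abs(np.fft.rfft(audio)))], "Hz")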
Physiological signals, such as the electrocardiogram and the phonocardiogram, are very often corrupted by noise sources. Usually, artificial intelligence algorithms analyze the signal regardless of its quality. Physicians, on the other hand, use a completely orthogonal strategy: they do not assess the entire recording; instead, they search for a segment where the fundamental and abnormal waves are easily detected, and only then is a prognosis attempted. Inspired by this fact, a new algorithm that automatically selects an optimal segment for a post-processing stage, according to criteria defined by the user, is proposed. In the process, a neural network is used to compute the output state probability distribution for each sample. Using these quantities, a graph is designed in which state-transition constraints are physically imposed, and a set of constraints is used to retrieve the subset of the recording that maximizes the likelihood function proposed by the user. The developed framework is tested and validated in two applications. In both cases, system performance is boosted significantly; e.g., in heart sound segmentation, sensitivity increases by 2.4% compared to the standard approaches in the literature.
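A hypothetical sketch of the graph-based selection idea: per-sample state probabilities (simulated here in place of a trained network's outputs) are decoded under hard transition constraints with Viterbi-style dynamic programming, and the fixed-length window with the highest path likelihood is returned. The states, constraint structure, and window length are illustrative assumptions.

    # Hypothetical sketch: combine per-sample state probabilities with hard
    # transition constraints and select the most likely segment of the recording.
    import numpy as np

    rng = np.random.default_rng(3)
    T, S = 500, 4                     # samples, states (e.g., S1 / systole / S2 / diastole)
    log_p = np.log(rng.dirichlet(np.ones(S), size=T))    # stand-in for network outputs

    # Allowed transitions: stay in the same state or advance cyclically to the next.
    allowed = np.full((S, S), -np.inf)
    for s in range(S):
        allowed[s, s] = 0.0
        allowed[s, (s + 1) % S] = 0.0

    # Viterbi recursion under the transition constraints.
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = log_p[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + allowed            # rows: previous state
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_p[t]

    path = np.zeros(T, dtype=int)
    path[-1] = score[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]

    # Pick the window whose decoded path has the highest total likelihood.
    win = 100
    path_ll = log_p[np.arange(T), path]
    start = max(range(T - win), key=lambda i: path_ll[i:i + win].sum())
    print(f"selected segment: samples {start}..{start + win}")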
Moving loads such as cars and trains are very useful sources of seismic waves, which can be analyzed to retrieve information on the seismic velocity of subsurface materials using the techniques of ambient noise seismology. This information is valuable for a variety of applications such as geotechnical characterization of the near-surface, seismic hazard evaluation, and groundwater monitoring. However, for such processes to converge quickly, data segments with appropriate noise energy should be selected. Distributed Acoustic Sensing (DAS) is a novel sensing technique that enables acquisition of these data at very high spatial and temporal resolution over tens of kilometers. One major challenge in utilizing DAS technology is the large volume of data produced, which presents a significant Big Data challenge when searching for regions of useful energy. In this work, we present a highly scalable and efficient approach to process real, complex DAS data by integrating physics knowledge acquired during a data exploration phase followed by deep supervised learning to identify useful coherent surface waves generated by anthropogenic activity, a class of seismic waves that is abundant in these recordings and useful for geophysical imaging. Data exploration and training were done on 130 gigabytes (GB) of DAS measurements. Using parallel computing, we were able to run inference on an additional 170 GB of data (the equivalent of 10 days' worth of recordings) in less than 30 minutes. Our method provides interpretable patterns describing the interaction of ground-based human activities with the buried sensors.
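A hypothetical sketch of the physics-guided screening such a workflow might start from: a slant-stack (semblance) over trial slownesses flags time windows in which DAS channels carry coherent, linearly moving-out energy such as vehicle-generated surface waves. The channel spacing, slowness range, and simulated data are assumptions, and the paper's deep-learning stage is not reproduced here.

    # Hypothetical sketch: flag coherent linear-moveout energy across DAS channels
    # using semblance over trial slownesses. All values are simulated.
    import numpy as np

    rng = np.random.default_rng(4)
    n_ch, n_t, dt, dx = 64, 2000, 0.002, 8.0   # channels, samples, sample interval (s), spacing (m) -- assumed
    data = rng.normal(0.0, 0.1, size=(n_ch, n_t))

    # Inject a coherent wavelet moving across the array at ~400 m/s
    # (a stand-in for a vehicle-generated surface wave).
    t = np.arange(n_t) * dt
    for ch in range(n_ch):
        delay = ch * dx / 400.0
        data[ch] += np.exp(-((t - 1.0 - delay) ** 2) / 0.01) * np.sin(2 * np.pi * 10 * (t - delay))

    def semblance(window, slowness):
        # Shift each channel by slowness * offset and stack; coherent arrivals stack
        # constructively, while incoherent noise yields semblance near 1 / n_ch.
        shifted = np.array([np.roll(window[ch], -int(round(ch * dx * slowness / dt)))
                            for ch in range(n_ch)])
        stack = shifted.sum(axis=0)
        return (stack ** 2).sum() / (n_ch * (shifted ** 2).sum())

    best = max(semblance(data, s) for s in np.linspace(1 / 1000, 1 / 200, 17))
    print(f"max semblance over trial slownesses: {best:.2f} (incoherent noise gives ~{1 / n_ch:.3f})")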
In many sensing applications, radio signals are typically emitted by a radar, and inferences about the environment are made from the reflections bounced off obstacles. Even though radars can be used to sense the landscape around user equipment (UE), such as whether the UE is in a forested region, inside a building, etc., this is not suitable for many wireless applications because most UEs do not have radars. Using radar would also increase the cost and power requirements of UEs in applications that require landscape sensing. In this paper, we provide a mechanism in which the base station (BS) is able to sense the UE's landscape without the use of a radar. We propose an artificial intelligence (AI)-based approach with a suitable choice of features derived from the wireless channel to infer the landscape of the UE. Results for the proposed methods, when applied to practical environments such as a London city scenario, yield a precision score of more than 95 percent.
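A hypothetical sketch of the overall idea: simple channel-derived features (here, RMS delay spread and received power, both simulated) feed a classifier that predicts a landscape class and reports precision. The feature set, classes, and classifier are stand-ins rather than the paper's actual design or London-scenario data.

    # Hypothetical sketch: infer the UE's landscape class from wireless-channel
    # features with a standard classifier. Channel statistics are simulated.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score

    rng = np.random.default_rng(5)

    def channel_features(delay_spread_scale, power_db_mean, n):
        # Simulate (RMS delay spread [us], received power [dBm]) pairs for one landscape.
        spread = rng.exponential(delay_spread_scale, n)
        power = rng.normal(power_db_mean, 4.0, n)
        return np.column_stack([spread, power])

    # Assumed classes: 0 = open outdoor, 1 = forested, 2 = indoor.
    X = np.vstack([channel_features(0.2, -70, 300),
                   channel_features(0.5, -80, 300),
                   channel_features(1.5, -90, 300)])
    y = np.repeat([0, 1, 2], 300)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("macro precision:", round(precision_score(y_te, clf.predict(X_te), average="macro"), 3))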