
AI Aided Noise Processing of Spintronic Based IoT Sensor for Magnetocardiography Application

Published by Muftah Al-Mahdawi
Publication date: 2019
Language: English





As we are about to embark upon the much-hyped Society 5.0, powered by the Internet of Things (IoT), traditional ways of monitoring human heart signals to track cardiovascular conditions are becoming impractical, particularly in remote healthcare settings. Measured against the criteria of low power consumption, portability, and non-intrusiveness, no existing IoT solution provides information comparable to conventional electrocardiography (ECG). In this paper, we propose an IoT device built around an ultra-sensitive spintronic sensor that measures the magnetic fields produced by cardiovascular electrical activity, i.e., magnetocardiography (MCG). We then treat the low-frequency noise generated by the sensors, a challenge shared by most other sensors dealing with low-frequency bio-magnetic signals. Instead of relying on generic signal-processing techniques such as averaging or filtering, we train a deep-learning model on bio-magnetic signals. Using an existing dataset of ECG records, MCG labels are synthetically constructed. A deep-learning architecture combining a Convolutional Neural Network (CNN) with a Gated Recurrent Unit (GRU) is trained on the labeled data through a striding window, enabling it to capture and eliminate the noise features. Simulation results are reported to evaluate the effectiveness of the proposed method, which demonstrates encouraging performance.
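The abstract does not spell out the exact architecture, but a minimal sketch of the kind of combined CNN + GRU denoiser it describes might look as follows (PyTorch; the window length, channel counts, layer sizes, and training hyperparameters are illustrative assumptions, not the authors' published configuration):

    import torch
    import torch.nn as nn

    class CNNGRUDenoiser(nn.Module):
        """Sketch of a 1-D CNN + GRU network mapping a noisy MCG window
        to a denoised window of the same length."""
        def __init__(self, channels=32, gru_hidden=64):
            super().__init__()
            # Convolutional front end: learns local noise features.
            self.cnn = nn.Sequential(
                nn.Conv1d(1, channels, kernel_size=7, padding=3), nn.ReLU(),
                nn.Conv1d(channels, channels, kernel_size=7, padding=3), nn.ReLU(),
            )
            # GRU captures longer-range temporal structure of the heartbeat.
            self.gru = nn.GRU(channels, gru_hidden, batch_first=True,
                              bidirectional=True)
            # Per-sample regression head back to a single output channel.
            self.head = nn.Linear(2 * gru_hidden, 1)

        def forward(self, x):                        # x: (batch, 1, window)
            h = self.cnn(x).transpose(1, 2)          # (batch, window, channels)
            h, _ = self.gru(h)                       # (batch, window, 2*hidden)
            return self.head(h).transpose(1, 2)      # (batch, 1, window)

    def train_epoch(model, optimiser, noisy, clean, window=256, stride=64):
        """One pass over a record, moving a striding window along it; the
        synthetically constructed clean MCG serves as the regression label."""
        loss_fn = nn.MSELoss()
        for start in range(0, noisy.shape[-1] - window + 1, stride):
            x = noisy[..., start:start + window]
            y = clean[..., start:start + window]
            optimiser.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimiser.step()

Training would then iterate train_epoch over the labeled records, e.g. with model = CNNGRUDenoiser() and torch.optim.Adam(model.parameters(), lr=1e-3).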


Read also

A novel approach is presented in this work for context-aware connectivity and processing optimization of Internet of Things (IoT) networks. Unlike state-of-the-art approaches, the proposed approach simultaneously selects the best connectivity and processing unit (e.g., device, fog, or cloud) along with the percentage of data to be offloaded, by jointly optimizing energy consumption, response time, security, and monetary cost. The proposed scheme employs a reinforcement learning algorithm and achieves significant gains compared to deterministic solutions. In particular, the requirements of IoT devices in terms of response time and security are taken as inputs along with the remaining battery level of the devices, and the developed algorithm returns an optimized policy. The results show that only our method meets the holistic multi-objective optimization criteria, whereas the benchmark approaches may achieve better results on a particular metric at the cost of failing to reach the other targets. The proposed approach is thus a device-centric, context-aware solution that accounts for monetary and battery constraints.
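The abstract does not name the specific reinforcement learning algorithm, but the decision it describes (pick a processing unit and an offload fraction given battery level and response-time/security requirements) can be sketched as a small tabular learner. Everything below, including the toy cost model and the one-step update, is an illustrative assumption rather than the paper's formulation:

    import random
    from collections import defaultdict

    UNITS = ["device", "fog", "cloud"]      # candidate processing units
    FRACTIONS = [0.0, 0.5, 1.0]             # share of data offloaded
    ACTIONS = [(u, f) for u in UNITS for f in FRACTIONS]

    def toy_cost(state, action):
        """Toy scalarised multi-objective cost (energy, latency, security
        risk, monetary cost) with hard penalties for violated requirements."""
        battery, strict_latency, strict_security = state
        unit, frac = action
        energy = {"device": 0.8, "fog": 0.4, "cloud": 0.3}[unit]
        latency = {"device": 0.2, "fog": 0.4, "cloud": 0.7}[unit]
        risk = {"device": 0.1, "fog": 0.3, "cloud": 0.5}[unit] * frac
        money = {"device": 0.0, "fog": 0.3, "cloud": 0.6}[unit] * frac
        cost = energy + latency + risk + money
        if battery == "low" and unit == "device":
            cost += 1.0                     # draining an almost-dead battery
        if strict_latency and latency > 0.5:
            cost += 1.0                     # missed response-time target
        if strict_security and risk > 0.3:
            cost += 1.0                     # violated security requirement
        return cost

    def learn_policy(episodes=20_000, alpha=0.1, eps=0.1):
        """One-step Q-learning (a contextual-bandit simplification of the
        paper's RL setting): learn the best action for each context."""
        q = defaultdict(float)
        states = [(b, l, s) for b in ("low", "high")
                  for l in (True, False) for s in (True, False)]
        for _ in range(episodes):
            state = random.choice(states)
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            reward = -toy_cost(state, action)
            q[(state, action)] += alpha * (reward - q[(state, action)])
        return {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in states}

    print(learn_policy()[("low", True, True)])  # best (unit, fraction) for one context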
Besides being part of the Internet of Things (IoT), drones can play a relevant role in it as enablers. The 3D mobility of UAVs can be exploited to improve node localization in IoT networks, e.g., for search and rescue or goods localization and tracking. One widespread IoT communication technology is Long Range Wide Area Network (LoRaWAN), which achieves long communication distances at low power. In this work, we present a drone-aided localization system for LoRa networks in which a UAV is used to improve the estimate of a node's location initially provided by the network. We characterize the relevant parameters of the communication system and use them to develop and test a search algorithm in a realistic simulated scenario. We then move to the full implementation of a real system in which a drone is seamlessly integrated into Swisscom's LoRa network. The drone coordinates with the network through a two-way exchange of information, resulting in an accurate and fully autonomous localization system. Our field tests show a ten-fold improvement in localization precision with respect to the estimate provided by the fixed network. To the best of our knowledge, this is the first time a UAV has been successfully integrated into a LoRa network to improve its localization accuracy.
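The abstract does not detail the search algorithm itself; as a purely illustrative stand-in, the sketch below refines a coarse network-provided estimate by greedily flying toward stronger received signal strength under a log-distance path-loss model (the path-loss exponent, step size, and noise level are all assumptions):

    import math
    import random

    def rssi(drone_pos, node_pos, tx_power=-30.0, n=2.7):
        """Log-distance path-loss model for the received LoRa signal
        strength (illustrative parameters, plus measurement noise)."""
        d = max(1.0, math.dist(drone_pos, node_pos))
        return tx_power - 10 * n * math.log10(d) + random.gauss(0, 1.0)

    def refine_estimate(initial_guess, node_pos, step=100.0, iters=20):
        """Greedy waypoint search: fly to the neighbouring waypoint with
        the strongest RSSI, shrinking the step as the search converges."""
        pos = list(initial_guess)
        for _ in range(iters):
            candidates = [pos] + [[pos[0] + dx, pos[1] + dy]
                                  for dx in (-step, 0, step)
                                  for dy in (-step, 0, step)
                                  if (dx, dy) != (0, 0)]
            best = max(candidates, key=lambda p: rssi(p, node_pos))
            if best == pos:
                step /= 2               # no improvement: tighten the grid
            pos = best
        return pos

    node = (430.0, -210.0)              # unknown ground-truth position
    coarse = (0.0, 0.0)                 # estimate provided by the fixed network
    print(refine_estimate(coarse, node))

In the real system the drone would of course measure RSSI over the air and coordinate waypoints with the LoRaWAN backend rather than query a simulated node.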
Wearable sensor-based human activity recognition (HAR) has been a research focus in the field of ubiquitous and mobile computing for years. In recent years, many deep models have been applied to HAR problems. However, deep learning methods typically require a large amount of data to generalize well. Significant variance caused by different participants or diverse sensor devices limits the direct application of a pre-trained model to a subject or device that has not been seen before. To address these problems, we present an invariant feature learning framework (IFLF) that extracts common information shared across subjects and devices. IFLF incorporates two learning paradigms: 1) meta-learning to capture robust features across seen domains and adapt to an unseen one with similarity-based data selection; 2) multi-task learning to deal with data shortage and enhance overall performance via knowledge sharing among different subjects. Experiments demonstrate that IFLF is effective in handling both subject and device diversity across popular open datasets and an in-house dataset, outperforming a baseline model by up to 40% in test accuracy.
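The abstract names the two paradigms but not their implementation. The sketch below shows only the multi-task half of the idea, a shared encoder with per-subject heads so knowledge is shared below the heads, in PyTorch with made-up dimensions; the meta-learning half and the similarity-based data selection are omitted:

    import torch
    import torch.nn as nn

    class SharedEncoderMTL(nn.Module):
        """Multi-task sketch: one encoder shared across subjects/devices,
        plus a small task-specific classification head per subject."""
        def __init__(self, in_dim=64, hidden=128, n_classes=6, n_subjects=5):
            super().__init__()
            self.encoder = nn.Sequential(      # invariant feature extractor
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.heads = nn.ModuleList(        # per-subject classifiers
                [nn.Linear(hidden, n_classes) for _ in range(n_subjects)]
            )

        def forward(self, x, subject):
            return self.heads[subject](self.encoder(x))

    model = SharedEncoderMTL()
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One multi-task step: accumulate the loss over each subject's batch
    # so gradients from all tasks update the shared encoder together.
    batches = {s: (torch.randn(8, 64), torch.randint(0, 6, (8,))) for s in range(5)}
    opt.zero_grad()
    loss = sum(loss_fn(model(x, s), y) for s, (x, y) in batches.items())
    loss.backward()
    opt.step()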
This paper considers the problem of time-difference-of-arrival (TDOA) source localization using possibly unreliable data collected by Internet of Things (IoT) sensors in error-prone environments. The Welsch loss function is integrated into a hardware-realizable projection-type neural network (PNN) model in order to make the location estimator robust to erroneous measurements. For statistical efficiency, the formulation is derived from the underlying time-of-arrival composition via joint estimation of the source position and onset time, instead of from the TDOA counterpart generated by post-processing the sensor-collected timestamps. The local stability conditions and implementation complexity of the proposed PNN model are also analyzed in detail. Simulation investigations demonstrate that our neurodynamic TDOA localization solution outperforms several existing schemes in terms of localization accuracy and computational efficiency.
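For intuition, a minimal NumPy sketch of the robust joint-estimation idea follows; it uses plain gradient descent with a graduated kernel width as a stand-in for the paper's projection-type neural network dynamics, and all constants (propagation speed, kernel schedule, learning rate) are illustrative assumptions:

    import numpy as np

    def welsch_grad(e, c):
        """Derivative of the Welsch loss (c**2/2)*(1 - exp(-(e/c)**2)):
        close to e for small residuals, vanishing for gross outliers."""
        return e * np.exp(-(e / c) ** 2)

    def locate(anchors, toas, v=343.0, lr=0.02, iters=1000):
        """Jointly estimate source position p and onset time t0 from TOA
        measurements. Works in metres with b = v * t0 as a range offset;
        the kernel width c is annealed so outliers are rejected gradually."""
        p = anchors.mean(axis=0).copy()            # crude initial guess
        b = 0.0
        for c in (50.0, 20.0, 10.0, 5.0, 2.0):     # graduated non-convexity
            for _ in range(iters):
                d = np.maximum(np.linalg.norm(anchors - p, axis=1), 1e-9)
                e = v * toas - b - d               # range residuals (m)
                g = welsch_grad(e, c)              # robust influence per sensor
                p += lr * (g[:, None] * (p - anchors) / d[:, None]).sum(axis=0)
                b += lr * g.sum()
        return p, b / v

    rng = np.random.default_rng(0)
    anchors = rng.uniform(0.0, 100.0, size=(8, 2))
    source, t0 = np.array([40.0, 60.0]), 0.02
    toas = t0 + np.linalg.norm(anchors - source, axis=1) / 343.0
    toas[1] += 0.05                                # one grossly erroneous sensor
    print(locate(anchors, toas))                   # should recover ~(40, 60), ~0.02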
As the Internet of Things (IoT) has emerged as the next logical stage of the Internet, it has become imperative to understand the vulnerabilities of IoT systems when supporting diverse applications. Because machine learning has been applied in many IoT systems, the security implications of machine learning need to be studied following an adversarial machine learning approach. In this paper, we propose an adversarial machine learning based partial-model attack on the data fusion/aggregation process of IoT that controls only a small part of the sensing devices. Our numerical results demonstrate the feasibility of this attack to disrupt decision making in data fusion with limited control of IoT devices; e.g., the attack success rate reaches 83% when the adversary tampers with only 8 out of 20 IoT devices. These results show that the machine learning engine of an IoT system is highly vulnerable even when the adversary manipulates only a small portion of IoT devices, and that the outcome of these attacks severely disrupts IoT system operations.
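The paper crafts its attack with adversarial machine learning against the fusion model; the toy sketch below only illustrates the threat model (an adversary controlling 8 of 20 devices feeding a fusion rule) using naive report flipping against majority voting, so its success rate will not match the 83% the paper reports for the optimized attack:

    import numpy as np

    rng = np.random.default_rng(1)

    def fuse(readings):
        """Toy data-fusion rule: majority vote over binary sensor reports."""
        return int(readings.sum() > len(readings) / 2)

    n, k, trials = 20, 8, 10_000
    success = 0
    for _ in range(trials):
        truth = rng.integers(0, 2)
        # Honest sensors report the true event with 90% accuracy.
        readings = np.where(rng.random(n) < 0.9, truth, 1 - truth)
        # Partial-model attack: the adversary controls k devices and
        # reports the opposite of the true event on each of them.
        readings[:k] = 1 - truth
        success += fuse(readings) != truth
    print(f"attack success rate: {success / trials:.2%}")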


