
Learn to Sense: a Meta-learning Based Sensing and Fusion Framework for Wireless Sensor Networks

Added by Zhaoyang Zhang
Publication date: 2019
Language: English





Wireless sensor networks (WSNs) act as the backbone of Internet of Things (IoT) technology. Field sensing and fusion are among the most common problems in WSNs; they involve collecting and processing a huge volume of spatial samples in an unknown field in order to reconstruct the field or extract its features. A major concern is how to reduce the communication overhead and data redundancy while meeting a prescribed fusion accuracy. In this paper, an integrated communication and computation framework based on meta-learning is proposed to enable adaptive field sensing and reconstruction. It consists of a stochastic gradient descent (SGD) based base-learner for field model prediction, which aims to minimize the average prediction error, and a reinforcement meta-learner, which optimizes the sensing decision by simultaneously rewarding the error reduction achieved with the samples obtained so far and penalizing the corresponding communication cost. An adaptive sensing algorithm built on this two-layer meta-learning framework is presented. It actively determines the next most informative sensing location, and thus requires considerably fewer spatial samples while yielding superior performance and robustness compared with conventional schemes. The convergence behavior of the proposed algorithm is also comprehensively analyzed and simulated. The results reveal that the proposed field sensing algorithm significantly improves the convergence rate.
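To make the two-layer idea concrete, the sketch below pairs an SGD-updated field model with a simple bandit-style meta-learner that selects the next sensing location. The 1-D field, the RBF feature map, the epsilon-greedy selection rule, and the reward shaping (error reduction minus a fixed communication penalty) are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Hedged sketch of the two-layer framework: the toy field, RBF features,
# epsilon-greedy meta-learner, and reward shaping are assumptions.

rng = np.random.default_rng(0)

locations = np.linspace(0.0, 1.0, 50)       # candidate sensing locations
centers = np.linspace(0.0, 1.0, 10)         # RBF centers of the field model

def features(x):
    return np.exp(-((x - centers) ** 2) / (2 * 0.05 ** 2))

def sense(x):
    # Noisy spatial sample of the unknown field (toy ground truth).
    return np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal()

def mean_prediction_error(w):
    preds = np.array([features(x) @ w for x in locations])
    return float(np.mean((preds - np.sin(2 * np.pi * locations)) ** 2))

w = np.zeros_like(centers)                  # base-learner weights
q = np.zeros_like(locations)                # meta-learner value per location
lr, lam, eps = 0.5, 0.01, 0.2               # SGD step, comm. penalty, exploration

prev_err = mean_prediction_error(w)
for step in range(200):
    # Meta-learner: choose the next sensing location (epsilon-greedy).
    idx = rng.integers(len(locations)) if rng.random() < eps else int(np.argmax(q))

    # Base-learner: one SGD step on the newly collected spatial sample.
    phi = features(locations[idx])
    w += lr * (sense(locations[idx]) - phi @ w) * phi

    # Reward: error reduction achieved so far minus the communication cost.
    err = mean_prediction_error(w)
    q[idx] += 0.1 * ((prev_err - err - lam) - q[idx])
    prev_err = err

print("final mean prediction error:", mean_prediction_error(w))
```

The point of the split is the division of labor: the base-learner only fits the field model from whatever samples arrive, while the meta-learner decides whether the next sample is worth its communication cost.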



Related research

This paper unveils the importance of an intelligent reflecting surface (IRS) in a wireless powered sensor network (WPSN). Specifically, a multi-antenna power station (PS) employs energy beamforming to provide wireless charging to multiple Internet of Things (IoT) devices, which use the harvested energy to deliver their own messages to an access point (AP). Meanwhile, an IRS is deployed to enhance the performance of wireless energy transfer (WET) and wireless information transfer (WIT) by intelligently adjusting the phase shift of each reflecting element. To evaluate the performance of this IRS-assisted WPSN, we aim to maximize its system sum throughput by jointly optimizing the energy beamforming of the PS, the transmission time allocation, and the phase shifts of the WET and WIT phases. The formulated problem is not jointly convex due to the multiple coupled variables. To deal with its non-convexity, we first find the phase shifts of the WIT phase independently in closed form. We then propose an alternating optimization (AO) algorithm to iteratively solve the sum throughput maximization problem. Specifically, a semidefinite programming (SDP) relaxation approach is adopted to design the energy beamforming and the time allocation for given phase shifts of the WET phase, which are in turn optimized for the given energy beamforming and time allocation. Moreover, we propose a low-complexity AO scheme that significantly reduces the computational complexity incurred by the SDP relaxation, in which the optimal closed-form energy beamforming, time allocation, and phase shifts of the WET phase are derived. Finally, numerical results are presented to validate the effectiveness of the proposed algorithm and highlight the beneficial role of the IRS in comparison with benchmark schemes.
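As a small illustration of why the WIT phase shifts admit a closed form, the toy snippet below considers a single IoT device and a single-antenna AP, where aligning every reflected path with the direct link maximizes the effective channel gain. The channel names (h_d, h_r, g), the Rayleigh-fading draws, and the single-antenna simplification are assumptions for illustration only.

```python
import numpy as np

# Toy phase-alignment example for the device -> IRS -> AP link; the channel
# model and single-antenna simplification are illustrative assumptions.

rng = np.random.default_rng(1)
N = 16                                   # number of IRS reflecting elements

h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)    # device -> AP direct link
h_r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # device -> IRS
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)    # IRS -> AP

# Align every reflected path with the direct path: each theta_n cancels the
# phase of its cascaded channel and adds the phase of the direct link.
theta = np.angle(h_d) - np.angle(h_r * g)
phase_shifts = np.exp(1j * theta)

effective = h_d + np.sum(phase_shifts * h_r * g)
random_phases = np.exp(1j * rng.uniform(0.0, 2 * np.pi, N))
print("|h_eff|^2 with aligned phases:", np.abs(effective) ** 2)
print("|h_eff|^2 with random phases :", np.abs(h_d + np.sum(random_phases * h_r * g)) ** 2)
```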
S. Xue, A. Li, J. Wang, 2019
Deep learning is driving a radical paradigm shift in wireless communications, all the way from the application layer down to the physical layer. Despite this, there is an ongoing debate as to what additional value artificial intelligence (or machine learning) could bring, particularly to physical layer design, and what penalties it may incur. These questions motivate a fundamental rethinking of wireless modem design in the artificial intelligence era. Through several physical-layer case studies, we argue for a significant role that machine learning could play, for instance in parallel error-control coding and decoding, channel equalization, interference cancellation, as well as multiuser and multiantenna detection. In addition, we discuss the fundamental bottlenecks of machine learning and their potential solutions.
In this paper we investigate fusion rules for distributed detection in large random clustered wireless sensor networks (WSNs) with a three-tier hierarchy: the sensor nodes (SNs), the cluster heads (CHs), and the fusion center (FC). The CHs collect the SNs' local decisions and relay them to the FC, which then fuses them to reach the ultimate decision. The SN-CH and CH-FC channels suffer from additive white Gaussian noise (AWGN). In this context, we derive the optimal log-likelihood ratio (LLR) fusion rule, which turns out to be intractable. We therefore develop a sub-optimal linear fusion rule (LFR) that weights each cluster's data according to both its local detection performance and the quality of the communication channels. In order to implement it, we propose an approximate maximum-likelihood based LFR (LFR-aML), which estimates the parameters required by the LFR. We also derive Gaussian-tail upper bounds for the detection and false-alarm probabilities of the LFR. Furthermore, an optimal CH transmission power allocation strategy is developed by solving the Karush-Kuhn-Tucker (KKT) conditions of the related optimization problem. Extensive simulations show that the LFR attains a detection performance close to that of the optimal LLR rule and confirm the validity of the proposed upper bounds. Moreover, compared with equal power allocation, simulations show that our proposed power allocation strategy achieves significant power savings at the expense of only a small reduction in detection performance.
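A minimal Monte Carlo sketch of the linear-fusion idea follows: each cluster head relays a local decision over an AWGN link, and the fusion center compares a weighted sum against a threshold. The heuristic weights, channel model, and threshold below are assumptions for illustration, not the paper's derived LFR or its maximum-likelihood parameter estimation.

```python
import numpy as np

# Illustrative linear fusion rule at the fusion center; weights, channels,
# and threshold are stand-in assumptions, not the paper's exact LFR.

rng = np.random.default_rng(2)
n_clusters = 5
pd = np.array([0.9, 0.8, 0.85, 0.7, 0.95])    # local detection prob. per cluster
pf = np.array([0.05, 0.1, 0.08, 0.2, 0.02])   # local false-alarm prob.
snr = np.array([10.0, 5.0, 8.0, 3.0, 12.0])   # CH -> FC channel SNR (linear)

# Heuristic weights: favour clusters with good local detection and good channels.
w = (pd - pf) * snr / (1.0 + snr)

def fuse(h1_present, n_trials=100_000):
    # Local decisions (1 = "target present") relayed over noisy CH -> FC links.
    p = pd if h1_present else pf
    decisions = rng.random((n_trials, n_clusters)) < p
    noise = rng.standard_normal((n_trials, n_clusters)) / np.sqrt(snr)
    received = decisions + noise
    return received @ w                       # LFR test statistic

tau = 0.5 * w.sum()                           # simple midpoint threshold
print("P_detection  :", np.mean(fuse(True) > tau))
print("P_false_alarm:", np.mean(fuse(False) > tau))
```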
We study wireless power transmission from an energy source to multiple energy harvesting nodes with the aim of maximizing energy efficiency. The source transmits energy to the nodes using one of the available power levels in each time slot, and the nodes transmit information back to the energy source using the harvested energy. The source does not have any channel state information; it only knows whether a received codeword from a given node was successfully decoded or not. With this limited information, the source has to learn the optimal power level that maximizes the energy efficiency of the network. We model the problem as a stochastic multi-armed bandit problem and develop an upper confidence bound (UCB) based algorithm, which learns the optimal transmit power of the energy source that maximizes the energy efficiency. Numerical results validate the performance guarantees of the proposed algorithm and show significant gains compared with the benchmark schemes.
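The snippet below sketches a UCB1-style learner over a handful of candidate power levels, using the 1-bit decoding feedback divided by the spent power as a stand-in reward for energy efficiency. The power levels, success probabilities, and reward definition are illustrative assumptions rather than the paper's exact model.

```python
import numpy as np

# Minimal UCB1 sketch for picking the transmit power level; the power levels,
# ACK probabilities, and reward definition are illustrative assumptions.

rng = np.random.default_rng(3)
power_levels = np.array([1.0, 2.0, 4.0, 8.0])   # candidate transmit powers
ack_prob = np.array([0.2, 0.5, 0.8, 0.9])       # unknown to the learner

counts = np.zeros(len(power_levels))
means = np.zeros(len(power_levels))

for t in range(1, 5001):
    if counts.min() == 0:
        arm = int(np.argmin(counts))                     # play each arm once first
    else:
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)  # UCB1 index
        arm = int(np.argmax(ucb))

    ack = rng.random() < ack_prob[arm]                   # 1-bit decoding feedback
    reward = float(ack) / power_levels[arm]              # proxy for energy efficiency
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]

print("estimated best power level:", power_levels[int(np.argmax(means))])
```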
In this paper, the problem of dynamic spectrum sensing and aggregation is investigated in a wireless network containing N correlated channels, where the channels are occupied or vacant according to an unknown joint 2-state Markov model. At each time slot, a single cognitive user with a certain bandwidth requirement either stays idle or selects a segment comprising C (C < N) contiguous channels to sense. The vacant channels in the selected segment are then aggregated to satisfy the user's requirement. The user receives a binary feedback signal indicating whether the transmission was successful (i.e., an ACK signal) after each transmission, and makes the next decision based on the sensed channel states. We aim to find a policy that maximizes the number of successful transmissions without interrupting the primary users (PUs). The problem can be formulated as a partially observable Markov decision process (POMDP) because the system environment is not fully observed. We implement a Deep Q-Network (DQN) to address the challenges of unknown system dynamics and computational expense. The performance of the DQN, Q-learning, and the improvident policy with known system dynamics is evaluated through simulations. The simulation results show that the DQN can achieve near-optimal performance across different system scenarios based only on partial observations and ACK signals.
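As a rough sketch of how a DQN can drive the segment selection, the toy below trains a small Q-network in a simplified environment with independent two-state Markov channels and a reward of 1 whenever the chosen segment contains enough vacant channels. The environment, the network size, and the assumption that sensed channel states are fully observed all simplify the paper's correlated-channel, ACK-only POMDP setting.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Toy DQN for segment selection; the environment (independent 2-state Markov
# channels, fully observed states) is an illustrative simplification.

N, C, B = 8, 4, 2                 # channels, segment width, bandwidth need
P_STAY_VACANT, P_STAY_BUSY = 0.8, 0.7
n_actions = N - C + 1             # candidate segment start indices

rng = np.random.default_rng(4)
state = rng.integers(0, 2, N)     # 1 = vacant, 0 = busy

def step_channels(s):
    stay = np.where(s == 1, P_STAY_VACANT, P_STAY_BUSY)
    keep = rng.random(N) < stay
    return np.where(keep, s, 1 - s)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

q = QNet()
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buffer = deque(maxlen=5000)
gamma, eps = 0.9, 0.1

obs = state.astype(np.float32)
for t in range(3000):
    # Epsilon-greedy segment choice from the Q-network.
    if random.random() < eps:
        a = random.randrange(n_actions)
    else:
        with torch.no_grad():
            a = int(q(torch.tensor(obs)).argmax())

    # ACK-style reward: 1 if the segment holds at least B vacant channels.
    reward = 1.0 if state[a:a + C].sum() >= B else 0.0
    state = step_channels(state)
    nxt = state.astype(np.float32)
    buffer.append((obs, a, reward, nxt))
    obs = nxt

    # One step of experience-replay training on the Bellman target.
    if len(buffer) >= 64:
        batch = random.sample(list(buffer), 64)
        s_b = torch.tensor(np.array([b[0] for b in batch]))
        a_b = torch.tensor([b[1] for b in batch])
        r_b = torch.tensor([b[2] for b in batch])
        n_b = torch.tensor(np.array([b[3] for b in batch]))
        target = r_b + gamma * q(n_b).max(1).values.detach()
        pred = q(s_b).gather(1, a_b.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```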
