
Physical-layer Network Coding in Two-Way Heterogeneous Cellular Networks with Power Imbalance

 Added by Ajay Thampi
Publication date: 2014
Language: English





The growing demand for high-speed data, quality of service (QoS) assurance and energy efficiency has triggered the evolution of 4G LTE-A networks to 5G and beyond. Interference is still a major performance bottleneck. This paper studies the application of physical-layer network coding (PNC), a technique that exploits interference, in heterogeneous cellular networks. In particular, we propose a rate-maximising relay selection algorithm for a single cell with multiple relays based on the decode-and-forward strategy. With nodes transmitting at different powers, the proposed algorithm adapts the resource allocation according to the differing link rates, and we prove theoretically that the optimisation problem is log-concave. The proposed technique is shown to perform significantly better than the widely studied selection-cooperation technique. We then undertake an experimental study, on a software radio platform, of the decoding performance of PNC with unbalanced SNRs in the multiple-access transmissions. Such imbalance is inherent in cellular networks, and it is shown that with channel coding and decoders based on multiuser detection and successive interference cancellation, the performance is in fact better with power imbalance. This paper paves the way for further research in multi-cell PNC, resource allocation, and the implementation of PNC with higher-order modulations and advanced coding techniques.
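To make the relay-selection idea concrete, the following is a minimal sketch, not the paper's algorithm: it assumes a simplified two-phase decode-and-forward rate model (a multiple-access fraction alpha of the frame and a broadcast fraction 1 - alpha) and exhaustively searches candidate relays and time splits for the largest sum exchange rate. The SNR values and the rate expressions are illustrative assumptions.

```python
import numpy as np

def df_exchange_rate(snr_ar, snr_br, snr_ra, snr_rb, alpha):
    """Toy two-way decode-and-forward rate model (an assumption, not the
    paper's formulation): a fraction `alpha` of the frame is the uplink
    multiple-access phase and (1 - alpha) is the downlink broadcast phase."""
    up_a = alpha * np.log2(1.0 + snr_ar)            # A -> relay must be decodable
    up_b = alpha * np.log2(1.0 + snr_br)            # B -> relay must be decodable
    down_a = (1.0 - alpha) * np.log2(1.0 + snr_ra)  # relay -> A broadcast leg
    down_b = (1.0 - alpha) * np.log2(1.0 + snr_rb)  # relay -> B broadcast leg
    # Each direction is limited by its own uplink and the opposite downlink.
    return min(up_a, down_b) + min(up_b, down_a)

def select_relay(relay_snrs, alphas=np.linspace(0.05, 0.95, 19)):
    """Pick the relay index and time split that maximise the sum exchange rate."""
    best = (None, None, -np.inf)
    for idx, (snr_ar, snr_br, snr_ra, snr_rb) in enumerate(relay_snrs):
        for a in alphas:
            r = df_exchange_rate(snr_ar, snr_br, snr_ra, snr_rb, a)
            if r > best[2]:
                best = (idx, a, r)
    return best

# Three candidate relays with deliberately imbalanced link SNRs (linear scale).
relays = [(10.0, 3.0, 8.0, 8.0), (6.0, 6.0, 12.0, 5.0), (15.0, 2.0, 4.0, 9.0)]
print(select_relay(relays))
```

The exhaustive search over the time split stands in for the convex (log-concave) optimisation the paper solves analytically; it is only meant to show how unequal link rates steer both the relay choice and the resource allocation.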



Related research

Physical-layer Network Coding (PNC) can significantly improve the throughput of the wireless two-way relay channel (TWRC) by allowing the two end nodes to transmit messages to the relay simultaneously. To achieve reliable communication, channel coding can be applied on top of PNC. This paper investigates link-by-link channel-coded PNC, in which a critical process at the relay is to transform the superimposed channel-coded packets received from the two end nodes plus noise, Y3 = X1 + X2 + W3, to the network-coded combination of the source packets, S1 XOR S2. This is distinct from the traditional multiple-access problem, in which the goal is to obtain S1 and S2 separately. The transformation from Y3 to (S1 XOR S2) is referred to as the Channel-decoding-Network-Coding process (CNC) in that it involves both channel decoding and network coding operations. A contribution of this paper is the insight that in designing CNC, we should first (i) channel-decode Y3 to the superimposed source symbols S1+S2 before (ii) transforming S1+S2 to the network-coded packets (S1 XOR S2). Compared with previously proposed strategies for CNC, this strategy reduces the channel-coding network-coding mismatch. It is not obvious, however, that an efficient decoder for step (i) exists. A second contribution of this paper is to provide an explicit construction of such a decoder based on the use of the Repeat Accumulate (RA) code. Specifically, we redesign the belief propagation algorithm of the RA code for the traditional point-to-point channel to suit the needs of the PNC multiple-access channel. Simulation results show that our new scheme outperforms the previously proposed schemes significantly in terms of BER without added complexity.
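The core of the CNC idea, decoding to the superimposed symbols and then mapping them to the XOR, can be illustrated with an uncoded-BPSK toy example. The sketch below is not the RA-coded belief-propagation decoder of the paper; it only shows the symbol-level XOR demapping under an assumed BPSK mapping x = 1 - 2s and small noise.

```python
import numpy as np

def pnc_xor_demap(y):
    """Hard-decision PNC demapping for uncoded BPSK (a toy stand-in for CNC).
    With x = 1 - 2*s, the superimposed symbol x1 + x2 lies in {-2, 0, +2};
    the value 0 occurs exactly when s1 != s2, i.e. s1 XOR s2 = 1.
    Thresholds at +/-1 separate the three clusters."""
    return (np.abs(np.asarray(y)) < 1.0).astype(int)   # |y| small -> XOR bit 1

# Example: two random source packets and a lightly noisy superimposed reception.
rng = np.random.default_rng(0)
s1 = rng.integers(0, 2, 16)
s2 = rng.integers(0, 2, 16)
x1, x2 = 1 - 2 * s1, 1 - 2 * s2
y3 = x1 + x2 + 0.1 * rng.standard_normal(16)   # Y3 = X1 + X2 + W3
print(np.array_equal(pnc_xor_demap(y3), s1 ^ s2))  # True when the noise is small
```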
This paper investigates the information freshness of two-way relay networks (TWRN) operated with physical-layer network coding (PNC). Information freshness is quantified by age of information (AoI), defined as the time elapsed since the generation time of the latest received information update. PNC reduces communication latency of TWRNs by turning superimposed electromagnetic waves into network-coded messages so that end users can send update packets to each other via the relay more frequently. Although sending update packets more frequently has the potential to reduce AoI, how to deal with packet corruption has not been well investigated. Specifically, if old packets are corrupted in any hop of a TWRN, one needs to decide whether the old packets should be dropped or retransmitted: new packets carry more recent information, but may require more time to be delivered. We study the average AoI with and without ARQ in PNC-enabled TWRNs. We first consider a non-ARQ scheme where old packets are always dropped when corrupted, referred to as once-lost-then-drop (OLTD), and a classical ARQ scheme with no packet loss, referred to as reliable packet transmission (RPT). Interestingly, our analysis shows that neither the non-ARQ scheme nor the pure ARQ scheme achieves good average AoI. We then put forth an uplink-lost-then-drop (ULTD) protocol that combines packet drop and ARQ. Experiments on software-defined radio indicate that ULTD significantly outperforms OLTD and RPT in terms of average AoI. Although this paper focuses on TWRNs, we believe the insight of ULTD applies generally to other two-hop networks. Our insight is that to achieve high information freshness, when packets are corrupted in the first hop, new packets should be generated and sent (i.e., old packets are discarded); when packets are corrupted in the second hop, old packets should be retransmitted until successful reception.
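The hop-dependent decision that distinguishes ULTD can be written as a small rule. The sketch below is an illustrative paraphrase of the stated insight, not the paper's protocol implementation; the function name and return strings are hypothetical.

```python
def ultd_action(hop, decode_ok):
    """Illustrative ULTD-style decision rule.
    hop:       1 for the uplink (end nodes -> relay), 2 for the downlink (relay -> ends).
    decode_ok: whether the packet was decoded correctly on that hop."""
    if decode_ok:
        return "deliver / forward"
    if hop == 1:
        # Uplink failure: the update is still at the source, so a fresher
        # update can replace it -- drop the old packet and generate a new one.
        return "drop old packet, sample a fresh update"
    # Downlink failure: the relay already holds the network-coded update;
    # retransmitting it (ARQ) is faster than starting over from the sources.
    return "retransmit old packet (ARQ)"

for hop in (1, 2):
    for ok in (True, False):
        print(hop, ok, "->", ultd_action(hop, ok))
```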
This paper presents the first reliable physical-layer network coding (PNC) system that supports real TCP/IP applications for the two-way relay network (TWRN). Theoretically, PNC could boost the throughput of TWRN by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise ratio (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been relatively few experimental and implementation efforts. Our earlier PNC prototype, built in 2012, was an offline system in which signals were processed after capture rather than in real time. For a system that supports real applications, signals must be processed online in real time. Our real-time reliable PNC prototype, referred to as RPNC, solves a number of key challenges to enable the support of real TCP/IP applications. The enabling components include: 1) a time-slotted system that achieves microsecond-level synchronization for the PNC system; 2) reduction of PNC signal processing complexity to meet real-time constraints; 3) an ARQ design tailored for PNC to ensure reliable packet delivery; 4) an interface to the application layer. We took on the challenge of implementing all of the above on general-purpose processors in a PC through an SDR platform, rather than on ASICs or FPGAs. With all these components, we have successfully demonstrated image exchange with TCP and two-party video conferencing with UDP over RPNC. Experimental results show that the achieved throughput approaches the PHY-layer data rate at high SNR, demonstrating the high efficiency of the RPNC system.
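The "factor of 2" throughput claim follows from simple slot counting for one bidirectional packet exchange; the sketch below just tabulates that standard TWRN accounting (it is the usual argument, not a measurement from the RPNC prototype).

```python
def slots_per_exchange(scheme):
    """Time slots needed for end nodes A and B to exchange one packet each via relay R."""
    return {
        "traditional scheduling": 4,            # A->R, R->B, B->R, R->A
        "straightforward network coding": 3,    # A->R, B->R, R broadcasts A XOR B
        "physical-layer network coding": 2,     # A and B -> R simultaneously; R broadcasts
    }[scheme]

for s in ("traditional scheduling", "straightforward network coding",
          "physical-layer network coding"):
    print(s, "->", slots_per_exchange(s), "slots")
```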
Future wireless networks will be characterized by heterogeneous traffic requirements. Such requirements may include low latency or a minimum throughput. Therefore, the network has to adjust to different needs. Usually, users with low-latency requirements have to deliver their demand within a specific time frame, i.e., before a deadline, and they co-exist with throughput-oriented users. In addition, the users are mobile and they share the same wireless channel. Therefore, they have to adjust their transmission power to achieve reliable communication. However, due to the limited power budget of wireless mobile devices, a power-efficient scheduling scheme is required by the network. In this work, we cast a stochastic network optimization problem for minimizing the packet drop rate while guaranteeing a minimum throughput and taking into account the limited-power capabilities of the users. We apply tools from Lyapunov optimization theory in order to provide an algorithm, named the Dynamic Power Control (DPC) algorithm, that solves the formulated problem in real time. It is proved that the DPC algorithm gives a solution arbitrarily close to the optimal one. Simulation results show that our algorithm outperforms the baseline Largest-Debt-First (LDF) algorithm for short deadlines and multiple users.
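A generic drift-plus-penalty step, in the spirit of the Lyapunov-based DPC algorithm described above, is sketched below. The traffic model, candidate power levels, constraint values, and the single-user setting are all illustrative assumptions rather than the paper's system model.

```python
import numpy as np

# Minimal drift-plus-penalty sketch: pick a per-slot transmit power that trades a
# drop penalty against virtual queues enforcing average-power and min-throughput
# constraints.  All parameter values below are illustrative assumptions.
rng = np.random.default_rng(1)
POWERS = np.array([0.0, 0.5, 1.0, 2.0])   # candidate transmit powers per slot
P_AVG, R_MIN, V = 0.8, 1.0, 10.0          # avg-power budget, min rate, tradeoff knob

Z = 0.0   # virtual queue for the average-power constraint
Q = 0.0   # virtual queue for the minimum-throughput constraint
drops = 0
T = 10_000

for t in range(T):
    h = rng.exponential(1.0)                    # fading channel gain this slot
    rate = np.log2(1.0 + POWERS * h)            # achievable rate per power level
    drop = (rate < R_MIN).astype(float)         # a slot "drops" if demand is missed
    # Drift-plus-penalty: weigh the drop penalty against both virtual queues.
    cost = V * drop + Z * (POWERS - P_AVG) - Q * rate
    i = int(np.argmin(cost))
    # Virtual-queue updates push the long-run averages toward the constraints.
    Z = max(Z + POWERS[i] - P_AVG, 0.0)
    Q = max(Q + R_MIN - rate[i], 0.0)
    drops += drop[i]

print("empirical drop rate:", drops / T)
```

Increasing V puts more weight on the drop penalty at the cost of larger virtual queues, which is the usual O(1/V) optimality versus O(V) backlog tradeoff in drift-plus-penalty schemes.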
Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a node failure is for a new node to download subsets of data stored at a number of surviving nodes, reconstruct a lost coded block using the downloaded data, and store it at the new node. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to download functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.
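The two extreme points of this storage/repair-bandwidth tradeoff, minimum-storage and minimum-bandwidth regenerating codes, have standard closed-form expressions in the regenerating-codes literature. The sketch below evaluates them for illustrative parameters (the file size M, reconstruction degree k, and repair degree d are assumptions) and compares them with naive repair, which downloads enough data to rebuild the whole file.

```python
def msr_point(M, k, d):
    """Minimum-storage regenerating point: (storage per node, repair bandwidth)."""
    alpha = M / k
    gamma = M * d / (k * (d - k + 1))
    return alpha, gamma

def mbr_point(M, k, d):
    """Minimum-bandwidth regenerating point: storage equals repair bandwidth."""
    alpha = gamma = 2 * M * d / (k * (2 * d - k + 1))
    return alpha, gamma

# Example: a 1000 MB file, any k = 10 nodes suffice to recover it,
# and a newcomer contacts d = 14 surviving nodes during repair.
M, k, d = 1000.0, 10, 14
print("naive erasure repair downloads (MB):", M)
print("MSR (storage, repair bw) in MB:", msr_point(M, k, d))
print("MBR (storage, repair bw) in MB:", mbr_point(M, k, d))
```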