We consider adversarial machine learning based attacks on power allocation where the base station (BS) allocates its transmit power to multiple orthogonal subcarriers by using a deep neural network (DNN) to serve multiple user equipments (UEs). The DNN, which corresponds to a regression model, is trained with channel gains as the input and allocated transmit powers as the output. While the BS allocates the transmit power to the UEs to maximize rates for all UEs, there is an adversary that aims to minimize these rates. The adversary may be an external transmitter that aims to manipulate the inputs to the DNN by interfering with the pilot signals that are transmitted to measure the channel gain. Alternatively, the adversary may be a rogue UE that transmits fabricated channel estimates to the BS. In both cases, the adversary carefully crafts adversarial perturbations to manipulate the inputs to the DNN of the BS subject to an upper bound on the strength of these perturbations. We consider attacks targeted at a single UE or at all UEs, and compare them with a benchmark in which the adversary simply scales down the input to the DNN. We show that adversarial attacks are much more effective than the benchmark attack in reducing the communication rates. We also show that adversarial attacks are robust to uncertainty at the adversary, including erroneous knowledge of the channel gains and potential errors in executing the attacks exactly as specified.
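The abstract does not spell out the attack construction, but a one-step gradient-sign (FGSM-style) perturbation is the canonical way to craft such norm-bounded input manipulations. The PyTorch sketch below is a minimal illustration under assumed shapes (a model mapping channel gains to per-UE transmit powers); the objective used here, driving down the power allocated to the attacked UE(s), is only a plausible proxy for the rate-minimization goal described above.

```python
import torch

def fgsm_attack(model, channel_gains, eps, target_ue=None):
    # Craft an L-infinity-bounded perturbation of the channel-gain input
    # so the DNN allocates less power to the attacked UE(s).
    x = channel_gains.clone().detach().requires_grad_(True)
    powers = model(x)                      # assumed: per-UE transmit powers
    # Proxy objective: power granted to the target UE (or to all UEs).
    obj = powers[target_ue] if target_ue is not None else powers.sum()
    obj.backward()
    # One gradient-sign step *against* the objective, bounded by eps.
    x_adv = x - eps * x.grad.sign()
    return x_adv.detach()
```

A targeted attack would pass the victim UE's index as `target_ue`; leaving it as `None` corresponds to attacking all UEs at once.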
Radio access network (RAN) slicing is an important part of network slicing in 5G. The evolving network architecture requires the orchestration of multiple network resources such as radio and cache resources. In recent years, machine learning (ML) techniques have been widely applied to network slicing. However, most existing works do not take advantage of the knowledge transfer capability of ML. In this paper, we propose a transfer reinforcement learning (TRL) scheme for joint radio and cache resource allocation to serve 5G RAN slicing. We first define a hierarchical architecture for the joint resource allocation. Then we propose two TRL algorithms: Q-value transfer reinforcement learning (QTRL) and action selection transfer reinforcement learning (ASTRL). In the proposed schemes, learner agents utilize the expert agents' knowledge to improve their performance on target tasks. The proposed algorithms are compared with both the model-free Q-learning and the model-based priority proportional fairness and time-to-live (PPF-TTL) algorithms. Compared with Q-learning, QTRL and ASTRL achieve 23.9% lower delay for the Ultra-Reliable Low-Latency Communications (URLLC) slice and 41.6% higher throughput for the enhanced Mobile Broadband (eMBB) slice, while converging significantly faster. Moreover, 40.3% lower URLLC delay and almost twice the eMBB throughput are observed with respect to PPF-TTL.
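The abstract names QTRL but not its update rule; a common way to realize Q-value transfer is to seed the learner's Q-table with the expert's and then run ordinary Q-learning on the target task. The sketch below assumes that reading and a tabular environment with a minimal reset()/step() interface; all names are illustrative, not taken from the paper.

```python
import numpy as np

def qtrl(expert_q, env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    # Seed the learner with the expert agent's Q-values (the transfer
    # step), then refine with standard Q-learning on the target task.
    q = expert_q.copy()
    n_actions = q.shape[1]
    for _ in range(episodes):
        s, done = env.reset(), False       # assumed: reset() -> state index
        while not done:
            # epsilon-greedy action selection on the transferred table
            a = (np.random.randint(n_actions) if np.random.rand() < eps
                 else int(np.argmax(q[s])))
            s2, r, done = env.step(a)      # assumed: step() -> (s', r, done)
            # temporal-difference update toward the bootstrapped target
            q[s, a] += alpha * (r + gamma * np.max(q[s2]) - q[s, a])
            s = s2
    return q
```

Starting from the expert's values rather than zeros is what buys the faster convergence the abstract reports, provided the source and target tasks are related.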
Vishnu B, Abhishek Sinha (2021)
This paper considers the problem of secure packet routing at the maximum achievable rate in a quantum key distribution (QKD) network. Assume that a QKD protocol generates symmetric private keys for secure communication over each link in a multi-hop network. The quantum key generation process, which is affected by noise, is assumed to be modeled by a stochastic counting process. Packets are first encrypted with the available quantum keys for each hop and then transmitted on a point-to-point basis over the communication links. A fundamental problem that arises in this setting is to design a secure and capacity-achieving routing policy that accounts for the time-varying availability of the quantum keys for encryption and the finite link capacities for transmission. In this paper, by combining the QKD protocol with the Universal Max Weight (UMW) routing policy, we design a new secure throughput-optimal routing policy, called Tandem Queue Decomposition (TQD). TQD solves the problem of secure routing efficiently for a wide class of traffic, including unicast, broadcast, and multicast. One of our main contributions is to show that the problem can be reduced to the usual generalized network flow problem on a transformed network without the key availability constraints. Simulation results show that the proposed policy incurs a substantially smaller delay than state-of-the-art routing and key management policies. The proof of throughput-optimality of the proposed policy makes use of Lyapunov stability theory along with a careful treatment of the key-storage dynamics.
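The decomposition can be pictured as each QKD link feeding two queues in tandem: packets wait for keys in an encryption queue, then for capacity in a transmission queue. The toy discrete-time sketch below illustrates that picture only; the paper's actual policy couples such queues with UMW scheduling across the network, which is not reproduced here.

```python
import random

class TandemLink:
    # Toy tandem-queue view of one QKD link: packets first wait for a
    # quantum key (encryption stage), then for link capacity
    # (transmission stage). All quantities are counts per slot.
    def __init__(self, capacity):
        self.capacity = capacity   # packets transmittable per slot
        self.key_pool = 0          # keys from the QKD counting process
        self.crypto_q = 0          # packets waiting for a key
        self.tx_q = 0              # encrypted packets awaiting transmission

    def slot(self, key_arrivals, pkt_arrivals):
        self.key_pool += key_arrivals
        self.crypto_q += pkt_arrivals
        # Each transmitted packet consumes one symmetric key.
        encrypted = min(self.key_pool, self.crypto_q)
        self.key_pool -= encrypted
        self.crypto_q -= encrypted
        self.tx_q += encrypted
        # Serve the second queue up to the per-slot link capacity.
        sent = min(self.tx_q, self.capacity)
        self.tx_q -= sent
        return sent

link = TandemLink(capacity=3)
for _ in range(1000):   # noisy key generation as a random counting process
    link.slot(key_arrivals=random.randint(0, 2),
              pkt_arrivals=random.randint(0, 2))
```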
One of the major capacity boosters for 5G networks is the deployment of ultra-dense heterogeneous networks (UDHNs). However, this deployment results in a tremendous increase in the energy consumption of the network due to the large number of base stations (BSs) involved. In addition to enhanced capacity, 5G networks must also be energy efficient to be economically viable and environmentally friendly. Dynamic cell switching is a very common way of reducing the total energy consumption of the network, but most of the proposed methods are computationally demanding, which makes them unsuitable for ultra-dense network deployments with a massive number of BSs. To tackle this problem, we propose a lightweight cell switching scheme known as the Threshold-based Hybrid cEll SwItching Scheme (THESIS) for energy optimization in UDHNs. The developed approach combines the benefits of clustering and the exhaustive search (ES) algorithm to produce a solution whose optimality is close to that of ES (which is guaranteed to be optimal) but is computationally more efficient, and as such can be applied for cell switching in real networks even when their dimension is large. The performance evaluation shows that THESIS produces a significant reduction in the energy consumption of the UDHN and is able to reduce the complexity of finding a near-optimal solution from exponential to polynomial.
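The complexity reduction comes from exhaustively searching on/off patterns only inside each cluster, shrinking 2^N combinations over all N BSs to 2^{N_c} per cluster. The sketch below illustrates that idea; the offloaded-load threshold is a toy stand-in for the paper's actual QoS/capacity constraints, which the abstract does not specify.

```python
from itertools import product

import numpy as np
from sklearn.cluster import KMeans

def thesis_like_switching(bs_xy, bs_load, bs_power, n_clusters, max_offload):
    # bs_xy: (N, 2) BS coordinates; bs_load, bs_power: (N,) arrays.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(bs_xy)
    on = np.ones(len(bs_load), dtype=bool)        # True = BS stays on
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        best_cost, best = float("inf"), None
        # Exhaustive search, but only over 2^{cluster size} patterns.
        for pattern in product([True, False], repeat=len(idx)):
            keep = np.array(pattern)
            if bs_load[idx][~keep].sum() > max_offload:
                continue                          # toy QoS/capacity guard
            cost = bs_power[idx][keep].sum()      # energy of BSs left on
            if cost < best_cost:
                best_cost, best = cost, keep
        if best is not None:
            on[idx] = best
    return on
```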
Urban LoRa networks promise to provide a cost-efficient and scalable communication backbone for smart cities. One core challenge in rolling out and operating these networks is radio network planning, i.e., precise predictions about possible new locations and their impact on network coverage. Path loss models aid in this task, but evaluating and comparing different models requires a sufficiently large set of high-quality received packet power samples. In this paper, we report on a corresponding large-scale measurement study covering an urban area of 200 km² over a period of 230 days using sensors deployed on garbage trucks, resulting in more than 112,000 high-quality samples of received packet power. Using this data, we compare eleven previously proposed path loss models and additionally provide new coefficients for the Log-distance model. Our results reveal that the Log-distance model and other well-known empirical models such as Okumura or Winner+ provide reasonable estimations in an urban environment, whereas terrain-based models such as ITM or ITWOM offer no advantages. In addition, we derive estimations of the sample size needed in similar measurement campaigns. To stimulate further research in this direction, we make all our data publicly available.
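Deriving new coefficients for the Log-distance model, PL(d) = PL(d0) + 10 n log10(d/d0) plus shadowing, reduces to a linear least-squares fit. The sketch below shows that textbook fit; the sample arrays and reference distance d0 are assumed inputs, and the paper's exact fitting procedure is not stated in the abstract.

```python
import numpy as np

def fit_log_distance(distances_m, path_loss_db, d0=1.0):
    # Fit PL(d) = PL(d0) + 10*n*log10(d/d0) by ordinary least squares,
    # returning the reference loss PL(d0) [dB] and path-loss exponent n.
    x = 10.0 * np.log10(np.asarray(distances_m) / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl_d0, n), *_ = np.linalg.lstsq(A, np.asarray(path_loss_db), rcond=None)
    return pl_d0, n
```

With measured (distance, path loss) pairs from the received-power samples, the residuals of this fit also give the shadowing standard deviation usually reported alongside the exponent.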
A status updating system is considered in which multiple data sources generate packets to be delivered to a destination through a shared energy harvesting sensor. Only one source's data, when available, can be transmitted by the sensor at a time, subject to energy availability. Transmissions are prone to erasures, and each successful transmission constitutes a status update for its corresponding source at the destination. The goal is to schedule source transmissions such that the collective long-term average age-of-information (AoI) is minimized, where AoI is defined as the time elapsed since the latest successfully received data was generated at its source. To solve this problem, the case with a single source is first considered, with a focus on threshold waiting policies, in which the sensor attempts transmission only if the time until both energy and data are available grows above a certain threshold. The distribution of the AoI is fully characterized under such a policy. This is then used to analyze the performance of the multiple-source case under maximum-age-first scheduling, in which the sensor's resources are dedicated to the source with the maximum AoI at any given time. The achievable collective long-term average AoI is derived in closed form. Multiple numerical evaluations are presented to show how the optimal threshold value behaves as a function of the system parameters, and to showcase the benefits of a threshold-based waiting policy with intermittent energy and data arrivals.
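A quick way to see why waiting can help is to simulate a single source in discrete time. The sketch below is such a toy (generate-at-will data, Bernoulli energy arrivals, erasure-prone channel), not the paper's analytical model; sweeping `threshold` reproduces the qualitative trade-off the numerical evaluations examine.

```python
import random

def average_aoi(threshold, p_energy, p_erasure, horizon=100_000):
    # Toy single-source simulation of a threshold waiting policy:
    # transmit only once AoI exceeds the threshold and energy is on hand.
    aoi, energy, aoi_sum = 1, 0, 0
    for _ in range(horizon):
        energy += random.random() < p_energy     # Bernoulli energy arrival
        if aoi >= threshold and energy >= 1:
            energy -= 1                          # spend one unit to transmit
            if random.random() >= p_erasure:     # update survives erasure
                aoi = 0                          # fresh sample just delivered
        aoi += 1
        aoi_sum += aoi
    return aoi_sum / horizon

# Sweeping the threshold exposes how the optimal value depends on the
# energy and erasure parameters:
best = min(range(1, 20), key=lambda th: average_aoi(th, 0.3, 0.1))
```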
Decentralized control, low complexity, and flexible and efficient communications are the requirements of an architecture that aims to scale blockchains beyond the current state. Such properties are attainable by reducing the ledger size and providing parallel operations in the blockchain. Sharding is one of the approaches that lower the burden on the nodes and enhance performance. However, current solutions lack the features for resolving concurrency during cross-shard communications. With multiple participants belonging to different shards, handling concurrent operations is essential for optimal sharding. This issue becomes prominent due to the lack of architectural support and requires additional consensus for cross-shard communications. Inspired by hybrid Proof-of-Work/Proof-of-Stake (PoW/PoS) designs such as Ethereum's, hybrid consensus, and the 2-hop blockchain, we propose Reinshard, a new blockchain that inherits the properties of hybrid consensus for optimal sharding. Reinshard uses PoW and PoS chain-pairs, with PoS sub-chains for all the valid chain-pairs, where hybrid consensus is attained through a Verifiable Delay Function (VDF). Our architecture provides a secure method of arranging nodes in shards and resolves concurrency conflicts using the delay factor of the VDF. The applicability of Reinshard is demonstrated through security and experimental evaluations, and a practical concurrency problem is considered to show its efficacy in providing optimal sharding.
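The concurrency-resolving ingredient here is the VDF's enforced sequential delay. A standard construction (not necessarily the one Reinshard uses) is repeated squaring in a group of unknown order; the toy evaluation below shows only the delay side and omits the succinct correctness proof a real VDF must also output, e.g., Wesolowski's.

```python
def toy_vdf(x, t, n):
    # Evaluate y = x^(2^t) mod n by t sequential squarings. The t steps
    # cannot be parallelized, which is the enforced delay that VDF-based
    # designs use for ordering; illustrative only, no proof is produced.
    y = x % n
    for _ in range(t):
        y = (y * y) % n       # inherently sequential step
    return y
```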
Decentralized cooperative resource allocation schemes for robotic swarms are essential to enable high reliability in high-throughput data exchanges. These cooperative schemes require control signaling to avoid half-duplex problems at the receiver and to mitigate interference. We propose two cooperative resource allocation schemes, device sequential and group scheduling, and introduce a control signaling design. We observe that failure in the reception of these control signals leads to non-cooperative behavior and to significant performance degradation. The causes of these failures are identified, and specific countermeasures are proposed and evaluated. We compare the proposed resource allocation schemes against the NR sidelink mode 2 resource allocation and show that, even though signaling has an important impact on resource allocation performance, our proposed device sequential and group scheduling schemes improve reliability by an order of magnitude compared to sidelink mode 2.
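The abstract only names the two schemes, so the sketch below is a loose reading: device-sequential hands out non-overlapping slots per device (sidestepping the half-duplex problem of transmitting when one should be listening), while group scheduling lets whole groups take turns to reduce signaling overhead. All details here are assumptions for illustration, not the paper's design.

```python
def device_sequential(devices, n_slots):
    # Round-robin: one device transmits per slot, so a device is never
    # asked to transmit while it should be receiving (half-duplex safe).
    return {d: [s for s in range(n_slots) if s % len(devices) == i]
            for i, d in enumerate(devices)}

def group_scheduling(devices, group_size, n_slots):
    # Groups take turns: fewer control-signaling rounds, at the cost of
    # possible intra-group contention the control design must manage.
    groups = [devices[i:i + group_size]
              for i in range(0, len(devices), group_size)]
    return {s: groups[s % len(groups)] for s in range(n_slots)}
```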
Automated and industrial Internet of Things (IoT) devices are increasing daily. As the number of IoT devices grows, the volume of data generated by them will also grow. Efficiently managing these rapidly expanding IoT devices and their enormous data, so that it remains available to all authorized users without compromising its integrity, will become essential in the near future. On the other hand, many information security incidents have been recorded, increasing the need for countermeasures. While safeguards against hostile third parties have been commonplace until now, operators and parties have seen an increase in demand for the detection and blocking of data falsification. Blockchain technology is well known for its privacy, immutability, and decentralized nature. Single-board computers are becoming more powerful while also becoming more affordable as IoT platforms, and are gaining traction in the automation industry. This study focuses on a paradigm of IoT-blockchain integration in which the blockchain node runs autonomously on the IoT platform itself. It enables the system to conduct machine-to-machine transactions without human intervention and to exert direct access control over IoT devices. This paper assumes that readers are familiar with basic Hyperledger Fabric operations and focuses on the practical approach to integration; a basic introduction is provided for readers new to blockchain.
In this paper, an analytical framework is presented for evaluating the performance of scalable cell-free massive MIMO (SCF-mMIMO) systems in which all user equipments (UEs) and access points (APs) employ finite-resolution digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) and operate under correlated Rician fading. Using maximal-ratio combining (MRC) detection, generic expressions for the uplink (UL) spectral efficiency (SE) are derived for both distributed and centralized schemes. In order to further reduce the computational complexity (CC) of the original local partial MMSE (LP-MMSE) and partial MMSE (P-MMSE) detectors, two novel scalable low-complexity MMSE detectors are proposed for the distributed and centralized schemes respectively, which achieve very similar SE performance. Furthermore, for the distributed scheme a novel partial large-scale fading decoding (P-LSFD) weighting vector is introduced, whose analytical SE performance is very similar to that of an equivalent unscalable LSFD vector. Finally, a scalable algorithm that jointly performs AP cluster formation, pilot assignment, and power control is proposed; it outperforms the conventional random pilot assignment and user-group-based pilot assignment policies and, contrary to an equal-power transmit strategy, guarantees quality-of-service (QoS) fairness for all accessing UEs.
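As a concrete anchor for the detection model, the sketch below shows textbook MRC combining together with a toy uniform quantizer standing in for the finite-resolution ADCs/DACs; the paper's SE analysis, including its exact quantization distortion model, is not reproduced here.

```python
import numpy as np

def uniform_quantize(x, bits, max_amp):
    # Toy uniform quantizer applied to real and imaginary parts,
    # standing in for a finite-resolution ADC/DAC.
    step = 2 * max_amp / (2 ** bits)
    q = lambda v: np.clip(np.round(v / step) * step, -max_amp, max_amp)
    return q(x.real) + 1j * q(x.imag)

def mrc_detect(y, h_hat):
    # Maximal-ratio combining: project the (quantized) received vector
    # onto the channel estimate and normalize.
    return (h_hat.conj() @ y) / (np.linalg.norm(h_hat) ** 2)

# Example: N-antenna uplink of one UE symbol s through channel h.
rng = np.random.default_rng(0)
N, s = 16, (1 + 1j) / np.sqrt(2)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = uniform_quantize(h * s + noise, bits=4, max_amp=2.0)
s_hat = mrc_detect(y, h)
```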