The 5G Internet of Vehicles has become a new paradigm alongside the growing popularity and variety of computation-intensive applications with high requirements for computational resources and analysis capabilities. Existing network architectures and resource management mechanisms may not sufficiently guarantee satisfactory Quality of Experience and network efficiency, mainly due to the limited coverage of Road Side Units, the insufficient resources and computational capabilities of onboard equipment, frequently changing network topology, and ineffective resource management schemes. To meet the demands of such applications, in this article we first propose a novel architecture that integrates the satellite network with the 5G cloud-enabled Internet of Vehicles to efficiently support seamless coverage and global resource management. Under this framework, we formulate an incentive-mechanism-based joint optimization problem of opportunistic computation offloading under delay and cost constraints, in which a vehicular user can either significantly reduce the application completion time by offloading workloads to several nearby vehicles through opportunistic vehicle-to-vehicle channels while effectively controlling the cost, or protect its own profit by providing compensated computing service. As the optimization problem is non-convex and NP-hard, simulated annealing based on Markov Chain Monte Carlo sampling and the Metropolis algorithm is applied, which efficiently obtains high-quality, cost-effective approximations of the globally optimal solution. The effectiveness of the proposed mechanism is corroborated through simulation results.
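To make the named solver concrete, here is a minimal Python sketch of simulated annealing with Metropolis acceptance, the generic scheme the abstract refers to. The `cost` and `neighbor` functions (e.g., a weighted delay-plus-cost objective over offloading assignments and a random reassignment move) are placeholders, not the paper's actual formulation.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, t_min=1e-3, alpha=0.95, sweeps=100):
    """Generic simulated annealing with Metropolis acceptance.

    cost(x)     -> scalar objective to minimize (e.g., weighted delay + cost)
    neighbor(x) -> random perturbation of the current offloading assignment
    """
    x, fx = x0, cost(x0)
    best, f_best = x, fx
    t = t0
    while t > t_min:
        for _ in range(sweeps):
            y = neighbor(x)
            fy = cost(y)
            # Metropolis criterion: always accept improvements; accept a
            # worse move with probability exp(-(fy - fx) / t).
            if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < f_best:
                    best, f_best = x, fx
        t *= alpha  # geometric cooling, a common MCMC-style schedule
    return best, f_best
```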
In this paper, we propose a novel resource management scheme that jointly allocates the transmit power and computational resources in a centralized radio access network architecture. The network comprises a set of computing nodes to which the requested tasks of different users are offloaded. The optimization problem minimizes the energy consumption of task offloading while taking the end-to-end latency, i.e., the transmission, execution, and propagation latencies of each task, into account. We aim to allocate the transmit power and computational resources such that the maximum acceptable latency of each task is satisfied. Since the optimization problem is non-convex, we divide it into two sub-problems: one for transmit power allocation and another for task placement and computational resource allocation. Transmit power is allocated via the convex-concave procedure. In addition, a heuristic algorithm is proposed to jointly manage computational resources and task placement. We also propose a feasibility analysis that finds a feasible subset of tasks. Furthermore, a disjoint method that separately allocates the transmit power and the computational resources is proposed as a baseline for comparison. A lower bound on the optimal solution is also derived by exhaustively searching over task placement decisions and applying the Karush-Kuhn-Tucker conditions. Simulation results show that the joint method outperforms the disjoint method in terms of acceptance ratio, and that the optimality gap of the joint method is less than 5%.
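As a sketch of the power-allocation step: the convex-concave procedure (CCP) iteratively linearizes the concave part of a difference-of-convex objective and solves the resulting convex surrogate. The toy objective below is purely illustrative; the paper's actual power-allocation subproblem is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def ccp(f, g, grad_g, x0, iters=50, tol=1e-6):
    """Convex-concave procedure for minimizing f(x) - g(x), with f and g convex.

    Each iteration replaces the concave term -g with its affine upper bound
    at the current iterate, yielding a convex surrogate for a standard solver.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        gx, dg = g(x), grad_g(x)
        surrogate = lambda y, x=x, gx=gx, dg=dg: f(y) - (gx + dg @ (y - x))
        x_new = minimize(surrogate, x).x  # convex subproblem (generic solver here)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy difference-of-convex example: minimize x^2 - |x|, optimum at |x| = 0.5.
f = lambda x: float(x @ x)
g = lambda x: float(np.abs(x).sum())
grad_g = lambda x: np.sign(x)
print(ccp(f, g, grad_g, x0=np.array([2.0])))  # converges near [0.5]
```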
The Internet of Things (IoT) is considered an enabling platform for a variety of promising applications, such as smart transportation and smart cities, where massive numbers of devices are interconnected for data collection and processing. These IoT applications place high demands on storage and computing capacity, while IoT devices are usually resource-constrained. As a potential solution, mobile edge computing (MEC) deploys cloud resources in the proximity of IoT devices so that their requests can be better served locally. In this work, we investigate computation offloading in a dynamic MEC system with multiple edge servers, where computational tasks with various requirements are dynamically generated by IoT devices and offloaded to MEC servers in a time-varying operating environment (e.g., channel conditions change over time). The objective of this work is to maximize the number of tasks completed before their respective deadlines while minimizing energy consumption. To this end, we propose an end-to-end Deep Reinforcement Learning (DRL) approach that selects the best edge server for offloading and allocates computational resources such that the expected long-term utility is maximized. Simulation results demonstrate that the proposed approach outperforms existing methods.
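Below is a minimal value-based stand-in for the paper's DRL agent (tabular Q-learning rather than a deep network, to stay short): the state quantizes the time-varying channel condition, the action picks an edge server, and the reward is assumed to trade off deadline hits against energy, mirroring the stated utility. All environment details here are hypothetical.

```python
import numpy as np

n_states, n_servers = 10, 4   # quantized channel states, candidate edge servers
Q = np.zeros((n_states, n_servers))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

def select_server(state):
    """Epsilon-greedy choice of the offloading target."""
    if np.random.rand() < eps:
        return np.random.randint(n_servers)
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """One-step temporal-difference update of the long-term utility estimate.

    reward is assumed to be, e.g., +1 for a task finished before its
    deadline minus a weighted energy term (hypothetical shaping)."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])
```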
Cellular technology is largely an urban technology that has been unable to serve rural areas well, because traditional cellular models are not economical for areas with low user density and lower revenues. In 5G cellular networks, this coverage dilemma is likely to remain, widening the rural-urban digital divide further. It is about time to identify the root cause that has hindered rural technology growth and to analyse the options within the 5G architecture for addressing this issue. We advocate that it can only be accomplished in two phases, by sequentially addressing economic viability followed by performance progression. We discuss how various works in the literature focus on the latter stage of this two-phase problem and are therefore not feasible to implement in the first place. We propose the concept of TV band white space (TVWS) dovetailed with 5G infrastructure for rural coverage and show that it can yield cost-effectiveness from a service provider's perspective.
We consider the problem of maximizing aggregate user utility over a multi-hop network, subject to link capacity constraints, maximum end-to-end delay constraints, and user throughput requirements. A user's utility is a concave function of its achieved throughput or its experienced maximum delay. The problem is important for supporting real-time multimedia traffic, and is uniquely challenging due to the need to simultaneously consider maximum delay constraints and throughput requirements. We first show that it is NP-complete either (i) to construct a feasible solution strictly meeting all constraints, or (ii) to obtain an optimal solution after relaxing the maximum delay constraints or throughput requirements by up to constant ratios. We then develop a polynomial-time approximation algorithm named PASS. The design of PASS leverages a novel connection between non-convex maximum-delay-aware problems and their convex average-delay-aware counterparts, which can be of independent interest and suggests a new avenue for solving maximum-delay-aware network optimization problems. Under realistic conditions, PASS achieves constant or problem-dependent approximation ratios, at the cost of violating the maximum delay constraints or throughput requirements by up to constant or problem-dependent ratios. PASS is practically useful since its conditions are satisfied in many popular application scenarios. We empirically evaluate PASS using extensive simulations of video-conferencing traffic across Amazon EC2 datacenters. Compared to existing algorithms and a conceivable baseline, PASS obtains up to 100% improvement in utility by meeting the throughput requirements while relaxing the maximum delay constraints to an extent acceptable for practical video-conferencing applications.
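Read literally, the optimization described above can be stated as follows; the notation is ours, chosen for illustration: x_u is user u's throughput, d_u its experienced maximum end-to-end delay, p_u its path, c_e the capacity of link e, and D_u and R_u the per-user delay bound and throughput requirement.

```latex
\begin{align*}
\max \quad & \sum_{u} U_u, \qquad U_u \text{ a concave function of } x_u \text{ or } d_u \\
\text{s.t.} \quad & \sum_{u:\, e \in p_u} x_u \le c_e \quad \forall e && \text{(link capacity)} \\
& d_u \le D_u \quad \forall u && \text{(maximum end-to-end delay)} \\
& x_u \ge R_u \quad \forall u && \text{(throughput requirement)}
\end{align*}
```

The NP-completeness results say that merely finding a point satisfying these constraint families, or approximating the optimum after relaxing the delay or throughput constraints by constant factors, is intractable in general, which is what motivates the bi-criteria guarantees of PASS.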
In this work, we consider the problem of jointly minimizing the average cost of sampling and transmitting status updates by users over a wireless channel, subject to average Age of Information (AoI) constraints. Transmission errors may occur, and a scheduling policy must decide whether a user samples a new packet or attempts retransmission of the previously sampled packet. The cost consists of both sampling and transmission costs; sampling a new packet after a failure imposes an additional cost on the system. We formulate a stochastic optimization problem with the average cost as the objective under average AoI constraints. To solve this problem, we propose three scheduling policies: (a) a dynamic policy that is centralized and requires full knowledge of the system state, and (b) two stationary randomized policies that require no knowledge of the system state. We utilize tools from Lyapunov optimization theory to derive the dynamic policy, and we prove that its solution is arbitrarily close to the optimal one. To derive the randomized policies, we model the system as a Discrete-Time Markov Chain (DTMC) and provide closed-form and approximate expressions for the average AoI and its distribution under each randomized policy. Simulation results show the importance of retaining the option to transmit an old packet in order to minimize the total average cost.
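As a toy illustration of the trade-off the randomized policies capture, the sketch below simulates a stationary policy that resamples with probability q and otherwise retransmits the old packet. The success probability p_s and the cost values are hypothetical, not the paper's model parameters.

```python
import random

def simulate(q, p_s=0.7, c_sample=1.0, c_tx=0.5, horizon=100_000, seed=0):
    """Average cost and average AoI of a stationary randomized policy.

    Each slot: with probability q sample a fresh packet (extra sampling cost),
    then transmit; on success the receiver's AoI drops to the packet's age."""
    rng = random.Random(seed)
    aoi, pkt_age = 1, 0
    total_cost, total_aoi = 0.0, 0
    for _ in range(horizon):
        if rng.random() < q:          # sample a new packet
            pkt_age = 0
            total_cost += c_sample
        total_cost += c_tx            # transmission attempt every slot
        if rng.random() < p_s:        # successful delivery
            aoi = pkt_age + 1
        else:                         # failure: age keeps growing
            aoi += 1
        pkt_age += 1
        total_aoi += aoi
    return total_cost / horizon, total_aoi / horizon

# Always resampling (q=1) pays more sampling cost than mostly retransmitting.
print(simulate(q=1.0), simulate(q=0.2))
```

Retransmitting an old packet still reduces the receiver's AoI to that packet's (larger) age on success, which is why keeping the retransmission option can lower the total average cost while still meeting an average-AoI constraint.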