Software-defined networking (SDN) provides an agile and programmable way to optimize radio access networks via control-data plane separation. Nevertheless, reaping the benefits of wireless SDN hinges on making optimal use of the limited wireless fronthaul capacity. In this work, the problem of fronthaul-aware resource allocation and user scheduling is studied. To this end, a two-timescale fronthaul-aware SDN control mechanism is proposed in which the controller maximizes the time-averaged network throughput by enforcing a coarse correlated equilibrium on the long timescale. Subsequently, leveraging the controller's recommendations, each base station schedules its users using Lyapunov stochastic optimization on the short timescale, i.e., at each time slot. Simulation results show that significant network throughput enhancements and up to 40% latency reduction are achieved with the aid of the SDN controller. Moreover, the gains are more pronounced for denser network deployments.
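As a rough illustration of the long-timescale step, the sketch below uses regret matching, a standard no-regret rule whose empirical play converges to the set of coarse correlated equilibria. The payoff matrix, action set, and function names are hypothetical stand-ins for illustration, not the paper's actual mechanism.

```python
import numpy as np

def regret_matching(payoffs, seed=0):
    """Toy no-regret learner for the controller's long-timescale step.

    payoffs -- T x A array (hypothetical): payoffs[t, a] is the network
               throughput that recommendation a (e.g., a low-interference
               sub-carrier set) would have yielded in frame t.

    Regret matching plays each action with probability proportional to its
    positive cumulative regret; the empirical play of such no-regret
    dynamics converges to the set of coarse correlated equilibria.
    """
    rng = np.random.default_rng(seed)
    T, A = payoffs.shape
    regret = np.zeros(A)
    played = []
    for t in range(T):
        pos = np.maximum(regret, 0.0)
        prob = pos / pos.sum() if pos.sum() > 0 else np.full(A, 1.0 / A)
        a = int(rng.choice(A, p=prob))
        played.append(a)
        regret += payoffs[t] - payoffs[t, a]  # counterfactual regret update
    return played
```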
Software-defined networking (SDN) is the concept of decoupling the control and data planes to create a flexible and agile network, assisted by a central controller. However, the performance of SDN highly depends on the limitations of the fronthaul, which are inadequately discussed in the existing literature. In this paper, a fronthaul-aware software-defined resource allocation mechanism is proposed for 5G wireless networks with in-band wireless fronthaul constraints. Considering the fronthaul capacity, the controller maximizes the time-averaged network throughput by enforcing a coarse correlated equilibrium (CCE) and incentivizing base stations (BSs) to locally optimize their decisions so as to satisfy the quality-of-service (QoS) requirements of mobile users (MUs). By marrying tools from Lyapunov stochastic optimization and game theory, we propose a two-timescale approach in which the controller provides recommendations, i.e., sub-carriers with low interference, on a long timescale, whereas BSs schedule their own MUs and allocate the available resources in every time slot. Numerical results show considerable throughput enhancements and delay reductions over a non-SDN network baseline.
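The short-timescale step can be pictured as a per-slot max-weight decision at each BS, restricted to the controller-recommended sub-carriers. The following is a minimal sketch assuming hypothetical queue and rate arrays and a simple drift-plus-penalty weight V; the paper's actual objective and constraints are richer.

```python
import numpy as np

def schedule_slot(queues, rates, recommended, V=10.0):
    """Per-slot BS scheduling sketch (illustrative, not the paper's exact rule).

    queues      -- 1-D array: queues[u] is the backlog of MU u (bits)
    rates       -- 2-D array: rates[u, c] is MU u's achievable rate on sub-carrier c
    recommended -- sub-carrier indices suggested by the controller (long timescale)
    V           -- drift-plus-penalty weight trading throughput against backlog

    On each recommended sub-carrier, serve the MU maximizing (Q_u + V) * r_{u,c},
    a standard drift-plus-penalty/max-weight instance from Lyapunov optimization.
    """
    assignment = {}
    for c in recommended:
        assignment[c] = int(np.argmax((queues + V) * rates[:, c]))
    return assignment
```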
The performance of computer networks relies on how bandwidth is shared among different flows. Fair resource allocation is a challenging problem, particularly when the flows evolve over time. To address this issue, bandwidth-sharing techniques that react quickly to traffic fluctuations are of interest, especially in large-scale settings with hundreds of nodes and thousands of flows. In this context, we propose a distributed algorithm that tackles the fair resource allocation problem in a distributed SDN control architecture. Our algorithm continuously generates a sequence of resource allocation solutions converging to the fair allocation while always remaining feasible, a property that standard primal-dual decomposition methods often lack. Thanks to the distribution of all compute-intensive operations, we demonstrate that we can handle large instances in real time.
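The "always feasible" property the abstract highlights can be illustrated with a toy iteration: take a gradient step toward proportional fairness, then rescale so no link capacity is ever exceeded. This is only a sketch of the idea under assumed inputs (a link-flow incidence matrix and a capacity vector), not the authors' algorithm.

```python
import numpy as np

def fair_share(routes, capacity, iters=500, step=0.01):
    """Feasibility-preserving iterates toward proportional fairness (illustrative).

    routes   -- link-by-flow incidence matrix A (A[l, f] = 1 if flow f uses link l)
    capacity -- vector of link capacities c

    Each iteration ascends the proportional-fairness objective sum(log x_f)
    and then scales the iterate back inside the capacity region, so every
    intermediate allocation remains feasible.
    """
    n_links, n_flows = routes.shape
    x = np.full(n_flows, capacity.min() / n_flows)  # strictly feasible start
    for _ in range(iters):
        x = x + step / x                  # gradient of sum(log x_f) is 1/x_f
        load = routes @ x
        scale = np.max(load / capacity)   # worst-case link over-utilization
        if scale > 1.0:
            x = x / scale                 # pull back into the feasible region
    return x
```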
Radio access network (RAN) virtualization is gaining more and more ground and is expected to re-architect next-generation cellular networks. Existing RAN virtualization studies and solutions have mostly focused on sharing communication capacity and tend to require the use of the same PHY and MAC layers across network slices. This approach does not consider scenarios where different slices require different PHY and MAC layers, for instance, for radically different services and for whole-stack research in wireless living labs where novel PHY and MAC layers need to be deployed concurrently with existing ones on the same physical infrastructure. To enable whole-stack slicing where different PHY and MAC layers may be deployed in different slices, we develop PV-RAN, the first open-source virtual RAN platform that enables sharing the same SDR physical resources across multiple slices. Through API Remoting, PV-RAN enables running paravirtualized instances of OpenAirInterface (OAI) in different slices without modifying the OAI source code. PV-RAN effectively leverages the inter-domain communication mechanisms of Xen to transport time-sensitive I/Q samples via shared memory, making the virtualization overhead in communication almost negligible. We conduct detailed performance benchmarking of PV-RAN and demonstrate its low overhead and high efficiency. We also integrate PV-RAN with the CyNet wireless living lab for smart agriculture and transportation.
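To make the shared-memory transport idea concrete, here is a toy single-producer I/Q ring within one host using POSIX shared memory. PV-RAN's actual Xen grant-table layout, slot sizes, and inter-domain synchronization (which this sketch omits) differ; all names and sizes here are assumptions.

```python
import numpy as np
from multiprocessing import shared_memory

class IQRing:
    """Toy single-producer ring for I/Q samples (illustrative only).

    Sketches the idea of moving I/Q samples through shared memory without
    copying them over a socket; a real design needs a consumer index and
    event signaling (PV-RAN uses Xen's inter-domain mechanisms for this).
    """
    SLOTS, SLOT_LEN = 8, 4096  # hypothetical: 8 buffers of 4096 complex64 samples

    def __init__(self, name="iq_ring", create=True):
        nbytes = self.SLOTS * self.SLOT_LEN * 8  # complex64 = 8 bytes/sample
        self.shm = shared_memory.SharedMemory(name=name, create=create, size=nbytes)
        self.buf = np.ndarray((self.SLOTS, self.SLOT_LEN), dtype=np.complex64,
                              buffer=self.shm.buf)
        self.head = 0

    def push(self, samples):
        self.buf[self.head, :len(samples)] = samples  # write directly into shared memory
        self.head = (self.head + 1) % self.SLOTS
```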
Integrating time-frequency resource conversion (TFRC), a new network resource allocation strategy, with call admission control can not only increase cell capacity but also effectively reduce network congestion. However, the optimal setting of TFRC-oriented call admission control suffers from the curse of dimensionality, due to Markov chain-based optimization in a high-dimensional space. To address this scalability issue, in [1] we extend the study of TFRC into the area of scheduling. Specifically, we study TFRC-based downlink scheduling for an LTE-type cellular network to maximize service delivery. The service scheduling of interest is formulated as a joint request, channel, and slot allocation problem, which is NP-hard. An offline deflation and sequential fixing based algorithm (named DSFRB) with only polynomial-time complexity is proposed to solve the problem. For practical online implementation, two TFRC-enabled low-complexity algorithms, the modified Smith ratio algorithm (named MSR) and the modified exponential capacity algorithm (named MEC), are proposed as well. In this report, we present detailed numerical results for the proposed offline and online algorithms, which not only show their effectiveness but also corroborate the advantages of the proposed TFRC-based scheduling techniques in terms of quality-of-service (QoS) provisioning for each user and revenue improvement for a service operator.
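As a flavor of the online heuristics, the sketch below greedily serves requests in decreasing value-to-demand ratio, the classic Smith-ratio idea that MSR modifies. The field names and the single-pool capacity model are assumptions for illustration; the actual MSR/MEC algorithms add TFRC-specific terms and per-channel structure.

```python
def smith_ratio_schedule(requests, slots):
    """Greedy online scheduler in the spirit of a Smith-ratio rule (illustrative).

    requests -- list of dicts with hypothetical fields:
                'revenue' (value of serving the request) and
                'demand'  (channel-slot units it needs)
    slots    -- total channel-slot units available in the scheduling window
    """
    order = sorted(requests, key=lambda r: r['revenue'] / r['demand'], reverse=True)
    served, remaining = [], slots
    for req in order:
        if req['demand'] <= remaining:   # serve while capacity lasts
            served.append(req)
            remaining -= req['demand']
    return served
```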
Time-sensitive wireless networks are an important enabling building block for many emerging industrial Internet of Things (IIoT) applications. Quick prototyping and evaluation of time-sensitive wireless technologies are desirable for R&D efforts. Software-defined radio (SDR), by allowing wireless signal processing on a personal computer (PC), has been widely used for such quick prototyping efforts. Unfortunately, because of the uncontrollable delay between the PC and the radio board, SDR is generally deemed unsuitable for time-sensitive wireless applications that demand communication with low and deterministic latency. For a rigorous evaluation of its suitability for industrial IoT applications, this paper conducts a quantitative investigation of the synchronization accuracy and end-to-end latency achievable by an SDR wireless system. To this end, we designed and implemented a time-slotted wireless system on the Universal Software Radio Peripheral (USRP) SDR platform. We developed a time synchronization mechanism to maintain synchrony among nodes in the system. To reduce the delays and delay jitters between the USRP board and its PC, we devised a just-in-time algorithm to ensure that packets sent by the PC to the USRP reach the USRP just before the time slots in which they are to be transmitted. Our experiments demonstrate that 90% (100%) of the time slots of different nodes can be synchronized and aligned to within ±0.5 samples or ±0.05 µs (±1.5 samples or ±0.15 µs), and that the end-to-end packet delivery latency can be as low as 3.75 ms. This means that SDR-based solutions can be applied in a range of IIoT applications that require tight synchrony and moderately low latency, e.g., sensor data collection, automated guided vehicle (AGV) control, and human-machine interaction (HMI).
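The just-in-time idea can be sketched as holding each packet on the PC until its slot start minus the measured PC-to-USRP pipeline delay. The function below is a hypothetical sketch; send_fn stands in for the actual driver call, and both timestamps are assumed to be on the same clock.

```python
import time

def just_in_time_send(packet, slot_start, pipeline_delay, send_fn):
    """Release a packet to the radio front end just in time (illustrative).

    slot_start     -- time (s, time.monotonic() clock) when the packet's slot begins
    pipeline_delay -- measured worst-case PC-to-USRP transfer delay (s)
    send_fn        -- hypothetical callable that pushes the packet to the device

    Holding the packet until just before its slot keeps it from queueing on
    the device, which bounds the delay jitter between the PC and the USRP.
    """
    release_time = slot_start - pipeline_delay
    delay = release_time - time.monotonic()
    if delay > 0:
        time.sleep(delay)  # a spin-wait near the deadline would cut jitter further
    send_fn(packet)
```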