
Renaissance: Self-Stabilizing Distributed SDN Control Plane

Posted by Iosif Salem
Publication date: 2017
Research field: Informatics Engineering
Paper language: English
Author: Marco Canini





By introducing programmability, automated verification, and innovative debugging tools, Software-Defined Networks (SDNs) are poised to meet the increasingly stringent dependability requirements of today's communication networks. However, the design of fault-tolerant SDNs remains an open challenge. This paper considers the design of dependable SDNs through the lens of self-stabilization, a very strong notion of fault-tolerance. In particular, we develop algorithms for an in-band and distributed control plane for SDNs, called Renaissance, which tolerates a wide range of (concurrent) controller, link, and communication failures. Our self-stabilizing algorithms ensure that after the occurrence of an arbitrary combination of failures, (i) every non-faulty SDN controller can reach any switch (or any other controller) in the network within a bounded communication delay (in the presence of a bounded number of concurrent failures), and (ii) every switch is managed by at least one controller (as long as at least one controller is not faulty). We evaluate Renaissance through a rigorous worst-case analysis as well as a prototype implementation (based on OVS and Floodlight), and we report on our experiments using Mininet.
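To make the two guarantees stated above concrete, the following Python sketch checks them on a toy in-band topology. It is not the Renaissance algorithm itself; the graph representation, the hop bound `MAX_HOPS`, and the helper names (`bounded_bfs`, `invariants_hold`) are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's algorithm): check the two invariants
# from the abstract on a toy topology of controllers and switches.
from collections import deque

MAX_HOPS = 8  # assumed bound on communication delay, measured in hops

def bounded_bfs(adj, source, max_hops=MAX_HOPS):
    """Return the set of nodes reachable from `source` within `max_hops`."""
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return seen

def invariants_hold(adj, controllers, switches):
    """(i) every controller reaches every switch and controller within the
    hop bound; (ii) every switch is reached by at least one controller."""
    reach = {c: bounded_bfs(adj, c) for c in controllers}
    reachable_everywhere = all(
        switches | controllers <= reach[c] for c in controllers)
    every_switch_managed = all(
        any(s in reach[c] for c in controllers) for s in switches)
    return reachable_everywhere and every_switch_managed

# Tiny example: two controllers (c1, c2) and three switches on a line.
adj = {"c1": {"s1"}, "s1": {"c1", "s2"}, "s2": {"s1", "s3"},
       "s3": {"s2", "c2"}, "c2": {"s3"}}
print(invariants_hold(adj, {"c1", "c2"}, {"s1", "s2", "s3"}))  # True
```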




Read also

In dynamic wireless ad-hoc networks (DynWANs), autonomous computing devices set up a network for the communication needs of the moment. These networks require the implementation of a medium access control (MAC) layer. We consider MAC protocols for DynWANs that need to be autonomous and robust as well as have high bandwidth utilization, a high degree of predictability of bandwidth allocation, and low communication delay in the presence of frequent topological changes to the communication network. Recent studies have shown that existing implementations cannot guarantee the necessary satisfaction of these timing requirements. We propose a self-stabilizing MAC algorithm for DynWANs that guarantees a short convergence period, and by that, it can facilitate the satisfaction of severe timing requirements, such as the above. Besides the contribution on the algorithmic front, we expect that our proposal can enable quicker adoption by practitioners and faster deployment of DynWANs that are subject to changes in the network topology.
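The abstract above does not spell out the mechanism, so the sketch below only illustrates a generic building block often used in self-stabilizing TDMA-style MAC protocols: nodes that detect a slot conflict re-pick a free slot until the frame stabilizes from an arbitrary initial state. The frame size, topology, and conflict model are assumptions, and this is not the paper's algorithm.

```python
# Generic self-stabilizing slot-selection loop (illustration only).
import random

FRAME_SLOTS = 8

def one_frame(slots, neighbors):
    """Each node keeps its slot if no neighbor uses it, else re-picks."""
    changed = False
    for node in list(slots):
        taken = {slots[n] for n in neighbors[node]}
        if slots[node] in taken:
            free = [s for s in range(FRAME_SLOTS) if s not in taken]
            slots[node] = random.choice(free) if free else slots[node]
            changed = True
    return changed

# Arbitrary (fully colliding) initial state on a small clique of 4 nodes.
neighbors = {n: {m for m in "abcd" if m != n} for n in "abcd"}
slots = {n: 0 for n in "abcd"}
frames = 0
while one_frame(slots, neighbors):   # converges to a collision-free frame
    frames += 1
print(slots, "after", frames, "frame(s) of repair")
```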
Unexpected increases in demand and most of all flash crowds are considered the bane of every web application as they may cause intolerable delays or even service unavailability. Proper quality of service policies must guarantee rapid reactivity and responsiveness even in such critical situations. Previous solutions fail to meet common performance requirements when the system has to face sudden and unpredictable surges of traffic. Indeed, they often rely on a proper setting of key parameters which requires laborious manual tuning, preventing a fast adaptation of the control policies. We contribute an original Self-* Overload Control (SOC) policy. This allows the system to self-configure a dynamic constraint on the rate of admitted sessions in order to respect service level agreements and maximize the resource utilization at the same time. Our policy does not require any prior information on the incoming traffic or manual configuration of key parameters. We ran extensive simulations under a wide range of operating conditions, showing that SOC rapidly adapts to time-varying traffic and self-optimizes the resource utilization. It admits as many new sessions as possible in observance of the agreements, even under intense workload variations. We compared our algorithm to previously proposed approaches, highlighting a more stable behavior and a better performance.
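To make the idea of a self-configuring admission constraint concrete, here is a minimal sketch in the spirit of the abstract above: the admitted-session rate is adapted from observed response times against an SLA target, with no traffic model or hand-tuned thresholds. The AIMD rule and all names (`AdmissionGate`, `sla_target_s`) are illustrative assumptions, not the paper's SOC policy.

```python
# Minimal adaptive admission-control sketch (not the paper's SOC policy).
class AdmissionGate:
    def __init__(self, sla_target_s=0.5):
        self.sla_target_s = sla_target_s   # response-time bound from the SLA
        self.rate = 10.0                   # admitted new sessions per second

    def update(self, observed_p95_s):
        """Adapt the admission rate once per control interval (AIMD-style)."""
        if observed_p95_s > self.sla_target_s:
            self.rate = max(1.0, self.rate * 0.7)   # back off under overload
        else:
            self.rate += 1.0                        # probe for spare capacity

    def admit(self, arrivals_this_second):
        """Admit at most `rate` of the arriving sessions; defer the rest."""
        admitted = min(arrivals_this_second, int(self.rate))
        return admitted, arrivals_this_second - admitted

# Example: a flash crowd pushes the 95th-percentile latency above the SLA.
gate = AdmissionGate()
for p95 in (0.2, 0.4, 1.3, 1.1, 0.6, 0.4):
    gate.update(p95)
    print(f"p95={p95:.1f}s -> admission rate {gate.rate:.1f} sessions/s")
```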
In Software-Defined Networking (SDN)-enabled cloud data centers, live migration is a key approach used for the reallocation of Virtual Machines (VMs) in cloud services and Virtual Network Functions (VNFs) in Service Function Chaining (SFC). Using live migration methods, cloud providers can address their dynamic resource management and fault tolerance objectives without interrupting the service of users. However, in cloud data centers, performing multiple live migrations in arbitrary order can lead to service degradation. Therefore, efficient migration planning is essential to reduce the impact of live migration overheads. In addition, to prevent Quality of Service (QoS) degradations and Service Level Agreement (SLA) violations, it is necessary to set priorities for different live migration requests with various urgency. In this paper, we propose SLAMIG, a set of algorithms that composes the deadline-aware multiple migration grouping algorithm and on-line migration scheduling to determine the sequence of VM/VNF migrations. The experimental results show that our approach with reasonable algorithm runtime can efficiently reduce the number of deadline misses and achieves good migration performance compared with one-by-one scheduling and two state-of-the-art algorithms in terms of total migration time, average execution time, downtime, and transferred data. We also evaluate and analyze the impact of multiple migration planning and scheduling on QoS and energy consumption.
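The following sketch illustrates the kind of problem SLAMIG targets: ordering live-migration requests so that urgent ones go first while migrations that share no network resources can run concurrently. The earliest-deadline-first grouping heuristic and the data structures below (`Migration`, `links`) are assumptions made for illustration, not the SLAMIG algorithms themselves.

```python
# Deadline-aware migration grouping sketch (illustrative heuristic only).
from dataclasses import dataclass, field

@dataclass
class Migration:
    vm: str
    deadline_s: float          # latest acceptable completion time
    links: frozenset = field(default_factory=frozenset)  # shared resources

def group_by_deadline(requests):
    """Greedy EDF grouping: each group holds link-disjoint migrations."""
    pending = sorted(requests, key=lambda m: m.deadline_s)
    groups = []
    while pending:
        used, group, rest = set(), [], []
        for m in pending:
            if m.links.isdisjoint(used):
                group.append(m)          # can run concurrently in this round
                used |= m.links
            else:
                rest.append(m)           # conflicts: defer to a later round
        groups.append(group)
        pending = rest
    return groups

reqs = [Migration("vm1", 30, frozenset({"linkA"})),
        Migration("vm2", 10, frozenset({"linkA", "linkB"})),
        Migration("vm3", 20, frozenset({"linkC"}))]
for i, g in enumerate(group_by_deadline(reqs)):
    print("round", i, [m.vm for m in g])
```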
In this paper, we provide a comprehensive review and updated solutions related to 5G network slicing using SDN and NFV. Firstly, we present 5G service quality and business requirements followed by a description of 5G network softwarization and slicing paradigms including essential concepts, history and different use cases. Secondly, we provide a tutorial of 5G network slicing technology enablers including SDN, NFV, MEC, cloud/Fog computing, network hypervisors, virtual machines & containers. Thirdly, we comprehensively survey different industrial initiatives and projects that are pushing forward the adoption of SDN and NFV in accelerating 5G network slicing. A comparison of various 5G architectural approaches in terms of practical implementations, technology adoptions and deployment strategies is presented. Moreover, we provide a discussion on various open source orchestrators and proof of concepts representing industrial contribution. The work also investigates the standardization efforts in 5G networks regarding network slicing and softwarization. Additionally, the article presents the management and orchestration of network slices in a single domain followed by a comprehensive survey of management and orchestration approaches in 5G network slicing across multiple domains while supporting multiple tenants. Furthermore, we highlight the future challenges and research directions regarding network softwarization and slicing using SDN and NFV in 5G networks.
We study the problem of distributed task allocation inspired by the behavior of social insects, which perform task allocation in a setting of limited capabilities and noisy environment feedback. We assume that each task has a demand that should be satisfied but not exceeded, i.e., there is an optimal number of ants that should be working on this task at a given time. The goal is to assign a near-optimal number of workers to each task in a distributed manner and without explicit access to the values of the demands nor the number of ants working on the task. We seek to answer the question of how the quality of task allocation depends on the accuracy of assessing whether too many (overload) or not enough (lack) ants are currently working on a given task. Concretely, we address the open question of solving task allocation in the model where each ant receives feedback that depends on the deficit, defined as the (possibly negative) difference between the optimal demand and the current number of workers in the task. The feedback is modeled as a random variable that takes value lack or overload with probability given by a sigmoid of the deficit. Each ant receives the feedback independently, but the higher the overload or lack of workers for a task, the more likely it is that all the ants will receive the same, correct feedback from this task; the closer the deficit is to zero, the less reliable the feedback becomes. We measure the performance of task allocation algorithms using the notion of regret, defined as the absolute value of the deficit summed over all tasks and summed over time. We propose a simple, constant-memory, self-stabilizing, distributed algorithm that quickly converges from any initial distribution to a near-optimal assignment. We also show that our algorithm works not only under stochastic noise but also in an adversarial noise setting.
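The feedback and regret model above is concrete enough to sketch: each task emits a lack/overload signal with probability given by a sigmoid of its deficit, and regret accumulates the absolute deficit over tasks and rounds. The join/leave rule in the simulation below is a deliberately naive placeholder (the paper's constant-memory algorithm is not specified in the abstract), and the steepness `k`, the leave probability, and the per-task (rather than per-ant) sampling are simplifying assumptions.

```python
# Sketch of the sigmoid-feedback model and regret measure from the abstract,
# driven by a naive join/leave rule (not the paper's algorithm).
import math, random

def feedback(deficit, k=1.0):
    """Return 'lack' with probability sigmoid(k * deficit), else 'overload'.
    Large |deficit| gives reliable feedback; near zero it becomes noisy."""
    p_lack = 1.0 / (1.0 + math.exp(-k * deficit))
    return "lack" if random.random() < p_lack else "overload"

def simulate(demands, n_ants, rounds=200, leave_p=0.2):
    workers = {t: 0 for t in demands}        # ants currently on each task
    idle, regret = n_ants, 0
    for _ in range(rounds):
        deficits = {t: demands[t] - workers[t] for t in demands}
        regret += sum(abs(d) for d in deficits.values())  # regret measure
        for t in demands:
            if feedback(deficits[t]) == "overload" and workers[t] > 0:
                leaving = sum(random.random() < leave_p
                              for _ in range(workers[t]))
                workers[t] -= leaving
                idle += leaving
            elif idle > 0:                   # 'lack': one idle ant joins
                workers[t] += 1
                idle -= 1
    return workers, regret

random.seed(0)
final, total_regret = simulate({"forage": 5, "build": 3}, n_ants=12)
print(final, "total regret:", total_regret)
```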