Load-balanced Routing for Nested Interconnection Networks

Published by: Zhipeng Xu
Publication date: 2019
Research field: Informatics Engineering
Language: English

We introduce load-balanced routing algorithms for interconnection networks obtained by nesting, taking into account the data-forwarding pressure at each node. Benchmarks on a small cluster with various network topologies, together with simulations of several larger clusters whose prototypes would be too costly to construct, demonstrate substantial communication-performance gains of our routing on these networks over other mainstream routing algorithms.
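As a minimal sketch of the idea of per-node forwarding pressure (the actual metric and algorithm are defined in the paper; here the pressure is simply modelled as a counter of flows already routed through a node), one could pick, among the minimum-hop paths, the one whose busiest intermediate node is least loaded:

import networkx as nx

def load_balanced_route(graph, load, src, dst):
    # Enumerate all minimum-hop paths, then prefer the one whose most
    # heavily loaded intermediate node carries the least forwarding pressure.
    candidates = nx.all_shortest_paths(graph, src, dst)
    best = min(candidates, key=lambda p: max((load[v] for v in p[1:-1]), default=0))
    for v in best[1:-1]:                 # increase the pressure of forwarding nodes
        load[v] += 1
    return best

# Toy usage on a 6-node ring: the second flow avoids the side used by the first.
g = nx.cycle_graph(6)
load = {v: 0 for v in g.nodes}
print(load_balanced_route(g, load, 0, 3))
print(load_balanced_route(g, load, 0, 3))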


Read also

Fat-tree networks have been widely adopted in High Performance Computing (HPC) clusters and in Data Center Networks (DCN). These parallel systems usually have a large number of servers and hosts, which generate large volumes of highly volatile traffic, so distributed load-balancing routing design becomes critical to achieve high bandwidth utilization and low-latency packet delivery. Existing distributed designs rely on remote congestion feedback to address congestion, which adds overhead to collect and react to network-wide congestion information. In contrast, we propose a simple but effective load-balancing scheme, called Dynamic Randomized load-Balancing (DRB), that achieves network-wide low levels of path collision through local-link adjustment, free of communication and cooperation between switches. First, we use the D-mod-k path selection scheme to allocate default paths to all source-destination (S-D) pairs in a fat-tree network, guaranteeing low levels of path collision over downlinks for any set of active S-D pairs. Then, we propose the Threshold-based Two-Choice (TTC) randomized technique to balance uplink traffic through local uplink adjustment at each switch. We theoretically show that the proposed TTC for uplink load balancing in a fat-tree network has performance similar to the two-choice technique in the area of randomized load balancing. Simulation results show that DRB with the TTC technique achieves a significant improvement over many randomized routing schemes for fat-tree networks.
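The uplink side of the scheme can be pictured with a short sketch (variable names, the packet-count load measure, and the default threshold are my assumptions; the precise TTC rule is given in the cited work): keep the deterministic D-mod-k uplink unless one randomly sampled alternative is lighter by more than a threshold.

import random

def dmodk_default_uplink(dst_id, num_uplinks):
    # D-mod-k style default: the destination id modulo the number of uplinks,
    # which spreads source-destination pairs evenly over the uplinks.
    return dst_id % num_uplinks

def ttc_select_uplink(uplink_load, default_port, threshold=2):
    # Threshold-based Two-Choice: sample one alternative uplink at random and
    # switch to it only if it is lighter than the default by more than `threshold`.
    alt = random.randrange(len(uplink_load))
    if uplink_load[default_port] - uplink_load[alt] > threshold:
        return alt
    return default_port

# Example: four uplinks, destination 10, and the default port 2 is congested.
loads = [1, 0, 9, 4]
port = dmodk_default_uplink(10, 4)          # -> 2
print(ttc_select_uplink(loads, port))       # likely moves off the congested port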
Moufida Maimour, 2008
Wireless sensor networks hold great potential for the deployment of several applications of paramount importance in our daily life. Video sensors can improve a number of these applications, where new approaches adapted to both wireless sensor networks and the specific characteristics of video transport are required. The aim of this work is to provide the necessary bandwidth and to alleviate the congestion problem for video streaming. In this paper, we investigate various load-repartition strategies for a congestion-control mechanism built on top of a multipath routing feature. Simulations are performed to gain insight into the performance of our proposals.
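One concrete load-repartition rule (purely illustrative; the abstract does not spell out the strategies it evaluates) would be to split the video stream over the available paths in inverse proportion to each path's reported congestion:

def repartition(path_congestion):
    # Weight each path by the inverse of (1 + congestion) and normalise,
    # so less congested paths receive a larger share of the video traffic.
    weights = [1.0 / (1.0 + c) for c in path_congestion]
    total = sum(weights)
    return [w / total for w in weights]

print(repartition([0.0, 0.5, 2.0]))   # the least congested path gets the biggest share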
The Load-Balanced Router architecture has received a lot of attention because it does not require centralized scheduling at the internal switch fabrics. In this paper we reexamine the architecture, motivated by its potential to turn off multiple components and thereby conserve energy in the presence of low traffic. We perform a detailed analysis of the queue and delay performance of a Load-Balanced Router under a simple random routing algorithm. We calculate probabilistic bounds for queue size and delay, and show that the probabilities drop exponentially with increasing queue size or delay. We also demonstrate a tradeoff between energy consumption and the queue and delay performance.
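A toy single-stage simulation (my own simplification, not the paper's model or its analytical bounds) makes the exponentially decaying queue-size tail visible: with uniform random routing and per-output service of one packet per slot, the empirical distribution of queue lengths falls off roughly geometrically.

import random
from collections import Counter

def simulate_random_routing(load=0.8, ports=8, slots=100_000, seed=1):
    # Each slot, every input sends a packet with probability `load` to a
    # uniformly random output queue; every output serves one packet per slot.
    rng = random.Random(seed)
    queues = [0] * ports
    histogram = Counter()
    for _ in range(slots):
        for _ in range(ports):                       # arrivals
            if rng.random() < load:
                queues[rng.randrange(ports)] += 1
        for i in range(ports):                       # departures and sampling
            if queues[i] > 0:
                queues[i] -= 1
            histogram[queues[i]] += 1
    return histogram

hist = simulate_random_routing()
for q in sorted(hist):                               # counts drop roughly geometrically
    print(q, hist[q])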
This paper reports experimental results on self-organizing wireless networks carried by small flying robots. Flying ad hoc networks (FANETs) composed of small unmanned aerial vehicles (UAVs) are flexible, inexpensive and fast to deploy. This makes them a very attractive technology for many civilian and military applications. Due to the high mobility of the nodes, maintaining a communication link between the UAVs is a challenging task. The topology of these networks is more dynamic than that of typical mobile ad hoc networks (MANETs) and vehicular ad hoc networks (VANETs). As a consequence, existing routing protocols designed for MANETs partly fail in tracking network topology changes. In this work, we compare two different routing algorithms for ad hoc networks: optimized link-state routing (OLSR) and predictive-OLSR (P-OLSR). The latter is an OLSR extension that we designed for FANETs; it takes advantage of the GPS information available on board. To the best of our knowledge, P-OLSR is currently the only FANET-specific routing technique that has an available Linux implementation. We present results obtained both by Media Access Control (MAC) layer emulations and by real-world experiments. In the experiments, we used a testbed composed of two autonomous fixed-wing UAVs and a node on the ground. Our experiments evaluate the link performance and the communication range, as well as the routing performance. Our emulation and experimental results show that P-OLSR significantly outperforms OLSR in the presence of frequent network topology changes.
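The use of on-board GPS can be illustrated with a rough sketch (the scaling rule and the alpha weight are assumptions for illustration; the actual P-OLSR metric is defined in the cited work): penalise links whose endpoints are moving apart, so routes are switched before the link actually breaks.

import math

def predictive_link_cost(etx, pos_a, vel_a, pos_b, vel_b, alpha=1.0):
    # Radial speed: the rate at which the distance between the two UAVs grows.
    dx = [pb - pa for pa, pb in zip(pos_a, pos_b)]
    dv = [vb - va for va, vb in zip(vel_a, vel_b)]
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-9
    radial_speed = sum(d * v for d, v in zip(dx, dv)) / dist
    # Inflate the ETX of links that are degrading; leave improving links alone.
    return etx * (1.0 + alpha * max(0.0, radial_speed))

# Two UAVs 100 m apart, flying away from each other at 10 m/s along x.
print(predictive_link_cost(1.2, (0, 0, 50), (-5, 0, 0), (100, 0, 50), (5, 0, 0)))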
Routing plays a fundamental role in network applications, but it is especially challenging in Delay Tolerant Networks (DTNs). These are a kind of mobile ad hoc network made of, e.g., (possibly unmanned) vehicles and humans where, despite a lack of continuous connectivity, data must be transmitted while the network conditions change due to node mobility. In these contexts, routing is NP-hard and is usually solved by heuristic store-and-forward, replication-based approaches, where multiple copies of the same message are moved and stored across nodes in the hope that at least one will reach its destination. Still, the existing routing protocols produce relatively low delivery probabilities. Here, we genetically improve two routing protocols widely adopted in DTNs, namely Epidemic and PRoPHET, in an attempt to optimize their delivery probability. First, we dissect them into their fundamental components, i.e., functionalities such as checking whether a node can transfer data, or sending messages to all connections. Then, we apply Genetic Improvement (GI) to manipulate these components as terminal nodes of evolving trees. We apply this methodology, in silico, to six test cases of urban networks made of hundreds of nodes, and find that GI produces consistent gains in delivery probability in four cases. We then verify whether this improvement entails a worsening of other relevant network metrics, such as latency and buffer time. Finally, we compare the logic of the best evolved protocols with that of the baseline protocols, and we discuss the generalizability of the results across test cases.
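A schematic sketch (entirely illustrative; the component names and the tree encoding are assumptions, not taken from the paper) of how protocol functionalities could serve as terminals of an evolving decision tree:

import random

# Invented stand-ins for the protocols' "fundamental components": boolean
# predicates a node evaluates before replicating a message to a neighbour.
TERMINALS = {
    "peer_has_buffer":  lambda ctx: ctx["peer_buffer"] > ctx["msg_size"],
    "peer_lacks_copy":  lambda ctx: not ctx["peer_has_msg"],
    "peer_more_likely": lambda ctx: ctx["peer_delivery_pred"] > ctx["own_delivery_pred"],
}

def random_tree(depth=2):
    # Grow a random AND/OR tree whose leaves are protocol components.
    if depth == 0 or random.random() < 0.3:
        return random.choice(list(TERMINALS))
    return (random.choice(["and", "or"]), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, ctx):
    if isinstance(tree, str):                 # leaf: one protocol component
        return TERMINALS[tree](ctx)
    op, left, right = tree
    a, b = evaluate(left, ctx), evaluate(right, ctx)
    return (a and b) if op == "and" else (a or b)

# In a GI loop, trees would be mutated/crossed over and scored by simulating
# delivery probability; here we only show a single forwarding decision.
ctx = {"peer_buffer": 10, "msg_size": 2, "peer_has_msg": False,
       "own_delivery_pred": 0.3, "peer_delivery_pred": 0.7}
rule = random_tree()
print(rule, "->", evaluate(rule, ctx))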