
Randomized Load-balanced Routing for Fat-tree Networks

Published by: Suzhen Wang
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





Fat-tree networks have been widely adopted in High Performance Computing (HPC) clusters and in Data Center Networks (DCN). These parallel systems usually have a large number of servers and hosts, which generate large volumes of highly volatile traffic. Thus, distributed load-balancing routing design becomes critical to achieving high bandwidth utilization and low-latency packet delivery. Existing distributed designs rely on remote congestion feedback to address congestion, which adds overhead to collect and react to network-wide congestion information. In contrast, we propose a simple but effective load-balancing scheme, called Dynamic Randomized load-Balancing (DRB), to achieve network-wide low levels of path collisions through local-link adjustment that requires no communication or cooperation between switches. First, we use the D-mod-k path selection scheme to allocate default paths to all source-destination (S-D) pairs in a fat-tree network, guaranteeing low levels of path collision over downlinks for any set of active S-D pairs. Then, we propose the Threshold-based Two-Choice (TTC) randomized technique to balance uplink traffic through local uplink adjustment at each switch. We theoretically show that the proposed TTC technique for uplink load balancing in a fat-tree network has performance similar to the two-choice technique from the area of randomized load balancing. Simulation results show that DRB with the TTC technique achieves a significant improvement over many randomized routing schemes for fat-tree networks.
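The two stages of DRB compose naturally: D-mod-k fixes a deterministic default uplink per destination, and TTC overrides that choice locally only when the default queue looks congested. Below is a minimal Python sketch of that composition; the exact D-mod-k indexing, the queue metric, and the threshold value are illustrative assumptions rather than the paper's precise definitions.

```python
import random

def d_mod_k_uplink(dest_id: int, num_uplinks: int, level: int) -> int:
    """Default uplink under a D-mod-k style rule: strip off the address
    digits consumed by lower levels, then take the destination address
    modulo this switch's uplink count (hypothetical indexing; the paper
    defines the exact scheme)."""
    return (dest_id // (num_uplinks ** level)) % num_uplinks

def ttc_select(queue_lens: list[int], default: int, threshold: int) -> int:
    """Threshold-based Two-Choice (TTC) sketch: keep the default uplink
    while its queue is under the threshold; otherwise sample one random
    alternative and forward on the shorter of the two queues."""
    if queue_lens[default] < threshold:
        return default
    alt = random.randrange(len(queue_lens))
    return default if queue_lens[default] <= queue_lens[alt] else alt
```

Because a switch consults only its own queue lengths, the adjustment needs no congestion feedback from other switches, which is the property the abstract emphasizes.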




Read also

We introduce load-balanced routing algorithms for interconnection networks resulting from nesting, taking into account the data-forwarding pressure at each node. Benchmarks on a small cluster with various network topologies, and simulations for several larger clusters whose prototypes would be too costly to construct, demonstrate substantial gains in communication performance with our routing on these networks over other mainstream routing algorithms.
We present a novel framework, called balanced overlay networks (BON), that provides scalable, decentralized load balancing for distributed computing using large-scale pools of heterogeneous computers. Fundamentally, BON encodes the information about each node's available computational resources in the structure of the links connecting the nodes in the network. This distributed encoding is self-organized, with each node managing its in-degree and local connectivity via random-walk sampling. Assignment of incoming jobs to nodes with the most free resources is also accomplished by sampling the nodes via short random walks. Extensive simulations show that the resulting highly dynamic and self-organized graph structure can efficiently balance computational load throughout large-scale networks. These simulations cover a wide spectrum of cases, including significant heterogeneity in available computing resources and high burstiness in incoming load. We provide analytical results that prove BON's scalability for truly large-scale networks: in particular, we show that under certain ideal conditions the network structure converges to Erdős-Rényi (ER) random graphs; our simulation results, however, show that the algorithm does much better, and the structures seem to approach the ideal case of $d$-regular random graphs. We also make a connection between highly loaded BONs and the well-known balls-and-bins randomized load-balancing framework.
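As a rough illustration of the BON job-assignment step, the sketch below dispatches a job by taking a short random walk over the overlay's directed links; since each node keeps its in-degree proportional to its free capacity, walks tend to land on lightly loaded nodes. The graph representation and walk length here are assumptions for illustration, not BON's actual protocol details.

```python
import random

def assign_job(out_links: dict[int, list[int]], start: int, walk_len: int = 10) -> int:
    """Walk walk_len steps over out-links from start and return the
    final node; under the BON encoding (in-degree tracks free
    resources), the walk's endpoint is biased toward free nodes."""
    node = start
    for _ in range(walk_len):
        node = random.choice(out_links[node])
    return node
```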
William McDoniel, 2019
Due to a hard dependency between time steps, large-scale simulations of gas using the Direct Simulation Monte Carlo (DSMC) method proceed at the pace of the slowest processor. Scalability is therefore achievable only by ensuring that the work done each time step is as evenly apportioned among the processors as possible. Furthermore, as the simulated system evolves, the load shifts, and thus this load-balancing typically needs to be performed multiple times over the course of a simulation. Common methods generally use either crude performance models or processor-level timers. We combine both to create a timer-augmented cost function which both converges quickly and yields well-balanced processor decompositions. When compared to a particle-based performance model alone, our method achieves 2x speedup at steady-state on up to 1024 processors for a test case consisting of a Mach 9 argon jet impacting a solid wall.
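A hedged sketch of what a timer-augmented cost function might look like: blend the crude particle-count model with measured per-processor times into a single normalized cost per processor. The blending weight and normalization below are illustrative assumptions, not the paper's exact formula.

```python
def timer_augmented_cost(particles, times, alpha=0.5):
    """Per-processor cost mixing a model term (particle share) with a
    measured term (time share); alpha trades off how much to trust the
    model versus the timers (both values are hypothetical)."""
    p_total, t_total = sum(particles), sum(times)
    return [alpha * p / p_total + (1 - alpha) * t / t_total
            for p, t in zip(particles, times)]
```

A rebalancer would then shift work from processors with above-average cost toward those below it.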
The server-centric data centre network architecture can accommodate a wide variety of network topologies. Newly proposed topologies in this arena often require several rounds of analysis and experimentation in order that they might achieve their full potential as data centre networks. We propose a family of novel routing algorithms on two well-known data centre networks of this type, (Generalized) DCell and FiConn, using techniques that can be applied more generally to the class of networks we call completely connected recursively-defined networks. In doing so, we develop a classification of all possible routes from server-node to server-node on these networks, called general routes of order $t$, and find that for certain topologies of interest, our routing algorithms efficiently produce paths that are up to 16% shorter than the best previously known algorithms, and are comparable to shortest paths. In addition to finding shorter paths, we show evidence that our algorithms also have good load-balancing properties.
Randomized load-balancing algorithms play an important role in improving performance in large-scale networks at relatively low computational cost. A common model of such a system is a network of $N$ parallel queues in which incoming jobs with independent and identically distributed service times are routed on arrival using the join-the-shortest-of-$d$-queues routing algorithm. Under fairly general conditions, it was shown by Aghajani and Ramanan that as $N \rightarrow \infty$, the state dynamics converges to the unique solution of a countable system of coupled deterministic measure-valued equations called the hydrodynamic equations. In this article, a characterization of invariant states of these hydrodynamic equations is obtained and, when $d=2$, used to construct a numerical algorithm to compute the queue length distribution and mean virtual waiting time in the invariant state. Additionally, it is also shown that under a suitable tail condition on the service distribution, the queue length distribution of the invariant state exhibits a doubly exponential tail decay, thus demonstrating a vast improvement in performance over the case $d=1$, which corresponds to random routing, when the tail decay could even be polynomial. Furthermore, numerical evidence is provided to support the conjecture that the invariant state is the limit of the steady-state distributions of the $N$-server models. The proof methodology, which entails analysis of a coupled system of measure-valued equations, can potentially be applied to other many-server systems with general service distributions, where measure-valued representations are useful.
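The routing rule analyzed above is the classic power-of-d-choices policy, which is simple to state in code; this sketch assumes queue lengths are directly observable:

```python
import random

def jsq_d(queue_lens: list[int], d: int = 2) -> int:
    """Join-the-shortest-of-d-queues: sample d queues uniformly at
    random (without replacement) and route the arrival to the shortest
    one sampled. d = 1 reduces to purely random routing; d = 2 already
    yields the doubly exponential tail decay discussed above."""
    sampled = random.sample(range(len(queue_lens)), d)
    return min(sampled, key=lambda i: queue_lens[i])
```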