
A Timer-Augmented Cost Function for Load Balanced DSMC

Submitted by Paolo Bientinesi
Publication date: 2019
Research field: Informatics Engineering
Paper language: English
Author: William McDoniel





Due to a hard dependency between time steps, large-scale simulations of gas using the Direct Simulation Monte Carlo (DSMC) method proceed at the pace of the slowest processor. Scalability is therefore achievable only by ensuring that the work done each time step is as evenly apportioned among the processors as possible. Furthermore, as the simulated system evolves, the load shifts, and thus this load-balancing typically needs to be performed multiple times over the course of a simulation. Common methods generally use either crude performance models or processor-level timers. We combine both to create a timer-augmented cost function which both converges quickly and yields well-balanced processor decompositions. When compared to a particle-based performance model alone, our method achieves 2x speedup at steady-state on up to 1024 processors for a test case consisting of a Mach 9 argon jet impacting a solid wall.
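
As a rough illustration of the idea (a hypothetical sketch, not the authors' implementation), a timer-augmented cost can scale a simple particle-count model by each processor's measured time per unit of modeled work, so the repartitioner sees both the model's spatial detail and the timers' accuracy. All names below are assumptions.

def timer_augmented_costs(cells_per_proc, particle_counts, elapsed):
    """Scale a particle-based cost model by measured per-processor timings.

    cells_per_proc:  processor id -> list of grid cells it owns
    particle_counts: grid cell -> number of particles (the crude model)
    elapsed:         processor id -> wall-clock time of the last step
    """
    costs = {}
    for proc, cells in cells_per_proc.items():
        modeled = sum(particle_counts[c] for c in cells)
        # Measured time per modeled unit of work on this processor;
        # guard against processors that currently own no particles.
        rate = elapsed[proc] / modeled if modeled > 0 else 1.0
        for c in cells:
            costs[c] = particle_counts[c] * rate
    return costs  # per-cell costs to hand to the domain repartitioner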




Read also

Fat-tree networks have been widely adopted in High Performance Computing (HPC) clusters and Data Center Networks (DCN). These parallel systems usually have a large number of servers and hosts, which generate large volumes of highly volatile traffic, so distributed load-balancing routing design becomes critical to achieving high bandwidth utilization and low-latency packet delivery. Existing distributed designs rely on remote congestion feedback to address congestion, which adds overhead to collect and react to network-wide congestion information. In contrast, we propose a simple but effective load-balancing scheme, called Dynamic Randomized load-Balancing (DRB), that achieves network-wide low levels of path collisions through local-link adjustment, free of communication and cooperation between switches. First, we use the D-mod-k path selection scheme to allocate default paths to all source-destination (S-D) pairs in a fat-tree network, guaranteeing low levels of path collision over downlinks for any set of active S-D pairs. Then, we propose a Threshold-based Two-Choice (TTC) randomized technique to balance uplink traffic through local uplink adjustment at each switch. We show theoretically that the proposed TTC for uplink load balancing in a fat-tree network has performance similar to the two-choice technique in the area of randomized load balancing. Simulation results show that DRB with the TTC technique achieves a significant improvement over many randomized routing schemes for fat-tree networks.
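
To make the TTC step concrete, here is a minimal hedged sketch (my illustration, not the paper's code): each switch keeps its D-mod-k default uplink and moves a flow to a randomly sampled alternative only when the locally observed load difference exceeds a threshold.

import random

def ttc_pick_uplink(uplinks, loads, default, threshold):
    """Threshold-based Two-Choice (TTC) sketch: sample one alternative
    uplink and switch away from the default only if it is lighter by
    more than 'threshold' (loads are local queue occupancies)."""
    alternatives = [u for u in uplinks if u != default]
    if not alternatives:
        return default
    alt = random.choice(alternatives)
    if loads[default] - loads[alt] > threshold:
        return alt      # clearly lighter: move to the sampled uplink
    return default      # otherwise keep the D-mod-k default path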
We present a novel framework, called balanced overlay networks (BON), that provides scalable, decentralized load balancing for distributed computing using large-scale pools of heterogeneous computers. Fundamentally, BON encodes the information about each node's available computational resources in the structure of the links connecting the nodes in the network. This distributed encoding is self-organized, with each node managing its in-degree and local connectivity via random-walk sampling. Assignment of incoming jobs to the nodes with the most free resources is also accomplished by sampling nodes via short random walks. Extensive simulations show that the resulting highly dynamic and self-organized graph structure can efficiently balance computational load throughout large-scale networks. These simulations cover a wide spectrum of cases, including significant heterogeneity in available computing resources and high burstiness in incoming load. We provide analytical results that prove BON's scalability for truly large-scale networks: in particular, we show that under certain ideal conditions the network structure converges to Erdős-Rényi (ER) random graphs; our simulation results, however, show that the algorithm does much better, with the structures appearing to approach the ideal case of d-regular random graphs. We also make a connection between highly loaded BONs and the well-known balls-into-bins randomized load-balancing framework.
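
The random-walk assignment can be sketched as follows (an illustration under my own naming, not BON's actual code): a job walks a few hops over the overlay and is assigned to the node where the walk ends; because nodes with more free resources maintain higher in-degree, short walks are more likely to end on them.

import random

def assign_job_by_random_walk(graph, start, walk_length=10):
    """BON-style placement sketch: take a short random walk over the
    overlay and assign the job to the final node. 'graph' maps each
    node to its out-neighbors; the load bias comes entirely from how
    nodes manage their own in-degree."""
    node = start
    for _ in range(walk_length):
        neighbors = graph.get(node, [])
        if not neighbors:
            break
        node = random.choice(neighbors)  # each step is a uniform choice
    return node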
We introduce load-balanced routing algorithms for interconnection networks produced by nesting, which account for the data-forwarding pressure at each node. Benchmarks on a small cluster with various network topologies, and simulations for several larger clusters whose prototypes would be too costly to construct, demonstrate substantial communication-performance gains of our routing on these networks over other mainstream routing algorithms.
Accurate load prediction is an effective way to reduce power system operation costs. Traditionally, the mean square error (MSE) is a commonly used loss function to guide the training of an accurate load forecasting model. However, the MSE loss function cannot precisely reflect the real costs associated with forecasting errors, because the cost caused by forecasting errors in a real power system is probably neither symmetric nor quadratic. To tackle this issue, this paper proposes a generalized cost-oriented load forecasting framework. Specifically, we study how to obtain a differentiable loss function that reflects real cost and how to integrate that loss function with regression models. The economy and effectiveness of the proposed load forecasting method are verified by case studies of an optimal dispatch problem built on the IEEE 30-bus system and the open load dataset from the Global Energy Forecasting Competition 2012 (GEFCom2012).
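
As a generic example of such a cost-reflecting loss (illustrative weights, not the paper's actual function), a piecewise-linear loss can charge under-forecasts more than over-forecasts, since a shortfall may force expensive redispatch while a surplus mostly wastes reserve:

def cost_oriented_loss(y_true, y_pred, under_cost=3.0, over_cost=1.0):
    """Asymmetric loss sketch: under-forecasting (y_pred < y_true) costs
    'under_cost' per unit of error, over-forecasting costs 'over_cost'.
    Differentiable everywhere except at zero error, so it can still
    guide gradient-based training of a regression model."""
    err = y_pred - y_true
    return under_cost * max(-err, 0.0) + over_cost * max(err, 0.0)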
The Load-Balanced Router architecture has received a lot of attention because it does not require centralized scheduling at the internal switch fabrics. In this paper we reexamine the architecture, motivated by its potential to turn off multiple components and thereby conserve energy in the presence of low traffic. We perform a detailed analysis of the queue and delay performance of a Load-Balanced Router under a simple random routing algorithm. We calculate probabilistic bounds for queue size and delay, showing that the probabilities drop exponentially with increasing queue size or delay, and we demonstrate a tradeoff between energy consumption and queue and delay performance.
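
The random routing analyzed here is simple enough to state in a few lines (a sketch under my own naming, not the paper's code): every arriving packet is forwarded through a uniformly chosen intermediate port, so each intermediate queue receives roughly 1/N of the traffic in expectation.

import random

def spread_packets(packets, num_intermediate):
    """Uniform random routing sketch for a Load-Balanced Router: each
    packet passes through an intermediate port chosen uniformly at
    random, balancing load across the switch fabrics on average."""
    queues = [[] for _ in range(num_intermediate)]
    for pkt in packets:
        queues[random.randrange(num_intermediate)].append(pkt)
    return queues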