This paper first presents a parallel solution for the Flowshop Scheduling Problem in a parallel environment and then proposes a novel load balancing strategy. The proposed Proportional Fairness Strategy (PFS) takes the computational performance of the sets of computing processes into account and assigns additional load to computing nodes in proportion to their evaluated performance. To utilize parallel resources efficiently, we also discuss the data structures used in communication among computational nodes and design an optimized data transfer strategy. This data transfer strategy, combined with the proposed load balancing strategy, has been implemented and tested on a supercomputer consisting of 86 CPUs using MPI as the middleware. The results show that the proposed PFS achieves better performance in terms of computing time than the existing Adaptive Contracting Within Neighborhood Strategy. We also show that the combination of the Proportional Fairness Strategy and the proposed data transfer strategy yields an additional 13-15% improvement in parallel efficiency.
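To illustrate the proportional-assignment idea behind PFS, a minimal sketch follows; it is not the paper's actual implementation, and the function and variable names are hypothetical. Extra work units are split among nodes according to their measured performance scores, with leftover units given to the nodes with the largest fractional shares.

```python
# Minimal sketch of proportional load assignment (illustrative names, assumed setting):
# extra work units are assigned to nodes in proportion to their evaluated performance.

def assign_load_proportionally(extra_units, perf_scores):
    """Split `extra_units` work items among nodes by their relative performance."""
    total = sum(perf_scores)
    # Ideal (fractional) share for each node, proportional to its performance score.
    shares = [extra_units * s / total for s in perf_scores]
    alloc = [int(x) for x in shares]
    # Hand the remaining units to the nodes with the largest fractional parts.
    remainder = extra_units - sum(alloc)
    order = sorted(range(len(shares)), key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in order[:remainder]:
        alloc[i] += 1
    return alloc

# Example: 100 extra tasks, three nodes with benchmarked throughput 1.0, 2.0 and 5.0.
print(assign_load_proportionally(100, [1.0, 2.0, 5.0]))  # -> [13, 25, 62]
```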
Recently, fog computing has been introduced as a modern distributed paradigm and a complement to cloud computing for providing services. Fog systems extend storage and computing to the edge of the network, which markedly alleviates the service-computing problem of delay-sensitive applications while also enabling location awareness and mobility support. Load balancing is an important aspect of fog networks because it avoids situations in which some fog nodes are under-loaded while others are overloaded. Quality of Service (QoS) parameters such as resource utilization, throughput, cost, response time, performance, and energy consumption can be improved with load balancing. In recent years, some research on load-balancing techniques in fog networks has been carried out, but there is no systematic review consolidating these studies. This article systematically reviews load-balancing mechanisms in fog computing under four classifications: approximate, exact, fundamental, and hybrid methods (published between 2013 and August 2020). It also investigates load-balancing metrics, along with the advantages and disadvantages of the selected load-balancing mechanisms in fog networks. The evaluation techniques and tools applied in each reviewed study are explored as well. Additionally, the essential open challenges and future trends of these mechanisms are discussed.
To address local overload in multi-controller deployments in software-defined networks, a load-balancing mechanism for SDN controllers based on reinforcement learning is designed. The initial pairing of migrate-out and migrate-in domains is obtained by calculating the load-ratio deviation between controllers; a preliminary migration triplet, containing the migration domains mentioned above and a group of switches subordinate to the migrate-out domain, drives the migration efficiency to a local optimum. Under the constraints of the best overall migration efficiency and the absence of migration conflicts, multiple sets of triplets are then selected with reinforcement learning as the final migration of the round, attaining globally optimal controller load balancing at minimum cost. The experimental results illustrate that the mechanism makes full use of the controllers' resources, quickly balances the load between controllers, reduces unnecessary migration overhead, and achieves a faster response to packet-in requests.
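A minimal sketch of how an initial migrate-out/migrate-in pairing could be derived from load-ratio deviation is shown below; it uses assumed variable names rather than the paper's notation and omits the triplet construction and reinforcement-learning selection. The most overloaded controller (largest positive deviation from the mean load ratio) is paired with the most underloaded one.

```python
# Illustrative sketch (assumed names): pair migrate-out and migrate-in controller
# domains by their deviation from the mean load ratio.

def pair_migration_domains(loads, capacities):
    """Return (migrate_out, migrate_in) controller indices by load-ratio deviation."""
    ratios = [l / c for l, c in zip(loads, capacities)]
    mean_ratio = sum(ratios) / len(ratios)
    deviations = [r - mean_ratio for r in ratios]
    migrate_out = max(range(len(deviations)), key=lambda i: deviations[i])  # most overloaded
    migrate_in = min(range(len(deviations)), key=lambda i: deviations[i])   # most underloaded
    return migrate_out, migrate_in

# Example: controller 1 is overloaded relative to its capacity, controller 2 is underloaded.
print(pair_migration_domains(loads=[40, 90, 20], capacities=[100, 100, 100]))  # -> (1, 2)
```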
Equation systems resulting from a p-version FEM discretisation typically require special treatment, as iterative solvers are not very efficient in this setting. Applying hierarchical concepts based on a nested dissection approach allows both the design of sophisticated solvers and advanced parallelisation strategies. To fully exploit the underlying computing power of parallel systems, dynamic load balancing strategies become an essential component.
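To make the nested-dissection idea concrete, the following is a minimal, simplified sketch (assumed setting, not the authors' solver): a 1D chain of unknowns is recursively split by a separator, and the resulting separator tree is the hierarchical structure that solvers and parallelisation strategies (e.g. mapping subtrees to process groups) can work on.

```python
# Minimal nested-dissection sketch for a 1D chain of unknowns (illustrative only).

def nested_dissection(nodes):
    """Recursively split `nodes`; return a tree (separator, left_subtree, right_subtree)."""
    if len(nodes) <= 1:
        return (nodes, None, None)          # leaf: nothing left to dissect
    mid = len(nodes) // 2
    separator = [nodes[mid]]                # the separator decouples the two halves
    left = nested_dissection(nodes[:mid])
    right = nested_dissection(nodes[mid + 1:])
    return (separator, left, right)

# Example: 7 unknowns along a line; the root separator is eliminated last.
print(nested_dissection(list(range(7))))
```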
Load balancing plays a vital role in modern data centers in distributing traffic among instances of network functions or services. State-of-the-art load balancers such as Silkroad dispatch traffic obliviously, without considering the real-time utilization of service instances, and can therefore lead to uneven load distribution and suboptimal performance. In this paper, we design and implement Spotlight, a scalable and distributed load-balancing architecture that maintains connection-to-instance mapping consistency at the edge of data center networks. Spotlight uses a new stateful flow dispatcher that periodically polls instances' load and dispatches incoming connections to instances in proportion to their available capacity. Our design uses a distributed control plane and in-band flow dispatching and thus scales horizontally in data center networks. Through extensive flow-level simulations and packet-level experiments on a testbed, we demonstrate that, compared to existing methods, Spotlight distributes traffic more efficiently and achieves near-optimal performance in terms of overall service utilization. Moreover, Spotlight is not sensitive to the utilization polling interval and can therefore be implemented with a low polling frequency to reduce the amount of control traffic. Indeed, Spotlight achieves the above performance improvements with an O(100 ms) polling interval.
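The core dispatching idea, assigning each new connection to an instance with probability proportional to its spare capacity as reported by periodic polling, can be sketched as follows; the names are illustrative and this is not Spotlight's actual dispatcher.

```python
# Illustrative sketch of capacity-proportional connection dispatching (assumed names).

import random

def dispatch_connection(available_capacity):
    """Pick an instance index with probability proportional to its spare capacity."""
    weights = [max(c, 0.0) for c in available_capacity]   # ignore overloaded instances
    if sum(weights) == 0:
        return random.randrange(len(available_capacity))  # all saturated: fall back to uniform
    # Weighted draw over instances, proportional to reported spare capacity.
    return random.choices(range(len(available_capacity)), weights=weights, k=1)[0]

# Example: instance 2 reports the most spare capacity, so it receives most new connections.
counts = [0, 0, 0]
for _ in range(10000):
    counts[dispatch_connection([0.1, 0.3, 0.6])] += 1
print(counts)  # roughly proportional to [0.1, 0.3, 0.6]
```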
The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources, and to be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varying specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed with load-balancing strategies, which have been proven to be an NP-hard problem. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to PMs in infrastructure clouds, with a particular focus on load balancing. A detailed classification targeting load-balancing algorithms for VM placement in cloud data centers is presented, and the surveyed algorithms are categorized accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight into potential future enhancements.