
LocalitySim: Cloud Simulator with Data Locality

Publication date: 2017
Language: English





Cloud Computing (CC) is a model for enabling on-demand access to a shared pool of configurable computing resources. Testing and evaluating the performance of a cloud environment with respect to allocation, provisioning, scheduling, and data allocation policies has therefore received considerable attention. Using a cloud simulator saves time and money and provides a flexible environment in which to evaluate new research. Unfortunately, current simulators (e.g., CloudSim, NetworkCloudSim, GreenCloud) treat data in terms of size only, with no consideration of data allocation policy or locality. NetworkCloudSim is one of the most commonly used simulators because it includes modules that support the functions needed by a simulated cloud environment, and it can be extended with additional modules. In this paper, the NetworkCloudSim simulator has been extended and modified to support data locality; the modified simulator is called LocalitySim. The accuracy of the proposed LocalitySim simulator has been verified by building a mathematical model. The proposed simulator has also been used, as a case study, to test the performance of a three-tier data center while taking the data locality feature into account.
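The core idea behind LocalitySim, scheduling work where its input data already resides, can be illustrated with a short sketch. The following Java snippet is only an illustrative sketch under assumed, hypothetical types; Host, placeTask, and the block map are not part of NetworkCloudSim or LocalitySim. A task is assigned to the host that already stores the largest share of its input blocks, which minimizes network transfer.

import java.util.List;
import java.util.Map;

// Hypothetical, self-contained sketch of locality-aware task placement:
// assign a task to the host that already stores the most of its input data.
public class LocalityAwareScheduler {

    // A host with an id and the data blocks it stores locally (blockId -> size).
    record Host(int id, Map<String, Long> storedBlocks) {
        long localSizeFor(List<String> neededBlocks) {
            return neededBlocks.stream()
                    .mapToLong(b -> storedBlocks.getOrDefault(b, 0L))
                    .sum();
        }
    }

    // Pick the host holding the most input data; anything missing would have to be transferred.
    static Host placeTask(List<String> inputBlocks, List<Host> hosts) {
        Host best = null;
        long bestLocal = -1;
        for (Host h : hosts) {
            long local = h.localSizeFor(inputBlocks);
            if (local > bestLocal) {
                bestLocal = local;
                best = h;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Host h1 = new Host(1, Map.of("blk-A", 512L, "blk-B", 256L));
        Host h2 = new Host(2, Map.of("blk-C", 1024L));
        Host chosen = placeTask(List.of("blk-A", "blk-B"), List.of(h1, h2));
        System.out.println("Task placed on host " + chosen.id()); // host 1: all input is already local
    }
}

A simulator with data locality differs from a size-only simulator precisely in this step: the placement decision, and hence the simulated network traffic, depends on where blocks reside rather than only on how large they are.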



Related research

Distributed systems achieve scalability by distributing load across many machines, but wide-area deployments can introduce worst-case response latencies proportional to the network's diameter. Crux is a general framework for building locality-preserving distributed systems by transforming an existing scalable distributed algorithm A into a new locality-preserving algorithm A^LP, which guarantees for any two clients u and v interacting via A^LP that their interactions exhibit worst-case response latencies proportional to the network latency between u and v. Crux builds on compact-routing theory but generalizes these techniques beyond routing applications. Crux provides weak and strong consistency flavors and shows latency improvements for localized interactions in both cases, up to several orders of magnitude for weakly-consistent Crux (from roughly 900 ms to 1 ms). We deployed locality-preserving prototypes on PlanetLab.
State-of-the-art distributed in-memory datastores (FaRM, FaSST, DrTM) provide strongly-consistent distributed transactions with high performance and availability. Transactions in those systems are fully general; they can atomically manipulate any set of objects in the store, regardless of their location. To achieve this, these systems use complex distributed transactional protocols. Meanwhile, many workloads have a high degree of locality. For such workloads, distributed transactions are overkill, as most operations only access objects located on the same server -- if sharded appropriately. In this paper, we show that for these workloads, a single-node transactional protocol combined with dynamic object re-sharding and asynchronously pipelined replication can provide the same level of generality with better performance, simpler protocols, and lower developer effort. We present Zeus, an in-memory distributed datastore that provides general transactions by moving all objects involved in a transaction to the same server and executing a single-node transaction on them. Zeus is fault-tolerant and strongly consistent. At the heart of Zeus is a reliable dynamic object sharding protocol that can move 250K objects per second per server, allowing Zeus to process millions of transactions per second and outperform more traditional distributed transactions on a wide range of workloads that exhibit locality.
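A minimal sketch of the idea described in this abstract, assuming hypothetical names (OwnershipLocalTx, acquire, execute) rather than Zeus's actual protocol or API: ownership of every object a transaction touches is first pulled to the executing server, after which the transaction body runs as an ordinary single-node operation with no distributed commit.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration only: acquire ownership locally, then run a single-node transaction.
public class OwnershipLocalTx {

    private final Map<String, Integer> owner = new ConcurrentHashMap<>();   // objectId -> owning server
    private final Map<String, Long> localStore = new ConcurrentHashMap<>(); // objects held on this server
    private final int myServerId;

    OwnershipLocalTx(int myServerId) { this.myServerId = myServerId; }

    // Stand-in for the dynamic re-sharding step: move an object (and its ownership) to this server.
    private void acquire(String objectId) {
        owner.put(objectId, myServerId);
        localStore.putIfAbsent(objectId, 0L);
    }

    // Acquire every object the transaction touches, then apply the body locally and atomically.
    public synchronized void execute(List<String> objectIds, Runnable body) {
        for (String id : objectIds) {
            if (!Integer.valueOf(myServerId).equals(owner.get(id))) {
                acquire(id);   // only objects not already local need to move
            }
        }
        body.run();            // single-node transaction: no distributed commit protocol required
    }
}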
Many graph problems are locally checkable: a solution is globally feasible if it looks valid in all constant-radius neighborhoods. This idea is formalized in the concept of locally checkable labelings (LCLs), introduced by Naor and Stockmeyer (1995). Recently, Chang et al. (2016) showed that in bounded-degree graphs, every LCL problem belongs to one of the following classes:
- Easy: solvable in $O(\log^* n)$ rounds with both deterministic and randomized distributed algorithms.
- Hard: requires at least $\Omega(\log n)$ rounds with deterministic and $\Omega(\log \log n)$ rounds with randomized distributed algorithms.
Hence for any parameterized LCL problem, when we move from local problems towards global problems, there is some point at which the complexity suddenly jumps from easy to hard. For example, for vertex coloring in $d$-regular graphs it is now known that this jump is at precisely $d$ colors: coloring with $d+1$ colors is easy, while coloring with $d$ colors is hard. However, it is currently poorly understood where this jump takes place when one looks at defective colorings. To study this question, we define $k$-partial $c$-coloring as follows: nodes are labeled with numbers between $1$ and $c$, and every node is incident to at least $k$ properly colored edges. It is known that $1$-partial $2$-coloring (a.k.a. weak $2$-coloring) is easy for any $d \ge 1$. As our main result, we show that $k$-partial $2$-coloring becomes hard as soon as $k \ge 2$, no matter how large a $d$ we have. We also show that this is fundamentally different from $k$-partial $3$-coloring: no matter which $k \ge 3$ we choose, the problem is always hard for $d = k$ but it becomes easy when $d \gg k$. The same was known previously for partial $c$-coloring with $c \ge 4$, but the case of $c < 4$ was open.
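For concreteness, the $k$-partial $c$-coloring condition defined above can be restated as a single displayed constraint; this restatement is ours, with $N(v)$ denoting the neighborhood of node $v$ and $\varphi$ the labeling:

\[
  \varphi : V \to \{1, \dots, c\}, \qquad
  \bigl|\{\, u \in N(v) : \varphi(u) \neq \varphi(v) \,\}\bigr| \;\ge\; k
  \quad \text{for every node } v \in V .
\]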
As ISPs begin to cooperate to expose their network locality information as services (e.g., P4P), solutions based on locality information provision for P2P traffic localization will soon approach their capability limits. A natural question is: can we do any better, given that no further improvement in locality information can be made? This paper shows how the utility of locality information can be limited by conventional P2P data scheduling algorithms, even ones as sophisticated as the local-rarest-first policy. Network coding's simplified data scheduling makes it well suited to improving the throughput of P2P applications. Instead of only using locality information in topology construction, this paper proposes locality-aware network coding (LANC), which uses locality information in both topology construction and downloading decisions, and demonstrates its exceptional ability for P2P traffic localization. The randomization introduced by network coding increases the chance that a peer finds innovative blocks in its neighborhood. Aided by proper locality-awareness, the probability that a peer obtains innovative blocks from its proximity increases as well, resulting in more efficient use of network resources. Extensive simulation results show that LANC can significantly reduce P2P traffic redundancy without sacrificing application-level performance. Aided by the same locality knowledge, the traffic redundancy of LANC is in most cases less than 50% of that of the best current approach that does not use network coding.
We present a simple, parallel, and distributed algorithm for setting up and partitioning a sparse representation of a regular discretized simulation domain. This method is scalable to a large number of processes even for complex geometries and ensures load balance between the domains, reasonable communication interfaces, and good data locality within each domain. Applying this scheme to a list-based lattice Boltzmann flow solver can achieve similar or even higher flow-solver performance than widely used graph-partitioning tools such as METIS and PT-SCOTCH.