
Technical Report: Estimating Reliability of Workers for Cooperative Distributed Computing

Published by: Kishori Konwar
Publication date: 2014
Research field: Informatics Engineering
Language: English





Internet supercomputing is an approach to solving partitionable, computation-intensive problems by harnessing the power of a vast number of interconnected computers. For the problem of using network supercomputing to perform a large collection of independent tasks, prior work introduced a decentralized approach and provided randomized synchronous algorithms that perform all tasks correctly with high probability, while dealing with misbehaving or crash-prone processors. The main weakness of existing algorithms is that they assume either that the \emph{average} probability of a non-crashed processor returning incorrect results is less than $\frac{1}{2}$, or that the probability of returning incorrect results is known to \emph{each} processor. Here we present a randomized synchronous distributed algorithm that tightly estimates the probability of each processor returning correct results. Starting with the set $P$ of $n$ processors, let $F$ be the set of processors that crash. Our algorithm estimates the probability $p_i$ of returning a correct result for each processor $i \in P-F$, making the estimates available to all these processors. The estimation is based on the $(\epsilon, \delta)$-approximation, where each estimated probability $\tilde{p}_i$ of $p_i$ obeys the bound $\mathsf{Pr}[p_i(1-\epsilon) \leq \tilde{p}_i \leq p_i(1+\epsilon)] > 1 - \delta$, for any constants $\delta > 0$ and $\epsilon > 0$ chosen by the user. An important aspect of this algorithm is that each processor terminates without global coordination. We assess the efficiency of the algorithm in three adversarial models as follows. For the model where the number of non-crashed processors $|P-F|$ is linearly bounded, the time complexity $T(n)$ of the algorithm is $\Theta(\log n)$, the work complexity $W(n)$ is $\Theta(n\log n)$, and the message complexity $M(n)$ is $\Theta(n\log^2 n)$.
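To make the $(\epsilon, \delta)$-approximation concrete, the following Python sketch estimates a single Bernoulli success probability by repeated sampling, with the sample count taken from a standard multiplicative Chernoff bound. It is a centralized illustration only, not the paper's distributed algorithm; the trial function, the assumed lower bound p_min on the true probability, and the simulated processor are all hypothetical.

import math
import random

def estimate_success_probability(trial, epsilon, delta, p_min):
    """Estimate a Bernoulli success probability p with multiplicative
    (epsilon, delta) guarantees: Pr[(1-eps)p <= p_hat <= (1+eps)p] > 1 - delta.

    `trial` is a zero-argument function returning True on a correct result.
    The sample count follows a standard multiplicative Chernoff bound and
    uses a known lower bound p_min on p (an assumption of this sketch).
    """
    # Enough trials so the empirical mean concentrates within a
    # (1 +/- epsilon) factor of p with probability at least 1 - delta.
    n_samples = math.ceil(3 * math.log(2 / delta) / (epsilon ** 2 * p_min))
    successes = sum(1 for _ in range(n_samples) if trial())
    return successes / n_samples

# Hypothetical usage: a simulated processor that answers correctly 80% of the time.
if __name__ == "__main__":
    p_true = 0.8
    p_hat = estimate_success_probability(
        trial=lambda: random.random() < p_true,
        epsilon=0.1, delta=0.05, p_min=0.5,
    )
    print(f"true p = {p_true}, estimate = {p_hat:.3f}")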


Read also

Internet supercomputing is an approach to solving partitionable, computation-intensive problems by harnessing the power of a vast number of interconnected computers. This paper presents a new algorithm for the problem of using network supercomputing to perform a large collection of independent tasks, while dealing with undependable processors. The adversary may cause the processors to return bogus results for tasks with certain probabilities, and may cause a subset $F$ of the initial set of processors $P$ to crash. The adversary is constrained in two ways. First, for the set of non-crashed processors $P-F$, the \emph{average} probability of a processor returning a bogus result is less than $\frac{1}{2}$. Second, the adversary may crash a subset of processors $F$, provided the size of $P-F$ is bounded from below. We consider two models: the first bounds the size of $P-F$ by a fractional polynomial, the second bounds this size by a poly-logarithm. Both models yield adversaries that are much stronger than those previously studied. Our randomized synchronous algorithm is formulated for $n$ processors and $t$ tasks, with $n \le t$, where, depending on the number of crashes, each live processor is able to terminate dynamically with the knowledge that the problem is solved with high probability. For the adversary constrained by a fractional polynomial, the round complexity of the algorithm is $O(\frac{t}{n^\varepsilon}\log n \log\log n)$, its work is $O(t\log n \log\log n)$, and its message complexity is $O(n\log n \log\log n)$. For the poly-log constrained adversary, the round complexity is $O(t)$, the work is $O(t\, n^{\varepsilon})$, and the message complexity is $O(n^{1+\varepsilon})$. All bounds are shown to hold with high probability.
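The sketch below illustrates the general idea of redundant task execution with plurality voting that such algorithms build on; it is not the paper's algorithm. The worker pool, the per-worker bogus-result probabilities, and the compute function are all hypothetical.

import random
from collections import Counter

def run_task_with_voting(workers, compute, k):
    """Assign one task to k randomly chosen live workers and take the
    plurality of the returned results.

    `workers` maps a worker id to its probability of returning a bogus
    result; `compute()` returns the correct answer. Illustration only.
    """
    chosen = random.sample(list(workers), k)
    answers = []
    for w in chosen:
        if random.random() < workers[w]:
            answers.append(("bogus", w))   # an arbitrary wrong value per worker
        else:
            answers.append(compute())
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

if __name__ == "__main__":
    # Hypothetical pool whose average bogus probability is below 1/2.
    pool = {i: random.uniform(0.0, 0.45) for i in range(100)}
    result = run_task_with_voting(pool, compute=lambda: 42, k=15)
    print("agreed result:", result)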
Kai Li, Yong Wang, Meilin Liu (2014)
Cloud computing is a newly emerging distributed system that evolved from Grid computing. Task scheduling is a core research problem in cloud computing: it studies how to allocate tasks among the physical nodes so that the tasks receive a balanced allocation, each task's execution cost is minimized, or the overall system performance is optimal. Unlike earlier task scheduling based on time or cost, and aiming at the special reliability requirements of cloud computing, we propose a non-cooperative game model for reliability-based task scheduling. This model takes the steady-state availability that computing nodes provide as the target and the task-slicing strategy of the schedulers as the game strategy, and then finds the Nash equilibrium solution. We also design a task scheduling algorithm based on this model. The experiments show that our task scheduling algorithm outperforms the so-called balanced scheduling algorithm.
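The sketch below shows the flavour of game-theoretic task slicing with an iterated better-response dynamic: each scheduler repeatedly shifts a small slice of its load from its most congested node to the least congested one until allocations stabilize. The congestion model, demands, and capacities are hypothetical; the paper's availability-based model is not reproduced here.

def equilibrium_split(demands, capacities, iters=2000, step=0.01):
    """Better-response sketch for splitting each scheduler's demand across
    nodes until no scheduler can noticeably reduce the congestion it sees
    by moving load (a Nash/Wardrop-style equilibrium). Illustration only.
    """
    n_sched, n_nodes = len(demands), len(capacities)
    # Start with an even split of every scheduler's demand.
    alloc = [[d / n_nodes for _ in range(n_nodes)] for d in demands]
    for _ in range(iters):
        # Congestion per node: total assigned load divided by capacity.
        load = [sum(alloc[j][i] for j in range(n_sched)) / capacities[i]
                for i in range(n_nodes)]
        for j in range(n_sched):
            # Move a small slice from this scheduler's most congested node
            # to the globally least congested node.
            worst = max(range(n_nodes),
                        key=lambda i: load[i] if alloc[j][i] > 0 else -1.0)
            best = min(range(n_nodes), key=lambda i: load[i])
            moved = min(step * demands[j], alloc[j][worst])
            alloc[j][worst] -= moved
            alloc[j][best] += moved
    return alloc

if __name__ == "__main__":
    split = equilibrium_split(demands=[10.0, 6.0], capacities=[4.0, 2.0, 1.0])
    print([[round(x, 2) for x in row] for row in split])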
Realistic, relevant, and reproducible experiments often need input traces collected from real-world environments. In this work we focus on traces of workflows, which are common in datacenters, clouds, and HPC infrastructures. We show that the state of the art in using workflow traces raises important issues: (1) the use of realistic traces is infrequent, and (2) the use of realistic, \emph{open-access} traces even more so. To alleviate these issues, we introduce the Workflow Trace Archive (WTA), an open-access archive of workflow traces from diverse computing infrastructures, together with tooling to parse, validate, and analyze traces. The WTA includes more than 48 million workflows captured from more than 10 computing infrastructures, representing a broad diversity of trace domains and characteristics. To emphasize the importance of trace diversity, we characterize the WTA contents and analyze in simulation the impact of trace diversity on experiment results. Our results indicate significant differences in characteristics, properties, and workflow structures between workload sources, domains, and fields.
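As a rough illustration of the kind of analysis such tooling supports, the sketch below loads a trace in a hypothetical JSON layout and computes per-workflow task counts, total work, and critical-path length; the actual WTA schema and parsers are those published with the archive.

import json

def summarize_trace(path):
    """Summarize a workflow trace stored as JSON.

    The format here is hypothetical: a list of workflows, each with tasks
    carrying an id, a runtime, and a list of parent task ids.
    """
    with open(path) as f:
        workflows = json.load(f)

    stats = []
    for wf in workflows:
        runtimes = {t["id"]: t["runtime"] for t in wf["tasks"]}
        parents = {t["id"]: t.get("parents", []) for t in wf["tasks"]}

        # Critical-path length via memoized longest path over the task DAG.
        memo = {}
        def cp(task_id):
            if task_id not in memo:
                memo[task_id] = runtimes[task_id] + max(
                    (cp(p) for p in parents[task_id]), default=0.0)
            return memo[task_id]

        stats.append({
            "tasks": len(runtimes),
            "total_work": sum(runtimes.values()),
            "critical_path": max(cp(t) for t in runtimes),
        })
    return stats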
Workflows are prevalent in today's computing infrastructures. The workflow model supports many different domains, from machine learning to finance and from astronomy to chemistry. Differing Quality-of-Service (QoS) requirements and other desires of both users and providers make workflow scheduling a tough problem, especially since resource providers need to use their resources as efficiently as possible to remain competitive. To a newcomer, or even an experienced researcher, sifting through the vast number of articles can be a daunting task. Questions arise regarding the different techniques, policies, emerging areas, and opportunities. Surveys are an excellent way to cover these questions, yet surveys rarely publish the tools and data on which they are based. Moreover, the communities behind these articles are rarely studied. We attempt to address these shortcomings in this work. We focus on four areas within workflow scheduling: 1) the workflow formalism, 2) workflow allocation, 3) resource provisioning, and 4) applications and services. Each part features one or more taxonomies, a view of the community, important and emerging keywords, and directions for future work. We introduce and make open source an instrument we used to combine and store article metadata. Using this metadata, we 1) obtain important keywords overall and per year, per community, 2) identify keywords growing in importance, 3) gain insight into the structure and relations within each community, and 4) perform a systematic literature survey per part to validate and complement our taxonomies.
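A minimal sketch of the keyword-trend part of such an analysis is shown below; the article schema used here (dicts with 'year' and 'keywords' fields) is hypothetical, and the survey's own open-source instrument defines the real one.

from collections import Counter, defaultdict

def keyword_trends(articles):
    """Count keyword frequency per publication year and flag keywords whose
    share grew between the first and last year observed. Illustration only.
    """
    per_year = defaultdict(Counter)
    for art in articles:
        per_year[art["year"]].update(art["keywords"])

    years = sorted(per_year)
    first, last = per_year[years[0]], per_year[years[-1]]
    total_first = sum(first.values()) or 1
    total_last = sum(last.values()) or 1

    growing = [kw for kw in last
               if last[kw] / total_last > first.get(kw, 0) / total_first]
    return dict(per_year), growing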
Low-latency online services have strict Service Level Objectives (SLOs) that require datacenter systems to support high throughput at microsecond-scale tail latency. Dataplane operating systems have been designed to scale up multi-core servers with minimal overhead for such SLOs. However, as application demands continue to increase, scaling up is not enough, and serving larger demands requires these systems to scale out to multiple servers in a rack. We present RackSched, the first rack-level microsecond-scale scheduler that provides the abstraction of a rack-scale computer (i.e., a huge server with hundreds to thousands of cores) to an external service with network-system co-design. The core of RackSched is a two-layer scheduling framework that integrates inter-server scheduling in the top-of-rack (ToR) switch with intra-server scheduling in each server. We use a combination of analytical results and simulations to show that it provides near-optimal performance comparable to centralized scheduling policies, and is robust for both low-dispersion and high-dispersion workloads. We design a custom switch data plane for the inter-server scheduler, which realizes power-of-k-choices, ensures request affinity, and tracks server loads accurately and efficiently. We implement a RackSched prototype on a cluster of commodity servers connected by a Barefoot Tofino switch. End-to-end experiments on a twelve-server testbed show that RackSched improves throughput by up to 1.44x and scales out throughput nearly linearly, while maintaining the same tail latency as one server until the system is saturated.
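A generic power-of-k-choices policy of the kind RackSched realizes in the ToR switch data plane can be sketched as follows; the server identifiers and load counters here are hypothetical, and the real system implements this in switch hardware rather than in Python.

import random

def pick_server(loads, k=2):
    """Power-of-k-choices: sample k servers uniformly at random and send
    the request to the least loaded of them.

    `loads` maps server id -> current outstanding requests. Generic sketch
    of the policy, not the switch implementation.
    """
    candidates = random.sample(list(loads), k)
    return min(candidates, key=loads.get)

if __name__ == "__main__":
    # Hypothetical rack of 12 servers with random instantaneous loads.
    loads = {f"server-{i}": random.randint(0, 20) for i in range(12)}
    target = pick_server(loads, k=2)
    loads[target] += 1   # the dispatcher also updates its load estimate
    print("dispatch to", target)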