Internet supercomputing is an approach to solving partitionable, computation-intensive problems by harnessing the power of a vast number of interconnected computers. This paper presents a new algorithm for the problem of using network supercomputing to perform a large collection of independent tasks while dealing with undependable processors. The adversary may cause processors to return bogus results for tasks with certain probabilities, and may cause a subset $F$ of the initial set of processors $P$ to crash. The adversary is constrained in two ways. First, for the set of non-crashed processors $P - F$, the \emph{average} probability of a processor returning a bogus result is less than $\frac{1}{2}$. Second, the adversary may crash a subset of processors $F$ only if the size of $P - F$ remains bounded from below. We consider two models: the first bounds the size of $P - F$ by a fractional polynomial, the second bounds it by a poly-logarithm. Both models yield adversaries that are much stronger than those previously studied. Our randomized synchronous algorithm is formulated for $n$ processors and $t$ tasks, with $n \le t$, where, depending on the number of crashes, each live processor is able to terminate dynamically with the knowledge that the problem has been solved with high probability. For the adversary constrained by a fractional polynomial, the round complexity of the algorithm is $O(\frac{t}{n^{\varepsilon}} \log n \log\log n)$, its work is $O(t \log n \log\log n)$, and its message complexity is $O(n \log n \log\log n)$. For the poly-log constrained adversary, the round complexity is $O(t)$, the work is $O(t\, n^{\varepsilon})$, and the message complexity is $O(n^{1+\varepsilon})$. All bounds are shown to hold with high probability.
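As a concrete reading of the two adversarial constraints (the symbols $p_i$, $\varepsilon$, and $c$ are introduced here purely for illustration and are not notation fixed by the abstract), one may write
\[
\frac{1}{|P - F|} \sum_{i \in P - F} p_i \;<\; \frac{1}{2},
\qquad
|P - F| = \Omega(n^{\varepsilon}) \quad\text{or}\quad |P - F| = \Omega(\log^{c} n),
\]
where $p_i$ is the probability that live processor $i$ returns a bogus result, the first lower bound on $|P - F|$ corresponds to the fractional-polynomial model, and the second to the poly-logarithmic model. Under this reading, individual processors may be arbitrarily unreliable as long as the average error probability over the live processors stays below one half.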