
The Benefit of Being Flexible in Distributed Computation

Published by: Linqi Song
Publication date: 2017
Research field: Information engineering
Language: English





In wireless distributed computing, networked nodes perform intermediate computations over data placed in their memory and exchange these intermediate values to calculate function values. In this paper we consider an asymmetric setting where each node has access to a random subset of the data, i.e., we cannot control the data placement. The paper makes a simple point: we can realize significant benefits if we are allowed to be flexible and decide which node computes which function in our system. We make this argument in the case where each function depends on only two of the data messages, as is the case in similarity searches. We establish a percolation phenomenon in the behavior of the system: depending on the amount of observed data, being flexible may eliminate the need for communication altogether.
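To make the flexibility benefit concrete, here is a minimal Python sketch with made-up sizes (not the paper's exact model): placement is random, each function depends on two messages, and we test whether a zero-communication assignment exists by looking for a matching that hands every function to a distinct node already caching both of its inputs. The one-function-per-node constraint is an assumption for illustration.

```python
import random

def zero_comm_assignment(num_nodes, num_msgs, cache_size, functions, seed=0):
    """Test whether a flexible assignment achieves zero communication:
    every function (a pair of messages) is matched to a distinct node
    that already caches both of its inputs."""
    rng = random.Random(seed)
    # Uncontrolled placement: each node observes a random subset of the data.
    caches = [set(rng.sample(range(num_msgs), cache_size))
              for _ in range(num_nodes)]
    # Bipartite graph: function f -> nodes holding both of f's messages.
    adj = [[v for v, c in enumerate(caches) if set(f) <= c] for f in functions]

    match = [-1] * num_nodes  # node -> index of the function assigned to it

    def augment(f, seen):
        # Kuhn's augmenting-path step for maximum bipartite matching.
        for v in adj[f]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = f
                    return True
        return False

    return all(augment(f, set()) for f in range(len(functions)))

# Hypothetical sizes: 20 nodes, 30 messages, 8 cached messages per node,
# and 20 pairwise functions to compute (as in a similarity search).
funcs = [(2 * i % 30, (2 * i + 1) % 30) for i in range(20)]
print(zero_comm_assignment(20, 30, 8, funcs))
```

Re-running with a smaller cache size shows the percolation-like transition: below a threshold amount of observed data, no zero-communication matching exists.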


Read also

Royee Yosibash, Ram Zamir (2021)
Distributed computation is a framework used to break a complex computational task into smaller tasks and distribute them among computational nodes. Erasure correction codes have recently been introduced and have become a popular workaround to the well-known "straggling nodes" problem, in particular by matching linear coding to linear computation tasks. It was observed that decoding tends to amplify the computation "noise", i.e., the numerical errors at the computation nodes. We propose taking advantage of the case where more nodes return than minimally required. We show how a clever construction of a polynomial code, inspired by recent results on robust frames, can significantly reduce the amplification of noise and achieves graceful degradation with the number of straggler nodes.
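As a rough illustration of this effect, the following Python sketch builds a toy polynomial code for a linear task, evaluates it at well-conditioned Chebyshev points in the spirit of frame-based constructions, and compares decoding from the minimum number of returns against least-squares decoding from all returns. The sizes, noise level, and evaluation points are assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 8                      # k blocks needed, n worker nodes
m = 5                            # rows per block
A = rng.standard_normal((k * m, 16))
x = rng.standard_normal(16)
blocks = A.reshape(k, m, -1)     # split the linear task y = A @ x into blocks

pts = np.cos(np.pi * (np.arange(n) + 0.5) / n)   # Chebyshev evaluation points
V = np.vander(pts, k, increasing=True)           # n x k Vandermonde matrix
coded = np.einsum('ij,jkl->ikl', V, blocks)      # encoded block per node

# Each node computes its coded block times x, with small numerical noise.
results = coded @ x + 1e-6 * rng.standard_normal((n, m))

# Exact interpolation needs r = k returns; r > k returns with least
# squares averages out the computation noise.
for r in (k, n):
    sel = np.arange(r)
    C, _, _, _ = np.linalg.lstsq(V[sel], results[sel], rcond=None)
    err = np.linalg.norm(C.reshape(-1) - A @ x) / np.linalg.norm(A @ x)
    print(f"{r} returns: relative error {err:.2e}")
```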
We present a communication-efficient distributed protocol for computing the Babai point, an approximate nearest point for a random vector $\mathbf{X} \in \mathbb{R}^n$ in a given lattice. We show that the protocol is optimal in the sense that it minimizes the sum rate when the components of $\mathbf{X}$ are mutually independent. We then investigate the error probability, i.e., the probability that the Babai point does not coincide with the nearest lattice point. In dimensions two and three, this probability is seen to grow with the packing density. For higher dimensions, we use a bound from probability theory to estimate the error probability for some well-known lattices. Our investigations suggest that for uniform distributions, the error probability becomes large with the dimension of the lattice, for lattices with good packing densities. We also consider the case where $\mathbf{X}$ is obtained by adding Gaussian noise to a randomly chosen lattice point. In this case, the error probability goes to zero with the lattice dimension when the noise variance is sufficiently small. In such cases, a distributed algorithm for finding the approximate nearest lattice point is sufficient for finding the nearest lattice point.
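For reference, here is a minimal sketch of computing a Babai point with the standard nearest-plane algorithm; this is the generic centralized algorithm, not the paper's distributed protocol, and the basis and target are arbitrary examples.

```python
import numpy as np

def babai_nearest_plane(B, t):
    """Babai's nearest-plane algorithm: an approximate closest lattice
    point to target t in the lattice spanned by the columns of B."""
    Q, R = np.linalg.qr(B)           # B = QR, R upper triangular
    c = Q.T @ t
    n = B.shape[1]
    u = np.zeros(n)
    for i in range(n - 1, -1, -1):   # back-substitution with rounding
        u[i] = round((c[i] - R[i, i + 1:] @ u[i + 1:]) / R[i, i])
    return B @ u, u

# Arbitrary example basis and target (illustration only).
B = np.array([[2.0, 1.0],
              [0.0, 2.0]])
t = np.array([0.7, 3.3])
point, coeffs = babai_nearest_plane(B, t)
print("Babai point:", point, "integer coefficients:", coeffs)
```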
Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals with quantized, finite precision representations, i.e., a mandatory process involved in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there actually exist incompatible combinations of quantization functions and sensing matrices that proscribe arbitrarily low reconstruction error when the number of measurements increases. This work shows that a large class of random matrix constructions known to respect the restricted isometry property (RIP) is compatible with a simple scalar and uniform quantization if a uniform random vector, or a random dither, is added to the compressive signal measurements before quantization. In the context of estimating low-complexity signals (e.g., sparse or compressible signals, low-rank matrices) from their quantized observations, this compatibility is demonstrated by the existence of (at least) one signal reconstruction method, the projected back projection (PBP), whose reconstruction error decays when the number of measurements increases. Interestingly, given one RIP matrix and a single realization of the dither, a small reconstruction error can be proved to hold uniformly for all signals in the considered low-complexity set. We confirm these observations numerically in several scenarios involving sparse signals, low-rank matrices, and compressible signals, with various RIP matrix constructions such as sub-Gaussian random matrices and random partial discrete cosine transform (DCT) matrices.
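A small numerical sketch of this pipeline, under assumed parameters: Gaussian measurements, a uniform random dither added before uniform scalar quantization, and projected back projection (back-project, then keep the s largest entries). The relative error shrinking as the number of measurements m grows mirrors the decay described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 200, 5                               # ambient dimension, sparsity
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
delta = 0.5                                 # quantizer bin width

def pbp_error(m):
    A = rng.standard_normal((m, n))         # sub-Gaussian RIP-type matrix
    dither = rng.uniform(0, delta, m)       # the uniform random dither
    # Uniform scalar quantization of the dithered measurements.
    q = delta * np.floor((A @ x + dither) / delta) + delta / 2
    z = A.T @ (q - dither) / m              # back projection
    xh = np.zeros(n)
    keep = np.argsort(np.abs(z))[-s:]       # project onto the s-sparse set
    xh[keep] = z[keep]
    return np.linalg.norm(xh - x) / np.linalg.norm(x)

for m in (100, 400, 1600):                  # error decays as m grows
    print(m, f"{pbp_error(m):.3f}")
```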
We consider a status update system in which the update packets need to be processed to extract the embedded useful information. The source node sends the acquired information to a computation unit (CU) which consists of a master node and $n$ worker nodes. The master node distributes the received computation task to the worker nodes. Upon computation, the master node aggregates the results and sends them back to the source node to keep it updated. We investigate the age performance of uncoded and coded (repetition coded, MDS coded, and multi-message MDS (MM-MDS) coded) schemes in the presence of stragglers under i.i.d. exponential transmission delays and i.i.d. shifted exponential computation times. We show that, asymptotically, the MM-MDS coded scheme outperforms the other schemes. Furthermore, we characterize the optimal codes such that the average age is minimized.
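The straggler trade-off behind such schemes can be illustrated with a toy Monte Carlo under an assumed load model (this is not the paper's age analysis): an (n,k)-MDS code waits only for the k fastest workers, but each worker carries a factor n/k more work.

```python
import numpy as np

rng = np.random.default_rng(3)

def finish_time(n, k, shift=1.0, rate=1.0, trials=100_000):
    """Mean completion time of one update: each of n workers takes a
    shifted-exponential time scaled by its load n/k; an (n,k)-MDS code
    needs only the k fastest returns (k = n is the uncoded scheme)."""
    load = n / k                                   # work per node grows as k shrinks
    t = load * (shift + rng.exponential(1 / rate, (trials, n)))
    return np.sort(t, axis=1)[:, k - 1].mean()    # wait for the k-th return

n = 10
for k in (10, 8, 5, 2):                            # k = n is the uncoded scheme
    print(f"(n,k)=({n},{k}):  mean completion {finish_time(n, k):.3f}")
```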
Oliver Kosut, Lang Tong (2007)
The distributed source coding problem is considered when the sensors, or encoders, are under Byzantine attack; that is, an unknown group of sensors have been reprogrammed by a malicious intruder to undermine the reconstruction at the fusion center. Three different forms of the problem are considered. The first is a variable-rate setup, in which the decoder adaptively chooses the rates at which the sensors transmit. An explicit characterization of the variable-rate achievable sum rates is given for any number of sensors and any groups of traitors. The converse is proved constructively by letting the traitors simulate a fake distribution and report the generated values as the true ones. This fake distribution is chosen so that the decoder cannot determine which sensors are traitors while maximizing the required rate to decode every value. Achievability is proved using a scheme in which the decoder receives small packets of information from a sensor until its message can be decoded, before moving on to the next sensor. The sensors use randomization to choose from a set of coding functions, which makes it probabilistically impossible for the traitors to cause the decoder to make an error. Two forms of the fixed-rate problem are considered, one with deterministic coding and one with randomized coding. The achievable rate regions are given for both these problems, and it is shown that lower rates can be achieved with randomized coding.
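The role of randomized coding can be caricatured with a toy parity-check sketch (a hypothetical stand-in, not the paper's actual scheme): because the parity-check masks are drawn at random, a value different from the one the sensor committed to passes all checks only with exponentially small probability.

```python
import random

def parities(value, masks):
    """Random linear (parity) hashes of a 64-bit value."""
    return [bin(value & m).count("1") & 1 for m in masks]

# Randomized choice of coding function: 32 random parity-check masks.
rng = random.Random(7)
masks = [rng.getrandbits(64) for _ in range(32)]

sensor_value = 0xDEADBEEF              # what an honest sensor observed
sent = parities(sensor_value, masks)   # parity bits actually transmitted

# The decoder accepts a candidate reconstruction only if it matches every
# parity; a fake value agrees with all 32 random parities w.p. ~2**-32.
for candidate in (0xDEADBEEF, 0xDEADBEEF ^ 0x10):
    ok = parities(candidate, masks) == sent
    print(hex(candidate), "accepted" if ok else "rejected")
```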