The paper presents techniques for analyzing the expected download time in distributed storage systems that employ systematic availability codes. These codes provide access to hot data through the systematic server containing the object and through multiple recovery groups. When a request for an object arrives, it can be replicated (forked) to the systematic server and all recovery groups. We first consider the low-traffic regime and present a closed-form expression for the download time. By comparing systems with availability, maximum distance separable (MDS), and replication codes, we demonstrate that availability codes can reduce download time in some settings but are not always optimal. In the high-traffic regime, the system consists of multiple inter-dependent fork-join queues, making exact analysis intractable. Accordingly, we present upper and lower bounds on the download time, as well as an M/G/1 queue approximation for several cases of interest. Via extensive numerical simulations, we evaluate our bounds and demonstrate that the M/G/1 queue approximation is highly accurate.
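As a rough illustration of the low-traffic model described above, the following sketch estimates the expected download time by Monte Carlo simulation when a request is forked to the systematic server and to $t$ recovery groups of size $r$; the exponential service-time assumption and all parameter values are illustrative choices, not taken from the paper.

```python
import random

# Minimal Monte Carlo sketch of the low-traffic forked-request model: the
# download completes as soon as the systematic copy arrives or some recovery
# group delivers all r of its coded symbols. Exponential service times and
# the parameter values are illustrative assumptions only.

def forked_download_time(t=2, r=2, mu=1.0):
    systematic = random.expovariate(mu)
    group_times = [max(random.expovariate(mu) for _ in range(r)) for _ in range(t)]
    return min(systematic, min(group_times))

trials = 100_000
estimate = sum(forked_download_time() for _ in range(trials)) / trials
print(f"estimated expected download time: {estimate:.4f}")
```

Changing the completion rule (e.g., waiting for any $k$ of $n$ servers) gives the corresponding estimate for MDS or replication baselines under the same assumptions.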
We consider a distributed storage system which stores several hot (popular) and cold (less popular) data files across multiple nodes or servers. Hot files are stored using repetition codes, while cold files are stored using erasure codes. The nodes are prone to failure, and hence at any given time we assume that only a fraction of the nodes are available. Using a cavity-process-based mean-field framework, we analyze the download time for users accessing hot or cold data in the presence of failed nodes. Our work also illustrates the impact of the choice of storage code on the download-time performance of users in the system.
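To make the setting concrete, the sketch below is a toy Monte Carlo comparison, not the cavity-process mean-field analysis of the paper: hot files are replicated, cold files are erasure coded, each server is independently unavailable with some probability, and per-server service times are assumed exponential; all parameter names and values are hypothetical.

```python
import random

# Toy Monte Carlo sketch: hot files are replicated over n_hot servers (any
# one available copy suffices), cold files are (n_cold, k_cold) erasure coded
# (any k_cold available coded symbols suffice), each server is independently
# unavailable with probability p_fail, and service times are exponential(mu).

def hot_download(n_hot=3, p_fail=0.1, mu=1.0):
    times = [random.expovariate(mu) for _ in range(n_hot)
             if random.random() > p_fail]
    return min(times) if times else None          # None: all replicas failed

def cold_download(n_cold=6, k_cold=4, p_fail=0.1, mu=1.0):
    times = sorted(random.expovariate(mu) for _ in range(n_cold)
                   if random.random() > p_fail)
    return times[k_cold - 1] if len(times) >= k_cold else None

trials = 100_000
hot = [t for t in (hot_download() for _ in range(trials)) if t is not None]
cold = [t for t in (cold_download() for _ in range(trials)) if t is not None]
print(f"mean hot  download time: {sum(hot) / len(hot):.4f}")
print(f"mean cold download time: {sum(cold) / len(cold):.4f}")
```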
This paper studies the problem of code symbol availability: a code symbol is said to have $(r, t)$-availability if it can be reconstructed from $t$ disjoint groups of other symbols, each of size at most $r$. For example, $3$-replication supports $(1, 2)$-availability, as each symbol can be read from its $t = 2$ other (disjoint) replicas, i.e., $r = 1$. However, the rate of replication must vanish like $\frac{1}{t+1}$ as the availability increases. This paper shows that it is possible to construct codes that support a scaling number of parallel reads while keeping the rate an arbitrarily high constant. It further shows that this is possible with minimum distance arbitrarily close to the Singleton bound. This paper also presents a bound demonstrating a trade-off between minimum distance, availability, and locality. Our codes match the aforementioned bound, and their construction relies on combinatorial objects called resolvable designs. From a practical standpoint, our codes seem useful for distributed storage applications involving hot data, i.e., information which is frequently accessed by multiple processes in parallel.
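As a small self-contained check of the $(r, t)$-availability definition, the snippet below uses the binary $[7, 3]$ Simplex code as a stand-in (not the resolvable-design construction of this paper) and verifies that every coordinate can be recovered from $t = 3$ disjoint recovery sets of size $r = 2$, at rate $3/7$ rather than the $1/(t+1) = 1/4$ achieved by replication with the same availability.

```python
# Toy check of (r, t)-availability on the binary [7, 3] Simplex code.
# Coordinates are indexed by the nonzero vectors v of F_2^3, and coordinate v
# stores the inner product <m, v> of the message m. Since v -> <m, v> is
# linear, any pair {a, b} with a XOR b = v is a recovery set for v, giving
# t = 3 disjoint recovery sets of size r = 2 per coordinate.

coords = list(range(1, 8))                      # nonzero 3-bit vectors

def encode(m):
    return {v: bin(m & v).count("1") % 2 for v in coords}

def recovery_sets(v):
    # the 6 coordinates other than v split into 3 disjoint pairs {a, a ^ v}
    return [(a, a ^ v) for a in coords if a != v and a < (a ^ v)]

for m in range(8):                              # all 8 messages
    c = encode(m)
    for v in coords:
        assert len(recovery_sets(v)) == 3
        for a, b in recovery_sets(v):
            assert c[v] == c[a] ^ c[b]
print("every coordinate of the [7,3] Simplex code has (r=2, t=3)-availability")
```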
In this work we present a class of locally recoverable codes, i.e., codes where an erasure at a position $P$ of a codeword may be recovered from the knowledge of the entries in the positions of a recovery set $R_P$. The codes in the class we define have availability, meaning that for each position $P$ there are several distinct recovery sets. Moreover, the entry at position $P$ may be recovered even in the presence of erasures in some of the positions of the recovery sets, and the number of supported erasures may vary among the various recovery sets.
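A generic way to obtain availability with unequal erasure tolerance across recovery sets is a product code; the sketch below uses a $[4, 2]$ Reed-Solomon row code and a $[3, 2]$ single-parity column code over GF(7) purely to illustrate these notions, not as an instance of the class of codes constructed in this work.

```python
P = 7  # field size

def rs_encode_row(info):
    # [4, 2] Reed-Solomon row code: evaluate p(x) = a + b*x at x = 1, 2, 3, 4
    a, b = info
    return [(a + b * x) % P for x in (1, 2, 3, 4)]

def rs_recover(points):
    # recover (a, b) of p(x) = a + b*x from any two known points {x: p(x)}
    (x1, y1), (x2, y2) = list(points.items())[:2]
    b = (y1 - y2) * pow(x1 - x2, -1, P) % P
    a = (y1 - b * x1) % P
    return a, b

# Encode a 2x2 information array into a 3x4 product codeword: RS rows, then a
# single-parity row so that every column sums to 0 mod P.
info = [[3, 5], [1, 6]]
rows = [rs_encode_row(r) for r in info]
parity_row = [(-rows[0][j] - rows[1][j]) % P for j in range(4)]
code = rows + [parity_row]

# Recovery set 1 for symbol (0, 0): the rest of its row. It tolerates one
# further erasure; here x = 1 (the target) and x = 3 are both missing.
a, b = rs_recover({2: code[0][1], 4: code[0][3]})
assert (a + b * 1) % P == code[0][0]

# Recovery set 2 for symbol (0, 0): the rest of its column (no extra erasures).
assert (-code[1][0] - code[2][0]) % P == code[0][0]
print("symbol (0, 0) recovered from either of its two recovery sets")
```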
Recently, research on locally repairable codes has mainly been confined to repairing failed nodes within each repair group. In the extreme case where an entire repair group fails, however, the local code stored in that group must be recovered as a whole. In this paper, local codes with cooperative repair, in which the local codes are constructed based on minimum storage regenerating (MSR) codes, are proposed to repair failed groups. Specifically, the proposed local codes with cooperative repair create a mutually interleaved structure among the parity symbols, such that the parity symbols of each local code, referred to as distributed local parities, can be generated from the parity symbols of the MSR codes in its two adjacent local codes. Taking advantage of this structure, failed local groups can be repaired cooperatively by their adjacent local groups with lower repair locality, and the minimum distance of local codes with cooperative repair is derived. Theoretical analysis and simulation experiments show that, compared with codes with local regeneration (such as MSR-local codes and MBR-local codes), the proposed local codes with cooperative repair reduce bandwidth overhead and repair locality in the case of local group failure.
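The following sketch is a drastically simplified XOR-based toy of the general idea of rebuilding an entire failed group from symbols held by its adjacent groups; the actual construction in the paper interleaves MSR parity symbols, which this illustration does not attempt to reproduce.

```python
# Drastically simplified toy of cooperative group repair: each of three local
# groups stores its own data block plus a "distributed parity" computed from
# the data of its two adjacent groups. Plain XOR stands in for the paper's
# MSR-based parities; data contents and group count are illustrative.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"groupA..", b"groupB..", b"groupC.."]          # data block per group
parity = [xor(data[(i - 1) % 3], data[(i + 1) % 3]) for i in range(3)]

# Suppose group 1 fails entirely (loses both data[1] and parity[1]).
# Its contents can be rebuilt from symbols stored in its adjacent groups:
# parity[0] = data[2] XOR data[1]  =>  data[1] = parity[0] XOR data[2]
recovered_data = xor(parity[0], data[2])
recovered_parity = xor(data[0], data[2])                # re-encode lost parity
assert recovered_data == data[1] and recovered_parity == parity[1]
print("failed group rebuilt from its two adjacent groups")
```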
This chapter deals with the topic of designing reliable and efficient codes for the storage and retrieval of large quantities of data over storage devices that are prone to failure. For a long time, the traditional objective has been to ensure reliability against data loss while minimizing storage overhead. More recently, a third concern has surfaced, namely the need to efficiently recover from the failure of a single storage unit, which corresponds to recovery from the erasure of a single code symbol. We explain here how coding theory has evolved to tackle this fresh challenge.