
CodeNet: Training Large Scale Neural Networks in Presence of Soft-Errors

Published by: Sanghamitra Dutta
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





This work proposes the first strategy to make distributed training of neural networks resilient to computing errors, a problem that has remained unsolved despite being first posed in 1956 by von Neumann. He also speculated that the efficiency and reliability of the human brain are obtained by allowing for low-power but error-prone components with redundancy for error-resilience. It is surprising that this problem remains open, even as massive artificial neural networks are being trained on increasingly low-cost and unreliable processing units. Our coding-theory-inspired strategy, CodeNet, solves this problem by addressing three challenges in the science of reliable computing: (i) providing the first strategy for error-resilient neural network training by encoding each layer separately; (ii) keeping the overheads of coding (encoding/error-detection/decoding) low by obviating the need to re-encode the updated parameter matrices from scratch after each iteration; and (iii) providing a completely decentralized implementation with no central node (which would be a single point of failure), allowing all primary computational steps to be error-prone. We theoretically demonstrate that CodeNet has higher error tolerance than replication, which we leverage to speed up computation time. At the same time, CodeNet requires lower redundancy than replication, and equal computational and communication costs in a scaling sense. We first demonstrate the benefits of CodeNet in reducing expected computation time over replication when accounting for checkpointing. Our experiments show that CodeNet achieves the best accuracy-runtime tradeoff compared to both replication and uncoded strategies. CodeNet is a significant step towards biologically plausible neural network training, which could hold the key to orders-of-magnitude efficiency improvements.
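
As a concrete, heavily simplified illustration of the coding idea behind such strategies, the sketch below splits a layer's weight matrix into row blocks held by different nodes, adds two checksum blocks, and recovers the matrix-vector product even when one node's output is hit by a soft-error. This is a toy single-error-correcting code, not CodeNet's actual encoding; all names are illustrative.

```python
# Toy coded matrix-vector product with single-error correction (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))                  # a layer's weight matrix
x = rng.normal(size=4)

W1, W2 = W[:3], W[3:]                        # systematic row blocks on nodes 1 and 2
P1, P2 = W1 + W2, W1 + 2 * W2                # parity blocks on nodes 3 and 4

# each node computes its block-times-vector product; corrupt node 2's output
r1, r2, rp1, rp2 = W1 @ x, W2 @ x, P1 @ x, P2 @ x
r2 = r2 + np.array([0.0, 5.0, 0.0])          # soft-error at node 2

# syndromes: s1 = e_p1 - e1 - e2,  s2 = e_p2 - e1 - 2*e2  (e_i = error at node i)
s1, s2 = rp1 - (r1 + r2), rp2 - (r1 + 2 * r2)
if np.allclose(s1, 0) and np.allclose(s2, 0):
    pass                                     # no error among the four nodes
elif np.allclose(s2, s1):
    r1 = r1 + s1                             # node 1 was corrupted; cancel its error
elif np.allclose(s2, 2 * s1):
    r2 = r2 + s1                             # node 2 was corrupted; cancel its error
# (otherwise only a parity node was hit and the systematic outputs are intact)

y = np.concatenate([r1, r2])                 # recovered W @ x despite the error
assert np.allclose(y, W @ x)
```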


Read also

Full-batch training of Graph Neural Networks (GNNs) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible. It is challenging due to the large memory capacity and bandwidth requirements on a single compute node and the high communication volumes across multiple nodes. In this paper, we present DistGNN, which optimizes the well-known Deep Graph Library (DGL) for full-batch training on CPU clusters via an efficient shared-memory implementation, communication reduction using a minimum vertex-cut graph partitioning algorithm, and communication avoidance using a family of delayed-update algorithms. Our results on four common GNN benchmark datasets (Reddit, OGB-Products, OGB-Papers, and Proteins) show up to 3.7x speed-up using a single CPU socket and up to 97x speed-up using 128 CPU sockets over baseline DGL implementations running on a single CPU socket.
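
A minimal sketch of the delayed-update (communication-avoidance) idea mentioned above, in a toy setting rather than DistGNN's actual API: each partition aggregates its local neighbors every step but refreshes the partial aggregates contributed by remote partitions only every few steps, trading staleness for communication.

```python
# Delayed-update aggregation sketch (toy setting, illustrative names only).
import numpy as np

def delayed_aggregate(local_feats, fetch_remote, steps, delay):
    """local_feats: (n, d) features owned by this partition.
    fetch_remote(step): stand-in for communication; returns the (n, d) partial
    aggregate contributed by neighbors living on remote partitions."""
    cached_remote = fetch_remote(0)            # initial communication round
    outputs = []
    for t in range(steps):
        if t > 0 and t % delay == 0:           # communicate only every `delay` steps
            cached_remote = fetch_remote(t)    # refresh the remote contribution
        # fresh local aggregation combined with a possibly stale remote aggregation
        outputs.append(local_feats + cached_remote)
    return outputs

# toy usage: the remote contribution drifts every step, but is only observed every 4
rng = np.random.default_rng(0)
local = rng.normal(size=(5, 8))
outs = delayed_aggregate(local, lambda t: np.full((5, 8), float(t)), steps=8, delay=4)
print(len(outs), outs[3][0, 0] - local[0, 0], outs[4][0, 0] - local[0, 0])  # 8 0.0 4.0
```
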
We study the trade-offs between storage/bandwidth and prediction accuracy of neural networks that are stored in noisy media. Conventionally, it is assumed that all parameters (e.g., weights and biases) of a trained neural network are stored as binary arrays and are error-free. This assumption relies on error correction codes (ECCs) that correct potential bit flips in storage media. However, ECCs add storage overhead and reduce bandwidth when loading the trained parameters during inference. We study the robustness of deep neural networks when bit errors exist but ECCs are turned off, across different neural network models and datasets. We observe that more sophisticated models and datasets are more vulnerable to errors in their trained parameters. We propose a simple detection approach that universally improves robustness, in some cases by orders of magnitude. We also propose an alternative binary representation of the parameters such that the distortion caused by bit flips is reduced, and even vanishes in theory as the number of bits used to represent a parameter increases.
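
A toy experiment, not the paper's exact protocol, illustrating why undetected bit flips can be catastrophic and how even a crude screen helps: random single-bit flips are injected into float32 weights, and parameters that become non-finite or implausibly large are zeroed before use. The magnitude-based rule below is only one plausible form of detection.

```python
# Bit-flip injection into float32 parameters, with a simple magnitude screen.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=10_000).astype(np.float32)   # stand-in trained weights

# inject a random single-bit flip into 1% of the parameters
bits = w.view(np.uint32).copy()
idx = rng.choice(w.size, size=w.size // 100, replace=False)
shifts = rng.integers(0, 32, size=idx.size)
bits[idx] ^= np.array([1 << int(s) for s in shifts], dtype=np.uint32)
w_noisy = bits.view(np.float32)

# loading the corrupted values as-is: exponent-bit flips can blow up the distortion
err_naive = float(np.max(np.abs(w_noisy.astype(np.float64) - w)))

# one plausible detection rule (not necessarily the paper's): values that are
# non-finite or far outside the trained range are treated as corrupted and zeroed
limit = 10 * float(np.abs(w).max())
bad = ~np.isfinite(w_noisy) | (np.abs(w_noisy) > limit)
w_screened = np.where(bad, np.float32(0.0), w_noisy)
err_screened = float(np.max(np.abs(w_screened - w)))

print(f"max distortion without detection: {err_naive:.3e}")
print(f"max distortion with the screen:   {err_screened:.3e}")
```
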
This paper has two contributions. First, we propose a novel coded matrix multiplication technique called Generalized PolyDot codes that advances on existing methods for coded matrix multiplication under storage and communication constraints. This technique uses garbage alignment, i.e., aligning computations in coded computing that are not part of the desired output. Generalized PolyDot codes bridge between Polynomial codes and MatDot codes, trading off between recovery threshold and communication costs. Second, we demonstrate that Generalized PolyDot can be used for training large Deep Neural Networks (DNNs) on unreliable nodes prone to soft-errors. This requires us to address three additional challenges: (i) the prohibitively large overhead of coding the weight matrices in each layer of the DNN at each iteration; (ii) nonlinear operations during training, which are incompatible with linear coding; and (iii) not assuming the presence of an error-free master node, requiring us to architect a fully decentralized implementation without any single point of failure. We allow all primary DNN training steps, namely matrix multiplication, nonlinear activation, Hadamard product, and update steps, as well as the encoding/decoding, to be error-prone. We consider the case of mini-batch size $B=1$ as well as $B>1$, leveraging coded matrix-vector products and matrix-matrix products, respectively. The problem of DNN training under soft-errors also motivates an interesting probabilistic error model under which a real-number $(P,Q)$ MDS code is shown to correct $P-Q-1$ errors with probability $1$, as compared to $\lfloor \frac{P-Q}{2} \rfloor$ for the more conventional adversarial error model. We also demonstrate that our proposed strategy can provide unbounded gains in error tolerance over a competing replication strategy and a preliminary MDS-code-based strategy under both of these error models.
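
A hedged restatement of the two error models named above, using standard minimum-distance facts about real-number MDS codes; the probability-1 claim is the abstract's, sketched here with the usual reasoning.

```latex
% A real $(P,Q)$ MDS code has minimum distance $d = P - Q + 1$, so under the
% adversarial model unique decoding is guaranteed only for
\[
  t \;\le\; \left\lfloor \frac{d-1}{2} \right\rfloor
    \;=\; \left\lfloor \frac{P-Q}{2} \right\rfloor
\]
% erroneous nodes. Under the probabilistic model, the nonzero error values are
% drawn from a continuous distribution, so an erroneous received word almost
% surely remains consistent with exactly one error-location pattern, and up to
\[
  t \;=\; P - Q - 1
\]
% errors can be located and corrected with probability $1$.
```
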
Training large deep learning models at scale is very challenging. This paper proposes Chimera, a novel pipeline parallelism scheme that combines bidirectional pipelines for efficiently training large-scale models. Chimera is a synchronous approach and therefore incurs no loss of accuracy, which makes it more convergence-friendly than asynchronous approaches. Compared with the latest synchronous pipeline approach, Chimera reduces the number of bubbles by up to 50%; benefiting from the sophisticated scheduling of bidirectional pipelines, Chimera also has more balanced activation memory consumption. Evaluations are conducted on Transformer-based language models. For a GPT-2 model with 1.3 billion parameters running on 2,048 GPU nodes of the Piz Daint supercomputer, Chimera improves the training throughput by 1.16x-2.34x over state-of-the-art synchronous and asynchronous pipeline approaches.
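
A toy illustration of the bidirectional-pipeline idea, not Chimera's actual scheduler: two pipelines traverse the same set of devices in opposite directions, so each device hosts one stage from each pipeline, and the micro-batches are split between the two directions.

```python
# Stage placement sketch for two pipelines running in opposite directions.
D = 4  # pipeline depth (number of devices); illustrative value

for device in range(D):
    down_stage = device          # stage of the "down" pipeline placed on this device
    up_stage = D - 1 - device    # stage of the "up" pipeline placed on this device
    print(f"device {device}: holds down-pipeline stage {down_stage} "
          f"and up-pipeline stage {up_stage}")

# micro-batches are split between the two directions, e.g. with N = 8 micro-batches
N = 8
down_batches = list(range(N // 2))     # injected at device 0 first
up_batches = list(range(N // 2, N))    # injected at device D-1 first
print("down-pipeline micro-batches:", down_batches)
print("up-pipeline micro-batches:  ", up_batches)
```
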
The notion of age-of-information (AoI) is investigated in the context of large-scale wireless networks, in which transmitters need to send a sequence of information packets, generated as independent Bernoulli processes, to their intended receivers over a shared spectrum. Due to interference, the rate of packet depletion at any given node is entangled with both the spatial configurations, which determine the path loss, and the temporal dynamics, which influence the active states, of the other transmitters, causing the queues to interact with each other in both space and time over the entire network. As a result, variations in the packet update frequency affect not just the inter-arrival time but also the departure process, and the impact of these phenomena on the AoI is not well understood. In this paper, we establish a theoretical framework to characterize the AoI performance in this setting. In particular, tractable expressions are derived for both the peak and average AoI under two different transmission protocols, namely FCFS and LCFS-PR. Based on the theoretical results, we find that: i) networks operating under LCFS-PR attain smaller peak and average AoI than under FCFS, and the gain is more pronounced when the infrastructure is densely deployed; ii) in sparsely deployed networks, ALOHA with a universally designed channel access probability is not instrumental in reducing the AoI, calling for more advanced channel access approaches; and iii) when the infrastructure is densely rolled out, there exists a non-trivial ALOHA channel access probability that minimizes the peak and average AoI under both FCFS and LCFS-PR.
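
For reference, a sketch of the standard age-of-information quantities this line of work builds on (not the paper's derived expressions):

```latex
% Let $u(t)$ denote the generation time of the freshest packet received at a
% destination by time $t$. The instantaneous AoI, its time average, and the
% peak AoI (the value reached just before each update arrives) are
\[
  \Delta(t) = t - u(t), \qquad
  \bar{\Delta} = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \Delta(t)\, \mathrm{d}t, \qquad
  \Delta^{\mathrm{peak}}_k = \Delta(t_k^{-}),
\]
% where $t_k$ is the reception time of the $k$-th update.
```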
