Many applications today, such as NLP, network analysis, and code analysis, rely on semantically embedding objects into low-dimensional fixed-length vectors. Such embeddings naturally provide a way to perform useful downstream tasks, such as identifying relations among objects or predicting objects for a given context. Unfortunately, the training necessary for accurate embeddings is usually computationally intensive and requires processing large amounts of data. Furthermore, distributing this training is challenging: most embedding training uses stochastic gradient descent (SGD), an inherently sequential algorithm whose updates depend on one another, and prior approaches to parallelizing SGD do not honor these dependencies, so they potentially suffer poor convergence. This paper presents a distributed training framework for a class of applications that use Skip-gram-like models to generate embeddings. We call this class Any2Vec; it includes Word2Vec, DeepWalk, and Node2Vec, among others. We first formulate the Any2Vec training algorithm as a graph application and leverage the state-of-the-art distributed graph analytics framework D-Galois. We adapt D-Galois to support dynamic graph generation and repartitioning, and incorporate novel communication optimizations. Finally, we introduce a novel way to combine gradients during distributed training to prevent accuracy loss. We show that our framework, called GraphAny2Vec, matches on a cluster of 32 hosts the accuracy of the state-of-the-art shared-memory implementations of Word2Vec and Vertex2Vec on 1 host, with geo-mean speedups of 12x and 5x, respectively. Furthermore, GraphAny2Vec is on average 2x faster than DMTK, the state-of-the-art distributed Word2Vec implementation, on 32 hosts. We also show the value of our Gradient Combiner independent of GraphAny2Vec by incorporating it into DMTK, which raises its accuracy by more than 30%.
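To make the sequential-dependence problem concrete, below is a minimal sketch of the skip-gram update with negative sampling that Word2Vec-style (Any2Vec) training performs; the function and parameter names are illustrative assumptions, not GraphAny2Vec's actual code. Every step reads and writes rows of shared embedding matrices, which is why naively parallelized SGD steps can reorder or overwrite dependent updates:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skipgram_step(W_in, W_out, center, context, negatives, lr=0.025):
    """One skip-gram SGD step with negative sampling (illustrative).

    W_in and W_out are shared (vocab x dim) embedding matrices; 'center' and
    'context' are object indices, 'negatives' are sampled non-context indices.
    """
    v = W_in[center]
    grad_v = np.zeros_like(v)
    for target, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[target].copy()            # read shared state
        g = lr * (label - sigmoid(v @ u))   # scalar gradient coefficient
        grad_v += g * u
        W_out[target] += g * v              # write shared state
    W_in[center] += grad_v                  # apply accumulated input gradient
    return W_in, W_out
```

Two concurrent steps that touch the same rows race on W_in and W_out; honoring those read-write dependencies is exactly what makes distributing this kind of training hard.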
The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale both in terms of graph size and the number of model parameters. Although some work has explored training on large-scale graphs…
Modern machine learning techniques are successfully being adapted to data modeled as graphs. However, many real-world graphs are very large and do not fit in memory, often making the problem of training machine learning models on them intractable.
We propose ResIST, a novel distributed training protocol for Residual Networks (ResNets). ResIST randomly decomposes a global ResNet into several shallow sub-ResNets that are trained independently in a distributed manner for several local iterations…
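As a rough illustration of the decompose-train-aggregate cycle described above, here is a sketch in which each residual block's parameters are a NumPy array; the assignment policy (every worker keeps the first and last block, each middle block goes to one random worker) and all names are assumptions for illustration, not ResIST's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(num_blocks, num_workers):
    """Randomly split a ResNet's blocks into shallow sub-ResNets (sketch):
    every worker keeps the first and last block; each middle block is
    assigned to one randomly chosen worker."""
    assignment = [{0, num_blocks - 1} for _ in range(num_workers)]
    for b in range(1, num_blocks - 1):
        assignment[int(rng.integers(num_workers))].add(b)
    return [sorted(a) for a in assignment]

def aggregate(global_blocks, local_blocks):
    """Fold independently trained sub-ResNets back into the global model:
    blocks trained on several workers are averaged; blocks no worker
    trained keep their global parameters."""
    for b in range(len(global_blocks)):
        trained = [lb[b] for lb in local_blocks if b in lb]
        if trained:
            global_blocks[b] = sum(trained) / len(trained)
    return global_blocks
```

Here local_blocks is a list of per-worker dicts mapping block index to locally trained parameters; repeating the decompose/train/aggregate round is what lets such protocols cut communication relative to synchronizing every step.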
We propose a new framework for computing the embeddings of large-scale graphs on a single machine. A graph embedding is a fixed-length vector representation for each node (and/or edge type) in a graph and has emerged as the de facto approach to apply machine learning to graphs.
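As a minimal illustration of that definition (the sizes and the dot-product scorer are arbitrary choices, not this framework's API), a graph embedding is just a table with one fixed-length vector per node, which downstream tasks such as link prediction consume directly:

```python
import numpy as np

# One fixed-length vector per node; sizes are illustrative.
num_nodes, dim = 100_000, 128
emb = np.random.default_rng(0).normal(0.0, 0.1, (num_nodes, dim)).astype(np.float32)

def edge_score(u: int, v: int) -> float:
    """Score a candidate edge (u, v) by the dot product of its endpoints'
    embeddings, a common readout for link prediction."""
    return float(emb[u] @ emb[v])
```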
Model parallelism has become a necessity for training modern large-scale deep language models. In this work, we identify a new dimension orthogonal to existing model-parallel approaches: it is possible to perform pipeline parallelism within a single training sequence for Transformer-based language models.
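A small sketch of what pipelining within one sequence can look like: because causal (autoregressive) attention lets token slice t at one stage depend only on slices <= t at the previous stage, different pipeline stages can process different token slices of the same sequence concurrently. The scheduling function below is an illustrative wavefront schedule under that assumption, not the cited system's implementation:

```python
def within_sequence_schedule(num_stages, num_slices):
    """Emit (step, stage, slice) triples for a wavefront pipeline over the
    token dimension of a single sequence: at each step, stage s works on
    slice (step - s), so all stages stay busy once the pipeline fills."""
    schedule = []
    for step in range(num_stages + num_slices - 1):
        for stage in range(num_stages):
            s = step - stage
            if 0 <= s < num_slices:
                schedule.append((step, stage, s))
    return schedule

# Example: 3 pipeline stages, a sequence cut into 4 token slices.
for step, stage, s in within_sequence_schedule(3, 4):
    print(f"step {step}: stage {stage} processes token slice {s}")
```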