
Hemingway: Modeling Distributed Optimization Algorithms

Published by: Xinghao Pan
Publication date: 2017
Research field: Informatics Engineering
Language: English





Distributed optimization algorithms are widely used in many industrial machine learning applications. However, choosing the appropriate algorithm and cluster size is often difficult for users, as the performance and convergence rate of optimization algorithms vary with the size of the cluster. In this paper we make the case for an ML-optimizer that can select the appropriate algorithm and cluster size for a given problem. To do this, we propose building two models: one that captures the system-level characteristics of how computation and communication costs change as the cluster size grows, and another that captures how convergence rates change with cluster size. We present preliminary results from our prototype implementation, called Hemingway, and discuss some of the challenges involved in developing such a system.
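As a rough illustration of how the two models could be combined, the sketch below fits a per-iteration time model over a few profiled cluster sizes and a simple convergence curve for one (algorithm, cluster size) configuration, then predicts the total time to reach a target loss. The feature map (an Ernest-style form), the 1/t convergence curve, and all function names are assumptions for illustration; the abstract does not prescribe a specific implementation.

```python
import numpy as np
from scipy.optimize import nnls

def time_features(machines):
    """Assumed feature map: serial term, parallelizable term, and
    communication terms that grow with the number of machines."""
    m = np.asarray(machines, dtype=float)
    return np.column_stack([np.ones_like(m), 1.0 / m, np.log2(m), m])

def fit_time_model(machines, iter_times):
    """Fit non-negative coefficients for per-iteration time vs. cluster size."""
    coeffs, _ = nnls(time_features(machines), np.asarray(iter_times, dtype=float))
    return coeffs

def fit_convergence_curve(iterations, losses):
    """Fit loss ~ a + b / t as a simple stand-in for a convergence curve,
    one curve per (algorithm, cluster size) configuration."""
    t = np.asarray(iterations, dtype=float)
    X = np.column_stack([np.ones_like(t), 1.0 / t])
    (a, b), *_ = np.linalg.lstsq(X, np.asarray(losses, dtype=float), rcond=None)
    return a, b

def predict_time_to_loss(m, target_loss, time_coeffs, conv_curve):
    """Predicted wall-clock time for m machines to reach target_loss."""
    a, b = conv_curve
    if target_loss <= a:                      # target below the fitted loss floor
        return np.inf
    iters_needed = b / (target_loss - a)
    per_iter_time = (time_features([m]) @ time_coeffs)[0]
    return per_iter_time * iters_needed
```

An ML-optimizer along these lines would repeat this prediction for every candidate (algorithm, cluster size) pair and pick the one with the smallest predicted time to the target loss.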




Read also

Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony. In contrast, the synchronous approach is often thought to be impractical due to idle time wasted on waiting for straggling workers. We revisit these conventional beliefs in this paper, and examine the weaknesses of both approaches. We demonstrate that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating for the worst stragglers. Our approach is empirically validated and shown to converge faster and to better test accuracies.
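For intuition, here is a single-process sketch of the backup-workers idea described above: at each step the server aggregates gradients only from the fastest N - b workers, so one straggler no longer gates the step. The worker count, timing values, and function names are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sync_step_with_backups(grads, compute_times, num_backups):
    """Aggregate only the gradients of the fastest (N - num_backups) workers.

    grads: one gradient vector per worker for the current step.
    compute_times: simulated per-worker step times (stragglers = large values).
    """
    n = len(grads)
    fastest = np.argsort(compute_times)[: n - num_backups]   # drop the slowest b workers
    step_time = compute_times[fastest].max()                  # wait only for the fastest n - b
    update = np.mean([grads[i] for i in fastest], axis=0)     # average their gradients
    return update, step_time

# Toy example: 8 workers, 2 backups, worker 5 is a straggler.
grads = [rng.normal(size=4) for _ in range(8)]
times = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 5.0, 1.1, 1.0])
update, step_time = sync_step_with_backups(grads, times, num_backups=2)
print(step_time)   # 1.1 rather than 5.0: the straggler no longer gates the step
```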
In this paper we study fractional coloring from the angle of distributed computing. Fractional coloring is the linear relaxation of the classical notion of coloring, and has many applications, in particular in scheduling. It was proved by Hasemann, Hirvonen, Rybicki and Suomela (2016) that for every real $\alpha>1$ and integer $\Delta$, a fractional coloring of total weight at most $\alpha(\Delta+1)$ can be obtained deterministically in a single round in graphs of maximum degree $\Delta$, in the LOCAL model of computation. However, a major issue of this result is that the output of each vertex has unbounded size. Here we prove that even if we impose the more realistic assumption that the output of each vertex has constant size, we can find fractional colorings of total weight arbitrarily close to known tight bounds for the fractional chromatic number in several cases of interest. More precisely, we show that for any fixed $\epsilon > 0$ and $\Delta$, a fractional coloring of total weight at most $\Delta+\epsilon$ can be found in $O(\log^* n)$ rounds in graphs of maximum degree $\Delta$ with no $K_{\Delta+1}$, while finding a fractional coloring of total weight at most $\Delta$ in this case requires $\Omega(\log\log n)$ rounds for randomized algorithms and $\Omega(\log n)$ rounds for deterministic algorithms. We also show how to obtain fractional colorings of total weight at most $2+\epsilon$ in grids of any fixed dimension, for any $\epsilon>0$, in $O(\log^* n)$ rounds. Finally, we prove that in sparse graphs of large girth from any proper minor-closed family we can find a fractional coloring of total weight at most $2+\epsilon$, for any $\epsilon>0$, in $O(\log n)$ rounds.
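For context, the linear relaxation the abstract refers to is the standard LP definition of the fractional chromatic number (a general fact, not specific to this paper):

$$
\chi_f(G) \;=\; \min\Big\{ \sum_{I \in \mathcal{I}(G)} x_I \;:\; \sum_{I \ni v} x_I \ge 1 \ \text{for all } v \in V(G),\ x_I \ge 0 \Big\},
$$

where $\mathcal{I}(G)$ denotes the family of independent sets of $G$; a fractional coloring of total weight $w$ is any feasible assignment with $\sum_I x_I \le w$.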
Data parallelism does a good job of speeding up training. However, when the memory of a single device cannot host a whole model, data parallelism alone is of no help. Another option is to split the model by operator, i.e., horizontally. Megatron-LM introduced a 1-dimensional distributed method that uses GPUs to speed up training, and Optimus is a 2D solution for distributed tensor parallelism. However, these methods have high communication overhead and low scaling efficiency on large-scale computing clusters. To solve this problem, we investigate 2.5-dimensional distributed tensor parallelism. Introduced by Solomonik et al., 2.5-dimensional matrix multiplication is an effective method for running multiple instances of Cannon's algorithm at the same time to increase efficiency. Given the many restrictions of Cannon's algorithm and its large number of shift operations, we need a new 2.5-dimensional matrix-multiplication method to improve performance. Absorbing the essence of both SUMMA and 2.5-dimensional matrix multiplication, we introduce SUMMA2.5-LM for language models to eliminate the unnecessary transmission losses that result from the increasing degree of language-model parallelism. Compared to previous 1D and 2D model parallelization of language models, SUMMA2.5-LM reduces the transmission cost in each layer, achieving a 1.45x efficiency gain according to our weak-scaling results comparing the 2.5-D [4,4,4] arrangement with the 2-D [8,8,1] arrangement.
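To make the SUMMA baseline concrete, below is a single-process numpy simulation of the plain 2D SUMMA schedule that the abstract builds on: at step k, the k-th block column of A is (conceptually) broadcast along processor rows, the k-th block row of B along processor columns, and each grid cell accumulates one block product. The 2.5-D variant additionally replicates the grid along a depth dimension to trade memory for communication; the function name and sizes here are illustrative, not the paper's code.

```python
import numpy as np

def summa_simulated(A, B, p):
    """Simulate the 2D SUMMA schedule for C = A @ B on a p x p processor grid.

    Assumes square n x n matrices with n divisible by p.
    """
    n = A.shape[0]
    b = n // p                               # block size per grid cell
    C = np.zeros((n, n))
    for k in range(p):                       # one broadcast step per block index
        for i in range(p):
            for j in range(p):
                # Cell (i, j) receives A's block (i, k) via a row broadcast and
                # B's block (k, j) via a column broadcast, then accumulates.
                A_ik = A[i*b:(i+1)*b, k*b:(k+1)*b]
                B_kj = B[k*b:(k+1)*b, j*b:(j+1)*b]
                C[i*b:(i+1)*b, j*b:(j+1)*b] += A_ik @ B_kj
    return C

A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
assert np.allclose(summa_simulated(A, B, p=4), A @ B)
```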
Jason Dai, Yiheng Wang, Xin Qiu (2018)
This paper presents BigDL (a distributed deep learning framework for Apache Spark), which has been used by a variety of users in the industry for building deep learning applications on production big data platforms. It allows deep learning applications to run on the Apache Hadoop/Spark cluster so as to directly process the production data, and as a part of the end-to-end data analysis pipeline for deployment and management. Unlike existing deep learning frameworks, BigDL implements distributed, data parallel training directly on top of the functional compute model (with copy-on-write and coarse-grained operations) of Spark. We also share real-world experience and war stories of users that have adopted BigDL to address their challenges (i.e., how to easily build end-to-end data analysis and deep learning pipelines for their production data).
Ji Liu, Jizhou Huang, Yang Zhou (2021)
In recent years, data and computing resources are typically distributed in the devices of end users, various regions or organizations. Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks. Federated learning emerges as an efficient approach to exploit distributed data and computing resources, so as to collaboratively train machine learning models, while obeying the laws and regulations and ensuring data security and data privacy. In this paper, we provide a comprehensive survey of existing works for federated learning. We propose a functional architecture of federated learning systems and a taxonomy of related techniques. Furthermore, we present the distributed training, data communication, and security of FL systems. Finally, we analyze their limitations and propose future research directions.
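As a concrete illustration of the basic mechanism (a representative algorithm from the FL literature, not taken from the surveyed systems), the sketch below implements federated averaging: each client trains locally on data that never leaves its region or device, and the server only aggregates model weights, weighted by local dataset size. The least-squares objective and hyperparameters are assumptions for illustration.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    """One client's local training: a few gradient steps on a least-squares loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Server loop: clients train on private (X, y) data; only weights are shared."""
    for _ in range(rounds):
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        # Weight each client's model by its local dataset size.
        global_w = np.average(local_ws, axis=0, weights=sizes)
    return global_w
```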

