Go Wider Instead of Deeper


Abstract

Stacking more transformer blocks with residual connections has recently achieved impressive results on various tasks. To achieve better performance with fewer trainable parameters, recent methods propose going shallower through parameter sharing or model compression along the depth. However, weak modeling capacity limits their performance. In contrast, going wider by introducing more trainable matrices and parameters would produce a huge model requiring advanced parallelism to train and serve. In this paper, we propose a parameter-efficient framework: going wider instead of deeper. Specifically, following existing works, we adopt parameter sharing to compress along the depth, but such deployment alone limits performance. To maximize modeling capacity, we scale along the model width by replacing the feed-forward network (FFN) with a mixture-of-experts (MoE) layer. Across transformer blocks, instead of sharing normalization layers, we propose individual layernorms to transform the varying semantic representations in a more parameter-efficient way. To evaluate our plug-and-run framework, we design WideNet and conduct comprehensive experiments on popular computer vision and natural language processing benchmarks. On ImageNet-1K, our best model outperforms Vision Transformer (ViT) by $1.5\%$ with $0.72\times$ the trainable parameters. Using $0.46\times$ and $0.13\times$ the parameters, WideNet still surpasses ViT and ViT-MoE by $0.8\%$ and $2.1\%$, respectively. On four natural language processing datasets, WideNet outperforms ALBERT by $1.8\%$ on average and surpasses BERT with factorized embedding parameterization by $0.8\%$ with fewer parameters.
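To make the idea concrete, below is a minimal PyTorch sketch of the design the abstract describes: one set of attention and MoE (expert FFN) weights shared across all transformer blocks (parameter sharing along depth), while each block keeps its own LayerNorms. The class names (`MoEFFN`, `WideNetEncoder`), the dimensions, and the naive top-1 routing are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Sketch of the WideNet idea: shared attention/MoE weights across depth,
# individual (unshared) LayerNorms per block. All names and the simple
# top-1 routing are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFFN(nn.Module):
    """Mixture-of-experts feed-forward layer with naive top-1 routing."""
    def __init__(self, d_model, d_hidden, num_experts):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (batch, seq, d_model)
        scores = F.softmax(self.router(x), dim=-1)
        top1 = scores.argmax(dim=-1)           # one expert index per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = (top1 == i)
            if mask.any():
                out[mask] = expert(x[mask])    # route tokens to their expert
        return out


class WideNetEncoder(nn.Module):
    """Shared attention + shared MoE reused at every depth; per-step LayerNorms."""
    def __init__(self, d_model=256, n_heads=4, d_hidden=1024,
                 num_experts=4, depth=6):
        super().__init__()
        # One set of trainable attention/MoE parameters shared along depth.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.moe = MoEFFN(d_model, d_hidden, num_experts)
        # Individual LayerNorms for each recurrence step (not shared).
        self.norm1 = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(depth))
        self.norm2 = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(depth))
        self.depth = depth

    def forward(self, x):
        for i in range(self.depth):
            h = self.norm1[i](x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.moe(self.norm2[i](x))
        return x


if __name__ == "__main__":
    model = WideNetEncoder()
    tokens = torch.randn(2, 16, 256)
    print(model(tokens).shape)  # torch.Size([2, 16, 256])
```

In this sketch, widening the model means adding experts to `MoEFFN` (more trainable parameters without deeper stacking), while the per-step LayerNorms are the only parameters that differ across depth.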
