The Squeeze-and-Excitation (SE) block introduces a channel attention mechanism that models global context by explicitly capturing dependencies across channels. However, how the SE block works is still poorly understood. In this work, we first revisit the SE block and then present a detailed empirical study of the relationship between global context and attention distribution, based on which we propose a simple yet effective module, called the Linear Context Transform (LCT) block. We divide all channels into groups and normalize the globally aggregated context features within each group, reducing the disturbance from irrelevant channels. Through a linear transform of the normalized context features, we model global context for each channel independently. The LCT block is extremely lightweight and easy to plug into different backbone models, with a negligible increase in parameters and computation. Extensive experiments show that the LCT block outperforms the SE block on ImageNet image classification and on COCO object detection/segmentation across different backbone models. Moreover, LCT yields consistent performance gains over existing state-of-the-art detection architectures, e.g., 1.5$\sim$1.7% AP$^{bbox}$ and 1.0$\sim$1.2% AP$^{mask}$ improvements on the COCO benchmark, irrespective of baseline models of varied capacities. We hope our simple yet effective approach will shed light on future research into attention-based models.
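The pipeline the abstract describes (global context aggregation, per-group normalization, per-channel linear transform, gating) can be sketched as follows. This is a minimal NumPy illustration based only on the abstract; the function name, the sigmoid gate, and the exact normalization details are assumptions, not the paper's verified formulation.

```python
import numpy as np

def lct_block(x, groups, w, b, eps=1e-5):
    """Hypothetical sketch of a Linear Context Transform (LCT) block.

    x: feature map of shape (N, C, H, W).
    w, b: per-channel linear-transform parameters, each of shape (C,).
    groups: number of channel groups (must divide C).
    """
    n, c, h, wd = x.shape
    # 1. Aggregate global context per channel (global average pooling).
    z = x.mean(axis=(2, 3))                                   # (N, C)
    # 2. Normalize the context features within each channel group,
    #    reducing disturbance from irrelevant channels.
    zg = z.reshape(n, groups, c // groups)
    zg = (zg - zg.mean(axis=2, keepdims=True)) / np.sqrt(
        zg.var(axis=2, keepdims=True) + eps)
    z = zg.reshape(n, c)
    # 3. Model global context for each channel independently via a
    #    channel-wise linear transform, then gate with a sigmoid
    #    (the gating nonlinearity is an assumption, mirroring SE).
    a = 1.0 / (1.0 + np.exp(-(w * z + b)))                    # (N, C)
    # 4. Rescale the input features channel-wise.
    return x * a[:, :, None, None]
```

Because `w` and `b` act on each channel separately, the transform adds only 2C parameters per block, which is consistent with the "extremely lightweight" claim above.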
Continual (sequential) training and multitask (simultaneous) training are often attempting to solve the same overall objective: to find a solution that performs well on all considered tasks. The main difference is in the training regimes, where conti
Data augmentation is a powerful technique to improve performance in applications such as image and text classification tasks. Yet, there is little rigorous understanding of why and how various augmentations work. In this work, we consider a family of
Although spatio-temporal graph neural networks have achieved great empirical success in handling multiple correlated time series, they may be impractical in some real-world scenarios due to a lack of sufficient high-quality training data. Furthermore
Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling. Sparsity is a technique to reduce compute and memory requirements of deep learning models. Sparse
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges. In this paper, we consider the problem of learning abstractions that generalize in block MDPs, families of env