Transformers with linearised attention (linear Transformers) have demonstrated the practical scalability and effectiveness of outer product-based Fast Weight Programmers (FWPs) from the 90s. However, the original FWP formulation is more general than that of linear Transformers: a slow neural network (NN) continually reprograms the weights of a fast NN, where both networks may have arbitrary architectures. In existing linear Transformers, both NNs are feedforward and consist of a single layer. Here we explore new variations by adding recurrence to the slow and fast nets. We evaluate our novel recurrent FWPs (RFWPs) on two synthetic algorithmic tasks (code execution and sequential ListOps), on WikiText-103 language modelling, and on the Atari 2600 2D game environment. Our models exhibit properties of both Transformers and RNNs. In the reinforcement learning setting, we report large improvements over LSTM in several Atari games. Our code is public.
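As a concrete illustration of the outer product-based fast weight mechanism described above, below is a minimal PyTorch sketch (not the authors' code) of the single-layer feedforward case corresponding to existing linear Transformers: a slow projection generates keys, queries and values, and the fast weight matrix is rewritten at every step by an additive outer product, then read out with the query. Normalisation and the recurrent slow/fast-net variants introduced in the paper are omitted, and all identifiers (FastWeightLayer, slow_proj, ...) are illustrative.

```python
# Minimal sketch of an outer product-based Fast Weight Programmer
# (the feedforward, single-layer case, i.e. an unnormalised linear Transformer).
import torch

class FastWeightLayer(torch.nn.Module):
    def __init__(self, d_in, d_key, d_value):
        super().__init__()
        # The "slow" net: a single projection whose parameters are learned by
        # gradient descent; it emits a key, a query and a value per step.
        self.slow_proj = torch.nn.Linear(d_in, 2 * d_key + d_value)
        self.d_key, self.d_value = d_key, d_value

    def forward(self, x):
        # x: (seq_len, d_in), processed step by step.
        seq_len, _ = x.shape
        # The "fast" net's weight matrix, reprogrammed at every time step.
        W_fast = x.new_zeros(self.d_value, self.d_key)
        outputs = []
        for t in range(seq_len):
            k, q, v = torch.split(
                self.slow_proj(x[t]), [self.d_key, self.d_key, self.d_value])
            # Positive feature map, as commonly used in linearised attention.
            k = torch.nn.functional.elu(k) + 1
            q = torch.nn.functional.elu(q) + 1
            W_fast = W_fast + torch.outer(v, k)   # additive outer-product "write"
            outputs.append(W_fast @ q)            # "read" with the query
        return torch.stack(outputs)               # (seq_len, d_value)
```

A recurrent variant in the spirit of the paper would, for example, feed the previous step's output back as an additional input to the slow projection or to the fast computation; the sketch is only meant to pin down the basic write/read operations that the recurrent FWPs build on.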