We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.
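To make the mixing idea concrete, here is a minimal sketch (our own, not the authors' released code) of the parameter-free Fourier mixing sublayer the abstract describes: a 2D discrete Fourier transform is taken over the sequence and hidden dimensions and only the real part is kept. The function name and shapes below are illustrative assumptions.

    import numpy as np

    def fourier_mixing(x):
        # x: (seq_len, d_model) token representations.
        # 2D DFT over the sequence and hidden dimensions; keeping the real
        # part yields a real-valued, parameter-free token-mixing sublayer.
        return np.fft.fft2(x).real

    x = np.random.randn(512, 768)
    y = fourier_mixing(x)  # same shape as x, real-valued

In a full encoder block this mixing output would be followed by the usual residual connection, layer normalization, and feed-forward sublayer.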
Reconstructing continuous signals from a small number of discrete samples is a fundamental problem across science and engineering. In practice, we are often interested in signals with simple Fourier structure, such as bandlimited, multiband, and Fourier-sparse signals.
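As a toy illustration of the kind of structure this line of work exploits (ours, not the paper's algorithm), a signal with a few known active Fourier modes can be recovered exactly from a handful of samples by least squares; the frequencies and sample count below are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    freqs = np.array([3, 7, 12])           # assumed-known active frequencies
    c_true = np.array([1.0, -0.5, 2.0])    # true Fourier-mode amplitudes

    t = rng.uniform(0, 1, size=8)          # a few random sample locations
    A = np.cos(2 * np.pi * np.outer(t, freqs))
    y = A @ c_true                         # noiseless samples of the signal

    # Least-squares recovery of the amplitudes from 8 samples of a
    # 3-mode signal; generic sample locations make A full column rank.
    c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.allclose(c_hat, c_true))      # True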
The Fractional Fourier Transform (FrFT) has widespread applications in areas such as signal analysis, Fourier optics, and diffraction theory. The Holomorphic Fractional Fourier Transform (HFrFT) proposed in the present paper may be used in the same wide range of applications.
In this work we verify the sufficiency of Jensen's necessary and sufficient condition for a class of genus 0 or 1 entire functions to have only real zeros. They are Fourier transforms of even, positive, indefinitely differentiable, and very fast decaying functions.
Spatial Semantic Pointers (SSPs) have recently emerged as a powerful tool for representing and transforming continuous space, with numerous applications to cognitive modelling and deep learning. Fundamental to SSPs is the notion of similarity between representations of points in the encoded space.
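For orientation (our sketch of the standard construction in this literature, not this paper's contribution), SSPs typically encode a continuous value x by "fractional binding": raising a unitary base vector to the power x in the Fourier domain, with similarity given by the dot product of encodings.

    import numpy as np

    def make_unitary(d, rng):
        # Random unit-magnitude Fourier coefficients with conjugate
        # symmetry, so the time-domain base vector is real and unitary.
        F = np.exp(1j * rng.uniform(-np.pi, np.pi, size=d))
        F[0] = 1.0
        if d % 2 == 0:
            F[d // 2] = 1.0
        k = np.arange(1, (d + 1) // 2)
        F[d - k] = np.conj(F[k])
        return np.fft.ifft(F).real

    def encode(b, x):
        # Fractional binding: b ** x computed in the Fourier domain.
        return np.fft.ifft(np.fft.fft(b) ** x).real

    rng = np.random.default_rng(0)
    b = make_unitary(256, rng)
    print(np.dot(encode(b, 1.0), encode(b, 1.0)))  # ~1.0 (identical points)
    print(np.dot(encode(b, 1.0), encode(b, 3.0)))  # smaller: similarity decays

The dot product of two encodings behaves like a sinc-shaped kernel in the difference of the encoded values, which is what gives SSPs their notion of spatial similarity.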
We show that the adjunction counits of a Fourier-Mukai transform $\Phi$ from $D(X_1)$ to $D(X_2)$ arise from maps of the kernels of the corresponding Fourier-Mukai transforms. In a very general setting of proper separable schemes of finite type over a field, we describe these maps of kernels explicitly.
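For context (our addition, standard in the literature): the Fourier-Mukai transform with kernel $K \in D(X_1 \times X_2)$ is defined as

$$\Phi_K(\mathcal{F}) = R\pi_{2*}\left(\pi_1^{*}\mathcal{F} \otimes^{L} K\right),$$

where $\pi_i \colon X_1 \times X_2 \to X_i$ are the projections. Under suitable hypotheses its left and right adjoints are again Fourier-Mukai transforms with explicit kernels, and the counit maps discussed in the abstract live at the level of these kernels.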