
Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers

Published by: Colin Wei
Publication date: 2021
Language: English





A common lens to theoretically study neural net architectures is to analyze the functions they can approximate. However, the constructions from approximation theory often have unrealistic aspects, for example, reliance on infinite precision to memorize target function values, which make these results potentially less meaningful. To address these issues, this work proposes a formal definition of statistically meaningful approximation which requires the approximating network to exhibit good statistical learnability. We present case studies on statistically meaningful approximation for two classes of functions: boolean circuits and Turing machines. We show that overparameterized feedforward neural nets can statistically meaningfully approximate boolean circuits with sample complexity depending only polynomially on the circuit size, not the size of the approximating network. In addition, we show that transformers can statistically meaningfully approximate Turing machines with computation time bounded by $T$, requiring sample complexity polynomial in the alphabet size, state space size, and $\log(T)$. Our analysis introduces new tools for generalization bounds that provide much tighter sample complexity guarantees than the typical VC-dimension or norm-based bounds, which may be of independent interest.
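To restate the headline bound in symbols, here is a sketch in our own notation (the abstract names the quantities but not the symbols): $\Sigma$ for the tape alphabet, $Q$ for the state set, and $T$ for the bound on computation time, with the exact polynomial left to the paper itself.

```latex
% Sketch of the claimed sample-complexity scaling, in our notation (not necessarily the paper's):
%   n      -- number of samples required for statistically meaningful approximation
%   \Sigma -- tape alphabet,  Q -- state space,  T -- computation-time bound
n \;=\; \mathrm{poly}\!\left(|\Sigma|,\, |Q|,\, \log T\right)
```

The running time $T$ enters only through $\log T$, as stated in the abstract.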




Read also

Despite their ubiquity in core AI fields like natural language processing, the mechanics of deep attention-based neural networks like the Transformer model are not fully understood. In this article, we present a new perspective towards understanding how Transformers work. In particular, we show that the dot-product attention at the core of the Transformer's operation can be characterized as a kernel learning method on a pair of Banach spaces. Specifically, the Transformer's kernel is characterized as having an infinite feature dimension. Along the way we consider an extension of the standard kernel learning problem to a binary setting, where data come from two input domains and a response is defined for every cross-domain pair. We prove a new representer theorem for these binary kernel machines with non-Mercer (indefinite, asymmetric) kernels (implying that the functions learned are elements of reproducing kernel Banach spaces rather than Hilbert spaces), and also prove a new universal approximation theorem showing that the Transformer calculation can learn any binary non-Mercer reproducing kernel Banach space pair. We experiment with new kernels in Transformers, and obtain results suggesting that the infinite dimensionality of the standard Transformer kernel is partially responsible for its performance. This paper's results provide a new theoretical understanding of a very important but poorly understood model in modern machine learning.
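To make the kernel-learning view concrete, here is a minimal NumPy sketch (our illustration, not code from the article): the pre-softmax attention scores form a Gram matrix of an exponentiated, asymmetric dot-product kernel evaluated on every query/key pair, and row-normalizing that matrix recovers standard scaled dot-product attention. The function names and the $\sqrt{d}$ scaling follow the usual Transformer formulation rather than anything specific to this paper.

```python
import numpy as np

def dot_product_kernel(q, k, d):
    """Asymmetric, exponentiated dot-product 'kernel' implicit in scaled dot-product attention."""
    return np.exp(q @ k / np.sqrt(d))

def attention(Q, K, V):
    """Scaled dot-product attention written as kernel smoothing.

    Q: (n_q, d) queries, K: (n_k, d) keys, V: (n_k, d_v) values.
    """
    d = Q.shape[-1]
    # Gram matrix of the kernel over all (query, key) cross-domain pairs.
    G = np.array([[dot_product_kernel(q, k, d) for k in K] for q in Q])
    # Row-normalizing G recovers softmax(Q K^T / sqrt(d)); attention is then
    # a kernel-weighted average of the values.
    W = G / G.sum(axis=-1, keepdims=True)
    return W @ V

# Tiny usage example with random data.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 5))
out = attention(Q, K, V)  # shape (2, 5)
```

Because queries and keys come from different learned projections, the kernel $k(q, k) = \exp(q^\top k / \sqrt{d})$ is asymmetric and need not be positive definite, which is the non-Mercer setting addressed by the representer theorem described in the abstract.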
It is well known that recurrent neural networks (RNNs) face limitations in learning long-term dependencies, limitations that have been addressed by the memory structures of long short-term memory (LSTM) networks. Matrix neural networks use a matrix representation that inherently preserves the spatial structure of data and has the potential to provide better memory structures than canonical neural networks, which use vector representations. Neural Turing machines (NTMs) are RNNs that implement the notion of a programmable computer with a neural network controller, enabling algorithmic tasks such as copying, sorting, and associative recall. In this paper, we study the augmentation of memory capacity with a matrix representation of RNNs and NTMs (MatNTMs). We investigate whether the matrix representation has a larger memory capacity than the vector representations used in conventional neural networks. We use a probabilistic model of memory capacity based on Fisher information and investigate how the memory capacity of matrix-representation networks is limited under various constraints, and in general without any constraints. In the unconstrained case, we find the upper bound on memory capacity to be $N^2$ for an $N \times N$ state matrix. The results from our experiments using synthetic algorithmic tasks show that MatNTMs have a better learning capacity when compared with their counterparts.
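As a hedged aside on the $N^2$ figure quoted above: the following is our own parameter-counting intuition rather than the paper's Fisher-information derivation, but the unconstrained bound is at least consistent with counting the degrees of freedom of the state matrix.

```latex
% Parameter-counting intuition (ours, not the paper's derivation):
% an N x N real state matrix has N^2 independent entries, so any capacity
% measure tied to the state's degrees of freedom cannot exceed N^2.
S \in \mathbb{R}^{N \times N}
\quad\Longrightarrow\quad
\#\{\text{independent entries of } S\} = N^2
\quad\Longrightarrow\quad
\mathrm{capacity}(S) \le N^2 .
```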
Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and, thus, are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted a lot of research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
Dong-Sheng Wang (2019)
The model of local Turing machines is introduced, including classical and quantum ones, in the framework of matrix-product states. The locality refers to the fact that at any instance of the computation the heads of a Turing machine have definite locations. The local Turing machines are shown to be equivalent to the corresponding circuit models and standard models of Turing machines by simulation methods. This work reveals the fundamental connection between tensor-network states and information processing.
Adrian K. Xu (2021)
Clift and Murfet (2019) introduced a naive Bayesian smooth relaxation of Turing machines motivated by work in differential linear logic; this was subsequently used to endow spaces of program codes of bounded length with a smooth manifold structure via the staged pseudo-universal Turing machine introduced by Clift, Murfet and Wallbridge (2021). In this paper, we give a general construction for simulating n-tape Turing machines on a single-tape Turing machine such that the (naive Bayesian) uncertainty is propagated in an equivalent manner. This result suggests a stronger kind of equivalence between single-tape and n-tape Turing machines than that established by classical results; clarifying these implications is left to future work. We then construct a pseudo-universal Turing machine which similarly preserves the propagation of uncertainty in its simulations, and observe that this gives rise to a particularly natural smooth relaxation of the space of programs.
