In this paper, the echo state network (ESN) memory capacity, which represents the amount of input data an ESN can store, is analyzed for a new type of deep ESN. In particular, two deep ESN architectures are studied. First, a parallel deep ESN is proposed in which multiple reservoirs are connected in parallel, allowing the outputs of the individual ESNs to be averaged and thus decreasing the prediction error. Then, a series deep ESN is proposed in which the reservoirs are placed in cascade, so that the output of each ESN is the input of the next ESN in the series. This series architecture can capture more features relating the input sequence to the output sequence, thus improving the overall prediction accuracy. Fundamental analysis shows that the memory capacity of the parallel ESN is equivalent to that of a traditional shallow ESN, while the memory capacity of the series ESN is smaller than that of a traditional shallow ESN. In terms of normalized root mean square error, simulation results show that the parallel deep ESN achieves a 38.5% reduction compared to the traditional shallow ESN, while the series deep ESN achieves a 16.8% reduction.
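The following minimal sketch (not the authors' code) contrasts the two deep ESN layouts described above: a "parallel" stack that averages the readouts of several independently trained reservoirs, and a "series" stack in which each reservoir's output is fed as the input of the next. Reservoir size, spectral radius, the ridge parameter, and the choice to train every series stage against the same target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ESN:
    """A plain (non-leaky) echo state network with a ridge-regression readout."""
    def __init__(self, n_in, n_res=100, rho=0.9, ridge=1e-6):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.W = W * (rho / max(abs(np.linalg.eigvals(W))))  # scale spectral radius
        self.ridge = ridge
        self.n_res = n_res

    def _states(self, U):
        x = np.zeros(self.n_res)
        X = []
        for u in U:                                # U has shape (T, n_in)
            x = np.tanh(self.W_in @ u + self.W @ x)
            X.append(x.copy())
        return np.array(X)

    def fit(self, U, y):
        X = self._states(U)
        # closed-form ridge regression readout (no gradient descent)
        self.W_out = np.linalg.solve(X.T @ X + self.ridge * np.eye(self.n_res), X.T @ y)
        return self

    def predict(self, U):
        return self._states(U) @ self.W_out


def parallel_deep_esn(U, y, n_branches=3):
    # average the predictions of independently trained reservoirs
    preds = [ESN(U.shape[1]).fit(U, y).predict(U) for _ in range(n_branches)]
    return np.mean(preds, axis=0)


def series_deep_esn(U, y, n_stages=3):
    # cascade: each stage's prediction becomes the next stage's input
    inp = U
    for _ in range(n_stages):
        esn = ESN(inp.shape[1]).fit(inp, y)
        inp = esn.predict(inp).reshape(-1, 1)
    return inp.ravel()
```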
Recurrent Neural Networks (RNNs) have demonstrated an outstanding ability on sequence tasks and have achieved state-of-the-art results in a wide range of applications, including industrial, medical, economic, and linguistic ones. The Echo State Network (ESN) is a simple type of RNN that has emerged in the last decade as an alternative to RNNs trained by gradient descent. With a strong theoretical grounding, the ESN is practical, conceptually simple, and easy to implement; it avoids the non-convergence and high computational cost of gradient descent methods. Since the ESN was put forward in 2002, an abundance of work has advanced it, and the recently introduced DeepESN model has opened the way to uniting the merits of deep learning and ESNs. Moreover, combinations of ESNs with other machine learning models have outperformed baselines in several applications. However, the apparent simplicity of ESNs can sometimes be deceptive, and applying them successfully requires some experience. Thus, in this paper, we categorize ESN-based methods into basic ESNs, DeepESNs, and combinations, and then analyze them from the perspectives of theoretical studies, network designs, and specific applications. Finally, we discuss the challenges and opportunities of ESNs by summarizing the open questions and proposing possible directions for future work.
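For context, the mechanism that makes the ESN "conceptually simple" is that only the linear readout is trained. A standard non-leaky formulation, written in our own notation rather than that of any particular surveyed variant, is
$$
\mathbf{x}(t) = \tanh\!\big(\mathbf{W}_{\text{in}}\,\mathbf{u}(t) + \mathbf{W}\,\mathbf{x}(t-1)\big), \qquad \hat{\mathbf{y}}(t) = \mathbf{W}_{\text{out}}\,\mathbf{x}(t),
$$
where $\mathbf{W}_{\text{in}}$ and the reservoir matrix $\mathbf{W}$ are generated at random and left untrained (in practice $\mathbf{W}$ is rescaled to a small spectral radius), and only $\mathbf{W}_{\text{out}}$ is learned, typically in closed form by ridge regression: with the collected states as the columns of $\mathbf{X}$ and the targets as the columns of $\mathbf{Y}$,
$$
\mathbf{W}_{\text{out}} = \mathbf{Y}\mathbf{X}^{\top}\big(\mathbf{X}\mathbf{X}^{\top} + \lambda\mathbf{I}\big)^{-1}.
$$
This closed-form fit is what lets ESNs sidestep the gradient-descent training issues mentioned above.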
A key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks involving sequential data is their ability to model intricate long-term temporal dependencies. However, a well-established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of the effect of depth on their ability to correlate data throughout time is limited. Specifically, existing depth-efficiency results on convolutional networks do not suffice to account for the success of deep RNNs on data of varying lengths. To address this, we introduce a measure of a network's ability to support information flow across time, referred to as the Start-End separation rank, which reflects the distance of the function realized by the recurrent network from one modeling no dependency between the beginning and the end of the input sequence. We prove that deep recurrent networks support Start-End separation ranks which are combinatorially higher than those supported by their shallow counterparts. Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks. We obtain our results by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits, which merge the hidden state with the input via the Multiplicative Integration operation, and empirically demonstrate the discussed phenomena on common RNNs. Finally, we employ the tool of quantum Tensor Networks to gain additional graphic insight regarding the complexity brought forth by depth in recurrent networks.
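A hedged illustration of the Multiplicative Integration (MI) update mentioned above, contrasted with a vanilla additive RNN cell; the Recurrent Arithmetic Circuits studied in the paper rely on this element-wise product to merge input and hidden state. The tanh nonlinearity and the function names here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def vanilla_rnn_step(x, h, Wx, Wh, b):
    # additive merge: phi(Wx x + Wh h + b)
    return np.tanh(Wx @ x + Wh @ h + b)

def mi_rnn_step(x, h, Wx, Wh, b):
    # multiplicative integration: phi((Wx x) * (Wh h) + b), "*" taken element-wise
    return np.tanh((Wx @ x) * (Wh @ h) + b)
```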
The quest for biologically plausible deep learning is driven not just by the desire to explain experimentally observed properties of biological neural networks, but also by the hope of discovering more efficient methods for training artificial networks. In this paper, we propose a new algorithm named Variational Probability Flow (VPF), an extension of minimum probability flow for training binary Deep Boltzmann Machines (DBMs). We show that weight updates in VPF are local, depending only on the states and firing rates of the adjacent neurons. Unlike contrastive divergence, there is no need for Gibbs confabulations; and unlike backpropagation, alternating feedforward and feedback phases are not required. Moreover, the learning algorithm is effective for training DBMs with intra-layer connections between the hidden nodes. Experiments with MNIST and Fashion MNIST demonstrate that VPF learns reasonable features quickly, reconstructs corrupted images more accurately, and generates samples with a high estimated log-likelihood. Lastly, we note that, interestingly, if an asymmetric version of VPF exists, the weight updates directly explain experimental results in Spike-Timing-Dependent Plasticity (STDP).
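As background for readers unfamiliar with the starting point, the minimum probability flow objective that VPF extends can be written, for a fully observed binary energy model and in our notation rather than the paper's, as
$$
K(\theta) \;=\; \frac{\epsilon}{|\mathcal{D}|}\sum_{\mathbf{x}\in\mathcal{D}}\;\sum_{\mathbf{x}'\in\mathcal{N}(\mathbf{x})} \exp\!\Big(\tfrac{1}{2}\big(E(\mathbf{x};\theta)-E(\mathbf{x}';\theta)\big)\Big),
$$
where $\mathcal{D}$ is the data set, $\mathcal{N}(\mathbf{x})$ is a set of single-bit-flip neighbours of $\mathbf{x}$, and $E$ is the Boltzmann energy. Because $E$ is a sum of pairwise terms, the gradient of each summand only involves weights attached to the flipped unit, which gives one sense in which probability-flow-style updates can be local; how VPF handles hidden units and variational terms is specific to the paper and not reproduced here.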
Reinforcement learning (RL) algorithms have made huge progress in recent years by leveraging the power of deep neural networks (DNNs). Despite this success, deep RL algorithms are known to be sample inefficient, often requiring many rounds of interaction with the environment to obtain satisfactory performance. Recently, episodic-memory-based RL has attracted attention due to its ability to latch on to good actions quickly. In this paper, we present a simple yet effective biologically inspired RL algorithm called Episodic Memory Deep Q-Networks (EMDQN), which leverages episodic memory to supervise an agent during training. Experiments show that our proposed method leads to better sample efficiency and is more likely to find good policies. It requires only 1/5 of the interactions of DQN to achieve state-of-the-art performance on many Atari games, significantly outperforming regular DQN and other episodic-memory-based RL algorithms.
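A hedged sketch of the core idea described above: the usual TD target is supplemented by an episodic-memory target holding the best return ever observed for a (state, action) pair. The memory keying (a plain dict on a hashed state), the discount factor, the mixing weight `lam`, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
memory = {}  # (state_key, action) -> best Monte Carlo return seen so far

def update_memory(episode, gamma=0.99):
    """episode: list of (state_key, action, reward); store best discounted returns."""
    g = 0.0
    for state_key, action, reward in reversed(episode):
        g = reward + gamma * g
        key = (state_key, action)
        memory[key] = max(memory.get(key, float("-inf")), g)

def emdqn_loss(q_value, td_target, state_key, action, lam=0.1):
    """Squared TD error plus a memory-supervision term when a stored return exists."""
    loss = (q_value - td_target) ** 2
    best_return = memory.get((state_key, action))
    if best_return is not None:
        loss += lam * (q_value - best_return) ** 2
    return loss
```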
Echo state networks (ESNs) have recently been proved to be universal approximants for input/output systems with respect to various $L^p$-type criteria. When $1 \leq p < \infty$, only $p$-integrability hypotheses need to be imposed, while in the case $p = \infty$ a uniform boundedness hypothesis on the inputs is required. This note shows that, in the latter case, a universal family of ESNs can be constructed that contains exclusively elements having both the echo state and the fading memory properties. This conclusion could not be drawn with the results and methods available so far in the literature.
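A standard way to state the two properties mentioned above, recalled here for context (the notation is ours, not the note's): for the state equation
$$
x_t = \sigma\big(A x_{t-1} + C z_t + \zeta\big),
$$
the echo state property asks that every admissible input sequence $(z_t)_{t \in \mathbb{Z}_-}$ determine a unique compatible state sequence $(x_t)_{t \in \mathbb{Z}_-}$; when $\sigma$ is 1-Lipschitz (e.g. $\tanh$), the contraction condition $\|A\|_2 < 1$ is a well-known sufficient criterion. The fading memory property asks, in addition, that the induced input/output functional be continuous with respect to a weighted supremum norm that discounts inputs from the remote past.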