
Forecasting Using Reservoir Computing: The Role of Generalized Synchronization

Added by: Randall Clark
Publication date: 2021
Research language: English





Reservoir computers (RCs) are a form of recurrent neural network (RNN) used for forecasting time series data. As with all RNNs, selecting the hyperparameters presents a challenge when training on new inputs. We present a method based on generalized synchronization (GS) that gives direction in designing and evaluating the architecture and hyperparameters of an RC. The auxiliary method for detecting GS provides a computationally efficient pre-training test that guides hyperparameter selection. Furthermore, we provide a metric for a well-trained RC, using the reproduction of the input system's Lyapunov exponents, that demonstrates robustness in prediction.
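To make the pre-training test concrete, below is a minimal sketch of the auxiliary method for detecting GS, assuming a standard leaky-tanh echo state reservoir driven by a Lorenz-63 signal; the reservoir equations, hyperparameter names, and drive signal are illustrative assumptions, not the paper's exact setup. Two identical copies of the reservoir are driven by the same input from different initial states; convergence of their trajectories indicates GS, and hence a candidate set of hyperparameters worth training.

import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_res, n_in, spectral_radius, input_scale):
    """Random reservoir with adjacency rescaled to a target spectral radius."""
    A = rng.uniform(-1.0, 1.0, (n_res, n_res))
    A *= spectral_radius / np.max(np.abs(np.linalg.eigvals(A)))
    W_in = rng.uniform(-input_scale, input_scale, (n_res, n_in))
    return A, W_in

def drive(A, W_in, u, r0, leak=1.0):
    """Drive the reservoir with the input sequence u from initial state r0."""
    r, states = r0.copy(), np.empty((len(u), len(r0)))
    for t, u_t in enumerate(u):
        r = (1 - leak) * r + leak * np.tanh(A @ r + W_in @ u_t)
        states[t] = r
    return states

def auxiliary_test(A, W_in, u, tol=1e-6):
    """Auxiliary method: drive two identical copies of the reservoir with the
    same input from different initial states; GS holds if they converge."""
    n = A.shape[0]
    s1 = drive(A, W_in, u, rng.standard_normal(n))
    s2 = drive(A, W_in, u, rng.standard_normal(n))
    # Maximum distance between the two trajectories over the final segment.
    return np.linalg.norm(s1[-100:] - s2[-100:], axis=1).max() < tol

def lorenz_x(T, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrated Lorenz-63 x-component as an example drive signal."""
    xyz, out = np.array([1.0, 1.0, 1.0]), np.empty((T, 1))
    for t in range(T):
        x, y, z = xyz
        xyz = xyz + dt * np.array([sigma * (y - x),
                                   x * (rho - z) - y,
                                   x * y - beta * z])
        out[t, 0] = xyz[0]
    return out

u = lorenz_x(5000)
A, W_in = make_reservoir(500, 1, spectral_radius=0.9, input_scale=0.1)
print("GS detected:", auxiliary_test(A, W_in, u))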

Related research


A new explanation of the geometric nature of the reservoir computing phenomenon is presented. Reservoir computing is understood in the literature as the possibility of approximating input/output systems with randomly chosen recurrent neural systems and a trained linear readout layer. Light is shed on this phenomenon by constructing what are called strongly universal reservoir systems as random projections of a family of state-space systems that generate Volterra series expansions. This procedure yields a state-affine reservoir system with randomly generated coefficients in a dimension that is logarithmically reduced with respect to the original system. This reservoir system is able to approximate any element in the class of fading memory filters just by training a different linear readout for each filter. Explicit expressions for the probability distributions needed in the generation of the projected reservoir system are stated, and bounds for the committed approximation error are provided.
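The construction can be illustrated with a small state-affine system: the state update is $x_t = p(u_t)\,x_{t-1} + q(u_t)$, where $p$ and $q$ are polynomials in the input whose matrix and vector coefficients are drawn at random, and only a linear readout is trained per target filter. The sketch below uses illustrative dimensions, distributions, and a toy delay-line target, not the paper's explicit constructions or error bounds.

import numpy as np

rng = np.random.default_rng(1)

n, deg = 50, 2  # reservoir dimension and polynomial degree (illustrative)

# Random matrix/vector coefficients of the input polynomials p(u) and q(u);
# each matrix coefficient is rescaled so that ||p(u)|| < 1 for |u| <= 1,
# keeping the state-affine recursion stable.
P = rng.standard_normal((deg + 1, n, n))
for k in range(deg + 1):
    P[k] *= 0.3 / np.linalg.norm(P[k], 2)
Q = 0.2 * rng.standard_normal((deg + 1, n))

def step(x, u):
    """State-affine update x_t = p(u_t) x_{t-1} + q(u_t) for scalar input u."""
    powers = u ** np.arange(deg + 1)        # [1, u, u^2]
    A = np.tensordot(powers, P, axes=1)     # p(u): an n x n matrix
    b = powers @ Q                          # q(u): an n-vector
    return A @ x + b

# Drive the reservoir with a scalar input sequence and collect states.
u_seq = np.sin(0.1 * np.arange(2000))
x, X = np.zeros(n), np.empty((len(u_seq), n))
for t, u in enumerate(u_seq):
    x = step(x, u)
    X[t] = x

# Train a linear readout (ridge regression) for one target fading memory
# filter -- here a toy 5-step delayed copy of the input.
target = np.roll(u_seq, 5)
wash = 100                                  # discard the initial transient
Xw, yw = X[wash:], target[wash:]
W = np.linalg.solve(Xw.T @ Xw + 1e-6 * np.eye(n), Xw.T @ yw)
print("readout MSE:", np.mean((Xw @ W - yw) ** 2))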
Reservoir computing is an emerging methodology for neuromorphic computing that is especially well-suited for hardware implementations in size, weight, and power (SWaP)-constrained environments. This work proposes a novel hardware implementation of a reservoir computer using a planar nanomagnet array. A small nanomagnet reservoir is demonstrated via micromagnetic simulations to be able to identify simple waveforms with 100% accuracy. Planar nanomagnet reservoirs are a promising new solution to the growing need for dedicated neuromorphic hardware.
We simulated our nanomagnet reservoir computer (NMRC) design on benchmark tasks, demonstrating the NMRC's high memory content and expressibility. In support of the feasibility of this method, we fabricated a frustrated nanomagnet reservoir layer. Using this structure, we describe a low-power, low-area system with an area-energy-delay product $10^7$ times lower than that of conventional RC systems, which is therefore promising for size, weight, and power (SWaP)-constrained applications.
We propose an approximation of Echo State Networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer Echo State Network (intESN) is a vector containing only $n$-bit integers (where $n < 8$ is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The proposed intESN approach is verified on typical reservoir computing tasks: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. This architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. Experiments on a field-programmable gate array confirm that the proposed intESN approach is much more energy efficient than the conventional ESN.
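The key operations are easy to sketch: the recurrent matrix multiply becomes a cyclic shift, the tanh nonlinearity becomes clipping to a small integer range, and the input is quantized to small integers, so the whole reservoir runs in $n$-bit integer arithmetic. The following sketch assumes a scalar input, a bipolar input projection, and a toy delay task; parameter names such as kappa are illustrative, not the paper's notation.

import numpy as np

rng = np.random.default_rng(2)

N = 200      # reservoir size
kappa = 7    # clipping threshold: states stay in [-7, 7], i.e. 4-bit integers

# Fixed random bipolar input projection (sign flips only, no real weights).
in_sign = rng.choice([-1, 1], N).astype(np.int8)

def quantize(u, levels=3):
    """Map a real input in [-1, 1] to a small integer in [-levels, levels]."""
    return int(np.clip(np.rint(u * levels), -levels, levels))

def step(x, u):
    """intESN update: a cyclic shift replaces the recurrent matrix multiply,
    and clipping replaces tanh, so all reservoir arithmetic is small-integer."""
    recur = np.roll(x, 1)            # replaces W_res @ x
    drive = in_sign * quantize(u)    # replaces W_in @ u
    return np.clip(recur + drive, -kappa, kappa).astype(np.int8)

# Drive the integer reservoir; only the readout below uses real arithmetic.
u_seq = np.sin(0.2 * np.arange(3000))
x, X = np.zeros(N, dtype=np.int8), np.empty((len(u_seq), N))
for t, u in enumerate(u_seq):
    x = step(x, u)
    X[t] = x

# Ordinary ridge-regression readout on a toy 3-step memory task.
target = np.roll(u_seq, 3)
wash = 100
Xw, yw = X[wash:], target[wash:]
W = np.linalg.solve(Xw.T @ Xw + 1e-3 * np.eye(N), Xw.T @ yw)
print("readout MSE:", np.mean((Xw @ W - yw) ** 2))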

