Abstract:
Recurrent neural networks (RNNs) are a type of artificial neural network (ANN) that has been successfully applied to many problems in artificial intelligence. However, they are expensive to train, since the number of learned weights grows quadratically with the number of hidden neurons. Non-iterative training algorithms have been proposed to reduce the training time, mainly for feedforward ANNs. In this work, the application of non-iterative randomized training algorithms to several RNN architectures, including the Elman RNN, the fully connected RNN, and long short-term memory (LSTM), is investigated. The mathematical formulation and theoretical computational complexity of the proposed algorithms are presented. Finally, their performance is empirically compared to that of iterative RNN training algorithms on time series prediction and sequential decision-making problems. Non-iteratively trained RNN architectures showed promising results: training speedups of up to 99% and improved repeatability were achieved compared to backpropagation-trained RNNs. Although the accompanying decrease in prediction accuracy was found to be statistically significant by Friedman and ANOVA testing, some applications, such as real-time embedded systems, can tolerate this loss in exchange for the reduced training time.
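To make the idea of non-iterative randomized RNN training concrete, the following is a minimal sketch in the echo-state spirit: hidden input and recurrent weights are drawn randomly and left fixed, and only a linear readout is fitted in closed form by ridge regression. All names, scalings, and the toy task are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_hidden, n_in))       # random, never trained
W_rec = rng.uniform(-0.5, 0.5, (n_hidden, n_hidden))  # random, never trained
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))     # keep spectral radius < 1

def collect_states(u):
    """Run the fixed recurrent layer over an input sequence u of shape (T, n_in)."""
    h = np.zeros(n_hidden)
    states = []
    for u_t in u:
        h = np.tanh(W_in @ u_t + W_rec @ h)
        states.append(h)
    return np.stack(states)                            # (T, n_hidden)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
H = collect_states(u[:-1])                             # hidden states as features
Y = u[1:]                                              # next-step targets

# The only "training" step is non-iterative: a closed-form ridge regression
# solve for the output weights, instead of backpropagation through time.
lam = 1e-6
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

pred = H @ W_out
print("train MSE:", np.mean((pred - Y) ** 2))
```

The single linear solve replaces many gradient epochs, which is the source of the reported speedups; fixing the random weights to a seed also explains the improved repeatability, since repeated runs reproduce the same solution exactly.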