A piecewise weight update rule for a supervised training of cortical algorithms
Publisher
Springer London
Abstract
First introduced by Mountcastle, cortical algorithms (CA) are positioned to outperform second-generation artificial neural networks thanks to their ability to hierarchically store sequences of patterns in an invariant form. Despite their closer resemblance to the human cortex and their hypothesized performance advantage, CA adoption as a deep learning approach remains limited in energy-aware environments because of their high computational training complexity. Motivated by the need to reduce the complexity of supervised CA training on limited hardware resources, we propose in this paper a piecewise linear (polygonal) weight update rule for supervised training of CA, based on a linearization of the exponential function. As shown by our simulation results on 12 publicly available databases and by our error-bound proofs, the proposed rule reduces CA training time by a factor of 3 at the expense of a 0.5% degradation in accuracy. A simpler approximation relying on the asymptotes at 0 and infinity reduces training time by a factor of 3.5 at the cost of a 1.49% reduction in accuracy. © 2017, The Natural Computing Applications Forum.
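The abstract's core idea of linearizing the exponential used in the weight update can be illustrated with a minimal sketch. The breakpoints, interval, and segment count below are illustrative assumptions, not the values derived in the paper; the sketch only shows the general polygonal-approximation technique of replacing exp(x) with linear interpolation between precomputed knots.

```python
import math

def piecewise_exp(x, x_min=-4.0, x_max=4.0, segments=8):
    """Polygonal (piecewise-linear) approximation of exp(x):
    linear interpolation between breakpoints spaced uniformly
    on [x_min, x_max]. Interval and segment count are
    illustrative, not the paper's choices."""
    # Clamp outside the approximation interval.
    if x <= x_min:
        return math.exp(x_min)
    if x >= x_max:
        return math.exp(x_max)
    step = (x_max - x_min) / segments
    i = int((x - x_min) / step)          # index of the segment containing x
    x0 = x_min + i * step                # left breakpoint
    x1 = x0 + step                       # right breakpoint
    y0, y1 = math.exp(x0), math.exp(x1)  # in practice these would be tabulated
    # Chord between the two breakpoints.
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

Because exp is convex, each chord lies above the true curve inside its segment, which is what makes the segment-wise error bound tractable: the worst-case error per segment occurs strictly between the breakpoints and shrinks as the segment count grows.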
Keywords
Cortical algorithms, Energy-aware computing, Model complexity, Polygonal approximation, Supervised learning, Complex networks, Deep learning, Exponential functions, Neural networks, Piecewise linear techniques, Power management, Hardware resources, Training complexity, Approximation algorithms