
Complexity of Learning: the Case of Everyday Neural Networks

Proceedings of IEEE WCCI ICNN'94, 1, pages 61--65, Orlando, Florida, June 1994

Abstract

Artificial neural networks (ANNs) are receiving increasing attention. However, current computer architectures do not yet allow the implementation of large ANNs, so it is an important question how the learning time of ANNs scales with their size (and/or with the size of the tasks). Judd introduced a computational framework for the learning problem (J. Judd, ``Neural Network Design and the Complexity of Learning.'' A Bradford Book, MIT Press, Cambridge, 1990) and proved that learning in neural networks is in general too hard, i.e. in the worst case learning in neural networks is NP-complete. However, his proof restricts the domain of neural network architectures and tasks in such a way that ``everyday'' neural network architectures, such as that of the back-propagation algorithm, are excluded. Consequently, Judd's proof says nothing about these types of networks. We first outline a thorough framework for loading, which makes it possible to differentiate between loading problems at a finer level. Two theorems are then presented about the complexity of learning for ``everyday'' ANN architectures. The first states that for extended binary tasks the loading problem is NP-complete in the worst case, while the second states that for binary tasks and basis LUF there exists a polynomial-time algorithm. These results suggest that the loading problem for ``everyday'' neural networks is interesting from a mathematical point of view, as it lies on the boundary of efficiently solvable problems.
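
To make the notion of ``loading'' concrete, the sketch below (not taken from the paper) treats it as a decision problem: given a fixed architecture and a task, i.e. a set of input/output pairs, decide whether some weight assignment reproduces every pair. The 2-2-1 threshold architecture, the discrete weight set {-1, 0, 1}, and the `loadable` helper are hypothetical illustrations of why worst-case complexity is measured over architectures and tasks; they do not reproduce the constructions or the polynomial-time algorithm discussed in the paper.

```python
# Illustrative brute-force check of the "loading" decision problem:
# given a fixed architecture and a task (set of input/output pairs),
# does some weight assignment reproduce every pair?
# Hypothetical toy setup: a 2-2-1 network of threshold units,
# weights restricted to {-1, 0, 1}, biases fixed to 0.
from itertools import product

def threshold(x):
    return 1 if x > 0 else 0

def forward(weights, inputs):
    # weights: (w_hidden, w_out); w_hidden is 2x2, w_out has 2 entries
    w_hidden, w_out = weights
    hidden = [threshold(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    return threshold(sum(w * h for w, h in zip(w_out, hidden)))

def loadable(task, values=(-1, 0, 1)):
    # Exhaustively search all discrete weight assignments
    # (4 hidden weights + 2 output weights = 6 values).
    for flat in product(values, repeat=6):
        w_hidden = [list(flat[0:2]), list(flat[2:4])]
        w_out = list(flat[4:6])
        if all(forward((w_hidden, w_out), x) == y for x, y in task):
            return True
    return False

# XOR as a binary task on the toy architecture.
xor_task = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(loadable(xor_task))
```

The exhaustive search grows exponentially with the number of weights, which is why the scaling of learning time with network and task size, the question the paper addresses, matters in practice.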
