Article

Approximation capabilities of multilayer feedforward networks

Kurt Hornik.
Neural Networks, 4 (2): 251--257 (1991)
DOI: 10.1016/0893-6080(91)90009-T

Abstract

We show that standard multilayer feedforward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to Lp(μ) performance criteria, for arbitrary finite input environment measures μ, provided only that sufficiently many hidden units are available. If the activation function is continuous, bounded and nonconstant, then continuous mappings can be learned uniformly over compact input sets. We also give very general conditions ensuring that networks with sufficiently smooth activation functions are capable of arbitrarily accurate approximation to a function and its derivatives.

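The abstract's setting can be illustrated with a small numerical sketch (not from the paper): a single hidden layer of tanh units, a continuous, bounded, and nonconstant activation, approximates a continuous target uniformly on a compact interval once enough hidden units are used. The target function, unit count, and weight distributions below are illustrative assumptions; here only the linear output layer is fit, by least squares over randomly drawn hidden units.

```python
# Minimal sketch of single-hidden-layer approximation on a compact set.
# All specific choices (target f, n_hidden, weight scales) are assumptions
# for illustration, not taken from Hornik (1991).
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(2.0 * x)             # continuous target on a compact set

n_hidden = 200                          # "sufficiently many hidden units"
x = np.linspace(-np.pi, np.pi, 400)     # compact input interval

# Random input weights and biases for the hidden layer.
w = rng.normal(scale=2.0, size=n_hidden)
b = rng.uniform(-np.pi, np.pi, size=n_hidden)

# Hidden-layer features: tanh is continuous, bounded, and nonconstant.
H = np.tanh(np.outer(x, w) + b)         # shape (400, n_hidden)

# Fit the linear output weights by least squares.
beta, *_ = np.linalg.lstsq(H, f(x), rcond=None)

# Uniform (sup-norm) error over the grid, matching the "uniformly over
# compact input sets" criterion in the abstract.
err = np.max(np.abs(H @ beta - f(x)))
print(f"max |network - f| on the grid: {err:.2e}")
```

Increasing n_hidden drives the sup-norm error down, consistent with the abstract's condition that sufficiently many hidden units be available.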