Approximation Capabilities of Neural ODEs and Invertible Residual Networks

Han Zhang, Xi Gao, Jacob Unterman, and Tom Arodz.
(2019). arXiv:1907.12998.

Abstract

Neural ODEs and i-ResNets are recently proposed methods for enforcing invertibility of residual neural models. Having a generic technique for constructing invertible models could open new avenues for advances in learning systems, but the question of whether Neural ODEs and i-ResNets can model any continuous invertible function has so far remained unresolved. Here, we show that both of these models are limited in their approximation capabilities. We then prove that any homeomorphism on a $p$-dimensional Euclidean space can be approximated by a Neural ODE operating on a $2p$-dimensional Euclidean space, and we establish a similar result for i-ResNets. We conclude by showing that capping a Neural ODE or an i-ResNet with a single linear layer is sufficient to turn the model into a universal approximator for non-invertible continuous functions.
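Below is a minimal sketch, not the authors' implementation, of the construction the abstract describes: a $p$-dimensional input is zero-padded to $2p$ dimensions, pushed through a Neural ODE block, and optionally capped with a single linear layer. It assumes PyTorch, replaces an adaptive ODE solver with fixed-step Euler integration for self-containment, and all class and parameter names are illustrative choices rather than anything taken from the paper.

```python
# Illustrative sketch only: augmented (2p-dimensional) Neural ODE with an
# optional linear head, loosely following the paper's construction.
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Vector field f(h) parameterizing the autonomous flow dh/dt = f(h)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, h):
        return self.net(h)


class AugmentedNeuralODE(nn.Module):
    """Zero-pads a p-dim input to 2p dims and integrates the learned flow.

    If out_dim is given, a single linear layer is applied to the final state,
    turning the invertible flow into an approximator for non-invertible
    continuous maps (the paper's final result).
    """
    def __init__(self, p, out_dim=None, steps=20):
        super().__init__()
        self.func = ODEFunc(2 * p)
        self.steps = steps
        self.head = nn.Linear(2 * p, out_dim) if out_dim is not None else None

    def forward(self, x):  # x: (batch, p)
        # Augment the state with p extra zero coordinates.
        h = torch.cat([x, torch.zeros_like(x)], dim=-1)
        dt = 1.0 / self.steps
        # Explicit Euler integration of dh/dt = f(h) over t in [0, 1].
        for _ in range(self.steps):
            h = h + dt * self.func(h)
        return self.head(h) if self.head is not None else h


model = AugmentedNeuralODE(p=2, out_dim=1)
y = model(torch.randn(8, 2))  # output shape: (8, 1)
```

Keeping the flow itself in $2p$ dimensions preserves the invertibility of the ODE map; only the optional linear head discards information.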
