Abstract
In 1988, Eric B. Baum showed that two-layers neural networks with threshold
activation function can perfectly memorize the binary labels of $n$ points in
general position in $\mathbb{R}^d$ using only $\lceil n/d \rceil$
neurons. We observe that with ReLU networks, using four times as many neurons
one can fit arbitrary real labels. Moreover, for approximate memorization up to
error $\epsilon$, the neural tangent kernel can also memorize with only
$O\left( n/d \cdot \log(1/\epsilon) \right)$ neurons (assuming that the
data is well dispersed too). We show however that these constructions give rise
to networks where the magnitudes of the neurons' weights are far from optimal.
In contrast we propose a new training procedure for ReLU networks, based on
complex (as opposed to real) recombination of the neurons, for which we show
approximate memorization with both $O\left( n/d \cdot \log(1/\epsilon)/\epsilon \right)$ neurons, as well as nearly-optimal
size of the weights.
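
To make the random-features (neural tangent kernel) regime mentioned above concrete, below is a minimal numerical sketch, not the paper's construction: a two-layer ReLU network whose hidden weights are random and fixed, with only the output layer fit by least squares on $n$ well-dispersed (unit-norm) points with arbitrary real labels. The specific sizes n, d, k and the Gaussian data are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

n, d, k = 200, 50, 2000                         # n points in R^d, k hidden ReLU neurons (illustrative sizes)
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm ("well dispersed") data
y = rng.standard_normal(n)                      # arbitrary real labels

# Random-features / NTK-style regime: hidden weights are random and fixed;
# only the output layer is trained, here by ordinary least squares.
W = rng.standard_normal((d, k)) / np.sqrt(d)
H = np.maximum(X @ W, 0.0)                      # ReLU features, shape (n, k)
a, *_ = np.linalg.lstsq(H, y, rcond=None)       # output-layer weights

err = np.max(np.abs(H @ a - y))
print(f"max memorization error: {err:.2e}")

With width k well above n, the feature matrix H generically has rank n, so the least-squares fit drives the training error to essentially zero; this illustrates the memorization claim only, not the weight-size guarantees of the paper's complex-recombination procedure.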