Backprop as Functor: A compositional perspective on supervised learning
(2017). arXiv:1711.10455. Comment: 13 pages + 4 page appendix.

A supervised learning algorithm searches over a set of functions $A \to B$ parametrised by a space $P$ to find the best approximation to some ideal function $f \colon A \to B$. It does this by taking examples $(a, f(a)) \in A \times B$, and updating the parameter according to some rule. We define a category where these update rules may be composed, and show that gradient descent---with respect to a fixed step size and an error function satisfying a certain property---defines a monoidal functor from a category of parametrised functions to this category of update rules. This provides a structural perspective on backpropagation, as well as a broad generalisation of neural networks.
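The abstract's notion of a parametrised function $A \to B$ together with an example-driven update rule can be sketched concretely. The following is a minimal illustration, not the paper's formal construction: the class name `Learner` and its fields are assumptions chosen here, and the gradient-descent update with fixed step size `eps` stands in for the functor the abstract describes.

```python
# Hedged sketch of a "learner": a parameter p in P, an implementation
# map (p, a) -> b, and an update rule (p, a, b_target) -> new p.
# All names here are illustrative, not taken from the paper.

class Learner:
    """A parametrised function A -> B updated from examples (a, f(a))."""
    def __init__(self, p, implement, update):
        self.p = p                  # current parameter in P
        self.implement = implement  # (p, a) -> b
        self.update = update        # (p, a, b_target) -> new p

    def __call__(self, a):
        return self.implement(self.p, a)

    def step(self, a, b_target):
        self.p = self.update(self.p, a, b_target)

# Gradient descent on a one-dimensional linear model b = p * a with
# squared error, using the abstract's fixed step size (eps below).
eps = 0.1
linear = Learner(
    p=0.0,
    implement=lambda p, a: p * a,
    # derivative of (p*a - b)^2 with respect to p is 2*(p*a - b)*a
    update=lambda p, a, b: p - eps * 2 * (p * a - b) * a,
)

# Feed examples (a, f(a)) drawn from the ideal function f(a) = 3a.
for _ in range(100):
    linear.step(2.0, 6.0)
# linear.p converges toward the ideal parameter 3.0
```

Composing two such learners requires passing a target "backwards" through the second learner to train the first; that backward map is exactly what the paper's categorical treatment of backpropagation organises.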
