
Backprop as Functor: A compositional perspective on supervised learning

Brendan Fong, David Spivak, and Rémy Tuyéras. (2017). arXiv:1711.10455. Comment: 13 pages + 4 page appendix.

Abstract

A supervised learning algorithm searches over a set of functions $A \to B$ parametrised by a space $P$ to find the best approximation to some ideal function $f\colon A \to B$. It does this by taking examples $(a, f(a)) \in A \times B$, and updating the parameter according to some rule. We define a category where these update rules may be composed, and show that gradient descent---with respect to a fixed step size and an error function satisfying a certain property---defines a monoidal functor from a category of parametrised functions to this category of update rules. This provides a structural perspective on backpropagation, as well as a broad generalisation of neural networks.
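The construction the abstract describes can be made concrete with a small sketch. Below is an illustrative Python rendering, not taken from the paper itself, of a "learner" as a parameter space together with implement, update and request maps, plus sequential composition of learners. It assumes the quadratic error $E(x, y) = (x - y)^2 / 2$ and a fixed step size EPS; the names Learner, compose and affine_learner are invented for this example. Under these assumptions the composite update amounts to backpropagating the error through the chain, which is the compositional point the abstract makes.

```python
# Minimal sketch of the "learner" structure described in the abstract, assuming
# quadratic error E(x, y) = (x - y)^2 / 2 and a fixed step size; the names
# Learner, compose and affine_learner are illustrative, not the paper's notation.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Learner:
    """A learner A -> B: a parameter space plus implement/update/request maps."""
    params: Any
    implement: Callable  # (params, a) -> b
    update: Callable     # (params, a, b_target) -> new params
    request: Callable    # (params, a, b_target) -> revised a, passed upstream

def compose(f: Learner, g: Learner) -> Learner:
    """Sequential composite A -> C of f: A -> B and g: B -> C, parameter space P x Q."""
    def implement(pq, a):
        p, q = pq
        return g.implement(q, f.implement(p, a))

    def update(pq, a, c):
        p, q = pq
        b = f.implement(p, a)
        b_req = g.request(q, b, c)               # g tells f which output it wanted
        return (f.update(p, a, b_req), g.update(q, b, c))

    def request(pq, a, c):
        p, q = pq
        b = f.implement(p, a)
        return f.request(p, a, g.request(q, b, c))

    return Learner((f.params, g.params), implement, update, request)

EPS = 0.1  # fixed step size (an assumed value for this toy example)

def affine_learner(w: float, c: float) -> Learner:
    """Toy scalar layer b = w*a + c trained by gradient descent on quadratic error."""
    def implement(p, a):
        w, c = p
        return w * a + c

    def update(p, a, b_target):
        w, c = p
        err = implement(p, a) - b_target         # dE/db at the layer's output
        return (w - EPS * err * a, c - EPS * err)

    def request(p, a, b_target):
        w, _ = p
        err = implement(p, a) - b_target
        return a - err * w                       # a minus dE/da: the value sent upstream

    return Learner((w, c), implement, update, request)

if __name__ == "__main__":
    # Train the composite of two affine layers on a single example (a, f(a)) = (1.0, 3.0).
    net = compose(affine_learner(0.5, 0.0), affine_learner(0.5, 0.0))
    params, a, target = net.params, 1.0, 3.0
    for _ in range(200):
        params = net.update(params, a, target)
    print(net.implement(params, a))              # converges towards 3.0
```

In this sketch the inner layer never sees the final target: it only receives the request computed by the outer layer, and for quadratic error that request carries exactly the backpropagated error term.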

Description

[1711.10455] Backprop as Functor: A compositional perspective on supervised learning

Links and resources

URL: https://arxiv.org/abs/1711.10455
BibTeX key: fong2017backprop