Neural networks are the workhorse of many of the algorithms developed at DeepMind. For example, AlphaGo uses convolutional neural networks to evaluate board positions in the game of Go, and DQN and other deep reinforcement learning algorithms use neural networks to choose actions that achieve super-human performance on video games. This post introduces some of our latest research on advancing the capabilities and training procedures of neural networks: Decoupled Neural Interfaces using Synthetic Gradients. This work gives neural networks a way to communicate, to learn to send messages between themselves, in a decoupled, scalable manner, paving the way for multiple neural networks to communicate with each other, or for improving the long-term temporal dependencies of recurrent networks.
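To make the decoupling idea concrete, here is a toy sketch in NumPy (not the paper's implementation, which uses a small neural network as the gradient model): a two-layer regression network in which the first layer updates immediately from a *synthetic* gradient predicted from its own activations by a linear module `M`, while `M` itself is regressed toward the true gradient whenever the real backward pass arrives. All sizes, learning rates, and the linear form of `M` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: layer 1 -> layer 2 -> 0.5*MSE loss. A synthetic-gradient
# module M predicts dL/dh1 from h1 alone, so layer 1 can update
# without waiting for the true backward pass (illustrative sizes).
W1 = rng.normal(size=(4, 8)) * 0.1   # layer 1 weights
W2 = rng.normal(size=(8, 1)) * 0.1   # layer 2 weights
M = np.zeros((8, 8))                 # linear synthetic-gradient predictor

x = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))
N = len(x)
lr = 0.05

loss0 = 0.5 * np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)

for step in range(300):
    # forward pass
    h1 = np.tanh(x @ W1)
    pred = h1 @ W2
    err = pred - y                    # per-example dL/dpred

    # synthetic gradient: predicted dL/dh1, available immediately
    g_hat = h1 @ M
    # layer 1 updates right away using the predicted gradient
    W1 -= lr * (x.T @ (g_hat * (1 - h1 ** 2))) / N

    # the true gradient arrives "later"; layer 2 and M consume it
    g_true = err @ W2.T               # true dL/dh1
    W2 -= lr * (h1.T @ err) / N
    # regress the predictor toward the true gradient (L2 loss)
    M -= lr * (h1.T @ (g_hat - g_true)) / N

loss = 0.5 * np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)
```

The key point is the decoupling: the update to `W1` never waits on the forward or backward computation of the layers above it, which is what allows modules, or whole networks, to be trained asynchronously.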