Engineer friends often ask me: Graph Deep Learning sounds great, but are there any big commercial success stories? Is it being deployed in practical applications? Besides the obvious ones (recommendation systems at Pinterest, Alibaba and Twitter), a slightly nuanced success story is the Transformer architecture, which has taken the NLP industry by storm. Through this post, I want to establish links between Graph Neural Networks (GNNs) and Transformers. I'll talk about the intuitions behind model architectures in the NLP and GNN communities, make connections using equations and figures, and discuss how we could work together to drive progress.