We introduce the Generative Query Network (GQN), a framework within which machines learn to perceive their surroundings by training only on data obtained by themselves as they move around scenes. Much like infants and animals, the GQN learns by trying to make sense of its observations of the world around it. In doing so, the GQN learns about plausible scenes and their geometrical properties, without any human labelling of the contents of scenes.
In this project, we provide our implementations of CNN [Zeng et al., 2014] and PCNN [Zeng et al., 2015], along with their extended versions using the sentence-level attention scheme of [Lin et al., 2016].
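The sentence-level attention scheme of Lin et al. [2016] pools a bag of sentence encodings by weighting each sentence according to its relevance to a relation query vector, so noisy sentences contribute less to the bag representation. A minimal NumPy sketch of that pooling step follows; the function name `selective_attention` and the toy vectors are our own illustration, not the repo's actual API:

```python
import numpy as np

def selective_attention(sentence_vecs, relation_query):
    """Pool a bag of sentence encodings, weighting each sentence by
    its (softmax-normalized) similarity to the relation query
    (sentence-level attention in the style of Lin et al., 2016)."""
    scores = sentence_vecs @ relation_query        # one score per sentence, shape (n,)
    weights = np.exp(scores - scores.max())        # stable softmax over the bag
    weights /= weights.sum()
    return weights @ sentence_vecs                 # attention-weighted bag vector

# Toy bag: three 4-dimensional sentence encodings.
bag = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
query = np.array([10.0, 0.0, 0.0, 0.0])  # strongly matches the first sentence
pooled = selective_attention(bag, query)  # dominated by the first encoding
```

In the full models, `sentence_vecs` would be the CNN/PCNN outputs for each sentence in a bag, and the query is a learned per-relation vector; at training time the weighted bag vector feeds a relation classifier.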
E. Izhikevich. IEEE Transactions on Neural Networks, 15(5):1063--1070, September 2004.
P. Xia, S. Wu, and B. Van Durme. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7516--7533. Association for Computational Linguistics, November 2020.