Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention

(2020). arXiv:2006.16236. Comment: ICML 2020, project at https://linear-transformers.com/.

Other publications of authors with the same name

Masked Autoencoding Does Not Help Natural Language Supervision at Scale. CVPR, pages 23432-23444. IEEE, (2023)
Not All Samples Are Created Equal: Deep Learning with Importance Sampling. ICML, volume 80 of Proceedings of Machine Learning Research, pages 2530-2539. PMLR, (2018)
Biased Importance Sampling for Deep Neural Network Training. CoRR, (2017)
Self Supervision Does Not Help Natural Language Supervision at Scale. CoRR, (2023)
Learning local feature aggregation functions with backpropagation. EUSIPCO, pages 748-752. IEEE, (2017)
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. ICML, volume 119 of Proceedings of Machine Learning Research, pages 5156-5165. PMLR, (2020)
Controllable Music Production with Diffusion Models and Guidance Gradients. CoRR, (2023)
Fast Transformers with Clustered Attention. NeurIPS, (2020)
Fast Supervised LDA for Discovering Micro-Events in Large-Scale Video Datasets. ACM Multimedia, pages 332-336. ACM, (2016)
Processing Megapixel Images with Deep Attention-Sampling Models. ICML, volume 97 of Proceedings of Machine Learning Research, pages 3282-3291. PMLR, (2019)