An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. (2020). arXiv:2010.11929. Comment: Fine-tuning code and pre-trained models are available at https://github.com/google-research/vision_transformer. ICLR camera-ready version with two small modifications: 1) added a discussion of the CLS vs. GAP classifier in the appendix, 2) fixed an error in the exaFLOPs computation in Figure 5 and Table 6 (the relative performance of the models is essentially unaffected).

Other publications of authors with the same name

S4L: Self-Supervised Semi-Supervised Learning. ICCV, pages 1476-1485. IEEE, (2019)
GWAS on GPUs: Streaming Data from HDD for Sustained Performance. Euro-Par, volume 8097 of Lecture Notes in Computer Science, pages 788-799. Springer, (2013)
How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers. Trans. Mach. Learn. Res., (2022)
Deep multi-class learning from label proportions. CoRR, (2019)
PaLI-3 Vision Language Models: Smaller, Faster, Stronger. CoRR, (2023)
VeLO: Training Versatile Learned Optimizers by Scaling Up. CoRR, (2022)
Scaling Vision Transformers. CoRR, (2021)
LiT: Zero-Shot Transfer with Locked-image text Tuning. CVPR, pages 18102-18112. IEEE, (2022)
On Robustness and Transferability of Convolutional Neural Networks. CVPR, pages 16458-16468. Computer Vision Foundation / IEEE, (2021)
The Efficiency Misnomer. ICLR, OpenReview.net, (2022)