Author of the publication

VisQA: X-raying Vision and Language Reasoning in Transformers.

IEEE Trans. Vis. Comput. Graph., 28(1): 976-986 (2022)


Other publications of authors with the same name

Estimating semantic structure for the VQA answer space. CoRR (2020)
Roses Are Red, Violets Are Blue... but Should Vqa Expect Them To? CoRR (2020)
An experimental study of the vision-bottleneck in VQA. CoRR (2022)
Are E2E ASR models ready for an industrial usage? CoRR (2021)
Weak Supervision Helps Emergence of Word-Object Alignment and Improves Vision-Language Tasks. ECAI, volume 325 of Frontiers in Artificial Intelligence and Applications, pages 2728-2735. IOS Press (2020)
VisQA: X-raying Vision and Language Reasoning in Transformers. CoRR (2021)
Weak Supervision helps Emergence of Word-Object Alignment and improves Vision-Language Tasks. CoRR (2019)
Supervising the Transfer of Reasoning Patterns in VQA. NeurIPS, pages 18256-18267 (2021)
How Transferable Are Reasoning Patterns in VQA? CVPR, pages 4207-4216. Computer Vision Foundation / IEEE (2021)
Roses Are Red, Violets Are Blue... but Should VQA Expect Them To? CVPR, pages 2776-2785. Computer Vision Foundation / IEEE (2021)