
Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data.

, , and . Insights, pp. 82-87. Association for Computational Linguistics, (2020)


Other publications by people with the same name

Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments., and . CoRR, (2019)

Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks., , and . CoRR, (2018)

When Do You Need Billions of Words of Pretraining Data?, , , and . CoRR, (2020)

Asking Crowdworkers to Write Entailment Examples: The Best of Bad Options., , and . AACL/IJCNLP, pp. 672-686. Association for Computational Linguistics, (2020)

What Do NLP Researchers Believe? Results of the NLP Community Metasurvey., , , , , , , , , and 1 other author. CoRR, (2022)

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension., , , , , , , , , and 2 other authors. CoRR, (2019)

Studying Large Language Model Generalization with Influence Functions., , , , , , , , , and 7 other authors. CoRR, (2023)

BBQ: A Hand-Built Bias Benchmark for Question Answering., , , , , , , and . CoRR, (2021)

The Capacity for Moral Self-Correction in Large Language Models., , , , , , , , , and 39 other authors. CoRR, (2023)

Measuring Progress on Scalable Oversight for Large Language Models., , , , , , , , , and 36 other authors. CoRR, (2022)