The Grid Corpus is a large multitalker audiovisual sentence corpus designed to support joint computational-behavioral studies in speech perception. In brief, the corpus consists of high-quality audio and video (facial) recordings of 1000 sentences spoken by each of 34 talkers (18 male, 16 female), for a total of 34000 sentences. Sentences take the form "put red at G9 now".

- audio_25k.zip contains the utterances in WAV format at a 25 kHz sampling rate, in a separate directory per talker.
- alignments.zip provides word-level time alignments, again separated by talker.
- s1.zip, s2.zip, etc. contain the .mpg videos for each talker. (Note that due to an oversight, no video for talker t21 is available.)

The Grid Corpus is described in detail in the paper jasagrid.pdf, included in the dataset.
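Once the archives are extracted, each utterance is an ordinary WAV file, so it can be read with Python's standard-library `wave` module. The sketch below is a minimal, hedged example: the helper name `load_utterance` and any directory layout you pass to it are illustrative, not part of the corpus distribution.

```python
import wave

def load_utterance(path):
    """Read one Grid utterance WAV file (illustrative helper, not part of
    the corpus). Returns (sample_rate, raw PCM frame bytes). Corpus audio
    in audio_25k.zip is distributed at a 25 kHz sampling rate, so the
    returned rate should be 25000 for those files."""
    with wave.open(path, "rb") as w:
        return w.getframerate(), w.readframes(w.getnframes())
```

For example, after unzipping audio_25k.zip you might call `load_utterance` on a file under a talker directory such as `s1/` (exact filenames depend on the utterance IDs in your extracted copy).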