Abstract
We have now entered the era of trillion-parameter machine learning models
trained on billion-sized datasets scraped from the internet. The rise of these
gargantuan datasets has spurred formidable bodies of critical work that have
called for caution while generating them. These works address concerns
surrounding the dubious curation practices used to generate these datasets,
the sordid quality of alt-text data available on the world wide web, the
problematic content of the CommonCrawl dataset often used as a source for
training large language models, and the entrenched biases in large-scale
visio-linguistic models (such as OpenAI's CLIP model) trained on opaque
datasets (WebImageText). Against the backdrop of these specific calls for
caution, we examine the recently released LAION-400M dataset, a CLIP-filtered
dataset of image-alt-text pairs parsed from the CommonCrawl dataset. We found
that the dataset contains troublesome and explicit image-text pairs of rape,
pornography, malign stereotypes, racist and ethnic slurs, and other extremely
problematic content. We outline numerous implications, concerns and
downstream harms regarding the current state of large-scale datasets while
raising open questions for various stakeholders, including the AI community,
regulators, policy makers and data subjects.
Description
Multimodal datasets: misogyny, pornography, and malignant stereotypes