Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning

(2022). cite arxiv:2211.04325.

Abstract

We analyze the growth of dataset sizes used in machine learning for natural language processing and computer vision, and extrapolate them using two methods: projecting the historical growth rate, and estimating the compute-optimal dataset size for future predicted compute budgets. We investigate the growth in data usage by estimating the total stock of unlabeled data available on the internet over the coming decades. Our analysis indicates that the stock of high-quality language data will be exhausted soon, likely before 2026. By contrast, the stocks of low-quality language data and image data will be exhausted much later: between 2030 and 2050 for low-quality language, and between 2030 and 2060 for images. Our work suggests that the current trend of ever-growing ML models that rely on enormous datasets may slow down unless data efficiency improves drastically or new sources of data become available.
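The first extrapolation method described above (projecting the historical growth rate) amounts to asking when an exponentially growing dataset size crosses a fixed data stock. A minimal sketch of that calculation, using hypothetical numbers rather than the paper's estimates:

```python
import math

def years_until_exhaustion(current_size, annual_growth_rate, total_stock):
    """Years until an exponentially growing dataset size reaches a fixed stock.

    Solves current_size * (1 + r)^t = total_stock for t.
    """
    return math.log(total_stock / current_size) / math.log(1 + annual_growth_rate)

# Hypothetical values for illustration only (not the paper's estimates):
# a 1e12-token dataset growing 50% per year against a 1e14-token stock.
t = years_until_exhaustion(1e12, 0.5, 1e14)
print(f"Stock exhausted in about {t:.1f} years")
```

The compute-optimal method differs only in where the growth trajectory comes from: instead of the historical trend, the dataset size is derived from projected compute budgets via a scaling law.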
