We enhance auto-regressive language models by conditioning on document chunks
retrieved from a large corpus, based on local similarity with preceding tokens.
With a $2$ trillion token database, our Retrieval-Enhanced Transformer (RETRO)
obtains performance comparable to GPT-3 and Jurassic-1 on the Pile, despite
using 25$\times$ fewer parameters. After fine-tuning, RETRO's performance
translates to downstream knowledge-intensive tasks such as question answering.
RETRO combines a frozen BERT retriever, a differentiable encoder, and a chunked
cross-attention mechanism to predict tokens based on an order of magnitude more
data than is typically consumed during training. We typically train RETRO
from scratch, yet can also rapidly RETROfit pre-trained transformers with
retrieval and still achieve good performance. Our work opens up new avenues for
improving language models through explicit memory at unprecedented scale.
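As an illustration of the retrieval step the abstract describes, the sketch
below splits an input into fixed-size chunks, embeds each chunk with a frozen
encoder, and fetches the nearest database chunks by inner product. This is a
minimal sketch under stated assumptions, not the paper's implementation: the
toy frozen_encoder (mean pooling over a fixed random embedding table) stands
in for frozen BERT, brute-force search stands in for the approximate
nearest-neighbour index RETRO uses over its 2-trillion-token database, and
all names and sizes are illustrative.

    import numpy as np

    CHUNK_LEN, K, DIM, VOCAB = 4, 2, 8, 100   # toy sizes (the paper uses 64-token chunks)
    rng = np.random.default_rng(0)
    embed_table = rng.normal(size=(VOCAB, DIM))  # fixed embedding table, never trained

    def frozen_encoder(chunk_tokens):
        """Stand-in for the frozen BERT retriever: one vector per chunk (mean pooling)."""
        return embed_table[chunk_tokens].mean(axis=0)

    # Retrieval database: many chunks, embedded once, offline.
    db_chunks = rng.integers(0, VOCAB, size=(1000, CHUNK_LEN))
    db_keys = np.stack([frozen_encoder(c) for c in db_chunks])   # (1000, DIM)

    def retrieve_neighbours(tokens):
        """Return the K nearest database chunks for each input chunk."""
        chunks = tokens.reshape(-1, CHUNK_LEN)
        queries = np.stack([frozen_encoder(c) for c in chunks])  # (n_chunks, DIM)
        scores = queries @ db_keys.T                              # similarity to every DB chunk
        top_k = np.argsort(-scores, axis=1)[:, :K]                # indices of K best per chunk
        return db_chunks[top_k]                                   # (n_chunks, K, CHUNK_LEN)

    tokens = rng.integers(0, VOCAB, size=16)       # 4 chunks of 4 tokens
    print(retrieve_neighbours(tokens).shape)       # (4, 2, 4)

Because retrieval happens once per chunk rather than once per token, its cost
is amortised over the chunk length.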
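The retrieved chunks are encoded and then consumed through the chunked
cross-attention the abstract mentions: each input chunk attends only to the
encodings of its own retrieved neighbours. Below is a self-contained,
simplified sketch of that attention pattern (single head, no learned
projections, and without the one-chunk causal offset RETRO applies to
preserve autoregressivity); it shows the shape discipline, not the authors'
code.

    import numpy as np

    N_CHUNKS, CHUNK_LEN, K, DIM = 4, 4, 2, 8   # toy sizes, matching the sketch above
    rng = np.random.default_rng(0)

    hidden = rng.normal(size=(N_CHUNKS, CHUNK_LEN, DIM))                 # decoder states
    neighbour_states = rng.normal(size=(N_CHUNKS, K * CHUNK_LEN, DIM))   # encoded retrievals

    def chunked_cross_attention(h, n):
        """Per-chunk attention: chunk i's queries see only chunk i's neighbours."""
        scores = h @ n.transpose(0, 2, 1) / np.sqrt(DIM)  # (N_CHUNKS, CHUNK_LEN, K*CHUNK_LEN)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)                # softmax over neighbour tokens
        return h + w @ n                                  # residual update, as in a transformer block

    out = chunked_cross_attention(hidden, neighbour_states)
    print(out.shape)   # (4, 4, 8): same shape as the input hidden states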
@misc{borgeaud2021improving,
  author   = {Borgeaud, Sebastian and Mensch, Arthur and Hoffmann, Jordan and Cai, Trevor and Rutherford, Eliza and Millican, Katie and Driessche, George van den and Lespiau, Jean-Baptiste and Damoc, Bogdan and Clark, Aidan and Casas, Diego de Las and Guy, Aurelia and Menick, Jacob and Ring, Roman and Hennigan, Tom and Huang, Saffron and Maggiore, Loren and Jones, Chris and Cassirer, Albin and Brock, Andy and Paganini, Michela and Irving, Geoffrey and Vinyals, Oriol and Osindero, Simon and Simonyan, Karen and Rae, Jack W. and Elsen, Erich and Sifre, Laurent},
  title    = {Improving language models by retrieving from trillions of tokens},
  year     = {2021},
  url      = {http://arxiv.org/abs/2112.04426},
  keywords = {llm retrieval},
  note     = {arXiv:2112.04426. Comment: Fix incorrect reported numbers in Table 14}
}