@misc{laine2021protein,
abstract = {The potential of deep learning has been recognized in the protein structure
prediction community for some time, and became indisputable after CASP13. In
CASP14, deep learning boosted the field to unanticipated levels, reaching
near-experimental accuracy. This success comes from advances transferred from
other machine learning areas, as well as methods specifically designed to deal
with protein sequences and structures, and their abstractions. Novel emerging
approaches include (i) geometric learning, i.e. learning on representations
such as graphs, 3D Voronoi tessellations, and point clouds; (ii) pre-trained
protein language models leveraging attention; (iii) equivariant architectures
preserving the symmetry of 3D space; (iv) use of large meta-genome databases;
(v) combinations of protein representations; and finally (vi) truly end-to-end
architectures, i.e. differentiable models starting from a sequence and
returning a 3D structure. Here, we provide an overview and our opinion of the
novel deep learning approaches developed in the last two years and widely used
in CASP14.},
author = {Laine, Elodie and Eismann, Stephan and Elofsson, Arne and Grudinin, Sergei},
keywords = {3D deep folding learning modeling protein},
eprint = {2105.07407},
eprinttype = {arXiv},
title = {Protein sequence-to-structure learning: Is this the end(-to-end
revolution)?},
url = {http://arxiv.org/abs/2105.07407},
year = 2021
}