Nature, 26 Oct 2021 -- Catalogue of billions of phrases from 107 million papers could ease computerized searching of the literature.
In a project that could unlock the world’s research papers for easier computerized analysis, an American technologist, Carl Malamud, has released online a gigantic index of the words and short phrases contained in more than 100 million journal articles — including many paywalled papers.
The catalogue, which was released on 7 October and is free to use, holds tables of more than 355 billion words and sentence fragments listed next to the articles in which they appear. It is an effort to help scientists use software to glean insights from published work even if they have no legal access to the underlying papers, says its creator, Carl Malamud. He released the files under the auspices of Public Resource, a non-profit corporation in Sebastopol, California, that he founded.
Malamud says that because his index doesn’t contain the full text of articles, but only sentence snippets up to five words long, releasing it does not breach publishers' copyright restrictions on the re-use of paywalled articles. However, one legal expert says that publishers might question the legality of how Malamud created the index in the first place.
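Nature's description suggests the catalogue is essentially an n-gram index: each snippet of up to five words is listed alongside the articles in which it appears. The sketch below is a minimal, illustrative version of such a structure — the article IDs and text are hypothetical, and this is not Malamud's actual pipeline or data format.

```python
from collections import defaultdict

def ngrams(words, max_n=5):
    """Yield every word n-gram of length 1 to max_n."""
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

def build_index(articles):
    """Map each snippet (1-5 words) to the set of article IDs containing it."""
    index = defaultdict(set)
    for article_id, text in articles.items():
        words = text.lower().split()
        for gram in ngrams(words):
            index[gram].add(article_id)
    return index

# Hypothetical corpus of two tiny "articles".
articles = {
    "doi:10.1000/a": "the mitochondria is the powerhouse of the cell",
    "doi:10.1000/b": "the cell divides during mitosis",
}
index = build_index(articles)
```

A researcher could then query the index for a phrase and retrieve matching article IDs without ever seeing the full text — the property Malamud argues keeps the release within copyright bounds.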
Nature, July 2019 -- A giant data store quietly being built in India could free vast swathes of science for computer analysis — but is it legal?
Over the past year, Malamud has — without asking publishers — teamed up with Indian researchers to build a gigantic store of text and images extracted from 73 million journal articles dating from 1847 up to the present day. The cache, which is still being created, will be kept on a 576-terabyte storage facility at Jawaharlal Nehru University (JNU) in New Delhi. “This is not every journal article ever written, but it’s a lot,” Malamud says. It’s comparable to the size of the core collection in the Web of Science database, for instance. Malamud and his JNU collaborator, bioinformatician Andrew Lynn, call their facility the JNU data depot.
"GPT-3 is the latest and greatest in the AI world, achieving state-of-the-art in a range of tasks. Its main breakthrough is eliminating the need for task-specific fine-tuning. In terms of size, the model drastically scales up once again, reaching 175 billion parameters, or 116x the size of its predecessor."
In 2017, researchers asked: could AI write most code by 2040? OpenAI’s GPT-3, now in use by beta testers, can already generate code in many languages. Machine-dominated coding is almost at our doorstep.