The main task of the GenIELex project is the development of a biochemistry-specific lexicon, as well as an annotated corpus for evaluating the system. The need for constructing such a lexicon is illustrated by the following figures, based
The Fusion PDF Image Extractor has two purposes:
To extract all of the individual images embedded in a PDF (e.g. to gather the images from brochures; limited to JPEG images so far)
To extract every page of a PDF as a JPEG image representation of the original page (a sketch of both operations follows below)
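Purely as an illustration, and not the tool's own code, a minimal Python sketch of both operations might look like the following, assuming the third-party PyMuPDF library as the PDF backend:

```python
import fitz  # PyMuPDF -- an assumed library choice, not necessarily what Fusion uses

def extract_embedded_images(pdf_path, out_prefix="image"):
    """Purpose 1: save each embedded image object to its own file."""
    doc = fitz.open(pdf_path)
    for page in doc:
        for img in page.get_images(full=True):
            xref = img[0]  # cross-reference number of the image object
            info = doc.extract_image(xref)  # raw bytes plus original format
            with open(f"{out_prefix}_{xref}.{info['ext']}", "wb") as out:
                out.write(info["image"])

def render_pages_as_jpeg(pdf_path, out_prefix="page", dpi=150):
    """Purpose 2: render every page as a JPEG image of the original page."""
    doc = fitz.open(pdf_path)
    for i, page in enumerate(doc):
        pix = page.get_pixmap(dpi=dpi)
        # Recent PyMuPDF releases can write JPEG directly; older ones only PNG.
        pix.save(f"{out_prefix}_{i + 1}.jpg")
```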
We have released a zip file containing all of the program files and the source code, to do with as you please. We have also released a Windows installation image for anyone not comfortable handling zip files.
This project aims to develop an efficient rule-based extractor of reference entries from scientific articles written in English. The application takes a PDF file or a directory of PDFs and returns an HTML file containing the list of all extracted entries with their respective titles. The title of each cited article is also submitted to the Google Web Search service to obtain a URL identifying the article on the web. If that page exposes a BibTeX record (as typical sites such as CiteSeer and IEEE Xplore do), it is included in the HTML output under the corresponding entry. The application does not search image-based PDFs.
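The project's actual rule set is not shown here; the following is a minimal sketch, assuming PyMuPDF for text extraction and the common numbered-bracket reference style, of how a rule-based splitter for the references section might look:

```python
import re
import fitz  # PyMuPDF -- an assumed library choice for PDF text extraction

def extract_reference_entries(pdf_path):
    """Split the references section of a text-based PDF into entries."""
    doc = fitz.open(pdf_path)
    text = "\n".join(page.get_text() for page in doc)
    # Locate the references heading (names vary between venues).
    heading = re.search(r"\n(References|Bibliography)\s*\n", text, re.IGNORECASE)
    if heading is None:
        return []  # no references section found, or the PDF is image-based
    tail = text[heading.end():]
    # Split on bracketed labels like [1], [2], ... -- one common numbering rule.
    entries = re.split(r"\n?\[\d+\]\s*", tail)
    return [" ".join(e.split()) for e in entries if e.strip()]
```

Each resulting entry string could then be mined for a title and submitted to a web-search service, as the project description outlines.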
What would be a good way to extract headlines, dates, and authors from news articles? It seems easy to write a scraper using XPath or similar to extract this information from a single site, but I'm not sure of a more scalable solution if you're extracting from, say, 10,000 sites.
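For context on the single-site case the question calls easy, here is a minimal sketch using requests and lxml; the XPath expressions are purely hypothetical and would have to be rewritten for each site's markup, which is exactly why this approach does not scale to 10,000 sites:

```python
import requests
from lxml import html

# Hypothetical XPath expressions for one particular site's markup;
# every site needs its own set, which is why per-site scrapers don't scale.
HEADLINE_XPATH = "//h1[@class='headline']/text()"
DATE_XPATH = "//time/@datetime"
AUTHOR_XPATH = "//span[@class='byline']/text()"

def scrape_article(url):
    """Extract headline, date, and author from one known site's article page."""
    tree = html.fromstring(requests.get(url, timeout=10).content)
    return {
        "headline": tree.xpath(HEADLINE_XPATH),
        "date": tree.xpath(DATE_XPATH),
        "author": tree.xpath(AUTHOR_XPATH),
    }
```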