Extract, Transform, and Load (ETL) is a process in data warehousing that involves
* extracting data from outside sources,
* transforming it to fit business needs (which can include quality levels), and ultimately
* loading it into the end target, i.e. the data warehouse.
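The three steps above can be sketched as a minimal pipeline. This is an illustrative skeleton only — the source rows, the quality rule, and the in-memory "warehouse" are hypothetical stand-ins, not any particular ETL tool's API:

```python
# Minimal ETL sketch: hypothetical source rows, a quality-filtering
# transform, and an in-memory list standing in for the warehouse.

def extract():
    # Stand-in for reading from an outside source (file, API, database).
    return [
        {"name": "Alice", "age": "34"},
        {"name": "Bob", "age": ""},  # fails the quality check below
    ]

def transform(rows):
    # Fit business needs: enforce a quality level and cast types.
    return [
        {"name": r["name"], "age": int(r["age"])}
        for r in rows
        if r["age"].isdigit()
    ]

def load(rows, warehouse):
    # Load into the end target, i.e. the (stand-in) data warehouse.
    warehouse.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)  # [{'name': 'Alice', 'age': 34}]
```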
hResume is a microformat for publishing résumé or curriculum vitae (CV) information [1] using (X)HTML on web pages. Like many other microformats, hResume uses CSS class names to make an otherwise non-semantic XHTML document more meaningful. A document containing résumé information can be upgraded to hResume without altering its appearance in the browser, making the format easy to adopt.
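A sketch of what such markup might look like. The class names follow the hResume draft (a root `hresume`, an hCard-based `contact`, and `experience` entries reusing hCalendar/hCard classes); treat the exact structure as illustrative rather than normative:

```html
<div class="hresume">
  <div class="contact vcard">
    <span class="fn">Jane Doe</span>
  </div>
  <div class="experience vevent vcard">
    <span class="summary">Software Engineer</span>,
    <span class="org">Example Corp</span>
    (<abbr class="dtstart" title="2005-01-01">2005</abbr>)
  </div>
</div>
```

A plain document with the same visible text would render identically; only the class attributes carry the added meaning.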
Book sources
From Wikipedia, the free encyclopedia
This page allows users to search multiple sources for a book, given its ISBN. Spaces and dashes in the ISBN do not matter. The number itself begins after the colon in "ISBN-10:" and "ISBN-13:" identifiers.
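Ignoring spaces and dashes amounts to normalizing the string before validating it. A sketch of how such a page might clean and check an ISBN — the checksum rules are the standard ISBN-10 (mod 11) and ISBN-13 (mod 10) ones:

```python
def normalize_isbn(raw):
    # Spaces and dashes in the ISBN do not matter: strip them first.
    return raw.replace("-", "").replace(" ", "").upper()

def is_valid_isbn(raw):
    isbn = normalize_isbn(raw)
    if len(isbn) == 10:
        # ISBN-10: weighted sum (weights 10..1) must be 0 mod 11; 'X' = 10.
        digits = [10 if c == "X" else int(c) for c in isbn]
        return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0
    if len(isbn) == 13:
        # ISBN-13: alternating weights 1 and 3 must sum to 0 mod 10.
        return sum((1 if i % 2 == 0 else 3) * int(c)
                   for i, c in enumerate(isbn)) % 10 == 0
    return False

print(is_valid_isbn("0-306-40615-2"))      # True
print(is_valid_isbn("978 0 306 40615 7"))  # True
```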
DBpedia – Querying Wikipedia like a Semantic Database
Latest dbpedia news
DBpedia-Presentation at ISWC
Sören today presented the paper “DBpedia: A Nucleus for a Web of Open Data” at the International Semantic Web Conference in Busan, Korea. You can view the slides here.
DBpedia Relationship Finder Release 2
Second Release of the DBpedia Relationship Finder. The Relationship Finder explores the DBpedia infobox dataset to find out which relations exist between two things. It can answer questions like “How are Leipzig and the Semantic Web related?“. The new version includes, amongst other changes, better algorithms and the possibility to ignore objects and properties.
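Under the hood, "how are two things related?" can be framed as finding a path between two nodes in the triple graph. A minimal breadth-first sketch over a hypothetical toy triple set — the triples and property names here are made up for illustration, and the real Relationship Finder works over the full DBpedia infobox dataset:

```python
from collections import deque

# Hypothetical toy triples (subject, predicate, object).
triples = [
    ("Leipzig", "hostedConference", "ICCS"),
    ("ICCS", "topic", "Semantic_Web"),
    ("Leipzig", "country", "Germany"),
]

def find_path(start, goal, ignore_properties=()):
    # BFS over the undirected triple graph; edges whose property is in
    # ignore_properties are skipped (the release notes mention an option
    # to ignore objects and properties).
    edges = {}
    for s, p, o in triples:
        if p in ignore_properties:
            continue
        edges.setdefault(s, []).append((p, o))
        edges.setdefault(o, []).append((p, s))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for p, nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, p, nxt)]))
    return None  # no connection found

print(find_path("Leipzig", "Semantic_Web"))
# [('Leipzig', 'hostedConference', 'ICCS'), ('ICCS', 'topic', 'Semantic_Web')]
```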
DBpedia Relationship Finder released
Release of the DBpedia Relationship Finder. The Relationship Finder explores the DBpedia dataset to find out which relations exist between two things. It can answer questions like “How are Leipzig and the Semantic Web related?“.
DBpedia Hack Night in Copenhagen
Via Binary Relations Blog: If you are in the general vicinity of Copenhagen on the evening of the 24th of April (yep, that’s tomorrow), and remotely interested in RDF, SPARQL or DBpedia, stop by ITU, where we’ll be hacking away from 20:00. If you read Danish, see the original announcement by Claus Dahl in the kitchen: [...]
dbpedia is catching on
The dbpedia project started by AKSW (together with Chris Bizer from FU Berlin and OpenLink Software) is becoming increasingly popular. No wonder: the more than 10 million RDF triples extracted from the English Wikipedia make it possible to answer questions that were previously hard to answer. Who, for example, knows what connects Leipzig with Innsbruck? Interesting articles about dbpedia: Did [...]
Overview
Do you know all the mayors of towns situated higher than 1000 m, all sitcoms set in New York, or all philosophers who were influenced by Friedrich Nietzsche?
Wikipedia contains the information required to answer such questions, but its restricted search capabilities allow only very limited access to this valuable knowledge base. The Semantic Web, meanwhile, still lacks a critical mass of RDF data online, and up-to-date terms and ontologies are missing for many application domains.
The dbpedia.org project approaches both problems by extracting structured information from Wikipedia and by making this information available on the Web. dbpedia.org allows you to ask sophisticated queries against Wikipedia (like the ones mentioned above) and to link other datasets on the Web to dbpedia data.
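A query like the first example above might be expressed in SPARQL roughly as follows. The property names are illustrative guesses at the extracted infobox vocabulary, not guaranteed to match the actual dataset:

```sparql
PREFIX dbp: <http://dbpedia.org/property/>

# Mayors of towns situated higher than 1000 m (illustrative properties).
SELECT ?town ?mayor WHERE {
    ?town dbp:mayor ?mayor .
    ?town dbp:elevation ?elevation .
    FILTER (?elevation > 1000)
}
```

Because the results are RDF, the `?town` bindings can in turn be linked to from other datasets on the Web.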
Navigational databases are characterized by the fact that objects in the database are found primarily by following references from other objects. Traditionally navigational interfaces are procedural, though one could characterize some modern systems like XPath as being simultaneously navigational and declarative.
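The idea of finding objects by following references from other objects can be sketched in a few lines; the record layout below is hypothetical, standing in for whatever linked structure a navigational database exposes:

```python
# Navigational access: records hold direct references to related records,
# and a query proceeds procedurally by following those references.
class Record:
    def __init__(self, name):
        self.name = name
        self.children = []

root = Record("departments")
sales = Record("sales")
alice = Record("alice")
root.children.append(sales)
sales.children.append(alice)

def navigate(record, *path):
    # Start from a known record and follow references step by step,
    # much like an XPath expression walks from node to node.
    for name in path:
        record = next(c for c in record.children if c.name == name)
    return record

print(navigate(root, "sales", "alice").name)  # alice
```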
S. Ponzetto and M. Strube. Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pp. 192–199. Morristown, NJ, USA, Association for Computational Linguistics, (2006)