Automatically Refining the Wikipedia Infobox Ontology
F. Wu and D. Weld. Proceedings of the 17th International Conference on World Wide Web, pages 635--644. New York, NY, USA, ACM, (2008)
DOI: 10.1145/1367497.1367583
Abstract
The combined efforts of human volunteers have recently extracted numerous facts from Wikipedia, storing them as machine-harvestable object-attribute-value triples in Wikipedia infoboxes. Machine learning systems, such as Kylin, use these infoboxes as training data, accurately extracting even more semantic knowledge from natural language text. But in order to realize the full power of this information, it must be situated in a cleanly-structured ontology. This paper introduces KOG, an autonomous system for refining Wikipedia's infobox-class ontology towards this end. We cast the problem of ontology refinement as a machine learning problem and solve it using both SVMs and a more powerful joint-inference approach expressed in Markov Logic Networks. We present experiments demonstrating the superiority of the joint-inference approach and evaluating other aspects of our system. Using these techniques, we build a rich ontology, integrating Wikipedia's infobox-class schemata with WordNet. We demonstrate how the resulting ontology may be used to enhance Wikipedia with improved query processing and other features.
%0 Conference Paper
%1 Wu:2008:ARW:1367497.1367583
%A Wu, Fei
%A Weld, Daniel S.
%B Proceedings of the 17th International Conference on World Wide Web
%C New York, NY, USA
%D 2008
%I ACM
%K infobox ontology semantic wikipedia
%P 635--644
%R 10.1145/1367497.1367583
%T Automatically Refining the Wikipedia Infobox Ontology
%U http://doi.acm.org/10.1145/1367497.1367583
%X The combined efforts of human volunteers have recently extracted numerous facts from Wikipedia, storing them as machine-harvestable object-attribute-value triples in Wikipedia infoboxes. Machine learning systems, such as Kylin, use these infoboxes as training data, accurately extracting even more semantic knowledge from natural language text. But in order to realize the full power of this information, it must be situated in a cleanly-structured ontology. This paper introduces KOG, an autonomous system for refining Wikipedia's infobox-class ontology towards this end. We cast the problem of ontology refinement as a machine learning problem and solve it using both SVMs and a more powerful joint-inference approach expressed in Markov Logic Networks. We present experiments demonstrating the superiority of the joint-inference approach and evaluating other aspects of our system. Using these techniques, we build a rich ontology, integrating Wikipedia's infobox-class schemata with WordNet. We demonstrate how the resulting ontology may be used to enhance Wikipedia with improved query processing and other features.
%@ 978-1-60558-085-2
@inproceedings{Wu:2008:ARW:1367497.1367583,
abstract = {The combined efforts of human volunteers have recently extracted numerous facts from Wikipedia, storing them as machine-harvestable object-attribute-value triples in Wikipedia infoboxes. Machine learning systems, such as Kylin, use these infoboxes as training data, accurately extracting even more semantic knowledge from natural language text. But in order to realize the full power of this information, it must be situated in a cleanly-structured ontology. This paper introduces KOG, an autonomous system for refining Wikipedia's infobox-class ontology towards this end. We cast the problem of ontology refinement as a machine learning problem and solve it using both SVMs and a more powerful joint-inference approach expressed in Markov Logic Networks. We present experiments demonstrating the superiority of the joint-inference approach and evaluating other aspects of our system. Using these techniques, we build a rich ontology, integrating Wikipedia's infobox-class schemata with WordNet. We demonstrate how the resulting ontology may be used to enhance Wikipedia with improved query processing and other features.},
acmid = {1367583},
added-at = {2016-07-16T17:01:04.000+0200},
address = {New York, NY, USA},
author = {Wu, Fei and Weld, Daniel S.},
biburl = {https://www.bibsonomy.org/bibtex/2053d2aa7091025d3fc039c2d51dcbe13/hotho},
booktitle = {Proceedings of the 17th International Conference on World Wide Web},
doi = {10.1145/1367497.1367583},
interhash = {d2d874a449876242a4e39f21c070318c},
intrahash = {053d2aa7091025d3fc039c2d51dcbe13},
isbn = {978-1-60558-085-2},
keywords = {infobox ontology semantic wikipedia},
location = {Beijing, China},
numpages = {10},
pages = {635--644},
publisher = {ACM},
series = {WWW '08},
timestamp = {2016-07-16T17:01:04.000+0200},
title = {Automatically Refining the Wikipedia Infobox Ontology},
url = {http://doi.acm.org/10.1145/1367497.1367583},
year = {2008}
}