In this paper we present a class of general methods for information extraction and automatic categorization. These methods exploit the features of data compression techniques to define a measure of syntactic remoteness between pairs of character sequences (e.g. texts), based on their relative information content. Using this elementary tool it is possible to implement several algorithms addressing information retrieval problems in very different domains. We address in particular several linguistically motivated problems and present results for automatic language recognition, authorship attribution, context-based classification, as well as automatic universal classification. We also discuss in detail how specific features of data compression techniques can be used to introduce the notions of ``dictionary'' of a given sequence and of ``Artificial Text'', and we show how these new tools can be used for information retrieval purposes. We finally discuss the relevance of our results to non-linguistic fields, i.e. whenever the information is encoded in generic sequences of characters.
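The compression-based remoteness measure described above can be illustrated with a minimal sketch. The snippet below uses the standard normalized compression distance (NCD), a closely related measure from the same family, with `zlib` standing in for a generic LZ77-type compressor; this is an illustrative assumption, not necessarily the authors' exact formula.

```python
import zlib

def c(data: bytes) -> int:
    # Compressed length serves as a proxy for the information content of a sequence.
    return len(zlib.compress(data, 9))

def ncd(a: bytes, b: bytes) -> float:
    # Normalized compression distance: if b shares structure with a, the
    # concatenation a+b compresses almost as well as the larger of the two
    # alone, so the distance is small; unrelated sequences yield values near 1.
    ca, cb, cab = c(a), c(b), c(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Toy sequences (hypothetical examples, not data from the paper):
english = b"the quick brown fox jumps over the lazy dog " * 20
gibberish = b"zq xv jk wq pf gh bn mz ty ux lr cd vs nm kp " * 20

print(ncd(english, english))    # near zero: a sequence adds little to itself
print(ncd(english, gibberish))  # larger: little shared structure to exploit
```

In a classification setting, an unknown text would be assigned to the candidate class (language, author, topic) whose reference corpus minimizes this distance.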
