Web spam pages use various techniques to achieve
higher-than-deserved rankings in a search engine’s
results. While human experts can identify
spam, it is too expensive to manually evaluate a
large number of pages. Instead, we propose techniques
to semi-automatically separate reputable,
good pages from spam. We first select a small set
of seed pages to be evaluated by an expert. Once
we manually identify the reputable seed pages, we
use the link structure of the web to discover other
pages that are likely to be good. In this paper
we discuss possible ways to implement the seed
selection and the discovery of good pages. We
present results of experiments run on the World
Wide Web indexed by AltaVista and evaluate the
performance of our techniques. Our results show
that we can effectively filter out spam from a significant
fraction of the web, based on a good seed
set of fewer than 200 sites.
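The discovery step described above — starting trust at a small, manually vetted seed set and letting it flow outward along hyperlinks — can be sketched as a biased, PageRank-style propagation. The graph, seed set, and parameter values below are illustrative assumptions for the sketch, not the paper's experimental setup.

```python
def propagate_trust(out_links, seeds, alpha=0.85, iters=50):
    """Sketch of link-based trust propagation from a good seed set.

    out_links: dict mapping each page to the list of pages it links to
    seeds: set of pages judged reputable by a human expert (assumed input)
    alpha: decay factor controlling how much trust flows along links
    """
    pages = set(out_links) | {q for ps in out_links.values() for q in ps}
    # Static trust distribution: all initial mass sits on the seed pages.
    d = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    t = dict(d)
    for _ in range(iters):
        new = {}
        for p in pages:
            # Trust flowing into p: each linking page q splits its
            # current trust evenly among its out-links.
            inflow = sum(
                t[q] / len(out_links[q])
                for q in pages
                if p in out_links.get(q, [])
            )
            new[p] = alpha * inflow + (1 - alpha) * d[p]
        t = new
    return t

# Tiny hypothetical graph: seed "a" links to "b"; "x" is unlinked spam-like
# content and therefore accumulates no trust.
scores = propagate_trust({"a": ["b"], "b": [], "x": []}, seeds={"a"})
```

Pages reachable from the seeds end up with positive trust scores, while pages with no link path from any seed score zero; ranking pages by this score is what lets a small expert-labeled seed set separate likely-good pages from likely spam.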