Web spam pages use various techniques to achieve
higher-than-deserved rankings in a search engine’s
results. While human experts can identify
spam, it is too expensive to manually evaluate a
large number of pages. Instead, we propose techniques
to semi-automatically separate reputable,
good pages from spam. We first select a small set
of seed pages to be evaluated by an expert. Once
we manually identify the reputable seed pages, we
use the link structure of the web to discover other
pages that are likely to be good. In this paper
we discuss possible ways to implement the seed
selection and the discovery of good pages. We
present results of experiments run on the World
Wide Web indexed by AltaVista and evaluate the
performance of our techniques. Our results show
that we can effectively filter out spam from a significant
fraction of the web, based on a good seed
set of fewer than 200 sites.
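The approach described above, seeding trust at a small set of manually vetted pages and letting it flow along hyperlinks, can be sketched as a biased PageRank computation in which the teleport vector is uniform over the good seeds. The graph, seed set, and parameter values below are illustrative assumptions, not data from the paper:

```python
# A minimal sketch of seed-based trust propagation over a link graph,
# assuming a TrustRank-style biased PageRank: trust originates at the
# manually identified good seed pages and flows along out-links with a
# decay factor alpha. All names and values here are hypothetical.

def trust_rank(out_links, good_seeds, alpha=0.85, iters=50):
    """out_links: dict mapping each page to the list of pages it links to."""
    pages = list(out_links)
    n_seeds = len(good_seeds)
    # Static trust (teleport) vector: uniform over the good seeds only,
    # zero everywhere else -- this is what biases the walk toward seeds.
    d = {p: (1.0 / n_seeds if p in good_seeds else 0.0) for p in pages}
    t = dict(d)  # initial trust scores
    for _ in range(iters):
        new = {p: (1 - alpha) * d[p] for p in pages}
        for p in pages:
            links = out_links[p]
            if links:
                # A page splits its attenuated trust among its out-links.
                share = alpha * t[p] / len(links)
                for q in links:
                    if q in new:
                        new[q] += share
        t = new
    return t

# Tiny example: a vetted seed links to a, which links to b; a spam
# cluster has no in-links from trusted pages and so accumulates no trust.
graph = {"seed": ["a"], "a": ["b"], "b": [],
         "spam": ["spam2"], "spam2": []}
scores = trust_rank(graph, {"seed"})
```

Pages reachable from the seed set accumulate trust that attenuates with link distance, while pages unreachable from any seed (the spam cluster here) score zero, which is the basis for filtering them out.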