R. Baeza-Yates, and C. Castillo. Soft Computing Systems - Design, Management and, pages 565--572. IOS Press, Amsterdam, Berlin, Oxford, Tokyo, Washington, (2002)
R. Baeza-Yates, L. Calderón-Benavides, and C. González-Caro. Proceedings of String Processing and Information Retrieval (SPIRE), volume 4209 of Lecture Notes in Computer Science, pages 98--109. Springer, (2006)
Z. Bar-Yossef, I. Keidar, and U. Schonfeld. WWW '07: Proceedings of the 16th international conference on World Wide Web, pages 111--120. New York, NY, USA, ACM, (2007)
K. Becker, and F. Stalder (Eds.). Studien-Verlag, Innsbruck et al., (2009). Many contributions in this volume were first presented at the Deep Search conference, held in Vienna on 8 November 2008.
S. Brin, and L. Page. The Anatomy of a Large-Scale Hypertextual Web Search Engine. http://ilpubs.stanford.edu:8090/361/, (1998). Abstract: In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/. To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
A. Broder, M. Najork, and J. Wiener. WWW '03: Proceedings of the 12th international conference on World Wide Web, pages 679--689. New York, NY, USA, ACM, (2003)
D. Cohn, and H. Chang. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning, pages 167--174. San Francisco, CA, USA, Morgan Kaufmann Publishers Inc., (2000)
S. Dumais, E. Cutrell, J. Cadiz, G. Jancke, R. Sarin, and D. Robbins. Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, pages 72--79. Toronto, Canada, ACM Press, New York, NY, USA, (2003)
C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. WWW '01: Proceedings of the 10th international conference on World Wide Web, pages 613--622. New York, NY, USA, ACM Press, (2001)
E. Giglia. European journal of physical and rehabilitation medicine, 44 (2): 221--230, (June 2008)
K. Kingsley, G. Galbraith, M. Herring, E. Stowers, T. Stewart, and K. Kingsley. BMC medical education, (January 2011)