Web crawlers are programs that automatically scan websites.
Crawlers follow the links on a site to find other relevant pages.
Two search algorithms are widely used by crawlers to traverse the Web: breadth-first search and depth-first search, as sketched below.
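To make the traversal concrete, here is a minimal sketch of a breadth-first crawler in Python. It assumes the third-party `requests` and `BeautifulSoup` libraries; the function name, seed URL handling, and page limit are illustrative choices, not a prescribed implementation.

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def bfs_crawl(seed_url, max_pages=50):
    """Visit pages in breadth-first order starting from seed_url (illustrative sketch)."""
    visited = set()
    queue = deque([seed_url])  # FIFO queue drives the breadth-first order
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # skip unreachable or slow pages
        soup = BeautifulSoup(response.text, "html.parser")
        # Follow every outgoing link, resolving relative URLs against the current page.
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if link not in visited:
                queue.append(link)
    return visited
```

Swapping `queue.popleft()` for `queue.pop()` turns the FIFO queue into a stack, which yields depth-first order instead.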
Search engines differ from classical information retrieval systems in their runtime requirements: the sheer volume of web data renders most traditional indexing methods impractical for search engines.