What are Robots, Web Crawlers and Spiders?
Robots, web crawlers, and spiders are essentially one and the same: programs that automatically traverse, or “crawl,” the World Wide Web, indexing files and returning the information to a search engine’s database. Thousands of robots are operating at any given moment. Probably the best known is Google’s robot, called “Googlebot.” Other crawler-based search engines, such as Teoma and AlltheWeb, each have their own robots scanning web pages throughout the world.

Wikipedia describes the process as follows: “This process is called web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a website, such as checking links or validating HTML code.”
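To make the idea concrete, here is a minimal sketch of the step at the heart of crawling: extracting the links on a fetched page so they can be queued for later visits. It uses only Python’s standard library; the class and function names are illustrative, not part of any real crawler.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links like "/about" against the page URL.
                    self.links.append(urljoin(self.base_url, value))

def extract_links(base_url, html):
    """Return all outgoing links found in an HTML document."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

sample = '<html><body><a href="/about">About</a> <a href="https://example.org/">Ext</a></body></html>'
print(extract_links("https://example.com/", sample))
# → ['https://example.com/about', 'https://example.org/']
```

A real robot such as Googlebot wraps this step in a loop: fetch a page, index its content, extract its links, and add any unseen URLs to the queue of pages still to crawl.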