What is a Web Crawler?
The best way to describe a web crawler is as a blind child with a clear sense of right and wrong. It is blind because it doesn’t see any of the fancy graphics or creative Flash; it only sees the text and alternative text on your site. It is a child because it has to be guided around your site by clear paths that it understands. The sense of right and wrong comes from its ability to detect when something is not right: if it comes across red text sitting on a red background, it treats it as hidden text, and if it sees a page redirect to another site for no apparent reason, it treats it as a gateway page, both of which are no-no’s. So, to sum up what good SEO means, the short answer is that we design your site for your customers first and web crawlers second, taking into consideration browser compatibility, text-only browsers and blind viewers. The result is a clear, informative experience for your users and a friendly experience for the web crawler.
A web crawler is a relatively simple, automated program, or script, that methodically scans or “crawls” through Internet pages to create an index of the data it is looking for. Alternative names for a web crawler include web spider, web robot, bot, crawler and automatic indexer. A web crawler can be used for many purposes. Probably the most common use associated with the term relates to search engines. Search engines use web crawlers to collect information about what is out there on public web pages. Their primary purpose is to collect data so that when Internet surfers enter a search term on their site, they can quickly provide relevant web sites. When a search engine’s web crawler visits a web page, it “reads” the visible text, the hyperlinks, and the content of the various tags used on the site, such as keyword-rich meta tags. Using the information gathered by the crawler, the search engine then determines what the site is about and indexes the information.
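To make that read-and-follow loop concrete, here is a minimal sketch in Python of how a crawler might read a page’s visible text, collect its hyperlinks, and build a tiny index. The starting URL, the page limit, and the class and function names are illustrative assumptions, not any particular search engine’s code; a real crawler would also respect robots.txt, throttle its requests, and parse meta tags and alternative text in much more detail.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageScanner(HTMLParser):
    """Collects the visible text and hyperlinks of a single HTML page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.text_chunks = []
        self.links = []
        self._skip_depth = 0  # inside <script>/<style>, which a crawler ignores

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL
                    self.links.append(urljoin(self.base_url, value))

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.text_chunks.append(data.strip())


def crawl(start_url, max_pages=10):
    """Breadth-first crawl: read each page's text, queue its links, build a small index."""
    queue = [start_url]
    seen = set()
    index = {}  # url -> visible text found on the page

    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
        except OSError:
            continue  # unreachable page: skip it and move on
        scanner = PageScanner(url)
        scanner.feed(html)
        index[url] = " ".join(scanner.text_chunks)
        queue.extend(link for link in scanner.links if link not in seen)

    return index


if __name__ == "__main__":
    # "https://example.com" is only a placeholder starting point.
    for url, text in crawl("https://example.com", max_pages=3).items():
        print(url, "->", text[:80])
```

The sketch keeps the crawler deliberately “blind”: it never looks at images or styling, only at the text and the links it can follow, which is exactly why clear paths and descriptive text matter so much to search engines.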