What is the Robots Exclusion standard?
Robots (or ‘bots, or web crawlers) are automated programs that “crawl” the web to retrieve pages, for example on behalf of search engines or price comparison sites. The Robots Exclusion standard is an informal convention that many of these robots obey: webmasters place a plain-text file named “robots.txt” at the root of a website to tell robots which pages, directories, or entire sites to avoid.
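For example, a webmaster who wanted to keep all robots out of a (hypothetical) /private/ directory, and to ask one particular crawler to stay away entirely, might serve a robots.txt like this (the crawler name “BadBot” is made up for illustration):

    # Applies to every robot not matched by a more specific group below
    User-agent: *
    Disallow: /private/

    # This one robot is asked to stay out of the entire site
    User-agent: BadBot
    Disallow: /

Compliance is voluntary: a well-behaved robot fetches /robots.txt before crawling and honors the rules it finds there, but nothing in the standard enforces them. As a minimal sketch of what that check can look like, a crawler written in Python could use the standard library’s urllib.robotparser (the site and bot name here are hypothetical):

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the site's robots.txt once, before crawling
    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # hypothetical site
    rp.read()

    # Ask whether this bot may fetch a given page before requesting it
    allowed = rp.can_fetch("ExampleBot", "https://example.com/private/page.html")
    print("may crawl" if allowed else "must skip")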