What is robots.txt and why are people requesting it?
There are well over 250 known web robots (also called crawlers or spiders) that scour pages for inclusion in search engines. Robots follow the links in your HTML pages. “robots.txt” is part of the Robots Exclusion Protocol, which instructs robots where not to follow links: private directories, image directories, cgi-bin directories, and so on. It is up to you to decide whether you want robots to scour your site. For more information on web robots and creating your robots.txt, visit The Web Robots Pages.
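The kinds of exclusions mentioned above could be expressed in a robots.txt like the following (a minimal sketch; the directory names are placeholders for your own paths). The file must be plain text and served from the root of your site as /robots.txt:

```
# Apply these rules to all robots
User-agent: *

# Keep robots out of these directories (example paths)
Disallow: /private/
Disallow: /images/
Disallow: /cgi-bin/
```

A `User-agent` line names which robot a group of rules applies to (`*` means all robots), and each `Disallow` line gives a path prefix that robot should not fetch. Note that robots.txt is advisory: well-behaved robots honor it, but it is not an access control mechanism.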