Why can’t Google’s crawlers access all the pages on my agency’s site?
Several barriers can keep Google’s crawlers from reaching the pages on your agency’s public website. The most common is a database application that is reachable only through a search form and that generates webpages dynamically, leaving no static URLs for Google’s crawlers to follow. Other barriers include JavaScript drop-down menus, which crawlers may not be able to execute, and robots.txt restrictions. See the related questions below for more on barriers to search engine crawlers.
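As an illustrative sketch of the robots.txt barrier (the directory and file names here are hypothetical, not taken from any real agency site), a robots.txt file at the root of a site can block crawlers like this:

```
# Hypothetical example: blocks all compliant crawlers from the
# /reports/ directory, so pages under it never enter Google's index.
User-agent: *
Disallow: /reports/

# Hypothetical example: blocks only Google's crawler (Googlebot)
# from a single dynamic search script.
User-agent: Googlebot
Disallow: /search.cgi
```

If rules like these exist on your site, check whether they were added intentionally; an overly broad Disallow line is a common reason entire sections of a site never appear in Google’s index.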
Related Questions
- Why is it important to include the pages on my agency’s website in Google’s index? Isn’t the search tool on my site enough?
- Google lists over 180 million pages for our main keyword. How can our site get close to the top?!
- How can I make sure all pages on my agency’s site are accessible to Google’s crawlers?