There are missing files! What's happening?
You may want to capture files that exist in a different folder, or on another web site. You may also want to capture files that are forbidden by default by the website's robots.txt rules. In these cases, HTTrack does not capture these links automatically; you have to tell it to do so.

• Either use the filters.
Example: You are downloading http://www.someweb.com/foo/ and cannot get the .jpg images located in http://www.someweb.com/bar/ (for example, http://www.someweb.com/bar/blue.jpg).
Then add the filter rule +www.someweb.com/bar/*.jpg to accept all .jpg files from that location.
You can also accept all files from the /bar folder with +www.someweb.com/bar/*, or only HTML files with +www.someweb.com/bar/*.html, and so on (see the command sketch after this list).

• If the problems are related to robots.txt rules that do not let you access some folders (check the logs if you are not sure), you may want to disable the default robots.txt rules in the options. (But only disable this option with great care: parts of a site are sometimes restricted for good reason, for example because they are very large or not intended to be downloaded.)
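If you use the command-line version, a minimal sketch of the filter fix might look like the following. The URL and filter rule are the example values above; the output folder name mysite is hypothetical, and -O simply sets the mirror path:

  httrack http://www.someweb.com/foo/ -O mysite "+www.someweb.com/bar/*.jpg"

To also ignore robots.txt rules from the command line, the -s0 option (follow robots.txt rules: 0=never, 1=sometimes, 2=always) can be added, with the same caution as above:

  httrack http://www.someweb.com/foo/ -O mysite -s0 "+www.someweb.com/bar/*.jpg"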