How to access directories disallowed in robots.txt?

The robots.txt file does not prevent access to directories. It only asks well-behaved crawlers such as Googlebot and Bingbot not to crawl certain folders. If you list secret folders in there, Google and Bing will skip them, but malicious scanners will likely do the opposite, because the file is publicly readable. In effect you're giving away exactly what you want to keep secret. To actually block access to folders, configure that in your Apache vhost or in a .htaccess file. You can also put a login on the folder if you want.
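
For example, here is a minimal .htaccess sketch, assuming Apache 2.4 and a directory you want to protect (the AuthUserFile path is an assumption; point it at your own password file):

    # Deny all direct requests to this directory (Apache 2.4 syntax)
    Require all denied

    # Or require a login instead of blocking everyone:
    # AuthType Basic
    # AuthName "Restricted area"
    # AuthUserFile /etc/apache2/.htpasswd   (assumed path)
    # Require valid-user

The password file for the login variant can be created with Apache's htpasswd tool, e.g. htpasswd -c /etc/apache2/.htpasswd alice.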


The robots.txt file isn't a security measure and has no bearing on access permissions. It only tells 'good' robots to skip part of your website so it isn't indexed. Bad robots don't abide by those rules at all and scan everything they can find. So security can never rely on robots.txt; that's not its purpose.
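
To make that concrete, a disallow rule looks like the sketch below (the paths are invented examples). Keep in mind the file itself is public, so anyone can fetch it and read the list:

    User-agent: *
    Disallow: /private/
    Disallow: /admin/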

Is there a way to access the directories or files that are listed under Disallow?

Check your webserver's permissions.


Yes, just go to the directory.

robots.txt is nothing more than something a crawler follows out of courtesy. Crawlers are free to entirely ignore the file. It isn't a real method to keep directories private.
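
To see how little it takes, here is a sketch using curl (example.com and /private/ are placeholders): read the public robots.txt, then request a Disallowed path directly. Unless the server enforces its own access control, the request simply succeeds:

    # robots.txt is world-readable and handily lists what the owner wanted hidden
    curl https://example.com/robots.txt

    # nothing stops a direct request to a "Disallowed" path
    curl https://example.com/private/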