Stopping indexing of GitHub Pages

Short answer:

You can stop indexing of your GitHub Pages by adding a robots.txt to your User Page. This robots.txt will be the active robots.txt for all your project pages, since project pages are reachable as subdirectories (username.github.io/project) of your subdomain (username.github.io).


Longer answer:

You get your own subdomain for GitHub Pages (username.github.io). According to this question on MOZ and Google's reference, each subdomain has/needs its own robots.txt.

This means that the valid/active robots.txt for project projectname by user username lives at username.github.io/robots.txt. You can put a robots.txt file there by creating a GitHub Pages page for your user.

This is done by creating a new project/repository named username.github.io, where username is your username. You can now create a robots.txt file in the master branch of this project/repository, and it should be visible at username.github.io/robots.txt. More information about project, user and organization pages can be found here.

I have tested this with Google: I confirmed ownership of myusername.github.io by placing an HTML file in my project/repository https://github.com/myusername/myusername.github.io/tree/master, created a robots.txt file there, and then verified that my robots.txt works using Google's Search Console webmaster tools (googlebot-fetch). Google does indeed list it as blocked, and the Google Search Console webmaster tools (robots-testing-tool) confirm it.
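As an extra sanity check, you can parse the published file yourself before waiting on Google. A minimal sketch using Python's standard library, with username and projectname as placeholders for your own names:

import urllib.robotparser

# Load the robots.txt served at the root of the user subdomain
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://username.github.io/robots.txt")
rp.read()

# False means crawlers that honor robots.txt are blocked from the project page
print(rp.can_fetch("*", "https://username.github.io/projectname/"))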

To block robots for one project's GitHub Page:

User-agent: *
Disallow: /projectname/
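
Since every project page hangs off the same subdomain, the one robots.txt in your User Page repository can also block several projects at once; a sketch with placeholder project names:

User-agent: *
Disallow: /projectname/
Disallow: /another-project/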

To block robots for all GitHub Pages for your user (User Page and all Project Pages):

User-agent: *
Disallow: /

Other options

  • Look into the HTML meta tag
  • Look into custom domain (redirects) for GitHub Pages (see the CNAME sketch below)
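
For the custom-domain option: GitHub Pages reads the domain from a file named CNAME at the root of the publishing branch, containing nothing but the bare domain. A one-line sketch, assuming www.example.com is a domain you control:

www.example.com

Once the site is served from your own domain, the robots.txt at the root of that domain is the one crawlers consult.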

I don't know if it is still relevant, but Google says you can stop spiders with a meta tag:

<meta name="robots" content="noindex">

I'm not sure, however, whether that works for all spiders or only for Google.


Google doesn't recommend using a robots.txt file to keep a website (a GitHub Page in this case) out of its index. In fact, most of the time the site still gets indexed even if you block the Google bot.

Instead, you should add the following to your page's head, which is easy to control even if you are not using a custom domain.

<meta name='robots' content='noindex,nofollow' />

This will tell Google NOT to index the page. If you only block the Google bot from accessing your website, it will still index the site most of the time; it just won't show a meta description.
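
For clarity, this is where the tag sits; a minimal page sketch with a placeholder title and body:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- Tells compliant crawlers not to index this page or follow its links -->
  <meta name="robots" content="noindex,nofollow" />
  <title>My project page</title>
</head>
<body>
  ...
</body>
</html>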


Will just using robots.txt in GitHub Pages work?

If you're using the default GitHub Pages subdomain, then no, because Google would check https://github.io/robots.txt only.

You can make sure you don't have a master branch, or that your GitHub repo is private, although, as commented by olavimmanuel and detailed in their answer, this would not change anything.

However, if you're using a custom domain with your GitHub Pages site, you can place a robots.txt file at the root of your repo and it will work as expected. One example of using this pattern is the repo for Bootstrap.

However, as bmaupin points out, Google's own documentation says:

A robots.txt file tells search engine crawlers which URLs the crawler can access on your site.

This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google.

To keep a web page out of Google, block indexing with noindex or password-protect the page.