Stopping index of Github pages


Problem description

I have a GitHub Pages site for my repository, username.github.io.

However, I do not want Google to crawl my website, and I absolutely do not want it to show up in search results.

Will just using robots.txt on GitHub Pages work? I know there are tutorials for stopping a GitHub repository from being indexed, but what about the actual GitHub Pages site?

Recommended answer

Will just using robots.txt on GitHub Pages work?

If you're using the default GitHub Pages subdomain, then no, because Google would check only https://github.io/robots.txt.
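If the goal is specifically to keep pages out of search results on the default subdomain, one standard per-page technique (not mentioned in the original answer, but widely documented) is a robots meta tag in each page's HTML, which search engines honor independently of robots.txt:

```html
<!-- Placed inside the <head> of each page that should stay out of search results. -->
<!-- Unlike robots.txt, this is evaluated per page, so it also applies on a
     default *.github.io subdomain. -->
<meta name="robots" content="noindex">
```

On a GitHub Pages site generated with Jekyll, this tag would typically go into the site's layout template so it is emitted on every page.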

You can make sure you don't have a master branch, or that your GitHub repo is a private one, although, as commented by olavimmanuel and detailed in olavimmanuel's answer, this would not change anything.

However, if you're using a custom domain with your GitHub Pages site, you can place a robots.txt file at the root of your repo and it will work as expected. One example of using this pattern is the repo for Bootstrap.
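For the custom-domain case, a minimal robots.txt that asks all crawlers to stay away from the whole site could look like the sketch below. (This is an illustrative example, not copied from Bootstrap's repo, which disallows only specific paths.)

```text
# robots.txt, served from the root of the published site.
# Asks all well-behaved crawlers not to crawl any URL on this host.
User-agent: *
Disallow: /
```

Note that robots.txt only controls crawling; a page that is linked from elsewhere can still appear in search results, which is why a noindex meta tag is the more reliable way to keep pages out of search entirely.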

