How to replace robots.txt with .htaccess


Problem description

I have a small situation where I have to remove my robots.txt file, because I don't want any robot crawlers to get the links.

Also, I want the pages to be accessible to users, but I don't want them to be cached by the search engines.

Also, I cannot add any user authentication, for various reasons.

So I am thinking about using mod_rewrite to stop search engine crawlers from crawling the site while allowing everyone else to access it.

The logic I am trying to implement is to write a condition that checks whether the incoming user agent is a search engine and, if so, responds with a 401.
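A minimal sketch of that idea in .htaccess, assuming Apache with mod_rewrite enabled; the list of crawler user agents here is illustrative, not exhaustive, and only catches bots that identify themselves honestly. Note that mod_rewrite's `[F]` flag sends a 403 Forbidden; a literal 401 normally implies an authentication challenge, so 403 is the usual choice here:

```apache
# Sketch: deny requests from a few well-known crawlers.
# Assumes mod_rewrite is enabled; the agent list is an example only.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp|duckduckbot|baiduspider) [NC]
RewriteRule ^ - [F,L]
```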

The only problem is I don't know how to implement it. :(

Can somebody help me with this?

Thanks in advance.

Regards,

Answer

I may be understanding you wrong, but I think

User-agent: *
Disallow: /

in robots.txt will do just what you want - not let any crawler in, while keeping the website open for normal users.

Or do you need to specifically remove robots.txt from the web server (and for what reason)?
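For the caching part of the question specifically, a commonly used alternative that does not rely on robots.txt is the X-Robots-Tag response header. A sketch, assuming Apache with mod_headers enabled:

```apache
# Sketch, assuming mod_headers is enabled: ask compliant crawlers
# not to index this site or keep a cached copy of its pages.
Header set X-Robots-Tag "noindex, noarchive"
```

Unlike a robots.txt Disallow, this lets crawlers fetch the pages but asks them not to index or archive what they see, while normal users are unaffected.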
