Detecting honest web crawlers


Question


I would like to detect (on the server side) which requests are from bots. I don't care about malicious bots at this point, just the ones that are playing nice. I've seen a few approaches that mostly involve matching the user agent string against keywords like 'bot'. But that seems awkward, incomplete, and unmaintainable. So does anyone have any more solid approaches? If not, do you have any resources you use to keep up to date with all the friendly user agents?


If you're curious: I'm not trying to do anything against any search engine policy. We have a section of the site where a user is randomly presented with one of several slightly different versions of a page. However if a web crawler is detected, we'd always give them the same version so that the index is consistent.
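The selection logic described above might be sketched as follows. This is a hypothetical illustration, not code from the question: the variant names and the choice of always serving the first variant to crawlers are assumptions made for the example.

```java
import java.util.List;
import java.util.Random;

// Sketch: ordinary visitors get a random page variant, while a detected
// crawler always gets the same (canonical) variant so the index stays
// consistent across crawls.
public class VariantPicker {
    private static final List<String> VARIANTS =
            List.of("variant-a.jsp", "variant-b.jsp", "variant-c.jsp");
    private static final Random RANDOM = new Random();

    public static String pickVariant(boolean isCrawler) {
        if (isCrawler) {
            // Always the same version for crawlers.
            return VARIANTS.get(0);
        }
        // Random variant for human visitors.
        return VARIANTS.get(RANDOM.nextInt(VARIANTS.size()));
    }
}
```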


Also I'm using Java, but I would imagine the approach would be similar for any server-side technology.

Answer


You can find a very thorough database of known "good" web crawlers in the robotstxt.org Robots Database. Using this data would be far more effective than just matching 'bot' in the user agent string.
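A minimal sketch of that approach in Java, since the asker mentioned using it: match the request's User-Agent against a set of crawler tokens maintained from a list such as the robotstxt.org database. The tokens below are a small illustrative sample, not the full database; in practice you would load and periodically refresh the set from the maintained list.

```java
import java.util.Locale;
import java.util.Set;

// Sketch: detect "honest" crawlers by matching the User-Agent string
// against tokens drawn from a maintained crawler list (e.g. the
// robotstxt.org Robots Database). Sample tokens only.
public class CrawlerDetector {
    private static final Set<String> CRAWLER_TOKENS = Set.of(
            "googlebot", "bingbot", "slurp", "duckduckbot",
            "baiduspider", "yandexbot", "applebot");

    public static boolean isKnownCrawler(String userAgent) {
        if (userAgent == null) {
            return false;
        }
        String ua = userAgent.toLowerCase(Locale.ROOT);
        for (String token : CRAWLER_TOKENS) {
            if (ua.contains(token)) {
                return true;
            }
        }
        return false;
    }
}
```

In a servlet, you would pass `request.getHeader("User-Agent")` to this method and serve the canonical page variant when it returns true.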
