Web Crawler - Ignore Robots.txt file?
Problem Description
Some servers have a robots.txt file in order to stop web crawlers from crawling through their websites. Is there a way to make a web crawler ignore the robots.txt file? I am using Mechanize for Python.
Answer
The documentation for mechanize has this sample code:
import mechanize

br = mechanize.Browser()
# ...
# Ignore robots.txt. Do not do this without thought and consideration.
br.set_handle_robots(False)
That does exactly what you want.
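For background, what `set_handle_robots(False)` switches off is an ordinary robots.txt check performed before each fetch. A minimal sketch using Python's standard-library `urllib.robotparser` (the host `example.com` and the rules below are invented for illustration) shows what that check decides:

```python
from urllib import robotparser

# A hypothetical robots.txt body, made up for illustration,
# showing the kind of policy a compliant crawler obeys.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A compliant crawler skips disallowed paths...
print(rp.can_fetch("*", "http://example.com/private/page"))  # False
# ...and fetches allowed ones.
print(rp.can_fetch("*", "http://example.com/public/page"))   # True
```

Disabling robots handling in mechanize simply skips this gate for every request, which is why the documentation's comment urges thought and consideration first.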