How to prevent getting blacklisted while scraping Amazon

Question

I am trying to scrape Amazon with Scrapy, but I get this error:

DEBUG: Retrying <GET http://www.amazon.fr/Amuses-bouche-Peuvent-b%C3%A9n%C3%A9ficier-dAmazon-Premium-Epicerie/s?ie=UTF8&page=1&rh=n%3A6356734031%2Cp_76%3A437878031> 
(failed 1 times): 503 Service Unavailable

I think it's because Amazon is very good at detecting bots. How can I prevent this?

I used time.sleep(6) before every request.

I don't want to use their API.

I tried using Tor and Polipo.

Answer

You have to be very careful with Amazon and follow Amazon's Terms of Use and any policies related to web scraping.

Amazon is quite good at banning the IPs of bots. You would have to tweak DOWNLOAD_DELAY and CONCURRENT_REQUESTS to hit the website less often and be a good web-scraping citizen. You would also need to rotate IP addresses (you may look into, for instance, Crawlera) and user agents.
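
For reference, below is a minimal sketch of how these knobs might be set in a Scrapy project. The setting values, the placeholder user-agent strings, the `myproject.middlewares` module path, and the `RotateUserAgentMiddleware` class are illustrative assumptions, not values known to work against Amazon:

```python
# settings.py -- throttling and politeness knobs (values are illustrative)
DOWNLOAD_DELAY = 6                   # pause between requests, replaces manual time.sleep(6)
RANDOMIZE_DOWNLOAD_DELAY = True      # jitter the delay (0.5x-1.5x) so timing looks less mechanical
CONCURRENT_REQUESTS = 1              # keep overall concurrency low
CONCURRENT_REQUESTS_PER_DOMAIN = 1   # one request at a time per domain

AUTOTHROTTLE_ENABLED = True          # let Scrapy adapt the delay to observed server latency
AUTOTHROTTLE_START_DELAY = 5
AUTOTHROTTLE_MAX_DELAY = 60

RETRY_HTTP_CODES = [429, 503]        # retry the codes typically returned when you are being throttled

DOWNLOADER_MIDDLEWARES = {
    "myproject.middlewares.RotateUserAgentMiddleware": 400,  # hypothetical module path
}
```

User-agent rotation can be as simple as a downloader middleware that picks a random header for each request:

```python
# middlewares.py -- hypothetical user-agent rotation middleware
import random

USER_AGENTS = [
    # fill with a list of real browser user-agent strings; these are placeholders
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
]

class RotateUserAgentMiddleware:
    def process_request(self, request, spider):
        # overwrite the User-Agent header before the request is downloaded
        request.headers["User-Agent"] = random.choice(USER_AGENTS)
```

IP rotation is usually handled by pointing Scrapy at a rotating proxy service (such as Crawlera) rather than in spider code.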
