Scrapy delay request


Question


Every time I run my code, my IP gets banned. I need help delaying each request by 10 seconds. I've tried to place DOWNLOAD_DELAY in the code, but it gives no results. Any help is appreciated.

import re

import scrapy


# item class included here
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "https://washingtondc.craigslist.org/search/fua"
    ]

    BASE_URL = 'https://washingtondc.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/nos/vgm/" + item_id

            item = DmozItem()
            item["link"] = response.url

            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item

Answer


You need to set DOWNLOAD_DELAY in the settings.py of your project. Note that you may also need to limit concurrency. By default concurrency is 8, so you are hitting the website with 8 simultaneous requests.

# settings.py
DOWNLOAD_DELAY = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 2
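Since the underlying problem is an IP ban, it may also help to randomize the delay and enable Scrapy's built-in AutoThrottle extension. The settings below are a sketch of one reasonable configuration, not part of the original answer; AUTOTHROTTLE_* and RANDOMIZE_DOWNLOAD_DELAY are standard Scrapy settings, but the values are illustrative.

```python
# settings.py -- illustrative values, tune for the target site
DOWNLOAD_DELAY = 10                  # the base delay the question asks for
RANDOMIZE_DOWNLOAD_DELAY = True      # wait 0.5x-1.5x of DOWNLOAD_DELAY (default True)
CONCURRENT_REQUESTS_PER_DOMAIN = 1   # one request at a time per domain

AUTOTHROTTLE_ENABLED = True          # adapt the delay to server response times
AUTOTHROTTLE_START_DELAY = 10
AUTOTHROTTLE_MAX_DELAY = 60
```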


Starting with Scrapy 1.0 you can also place custom settings in the spider, so you could do something like this:

import scrapy


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    custom_settings = {
        "DOWNLOAD_DELAY": 5,
        "CONCURRENT_REQUESTS_PER_DOMAIN": 2
    }


Delay and concurrency are set per downloader slot, not per request. To check what delay and concurrency you actually have, you could try something like this:

def parse(self, response):
    # Inspect the downloader slot for this domain at runtime.
    delay = self.crawler.engine.downloader.slots["www.dmoz.org"].delay
    concurrency = self.crawler.engine.downloader.slots["www.dmoz.org"].concurrency
    self.log("Delay {}, concurrency {} for request {}".format(delay, concurrency, response.request))
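As a back-of-the-envelope check on what a fixed delay costs (my own arithmetic, not part of the original answer): with a single downloader slot, Scrapy waits DOWNLOAD_DELAY seconds between requests to that slot, so n requests take at least (n - 1) * delay seconds no matter how high the concurrency settings are.

```python
def min_crawl_seconds(n_requests: int, download_delay: float) -> float:
    """Lower bound on wall-clock time to issue n_requests through one slot."""
    if n_requests <= 0:
        return 0.0
    return (n_requests - 1) * download_delay

# e.g. 120 listing pages at the 10-second delay the question asks for:
print(min_crawl_seconds(120, 10.0))  # 1190.0 -> just under 20 minutes
```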

