Scrapy: Error 10054 after retrying image download

Question

I'm running a Scrapy spider in python to scrape images from a website. One of the images fails to download (even if I try to download it regularly through the site) which is an internal error for the site. This is fine, I don't care about trying to get the image, I just want to skip over the image when it fails and move on to the other images, but I keep getting a 10054 error.

Traceback (most recent call last):
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 588, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "C:\Python27\Scripts\nhtsa\nhtsa\spiders\NHTSA_spider.py", line 137, in parse_photo_page
    self.retrievePhoto(base_url_photo + url[0], url_text)
  File "C:\Python27\Scripts\nhtsa\nhtsa\retrying.py", line 49, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "C:\Python27\Scripts\nhtsa\nhtsa\retrying.py", line 212, in call
    raise attempt.get()
  File "C:\Python27\Scripts\nhtsa\nhtsa\retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "C:\Python27\Scripts\nhtsa\nhtsa\retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "C:\Python27\Scripts\nhtsa\nhtsa\spiders\NHTSA_spider.py", line 216, in retrievePhoto
    code.write(f.read())
  File "c:\python27\lib\socket.py", line 355, in read
    data = self._sock.recv(rbufsize)
  File "c:\python27\lib\httplib.py", line 612, in read
    s = self.fp.read(amt)
  File "c:\python27\lib\socket.py", line 384, in read
    data = self._sock.recv(left)
error: [Errno 10054] An existing connection was forcibly closed by the remote host

Here is my parse function that looks at the photo page and finds the important URLs:

def parse_photo_page(self, response):
    for sel in response.xpath('//table[@id="tblData"]/tr'):
        url = sel.xpath('td/font/a/@href').extract()
        table_fields = sel.xpath('td/font/text()').extract()
        if url:
            base_url_photo = "http://www-nrd.nhtsa.dot.gov/"
            url_text = table_fields[3]
            url_text = string.replace(url_text, "&nbsp", "")
            url_text = string.replace(url_text, " ", "")
            self.retrievePhoto(base_url_photo + url[0], url_text)

Here is my download function with the retry decorator:

from retrying import retry

@retry(stop_max_attempt_number=5, wait_fixed=2000)
def retrievePhoto(self, url, filename):
    fullPath = self.saveLocation + "/" + filename
    urllib.urlretrieve(url, fullPath)

It retries the download 5 times, but then throws the 10054 error and does not continue to the next image. How can I get the spider to continue after retrying? Again, I don't care about downloading the problem image, I just want to skip over it.

Answer

You're correct that you shouldn't use urllib inside Scrapy, because it blocks everything. Try reading resources related to "scrapy twisted" and "scrapy asynchronous". Anyway... I don't believe that your main problem is "continuing after retrying" but rather not using relevant XPaths in your expressions. Here is a version that works for me (note the ./ in './td/font/a/@href'):

import scrapy
import string
import urllib
import os

class MyspiderSpider(scrapy.Spider):
    name = "myspider"
    start_urls = (
        'file:index.html',
    )

    saveLocation = os.getcwd()

    def parse(self, response):
        for sel in response.xpath('//table[@id="tblData"]/tr'):
            url = sel.xpath('./td/font/a/@href').extract()
            table_fields = sel.xpath('./td/font/text()').extract()
            if url:
                base_url_photo = "http://www-nrd.nhtsa.dot.gov/"
                url_text = table_fields[3]
                url_text = string.replace(url_text, "&nbsp","")
                url_text = string.replace(url_text," ","")
                self.retrievePhoto(base_url_photo + url[0], url_text)

    from retrying import retry
    @retry(stop_max_attempt_number=5, wait_fixed=2000)
    def retrievePhoto(self, url, filename): 
        fullPath = self.saveLocation + "/" + filename
        urllib.urlretrieve(url, fullPath)
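As for the original "continue after retrying" concern: once stop_max_attempt_number is reached, `retrying` re-raises the last exception, so the caller can simply catch it and move on. A minimal sketch of that idea (the spider class below is a hypothetical stand-in, not your code; it simulates the image that always fails):

```python
import socket

class DummySpider(object):
    """Hypothetical stand-in for the real spider."""
    def retrievePhoto(self, url, filename):
        # Simulate the permanently broken image from the traceback above.
        raise socket.error(10054, "An existing connection was forcibly closed")

def retrieve_or_skip(spider, url, filename):
    # Catch what `retrying` re-raises after the final attempt,
    # so the loop in parse_photo_page can move on to the next image.
    try:
        spider.retrievePhoto(url, filename)
        return True
    except (IOError, socket.error) as err:
        print("skipping %s: %s" % (url, err))
        return False
```

In parse_photo_page you would then call retrieve_or_skip(self, base_url_photo + url[0], url_text) instead of calling self.retrievePhoto directly, though the non-blocking versions below are still the better fix.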

And here's a (much better) version that follows your patterns but uses the ImagesPipeline that @paul trmbrth mentioned.

import scrapy
import string
import os

class MyspiderSpider(scrapy.Spider):
    name = "myspider2"
    start_urls = (
        'file:index.html',
    )

    saveLocation = os.getcwd()

    custom_settings = {
        "ITEM_PIPELINES": {'scrapy.pipelines.images.ImagesPipeline': 1},
        "IMAGES_STORE": saveLocation
    }

    def parse(self, response):
        image_urls = []
        image_texts = []
        for sel in response.xpath('//table[@id="tblData"]/tr'):
            url = sel.xpath('./td/font/a/@href').extract()
            table_fields = sel.xpath('./td/font/text()').extract()
            if url:
                base_url_photo = "http://www-nrd.nhtsa.dot.gov/"
                url_text = table_fields[3]
                url_text = string.replace(url_text, "&nbsp","")
                url_text = string.replace(url_text," ","")
                image_urls.append(base_url_photo + url[0])
                image_texts.append(url_text)

        return {"image_urls": image_urls, "image_texts": image_texts}
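One thing to note (this is ImagesPipeline's documented behaviour, not something in the code above): images are saved under IMAGES_STORE/full/ with a SHA1 hash of the URL as the filename, so the readable names collected in image_texts are not used automatically. If you want URL-based filenames you can subclass ImagesPipeline and override its file_path() method; the naming logic itself could look like this hypothetical helper:

```python
import os
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

def image_file_path(url):
    # Derive a readable filename from the image URL; in a real
    # ImagesPipeline subclass this would be the return value of file_path().
    return "full/" + os.path.basename(urlparse(url).path)

print(image_file_path("http://www-nrd.nhtsa.dot.gov/img/2015/cav.jpg"))
# full/cav.jpg
```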

The demo file I use is this:

$ cat index.html
<table id="tblData"><tr>
<td><font>hi <a href="img/2015/cav.jpg"> foo </a> <span /> <span /> green.jpg </font></td>
</tr><tr>
<td><font>hi <a href="img/2015/caw.jpg"> foo </a> <span /> <span /> blue.jpg </font></td>
</tr></table>
