Scrapy Shell: twisted.internet.error.ConnectionLost although USER_AGENT is set


Problem description


When I try to scrape a certain web site (with both the spider and the shell), I get the following error:

twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion.>]

I found out that this can happen, when no user agent is set. But after setting it manually, I still got the same error.

You can see the whole output of scrapy shell here: http://pastebin.com/ZFJZ2UXe

Notes:

I am not behind a proxy, and I can access other sites via scrapy shell without problems. I am also able to access the site with Chrome, so it is not a network or connection issue.

Maybe someone can give me a hint on how I could solve this problem?

Solution

Here is working code.

What you need to do is send the full set of browser-like request headers, not just the User-Agent.

Also set ROBOTSTXT_OBEY = False in settings.py
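As a minimal settings.py sketch (only ROBOTSTXT_OBEY = False comes from the answer itself; the project-wide USER_AGENT is an optional extra so that the scrapy shell picks it up too):

```python
# settings.py -- minimal sketch for this answer.
# Disabling robots.txt checking is what the answer requires.
ROBOTSTXT_OBEY = False

# Optional: set a project-wide browser User-Agent so the scrapy shell
# and all spiders use it by default.
USER_AGENT = (
    "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
)
```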

# -*- coding: utf-8 -*-
import logging

import scrapy
from scrapy.http.request import Request

class Test1SpiderSpider(scrapy.Spider):
    name = "test1_spider"

    def start_requests(self):
        # Mimic a real browser request: some sites drop the connection
        # when the usual browser headers are missing.
        headers = {
            "Host": "www.firmenabc.at",
            "Connection": "keep-alive",
            "Cache-Control": "max-age=0",
            "Upgrade-Insecure-Requests": "1",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
            "DNT": "1",
            "Accept-Encoding": "gzip, deflate, sdch",
            "Accept-Language": "en-US,en;q=0.8",
        }

        yield Request(
            url='http://www.firmenabc.at/result.aspx?what=&where=Graz',
            callback=self.parse_detail_page,
            headers=headers,
        )

    def parse_detail_page(self, response):
        logging.info(response.body)

EDIT:

You can see what headers to send by inspecting the requests to the URLs in your browser's Dev Tools (Network tab)
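The same fix applies in the scrapy shell, where the question started: `fetch()` accepts a Request object, so you can attach the headers there as well. A sketch of such a session (using the same headers dictionary as in the spider):

```
$ scrapy shell
>>> from scrapy import Request
>>> headers = {
...     "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
...     "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
... }
>>> req = Request("http://www.firmenabc.at/result.aspx?what=&where=Graz",
...               headers=headers)
>>> fetch(req)
```

Alternatively, individual settings can be overridden when starting the shell, e.g. `scrapy shell -s USER_AGENT='Mozilla/5.0 ...' -s ROBOTSTXT_OBEY=False`.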

