Using a proxy with scrapy-splash


Problem description

I'm trying to use a proxy (proxymesh) alongside scrapy-splash. I have the following (relevant) code:

PROXY = """splash:on_request(function(request)
    request:set_proxy{
        host = http://us-ny.proxymesh.com,
        port = 31280,
        username = username,
        password = secretpass,
    }
    return splash:html()
end)"""

and in start_requests:

def start_requests(self):
    for url in self.start_urls:
        print(url)
        yield SplashRequest(url, self.parse,
            endpoint='execute',
            args={'wait': 5,
                  'lua_source': PROXY,
                  'js_source': 'document.body'})

But it does not seem to work: self.parse is not called at all. If I change the endpoint to 'render.html', I do hit the self.parse method, but when I inspect the headers (response.headers) I can see that the request is not going through the proxy. I confirmed this by setting http://checkip.dyndns.org/ as the start URL: upon parsing the response, I saw my old IP address.

What am I doing wrong?

Recommended answer

You should add a 'proxy' argument to the SplashRequest:

def start_requests(self):
    for url in self.start_urls:
        print(url)
        yield SplashRequest(url, self.parse,
            endpoint='execute',
            args={'wait': 5,
                  'lua_source': PROXY,
                  'js_source': 'document.body',
                  'proxy': 'http://proxy_ip:proxy_port'})
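
Here 'proxy_ip' and 'proxy_port' are placeholders for your actual proxy endpoint. For context, a complete spider built around this answer might look like the sketch below. This is not the original poster's code: the spider name is hypothetical, the ProxyMesh host and credentials are the placeholders from the question, and the Lua script is reduced to a minimal valid main function (the execute endpoint requires one). Splash's proxy argument takes a full proxy URL of the form [protocol://][user:password@]proxyhost[:port].

import scrapy
from scrapy_splash import SplashRequest

# Minimal valid Splash script: the 'execute' endpoint requires a main() function.
LUA_SOURCE = """function main(splash)
    assert(splash:go(splash.args.url))
    splash:wait(splash.args.wait)
    return splash:html()
end"""

class CheckIpSpider(scrapy.Spider):
    name = 'checkip'  # hypothetical spider name
    start_urls = ['http://checkip.dyndns.org/']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse,
                endpoint='execute',
                args={'wait': 5,
                      'lua_source': LUA_SOURCE,
                      # Full proxy URL with the placeholder credentials from the question:
                      'proxy': 'http://username:secretpass@us-ny.proxymesh.com:31280'})

    def parse(self, response):
        # If the proxy is in effect, checkip reports the proxy's IP, not yours.
        self.logger.info(response.text)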

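A closing note on the original splash:on_request approach: it can work too, but the script in the question has two problems. First, host, username, and password must be quoted Lua strings, and host should be a bare hostname without the http:// scheme. Second, the execute endpoint expects the script to define a main function that actually loads the page; as written, the script never navigates anywhere and Splash rejects it, which would explain why self.parse is never called (Scrapy skips the callback for non-200 responses by default). A corrected sketch, again with the question's placeholder credentials (splash:on_request requires Splash 2.0+):

PROXY = """function main(splash)
    splash:on_request(function(request)
        -- host and credentials must be quoted strings; no scheme in host
        request:set_proxy{
            host = "us-ny.proxymesh.com",
            port = 31280,
            username = "username",
            password = "secretpass",
        }
    end)
    assert(splash:go(splash.args.url))
    splash:wait(splash.args.wait)
    return splash:html()
end"""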
