How to use http and https proxy together in scrapy?


Problem Description

I am new to Scrapy. I found how to use an HTTP proxy, but I want to use both HTTP and HTTPS proxies, because the links I crawl include both http and https URLs. How do I use an HTTP and an HTTPS proxy together?

import base64

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"
        # or, for an HTTPS proxy: request.meta['proxy'] = "https://YOUR_PROXY_IP:PORT"
        proxy_user_pass = "USERNAME:PASSWORD"
        # Set up basic authentication for the proxy.
        # base64.encodestring() is deprecated (and appended a newline);
        # base64.b64encode() on bytes is the current equivalent.
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
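Since the question is about handling both http and https links, one option is to choose the proxy inside the middleware based on the request URL's scheme. A minimal sketch, assuming two separate proxy endpoints (the addresses and credentials below are placeholders, not real values):

```python
import base64
from urllib.parse import urlparse

class SchemeProxyMiddleware(object):
    # Placeholder proxy endpoints -- replace with your own.
    HTTP_PROXY = "http://YOUR_HTTP_PROXY_IP:PORT"
    HTTPS_PROXY = "http://YOUR_HTTPS_PROXY_IP:PORT"
    PROXY_USER_PASS = "USERNAME:PASSWORD"

    def process_request(self, request, spider):
        # Pick the proxy according to the scheme of the URL being fetched.
        scheme = urlparse(request.url).scheme
        if scheme == 'https':
            request.meta['proxy'] = self.HTTPS_PROXY
        else:
            request.meta['proxy'] = self.HTTP_PROXY
        # Basic authentication for the proxy, as in the question's code.
        encoded = base64.b64encode(self.PROXY_USER_PASS.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded
```

As with any downloader middleware, this would still need to be enabled in settings.py via DOWNLOADER_MIDDLEWARES.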

Recommended Answer

You could use the standard environment variables in combination with Scrapy's HttpProxyMiddleware:

This middleware sets the HTTP proxy to use for requests, by setting the proxy meta value for Request objects.

Like the Python standard library modules urllib and urllib2, it obeys the following environment variables:

http_proxy
https_proxy
no_proxy
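These variables can be set in the shell before running the crawl, or from Python before the crawler starts; a minimal sketch (the proxy address and credentials are placeholders):

```python
import os

# Set before launching the crawl; Scrapy's HttpProxyMiddleware reads these.
# The addresses and credentials below are placeholders, not real endpoints.
os.environ['http_proxy'] = 'http://USERNAME:PASSWORD@YOUR_PROXY_IP:PORT'
os.environ['https_proxy'] = 'http://USERNAME:PASSWORD@YOUR_PROXY_IP:PORT'
# Hosts that should bypass the proxy entirely.
os.environ['no_proxy'] = 'localhost,127.0.0.1'
```

Because http_proxy and https_proxy are read separately, pointing them at different endpoints is one way to use an HTTP and an HTTPS proxy together.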

You can also set the proxy meta key per request, to a value like http://some_proxy_server:port.
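To attach the proxy per request from inside a spider, the meta dict can be built with a small helper; this is only an illustrative sketch, and the helper name and default proxy address are placeholders:

```python
def with_proxy(url, proxy="http://some_proxy_server:port"):
    """Return the meta dict for a proxied Scrapy request.

    The proxy address is a placeholder; pass a real endpoint in practice.
    """
    return {'proxy': proxy}
```

In a spider callback this would be used as, e.g., yield scrapy.Request(url, callback=self.parse, meta=with_proxy(url)).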

