Download a full page with scrapy


Problem description

I want to download the content of a whole page using Scrapy.

With Selenium this is easy:

import os, sys

# Python 2 workaround so non-ASCII characters in the page source
# can be written to the output file without a UnicodeEncodeError.
reload(sys)
sys.setdefaultencoding('utf8')

from selenium import webdriver


url = 'https://es.wikipedia.org/wiki/Python'

# Let the browser render the page, then grab the full HTML source.
driver = webdriver.Firefox()
driver.get(url)
content = driver.page_source
with open('source', 'w') as output:
    output.write(content)
driver.quit()  # close the browser once the page has been saved

But Selenium is much slower than Scrapy.

Is there a simple way to do this in Scrapy?

I want to save the code of each page in a different text file, not as a CSV or JSON file. Also, if possible, without creating a project, which seems like overkill for such a simple task.

Recommended answer

The code below will download this page and save it to the file download-a-full-page-with-scrapy.html.

test_scr.py

import scrapy


class TestSpider(scrapy.Spider):
    name = "test"

    # Pages to download; each response is handled by parse().
    start_urls = [
        "http://stackoverflow.com/questions/38233614/download-a-full-page-with-scrapy",
    ]

    def parse(self, response):
        # Name the file after the last segment of the URL and write
        # the raw response body (the full HTML) to it.
        filename = response.url.split("/")[-1] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)

Run the spider with this command:

scrapy runspider test_scr.py
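
If you would rather avoid the command line entirely, a Scrapy spider can also be driven from a plain Python script with CrawlerProcess, still without creating a project. A minimal sketch, assuming the spider above is saved as test_scr.py in the same directory:

from scrapy.crawler import CrawlerProcess

from test_scr import TestSpider

# Run the spider in-process; no Scrapy project scaffolding is needed.
process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
process.crawl(TestSpider)
process.start()  # blocks until the crawl has finished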
