Should I create a pipeline to save files with scrapy?
Question
I need to save a file (.pdf) but I'm unsure how to go about it. I need to save the .pdfs and store them in directories organized much like they are on the site I'm scraping them from.
From what I can gather I need to make a pipeline, but from what I understand pipelines save "Items", and items are just basic data like strings and numbers. Is saving files a proper use of pipelines, or should I save the file in the spider instead?
Answer
Yes and no[1]. If you fetch a PDF it will be stored in memory, but as long as the PDFs are not big enough to fill up your available memory, that is fine.
You could save the PDF in the spider callback:
def parse_listing(self, response):
    # ... extract pdf urls
    for url in pdf_urls:
        yield Request(url, callback=self.save_pdf)

def save_pdf(self, response):
    path = self.get_path(response.url)
    with open(path, "wb") as f:
        f.write(response.body)
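get_path is not defined in the answer; a minimal sketch of what it could look like, assuming you want to mirror the remote site's directory layout under a local "downloads" folder (that root folder name is just an illustration):

import os
from urllib.parse import urlparse

def get_path(self, url):
    # Mirror the remote directory structure under a local "downloads" root
    # (the folder name is an assumption, not something from the answer).
    parsed = urlparse(url)
    path = os.path.join("downloads", parsed.netloc, parsed.path.lstrip("/"))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    return path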
If you choose to do it in a pipeline:
# in the spider
def parse_pdf(self, response):
    i = MyItem()
    i['body'] = response.body
    i['url'] = response.url
    # you can add more metadata to the item
    return i

# in your pipeline
def process_item(self, item, spider):
    path = self.get_path(item['url'])
    with open(path, "wb") as f:
        f.write(item['body'])
    # remove body and add path as reference
    del item['body']
    item['path'] = path
    # let item be processed by other pipelines, e.g. a db store
    return item
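The snippet above assumes an item class and a registered pipeline that the answer does not spell out. A minimal sketch, assuming the project is called myproject and the pipeline class is named PdfSavePipeline (both names are placeholders):

# items.py -- minimal item with the fields used above
import scrapy

class MyItem(scrapy.Item):
    url = scrapy.Field()
    body = scrapy.Field()
    path = scrapy.Field()

# settings.py -- enable the pipeline (module path and class name are placeholders)
ITEM_PIPELINES = {
    "myproject.pipelines.PdfSavePipeline": 300,
}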
[1] Another approach could be to store only the PDFs' URLs and use another process to fetch the documents without buffering them in memory (e.g. wget).
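A rough sketch of that alternative, assuming a pipeline that only records URLs to a text file (class and file names are illustrative) followed by a separate wget run:

class PdfUrlPipeline:
    def open_spider(self, spider):
        # Collect PDF URLs instead of bodies, so nothing large stays in memory.
        self.urls_file = open("pdf_urls.txt", "w")

    def process_item(self, item, spider):
        self.urls_file.write(item["url"] + "\n")
        return item

    def close_spider(self, spider):
        self.urls_file.close()

# Afterwards, something like:
#   wget --input-file=pdf_urls.txt --force-directories
# fetches the documents and mirrors the remote directory layout on disk.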