Should I create a pipeline to save files with Scrapy?
Question
I need to save a file (.pdf) but I'm unsure how to do it. I need to save .pdfs and store them in a directory structure much like the one they are stored in on the site I'm scraping them from.
From what I can gather I need to make a pipeline, but from what I understand pipelines save "Items", and items are just basic data like strings/numbers. Is saving files a proper use of pipelines, or should I save the file in the spider instead?
Answer
Yes and no[1]. If you fetch a PDF it will be stored in memory, but as long as the PDFs are not big enough to fill up your available memory, that is fine.
You could save the PDF in the spider callback:
def parse_listing(self, response):
    # ... extract pdf urls
    for url in pdf_urls:
        yield Request(url, callback=self.save_pdf)

def save_pdf(self, response):
    path = self.get_path(response.url)
    with open(path, "wb") as f:
        f.write(response.body)
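The answer calls self.get_path() without ever defining it. A minimal sketch of what such a helper could look like, assuming the goal from the question (mirroring the site's directory layout locally); the "downloads" base directory and the function body are illustrative, not part of the original answer:

import os
from urllib.parse import urlparse

def get_path(self, url):
    # Hypothetical helper: mirror the URL's path under a local base
    # directory so the saved PDFs keep the same layout as on the site.
    parsed = urlparse(url)
    local_path = os.path.join("downloads", parsed.path.lstrip("/"))
    # make sure the target directory exists before open(path, "wb") runs
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    return local_path

The same helper would work both as a spider method and as a pipeline method, since in both snippets it only receives a URL.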
If you choose to do it in a pipeline:
# in the spider
def parse_pdf(self, response):
    i = MyItem()
    i['body'] = response.body
    i['url'] = response.url
    # you can add more metadata to the item
    return i

# in your pipeline
def process_item(self, item, spider):
    path = self.get_path(item['url'])
    with open(path, "wb") as f:
        f.write(item['body'])
    # remove body and add path as reference
    del item['body']
    item['path'] = path
    # let item be processed by other pipelines, e.g. a db store
    return item
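One thing the answer leaves implicit: Scrapy only runs pipelines that are listed in the ITEM_PIPELINES setting. A minimal settings.py entry, assuming the pipeline class is called PdfSavePipeline and lives in myproject/pipelines.py (both names are placeholders):

# settings.py -- enable the pipeline; the number (0-1000) sets the order
# in which pipelines process each item
ITEM_PIPELINES = {
    "myproject.pipelines.PdfSavePipeline": 300,
}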
[1] Another approach could be to store only the PDFs' URLs and use another process to fetch the documents without buffering them into memory (e.g. wget).
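A rough sketch of that alternative, assuming the spider yields only the PDF URLs and a separate tool downloads them afterwards; the CSS selector, item field name, and file name below are illustrative:

# in the spider: yield only the PDF URL instead of the response body
def parse_listing(self, response):
    for href in response.css("a[href$='.pdf']::attr(href)").getall():
        yield {"pdf_url": response.urljoin(href)}

# then fetch the files outside Scrapy, e.g. after exporting the URLs
# to a text file (one per line):
#   wget --input-file=pdf_urls.txt --force-directories
# --force-directories makes wget recreate the site's directory layout locally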