How does Scrapy avoid re-downloading media that was downloaded recently?
Question
According to https://doc.scrapy.org/en/latest/topics/media-pipeline.html, both Scrapy's Files Pipeline and Images Pipeline "avoid re-downloading media that was downloaded recently".
I have a spider which I'm running using a job directory (JOBDIR) in order to pause and resume crawls. Initially I was scraping items without downloading files; later on, I added a Files Pipeline. However, I forgot to delete the JOBDIR before re-running the spider 'for real' with the Pipeline.
What I'm afraid of is that the requests.seen file in the JOBDIR will contain fingerprints of items which have been scraped, but for which no file was downloaded (because the pipeline was not yet in place when they were scraped). What I'm considering doing is removing the JOBDIR and starting the scrape again from a clean slate.
My question is: will this work without downloading all the files again? Or does the FilesPipeline rely on the JOBDIR to skip files that have already been downloaded recently? (My FILES_SOURCE is an S3 bucket, by the way.)
Answer
As far as I know, Scrapy computes a file name from each media URL (by default, a SHA-1 hash of the URL rather than base64), and if a file with that name already exists in the storage location, Scrapy does not try to download it again. This check is against the file store itself, not the JOBDIR, so starting over with a fresh JOBDIR should not force a re-download of files that are already in your S3 bucket (subject to the pipeline's expiration window, FILES_EXPIRES).
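To illustrate the naming scheme, here is a minimal sketch of how the default FilesPipeline derives a storage path from a URL. This is a simplified reproduction for illustration, not the pipeline's actual code (the real logic lives in `scrapy.pipelines.files.FilesPipeline.file_path`, which also handles extensions and overrides):

```python
import hashlib

def default_file_path(url: str) -> str:
    """Sketch of Scrapy's default file naming: the SHA-1 hex digest
    of the request URL, stored under a 'full/' prefix in FILES_STORE.
    Because the name depends only on the URL, re-crawling the same URL
    maps to the same stored file, letting the pipeline skip the download."""
    media_guid = hashlib.sha1(url.encode("utf-8")).hexdigest()
    return f"full/{media_guid}"

print(default_file_path("https://example.com/reports/2019.pdf"))
```

Since the path is a pure function of the URL, two crawls (with or without the same JOBDIR) resolve a given URL to the same object key in the store.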