scrapy store images to amazon s3
Question
I store images on my local server and then upload them to S3. Now I want to change this so the images are stored directly on Amazon S3.
But I get this error:
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
Here is my settings.py:
AWS_ACCESS_KEY_ID = "XXXX"
AWS_SECRET_ACCESS_KEY = "XXXX"
IMAGES_STORE = 's3://how.are.you/'
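For reference, a minimal settings.py sketch with those values in context. The ITEM_PIPELINES entry is an assumption not shown in the question; the scrapy.contrib path matches the Scrapy 0.22 layout mentioned below, and the credentials and bucket name are placeholders taken from the question:

```python
# Hypothetical settings.py sketch for Scrapy 0.22 (the pipeline path
# follows the old scrapy.contrib layout; adjust for newer versions).
AWS_ACCESS_KEY_ID = "XXXX"          # placeholder credentials
AWS_SECRET_ACCESS_KEY = "XXXX"
IMAGES_STORE = 's3://how.are.you/'  # bucket URI from the question

# The images pipeline must also be enabled for IMAGES_STORE to take effect:
ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 1}
```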
Do I need to add something?
My Scrapy version: Scrapy==0.22.2
Please guide me, thanks!
Answer
I found that the cause of the problem is the upload policy. The function Key.set_contents_from_string() takes a policy argument, which defaults to S3FilesStore.POLICY. So modify the code in scrapy/contrib/pipeline/files.py, changing
return threads.deferToThread(k.set_contents_from_string, buf.getvalue(),
                             headers=h, policy=self.POLICY)

to

return threads.deferToThread(k.set_contents_from_string, buf.getvalue(),
                             headers=h)
Maybe you can try it and share the result here.
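As an alternative to editing the installed library, the same effect can usually be achieved by overriding the store's class-level POLICY attribute from your own project code. The sketch below only illustrates that pattern: the S3FilesStore class here is a local stub standing in for scrapy.contrib.pipeline.files.S3FilesStore (it is not the real Scrapy class), so it runs without Scrapy or boto installed:

```python
# Minimal stand-in for scrapy.contrib.pipeline.files.S3FilesStore
# (a local stub, NOT the real class) illustrating the pattern of
# overriding the class-level POLICY instead of patching library source.
class S3FilesStore:
    POLICY = 'public-read'  # default ACL sent with each upload

    def persist_file(self, key, buf):
        # The real store passes policy=self.POLICY to
        # Key.set_contents_from_string(); the stub just reports it.
        return {'key': key, 'policy': self.POLICY}

# Overriding the class attribute (e.g. from your project's pipeline
# module) changes the ACL for all subsequent uploads in one place:
S3FilesStore.POLICY = 'private'

print(S3FilesStore().persist_file('img/1.jpg', b'...')['policy'])  # private
```

If the bucket rejects the public-read canned ACL, uploads fail with 403 Forbidden, which matches the error above; switching the policy (or dropping the policy argument, as in the answer) avoids sending the rejected ACL.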