Saving items from Scrapyd to Amazon S3 using Feed Exporter


Problem description

Using Scrapy with Amazon S3 is fairly simple; you set:

  • FEED_URI = 's3://MYBUCKET/feeds/%(name)s/%(time)s.jl'
  • FEED_FORMAT = 'jsonlines'
  • AWS_ACCESS_KEY_ID = [access key]
  • AWS_SECRET_ACCESS_KEY = [secret key]

And everything works fine.
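
For reference, a minimal settings.py sketch of that configuration (the bucket name, feed path and credential placeholders are assumptions, not values from the question):

    # settings.py - minimal sketch of the feed export to S3 described above
    # (bucket name, feed path and credentials are placeholders)
    FEED_URI = 's3://MYBUCKET/feeds/%(name)s/%(time)s.jl'  # items written per spider name and time
    FEED_FORMAT = 'jsonlines'                               # one JSON object per line
    AWS_ACCESS_KEY_ID = 'YOUR-ACCESS-KEY'
    AWS_SECRET_ACCESS_KEY = 'YOUR-SECRET-KEY'

Scrapy's S3 feed storage also needs botocore (or, on older versions, boto) installed on the machine that runs the spider.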

But Scrapyd seems to override that setting and saves the items on the server (with a link in the web UI).

Adding the "items_dir=" setting doesn't seem to change anything.

What kind of setting makes it work?

Extra info that might be relevant: we are using Scrapy-Heroku.

Recommended answer

I also faced the same problem. Removing items_dir= from the scrapyd.conf file worked for me.
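
As a sketch (the surrounding keys are just stock scrapyd.conf defaults kept for context), either delete the items_dir line or leave its value empty; when items_dir points at a directory, Scrapyd passes its own local feed URI to each crawl, which is what overrides the project's FEED_URI:

    # scrapyd.conf - hypothetical excerpt; only the items_dir line matters here
    [scrapyd]
    eggs_dir  = eggs
    logs_dir  = logs
    # An empty (or removed) items_dir stops Scrapyd from overriding FEED_URI,
    # so items go to the S3 URI configured in the project's settings.
    items_dir =

After changing scrapyd.conf, restart Scrapyd so the new setting takes effect.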
