scrapyd deploy shows 0 spiders


Problem Description

I am using Scrapy for a project. I ran the following commands to deploy the project:

$ scrapy deploy -l

Then I got the following output:

scrapysite http://localhost:6800/

$ cat scrapy.cfg

[settings] 
default = scrapBib.settings

[deploy:scrapysite]  
url = http://localhost:6800/  
project = scrapBib

$ scrapy deploy scrapysite -p scrapBib

Building egg of scrapBib-1346242513
'build/lib.linux-x86_64-2.7' does not exist -- can't clean it

'build/bdist.linux-x86_64' does not exist -- can't clean it

'build/scripts-2.7' does not exist -- can't clean it

zip_safe flag not set; analyzing archive contents...

Deploying scrapBib-1346242513 to http://localhost:6800/addversion.json

2012-08-29 17:45:14+0530 [HTTPChannel,22,127.0.0.1] 127.0.0.1 - - [29/Aug/2012:12:15:13 +0000] "POST /addversion.json HTTP/1.1" 200 79 "-" "Python-urllib/2.7"

Server response (200):

{"status": "ok", "project": "scrapBib", "version": "1346242513", "spiders": 0}

As you can see, spiders is reported as 0, although I have written 3 spiders inside the project/spiders/ folder. As a result, I am unable to start the crawl with curl requests. Please help.
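
For reference, scrapyd's listspiders.json endpoint shows which spiders the server actually registered for a project, and schedule.json is the endpoint a crawl is started against. A minimal sketch of both checks, assuming a hypothetical spider name myspider (the question does not give the real spider names):

$ curl "http://localhost:6800/listspiders.json?project=scrapBib"
$ curl http://localhost:6800/schedule.json -d project=scrapBib -d spider=myspider

With 0 spiders registered, the first call returns an empty spider list and the second fails, which is why the curl requests cannot start a crawl.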

Recommended Answer

I also faced this issue once. Do two things (a cleanup sketch follows the steps):

1) Remove project.egg-info, build, and setup.py from your local system.

2) Remove all deployed versions from your server.

Then try to deploy again; it will be fixed.
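
A minimal sketch of that cleanup, assuming the commands run from the project root, that the egg-info directory is literally named project.egg-info (it may instead be named after your project), and that the stale deployments are dropped via scrapyd's delproject.json endpoint, which removes a project and all its uploaded versions:

$ rm -rf project.egg-info build setup.py
$ curl http://localhost:6800/delproject.json -d project=scrapBib
$ scrapy deploy scrapysite -p scrapBib

Deleting the local build artifacts forces a fresh egg to be built, so spiders added since the last deploy get packaged instead of the stale build being reused.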

