Scrapy run from Python


Problem description

I am trying to run Scrapy from Python. I'm looking at this code (source):

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log
from testspiders.spiders.followall import FollowAllSpider

spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run() # the script will block here

My issue is that I'm confused about how to adjust this code to run my own spider. I have called my spider "spider_a", and it specifies the domain to crawl within the spider itself.
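
For reference, a spider set up that way might look roughly like the sketch below. The module path testspiders/spiders/spider_a.py and the class name MySpider mirror the import used in the solution further down; the domain, start URL, and parse logic are placeholders, and older Scrapy releases used scrapy.spider.BaseSpider as the base class instead of scrapy.Spider.

# testspiders/spiders/spider_a.py  (hypothetical layout)
import scrapy

class MySpider(scrapy.Spider):
    name = 'spider_a'                      # the name used by "scrapy crawl spider_a"
    allowed_domains = ['example.com']      # placeholder: the domain is defined inside the spider
    start_urls = ['http://example.com/']   # placeholder start URL

    def parse(self, response):
        # placeholder callback: just log the visited URL
        self.log('Visited %s' % response.url)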

What I am asking is: if I run my spider with the following command:

scrapy crawl spider_a

How do I adjust the example Python code above to do the same?

Solution

Just import it and pass it to crawler.crawl(), like:

from testspiders.spiders.spider_a import MySpider

spider = MySpider()
crawler.crawl(spider)
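
Putting the two pieces together, the adapted script stays identical to the example in the question except for the spider import and instantiation; this sketch keeps the same pre-1.0 Crawler/log API used above:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log
from testspiders.spiders.spider_a import MySpider

spider = MySpider()             # no domain argument: the spider defines it internally
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()  # the script will block here until the crawl finishes

If the spider lives inside a Scrapy project, scrapy.utils.project.get_project_settings() can be used in place of the bare Settings() so that the project's settings.py is picked up. Note that the Crawler/configure API shown here changed in Scrapy 1.0, which introduced CrawlerProcess as the documented way to run spiders from a script.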
