scrapy python call spider from spider
Problem Description
I have one spider that scrapes a page and collects all the URLs.
I have another spider that takes a URL and scrapes it.
I want to call the second spider for each link I get from the first spider.
The code for getting all the links from the first spider:
for site in sites:
    Link = site.xpath('a/@href').extract()
But I don't know how to call the spider for each Link.
Please help.
Recommended Answer
I guess you'd better unite the two spiders and do something like this:
from scrapy import Request

def get_links(self, response):
    # 'sites' is the selector list from your existing first-spider logic
    for site in sites:
        link = site.xpath('a/@href').extract()[0]
        yield Request(url=link, callback=self.scrape_them)

def scrape_them(self, response):
    # by now Scrapy has fetched the link and you get the response here
    ...
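For context, a complete minimal sketch of the merged spider could look like the following. The spider name, start URL, and the div-based selector are placeholders for illustration only; substitute the selectors from your two original spiders:

import scrapy

class CombinedSpider(scrapy.Spider):
    # hypothetical name and start URL, used only to make the sketch self-contained
    name = 'combined'
    start_urls = ['http://example.com/index']

    def parse(self, response):
        # first-spider logic: collect every link on the index page
        for site in response.xpath("//div[@class='site']"):
            link = site.xpath('a/@href').extract_first()
            if link:
                # hand each link over to the second-spider logic
                yield scrapy.Request(url=response.urljoin(link),
                                     callback=self.parse_detail)

    def parse_detail(self, response):
        # second-spider logic: scrape the page behind each link
        yield {
            'url': response.url,
            'title': response.xpath('//title/text()').extract_first(),
        }

Run it the usual way, e.g. scrapy crawl combined -o results.json; Scrapy schedules every Request yielded in parse and passes each downloaded response to parse_detail, so no second spider needs to be launched separately.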