Scrapy and Selenium: only scrape two pages


Problem description

I want to crawl a website that has more than 10 pages. Every page has 10 links; the spider collects those links in def parse() and then follows each one to scrape the data I want in def parse_detail().

Please guide me on how to write this so it crawls only two pages instead of all of them. Thanks. Here is my code; at the moment it only crawls one page and then the spider closes.

def __init__(self):
    self.driver = webdriver.Firefox()
    dispatcher.connect(self.spider_closed, signals.spider_closed)

def parse(self, response):
    self.driver.implicitly_wait(20) 
    self.driver.get(response.url)
    sites = self.driver.find_elements_by_css_selector("")
    for site in sites:
        item = CItem()
        linkiwant = site.find_element_by_css_selector(" ") 
        start = site.find_element_by_css_selector(" ")  
        item['link'] = linkiwant.get_attribute("href") 
        item['start_date']  = start.text
        yield Request(url=item['link'], meta={'item':item}, callback=self.parse_detail)  

    #how to write to only catch 2 pages??
    i=0
    if i< 2:
        try:
            next = self.driver.find_element_by_xpath("/li[@class='p_next'][1]")   
            next_page = next.text
            if next_page == "next_page":  
                next.click()    
                self.driver.refresh()  
                yield Request(self.driver.current_url, callback=self.parse)
                i+=1
        except:
             print "page not found"     
def parse_detail(self,response):
    item = response.meta['item']
    self.driver.implicitly_wait(20)  
    self.driver.get(response.url)
    sel = Selector(response)
    sites = sel.css("")            
    for site in sites:
        item['title'] = site.css(" ").extract()[0] 
        item['titleURL'] = site.css(" ").extract()[0]
        ..
        yield item   
def spider_closed(self, spider):
    self.driver.close()

Answer

Make i persistent:

def __init__(self):
    self.page_num = 0
    self.driver = webdriver.Firefox()
    dispatcher.connect(self.spider_closed, signals.spider_closed)

and then, inside parse(), use the instance attribute instead of a local counter:

    # only follow the "next" link while the instance counter is below the limit
    if self.page_num < 2:
        try:
            next = self.driver.find_element_by_xpath("/li[@class='p_next'][1]")
            next_page = next.text
            if next_page == "next_page":
                next.click()
                self.driver.refresh()
                yield Request(self.driver.current_url, callback=self.parse)
                self.page_num += 1
        except:
            print "page not found"
