Scrapy and Selenium: only scrape two pages


Problem description

I want to crawl a website that has more than 10 pages. Every page contains 10 links; the spider collects those links in def parse(): and then follows each one to scrape the detail data I want in def parse_detail():.

Please guide me on how to crawl only two pages instead of all of them. Thanks. Here is my code; it only crawls one page and then the spider closes.

def __init__(self):
    self.driver = webdriver.Firefox()
    dispatcher.connect(self.spider_closed, signals.spider_closed)

def parse(self, response):
    self.driver.implicitly_wait(20) 
    self.driver.get(response.url)
    sites = self.driver.find_elements_by_css_selector("")
    for site in sites:
        item = CItem()
        linkiwant = site.find_element_by_css_selector(" ") 
        start = site.find_element_by_css_selector(" ")  
        item['link'] = linkiwant.get_attribute("href") 
        item['start_date']  = start.text
        yield Request(url=item['link'], meta={'item':item}, callback=self.parse_detail)  

    #how to write to only catch 2 pages??
    i=0
    if i< 2:
        try:
            next = self.driver.find_element_by_xpath("/li[@class='p_next'][1]")   
            next_page = next.text
            if next_page == "next_page":  
                next.click()    
                self.driver.refresh()  
                yield Request(self.driver.current_url, callback=self.parse)
                i+=1
        except:
            print "page not found"
def parse_detail(self,response):
    item = response.meta['item']
    self.driver.implicitly_wait(20)  
    self.driver.get(response.url)
    sel = Selector(response)
    sites = sel.css("")            
    for site in sites:
        item['title'] = site.css(" ").extract()[0] 
        item['titleURL'] = site.css(" ").extract()[0]
        ..
        yield item   
def spider_closed(self, spider):
    self.driver.close()

Recommended answer

Make i persistent by storing it on the spider instance:

def __init__(self):
    self.page_num = 0
    self.driver = webdriver.Firefox()
    dispatcher.connect(self.spider_closed, signals.spider_closed)

Then, in parse(), check the instance attribute instead of a local counter:

    # follow the "next page" link at most twice
    if self.page_num < 2:
        try:
            next = self.driver.find_element_by_xpath("/li[@class='p_next'][1]")
            next_page = next.text
            if next_page == "next_page":
                next.click()
                self.driver.refresh()
                yield Request(self.driver.current_url, callback=self.parse)
                self.page_num += 1
        except:
            print "page not found"
