Scrapy - Continuously fetch urls to crawl from database


Question



I'd like to continuously fetch urls to crawl from a database. So far I have succeeded in fetching urls from the database, but I'd like my spider to keep reading from it, since the table will be populated by another thread.

I have a (working) pipeline that removes a url from the table once it has been crawled. In other words, I'd like to use my database as a queue. I have tried different approaches with no luck.
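
(The removal pipeline itself is not shown in this question; purely for illustration, a minimal sketch of such a "delete once crawled" pipeline is given below. The table name and the url item field are assumptions based on the spider code that follows.)

class RemoveCrawledUrlPipeline(object):
    """Hypothetical sketch: drop a url from the queue table once it has been crawled."""

    def open_spider(self, spider):
        # Reuse the connection the spider opens in __init__ (an assumption).
        self.db = spider.db

    def process_item(self, item, spider):
        cursor = self.db.cursor()
        # 'mytable' and item['url'] are assumed names.
        cursor.execute('DELETE FROM mytable WHERE url = %s', (item['url'],))
        self.db.commit()
        cursor.close()
        return item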

Here's my spider.py

import logging

import MySQLdb
import scrapy
from scrapy import Request, signals


class MySpider(scrapy.Spider):
    MAX_RETRY = 10
    logger = logging.getLogger(__name__)

    name = 'myspider'
    start_urls = []

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_closed, signals.spider_closed)
        return spider

    def __init__(self):
        # Open one connection for the lifetime of the spider.
        db = MySQLdb.connect(
            user='myuser',
            passwd='mypassword',
            db='mydatabase',
            host='myhost',
            charset='utf8',
            use_unicode=True
        )
        self.db = db
        self.logger.info('Connection to database opened')
        super(MySpider, self).__init__()

    def spider_closed(self, spider):
        self.db.close()
        self.logger.info('Connection to database closed')

    def start_requests(self):
        # Seed the crawl with every url that has not exceeded the retry limit.
        cursor = self.db.cursor()
        cursor.execute('SELECT * FROM mytable WHERE nbErrors < %s', (self.MAX_RETRY,))
        rows = cursor.fetchall()
        for row in rows:
            yield Request(row[0], self.parse, meta={
                'splash': {
                    'args': {
                        'html': 1,
                        'wait': 2
                    }
                }
            }, errback=self.errback_httpbin)
        cursor.close()
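
(As an aside, errback_httpbin is referenced above but never shown. Purely for completeness, a minimal errback modelled on the example in the Scrapy documentation might look like the sketch below; anything beyond logging, such as incrementing nbErrors, would be an assumption.)

from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError, TCPTimedOutError, TimeoutError


def errback_httpbin(self, failure):
    # Method of MySpider: log the failure type and the offending url.
    # A fuller version could also increment nbErrors for the matching row
    # so that MAX_RETRY eventually filters the url out of the queue.
    if failure.check(HttpError):
        self.logger.error('HttpError on %s', failure.value.response.url)
    elif failure.check(DNSLookupError):
        self.logger.error('DNSLookupError on %s', failure.request.url)
    elif failure.check(TimeoutError, TCPTimedOutError):
        self.logger.error('TimeoutError on %s', failure.request.url)
    else:
        self.logger.error(repr(failure))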

Thank you very much

EDIT

Here's my new code.

# New/changed methods on MySpider; these also require:
#   import time
#   from datetime import datetime
#   from scrapy.exceptions import DontCloseSpider

@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
    spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
    crawler.signals.connect(spider.spider_closed, signals.spider_closed)
    crawler.signals.connect(spider.spider_idle, signals.spider_idle)
    return spider

def spider_idle(self, spider):
    self.logger.info('IDLE')
    time.sleep(5)
    for url in self.getUrlsToCrawl():
        self.logger.info(url[1])
        self.crawler.engine.crawl(Request(url[1], self.parse, meta={
            'splash': {
                'args': {
                    'html': 1,
                    'wait': 5
                }
            },
            'dbId': url[0]
        }, errback=self.errback_httpbin), self)
    raise DontCloseSpider

def getUrlsToCrawl(self):
    dateNowUtc = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
    cursor = self.db.cursor()
    cursor.execute('SELECT id, url FROM mytable WHERE nbErrors < %s AND domain = %s AND nextCrawl < %s',
                   (self.MAX_RETRY, self.domain, dateNowUtc))
    urls = cursor.fetchall()
    cursor.close()
    return urls

In my logs I can see:
INFO: IDLE
INFO: someurl
INFO: IDLE
INFO: someurl

But when I update the data in my table to fetch more or fewer urls, the output never changes. It seems that the data fetched from the database is never fresh, and I never crawl the requests scheduled in the spider_idle method.

Solution

I would personally recommend starting a new spider every time you have to crawl something, but if you want to keep the process alive, I would recommend using the spider_idle signal:

@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
    spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
    crawler.signals.connect(spider.spider_closed, signals.spider_closed)
    crawler.signals.connect(spider.spider_idle, signals.spider_idle)
    return spider
...
def spider_idle(self, spider):
    # read database again and send new requests

    # check that sending new requests here is different
    self.crawler.engine.crawl(
        Request(
            new_url,
            callback=self.parse),
        spider
    )

Here you are sending new requests before the spider actually closes.
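
Putting this together with the database helper from the question, a fuller sketch of the pattern could look like the following. This assumes the getUrlsToCrawl method defined earlier and the two-argument crawler.engine.crawl call used throughout this post; raising DontCloseSpider from the spider_idle handler is the documented way to keep the spider from closing while the queue is temporarily empty.

import scrapy
from scrapy import Request, signals
from scrapy.exceptions import DontCloseSpider


class MySpider(scrapy.Spider):
    name = 'myspider'

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_idle, signals.spider_idle)
        return spider

    def spider_idle(self, spider):
        # Re-read the queue table whenever the scheduler has run out of requests.
        for db_id, url in self.getUrlsToCrawl():  # helper shown in the question
            self.crawler.engine.crawl(
                Request(url, callback=self.parse, meta={'dbId': db_id}),
                spider,
            )
        # Keep the spider alive even if no new urls were found this time.
        raise DontCloseSpider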
