Scrapy merge subsite-item with site-item


Problem description

I'm trying to scrape details from a subsite and merge them with the details scraped from the main site. I've been researching on Stack Overflow as well as in the documentation, but I still can't get my code to work. It seems that my function to extract additional details from the subsite does not work. If anyone could take a look, I would be very grateful.

# -*- coding: utf-8 -*-
from scrapy import Request
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapeInfo.items import infoItem, InfoItemSubSite
import pyodbc


class scrapeInfo(Spider):
    name = "info"
    allowed_domains = ["nevermind.com"]  # domain only, no scheme
    start_urls = []

    def start_requests(self):

        #Get infoID and Type from database
        self.conn = pyodbc.connect('DRIVER={SQL Server};SERVER=server;DATABASE=dbname;UID=user;PWD=password')
        self.cursor = self.conn.cursor()
        self.cursor.execute("SELECT InfoID, category FROM dbo.StageItem")

        rows = self.cursor.fetchall()

        for row in rows:
            url = 'http://www.nevermind.com/info/'
            InfoID = row[0]
            category = row[1]
            yield self.make_requests_from_url(url + str(InfoID), InfoID, category, self.parse)

    def make_requests_from_url(self, url, InfoID, category, callback):
        request = Request(url, callback)
        request.meta['InfoID'] = InfoID
        request.meta['category'] = category
        return request

    def parse(self, response):
        hxs = Selector(response)
        infodata = hxs.xpath('div[2]/div[2]')  # input item path

        itemPool = []

        InfoID = response.meta['InfoID']
        category = response.meta['category']

        for info in infodata:
            item = infoItem()
            item_cur, item_hist = InfoItemSubSite(), InfoItemSubSite()

            # Stem details
            item['id'] = InfoID
            item['field'] = info.xpath('tr[1]/td[2]/p/b/text()').extract()
            item['field2'] = info.xpath('tr[2]/td[2]/p/b/text()').extract()
            item['field3'] = info.xpath('tr[3]/td[2]/p/b/text()').extract()
            item_cur['field4'] = info.xpath('tr[4]/td[2]/p/b/text()').extract()
            item_cur['field5'] = info.xpath('tr[5]/td[2]/p/b/text()').extract()
            item_cur['field6'] = info.xpath('tr[6]/td[2]/p/b/@href').extract()

            # Extract additional information about item_cur from the referring site
            # This part does not work
            if item_cur['field6']:
                url = 'http://www.nevermind.com/info/sub/' + item_cur['field6'][0]
                request = Request(url, callback=self.parse_item_sub)
                request.meta['category'] = category
                yield self.parse_item_sub(url, category)

            item_hist['field5'] = info.xpath('tr[5]/td[2]/p/b/text()').extract()
            item_hist['field6'] = info.xpath('tr[6]/td[2]/p/b/text()').extract()
            item_hist['field7'] = info.xpath('tr[7]/td[2]/p/b/@href').extract()

            item['subsite_dic'] = [dict(item_cur), dict(item_hist)]

            itemPool.append(item)
            yield item

    # Function to extract additional info from the subsite and return it to the original item.
    def parse_item_sub(self, response, category):
        hxs = Selector(response)
        subsite = hxs.xpath('div/div[2]')  # input base path

        category = response.meta['category']

        for i in subsite:
            item = InfoItemSubSite()
            if category == 'first':
                item['subsite_field1'] = i.xpath('/td[2]/span/@title').extract()
                item['subsite_field2'] = i.xpath('/tr[4]/td[2]/text()').extract()
                item['subsite_field3'] = i.xpath('/div[5]/a[1]/@href').extract()
            else:
                item['subsite_field1'] = i.xpath('/tr[10]/td[3]/span/@title').extract()
                item['subsite_field2'] = i.xpath('/tr[4]/td[1]/text()').extract()
                item['subsite_field3'] = i.xpath('/div[7]/a[1]/@href').extract()
            return item

I've been looking at these examples together with a lot of other examples (Stack Overflow is great for that!), as well as the Scrapy documentation, but I'm still unable to understand how to get details sent from one function and merged with the items scraped in the original function.

How do I merge the results from the target page into the current page?

Solution


What you are looking for here is called request chaining. Your problem is yielding one item built from several requests. A solution is to chain the requests while carrying your item along in the request's meta attribute.
Example:

def parse(self, response):
    item = MyItem()
    item['name'] = response.xpath("//div[@id='name']/text()").extract()
    more_page = ...  # some page that offers more details
    # go to more page and take your item with you.
    yield Request(more_page, 
                  self.parse_more,
                  meta={'item':item})  


def parse_more(self, response):
    # get your item from the meta
    item = response.meta['item']
    # fill it in with more data and yield!
    item['last_name'] = response.xpath("//div[@id='lastname']/text()").extract()
    yield item 
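
Applied to the spider in the question, a minimal sketch of that pattern could look like the following. It is trimmed to one stem field and one subsite field, reuses the asker's placeholder names (infoItem, InfoItemSubSite, the field/subsite_field keys, the nevermind.com URLs) and the meta values set in start_requests, and assumes from scrapy import Request; the selectors are stand-ins, not tested values:

def parse(self, response):
    for info in response.xpath('div[2]/div[2]'):  # input item path
        item = infoItem()
        item['id'] = response.meta['InfoID']
        item['field'] = info.xpath('tr[1]/td[2]/p/b/text()').extract()

        # link to the subsite that holds the extra details
        sub_href = info.xpath('tr[6]/td[2]/p/b/@href').extract()
        if sub_href:
            # carry the half-filled item along in meta; it is
            # completed and yielded in parse_item_sub
            yield Request('http://www.nevermind.com/info/sub/' + sub_href[0],
                          callback=self.parse_item_sub,
                          meta={'item': item,
                                'category': response.meta['category']})
        else:
            # no subsite link: the item is already complete
            yield item

def parse_item_sub(self, response):
    # retrieve the item started in parse and add the subsite details
    item = response.meta['item']
    sub = InfoItemSubSite()
    if response.meta['category'] == 'first':
        sub['subsite_field1'] = response.xpath('//td[2]/span/@title').extract()
    else:
        sub['subsite_field1'] = response.xpath('//tr[10]/td[3]/span/@title').extract()
    item['subsite_dic'] = [dict(sub)]
    yield item  # the finished item is yielded here, not in parse

The key design point is that parse never yields an item that still needs subsite data: the half-filled item travels forward in meta, and only the last callback in the chain yields it, so each chain produces exactly one complete item.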
