Scrapy xpath between 2 keywords


Problem description

I am trying to extract some text information between 2 keywords, like this:

item['duties']=titles.select('.//span/text()[following-sibling::*[text()="Qualifications/Duties" and preceding-sibling::*text()="Entity Information"]').extract()

The spider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import request
from scrapy.selector import HtmlXPathSelector
from health.items import HealthItem

class healthspider(CrawlSpider):

    name="health"
    allowed_domains=['mysite']
    start_urls=['myurl']

    rules=(
    Rule(SgmlLinkExtractor(allow=("search/",)), callback="parse_health", follow=True),
    Rule(SgmlLinkExtractor(allow=("url",)),callback="parse_job",follow=True),
    )


    def parse_job(self, response):
        hxs=HtmlXPathSelector(response)
        titles=hxs.select('//*[@itemprop="description"]')
        items = []

        for titles in titles:
            item=HealthItem()
            item['duties']=titles.select('.//span[following-sibling::*[text()="Qualifications/Duties" and preceding-sibling::*text()="Entity Information"]/text()').extract()
            item['entity_info']=titles.select('.//span/text()[45]').extract()
            items.append(item)
        print items
        return items

But I am getting an error saying:

raise ValueError("Invalid XPath: %s" % query)
    exceptions.ValueError: Invalid XPath: .//span/text()[following-sibling::*[text()="Qualifications/Duties" and preceding-sibling::*text()="Entity Information"]

Is there a way to define such xpaths in my spider?

Solution

The expression in the question is not valid XPath: the outer predicate bracket is never closed, and preceding-sibling::*text() is missing a / before text(), which is why the selector raises ValueError. A couple of options instead:

Selecting with a node() test (both text nodes and element nodes):

In [1]: sel.xpath(""".//node()[preceding-sibling::*="Qualifications/Duties"]
                              [following-sibling::*="Entity Information"]""").extract()
Out[1]: 
[u'<br>',
 u'Texas Health Presbyterian Allen is currently in search of a Registered Nurse to help meet the growing needs of our Day Surgery Department to work PRN in Day Surgery and also float to PACU.',
 u'<br>',
 u'<br>',
 u'<b>Basic Qualifications:</b>',
 u'<br>',
 u'<br>',
 u'*Graduate of an accredited school of nursing',
 u'<br>',
 u'*Valid RN license in the state of Texas',
 u'<br>',
 u'*BLS',
 u'<br>',
 u'*ACLS',
 u'<br>',
 u'*PALS within 6 months of hire',
 u'<br>',
 u'*Minimum of 1 - 3 years experience as RN in Day Surgery, PACU, Outpatient Surgery, or ICU',
 u'<br>',
 u'*Strong organizational skills and ability to function in a fast paced work environment',
 u'<br>',
 u'*Ability to accept responsibility and show initiative to work without direct supervision',
 u'<br>',
 u'*A high degree of confidentiality, positive interpersonal skills and ability to function in a fast-paced environment',
 u'<br>',
 u'<br>',
 u'<b>Preferred Qualifications:</b>',
 u'<br>',
 u'<br>',
 u'*Three years RN experience in Outpatient Surgery along with some ICU experience.',
 u'<br>',
 u'*PALS',
 u'<br>',
 u'*PACU , Endoscopy or Ambulatory setting',
 u'<br>',
 u'*IV Conscious Sedation',
 u'<br>',
 u'<br>',
 u'<b>Hours/Schedule:</b>',
 u'<br>',
 u'<br>',
 u'*Variable',
 u'<br>',
 u'<br>',
 u'J2WPeriop',
 u'<br>',
 u'<br>']

Selecting text nodes only (you "lose" the bold lines):

In [2]: sel.xpath(""".//text()[preceding-sibling::*="Qualifications/Duties"]
                              [following-sibling::*="Entity Information"]""").extract()
Out[2]: 
[u'Texas Health Presbyterian Allen is currently in search of a Registered Nurse to help meet the growing needs of our Day Surgery Department to work PRN in Day Surgery and also float to PACU.',
 u'*Graduate of an accredited school of nursing',
 u'*Valid RN license in the state of Texas',
 u'*BLS',
 u'*ACLS',
 u'*PALS within 6 months of hire',
 u'*Minimum of 1 - 3 years experience as RN in Day Surgery, PACU, Outpatient Surgery, or ICU',
 u'*Strong organizational skills and ability to function in a fast paced work environment',
 u'*Ability to accept responsibility and show initiative to work without direct supervision',
 u'*A high degree of confidentiality, positive interpersonal skills and ability to function in a fast-paced environment',
 u'*Three years RN experience in Outpatient Surgery along with some ICU experience.',
 u'*PALS',
 u'*PACU , Endoscopy or Ambulatory setting',
 u'*IV Conscious Sedation',
 u'*Variable',
 u'J2WPeriop']

Selecting the text node siblings of the sections you want, plus the text nodes from the bold lines:

In [3]: sel.xpath(""".//*[preceding-sibling::*="Qualifications/Duties"]
                         [following-sibling::*="Entity Information"]/text()
                     |
                     .//text()[preceding-sibling::*="Qualifications/Duties"]
                              [following-sibling::*="Entity Information"]""").extract()
Out[3]: 
[u'Texas Health Presbyterian Allen is currently in search of a Registered Nurse to help meet the growing needs of our Day Surgery Department to work PRN in Day Surgery and also float to PACU.',
 u'Basic Qualifications:',
 u'*Graduate of an accredited school of nursing',
 u'*Valid RN license in the state of Texas',
 u'*BLS',
 u'*ACLS',
 u'*PALS within 6 months of hire',
 u'*Minimum of 1 - 3 years experience as RN in Day Surgery, PACU, Outpatient Surgery, or ICU',
 u'*Strong organizational skills and ability to function in a fast paced work environment',
 u'*Ability to accept responsibility and show initiative to work without direct supervision',
 u'*A high degree of confidentiality, positive interpersonal skills and ability to function in a fast-paced environment',
 u'Preferred Qualifications:',
 u'*Three years RN experience in Outpatient Surgery along with some ICU experience.',
 u'*PALS',
 u'*PACU , Endoscopy or Ambulatory setting',
 u'*IV Conscious Sedation',
 u'Hours/Schedule:',
 u'*Variable',
 u'J2WPeriop']
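
To plug one of these expressions back into the spider, the text-only variant can replace the original item['duties'] line in parse_job. Below is a minimal sketch reusing the HtmlXPathSelector and HealthItem names from the question; joining the extracted fragments into a single newline-separated string is only one possible way to store the result:

def parse_job(self, response):
    hxs = HtmlXPathSelector(response)
    items = []
    for title in hxs.select('//*[@itemprop="description"]'):
        item = HealthItem()
        # every text node sitting between the "Qualifications/Duties"
        # and "Entity Information" marker elements
        duties = title.select(""".//text()[preceding-sibling::*="Qualifications/Duties"]
                                          [following-sibling::*="Entity Information"]""").extract()
        item['duties'] = u'\n'.join(t.strip() for t in duties if t.strip())
        items.append(item)
    return items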
