scrapy: another method to avoid a lot of try/except
Problem description
I want to ask a question. When I use a CSS selector, extract() returns a list. So if the CSS selector matches nothing, indexing that list raises an error in the terminal (shown below), and the spider won't write any item to my JSON file:
    item['intro'] = intro[0]
exceptions.IndexError: list index out of range
So I use try and except to check whether the list exists:
sel = Selector(response)
sites = sel.css("div.con ul > li")
for site in sites:
    item = Shopping_appleItem()
    links = site.css(" a::attr(href)").extract()
    title = site.css(" a::text").extract()
    date = site.css(" time::text").extract()
    try:
        item['link'] = urlparse.urljoin(response.url, links[0])
    except:
        print "link not found"
    try:
        item['title'] = title[0]
    except:
        print "title not found"
    try:
        item['date'] = date[0]
    except:
        print "date not found"
I feel I am using a lot of try/except blocks, and I don't know if that is a good approach. Please give me some guidance. Thank you.
Recommended answer
You can use a separate function to extract the data. For example, for text nodes, sample code is here:
def extract_text(node):
    if not node:
        return ''
    _text = './/text()'
    extracted_list = [x.strip() for x in node.xpath(_text).extract() if len(x.strip()) > 0]
    if not extracted_list:
        return ''
    return ' '.join(extracted_list)
and you can call this method like this:
self.extract_text(sel.css("your_path"))