UnicodeEncodeError after setting restrict_xpaths settings


Problem Description


I'm new to Python and Scrapy. After setting restrict_xpaths to "//table[@class="lista"]" I received the following traceback. Strangely, with other XPath rules the crawler works properly.

Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/base.py", line 800, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/task.py", line 602, in _tick
    taskObj._oneWorkUnit()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/task.py", line 479, in _oneWorkUnit
    result = self._iterator.next()
  File "/Library/Python/2.7/site-packages/scrapy/utils/defer.py", line 57, in <genexpr>
    work = (callable(elem, *args, **named) for elem in iterable)
--- <exception caught here> ---
  File "/Library/Python/2.7/site-packages/scrapy/utils/defer.py", line 96, in iter_errback
    yield it.next()
  File "/Library/Python/2.7/site-packages/scrapy/contrib/spidermiddleware/offsite.py", line 23, in process_spider_output
    for x in result:
  File "/Library/Python/2.7/site-packages/scrapy/contrib/spidermiddleware/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/Library/Python/2.7/site-packages/scrapy/contrib/spidermiddleware/urllength.py", line 33, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Library/Python/2.7/site-packages/scrapy/contrib/spidermiddleware/depth.py", line 50, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Library/Python/2.7/site-packages/scrapy/contrib/spiders/crawl.py", line 73, in _parse_response
    for request_or_item in self._requests_to_follow(response):
  File "/Library/Python/2.7/site-packages/scrapy/contrib/spiders/crawl.py", line 52, in _requests_to_follow
    links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
  File "/Library/Python/2.7/site-packages/scrapy/contrib/linkextractors/sgml.py", line 124, in extract_links
    ).encode(response.encoding)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/iso8859_2.py", line 12, in encode
    return codecs.charmap_encode(input,errors,encoding_table)
exceptions.UnicodeEncodeError: 'charmap' codec can't encode character u'\xbb' in position 686: character maps to <undefined>

Here is the MySpider class.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from ds_crawl.items import DsCrawlItem

class MySpider(CrawlSpider):
    name = 'inside'
    allowed_domains = ['wroclaw.dlastudenta.pl']
    start_urls = ['http://wroclaw.dlastudenta.pl/stancje/']

    rules = (
        Rule(SgmlLinkExtractor(allow=('show_stancja',), restrict_xpaths=('//table[@class="lista"]',)), callback='parse_item', follow=True),)

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select("//p[@class='bbtext intextAd']")
        for title in titles:
            item = DsCrawlItem()
            item['content'] = title.select("text()").extract()
            print item

Any explanation of this error and help will be appreciated. Thank you.

Solution

That's a bug caused by the web page using the &raquo; entity, which lxml translates to the unicode character \xbb. When you use the restrict_xpaths argument, the link extractor encodes the content back to the original encoding, iso8859-2, which fails because \xbb is not a valid character in that encoding.

This simple line reproduces the exception:

>>> u'\xbb'.encode('iso8859-2')
...
UnicodeEncodeError: 'charmap' codec can't encode character u'\xbb' in position 0: character maps to <undefined>
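For contrast, the same character encodes fine as utf-8; the whole failure can be reproduced with the standard library alone, no Scrapy needed:

```python
# -*- coding: utf-8 -*-
# lxml turns the &raquo; entity into the unicode character U+00BB.
raquo = u'\xbb'

# utf-8 can represent every unicode character, so this succeeds:
assert raquo.encode('utf-8') == b'\xc2\xbb'

# iso8859-2 has no mapping for U+00BB, so this raises the same
# UnicodeEncodeError seen in the traceback above:
try:
    raquo.encode('iso8859-2')
except UnicodeError as exc:
    print('encoding failed as expected: %s' % exc)
```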

A workaround is to force utf-8 for all responses. This can be done with a simple downloader middleware:

# file: myproject/middlewares.py

class ForceUTF8Response(object):
    """A downloader middleware to force UTF-8 encoding for all responses."""
    encoding = 'utf-8'

    def process_response(self, request, response, spider):
        # Note: Use response.body_as_unicode() instead of response.text in Scrapy <1.0.
        new_body = response.text.encode(self.encoding)
        return response.replace(body=new_body, encoding=self.encoding)

In your settings:

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.ForceUTF8Response': 100,
}
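The core of the middleware is just a decode/re-encode round trip, which can be sketched without Scrapy (the Polish sample text below is made up for illustration):

```python
# -*- coding: utf-8 -*-
# A made-up body as the site might serve it: Polish text
# ("rooms for rent") encoded as iso8859-2.
original_body = u'Pokoje do wynaj\u0119cia'.encode('iso8859-2')

# The middleware's transformation: decode with the response's
# declared encoding, then re-encode as utf-8 before the spider
# (and the link extractor) ever see the response.
new_body = original_body.decode('iso8859-2').encode('utf-8')

assert new_body == u'Pokoje do wynaj\u0119cia'.encode('utf-8')

# Since utf-8 can encode every unicode character, the link
# extractor's ").encode(response.encoding)" call can no longer fail:
assert u'\xbb'.encode('utf-8')
```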
