A good way to get the charset/encoding of an HTTP response in Python


Question

Looking for an easy way to get the charset/encoding information of an HTTP response using Python urllib2, or any other Python library.

>>> url = 'http://some.url.value'
>>> request = urllib2.Request(url)
>>> conn = urllib2.urlopen(request)
>>> response_encoding = ?

I know that it is sometimes present in the 'Content-Type' header, but that header has other information, and it's embedded in a string that I would need to parse. For example, the Content-Type header returned by Google is

>>> conn.headers.getheader('content-type')
'text/html; charset=utf-8'

I could work with that, but I'm not sure how consistent the format will be. I'm pretty sure it's possible for charset to be missing entirely, so I'd have to handle that edge case. Some kind of string split operation to get the 'utf-8' out of it seems like it has to be the wrong way to do this kind of thing.

>>> content_type_header = conn.headers.getheader('content-type')
>>> if '=' in content_type_header:
...     charset = content_type_header.split('=')[1]

That's the kind of code that feels like it's doing too much work. I'm also not sure if it will work in every case. Does anyone have a better way to do this?

Answer

To parse the HTTP header you could use cgi.parse_header():

import cgi

_, params = cgi.parse_header('text/html; charset=utf-8')
print params['charset'] # -> utf-8
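
Applied to the question's conn object, with a fallback for the missing-charset edge case (the 'utf-8' default here is only an illustrative assumption, not something the server promises):

content_type = conn.headers.getheader('content-type', '')
_, params = cgi.parse_header(content_type)
charset = params.get('charset', 'utf-8')  # assumed fallback when no charset is declared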

Or using the response object:

response = urllib2.urlopen('http://example.com')
response_encoding = response.headers.getparam('charset')
# or in Python 3: response.headers.get_content_charset(default)
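
For reference, a minimal Python 3 sketch of the same idea using only the standard library:

from urllib.request import urlopen

response = urlopen('http://example.com')
# get_content_charset() returns None (or the supplied fallback) when no charset is declared
response_encoding = response.headers.get_content_charset(failobj='utf-8')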

In general, the server may lie about the encoding or not report it at all (the default depends on the content type), or the encoding might be specified inside the response body, e.g., in a <meta> element for HTML documents or in the XML declaration for XML documents. As a last resort, the encoding could be guessed from the content itself.
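
If you need to handle that yourself, here is a rough sketch of such a fallback chain (the sniff_encoding helper name is made up for illustration, and the guessing step assumes the third-party chardet package):

import re
import chardet  # pip install chardet

def sniff_encoding(body, declared=None):
    """Best-effort guess: declared header value, then a charset= in the body, then chardet."""
    if declared:
        return declared
    # naive scan of the first 2 KB for a charset=... declaration (e.g. <meta charset="utf-8">)
    match = re.search(br'''charset=["']?([\w-]+)''', body[:2048], re.IGNORECASE)
    if match:
        return match.group(1).decode('ascii', 'replace')
    guess = chardet.detect(body)  # returns a dict like {'encoding': ..., 'confidence': ...}
    return guess['encoding']      # may be None if detection fails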

You could use requests to get Unicode text:

import requests # pip install requests

r = requests.get(url)
unicode_str = r.text # may use `chardet` to auto-detect encoding
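
requests also exposes the header-declared and the body-guessed encodings separately, if you only need the name:

r = requests.get(url)
print(r.encoding)           # charset taken from the Content-Type header, if any
print(r.apparent_encoding)  # encoding guessed from the raw body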

Or BeautifulSoup to parse HTML (and convert it to Unicode as a side effect):

from bs4 import BeautifulSoup # pip install beautifulsoup4

soup = BeautifulSoup(urllib2.urlopen(url)) # may use `cchardet` for speed
# ...
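
Whatever encoding BeautifulSoup settled on is recorded on the soup object, so you can inspect it after parsing:

print(soup.original_encoding)  # e.g. 'utf-8'; None if it was given Unicode to begin with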

Or bs4.UnicodeDammit directly for arbitrary content (not necessarily HTML):

from bs4 import UnicodeDammit

dammit = UnicodeDammit(b"Sacr\xc3\xa9 bleu!")
print(dammit.unicode_markup)
# -> Sacré bleu!
print(dammit.original_encoding)
# -> utf-8
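
If you already have a shortlist of likely encodings, you can pass it to UnicodeDammit so those are tried first (a sketch based on the bs4 documentation; the byte string below is latin-1 encoded):

dammit = UnicodeDammit(b"Sacr\xe9 bleu!", ["latin-1", "iso-8859-1"])
print(dammit.unicode_markup)
# -> Sacré bleu!
print(dammit.original_encoding)
# -> latin-1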
