PySpark — UnicodeEncodeError: 'ascii' codec can't encode character
Problem description
Loading a dataframe with foreign characters (åäö) into Spark using spark.read.csv with encoding='utf-8', and trying to do a simple show().
>>> df.show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/spark/python/pyspark/sql/dataframe.py", line 287, in show
    print(self._jdf.showString(n, truncate))
UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 579: ordinal not in range(128)
I figure this is probably related to Python itself, but I cannot understand how any of the tricks mentioned here, for example, can be applied in the context of PySpark and the show() function.
https://issues.apache.org/jira/browse/SPARK-11772 talks about this issue and gives a solution: run

export PYTHONIOENCODING=utf8

before running pyspark. I wonder why the above works, because sys.getdefaultencoding() returned utf-8 for me even without it.
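Part of the confusion may be that sys.getdefaultencoding() governs implicit str conversions, while print() uses sys.stdout.encoding, which is derived from the locale and is exactly what PYTHONIOENCODING overrides. The two can disagree. A minimal sketch, using an in-memory stream to stand in for an ASCII terminal:

```python
import io
import sys

# PYTHONIOENCODING changes sys.stdout's encoding, not the interpreter's
# default encoding: sys.getdefaultencoding() can report 'utf-8' while
# print() still goes through an ASCII-encoded stdout and fails.
default_enc = sys.getdefaultencoding()

# Simulate a terminal whose stdout was set up as ASCII, as in the traceback:
ascii_stream = io.TextIOWrapper(io.BytesIO(), encoding='ascii')
try:
    print(u'\ufffd', file=ascii_stream)
    raised = False
except UnicodeEncodeError:
    raised = True

print(default_enc, raised)  # the default encoding is utf-8, yet printing fails
```

This matches the behavior in the question: getdefaultencoding() reports utf-8, but the crash comes from the stream's own encoding.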
The question "How to set sys.stdout encoding in Python 3?" also talks about this and gives the following solution for Python 3:
import sys
sys.stdout = open(sys.stdout.fileno(), mode='w', encoding='utf8', buffering=1)
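The same rewrapping idea can be sketched without touching the real sys.stdout, using an in-memory stream (utf8_wrapper is a helper name invented here, not part of the original answer):

```python
import io

def utf8_wrapper(stream):
    """Rewrap a text stream's underlying byte buffer with UTF-8 encoding,
    so that output such as df.show() can emit non-ASCII characters."""
    return io.TextIOWrapper(stream.buffer, encoding='utf8',
                            errors='replace', line_buffering=True)

raw = io.BytesIO()
ascii_stdout = io.TextIOWrapper(raw, encoding='ascii')  # stand-in for stdout
out = utf8_wrapper(ascii_stdout)
print(u'åäö', file=out)  # would raise UnicodeEncodeError on the ASCII stream
out.flush()
```

On Python 3.7+, sys.stdout.reconfigure(encoding='utf-8') achieves the same effect without reopening the file descriptor.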