PySpark — UnicodeEncodeError: 'ascii' codec can't encode character


Question

Loading a DataFrame with foreign characters (åäö) into Spark using spark.read.csv with encoding='utf-8', and trying to do a simple show().

>>> df.show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/spark/python/pyspark/sql/dataframe.py", line 287, in show
    print(self._jdf.showString(n, truncate))
UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 579: ordinal not in range(128)
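For context, a minimal sketch of the loading step described above; the file path and header option are assumptions for illustration, as the question only states that spark.read.csv was used with encoding='utf-8':

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical path; the CSV contains non-ASCII characters such as å, ä, ö.
df = spark.read.csv("people.csv", header=True, encoding="utf-8")
df.show()  # raises UnicodeEncodeError when stdout's encoding is ASCII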

I figure this is probably related to Python itself, but I cannot understand how any of the tricks mentioned here, for example, can be applied in the context of PySpark and the show() function.

Answer

https://issues.apache.org/jira/browse/SPARK-11772 discusses this issue and gives a solution that works: run

export PYTHONIOENCODING=utf8

before running pyspark. I wonder why the above works, since sys.getdefaultencoding() returned utf-8 for me even without it.
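One likely explanation: print() encodes output with sys.stdout.encoding, which is determined by the console locale (or PYTHONIOENCODING), while sys.getdefaultencoding() is a separate setting that governs implicit string conversions. A quick check, as a sketch; the values printed depend on your environment:

import sys

# The default string encoding is not what print() uses for console output.
print(sys.getdefaultencoding())  # e.g. 'utf-8'
# This is what print() actually encodes with; PYTHONIOENCODING overrides it.
print(sys.stdout.encoding)       # e.g. 'ANSI_X3.4-1968' (ASCII) without PYTHONIOENCODING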

How to set sys.stdout encoding in Python 3? also discusses this and gives the following solution for Python 3:

import sys
# Rewrap the underlying stdout file descriptor as a line-buffered UTF-8 text stream
sys.stdout = open(sys.stdout.fileno(), mode='w', encoding='utf8', buffering=1)
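On Python 3.7 and later, the same effect can be achieved without replacing the stream object; this variant is an addition here, not part of the original answer:

import sys

# Python 3.7+: switch the existing stdout text stream to UTF-8 in place
sys.stdout.reconfigure(encoding='utf-8')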

