Working with non-English characters in columns of Spark Scala dataframes
Problem description
Here is part of a file I am trying to load into a dataframe:
alphabet|Sentence|Comment1
è|Small e|None
Ü|Capital U|None
ã|Small a|
Ç|Capital C|None
When I load this file into a dataframe, all the non-English characters get converted into boxes. I tried option("encoding","UTF-8"), but there is no change.
val nonEnglishDF = spark.read.format("com.databricks.spark.csv").option("delimiter","|").option("header",true).option("encoding","UTF-8").load(hdfs file path)
Please let me know if there is any solution for this. I ultimately need to save the file with the non-English characters unchanged. Currently, when the file is saved, it contains boxes or question marks instead of the non-English characters.
It works with option("encoding","ISO-8859-1"). e.g.
val nonEnglishDF = spark.read.format("com.databricks.spark.csv").option("delimiter","|").option("header",true).option("encoding","ISO-8859-1").load(hdfs file path)