Spark to parse backslash escaped comma in CSV files that are not enclosed by quotes
Question
It seems Spark is not able to handle escaped characters in CSV files when the fields are not enclosed by quotes, for example:
Name,Age,Address,Salary
Luke,24,Mountain View\,CA,100
I am using PySpark; the following code apparently won't handle the comma inside the Address field.
df = spark.read.csv(fname, schema=given_schema,
                    sep=',', quote='', mode="FAILFAST")
Any suggestions?
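For reference, the tokenization being asked for here, treating `\,` as a literal comma and every other comma as a field separator, can be sketched outside Spark with the standard `re` module (a plain-Python illustration, not a Spark solution):

```python
import re

line = "Luke,24,Mountain View\\,CA,100"

# Split on commas NOT preceded by a backslash (negative lookbehind),
# then unescape "\," back to a plain "," inside each field.
fields = [f.replace("\\,", ",") for f in re.split(r"(?<!\\),", line)]
print(fields)  # ['Luke', '24', 'Mountain View,CA', '100']
```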
You could try using an RDD first: reformat the data, then create a DataFrame from it.
import csv

df = sc.textFile(PATH_TO_FILE) \
    .map(lambda x: x.replace("\\,", "|")) \
    .mapPartitions(lambda line: csv.reader(line, delimiter=',')) \
    .filter(lambda line: line[0] != 'Name') \
    .toDF(['Name', 'Age', 'Address', 'Salary'])
This is how your DataFrame looks now:
>>> df.show();
+----+---+----------------+------+
|Name|Age| Address|Salary|
+----+---+----------------+------+
|Luke| 24|Mountain View|CA| 100|
+----+---+----------------+------+
I had to replace the "\," in the Address column with "|" and then split the data using the ',' delimiter. Not sure whether this matches your requirement exactly, but it works.
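The replace-then-parse idea in the RDD code above can be followed with the standard `csv` module alone, including an optional final step that maps the "|" placeholder back to a literal comma (a minimal sketch, assuming "|" does not otherwise appear in the data):

```python
import csv

# Sample lines as they appear in the file: the comma in the Address
# field is escaped with a backslash instead of being quoted.
lines = [
    "Name,Age,Address,Salary",
    "Luke,24,Mountain View\\,CA,100",
]

# Step 1: replace the escaped comma "\," with a placeholder so
# csv.reader no longer treats it as a field separator.
cleaned = [line.replace("\\,", "|") for line in lines]

# Step 2: parse with the real delimiter and drop the header row.
rows = [row for row in csv.reader(cleaned, delimiter=",") if row[0] != "Name"]

# Step 3 (optional): restore the literal comma in each field.
# Assumes "|" never occurs in the real data, as in the answer above.
rows = [[field.replace("|", ",") for field in row] for row in rows]
print(rows)  # [['Luke', '24', 'Mountain View,CA', '100']]
```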