_corrupt_record error when reading a JSON file into Spark
Question
I have this JSON file:
{
"a": 1,
"b": 2
}
which has been obtained with the Python json.dump method. Now, I want to read this file into a DataFrame in Spark, using pyspark. Following the documentation, I'm doing this:
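For reference, the file was written with something like the following (a minimal sketch; the exact indent value is an assumption, but any indent makes json.dump pretty-print the object across multiple lines):

import json

# indent=4 is an assumption; without an indent, json.dump would have
# written the whole object on a single line
with open('my_file.json', 'w') as f:
    json.dump({"a": 1, "b": 2}, f, indent=4)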
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlc = SQLContext(sc)
df = sqlc.read.json('my_file.json')
df.show()
The show() call prints:
+---------------+
|_corrupt_record|
+---------------+
| {|
| "a": 1, |
| "b": 2|
| }|
+---------------+
Does anyone know what's going on and why it is not interpreting the file correctly?
Answer
You need to have one JSON object per line in your input file; see http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameReader.json
If your JSON file looks like this, it will give you the expected DataFrame:
{ "a": 1, "b": 2 }
{ "a": 3, "b": 4 }
....
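Since the original file was written with json.dump, one way to get this one-object-per-line (JSON Lines) layout is to dump each record without an indent and add the newlines yourself (a minimal sketch; the records list is illustrative data, not from the question):

import json

records = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]  # illustrative data

with open('my_file.json', 'w') as f:
    for record in records:
        # No indent: json.dump keeps each object on a single line
        json.dump(record, f)
        f.write('\n')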
df.show()
+---+---+
| a| b|
+---+---+
| 1| 2|
| 3| 4|
+---+---+
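If reformatting the file is not an option, a workaround that fits the SQLContext setup from the question is to read the whole file and parse it yourself (a minimal sketch; it assumes the file holds a single JSON object and that sc and sqlc exist as above):

import json
from pyspark.sql import Row

# wholeTextFiles yields (path, file_content) pairs, so the multi-line
# document arrives as one string instead of line-by-line records
raw = sc.wholeTextFiles('my_file.json')
rows = raw.map(lambda kv: Row(**json.loads(kv[1])))  # assumes one object per file
df = sqlc.createDataFrame(rows)
df.show()

On Spark 2.2 or later, the reader can also parse a multi-line JSON document directly with sqlc.read.json('my_file.json', multiLine=True).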