Reading a nested JSON file in pyspark


Question

I'd like to create a pyspark dataframe from a JSON file in HDFS.

The JSON file has the following content:

{ 产品": { "0":台式计算机", "1":平板电脑", "2":"iPhone", "3":笔记本电脑" }, 价格": { "0":700, "1":250, "2":800, "3":1200 } }

{ "Product": { "0": "Desktop Computer", "1": "Tablet", "2": "iPhone", "3": "Laptop" }, "Price": { "0": 700, "1": 250, "2": 800, "3": 1200 } }

Then, I read this file using pyspark 2.4.4:

df = spark.read.json("/path/file.json")

So, I get a result like this:

df.show(truncate=False)
+---------------------+---------------------------------+
|Price                |Product                          |
+---------------------+---------------------------------+
|[700, 250, 800, 1200]|[Desktop, Tablet, Iphone, Laptop]|
+---------------------+---------------------------------+

But I'd like a dataframe with the following structure:

+-------+--------+
|Price  |Product |
+-------+--------+
|700    |Desktop | 
|250    |Tablet  |
|800    |Iphone  |
|1200   |Laptop  |
+-------+--------+

How can I get a dataframe with the previous structure using pyspark?

I tried to use explode, df.select(explode("Price")), but I got the following error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:

/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:

Py4JJavaError: An error occurred while calling o688.select.
: org.apache.spark.sql.AnalysisException: cannot resolve 'explode(`Price`)' due to data type mismatch: input to function explode should be array or map type, not struct<0:bigint,1:bigint,2:bigint,3:bigint>;;
'Project [explode(Price#107) AS List()]
+- LogicalRDD [Price#107, Product#108], false

    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:97)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:89)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:107)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:107)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:106)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:118)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1$1.apply(QueryPlan.scala:122)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:122)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:127)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:127)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:89)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:84)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:84)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:92)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:3301)
    at org.apache.spark.sql.Dataset.select(Dataset.scala:1312)
    at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)


During handling of the above exception, another exception occurred:

AnalysisException                         Traceback (most recent call last)
<ipython-input-46-463397adf153> in <module>
----> 1 df.select(explode("Price"))

/usr/lib/spark/python/pyspark/sql/dataframe.py in select(self, *cols)
   1200         [Row(name=u'Alice', age=12), Row(name=u'Bob', age=15)]
   1201         """
-> 1202         jdf = self._jdf.select(self._jcols(*cols))
   1203         return DataFrame(jdf, self.sql_ctx)
   1204 

/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     67                                              e.java_exception.getStackTrace()))
     68             if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     70             if s.startswith('org.apache.spark.sql.catalyst.analysis'):
     71                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)

AnalysisException: "cannot resolve 'explode(`Price`)' due to data type mismatch: input to function explode should be array or map type, not struct<0:bigint,1:bigint,2:bigint,3:bigint>;;\n'Project [explode(Price#107) AS List()]\n+- LogicalRDD [Price#107, Product#108], false\n"

Answer

Recreating your DataFrame:

from pyspark.sql import functions as F

df = spark.read.json("./row.json") 
df.printSchema()
#root
# |-- Price: struct (nullable = true)
# |    |-- 0: long (nullable = true)
# |    |-- 1: long (nullable = true)
# |    |-- 2: long (nullable = true)
# |    |-- 3: long (nullable = true)
# |-- Product: struct (nullable = true)
# |    |-- 0: string (nullable = true)
# |    |-- 1: string (nullable = true)
# |    |-- 2: string (nullable = true)
# |    |-- 3: string (nullable = true)

As shown above in the printSchema output, your Price and Product columns are structs. Thus explode will not work, since it requires an ArrayType or MapType.

First, convert the structs to arrays using the .* notation, as shown in Querying Spark SQL DataFrame with complex types:

df = df.select(
    F.array(F.expr("Price.*")).alias("Price"),
    F.array(F.expr("Product.*")).alias("Product")
)

df.printSchema()

#root
# |-- Price: array (nullable = false)
# |    |-- element: long (containsNull = true)
# |-- Product: array (nullable = false)
# |    |-- element: string (containsNull = true)

Now, since you're using Spark 2.4+, you can use arrays_zip to zip the Price and Product arrays together, before using explode:

df.withColumn("price_product", F.explode(F.arrays_zip("Price", "Product")))\
    .select("price_product.Price", "price_product.Product")\
    .show()

#+-----+----------------+
#|Price|         Product|
#+-----+----------------+
#|  700|Desktop Computer|
#|  250|          Tablet|
#|  800|          iPhone|
#| 1200|          Laptop|
#+-----+----------------+
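
If you prefer a one-liner, Spark SQL's inline generator should give the same two columns; it explodes an array of structs into one column per struct field. A sketch, assuming the array columns built above:

# inline() unpacks the array<struct<Price,Product>> produced by arrays_zip
# into rows with a Price column and a Product column.
df.selectExpr("inline(arrays_zip(Price, Product))").show()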


For older versions of Spark, before arrays_zip, you can explode each column separately and join the results back together:

df1 = df\
.withColumn("price_map", F.explode("Price"))\
.withColumn("id", F.monotonically_increasing_id())\
.drop("Price", "Product")

df2 = df\
.withColumn("product_map", F.explode("Product"))\
.withColumn("id", F.monotonically_increasing_id())\
.drop("Price", "Product")

df3 = df1.join(df2, "id", "outer").drop("id")

df3.show()

#+---------+----------------+
#|price_map|     product_map|
#+---------+----------------+
#|      700|Desktop Computer|
#|      250|          Tablet|
#|     1200|          Laptop|
#|      800|          iPhone|
#+---------+----------------+
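
If you'd rather not rely on monotonically_increasing_id() producing matching ids across the two DataFrames, a sketch using posexplode (available since Spark 2.1) joins on the element position instead; this assumes a single JSON record, as in the question:

prices = df.select(F.posexplode("Price").alias("pos", "Price"))
products = df.select(F.posexplode("Product").alias("pos", "Product"))

# Join on the array position so each price lines up with its product.
prices.join(products, "pos").drop("pos").show()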
