Flattening Rows in Spark


Problem Description

I am doing some testing for Spark using Scala. We usually read JSON files which need to be manipulated, as in the following example:

test.json:

{"a":1,"b":[2,3]}

val test = sqlContext.read.json("test.json")

How can I convert it to the following format:

{"a":1,"b":2}
{"a":1,"b":3}

Thanks

Recommended Answer

You can use the explode function:

scala> import org.apache.spark.sql.functions.explode
import org.apache.spark.sql.functions.explode


scala> val test = sqlContext.read.json(sc.parallelize(Seq("""{"a":1,"b":[2,3]}""")))
test: org.apache.spark.sql.DataFrame = [a: bigint, b: array<bigint>]

scala> test.printSchema
root
 |-- a: long (nullable = true)
 |-- b: array (nullable = true)
 |    |-- element: long (containsNull = true)

scala> val flattened = test.withColumn("b", explode($"b"))
flattened: org.apache.spark.sql.DataFrame = [a: bigint, b: bigint]

scala> flattened.printSchema
root
 |-- a: long (nullable = true)
 |-- b: long (nullable = true)

scala> flattened.show
+---+---+
|  a|  b|
+---+---+
|  1|  2|
|  1|  3|
+---+---+
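
If you also need the flattened result back in the one-JSON-object-per-line format shown in the question, here is a minimal sketch. It assumes the `flattened` DataFrame from above is still in scope; `flattened_json` is a hypothetical output path.

// Print each row as a JSON string; field order follows the schema,
// so this yields {"a":1,"b":2} and {"a":1,"b":3} as requested.
flattened.toJSON.collect().foreach(println)

// Or write the result to disk as newline-delimited JSON (one file per partition).
flattened.write.json("flattened_json")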
