Converting an Array[Double] Column into a string or two different columns with Spark Dataframe Scala
Problem description
I hit a snag earlier, trying to do some transformations within Spark DataFrames.
Let's say I have a dataframe with this schema:
root
 |-- coordinates: array (nullable = true)
 |    |-- element: double (containsNull = true)
 |-- userid: string (nullable = true)
 |-- pubuid: string (nullable = true)
I would like to get rid of the array(double) in coordinates, and instead get a DF with rows that look like
"coordinates(0),coordinates(1)", userid, pubuid
or something like
coordinates(0), coordinates(1), userid, pubuid .
With Scala I could do
coordinates.mkString(",")
but in DataFrames coordinates resolves to a java.util.List.
So far I have worked around the issue by reading into an RDD, transforming, then building a new DF. But I was wondering whether there's a more elegant way to do that with DataFrames.
Thanks for the help.
Answer
You can use a UDF:
import org.apache.spark.sql.functions.{udf, lit}
val mkString = udf((a: Seq[Double]) => a.mkString(", "))
df.withColumn("coordinates_string", mkString($"coordinates"))
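The UDF above simply delegates to Scala's `Seq#mkString`. As a sanity check of that logic outside Spark (plain Scala, no SparkSession needed; the sample coordinates are made up for illustration):

```scala
// Plain Scala check of the logic the UDF wraps: join the elements
// of a Seq[Double] into a single comma-separated string.
object MkStringDemo extends App {
  val coordinates: Seq[Double] = Seq(40.7128, -74.006) // hypothetical sample values
  val asString: String = coordinates.mkString(", ")
  println(asString) // 40.7128, -74.006
}
```

Inside the UDF, Spark hands the array column to Scala as a `Seq[Double]` (a `WrappedArray`), which is why `mkString` works there even though `Row.getList` would give you a `java.util.List`.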
or
val apply = udf((a: Seq[Double], i: Int) => a(i))
df.select(
  $"*",
  apply($"coordinates", lit(0)).alias("x"),
  apply($"coordinates", lit(1)).alias("y")
)
Edit:
In recent versions you can also use concat_ws:
import org.apache.spark.sql.functions.concat_ws
df.withColumn(
  "coordinates_string", concat_ws(",", $"coordinates")
)
or plain Column.apply:
df.select($"*", $"coordinates"(0).alias("x"), $"coordinates"(1).alias("y"))
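`$"coordinates"(0)` is ordinary indexing lifted to the column level; per row it behaves like indexing into a Scala `Seq`. A plain-Scala sketch of that per-row effect (the tuple stands in for a row; the sample values are made up):

```scala
// Plain Scala sketch of what the select above does to each row:
// pull elements 0 and 1 out of the coordinates array and keep the
// remaining fields alongside them.
object SplitCoordinatesDemo extends App {
  // (coordinates, userid, pubuid) -- hypothetical sample row
  val row: (Seq[Double], String, String) = (Seq(40.7128, -74.006), "user1", "pub1")
  val (coords, userid, pubuid) = row
  val x: Double = coords(0)
  val y: Double = coords(1)
  println((x, y, userid, pubuid)) // (40.7128,-74.006,user1,pub1)
}
```

Note that, unlike the plain-Scala version, `$"coordinates"(1)` on a row whose array is too short yields null rather than throwing an IndexOutOfBoundsException.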