select specific columns in Spark DataFrames from Array of Struct
Question
I have a Spark DataFrame `df` with the following schema:
root
|-- k: integer (nullable = false)
|-- v: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- a: integer (nullable = false)
| | |-- b: double (nullable = false)
| | |-- c: string (nullable = true)
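For reference, one way to reproduce a DataFrame with this schema in a spark-shell-style session (the names and values here are illustrative, not from the question):

```scala
// Illustrative only; assumes a spark-shell-style session with implicits in scope.
case class Element(a: Int, b: Double, c: String)

val df = Seq(
  (1, Seq(Element(1, 1.0, "x"), Element(2, 2.0, "y"))),
  (2, Seq(Element(3, 3.0, "z")))
).toDF("k", "v")

df.printSchema()  // matches the root/k/v/element layout shown above
```

Note that the nullability flags printed may differ slightly from the question's output, since case-class fields of primitive type are non-nullable.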
Is it possible to select `a` and `c` in `v` from `df` without doing a `map`? In particular, `df` is loaded from a Parquet file, and I don't want the values for `c` to even be loaded/read.
Recommended answer
It depends on exactly what you expect as an output, which is not clear from your question. Let me clarify. You can do
df.select($"v.a", $"v.b").show()
However, the result may not be what you want: since `v` is an array, this yields one array column for `a` and one for `b`. What you may want to do instead is `explode` the array `v`, then select from the exploded DataFrame:
df.select(explode($"v").as("v" :: Nil)).select($"v.a", $"v.b").show()
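For completeness, this one-liner needs `explode` in scope. A minimal self-contained sketch of the same approach, assuming an active `SparkSession` named `spark` and the `df` from the question:

```scala
import org.apache.spark.sql.functions.explode
import spark.implicits._  // for the $"..." column syntax; assumes `spark` is in scope

// One row per element of v, then project the struct fields as top-level columns.
val exploded = df.select(explode($"v").as("v"))
exploded.select($"v.a", $"v.b").show()
```

The `.as("v" :: Nil)` form in the answer aliases the generator output with a one-element list of names; the single-string form `.as("v")` is equivalent here.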
This will flatten `v` into a table with one row per element and its fields as columns. In either case, Spark/Parquet should be smart enough to use predicate push-down and not load `c` at all.
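To keep `c` off disk entirely, project only the needed nested fields immediately after the read; Spark can then prune the unused struct fields from the Parquet scan. A sketch with a hypothetical file path:

```scala
import org.apache.spark.sql.functions.explode
import spark.implicits._  // assumes an active SparkSession named `spark`

// Hypothetical path; only v.a and v.b are requested, so c can be pruned.
val loaded = spark.read.parquet("/path/to/data.parquet")
val projected = loaded
  .select($"k", explode($"v").as("v"))
  .select($"k", $"v.a", $"v.b")

projected.explain()  // inspect ReadSchema in the plan to confirm c is not read
```

Note that pruning nested fields inside arrays of structs depends on `spark.sql.optimizer.nestedSchemaPruning.enabled`, which, to my understanding, is on by default only since Spark 3.0; on older versions `c` may still be read.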