How to add a constant column in a Spark DataFrame?
Question
I want to add a column in a DataFrame with some arbitrary value (that is the same for each row). I get an error when I use withColumn as follows:
dt.withColumn('new_column', 10).head(5)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-50-a6d0257ca2be> in <module>()
1 dt = (messages
2 .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt")))
----> 3 dt.withColumn('new_column', 10).head(5)
/Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col)
1166 [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)]
1167 """
-> 1168 return self.select('*', col.alias(colName))
1169
1170 @ignore_unicode_prefix
AttributeError: 'int' object has no attribute 'alias'
It seems that I can trick the function into working as I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case):
dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5)
[Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10),
Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10),
Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10),
Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10),
Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)]
This is supremely hacky, right? I assume there is a more legit way to do this?
Answer
Spark 2.2+
Spark 2.2 introduces typedLit to support Seq, Map, and Tuples (SPARK-19254), and the following calls should be supported (Scala):
import org.apache.spark.sql.functions.typedLit
df.withColumn("some_array", typedLit(Seq(1, 2, 3)))
df.withColumn("some_struct", typedLit(("foo", 1, 0.3)))
df.withColumn("some_map", typedLit(Map("key1" -> 1, "key2" -> 2)))
Spark 1.3+ (lit), 1.4+ (array, struct), 2.0+ (map):
The second argument for DataFrame.withColumn should be a Column, so you have to use a literal:
from pyspark.sql.functions import lit
df.withColumn('new_column', lit(10))
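Applied back to the question's example, the add-and-subtract workaround is no longer needed (a sketch reusing the dt DataFrame defined above):

from pyspark.sql.functions import lit

dt.withColumn('new_column', lit(10)).head(5)
# same rows as before, with new_column=10 on every row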
If you need complex columns you can build these using blocks like array:
from pyspark.sql.functions import array, create_map, struct
df.withColumn("some_array", array(lit(1), lit(2), lit(3)))
df.withColumn("some_struct", struct(lit("foo"), lit(1), lit(.3)))
df.withColumn("some_map", create_map(lit("key1"), lit(1), lit("key2"), lit(2)))
Exactly the same methods can be used in Scala.
import org.apache.spark.sql.functions.{array, lit, map, struct}
df.withColumn("new_column", lit(10))
df.withColumn("map", map(lit("key1"), lit(1), lit("key2"), lit(2)))
To provide names for structs use either alias on each field:
df.withColumn(
"some_struct",
struct(lit("foo").alias("x"), lit(1).alias("y"), lit(0.3).alias("z"))
)
or cast on the whole object:
df.withColumn(
"some_struct",
struct(lit("foo"), lit(1), lit(0.3)).cast("struct<x: string, y: integer, z: double>")
)
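Either way, the named fields can then be addressed with dot notation (a minimal sketch using the aliased struct from above):

df.withColumn(
    "some_struct",
    struct(lit("foo").alias("x"), lit(1).alias("y"), lit(0.3).alias("z"))
).select("some_struct.x", "some_struct.y").show()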
It is also possible, although slower, to use a UDF.
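For illustration only (a sketch; the lit approach above is preferred because Python UDFs add serialization overhead), a zero-argument UDF can also produce a constant column:

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

ten = udf(lambda: 10, IntegerType())
df.withColumn('new_column', ten())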
Note:
The same constructs can be used to pass constant arguments to UDFs or SQL functions.
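For example (a sketch reusing dt and its messagetype column from the question), a constant wrapped in lit reaches the UDF as an ordinary Python value:

from pyspark.sql.functions import udf, lit
from pyspark.sql.types import IntegerType

scale = udf(lambda value, factor: value * factor, IntegerType())
dt.withColumn('scaled', scale(dt.messagetype, lit(10)))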