Using a column value as a parameter to a spark DataFrame function
Question
Consider the following DataFrame:
#+------+---+
#|letter|rpt|
#+------+---+
#|     X|  3|
#|     Y|  1|
#|     Z|  2|
#+------+---+
which can be created using the following code:
df = spark.createDataFrame([("X", 3),("Y", 1),("Z", 2)], ["letter", "rpt"])
Suppose I wanted to repeat each row the number of times specified in the column rpt, just like in this question.
One way would be to replicate my solution to that question using the following pyspark-sql query:
query = """
SELECT *
FROM
(SELECT DISTINCT *,
posexplode(split(repeat(",", rpt), ",")) AS (index, col)
FROM df) AS a
WHERE index > 0
"""
query = query.replace("\n", " ") # replace newlines with spaces, avoid EOF error
spark.sql(query).drop("col").sort('letter', 'index').show()
#+------+---+-----+
#|letter|rpt|index|
#+------+---+-----+
#|     X|  3|    1|
#|     X|  3|    2|
#|     X|  3|    3|
#|     Y|  1|    1|
#|     Z|  2|    1|
#|     Z|  2|    2|
#+------+---+-----+
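For context on how the query works: repeat(",", rpt) builds a string of rpt commas, split then yields rpt + 1 empty strings, and posexplode turns those into rows indexed 0 through rpt, so the WHERE index > 0 filter leaves exactly rpt copies of each row. The inner expression can be checked in isolation (a one-off query with a literal 3 standing in for rpt):

spark.sql('SELECT posexplode(split(repeat(",", 3), ",")) AS (index, col)').show()
# four rows: indices 0 through 3, each with an empty string in col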
This works and produces the correct answer. However, I am unable to replicate this behavior using the DataFrame API functions.
I have tried:
import pyspark.sql.functions as f

df.select(
    f.posexplode(f.split(f.repeat(",", f.col("rpt")), ",")).alias("index", "col")
).show()
But this results in:
TypeError: 'Column' object is not callable
Why am I able to pass the column as an input to repeat within the query, but not from the API? Is there a way to replicate this behavior using the spark DataFrame functions?
Answer
One option is to use pyspark.sql.functions.expr, which allows you to use column values as inputs to spark-sql functions.
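This also explains the error: in the PySpark releases current at the time of writing, the Python wrapper f.repeat(col, n) accepts n only as a literal Python int, whereas the SQL parser reached through expr happily takes a column for the count. A minimal sketch of the distinction (the doubled alias is just for illustration):

import pyspark.sql.functions as f

# a constant repeat count works, since n is a plain Python int here:
df.select(f.repeat(f.col("letter"), 2).alias("doubled")).show()

# but passing a Column as the count is what triggers the TypeError above:
# df.select(f.repeat(f.col("letter"), f.col("rpt"))).show()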
Based on @user8371915's comment, I found that the following works:
from pyspark.sql.functions import expr

df.select(
    '*',
    expr('posexplode(split(repeat(",", rpt), ","))').alias("index", "col")
).where('index > 0').drop("col").sort('letter', 'index').show()
#+------+---+-----+
#|letter|rpt|index|
#+------+---+-----+
#|     X|  3|    1|
#|     X|  3|    2|
#|     X|  3|    3|
#|     Y|  1|    1|
#|     Z|  2|    1|
#|     Z|  2|    2|
#+------+---+-----+
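As a side note: if you are on Spark 2.4 or later, the same row replication can also be written with DataFrame API functions alone, using sequence to build a per-row array [1, ..., rpt] and explode to flatten it. A sketch under that version assumption, reusing the df from above:

import pyspark.sql.functions as f

# sequence(1, rpt) produces [1, 2, ..., rpt] for each row (Spark 2.4+);
# explode then emits one output row per array element
df.select(
    '*',
    f.explode(f.sequence(f.lit(1), f.col('rpt'))).alias('index')
).sort('letter', 'index').show()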