Pyspark DataFrame - using LIKE function based on column name instead of string value
Question
I'm trying to use the like function on a Column with another Column. Is it possible to use a Column inside the like function?
Sample code:
df['col1'].like(concat('%',df2['col2'], '%'))
Error log:
py4j.Py4JException: Method like([class org.apache.spark.sql.Column]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
    at py4j.Gateway.invoke(Gateway.java:274)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
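The exception occurs because the Python Column.like wrapper only forwards a plain string to the JVM; passing a Column makes py4j look up a like(Column) overload that the Python bridge does not expose. To see what concat('%', pattern, '%') evaluates to, here is a minimal pure-Python sketch of SQL LIKE semantics (% matches any run of characters, _ matches exactly one); the function name sql_like is made up for illustration:

```python
import re

def sql_like(value: str, pattern: str) -> bool:
    # Translate SQL LIKE wildcards into a regex: % -> .*, _ -> .
    regex = "".join(
        ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, value) is not None

# Wrapping the pattern in '%' on both sides turns LIKE into a
# substring test, which is what the question is after:
print(sql_like("aaaa", "%aa%"))  # True
print(sql_like("bbbb", "%cc%"))  # False
```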
Answer
You can do it using a SQL expression instead; for some reason the Python API doesn't support it directly. For example:
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

# The original answer assumes a shell with `sc` already defined;
# creating a SparkSession makes the snippet runnable standalone.
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

data = [
    ("aaaa", "aa"),
    ("bbbb", "cc")
]
df = sc.parallelize(data).toDF(["value", "pattern"])
df = df.withColumn("match", expr("value like concat('%', pattern, '%')"))
df.show()
Output:
+-----+-------+-----+
|value|pattern|match|
+-----+-------+-----+
| aaaa| aa| true|
| bbbb| cc|false|
+-----+-------+-----+