TypeError: 'Column' object is not callable using WithColumn
Problem Description
I would like to append a new column to dataframe "df" using the function get_distance:
def get_distance(x, y):
    dfDistPerc = hiveContext.sql("select column3 as column3 \
                                  from tab \
                                  where column1 = '" + x + "' \
                                  and column2 = " + y + " \
                                  limit 1")
    result = dfDistPerc.select("column3").take(1)
    return result

df = df.withColumn(
    "distance",
    lit(get_distance(df["column1"], df["column2"]))
)
But I get:
TypeError: 'Column' object is not callable
I think it happens because x and y are Column objects and need to be converted to String to be used in my query. Am I right? If so, how can I do this?
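A quick pure-Python analogy (no Spark needed; the `Expr` class below is hypothetical, standing in for `pyspark.sql.Column`) shows what is going on: `df["column1"]` is a symbolic expression object, not a per-row string value, so splicing it into a SQL string on the driver never sees any data.

```python
class Expr:
    """Hypothetical stand-in for pyspark.sql.Column: a symbolic
    expression tree, not a concrete per-row value."""
    def __init__(self, name):
        self.name = name

    def __radd__(self, other):
        # "some literal" + Expr builds a bigger expression, not a str
        return Expr("(%r + %s)" % (other, self.name))

x = Expr("column1")
query = "where column1 = '" + x   # no per-row value is substituted
print(type(query).__name__)       # Expr -- still an expression, not a str
```

This is why the query-building function has to run as a UDF, where Spark hands it real row values instead of Column expressions.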
Recommended Answer
Spark needs to know that the function you are using is not an ordinary function but a UDF.
So, there are two ways to use a UDF on dataframes.
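The two approaches differ only in how the wrapping is spelled. A plain-Python sketch (the `wrap` helper is hypothetical, playing the role that PySpark's `udf` plays) shows that a decorator and an explicit call do the same thing:

```python
def wrap(f):
    # Hypothetical stand-in for pyspark.sql.functions.udf:
    # returns a wrapped version of f.
    def wrapped(*args):
        return f(*args)
    return wrapped

@wrap                            # approach 1: decorator syntax
def add_one(x):
    return x + 1

def add_two(x):
    return x + 2

add_two_wrapped = wrap(add_two)  # approach 2: explicit call

print(add_one(1), add_two_wrapped(1))  # 2 3
```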
Approach 1: with the @udf annotation
from pyspark.sql.functions import udf

@udf   # with no explicit returnType, @udf defaults to StringType
def get_distance(x, y):
    dfDistPerc = hiveContext.sql("select column3 as column3 \
                                  from tab \
                                  where column1 = '" + x + "' \
                                  and column2 = " + y + " \
                                  limit 1")
    result = dfDistPerc.select("column3").take(1)
    return result[0][0] if result else None  # unwrap the Row into a plain value

df = df.withColumn(
    "distance",
    get_distance(df["column1"], df["column2"])  # the UDF call already returns a Column; lit() is not needed
)
Approach 2: registering the UDF with pyspark.sql.functions.udf
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

def get_distance(x, y):
    dfDistPerc = hiveContext.sql("select column3 as column3 \
                                  from tab \
                                  where column1 = '" + x + "' \
                                  and column2 = " + y + " \
                                  limit 1")
    result = dfDistPerc.select("column3").take(1)
    return result[0][0] if result else None  # unwrap the Row into a plain value

calculate_distance_udf = udf(get_distance, IntegerType())

df = df.withColumn(
    "distance",
    calculate_distance_udf(df["column1"], df["column2"])  # lit() is not needed here either
)