Apply custom function to cells of selected columns of a data frame in PySpark
Problem description
Let's say I have a data frame which looks like this:
+---+-----------+-----------+
| id| address1| address2|
+---+-----------+-----------+
| 1|address 1.1|address 1.2|
| 2|address 2.1|address 2.2|
+---+-----------+-----------+
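For reference, a minimal sketch that reproduces this frame, assuming an active SparkSession bound to the name spark:

# Build the sample frame shown above
df = spark.createDataFrame(
    [(1, 'address 1.1', 'address 1.2'),
     (2, 'address 2.1', 'address 2.2')],
    ['id', 'address1', 'address2'],
)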
I would like to apply a custom function directly to the strings in the address1 and address2 columns, for example:
def example(string1, string2):
    name_1 = string1.lower().split(' ')
    name_2 = string2.lower().split(' ')
    intersection_count = len(set(name_1) & set(name_2))
    return intersection_count
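Applied to plain Python strings, the function counts the words two addresses share, for example (hypothetical inputs):

>>> example('12 main street', 'main street apt 12')
3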
I want to store the result in a new column, so that my final data frame would look like:
+---+-----------+-----------+------+
| id| address1| address2|result|
+---+-----------+-----------+------+
| 1|address 1.1|address 1.2| 2|
| 2|address 2.1|address 2.2| 7|
+---+-----------+-----------+------+
I've tried to execute it the way I once applied a built-in function to a whole column, but I got an error:
>>> df.withColumn('result', example(df.address1, df.address2))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in example
TypeError: 'Column' object is not callable
What am I doing wrong, and how can I apply a custom function to the strings in selected columns?
You have to use a udf (user-defined function) in Spark. Calling example(df.address1, df.address2) directly passes Column objects into the function; a Column has no lower method, so string1.lower() fails with TypeError: 'Column' object is not callable. Wrapping the function in a udf tells Spark to run the plain Python function on each row's values instead:
from pyspark.sql.functions import udf
from pyspark.sql.types import LongType

# Declare the return type so Spark knows the result column is a long
example_udf = udf(example, LongType())
df = df.withColumn('result', example_udf(df.address1, df.address2))
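On Spark 2.4 or later, the same count can also be computed without a Python udf, using built-in functions (a sketch; array_intersect already drops duplicates, matching the set semantics above):

from pyspark.sql import functions as F

# Split each lower-cased address into words, intersect the word arrays,
# and count the shared words (array_intersect removes duplicates)
df = df.withColumn(
    'result',
    F.size(F.array_intersect(
        F.split(F.lower('address1'), ' '),
        F.split(F.lower('address2'), ' '),
    ))
)

Keeping the computation in built-in functions avoids the row-by-row Python serialization overhead that a udf incurs.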