Selecting values from non-null columns in a PySpark DataFrame
Question
There is a pyspark dataframe with missing values:
from pyspark.sql import Row

tbl = sc.parallelize([
    Row(first_name='Alice', last_name='Cooper'),
    Row(first_name='Prince', last_name=None),
    Row(first_name=None, last_name='Lenon')
]).toDF()
tbl.show()
This is the table:
+----------+---------+
|first_name|last_name|
+----------+---------+
| Alice| Cooper|
| Prince| null|
| null| Lenon|
+----------+---------+
I would like to create a new column as follows:
- if the first name is None, take the last name
- if the last name is None, take the first name
- if both are present, concatenate them
- we can safely assume that at least one of them is present
I can construct a simple function:
def combine_data(row):
    if row.last_name is None:
        return row.first_name
    elif row.first_name is None:
        return row.last_name
    else:
        return '%s %s' % (row.first_name, row.last_name)

tbl.map(combine_data).collect()
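The row-level logic can be checked outside Spark with a plain namedtuple standing in for pyspark.sql.Row (a minimal sketch; this local Row is a stand-in for illustration, not the Spark class):

```python
from collections import namedtuple

# Local stand-in for pyspark.sql.Row, just to exercise the logic.
Row = namedtuple("Row", ["first_name", "last_name"])

def combine_data(row):
    if row.last_name is None:
        return row.first_name
    elif row.first_name is None:
        return row.last_name
    else:
        return '%s %s' % (row.first_name, row.last_name)

rows = [Row('Alice', 'Cooper'), Row('Prince', None), Row(None, 'Lenon')]
print([combine_data(r) for r in rows])  # ['Alice Cooper', 'Prince', 'Lenon']
```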
I do get the correct result, but I can't append it to the table as a column: tbl.withColumn('new_col', tbl.map(combine_data)) results in AssertionError: col should be Column.
What is the best way to convert the result of map to a Column? Is there a preferred way to deal with null values?
Answer
You just need to use a UDF that receives two columns as arguments.
from pyspark.sql.functions import col, udf
from pyspark.sql import Row

tbl = sc.parallelize([
    Row(first_name='Alice', last_name='Cooper'),
    Row(first_name='Prince', last_name=None),
    Row(first_name=None, last_name='Lenon')
]).toDF()
tbl.show()

def combine(c1, c2):
    # Both present: join with a space; otherwise return whichever is non-null.
    if c1 is not None and c2 is not None:
        return c1 + " " + c2
    elif c1 is None:
        return c2
    else:
        return c1

combineUDF = udf(combine)

expr = ["first_name", "last_name"] + [combineUDF(col("first_name"), col("last_name")).alias("full_name")]
tbl.select(*expr).show()
#+----------+---------+------------+
#|first_name|last_name| full_name|
#+----------+---------+------------+
#| Alice| Cooper|Alice Cooper|
#| Prince| null| Prince|
#| null| Lenon| Lenon|
#+----------+---------+------------+
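As a side note, Spark's built-in concat_ws skips null columns entirely (unlike concat, which returns null if any input is null), so this particular case can also be handled without a UDF as concat_ws(" ", col("first_name"), col("last_name")). The null-skipping semantics can be sketched in plain Python (concat_ws_py is a hypothetical helper for illustration, not a library function):

```python
def concat_ws_py(sep, *vals):
    # Mimics Spark's concat_ws: join the non-null values with sep,
    # skipping None entirely (no stray leading/trailing separator).
    return sep.join(v for v in vals if v is not None)

print(concat_ws_py(" ", "Alice", "Cooper"))  # Alice Cooper
print(concat_ws_py(" ", "Prince", None))     # Prince
print(concat_ws_py(" ", None, "Lenon"))      # Lenon
```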