Selecting values from non-null columns in a PySpark DataFrame

Problem description

There is a PySpark DataFrame with missing values:

from pyspark.sql import Row

tbl = sc.parallelize([
        Row(first_name='Alice', last_name='Cooper'),
        Row(first_name='Prince', last_name=None),
        Row(first_name=None, last_name='Lenon')
    ]).toDF()
tbl.show()

Here is the table:

  +----------+---------+
  |first_name|last_name|
  +----------+---------+
  |     Alice|   Cooper|
  |    Prince|     null|
  |      null|    Lenon|
  +----------+---------+

I would like to create a new column as follows:

  • If the first name is None, take the last name
  • If the last name is None, take the first name
  • If both are present, concatenate them
  • We can safely assume that at least one of them is present

I can construct a simple function:

def combine_data(row):
    # Fall back to whichever name is present; concatenate when both are.
    if row.last_name is None:
        return row.first_name
    elif row.first_name is None:
        return row.last_name
    else:
        return '%s %s' % (row.first_name, row.last_name)

# Note: DataFrame.map was removed in Spark 2.x; use tbl.rdd.map(combine_data) there.
tbl.map(combine_data).collect()

I do get the correct result, but I can't append it to the table as a column: tbl.withColumn('new_col', tbl.map(combine_data)) results in AssertionError: col should be Column
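
The assertion is raised because withColumn expects a Column expression, while tbl.map(combine_data) returns an RDD. A quick illustrative check, using the same tbl and combine_data as above:

print(type(tbl.map(combine_data)))  # an RDD, not a pyspark.sql.Column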

What is the best way to convert the result of map to a Column? Is there a preferred way to deal with null values?

Recommended answer

You just need to use a UDF that receives two columns as arguments.

from pyspark.sql import Row
from pyspark.sql.functions import col, udf

tbl = sc.parallelize([
        Row(first_name='Alice', last_name='Cooper'),
        Row(first_name='Prince', last_name=None),
        Row(first_name=None, last_name='Lenon')
    ]).toDF()

tbl.show()

def combine(c1, c2):
    # Concatenate when both names are present; otherwise fall back
    # to whichever one is non-null.
    if c1 is not None and c2 is not None:
        return c1 + " " + c2
    elif c1 is None:
        return c2
    else:
        return c1

# udf() defaults to a StringType return type.
combineUDF = udf(combine)

expr = ["first_name", "last_name",
        combineUDF(col("first_name"), col("last_name")).alias("full_name")]

tbl.select(*expr).show()

#+----------+---------+------------+
#|first_name|last_name|   full_name|
#+----------+---------+------------+
#|     Alice|   Cooper|Alice Cooper|
#|    Prince|     null|      Prince|
#|      null|    Lenon|       Lenon|
#+----------+---------+------------+
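
As a side note, assuming the same tbl and combineUDF defined above: the UDF call is itself a Column expression, so it can be passed straight to withColumn, which is what the question originally attempted; and for this particular pattern, Spark's built-in concat_ws skips null arguments, so the UDF can be avoided entirely:

# The UDF result is a Column, so withColumn works directly:
tbl.withColumn("full_name", combineUDF(col("first_name"), col("last_name"))).show()

# Built-in alternative: concat_ws ignores null arguments, so no
# explicit null handling is needed here.
from pyspark.sql.functions import concat_ws
tbl.withColumn("full_name", concat_ws(" ", col("first_name"), col("last_name"))).show()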
