Can I use regexp_replace or some equivalent to replace multiple values in a pyspark dataframe column with one line of code?


Problem description

Can I use regexp_replace or some equivalent to replace multiple values in a pyspark dataframe column with one line of code?

Here is the code to create my dataframe:

from pyspark import SparkContext, SparkConf, SQLContext
from datetime import datetime

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)

data1 = [
  ('George', datetime(2010, 3, 24, 3, 19, 58), 13),
  ('George', datetime(2020, 9, 24, 3, 19, 6), 8),
  ('George', datetime(2009, 12, 12, 17, 21, 30), 5),
  ('Micheal', datetime(2010, 11, 22, 13, 29, 40), 12),
  ('Maggie', datetime(2010, 2, 8, 3, 31, 23), 8),
  ('Ravi', datetime(2009, 1, 1, 4, 19, 47), 2),
  ('Xien', datetime(2010, 3, 2, 4, 33, 51), 3),
]
 
df1 = sqlContext.createDataFrame(data1, ['name', 'trial_start_time', 'purchase_time'])
df1.show(truncate=False)

Here is the dataframe:

+-------+-------------------+-------------+
|name   |trial_start_time   |purchase_time|
+-------+-------------------+-------------+
|George |2010-03-24 07:19:58|13           |
|George |2020-09-24 07:19:06|8            |
|George |2009-12-12 22:21:30|5            |
|Micheal|2010-11-22 18:29:40|12           |
|Maggie |2010-02-08 08:31:23|8            |
|Ravi   |2009-01-01 09:19:47|2            |
|Xien   |2010-03-02 09:33:51|3            |
+-------+-------------------+-------------+

Here is a working example to replace one string:

from pyspark.sql.functions import regexp_replace, regexp_extract, col
df1.withColumn("name", regexp_replace('name', "Ravi", "Ravi_renamed")).show()

Here is the output:

+------------+-------------------+-------------+
|        name|   trial_start_time|purchase_time|
+------------+-------------------+-------------+
|      George|2010-03-24 07:19:58|           13|
|      George|2020-09-24 07:19:06|            8|
|      George|2009-12-12 22:21:30|            5|
|     Micheal|2010-11-22 18:29:40|           12|
|      Maggie|2010-02-08 08:31:23|            8|
|Ravi_renamed|2009-01-01 09:19:47|            2|
|        Xien|2010-03-02 09:33:51|            3|
+------------+-------------------+-------------+

In pandas I could replace multiple strings in one line of code with a lambda expression:

df1['name'].apply(lambda x: x.replace('George', 'George_renamed1').replace('Ravi', 'Ravi_renamed2'))

I am not sure if this can be done in pyspark with regexp_replace. Perhaps another alternative? When I read about using lambda expressions in pyspark it seems I have to create udf functions (which seem to get a little long). But I am curious if I can simply run some type of regex expression on multiple strings like above in one line of code.

Recommended answer

This is what you are looking for:

from pyspark.sql.functions import when, col

df1.withColumn('name',
               when(col('name') == 'George', 'George_renamed1')
               .when(col('name') == 'Ravi', 'Ravi_renamed2')
               .otherwise(col('name'))
              )
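
If there are more than a couple of values to replace, the same when chain can be built in a loop instead of being written out by hand. A minimal sketch, assuming the replacements live in a plain Python dict (the replacements name is just for illustration):

from pyspark.sql.functions import when, col

# Hypothetical dict of old -> new values (illustration only)
replacements = {'George': 'George_renamed1', 'Ravi': 'Ravi_renamed2'}

# Build the chained when(...).when(...) column one condition at a time
name_col = None
for old, new in replacements.items():
    name_col = (when(col('name') == old, new) if name_col is None
                else name_col.when(col('name') == old, new))

# Unmatched names fall through to the original column
df1 = df1.withColumn('name', name_col.otherwise(col('name')))

This produces the same chained when(...).when(...).otherwise(...) column as the hand-written version above.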

Using a mapping expr (less explicit, but handy when there are many values to replace):

from pyspark.sql import functions as F

df1 = df1.withColumn('name', F.expr("coalesce(map('George', 'George_renamed1', 'Ravi', 'Ravi_renamed2')[name], name)"))

Or, if you already have a flat list of alternating old/new values to use, e.g. name_changes = ['George', 'George_renamed1', 'Ravi', 'Ravi_renamed2']:

from pyspark.sql.functions import expr

# str()[1:-1] converts the list to a string and removes the surrounding [ ]
df1 = df1.withColumn('name', expr(f'coalesce(map({str(name_changes)[1:-1]})[name], name)'))
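
For clarity, this is what the str()[1:-1] trick actually produces (a quick illustration, assuming the same name_changes list as above):

name_changes = ['George', 'George_renamed1', 'Ravi', 'Ravi_renamed2']

# str(name_changes)[1:-1] drops the brackets, leaving a comma-separated,
# single-quoted list that slots straight into map(...)
print(str(name_changes)[1:-1])
# 'George', 'George_renamed1', 'Ravi', 'Ravi_renamed2'

print(f'coalesce(map({str(name_changes)[1:-1]})[name], name)')
# coalesce(map('George', 'George_renamed1', 'Ravi', 'Ravi_renamed2')[name], name)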

The same as above, but using only functions imported from pyspark (no SQL expression string):

from pyspark.sql.functions import create_map, lit, coalesce

mapping_expr = create_map([lit(x) for x in name_changes])

df1 = df1.withColumn('name', coalesce(mapping_expr[df1['name']], 'name'))
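
If the replacements are kept in a dict rather than a flat list, the same create_map column can be built from its items. A minimal sketch under that assumption (the replacements dict is illustrative, not from the original answer):

from itertools import chain
from pyspark.sql.functions import create_map, lit, coalesce, col

# Hypothetical dict of old -> new values
replacements = {'George': 'George_renamed1', 'Ravi': 'Ravi_renamed2'}

# Flatten the dict into key1, value1, key2, value2, ... literals
mapping_expr = create_map([lit(x) for x in chain(*replacements.items())])

# Unmatched names fall through to the original column
df1 = df1.withColumn('name', coalesce(mapping_expr[col('name')], col('name')))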

Result:

df1.withColumn('name', F.expr("coalesce(map('George', 'George_renamed1', 'Ravi', 'Ravi_renamed2')[name],name)")).show()
+---------------+-------------------+-------------+
|           name|   trial_start_time|purchase_time|
+---------------+-------------------+-------------+
|George_renamed1|2010-03-24 03:19:58|           13|
|George_renamed1|2020-09-24 03:19:06|            8|
|George_renamed1|2009-12-12 17:21:30|            5|
|        Micheal|2010-11-22 13:29:40|           12|
|         Maggie|2010-02-08 03:31:23|            8|
|  Ravi_renamed2|2009-01-01 04:19:47|            2|
|           Xien|2010-03-02 04:33:51|            3|
+---------------+-------------------+-------------+
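
As for the regexp_replace part of the question: the calls can also simply be nested in one line, because regexp_replace accepts a column expression as its first argument. A minimal sketch, not part of the accepted answer:

from pyspark.sql.functions import regexp_replace, col

# Each nested call rewrites one pattern; names matching neither pattern pass through unchanged
df1.withColumn(
    'name',
    regexp_replace(regexp_replace(col('name'), 'George', 'George_renamed1'),
                   'Ravi', 'Ravi_renamed2')
).show()

With plain substring patterns like these, the nesting order only matters if one replacement's output could match another pattern.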
