PySpark - rename more than one column using withColumnRenamed


Problem description

I want to change the names of two columns using Spark's withColumnRenamed function. Of course, I can write:

data = sqlContext.createDataFrame([(1,2), (3,4)], ['x1', 'x2'])
data = (data
       .withColumnRenamed('x1','x3')
       .withColumnRenamed('x2', 'x4'))

but I want to do this in one step (with a list/tuple of the new names). Unfortunately, neither this:

data = data.withColumnRenamed(['x1', 'x2'], ['x3', 'x4'])

nor this:

data = data.withColumnRenamed(('x1', 'x2'), ('x3', 'x4'))

works. Is it possible to do it that way?

Answer

It is not possible with a single withColumnRenamed call. You can rename all columns at once with DataFrame.toDF, which takes the new names as varargs:

data.toDF('x3', 'x4')

new_names = ['x3', 'x4']
data.toDF(*new_names)
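Because toDF is variadic, a Python list of names has to be unpacked with `*`. A minimal pure-Python sketch of that calling convention (the `to_df` helper here is hypothetical, standing in for DataFrame.toDF):

```python
# Hypothetical stand-in for DataFrame.toDF: accepts names as varargs.
def to_df(*names):
    return list(names)

new_names = ['x3', 'x4']

# Passing the list directly yields ONE argument containing the whole list;
# unpacking with * passes each name as a separate positional argument.
assert to_df(*new_names) == ['x3', 'x4']
assert to_df(new_names) == [['x3', 'x4']]
```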

  • It is also possible to rename with a simple select:

    from pyspark.sql.functions import col
    
    mapping = dict(zip(['x1', 'x2'], ['x3', 'x4']))
    data.select([col(c).alias(mapping.get(c, c)) for c in data.columns])
    
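The select/alias pattern works because mapping.get(c, c) falls back to the original name for any column absent from the mapping. A pure-Python sketch of that name-resolution step, runnable without a Spark session ('x5' is a hypothetical extra column added for illustration):

```python
mapping = dict(zip(['x1', 'x2'], ['x3', 'x4']))
columns = ['x1', 'x2', 'x5']  # 'x5' is a hypothetical unmapped column

# Each column keeps its name unless the mapping provides a replacement,
# mirroring col(c).alias(mapping.get(c, c)) in the select above.
new_names = [mapping.get(c, c) for c in columns]
assert new_names == ['x3', 'x4', 'x5']
```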

  • Similarly, in Scala you can:

    • Rename all columns:

    val newNames = Seq("x3", "x4")
    
    data.toDF(newNames: _*)
    

  • Rename from a mapping with select:

    val mapping = Map("x1" -> "x3", "x2" -> "x4")
    
    data.select(
      data.columns.map(c => data(c).alias(mapping.getOrElse(c, c))): _*
    )
    

  • Or use foldLeft + withColumnRenamed:

    mapping.foldLeft(data){
      case (data, (oldName, newName)) => data.withColumnRenamed(oldName, newName) 
    }
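In PySpark, the same fold can be written with functools.reduce, applying one rename per mapping entry. Sketched here over plain column-name lists so it runs without a Spark session; with a real DataFrame each step would be acc.withColumnRenamed(old, new):

```python
from functools import reduce

mapping = {'x1': 'x3', 'x2': 'x4'}
columns = ['x1', 'x2']

# Fold over the mapping entries, renaming one column per step,
# just as Scala's foldLeft threads the DataFrame through each rename.
renamed = reduce(
    lambda cols, kv: [kv[1] if c == kv[0] else c for c in cols],
    mapping.items(),
    columns,
)
assert renamed == ['x3', 'x4']
```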
    

  • * Not to be confused with RDD.toDF, which is not a variadic function and takes the column names as a list instead.
