How to change dataframe column names in pyspark?


Question

I come from a pandas background and am used to reading data from CSV files into a dataframe and then changing the column names to something useful with a simple command:

df.columns = new_column_name_list

However, the same doesn't work in pyspark dataframes created using sqlContext. The only solution I could figure out to do this easily is the following:

df = sqlContext.read.format("com.databricks.spark.csv") \
    .options(header='false', inferschema='true', delimiter='\t') \
    .load("data.txt")
# Copy the inferred schema and overwrite each field's name in place
oldSchema = df.schema
for i, k in enumerate(oldSchema.fields):
    k.name = new_column_name_list[i]
# Reload the file with the renamed schema (no inference the second time)
df = sqlContext.read.format("com.databricks.spark.csv") \
    .options(header='false', delimiter='\t') \
    .load("data.txt", schema=oldSchema)

This basically defines the variable twice: it first infers the schema, then renames the columns, and then loads the dataframe again with the updated schema.

Is there a better and more efficient way to do this, like we do in pandas?

My Spark version is 1.5.0.

Answer

There are many ways to do that:
