How to save a DataFrame into CSV in PySpark


Question

I am trying to save a DataFrame to HDFS. It gets written as part-00000 files, split across multiple parts. I want to save it as an Excel sheet, or at least as just one part file. How can I achieve this?

Code:

  df1.write.csv('/user/gtree/tree.csv')

Accepted answer

Your DataFrame is saved according to its partitions (multiple partitions = multiple files). You can coalesce the DataFrame down to a single partition so that only one file is written.

Link: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.coalesce

df1.coalesce(1).write.csv('/user/gtree/tree.csv')
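Below is a minimal, self-contained sketch of the same idea, assuming a SparkSession is available; the sample data, output path, and the header/overwrite options are illustrative additions, not part of the original answer.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("single-csv-example").getOrCreate()

  # Small example DataFrame (hypothetical data, just for illustration).
  df1 = spark.createDataFrame(
      [(1, "oak"), (2, "pine")],
      ["id", "species"],
  )

  # coalesce(1) reduces the DataFrame to a single partition, so Spark writes one part file.
  # Note: the path is still created as a directory containing part-00000-*.csv and a _SUCCESS marker.
  (df1.coalesce(1)
      .write
      .mode("overwrite")           # replace any existing output directory
      .option("header", "true")    # include column names in the CSV
      .csv("/user/gtree/tree.csv"))

If you genuinely need a single local file or an Excel sheet and the data fits in driver memory, a common workaround is df1.toPandas().to_csv(...) (or .to_excel(...), which requires an Excel writer such as openpyxl); keep in mind that this collects all rows to the driver.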
