Write single CSV file using spark-csv


Question

I am using https://github.com/databricks/spark-csv and I am trying to write a single CSV file, but I can't: it creates a folder instead.

I need a Scala function that takes parameters such as a path and file name and writes to that CSV file.

Answer

It is creating a folder with multiple files because each partition is saved individually. If you need a single output file (still inside a folder), you can repartition (preferred if the upstream data is large, but it requires a shuffle):

df
   .repartition(1)                             // full shuffle into a single partition
   .write.format("com.databricks.spark.csv")   // spark-csv data source (Spark 1.x)
   .option("header", "true")                   // include a header row
   .save("mydata.csv")                         // writes a folder mydata.csv/ with one part file

or coalesce (avoids a full shuffle, but can reduce parallelism upstream):

df
   .coalesce(1)                                // merge existing partitions without a full shuffle
   .write.format("com.databricks.spark.csv")
   .option("header", "true")
   .save("mydata.csv")

Either way, the whole data frame is collapsed into a single partition before saving: all data will be written to mydata.csv/part-00000. Before you use this option, be sure you understand what is going on and what the cost of transferring all the data to a single worker is. If you use a distributed file system with replication, the data will be transferred multiple times: first fetched to a single worker and subsequently distributed over the storage nodes.

Alternatively, you can leave your code as it is and use a general-purpose tool such as cat or HDFS getmerge to simply merge all the parts afterwards.
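
If you need a true single file rather than a folder, the part files can also be merged from the driver through the Hadoop FileSystem API. A sketch, assuming Hadoop 2.x (FileUtil.copyMerge was removed in Hadoop 3); the paths are illustrative:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

val conf = new Configuration()
val fs = FileSystem.get(conf)
// Merge every part file under mydata.csv/ into one mydata-merged.csv;
// the `true` flag deletes the source folder afterwards.
// Roughly equivalent CLI (merges to a local file):
//   hadoop fs -getmerge mydata.csv mydata-merged.csv
FileUtil.copyMerge(fs, new Path("mydata.csv"), fs, new Path("mydata-merged.csv"), true, conf, null)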
