Change output filename prefix for DataFrame.write()
Problem description
Output files generated via the Spark SQL DataFrame.write() method begin with the "part" basename prefix. e.g.
DataFrame sample_07 = hiveContext.table("sample_07");
sample_07.write().parquet("sample_07_parquet");
Result:
hdfs dfs -ls sample_07_parquet/
Found 4 items
-rw-r--r-- 1 rob rob 0 2016-03-19 16:40 sample_07_parquet/_SUCCESS
-rw-r--r-- 1 rob rob 491 2016-03-19 16:40 sample_07_parquet/_common_metadata
-rw-r--r-- 1 rob rob 1025 2016-03-19 16:40 sample_07_parquet/_metadata
-rw-r--r-- 1 rob rob 17194 2016-03-19 16:40 sample_07_parquet/part-r-00000-cefb2ac6-9f44-4ce4-93d9-8e7de3f2cb92.gz.parquet
I would like to change the output filename prefix used when creating a file using Spark SQL DataFrame.write(). I tried setting the "mapreduce.output.basename" property on the hadoop configuration for the Spark context. e.g.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class MyJavaSparkSQL {
    public static void main(String[] args) throws Exception {
        SparkConf sparkConf = new SparkConf().setAppName("MyJavaSparkSQL");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        // Attempt to control the output basename via the Hadoop configuration
        ctx.hadoopConfiguration().set("mapreduce.output.basename", "myprefix");
        HiveContext hiveContext = new org.apache.spark.sql.hive.HiveContext(ctx.sc());
        DataFrame sample_07 = hiveContext.table("sample_07");
        sample_07.write().parquet("sample_07_parquet");
        ctx.stop();
    }
}
That did not change the output filename prefix for the generated files.
有没有办法在使用 DataFrame.write() 方法时覆盖输出文件名前缀?
Is there a way to override the output filename prefix when using the DataFrame.write() method?
Recommended answer
You cannot change the "part" prefix while using any of the standard output formats (like Parquet). See this snippet from ParquetRelation source code:
private val recordWriter: RecordWriter[Void, InternalRow] = {
  val outputFormat = {
    new ParquetOutputFormat[InternalRow]() {
      // ...
      override def getDefaultWorkFile(context: TaskAttemptContext, extension: String): Path = {
        // ..
        // prefix is hard-coded here:
        new Path(path, f"part-r-$split%05d-$uniqueWriteJobId$bucketString$extension")
      }
    }
  }
  // ...
}
If you really must control the part file names, you'll probably have to implement a custom FileOutputFormat and use one of Spark's save methods that accept a FileOutputFormat class (e.g. saveAsHadoopFile).
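If a custom FileOutputFormat is more machinery than the job warrants, another common approach is to rename the part files after the write completes. The sketch below is hedged: it runs against a local directory for illustration (the directory and part-file names are assumptions mirroring the listing above); against HDFS you would perform the same rename with `hdfs dfs -mv` or the Hadoop `FileSystem.rename()` API instead of `mv`.

```shell
# Simulate a finished Spark output directory (stand-in for the HDFS path).
mkdir -p sample_07_parquet
touch sample_07_parquet/part-r-00000-cefb2ac6-9f44-4ce4-93d9-8e7de3f2cb92.gz.parquet

# Rename every part file, swapping the "part" prefix for "myprefix".
for f in sample_07_parquet/part-*; do
  mv "$f" "${f/part-/myprefix-}"
done

ls sample_07_parquet/
```

Note that this must run only after the job finishes (e.g. after `ctx.stop()` or once the `_SUCCESS` marker appears), since Spark writes the part files under their hard-coded names first.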