Spark Data Frame write to parquet table - slow at updating partition stats
Question
When I write data from a dataframe into a parquet table (which is partitioned), after all the tasks complete successfully the process gets stuck at updating partition stats:
16/10/05 03:46:13 WARN log: Updating partition stats fast for:
16/10/05 03:46:14 WARN log: Updated size to 143452576
16/10/05 03:48:30 WARN log: Updating partition stats fast for:
16/10/05 03:48:31 WARN log: Updated size to 147382813
16/10/05 03:51:02 WARN log: Updating partition stats fast for:
df.write.format("parquet").mode("overwrite").partitionBy("part1").insertInto("db.tbl")
My table has > 400 columns and > 1000 partitions. Please let me know if we can optimize and speed up updating partition stats.
Solution
I feel the problem here is that there are too many partitions for a table with > 400 columns. Every time you overwrite a table in Hive, the statistics are updated. In your case it will try to update statistics for 1000 partitions, and each partition holds data with > 400 columns.
Try reducing the number of partitions (use another partition column, or, if it is a date column, consider partitioning by month), and you should see a significant improvement in performance.