Spark DataFrame - Last Partition Collect Slow


Problem Description

I have a Java snippet that reads records from a remote Oracle DB (at least 65k records). Essentially, we are trying to pass an hourly filter to the DataFrame to fetch the records, on an hourly partition x 24.

The source view is based on a table with millions of records.

The problem we are facing is that Spark (on YARN or as a Spark cluster) processes 22 out of 24 partitions in under 3 minutes. The last 2 partitions take more than 5 hours to complete.

Is there any way we can speed this up using DataFrames?

// Imports needed by this snippet; sqlContext, VIEW_NAME, JDBC_URL and
// SQL_DATE_FORMATTER are assumed to be defined elsewhere in the class.
import java.util.HashMap;
import java.util.stream.Stream;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.joda.time.DateTime;

HashMap<String, String> options = new HashMap<>();
// Note: the key is "spark.sql.shuffle.partitions" (plural); the singular
// form in the original snippet is silently ignored by Spark.
sqlContext.setConf("spark.sql.shuffle.partitions", "50");
options.put("dbtable", "( select * from " + VIEW_NAME + " where 1=1)");
options.put("driver", "oracle.jdbc.OracleDriver");
options.put("url", JDBC_URL);
// Split the JDBC read into 24 partitions on the "hrs" column.
options.put("partitionColumn", "hrs");
options.put("lowerBound", "00");
options.put("upperBound", "23");
options.put("numPartitions", "24");

DataFrame dk = sqlContext.load("jdbc", options).cache();
dk.registerTempTable(VIEW_NAME);
dk.printSchema();

// Build a 24-hour window [s, t] for the DATETIME filter.
DateTime dt = new DateTime(2015, 5, 8, 10, 0, 0);
String s = SQL_DATE_FORMATTER.print(dt);
dt = dt.plusHours(24);
String t = SQL_DATE_FORMATTER.print(dt);
System.out.println("S is " + s + " and t is " + t);

// Collect the filtered rows to the driver and stream over them.
Stream<Row> rows = dk.filter("DATETIME >= '" + s + "' and DATETIME <= '" + t + "'")
        .collectAsList().parallelStream();
System.out.println("Collected " + rows.count());

Answer

Not sure if this is a complete answer, but as a workaround, if we do the following:

dt = dt.plusHours(24).minusSeconds(1);

it is faster, but still not as fast as the first 23 partitions.
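
For timestamps with second resolution, an equivalent way to write this workaround (assuming the same dk DataFrame and SQL_DATE_FORMATTER from the question) is to keep the 24-hour window half-open with a strict upper bound, which excludes the boundary row at exactly start + 24h without the minusSeconds(1) adjustment:

// Sketch: same 24-hour window as the question, but half-open [s, t).
DateTime start = new DateTime(2015, 5, 8, 10, 0, 0);
String s = SQL_DATE_FORMATTER.print(start);
String t = SQL_DATE_FORMATTER.print(start.plusHours(24));
DataFrame filtered = dk.filter("DATETIME >= '" + s + "' and DATETIME < '" + t + "'");
System.out.println("Collected " + filtered.count());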

