How can I parallelize multiple Datasets in Spark?


Question

I have a Spark 2.1 job where I maintain multiple Dataset objects/RDD's that represent different queries over our underlying Hive/HDFS datastore. I've noticed that if I simply iterate over the List of Datasets, they execute one at a time. Each individual query operates in parallel, but I feel that we are not maximizing our resources by not running the different datasets in parallel as well.
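For context, here is a minimal sketch of the sequential pattern the question describes; the query strings and table names are hypothetical:

import org.apache.spark.sql.{Dataset, Row, SparkSession}

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Each Dataset is a lazy query plan; nothing runs until an action fires.
val queries: Seq[(String, Dataset[Row])] = Seq(
  "db.out_a" -> spark.sql("SELECT * FROM db.table_a WHERE dt = '2017-01-01'"),
  "db.out_b" -> spark.sql("SELECT * FROM db.table_b WHERE dt = '2017-01-01'")
)

// saveAsTable is a blocking action, so the jobs run strictly one after
// another, even though each individual job is parallel across executors.
queries.foreach { case (table, ds) => ds.write.saveAsTable(table) }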

There doesn't seem to be a lot out there regarding doing this, as most questions appear to be around parallelizing a single RDD or Dataset, not parallelizing multiple within the same job.

Is this inadvisable for some reason? Can I just use an executor service, thread pool, or futures to do this?

Thanks!

Answer

Yes, you can use multithreading in the driver code, but this normally does not increase performance, unless your queries operate on very skewed data and/or cannot be parallelized well enough to fully utilize the cluster's resources.

You can do something like this:

import org.apache.spark.sql.Dataset

val datasets: Seq[Dataset[_]] = ???

datasets
  .par // convert to a parallel collection so the writes are submitted concurrently
  .foreach(ds => ds.write.saveAsTable(...))
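Since the question also asks about thread pools and futures, here is a minimal sketch of the same idea using Scala Futures over a fixed-size pool, which gives explicit control over how many jobs run at once. The pool size of 4 and the output table names are assumptions for illustration:

import java.util.concurrent.Executors

import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, Future}

import org.apache.spark.sql.Dataset

// Bound how many jobs the driver submits at once; the pool size of 4
// is an arbitrary choice for illustration.
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

val datasets: Seq[Dataset[_]] = ???

val jobs: Seq[Future[Unit]] = datasets.zipWithIndex.map { case (ds, i) =>
  Future { ds.write.saveAsTable(s"out_$i") } // hypothetical table names
}

// Block the driver until every write has finished.
Await.result(Future.sequence(jobs), Duration.Inf)

In both variants the driver threads only submit jobs; the actual work still runs on the executors, so driver-side parallelism only helps when the cluster would otherwise sit partly idle.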
