Access dependencies available in Scala but not in PySpark

Question

I am trying to access the dependencies of an RDD. In Scala it is pretty simple:

scala> val myRdd = sc.parallelize(0 to 9).groupBy(_ % 2)
myRdd: org.apache.spark.rdd.RDD[(Int, Iterable[Int])] = ShuffledRDD[2] at groupBy at <console>:24

scala> myRdd.dependencies
res0: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@6c427386)

But dependencies is not available in PySpark. Any pointers on how I can access them?

>>> myRdd.dependencies
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'PipelinedRDD' object has no attribute 'dependencies'

Answer

There is no supported way to do it, because it is not that meaningful. You can try something like this:

# Reach into the JVM through the py4j gateway: unwrap the JavaRDD backing the
# Python RDD into a Scala RDD and call its dependencies() method.
rdd = sc.parallelize([1, 2, 3]).map(lambda x: x)
deps = sc._jvm.org.apache.spark.api.java.JavaRDD.toRDD(rdd._jrdd).dependencies()
print(deps)
## List(org.apache.spark.OneToOneDependency@63b86b0d)

# The result is a Scala Seq, so iterate over it with size() and apply(i).
for i in range(deps.size()):
    print(deps.apply(i))

## org.apache.spark.OneToOneDependency@63b86b0d

But I don't think it will get you far.
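
If you do need to inspect the lineage from Python repeatedly, the same py4j trick can be wrapped in a small helper. The sketch below is not a supported API: the name java_dependencies is made up here, it assumes an active SparkContext named sc, and it goes through the private attributes rdd._jrdd and SparkContext._jvm, which may change between Spark versions.

def java_dependencies(rdd):
    """Return the JVM-side Dependency objects of a PySpark RDD as a Python list."""
    # Unwrap the JavaRDD backing this Python RDD into a Scala RDD and ask it
    # for its dependencies (a Scala Seq of org.apache.spark.Dependency objects).
    jdeps = rdd.context._jvm.org.apache.spark.api.java.JavaRDD.toRDD(rdd._jrdd).dependencies()
    # Index into the Scala Seq via apply() and collect the entries into a Python list.
    return [jdeps.apply(i) for i in range(jdeps.size())]

for dep in java_dependencies(sc.parallelize([1, 2, 3]).map(lambda x: x)):
    print(dep)
## org.apache.spark.OneToOneDependency@...

Keep in mind that the returned objects are py4j proxies to JVM objects, so you can only call their Java/Scala methods on them; they do not correspond to any Python-side abstraction.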
