How to build Spark Mllib submodule individually


Question

I modified mllib in Spark and want to use the customized mllib jar in other projects. It works when I build Spark using:

build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package

I learned this from Spark's documentation at http://spark.apache.org/docs/latest/building-spark.html#building-submodules-individually. But building the whole Spark package takes quite long (about 7 minutes on my desktop), so I would like to build just mllib on its own. The instructions for building a single submodule are at the same link, and I used:

build/mvn -pl :spark-mllib_2.10 clean install

to build just mllib itself. It built successfully; however, I couldn't see the changes I had made to mllib when running other projects that use it. The changes did take effect when I built the whole of Spark from scratch, so I am wondering: how should I use Maven to build mllib individually?

Answer

I suspected that the compiled mllib jar was not actually being used when the application ran. To check, I printed out the location of the modified class at runtime by adding this line to the code:

logInfo(getClass.getProtectionDomain.getCodeSource.getLocation.getPath)
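
For context, logInfo is only available in classes that mix in Spark's Logging trait. Below is a minimal, self-contained sketch of the same check using plain println instead; org.apache.spark.mllib.linalg.Vectors is just a stand-in for the modified class (the actual class is not named in the original answer), and spark-mllib must be on the classpath:

// Minimal sketch: print which jar a given class was loaded from.
// Vectors is a stand-in for whichever mllib class was modified.
object JarLocator {
  def main(args: Array[String]): Unit = {
    val cls = Class.forName("org.apache.spark.mllib.linalg.Vectors")
    // getCodeSource may be null for JDK bootstrap classes,
    // but not for classes loaded from an application jar
    println(cls.getProtectionDomain.getCodeSource.getLocation.getPath)
  }
}

If the printed path still points at an old jar rather than the freshly installed one, the new build is not being picked up.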

It turned out that Spark was loading classes from spark-assembly-1.6.0-hadoop2.4.0.jar, which still contained the old mllib classes; applications in Spark 1.x run against this single assembly jar, so installing a new mllib jar by itself has no effect. I therefore compiled both mllib and the assembly using:

build/mvn -DskipTests -pl :spark-mllib_2.10 install
build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests -pl :spark-assembly_2.10 install
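
As a possible shortcut (an assumption, not something the original answer tested): Maven's -pl flag accepts a comma-separated list of projects, and the reactor orders them by their inter-module dependencies, so the two commands might be collapsed into one:

build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests -pl :spark-mllib_2.10,:spark-assembly_2.10 install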

This reduced the total compile time on my machine to a little over a minute. There must be an incremental-compilation approach that is faster still, and I am still looking for one; for the moment, though, I will use this hot fix.
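
One direction that may be worth trying (an assumption based on the sbt section of the same building-spark documentation, not verified here) is Spark's sbt build, which keeps a resident compiler and supports continuous compilation:

build/sbt mllib/compile

and, to recompile automatically on every source change:

build/sbt "~mllib/compile"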
