How to build Spark Mllib submodule individually


Question

I modified MLlib in Spark and want to use the customized MLlib jar in other projects. The changes are picked up when I build Spark using:

build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package

as learned from Spark's documentation at http://spark.apache.org/docs/latest/building-spark.html#building-submodules-individually. But building the whole Spark package takes quite long (about 7 minutes on my desktop), so I would like to build just MLlib on its own. The instructions for building a single submodule are at the same link, and I used:

build/mvn -pl :spark-mllib_2.10 clean install

to build MLlib by itself. It built successfully; however, when running other projects that use MLlib, I could not see the changes I had made. This did work when I built the whole of Spark from scratch, so I am wondering how I should use Maven to build MLlib individually?

Answer

I suspected that the compiled MLlib jar was not actually being used when running the application, so I printed out the location of the modified class at runtime by adding this line to the code:

logInfo(getClass.getProtectionDomain.getCodeSource.getLocation.getPath)
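The same diagnostic can be reproduced outside Spark. Here is a minimal standalone sketch in plain Java (the class name `WhereFrom` is made up for illustration and is not part of Spark): any class loaded from a jar or a classes directory reports that path through its protection domain, while classes loaded by the bootstrap classloader may return a null code source.

```java
// Standalone sketch of the classloader diagnostic used in the answer.
// WhereFrom is a hypothetical class name chosen for this example.
public class WhereFrom {
    public static void main(String[] args) {
        java.security.CodeSource src =
            WhereFrom.class.getProtectionDomain().getCodeSource();
        if (src == null) {
            // Core JDK classes loaded by the bootstrap classloader
            // may have no code source at all.
            System.out.println("loaded by the bootstrap classloader");
        } else {
            // For a class packaged in a jar this prints the jar's path,
            // which tells you whether the old or the new jar is on the
            // classpath.
            System.out.println(src.getLocation().getPath());
        }
    }
}
```

Printing this location from inside the modified MLlib class, as the `logInfo` line above does, shows immediately which jar Spark actually resolved at runtime.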

It turned out that Spark was loading spark-assembly-1.6.0-hadoop2.4.0.jar, which still contained the old MLlib classes. So I instead compiled both MLlib and the assembly, using:

build/mvn -DskipTests -pl :spark-mllib_2.10 install
build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests -pl :spark-assembly_2.10 install

This reduced the whole compile time on my machine to a little over 1 minute. There must be a better, faster way to do incremental compilation than this, and I am still looking for such a solution, but for the moment I will use this hot fix.

