Cannot save model using PySpark xgboost4j

Question

I have a small PySpark program that uses xgboost4j and xgboost4j-spark to train a given dataset held in a Spark DataFrame.

The training completes, but it seems I cannot save the model.

Current library versions:

  • PySpark 2.4.0
  • xgboost4j 0.90
  • xgboost4j-spark 0.90

Spark submit args:

    os.environ['PYSPARK_SUBMIT_ARGS'] = "--py-files dist/DNA-0.0.2-py3.6.egg " \
                                        "--jars dna/resources/xgboost4j-spark-0.90.jar," \
                                        "dna/resources/xgboost4j-0.90.jar pyspark-shell"

The training process is as follows:

from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler


def spark_xgboost_train(spark=None, models_path='', train_df=None):
    spark.sparkContext.addPyFile("dna/resources/xgboost4j-spark-0.90.jar")
    spark.sparkContext.addPyFile("dna/resources/xgboost4j-0.90.jar")
    spark.sparkContext.addPyFile('dna/resources/pyspark-xgboost_0.90_261ab52e07bec461c711d209b70428ab481db470.zip')

    import sparkxgb as sxgb
    from sparkxgb import XGBoostClassifier, XGBoostClassificationModel

    # pre-process
    train_df = train_df.drop('url')
    train_df = train_df.na.fill(0)

    x = train_df.columns
    x.remove('label')

    vectorAssembler = VectorAssembler() \
        .setInputCols(x) \
        .setOutputCol("features")

    xgboost = XGBoostClassifier(
        featuresCol="features",
        labelCol="label",
        predictionCol="prediction",
    )

    pipeline = Pipeline().setStages([vectorAssembler])
    df = pipeline.fit(train_df).transform(train_df)
    model = xgboost.fit(df)

    # save
    model.write().overwrite().save(models_path + "/model.dat")

The error I get:

Traceback (most recent call last):
  File "/storage/env/DNAtestenv/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/storage/env/DNAtestenv/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/elad/DNA/dna/__main__.py", line 360, in <module>
    main()
  File "/home/elad/DNA/dna/__main__.py", line 325, in main
    run_pipelines(config)
  File "/home/elad/DNA/dna/__main__.py", line 311, in run_pipelines
    objective=config['objective'], nthread=config['nthread'])
  File "/home/elad/DNA/dna/__main__.py", line 234, in train_model
    max_depth=max_depth, eta=eta, silent=silent, objective=objective, nthread=1)
  File "/home/elad/DNA/dna/model/xgboost_train.py", line 82, in spark_xgboost_train
    model.write().save(models_path + '/model.dat')
  File "/storage/env/DNAtestenv/lib/python3.7/site-packages/pyspark/ml/util.py", line 183, in save
    self._jwrite.save(path)
  File "/storage/env/DNAtestenv/lib/python3.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/storage/env/DNAtestenv/lib/python3.7/site-packages/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/storage/env/DNAtestenv/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o484.save.
: java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.parse(Lorg/json4s/JsonInput;Z)Lorg/json4s/JsonAST$JValue;
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$$anonfun$1$$anonfun$3.apply(DefaultXGBoostParamsWriter.scala:73)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$$anonfun$1$$anonfun$3.apply(DefaultXGBoostParamsWriter.scala:71)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$$anonfun$1.apply(DefaultXGBoostParamsWriter.scala:71)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$$anonfun$1.apply(DefaultXGBoostParamsWriter.scala:69)
    at scala.Option.getOrElse(Option.scala:121)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$.getMetadataToSave(DefaultXGBoostParamsWriter.scala:69)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$.saveMetadata(DefaultXGBoostParamsWriter.scala:51)
    at ml.dmlc.xgboost4j.scala.spark.XGBoostModel$XGBoostModelModelWriter.saveImpl(XGBoostModel.scala:371)
    at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:180)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:745)

What I would like to do is save and load the model, like this:

    # save
    model.write().save(models_path + '/model.dat')

    # load
    model2 = sxgb.xgboost.XGBoostClassificationModel().load(models_path + '/model.dat')

I tried other xgboost4j versions as well (0.80, 0.72). I can't seem to find the cause for this; I even tried reading the wrapper source code and the jar source code, but could not find anything.

Thanks in advance.

Answer

After hours of research, I got it to work by adding the xgboost classifier to the pipeline itself, which then produces a PipelineModel rather than an xgboost model.

I was able to save the PipelineModel and then load it back just fine.

Here is what I changed:

    from pyspark.ml import Pipeline, PipelineModel

    xgboost = XGBoostClassifier(
        featuresCol="features",
        labelCol="label",
        predictionCol="prediction",
    )

    pipeline = Pipeline().setStages([vectorAssembler, xgboost])
    model = pipeline.fit(train_df)

    # save
    model.write().overwrite().save(models_path + "/xgb_model.model")

    # load
    model2 = PipelineModel.load(models_path + "/xgb_model.model")
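
Once the PipelineModel is loaded, it can be applied directly, and the fitted booster stage can be pulled back out if needed. A minimal usage sketch (the stage index and the test_df DataFrame below are assumptions, not from the original answer):

    # The loaded PipelineModel applies the VectorAssembler and the booster in order.
    predictions = model2.transform(test_df)

    # The fitted xgboost stage is the last stage of the pipeline.
    xgb_stage = model2.stages[-1]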
