Why does the Spark ML ALS algorithm print RMSE = NaN?
Question

I use ALS to predict ratings; this is my code:
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.recommendation.ALS

val als = new ALS()
  .setMaxIter(5)
  .setRegParam(0.01)
  .setUserCol("user_id")
  .setItemCol("business_id")
  .setRatingCol("stars")
val model = als.fit(training)

// Evaluate the model by computing the RMSE on the test data
val predictions = model.transform(testing)
predictions.sort("user_id").show(1000)

val evaluator = new RegressionEvaluator()
  .setMetricName("rmse")
  .setLabelCol("stars")
  .setPredictionCol("prediction")
val rmse = evaluator.evaluate(predictions)
println(s"Root-mean-square error = $rmse")
But I get some negative scores, and the RMSE is NaN:
+-------+-----------+---------+------------+
|user_id|business_id| stars| prediction|
+-------+-----------+---------+------------+
| 0| 2175| 4.0| 4.0388923|
| 0| 5753| 3.0| 2.6875196|
| 0| 9199| 4.0| 4.1753435|
| 0| 16416| 2.0| -2.710618|
| 0| 6063| 3.0| NaN|
| 0| 23076| 2.0| -0.8930751|
Root-mean-square error = NaN
How can I get a good result?

Answer
The negative values don't matter, since RMSE squares each error first. The NaN comes from empty predictions: ALS cannot score users or items that appear in the test set but not in the training set, so those rows get a NaN prediction. You can drop them:

predictions.na.drop(Seq("prediction"))
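To see why a single NaN prediction poisons the whole metric, here is the RMSE computed by hand in plain Scala (a minimal sketch; the sample values are made up to mirror the output above):

```scala
// RMSE over (label, prediction) pairs. Any NaN prediction makes the
// whole result NaN, because NaN propagates through -, *, + and sqrt.
def rmse(pairs: Seq[(Double, Double)]): Double = {
  val mse = pairs.map { case (y, p) => (y - p) * (y - p) }.sum / pairs.size
  math.sqrt(mse)
}

val clean   = Seq((4.0, 4.04), (3.0, 2.69), (2.0, -2.71))
val withNaN = clean :+ ((3.0, Double.NaN))

println(rmse(clean))          // a finite value
println(rmse(withNaN).isNaN)  // true
```

Note that the row with the negative prediction still contributes a perfectly ordinary (if large) squared error; only the NaN row breaks the metric.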
That can be a bit misleading, though; alternatively, you could fill those values with your lowest/highest/average rating.
I'd also recommend rounding predictions with x < min_rating up to the lowest rating and x > max_rating down to the highest rating; this will improve your RMSE.
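That rounding step can be sketched as a small helper. The bounds 1.0 and 5.0 below are assumptions (a typical 1-5 star scale; substitute your dataset's actual min/max):

```scala
// Clamp a raw ALS prediction into the valid rating range.
// ALS does unconstrained least squares, so predictions can fall
// outside [minRating, maxRating]; clamping them back in can only
// reduce the squared error on those rows.
def clamp(prediction: Double, minRating: Double, maxRating: Double): Double =
  math.max(minRating, math.min(maxRating, prediction))

println(clamp(-2.710618, 1.0, 5.0))  // 1.0
println(clamp(4.0388923, 1.0, 5.0))  // 4.0388923

// In a Spark job this would typically be applied as a UDF, e.g.
//   val clampUdf = udf((p: Double) => clamp(p, 1.0, 5.0))
//   predictions.withColumn("prediction", clampUdf($"prediction"))
```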
Some extra info here: https://issues.apache.org/jira/browse/SPARK-14489

Note that Spark 2.2 and later also offer a built-in option for this: calling .setColdStartStrategy("drop") on the ALS estimator makes transform drop rows with NaN predictions before evaluation.