How to generate tuples of (original label, predicted label) on Spark with MLlib?


Question

I am trying to make predictions with the model I got back from MLlib on Spark. The goal is to generate tuples of (originalLabelInData, predictedLabel). Those tuples can then be used for model evaluation. What is the best way to achieve this? Thanks.

Assuming parsedTrainData is an RDD of LabeledPoint:

from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import DecisionTree, DecisionTreeModel
from pyspark.mllib.util import MLUtils

# A tiny training set: two labeled points with three features each.
parsedTrainData = sc.parallelize([LabeledPoint(1.0, [11.0, -12.0, 23.0]),
                                  LabeledPoint(3.0, [-1.0, 12.0, -23.0])])

# Train a decision-tree classifier on the driver.
model = DecisionTree.trainClassifier(parsedTrainData, numClasses=7,
                                     categoricalFeaturesInfo={}, impurity='gini',
                                     maxDepth=8, maxBins=32)

# Predict on the feature vectors; returns an RDD of predicted labels.
model.predict(parsedTrainData.map(lambda x: x.features)).take(1)

This gives back the predictions, but I am not sure how to match each prediction back to the original label in the data.

I tried:

parsedTrainData.map(lambda x: (x.label, model.predict(x.features))).take(1)

However, it seems that my way of sending the model to the workers is not a valid thing to do here:

/spark140/python/pyspark/context.pyc in __getnewargs__(self)
    250         # This method is called when attempting to pickle SparkContext, which is always an error:
    251         raise Exception(
--> 252             "It appears that you are attempting to reference SparkContext from a broadcast "
    253             "variable, action, or transforamtion. SparkContext can only be used on the driver, "
    254             "not in code that it run on workers. For more information, see SPARK-5063."

Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforamtion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063. 
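The error is the behavior described in SPARK-5063: the closure passed to map is pickled and shipped to the workers, and the tree model holds a reference to the SparkContext, which always refuses to be pickled. The following is only a local analogy of that mechanism, not Spark code; the class names are made up, and the raising __getnewargs__ mirrors what pyspark/context.py does:

```python
import pickle

# FakeSparkContext imitates pyspark's SparkContext, which makes any
# attempt to pickle it raise (SPARK-5063).
class FakeSparkContext:
    def __getnewargs__(self):
        # Called when pickling with protocol >= 2, which is always an error here.
        raise Exception("SparkContext can only be used on the driver")

# FakeModel imitates a JVM-backed model that keeps a context reference.
class FakeModel:
    def __init__(self, sc):
        self._sc = sc

def try_pickle(obj):
    try:
        pickle.dumps(obj)
        return "ok"
    except Exception:
        return "failed"

model = FakeModel(FakeSparkContext())
# Pickling the model drags the context along, so serialization fails,
# just as it does when a closure captures a real MLlib model.
print(try_pickle(model))  # failed
```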

Answer

Well, according to the official documentation, you can simply zip the predictions with the labels, like this:

predictions = model.predict(parsedTrainData.map(lambda x: x.features))
labelsAndPredictions = parsedTrainData.map(lambda x: x.label).zip(predictions)
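The zipped RDD pairs each original label with its prediction, which is exactly the shape that metrics such as training error need. A minimal local sketch of that evaluation step, in plain Python with made-up labels (no Spark; the built-in zip stands in for RDD.zip):

```python
# Stand-ins for parsedTrainData labels and for model.predict output.
labels = [1.0, 3.0, 1.0, 3.0]
predictions = [1.0, 3.0, 3.0, 3.0]

# Pair each original label with its prediction, like RDD.zip.
labels_and_predictions = list(zip(labels, predictions))

# Training error: fraction of pairs where label and prediction disagree.
errors = sum(1 for label, pred in labels_and_predictions if label != pred)
train_err = errors / float(len(labels))

print(labels_and_predictions[0])  # (1.0, 1.0)
print(train_err)                  # 0.25
```

On a real RDD the same metric would come from a filter over the mismatched pairs followed by count, divided by the total number of points.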

