Custom Evaluator during cross validation SPARK


Question

My aim is to add a rank-based evaluator to the CrossValidator function (PySpark):

cvExplicit = CrossValidator(estimator=cvSet, numFolds=8, estimatorParamMaps=paramMap, evaluator=rnkEvaluate())

I need to pass the evaluated dataframe into the evaluate function, though, and I do not know how to do that part:

class rnkEvaluate():
    def __init__(self, user_col="user", rating_col="rating", prediction_col="prediction"):
        # store the column names so evaluate() can reference them
        self._user_col = user_col
        self._rating_col = rating_col
        self._prediction_col = prediction_col

    def isLargerBetter(self):
        return True

    def evaluate(self, predictions):
        denominator = predictions.groupBy().sum(self._rating_col).collect()[0][0]
        # TODO rest of the calculation ...
        return numerator / denominator
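The TODO above is left open in the original question. Purely as an illustration of what such a rank-based metric might compute (this is my assumption, not the asker's actual formula): a common choice for implicit-feedback recommenders is an expected-percentile-ranking style score, where each observed rating is weighted by the percentile position its item received in the user's predicted ranking. A Spark-free sketch of that arithmetic, with a hypothetical function name and input shape:

```python
def expected_percentile_ranking(rows):
    """Hypothetical rank metric, assumed for illustration only.

    rows: iterable of (rating, percentile_rank) pairs, where
    percentile_rank lies in [0.0, 1.0] (0.0 means the item was
    predicted at the very top of the user's list). Lower scores
    are better, so an evaluator built on this particular metric
    would return False from isLargerBetter().
    """
    denominator = sum(rating for rating, _ in rows)
    numerator = sum(rating * rank for rating, rank in rows)
    return numerator / denominator
```

In the Spark version, the denominator corresponds to the `predictions.groupBy().sum(rating_col)` aggregate above, and the percentile ranks would typically come from a window function over the prediction column.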

Somehow I need to pass the predictions dataframe at every fold iteration, but I could not manage it.
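For context, the cross-validation loop itself produces the predictions: inside `_fit`, CrossValidator calls `model.transform(validation)` for each fold and hands the resulting DataFrame to `evaluator.evaluate(...)`, so nothing needs to be passed in manually. A custom evaluator therefore only has to expose the small surface the tuning loop touches. A minimal duck-typed sketch (method bodies are placeholders, not a real metric; depending on the PySpark version, subclassing `pyspark.ml.evaluation.Evaluator` may be required instead):

```python
class RankEvaluatorSketch:
    """Minimal interface CrossValidator's fit loop touches (sketch)."""

    def getMetricName(self):
        # used only for logging/reporting the chosen metric
        return "rank"

    def isLargerBetter(self):
        # tells the tuner whether to argmax or argmin the fold averages
        return True

    def evaluate(self, predictions):
        # 'predictions' is the transformed validation DataFrame,
        # supplied by CrossValidator itself at every fold
        return 0.0  # placeholder score
```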

Answer

I've solved this issue; here is the code:

import numpy as np

from pyspark.ml.tuning import CrossValidator, CrossValidatorModel
from pyspark.sql.functions import rand

result = []
class CrossValidatorVerbose(CrossValidator):

    def writeResult(self, result):
        # append each fold/param result to a log file
        with open('executions/results.txt', 'a') as resfile:
            resfile.writelines("\n")
            resfile.writelines(result)

    def _fit(self, dataset):
        est = self.getOrDefault(self.estimator)
        epm = self.getOrDefault(self.estimatorParamMaps)
        numModels = len(epm)

        eva = self.getOrDefault(self.evaluator)
        metricName = eva.getMetricName()

        nFolds = self.getOrDefault(self.numFolds)
        seed = self.getOrDefault(self.seed)
        h = 1.0 / nFolds

        randCol = self.uid + "_rand"
        df = dataset.select("*", rand(seed).alias(randCol))
        metrics = [0.0] * numModels

        for i in range(nFolds):
            foldNum = i + 1
            print("Comparing models on fold %d" % foldNum)

            validateLB = i * h
            validateUB = (i + 1) * h
            condition = (df[randCol] >= validateLB) & (df[randCol] < validateUB)
            validation = df.filter(condition)
            train = df.filter(~condition)

            for j in range(numModels):
                paramMap = epm[j]
                model = est.fit(train, paramMap)

                predictions = model.transform(validation, paramMap)
                # predictions.show()  # uncomment to inspect this fold's predictions
                # hand this fold's predictions straight to the custom evaluator
                metric = eva.evaluate(spark=spark, predictions=predictions)
                metrics[j] += metric

                avgSoFar = metrics[j] / foldNum

                res = ("params: %s\t%s: %f\tavg: %f" % (
                    {param.name: val for (param, val) in paramMap.items()},
                    metricName, metric, avgSoFar))
                self.writeResult(res)
                result.append(res)
                print(res)

        if eva.isLargerBetter():
            bestIndex = np.argmax(metrics)
        else:
            bestIndex = np.argmin(metrics)

        bestParams = epm[bestIndex]
        bestModel = est.fit(dataset, bestParams)
        avgMetrics = [m / nFolds for m in metrics]
        bestAvg = avgMetrics[bestIndex]
        print("Best model:\nparams: %s\t%s: %f" % (
            {param.name: val for (param, val) in bestParams.items()},
            metricName, bestAvg))

        return self._copyValues(CrossValidatorModel(bestModel, avgMetrics))


evaluator = RankUserWeighted("user","rating","prediction")

cvImplicit = CrossValidatorVerbose(estimator=customImplicit, numFolds=8,
                                   estimatorParamMaps=paramMap, evaluator=evaluator)
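The fold logic in `_fit` above splits on a uniform random column: a row whose rand value falls in `[i*h, (i+1)*h)` goes to validation for fold `i`, and everything else to training. A quick Spark-free check of those boundaries, simply mirroring `validateLB`/`validateUB` from the code above:

```python
def fold_bounds(n_folds):
    """Return the [lower, upper) validation interval for each fold,
    mirroring validateLB/validateUB in _fit above."""
    h = 1.0 / n_folds
    return [(i * h, (i + 1) * h) for i in range(n_folds)]

bounds = fold_bounds(8)
# The intervals are contiguous and cover [0, 1): each upper bound is
# the next fold's lower bound, so every rand value lands in exactly
# one validation fold, and each row is used for validation once.
```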
