Does CrossValidator in PySpark distribute the execution?


Question

I am playing with Machine Learning in PySpark and am using a RandomForestClassifier. I have used Sklearn till now. I am using CrossValidator to tune the parameters and get the best model. A sample code taken from Spark's website is below.


From what I have been reading, I do not understand whether Spark distributes the parameter tuning as well, or whether it works the same way as Sklearn's GridSearchCV.


Any help would be really appreciated.

from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import HashingTF, Tokenizer
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Prepare training documents, which are labeled.
training = spark.createDataFrame([
    (0, "a b c d e spark", 1.0),
    (1, "b d", 0.0),
    (2, "spark f g h", 1.0),
    (3, "hadoop mapreduce", 0.0),
    (4, "b spark who", 1.0),
    (5, "g d a y", 0.0),
    (6, "spark fly", 1.0),
    (7, "was mapreduce", 0.0),
    (8, "e spark program", 1.0),
    (9, "a e c l", 0.0),
    (10, "spark compile", 1.0),
    (11, "hadoop software", 0.0)
], ["id", "text", "label"])

# Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

# We now treat the Pipeline as an Estimator, wrapping it in a CrossValidator instance.
# This will allow us to jointly choose parameters for all Pipeline stages.
# A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
# We use a ParamGridBuilder to construct a grid of parameters to search over.
# With 3 values for hashingTF.numFeatures and 2 values for lr.regParam,
# this grid will have 3 x 2 = 6 parameter settings for CrossValidator to choose from.
paramGrid = ParamGridBuilder() \
    .addGrid(hashingTF.numFeatures, [10, 100, 1000]) \
    .addGrid(lr.regParam, [0.1, 0.01]) \
    .build()

crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=BinaryClassificationEvaluator(),
                          numFolds=2)  # use 3+ folds in practice

# Run cross-validation, and choose the best set of parameters.
cvModel = crossval.fit(training)

Answer

Spark 2.3+


SPARK-21911 introduced parallel model fitting. The level of parallelism is controlled with the parallelism Param.

Spark < 2.3


It does not. Cross-validation is implemented as a plain nested for loop:

for i in range(nFolds):
    ...
    for j in range(numModels):
        ...


Only the process of training individual models is distributed.
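To make that control flow concrete, here is a minimal pure-Python sketch (illustrative only, not Spark's actual implementation) of sequential grid-search cross-validation; the fit and score callables stand in for a model's training and evaluation:

```python
def cross_validate(data, param_maps, n_folds, fit, score):
    """Sequential k-fold CV: every (fold, param-map) pair runs one after another."""
    fold_size = len(data) // n_folds
    avg_scores = [0.0] * len(param_maps)
    for i in range(n_folds):                                   # outer loop over folds
        validation = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        for j, params in enumerate(param_maps):                # inner loop over param maps
            model = fit(train, params)       # each fit is one sequential step
            avg_scores[j] += score(model, validation) / n_folds
    best_j = max(range(len(param_maps)), key=lambda j: avg_scores[j])
    return param_maps[best_j], avg_scores
```

In pre-2.3 Spark the fit call in the inner loop is itself a distributed job, but the driver waits for it to finish before starting the next one, so the grid search as a whole is sequential.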
