Any way to access methods from individual stages in PySpark PipelineModel?
Question
I've created a PipelineModel for doing LDA in Spark 2.0 (via the PySpark API):
from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, CountVectorizer
from pyspark.ml.clustering import LDA

def create_lda_pipeline(minTokenLength=1, minDF=1, minTF=1, numTopics=10, seed=42, pattern=r'[\W]+'):
    """
    Create a pipeline for running an LDA model on a corpus. This function does not need data and will
    not actually do any fitting until invoked by the caller.

    Args:
        minTokenLength: minimum token length to keep after tokenization
        minDF: minimum number of documents a word must be present in across the corpus
        minTF: minimum number of times a word must be found in a document
        numTopics: number of LDA topics (k)
        seed: random seed
        pattern: regular expression used to split text into words

    Returns:
        pipeline: pyspark.ml.Pipeline
    """
    reTokenizer = RegexTokenizer(inputCol="text", outputCol="tokens", pattern=pattern, minTokenLength=minTokenLength)
    cntVec = CountVectorizer(inputCol=reTokenizer.getOutputCol(), outputCol="vectors", minDF=minDF, minTF=minTF)
    lda = LDA(k=numTopics, seed=seed, optimizer="em", featuresCol=cntVec.getOutputCol())
    pipeline = Pipeline(stages=[reTokenizer, cntVec, lda])
    return pipeline
I want to calculate the perplexity on a dataset using the trained model with the LDAModel.logPerplexity() method, so I tried running the following:
training = get_20_newsgroups_data(test_or_train='train')
pipeline = create_lda_pipeline(numTopics=20, minDF=3, minTokenLength=5)
model = pipeline.fit(training)  # train model on training data
testing = get_20_newsgroups_data(test_or_train='test')
perplexity = model.logPerplexity(testing)
pprint(perplexity)
This just results in the following AttributeError:
'PipelineModel' object has no attribute 'logPerplexity'
I understand why this error happens, since the logPerplexity method belongs to LDAModel, not PipelineModel, but I am wondering if there is a way to access the method from that stage.
Answer
All transformers in the fitted pipeline are stored in the stages property. Extract the stages, take the last one, and you're ready to go:
model.stages[-1].logPerplexity(testing)
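If the pipeline layout might change, looking the stage up by capability is safer than indexing from the end. The sketch below illustrates that pattern without Spark, using hypothetical stand-in classes (so it runs without a SparkSession); with a real fitted pipeline you would search `model.stages` the same way, or match on the fitted LDA model type from pyspark.ml.clustering:

```python
# Spark-free sketch of the stage-lookup pattern. TokenizerModel,
# FittedLDAModel, and FakePipelineModel are hypothetical stand-ins
# for fitted PySpark pipeline stages, not real Spark classes.
class TokenizerModel:
    pass

class FittedLDAModel:
    def logPerplexity(self, dataset):
        # Stand-in: a real LDAModel would compute perplexity on the dataset.
        return 4.2

class FakePipelineModel:
    def __init__(self, stages):
        self.stages = stages  # fitted stages, in pipeline order

model = FakePipelineModel([TokenizerModel(), FittedLDAModel()])

# Find the stage that actually exposes the method, instead of
# assuming it sits at a fixed position:
lda_stage = next(s for s in model.stages if hasattr(s, "logPerplexity"))
perplexity = lda_stage.logPerplexity(None)
```

This way the lookup still works if, say, a stop-word-removal stage is later appended after the LDA stage.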