Logistic regression with spark ml (data frames)

Question

I wrote the following code for logistic regression; I want to use the pipeline API provided by spark.ml. However, it gives me an error when I try to print the coefficients and intercept. I am also having trouble computing the confusion matrix and other metrics such as precision and recall.

#Logistic Regression:
from pyspark.mllib.linalg import Vectors
from pyspark.ml.classification import LogisticRegression
from pyspark.sql import SQLContext
from pyspark import SparkContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark.ml.feature import StringIndexer,VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import MulticlassClassificationEvaluator


sc = SparkContext("local", "predictive")
sqlContext=SQLContext(sc)

df = sqlContext.read.load('/user/bna_ads_final.csv', 
                      format='com.databricks.spark.csv', 
                      header='true', 
                      inferSchema='true')

df.show(5)
df.count()
df.dtypes  
df=df.withColumn("load_date",df.load_date.cast("timestamp"))
df_withday= df.withColumn("day",dayofmonth(df.load_date))
df_new=df_withday.withColumn("Month",month(df.load_date))
df_new=df_new.withColumn("classname",df_new.classname.cast("string"))
ignore = ["load_date","wo_flag","serialnumber", "classname"]

def modify_values(r):
    # Collapse the raw wo_flag codes "A"/"B" into a binary dispatch label
    if r == "A" or r == "B":
        return "dispatch"
    else:
        return "non-dispatch"

def show_metrics(metrics):
    # Overall statistics from a pyspark.mllib.evaluation.MulticlassMetrics object
    precision = metrics.precision()
    recall = metrics.recall()
    f1Score = metrics.fMeasure()
    print("Summary Stats")
    print("Precision = %s" % precision)
    print("Recall = %s" % recall)
    print("F1 Score = %s" % f1Score)
    print(metrics.confusionMatrix())

ol_val = udf(modify_values, StringType())
df_final = df_new.withColumn("wo_flag",ol_val(df_new.wo_flag))
indexer= StringIndexer(inputCol="classname", outputCol="classnamecat")
indexed = indexer.fit(df_final).transform(df_final)
indexed=indexed.withColumn("classnamecat",indexed.classnamecat.cast("int"))
indexed.show(5)
(trainingData, testData) = indexed.randomSplit([0.7, 0.3])
assembler = VectorAssembler(inputCols=[x for x in indexed.columns if x not in ignore],outputCol='features')
stringindexer=StringIndexer(inputCol="wo_flag", outputCol="labellr")
Classifier= LogisticRegression(labelCol="labellr", featuresCol="features")
pipeline=Pipeline(stages=[stringindexer,assembler,Classifier])
model = pipeline.fit(trainingData)
predictions = model.transform(testData)

selected = predictions.select("features", "labellr", "probability", "prediction")
for row in selected.collect():
    print(row)


evaluator = MulticlassClassificationEvaluator(
    labelCol="labellr", predictionCol="prediction", metricName="precision")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g" % (1.0 - accuracy))
print("Accuracy= %g" % (accuracy))

print("Coefficients: " + str(model.coefficients))
print("Intercept: " + str(model.intercept))

The error I get is:

print("Coefficients: " + str(model.coefficients))
AttributeError: 'PipelineModel' object has no attribute 'coefficients'

I have Spark 1.5 installed on the Hadoop cluster and will not be able to upgrade anytime soon. Is there a workaround for this issue? Here is a sample of the data:

+--------------------+------------+---------+-----------------+---+-----+------------+------------+
|           load_date|           r|classname|mstatus34_timdiff|day|Month|classnamecat|serialnumber|
+--------------------+------------+---------+-----------------+---+-----+------------+------------+
|2013-12-29 10:55:...|non-dispatch|     6634|               19|  1|    7|         0.0|      231234|
|2014-10-05 23:43:...|non-dispatch|     6634|                4|  5|   10|         0.0|      342345|
|2014-10-09 09:39:...|    dispatch|     5886|               36|  9|   10|         1.0|      563472|
|2014-09-16 09:47:...|    dispatch|     6634|               53| 16|    9|         0.0|      134657|
+--------------------+------------+---------+-----------------+---+-----+------------+------------+

Answer

Try this: a fitted PipelineModel does not expose coefficients itself; they live on the LogisticRegressionModel produced by the last stage of the pipeline.

pipeline = Pipeline(stages=[stringindexer, assembler, Classifier])
model = pipeline.fit(trainingData)

# The fitted LogisticRegressionModel is the last stage of the PipelineModel
lrm = model.stages[-1]
print(lrm.coefficients)
print(lrm.intercept)
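
For the confusion matrix, precision, and recall, one option that works on Spark 1.5 is pyspark.mllib.evaluation.MulticlassMetrics, which is built from an RDD of (prediction, label) pairs. A minimal sketch, assuming the predictions data frame and the show_metrics helper from the question:

from pyspark.mllib.evaluation import MulticlassMetrics

# Build an RDD of (prediction, label) float pairs from the predictions frame
predictionAndLabels = predictions.select("prediction", "labellr") \
    .rdd.map(lambda row: (float(row[0]), float(row[1])))

metrics = MulticlassMetrics(predictionAndLabels)
show_metrics(metrics)

confusionMatrix() returns a matrix with the actual classes in rows and the predicted classes in columns, ordered by ascending class label.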
