'PipelinedRDD' object has no attribute 'toDF' in PySpark


Problem Description

I'm trying to load an SVM file and convert it to a DataFrame so I can use the ML module (Pipeline ML) from Spark. I've just installed a fresh Spark 1.5.0 on an Ubuntu 14.04 (no spark-env.sh configured).

My my_script.py is:

from pyspark.mllib.util import MLUtils
from pyspark import SparkContext

sc = SparkContext("local", "Teste Original")
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()

And I'm running it with: ./spark-submit my_script.py

I get this error:

Traceback (most recent call last):
File "/home/fred-spark/spark-1.5.0-bin-hadoop2.6/pipeline_teste_original.py", line 34, in <module>
data = MLUtils.loadLibSVMFile(sc, "/home/fred-spark/svm_capture").toDF()
AttributeError: 'PipelinedRDD' object has no attribute 'toDF'

What I can't understand is that if I run:

data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()

directly inside the PySpark shell, it works.

Recommended Answer

The toDF method is a monkey patch executed inside the SparkSession constructor (the SQLContext constructor in 1.x), so to be able to use it you have to create a SQLContext (or SparkSession) first:

# SQLContext or HiveContext in Spark 1.x
from pyspark.sql import SparkSession
from pyspark import SparkContext

sc = SparkContext()

rdd = sc.parallelize([("a", 1)])
hasattr(rdd, "toDF")
## False

# constructing the SparkSession monkey-patches toDF onto RDDs
spark = SparkSession(sc)
hasattr(rdd, "toDF")
## True

rdd.toDF().show()
## +---+---+
## | _1| _2|
## +---+---+
## |  a|  1|
## +---+---+
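
For reference, the patch itself is roughly the following (a simplified sketch of PySpark's internal _monkey_patch_RDD helper, not the exact source):

from pyspark.rdd import RDD

def _monkey_patch_RDD(session):
    # attach a toDF method to every RDD that simply delegates
    # to the session's createDataFrame
    def toDF(self, schema=None, sampleRatio=None):
        return session.createDataFrame(self, schema, sampleRatio)
    RDD.toDF = toDF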

Not to mention that you need a SQLContext or SparkSession to work with DataFrames in the first place.
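
Applied to the question's script, a minimal fix on Spark 1.5 is to construct a SQLContext before calling toDF (keeping the same app name and path from the question):

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.mllib.util import MLUtils

sc = SparkContext("local", "Teste Original")
sqlContext = SQLContext(sc)  # constructing this makes toDF available on RDDs

data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()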
