Unable to use rdd.toDF() but spark.createDataFrame(rdd) works

Problem description

I have an RDD of the form RDD[(string, List(Tuple))], like below:

[(u'C1589::HG02922', [(83779208, 2), (677873089, 0), ...]

When attempting to run the code below to convert it to a DataFrame, spark.createDataFrame(rdd) works fine, but rdd.toDF() fails.

vector_df1 = spark.createDataFrame(vector_rdd) # Works fine.
vector_df1.show()
+--------------+--------------------+
|            _1|                  _2|
+--------------+--------------------+
|C1589::HG02922|[[83779208,2], [6...|
|       HG00367|[[83779208,0], [6...|
| C477::HG00731|[[83779208,0], [6...|
|       HG00626|[[83779208,0], [6...|
|       HG00622|[[83779208,0], [6...|
                   ...
vector_df2 = vector_rdd.toDF() # Tosses the error.

The error thrown is:

Traceback (most recent call last):
  File "/tmp/7ff0f62d-d849-4884-960f-bb89b5f3dd80/ml_on_vds.py", line 47, in <module>
    vector_df2 = vector_rdd.toDF().show()
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 57, in toDF
  File "/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1124, in __call__
  File "/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1094, in _build_args
  File "/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py", line 289, in get_command_part
AttributeError: 'PipelinedRDD' object has no attribute '_get_object_id'
ERROR: (gcloud.dataproc.jobs.submit.pyspark) Job [7ff0f62d-d849-4884-960f-bb89b5f3dd80] entered state [ERROR] while waiting for [DONE].

Has anyone encountered an issue similar to this before? .toDF() is just a simple wrapper around createDataFrame(), so I don't understand why it would fail. I have verified at runtime that I am using Spark 2.0.2.
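
(For context: in PySpark 2.x, toDF is not defined on the RDD class itself; it is patched onto RDD when a SparkSession is constructed, and it simply delegates to createDataFrame. The following is a rough paraphrase of that patch in pyspark/sql/session.py, not a verbatim copy.)

from pyspark.rdd import RDD

def _monkey_patch_RDD(sparkSession):
    def toDF(self, schema=None, sampleRatio=None):
        # toDF just forwards the RDD to the session's createDataFrame.
        return sparkSession.createDataFrame(self, schema, sampleRatio)
    RDD.toDF = toDF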

# Imports    
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf, hash
from pyspark.sql.types import *
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import StringIndexer
from hail import *

# SparkSession
spark = (SparkSession.builder.appName("PopulationGenomics")
        .config("spark.sql.files.openCostInBytes", "1099511627776")
        .config("spark.sql.files.maxPartitionBytes", "1099511627776")
        .config("spark.hadoop.io.compression.codecs", "org.apache.hadoop.io.compress.DefaultCodec,is.hail.io.compress.BGzipCodec,org.apache.hadoop.io.compress.GzipCodec")
        .getOrCreate())

Per request, here is more of the code that generates the error:

vector_rdd = (indexed_df.rdd.map(lambda r: (r[0], (r[3], r[2])))
              .groupByKey()
              .mapValues(lambda l: Vectors.sparse((max_index + 1), list(l))))
vector_df = spark.createDataFrame(vector_rdd, ['s', 'features']) # Works
vector_df1 = vector_rdd.toDF()
vector_df1.show() # Fails
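
As a side note, the working createDataFrame call above lets Spark infer the vector type by sampling the RDD; an explicit schema can also be supplied. A minimal sketch, assuming the values are pyspark.ml.linalg sparse vectors as produced by Vectors.sparse above (this snippet is not part of the original post):

from pyspark.sql.types import StructType, StructField, StringType
from pyspark.ml.linalg import VectorUDT

# Explicit schema for the (sample id, sparse feature vector) pairs.
schema = StructType([
    StructField("s", StringType(), True),
    StructField("features", VectorUDT(), True),
])

vector_df = spark.createDataFrame(vector_rdd, schema)
vector_df.printSchema()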

indexed_df is a DataFrame with the schema:

StructType(List(StructField(s,StringType,true),StructField(variant_hash,IntegerType,false),StructField(call,IntegerType,true),StructField(index,DoubleType,true)))

It looks like this:

+--------------+------------+----+-----+
|             s|variant_hash|call|index|
+--------------+------------+----+-----+
|C1046::HG02024|   -60010252|   0|225.0|
|C1046::HG02025|   -60010252|   1|225.0|
|C1046::HG02026|   -60010252|   0|225.0|
|C1047::HG00731|   -60010252|   0|225.0|
|C1047::HG00732|   -60010252|   1|225.0|
|C1047::HG00733|   -60010252|   0|225.0|
|C1048::HG02024|   -60010252|   0|225.0|
|C1048::HG02025|   -60010252|   1|225.0|
|C1048::HG02026|   -60010252|   0|225.0|
|C1049::HG00731|   -60010252|   0|225.0|
|C1049::HG00732|   -60010252|   1|225.0|
|C1049::HG00733|   -60010252|   0|225.0|
|C1050::HG03006|   -60010252|   0|225.0|
|C1051::HG03642|   -60010252|   0|225.0|
|C1589::HG02922|   -60010252|   2|225.0|
|C1589::HG03006|   -60010252|   0|225.0|
|C1589::HG03052|   -60010252|   2|225.0|
|C1589::HG03642|   -60010252|   0|225.0|
|C1589::NA12878|   -60010252|   1|225.0|
|C1589::NA19017|   -60010252|   1|225.0|
+--------------+------------+----+-----+

Answer

The toDF method is attached to RDDs when a SparkSession (or, in the 1.x versions, an SQLContext) is constructed, so one must exist before toDF() can be called. So:

spark = SparkSession(sc)  # constructing the session attaches toDF to RDDs
hasattr(rdd, "toDF")      # True once a SparkSession (or SQLContext) exists
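
Expanded into a runnable sketch (the toy data and column names here are illustrative, not taken from the original answer):

from pyspark import SparkContext
from pyspark.sql import SparkSession

sc = SparkContext()
rdd = sc.parallelize([("a", 1), ("b", 2)])
hasattr(rdd, "toDF")              # False -- no SparkSession/SQLContext exists yet

spark = SparkSession(sc)          # constructing the session patches toDF onto RDDs
hasattr(rdd, "toDF")              # True
rdd.toDF(["key", "value"]).show()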

If you are using Scala, you need to import spark.implicits._, where spark is the SparkSession object that you created.

Hope this helps!
