How do I convert an RDD with a SparseVector column to a DataFrame with a column as Vector


Question

I have an RDD of (String, SparseVector) tuples and I want to create a DataFrame from it, to get a (label: string, features: vector) DataFrame, which is the schema required by most of the ml algorithm libraries. I know it can be done, because the HashingTF ml transformer outputs a vector column when given a tokens column of a DataFrame.

from pyspark.sql.types import StructType, StructField, DoubleType, ArrayType, StringType
from pyspark.ml.feature import HashingTF

# assuming temp_rdd is an RDD of (double, array(string)) tuples
temp_df = sqlContext.createDataFrame(temp_rdd, StructType([
        StructField("label", DoubleType(), False),
        StructField("tokens", ArrayType(StringType()), False)
    ]))

hashingTF = HashingTF(numFeatures=COMBINATIONS, inputCol="tokens", outputCol="features")

ndf = hashingTF.transform(temp_df)
ndf.printSchema()

#outputs 
#root
#|-- label: double (nullable = false)
#|-- tokens: array (nullable = false)
#|    |-- element: string (containsNull = true)
#|-- features: vector (nullable = true)
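For intuition, HashingTF implements the hashing trick: each token is hashed to an index in a fixed-size vector, and its count is accumulated at that index. Below is a minimal plain-Python sketch of the idea (no Spark required). Note this uses CRC32 for illustration, not the MurmurHash3 that recent Spark versions use, so the resulting indices will differ from Spark's:

```python
import zlib

def hashing_tf(tokens, num_features):
    """Map tokens to a sparse {index: count} dict via the hashing trick."""
    vec = {}
    for t in tokens:
        i = zlib.crc32(t.encode("utf-8")) % num_features
        vec[i] = vec.get(i, 0.0) + 1.0
    return vec

# The same token always lands on the same index, so counts accumulate:
features = hashing_tf(["spark", "rdd", "spark"], num_features=16)
```

Because the vector length is fixed up front, no vocabulary needs to be built, at the cost of possible hash collisions between distinct tokens.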

So my question is: given an RDD of (String, SparseVector), can I somehow convert it to a DataFrame of (String, vector)? I tried the usual sqlContext.createDataFrame, but there is no DataType that fits my needs.

df = sqlContext.createDataFrame(rdd,StructType([
        StructField("label" , StringType(),True),
        StructField("features" , ?Type(),True)
    ]))

Answer

You have to use VectorUDT here:

from pyspark.sql.types import StructType, StructField, DoubleType
# In Spark 1.x, import from the mllib package instead:
# from pyspark.mllib.linalg import SparseVector, VectorUDT
from pyspark.ml.linalg import SparseVector, VectorUDT

temp_rdd = sc.parallelize([
    (0.0, SparseVector(4, {1: 1.0, 3: 5.5})),
    (1.0, SparseVector(4, {0: -1.0, 2: 0.5}))])

schema = StructType([
    StructField("label", DoubleType(), True),
    StructField("features", VectorUDT(), True)
])

temp_rdd.toDF(schema).printSchema()

## root
##  |-- label: double (nullable = true)
##  |-- features: vector (nullable = true)
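As a sanity check on what the SparseVector literals above encode: SparseVector(size, {index: value}) stores only the non-zero entries of a vector of the given length. A plain-Python sketch of the dense equivalent (no Spark needed; sparse_to_dense is a hypothetical helper for illustration, not part of the pyspark API):

```python
def sparse_to_dense(size, entries):
    """Expand a {index: value} mapping to a dense list of length `size`."""
    return [entries.get(i, 0.0) for i in range(size)]

print(sparse_to_dense(4, {1: 1.0, 3: 5.5}))   # [0.0, 1.0, 0.0, 5.5]
print(sparse_to_dense(4, {0: -1.0, 2: 0.5}))  # [-1.0, 0.0, 0.5, 0.0]
```

This is why VectorUDT is needed in the schema: a vector is a single structured value, not an ArrayType of doubles.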

Just for completeness, the Scala equivalent:

import org.apache.spark.sql.Row
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.types.{DoubleType, StructType}
// In Spark 1x.
// import org.apache.spark.mllib.linalg.{Vectors, VectorUDT}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.linalg.SQLDataTypes.VectorType

val schema = new StructType()
  .add("label", DoubleType)
  // In Spark 1.x
  // .add("features", new VectorUDT())
  .add("features", VectorType)

val temp_rdd: RDD[Row]  = sc.parallelize(Seq(
  Row(0.0, Vectors.sparse(4, Seq((1, 1.0), (3, 5.5)))),
  Row(1.0, Vectors.sparse(4, Seq((0, -1.0), (2, 0.5))))
))

spark.createDataFrame(temp_rdd, schema).printSchema

// root
// |-- label: double (nullable = true)
// |-- features: vector (nullable = true)

