How to convert org.apache.spark.rdd.RDD[Array[Double]] to Array[Double] which is required by Spark MLlib


Problem description

I am trying to implement KMeans using Apache Spark.

val data = sc.textFile(irisDatasetString)
val parsedData = data.map(_.split(',').map(_.toDouble)).cache()

val clusters = KMeans.train(parsedData,3,numIterations = 20)

I get the following error:

error: overloaded method value train with alternatives:
  (data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector],k: Int,maxIterations: Int,runs: Int)org.apache.spark.mllib.clustering.KMeansModel <and>
  (data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector],k: Int,maxIterations: Int)org.apache.spark.mllib.clustering.KMeansModel <and>
  (data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector],k: Int,maxIterations: Int,runs: Int,initializationMode: String)org.apache.spark.mllib.clustering.KMeansModel
 cannot be applied to (org.apache.spark.rdd.RDD[Array[Double]], Int, numIterations: Int)
       val clusters = KMeans.train(parsedData,3,numIterations = 20)

So I tried converting Array[Double] to Vector as shown here:

scala> val vectorData: Vector = Vectors.dense(parsedData)

I get the following error:

error: type Vector takes type parameters
   val vectorData: Vector = Vectors.dense(parsedData)
                   ^
error: overloaded method value dense with alternatives:
  (values: Array[Double])org.apache.spark.mllib.linalg.Vector <and>
  (firstValue: Double,otherValues: Double*)org.apache.spark.mllib.linalg.Vector
 cannot be applied to (org.apache.spark.rdd.RDD[Array[Double]])
       val vectorData: Vector = Vectors.dense(parsedData)

So I am inferring that org.apache.spark.rdd.RDD[Array[Double]] is not the same as Array[Double].

How can I proceed with my data as org.apache.spark.rdd.RDD[Array[Double]]? Or how can I convert org.apache.spark.rdd.RDD[Array[Double]] to Array[Double]?
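
As an aside, if a plain local Array[Double] on the driver really were the goal, collecting and flattening the RDD would produce one. A rough sketch (fine for a dataset as small as iris, but, as the answer below shows, not what KMeans.train actually needs):

// collect() on an RDD[Array[Double]] pulls the rows back to the driver as Array[Array[Double]];
// flattening each row first yields a single flat Array[Double]
val localRows: Array[Array[Double]] = parsedData.collect()
val flatValues: Array[Double] = parsedData.flatMap(_.toSeq).collect()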

Answer

KMeans.train is expecting RDD[Vector] instead of RDD[Array[Double]]. It seems to me that all you need to do is change

val parsedData = data.map(_.split(',').map(_.toDouble)).cache()

to

val parsedData = data.map(x => Vectors.dense(x.split(',').map(_.toDouble))).cache()
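
For completeness, a minimal end-to-end sketch of the corrected pipeline might look like the following (assuming, as in the question, that irisDatasetString points to a CSV file whose fields are all numeric, and that sc is the SparkContext provided by spark-shell):

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

val data = sc.textFile(irisDatasetString)

// Parse every CSV line into an MLlib dense Vector, so parsedData is an RDD[Vector]
val parsedData = data.map(line => Vectors.dense(line.split(',').map(_.toDouble))).cache()

// The third parameter is named maxIterations (not numIterations), as the error message above shows
val clusters = KMeans.train(parsedData, 3, 20)

// Within-set sum of squared errors, a quick sanity check on the clustering
val WSSSE = clusters.computeCost(parsedData)
println(s"Within Set Sum of Squared Errors = $WSSSE")

Named arguments also work here, but the name has to match the declared parameter, e.g. KMeans.train(parsedData, 3, maxIterations = 20).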

