How to convert from org.apache.spark.mllib.linalg.VectorUDT to ml.linalg.VectorUDT
Question
I am using a Spark 2.0 cluster and I would like to convert a vector from org.apache.spark.mllib.linalg.VectorUDT to org.apache.spark.ml.linalg.VectorUDT.
# Import LinearRegression class
from pyspark.ml.regression import LinearRegression
# Define LinearRegression algorithm
lr = LinearRegression()
modelA = lr.fit(data, {lr.regParam: 0.0})
Error:
u'requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.'
Any thoughts on how I would do this conversion between vector types?
Many thanks.
Answer
In PySpark you'll need a UDF or a map over the RDD. Let's use the first option. First, a couple of imports:
from pyspark.ml.linalg import VectorUDT
from pyspark.sql.functions import udf
and a function:
as_ml = udf(lambda v: v.asML() if v is not None else None, VectorUDT())
and some example data:
from pyspark.mllib.linalg import Vectors as MLLibVectors
df = sc.parallelize([
(MLLibVectors.sparse(4, [0, 2], [1, -1]), ),
(MLLibVectors.dense([1, 2, 3, 4]), )
]).toDF(["features"])
result = df.withColumn("features", as_ml("features"))
The result is:
+--------------------+
| features|
+--------------------+
|(4,[0,2],[1.0,-1.0])|
| [1.0,2.0,3.0,4.0]|
+--------------------+