Spark 2.1 cannot write Vector field on CSV


Problem description

I was migrating my code from Spark 2.0 to 2.1 when I stumbled into a problem related to DataFrame saving.

Here is the code:

import org.apache.spark.sql.types._
import org.apache.spark.ml.linalg.VectorUDT

// Assemble the "values" column into a single Vector column, then write it as CSV
val df = spark.createDataFrame(Seq(Tuple1(1))).toDF("values")
val toSave = new org.apache.spark.ml.feature.VectorAssembler().setInputCols(Array("values")).transform(df)
toSave.write.csv(path)

This code succeeds when using Spark 2.0.0.

Using Spark 2.1.0.cloudera1, I get the following error:

java.lang.UnsupportedOperationException: CSV data source does not support struct<type:tinyint,size:int,indices:array<int>,values:array<double>> data type.
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.org$apache$spark$sql$execution$datasources$csv$CSVFileFormat$$verifyType$1(CSVFileFormat.scala:233)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$verifySchema$1.apply(CSVFileFormat.scala:237)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$verifySchema$1.apply(CSVFileFormat.scala:237)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at org.apache.spark.sql.types.StructType.foreach(StructType.scala:96)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.verifySchema(CSVFileFormat.scala:237)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.prepareWrite(CSVFileFormat.scala:121)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:108)
  at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:484)
  at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:520)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
  at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:579)
  ... 50 elided

Is this just on my side?

Is this related to the Cloudera release of Spark 2.1? (From their repo, it seems they didn't touch spark.sql, so maybe not.)

Thanks!

Recommended answer

The following answer is composed from @zero323's comment.

The CSV source doesn't support complex objects. Exactly as the exception says: "CSV data source does not support struct<type:tinyint,size:int,indices:array<int>,values:array<double>> data type." This is expected behavior. It doesn't work in Spark 2.x, although it used to work with spark-csv in 1.x, where vectors were converted to strings.
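As a workaround you can do that conversion yourself before writing. A minimal sketch, not part of the original answer: it names the assembler's output column "features" (a name chosen here for readability; the snippet in the question relied on the default generated name) and reuses the question's df and path.

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// Name the output column explicitly so it is easy to reference afterwards
// ("features" is an assumption of this sketch, not taken from the question)
val assembled = new VectorAssembler()
  .setInputCols(Array("values"))
  .setOutputCol("features")
  .transform(df)

// Render the Vector as a plain string, similar to what spark-csv did in 1.x
val vecToString = udf { v: Vector => v.toString }

assembled
  .withColumn("features", vecToString(col("features")))
  .write.csv(path)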

This behavior is considered correct in the following JIRA: SPARK-16216.
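If you need to keep the Vector intact, a format that supports complex types side-steps the issue entirely, for example Parquet (again a sketch, not from the original answer):

// Parquet can store the struct encoding of VectorUDT, so no conversion is needed
toSave.write.parquet(path)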
