Spark 2.1 cannot write Vector field on CSV


Problem description

I was migrating my code from Spark 2.0 to 2.1 when I stumbled into a problem related to Dataframe saving.

Here is the code:

import org.apache.spark.sql.types._
import org.apache.spark.ml.linalg.VectorUDT

// Build a one-column DataFrame and assemble it into a Vector column
val df = spark.createDataFrame(Seq(Tuple1(1))).toDF("values")
val toSave = new org.apache.spark.ml.feature.VectorAssembler()
  .setInputCols(Array("values"))
  .transform(df)
// Writing the Vector column to CSV fails on Spark 2.1
toSave.write.csv(path)

This code succeeds when using Spark 2.0.0.

Using Spark 2.1.0.cloudera1, I get the following error:

java.lang.UnsupportedOperationException: CSV data source does not support struct<type:tinyint,size:int,indices:array<int>,values:array<double>> data type.
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.org$apache$spark$sql$execution$datasources$csv$CSVFileFormat$$verifyType$1(CSVFileFormat.scala:233)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$verifySchema$1.apply(CSVFileFormat.scala:237)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$verifySchema$1.apply(CSVFileFormat.scala:237)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at org.apache.spark.sql.types.StructType.foreach(StructType.scala:96)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.verifySchema(CSVFileFormat.scala:237)
  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.prepareWrite(CSVFileFormat.scala:121)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:108)
  at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:484)
  at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:520)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
  at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:579)
  ... 50 elided

Is this happening only on my side?

Is this related to the Cloudera release of Spark 2.1? (From their repo, it seems they didn't mess with spark.sql, so maybe not.)

Thanks!

Recommended answer

The following answer is composed from @zero323's comment.

The CSV source doesn't support complex objects. Exactly as stated in the exception — CSV data source does not support struct<type:tinyint,size:int,indices:array<int>,values:array<double>> data type. — this is expected behavior. It doesn't work with Spark 2.x, although it used to work with spark-csv in 1.x, where vectors were converted to strings.
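
As a workaround (a sketch of mine, not part of the original answer), you can convert the Vector column into a plain string before writing, since CSV handles primitive types fine. A minimal sketch, where the output column name "features" and the explicit setOutputCol call are my own choices:

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.udf
import spark.implicits._

val df = spark.createDataFrame(Seq(Tuple1(1))).toDF("values")

// Name the output column explicitly so it can be referenced below
val assembled = new VectorAssembler()
  .setInputCols(Array("values"))
  .setOutputCol("features")
  .transform(df)

// Render the Vector as its string form (e.g. "[1.0]"), which CSV can store
val vecToString = udf((v: Vector) => v.toString)

assembled
  .withColumn("features", vecToString($"features"))
  .write.csv(path)

Note that this is lossy in the sense that reading the CSV back gives you strings, not vectors; you would have to parse them yourself.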

This behavior is correct according to the following JIRA: SPARK-16216.
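
Alternatively (my note, not from the original answer), a file format that supports complex types, such as Parquet, can persist the Vector column as-is:

// Parquet handles nested/complex types, so the Vector column round-trips unchanged
toSave.write.parquet(path)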
