How to create a Dataset of Maps?


Problem Description


I'm using Spark 2.2 and am running into troubles when attempting to call spark.createDataset on a Seq of Map.

Code and output from my Spark Shell session follow:

// createDataSet on Seq[T] where T = Int works
scala> spark.createDataset(Seq(1, 2, 3)).collect
res0: Array[Int] = Array(1, 2, 3)

scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:24: error: Unable to find encoder for type stored in a Dataset.  
Primitive types (Int, String, etc) and Product types (case classes) are 
supported by importing spark.implicits._
Support for serializing other types will be added in future releases.
       spark.createDataset(Seq(Map(1 -> 2))).collect
                          ^

// createDataSet on a custom case class containing Map works
scala> case class MapHolder(m: Map[Int, Int])
defined class MapHolder

scala> spark.createDataset(Seq(MapHolder(Map(1 -> 2)))).collect
res2: Array[MapHolder] = Array(MapHolder(Map(1 -> 2)))

I've tried import spark.implicits._, though I'm fairly certain that's implicitly imported by the Spark shell session.

Is this a case not covered by current encoders?

Solution

It is not covered in 2.2, but can easily be addressed. You can add the required Encoder using ExpressionEncoder, either explicitly:

import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder  
import org.apache.spark.sql.Encoder

spark
  .createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]])

or implicitly:

implicit def mapIntIntEncoder: Encoder[Map[Int, Int]] = ExpressionEncoder()
spark.createDataset(Seq(Map(1 -> 2)))
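The implicit above is pinned to Map[Int, Int], so a separate definition would be needed for every key/value combination. The same ExpressionEncoder trick can be generalized to any statically known map type via a TypeTag context bound. A minimal self-contained sketch (the helper name mapEncoder and the object/app scaffolding are my own, not from the original answer):

```scala
import scala.reflect.runtime.universe.TypeTag

import org.apache.spark.sql.{Encoder, SparkSession}
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

object MapDatasetExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("map-dataset-example")
      .getOrCreate()

    // Generic helper: derives an Encoder for any Map whose key and value
    // types are known at compile time (carried by their TypeTags).
    implicit def mapEncoder[K: TypeTag, V: TypeTag]: Encoder[Map[K, V]] =
      ExpressionEncoder()

    // Works for Map[Int, Int] ...
    val ints = spark.createDataset(Seq(Map(1 -> 2), Map(3 -> 4))).collect()

    // ... and, with the same implicit, for Map[String, Int].
    val strs = spark.createDataset(Seq(Map("a" -> 1))).collect()

    println(ints.mkString(", "))
    println(strs.mkString(", "))

    spark.stop()
  }
}
```

Note that later Spark releases (2.3+) added a built-in implicit map encoder to spark.implicits._, so this workaround should only be necessary on 2.2. If you do add your own implicit on a newer version, it may conflict with the built-in one and trigger an ambiguous-implicit error.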
