How to use spark-avro package to read avro file from spark-shell?


Problem description


I'm trying to use the spark-avro package as described in Apache Avro Data Source Guide.

When I run the following command:

val df = spark.read.format("avro").load("~/foo.avro")

I get an error:

java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.avro.AvroFileFormat could not be instantiated
  at java.util.ServiceLoader.fail(ServiceLoader.java:232)
  at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
  at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
  at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
  at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
  at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
  at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
  at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
  at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
  at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:630)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
  ... 49 elided
Caused by: java.lang.NoSuchMethodError: org.apache.spark.sql.execution.datasources.FileFormat.$init$(Lorg/apache/spark/sql/execution/datasources/FileFormat;)V
  at org.apache.spark.sql.avro.AvroFileFormat.<init>(AvroFileFormat.scala:44)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at java.lang.Class.newInstance(Class.java:442)
  at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
  ... 62 more

I've tried different versions of the org.apache.spark:spark-avro_2.12 package (2.4.0, 2.4.1, and 2.4.2), and I currently use Spark 2.4.1, but none of them worked.

I start my spark-shell with the following command:

spark-shell --packages org.apache.spark:spark-avro_2.12:2.4.0

Solution

tl;dr Spark 2.4.x provides built-in support for reading and writing Apache Avro data, but the spark-avro module is external and not included in spark-submit or spark-shell by default, so you have to make sure that your spark-shell and the --packages artifact use the same Scala version (e.g. 2.12).


The reason for the exception is that your spark-shell comes from a Spark distribution built against Scala 2.11.12, while --packages specifies a dependency built for Scala 2.12 (the _2.12 suffix in org.apache.spark:spark-avro_2.12:2.4.0).

Use --packages org.apache.spark:spark-avro_2.11:2.4.0 and you should be fine.
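As a sketch (assuming a Unix shell and a Spark 2.4.x install on the PATH), you can check which Scala version your Spark build uses and make sure the `_2.xx` suffix of the --packages coordinate matches it:

```shell
#!/bin/sh
# The Scala version appears in the banner printed by:
#   spark-shell --version
# Inside the shell you can also evaluate: scala.util.Properties.versionString

# The Scala binary version is encoded as the "_2.xx" suffix of the artifact id.
# Extracting it from a Maven coordinate with POSIX parameter expansion:
artifact="org.apache.spark:spark-avro_2.11:2.4.0"
suffix="${artifact#*_}"    # drop everything up to the first '_' -> "2.11:2.4.0"
suffix="${suffix%%:*}"     # drop the trailing ':version'        -> "2.11"
echo "spark-avro built for Scala $suffix"

# Then launch with the artifact that matches your Spark build, e.g. for a
# Scala 2.11 build (illustrative; not executed here):
#   spark-shell --packages org.apache.spark:spark-avro_2.11:2.4.0
```

If the suffix printed here and the Scala version in the spark-shell banner disagree, you will hit exactly the NoSuchMethodError shown in the question.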
